
How-To Tutorials - Server-Side Web Development


Setting up MongoDB

Packt
12 Aug 2016
10 min read
In this article by Samer Buna, author of the book Learning GraphQL and Relay, we're mostly going to be talking about how an API is nothing without access to a database. Let's set up a local MongoDB instance, add some data in there, and make sure we can access that data through our GraphQL schema. (For more resources related to this topic, see here.)

MongoDB can be installed locally on multiple platforms. Check the documentation site for instructions for your platform (https://docs.mongodb.com/manual/installation/). For Mac, the easiest way is probably Homebrew:

```
~ $ brew install mongodb
```

Create a db folder inside a data folder. The default location is /data/db:

```
~ $ sudo mkdir -p /data/db
```

Change the owner of the /data folder to be the currently logged-in user:

```
~ $ sudo chown -R $USER /data
```

Start the MongoDB server:

```
~ $ mongod
```

If everything worked correctly, we should be able to open a new terminal and test the mongo CLI:

```
~/graphql-project $ mongo
MongoDB shell version: 3.2.7
connecting to: test
> db.getName()
test
>
```

We're using MongoDB version 3.2.7 here. Make sure that you have this version or a newer one.

Let's go ahead and create a new collection to hold some test data. Let's name that collection users:

```
> db.createCollection("users")
{ "ok" : 1 }
```

Now we can use the users collection to add documents that represent users. We can use the MongoDB insertOne() function for that:

```
> db.users.insertOne({
    firstName: "John",
    lastName: "Doe",
    email: "john@example.com"
  })
```

We should see an output like:

```
{
  "acknowledged" : true,
  "insertedId" : ObjectId("56e729d36d87ae04333aa4e1")
}
```

Let's go ahead and add another user:

```
> db.users.insertOne({
    firstName: "Jane",
    lastName: "Doe",
    email: "jane@example.com"
  })
```

We can now verify that we have two user documents in the users collection using:

```
> db.users.count()
2
```

MongoDB has a built-in unique object ID, which you can see in the output for insertOne().

Now that we have a running MongoDB and some test data in there, it's time to see how we can read this data using a GraphQL API. To communicate with MongoDB from a Node.js application, we need to install a driver. There are many options to choose from, but GraphQL requires a driver that supports promises. We will use the official MongoDB Node.js driver, which supports promises. Instructions on how to install and run the driver can be found at https://docs.mongodb.com/ecosystem/drivers/node-js/.

To install the official MongoDB Node.js driver under our graphql-project app, we do:

```
~/graphql-project $ npm install --save mongodb
└─┬ mongodb@2.2.4
```

We can now use this mongodb npm package to connect to our local MongoDB server from within our Node application. In index.js:

```js
const mongodb = require('mongodb');
const assert = require('assert');

const MONGO_URL = 'mongodb://localhost:27017/test';

mongodb.MongoClient.connect(MONGO_URL, (err, db) => {
  assert.equal(null, err);
  console.log('Connected to MongoDB server');

  // The readline interface code
});
```

The MONGO_URL variable value should not be hardcoded in code like this. Instead, we can use a Node process environment variable to set it to a certain value before executing the code. On a production machine, we would be able to use the same code and set the process environment variable to a different value.
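As a small illustration (a sketch, not from the book's code), the connect call could read the URL from the environment with a local fallback:

```js
// Sketch: read MONGO_URL from the environment, falling back to the
// local test database when the variable is not set.
const MONGO_URL = process.env.MONGO_URL || 'mongodb://localhost:27017/test';

mongodb.MongoClient.connect(MONGO_URL, (err, db) => {
  // same callback as above
});
```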
Use the export command to set the environment variable value:

```
export MONGO_URL=mongodb://localhost:27017/test
```

Then, in the Node code, we can read the exported value by using:

```js
process.env.MONGO_URL
```

If we now execute the node index.js command, we should see the Connected to MongoDB server line right before we ask for the Client Request.

At this point, the Node.js process will not exit after our interaction with it. We'll need to force exit the process with Ctrl + C to restart it.

Let's start our database API with a simple field that can answer this question: how many total users do we have in the database? The query could be something like:

```
{ usersCount }
```

To be able to use a MongoDB driver call inside our schema main.js file, we need access to the db object that the MongoClient.connect() function exposed for us in its callback. We can use the db object to count the user documents by simply running the promise:

```js
db.collection('users').count()
  .then(usersCount => console.log(usersCount));
```

Since we only have access to the db object in index.js, within the connect() function's callback, we need to pass a reference to that db object to our graphql() function. We can do that using the fourth argument of the graphql() function, which accepts a contextValue object of globals; the GraphQL engine will pass this context object to all the resolver functions as their third argument. Modify the graphql function call within the readline interface in index.js to be:

```js
graphql.graphql(mySchema, inputQuery, {}, { db }).then(result => {
  console.log('Server Answer:', result.data);
  db.close(() => rli.close());
});
```

The third argument to the graphql() function is called the rootValue, which gets passed as the first argument to the resolver function on the top-level type. We are not using that feature here.

We passed the connected database object db as part of the global context object. This will enable us to use db within any resolver function. Note also how we're now closing the rli interface within the callback for the operation that closes the db. We should not leave any open db connections behind.

Here's how we can now use the resolver's third argument to resolve our usersCount top-level field with the db count() operation:

```js
fields: {
  // "hello" and "diceRoll" ...
  usersCount: {
    type: GraphQLInt,
    resolve: (_, args, { db }) =>
      db.collection('users').count()
  }
}
```

A couple of things to notice about this code:

We destructured the db object from the third argument of the resolve() function so that we can use it directly (instead of context.db).

We returned the promise itself from the resolve() function. The GraphQL executor has native support for promises. Any resolve() function that returns a promise will be handled by the executor itself. The executor will either successfully resolve the promise and then resolve the query field with the promise-resolved value, or it will reject the promise and return an error to the user.

We can test our query now:

```
~/graphql-project $ node index.js
Connected to MongoDB server
Client Request: { usersCount }
Server Answer : { usersCount: 2 }
```

*** #GitTag: chapter1-setting-up-mongodb ***
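The same context-passing pattern extends naturally to list fields. As a purely hypothetical extension (the usersList field below is not part of the book's schema), we could expose the stored email addresses like this:

```js
// Hypothetical extra field (not from the book): usersList resolves to
// the email addresses stored in the users collection.
const { GraphQLList, GraphQLString } = require('graphql');

// Inside the same fields object as usersCount:
fields: {
  // ... hello, diceRoll, and usersCount as before
  usersList: {
    type: new GraphQLList(GraphQLString),
    resolve: (_, args, { db }) =>
      db.collection('users').find({}).toArray()
        .then(users => users.map(user => user.email))
  }
}
```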
Setting up an HTTP interface

Let's now see how we can use the graphql() function under another interface, an HTTP one. We want our users to be able to send us a GraphQL request via HTTP. For example, to ask for the same usersCount field, we want the users to do something like:

```
/graphql?query={usersCount}
```

We can use the Express.js Node framework to handle and parse HTTP requests, and within an Express.js route, we can use the graphql() function. For example (don't add these lines yet):

```js
const app = express();

app.use('/graphql', (req, res) => {
  // use graphql.graphql() to respond with JSON objects
});
```

However, instead of manually handling the req/res objects, there is a GraphQL Express.js middleware that we can use, express-graphql. This middleware wraps the graphql() function and prepares it to be used by Express.js directly. Let's go ahead and bring in both the Express.js library and this middleware:

```
~/graphql-project $ npm install --save express express-graphql
├─┬ express@4.14.0
└─┬ express-graphql@0.5.3
```

In index.js, we can now import both express and the express-graphql middleware:

```js
const graphqlHTTP = require('express-graphql');
const express = require('express');
const app = express();
```

With these imports, the middleware main function will now be available as graphqlHTTP(). We can now use it in an Express route handler. Inside the MongoClient.connect() callback, we can do:

```js
app.use('/graphql', graphqlHTTP({
  schema: mySchema,
  context: { db }
}));

app.listen(3000, () =>
  console.log('Running Express.js on port 3000')
);
```

Note that at this point we can remove the readline interface code, as we are no longer using it. Our GraphQL interface from now on will be an HTTP endpoint.

The app.use line defines a route at /graphql and delegates the handling of that route to the express-graphql middleware that we imported. We pass two objects to the middleware: the mySchema object and the context object. We're not passing any input query here because this code just prepares the HTTP endpoint; we will be able to read the input query directly from a URL field.

The app.listen() function is the call we need to start our Express.js app. Its first argument is the port to use, and its second argument is a callback we can use after Express.js has started. We can now test our HTTP-mounted GraphQL executor with:

```
~/graphql-project $ node index.js
Connected to MongoDB server
Running Express.js on port 3000
```

In a browser window, go to: http://localhost:3000/graphql?query={usersCount}

*** #GitTag: chapter1-setting-up-an-http-interface ***
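The endpoint can also be exercised from the command line. For example (a usage illustration, not from the book), with curl, which should print a JSON object along the lines of {"data":{"usersCount":2}}:

```
~/graphql-project $ curl 'http://localhost:3000/graphql?query={usersCount}'
```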
The GraphiQL editor

The graphqlHTTP() middleware function accepts another property on its parameter object, graphiql; let's set it to true:

```js
app.use('/graphql', graphqlHTTP({
  schema: mySchema,
  context: { db },
  graphiql: true
}));
```

When we restart the server now and navigate to http://localhost:3000/graphql, we'll get an instance of the GraphiQL editor running locally on our GraphQL schema.

GraphiQL is an interactive playground where we can explore our GraphQL queries and mutations before we officially use them. GraphiQL is written in React and GraphQL, and it runs completely within the browser. GraphiQL has many powerful editor features, such as syntax highlighting, code folding, and error highlighting and reporting. Thanks to GraphQL's introspective nature, GraphiQL also has intelligent type-ahead for fields, arguments, and types.

Put the cursor in the left editor area and type a selection set:

```
{
}
```

Place the cursor inside that selection set and press Ctrl + space. You should see a list of all the fields that our GraphQL schema supports, which are the three fields that we have defined so far (hello, diceRoll, and usersCount).

If Ctrl + space does not work, try Cmd + space, Alt + space, or Shift + space.

The __schema and __type fields can be used to introspectively query the GraphQL schema about what fields and types it supports.

When we start typing, this list gets filtered accordingly. The list also respects the context of the cursor; if we place the cursor inside the arguments of diceRoll(), we'll get the only argument we defined for diceRoll, the count argument.

Go ahead and read all the root fields that our schema supports, and see how the data gets reported on the right side as a formatted JSON object.

*** #GitTag: chapter1-the-graphiql-editor ***

Summary

In this article, we learned how to set up a local MongoDB instance and add some data to it, so that we can access that data through our GraphQL schema.

Resources for Article:

Further resources on this subject:

Apache Solr and Big Data – integration with MongoDB [article]
Getting Started with Java Driver for MongoDB [article]
Documents and Collections in Data Modeling with MongoDB [article]


Developing a Basic Site with Node.js and Express

Packt
17 Feb 2016
21 min read
In this article, we will continue with the Express framework. It's one of the most popular frameworks available and is certainly a pioneering one. Express is still widely used, and several developers use it as a starting point. (For more resources related to this topic, see here.)

Getting acquainted with Express

Express (http://expressjs.com/) is a web application framework for Node.js. It is built on top of Connect (http://www.senchalabs.org/connect/), which means that it implements middleware architecture. In the previous chapter, when exploring Node.js, we discovered the benefit of such a design decision: the framework acts as a plugin system. Thus, we can say that Express is suitable not only for simple but also for complex applications, because of its architecture. We may use only some of the popular types of middleware or add a lot of features and still keep the application modular.

In general, most projects in Node.js perform two functions: run a server that listens on a specific port, and process incoming requests. Express is a wrapper for these two functionalities. The following is basic code that runs the server:

```js
var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');
```

This is an example extracted from the official documentation of Node.js. As shown, we use the native module http and run a server on the port 1337. There is also a request handler function, which simply sends the Hello World string to the browser. Now, let's implement the same thing but with the Express framework, using the following code:

```js
var express = require('express');
var app = express();
app.get("/", function(req, res, next) {
  res.send("Hello world");
}).listen(1337);
console.log('Server running at http://127.0.0.1:1337/');
```

It's pretty much the same thing. However, we don't need to specify the response headers or add a new line at the end of the string, because the framework does it for us. In addition, we have a bunch of middleware available, which will help us process the requests easily. Express is like a toolbox. We have a lot of tools to do the boring stuff, allowing us to focus on the application's logic and content. That's what Express is built for: saving time for the developer by providing ready-to-use functionalities.
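To see that plugin idea in practice, here is a minimal sketch (not from the original text) of a custom logging middleware registered with app.use():

```js
// Hypothetical example: a tiny logging middleware illustrating the
// Connect-style middleware architecture described above.
app.use(function(req, res, next) {
  console.log(req.method + ' ' + req.url);
  next(); // pass control to the next middleware or route handler
});
```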
Installing Express

There are two ways to install Express. We'll start with the simple one and then proceed to the more advanced technique. The simpler approach generates a template, which we may use to start writing the business logic directly. In some cases, this can save us time. From another viewpoint, if we are developing a custom application, we need to use custom settings. We can also use the boilerplate, which we get with the advanced technique; however, it may not work for us.

Using package.json

Express is like every other module. It has its own place in the packages register. If we want to use it, we need to add the framework to the package.json file. The ecosystem of Node.js is built on top of the Node Package Manager. It uses the JSON file to find out what we need and installs it in the current directory. So, the content of our package.json file looks like the following code:

```json
{
  "name": "projectname",
  "description": "description",
  "version": "0.0.1",
  "dependencies": {
    "express": "3.x"
  }
}
```

These are the required fields that we have to add. To be more accurate, we have to say that the mandatory fields are name and version. However, it is always good to add descriptions to our modules, particularly if we want to publish our work in the registry, where such information is extremely important. Otherwise, the other developers will not know what our library is doing. Of course, there are a bunch of other fields, such as contributors, keywords, or development dependencies, but we will stick to limited options so that we can focus on Express.

Once we have our package.json file placed in the project's folder, we have to call npm install in the console. By doing so, the package manager will create a node_modules folder and will store Express and its dependencies there. At the end of the command's execution, we will see something like the following screenshot:

The first line shows us the installed version, and the following lines are the modules that Express depends on. Now, we are ready to use Express. If we type require('express'), Node.js will start looking for that library inside the local node_modules directory. Since we are not using absolute paths, this is normal behavior. If we miss running the npm install command, we will be prompted with Error: Cannot find module 'express'.

Using a command-line tool

There is a command-line instrument called express-generator. Once we run npm install -g express-generator, we can use it like any other command in our terminal. If you use the framework in several projects, you will notice that some things are repeated. We can even copy and paste them from one application to another, and this is perfectly fine. We may even end up with our own boilerplate and can always start from there. The command-line version of Express does the same thing. It accepts a few arguments and, based on them, creates a skeleton for use. This can be very handy in some cases and will definitely save some time. Let's have a look at the available arguments:

-h, --help: This signifies output usage information.
-V, --version: This shows the version of Express.
-e, --ejs: This argument adds the EJS template engine support. Normally, we need a library to deal with our templates. Writing pure HTML is not very practical. The default engine is set to Jade.
-H, --hogan: This argument enables Hogan (another template engine).
-c, --css: If we want to use a CSS preprocessor, this option lets us use LESS (short for Leaner CSS) or Stylus. The default is plain CSS.
-f, --force: This forces Express to operate on a nonempty directory.

Let's try to generate an Express application skeleton with LESS as a CSS preprocessor. We use the following command:

```
express --css less myapp
```

A new myapp folder is created with the file structure, as seen in the following screenshot:

We still need to install the dependencies, so cd myapp && npm install is required. We will skip the explanation of the generated directories for now and will move on to the created app.js file.
It starts by initializing the module dependencies, as follows:

```js
var express = require('express');
var path = require('path');
var favicon = require('static-favicon');
var logger = require('morgan');
var cookieParser = require('cookie-parser');
var bodyParser = require('body-parser');

var routes = require('./routes/index');
var users = require('./routes/users');

var app = express();
```

Our framework is express, and path is a native Node.js module. The middleware modules are favicon, logger, cookieParser, and bodyParser. The routes and users are custom-made modules, placed in folders local to the project. Similar to the Model-View-Controller (MVC) pattern, these are the controllers for our application. Immediately after, an app variable is created; this represents the Express library. We use this variable to configure our application.

The script continues by setting some key-value pairs. The next code snippet defines the path to our views and the default template engine:

```js
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'jade');
```

The framework uses the methods set and get to define its internal properties. In fact, we may use these methods to define our own variables. If the value is a Boolean, we can replace set and get with enable and disable. For example, see the following code:

```js
app.set('color', 'red');
app.get('color'); // red
app.enable('isAvailable');
```

The next code adds middleware to the framework. We can see the code as follows:

```js
app.use(favicon());
app.use(logger('dev'));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded());
app.use(cookieParser());
app.use(require('less-middleware')({ src: path.join(__dirname, 'public') }));
app.use(express.static(path.join(__dirname, 'public')));
```

The first middleware serves the favicon of our application. The second is responsible for the output in the console. If we remove it, we will not get information about the incoming requests to our server. The following is a simple output produced by logger:

```
GET / 200 554ms - 170b
GET /stylesheets/style.css 200 18ms - 110b
```

The json and urlencoded middleware are related to the data sent along with the request. We need them because they convert the information into an easy-to-use format. There is also a middleware for the cookies. It populates the request object, so we later have access to the required data. The generated app uses LESS as a CSS preprocessor, and we need to configure it by setting the directory containing the .less files. Eventually, we define our static resources, which should be delivered by the server. These are just a few lines, but we've configured the whole application. We may remove or replace some of the modules, and the others will continue working.

The next code in the file maps two defined routes to two different handlers, as follows:

```js
app.use('/', routes);
app.use('/users', users);
```

If the user tries to open a missing page, Express still processes the request by forwarding it to the error handler, as follows:

```js
app.use(function(req, res, next) {
  var err = new Error('Not Found');
  err.status = 404;
  next(err);
});
```

The framework suggests two types of error handling: one for the development environment and another for the production server. The difference is that the second one hides the stack trace of the error, which should be visible only to the developers of the application.
As we can see in the following code, we are checking the value of the env property and handling the error differently:

```js
// development error handler
if (app.get('env') === 'development') {
  app.use(function(err, req, res, next) {
    res.status(err.status || 500);
    res.render('error', {
      message: err.message,
      error: err
    });
  });
}

// production error handler
app.use(function(err, req, res, next) {
  res.status(err.status || 500);
  res.render('error', {
    message: err.message,
    error: {}
  });
});
```

At the end, the app.js file exports the created Express instance, as follows:

```js
module.exports = app;
```

To run the application, we need to execute node ./bin/www. The code requires app.js and starts the server, which by default listens on port 3000:

```js
#!/usr/bin/env node
var debug = require('debug')('my-application');
var app = require('../app');

app.set('port', process.env.PORT || 3000);

var server = app.listen(app.get('port'), function() {
  debug('Express server listening on port ' + server.address().port);
});
```

The process.env declaration provides access to variables defined in the current development environment. If there is no PORT setting, Express uses 3000 as the value. The required debug module uses a similar approach to find out whether it has to show messages in the console.

Managing routes

The input of our application is the routes. The user visits our page at a specific URL, and we have to map this URL to specific logic. In the context of Express, this can be done easily, as follows:

```js
var controller = function(req, res, next) {
  res.send("response");
}
app.get('/example/url', controller);
```

We even have control over the HTTP method; that is, we are able to catch POST, PUT, or DELETE requests. This is very handy if we want to retain the address path but apply different logic. For example, see the following code:

```js
var getUsers = function(req, res, next) {
  // ...
}
var createUser = function(req, res, next) {
  // ...
}
app.get('/users', getUsers);
app.post('/users', createUser);
```

The path is still the same, /users, but if we make a POST request to that URL, the application will try to create a new user. Otherwise, if the method is GET, it will return a list of all the registered members. There is also a method, app.all, which we can use to handle all the method types at once. We can see this method in the following code snippet:

```js
app.all('/', serverHomePage);
```

There is something interesting about routing in Express. We may pass not just one but many handlers. This means that we can create a chain of functions that correspond to one URL. For example, if we need to know whether the user is logged in, there is a module for that. We can add another method that validates the current user and attaches a variable to the request object, as follows:

```js
var isUserLogged = function(req, res, next) {
  req.userLogged = Validator.isCurrentUserLogged();
  next();
}
var getUser = function(req, res, next) {
  if (req.userLogged) {
    res.send("You are logged in. Hello!");
  } else {
    res.send("Please log in first.");
  }
}
app.get('/user', isUserLogged, getUser);
```

The Validator class is a class that checks the current user's session. The idea is simple: we add another handler, which acts as additional middleware. After performing the necessary actions, we call the next function, which passes the flow to the next handler, getUser. Because the request and response objects are the same for all the middleware, we have access to the userLogged variable. This is what makes Express really flexible.
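The Validator object is not defined in the article; a hypothetical stand-in (an assumption, purely to make the chain runnable) might look like this:

```js
// Hypothetical stub for the Validator used above. In a real app this
// would inspect the user's session instead of returning a fixed value.
var Validator = {
  isCurrentUserLogged: function() {
    return false; // replace with a real session check
  }
};
```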
There are a lot of great features available, but they are optional. At the end of this chapter, we will make a simple website that implements the same logic.

Handling dynamic URLs and the HTML forms

The Express framework also supports dynamic URLs. Let's say we have a separate page for every user in our system. The address of those pages looks like the following:

```
/user/45/profile
```

Here, 45 is the unique number of the user in our database. It's of course normal to use one route handler for this functionality. We can't really define different functions for every user. The problem can be solved by using the following syntax:

```js
var getUser = function(req, res, next) {
  res.send("Show user with id = " + req.params.id);
}
app.get('/user/:id/profile', getUser);
```

The route is actually like a regular expression with variables inside. Later, that variable is accessible in the req.params object. We can have more than one variable. Here is a slightly more complex example:

```js
var getUser = function(req, res, next) {
  var userId = req.params.id;
  var actionToPerform = req.params.action;
  res.send("User (" + userId + "): " + actionToPerform);
}
app.get('/user/:id/profile/:action', getUser);
```

If we open http://localhost:3000/user/451/profile/edit, we see User (451): edit as a response. This is how we can get a nice-looking, SEO-friendly URL.

Of course, sometimes we need to pass data via the GET or POST parameters. We may have a request like http://localhost:3000/user?action=edit. To parse it easily, we need to use the native url module, which has a few helper functions for parsing URLs:

```js
var getUser = function(req, res, next) {
  var url = require('url');
  var url_parts = url.parse(req.url, true);
  var query = url_parts.query;
  res.send("User: " + query.action);
}
app.get('/user', getUser);
```

Once the module parses the given URL, our GET parameters are stored in the .query object. The POST variables are a bit different. We need a new middleware to handle that. Thankfully, Express has one, which is as follows:

```js
app.use(express.bodyParser());

var getUser = function(req, res, next) {
  res.send("User: " + req.body.action);
}
app.post('/user', getUser);
```

The express.bodyParser() middleware populates the req.body object with the POST data. Of course, we have to change the HTTP method from .get to .post or .all.

If we want to read cookies in Express, we may use the cookieParser middleware. Similar to the body parser, it should also be installed and added to the package.json file. The following example sets up the middleware and demonstrates its usage:

```js
var cookieParser = require('cookie-parser');
app.use(cookieParser('optional secret string'));
app.get('/', function(req, res, next) {
  var prop = req.cookies.propName;
});
```

Returning a response

Our server accepts requests, does some stuff, and finally sends the response to the client's browser. This can be HTML, JSON, XML, or binary data, among others. As we know, by default, every middleware in Express accepts two objects, request and response. The response object has methods that we can use to send an answer to the client. Every response should have a proper content type or length. Express simplifies the process by providing functions to set the HTTP headers and send content to the browser. In most cases, we will use the .send method, as follows:

```js
res.send("simple text");
```

When we pass a string, the framework sets the Content-Type header to text/html. It's good to know that if we pass an object or array, the content type is application/json.
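For instance (a small sketch, not from the original text), a route that returns JSON needs nothing more than this:

```js
// Passing an array (or object) to res.send() makes Express set
// Content-Type: application/json automatically.
app.get('/api/users', function(req, res) {
  res.send([{ name: "John" }, { name: "Jane" }]);
});
```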
If we develop an API, the response status code is probably going to be important for us. With Express, we are able to set it as in the following code snippet:

```js
res.send(404, 'Sorry, we cannot find that!');
```

It's even possible to respond with a file from our hard disk. If we don't use the framework, we will need to read the file, set the correct HTTP headers, and send the content. However, Express offers the .sendfile method, which wraps all these operations, as follows:

```js
res.sendfile(__dirname + "/images/photo.jpg");
```

Again, the content type is set automatically; this time it is based on the filename's extension.

When building websites or applications with a user interface, we normally need to serve HTML. Sure, we can write it manually in JavaScript, but it's good practice to use a template engine. This means we save everything in external files, and the engine reads the markup from there. It populates them with some data and, at the end, provides ready-to-show content. In Express, the whole process is summarized in one method, .render. However, to work properly, we have to instruct the framework regarding which template engine to use. We already talked about this at the beginning of this chapter. The following two lines of code set the path to our views and the template engine:

```js
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'jade');
```

Let's say we have the following template (/views/index.jade):

```
h1= title
p Welcome to #{title}
```

Express provides a method to serve templates. It accepts the path to the template, the data to be applied, and a callback. To render the previous template, we should use the following code:

```js
res.render("index", {title: "Page title here"});
```

The HTML produced looks as follows:

```html
<h1>Page title here</h1><p>Welcome to Page title here</p>
```

If we pass a third parameter, a function, we will have access to the generated HTML. However, it will not be sent as a response to the browser.
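Here is a short sketch of that third-parameter form (an illustration, assuming it runs inside a handler that received next):

```js
// With a callback, render() hands back the generated HTML instead of
// sending it, so we decide what happens next (send it, cache it, etc.).
res.render("index", {title: "Page title here"}, function(err, html) {
  if (err) { return next(err); }
  res.send(html);
});
```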
The example: a logging system

We've seen the main features of Express. Now let's build something real. The next few pages present a simple website where users can read content only if they are logged in. Let's start by setting up the application. We are going to use Express's command-line instrument. It should be installed using npm install -g express-generator. We create a new folder for the example, navigate to it via the terminal, and execute express --css less site. A new directory, site, will be created. If we go there and run npm install, Express will download all the required dependencies. As we saw earlier, by default, we have two routes and two controllers. To simplify the example, we will use only the first one: app.use('/', routes). Let's change the views/index.jade file content to the following HTML code:

```
doctype html
html
  head
    title= title
    link(rel='stylesheet', href='/stylesheets/style.css')
  body
    h1= title
    hr
    p That's a simple application using Express.
```

Now, if we run node ./bin/www and open http://127.0.0.1:3000, we will see the page. Jade uses indentation to parse our template, so we should not mix tabs and spaces. Otherwise, we will get an error.

Next, we need to protect our content. We check whether the current user has a session created; if not, a login form is shown. It's the perfect time to create a new middleware. To use sessions in Express, install an additional module: express-session. We need to open our package.json file and add the following line of code:

```
"express-session": "~1.0.0"
```

Once we do that, a quick run of npm install will bring the module into our application. All we have to do is use it. The following code goes into app.js:

```js
var session = require('express-session');
app.use(session({ secret: 'app', cookie: { maxAge: 60000 }}));
var verifyUser = function(req, res, next) {
  if (req.session.loggedIn) {
    next();
  } else {
    res.send("show login form");
  }
}
app.use('/', verifyUser, routes);
```

Note that we changed the original app.use('/', routes) line. The session middleware is initialized and added to Express. The verifyUser function is called before the page rendering. It uses the req.session object and checks whether there is a loggedIn variable defined and whether its value is true. If we run the script again, we will see that the show login form text is shown for every request. It's like this because no code sets the session in exactly the way we want it. We need a form where users can type their username and password. We will process the result of the form and, if the credentials are correct, the loggedIn variable will be set to true. Let's create a new Jade template, /views/login.jade:

```
doctype html
html
  head
    title= title
    link(rel='stylesheet', href='/stylesheets/style.css')
  body
    h1= title
    hr
    form(method='post')
      label Username:
      br
      input(type='text', name='username')
      br
      label Password:
      br
      input(type='password', name='password')
      br
      input(type='submit')
```

Instead of sending just a text with res.send("show login form");, we should render the new template, as follows:

```js
res.render("login", {title: "Please log in."});
```

We chose POST as the method for the form, so we need to add the middleware that populates the req.body object with the user's data, as follows:

```js
app.use(bodyParser());
```

Process the submitted username and password as follows:

```js
var verifyUser = function(req, res, next) {
  if (req.session.loggedIn) {
    next();
  } else {
    var username = "admin", password = "admin";
    if (req.body.username === username &&
        req.body.password === password) {
      req.session.loggedIn = true;
      res.redirect('/');
    } else {
      res.render("login", {title: "Please log in."});
    }
  }
}
```

The valid credentials are set to admin/admin. In a real application, we might need to access a database or get this information from another place. It's not really a good idea to place the username and password in the code; however, for our little experiment, it is fine. The previous code checks whether the passed data matches our predefined values. If everything is correct, it sets the session, after which the user is forwarded to the home page.

Once you log in, you should be able to log out. Let's add a link for that just after the content on the index page (views/index.jade):

```
a(href='/logout') logout
```

Once users click on this link, they will be forwarded to a new page. We just need to create a handler for the new route, remove the session, and forward them to the index page, where the login form is shown. Here is what our logging-out handler looks like:

```js
// in app.js
var logout = function(req, res, next) {
  req.session.loggedIn = false;
  res.redirect('/');
}
app.all('/logout', logout);
```

Setting loggedIn to false is enough to make the session invalid. The redirect sends users to the same content page they came from. However, this time, the content is hidden and the login form pops up.

Summary

In this article, we learned about one of the most widely used Node.js frameworks, Express.
We discussed its fundamentals, how to set it up, and its main characteristics. The middleware architecture, which we mentioned in the previous chapter, is the base of the library and gives us the power to write complex but, at the same time, flexible applications. The example we used was a simple one: we required a valid session to provide page access. However, it illustrates the usage of the body parser middleware and the process of registering new routes. We also updated the Jade templates and saw the results in the browser.

For more information on Node.js, refer to the following URLs:

https://www.packtpub.com/web-development/instant-nodejs-starter-instant
https://www.packtpub.com/web-development/learning-nodejs-net-developers
https://www.packtpub.com/web-development/nodejs-essentials

Resources for Article:

Further resources on this subject:

Writing a Blog Application with Node.js and AngularJS [article]
Testing in Node and Hapi [article]
Learning Node.js for Mobile Application Development [article]


Building a Grid System with Susy

Packt
09 Aug 2016
14 min read
In this article by Luke Watts, author of the book Mastering Sass, we will build a responsive grid system using the Susy library and a few custom mixins and functions. We will set up a configuration map with our breakpoints, which we will then loop over to automatically create our entire grid, using interpolation to create our class names. (For more resources related to this topic, see here.)

Detailing the project requirements

For this example, we will need bower to download Susy. After Susy has been downloaded, we will only need two files. We'll place them both in the same directory for simplicity. These files will be style.scss and _helpers.scss. We'll place the majority of our SCSS code in style.scss. First, we'll import susy and our _helpers.scss at the beginning of this file. After that, we will place our variables and, finally, the code which will create our grid system.

Bower and Susy

To check if you have bower installed, open your command line (Terminal on Unix or CMD on Windows) and run:

```
bower -v
```

If you see a number like "1.7.9", you have bower. If not, you will need to install bower using npm, a package manager for Node.js. If you don't already have Node.js installed, you can download it from https://nodejs.org/en/. To install bower from your command line using npm, you will need to run:

```
npm install -g bower
```

Once bower is installed, cd into the root of your project and run:

```
bower install susy
```

This will create a directory called bower_components. Inside that, you will find a folder called susy. The full path to the file we will be importing in style.scss is bower_components/susy/sass/_susy.scss. However, we can leave off the underscore (_) and also the extension (.scss); Sass will still import the file just fine. In style.scss, add the following at the beginning of the file:

```scss
// style.scss
@import 'bower_components/susy/sass/susy';
```

Helpers (mixins and functions)

Next, we'll need to import our _helpers.scss file in style.scss. Our _helpers.scss file will contain any custom mixins or functions we'll create to help us in building our grid. In style.scss, import _helpers.scss just below where we imported Susy:

```scss
// style.scss
@import 'bower_components/susy/sass/susy';
@import 'helpers';
```

Mixin: bp (breakpoint)

I don't know about you, but writing media queries always seems like a bit of a chore to me. I just don't like to write (min-width: 768px) all the time. So, for that reason, I'm going to include the bp mixin, which means instead of writing:

```scss
@media (min-width: 768px) {
  // ...
}
```

We can simply use:

```scss
@include bp(md) {
  // ...
}
```

First, we are going to create a map of our breakpoints. Add the $breakpoints map to style.scss just below our imports:

```scss
// style.scss
@import 'bower_components/susy/sass/susy';
@import 'helpers';

$breakpoints: (
  sm: 480px,
  md: 768px,
  lg: 980px
);
```

Then, inside _helpers.scss, we're going to create our bp mixin, which will handle creating our media queries from the $breakpoints map. Here's the breakpoint (bp) mixin:

```scss
@mixin bp($size: md) {
  @media (min-width: map-get($breakpoints, $size)) {
    @content;
  }
}
```

Here we are setting the default breakpoint to be md (768px). We then use the built-in Sass function map-get to get the relevant value using the key ($size). Inside our @media rule, we use the @content directive, which allows us to pass any Sass or CSS directly through our bp mixin into the @media rule.

The container mixin

The container mixin sets the max-width of the containing element, which will be the .container element for now.
However, it is best to use the container mixin to semantically restrict certain parts of the design to your max width, instead of using presentational classes like container or row. The container mixin takes a width argument, which will be the max-width. It also automatically applies the micro-clearfix hack. This prevents the container's height from collapsing when the elements inside it are floated. I prefer the overflow: hidden method myself, but they do the same thing essentially. By default, the container will be set to max-width: 100%. However, you can set it to be any valid unit of dimension, such as 60em, 1160px, 50%, 90vw, or whatever. As long as it's a valid CSS unit, it will work. In style.scss, let's create our .container element using the container mixin:

```scss
// style.scss
.container {
  @include container(1160px);
}
```

The preceding code will give the following CSS output:

```css
.container {
  max-width: 1160px;
  margin-left: auto;
  margin-right: auto;
}
.container:after {
  content: " ";
  display: block;
  clear: both;
}
```

Because the container uses a max-width, we don't need to specify different dimensions for various screen sizes. It will be 100% until the screen is above 1160px, and then the max-width value will kick in. The .container:after rule is the micro-clearfix hack.

The span mixin

To create columns in Susy, we use the span mixin. The span mixin sets the width of the element and applies a padding or margin, depending on how Susy is set up. By default, Susy will apply a margin to the right of each column, but you can set it to be on the left, or to be padding on the left or right, or padding or margin on both sides. Susy will do the necessary work to make everything work behind the scenes. To create a half-width column in a 12-column grid, you would use:

```scss
.col-6 {
  @include span(6 of 12);
}
```

The of 12 lets Susy know this is a 12-column grid. When we define our $susy map later, we can tell Susy how many columns we are using via the columns property. This means we can drop the of 12 part and simply use span(6) instead. Susy will then know we are using 12 columns unless we explicitly pass another value. The preceding SCSS will output:

```css
.col-6 {
  width: 49.15254%;
  float: left;
  margin-right: 1.69492%;
}
```

Notice the width and margin together would actually be 50.84746%, not 50% as you might expect. Therefore, two of these columns would actually be 101.69492%. That will cause the last column to wrap onto the next row. To prevent this, you would need to remove the margin from the last column.

The last keyword

To address this, Susy uses the last keyword. When you pass it to the span mixin, it lets Susy know this is the last column in a row. This removes the right margin and also floats the element in question to the right, to ensure it's at the very end of the row. Let's take the previous example, where we would have two col-6 elements. We could create a class of col-6-last and apply the last keyword to that span mixin:

```scss
.col-6 {
  @include span(6 of 12);

  &-last {
    @include span(last 6 of 12);
  }
}
```

The preceding SCSS will output:

```css
.col-6 {
  width: 49.15254%;
  float: left;
  margin-right: 1.69492%;
}
.col-6-last {
  width: 49.15254%;
  float: right;
  margin-right: 0;
}
```

You can also place the last keyword at the end. This will also work:

```scss
.col-6 {
  @include span(6 of 12);

  &-last {
    @include span(6 of 12 last);
  }
}
```

The $susy configuration map

Susy allows for a lot of configuration through its configuration map, which is defined as $susy.
The settings in the $susy map allow us to set how wide the container should be, how many columns our grid should have, how wide the gutters are, whether those gutters should be margins or padding, and whether the gutters should be on the left, right, or both sides of each column. Actually, there are even more settings available, depending on what type of grid you'd like to build. Let's define our $susy map with the container set to 1160px, just after our $breakpoints map:

```scss
// style.scss
$susy: (
  container: 1160px,
  columns: 12,
  gutters: 1/3
);
```

Here we've set our container's max-width to be 1160px. This is used when we use the container mixin without entering a value. We've also set our grid to be 12 columns, with the gutters (padding or margin) set to 1/3 the width of a column. That's about all we need for our purposes; however, Susy has a lot more to offer. In fact, to cover everything in Susy would require an entire book of its own. If you want to explore more of what Susy can do, you should read the documentation at http://susydocs.oddbird.net/en/latest/.

Setting up a grid system

We've all used a 12-column grid which has various sizes (small, medium, large) or a set breakpoint (or breakpoints). These are the most popular methods for two reasons: they work, and they're easy to understand. Furthermore, with the help of Susy, we can achieve this with fewer than 30 lines of Sass! Don't believe me? Let's begin.

The concept of our grid system

Our grid system will be similar to that of Foundation and Bootstrap. It will have three breakpoints and will be mobile-first. It will have a container, which will act as both .container and .row, therefore removing the need for a .row class.

The breakpoints

Earlier, we defined three sizes in our $breakpoints map. These were:

```scss
$breakpoints: (
  sm: 480px,
  md: 768px,
  lg: 980px
);
```

So our grid will have small, medium, and large breakpoints.

The columns naming convention

Our columns will use a similar naming convention to that of Bootstrap. There will be four available sets of columns:

The first will apply from 0px up to 479px (example: .col-12)
The next will apply from 480px up to 767px (example: .col-12-sm)
The medium will apply from 768px up to 979px (example: .col-12-md)
The large will apply from 980px upward (example: .col-12-lg)

Having four options will give us the most flexibility.

Building the grid

From here, we can use an @for loop and our bp mixin to create our four sets of classes. Each will go from 1 through 12 (or whatever our Susy columns property is set to) and will use the breakpoints we defined for small (sm), medium (md), and large (lg). In style.scss, add the following:

```scss
// style.scss
@for $i from 1 through map-get($susy, columns) {
  .col-#{$i} {
    @include span($i);

    &-last {
      @include span($i last);
    }
  }
}
```

These nine lines of code are responsible for our mobile-first set of column classes. This loops from 1 through 12 (which is currently the value of the $susy columns property) and creates a class for each. It also adds a class which removes the final column's right margin, so our last column doesn't wrap onto a new line. Having control of when this happens will give us the most control. The preceding code would create:

```css
.col-1 {
  width: 6.38298%;
  float: left;
  margin-right: 2.12766%;
}
.col-1-last {
  width: 6.38298%;
  float: right;
  margin-right: 0;
}
/* 2, 3, 4, and so on up to col-12 */
```

That means our loop, which is only nine lines of Sass, will generate 144 lines of CSS! Now let's create our three breakpoints.
We'll use an @each loop to get the sizes from our $breakpoints map. This means that if we add another breakpoint, such as extra-large (xl), it will automatically create the correct set of classes for that size:

```scss
@each $size, $value in $breakpoints {
  // Breakpoint will go here and will use $size
}
```

Here we're looping over the $breakpoints map and setting a $size variable and a $value variable. The $value variable will not be used; however, the $size variable will be set to small, medium, and large for each respective loop. We can then use that to set our bp mixin accordingly:

```scss
@each $size, $value in $breakpoints {
  @include bp($size) {
    // The @for loop will go here, similar to the preceding @for loop...
  }
}
```

Now, each loop will set a breakpoint for small, medium, and large, and any additional sizes we might add in the future will be generated automatically. Now we can use the same @for loop inside the bp mixin, with one small change: we'll add a size to the class name:

```scss
@each $size, $value in $breakpoints {
  @include bp($size) {
    @for $i from 1 through map-get($susy, columns) {
      .col-#{$i}-#{$size} {
        @include span($i);

        &-last {
          @include span($i last);
        }
      }
    }
  }
}
```

That's everything we need for our grid system. Here's the full style.scss file:

```scss
// style.scss
@import 'bower_components/susy/sass/susy';
@import 'helpers';

$breakpoints: (
  sm: 480px,
  md: 768px,
  lg: 980px
);

$susy: (
  container: 1160px,
  columns: 12,
  gutters: 1/3
);

.container {
  @include container;
}

@for $i from 1 through map-get($susy, columns) {
  .col-#{$i} {
    @include span($i);

    &-last {
      @include span($i last);
    }
  }
}

@each $size, $value in $breakpoints {
  @include bp($size) {
    @for $i from 1 through map-get($susy, columns) {
      .col-#{$i}-#{$size} {
        @include span($i);

        &-last {
          @include span($i last);
        }
      }
    }
  }
}
```

With our bp mixin, that's 45 lines of SCSS. And how many lines of CSS does that generate? Nearly 600! Also, as I've said, if we wanted to create another breakpoint, it would only require a change to the $breakpoints map. And if we wanted 16 columns instead, we would only need to change the $susy columns property. The preceding code would then automatically loop over each and create the correct number of columns for each breakpoint.

Testing our grid

Next, we need to check that our grid works. We mainly want to check a few column sizes for each breakpoint, and we want to be sure our last keyword is doing what we expect. I've created a simple piece of HTML to do this. I've also added a small bit of CSS to the file to correct box-sizing issues, which would otherwise occur because of the additional 1px border. I've also restricted the height so that text which wraps to a second line won't affect the heights. This is simply so everything remains in line, making it easy to see that our widths are working. I don't recommend setting heights on elements. EVER. Instead, use padding or line-height, if you can, to give an element more height, and let the content dictate the size of the element.
Create a file called index.html in the root of the project and add the following:

```html
<!doctype html>
<html lang="en-GB">
<head>
  <meta charset="UTF-8">
  <title>Susy Grid Test</title>
  <link rel="stylesheet" type="text/css" href="style.css" />
  <style type="text/css">
    *, *::before, *::after {
      box-sizing: border-box;
    }
    [class^="col"] {
      height: 1.5em;
      background-color: grey;
      border: 1px solid black;
    }
  </style>
</head>
<body>
  <div class="container">
    <h1>Grid</h1>
    <div class="col-12 col-10-sm col-2-md col-10-lg">.col-sm-10.col-2-md.col-10-lg</div>
    <div class="col-12 col-2-sm-last col-10-md-last col-2-lg-last">.col-sm-2-last.col-10-md-last.col-2-lg-last</div>
    <div class="col-12 col-9-sm col-3-md col-9-lg">.col-sm-9.col-3-md.col-9-lg</div>
    <div class="col-12 col-3-sm-last col-9-md-last col-3-lg-last">.col-sm-3-last.col-9-md-last.col-3-lg-last</div>
    <div class="col-12 col-8-sm col-4-md col-8-lg">.col-sm-8.col-4-md.col-8-lg</div>
    <div class="col-12 col-4-sm-last col-8-md-last col-4-lg-last">.col-sm-4-last.col-8-md-last.col-4-lg-last</div>
    <div class="col-12 col-7-sm col-5-md col-7-lg">.col-sm-7.col-md-5.col-7-lg</div>
    <div class="col-12 col-5-sm-last col-7-md-last col-5-lg-last">.col-sm-5-last.col-7-md-last.col-5-lg-last</div>
    <div class="col-12 col-6-sm col-6-md col-6-lg">.col-sm-6.col-6-md.col-6-lg</div>
    <div class="col-12 col-6-sm-last col-6-md-last col-6-lg-last">.col-sm-6-last.col-6-md-last.col-6-lg-last</div>
  </div>
</body>
</html>
```

Use your dev tools' responsive mode, or simply resize the browser from full size down to around 320px, and you'll see that our grid works as expected.

Summary

In this article, we used Susy grids, as well as a simple breakpoint mixin (bp), to create a solid, flexible grid system. With just under 50 lines of Sass, we generated a grid system consisting of almost 600 lines of CSS.

Resources for Article:

Further resources on this subject:

Implementation of SASS [article]
Use of Stylesheets for Report Designing using BIRT [article]
CSS Grids for RWD [article]


Writing a Reddit Reader with RxPHP

Packt
09 Jan 2017
9 min read
In this article by Martin Sikora, author of the book PHP Reactive Programming, we will cover writing a CLI Reddit reader app using RxPHP, and we will see how Disposables are used in the default classes that come with RxPHP, and how these are going to be useful for unsubscribing from Observables in our app. (For more resources related to this topic, see here.)

Examining RxPHP's internals

As we know, Disposables serve as a means for releasing resources used by Observers, Observables, Subjects, and so on. In practice, a Disposable is returned, for example, when subscribing to an Observable. Consider the following code from the default Rx\Observable::subscribe() method:

```php
function subscribe(ObserverI $observer, $scheduler = null) {
    $this->observers[] = $observer;
    $this->started = true;

    return new CallbackDisposable(function () use ($observer) {
        $this->removeObserver($observer);
    });
}
```

This method first adds the Observer to the array of all subscribed Observers. It then marks this Observable as started and, at the end, returns a new instance of the CallbackDisposable class, which takes a Closure as an argument and invokes it when it's disposed. This is probably the most common use case for Disposables. This Disposable just removes the Observer from the array of subscribers, so it receives no more events emitted from this Observable.

A closer look at subscribing to Observables

It should be obvious that Observables need to work in such a way that they iterate over all subscribed Observers. Unsubscribing via a Disposable then needs to remove one particular Observer from the array of all subscribed Observers. However, if we have a look at how most of the default Observables work, we find out that they always override the Observable::subscribe() method and usually completely omit the part where they should hold an array of subscribers. Instead, they just emit all available values to the subscribed Observer and finish with the onCompleted() signal immediately after that. For example, we can have a look at the actual source code of the subscribe() method of the Rx\Observable\ReturnObservable class:

```php
function subscribe(ObserverI $observer, SchedulerI $scheduler = null) {
    $value = $this->value;
    $scheduler = $scheduler ?: new ImmediateScheduler();

    $disposable = new CompositeDisposable();

    $disposable->add($scheduler->schedule(function () use ($observer, $value) {
        $observer->onNext($value);
    }));
    $disposable->add($scheduler->schedule(function () use ($observer) {
        $observer->onCompleted();
    }));

    return $disposable;
}
```

The ReturnObservable class takes a single value in its constructor and emits this value to every Observer as they subscribe. The following is a nice example of how the lifecycle of an Observable might look:

When an Observer subscribes, the method checks whether a Scheduler was also passed as an argument. Usually, it's not, so it creates an instance of ImmediateScheduler.

Then, an instance of CompositeDisposable is created, which is going to keep an array of all Disposables used by this method. When calling CompositeDisposable::dispose(), it iterates all the Disposables it contains and calls their respective dispose() methods.

Right after that, we start populating our CompositeDisposable with the following:

```php
$disposable->add($scheduler->schedule(function () { /* ... */ }));
```

This is something we'll see very often. SchedulerInterface::schedule() returns a DisposableInterface, which is responsible for unsubscribing and releasing resources.
In this case, when we're using ImmediateScheduler, which has no other logic, it just evaluates the Closure immediately:

```php
function () use ($observer, $value) {
    $observer->onNext($value);
}
```

Since ImmediateScheduler::schedule() doesn't need to release any resources (it didn't use any), it just returns an instance of Rx\Disposable\EmptyDisposable, which does literally nothing. Then the Disposable is returned and could be used to unsubscribe from this Observable. However, as we saw in the preceding source code, this Observable doesn't let you unsubscribe and, if we think about it, that doesn't even make sense, because the ReturnObservable class's value is emitted immediately on subscription.

The same applies to other similar Observables, such as IteratorObservable, RangeObservable, or ArrayObservable. These just contain recursive calls with Schedulers, but the principle is the same.

A good question is: why on Earth is this so complicated? Everything the preceding code does could be stripped down to the following three lines (assuming we're not interested in using Schedulers):

```php
function subscribe(ObserverI $observer) {
    $observer->onNext($this->value);
    $observer->onCompleted();
}
```

Well, for ReturnObservable this might be true, but in real applications we very rarely use any of these primitive Observables. It's true that we usually don't even need to deal with Schedulers. However, the ability to unsubscribe from Observables or clean up any resources when unsubscribing is very important, and we'll use it in a few moments.

A closer look at Operator chains

Before we start writing our Reddit reader, we should talk briefly about an interesting situation that might occur, so it doesn't catch us unprepared later. We're also going to introduce a new type of Observable, called ConnectableObservable. Consider this simple Operator chain with two subscribers:

```php
// rxphp_filters_observables.php
use Rx\Observable\RangeObservable;
use Rx\Observable\ConnectableObservable;

$connObservable = new ConnectableObservable(new RangeObservable(0, 6));

$filteredObservable = $connObservable
    ->map(function ($val) {
        return $val ** 2;
    })
    ->filter(function ($val) {
        return $val % 2;
    });

$disposable1 = $filteredObservable->subscribeCallback(function ($val) {
    echo "S1: ${val}\n";
});
$disposable2 = $filteredObservable->subscribeCallback(function ($val) {
    echo "S2: ${val}\n";
});

$connObservable->connect();
```

The ConnectableObservable class is a special type of Observable that behaves similarly to Subject (in fact, internally, it really uses an instance of the Subject class). Any other Observable emits all available values right after you subscribe to it. However, ConnectableObservable takes another Observable as an argument and lets you subscribe Observers to it without emitting anything. When you call ConnectableObservable::connect(), it connects the Observers with the source Observable, and all values go one by one to all subscribers.

Internally, it contains an instance of the Subject class, and when we called subscribe(), it just subscribed this Observer to its internal Subject. Then, when we called the connect() method, it subscribed the internal Subject to the source Observable.

In the $filteredObservable variable, we keep a reference to the last Observable returned from the filter() call, which is an instance of AnonymousObservable; on the next few lines, we subscribe both Observers to it.

Now, let's see what this Operator chain prints:

```
$ php rxphp_filters_observables.php
S1: 1
S2: 1
S1: 9
S2: 9
S1: 25
S2: 25
```

As we can see, each value went through both Observers in the order it was emitted.
Just out of curiosity, we can also have a look at what would happen if we didn't use ConnectableObservable, and used just the RangeObservable instead:

$ php rxphp_filters_observables.php
S1: 1
S1: 9
S1: 25
S2: 1
S2: 9
S2: 25

This time, RangeObservable emitted all values to the first Observer and then, again, all values to the second Observer. Right now, we can tell that the Observable had to generate all the values twice, which is inefficient, and with a large dataset, this might cause a performance bottleneck.

Let's go back to the first example with ConnectableObservable, and modify the filter() call so it prints all the values that go through:

$filteredObs = $connObs
    ->map(function ($val) {
        return $val ** 2;
    })
    ->filter(function ($val) {
        echo "Filter: $val\n";
        return $val % 2;
    });

Now we run the code again and see what happens:

$ php rxphp_filters_observables.php
Filter: 0
Filter: 0
Filter: 1
S1: 1
Filter: 1
S2: 1
Filter: 4
Filter: 4
Filter: 9
S1: 9
Filter: 9
S2: 9
Filter: 16
Filter: 16
Filter: 25
S1: 25
Filter: 25
S2: 25

Well, this is unexpected! Each value is printed twice. The source Observable didn't generate all the values twice; the problem is that we subscribed twice to an Observable at the end of the Operator chain. As stated previously, $filteredObs is an instance of AnonymousObservable that holds many nested Closures. Calling its subscribe() method runs a Closure created by its predecessor, and so on. This leads to the fact that every call to subscribe() has to invoke the entire chain, so every value passes through map() and filter() once per subscriber. While this might not be an issue in many use cases, there are situations where we might want to do some special operation inside one of the filters. Also, note that calls to the subscribe() method might be out of our control, performed by another developer who wanted to use an Observable we created for them. It's good to know that such a situation might occur and could lead to unwanted behavior.

It's sometimes hard to see what's going on inside Observables. It's very easy to get lost, especially when we have to deal with multiple Closures. Schedulers are prime examples. Feel free to experiment with the examples shown here and use a debugger to examine, step by step, what code gets executed and in what order.

So, let's figure out how to fix this. We don't want to subscribe at the end of the chain multiple times, so we can create an instance of the Subject class, where we'll subscribe both Observers, and the Subject itself will subscribe to the AnonymousObservable, as discussed a moment ago:

// ...
use Rx\Subject\Subject;

$subject = new Subject();
$connObs = new ConnectableObservable(new RangeObservable(0, 6));

$connObs
    ->map(function ($val) {
        return $val ** 2;
    })
    ->filter(function ($val) {
        echo "Filter: $val\n";
        return $val % 2;
    })
    ->subscribe($subject);

$disposable1 = $subject->subscribeCallback(function ($val) {
    echo "S1: ${val}\n";
});
$disposable2 = $subject->subscribeCallback(function ($val) {
    echo "S2: ${val}\n";
});

$connObs->connect();

Now we can run the script again and see that it does what we wanted it to do:

$ php rxphp_filters_observables.php
Filter: 0
Filter: 1
S1: 1
S2: 1
Filter: 4
Filter: 9
S1: 9
S2: 9
Filter: 16
Filter: 25
S1: 25
S2: 25

This might look like an edge case, but soon we'll see that this issue, left unhandled, could lead to some very unpredictable behavior.
We'll run into both of these issues (proper usage of Disposables and Operator chains) when we start writing our Reddit reader.

Summary

In this article, we looked in more depth at how Disposables and Operators work internally, and what that means for us. We also looked at a couple of new classes from RxPHP, such as ConnectableObservable and CompositeDisposable.

Resources for Article:

Further resources on this subject:
- Working with JSON in PHP jQuery [article]
- Working with Simple Associations using CakePHP [article]
- Searching Data using phpMyAdmin and MySQL [article]
Writing Modules

Packt
14 Aug 2017
15 min read
In this article by David Mark Clements, the author of the book Node.js Cookbook, we will be covering the following points to introduce you to using modules in Node.js:

- Node's module system
- Initializing a module
- Writing a module
- Tooling around modules
- Publishing modules
- Setting up a private module repository
- Best practices

(For more resources related to this topic, see here.)

In idiomatic Node, the module is the fundamental unit of logic. Any typical application or system consists of generic code and application code. As a best practice, generic shareable code should be held in discrete modules, which can be composed together at the application level with minimal amounts of domain-specific logic. In this article, we'll learn how Node's module system works, how to create modules for various scenarios, and how we can reuse and share our code.

Scaffolding a module

Let's begin our exploration by setting up a typical file and directory structure for a Node module. At the same time, we'll be learning how to automatically generate a package.json file (we refer to this throughout as initializing a folder as a package) and to configure npm (Node's package managing tool) with some defaults, which can then be used as part of the package generation process. In this recipe, we'll create the initial scaffolding for a full Node module.

Getting ready

Installing Node

If we don't already have Node installed, we can go to https://nodejs.org to pick up the latest version for our operating system. If Node is on our system, then so is the npm executable; npm is the default package manager for Node. It's useful for creating, managing, installing, and publishing modules.

Before we run any commands, let's tweak the npm configuration a little:

npm config set init.author.name "<name here>"

This will speed up module creation and ensure that each package we create has a consistent author name, thus avoiding typos and variations of our name.

npm stands for...
Contrary to popular belief, npm is not an acronym for Node Package Manager; in fact, it stands for npm is Not An Acronym, which is why it's not called NINAA.

How to do it…

Let's say we want to create a module that converts HSL (hue, saturation, luminosity) values into a hex-based RGB representation, such as is used in CSS (for example, #fb4a45). The name hsl-to-hex seems good, so let's make a new folder for our module and cd into it:

mkdir hsl-to-hex
cd hsl-to-hex

Every Node module must have a package.json file, which holds metadata about the module. Instead of manually creating a package.json file, we can simply execute the following command in our newly created module folder:

npm init

This will ask a series of questions. We can hit enter for every question without supplying an answer. Note how the default module name corresponds to the current working directory, and the default author is the init.author.name value we set earlier.

Upon completion, we should have a package.json file that looks something like the following:

{
  "name": "hsl-to-hex",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "David Mark Clements",
  "license": "MIT"
}

How it works…

When Node is installed on our system, npm comes bundled with it. The npm executable is written in JavaScript and runs on Node. The npm config command can be used to permanently alter settings.
In our case, we changed the init.author.name setting so that npm init would reference it for the default during a module's initialization. We can list all the current configuration settings with npm config ls.

Config Docs
Refer to https://docs.npmjs.com/misc/config for all possible npm configuration settings.

When we run npm init, the answers to the prompts are stored in an object, serialized as JSON, and then saved to a newly created package.json file in the current directory.

There's more…

Let's find out some more ways to automatically manage the content of the package.json file via the npm command.

Reinitializing

Sometimes additional metadata becomes available after we've created a module. A typical scenario arises when we initialize our module as a git repository and add a remote endpoint after creating the module.

Git and GitHub
If we've not used the git tool and GitHub before, we can refer to http://help.github.com to get started. If we don't have a GitHub account, we can head to http://github.com to get a free account.

To demonstrate, let's create a GitHub repository for our module. Head to GitHub and click on the plus symbol in the top-right, then select New repository. Specify the name as hsl-to-hex and click on Create Repository.

Back in the Terminal, inside our module folder, we can now run this:

echo -e "node_modules\n*.log" > .gitignore
git init
git add .
git commit -m '1st'
git remote add origin http://github.com/<username>/hsl-to-hex
git push -u origin master

Now here comes the magic part; let's initialize again (simply press enter for every question):

npm init

This time, the Git remote we just added was detected and became the default answer for the git repository question. Accepting this default answer meant that the repository, bugs, and homepage fields were added to package.json.

A repository field in package.json is an important addition when it comes to publishing open source modules, since it will be rendered as a link on the module's information page at http://npmjs.com. A repository link enables potential users to peruse the code prior to installation. Modules that can't be viewed before use are far less likely to be considered viable.

Versioning

The npm tool supplies other functionalities to help with the module creation and management workflow. For instance, the npm version command allows us to manage our module's version number according to SemVer semantics.

SemVer
SemVer is a versioning standard. A version consists of three numbers separated by a dot, for example, 2.4.16. The position of a number denotes specific information about the version in comparison to the other versions. The three positions are known as MAJOR.MINOR.PATCH. The PATCH number is increased when changes have been made that don't break existing functionality or add any new functionality. For instance, a bug fix is considered a patch. The MINOR number should be increased when new backward-compatible functionality is added, for instance, the adding of a method. The MAJOR number increases when backward-incompatible changes are made. Refer to http://semver.org/ for more information.

If we were to fix a bug, we would want to increase the PATCH number. We can either manually edit the version field in package.json, setting it to 1.0.1, or we can execute the following:

npm version patch

This will increase the version field in one command.
Additionally, if our module is a Git repository, it will add a commit based on the version (in our case, v1.0.1), which we can then immediately push.

When we ran the command, npm output the new version number. However, we can double-check the version number of our module without opening package.json:

npm version

This will output something similar to the following:

{
  'hsl-to-hex': '1.0.1',
  npm: '2.14.17',
  ares: '1.10.1-DEV',
  http_parser: '2.6.2',
  icu: '56.1',
  modules: '47',
  node: '5.7.0',
  openssl: '1.0.2f',
  uv: '1.8.0',
  v8: '4.6.85.31',
  zlib: '1.2.8'
}

The first field is our module along with its version number.

If we added new backward-compatible functionality, we could run this:

npm version minor

Now our version is 1.1.0. Finally, we can run the following for a major version bump:

npm version major

This sets our module's version to 2.0.0. Since we're just experimenting and didn't make any changes, we should set our version back to 1.0.0. We can do this via the npm command as well:

npm version 1.0.0

See also

Refer to the following recipes:
- Writing module code
- Publishing a module

Installing dependencies

In most cases, it's wisest to compose a module out of other modules. In this recipe, we will install a dependency.

Getting ready

For this recipe, all we need is Command Prompt open in the hsl-to-hex folder from the Scaffolding a module recipe.

How to do it…

Our hsl-to-hex module can be implemented in two steps:

1. Convert the hue degrees, saturation percentage, and luminosity percentage to corresponding red, green, and blue numbers between 0 and 255.
2. Convert the RGB values to HEX.

Before we tear into writing an HSL-to-RGB algorithm, we should check whether this problem has already been solved. The easiest way to check is to head to http://npmjs.com and perform a search. Oh, look! Somebody already solved this. After some research, we decide that the hsl-to-rgb-for-reals module is the best fit. Ensuring that we are in the hsl-to-hex folder, we can now install our dependency with the following:

npm install --save hsl-to-rgb-for-reals

Now let's take a look at the bottom of package.json:

tail package.json # linux/osx
type package.json # windows

Tail output should give us this:

"bugs": {
  "url": "https://github.com/davidmarkclements/hsl-to-hex/issues"
},
"homepage": "https://github.com/davidmarkclements/hsl-to-hex#readme",
"description": "",
"dependencies": {
  "hsl-to-rgb-for-reals": "^1.1.0"
}
}

We can see that the dependency we installed has been added to a dependencies object in the package.json file.

How it works…

The top two results of the npm search are hsl-to-rgb and hsl-to-rgb-for-reals. The first result is unusable because the author of the package forgot to export it and is unresponsive to fixing it. The hsl-to-rgb-for-reals module is a fixed version of hsl-to-rgb. This situation serves to illustrate the nature of the npm ecosystem. On the one hand, there are over 200,000 modules and counting; on the other hand, many of these modules are of low value. Nevertheless, the system is also self-healing, in that if a module is broken and not fixed by the original maintainer, a second developer often assumes responsibility and publishes a fixed version of the module.

When we run npm install in a folder with a package.json file, a node_modules folder is created (if it doesn't already exist). Then, the package is downloaded from the npm registry and saved into a subdirectory of node_modules (for example, node_modules/hsl-to-rgb-for-reals).
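At this point, we could start sketching the module's own code in index.js. The following is only a rough sketch, under the assumption that hsl-to-rgb-for-reals exports a single function returning an array of red, green, and blue values (check the module's README for its actual signature):

const toRgb = require('hsl-to-rgb-for-reals');

// Convert a 0-255 channel value to a two-character hex string.
function toHex(n) {
  const hex = n.toString(16);
  return hex.length === 1 ? '0' + hex : hex;
}

module.exports = function hslToHex(hue, saturation, luminosity) {
  const rgb = toRgb(hue, saturation, luminosity);
  return '#' + toHex(rgb[0]) + toHex(rgb[1]) + toHex(rgb[2]);
};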
npm 2 vs npm 3
Our installed module doesn't have any dependencies of its own. However, if it did, the sub-dependencies would be installed differently depending on whether we're using version 2 or version 3 of npm. Essentially, npm 2 installs dependencies in a tree structure, for instance, node_modules/dep/node_modules/sub-dep-of-dep/node_modules/sub-dep-of-sub-dep. Conversely, npm 3 follows a maximally flat strategy where sub-dependencies are installed in the top-level node_modules folder when possible, for example, node_modules/dep, node_modules/sub-dep-of-dep, and node_modules/sub-dep-of-sub-dep. This results in fewer downloads and less disk space usage; npm 3 resorts to a tree structure in cases where there are two versions of a sub-dependency, which is why it's called a maximally flat strategy. Typically, if we've installed Node 4 or above, we'll be using npm version 3.

There's more…

Let's explore development dependencies, creating module management scripts, and installing global modules without requiring root access.

Installing development dependencies

We usually need some tooling to assist with the development and maintenance of a module or application. The ecosystem is full of programming support modules, from linting to testing to browser bundling to transpilation. In general, we don't want consumers of our module to download dependencies they don't need. Similarly, if we're deploying a system built in Node, we don't want to burden the continuous integration and deployment processes with superfluous, pointless work. So, we separate our dependencies into production and development categories. When we use npm --save install <dep>, we're installing a production module. To install a development dependency, we use --save-dev. Let's go ahead and install a linter.

JavaScript Standard Style
standard is a JavaScript linter that enforces an unconfigurable ruleset. The premise of this approach is that we should stop using up precious time bikeshedding about syntax.

All the code in this article uses the standard linter, so we'll install that:

npm install --save-dev standard

semistandard
If the absence of semicolons is abhorrent, we can choose to install semistandard instead of standard at this point. The lint rules match those of standard, with the obvious exception of requiring semicolons. Further, any code written using standard can be reformatted to semistandard using the semistandard-format command tool. Simply run npm -g i semistandard-format to get started with it.

Now, let's take a look at the package.json file:

{
  "name": "hsl-to-hex",
  "version": "1.0.0",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "David Mark Clements",
  "license": "MIT",
  "repository": {
    "type": "git",
    "url": "git+ssh://git@github.com/davidmarkclements/hsl-to-hex.git"
  },
  "bugs": {
    "url": "https://github.com/davidmarkclements/hsl-to-hex/issues"
  },
  "homepage": "https://github.com/davidmarkclements/hsl-to-hex#readme",
  "description": "",
  "dependencies": {
    "hsl-to-rgb-for-reals": "^1.1.0"
  },
  "devDependencies": {
    "standard": "^6.0.8"
  }
}

We now have a devDependencies field alongside the dependencies field. When our module is installed as a sub-dependency of another package, only the hsl-to-rgb-for-reals module will be installed, while the standard module will be ignored, since it's irrelevant to our module's actual implementation.
If this package.json file represented a production system, we could run the install step with the --production flag, as shown:

npm install --production

Alternatively, this can be set in the production environment with the following command:

npm config set production true

Currently, we can run our linter using the executable installed in the node_modules/.bin folder. Consider this example:

./node_modules/.bin/standard

This is ugly and not at all ideal. Refer to Using npm run scripts for a more elegant approach.

Using npm run scripts

Our package.json file currently has a scripts property that looks like this:

"scripts": {
  "test": "echo \"Error: no test specified\" && exit 1"
},

Let's edit the package.json file and add another field, called lint, as follows:

"scripts": {
  "test": "echo \"Error: no test specified\" && exit 1",
  "lint": "standard"
},

Now, as long as we have standard installed as a development dependency of our module (refer to Installing development dependencies), we can run the following command to run a lint check on our code:

npm run-script lint

This can be shortened to the following:

npm run lint

When we run an npm script, the current directory's node_modules/.bin folder is added to the execution context's PATH environment variable. This means that even if we don't have the standard executable in our usual system PATH, we can reference it in an npm script as if it were in our PATH.

Some consider lint checks to be a precursor to tests. Let's alter the scripts.test field, as illustrated:

"scripts": {
  "test": "npm run lint",
  "lint": "standard"
},

Chaining commands
Later, we can append other commands to the test script using the double ampersand (&&) to run a chain of checks. For instance, "test": "npm run lint && tap test".

Now, let's run the test script:

npm run test

Since the test script is special, we can simply run this:

npm test

Eliminating the need for sudo

The npm executable can install both local and global modules. Global modules are mostly installed to allow command-line utilities to be used system-wide. On OS X and Linux, the default npm setup requires sudo access to install a module. For example, the following will fail on a typical OS X or Linux system with the default npm setup:

npm -g install cute-stack # <-- oh oh, needs sudo

This is unsuitable for several reasons. Forgetting to use sudo becomes frustrating; we're trusting npm with root access; and accidentally using sudo for a local install causes permission problems (particularly with the npm local cache).

The prefix setting stores the location for globally installed modules; we can view this with the following:

npm config get prefix

Usually, the output will be /usr/local. To avoid the use of sudo, all we have to do is set ownership permissions on any subfolders in /usr/local used by npm:

sudo chown -R $(whoami) $(npm config get prefix)/{lib/node_modules,bin,share}

Now we can install global modules without root access:

npm -g install cute-stack # <-- now works without sudo

If changing ownership of system folders isn't feasible, we can use a second approach, which involves changing the prefix setting to a folder in our home path:

mkdir ~/npm-global
npm config set prefix ~/npm-global

We'll also need to set our PATH:

export PATH=$PATH:~/npm-global/bin
source ~/.profile

The source command essentially refreshes the Terminal environment to reflect the changes we've made.
See also

- Scaffolding a module
- Writing module code
- Publishing a module

Resources for Article:

Further resources on this subject:
- Understanding and Developing Node Modules [article]
- Working with Pluginlib, Nodelets, and Gazebo Plugins [article]
- Basic Website using Node.js and MySQL database [article]
Playing Tic-Tac-Toe against an AI

Packt
11 May 2016
30 min read
In this article by Ivo Gabe de Wolff, author of the book TypeScript Blueprints, we will build a game in which the computer will play well. The game is called Tic-Tac-Toe. The game is played by two players on a grid, usually three by three. The players try to place their symbols three in a row (horizontal, vertical, or diagonal). The first player places crosses; the second player places circles. If the board is full and no one has three symbols in a row, it is a draw.

(For more resources related to this topic, see here.)

The game is usually played on a three by three grid and the target is to have three symbols in a row. To make the application more interesting, we will make the dimensions and the row length variable.

We will not create a graphical interface for this application. We will only build the game mechanics and the artificial intelligence (AI). An AI is a player controlled by the computer. If implemented correctly, the computer should never lose on a standard three by three grid. When the computer plays against the computer, the game will result in a draw.

We will also write various unit tests for the application.

We will build the game as a command line application. That means you can play the game in a terminal and interact with it only through text input:

It's player one's turn! Choose one out of these options:

1
X|X| 
-+-+-
 |O| 
-+-+-
 | | 

2
X| |X
-+-+-
 |O| 
-+-+-
 | | 

3
X| | 
-+-+-
X|O| 
-+-+-
 | | 

4
X| | 
-+-+-
 |O|X
-+-+-
 | | 

5
X| | 
-+-+-
 |O| 
-+-+-
X| | 

6
X| | 
-+-+-
 |O| 
-+-+-
 |X| 

7
X| | 
-+-+-
 |O| 
-+-+-
 | |X

Creating the project structure

We will locate the source files in lib and the tests in lib/test. We use gulp to compile the project and AVA to run tests. We can install the dependencies of our project with NPM:

npm init -y
npm install ava gulp gulp-typescript --save-dev

In gulpfile.js, we configure gulp to compile our TypeScript files:

var gulp = require("gulp");
var ts = require("gulp-typescript");

var tsProject = ts.createProject("./lib/tsconfig.json");

gulp.task("default", function() {
  return tsProject.src()
    .pipe(ts(tsProject))
    .pipe(gulp.dest("dist"));
});

Configure TypeScript

We can download type definitions for NodeJS with NPM:

npm install @types/node --save-dev

We must exclude browser files in TypeScript. In lib/tsconfig.json, we add the configuration for TypeScript:

{
  "compilerOptions": {
    "target": "es6",
    "module": "commonjs"
  }
}

For applications that run in the browser, you will probably want to target ES5, since ES6 is not supported in all browsers. However, this application will only be executed in NodeJS, so we do not have such limitations. You have to use NodeJS 6 or later for ES6 support.

Adding utility functions

Since we will work a lot with arrays, we can use some utility functions. First, we create a function that flattens a two-dimensional array into a one-dimensional array:

export function flatten<U>(array: U[][]) {
  return (<U[]>[]).concat(...array);
}

Next, we create a function that replaces a single element of an array with a specified value. We will use functional programming in this article again, so we must use immutable data structures. We can use map for this, since this function provides both the element and the index to the callback. With this index, we can determine whether that element should be replaced:

export function arrayModify<U>(array: U[], index: number, newValue: U) {
  return array.map((oldValue, currentIndex) =>
    currentIndex === index ? newValue : oldValue
  );
}
We also create a function that returns a random integer below a certain upper bound:

export function randomInt(max: number) {
  return Math.floor(Math.random() * max);
}

We will use these functions in the next sections.

Creating the models

In lib/model.ts, we will create the model for our game. The model should contain the game state. We start with the player. The game is played by two players. Each field of the grid contains the symbol of a player or no symbol. We will model the grid as a two-dimensional array, where each field can contain a player:

export type Grid = Player[][];

A player is either Player1, Player2, or no player:

export enum Player {
  Player1 = 1,
  Player2 = -1,
  None = 0
}

We have given these members values so we can easily get the opponent of a player:

export function getOpponent(player: Player): Player {
  return -player;
}

We create a type to represent an index of the grid. Since the grid is two-dimensional, such an index requires two values:

export type Index = [number, number];

We can use this type to create two functions that get or update one field of the grid. We use functional programming in this article, so we will not modify the grid. Instead, we return a new grid with one field changed:

export function get(grid: Grid, [rowIndex, columnIndex]: Index) {
  const row = grid[rowIndex];
  if (!row) return undefined;
  return row[columnIndex];
}

export function set(grid: Grid, [row, column]: Index, value: Player) {
  return arrayModify(grid, row,
    arrayModify(grid[row], column, value)
  );
}

Showing the grid

To show the game to the user, we must convert a grid to a string. First, we will create a function that converts a player to a string, then a function that uses the previous function to show a row, and finally a function that uses these functions to show the complete grid. The string representation of a grid should have lines between the fields. We create these lines with standard characters (+, -, and |). This gives the following result:

X|X|O
-+-+-
 |O| 
-+-+-
X| | 

To convert a player to a string, we must get his symbol. For Player1, that is a cross, and for Player2, a circle. If a field of the grid contains no symbol, we return a space to keep the grid aligned:

function showPlayer(player: Player) {
  switch (player) {
    case Player.Player1:
      return "X";
    case Player.Player2:
      return "O";
    default:
      return " ";
  }
}

We can use this function to show the tokens of all fields in a row. We add a separator between these fields:

function showRow(row: Player[]) {
  return row.map(showPlayer).reduce((previous, current) => previous + "|" + current);
}

Since we must do the same later on, but with a different separator, we create a small helper function that does this concatenation based on a separator:

const concat = (separator: string) => (left: string, right: string) =>
  left + separator + right;

This function requires the separator and returns a function that can be passed to reduce. We can now use this function in showRow:

function showRow(row: Player[]) {
  return row.map(showPlayer).reduce(concat("|"));
}

We can also use this helper function to show the entire grid. First we must compose the separator, which is almost the same as showing a single row. Next, we can show the grid with this separator:

export function showGrid(grid: Grid) {
  const separator = "\n" + grid[0].map(() => "-").reduce(concat("+")) + "\n";
  return grid.map(showRow).reduce(concat(separator));
}

Creating operations on the grid

We will now create some functions that do operations on the grid.
These functions check whether the board is full, whether someone has won, and what options a player has.

We can check whether the board is full by looking at all fields. If no field exists that has no symbol, the board is full, as every field has a symbol:

export function isFull(grid: Grid) {
  for (const row of grid) {
    for (const field of row) {
      if (field === Player.None) return false;
    }
  }
  return true;
}

To check whether a user has won, we must get a list of all horizontal, vertical, and diagonal rows. For each row, we can check whether it contains a certain amount of the same symbols in a row. We store the grid as an array of the horizontal rows, so we can easily get those rows. We can also get the vertical rows relatively easily:

function allRows(grid: Grid) {
  return [
    ...grid,
    ...grid[0].map((field, index) => getVertical(index)),
    ...
  ];

  function getVertical(index: number) {
    return grid.map(row => row[index]);
  }
}

Getting a diagonal row requires some more work. We create a helper function that will walk on the grid from a start point, in a certain direction. We distinguish two different kinds of diagonals: a diagonal that goes to the lower-right and a diagonal that goes to the lower-left. For a standard three by three game, only two diagonals exist. However, a larger grid may have more diagonals. If the grid is 5 by 5, and the users should get three in a row, ten diagonals with a length of at least three exist:

- 0,0 to 4,4
- 0,1 to 3,4
- 0,2 to 2,4
- 1,0 to 4,3
- 2,0 to 4,2
- 4,0 to 0,4
- 3,0 to 0,3
- 2,0 to 0,2
- 4,1 to 1,4
- 4,2 to 2,4

The diagonals that go toward the lower-right start at the first column or at the first horizontal row. Other diagonals start at the last column or at the first horizontal row. In this function, we will just return all diagonals, even if they only have one element, since that is easy to implement. We implement this with a function that walks the grid to find the diagonal. That function requires a start position and a step function. The step function increments the position for a specific direction:

function allRows(grid: Grid) {
  return [
    ...grid,
    ...grid[0].map((field, index) => getVertical(index)),
    ...grid.map((row, index) => getDiagonal([index, 0], stepDownRight)),
    ...grid[0].slice(1).map((field, index) => getDiagonal([0, index + 1], stepDownRight)),
    ...grid.map((row, index) => getDiagonal([index, grid[0].length - 1], stepDownLeft)),
    ...grid[0].slice(1).map((field, index) => getDiagonal([0, index], stepDownLeft))
  ];

  function getVertical(index: number) {
    return grid.map(row => row[index]);
  }

  function getDiagonal(start: Index, step: (index: Index) => Index) {
    const row: Player[] = [];
    for (let index = start; get(grid, index) !== undefined; index = step(index)) {
      row.push(get(grid, index));
    }
    return row;
  }
  function stepDownRight([i, j]: Index): Index {
    return [i + 1, j + 1];
  }
  function stepDownLeft([i, j]: Index): Index {
    return [i + 1, j - 1];
  }
  function stepUpRight([i, j]: Index): Index {
    return [i - 1, j + 1];
  }
}

To check whether a row contains a certain amount of the same elements in a row, we will create a function with some nice-looking functional programming. The function requires the array, the player, and the index at which the checking starts. That index will usually be zero, but during recursion we can set it to a different value. originalLength contains the original length that a sequence should have. The last parameter, length, will have the same value in most cases, but in recursion we will change the value.
We start with some base cases. Every row contains a sequence of zero symbols, so we can always return true in such a case:

function isWinningRow(row: Player[], player: Player, index: number,
    originalLength: number, length: number): boolean {
  if (length === 0) {
    return true;
  }

If the row does not contain enough elements to form a sequence, the row will not have such a sequence and we can return false:

  if (index + length > row.length) {
    return false;
  }

For other cases, we use recursion. If the current element contains a symbol of the provided player, this row forms a sequence if the next length - 1 fields contain the same symbol:

  if (row[index] === player) {
    return isWinningRow(row, player, index + 1, originalLength, length - 1);
  }

Otherwise, the row should contain a sequence of the original length in some other position:

  return isWinningRow(row, player, index + 1, originalLength, originalLength);
}

If the grid is large enough, a row could contain a long enough sequence after a sequence that was too short. For instance, XXOXXX contains a sequence of length three. This function handles such rows correctly with the parameters originalLength and length.

Finally, we must create a function that returns all possible moves that a player can make. To implement this function, we must first find all indices. We filter these indices down to those that reference an empty field. For each of these indices, we change the value of the grid to the specified player. This results in a list of options for the player:

export function getOptions(grid: Grid, player: Player) {
  const rowIndices = grid.map((row, index) => index);
  const columnIndices = grid[0].map((column, index) => index);

  const allFields = flatten(rowIndices.map(
    row => columnIndices.map(column => <Index>[row, column])
  ));

  return allFields
    .filter(index => get(grid, index) === Player.None)
    .map(index => set(grid, index, player));
}

The AI will use this to choose the best option, and a human player will get a menu with these options.

Creating the grid

Before the game can be started, we must create an empty grid. We will write a function that creates an empty grid with the specified size:

export function createGrid(width: number, height: number) {
  const grid: Grid = [];
  for (let i = 0; i < height; i++) {
    grid[i] = [];
    for (let j = 0; j < width; j++) {
      grid[i][j] = Player.None;
    }
  }
  return grid;
}

In the next section, we will add some tests for the functions that we have written. These functions work on the grid, so it will be useful to have a function that can parse a grid based on a string. We will separate the rows of a grid with a semicolon. Each row contains tokens for each field. For instance, "XXO; O ;X  " results in this grid:

X|X|O
-+-+-
 |O| 
-+-+-
X| | 

We can implement this by splitting the string into an array of lines. For each line, we split the line into an array of characters. We map these characters to a Player value:

export function parseGrid(input: string) {
  const lines = input.split(";");
  return lines.map(parseLine);

  function parseLine(line: string) {
    return line.split("").map(parsePlayer);
  }
  function parsePlayer(character: string) {
    switch (character) {
      case "X":
        return Player.Player1;
      case "O":
        return Player.Player2;
      default:
        return Player.None;
    }
  }
}

In the next section we will use this function to write some tests.

Adding tests

We will use AVA to write tests for our application. Since the functions do not have side effects, we can easily test them. In lib/test/winner.ts, we test the findWinner function.
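The findWinner function itself doesn't appear in this excerpt. A plausible sketch, assuming it simply scans every row produced by allRows for a winning sequence of either player (the names allRows and isWinningRow match the helpers defined above):

export function findWinner(grid: Grid, rowLength: number): Player {
  // Check every horizontal, vertical, and diagonal row for a
  // sequence of rowLength symbols belonging to one player.
  for (const row of allRows(grid)) {
    for (const player of [Player.Player1, Player.Player2]) {
      if (isWinningRow(row, player, 0, rowLength, rowLength)) {
        return player;
      }
    }
  }
  return Player.None;
}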
First, we check whether the function recognizes the winner in some simple cases:

import test from "ava";
import { Player, parseGrid, findWinner } from "../model";

test("player winner", t => {
  t.is(findWinner(parseGrid("   ;XXX;   "), 3), Player.Player1);
  t.is(findWinner(parseGrid("   ;OOO;   "), 3), Player.Player2);
  t.is(findWinner(parseGrid("   ;   ;   "), 3), Player.None);
});

We can also test all possible three-in-a-row positions in the three by three grid. With this test, we can find out whether horizontal, vertical, and diagonal rows are checked correctly:

test("3x3 winner", t => {
  const grids = [
    "XXX;   ;   ",
    "   ;XXX;   ",
    "   ;   ;XXX",
    "X  ;X  ;X  ",
    " X ; X ; X ",
    "  X;  X;  X",
    "X  ; X ;  X",
    "  X; X ;X  "
  ];
  for (const grid of grids) {
    t.is(findWinner(parseGrid(grid), 3), Player.Player1);
  }
});

We must also test that the function does not claim that someone won too often. In the next test, we validate that the function does not return a winner for grids that do not have a winner:

test("3x3 no winner", t => {
  const grids = [
    "XXO;OXX;XOO",
    "   ;   ;   ",
    "XXO;   ;OOX",
    "X  ;X  ; X "
  ];
  for (const grid of grids) {
    t.is(findWinner(parseGrid(grid), 3), Player.None);
  }
});

Since the game also supports other dimensions, we should check these too. We check that all diagonals of a four by three grid are checked correctly, where the length of a winning sequence is two:

test("4x3 winner", t => {
  const grids = [
    "X   ; X  ;    ",
    " X  ;  X ;    ",
    "  X ;   X;    ",
    "    ;X   ; X  ",
    "  X ;   X;    ",
    " X  ;  X ;    ",
    "X   ; X  ;    ",
    "    ;   X;  X "
  ];
  for (const grid of grids) {
    t.is(findWinner(parseGrid(grid), 2), Player.Player1);
  }
});

You can of course add more test grids yourself.

Add tests before you fix a bug. These tests should demonstrate the wrong behavior related to the bug. When you have fixed the bug, these tests should pass. This prevents the bug from returning in a future version.

Random testing

Instead of running tests on some predefined set of test cases, you can also write tests that run on random data. You cannot compare the output of a function directly with an expected value, but you can check some properties of it. For instance, getOptions should return an empty list if and only if the board is full. We can use this property to test getOptions and isFull.

First, we create a function that randomly chooses a player. To increase the chance of a full grid, we add some extra weight to the players compared to an empty field:

import test from "ava";
import { createGrid, Player, isFull, getOptions } from "../model";
import { randomInt } from "../utils";

function randomPlayer() {
  switch (randomInt(5)) {
    case 0:
    case 1:
      return Player.Player1;
    case 2:
    case 3:
      return Player.Player2;
    default:
      return Player.None;
  }
}

We create 10000 random grids with this function. The dimensions and the fields are chosen randomly:

test("get-options", t => {
  for (let i = 0; i < 10000; i++) {
    const grid = createGrid(randomInt(10) + 1, randomInt(10) + 1)
      .map(row => row.map(randomPlayer));

Next, we check whether the property that we described holds for this grid:

    const options = getOptions(grid, Player.Player1);
    t.is(isFull(grid), options.length === 0);

We also check that the function does not give the same option twice:

    for (let i = 1; i < options.length; i++) {
      for (let j = 0; j < i; j++) {
        t.notSame(options[i], options[j]);
      }
    }
  }
});

Depending on how critical a function is, you can add more tests.
In this case, you could also check that only one field is modified in an option, or that only an empty field can be modified in an option.

Now you can run the tests using gulp && ava dist/test. You can add this to your package.json file. In the scripts section, you can add commands that you want to run. With npm run xxx, you can run task xxx. npm test was added as a shorthand for npm run test, since the test command is used so often:

{
  "name": "article-7",
  "version": "1.0.0",
  "scripts": {
    "test": "gulp && ava dist/test"
  },
  ...

Implementing the AI using Minimax

We create an AI based on Minimax. The computer cannot know what his opponent will do in the next steps, but he can check what he can do in the worst case. The minimum outcome of these worst cases is maximized by this algorithm. This behavior has given Minimax its name.

To learn how Minimax works, we will take a look at the value, or score, of a grid. If the game is finished, we can easily define its value: if you won, the value is 1; if you lost, -1; and if it is a draw, 0. Thus, for player 1 the next grid has value 1, and for player 2 the value is -1:

X|X|X
-+-+-
O|O| 
-+-+-
X|O| 

We will also define the value of a grid for a game that has not been finished. We take a look at the following grid:

X| |X
-+-+-
O|O| 
-+-+-
O|X| 

It is player 1's turn. He can place his stone on the top row, and he would win, resulting in a value of 1. He can also choose to lay his stone on the second row. Then the game will result in a draw, if player 2 is not dumb, with score 0. If he chooses to place the stone on the last row, player 2 can win, resulting in -1. We assume that player 1 is smart and will go for the first option. Thus, we could say that the value of this unfinished game is 1.

We will now formalize this. In the previous paragraph, we summed up all options for the player. For each option, we calculated the minimum value that the player could get if he chose that option. From these options, we chose the maximum value. Minimax chooses the option with the highest value of all options.

Implementing Minimax in TypeScript

As you can see, the definition of Minimax looks like something you can implement with recursion. We create a function that returns both the best option and the value of the game. A function can only return a single value, but multiple values can be combined into a single value in a tuple, which is an array containing these values.

First, we handle the base cases. If the game is finished, the player has no options and the value can be calculated directly:

import { Player, Grid, findWinner, isFull, getOpponent, getOptions } from "./model";

export function minimax(grid: Grid, rowLength: number, player: Player): [Grid, number] {
  const winner = findWinner(grid, rowLength);
  if (winner === player) {
    return [undefined, 1];
  } else if (winner !== Player.None) {
    return [undefined, -1];
  } else if (isFull(grid)) {
    return [undefined, 0];

Otherwise, we list all options. For all options, we calculate the value. The value of an option is the opposite of the value of that option for the opponent. Finally, we choose the option with the best value:

  } else {
    let options = getOptions(grid, player);
    const opponent = getOpponent(player);
    return options.map<[Grid, number]>(
      option => [option, -(minimax(option, rowLength, opponent)[1])]
    ).reduce(
      (previous, current) => previous[1] < current[1] ? current : previous
    );
  }
}

When you use tuple types, you should explicitly add a type definition for them.
Since tuples are arrays too, an array type is automatically inferred otherwise. When you add the tuple as the return type, expressions after the return keyword will be inferred as these tuples. For options.map, you can mention the tuple type as a type argument or specify it in the callback function (options.map((option): [Grid, number] => ...)).

You can easily see that such an AI can also be used for other kinds of games. Actually, the minimax function has no direct reference to Tic-Tac-Toe; only findWinner, isFull, and getOptions are related to Tic-Tac-Toe.

Optimizing the algorithm

The Minimax algorithm can be slow. Choosing the first move, especially, takes a long time, since the algorithm tries all ways of playing the game. We will use two techniques to speed up the algorithm. First, we can use the symmetry of the game. When the board is empty, it does not matter whether you place a stone in the upper-left corner or the lower-right corner; rotating the grid 180 degrees around the center gives an equivalent board. Thus, we only need to take a look at half the options when the board is empty. Secondly, we can stop searching for options if we find an option with value 1. Such an option is already the best thing to do.

Implementing these techniques gives the following function:

import { Player, Grid, findWinner, isFull, getOpponent, getOptions } from "./model";

export function minimax(grid: Grid, rowLength: number, player: Player): [Grid, number] {
  const winner = findWinner(grid, rowLength);
  if (winner === player) {
    return [undefined, 1];
  } else if (winner !== Player.None) {
    return [undefined, -1];
  } else if (isFull(grid)) {
    return [undefined, 0];
  } else {
    let options = getOptions(grid, player);
    const gridSize = grid.length * grid[0].length;
    if (options.length === gridSize) {
      options = options.slice(0, Math.ceil(gridSize / 2));
    }
    const opponent = getOpponent(player);
    let best: [Grid, number];
    for (const option of options) {
      const current: [Grid, number] = [option, -(minimax(option, rowLength, opponent)[1])];
      if (current[1] === 1) {
        return current;
      } else if (best === undefined || current[1] > best[1]) {
        best = current;
      }
    }
    return best;
  }
}

This will speed up the AI. In the next sections, we will implement the interface for the game and write some tests for the AI.

Creating the interface

NodeJS can be used to create servers. You can also create tools with a command line interface (CLI). For instance, gulp, NPM, and typings are command line interfaces built with NodeJS. We will use NodeJS to create the interface for our game.

Handling interaction

Interaction with the user can only happen through text input in the terminal. When the game starts, it asks the user some questions about the configuration: width, height, row length for a winning sequence, and which player(s) are played by the computer. The indented lines are the input of the user:

Tic-Tac-Toe
Width
  3
Height
  3
Row length
  2
Who controls player 1?
1 You
2 Computer
  1
Who controls player 2?
1 You
2 Computer
  1

During the game, the game asks the user which of the possible options he wants to play. All possible moves are shown on the screen with an index, and the user can type the index of the option he wants:

X| | 
-+-+-
O|O| 
-+-+-
 |X| 

It's player one's turn! Choose one out of these options:

1
X|X| 
-+-+-
O|O| 
-+-+-
 |X| 

2
X| |X
-+-+-
O|O| 
-+-+-
 |X| 

3
X| | 
-+-+-
O|O|X
-+-+-
 |X| 

4
X| | 
-+-+-
O|O| 
-+-+-
X|X| 

5
X| | 
-+-+-
O|O| 
-+-+-
 |X|X

A NodeJS application has three standard streams to interact with the user. Standard input, stdin, is used to receive input from the user.
Standard output, stdout, is used to show text to the user. Standard error, stderr, is used to show error messages to the user. You can access these streams with process.stdin, process.stdout, and process.stderr.

You have probably already used console.log to write text to the console. This function writes the text to stdout. We will use console.log to write text to stdout, and we will not use stderr.

We will create a helper function that reads a line from stdin. This is an asynchronous task; the function starts listening and resolves when the user hits enter. In lib/cli.ts, we start by importing the types and functions that we have written:

import {
  Grid, Player, getOptions, getOpponent, showGrid, findWinner, isFull, createGrid
} from "./model";
import { minimax } from "./ai";

We can listen to input from stdin using the data event. The process sends either a string or a buffer, an efficient way to store binary data in memory. With once, the callback will only be fired once. If you want to listen to all occurrences of the event, you can use on:

function readLine() {
  return new Promise<string>(resolve => {
    process.stdin.once("data", (data: string | Buffer) => resolve(data.toString()));
  });
}

We can easily use readLine in async functions. For instance, we can now create a function that reads, parses, and validates a line. We can use this to read the input of the user, parse it to a number, and finally check that the number is within a certain range. This function will return the value if it passes the validator. Otherwise, it shows a message and retries:

async function readAndValidate<U>(message: string, parse: (data: string) => U,
    validate: (value: U) => boolean): Promise<U> {
  const data = await readLine();
  const value = parse(data);
  if (validate(value)) {
    return value;
  } else {
    console.log(message);
    return readAndValidate(message, parse, validate);
  }
}

We can use this function to show a question where the user has various options. The user should type the index of his answer. This function validates that the index is within bounds. We show indices starting at 1 to the user, so we must handle these carefully:

async function choose(question: string, options: string[]) {
  console.log(question);
  for (let i = 0; i < options.length; i++) {
    console.log((i + 1) + "\t" + options[i].replace(/\n/g, "\n\t"));
    console.log();
  }
  return await readAndValidate(
    `Enter a number between 1 and ${ options.length }`,
    parseInt,
    index => index >= 1 && index <= options.length
  ) - 1;
}

Creating players

A player could either be a human or the computer. We create a type that can represent both kinds of players:

type PlayerController = (grid: Grid) => Grid | Promise<Grid>;

Next, we create functions that create such players. For a user, we must first know whether he is the first or the second player. Then we return an async function that asks the player which move he wants to make:

const getUserPlayer = (player: Player) => async (grid: Grid) => {
  const options = getOptions(grid, player);
  const index = await choose("Choose one out of these options:", options.map(showGrid));
  return options[index];
};

For the AI player, we must know the player index and the length of a winning sequence. We use these variables and the grid of the game to run the Minimax algorithm:

const getAIPlayer = (player: Player, rowLength: number) => (grid: Grid) =>
  minimax(grid, rowLength, player)[0];

Now we can create a function that asks the user whether a player should be played by the user or the computer:
async function getPlayer(index: number, player: Player, rowLength: number): Promise<PlayerController> {
  switch (await choose(`Who controls player ${ index }?`, ["You", "Computer"])) {
    case 0:
      return getUserPlayer(player);
    default:
      return getAIPlayer(player, rowLength);
  }
}

We combine these functions in a function that handles the whole game. First, we must ask the user to provide the width, height, and length of a winning sequence:

export async function game() {
  console.log("Tic-Tac-Toe");
  console.log();
  console.log("Width");
  const width = await readAndValidate("Enter an integer", parseInt, isFinite);
  console.log("Height");
  const height = await readAndValidate("Enter an integer", parseInt, isFinite);
  console.log("Row length");
  const rowLength = await readAndValidate("Enter an integer", parseInt, isFinite);

We ask the user which players should be controlled by the computer:

  const player1 = await getPlayer(1, Player.Player1, rowLength);
  const player2 = await getPlayer(2, Player.Player2, rowLength);

The user can now play the game. We do not use a loop; instead, we use recursion to give the players their turns:

  return play(createGrid(width, height), Player.Player1);

  async function play(grid: Grid, player: Player): Promise<[Grid, Player]> {

In every step, we show the grid. If the game is finished, we show which player has won:

    console.log();
    console.log(showGrid(grid));
    console.log();

    const winner = findWinner(grid, rowLength);
    if (winner === Player.Player1) {
      console.log("Player 1 has won!");
      return <[Grid, Player]>[grid, winner];
    } else if (winner === Player.Player2) {
      console.log("Player 2 has won!");
      return <[Grid, Player]>[grid, winner];
    } else if (isFull(grid)) {
      console.log("It's a draw!");
      return <[Grid, Player]>[grid, Player.None];
    }

If the game is not finished, we ask the current player or the computer which move he wants to make:

    console.log(`It's player ${ player === Player.Player1 ? "one's" : "two's" } turn!`);

    const current = player === Player.Player1 ? player1 : player2;
    return play(await current(grid), getOpponent(player));
  }
}

In lib/index.ts, we can start the game. When the game is finished, we must manually exit the process:

import { game } from "./cli";

game().then(() => process.exit());

We can compile and run this in a terminal:

gulp && node --harmony_destructuring dist

At the time of writing, NodeJS requires the --harmony_destructuring flag to allow destructuring, such as [x, y] = z. In future versions of NodeJS, this flag will be removed and you can run the code without it.

Testing the AI

We will add some tests to check that the AI works properly. For a standard three by three game, the AI should never lose. That means that when an AI plays against an AI, the game should result in a draw. We can add a test for this. In lib/test/ai.ts, we import AVA and our own definitions:

import test from "ava";
import { createGrid, Grid, findWinner, isFull, getOptions, Player } from "../model";
import { minimax } from "../ai";
import { randomInt } from "../utils";

We create a function that simulates the whole gameplay:

type PlayerController = (grid: Grid) => Grid;

function run(grid: Grid, a: PlayerController, b: PlayerController): Player {
  const winner = findWinner(grid, 3);
  if (winner !== Player.None) return winner;
  if (isFull(grid)) return Player.None;
  return run(a(grid), b, a);
}

We write a function that executes a move for the AI:
const aiPlayer = (player: Player) => (grid: Grid) =>
  minimax(grid, 3, player)[0];

Now we create the test that validates that a game where the AI plays against the AI results in a draw:

test("AI vs AI", t => {
  const result = run(createGrid(3, 3), aiPlayer(Player.Player1), aiPlayer(Player.Player2));
  t.is(result, Player.None);
});

Testing with a random player

We can also test what happens when the AI plays against a random player, or when a random player plays against the AI. The AI should win, or the game should result in a draw. We run these tests multiple times, which is what you should always do when you use randomization in your tests. We create a function that creates the random player:

const randomPlayer = (player: Player) => (grid: Grid) => {
  const options = getOptions(grid, player);
  return options[randomInt(options.length)];
};

We write two tests that each run 20 games with a random player and an AI:

test("random vs AI", t => {
  for (let i = 0; i < 20; i++) {
    const result = run(createGrid(3, 3), randomPlayer(Player.Player1), aiPlayer(Player.Player2));
    t.not(result, Player.Player1);
  }
});

test("AI vs random", t => {
  for (let i = 0; i < 20; i++) {
    const result = run(createGrid(3, 3), aiPlayer(Player.Player1), randomPlayer(Player.Player2));
    t.not(result, Player.Player2);
  }
});

We have written different kinds of tests:

- Tests that check the exact results of a single function
- Tests that check a certain property of the results of a function
- Tests that check a big component

Always start by writing tests for small components. If the AI tests fail, that could be caused by a mistake in findWinner, isFull, or getOptions, so it is hard to find the location of the error. Testing only small components is not enough, either; bigger tests, such as the AI tests, are closer to what the user will do. Bigger tests are harder to create, especially when you want to test the user interface. You must also not forget that tests cannot guarantee that your code runs correctly; they only guarantee that your test cases work correctly.

Summary

In this article, we have written an AI for Tic-Tac-Toe. With the command line interface, you can play this game against the AI or another human. You can also see how the AI plays against the AI. We have written various tests for the application.

You have learned how Minimax works for turn-based games. You can apply this to other turn-based games as well. If you want to know more about strategies for such games, you can take a look at game theory, the mathematical study of these games.

Resources for Article:

Further resources on this subject:
- Basic Website using Node.js and MySQL database [article]
- Data Science with R [article]
- Web Typography [article]
How to test node applications using Mocha framework

Sunith Shetty
20 Apr 2018
12 min read
In today's tutorial, you will learn how to create your very first test case that verifies whether your code is working as expected. If we make a function that's supposed to add two numbers together, we can automatically verify it's doing that. And if we have a function that's supposed to fetch a user from the database, we can make sure it's doing that as well.

Now, to get started in this section, we'll look at the very basics of setting up a testing suite inside a Node.js project. We'll be testing a real-world function.

Installing the testing module

In order to get started, we will make a directory to store our code for this chapter. We'll make one on the desktop using mkdir and we'll call this directory node-tests:

mkdir node-tests

Then we'll change directory inside it using cd, so we can go ahead and run npm init. We'll be installing modules, and this will require a package.json file:

cd node-tests
npm init

We'll run npm init using the default values for everything, simply hitting enter throughout every single step. Once that package.json file is generated, we can open up the directory inside Atom. It's on the desktop and it's called node-tests.

From here, we're ready to actually define a function we want to test. The goal in this section is to learn how to set up testing for a Node project, so the actual functions we'll be testing are going to be pretty trivial, but they will help illustrate exactly how to set up our tests.

Testing a Node project

To get started, let's make a fake module. This module will have some functions, and we'll test those functions. In the root of the project, we'll create a brand new directory called utils. We can assume this will store some utility functions, such as adding a number to another number, or stripping out whitespace from a string; any kind of hodge-podge that doesn't really belong to any specific location.

We'll make a new file in the utils folder called utils.js, and this is a similar pattern to what we did when we created the weather and location directories in our weather app. You're probably wondering why we have a folder and a file with the same name. This will be clear when we start testing.

Now, before we can write our first test case to make sure something works, we need something to test. We'll make a very basic function that takes two numbers and adds them together. We'll create an adder function as shown in the following code block:

module.exports.add = () => {
}

This arrow function (=>) will take two arguments, a and b, and inside the function, we'll return the value a + b. Nothing too complex here:

module.exports.add = (a, b) => {
  return a + b;
};

Now, since we just have one expression inside our arrow function (=>) and we want to return it, we can actually use the arrow function (=>) expression syntax, which lets us write our expression as shown in the following code, a + b, and it'll be implicitly returned:

module.exports.add = (a, b) => a + b;

There's no need to explicitly add a return keyword onto the function. Now that we have utils.js ready to go, let's explore testing. We'll be using a framework called Mocha in order to set up our test suite. This will let us configure our individual test cases and also run all of our test files. This will be really important for creating and running tests. The goal here is to make testing simple, and we'll use Mocha to do just that. Now that we have a file and a function we actually want to test, let's explore how to create and run a test suite.
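Before wiring up a test framework, a quick manual sanity check can confirm the module loads and behaves; a throwaway snippet (not part of the project files, assuming it sits in the project root next to the utils folder) might look like this:

// check.js - a throwaway sanity check, run with: node check.js
const utils = require('./utils/utils');

console.log(utils.add(33, 11)); // should print 44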
Mocha – the testing framework

We'll be doing the testing using the super popular testing framework Mocha, which you can find at mochajs.org. This is a fantastic framework for creating and running test suites. It's super popular, and its page has all the information you'd ever want to know about setting it up, configuring it, and all the cool bells and whistles it has included. If you scroll down on the page, you'll be able to see a table of contents where you can explore everything Mocha has to offer. We'll be covering most of it in this article, but for anything we don't cover, you can always learn about it on that page.

Now that we've explored the Mocha documentation page, let's install it and start using it. Inside the Terminal, we'll install Mocha. First up, let's clear the Terminal output. Then we'll install it using the npm install command. When you use npm install, you can also use the shortcut npm i; this has the exact same effect. I'll use npm i with mocha, specifying the version @3.0.0. This is the most recent version of the library as of this writing:

npm i mocha@3.0.0

Now, we do want to save this into the package.json file. Previously, we've used the save flag, but here we'll talk about a new flag, called save-dev. The save-dev flag will save this package for development purposes only, and that's exactly what Mocha will be for. We don't actually need Mocha to run our app on a service like Heroku. We just need Mocha locally on our machine to test our code. When you use the save-dev flag, it installs the module much the same way:

npm i mocha@3.0.0 --save-dev

But if you explore package.json, you'll see things are a little different. Inside our package.json file, instead of a dependencies attribute, we have a devDependencies attribute, with mocha and its version number as the value. The devDependencies are fantastic because they're not going to be installed on Heroku, but they will be installed locally. This will keep the Heroku boot times really, really quick; it won't need to install modules that it's not actually going to use. We'll be installing both devDependencies and dependencies in most of our projects from here on out.

Creating a test file for the add function

Now that we have Mocha installed, we can go ahead and create a test file. In the utils folder, we'll make a new file called utils.test.js. This file will store our test cases. We'll not store our test cases in utils.js; that will be our application code. Instead, we'll make a file called utils.test.js. When we use this test.js extension, we're basically telling our app that this file will store our test cases. When Mocha goes through our app looking for tests to run, it should run any file with this extension.

Now that we have a test file, the only thing left to do is create a test case. A test case is a function that runs some code, and if things go well, great, the test is considered to have passed. If things do not go well, the test is considered to have failed. We can create a new test case using it, a function provided by Mocha. We'll be running our project test files through Mocha, so there's no reason to import it or do anything like that. We simply call it just like this:

it();

Now, it lets us define a new test case, and it takes two arguments:

The first argument is a string
The second argument is a function

First up, we'll have a string description of what exactly the test is doing.
If we're testing that the adder function works, we might have something like:

it('should add two numbers');

Notice here that it plays into the sentence. It should read like this: it should add two numbers, which describes exactly what the test will verify. This is called behavior-driven development, or BDD, and those are the principles that Mocha was built on. Now that we've set up the test string, the next thing to do is add a function as the second argument:

it('should add two numbers', () => {

});

Inside this function, we'll add the code that tests that the add function works as expected. This means it will probably call add and check that the value that comes back is the appropriate value given the two numbers passed in. That means we do need to import the utils.js file up at the top. We'll create a constant, called utils, setting it equal to the return result from requiring utils. We're using ./ since we will be requiring a local file. It's in the same directory, so I can simply type utils without the js extension as shown here:

const utils = require('./utils');

it('should add two numbers', () => {

});

Now that we have the utils library loaded in, inside the callback we can call add. Let's make a variable to store the return result. We'll call this one res, and we'll set it equal to utils.add, passing in two numbers. Let's use something like 33 and 11:

const utils = require('./utils');

it('should add two numbers', () => {
  var res = utils.add(33, 11);
});

We would expect to get 44 back. Now, at this point, we do have some code inside of our test suite, so let's run it. We'll do that by configuring the test script. Currently, the test script simply prints a message to the screen saying that no tests exist. What we'll do instead is call Mocha. As shown in the following code, we'll be calling Mocha, passing in as the one and only argument the actual files we want to test. We can use a globbing pattern to specify multiple files. In this case, we'll be using ** to look in every single directory, and we're looking for a file called utils.test.js:

"scripts": {
  "test": "mocha **/utils.test.js"
},

Now, this is a very specific pattern, and it's not going to be particularly useful. Instead, we can swap out the file name with a star as well. Now we're looking for any file in the project with a name ending in .test.js:

"scripts": {
  "test": "mocha **/*.test.js"
},

And this is exactly what we want. From here, we can run our test suite by saving package.json and moving to the Terminal. We'll use the clear command to clear the Terminal output, and then we can run our test script using the command shown as follows:

npm test

When we run this, we'll execute that Mocha command. It'll go off, fetch all of our test files, run all of them, and print the results on the screen inside the Terminal. Here we can see we have a green checkmark next to our test, should add two numbers. Next, we have a little summary: one passing test, and it happened in 8 milliseconds. Now, in our case, we don't actually assert anything about the number that comes back. It could be 700 and we wouldn't care. The test will always pass.
To make a test fail, what we have to do is throw an error. That means we can throw a new error, and we pass into the constructor function whatever message we want to use as the error, as shown in the following code block. In this case, I could say something like Value not correct:

const utils = require('./utils');

it('should add two numbers', () => {
  var res = utils.add(33, 11);
  throw new Error('Value not correct');
});

Now, with this in place, I can save the test file and rerun things from the Terminal by rerunning npm test. When we do that, we now have 0 tests passing and 1 test failing. We can see the one test is should add two numbers, and we get our error message, Value not correct. When we throw a new error, the test fails, and that's exactly what we want to do for add.

Creating the if condition for the test

Now, we'll create an if statement for the test. If the response value is not equal to 44, that means we have a problem on our hands, and we'll throw an error:

const utils = require('./utils');

it('should add two numbers', () => {
  var res = utils.add(33, 11);

  if (res != 44) {

  }
});

Inside the if condition, we can throw a new error, and we'll use a template string as our message string because I do want to use the value that comes back in the error message. I'll say Expected 44, but got, then I'll inject the actual value, whatever happens to come back:

const utils = require('./utils');

it('should add two numbers', () => {
  var res = utils.add(33, 11);

  if (res != 44) {
    throw new Error(`Expected 44, but got ${res}.`);
  }
});

Now, in our case, everything will line up great. But what if the add method wasn't working correctly? Let's simulate this by simply tacking on another addition, adding on something like 22 in utils.js:

module.exports.add = (a, b) => a + b + 22;

I'll save the file and rerun the test suite. Now we get an error message: Expected 44, but got 66. This error message is fantastic. It lets us know that something is going wrong with the test, and it even tells us exactly what we got back and what we expected. This will let us go into the add function, look for errors, and hopefully fix them. Creating test cases doesn't need to be something super complex. In this case, we have a simple test case that tests a simple function.

To summarize, we looked into basic testing of a Node app, and we explored the testing framework Mocha, which can be used for creating and running test suites. You read an excerpt from a book written by Andrew Mead, titled Learning Node.js Development. In this book, you will learn how to build, deploy, and test Node apps.


Types, Variables, and Function Techniques

Packt
16 Feb 2016
39 min read
This article is an introduction to the syntax used in the TypeScript language to apply strong typing to JavaScript. It is intended for readers who have not used TypeScript before, and covers the transition from standard JavaScript to TypeScript. We will cover the following topics in this article:

Basic types and type syntax: strings, numbers, and booleans
Inferred typing and duck-typing
Arrays and enums
The any type and explicit casting
Functions and anonymous functions
Optional and default function parameters
Argument arrays
Function callbacks and function signatures
Function scoping rules and overloads

(For more resources related to this topic, see here.)

Basic types

JavaScript variables can hold a number of data types, including numbers, strings, arrays, objects, functions, and more. The type of an object in JavaScript is determined by its assignment; so if a variable has been assigned a string value, then it will be of type string. This can, however, introduce a number of problems in our code.

JavaScript is not strongly typed

JavaScript objects and variables can be changed or reassigned on the fly. As an example of this, consider the following JavaScript code:

var myString = "test";
var myNumber = 1;
var myBoolean = true;

We start by defining three variables, named myString, myNumber, and myBoolean. The myString variable is set to a string value of "test", and as such will be of type string. Similarly, myNumber is set to the value of 1, and is therefore of type number, and myBoolean is set to true, making it of type boolean. Now let's start assigning these variables to each other, as follows:

myString = myNumber;
myBoolean = myString;
myNumber = myBoolean;

We start by setting the value of myString to the value of myNumber (which is the numeric value of 1). We then set the value of myBoolean to the value of myString (which would now be the numeric value of 1). Finally, we set the value of myNumber to the value of myBoolean. What is happening here is that even though we started out with three different types of variables—a string, a number, and a boolean—we are able to reassign any of these variables to one of the other types. We can assign a number to a string, a string to a boolean, or a boolean to a number. While this type of assignment in JavaScript is legal, it shows that the JavaScript language is not strongly typed. This can lead to unwanted behaviour in our code. Parts of our code may be relying on the fact that a particular variable is holding a string, and if we inadvertently assign a number to this variable, our code may start to break in unexpected ways.

TypeScript is strongly typed

TypeScript, on the other hand, is a strongly typed language. Once you have declared a variable to be of type string, you can only assign string values to it. All further code that uses this variable must treat it as though it has a type of string. This helps to ensure that the code we write will behave as expected. While strong typing may not seem to be of any use with simple strings and numbers, it certainly does become important when we apply the same rules to objects, groups of objects, function definitions, and classes. If you have written a function that expects a string as the first parameter and a number as the second, you cannot be blamed if someone calls your function with a boolean as the first parameter and something else as the second.
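As a small sketch of that exact situation, consider the following plain JavaScript; the repeatString function here is a hypothetical example of ours, not one from the article:

function repeatString(text, count) {
    // Expects a string and a number, but nothing in the language enforces that.
    var result = "";
    for (var i = 0; i < count; i++) {
        result += text;
    }
    return result;
}

console.log(repeatString("ab", 3));    // "ababab", as intended
console.log(repeatString(true, "ab")); // "" - arguments are the wrong types, yet no error is raised

The second call is perfectly legal JavaScript; the mistake surfaces only as incorrect behaviour at runtime, if it surfaces at all.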
JavaScript programmers have always relied heavily on documentation to understand how to call functions, and the order and type of the correct function parameters. But what if we could take all of this documentation and include it within the IDE? Then, as we write our code, our compiler could point out to us—automatically—that we were using objects and functions in the wrong way. Surely this would make us more efficient, more productive programmers, allowing us to generate code with fewer errors? TypeScript does exactly that. It introduces a very simple syntax to define the type of a variable or a function parameter to ensure that we are using these objects, variables, and functions in the correct manner. If we break any of these rules, the TypeScript compiler will automatically generate errors, pointing us to the lines of code that are in error. This is how TypeScript got its name: it is JavaScript with strong typing, hence TypeScript. Let's take a look at this very simple language syntax that enables the "Type" in TypeScript.

Type syntax

The TypeScript syntax for declaring the type of a variable is to include a colon (:) after the variable name, and then indicate its type. Consider the following TypeScript code:

var myString: string = "test";
var myNumber: number = 1;
var myBoolean: boolean = true;

This code snippet is the TypeScript equivalent of our preceding JavaScript code. We can now see an example of the TypeScript syntax for declaring a type for the myString variable. By including a colon and then the keyword string (: string), we are telling the compiler that the myString variable is of type string. Similarly, the myNumber variable is of type number, and the myBoolean variable is of type boolean. TypeScript has introduced the string, number, and boolean keywords for each of these basic JavaScript types. If we attempt to assign a value to a variable that is not of the same type, the TypeScript compiler will generate a compile-time error. Given the variables declared in the preceding code, the following TypeScript code will generate some compile errors:

myString = myNumber;
myBoolean = myString;
myNumber = myBoolean;

The TypeScript compiler generates compile errors here because we are attempting to mix these basic types. The first error is generated because we cannot assign a number value to a variable of type string. Similarly, the second compile error indicates that we cannot assign a string value to a variable of type boolean. Again, the third error is generated because we cannot assign a boolean value to a variable of type number. The strong typing syntax that the TypeScript language introduces means that we need to ensure that the types on the left-hand side of an assignment operator (=) are the same as the types on the right-hand side of the assignment operator. To fix the preceding TypeScript code, and remove the compile errors, we would need to do something similar to the following:

myString = myNumber.toString();
myBoolean = (myString === "test");
if (myBoolean) {
    myNumber = 1;
}

Our first line of code has been changed to call the .toString() function on the myNumber variable (which is of type number), in order to return a value that is of type string. This line of code, then, does not generate a compile error because both sides of the equal sign are of the same type.
Our second line of code has also been changed so that the right-hand side of the assignment operator returns the result of a comparison, myString === "test", which will return a value of type boolean. The compiler will therefore allow this code, because both sides of the assignment resolve to a value of type boolean. The last line of our code snippet has been changed to only assign the value 1 (which is of type number) to the myNumber variable if the value of the myBoolean variable is true. Anders Hejlsberg describes this feature as "syntactic sugar". With a little sugar on top of comparable JavaScript code, TypeScript has enabled our code to conform to strong typing rules. Whenever you break these strong typing rules, the compiler will generate errors for your offending code.

Inferred typing

TypeScript also uses a technique called inferred typing in cases where you do not explicitly specify the type of your variable. In other words, TypeScript will find the first usage of a variable within your code, figure out what type the variable is first initialized to, and then assume the same type for this variable in the rest of your code block. As an example of this, consider the following code:

var myString = "this is a string";
var myNumber = 1;
myNumber = myString;

We start by declaring a variable named myString and assigning a string value to it. TypeScript identifies that this variable has been assigned a value of type string and will, therefore, infer any further usages of this variable to be of type string. Our second variable, named myNumber, has a number assigned to it. Again, TypeScript infers the type of this variable to be of type number. If we then attempt to assign the myString variable (of type string) to the myNumber variable (of type number) in the last line of code, TypeScript will generate a familiar error message:

error TS2011: Build: Cannot convert 'string' to 'number'

This error is generated because of TypeScript's inferred typing rules.

Duck-typing

TypeScript also uses a method called duck-typing for more complex variable types. Duck-typing means that if it looks like a duck and quacks like a duck, then it probably is a duck. Consider the following TypeScript code:

var complexType = { name: "myName", id: 1 };
complexType = { id: 2, name: "anotherName" };

We start with a variable named complexType that has been assigned a simple JavaScript object with a name and an id property. On our second line of code, we can see that we are reassigning the value of this complexType variable to another object that also has an id and a name property. The compiler will use duck-typing in this instance to figure out whether this assignment is valid. In other words, if an object has the same set of properties as another object, then they are considered to be of the same type. To further illustrate this point, let's see how the compiler reacts if we attempt to assign an object to our complexType variable that does not conform to this duck-typing:

var complexType = { name: "myName", id: 1 };
complexType = { id: 2 };
complexType = { name: "anotherName" };
complexType = { address: "address" };

The first line of this code snippet defines our complexType variable and assigns to it an object that contains both an id and a name property. From this point, TypeScript will use this inferred type on any value we attempt to assign to the complexType variable. On our second line of code, we are attempting to assign a value that has an id property but not the name property.
On the third line of code, we again attempt to assign a value that has a name property but does not have an id property. On the last line of our code snippet, we have completely missed the mark. Compiling this code will generate the following errors:

error TS2012: Build: Cannot convert '{ id: number; }' to '{ name: string; id: number; }':
error TS2012: Build: Cannot convert '{ name: string; }' to '{ name: string; id: number; }':
error TS2012: Build: Cannot convert '{ address: string; }' to '{ name: string; id: number; }':

As we can see from the error messages, TypeScript is using duck-typing to ensure type safety. In each message, the compiler gives us clues as to what is wrong with the offending code, by explicitly stating what it is expecting. The complexType variable has both an id and a name property. To assign a value to the complexType variable, then, this value will need to have both an id and a name property. Working through each of these errors, TypeScript explicitly states what is wrong with each line of code. Note that the following code will not generate any error messages:

var complexType = { name: "myName", id: 1 };
complexType = { name: "name", id: 2, address: "address" };

Again, our first line of code defines the complexType variable, as we have seen previously, with an id and a name property. Now, look at the second line of this example. The object we are using actually has three properties: name, id, and address. Even though we have added a new address property, the compiler will only check to see if our new object has both an id and a name. Because our new object has these properties, and will therefore match the original type of the variable, TypeScript will allow this assignment through duck-typing. Inferred typing and duck-typing are powerful features of the TypeScript language, bringing strong typing to our code without the need to use explicit typing, that is, a colon (:) and then the type specifier syntax.

Arrays

Besides the base JavaScript types of string, number, and boolean, TypeScript has two other data types: arrays and enums. Let's look at the syntax for defining arrays. An array is simply marked with the [] notation, similar to JavaScript, and each array can be strongly typed to hold a specific type, as seen in the code below:

var arrayOfNumbers: number[] = [1, 2, 3];
arrayOfNumbers = [3, 4, 5];
arrayOfNumbers = ["one", "two", "three"];

On the first line of this code snippet, we are defining an array named arrayOfNumbers, and we further specify that each element of this array must be of type number. The second line then reassigns this array to hold some different numerical values. The last line of this snippet, however, will generate the following error message:

error TS2012: Build: Cannot convert 'string[]' to 'number[]':

This error message warns us that the variable arrayOfNumbers is strongly typed to only accept values of type number. Our code tries to assign an array of strings to this array of numbers and is, therefore, generating a compile error.

The any type

All this type checking is well and good, but JavaScript is flexible enough to allow variables to be mixed and matched. The following code snippet is actually valid JavaScript code:

var item1 = { id: 1, name: "item 1" };
item1 = { id: 2 };

Our first line of code assigns an object with an id property and a name property to the variable item1. The second line then reassigns this variable to an object that has an id property but not a name property.
Unfortunately, as we have seen previously, TypeScript will generate a compile-time error for the preceding code:

error TS2012: Build: Cannot convert '{ id: number; }' to '{ id: number; name: string; }'

TypeScript introduces the any type for such occasions. Specifying that an object has a type of any, in essence, relaxes the compiler's strict type checking. The following code shows how to use the any type:

var item1: any = { id: 1, name: "item 1" };
item1 = { id: 2 };

Note how our first line of code has changed. We specify the type of the variable item1 to be of type : any so that our code will compile without errors. Without the type specifier of : any, the second line of code would normally generate an error.

Explicit casting

As with any strongly typed language, there comes a time when you need to explicitly specify the type of an object. An object can be cast to the type of another by using the < > syntax. This is not a cast in the strictest sense of the word; it is more of an assertion that is used at runtime by the TypeScript compiler. Any explicit casting that you use will be compiled away in the resultant JavaScript and will not affect the code at runtime. Let's modify our previous code snippet to use explicit casting:

var item1 = <any>{ id: 1, name: "item 1" };
item1 = { id: 2 };

Note that on the first line of this snippet, we have now replaced the : any type specifier on the left-hand side of the assignment with an explicit cast of <any> on the right-hand side. This snippet of code is telling the compiler to explicitly cast, or to explicitly treat, the { id: 1, name: "item 1" } object on the right-hand side as a type of any. So the item1 variable, therefore, also has the type of any (due to TypeScript's inferred typing rules). This then allows us to assign an object with only the { id: 2 } property to the variable item1 on the second line of code. This technique of using the < > syntax on the right-hand side of an assignment is called explicit casting.

While the any type is a necessary feature of the TypeScript language, its usage should really be limited as much as possible. It is a language shortcut that is necessary to ensure compatibility with JavaScript, but over-use of the any type will quickly lead to coding errors that will be difficult to find. Rather than using the type any, try to figure out the correct type of the object you are using, and then use this type instead. We use an acronym within our programming teams: S.F.I.A.T. (pronounced sviat or sveat), Simply Find an Interface for the Any Type. While this may sound silly, it brings home the point that the any type should always be replaced with an interface, so simply find it. Just remember that by actively trying to define what an object's type should be, we are building strongly typed code, and therefore protecting ourselves from future coding errors and bugs.

Enums

Enums are a special type that has been borrowed from other languages such as C#, and they provide a solution to the problem of special numbers. An enum associates a human-readable name with a specific number. Consider the following code:

enum DoorState {
    Open,
    Closed,
    Ajar
}

In this code snippet, we have defined an enum called DoorState to represent the state of a door. Valid values for this door state are Open, Closed, or Ajar. Under the hood (in the generated JavaScript), TypeScript will assign a numeric value to each of these human-readable enum values. In this example, the DoorState.Open enum value will equate to a numeric value of 0.
Likewise, the enum value DoorState.Closed will equate to the numeric value of 1, and the DoorState.Ajar enum value will equate to 2. Let's have a quick look at how we would use these enum values:

window.onload = () => {
    var myDoor = DoorState.Open;
    console.log("My door state is " + myDoor.toString());
};

The first line within the window.onload function creates a variable named myDoor and sets its value to DoorState.Open. The second line simply logs the value of myDoor to the console. The output of this console.log function would be:

My door state is 0

This clearly shows that the TypeScript compiler has substituted the enum value of DoorState.Open with the numeric value 0. Now let's use this enum in a slightly different way:

window.onload = () => {
    var openDoor = DoorState["Closed"];
    console.log("My door state is " + openDoor.toString());
};

This code snippet uses a string value of "Closed" to look up the enum type, and assigns the resulting enum value to the openDoor variable. The output of this code would be:

My door state is 1

This sample clearly shows that the enum value of DoorState.Closed is the same as the enum value of DoorState["Closed"], because both variants resolve to the numeric value of 1. Finally, let's have a look at what happens when we reference an enum using an array type syntax:

window.onload = () => {
    var ajarDoor = DoorState[2];
    console.log("My door state is " + ajarDoor.toString());
};

Here, we assign the variable ajarDoor to an enum value based on the index value 2 of the DoorState enum. The output of this code, though, is surprising:

My door state is Ajar

You may have been expecting the output to be simply 2, but here we are getting the string "Ajar", which is a string representation of our original enum name. This is actually a neat little trick, allowing us to access a string representation of our enum value. The reason that this is possible is down to the JavaScript that has been generated by the TypeScript compiler. Let's have a look, then, at the closure that the TypeScript compiler has generated:

var DoorState;
(function (DoorState) {
    DoorState[DoorState["Open"] = 0] = "Open";
    DoorState[DoorState["Closed"] = 1] = "Closed";
    DoorState[DoorState["Ajar"] = 2] = "Ajar";
})(DoorState || (DoorState = {}));

This strange-looking syntax is building an object that has a specific internal structure. It is this internal structure that allows us to use this enum in the various ways that we have just explored. If we interrogate this structure while debugging our JavaScript, we will see the internal structure of the DoorState object as follows:

DoorState {...}
    [prototype]: {...}
    [0]: "Open"
    [1]: "Closed"
    [2]: "Ajar"
    Ajar: 2
    Closed: 1
    Open: 0

The DoorState object has a property called "0", which has a string value of "Open". Unfortunately, in JavaScript the number 0 is not a valid property name, so we cannot access this property by simply using DoorState.0. Instead, we must access this property using either DoorState[0] or DoorState["0"]. The DoorState object also has a property named Open, which is set to the numeric value 0. The word Open IS a valid property name in JavaScript, so we can access this property using DoorState["Open"], or simply DoorState.Open, which equate to the same property in JavaScript. While the underlying JavaScript can be a little confusing, all we need to remember about enums is that they are a handy way of defining an easily remembered, human-readable name for a special number.
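As a quick sketch of what this buys us in practice, here is a hypothetical logDoorState function (our own example, reusing the DoorState enum defined earlier) that checks a door's state by name rather than by a bare number:

function logDoorState(state: DoorState) {
    // Compare against the enum name, not a raw number such as 2.
    if (state === DoorState.Ajar) {
        console.log("The door is ajar");
    }
}

logDoorState(DoorState.Ajar); // prints "The door is ajar"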
Using human-readable enums, instead of just scattering various special numbers around in our code, also makes the intent of the code clearer. Using an application-wide value named DoorState.Open or DoorState.Closed is far simpler than remembering to set a value to 0 for Open, 1 for Closed, and 2 for Ajar. As well as making our code more readable and more maintainable, using enums also protects our code base whenever these special numeric values change, because they are all defined in one place. One last note on enums: we can set the numeric value manually, if need be:

enum DoorState {
    Open = 3,
    Closed = 7,
    Ajar = 10
}

Here, we have overridden the default values of the enum to set DoorState.Open to 3, DoorState.Closed to 7, and DoorState.Ajar to 10.

Const enums

With the release of TypeScript 1.4, we are also able to define const enums as follows:

const enum DoorStateConst {
    Open,
    Closed,
    Ajar
}

var myState = DoorStateConst.Open;

These types of enums have been introduced largely for performance reasons, and the resultant JavaScript will not contain the full closure definition for the DoorStateConst enum as we saw previously. Let's have a quick look at the JavaScript that is generated from this DoorStateConst enum:

var myState = 0 /* Open */;

Note how we do not have a full JavaScript closure for the DoorStateConst at all. The compiler has simply resolved the DoorStateConst.Open enum to its internal value of 0, and removed the const enum definition entirely. With const enums, we therefore cannot reference the internal string value of an enum, as we did in our previous code sample. Consider the following example:

// generates an error
console.log(DoorStateConst[0]);
// valid usage
console.log(DoorStateConst["Open"]);

The first console.log statement will now generate a compile-time error, as we do not have the full closure available with the property of [0] for our const enum. The second usage of this const enum is valid, however, and will generate the following JavaScript:

console.log(0 /* "Open" */);

When using const enums, just keep in mind that the compiler will strip away all enum definitions and simply substitute the numeric value of the enum directly into our JavaScript code.

Functions

JavaScript defines functions using the function keyword, a set of parentheses, and then a set of curly braces. A typical JavaScript function would be written as follows:

function addNumbers(a, b) {
    return a + b;
}

var result = addNumbers(1, 2);
var result2 = addNumbers("1", "2");

This code snippet is fairly self-explanatory; we have defined a function named addNumbers that takes two variables and returns their sum. We then invoke this function, passing in the values of 1 and 2. The value of the variable result would then be 1 + 2, which is 3. Now have a look at the last line of code. Here, we are invoking the addNumbers function, passing in two strings as arguments instead of numbers. The value of the variable result2 would then be a string, "12". This string value seems like it may not be the desired result, as the name of the function is addNumbers.
Copying the preceding code into a TypeScript file would not generate any errors, but let's add some type rules to the preceding JavaScript to make it more robust:

function addNumbers(a: number, b: number): number {
    return a + b;
};

var result = addNumbers(1, 2);
var result2 = addNumbers("1", "2");

In this TypeScript code, we have added a :number type to both of the parameters of the addNumbers function (a and b), and we have also added a :number type just after the ( ) parentheses. Placing a type descriptor here means that the return type of the function itself is strongly typed to return a value of type number. In TypeScript, the last line of code, however, will cause a compilation error:

error TS2082: Build: Supplied parameters do not match any signature of call target:

This error message is generated because we have explicitly stated that the function should accept only numbers for both of the arguments a and b, but in our offending code, we are passing two strings. The TypeScript compiler, therefore, cannot match the signature of a function named addNumbers that accepts two arguments of type string.

Anonymous functions

The JavaScript language also has the concept of anonymous functions. These are functions that are defined on the fly and don't specify a function name. Consider the following JavaScript code:

var addVar = function(a, b) {
    return a + b;
};

var result = addVar(1, 2);

This code snippet defines a function that has no name and adds two values. Because the function does not have a name, it is known as an anonymous function. This anonymous function is then assigned to a variable named addVar. The addVar variable can then be invoked as a function with two parameters, and the return value will be the result of executing the anonymous function. In this case, the variable result will have a value of 3. Let's now rewrite the preceding JavaScript function in TypeScript and add some type syntax, in order to ensure that the function only accepts two arguments of type number and returns a value of type number:

var addVar = function(a: number, b: number): number {
    return a + b;
}

var result = addVar(1, 2);
var result2 = addVar("1", "2");

In this code snippet, we have created an anonymous function that accepts only arguments of type number for the parameters a and b, and also returns a value of type number. The types for both the a and b parameters, as well as the return type of the function, are now using the :number syntax. This is another example of the simple "syntactic sugar" that TypeScript injects into the language. If we compile this code, TypeScript will reject the code on the last line, where we try to call our anonymous function with two string parameters:

error TS2082: Build: Supplied parameters do not match any signature of call target:

Optional parameters

When we call a JavaScript function that is expecting parameters and we do not supply them, the value of each missing parameter within the function will be undefined. As an example of this, consider the following JavaScript code:

var concatStrings = function(a, b, c) {
    return a + b + c;
}

console.log(concatStrings("a", "b", "c"));
console.log(concatStrings("a", "b"));

Here, we have defined a function called concatStrings that takes three parameters, a, b, and c, and simply returns the sum of these values. If we call this function with all three parameters, as seen in the second-to-last line of this snippet, we will end up with the string "abc" logged to the console.
If, however, we only supply two parameters, as seen in the last line of this snippet, the string "abundefined" will be logged to the console. Again, if we call a function and do not supply a parameter, then this parameter, c in our case, will simply be undefined. TypeScript introduces the question mark (?) syntax to indicate optional parameters. Consider the following TypeScript function definition:

var concatStrings = function(a: string, b: string, c?: string) {
    return a + b + c;
}

console.log(concatStrings("a", "b", "c"));
console.log(concatStrings("a", "b"));
console.log(concatStrings("a"));

This is a strongly typed version of the original concatStrings JavaScript function that we were using previously. Note the addition of the ? character in the syntax for the third parameter: c?: string. This indicates that the third parameter is optional, and therefore, all of the preceding code will compile cleanly, except for the last line. The last line will generate an error:

error TS2081: Build: Supplied parameters do not match any signature of call target.

This error is generated because we are attempting to call the concatStrings function with only a single parameter. Our function definition, though, requires at least two parameters, with only the third parameter being optional. Note that optional parameters must be the last parameters in the function definition. You can have as many optional parameters as you want, as long as non-optional parameters precede the optional parameters.

Default parameters

A subtle variant on the optional parameter function definition allows us to specify the value of a parameter if it is not passed in as an argument from the calling code. Let's modify our preceding function definition to use a default parameter:

var concatStrings = function(a: string, b: string, c: string = "c") {
    return a + b + c;
}

console.log(concatStrings("a", "b", "c"));
console.log(concatStrings("a", "b"));

This function definition has now dropped the ? optional parameter syntax, and has instead assigned a value of "c" to the last parameter: c: string = "c". By using default parameters, if we do not supply a value for the final parameter named c, the concatStrings function will substitute the default value of "c" instead. The argument c, therefore, will not be undefined. The output of the last two lines of code will both be "abc". Note that using the default parameter syntax will automatically make the parameter optional.

The arguments variable

The JavaScript language allows a function to be called with a variable number of arguments. Every JavaScript function has access to a special variable, named arguments, that can be used to retrieve all arguments that have been passed into the function. As an example of this, consider the following JavaScript code:

function testParams() {
    if (arguments.length > 0) {
        for (var i = 0; i < arguments.length; i++) {
            console.log("Argument " + i + " = " + arguments[i]);
        }
    }
}

testParams(1, 2, 3, 4);
testParams("first argument");

In this code snippet, we have defined a function named testParams that does not have any named parameters. Note, though, that we can use the special variable named arguments to test whether the function was called with any arguments. In our sample, we can simply loop through the arguments array and log the value of each argument to the console by using an array indexer: arguments[i].
The output of the console.log calls is as follows:

Argument 0 = 1
Argument 1 = 2
Argument 2 = 3
Argument 3 = 4
Argument 0 = first argument

So, how do we express a variable number of function parameters in TypeScript? The answer is to use what are called rest parameters, or the three dots (...) syntax. Here is the equivalent testParams function, expressed in TypeScript:

function testParams(...argArray: number[]) {
    if (argArray.length > 0) {
        for (var i = 0; i < argArray.length; i++) {
            console.log("argArray " + i + " = " + argArray[i]);
            console.log("arguments " + i + " = " + arguments[i]);
        }
    }
}

testParams(1);
testParams(1, 2, 3, 4);
testParams("one", "two");

Note the use of the ...argArray: number[] syntax for our testParams function. This syntax is telling the TypeScript compiler that the function can accept any number of arguments. This means that our usages of this function, that is, calling the function with either testParams(1) or testParams(1, 2, 3, 4), will both compile correctly. In this version of the testParams function, we have added two console.log lines, just to show that the arguments can be accessed either through the named rest parameter, argArray[i], or through the normal JavaScript arguments variable, arguments[i]. The last line in this sample will, however, generate a compile error, as we have defined the rest parameter to only accept numbers, and we are attempting to call the function with strings.

The subtle difference between using argArray and arguments is the inferred type of the argument. Since we have explicitly specified that argArray is of type number, TypeScript will treat any item of the argArray array as a number. However, the internal arguments array does not have an inferred type, and so will be treated as the any type. We can also combine normal parameters along with rest parameters in a function definition, as long as the rest parameter is the last to be defined in the parameter list, as follows:

function testParamsTs2(arg1: string, arg2: number, ...argArray: number[]) {
}

Here, we have two normal parameters named arg1 and arg2, and then an argArray rest parameter. Mistakenly placing the rest parameter at the beginning of the parameter list will generate a compile error.

Function callbacks

One of the most powerful features of JavaScript—and in fact the technology that Node was built on—is the concept of callback functions. A callback function is a function that is passed into another function. Remember that JavaScript is not strongly typed, so a variable can also be a function. This is best illustrated by having a look at some JavaScript code:

function myCallBack(text) {
    console.log("inside myCallback " + text);
}

function callingFunction(initialText, callback) {
    console.log("inside CallingFunction");
    callback(initialText);
}

callingFunction("myText", myCallBack);

Here, we have a function named myCallBack that takes a parameter and logs its value to the console. We then define a function named callingFunction that takes two parameters: initialText and callback. The first line of this function simply logs "inside CallingFunction" to the console. The second line of the callingFunction is the interesting bit. It assumes that the callback argument is in fact a function, and invokes it. It also passes the initialText variable to the callback function. If we run this code, we will get two messages logged to the console, as follows:

inside CallingFunction
inside myCallback myText

But what happens if we do not pass a function as a callback?
There is nothing in the preceding code that signals to us that the second parameter of callingFunction must be a function. If we inadvertently called the callingFunction function with a string instead of a function as the second parameter, as follows:

callingFunction("myText", "this is not a function");

We would get a JavaScript runtime error:

0x800a138a - JavaScript runtime error: Function expected

Defensive-minded programmers, however, would first check whether the callback parameter was in fact a function before invoking it, as follows:

function callingFunction(initialText, callback) {
    console.log("inside CallingFunction");
    if (typeof callback === "function") {
        callback(initialText);
    } else {
        console.log(callback + " is not a function");
    }
}

callingFunction("myText", "this is not a function");

Note the third line of this code snippet, where we check the type of the callback variable before invoking it. If it is not a function, we then log a message to the console. On the last line of this snippet, we are executing the callingFunction, but this time passing a string as the second parameter. The output of the code snippet would be:

inside CallingFunction
this is not a function is not a function

When using function callbacks, then, JavaScript programmers need to do two things: firstly, understand which parameters are in fact callbacks, and secondly, code around the invalid use of callback functions.

Function signatures

The TypeScript "syntactic sugar" that enforces strong typing is not only intended for variables and types, but for function signatures as well. What if we could document our JavaScript callback functions in code, and then warn users of our code when they are passing the wrong type of parameter to our functions? TypeScript does this through function signatures. A function signature introduces a fat arrow syntax, () =>, to define what the function should look like. Let's rewrite the preceding JavaScript sample in TypeScript:

function myCallBack(text: string) {
    console.log("inside myCallback " + text);
}

function callingFunction(initialText: string, callback: (text: string) => void) {
    callback(initialText);
}

callingFunction("myText", myCallBack);
callingFunction("myText", "this is not a function");

Our first function definition, myCallBack, now strongly types the text parameter to be of type string. Our callingFunction function has two parameters: initialText, which is of type string, and callback, which now has the new function signature syntax. Let's look at this function signature more closely:

callback: (text: string) => void

What this function definition is saying is that the callback argument is typed (by the : syntax) to be a function, using the fat arrow syntax () =>. Additionally, this function takes a parameter named text that is of type string. To the right of the fat arrow syntax, we can see a new TypeScript basic type, called void. Void is a keyword to denote that a function does not return a value. So, the callingFunction function will only accept, as its second argument, a function that takes a single string parameter and returns nothing.
Compiling the preceding code will correctly highlight an error in the last line of the code snippet, where we are passing a string as the second parameter instead of a callback function:

error TS2082: Build: Supplied parameters do not match any signature of call target: Type '(text: string) => void' requires a call signature, but type 'String' lacks one

Given the preceding function signature for the callback function, the following code would also generate compile-time errors:

function myCallBackNumber(arg1: number) {
    console.log("arg1 = " + arg1);
}

callingFunction("myText", myCallBackNumber);

Here, we are defining a function named myCallBackNumber that takes a number as its only parameter. When we attempt to compile this code, we will get an error message indicating that the callback parameter, which is our myCallBackNumber function, does not have the correct function signature:

Call signatures of types 'typeof myCallBackNumber' and '(text: string) => void' are incompatible.

The function signature of myCallBackNumber would actually be (arg1: number) => void, instead of the required (text: string) => void, hence the error. In function signatures, the parameter name (arg1 or text) does not need to be the same. Only the number of parameters, their types, and the return type of the function need to be the same. This is a very powerful feature of TypeScript: defining in code what the signatures of functions should be, and warning users when they do not call a function with the correct parameters. As we saw in our introduction to TypeScript, this is most significant when we are working with third-party libraries. Before we are able to use third-party functions, classes, or objects in TypeScript, we need to define what their function signatures are. These function definitions are put into a special type of TypeScript file, called a declaration file, and saved with a .d.ts extension.

Function callbacks and scope

JavaScript uses lexical scoping rules to define the valid scope of a variable. This means that the value of a variable is defined by its location within the source code. Nested functions have access to variables that are defined in their parent scope. As an example of this, consider the following TypeScript code:

function testScope() {
    var testVariable = "myTestVariable";
    function print() {
        console.log(testVariable);
    }
}

console.log(testVariable);

This code snippet defines a function named testScope. The variable testVariable is defined within this function. The print function is a child function of testScope, so it has access to the testVariable variable. The last line of the code, however, will generate a compile error, because it is attempting to use the variable testVariable, which is lexically scoped to be valid only inside the body of the testScope function:

error TS2095: Build: Could not find symbol 'testVariable'.

Simple, right? A nested function has access to variables depending on its location within the source code. This is all well and good, but in large JavaScript projects, there are many different files, and many areas of the code are designed to be reusable. Let's take a look at how these scoping rules can become a problem. For this sample, we will use a typical callback scenario: using jQuery to execute an asynchronous call to fetch some data.
Consider the following TypeScript code:

var testVariable = "testValue";

function getData() {
    var testVariable_2 = "testValue_2";
    $.ajax(
        {
            url: "/sample_json.json",
            success: (data, status, jqXhr) => {
                console.log("success : testVariable is :" + testVariable);
                console.log("success : testVariable_2 is :" + testVariable_2);
            },
            error: (message, status, stack) => {
                alert("error " + message);
            }
        }
    );
}

getData();

In this code snippet, we are defining a variable named testVariable and setting its value. We then define a function called getData. The getData function sets another variable called testVariable_2, and then calls the jQuery $.ajax function. The $.ajax function is configured with three properties: url, success, and error. The url property is a simple string that points to a sample_json.json file in our project directory. The success property is an anonymous function callback that simply logs the values of testVariable and testVariable_2 to the console. Finally, the error property is also an anonymous function callback that simply pops up an alert. This code runs as expected, and the success function will log the following results to the console:

success : testVariable is :testValue
success : testVariable_2 is :testValue_2

So far so good. Now, let's assume that we are trying to refactor the preceding code, as we are doing quite a few similar $.ajax calls and want to reuse the success callback function elsewhere. We can easily switch out this anonymous function and create a named function for our success callback, as follows:

var testVariable = "testValue";

function getData() {
    var testVariable_2 = "testValue_2";
    $.ajax(
        {
            url: "/sample_json.json",
            success: successCallback,
            error: (message, status, stack) => {
                alert("error " + message);
            }
        }
    );
}

function successCallback(data, status, jqXhr) {
    console.log("success : testVariable is :" + testVariable);
    console.log("success : testVariable_2 is :" + testVariable_2);
}

getData();

In this sample, we have created a new function named successCallback with the same parameters as our previous anonymous function. We have also modified the $.ajax call to simply pass this function in as a callback function for the success property: success: successCallback. If we were to compile this code now, TypeScript would generate an error, as follows:

error TS2095: Build: Could not find symbol 'testVariable_2'.

Since we have changed the lexical scope of our code by creating a named function, the new successCallback function no longer has access to the variable testVariable_2. It is fairly easy to spot this sort of error in a trivial example, but in larger projects, and when using third-party libraries, these sorts of errors become more difficult to track down. It is, therefore, worth mentioning that when using callback functions, we need to understand this lexical scope. If your code expects a property to have a value, and it does not have one after a callback, then remember to have a look at the context of the calling code.

Function overloads

As JavaScript is a dynamic language, we can often call the same function with different argument types. Consider the following JavaScript code:

function add(x, y) {
    return x + y;
}

console.log("add(1,1)=" + add(1, 1));
console.log("add('1','1')=" + add("1", "1"));
console.log("add(true,false)=" + add(true, false));

Here, we are defining a simple add function that returns the sum of its two parameters, x and y.
The last three lines of this code snippet simply log the result of the add function with different types: two numbers, two strings, and two boolean values. If we run this code, we will see the following output:

add(1,1)=2
add('1','1')=11
add(true,false)=1

TypeScript introduces a specific syntax to indicate multiple function signatures for the same function. If we were to replicate the preceding code in TypeScript, we would need to use the function overload syntax:

function add(arg1: string, arg2: string): string;
function add(arg1: number, arg2: number): number;
function add(arg1: boolean, arg2: boolean): boolean;
function add(arg1: any, arg2: any): any {
    return arg1 + arg2;
}

console.log("add(1,1)=" + add(1, 1));
console.log("add('1','1')=" + add("1", "1"));
console.log("add(true,false)=" + add(true, false));

The first line of this code snippet specifies a function overload signature for the add function that accepts two strings and returns a string. The second line specifies another function overload that uses numbers, and the third line uses booleans. The fourth line contains the actual body of the function and uses the type specifier of any. The last three lines of this snippet show how we would use these function signatures, and are similar to the JavaScript code that we have been using previously.

There are three points of interest in the preceding code snippet. Firstly, none of the function signatures on the first three lines of the snippet actually have a function body. Secondly, the final function definition uses the type specifier of any and eventually includes the function body. The function overload syntax must follow this structure, and the final function signature, which includes the body of the function, must use the any type specifier, as anything else will generate compile-time errors. The third point to note is that, by using these function overload signatures, we are limiting the add function to only accept two parameters that are of the same type. If we were to try and mix our types, for example, by calling the function with a boolean and a string, as follows:

console.log("add(true,'1')", add(true, "1"));

TypeScript would generate compile errors:

error TS2082: Build: Supplied parameters do not match any signature of call target:
error TS2087: Build: Could not select overload for 'call' expression.

This seems to contradict our final function definition, though. In the original TypeScript sample, we had a function signature that accepted (arg1: any, arg2: any); so, in theory, this should be called when we try to add a boolean and a string. The TypeScript syntax for function overloads, however, does not allow this. Remember that the function overload syntax must include the use of the any type for the function body, as all overloads eventually call this function body. However, the inclusion of the function overloads above the function body indicates to the compiler that these are the only signatures that should be available to the calling code.

Summary

To learn more about TypeScript, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

Learning TypeScript (https://www.packtpub.com/web-development/learning-typescript)
TypeScript Essentials (https://www.packtpub.com/web-development/typescript-essentials)


Writing a Blog Application with Node.js and AngularJS

Packt
16 Feb 2016
35 min read
In this article, we are going to build a blog application by using Node.js and AngularJS. Our system will support adding, editing, and removing articles, so there will be a control panel. A MongoDB or MySQL database will handle the storing of the information, and the Express framework will be used as the site base. It will deliver the JavaScript, CSS, and HTML to the end user, and will provide an API to access the database. We will use AngularJS to build the user interface and control the client-side logic in the administration page. This article will cover the following topics: AngularJS fundamentals, choosing and initializing a database, and implementing the client-side part of an application with AngularJS.
Exploring AngularJS AngularJS is an open source, client-side JavaScript framework developed by Google. It's full of features and is really well documented. It has almost become a standard framework in the development of single-page applications. The official site of AngularJS, http://angularjs.org, provides well-structured documentation. As the framework is widely used, there is a lot of material in the form of articles and video tutorials. As a JavaScript library, it collaborates pretty well with Node.js. In this article, we will build a simple blog with a control panel. Before we start developing our application, let's first take a look at the framework. AngularJS gives us very good control over the data on our page. We don't have to think about selecting elements from the DOM and filling them with values. Thanks to the built-in data binding, we may update the data in the JavaScript part and see the change in the HTML part. This is also true for the reverse. Once we change something in the HTML part, we get the new values in the JavaScript part. The framework has a powerful dependency injector. There are predefined classes for performing AJAX requests and managing routes. You could also read Mastering Web Development with AngularJS by Peter Bacon Darwin and Pawel Kozlowski, published by Packt Publishing.
Bootstrapping AngularJS applications To bootstrap an AngularJS application, we need to add the ng-app attribute to some of our HTML tags. It is important that we pick the right one. Having ng-app somewhere means that all the child nodes will be processed by the framework. It's common practice to put that attribute on the <html> tag. In the following code, we have a simple HTML page containing ng-app: <html ng-app> <head> <script src="angular.min.js"></script> </head> <body> ... </body> </html> Very often, we will apply a value to the attribute. This will be a module name. We will do this while developing the control panel of our blog application. Having the freedom to place ng-app wherever we want means that we can decide which part of our markup will be controlled by AngularJS. That's good, because if we have a giant HTML file, we really don't want to spend resources parsing the whole document. Of course, we may bootstrap our logic manually, and this is needed when we have more than one AngularJS application on the page.
Using directives and controllers In AngularJS, we can implement the Model-View-Controller pattern. The controller acts as glue between the data (model) and the user interface (view). In the context of the framework, the controller is just a simple function.
For example, the following HTML code illustrates that a controller is just a simple function: <html ng-app> <head> <script src="angular.min.js"></script> <script src="HeaderController.js"></script> </head> <body> <header ng-controller="HeaderController"> <h1>{{title}}</h1> </header> </body> </html> In the <head> of the page, we are adding the minified version of the library and HeaderController.js, a file that will host the code of our controller. We also set an ng-controller attribute in the HTML markup. The definition of the controller is as follows: function HeaderController($scope) { $scope.title = "Hello world"; } Every controller has its own area of influence. That area is called the scope. In our case, HeaderController defines the {{title}} variable. AngularJS has a wonderful dependency-injection system. Thanks to this mechanism, the $scope argument is automatically initialized and passed to our function. The ng-controller attribute is called a directive, that is, an attribute that has meaning to AngularJS. There are a lot of directives that we can use. That's maybe one of the strongest points of the framework. We can implement complex logic directly inside our templates, for example, data binding, filtering, or modularity.
Data binding Data binding is a process of automatically updating the view once the model is changed. As we mentioned earlier, we can change a variable in the JavaScript part of the application and the HTML part will be automatically updated. We don't have to create a reference to a DOM element or attach event listeners. Everything is handled by the framework. Let's continue and elaborate on the previous example, as follows: <header ng-controller="HeaderController"> <h1>{{title}}</h1> <a href="#" ng-click="updateTitle()">change title</a> </header> A link is added and it contains the ng-click directive. The updateTitle function is a function defined in the controller, as seen in the following code snippet: function HeaderController($scope) { $scope.title = "Hello world"; $scope.updateTitle = function() { $scope.title = "That's a new title."; } } We don't care about the DOM element and where the {{title}} variable is. We just change a property of $scope and everything works. There are, of course, situations where we will have <input> fields and we want to bind their values. If that's the case, then the ng-model directive can be used. We can see this as follows: <header ng-controller="HeaderController"> <h1>{{title}}</h1> <a href="#" ng-click="updateTitle()">change title</a> <input type="text" ng-model="title" /> </header> The data in the input field is bound to the same title variable. This time, we don't have to edit the controller. AngularJS automatically changes the content of the h1 tag.
Encapsulating logic with modules It's great that we have controllers. However, it's not a good practice to place everything into globally defined functions. That's why it is good to use the module system. The following code shows how a module is defined: angular.module('HeaderModule', []); The first parameter is the name of the module and the second one is an array with the module's dependencies. By dependencies, we mean other modules, services, or something custom that we can use inside the module. The module name should also be set as a value of the ng-app directive.
The code so far could be translated to the following code snippet: angular.module('HeaderModule', []) .controller('HeaderController', function($scope) { $scope.title = "Hello world"; $scope.updateTitle = function() { $scope.title = "That's a new title."; } }); So, the first line defines a module. We can chain the different methods of the module, and one of them is the controller method. Following this approach, that is, putting our code inside a module, we will be encapsulating logic. This is a sign of good architecture. And of course, with a module, we have access to different features such as filters, custom directives, and custom services.
Preparing data with filters The filters are very handy when we want to prepare our data before it is displayed to the user. Let's say, for example, that we need to show our title in uppercase once it is longer than 20 characters: angular.module('HeaderModule', []) .filter('customuppercase', function() { return function(input) { if(input.length > 20) { return input.toUpperCase(); } else { return input; } }; }) .controller('HeaderController', function($scope) { $scope.title = "Hello world"; $scope.updateTitle = function() { $scope.title = "That's a new title."; } }); That's the definition of the custom filter called customuppercase. It receives the input and performs a simple check. What it returns is what the user sees at the end. Here is how this filter could be used in HTML: <h1>{{title | customuppercase}}</h1> Of course, we may add more than one filter per variable. There are also some predefined filters, for example, for limiting length, converting JavaScript to JSON, or formatting dates.
Dependency injection Dependency management can be very tough sometimes. We may split everything into different modules/components. They have nicely written APIs and they are very well documented. However, very soon, we may realize that we need to create a lot of objects. Dependency injection solves this problem by providing what we need, on the fly. We already saw this in action. The $scope parameter passed to our controller is actually created by the injector of AngularJS. To get something as a dependency, we need to define it somewhere and let the framework know about it. We do this as follows: angular.module('HeaderModule', []) .factory("Data", function() { return { getTitle: function() { return "A better title."; } } }) .controller('HeaderController', function($scope, Data) { $scope.title = Data.getTitle(); $scope.updateTitle = function() { $scope.title = "That's a new title."; } }); The Module class has a method called factory. It registers a new service that could later be used as a dependency. The function returns an object with only one method, getTitle. Of course, the name of the service should match the name of the controller's parameter. Otherwise, AngularJS will not be able to find the dependency's source.
The model in the context of AngularJS In the well-known Model-View-Controller pattern, the model is the part that stores the data in the application. AngularJS doesn't have a specific workflow to define models. The $scope variable could be considered a model. We keep the data in properties attached to the current scope. Later, we can use the ng-model directive and bind a property to the DOM element. We already saw how this works in the previous sections. The framework may not provide the usual form of a model, but it's made like that so that we can write our own implementation.
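For instance, a hand-rolled model can be as small as a plain object wrapped in a service. Here is a minimal sketch of our own (the ModelExample, TitleModel, and ExampleController names are illustrative, not an AngularJS convention):

angular.module('ModelExample', [])
    .factory('TitleModel', function() {
        // A plain object acts as the data store.
        var data = { title: "Hello world" };
        return {
            data: data,
            reset: function() { data.title = ""; }
        };
    })
    .controller('ExampleController', function($scope, TitleModel) {
        // Exposing the model on the scope keeps ng-model bindings
        // (for example, ng-model="model.data.title") in sync with it.
        $scope.model = TitleModel;
    });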
The fact that AngularJS works with plain JavaScript objects makes this task easily doable.
Final words on AngularJS AngularJS is one of the leading frameworks, not only because it is made by Google, but also because it's really flexible. We could use just a small piece of it or build a solid architecture using the giant collection of features.
Selecting and initializing the database To build a blog application, we need a database that will store the published articles. In most cases, the choice of the database depends on the current project. There are factors such as performance and scalability that we should keep in mind. In order to have a better look at the possible solutions, we will have a look at two of the most popular databases: MongoDB and MySQL. The first one is a NoSQL type of database. According to the Wikipedia entry (http://en.wikipedia.org/wiki/NoSQL) on NoSQL databases: "A NoSQL or Not Only SQL database provides a mechanism for storage and retrieval of data that is modeled in means other than the tabular relations used in relational databases." In other words, it's simpler than a SQL database, and very often stores information as key-value pairs. Usually, such solutions are used when handling and storing large amounts of data. It is also a very popular approach when we need a flexible schema or when we want to use JSON. It really depends on what kind of system we are building. In some cases, MySQL could be a better choice, while in some other cases, MongoDB. In our example blog, we're going to use both. In order to do this, we will need a layer that connects to the database server and accepts queries. To make things a bit more interesting, we will create a module that has only one API, but can switch between the two database models.
Using NoSQL with MongoDB Let's start with MongoDB. Before we start storing information, we need a MongoDB server running. It can be downloaded from the official page of the database https://www.mongodb.org/downloads. We are not going to handle the communication with the database manually. There is a driver specifically developed for Node.js. It's called mongodb and we should include it in our package.json file. After successful installation via npm install, the driver will be available in our scripts. We can check this as follows: "dependencies": { "mongodb": "1.3.20" } We will stick to the Model-View-Controller architecture and keep the database-related operations in a model called Articles. We can see this as follows: var crypto = require("crypto"), type = "mongodb", client = require('mongodb').MongoClient, mongodb_host = "127.0.0.1", mongodb_port = "27017", collection; module.exports = function() { if(type == "mongodb") { return { add: function(data, callback) { ... }, update: function(data, callback) { ... }, get: function(callback) { ... }, remove: function(id, callback) { ... } } } else { return { add: function(data, callback) { ... }, update: function(data, callback) { ... }, get: function(callback) { ... }, remove: function(id, callback) { ... } } } } It starts with defining a few dependencies and settings for the MongoDB connection. Line number one requires the crypto module. We will use it to generate unique IDs for every article. The type variable defines which database is currently accessed. The third line initializes the MongoDB driver. We will use it to communicate with the database server.
After that, we set the host and port for the connection and, at the end, a global collection variable, which will keep a reference to the collection with the articles. In MongoDB, the collections are similar to the tables in MySQL. The next logical step is to establish a database connection and perform the needed operations, as follows: connection = 'mongodb://'; connection += mongodb_host + ':' + mongodb_port; connection += '/blog-application'; client.connect(connection, function(err, database) { if(err) { throw new Error("Can't connect"); } else { console.log("Connection to MongoDB server successful."); collection = database.collection('articles'); } }); We pass the host and the port, and the driver is doing everything else. Of course, it is a good practice to handle the error (if any) and throw an exception. In our case, this is especially needed because without the information in the database, the frontend has nothing to show. The rest of the module contains methods to add, edit, retrieve, and delete records: return { add: function(data, callback) { var date = new Date(); data.id = crypto.randomBytes(20).toString('hex'); data.date = date.getFullYear() + "-" + (date.getMonth() + 1) + "-" + date.getDate(); // getMonth() is zero-based collection.insert(data, {}, callback || function() {}); }, update: function(data, callback) { collection.update( {id: data.id}, data, {}, callback || function(){ } ); }, get: function(callback) { collection.find({}).toArray(callback); }, remove: function(id, callback) { collection.findAndModify( {id: id}, [], {}, {remove: true}, callback ); } } The add and update methods accept the data parameter. That's a simple JavaScript object. For example, see the following code: { title: "Blog post title", text: "Article's text here ..." } The records are identified by an automatically generated unique id, and the update and remove methods query on that same id property. The update method needs it in order to find out which record to edit. All the methods also have a callback. That's important, because the module is meant to be used as a black box, that is, we should be able to create an instance of it, operate with the data, and at the end continue with the rest of the application's logic.
Using MySQL We're going to use an SQL type of database with MySQL. We will add a few more lines of code to the already working Articles.js model. The idea is to have a class that supports the two databases as two different options. At the end, we should be able to switch from one to the other by simply changing the value of a variable. Similar to MongoDB, we need to first install the database to be able to use it. The official download page is http://www.mysql.com/downloads. MySQL requires another Node.js module. It should be added again to the package.json file. We can see the module as follows: "dependencies": { "mongodb": "1.3.20", "mysql": "2.0.0" } Similar to the MongoDB solution, we first need to connect to the server. To do so, we need to know the values of the host, username, and password fields. And because the data is organized into databases, we also need the name of the database. In MySQL, we put our data into different databases. So, the following code defines the needed variables: var mysql = require('mysql'), mysql_host = "127.0.0.1", mysql_user = "root", mysql_password = "", mysql_database = "blog_application", connection; The previous example leaves the password field empty, but we should set the proper value for our system. The MySQL database requires us to define a table and its fields before we start saving data.
So, the following code is a short dump of the table used in this article: CREATE TABLE IF NOT EXISTS `articles` ( `id` int(11) NOT NULL AUTO_INCREMENT, `title` longtext NOT NULL, `text` longtext NOT NULL, `date` varchar(100) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1; Once we have a database and its table set, we can continue with the database connection, as follows: connection = mysql.createConnection({ host: mysql_host, user: mysql_user, password: mysql_password }); connection.connect(function(err) { if(err) { throw new Error("Can't connect to MySQL."); } else { connection.query("USE " + mysql_database, function(err, rows, fields) { if(err) { throw new Error("Missing database."); } else { console.log("Successfully selected database."); } }) } }); The driver provides a method to connect to the server and execute queries. The first executed query selects the database. If everything is ok, you should see Successfully selected database as an output in your console. Half of the job is done. What we should do now is replicate the methods returned in the first MongoDB implementation. We need to do this because otherwise, when we switch to MySQL, the code using the class will not work. And by replicating them, we mean that they should have the same names and should accept the same arguments. If we do everything correctly, at the end our application will support two types of databases. And all we have to do is change the value of the type variable: return { add: function(data, callback) { var date = new Date(); var query = ""; query += "INSERT INTO articles (title, text, date) VALUES ("; query += connection.escape(data.title) + ", "; query += connection.escape(data.text) + ", "; query += "'" + date.getFullYear() + "-" + (date.getMonth() + 1) + "-" + date.getDate() + "'"; query += ")"; connection.query(query, callback); }, update: function(data, callback) { var query = "UPDATE articles SET "; query += "title=" + connection.escape(data.title) + ", "; query += "text=" + connection.escape(data.text) + " "; query += "WHERE id=" + connection.escape(data.id); connection.query(query, callback); }, get: function(callback) { var query = "SELECT * FROM articles ORDER BY id DESC"; connection.query(query, function(err, rows, fields) { if (err) { throw new Error("Error getting."); } else { callback(rows); } }); }, remove: function(id, callback) { var query = "DELETE FROM articles WHERE id=" + connection.escape(id); connection.query(query, callback); } } The code is a little longer than the one in the first MongoDB variant. That's because we needed to construct MySQL queries from the passed data. Keep in mind that we have to escape the information that comes into the module, including the id values used in the update and remove queries. That's why we use connection.escape(). With these lines of code, our model is completed. Now we can add, edit, remove, or get data. Let's continue with the part that shows the articles to our users.
Developing the client side with AngularJS Let's assume that there is some data in the database and we are ready to present it to the users. So far, we have only developed the model, which is the class that takes care of the access to the information. To simplify the process, we will use Express here. We need to first update the package.json file and include the framework in it, as follows: "dependencies": { "express": "3.4.6", "jade": "0.35.0", "mongodb": "1.3.20", "mysql": "2.0.0" } We are also adding Jade, because we are going to use it as a template language.
Writing markup in plain HTML is not very efficient nowadays. By using a template engine, we can split the data and the HTML markup, which makes our application much better structured. Jade's syntax is kind of similar to HTML. We can write tags without the need to close them: body p(class="paragraph", data-id="12"). Sample text here footer a(href="#"). my site The preceding code snippet is transformed to the following code snippet: <body> <p data-id="12" class="paragraph">Sample text here</p> <footer><a href="#">my site</a></footer> </body> Jade relies on the indentation in the content to distinguish the tags. Let's start with the project structure (shown as a screenshot in the original article). We placed our already written class, Articles.js, inside the models directory. The public directory will contain CSS styles, and all the necessary client-side JavaScript: the AngularJS library, the AngularJS router module, and our custom code. We will skip some of the explanations about the following code. Our index.js file looks as follows: var express = require('express'); var app = express(); var articles = require("./models/Articles")(); app.set('views', __dirname + '/views'); app.set('view engine', 'jade'); app.use(express.static(__dirname + '/public')); app.use(function(req, res, next) { req.articles = articles; next(); }); app.get('/api/get', require("./controllers/api/get")); app.get('/', require("./controllers/index")); app.listen(3000); console.log('Listening on port 3000'); At the beginning, we require the Express framework and our model. Maybe it's better to initialize the model inside the controller, but in our case this is not necessary. Just after that, we set up some basic options for Express and define our own middleware. It has only one job to do, and that is to attach the model to the request object. We are doing this because the request object is passed to all the route handlers. In our case, these handlers are actually the controllers. So, Articles.js becomes accessible everywhere via the req.articles property. At the end of the script, we placed two routes. The second one catches the usual requests that come from the users. The first one, /api/get, is a bit more interesting. We want to build our frontend on top of AngularJS. So, the data stored in the database should not be rendered into HTML in the Node.js part, but on the client side, where we use Google's framework. To make this possible, we will create routes/controllers to get, add, edit, and delete records. Everything will be controlled by HTTP requests performed by AngularJS. In other words, we need an API. Before we start using AngularJS, let's take a look at the /controllers/api/get.js controller: module.exports = function(req, res, next) { req.articles.get(function(rows) { res.send(rows); }); } The main job is done by our model, and the response is handled by Express. It's nice because if we pass a JavaScript object, as we did (rows is actually an array of objects), the framework sets the response headers automatically. To test the result, we could run the application with node index.js and open http://localhost:3000/api/get. If we don't have any records in the database, we will get an empty array. Otherwise, the stored articles will be returned. So, that's the URL that we should hit from within the AngularJS controller in order to get the information. The code of the /controllers/index.js controller is also just a few lines.
We can see the code as follows: module.exports = function(req, res, next) { res.render("list", { app: "" }); } It simply renders the list view, which is stored in the list.jade file. That file should be saved in the /views directory. But before we see its code, we will check another file, which acts as a base for all the pages. Jade has a nice feature called blocks. We may define different partials and combine them into one template. The following is our layout.jade file: doctype html html(ng-app="#{app}") head title Blog link(rel='stylesheet', href='/style.css') script(src='/angular.min.js') script(src='/angular-route.min.js') body block content There is only one variable passed to this template, which is #{app}. We will need it later to initialize the administration module. The angular.min.js and angular-route.min.js files should be downloaded from the official AngularJS site, and placed in the /public directory. The body of the page contains a block placeholder called content, which we will later fill with the list of the articles. The following is the list.jade file: extends layout block content .container(ng-controller="BlogCtrl") section.articles article(ng-repeat="article in articles") h2 {{article.title}} br small published on {{article.date}} p {{article.text}} script(src='/blog.js') The two lines in the beginning combine both the templates into one page. The Express framework transforms the Jade template into HTML and serves it to the browser of the user. From there, the client-side JavaScript takes control. We are using the ng-controller directive to say that the div element will be controlled by an AngularJS controller called BlogCtrl. The same controller should have a variable, articles, filled with the information from the database. ng-repeat goes through the array and displays the content to the users. The blog.js file holds the code of the controller: function BlogCtrl($scope, $http) { $scope.articles = [ { title: "", text: "Loading ..."} ]; $http({method: 'GET', url: '/api/get'}) .success(function(data, status, headers, config) { $scope.articles = data; }) .error(function(data, status, headers, config) { console.error("Error getting articles."); }); } The controller has two dependencies. The first one, $scope, points to the current view. Whatever we assign as a property there is available as a variable in our HTML markup. Initially, we add only one element, which doesn't have a title, but has text. It is shown to indicate that we are still loading the articles from the database. The second dependency, $http, provides an API in order to make HTTP requests. So, all we have to do is query /api/get, fetch the data, and pass it to the $scope dependency. The rest is done by AngularJS and its magical two-way data binding. To make the application a little more interesting, we will add a search field, as follows: // views/list.jade header .search input(type="text", placeholder="type a filter here", ng-model="filterText") h1 Blog hr The ng-model directive binds the value of the input field to a variable inside our $scope dependency. However, this time, we don't have to edit our controller and can simply apply the same variable as a filter to the ng-repeat: article(ng-repeat="article in articles | filter:filterText") As a result, the articles shown will be filtered based on the user's input. Two simple additions, but something really valuable is on the page. The filters of AngularJS can be very powerful.
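To give one more concrete illustration (our own addition, not from the original text), a custom filter could shorten long articles in the list. This sketch assumes the page is bootstrapped with a module named blog rather than the bare ng-app used above; the excerpt name and the 140-character default are our own choices:

angular.module('blog', [])
    .filter('excerpt', function() {
        // Truncates long text and appends an ellipsis.
        return function(input, limit) {
            limit = limit || 140;
            return (input && input.length > limit)
                ? input.substr(0, limit) + "..."
                : input;
        };
    });

It could then be applied in list.jade with something like p {{article.text | excerpt:200}}, and chained freely with the filter:filterText expression shown above.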
Implementing a control panel The control panel is the place where we will manage the articles of the blog. Several things should be done in the backend before continuing with the user interface. They are as follows: app.set("username", "admin"); app.set("password", "pass"); app.use(express.cookieParser('blog-application')); app.use(express.session()); The previous lines of code should be added to /index.js. Our administration should be protected, so the first two lines define our credentials. We are using Express as data storage, simply creating key-value pairs. Later, if we need the username, we can get it with app.get("username"). The next two lines enable session support. We need that because of the login process. We added a middleware, which attaches the articles to the request object. We will do the same with the current user's status, as follows: app.use(function(req, res, next) { if (( req.session && req.session.admin === true ) || ( req.body && req.body.username === app.get("username") && req.body.password === app.get("password") )) { req.logged = true; req.session.admin = true; } next(); }); Our if statement is a little long, but it tells us whether the user is logged in or not. The first part checks whether there is a session created and the second one checks whether the user submitted a form with the correct username and password. If either of these expressions is true, then we attach a variable, logged, to the request object and create a session that will be valid during the following requests. There is only one thing left that we need in the main application file: a few routes that will handle the control panel operations. In the following code, we are defining them along with the needed route handler: var protect = function(req, res, next) { if (req.logged) { next(); } else { res.send(401, 'No Access.'); } } app.post('/api/add', protect, require("./controllers/api/add")); app.post('/api/edit', protect, require("./controllers/api/edit")); app.post('/api/delete', protect, require("./controllers/api/delete")); app.all('/admin', require("./controllers/admin")); The three routes, which start with /api, will use the model Articles.js to add, edit, and remove articles from the database. These operations should be protected. We will add a middleware function that takes care of this. If the req.logged variable is not available, it simply responds with a 401 - Unauthorized status code. The last route, /admin, is a little different because it shows a login form instead. The following is the controller to create new articles: module.exports = function(req, res, next) { req.articles.add(req.body, function() { res.send({success: true}); }); } We transfer most of the logic to the frontend, so again, there are just a few lines. What is interesting here is that we pass req.body directly to the model. It actually contains the data submitted by the user.
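One detail to keep in mind (our note, not part of the original listing): req.body is only populated if a body-parsing middleware is registered in /index.js before the routes. With the Express 3.x version declared in package.json, that would be:

// /index.js - needed so Express fills req.body from submitted forms.
app.use(express.bodyParser());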
The following code is how the req.articles.add method looks for the MongoDB implementation: add: function(data, callback) { data.id = crypto.randomBytes(20).toString('hex'); collection.insert(data, {}, callback || function() {}); } And the MySQL implementation is as follows: add: function(data, callback) { var date = new Date(); var query = ""; query += "INSERT INTO articles (title, text, date) VALUES ("; query += connection.escape(data.title) + ", "; query += connection.escape(data.text) + ", "; query += "'" + date.getFullYear() + "-" + (date.getMonth() + 1) + "-" + date.getDate() + "'"; query += ")"; connection.query(query, callback); } In both cases, we need title and text in the passed data object. Thanks to Express' bodyParser middleware, this is what we have in the req.body object. We can directly forward it to the model. The other route handlers are almost the same: // api/edit.js module.exports = function(req, res, next) { req.articles.update(req.body, function() { res.send({success: true}); }); } What we changed is the method of the Articles.js class. It is not add but update. The same technique is applied in the route to delete an article. We can see it as follows: // api/delete.js module.exports = function(req, res, next) { req.articles.remove(req.body.id, function() { res.send({success: true}); }); } What we need for deletion is not the whole body of the request, but only the unique ID of the record. Every API method sends {success: true} as a response. While we are dealing with API requests, we should always return a response, even if something goes wrong. The last thing in the Node.js part, which we have to cover, is the controller responsible for the user interface of the administration panel, that is, the ./controllers/admin.js file: module.exports = function(req, res, next) { if(req.logged) { res.render("admin", { app: "admin" }); } else { res.render("login", { app: "" }); } } There are two templates that are rendered: /views/admin.jade and /views/login.jade. Based on the variable, which we set in /index.js, the script decides which one to show. If the user is not logged in, then a login form is sent to the browser, as follows: extends layout block content .container header h1 Administration hr section.articles article form(method="post", action="/admin") span Username: br input(type="text", name="username") br span Password: br input(type="password", name="password") br br input(type="submit", value="login") There is no AngularJS code here. All we have is the good old HTML form, which submits its data via POST to the same URL—/admin. If the username and password are correct, the req.logged variable is set to true and the controller renders the other template: extends layout block content .container header h1 Administration hr a(href="/") Public span | a(href="#/") List span | a(href="#/add") Add section(ng-view) script(src='/admin.js') The control panel needs several views to handle all the operations. AngularJS has a great router module, which works with hash-type URLs, that is, URLs such as /admin#/add. The same module requires a placeholder for the different partials. In our case, this is a section tag. The ng-view attribute tells the framework that this is the element prepared for that logic. At the end of the template, we are adding an external file, which keeps the whole client-side JavaScript code that is needed by the control panel.
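Before moving on to the client side, a small backend addition worth considering (our own sketch, not part of the original article) is a logout route that destroys the session and returns the user to the login form rendered by the /admin controller:

// /index.js - ends the admin session; the route name is our choice.
app.all('/logout', function(req, res) {
    req.session.destroy(function() {
        res.redirect('/admin');
    });
});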
While the client-side part of the application only needs to load the articles, the control panel requires a lot more functionality. It is good to use the modular system of AngularJS. We need the routes and views to change, so the ngRoute module is needed as a dependency. This module is not included in the main angular.min.js build. It is placed in the angular-route.min.js file. The following code shows how our module starts: var admin = angular.module('admin', ['ngRoute']); admin.config(['$routeProvider', function($routeProvider) { $routeProvider .when('/', {}) .when('/add', {}) .when('/edit/:id', {}) .when('/delete/:id', {}) .otherwise({ redirectTo: '/' }); } ]); We configured the router by mapping URLs to specific routes. At the moment, the routes are just empty objects, but we will fix that shortly. Every controller will need to make HTTP requests to the Node.js part of the application. It will be nice if we have one such service that we use all over our code. We can see an example as follows: admin.factory('API', function($http) { var request = function(method, url) { return function(callback, data) { $http({ method: method, url: url, data: data }) .success(callback) .error(function(data, status, headers, config) { console.error("Error requesting '" + url + "'."); }); } } return { get: request('GET', '/api/get'), add: request('POST', '/api/add'), edit: request('POST', '/api/edit'), remove: request('POST', '/api/delete') } }); One of the best things about AngularJS is that it works with plain JavaScript objects. There are no unnecessary abstractions and no extending or inheriting special classes. We are using the .factory method to create a simple JavaScript object. It has four methods that can be called: get, add, edit, and remove. Each one of them calls a function, which is defined in the helper method request. The service has only one dependency, $http. We already know this module; it handles HTTP requests nicely. The URLs that we are going to query are the same ones that we defined in the Node.js part. Now, let's create a controller that will show the articles currently stored in the database. First, we should replace the empty route object .when('/', {}) with the following object: .when('/', { controller: 'ListCtrl', template: ' <article ng-repeat="article in articles"> <hr /> <strong>{{article.title}}</strong><br /> (<a href="#/edit/{{article.id}}">edit</a>) (<a href="#/delete/{{article.id}}">remove</a>) </article> ' }) The object has to contain a controller and a template. The template is nothing more than a few lines of HTML markup. It looks a bit like the template used to show the articles on the client side. The difference is the links used to edit and delete. JavaScript string literals can't span multiple lines; in the original multi-line listing, backslashes at the end of each line of the template prevent the syntax errors that the browser would otherwise throw. The following is the code for the controller. It is defined, again, in the module: admin.controller('ListCtrl', function($scope, API) { API.get(function(articles) { $scope.articles = articles; }); }); And here is the beauty of the AngularJS dependency injection. Our custom-defined service API is automatically initialized and passed to the controller. The .get method fetches the articles from the database. Later, we send the information to the current $scope dependency and the two-way data binding does the rest. The articles are shown on the page.
The work with AngularJS is so easy that we could combine the controllers to add and edit in one place. Let's store the route object in an external variable, as follows: var AddEditRoute = { controller: 'AddEditCtrl', template: ' <hr /> <article> <form> <span>Title</span><br /> <input type="text" ng-model="article.title"/><br /> <span>Text</span><br /> <textarea rows="7" ng-model="article.text"></textarea> <br /><br /> <button ng-click="save()">save</button> </form> </article> ' }; And later, assign it to both of the routes, as follows: .when('/add', AddEditRoute) .when('/edit/:id', AddEditRoute) The template is just a form with the necessary fields and a button, which calls the save method in the controller. Notice that we bound the input field and the text area to variables inside the $scope dependency. This comes in handy because we don't need to access the DOM to get the values. We can see this as follows: admin.controller( 'AddEditCtrl', function($scope, API, $location, $routeParams) { var editMode = $routeParams.id ? true : false; if (editMode) { API.get(function(articles) { articles.forEach(function(article) { if (article.id == $routeParams.id) { $scope.article = article; } }); }); } $scope.save = function() { API[editMode ? 'edit' : 'add'](function() { $location.path('/'); }, $scope.article); } }) The controller receives four dependencies. We already know about $scope and API. The $location dependency is used when we want to change the current route, or, in other words, to forward the user to another view. The $routeParams dependency is needed to fetch parameters from the URL. In our case, /edit/:id is a route with a variable inside. Inside the code, the id is available in $routeParams.id. The adding and editing of articles uses the same form. So, with a simple check, we know what the user is currently doing. If the user is in the edit mode, then we fetch the article based on the provided id and fill the form. Otherwise, the fields are empty and new records will be created. The deletion of an article can be done by using a similar approach, which is adding a route object and defining a new controller. We can see the deletion as follows: .when('/delete/:id', { controller: 'RemoveCtrl', template: ' ' }) We don't need a template in this case. Once the article is deleted from the database, we will forward the user to the list page. We have to call the remove method of the API. Here is how the RemoveCtrl controller looks: admin.controller( 'RemoveCtrl', function($scope, $location, $routeParams, API) { API.remove(function() { $location.path('/'); }, $routeParams); } ); The preceding code uses the same dependencies as the previous controller. This time, we simply forward the $routeParams dependency to the API. And because it is a plain JavaScript object, everything works as expected.
Summary In this article, we built a simple blog by writing the backend of the application in Node.js. The module for database communication that we wrote can work with either the MongoDB or the MySQL database and store articles. The client-side part and the control panel of the blog were developed with AngularJS. We then defined a custom service using the built-in HTTP and routing mechanisms. Node.js works well with AngularJS, mainly because both work with JavaScript. We found out that AngularJS is built to support the developer. It removes all those boring tasks such as DOM element referencing, attaching event listeners, and so on. It's a great choice for the modern client-side coding stack.
You can refer to the following books to learn more about Node.js: Node.js Essentials Learning Node.js for Mobile Application Development Node.js Design Patterns Resources for Article: Further resources on this subject: Node.js Fundamentals [Article] AngularJS Project [Article] Working with Live Data and AngularJS [Article]


Component Composition

Packt
22 Feb 2016
38 min read
In this article, we understand how large-scale JavaScript applications amount to a series of communicating components. Composition is a big topic, and one that's relevant to scalable JavaScript code. When we start thinking about the composition of our components, we start to notice certain flaws in our design; limitations that prevent us from scaling in response to influencers. The composition of a component isn't random—there's a handful of prevalent patterns for JavaScript components. We'll begin the article with a look at some of these generic component types that encapsulate common patterns found in every web application. Understanding that components implement patterns is crucial for extending these generic components in a way that scales. It's one thing to get our component composition right from a purely technical standpoint, it's another to easily map these components to features. The same challenge holds true for components we've already implemented. The way we compose our code needs to provide a level of transparency, so that it's feasible to decompose our components and understand what they're doing, both at runtime and at design time. Finally, we'll take a look at the idea of decoupling business logic from our components. This is nothing new, the idea of separation-of-concerns has been around for a long time. The challenge with JavaScript applications is that it touches so many things—it's difficult to clearly separate business logic from other implementation concerns. The way in which we organize our source code (relative to the components that use them) can have a dramatic effect on our ability to scale.
Generic component types It's exceedingly unlikely that anyone, in this day and age, would set out to build a large scale JavaScript application without the help of libraries, a framework, or both. Let's refer to these collectively as tools, since we're more interested in using the tools that help us scale, and not necessarily which tools are better than other tools. At the end of the day, it's up to the development team to decide which tool is best for the application we're building, personal preferences aside. Guiding factors in choosing the tools we use are the type of components they provide, and what these are capable of. For example, a larger web framework may have all the generic components we need. On the other hand, a functional programming utility library might provide a lot of the low-level functionality we need. How these things are composed into a cohesive feature that scales, is for us to figure out. The idea is to find tools that expose generic implementations of the components we need. Often, we'll extend these components, building specific functionality that's unique to our application. This section walks through the most typical components we'd want in a large-scale JavaScript application.
Modules Modules exist, in one form or another, in almost every programming language. Except in JavaScript. That's almost untrue though—ECMAScript 6, in its final draft status at the time of this writing, introduces the notion of modules. However, there're tools out there today that allow us to modularize our code, without relying on the script tag. Large-scale JavaScript code is still a relatively new thing. Things like the script tag weren't meant to address issues like modular code and dependency management. RequireJS is probably the most popular module loader and dependency resolver.
The fact that we need a library just to load modules into our front-end application speaks of the complexities involved. For example, module dependencies aren't a trivial matter when there's network latency and race conditions to consider. Another option is to use a tool like Browserify. This approach is gaining traction because it lets us declare our modules using the CommonJS format. This format is used by NodeJS, and the upcoming ECMAScript module specification is a lot closer to CommonJS than to AMD. The advantage is that the code we write today has better compatibility with back-end JavaScript code, and with the future. Some frameworks, like Angular or Marionette, have their own ideas of what modules are, albeit more abstract ones. These modules are more about organizing our code than they are about tactfully delivering code from the server to the browser. These types of modules might even map better to other features of the framework. For example, if there's a centralized application instance that's used to manage our modules, the framework might provide a means to manage modules from the application. Take a look at the following diagram: A global application component using modules as its building blocks. Modules can be small, containing only one feature, or large, containing several features. This lets us perform higher-level tasks at the module level (things like disabling modules or configuring them with arguments). Essentially, modules speak for features. They're a packaging mechanism that allows us to encapsulate things about a given feature that the rest of the application doesn't care about. Modules help us scale our application by adding high-level operations to our features, by treating our features as the building blocks. Without modules, we'd have no meaningful way to do this. The composition of modules looks different depending on the mechanism used to declare the module. A module could be straightforward, providing a namespace from which objects can be exported. Or if we're using a specific framework module flavor, there could be much more to it. Like automatic event life cycles, or methods for performing boilerplate setup tasks. However we slice it, modules in the context of scalable JavaScript are a means to create larger building blocks, and a means to handle complex dependencies: // main.js // Imports a log() function from the util.js module. import log from 'util.js'; log('Initializing...'); // util.js // Exports a basic console.log() wrapper function. 'use strict'; export default function log(message) { if (console) { console.log(message); } } While it's easier to build large-scale applications with module-sized building blocks, it's also easier to tear a module out of an application and work with it in isolation. If our application is monolithic or our modules are too plentiful and fine-grained, it's very difficult for us to excise problem-spots from our code, or to test work in progress. Our component may function perfectly well on its own. It could have negative side-effects somewhere else in the system, however. If we can remove pieces of the puzzle, one at a time and without too much effort, we can scale the trouble-shooting process.
Routers Any large-scale JavaScript application has a significant number of possible URIs. The URI is the address of the page that the user is looking at. They can navigate to this resource by clicking on links, or they may be taken to a new URI automatically by our code, perhaps in response to some user action.
The web has always relied on URIs, long before the advent of large-scale JavaScript applications. URIs point to resources, and resources can be just about anything. The larger the application, the more resources, and the more potential URIs. Router components are tools we use in the front-end to listen for these URI change events and respond to them accordingly. There's less reliance on the back-end web servers parsing the URI and returning the new content. Most web sites still do this, but there're several disadvantages with this approach when it comes to building applications: The browser triggers events when the URI changes, and the router component responds to these changes. The URI changes can be triggered from the history API, or from location.hash. The main problem is that we want the UI to be portable, as in, we want to be able to deploy it against any back-end and things should work. Since we're not assembling markup for the URI in the back-end, it doesn't make sense to parse the URI in the back-end either. We declaratively specify all the URI patterns in our router components. We generally refer to these as routes. Think of a route as a blueprint, and a URI as an instance of that blueprint. This means that when the router receives a URI, it can correlate it to a route. That, in essence, is the responsibility of router components. Which is easy with smaller applications, but when we're talking about scale, further deliberation on router design is in order. As a starting point, we have to consider the URI mechanism we want to use. The two choices are basically listening to hash change events, or utilizing the history API. Using hash-bang URIs is probably the simplest approach. The history API available in every modern browser, on the other hand, lets us format URIs without the hash-bang—they look like real URIs. The router component in the framework we're using may support only one or the other, thus simplifying the decision. Some support both URI approaches, in which case we need to decide which one works best for our application. The next thing to consider about routing in our architecture is how to react to route changes. There're generally two approaches to this. The first is to declaratively bind a route to a callback function. This is ideal when the router doesn't have a lot of routes. The second approach is to trigger events when routes are activated. This means that there's nothing directly bound to the router. Instead, some other component listens for such an event. This approach is beneficial when there are lots of routes, because the router has no knowledge of the components, just the routes. Here's an example that shows a router component listening to route events: // router.js import Events from 'events.js' // A router is a type of event broker, it // can trigger routes, and listen to route // changes. export default class Router extends Events { // If a route configuration object is passed, // then we iterate over it, calling listen() // on each route name. This is translating from // route specs to event listeners. constructor(routes) { super(); if (routes != null) { for (let key of Object.keys(routes)) { this.listen(key, routes[key]); } } } // This is called when the caller is ready to start // responding to route events. We listen to the // "onhashchange" window event. We manually call // our handler here to process the current route.
start() { window.addEventListener('hashchange', this.onHashChange.bind(this)); this.onHashChange(); } // When there's a route change, we translate this into // a triggered event. Remember, this router is also an // event broker. The event name is the current URI. onHashChange() { this.trigger(location.hash, location.hash); } }; // Creates a router instance, and uses two different // approaches to listening to routes. // // The first is by passing configuration to the Router. // The key is the actual route, and the value is the // callback function. // // The second uses the listen() method of the router, // where the event name is the actual route, and the // callback function is called when the route is activated. // // Nothing is triggered until the start() method is called, // which gives us an opportunity to set everything up. For // example, the callback functions that respond to routes // might require something to be configured before they can // run. import Router from 'router.js' function logRoute(route) { console.log(`${route} activated`); } var router = new Router({ '#route1': logRoute }); router.listen('#route2', logRoute); router.start(); Some of the code required to run these examples is omitted from the listings. For example, the events.js module is included in the code bundle that comes with this book; it's just not that relevant to the example. Also, in the interest of space, the code examples avoid using specific frameworks and libraries. In practice, we're not going to write our own router or events API—our frameworks do that already. We're instead using vanilla ES6 JavaScript to illustrate points pertinent to scaling our applications. Another architectural consideration we'll want to make when it comes to routing is whether we want a global, monolithic router, or a router per module, or some other component. The downside to having a monolithic router is that it becomes difficult to scale when it grows sufficiently large, as we keep adding features and routes. The advantage is that the routes are all declared in one place. Monolithic routers can still trigger events that all our components can listen to. The per-module approach to routing involves multiple router instances. For example, if our application has five components, each would have their own router. The advantage here is that the module is completely self-contained. Anyone working with this module doesn't need to look elsewhere to figure out which routes it responds to. Using this approach, we can also have a tighter coupling between the route definitions and the functions that respond to them, which could mean simpler code. The downside to this approach is that we lose the consolidated aspect of having all our routes declared in a central place. Take a look at the following diagram: The router to the left is global—all modules use the same instance to respond to URI events. The modules to the right have their own routers. These instances contain configuration specific to the module, not the entire application. Depending on the capabilities of the framework we're using, the router components may or may not support multiple router instances. It may only be possible to have one callback function per route. There may be subtle nuances to the router events we're not yet aware of.
Models/Collections The API our application interacts with exposes entities. Once these entities have been transferred to the browser, we will store a model of those entities.
Collections are a bunch of related entities, usually of the same type. The tools we're using may or may not provide a generic model and/or collection components, or they may have something similar but named differently. The goal of modeling API data is a rough approximation of the API entity. This could be as simple as storing models as plain JavaScript objects and collections as arrays. The challenge with simply storing our API entities as plain objects in arrays is that some other component is then responsible for talking to the API, triggering events when the data changes, and for performing data transformations. We want other components to be able to transform collections and models where needed, in order to fulfill their duties. But we don't want repetitive code, and it's best if we're able to encapsulate the common things like transformations, API calls, and event life cycles. Take a look at the next diagram: Models encapsulate interaction with APIs, parsing data, and triggering events when data changes. This leads to simpler code outside of the models. Hiding the details of how the API data is loaded into the browser, or how we issue commands, helps us scale our application as we grow. As we add more entities to the API, the complexity of our code grows too. We can throttle this complexity by constraining the API interactions to our model and collection components. Another scalability issue we'll face with our models and collections is where they fit in the big picture. That is, our application is really just one big component, composed of smaller components. Our models and collections map well to our API, but not necessarily to features. API entities are more generic than specific features, and are often used by several features. Which leaves us with an open question—where do our models and collections fit into components? Here's an example that shows specific views extending generic views. The same model can be passed to both: // A super simple model class. class Model { constructor(first, last, age) { this.first = first; this.last = last; this.age = age; } } // The base view, with a name method that // generates some output. class BaseView { name() { return `${this.model.first} ${this.model.last}`; } } // Extends BaseView with a constructor that accepts // a model and stores a reference to it. class GenericModelView extends BaseView { constructor(model) { super(); this.model = model; } } // Extends GenericModelView with specific constructor // arguments. class SpecificModelView extends BaseView { constructor(first, last, age) { super(); this.model = new Model(...arguments); } } var properties = [ 'Terri', 'Hodges', 41 ]; // Make sure the data is the same in both views. // The name() method should return the same result... console.log('generic view', new GenericModelView(new Model(...properties)).name()); console.log('specific view', new SpecificModelView(...properties).name()); On one hand, components can be completely generic with regard to the models and collections they use. On the other hand, some components are specific with their requirements—they can directly instantiate their collections. Configuring generic components with specific models and collections at runtime only benefits us when the component truly is generic, and is used in several places. Otherwise, we might as well encapsulate the models within the components that use them. Choosing the right approach helps us scale. Because, not all our components will be entirely generic or entirely specific.
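To make the encapsulation point concrete, here is a sketch of our own showing a generic collection that hides API interaction behind events, reusing the events.js module from the earlier listings; the endpoint URL and the use of the browser's fetch() API are illustrative assumptions:

import Events from 'events.js';

// A generic collection: components listen for "sync" instead of
// talking to the API themselves.
export default class Collection extends Events {
    constructor(url) {
        super();
        this.url = url;
        this.models = [];
    }

    // Loads entities from the API, stores them, and notifies listeners.
    fetch() {
        return fetch(this.url)
            .then((response) => response.json())
            .then((models) => {
                this.models = models;
                this.trigger('sync', models);
            });
    }
}

// Usage: new Collection('/api/users').fetch();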
Controllers/Views

Depending on the framework we're using, and the design patterns our team is following, controllers and views can represent different things. There are simply too many MV* pattern and style variations to provide a meaningful distinction in terms of scale. The minute differences have trade-offs relative to similar but different MV* approaches. For our purpose of discussing large scale JavaScript code, we'll treat them as the same type of component. If we decide to separate the two concepts in our implementation, the ideas in this section will be relevant to both types.

Let's stick with the term views for now, knowing that we're covering both views and controllers, conceptually. These components interact with several other component types, including routers, models or collections, and templates, which are discussed in the next section. When something happens, the user needs to be notified about it. The view's job is to update the DOM. This could be as simple as changing an attribute on a DOM element, or as involved as rendering a new template:

A view component updating the DOM in response to router and model events.

A view can update the DOM in response to several types of events. A route could have changed. A model could have been updated. Or something more direct, like a method call on the view component. Updating the DOM is not as straightforward as one might think. There's the performance to think about—what happens when our view is flooded with events? There's the latency to think about—how long will this JavaScript call stack run, before stopping and actually allowing the DOM to render?

Another responsibility of our views is responding to DOM events. These are usually triggered by the user interacting with our UI. The interaction may start and end with our view. For example, depending on the state of something like user input or one of our models, we might update the DOM with a message. Or we might do nothing, if the event handler is debounced, for instance.

A debounced function groups multiple calls into one. For example, calling foo() 20 times in 10 milliseconds may only result in the implementation of foo() being called once. For a more detailed explanation, look at: http://drupalmotion.com/article/debounce-and-throttle-visual-explanation.

Most of the time, the DOM events get translated into something else, either a method call or another event. For example, we might call a method on a model, or transform a collection. The end result, most of the time, is that we provide feedback by updating the DOM. This can be done either directly, or indirectly. In the case of direct DOM updates, it's simple to scale. In the case of indirect updates, or updates through side-effects, scaling becomes more of a challenge. This is because as the application acquires more moving parts, it becomes more difficult to form a mental map of cause and effect.

Here's an example that shows a view listening to DOM events and model events:

import Events from 'events.js';

// A basic model. It extends Events so it
// can listen to events triggered by other components.
class Model extends Events {
  constructor(enabled) {
    super();
    this.enabled = !!enabled;
  }

  // Setters and getters for the "enabled" property.
  // Setting it also triggers an event. So other components
  // can listen to the "enabled" event.
  set enabled(enabled) {
    this._enabled = enabled;
    this.trigger('enabled', enabled);
  }

  get enabled() {
    return this._enabled;
  }
}

// A view component that takes a model and a DOM element
// as arguments.
class View {
  constructor(element, model) {
    // When the model triggers the "enabled" event,
    // we adjust the DOM. Note that we set the property,
    // not the attribute; setting the "disabled" attribute
    // to false would still disable the element.
    model.listen('enabled', (enabled) => {
      element.disabled = !enabled;
    });

    // Set the state of the model when the element is
    // clicked. This will trigger the listener above.
    element.addEventListener('click', () => {
      model.enabled = false;
    });
  }
}

new View(document.getElementById('set'), new Model());

On the plus side to all this complexity, we actually get some reusable code. The view is agnostic as to how the model or router it's listening to is updated. All it cares about is specific events on specific components. This is actually helpful to us because it reduces the amount of special-case handling we need to implement.

The DOM structure that's generated at runtime, as a result of rendering all our views, needs to be taken into consideration as well. For example, if we look at some of the top-level DOM nodes, they have nested structure within them. It's these top-level nodes that form the skeleton of our layout. Perhaps this is rendered by the main application view, and each of our views has a child-relationship to it. Or perhaps the hierarchy extends further down than that. The tools we're using most likely have mechanisms for dealing with these parent-child relationships. However, bear in mind that vast view hierarchies are difficult to scale.

Templates

Template engines used to reside mostly in the back-end framework. That's less true today, thanks in large part to the sophisticated template rendering libraries available in the front-end. With large-scale JavaScript applications, we rarely talk to back-end services about UI-specific things. We don't say, "here's a URL, render the HTML for me". The trend is to give our JavaScript components a certain level of autonomy—letting them render their own markup.

Having component markup coupled with the components that render them is a good thing. It means that we can easily discern where the markup in the DOM is being generated. We can then diagnose issues and tweak the design of a large scale application.

Templates help establish a separation of concerns with each of our components. The markup that's rendered in the browser mostly comes from the template. This keeps markup-specific code out of our JavaScript. Front-end template engines aren't just tools for string replacement; they often have other tools to help reduce the amount of boilerplate JavaScript code to write. For example, we can embed things like conditionals and for-each loops in our markup, where they're suited.

Application-specific components

The component types we've discussed so far are very useful for implementing scalable JavaScript code, but they're also very generic. Inevitably, during implementation we're going to hit a road block—the component composition patterns we've been following will not work for certain features. This is when it's time to step back and think about possibly adding a new type of component to our architecture.

For example, consider the idea of widgets. These are generic components that are mainly focused on presentation and user interactions. Let's say that many of our views are using the exact same DOM elements, and the exact same event handlers. There's no point in repeating them in every view throughout our application.
Might it be better if we were to factor it into a common component? A view might be overkill; perhaps we need a new type of widget component?

Sometimes we'll create components for the sole purpose of composition. For example, we might have a component that glues together router, view, model/collection, and template components to form a cohesive unit. Modules partially solve this problem, but they aren't always enough. Sometimes we're missing that added bit of orchestration that our components need in order to communicate.

Extending generic components

We often discover, late in the development process, that the components we rely on are lacking something we need. If the base component we're using is designed well, then we can extend it, plugging in the new properties or functionality we need. In this section, we'll walk through some scenarios where we might need to extend the common generic components used throughout our application. If we're going to scale our code, we need to leverage these base components where we can. We'll probably want to start extending our own base components at some point too. Some tools are better than others at facilitating the extension mechanism through which we implement this specialized behavior.

Identifying common data and functionality

Before we look at extending the specific component types, it's worthwhile to consider the properties and functionality that are common across all component types. Some of these things will be obvious up-front, while others are less pronounced. Our ability to scale depends, in part, on our ability to identify commonality across our components.

If we have a global application instance, quite common in large JavaScript applications, global values and functionality can live there. This can grow unruly down the line though, as more common things are discovered. Another approach might be to have several global modules, as shown in the following diagram, instead of just a single application instance. Or both. But this doesn't scale from an understandability perspective:

The ideal component hierarchy doesn't extend beyond three levels. The top level is usually found in a framework our application depends on.

As a rule-of-thumb, we should, for any given component, avoid extending it more than three levels down. For example, a generic view component from the tools we're using could be extended by our generic version of it. This would include properties and functionality that every view instance in our application requires. This is only a two-level hierarchy, and easy to manage. This means that if any given component needs to extend our generic view, it can do so without complicating things. Three levels should be the maximum extension hierarchy depth for any given type. This is just enough to avoid unnecessary global data; going beyond this presents scaling issues because the hierarchy isn't easily grasped.

Extending router components

Our application may only require a single router instance. Even in this case, we may still need to override certain extension points of the generic router. In the case of multiple router instances, there are bound to be common properties and functionality that we don't want to repeat. For example, if every route in our application follows the same pattern, with only subtle differences, we can implement the tools in our base router to avoid repetitious code. In addition to declaring routes, events take place when a given route is activated.
Depending on the architecture of our application, different things need to happen. Maybe certain things always need to happen, no matter which route has been activated. This is where extending the router to provide our own functionality comes in handy. For example, suppose we have to validate permissions for a given route. It wouldn't make much sense for us to handle this through individual components, as this would not scale well with complex access control rules and a lot of routes.

Extending models/collections

Our models and collections, no matter what their specific implementation looks like, will share some common properties with one another. Especially if they're targeting the same API, which is the common case. The specifics of a given model or collection revolve around the API endpoint, the data returned, and the possible actions taken. It's likely that we'll target the same base API path for all entities, and that all entities have a handful of shared properties. Rather than repeat ourselves in every model or collection instance, it's better to abstract the common data.

In addition to sharing properties among our models and collections, we can share common behavior. For instance, it's quite likely that a given model isn't going to have sufficient data for a given feature. Perhaps that data can be derived by transforming the model. These types of transformations can be common, and abstracted in a base model or collection. It really depends on the types of features we're implementing and how consistent they are with one another. If we're growing fast and getting lots of requests for "outside-the-box" features, then we're more likely to implement data transformations inside the views that require these one-off changes to the models or collections they're using.

Most frameworks take care of the nuances of performing XHR requests to fetch our data or perform actions. Unfortunately, that's not the whole story, because our features will rarely map one-to-one with a single API entity. More likely, we will have a feature that requires several collections that are related to one another somehow, and a transformed collection. This type of operation can grow complex quickly, because we have to work with multiple XHR requests. We'll likely use promises to synchronize the fetching of these requests, and then perform the data transformation once we have all the necessary sources.

Here's an example that shows a specific model extending a generic model, to provide new fetching behavior:

// The base fetch() implementation of a model, sets
// some property values, and resolves the promise.
class BaseModel {
  fetch() {
    return new Promise((resolve, reject) => {
      this.id = 1;
      this.name = 'foo';
      resolve(this);
    });
  }
}

// Extends BaseModel with a specific implementation
// of fetch().
class SpecificModel extends BaseModel {
  // Overrides the base fetch() method. Returns
  // a promise which combines the original
  // implementation and the result of calling fetchSettings().
  fetch() {
    return Promise.all([
      super.fetch(),
      this.fetchSettings()
    ]);
  }

  // Returns a new Promise instance. Also sets a new
  // model property.
  fetchSettings() {
    return new Promise((resolve, reject) => {
      this.enabled = true;
      resolve(this);
    });
  }
}

// Make sure the properties are all in place, as expected,
// after the fetch() call completes.
new SpecificModel().fetch().then((result) => {
  var [ model ] = result;

  console.assert(model.id === 1, 'id');
  console.assert(model.name === 'foo', 'name');
  console.assert(model.enabled, 'enabled');
  console.log('fetched');
});

Extending controllers/views

When we have a base model or base collection, there are often properties shared between our controllers or views. That's because the job of a controller or a view is to render model or collection data. For example, if the same view is rendering the same model properties over and over, we can probably move that bit to a base view, and extend from that. Perhaps the repetitive parts are in the templates themselves. This means that we might want to consider having a base template inside a base view, as shown in the following diagram. Views that extend this base view inherit this base template. Depending on the library or framework at our disposal, extending templates like this may not be feasible. Or the nature of our features may make this difficult to achieve. For example, there might not be a common base template, but there might be a lot of smaller views and templates that can plug into larger components:

A view that extends a base view can populate the template of the base view, as well as inherit other base view functionalities.

Our views also need to respond to user interactions. They may respond directly, or forward the events up the component hierarchy. In either case, if our features are at all consistent, there will be some common DOM event handling that we'll want to abstract into a common base view. This is a huge help in scaling our application, because as we add more features, the amount of new DOM event handling code is minimized.

Mapping features to components

Now that we have a handle on the most common JavaScript components, and the ways we'll want to extend them for use in our application, it's time to think about how to glue those components together. A router on its own isn't very useful. Nor is a standalone model, template, or controller. Instead, we want these things to work together, to form a cohesive unit that realizes a feature in our application. To do that, we have to map our features to components. We can't do this haphazardly either—we need to think about what's generic about our feature, and about what makes it unique. These feature properties will guide our design decisions on producing something that scales.

Generic features

Perhaps the most important aspects of component composition are consistency and reusability. While considering the scaling influences our application faces, we'll come up with a list of traits that all our components must carry: things like user management, access control, and other traits unique to our application. These traits, along with the other architectural perspectives (explored in more depth throughout the remainder of the book), form the core of our generic features:

A generic component, composed of other generic components from our framework.

The generic aspects of every feature in our application serve as a blueprint. They inform us in composing larger building blocks. These generic features account for the architectural factors that help us scale. And if we can encode these factors as parts of an aggregate component, we'll have an easier time scaling our application. What makes this design task challenging is that we have to look at these generic components not only from a scalable architecture perspective, but also from a feature-complete perspective. A rough sketch of such an aggregate component appears below.
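This sketch assumes components shaped like the earlier listings in this article: a router with a listen() method and a model that triggers a "sync" event when fetched. The UserCollection and UserListView names, and the #users route, are invented for the example.

// A blueprint that glues the generic pieces together.
// Every feature built from it gets the same routing,
// data, and rendering wiring for free.
class GenericFeature {
  constructor({ route, router, model, view }) {
    this.model = model;
    this.view = view;

    // When the feature's route activates, fetch the data.
    router.listen(route, () => this.model.fetch());

    // When the data changes, re-render the view.
    this.model.listen('sync', () => this.view.render());
  }
}

// A specific feature only supplies its unique parts;
// router, UserCollection, and UserListView are assumed
// to exist, as in the earlier sketches.
var usersFeature = new GenericFeature({
  route: '#users',
  router: router,
  model: new UserCollection(),
  view: new UserListView()
});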
We'd like to think that if every feature behaves the same way, we'd be all set. If only every feature followed an identical pattern, the sky would be the limit when it comes time to scale. But 100% consistent feature functionality is an illusion, more visible to JavaScript programmers than to users. The pattern breaks out of necessity. It's responding to this breakage in a scalable way that matters. This is why successful JavaScript applications continuously revisit the generic aspects of their features to ensure they reflect reality.

Specific features

When it's time to implement something that doesn't fit the pattern, we're faced with a scaling challenge. We have to pivot, and consider the consequences of introducing such a feature into our architecture. When patterns are broken, our architecture needs to change. This isn't a bad thing—it's a necessity. The limiting factor in our ability to scale in response to these new features lies with the generic aspects of our existing features. This means that we can't be too rigid with our generic feature components. If we're too demanding, we're setting ourselves up for failure.

Before making any brash architectural decisions stemming from offbeat features, think about the specific scaling consequences. For example, does it really matter that the new feature uses a different layout and requires a template that's different from all other feature components? The state of the JavaScript scaling art revolves around finding the handful of essential blueprints to follow for our component composition. Everything else is up for discussion on how to proceed.

Decomposing components

Component composition is an activity that creates order: larger behavior out of smaller parts. We often need to move in the opposite direction during development. Even after development, we can learn how a component works by tearing the code apart and watching it run in different contexts. Component decomposition means that we're able to take the system apart and examine individual parts in a somewhat structured approach.

Maintaining and debugging components

Over the course of application development, our components accumulate abstractions. We do this to support a feature's requirement better, while simultaneously supporting some architectural property that helps us scale. The problem is that as the abstractions accumulate, we lose transparency into the functioning of our components. This transparency is not only essential for diagnosing and fixing issues, but also determines how easy the code is to learn. For example, if there's a lot of indirection, it takes longer for a programmer to trace cause to effect. Time wasted on tracing code reduces our ability to scale from a developmental point of view.

We're faced with two opposing problems. First, we need abstractions to address real world feature requirements and architectural constraints. Second is our inability to master our own code due to a lack of transparency. Following is an example that shows a renderer component and a feature component. Renderers used by the feature are easily substitutable:

// A Renderer instance takes a renderer function
// as an argument. The render() method returns the
// result of calling the function.
class Renderer {
  constructor(renderer) {
    this.renderer = renderer;
  }

  render() {
    return this.renderer ? this.renderer(this) : '';
  }
}

// A feature defines an output pattern. It accepts
// header, content, and footer arguments. These are
// Renderer instances.
class Feature {
  constructor(header, content, footer) {
    this.header = header;
    this.content = content;
    this.footer = footer;
  }

  // Renders the sections of the view. Each section
  // either has a renderer, or it doesn't. Either way,
  // content is returned.
  render() {
    var header = this.header ? `${this.header.render()}\n` : '',
        content = this.content ? `${this.content.render()}\n` : '',
        footer = this.footer ? this.footer.render() : '';

    return `${header}${content}${footer}`;
  }
}

// Constructs a new feature with renderers for three sections.
var feature = new Feature(
  new Renderer(() => { return 'Header'; }),
  new Renderer(() => { return 'Content'; }),
  new Renderer(() => { return 'Footer'; })
);

console.log(feature.render());

// Remove the header section completely, replace the footer
// section with a new renderer, and check the result.
delete feature.header;
feature.footer = new Renderer(() => { return 'Test Footer'; });

console.log(feature.render());

A tactic that can help us cope with these two opposing scaling influencers is substitutability; in particular, the ease with which one of our components, or sub-components, can be replaced with something else. This should be really easy to do. So before we go introducing layers of abstraction, we need to consider how easy it's going to be to replace a complex component with a simple one. This can help programmers learn the code, and also help with debugging.

For example, if we're able to take a complex component out of the system and replace it with a dummy component, we can simplify the debugging process. If the error goes away after the component is replaced, we have found the problematic component. Otherwise, we can rule out a component and keep digging elsewhere.

Re-factoring complex components

It's of course easier said than done to implement substitutability with our components, especially in the face of deadlines. Once it becomes impractical to easily replace components with others, it's time to consider re-factoring our code. Or at least the parts that make substitutability infeasible. It's a balancing act, getting the right level of encapsulation, and the right level of transparency.

Substitution can also be helpful at a more granular level. For example, let's say a view method is long and complex. If there are several stages during the execution of that method where we would like to run something custom, we can't. It's better to re-factor the single method into a handful of methods, each of which can be overridden.

Pluggable business logic

Not all of our business logic needs to live inside our components, encapsulated from the outside world. Instead, it would be ideal if we could write our business logic as a set of functions. In theory, this provides us with a clear separation of concerns. The components are there to deal with the specific architectural concerns that help us scale, and the business logic can be plugged into any component. In practice, excising business logic from components isn't trivial.

Extending versus configuring

There are two approaches we can take when it comes to building our components. As a starting point, we have the tools provided by our libraries and frameworks. From there, we can keep extending these tools, getting more specific as we drill deeper and deeper into our features. Alternatively, we can provide our component instances with configuration values. These instruct the component on how to behave; the sketch below contrasts the two approaches.
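In this minimal sketch, the same trivial list view is built both ways. The class names and the sorted option are invented for the example and don't come from any particular framework:

// Approach 1: extend. The specific behavior is baked into
// a subclass, so callers don't configure anything.
class ListView {
  constructor(items) {
    this.items = items;
  }
  render() {
    return this.items.join(', ');
  }
}

class SortedListView extends ListView {
  render() {
    // Sort a copy so the original items are untouched.
    return [...this.items].sort().join(', ');
  }
}

// Approach 2: configure. One generic class, and the caller
// passes options that select the behavior.
class ConfigurableListView {
  constructor(items, options = {}) {
    this.items = items;
    this.options = options;
  }
  render() {
    var items = this.options.sorted
      ? [...this.items].sort()
      : this.items;
    return items.join(', ');
  }
}

var names = [ 'Hodges', 'Adams' ];

// Simpler call site, deeper hierarchy...
console.log(new SortedListView(names).render());

// ...versus a flatter hierarchy, busier call site.
console.log(new ConfigurableListView(names, { sorted: true }).render());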
The advantage of extending things that would otherwise need to be configured is that the caller doesn't need to worry about them. And if we can get by using this approach, all the better, because it leads to simpler code, especially the code that's using the component. On the other hand, we could have generic feature components that can be used for a specific purpose, if only they support this configuration or that configuration option. This approach has the advantage of simpler component hierarchies, and fewer components overall.

Sometimes it's better to keep components as generic as possible, within the realm of understandability. That way, when we need a generic component for a specific feature, we can use it without having to re-define our hierarchy. Of course, there's more complexity involved for the caller of that component, because they need to supply it with the configuration values. It's a trade-off that's up to us, the JavaScript architects of our application. Do we want to encapsulate everything, configure everything, or do we want to strike a balance between the two?

Stateless business logic

In functional programming, pure functions don't have side effects. In some languages, this property is enforced; in JavaScript, it isn't. However, we can still implement side-effect-free functions in JavaScript. If a function takes arguments, and always returns the same output based on those arguments, then the function can be said to be stateless. It doesn't depend on the state of a component, and it doesn't change the state of a component. It just computes a value.

If we can establish a library of business logic that's implemented this way, we can design some super flexible components. Rather than implement this logic directly in a component, we pass the behavior into the component. That way, different components can utilize the same stateless business logic functions. The tricky part is finding the right functions that can be implemented this way. It's not a good idea to implement these up-front. Instead, as the iterations of our application development progress, we can use this strategy to re-factor code into generic stateless functions that are shared by any component capable of using them. This leads to business logic that's implemented in a focused way, and components that are small, generic, and reusable in a variety of contexts.

Organizing component code

In addition to composing our components in such a way that helps our application scale, we need to consider the structure of our source code modules too. When we first start off with a given project, our source code files tend to map well to what's running in the client's browser. Over time, as we accumulate more features and components, earlier decisions on how to organize our source tree can dilute this strong mapping. When tracing runtime behavior to our source code, the less mental effort involved, the better. We can scale to more stable features this way because our efforts are focused more on the design problems of the day—the things that directly provide customer value:

The diagram shows the mapping of component parts to their implementation artifacts.

There's another dimension to code organization in the context of our architecture, and that's our ability to isolate specific code. We should treat our code just like our runtime components, which are self-sustained units that we can turn on or off. That is, we should be able to find all the source code files required for a given component, without having to hunt them down.
If a component requires, say, 10 source code files—JavaScript, HTML, and CSS—then ideally these should all be found in the same directory. The exception, of course, is generic base functionality that's shared by all components. These files should be as close to the surface as possible. Then it's easy to trace our component dependencies; they will all point to the top of the hierarchy. It's a challenge to scale the dependency graph when our component dependencies are all over the place.

Summary

This article introduced us to the concept of component composition. Components are the building blocks of a scalable JavaScript application. The common components we're likely to encounter include things like modules, models/collections, controllers/views, and templates. While these patterns help us achieve a level of consistency, they're not enough on their own to make our code work well under various scaling influencers. This is why we need to extend these components, providing our own generic implementations that specific features of our application can further extend and use.

Depending on the various scaling factors our application encounters, different approaches may be taken in getting generic functionality into our components. One approach is to keep extending the component hierarchy, and keep everything encapsulated and hidden away from the outside world. Another approach is to plug logic and properties into components when they're created. The cost is more complexity for the code that's using the components.

We ended the article with a look at how we might go about organizing our source code so that its structure better reflects that of our logical component design. This helps us scale our development effort, and helps isolate one component's code from others'. It's one thing to have well-crafted components that stand by themselves. It's quite another to implement scalable component communication.

For more information, refer to:

https://www.packtpub.com/web-development/javascript-and-json-essentials
https://www.packtpub.com/application-development/learning-javascript-data-structures-and-algorithms

Resources for Article:

Further resources on this subject:

Welcome to JavaScript in the full stack [Article]
Components of PrimeFaces Extensions [Article]
Unlocking the JavaScript Core [Article]
Importing Structure and Data Using phpMyAdmin

Packt
12 Oct 2009
9 min read
A feature was added in version 2.11.0: an import file may contain the DELIMITER keyword. This enables phpMyAdmin to mimic the mysql command-line interpreter. The DELIMITER separator is used to delineate the part of the file containing a stored procedure, as these procedures can themselves contain semicolons.

The default values for the Import interface are defined in $cfg['Import'].

Before examining the actual import dialog, let's discuss some limits issues.

Limits for the transfer

When we import, the source file is usually on our client machine; so, it must travel to the server via HTTP. This transfer takes time and uses resources that may be limited in the web server's PHP configuration. Instead of using HTTP, we can upload our file to the server using a protocol such as FTP, as described in the Web Server Upload Directories section. This method circumvents the web server's PHP upload limits.

Time limits

First, let's consider the time limit. In config.inc.php, the $cfg['ExecTimeLimit'] configuration directive assigns, by default, a maximum execution time of 300 seconds (five minutes) for any phpMyAdmin script, including the scripts that process data after the file has been uploaded. A value of 0 removes the limit, and in theory, gives us infinite time to complete the import operation. If the PHP server is running in safe mode, modifying $cfg['ExecTimeLimit'] will have no effect. This is because the limits set in php.ini or in user-related web server configuration files (such as .htaccess or virtual host configuration files) take precedence over this parameter.

Of course, the time it effectively takes depends on two key factors:

Web server load
MySQL server load

The time taken by the file, as it travels between the client and the server, does not count as execution time because the PHP script starts to execute only once the file has been received on the server. Therefore, the $cfg['ExecTimeLimit'] parameter has an impact only on the time used to process data (like decompression or sending it to the MySQL server).

Other limits

The system administrator can use the php.ini file or the web server's virtual host configuration file to control uploads on the server. The upload_max_filesize parameter specifies the upper limit, or the maximum file size that can be uploaded via HTTP. This one is obvious, but another less obvious parameter is post_max_size. As HTTP uploading is done via the POST method, this parameter may limit our transfers. For more details about the POST method, please refer to http://en.wikipedia.org/wiki/Http#Request_methods.

The memory_limit parameter is provided to prevent web server child processes from grabbing too much of the server memory—phpMyAdmin also runs as a child process. Thus, the handling of normal file uploads, especially compressed dumps, can be compromised by giving this parameter a small value. Here, no preferred value can be recommended; the value depends on the size of uploaded data. The memory limit can also be tuned via the $cfg['MemoryLimit'] parameter in config.inc.php.

Finally, file uploads must be allowed by setting file_uploads to On. Otherwise, phpMyAdmin won't even show the Location of the text file dialog. It would be useless to display this dialog, as the connection would be refused later by the PHP component of the web server.

Partial imports

If the file is too big, there are ways in which we can resolve the situation.
If we still have access to the original data, we could use phpMyAdmin to generate smaller CSV export files, choosing the Dump n rows starting at record # n dialog. If this were not possible, we will have to use a text editor to split the file into smaller sections. Another possibility is to use the upload directory mechanism, which accesses the directory defined in $cfg['UploadDir']. This feature is explained later in this article.

In recent phpMyAdmin versions, the Partial import feature can also solve this file size problem. By selecting the Allow interrupt… checkbox, the import process will interrupt itself if it detects that it is close to the time limit. We can also specify a number of queries to skip from the start, in case we successfully imported a number of rows and wish to continue from that point.

Temporary directory

On some servers, a security feature called open_basedir can be set up in a way that impedes the upload mechanism. In this case, or for any other reason, when uploads are problematic, the $cfg['TempDir'] parameter can be set with the value of a temporary directory. This is probably a subdirectory of phpMyAdmin's main directory, into which the web server is allowed to put the uploaded file.

Importing SQL files

Any file containing MySQL statements can be imported via this mechanism. The dialog is available in the Database view or the Table view, via the Import subpage, or in the Query window.

There is no relation between the currently selected table (here author) and the actual contents of the SQL file that will be imported. All the contents of the SQL file will be imported, and it is those contents that determine which tables or databases are affected. However, if the imported file does not contain any SQL statements to select a database, all statements in the imported file will be executed on the currently selected database.

Let's try an import exercise. First, we make sure that we have a current SQL export of the book table. This export file must contain the structure and the data. Then we drop the book table—yes, really! We could also simply rename it.

Now it is time to import the file back. We should be on the Import subpage, where we can see the Location of the text file dialog. We just have to hit the Browse button and choose our file.

phpMyAdmin is able to detect which compression method (if any) has been applied to the file. Depending on the phpMyAdmin version, and the extensions that are available in the PHP component of the web server, there is variation in the formats that the program can decompress.

However, to import successfully, phpMyAdmin must be informed of the character set of the file to be imported. The default value is utf8. However, if we know that the import file was created with another character set, we should specify it here.

An SQL compatibility mode selector is available at import time. This mode should be adjusted to match the actual data that we are about to import, according to the type of the server where the data was previously exported.

To start the import, we click Go. The import procedure continues and we receive a message: Import has been successfully finished, 2 queries executed. We can browse our newly-created tables to confirm the success of the import operation.

The file could be imported for testing in a different database or even on another MySQL server.

Importing CSV files

In this section, we will examine how to import CSV files. There are two possible methods—CSV and CSV using LOAD DATA.
The first method is implemented internally by phpMyAdmin and is the recommended one for its simplicity. With the second method, phpMyAdmin receives the file to be loaded, and passes it to MySQL. In theory, this method should be faster. However, it has more requirements due to MySQL itself (see the Requirements sub-section of the CSV using LOAD DATA section).

Differences between SQL and CSV formats

There are some differences between these two formats. The CSV file format contains data only, so we must already have an existing table in place. This table does not need to have the same structure as the original table (from which the data comes); the Column names dialog enables us to choose which columns are affected in the target table.

Because the table must exist prior to the import, the CSV import dialog is available only from the Import subpage in the Table view, and not in the Database view.

Exporting a test file

Before trying an import, let's generate an author.csv export file from the author table. We use the default values in the CSV export options. We can then Empty the author table—we should avoid dropping this table because we still need the table structure.

CSV

From the author table menu, we select Import and then CSV. We can influence the behavior of the import in a number of ways. By default, importing does not modify existing data (based on primary or unique keys). However, the Replace table data with file option instructs phpMyAdmin to use REPLACE statements instead of INSERT statements, so that existing rows are replaced with the imported data.

Using Ignore duplicate rows, INSERT IGNORE statements are generated. These cause MySQL to ignore any duplicate key problems during insertion. A duplicate key from the import file does not replace existing data, and the procedure continues for the next line of CSV data.

We can then specify the character that terminates each field, the character that encloses data, and the character that escapes the enclosing character. Usually this is \. For example, for a double quote enclosing character, if the data field contains a double quote, it must be expressed as "some data \" some other data".

For Lines terminated by, recent versions of phpMyAdmin offer the auto choice, which should be tried first as it detects the end-of-line character automatically. We can also specify manually which characters terminate the lines. The usual choice is \n for UNIX-based systems, \r\n for DOS or Windows systems, and \r for Mac-based systems (up to Mac OS 9). If in doubt, we can use a hexadecimal file editor on our client computer (not part of phpMyAdmin) to examine the exact codes.

By default, phpMyAdmin expects a CSV file with the same number of fields and the same field order as the target table. But this can be changed by entering a comma-separated list of column names in Column names, respecting the source file format. For example, let's say our source file contains only the author ID and the author name information:

"1","John Smith"
"2","Maria Sunshine"

We'd have to put id, name in Column names to match the source file.

When we click Go, the import is executed and we get a confirmation. We might also see the actual INSERT queries generated if the total size of the file is not too big.

Import has been successfully finished, 2 queries executed.
INSERT INTO `author` VALUES ('1', 'John Smith', '+01 445 789-1234')
# 1 row(s) affected.
INSERT INTO `author` VALUES ('2', 'Maria Sunshine', '333-3333')
# 1 row(s) affected.
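For comparison, with the second method phpMyAdmin ultimately hands the file over to MySQL itself. As a rough sketch, and assuming the default field and line options described above, the statement MySQL ends up executing looks something like the following; the exact clauses depend on the options chosen in the dialog, and the file path is illustrative only:

LOAD DATA LOCAL INFILE 'author.csv'
INTO TABLE author
FIELDS TERMINATED BY ',' ENCLOSED BY '"' ESCAPED BY '\\'
LINES TERMINATED BY '\n';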
Web Components

Packt
17 Jun 2016
12 min read
In this article by Arshak Khachatryan, the author of Getting Started with Polymer, we will discuss web components.

Currently, web technologies are growing rapidly. Though most websites use these technologies nowadays, we come across many with a bad, unresponsive UI design and awful performance. The reason we should think about a responsive website is that users are now moving to the mobile web: 55% of web users browse from mobile phones because it is faster and more comfortable. This is why we need to provide mobile content in the simplest way possible. Everything is moving to minimalism, even the Web.

The new web standards are changing rapidly too. In this article, we will cover one of these new technologies, web components, and what they do. We will discuss the following specifications of web components in this article:

Templates
Shadow DOM

(For more resources related to this topic, see here.)

Templates

In this section, we will discuss what we can do with templates. However, let's answer a few questions before that. What are templates, and why should we use them?

Templates are basically fragments of HTML, but let's call these fragments the "zombie" fragments of HTML, as they are neither alive nor dead. What is meant by "neither alive nor dead"? Let me explain this with a real-life example. Once, when I was working on the ucraft.me project (it's a website built with a lot of cool stuff in it), we faced some rather new challenges with templates. We had a lot of form elements, but we didn't know where to save the form elements' content. We didn't want to load the DOM of each form element, but what could we do? As always, we did some magic: we created a lot of div elements containing the form elements and hid them with CSS. The catch is that the CSS display: none property prevented the elements from being rendered, but they were still loaded. This was a problem, because there were a lot of form element templates, and it affected the performance of the website. I recommended to my team that they work with templates.

Templates can contain HTML content, but that content is neither loaded nor rendered. We call template elements "dead elements" because nothing happens with their content until you get it with JavaScript. Let's move ahead, and let me show you some examples of how you can create templates and work with their content.

Imagine that you are working on a big project where you need to load some dynamic content without AJAX. If I had a task such as this, I would create a PHP file and get its content by calling the jQuery .load() function. However, now you can save your content inside the <template> element and get that content without any jQuery or AJAX, with just a few lines of plain JavaScript.

Let's create a template. In index.html, we have <template> and some content we want to get in the future, as shown in the following code block:

<template class="superman">
  <div>
    <img src="assets/img/superman.png" class="animated_superman" />
  </div>
</template>

The time has now come for JavaScript! Execute the following code:

<script>
  // selecting the template element with querySelector()
  var tmpl = document.querySelector('.superman');

  // getting the <template> content
  var content = tmpl.content;

  // making some changes in the content
  content.querySelector('.animated_superman').width = 200;

  // appending the template to the body
  document.body.appendChild(content);
</script>

So, that's it! Cool, right? The content will load only after you append it to the document.
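One subtlety worth noting: appending tmpl.content directly moves the nodes out of the template, so the template can only be stamped once this way. When the template needs to be reused, the usual approach is to deep-clone the content first with the standard document.importNode() call; here's a minimal sketch:

<script>
  var tmpl = document.querySelector('.superman');

  // importNode(..., true) deep-clones the fragment,
  // leaving the original template content intact.
  var clone = document.importNode(tmpl.content, true);
  document.body.appendChild(clone);

  // The same template can now be stamped again later.
</script>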
So, do you realize that templates are a part of the future web? If you are using Chrome Canary, just turn on the Experimental Web Platform features flag and enable HTML Imports and Experimental JavaScript.

There are four ways to use templates, which are:

Add templates with hidden elements in the document and just copy and paste the data when you need it, as follows:

<div hidden data-template="superman">
  <div>
    <p>SuperMan Head</p>
    <img src="assets/img/superman.png" class="animated_superman" />
  </div>
</div>

However, the problem is that a browser will load all the content. It means that the browser will load but not render images, video, audio, and so on.

Get the content of the template as a string (by requesting it with AJAX or from <script type="x-template">). However, we might have some problems in working with the string. This can be dangerous for XSS attacks; we just need to pay some more attention to this:

<script data-template="batman" type="x-template">
  <div>
    <p>Batman Head this time!</p>
    <img src="assets/img/superman.png" class="animated_superman" />
  </div>
</script>

Compiled templates such as Hogan.js (http://twitter.github.io/hogan.js/) work with strings. So, they have the same flaw as the patterns of the second type.

Templates do not have these disadvantages. We will work with the DOM and not with strings. We will then decide when to run the code.

In conclusion:

The <template> tag is not intended to replace templating systems. There are no tricky iteration operators or data bindings. Its main feature is to be able to insert "live" content along with scripts. Lastly, it does not require any libraries.

Shadow DOM

The Shadow DOM specification is a separate standard. A part of it is used for standard DOM elements, but it is also used to create web components. In this section, you will learn what Shadow DOM is and how to use it.

Shadow DOM is an internal DOM tree that is separated from the external document. It can contain its own IDs, styles, and so on. Most importantly, Shadow DOM is not visible outside of its scope without the use of special techniques. Hence, there are no conflicts with the external world; it's like an iframe.

Inside the browser

The Shadow DOM concept has been used for a long time inside browsers themselves. When the browser shows complex controls, such as an <input type="range"> slider or an <input type="date"> calendar, it constructs them internally out of the most ordinary styled <div>, <span>, and other elements. They are invisible at first glance, but they can be easily seen if the checkbox in Chrome DevTools is set to display Shadow DOM:

In the preceding code, #shadow-root is the Shadow DOM. Getting items from the Shadow DOM can only be done using special JavaScript calls or selectors. They are not children, but a more powerful separation of content from the parent.

In the preceding Shadow DOM, you can see a useful pseudo attribute. It is nonstandard and is present solely for historical reasons. It can be styled via CSS with the help of subelements—for example, let's give the date input a red background via the following code:

<style>
  input::-webkit-datetime-edit {
    background: red;
  }
</style>
<input type="date" />

Once again, make a note of the pseudo custom attribute. Speaking chronologically, browsers first started to experiment with encapsulated DOM structures inside their controls; then the Shadow DOM specification appeared, which allowed developers to do the same.

Now, let's work with the standard Shadow DOM from JavaScript.
Creating a Shadow DOM

A Shadow DOM can be created on any element with the elem.createShadowRoot() call, as shown in the following code:

<div id="container">You know why?</div>
<script>
  var root = container.createShadowRoot();
  root.innerHTML = "Because I'm Batman!";
</script>

If you run this example, you will see that the contents of the #container element have disappeared somewhere, and it only shows "Because I'm Batman!". This is because the element now has a Shadow DOM, and the previous content of the element is ignored. Once the Shadow DOM is created, the browser renders it instead of the element's original content.

If you wish, you can show the element's ordinary content inside this Shadow DOM. To do this, you need to specify where it should appear. This is done through an "insertion point", which is declared using the <content> tag; here's an example:

<div id="container">You know why?</div>
<script>
  var root = container.createShadowRoot();
  root.innerHTML = '<h1><content></content></h1><p>Winter is coming!</p>';
</script>

Now, you will see "You know why?" in the title, followed by "Winter is coming!". Here's a Shadow DOM example in Chrome DevTools:

The following are some important details about the Shadow DOM:

The <content> tag affects only the display, and it does not move the nodes physically. As you can see in the preceding picture, the node "You know why?" remained inside the div#container. It can even be obtained using container.firstElementChild.

Inside the <content> tag, we have the content of the element itself; in this example, the string "You know why?".

With the select attribute of the <content> element, you can specify a selector for the content you want to transfer; for example, <content select="p"></content> will transfer only paragraphs.

Inside the Shadow DOM, you can use the <content> tag multiple times with different values of select, thus indicating where to place which part of the original content. However, it is impossible to duplicate nodes: if a node has already been shown by one <content> tag, it will be skipped by the next one. For example, if there is a <content select="h3.title"> tag and then <content select="h3">, the first <content> will show the <h3> headers with the class title, while the second will show all the others, except for the ones already shown.

In the preceding example from DevTools, the <content></content> tag is empty. If we add some content inside the <content> tag, it is shown only when no nodes match; it acts as a default value. Check out the following code:

<div id="container">
  <h3>Once upon a time, in Westeros</h3>
  <strong>Ruled a king by name Joffrey and he's dead!</strong>
</div>
<script>
  var root = container.createShadowRoot();
  root.innerHTML = '<content select="h3"></content>' +
    '<content select=".hero">Jon Snow</content>' +
    '<content></content>';
</script>

When you run the JS code, you will see the following:

The first <content select="h3"> tag will display the title.

The second <content select=".hero"> tag would show the hero's name, but since there is no element matching this selector, its default content, Jon Snow, is shown instead.

The third <content> tag displays the rest of the element's original content, without the <h3> header that has already been shown.

Once again, note that <content> does not physically move nodes in the DOM.

Root shadowRoot

After the creation of a root in the internal DOM, the tree will be available as container.shadowRoot.
It is a special object that supports the basic CSS query methods, and it is described in detail in the ShadowRoot specification. You need to go through container.shadowRoot if you want to work with content in the Shadow DOM. You can create a new Shadow DOM tree from JavaScript; here's an example:

<div id="container">Polycasts</div>
<script>
  // create a new Shadow DOM tree for the element
  var root = container.createShadowRoot();
  root.innerHTML = '<h1><content></content></h1>' +
    "<strong>Hey googlers! Let's code today.</strong>";
</script>
<script>
  // read data from the Shadow DOM for the element
  var root = container.shadowRoot;

  // Hey googlers! Let's code today.
  document.write('<br/><em>container: ' +
    root.querySelector('strong').innerHTML + '</em>');

  // empty, because the <content> tag holds no physical
  // nodes of its own; it only displays them
  document.write('<br/><em>content: ' +
    root.querySelector('content').innerHTML + '</em>');
</script>

To finish up, Shadow DOM is a tool to create a separate DOM tree inside an element, which is not visible from outside without using special techniques:

A lot of browser components with complex structures have a Shadow DOM already.

You can create a Shadow DOM inside any element by calling elem.createShadowRoot(). Afterwards, it is available as the elem.shadowRoot root, through which you can access the inside of the Shadow DOM. This does not work for the browser's own built-in shadow trees.

Once the Shadow DOM appears in the element, the element's own content is hidden. You can see just the Shadow DOM.

The <content> element moves the contents of the original element into the Shadow DOM only visually. However, it remains in the same place in the DOM structure.

Detailed specifications are given at http://w3c.github.io/webcomponents/spec/shadow/.

Summary

Using web components, you can easily create your web application by splitting it into parts/components.

Resources for Article:

Further resources on this subject:

Handling the DOM in Dart [article]
Manipulation of DOM Objects using Firebug [article]
jQuery 1.4 DOM Manipulation Methods for Style Properties and Class Attributes [article]
Extending Yii

Packt
03 Oct 2016
14 min read
Introduction

In this article by Dmitry Eliseev, the author of the book Yii Application Development Cookbook Third Edition, we will see three kinds of Yii extensions: helpers, behaviors, and components. In addition, we will learn how to make your extension reusable and useful for the community, and we will focus on the many things you should do in order to make your extension as efficient as possible.

(For more resources related to this topic, see here.)

Helpers

There are a lot of built-in framework helpers, like StringHelper in the yii\helpers namespace. Each contains a set of helpful static methods for manipulating strings, files, arrays, and other subjects. In many cases, for additional behavior, you can create your own helper and put a set of static functions into it. For example, we will implement a number helper in this recipe.

Getting ready

Create a new yii2-app-basic application by using composer, as described in the official guide at http://www.yiiframework.com/doc-2.0/guide-start-installation.html.

How to do it…

Create the helpers directory in your project and write the NumberHelper class:

<?php
namespace app\helpers;

class NumberHelper
{
    public static function format($value, $decimal = 2)
    {
        return number_format($value, $decimal, '.', ',');
    }
}

Add the actionNumbers method into SiteController:

<?php
...
class SiteController extends Controller
{
    …
    public function actionNumbers()
    {
        return $this->render('numbers', ['value' => 18878334526.3]);
    }
}

Add the views/site/numbers.php view:

<?php
use app\helpers\NumberHelper;
use yii\helpers\Html;

/* @var $this yii\web\View */
/* @var $value float */

$this->title = 'Numbers';
$this->params['breadcrumbs'][] = $this->title;
?>
<div class="site-numbers">
    <h1><?= Html::encode($this->title) ?></h1>
    <p>
        Raw number:<br />
        <b><?= $value ?></b>
    </p>
    <p>
        Formatted number:<br />
        <b><?= NumberHelper::format($value) ?></b>
    </p>
</div>

Open the action and see this result:

If needed, you can specify a different number of decimal places; for example:

NumberHelper::format($value, 3)

How it works…

Any helper in Yii2 is just a set of functions implemented as static methods in a corresponding class. You can use one to implement any output format, to manipulate values, and so on.

Note: Usually, static helpers are lightweight, clean functions with a small number of arguments. Avoid putting your business logic and other complicated manipulations into helpers. Use widgets or other components instead of helpers in those cases.

See also

For more information about helpers, refer to http://www.yiiframework.com/doc-2.0/guide-helper-overview.html.

For examples of built-in helpers, see the sources in the helpers directory of the framework at https://github.com/yiisoft/yii2/tree/master/framework/helpers.

Creating model behaviors

There are many similar solutions in today's web applications. Leading products such as Google's Gmail are defining nice UI patterns; one of these is soft delete. Instead of a permanent deletion with multiple confirmations, Gmail allows users to immediately mark messages as deleted and then easily undo it. The same behavior can be applied to any object such as blog posts, comments, and so on.

Let's create a behavior that will allow marking models as deleted, restoring models, selecting not yet deleted models, deleted models, and all models. In this recipe, we'll follow a test-driven development approach to plan the behavior and test whether the implementation is correct.
Getting ready

1. Create a new yii2-app-basic application by using composer, as described in the official guide at http://www.yiiframework.com/doc-2.0/guide-start-installation.html.

2. Create two databases: one for working and one for tests.

3. Configure Yii to use the first database in your primary application in config/db.php. Make sure the test application uses the second database in tests/codeception/config/config.php.

4. Create a new migration:

<?php

use yii\db\Migration;

class m160427_103115_create_post_table extends Migration
{
    public function up()
    {
        $this->createTable('{{%post}}', [
            'id' => $this->primaryKey(),
            'title' => $this->string()->notNull(),
            'content_markdown' => $this->text(),
            'content_html' => $this->text(),
        ]);
    }

    public function down()
    {
        $this->dropTable('{{%post}}');
    }
}

5. Apply the migration to both the working and the testing databases:

./yii migrate
tests/codeception/bin/yii migrate

6. Create a Post model:

<?php

namespace app\models;

use app\behaviors\MarkdownBehavior;
use yii\db\ActiveRecord;

/**
 * @property integer $id
 * @property string $title
 * @property string $content_markdown
 * @property string $content_html
 */
class Post extends ActiveRecord
{
    public static function tableName()
    {
        return '{{%post}}';
    }

    public function rules()
    {
        return [
            [['title'], 'required'],
            [['content_markdown'], 'string'],
            [['title'], 'string', 'max' => 255],
        ];
    }
}

How to do it…

1. Let's prepare a test environment, starting with defining the fixtures for the Post model. Create the tests/codeception/unit/fixtures/PostFixture.php file:

<?php

namespace app\tests\codeception\unit\fixtures;

use yii\test\ActiveFixture;

class PostFixture extends ActiveFixture
{
    public $modelClass = 'app\models\Post';
    public $dataFile = '@tests/codeception/unit/fixtures/data/post.php';
}

2. Add a fixture data file in tests/codeception/unit/fixtures/data/post.php:

<?php

return [
    [
        'id' => 1,
        'title' => 'Post 1',
        'content_markdown' => 'Stored *markdown* text 1',
        'content_html' => "<p>Stored <em>markdown</em> text 1</p>\n",
    ],
];

3. Then, we need to create a test case, tests/codeception/unit/MarkdownBehaviorTest.php:

<?php

namespace app\tests\codeception\unit;

use app\models\Post;
use app\tests\codeception\unit\fixtures\PostFixture;
use yii\codeception\DbTestCase;

class MarkdownBehaviorTest extends DbTestCase
{
    public function testNewModelSave()
    {
        $post = new Post();
        $post->title = 'Title';
        $post->content_markdown = 'New *markdown* text';

        $this->assertTrue($post->save());
        $this->assertEquals("<p>New <em>markdown</em> text</p>\n", $post->content_html);
    }

    public function testExistingModelSave()
    {
        $post = Post::findOne(1);
        $post->content_markdown = 'Other *markdown* text';

        $this->assertTrue($post->save());
        $this->assertEquals("<p>Other <em>markdown</em> text</p>\n", $post->content_html);
    }

    public function fixtures()
    {
        return [
            'posts' => [
                'class' => PostFixture::className(),
            ]
        ];
    }
}

4. Run the unit tests with codecept run unit MarkdownBehaviorTest and make sure they fail (we haven't implemented the behavior yet):

Codeception PHP Testing Framework v2.0.9
Powered by PHPUnit 4.8.27 by Sebastian Bergmann and contributors.

Unit Tests (2) ----------------------------------------------------------------
Trying to test ... MarkdownBehaviorTest::testNewModelSave      Error
Trying to test ... MarkdownBehaviorTest::testExistingModelSave Error
--------------------------------------------------------------------------------
Time: 289 ms, Memory: 16.75MB

5. Now we need to implement the behavior, attach it to the model, and make sure the tests pass. Create a new directory, behaviors.
6. Under this directory, create the MarkdownBehavior class:

<?php

namespace app\behaviors;

use yii\base\Behavior;
use yii\base\Event;
use yii\base\InvalidConfigException;
use yii\db\ActiveRecord;
use yii\helpers\Markdown;

class MarkdownBehavior extends Behavior
{
    public $sourceAttribute;
    public $targetAttribute;

    public function init()
    {
        if (empty($this->sourceAttribute) || empty($this->targetAttribute)) {
            throw new InvalidConfigException('Source and target must be set.');
        }
        parent::init();
    }

    public function events()
    {
        return [
            ActiveRecord::EVENT_BEFORE_INSERT => 'onBeforeSave',
            ActiveRecord::EVENT_BEFORE_UPDATE => 'onBeforeSave',
        ];
    }

    public function onBeforeSave(Event $event)
    {
        if ($this->owner->isAttributeChanged($this->sourceAttribute)) {
            $this->processContent();
        }
    }

    private function processContent()
    {
        $model = $this->owner;
        $source = $model->{$this->sourceAttribute};
        $model->{$this->targetAttribute} = Markdown::process($source);
    }
}

7. Let's attach the behavior to the Post model:

class Post extends ActiveRecord
{
    ...
    public function behaviors()
    {
        return [
            'markdown' => [
                'class' => MarkdownBehavior::className(),
                'sourceAttribute' => 'content_markdown',
                'targetAttribute' => 'content_html',
            ],
        ];
    }
}

8. Run the tests and make sure they pass:

Codeception PHP Testing Framework v2.0.9
Powered by PHPUnit 4.8.27 by Sebastian Bergmann and contributors.

Unit Tests (2) ----------------------------------------------------------------
Trying to test ... MarkdownBehaviorTest::testNewModelSave      Ok
Trying to test ... MarkdownBehaviorTest::testExistingModelSave Ok
--------------------------------------------------------------------------------
Time: 329 ms, Memory: 17.00MB

That's it. We've created a reusable behavior and can use it in all future projects by just connecting it to a model.

How it works…

Let's start with the test case. Since we want to use a set of models, we define fixtures. A fixture set is put into the database each time a test method is executed. We prepare unit tests that specify how the behavior should work:

First, we test the processing of new model content. The behavior must convert Markdown text from the source attribute to HTML and store the latter in the target attribute.

Second, we test the updated content of an existing model. After changing the Markdown content and saving the model, we must get updated HTML content.

Now let's move to the interesting implementation details. In a behavior, we can add our own methods, which will be mixed into the model that the behavior is attached to. We can also subscribe to the owner component's events. We use this to add our own listener:

public function events()
{
    return [
        ActiveRecord::EVENT_BEFORE_INSERT => 'onBeforeSave',
        ActiveRecord::EVENT_BEFORE_UPDATE => 'onBeforeSave',
    ];
}

And now we can implement this listener:

public function onBeforeSave(Event $event)
{
    if ($this->owner->isAttributeChanged($this->sourceAttribute)) {
        $this->processContent();
    }
}

In all methods, we can use the owner property to get the object the behavior is attached to. In general, we can attach any behavior to models, controllers, the application, and any other components that extend the yii\base\Component class. We can also attach the same behavior repeatedly to a model to process different attributes:

class Post extends ActiveRecord
{
    ...
    public function behaviors()
    {
        return [
            [
                'class' => MarkdownBehavior::className(),
                'sourceAttribute' => 'description_markdown',
                'targetAttribute' => 'description_html',
            ],
            [
                'class' => MarkdownBehavior::className(),
                'sourceAttribute' => 'content_markdown',
                'targetAttribute' => 'content_html',
            ],
        ];
    }
}

Besides, we can also extend the yii\base\AttributeBehavior class, as yii\behaviors\TimestampBehavior does, to update specified attributes on any event.

See also

To learn more about behaviors and events, refer to the following pages:

http://www.yiiframework.com/doc-2.0/guide-concept-behaviors.html
http://www.yiiframework.com/doc-2.0/guide-concept-events.html

For more information about Markdown syntax, refer to http://daringfireball.net/projects/markdown/.

Creating components

If you have some code that looks like it can be reused, but you don't know whether it's a behavior, widget, or something else, it's most probably a component. A component should inherit from the yii\base\Component class. Later on, the component can be attached to the application and configured using the components section of a configuration file. That's the main benefit compared to using just a plain PHP class. We also get behaviors, events, getters, and setters support.

For our example, we'll implement a simple Exchange application component that can get currency rates from the http://fixer.io site; we'll attach it to the application and use it.

Getting ready

Create a new yii2-app-basic application by using composer, as described in the official guide at http://www.yiiframework.com/doc-2.0/guide-start-installation.html.

How to do it…

1. To get a currency rate, our component should send an HTTP GET query to a service URL, such as http://api.fixer.io/2016-05-14?base=USD. The service must return all supported rates for the nearest working day:

{
    "base": "USD",
    "date": "2016-05-13",
    "rates": {
        "AUD": 1.3728,
        "BGN": 1.7235,
        ...
        "ZAR": 15.168,
        "EUR": 0.88121
    }
}

The component should extract the needed currency from the JSON response and return the target rate.

2. Create a components directory in your application structure.
3. Create the Exchange component class with the following interface:

<?php

namespace app\components;

use yii\base\Component;

class Exchange extends Component
{
    public function getRate($source, $destination, $date = null)
    {
    }
}

4. Implement the component's functionality:

<?php

namespace app\components;

use yii\base\Component;
use yii\base\InvalidConfigException;
use yii\base\InvalidParamException;
use yii\caching\Cache;
use yii\di\Instance;
use yii\helpers\Json;

class Exchange extends Component
{
    /**
     * @var string remote host
     */
    public $host = 'http://api.fixer.io';

    /**
     * @var bool cache results or not
     */
    public $enableCaching = false;

    /**
     * @var string|Cache component ID
     */
    public $cache = 'cache';

    public function init()
    {
        if (empty($this->host)) {
            throw new InvalidConfigException('Host must be set.');
        }
        if ($this->enableCaching) {
            $this->cache = Instance::ensure($this->cache, Cache::className());
        }
        parent::init();
    }

    public function getRate($source, $destination, $date = null)
    {
        $this->validateCurrency($source);
        $this->validateCurrency($destination);
        $date = $this->validateDate($date);
        $cacheKey = $this->generateCacheKey($source, $destination, $date);
        if (!$this->enableCaching || ($result = $this->cache->get($cacheKey)) === false) {
            $result = $this->getRemoteRate($source, $destination, $date);
            if ($this->enableCaching) {
                $this->cache->set($cacheKey, $result);
            }
        }
        return $result;
    }

    private function getRemoteRate($source, $destination, $date)
    {
        $url = $this->host . '/' . $date . '?base=' . $source;
        $response = Json::decode(file_get_contents($url));
        if (!isset($response['rates'][$destination])) {
            throw new \RuntimeException('Rate not found.');
        }
        return $response['rates'][$destination];
    }

    private function validateCurrency($source)
    {
        if (!preg_match('#^[A-Z]{3}$#s', $source)) {
            throw new InvalidParamException('Invalid currency format.');
        }
    }

    private function validateDate($date)
    {
        if (!empty($date) && !preg_match('#\d{4}-\d{2}-\d{2}#s', $date)) {
            throw new InvalidParamException('Invalid date format.');
        }
        if (empty($date)) {
            $date = date('Y-m-d');
        }
        return $date;
    }

    private function generateCacheKey($source, $destination, $date)
    {
        return [__CLASS__, $source, $destination, $date];
    }
}

5. Attach the component in the config/console.php or config/web.php configuration files:

'components' => [
    'cache' => [
        'class' => 'yii\caching\FileCache',
    ],
    'exchange' => [
        'class' => 'app\components\Exchange',
        'enableCaching' => true,
    ],
    // ...
    'db' => $db,
],

6. We can now use the new component directly or via the get method:

echo Yii::$app->exchange->getRate('USD', 'EUR');
echo Yii::$app->get('exchange')->getRate('USD', 'EUR', '2014-04-12');

7. Create a demonstration console controller:

<?php

namespace app\commands;

use Yii;
use yii\console\Controller;

class ExchangeController extends Controller
{
    public function actionTest($currency, $date = null)
    {
        echo Yii::$app->exchange->getRate('USD', $currency, $date) . PHP_EOL;
    }
}

8. Try running the command with different arguments:

$ ./yii exchange/test EUR
> 0.90196

$ ./yii exchange/test EUR 2015-11-24
> 0.93888

$ ./yii exchange/test OTHER
> Exception 'yii\base\InvalidParamException' with message 'Invalid currency format.'

$ ./yii exchange/test EUR 2015/24/11
> Exception 'yii\base\InvalidParamException' with message 'Invalid date format.'

$ ./yii exchange/test ASD
> Exception 'RuntimeException' with message 'Rate not found.'

As a result, you should see rate values in the success cases and specific exceptions in the error ones. In addition to creating your own components, you can do more.
Overriding existing application components

Most of the time, there will be no need to create your own application components, since other types of extensions, such as widgets or behaviors, cover almost all types of reusable code. However, overriding core framework components is a common practice and can be used to customize the framework's behavior for your specific needs without hacking into the core.

For example, to be able to format numbers using the Yii::$app->formatter->asNumber($value) method instead of the NumberHelper::format method from the Helpers recipe, follow these steps:

1. Extend the yii\i18n\Formatter component as follows:

<?php

namespace app\components;

class Formatter extends \yii\i18n\Formatter
{
    public function asNumber($value, $decimal = 2)
    {
        return number_format($value, $decimal, '.', ',');
    }
}

2. Override the class of the built-in formatter component:

'components' => [
    // ...
    'formatter' => [
        'class' => 'app\components\Formatter',
    ],
    // ...
],

3. Right now, we can use this method directly:

echo Yii::$app->formatter->asNumber(1534635.2, 3);

or as a new format for the GridView and DetailView widgets:

<?= yii\grid\GridView::widget([
    'dataProvider' => $dataProvider,
    'columns' => [
        'id',
        'created_at:datetime',
        'title',
        'value:number',
    ],
]) ?>

You can also extend every existing component without overwriting its source code.

How it works…

To be able to attach a component to an application, it must extend the yii\base\Component class. Attaching is as simple as adding a new array to the components section of the configuration. There, the class value specifies the component's class, and all other values are set to the component through its corresponding public properties and setter methods.

The implementation itself is very straightforward: we wrap the http://api.fixer.io calls in a comfortable API with validators and caching. We can access our class by its component name using Yii::$app. In our case, it will be Yii::$app->exchange.

See also

For official information about components, refer to http://www.yiiframework.com/doc-2.0/guide-concept-components.html. For the NumberHelper class sources, see the Helpers recipe.

Summary

In this article, we learned about Yii extensions: helpers, behaviors, and components. Helpers contain sets of helpful static methods for manipulating strings, files, arrays, and other subjects. Behaviors allow you to enhance the functionality of an existing component class without needing to change the class's inheritance. Components are the main building blocks of Yii applications. A component is an instance of yii\base\Component or one of its derived classes. Using a component mainly involves accessing its properties and raising and handling its events.

Resources for Article:

Further resources on this subject:

Creating an Extension in Yii 2 [article]
Atmosfall – Managing Game Progress with Coroutines [article]
Optimizing Games for Android [article]

Breaking into Microservices Architecture

Packt
08 Nov 2016
15 min read
In this article by Narayan Prusty, the author of the book Modern JavaScript Applications, we will see that the server side architecture of large and complex applications (applications with a huge number of users and a large volume of data) shouldn't just involve faster responses and providing web services for a wide variety of platforms. It should be easy to scale, upgrade, update, test, and deploy. It should also be highly available and should allow developers to write components of the server side application in different programming languages and use different databases. This leads developers who build large and complex applications to switch from the common monolithic architecture to the microservices architecture, which allows us to do all of this easily. As microservices architecture is widely used in enterprises that build large and complex applications, it's really important to learn how to design and create server side applications using this architecture. In this article, we will discuss how to create applications based on microservices architecture with Node.js using the Seneca toolkit.

(For more resources related to this topic, see here.)

What is monolithic architecture?

To understand microservices architecture, it's important to first understand monolithic architecture, which is its opposite. In monolithic architecture, different functional components of the server side application, such as payment processing, account management, and push notifications, all blend together in a single unit.

For example, applications are usually divided into three parts: HTML pages or a native UI that runs on the user's machine, a server side application that runs on the server, and a database that also runs on the server. The server side application is responsible for handling HTTP requests, retrieving and storing data in a database, executing algorithms, and so on. If the server side application is a single executable (that is, it runs as a single process) that does all of these tasks, then we say that the server side application is monolithic. This is a common way of building server side applications. Almost every major CMS, web server, and server side framework is built using monolithic architecture. This architecture may seem successful, but problems are likely to arise when your application is large and complex.

Demerits of monolithic architecture

The following are some of the issues caused by server side applications built using monolithic architecture.

Scaling monolithic architecture

As traffic to your server side application increases, you will need to scale it to handle the traffic. In the case of monolithic architecture, you can scale the server side application by running the same executable on multiple servers and placing the servers behind a load balancer, or you can use round-robin DNS to distribute the traffic among the servers; all the servers run the same server side application. Although scaling is easy, scaling a monolithic server side application ends up scaling all the components rather than just the components that require greater resources, which can cause unbalanced utilization of resources, depending on the quantity and types of resources the components need.
Let's consider some examples to understand the issues caused while scaling monolithic server side applications:

Suppose there is a component of the server side application that requires a more powerful or special kind of hardware. We cannot scale this particular component on its own, as all the components are packed together and everything needs to be scaled together. So, to make sure that the component gets enough resources, you need to run the complete server side application on additional servers with powerful or special hardware, leading to the consumption of more resources than actually required.

Suppose we have a component that must be executed on a specific server operating system that is not free of charge. We cannot run this particular component alone on the non-free operating system, as all the components are packed together; just to execute this specific component, we need to install the non-free operating system on all the servers, greatly increasing the cost.

These are just some examples. There are many more issues that you are likely to come across while scaling a monolithic server side application.

So, when we scale monolithic server side applications, the components that don't need more powerful or special kinds of resources start receiving them, thereby decreasing the resources available to the components that actually need them. We can say that scaling a monolithic server side application involves scaling all the components, which forces us to duplicate everything on the new servers.

Writing monolithic server side applications

A monolithic server side application is written in a particular programming language using a particular framework. Enterprises usually have developers who are experts in different programming languages and frameworks, so if they are asked to build a single monolithic server side application, it will be difficult for them to work together. Furthermore, the components of a monolithic server side application can be reused only within the framework with which it's built, so you cannot reuse them for another project built using different technologies.

Other issues of monolithic architecture

Here are some other issues that developers might face, depending on the technology that is used to build the monolithic server side application:

It may need to be completely rebuilt and redeployed for every small change made to it. This is a time-consuming task and makes your application inaccessible for a long time.

It may completely fail if any one of the components fails. It's difficult to build a monolithic application that handles the failure of specific components and degrades application features accordingly.

It may be difficult to find out how many resources each component is consuming.

It may be difficult to test and debug individual components separately.

Microservices architecture to the rescue

We saw the problems caused by monolithic architecture. These problems lead developers to switch from monolithic architecture to microservices architecture. In microservices architecture, the server side application is divided into services. A service (or microservice) is a small and independent process that constitutes a particular functionality of the complete server side application. For example, you can have a service for payment processing, another service for account management, and so on; the services communicate with each other via the network.

What do you mean by "small" service?
You must be wondering how small a service needs to be and how to tell whether a service is small or not. Well, it actually depends on many factors, such as the type of application, team management, availability of resources, the size of the application, and what you consider small. However, a small service doesn't have to be one that is written in fewer lines of code or provides only very basic functionality. A small service can be one on which a team of developers can work independently, which can be scaled independently of the other services, whose scaling doesn't cause unbalanced utilization of resources, and which is, overall, highly decoupled from (independent of, and unaware of) the other services.

You don't have to run each service on a different server; you can run multiple services on a single computer. The ratio of servers to services depends on different factors. A common factor is the amount and type of resources and technologies required. For example, if a service needs a lot of RAM and CPU time, it would be better to run it individually on a server. If some services don't need many resources, you can run them all together on a single server.

As an example of the microservices architecture, you can think of Service 1 as the web server with which a browser communicates and the other services as providing APIs for various functionalities. The web service communicates with the other services to get data.

Merits of microservices architecture

Because the services are small and independent and communicate via the network, microservices architecture solves many of the problems that monolithic architecture had. Here are some of the benefits:

As the services communicate via the network, they can be written in different programming languages using different frameworks.

Making a change to a service only requires that particular service to be redeployed, instead of all the services, which is a faster procedure.

It becomes easier to measure how many resources each service consumes, as each service runs in a different process.

It becomes easier to test and debug, as you can analyze each service separately.

Services can be reused by other applications, as they interact via network calls.

Scaling services

Apart from the preceding benefits, one of the major benefits of microservices architecture is that you can scale the individual services that require scaling instead of all the services, thereby preventing duplication of resources and unbalanced utilization of resources.

Suppose we want to scale Service 1. We can run two instances of Service 1 on two different servers behind a load balancer, which distributes the traffic between them. All the other services keep running the same way, as scaling them wasn't required. If you wanted to scale Service 3, you would run multiple instances of Service 3 on multiple servers and place them behind a load balancer.

Demerits of microservices architecture

Although there are a lot of merits of using microservices architecture compared to monolithic architecture, there are some demerits as well:

As the server side application is divided into services, deploying, and optionally configuring, each service separately is a cumbersome and time-consuming task.
Note that developers often use some sort of automation technology (such as AWS, Docker, and so on) to make deployment somewhat easier; however, using it still requires a good level of experience and expertise with that technology.

Communication between services is likely to lag, as it's done via the network.

This sort of server side application is more prone to network security vulnerabilities, as the services communicate via the network.

Writing code for communicating with other services can be harder; that is, you need to make network calls and then parse the data to read it. This also requires more processing. Note that although there are frameworks for building server side applications using microservices that make fetching and parsing of data easier, they still don't remove the processing and network wait time.

You will surely need some sort of monitoring tool to monitor the services, as they may go down due to network, hardware, or software failure. Even if you only consult the monitoring tool when your application suddenly stops, building monitoring software, or using some sort of monitoring service, requires extra experience and expertise.

Microservices-based server side applications are slower than monolithic ones, as communication via the network is slower compared to memory.

When to use microservices architecture?

It may seem difficult to choose between monolithic and microservices architecture, but it's actually not so hard to decide. If you are building a server side application using monolithic architecture and you feel that you are unlikely to face any of the monolithic issues that we discussed earlier, then you can stick to monolithic architecture. In the future, if you face issues that can be solved using microservices architecture, you should switch to it. Switching from a monolithic architecture to microservices architecture doesn't require rewriting the complete application; you can convert only the components that are causing issues to services by doing some code refactoring. This sort of server side application, where the main application logic is monolithic but some specific functionality is exposed via services, is called microservices architecture with monolithic core. As issues increase further, you can convert more components of the monolithic core to services.

If you are building a server side application using monolithic architecture and you feel that you are likely to face any of the monolithic issues we discussed earlier, you should immediately switch to microservices architecture, or microservices architecture with monolithic core, depending on what suits you best.

Data management

In microservices architecture, each service can have its own database to store data and can also use a centralized database. Some developers don't use a centralized database at all; instead, all the services have their own databases to store the data. To synchronize the data between the services, the services emit events when their data is changed, and other services subscribe to those events and update their data. The problem with this mechanism is that if a service is down, it may miss some events. There is also going to be a lot of duplicate data, and finally, it is difficult to code this kind of system.
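To make the emit/subscribe idea concrete, here is a minimal sketch. It uses Node's built-in EventEmitter as an in-process stand-in for what would really be a network message bus (Redis pub/sub, RabbitMQ, and so on); the bus variable, the user.updated event name, and both "services" are assumptions made purely for illustration:

const EventEmitter = require('events').EventEmitter;

// Stand-in for a network message bus; in a real deployment the
// services would publish and subscribe over the network.
const bus = new EventEmitter();

// Account service: owns user data and emits an event when it changes.
function updateEmail(userId, email) {
  // ...persist the change in the account service's own database...
  bus.emit('user.updated', { userId: userId, email: email });
}

// Notification service: keeps its own copy of user emails in sync.
const emailsByUserId = new Map();
bus.on('user.updated', function (event) {
  emailsByUserId.set(event.userId, event.email);
});

updateEmail(1, 'jane@example.com');
console.log(emailsByUserId.get(1)); // 'jane@example.com'

Notice that if the notification service were down when user.updated fired, its copy of the data would silently fall behind; that is exactly the missed-events problem described above.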
Therefore, it's a good idea to have a centralized database and also let each service maintain its own database if it wants to store something that it doesn't want to share with others. Services should not connect to the centralized database directly; instead, there should be another service, called the database service, that provides APIs to work with the centralized database. This extra layer has many advantages: the underlying schema can be changed without updating and redeploying all the services that depend on it, we can add a caching layer without making changes to the services, and we can change the type of database without making any changes to the services, among many other benefits. We can also have multiple database services if there are multiple schemas or different types of databases, or for some other reason that benefits the overall architecture and decouples the services.

Implementing microservices using Seneca

Seneca is a Node.js framework for creating server side applications using microservices architecture with monolithic core. Earlier, we discussed that in microservices architecture we create a separate service for every component, so you may be wondering what the point of a framework is when a service can be created by simply writing some code to listen to a port and reply to requests. Well, writing code to make requests, send responses, and parse data requires a lot of time and work, but a framework like Seneca makes all of this easy. Converting components of the monolithic core to services is also a cumbersome task, as it requires a lot of code refactoring, but Seneca makes it easy by introducing the concepts of actions and plugins. Finally, services written in any other programming language or framework are able to communicate with Seneca services.

In Seneca, an action represents a particular operation. An action is a function that's identified by an object literal or JSON string called the action's pattern. In Seneca, the operations of a monolithic core component are written as actions, which we may later want to move from the monolithic core to a service and expose to other services and the monolithic core via the network.

Why actions?

You might be wondering what the benefit is of using actions instead of functions to write operations, and how actions make it easy to convert components of the monolithic core to services. Suppose you want to move an operation of the monolithic core that is written as a function to a separate service and expose it via the network. You cannot simply copy and paste the function to the new service; instead, you need to define a route (if you are using Express), and to call the function from inside the monolithic core, you need to write code to make an HTTP request to the service. To call the same operation inside the service, you can simply call the function, so there end up being two different code snippets depending on where the operation is executed. Therefore, moving operations requires a lot of code refactoring.

However, if you had written the preceding operation using a Seneca action, it would be really easy to move it to a separate service. In that case, if you want to move the operation to a separate service and expose it via the network, you can simply copy and paste the action to the new service. That's it.
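Here is a minimal sketch of what this looks like using Seneca's add/act/listen/client API. The role and cmd pattern names, the port number, and the sum operation itself are arbitrary choices for illustration:

const seneca = require('seneca');

// The service: registers an action identified by the pattern
// {role, cmd} and exposes it over the network on port 10101.
seneca()
  .add({ role: 'math', cmd: 'sum' }, function (msg, respond) {
    respond(null, { answer: msg.left + msg.right });
  })
  .listen(10101);

// The monolithic core: forwards matching patterns to the service
// listening on port 10101 and invokes the action with act().
seneca()
  .client(10101)
  .act({ role: 'math', cmd: 'sum', left: 1, right: 2 }, function (err, result) {
    if (err) throw err;
    console.log(result.answer); // 3
  });

Moving the action out of the monolithic core amounts to moving the add() call into the service and adding the listen() and client() lines; the act() call in the core stays exactly the same.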
As the sketch above shows, we also need to tell the service to expose the action via the network and tell the monolithic core where to find the action, but all of this requires just a couple of lines of code.

A Seneca service exposes actions to other services and to the monolithic core. While making a request to a service, we need to provide a pattern matching the pattern of the action to be called in the service.

Why patterns?

Patterns make it easy to map a URL to an action, and a pattern can override other patterns for specific conditions. This lets us add behavior without editing existing code, which is unsafe on a production site and has many other disadvantages.

Seneca also has a concept of plugins. A Seneca plugin is actually a set of actions that can be easily distributed and plugged in to a service or the monolithic core. As our monolithic core becomes larger and more complex, we can convert components to services; that is, we move the actions of certain components to services.

Summary

In this article, we saw the difference between monolithic and microservices architecture. Then we discussed what microservices architecture with monolithic core means and its benefits. Finally, we jumped into the Seneca framework for implementing microservices architecture with monolithic core and discussed how to create basic login and registration functionality to demonstrate various features of the Seneca framework and how to use it. In the next chapter, we will create a fully functional e-commerce website using the Seneca and Express frameworks.

Resources for Article:

Further resources on this subject:

Microservices – Brave New World [article]
Patterns for Data Processing [article]
Domain-Driven Design [article]

Getting Organized with NPM and Bower

Packt
06 Oct 2016
13 min read
In this article by Philip Klauzinski and John Moore, the authors of the book Mastering JavaScript Single Page Application Development, we will learn about the basics of NPM and Bower. JavaScript was the bane of the web development industry during the early days of the browser-rendered Internet. Now, it powers hugely impactful libraries such as jQuery, and JavaScript-rendered content (as opposed to server-side-rendered content) is even indexed by many search engines. What was once largely considered an annoying language, used primarily to generate popup windows and alert boxes, has now become, arguably, the most popular programming language in the world.

(For more resources related to this topic, see here.)

Not only is JavaScript now more prevalent than ever in frontend architecture, but it has become a server-side language as well, thanks to the Node.js runtime. We have also seen the proliferation of document-oriented databases, such as MongoDB, which store and return JSON data. With JavaScript present throughout the development stack, the door is now open for JavaScript developers to become full-stack developers without the need to learn a traditional server-side language. Given the right tools and know-how, any JavaScript developer can create single page applications (SPAs) comprising entirely the language they know best, and they can do so using an architecture such as MEAN (MongoDB, Express, AngularJS, and Node.js).

Organization is key to the development of any complex single page application. If you don't get organized from the beginning, you are sure to introduce an inordinate number of regressions to your app. The Node.js ecosystem will help you do this with a full suite of indispensable and open source tools, two of which we will discuss here.

In this article, you will learn about:

Node Package Manager
The Bower frontend package manager

What is Node Package Manager?

Within any full-stack JavaScript environment, Node Package Manager (NPM) will be your go-to tool for setting up your development environment and managing server-side libraries. NPM can be used within both global and isolated environment contexts. We will first explore the use of NPM globally.

Installing Node.js and NPM

NPM is a component of Node.js, so before you can use it, you must install Node.js. You can find installers for both Mac and Windows at nodejs.org. Once you have Node.js installed, using NPM is incredibly easy and is done from the command-line interface (CLI). Start by ensuring you have the latest version of NPM installed, as it is updated more often than Node.js itself:

$ npm install -g npm

When using NPM, the -g option will apply your changes to your global environment. In this case, you want your version of NPM to apply globally. As stated previously, NPM can be used to manage packages both globally and within isolated environments. Therefore, we want essential development tools to be applied globally so that you can use them in multiple projects on the same system.

On Mac and some Unix-based systems, you may have to run the npm command as the superuser (prefix the command with sudo) in order to install packages globally, depending on how NPM was installed. If you run into this issue and wish to remove the need to prefix npm with sudo, see docs.npmjs.com/getting-started/fixing-npm-permissions.

Configuring your package.json file

For any project you develop, you will keep a local package.json file to manage your Node.js dependencies.
This file should be stored at the root of your project directory, and it will only pertain to that isolated environment. This allows you to have multiple Node.js projects with different dependency chains on the same system.

When beginning a new project, you can automate the creation of the package.json file from the command line:

$ npm init

Running npm init will take you through a series of JSON property names to define through command-line prompts, including your app's name, version number, description, and more. The name and version properties are required, and your Node.js package will not install without them being defined. Several of the properties will have a default value given within parentheses in the prompt so that you may simply hit Enter to continue. Other properties will simply allow you to hit Enter with a blank entry and will not be saved to the package.json file, or will be saved with a blank value:

name: (my-app)
version: (1.0.0)
description:
entry point: (index.js)

The entry point prompt will be defined as the main property in package.json and is not necessary unless you are developing a Node.js application. In our case, we can forgo this field. The npm init command may in fact force you to save the main property, so you will have to edit package.json afterward to remove it; however, that field will have no effect on your web app.

You may also choose to create the package.json file manually using a text editor, if you know the appropriate structure to employ. Whichever method you choose, your initial version of the package.json file should look similar to the following example:

{
    "name": "my-app",
    "version": "1.0.0",
    "author": "Philip Klauzinski",
    "license": "MIT",
    "description": "My JavaScript single page application."
}

If you want your project to be private and want to ensure that it does not accidentally get published to the NPM registry, you may want to add the private property to your package.json file and set it to true. Additionally, you may remove some properties that only apply to a registered package:

{
    "name": "my-app",
    "author": "Philip Klauzinski",
    "description": "My JavaScript single page application.",
    "private": true
}

Once you have your package.json file set up the way you like it, you can begin installing Node.js packages locally for your app. This is where the importance of dependencies begins to surface.

NPM dependencies

There are three types of dependencies that can be defined for any Node.js project in your package.json file: dependencies, devDependencies, and peerDependencies. For the purpose of building a web-based SPA, you will only need to use the devDependencies declaration. devDependencies are those that are required for developing your application, but not required for its production environment or for simply running it. If other developers want to contribute to your Node.js application, they will need to run npm install from the command line to set up the proper development environment. For information on the other types of dependencies, see docs.npmjs.com.

When adding devDependencies to your package.json file, the command line again comes to the rescue. Let's use the installation of Browserify as an example:

$ npm install browserify --save-dev

This will install Browserify locally and save it, along with its version range, to the devDependencies object in your package.json file.
Once installed, your package.json file should look similar to the following example:

{
    "name": "my-app",
    "version": "1.0.0",
    "author": "Philip Klauzinski",
    "license": "MIT",
    "devDependencies": {
        "browserify": "^12.0.1"
    }
}

The devDependencies object will store each package as a key-value pair, in which the key is the package name and the value is the version number or version range. Node.js uses semantic versioning, where the three digits of the version number represent MAJOR.MINOR.PATCH. For more information on semantic version formatting, see semver.org.

Updating your development dependencies

You will notice that the version number of the installed package is preceded by a caret (^) symbol by default. This means that package updates will only allow patch and minor updates for versions above 1.0.0; for example, ^12.0.1 allows any version from 12.0.1 up to, but not including, 13.0.0. This is meant to prevent major version changes from breaking your dependency chain when updating your packages to the latest versions.

To update your devDependencies and save the new version numbers, enter the following from the command line:

$ npm update --save-dev

Alternatively, you can use the -D option as a shortcut for --save-dev:

$ npm update -D

To update all globally installed NPM packages to their latest versions, run npm update with the -g option:

$ npm update -g

For more information on semantic versioning within NPM, see docs.npmjs.com/misc/semver.

Now that you have NPM set up and know how to install your development dependencies, you can move on to installing Bower.

Bower

Bower is a package manager for frontend web assets and libraries. You will use it to maintain your frontend stack and control version chains for libraries such as jQuery, AngularJS, and any other components necessary to your app's web interface.

Installing Bower

Bower is also a Node.js package, so you will install it using NPM, much like you did with the Browserify example installation in the previous section, but this time you will be installing the package globally. This will allow you to run bower from the command line anywhere on your system without having to install it locally for each project:

$ npm install -g bower

You can alternatively install Bower locally as a development dependency so that you may maintain different versions of it for different projects on the same system, but this is generally not necessary:

$ npm install bower --save-dev

Next, check that Bower is properly installed by querying its version from the command line:

$ bower -v

Bower also requires the Git version control system (VCS) to be installed on your system in order to work with packages. This is because Bower communicates directly with GitHub for package management data. If you do not have Git installed on your system, you can find instructions for Linux, Mac, and Windows at git-scm.com.

Configuring your bower.json file

The process of setting up your bower.json file is comparable to that of the package.json file for NPM. It uses the same JSON format, has both dependencies and devDependencies, and can also be created automatically:

$ bower init

Once you type bower init from the command line, you will be prompted to define several properties, with some defaults given within parentheses:

? name: my-app
? version: 0.0.0
? description: My app description.
? main file: index.html
? what types of modules does this package expose? globals
? keywords: my, app, keywords
? authors: Philip Klauzinski
? license: MIT
? homepage: http://gui.ninja
? set currently installed components as dependencies? No
? add commonly ignored files to ignore list? Yes
? would you like to mark this package as private which prevents it from being accidentally published to the registry? Yes

These questions may vary depending on the version of Bower you install. Most properties in the bower.json file are not necessary unless you are publishing your project to the Bower registry, as indicated in the final prompt. You will most likely want to mark your package as private unless you plan to register it and allow others to download it as a Bower package.

Once you have created the bower.json file, you can open it in a text editor and change or remove any properties you wish. It should look something like the following example:

{
    "name": "my-app",
    "version": "0.0.0",
    "authors": [
        "Philip Klauzinski"
    ],
    "description": "My app description.",
    "main": "index.html",
    "moduleType": [
        "globals"
    ],
    "keywords": [
        "my",
        "app",
        "keywords"
    ],
    "license": "MIT",
    "homepage": "http://gui.ninja",
    "ignore": [
        "**/.*",
        "node_modules",
        "bower_components",
        "test",
        "tests"
    ],
    "private": true
}

If you wish to keep your project private, you can reduce your bower.json file to two properties before continuing:

{
    "name": "my-app",
    "private": true
}

Once you have the initial version of your bower.json file set up the way you like it, you can begin installing components for your app.

Bower components location and the .bowerrc file

Bower will install components into a directory named bower_components by default. This directory will be located directly under the root of your project. If you wish to install your Bower components under a different directory name, you must create a local system file named .bowerrc and define the custom directory name there:

{
    "directory": "path/to/my_components"
}

An object with only a single directory property name is all that is necessary to define a custom location for your Bower components. There are many other properties that can be configured within a .bowerrc file. For more information on configuring Bower, see bower.io/docs/config/.

Bower dependencies

Bower also allows you to define both the dependencies and devDependencies objects, like NPM. The distinction with Bower, however, is that the dependencies object will contain the components necessary for running your app, while the devDependencies object is reserved for components that you might use for testing, transpiling, or anything that does not need to be included in your frontend stack.

Bower packages are managed using the bower command from the CLI. This is a user command, so it does not require superuser (sudo) permissions. Let's begin by installing jQuery as a frontend dependency for your app:

$ bower install jquery --save

The --save option on the command line will save the package and version number to the dependencies object in bower.json. Alternatively, you can use the -S option as a shortcut for --save:

$ bower install jquery -S

Next, let's install the Mocha JavaScript testing framework as a development dependency:

$ bower install mocha --save-dev

In this case, we will use --save-dev on the command line to save the package to the devDependencies object instead.
Your bower.json file should now look similar to the following example:

{
    "name": "my-app",
    "private": true,
    "dependencies": {
        "jquery": "~2.1.4"
    },
    "devDependencies": {
        "mocha": "~2.3.4"
    }
}

Alternatively, you can use the -D option as a shortcut for --save-dev:

$ bower install mocha -D

You will notice that the package version numbers are preceded by the tilde (~) symbol by default, in contrast to the caret (^) symbol, as is the case with NPM. The tilde serves as a more stringent guard against package version updates. With a MAJOR.MINOR.PATCH version number, running bower update will only update to the latest patch version; for example, ~2.1.4 allows any version from 2.1.4 up to, but not including, 2.2.0. If a version number is composed of only the major and minor versions, bower update will update the package to the latest minor version.

Searching the Bower registry

All registered Bower components are indexed and searchable through the command line. If you don't know the exact name of a component you wish to install, you can perform a search to retrieve a list of matching names. Most components will have a list of keywords within their bower.json file so that you can more easily find the package without knowing the exact name. For example, you may want to install PhantomJS for headless browser testing:

$ bower search phantomjs

The list returned will include any package with phantomjs in its name or within its keywords list:

phantom git://github.com/ariya/phantomjs.git
dt-phantomjs git://github.com/keesey/dt-phantomjs
qunit-phantomjs-runner git://github.com/jonkemp/...
parse-cookie-phantomjs git://github.com/sindresorhus/...
highcharts-phantomjs git://github.com/pesla/highcharts-phantomjs.git
mocha-phantomjs git://github.com/metaskills/mocha-phantomjs.git
purescript-phantomjs git://github.com/cxfreeio/purescript-phantomjs.git

You can see from the returned list that the correct package name for PhantomJS is in fact phantom, not phantomjs. You can then proceed to install the package now that you know the correct name:

$ bower install phantom --save-dev

Now you have Bower installed and know how to manage your frontend web components and development tools, but how do you integrate them into your SPA? This is where Grunt comes in.

Summary

Now that you have learned to set up an optimal development environment with NPM and to supply it with frontend dependencies using Bower, it's time to start learning more about building a real app.

Resources for Article:

Further resources on this subject:

API with MongoDB and Node.js [article]
Tips & Tricks for Ext JS 3.x [article]
Responsive Visualizations Using D3.js and Bootstrap [article]