How-To Tutorials - Web Development

Setting up a Complete Django E-commerce store in 30 minutes

Packt
21 May 2010
7 min read
In order to demonstrate Django's rapid development potential, we will begin by constructing a simple but fully featured e-commerce store. The goal is to be up and running with a product catalog and products for sale, including a simple payment processing interface, in about half an hour. If this seems ambitious, remember that Django offers a lot of built-in shortcuts for the most common web-related development tasks. We will be taking full advantage of these, and there will be side discussions of their general use. In addition to building our starter storefront, this article aims to demonstrate some other Django tools and techniques. In this article by Jesse Legg, author of Django 1.2 e-commerce, we will:

- Create our Django Product model to take advantage of the automatic admin tool
- Build a flexible but easy-to-use categorization system, to better organize our catalog of products
- Utilize Django's generic view framework to expose a quick set of views on our catalog data
- Finally, create a simple template for selling products through the Google Checkout API

Before we begin, let's take a moment to check our project setup. Our project layout includes two directories: one for files specific to our personal project (settings, URLs, and so on), and the other for our collection of e-commerce Python modules (coleman). This latter location is where the bulk of the code will live. If you have downloaded the source code from the Packt website, the contents of the archive download represent everything in this second location.

Designing a product catalog

The starting point of our e-commerce application is the product catalog. In the real world, businesses may produce multiple catalogs for mutually exclusive or overlapping subsets of their products. Some examples are: fall and spring catalogs, catalogs based on a genre or sub-category of product such as catalogs for differing kinds of music (for example, rock versus classical), and many other possibilities. In some cases a single catalog may suffice, but allowing for multiple catalogs is a simple enhancement that will add flexibility and robustness to our application.

As an example, we will imagine a fictitious food and beverage company, CranStore.com, that specializes in cranberry products: cranberry drinks, food, and desserts. In addition, to promote tourism at their cranberry bog, they sell numerous gift items, including t-shirts, hats, mouse pads, and the like. We will use this business to illustrate examples as they relate to the online store we are building.

We will begin by defining a catalog model called Catalog. The basic model structure will look like this:

```python
class Catalog(models.Model):
    name = models.CharField(max_length=255)
    slug = models.SlugField(max_length=150)
    publisher = models.CharField(max_length=300)
    description = models.TextField()
    pub_date = models.DateTimeField(default=datetime.now)
```

This is potentially the simplest model we will create. It contains only five, very simple fields. But it is a good starting point for a short discussion about Django model design. Notice that we have not included any relationships to other models here. For example, there is no products ManyToManyField.
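Before weighing that design decision, here is a quick look at how such a model is used from Django's ORM. This is an illustrative sketch only; the field values are invented for the CranStore example, and the import path assumes the coleman package mentioned above:

```python
# Illustrative usage sketch -- not code from the article.
from datetime import datetime
from coleman.models import Catalog  # assumes the models live in the coleman package

catalog = Catalog.objects.create(
    name="Fall Collection",
    slug="fall-collection",
    publisher="CranStore.com",
    description="Seasonal cranberry products and gifts.",
    pub_date=datetime.now(),
)
print(Catalog.objects.count())  # 1
```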
New Django developers tend to overlook simple design decisions such as the one shown previously, but the ramifications are quite important. The first reason for this design is a purely practical one. Using Django's built-in admin tool can be a pleasure or a burden, depending on the design of your models. If we were to include a products field in the Catalog design, it would be a ManyToManyField represented in the admin as an enormous multiple-select HTML widget. This is practically useless in cases where there could be thousands of possible selections. If, instead, we attach a ForeignKey to Catalog on a Product model (which we will build shortly), we instantly increase the usability of Django's automatic admin tool. Instead of a select box where we must shift-click to choose multiple products, we have a much simpler HTML drop-down interface with significantly fewer choices. This should ultimately increase the usability of the admin for our users.

For example, CranStore.com sells lots of t-shirts during the fall, when cranberries are ready to harvest and tourism spikes. They may wish to run a special catalog of touristy products on their website during this time. For the rest of the year, they sell a smaller selection of items online. The developers at CranStore create two catalogs: one is named Fall Collection and the other is called Standard Collection. When creating product information, the marketing team can decide which catalog an individual product belongs to by simply selecting it from the product editing page. This is more intuitive than selecting individual products out of a giant list of all products from the catalog admin page.

Secondly, designing the Catalog model this way prevents potential "bloat" from creeping into our models. Imagine that CranStore decides to start printing paper versions of their catalogs and mailing them to a subscriber list. This would be a second potential ManyToManyField on our Catalog model, a field called subscribers. As you can see, this pattern could repeat with each new feature CranStore decides to implement. By keeping models as simple as possible, we prevent all kinds of needless complexity. In addition, we adhere to a basic Django design principle, that of "loose coupling".

At the database level, the tables Django generates will be very similar regardless of where our ManyToManyField lives. Usually the only difference will be in the table name. Thus it generally makes more sense to focus on the practical aspects of Django model design. Django's excellent reverse relationship feature also allows a great deal of flexibility when it comes time to use the ORM to access our data.

Model design is difficult, and planning up front can pay great dividends later. Ideally, we want to take advantage of the automatic, built-in features that make Django so great. The admin tool is a huge part of this. Anyone who has had to build a CRUD interface by hand so that non-developers can manage content should recognize the power of this feature. In many ways it is Django's "killer app".
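None of this works until the model is registered with the admin. The article does not show that step in this excerpt, so the following is a minimal sketch of a typical admin.py; the import path is an assumption:

```python
# admin.py -- hypothetical sketch, not shown in the original article.
from django.contrib import admin
from coleman.models import Catalog

# A single registration call is enough to get full create/edit/delete
# screens for Catalog in Django's automatic admin.
admin.site.register(Catalog)
```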
Creating the product model

Finally, let's implement our product model. We will start with a very basic set of fields that represent common and shared properties amongst all the products we're likely to sell: things like a picture of the item, its name, a short description, and pricing information.

```python
class Product(models.Model):
    name = models.CharField(max_length=300)
    slug = models.SlugField(max_length=150)
    description = models.TextField()
    photo = models.ImageField(upload_to='product_photo', blank=True)
    manufacturer = models.CharField(max_length=300, blank=True)
    price_in_dollars = models.DecimalField(max_digits=6, decimal_places=2)
```

Most e-commerce applications will need to capture many additional details about their products. We will add the ability to create arbitrary sets of attributes and add them as details to our products later in this article. For now, let's assume that these six fields are sufficient.

A few notes about this model: first, we have used a DecimalField to represent the product's price. Django makes it relatively simple to implement a custom field, and such a field may be appropriate here. But for now we'll keep it simple and use a plain, built-in DecimalField to represent currency values. Notice, too, the way we're storing the manufacturer information as a plain CharField. Depending on your application, it may be beneficial to build a Manufacturer model and convert this field to a ForeignKey (a hypothetical version is sketched at the end of this section). Lastly, you may have realized by now that there is no connection to a Catalog model, either by a ForeignKey or a ManyToManyField. Earlier we discussed the placement of this field in terms of whether it belonged in the Catalog or the Product model and decided, for several reasons, that the Product was the better place. We will be adding a ForeignKey to our Product model, but not directly to the Catalog. In order to support categorization of products within a catalog, we will be creating a new model in the next section and using that as the connection point for our products.
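As promised above, here is what the hypothetical Manufacturer normalization might look like. This is a sketch of the alternative the article mentions, not code from the article; note that recent Django versions would also require an on_delete argument on the ForeignKey:

```python
# Hypothetical alternative: a separate Manufacturer model instead of a CharField.
class Manufacturer(models.Model):
    name = models.CharField(max_length=300)

class Product(models.Model):
    # ...the other Product fields shown above...
    manufacturer = models.ForeignKey(Manufacturer, blank=True, null=True)
```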

Webhooks in Slack

Packt
01 Jun 2016
11 min read
In this article by Paul Asjes, the author of the book Building Slack Bots, we'll have a look at webhooks in Slack.

Slack is a great way of communicating in your work environment: it's easy to use, intuitive, and highly extensible. Did you know that you can make Slack do even more for you and your team by developing your own bots? This article will teach you how to implement incoming and outgoing webhooks for Slack, supercharging your Slack team into even greater levels of productivity. The programming language we'll use here is JavaScript; however, webhooks can be programmed with any language capable of HTTP requests.

Webhooks

First, let's talk basics: a webhook is a way of altering or augmenting a web application through HTTP methods. Webhooks allow us to post messages to and from Slack using regular HTTP requests with JSON payloads. What makes a webhook a bot is its ability to post messages to Slack as if it were a bot user. These webhooks can be divided into incoming and outgoing webhooks, each with their own purposes and uses.

Incoming webhooks

An example of an incoming webhook is a service that relays information from an external source to a Slack channel without being explicitly requested, such as the GitHub Slack integration.

The GitHub integration posts messages about repositories we are interested in

In the preceding screenshot, we see how a message was sent to Slack after a new branch was made on a repository this team was watching. This data wasn't explicitly requested by a team member but automatically sent to the channel as a result of the incoming webhook. Other popular examples include Jenkins integration, where infrastructure changes can be monitored in Slack (for example, if a server watched by Jenkins goes down, a warning message can be posted immediately to a relevant Slack channel).

Let's start with setting up an incoming webhook that sends a simple "Hello world" message. First, navigate to the Custom Integration Slack team page, as shown in the following screenshot (https://my.slack.com/apps/build/custom-integration).

The various flavors of custom integrations

Select Incoming WebHooks from the list and then select which channel you'd like your webhook app to post messages to.

Webhook apps will post to a channel of your choosing

Once you've clicked on the Add Incoming WebHooks integration button, you will be presented with an options page, which allows you to customize your integration a little further.

Names, descriptions, and icons can be set from this menu

Set a customized icon for your integration (for this example, the wave emoji was used) and copy down the webhook URL, which has the following format:

https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX

This generated URL is unique to your team, meaning that any JSON payloads sent via this URL will only appear in your team's Slack channels.

Now, let's throw together a quick test of our incoming webhook in Node. Start a new Node project (remember: you can use npm init to create your package.json file) and install the superagent AJAX library by running the following command in your terminal:

```
npm install superagent --save
```

Create a file named index.js and paste the following JavaScript code within it:

```javascript
const WEBHOOK_URL = [YOUR_WEBHOOK_URL];
const request = require('superagent');

request
  .post(WEBHOOK_URL)
  .send({
    text: 'Hello! I am an incoming Webhook bot!'
  })
  .end((err, res) => {
    console.log(res);
  });
```
Remember to replace [YOUR_WEBHOOK_URL] with your newly generated URL, and then run the program by executing the following command:

```
nodemon index.js
```

Two things should happen now: firstly, a long response should be logged in your terminal, and secondly, you should see a message like the following in the Slack client:

The incoming webhook equivalent of "hello world"

The res object we logged in our terminal is the response from the AJAX request. Taking the form of a large JavaScript object, it displays information about the HTTP POST request we made to our webhook URL. Looking at the message received in the Slack client, notice how the name and icon are the same ones we set in our integration setup on the team admin site. Remember that the default icon, name, and channel are used if none are provided, so let's see what happens when we change that. Replace your request AJAX call in index.js with the following:

```javascript
request
  .post(WEBHOOK_URL)
  .send({
    username: "Incoming bot",
    channel: "#general",
    icon_emoji: ":+1:",
    text: 'Hello! I am different from the previous bot!'
  })
  .end((err, res) => {
    console.log(res);
  });
```

Save the file, and nodemon will automatically restart the program. Switch over to the Slack client and you should see a message like the following pop up in your #general channel:

New name, icon, and message

In place of icon_emoji, you could also use icon_url to link to a specific image of your choosing. If you wish your message to be sent only to one user, you can supply a username as the value for the channel property:

```javascript
channel: "@paul"
```

This will cause the message to be sent from within the Slackbot direct message. The message's icon and username will match either what you configured in the setup or what you set in the body of the POST request.

Finally, let's look at sending links in our integration. Replace the text property with the following and save index.js:

```javascript
text: 'Hello! Here is a fun link: <http://www.github.com|Github is great!>'
```

Slack will automatically parse any links it finds, whether they are in the http://www.example.com or www.example.com format. By enclosing the URL in angled brackets and using the | character, we can specify what we would like the URL to be shown as:

Formatted links are easier to read than long URLs

For more information on message formatting, visit https://api.slack.com/docs/formatting.

Note that as this is a custom webhook integration, we can change the name, icon, and channel of the integration. If we were to package the integration as a Slack app (an app installable by other teams), then it is not possible to override the default channel, username, and icon.

Incoming webhooks are triggered by external sources; an example would be when a new user signs up to your service or a product is sold. The goal of the incoming webhook is to provide information to your team that is easy to reach and comprehend. The opposite of this would be if you wanted users to get data out of Slack, which can be done via the medium of outgoing webhooks.
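As noted at the start of this article, webhooks can be programmed with any language capable of HTTP requests. Purely as an illustration, here is the same "hello world" payload sent from Python instead of Node; this sketch is not part of the original article and assumes the third-party requests package is installed:

```python
# Hypothetical Python equivalent of the superagent example above.
import requests

WEBHOOK_URL = "[YOUR_WEBHOOK_URL]"  # replace with your generated webhook URL

response = requests.post(WEBHOOK_URL, json={
    "text": "Hello! I am an incoming Webhook bot!"
})
print(response.status_code, response.text)
```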
Outgoing webhooks

Outgoing webhooks differ from the incoming variety in that they send data out of Slack to a service of your choosing, which in turn can respond with a message to the Slack channel. To set up an outgoing webhook, visit the custom integration page of your Slack team's admin page again (https://my.slack.com/apps/build/custom-integration) and this time select the Outgoing WebHooks option. On the next screen, be sure to select a channel, name, and icon. Notice how there is a target URL field to be filled in; we will fill this out shortly.

When an outgoing webhook is triggered in Slack, an HTTP POST request is made to the URL (or URLs, as you can specify multiple ones) you provide. So first, we need to build a server that can accept our webhook. In index.js, paste the following code:

```javascript
'use strict';
const http = require('http');

// create a simple server with node's built in http module
http.createServer((req, res) => {
  res.writeHead(200, {'Content-Type': 'text/plain'});

  // get the data embedded in the POST request
  req.on('data', (chunk) => {
    // chunk is a buffer, so first convert it to
    // a string and split it to make it more legible as an array
    console.log('Body:', chunk.toString().split('&'));
  });

  // create a response
  let response = JSON.stringify({
    text: 'Outgoing webhook received!'
  });

  // send the response to Slack as a message
  res.end(response);
}).listen(8080, '0.0.0.0');

console.log('Server running at http://0.0.0.0:8080/');
```

Notice how we require the http module despite not installing it with NPM. This is because the http module is a core Node dependency and is automatically included with your installation of Node.

In this block of code, we start a simple server on port 8080 and listen for incoming requests. In this example, we set our server to run at 0.0.0.0 rather than localhost. This is important, as Slack is sending a request to our server, so it needs to be accessible from the Internet. Setting the IP of your server to 0.0.0.0 tells Node to use your computer's network-assigned IP address. Therefore, if you set the IP of your server to 0.0.0.0, Slack can reach your server by hitting your IP on port 8080 (for example, http://123.456.78.90:8080).

If you are having trouble with Slack reaching your server, it is most likely because you are behind a router or firewall. To circumvent this issue, you can use a service such as ngrok (https://ngrok.com/). Alternatively, look at the port forwarding settings for your router or firewall.

Let's update our outgoing webhook settings accordingly:

The outgoing webhook settings, with a destination URL

Save your settings and run your Node app; test whether the outgoing webhook works by typing a message into the channel you specified in the webhook's settings. You should then see something like this in Slack:

We built a spam bot

Well, the good news is that our server is receiving requests and returning a message to send to Slack each time. The issue here is that we skipped over the Trigger Word(s) field in the webhook settings page. Without a trigger word, any message sent to the specified channel will trigger the outgoing webhook. This causes our webhook to be triggered by a message sent by the outgoing webhook in the first place, creating an infinite loop. To fix this, we could do one of two things:

- Refrain from returning a message to the channel when listening to all the channel's messages.
- Specify one or more trigger words to ensure we don't spam the channel.

Returning a message is optional yet encouraged to ensure a better user experience. Even a confirmation message such as "Message received!" is better than no message, as it confirms to the user that their message was received and is being processed. Let's therefore presume we prefer the second option and add a trigger word:

Trigger words keep our webhooks organized

Let's try that again, this time sending a message with the trigger word at the beginning of the message.
Restart your Node app and send a new message:

Our outgoing webhook app now functions a lot like our bots from earlier

Great, now switch over to your terminal and see what that message logged:

```
Body: [ 'token=KJcfN8xakBegb5RReelRKJng',
  'team_id=T000001',
  'team_domain=buildingbots',
  'service_id=34210109492',
  'channel_id=C0J4E5SG6',
  'channel_name=bot-test',
  'timestamp=1460684994.000598',
  'user_id=U0HKKH1TR',
  'user_name=paul',
  'text=webhook+hi+bot%21',
  'trigger_word=webhook' ]
```

This array contains the body of the HTTP POST request sent by Slack; in it, we have some useful data, such as the user's name, the message sent, and the team ID. We can use this data to customize the response or to perform some validation to make sure the user is authorized to use this webhook.

In our response, we simply sent back a "Message received" string; however, as with incoming webhooks, we can set our own username and icon. The channel cannot be different from the channel specified in the webhook's settings, however. The same restrictions apply when the webhook is not a custom integration. This means that if the webhook was installed as a Slack app for another team, it can only post messages as the username and icon specified in the setup screen.

An important thing to note is that webhooks, either incoming or outgoing, can only be set up in public channels. This is predominantly to discourage abuse and uphold privacy, as we've seen that it's simple to set up a webhook that can record all the activity on a channel.

Summary

In this article, you learned what webhooks are and how you can use them to get data in and out of Slack. You learned how to send messages as a bot user and how to interact with your users in the native Slack client.

Objects and Types in Documentum 6.5 Content Management Foundations

Packt
04 Jun 2010
11 min read
Objects

Documentum uses an object-oriented model to store information within the repository. Everything stored in the repository participates in this object model in some way. For example, a user, a document, and a folder are all represented as objects. An object stores data in its properties (also known as attributes) and has methods that can be used to interact with the object.

Properties

A content item stored in the repository has an associated object to store its metadata. Since metadata is stored in object properties, the terms metadata and properties are used interchangeably. For example, a document stored in the repository may have its title, subject, and keywords stored in the associated object. However, note that objects can exist in the repository without an associated content item. Such objects are sometimes referred to as contentless objects. For example, user objects and permission set objects do not have any associated content.

Each object property has a data type, which can be one of boolean, integer, string, double, time, or ID. A boolean value is true or false. A string value consists of text. A double value is a floating-point number. A time value represents a timestamp, including dates. An ID value represents an object ID that uniquely identifies an object in the repository. Object IDs are discussed in detail later in this article.

A property can be single-valued or repeating. Each single-valued property holds one value. For example, the object_name property of a document contains one value and it is of type string. This means that the document can only have one name. On the other hand, keywords is a repeating property and can have multiple string values. For example, a document may have object_name='LoanApp_1234567891.txt' and keywords='John Doe','application','1234567891'. The following figure shows a visual representation of this object. Typically, only properties are shown on the object, while methods are shown when needed. Furthermore, only the properties relevant to the discussion are shown. Objects will be illustrated in this manner throughout the article series.

Methods

Methods are operations that can be performed on an object. An operation often alters some properties of the object. For example, the checkout method can be used to check out an object. Checking out an object sets the r_lock_owner property with the name of the user performing the checkout. Methods are usually invoked programmatically using Documentum Foundation Classes (DFC), though they can be indirectly invoked using the API. In general, Documentum Query Language (DQL) cannot be used to invoke arbitrary methods on objects. DQL is discussed later in this article.

Note that the term method may be used in two different contexts within Documentum. A method as a defined operation on an object type is usually invoked programmatically through DFC. There is also the concept of a method representing code that can be invoked via a job, workflow activity, or a lifecycle operation. This qualification will be made explicit when the context is not clear.

Working with objects

We used Webtop for performing various operations on documents, where the term document referred to an object with content. Some of these operations are not specific to content and apply to objects in general. For example, checkout and checkin can be performed on contentless objects as well. On the other hand, import, export, and renditions deal specifically with content.
Talking specifically about operations on metadata, we can view, modify, and export object properties using Webtop.

Viewing and editing properties

Using Webtop, object properties can be viewed using the View | Properties menu item, the shortcut P, or the right-click context menu. The following screenshot shows the properties of the example object discussed earlier. Note that the same screen can be used to modify and save the properties as well. Multiple objects can be selected before viewing properties. In this case, a special dialog shows the common properties for the selected objects, as shown in the following figure. Any changes made on this dialog are applied to all the selected objects.

On the properties screen, single-valued properties can be edited directly, while repeating properties provide a separate screen for editing through Edit links. Some properties cannot be modified by users at any time. Other properties may not be editable because object security prevents it or because the object is immutable.

Object immutability

Certain operations on an object mark it as immutable, which means that its properties cannot be changed. An object is marked immutable by setting r_immutable_flag to true. Content Server prevents changes to the content and metadata of an immutable object, with the exception of a few special attributes that relate to the operations that are still allowed on immutable objects. For example, users can set a version label on the object, link the object to a folder, unlink it from a folder, delete it, change its lifecycle, and perform one of the lifecycle operations such as promote/demote/suspend/resume. The attributes affected by the allowed operations are allowed to be updated.

An object is marked immutable in the following situations:

- When an object is versioned or branched, it becomes an old version and is marked immutable.
- An object can be frozen, which makes it immutable and imposes some other restrictions. Some virtual document operations can freeze the involved objects.
- A retention policy can make the documents under its control immutable.

Certain operations, such as unfreezing a document, can reset the immutability flag, making the object changeable again.

Exporting properties

Metadata can be exported from repository lists, such as folder contents and search results. Property values of the objects are exported and saved as a .csv (comma-separated values) file, which can be opened in Microsoft Excel or in a text editor. Metadata export can be performed using the Tools | Export to CSV menu item or the right-click context menu. Before exporting the properties, the user is able to choose the properties to export from the available ones.

Object types

Objects in a repository may represent different kinds of entities: one object may represent a workflow while another object may represent a document, for example. As a result, these objects may have different properties and methods. Every time Content Server creates an object, it needs to determine the properties and methods that the object is going to possess. This information comes from an object type (also referred to as type).

The term attribute is synonymous with property and the two are used interchangeably. It is common to use the term attribute when talking about a property name and to use property when referring to its value. We will use a dot notation to indicate that an attribute belongs to an object or a type, for example, objectA.title or dm_sysobject.object_name.
This notation is succinct and unambiguous and is consistent with many programming languages.

An object type is a template for creating objects. In other words, an object is an instance of its type. A Documentum repository contains many predefined types and allows the addition of new user-defined types (also known as custom types). The most commonly used predefined object type for storing documents in the repository is dm_document. We have already seen how folders are used to organize documents. Folders are stored as objects of type dm_folder. A cabinet is a special kind of folder that does not have a parent folder and is stored as an object of type dm_cabinet. Users are represented as objects of type dm_user and a group of users is represented as an object of type dm_group. Workflows use a process definition object of type dm_process, while the definition of a lifecycle is stored in an object of type dm_policy. The following figure shows some of these types.

Just like everything else in the repository, a type is also represented as an object, which holds structural information about the type. This object is of type dm_type and stores information such as the name of the type, the name of its supertype, and details about the attributes in the type. The following figure shows an object of type dm_document and an object of type dm_type representing dm_document. It also indicates how the type hierarchy information is stored in the object of type dm_type.

The types present in the repository can be viewed using Documentum Administrator (DA). The following screenshot shows some attributes for the type dm_sysobject. This screen provides controls to scroll through the attributes when a large number of attributes are present. The Info tab provides information about the type other than the attributes.

While the obvious use of a type is to define the structure and behavior of one kind of object, there is another very important utility of types. A type can be used to refer to all the objects of that type as a set. For example, queries restrict their scope by specifying a type, where only the objects of that type are considered for matches. In our example scenario, the loan officer may want to search for all loan applications assigned to her. This query will be straightforward if there is an object type for loan applications (a brief illustrative query appears at the end of this article). Queries are introduced later in this article. As another example, audit events can be restricted to a particular object type, resulting in only the objects of this type being audited.

Type names and property names

Each object type uses an internal type name, such as dm_document, which is used for uniquely identifying the type within queries and application code. Each type also has a label, which is a user-friendly name often used by applications for displaying information to the end users. For example, the type dm_document has the label Document. Conventionally, internal names of predefined (defined by Documentum for Content Server or other client products) types start with dm, as described here:

- dm_: (general) represents commonly used object types such as dm_document, which is generally used for storing documents.
- dmr_: (read only) represents read-only object types such as dmr_content, which stores information about a content file.
- dmi_: (internal) represents internal object types such as dmi_workitem, which stores information about a task.
- dmc_: (client) represents object types supporting Documentum client applications. For example, dmc_calendar objects are used by Collaboration Services for holding calendar events.

Just like an object type, each property also has an internal name and a label. For example, the label for the property object_name is Name. There are some additional conventions for internal names for properties. These names may begin with the following prefixes:

- r_: (read only) normally indicates that the property is controlled by the Content Server and cannot be modified by users or applications. For example, r_object_id represents the unique ID for the object. On the other hand, r_version_label is an interesting property. It is a repeating property and has at least one value supplied by the Content Server, while others may be supplied by users or applications.
- i_: (internal) is similar to r_ except that this property is used internally by the Content Server and normally not seen by users and applications. i_chronicle_id binds all the versions together into a version tree and is managed by the Content Server.
- a_: (application) indicates that this property is intended to be used by applications and can be modified by applications and users. For example, the format of a document is stored in a_content_type. This property helps Webtop launch an appropriate desktop application to open a document. The other three prefixes can also be considered to imply system or non-application attributes, in general.
- _: (computed) indicates that this property is not stored in the repository and is computed by Content Server as needed. These properties are also normally read-only for applications. For example, each object has a property called _changed, which indicates whether it has been changed since it was last saved. Many of the computed properties are related to security and most are used for caching information in user sessions.
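Queries are introduced later in this article, but the type-scoping idea mentioned earlier can be sketched now. Assuming a hypothetical custom type named loan_application with a hypothetical assigned_to attribute (neither is defined in this article), a DQL query scoped to that type might look like this:

```sql
SELECT r_object_id, object_name
FROM loan_application
WHERE assigned_to = 'loan_officer_1'
```

Because the FROM clause names a type, only objects of that type (and its subtypes) are considered for matches, which is exactly the scoping behavior described above.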

Playing Tic-Tac-Toe against an AI

Packt
11 May 2016
30 min read
In this article by Ivo Gabe de Wolff, author of the book TypeScript Blueprints, we will build a game in which the computer plays well. The game is called Tic-Tac-Toe. It is played by two players on a grid, usually three by three. The players try to place their symbols three in a row (horizontal, vertical, or diagonal). The first player places crosses, the second player places circles. If the board is full and no one has three symbols in a row, it is a draw.

The game is usually played on a three by three grid and the target is to have three symbols in a row. To make the application more interesting, we will make the dimensions and the row length variable. We will not create a graphical interface for this application. We will only build the game mechanics and the artificial intelligence (AI). An AI is a player controlled by the computer. If implemented correctly, the computer should never lose on a standard three by three grid. When the computer plays against the computer, it will result in a draw. We will also write various unit tests for the application.

We will build the game as a command line application. That means you can play the game in a terminal and interact with it only through text input:

```
It's player one's turn!
Choose one out of these options:
1   X|X| 
    -+-+-
     |O| 
    -+-+-
     | | 

2   X| |X
    -+-+-
     |O| 
    -+-+-
     | | 

3   X| | 
    -+-+-
    X|O| 
    -+-+-
     | | 

4   X| | 
    -+-+-
     |O|X
    -+-+-
     | | 

5   X| | 
    -+-+-
     |O| 
    -+-+-
    X| | 

6   X| | 
    -+-+-
     |O| 
    -+-+-
     |X| 

7   X| | 
    -+-+-
     |O| 
    -+-+-
     | |X
```

Creating the project structure

We will locate the source files in lib and the tests in lib/test. We use gulp to compile the project and AVA to run tests. We can install the dependencies of our project with NPM:

```
npm init -y
npm install ava gulp gulp-typescript --save-dev
```

In gulpfile.js, we configure gulp to compile our TypeScript files:

```javascript
var gulp = require("gulp");
var ts = require("gulp-typescript");

var tsProject = ts.createProject("./lib/tsconfig.json");

gulp.task("default", function() {
  return tsProject.src()
    .pipe(ts(tsProject))
    .pipe(gulp.dest("dist"));
});
```

Configure TypeScript

We can download type definitions for NodeJS with NPM:

```
npm install @types/node --save-dev
```

We must exclude browser files in TypeScript. In lib/tsconfig.json, we add the configuration for TypeScript:

```json
{
  "compilerOptions": {
    "target": "es6",
    "module": "commonjs"
  }
}
```

For applications that run in the browser, you will probably want to target ES5, since ES6 is not supported in all browsers. However, this application will only be executed in NodeJS, so we do not have such limitations. You have to use NodeJS 6 or later for ES6 support.

Adding utility functions

Since we will work a lot with arrays, we can use some utility functions. First, we create a function that flattens a two-dimensional array into a one-dimensional array:

```typescript
export function flatten<U>(array: U[][]) {
  return (<U[]>[]).concat(...array);
}
```

Next, we create a function that replaces a single element of an array with a specified value. We will use functional programming in this article again, so we must use immutable data structures. We can use map for this, since this function provides both the element and the index to the callback. With this index, we can determine whether that element should be replaced:

```typescript
export function arrayModify<U>(array: U[], index: number, newValue: U) {
  return array.map((oldValue, currentIndex) =>
    currentIndex === index ? newValue : oldValue);
}
```
We also create a function that returns a random integer below a certain upper bound:

```typescript
export function randomInt(max: number) {
  return Math.floor(Math.random() * max);
}
```

We will use these functions in the next sections.

Creating the models

In lib/model.ts, we will create the model for our game. The model should contain the game state. We start with the player. The game is played by two players. Each field of the grid contains the symbol of a player or no symbol. We will model the grid as a two-dimensional array, where each field can contain a player:

```typescript
export type Grid = Player[][];
```

A player is either Player1, Player2, or no player:

```typescript
export enum Player {
  Player1 = 1,
  Player2 = -1,
  None = 0
}
```

We have given these members values so we can easily get the opponent of a player:

```typescript
export function getOpponent(player: Player): Player {
  return -player;
}
```

We create a type to represent an index of the grid. Since the grid is two-dimensional, such an index requires two values:

```typescript
export type Index = [number, number];
```

We can use this type to create two functions that get or update one field of the grid. We use functional programming in this article, so we will not modify the grid. Instead, we return a new grid with one field changed:

```typescript
export function get(grid: Grid, [rowIndex, columnIndex]: Index) {
  const row = grid[rowIndex];
  if (!row) return undefined;
  return row[columnIndex];
}

export function set(grid: Grid, [row, column]: Index, value: Player) {
  return arrayModify(grid, row,
    arrayModify(grid[row], column, value)
  );
}
```

Showing the grid

To show the game to the user, we must convert a grid to a string. First, we will create a function that converts a player to a string, then a function that uses the previous function to show a row, and finally a function that uses these functions to show the complete grid. The string representation of a grid should have lines between the fields. We create these lines with standard characters (+, -, and |). This gives the following result:

```
X|X|O
-+-+-
 |O| 
-+-+-
X| | 
```

To convert a player to a string, we must get his symbol. For Player1, that is a cross and for Player2, a circle. If a field of the grid contains no symbol, we return a space to keep the grid aligned:

```typescript
function showPlayer(player: Player) {
  switch (player) {
    case Player.Player1:
      return "X";
    case Player.Player2:
      return "O";
    default:
      return " ";
  }
}
```

We can use this function to show the tokens of all fields in a row. We add a separator between these fields:

```typescript
function showRow(row: Player[]) {
  return row.map(showPlayer).reduce((previous, current) => previous + "|" + current);
}
```

Since we must do the same later on, but with a different separator, we create a small helper function that does this concatenation based on a separator:

```typescript
const concat = (separator: string) => (left: string, right: string) => left + separator + right;
```

This function requires the separator and returns a function that can be passed to reduce. We can now use this function in showRow:

```typescript
function showRow(row: Player[]) {
  return row.map(showPlayer).reduce(concat("|"));
}
```

We can also use this helper function to show the entire grid. First we must compose the separator, which is almost the same as showing a single row. Next, we can show the grid with this separator:

```typescript
export function showGrid(grid: Grid) {
  const separator = "\n" + grid[0].map(() => "-").reduce(concat("+")) + "\n";
  return grid.map(showRow).reduce(concat(separator));
}
```

Creating operations on the grid

We will now create some functions that perform operations on the grid.
These functions check whether the board is full, whether someone has won, and what options a player has. We can check whether the board is full by looking at all fields. If no field exists that has no symbol on it, the board is full, as every field has a symbol:

```typescript
export function isFull(grid: Grid) {
  for (const row of grid) {
    for (const field of row) {
      if (field === Player.None) return false;
    }
  }
  return true;
}
```

To check whether a user has won, we must get a list of all horizontal, vertical, and diagonal rows. For each row, we can check whether it consists of a certain amount of the same symbols in a row. We store the grid as an array of the horizontal rows, so we can easily get those rows. We can also get the vertical rows relatively easily:

```typescript
function allRows(grid: Grid) {
  return [
    ...grid,
    ...grid[0].map((field, index) => getVertical(index)),
    ...
  ];

  function getVertical(index: number) {
    return grid.map(row => row[index]);
  }
}
```

Getting a diagonal row requires some more work. We create a helper function that will walk on the grid from a start point, in a certain direction. We distinguish two different kinds of diagonals: a diagonal that goes to the lower-right and a diagonal that goes to the lower-left. For a standard three by three game, only two diagonals exist. However, a larger grid may have more diagonals. If the grid is 5 by 5, and the users should get three in a row, ten diagonals with a length of at least three exist: (0, 0) to (4, 4); (0, 1) to (3, 4); (0, 2) to (2, 4); (1, 0) to (4, 3); (2, 0) to (4, 2); (4, 0) to (0, 4); (3, 0) to (0, 3); (2, 0) to (0, 2); (4, 1) to (1, 4); and (4, 2) to (2, 4).

The diagonals that go toward the lower-right start at the first column or at the first horizontal row. Other diagonals start at the last column or at the first horizontal row. In this function, we will just return all diagonals, even if they only have one element, since that is easy to implement. We implement this with a function that walks the grid to find the diagonal. That function requires a start position and a step function. The step function increments the position for a specific direction:

```typescript
function allRows(grid: Grid) {
  return [
    ...grid,
    ...grid[0].map((field, index) => getVertical(index)),
    ...grid.map((row, index) => getDiagonal([index, 0], stepDownRight)),
    ...grid[0].slice(1).map((field, index) => getDiagonal([0, index + 1], stepDownRight)),
    ...grid.map((row, index) => getDiagonal([index, grid[0].length - 1], stepDownLeft)),
    ...grid[0].slice(1).map((field, index) => getDiagonal([0, index], stepDownLeft))
  ];

  function getVertical(index: number) {
    return grid.map(row => row[index]);
  }

  function getDiagonal(start: Index, step: (index: Index) => Index) {
    const row: Player[] = [];
    for (let index = start; get(grid, index) !== undefined; index = step(index)) {
      row.push(get(grid, index));
    }
    return row;
  }
  function stepDownRight([i, j]: Index): Index {
    return [i + 1, j + 1];
  }
  function stepDownLeft([i, j]: Index): Index {
    return [i + 1, j - 1];
  }
  function stepUpRight([i, j]: Index): Index {
    return [i - 1, j + 1];
  }
}
```

To check whether a row contains a certain amount of the same elements in a row, we will create a function with some nice looking functional programming. The function requires the array, the player, and the index at which the checking starts. That index will usually be zero, but during recursion we can set it to a different value. originalLength contains the original length that a sequence should have. The last parameter, length, will have the same value in most cases, but in recursion we will change the value.
We start with some base cases. Every row contains a sequence of zero symbols, so we can always return true in such a case:

```typescript
function isWinningRow(row: Player[], player: Player, index: number,
    originalLength: number, length: number): boolean {
  if (length === 0) {
    return true;
  }
```

If the row does not contain enough elements to form a sequence, the row will not have such a sequence and we can return false:

```typescript
  if (index + length > row.length) {
    return false;
  }
```

For other cases, we use recursion. If the current element contains a symbol of the provided player, this row forms a sequence if the next length - 1 fields contain the same symbol:

```typescript
  if (row[index] === player) {
    return isWinningRow(row, player, index + 1, originalLength, length - 1);
  }
```

Otherwise, the row should contain a sequence of the original length in some other position:

```typescript
  return isWinningRow(row, player, index + 1, originalLength, originalLength);
}
```

If the grid is large enough, a row could contain a long enough sequence after a sequence that was too short. For instance, XXOXXX contains a sequence of length three. This function handles these rows correctly with the parameters originalLength and length.

Finally, we must create a function that returns all possible steps that a player can take. To implement this function, we must first find all indices. We filter these indices down to those that reference an empty field. For each of these indices, we change the value of the grid to the specified player. This results in a list of options for the player:

```typescript
export function getOptions(grid: Grid, player: Player) {
  const rowIndices = grid.map((row, index) => index);
  const columnIndices = grid[0].map((column, index) => index);

  const allFields = flatten(rowIndices.map(
    row => columnIndices.map(column => <Index>[row, column])
  ));

  return allFields
    .filter(index => get(grid, index) === Player.None)
    .map(index => set(grid, index, player));
}
```

The AI will use this to choose the best option, and a human player will get a menu with these options.

Creating the grid

Before the game can be started, we must create an empty grid. We will write a function that creates an empty grid with the specified size:

```typescript
export function createGrid(width: number, height: number) {
  const grid: Grid = [];
  for (let i = 0; i < height; i++) {
    grid[i] = [];
    for (let j = 0; j < width; j++) {
      grid[i][j] = Player.None;
    }
  }
  return grid;
}
```

In the next section, we will add some tests for the functions that we have written. These functions work on the grid, so it will be useful to have a function that can parse a grid based on a string. We will separate the rows of a grid with a semicolon. Each row contains tokens for each field. For instance, "XXO; O ;X  " results in this grid:

```
X|X|O
-+-+-
 |O| 
-+-+-
X| | 
```

We can implement this by splitting the string into an array of lines. For each line, we split the line into an array of characters. We map these characters to a Player value:

```typescript
export function parseGrid(input: string) {
  const lines = input.split(";");
  return lines.map(parseLine);

  function parseLine(line: string) {
    return line.split("").map(parsePlayer);
  }
  function parsePlayer(character: string) {
    switch (character) {
      case "X": return Player.Player1;
      case "O": return Player.Player2;
      default: return Player.None;
    }
  }
}
```

In the next section we will use this function to write some tests.

Adding tests

We will use AVA to write tests for our application. Since the functions do not have side effects, we can easily test them. In lib/test/winner.ts, we test the findWinner function.
First, we check whether the function recognizes the winner in some simple cases:

```typescript
import test from "ava";
import { Player, parseGrid, findWinner } from "../model";

test("player winner", t => {
  t.is(findWinner(parseGrid("   ;XXX;   "), 3), Player.Player1);
  t.is(findWinner(parseGrid("   ;OOO;   "), 3), Player.Player2);
  t.is(findWinner(parseGrid("   ;   ;   "), 3), Player.None);
});
```

We can also test all possible three-in-a-row positions in the three by three grid. With this test, we can find out whether horizontal, vertical, and diagonal rows are checked correctly:

```typescript
test("3x3 winner", t => {
  const grids = [
    "XXX;   ;   ",
    "   ;XXX;   ",
    "   ;   ;XXX",
    "X  ;X  ;X  ",
    " X ; X ; X ",
    "  X;  X;  X",
    "X  ; X ;  X",
    "  X; X ;X  "
  ];
  for (const grid of grids) {
    t.is(findWinner(parseGrid(grid), 3), Player.Player1);
  }
});
```

We must also test that the function does not claim that someone won too often. In the next test, we validate that the function does not return a winner for grids that do not have a winner:

```typescript
test("3x3 no winner", t => {
  const grids = [
    "XXO;OXX;XOO",
    "   ;   ;   ",
    "XXO;   ;OOX",
    "X  ;X  ; X "
  ];
  for (const grid of grids) {
    t.is(findWinner(parseGrid(grid), 3), Player.None);
  }
});
```

Since the game also supports other dimensions, we should check these too. We check that all diagonals of a four by three grid are checked correctly, where the length of a sequence should be two:

```typescript
test("4x3 winner", t => {
  const grids = [
    "X   ; X  ;    ",
    " X  ;  X ;    ",
    "  X ;   X;    ",
    "    ;X   ; X  ",
    "  X ;   X;    ",
    " X  ;  X ;    ",
    "X   ; X  ;    ",
    "    ;   X;  X "
  ];
  for (const grid of grids) {
    t.is(findWinner(parseGrid(grid), 2), Player.Player1);
  }
});
```

You can of course add more test grids yourself. Add tests before you fix a bug. These tests should show the wrong behavior related to the bug. When you have fixed the bug, these tests should pass. This prevents the bug from returning in a future version.

Random testing

Instead of running the test on some predefined set of test cases, you can also write tests that run on random data. You cannot compare the output of a function directly with an expected value, but you can check some properties of it. For instance, getOptions should return an empty list if and only if the board is full. We can use this property to test getOptions and isFull. First, we create a function that randomly chooses a player. To raise the chance of a full grid, we add some extra weight on the players compared to an empty field:

```typescript
import test from "ava";
import { createGrid, Player, isFull, getOptions } from "../model";
import { randomInt } from "../utils";

function randomPlayer() {
  // Weight the players 2:2:1 over an empty field.
  switch (randomInt(5)) {
    case 0:
    case 1:
      return Player.Player1;
    case 2:
    case 3:
      return Player.Player2;
    default:
      return Player.None;
  }
}
```

We create 10000 random grids with this function. The dimensions and the fields are chosen randomly:

```typescript
test("get-options", t => {
  for (let i = 0; i < 10000; i++) {
    const grid = createGrid(randomInt(10) + 1, randomInt(10) + 1)
      .map(row => row.map(randomPlayer));
```

Next, we check whether the property that we described holds for this grid:

```typescript
    const options = getOptions(grid, Player.Player1);
    t.is(isFull(grid), options.length === 0);
```

We also check that the function does not give the same option twice:

```typescript
    for (let i = 1; i < options.length; i++) {
      for (let j = 0; j < i; j++) {
        t.notSame(options[i], options[j]);
      }
    }
  }
});
```

Depending on how critical a function is, you can add more tests.
In this case, you could check that only one field is modified in an option, or that only an empty field can be modified in an option.

Now you can run the tests using gulp && ava dist/test. You can add this to your package.json file. In the scripts section, you can add commands that you want to run. With npm run xxx, you can run task xxx. npm test was added as shorthand for npm run test, since the test command is often used:

```json
{
  "name": "article-7",
  "version": "1.0.0",
  "scripts": {
    "test": "gulp && ava dist/test"
  },
  ...
```

Implementing the AI using Minimax

We create an AI based on Minimax. The computer cannot know what his opponent will do in the next steps, but he can check what he can do in the worst case. The minimum outcome of these worst cases is maximized by this algorithm. This behavior has given Minimax its name.

To learn how Minimax works, we will take a look at the value or score of a grid. If the game is finished, we can easily define its value: if you won, the value is 1; if you lost, -1; and if it is a draw, 0. Thus, for player 1 the next grid has value 1 and for player 2 the value is -1:

```
X|X|X
-+-+-
O|O| 
-+-+-
X|O| 
```

We will also define the value of a grid for a game that has not been finished. We take a look at the following grid:

```
X| |X
-+-+-
O|O| 
-+-+-
O|X| 
```

It is player 1's turn. He can place his stone on the top row, and he would win, resulting in a value of 1. He can also choose to lay his stone on the second row. Then the game will result in a draw, if player 2 is not dumb, with score 0. If he chooses to place the stone on the last row, player 2 can win, resulting in -1. We assume that player 1 is smart and that he will go for the first option. Thus, we could say that the value of this unfinished game is 1.

We will now formalize this. In the previous paragraph, we summed up all options for the player. For each option, we calculated the minimum value that the player could get if he were to choose that option. From these options, we chose the maximum value. Minimax chooses the option with the highest value of all options.

Implementing Minimax in TypeScript

As you can see, the definition of Minimax looks like something you can implement with recursion. We create a function that returns both the best option and the value of the game. A function can only return a single value, but multiple values can be combined into a single value in a tuple, which is an array with these values. First, we handle the base cases. If the game is finished, the player has no options and the value can be calculated directly:

```typescript
import { Player, Grid, findWinner, isFull, getOpponent, getOptions } from "./model";

export function minimax(grid: Grid, rowLength: number, player: Player): [Grid, number] {
  const winner = findWinner(grid, rowLength);
  if (winner === player) {
    return [undefined, 1];
  } else if (winner !== Player.None) {
    return [undefined, -1];
  } else if (isFull(grid)) {
    return [undefined, 0];
```

Otherwise, we list all options. For all options, we calculate the value. The value of an option is the opposite of the value of that option for the opponent. Finally, we choose the option with the best value:

```typescript
  } else {
    let options = getOptions(grid, player);
    const opponent = getOpponent(player);
    return options.map<[Grid, number]>(
      option => [option, -(minimax(option, rowLength, opponent)[1])]
    ).reduce(
      (previous, current) => previous[1] < current[1] ? current : previous
    );
  }
}
```

When you use tuple types, you should explicitly add a type definition for them.
Since tuples are arrays too, an array type is automatically inferred otherwise. When you add the tuple as the return type, expressions after the return keyword will be inferred as these tuples. For options.map, you can mention the tuple type as a type argument or specify it in the callback function (options.map((option): [Grid, number] => ...);). You can easily see that such an AI can also be used for other kinds of games. Actually, the minimax function has no direct reference to Tic-Tac-Toe; only findWinner, isFull, and getOptions are related to Tic-Tac-Toe.

Optimizing the algorithm

The Minimax algorithm can be slow. Choosing the first step, especially, takes a long time since the algorithm tries all ways of playing the game. We will use two techniques to speed up the algorithm. First, we can use the symmetry of the game. When the board is empty, it does not matter whether you place a stone in the upper-left corner or the lower-right corner. Rotating the grid 180 degrees around the center gives an equivalent board. Thus, we only need to take a look at half the options when the board is empty. Secondly, we can stop searching for options if we find an option with value 1. Such an option is already the best thing to do. Implementing these techniques gives the following function:

```typescript
import { Player, Grid, findWinner, isFull, getOpponent, getOptions } from "./model";

export function minimax(grid: Grid, rowLength: number, player: Player): [Grid, number] {
  const winner = findWinner(grid, rowLength);
  if (winner === player) {
    return [undefined, 1];
  } else if (winner !== Player.None) {
    return [undefined, -1];
  } else if (isFull(grid)) {
    return [undefined, 0];
  } else {
    let options = getOptions(grid, player);
    const gridSize = grid.length * grid[0].length;
    if (options.length === gridSize) {
      options = options.slice(0, Math.ceil(gridSize / 2));
    }
    const opponent = getOpponent(player);
    let best: [Grid, number];
    for (const option of options) {
      const current: [Grid, number] = [option, -(minimax(option, rowLength, opponent)[1])];
      if (current[1] === 1) {
        return current;
      } else if (best === undefined || current[1] > best[1]) {
        best = current;
      }
    }
    return best;
  }
}
```

This will speed up the AI. In the next sections we will implement the interface for the game and write some tests for the AI.

Creating the interface

NodeJS can be used to create servers. You can also create tools with a command line interface (CLI). For instance, gulp, NPM, and typings are command line interfaces built with NodeJS. We will use NodeJS to create the interface for our game.

Handling interaction

Interaction with the user can only happen through text input in the terminal. When the game starts, it will ask the user some questions about the configuration: width, height, row length for a sequence, and the player(s) that are played by the computer. The highlighted lines are the input of the user:

```
Tic-Tac-Toe
Width
3
Height
3
Row length
2
Who controls player 1?
1   You
2   Computer
1
Who controls player 2?
1   You
2   Computer
1
```

During the game, the game asks the user which of the possible options he wants to choose. All possible steps are shown on the screen, with an index. The user can type the index of the option he wants:

```
X| | 
-+-+-
O|O| 
-+-+-
 |X| 

It's player one's turn!
Choose one out of these options:
1   X|X| 
    -+-+-
    O|O| 
    -+-+-
     |X| 

2   X| |X
    -+-+-
    O|O| 
    -+-+-
     |X| 

3   X| | 
    -+-+-
    O|O|X
    -+-+-
     |X| 

4   X| | 
    -+-+-
    O|O| 
    -+-+-
    X|X| 

5   X| | 
    -+-+-
    O|O| 
    -+-+-
     |X|X
```

A NodeJS application has three standard streams to interact with the user. Standard input, stdin, is used to receive input from the user.
Standard output, stdout, is used to show text to the user. Standard error, stderr, is used to show error messages to the user. You can access these streams with process.stdin, process.stdout, and process.stderr. You have probably already used console.log to write text to the console. This function writes the text to stdout. We will use console.log to write text to stdout and we will not use stderr.

We will create a helper function that reads a line from stdin. This is an asynchronous task: the function starts listening and resolves when the user hits Enter. In lib/cli.ts, we start by importing the types and functions that we have written.

import { Grid, Player, getOptions, getOpponent, showGrid, findWinner, isFull, createGrid } from "./model";
import { minimax } from "./ai";

We can listen to input from stdin using the data event. The process sends either a string or a buffer, an efficient way to store binary data in memory. With once, the callback will only be fired once. If you want to listen to all occurrences of the event, you can use on.

function readLine() {
  return new Promise<string>(resolve => {
    process.stdin.once("data", (data: string | Buffer) => resolve(data.toString()));
  });
}

We can easily use readLine in async functions. For instance, we can now create a function that reads, parses, and validates a line. We can use this to read the input of the user, parse it to a number, and finally check that the number is within a certain range. This function will return the value if it passes the validator. Otherwise, it shows a message and retries.

async function readAndValidate<U>(message: string, parse: (data: string) => U, validate: (value: U) => boolean): Promise<U> {
  const data = await readLine();
  const value = parse(data);
  if (validate(value)) {
    return value;
  } else {
    console.log(message);
    return readAndValidate(message, parse, validate);
  }
}

We can use this function to show a question where the user has various options. The user should type the index of his answer. This function validates that the index is within bounds. We show indices starting at 1 to the user, so we must handle the offset carefully.

async function choose(question: string, options: string[]) {
  console.log(question);
  for (let i = 0; i < options.length; i++) {
    console.log((i + 1) + "\t" + options[i].replace(/\n/g, "\n\t"));
    console.log();
  }
  return await readAndValidate(
    `Enter a number between 1 and ${ options.length }`,
    parseInt,
    index => index >= 1 && index <= options.length
  ) - 1;
}

Creating players

A player could either be a human or the computer. We create a type that can hold both kinds of players.

type PlayerController = (grid: Grid) => Grid | Promise<Grid>;

Next, we create a function that creates such a player. For a human, we must first know whether he is the first or the second player. Then we return an async function that asks the player which move he wants to make.

const getUserPlayer = (player: Player) => async (grid: Grid) => {
  const options = getOptions(grid, player);
  const index = await choose("Choose one out of these options:", options.map(showGrid));
  return options[index];
};

For the AI player, we must know the player index and the length of a sequence. We use these variables and the grid of the game to run the Minimax algorithm.

const getAIPlayer = (player: Player, rowLength: number) => (grid: Grid) =>
  minimax(grid, rowLength, player)[0];

Now we can create a function that asks the user whether a player should be played by a human or the computer.
async function getPlayer(index: number, player: Player, rowLength: number): Promise<PlayerController> {
  switch (await choose(`Who controls player ${ index }?`, ["You", "Computer"])) {
    case 0:
      return getUserPlayer(player);
    default:
      return getAIPlayer(player, rowLength);
  }
}

We combine these functions in a function that handles the whole game. First, we ask the user to provide the width, the height, and the length of a sequence.

export async function game() {
  console.log("Tic-Tac-Toe");
  console.log();
  console.log("Width");
  const width = await readAndValidate("Enter an integer", parseInt, isFinite);
  console.log("Height");
  const height = await readAndValidate("Enter an integer", parseInt, isFinite);
  console.log("Row length");
  const rowLength = await readAndValidate("Enter an integer", parseInt, isFinite);

We ask the user which players should be controlled by the computer.

  const player1 = await getPlayer(1, Player.Player1, rowLength);
  const player2 = await getPlayer(2, Player.Player2, rowLength);

The user can now play the game. We do not use a loop; instead, we use recursion to give the players their turns.

  return play(createGrid(width, height), Player.Player1);

  async function play(grid: Grid, player: Player): Promise<[Grid, Player]> {

In every step, we show the grid. If the game is finished, we show which player has won.

    console.log();
    console.log(showGrid(grid));
    console.log();

    const winner = findWinner(grid, rowLength);
    if (winner === Player.Player1) {
      console.log("Player 1 has won!");
      return <[Grid, Player]>[grid, winner];
    } else if (winner === Player.Player2) {
      console.log("Player 2 has won!");
      return <[Grid, Player]>[grid, winner];
    } else if (isFull(grid)) {
      console.log("It's a draw!");
      return <[Grid, Player]>[grid, Player.None];
    }

If the game is not finished, we ask the current player or the computer which move he wants to make.

    console.log(`It's player ${ player === Player.Player1 ? "one's" : "two's" } turn!`);

    const current = player === Player.Player1 ? player1 : player2;
    return play(await current(grid), getOpponent(player));
  }
}

In lib/index.ts, we can start the game. When the game is finished, we must manually exit the process.

import { game } from "./cli";

game().then(() => process.exit());

We can compile and run this in a terminal:

gulp && node --harmony_destructuring dist

At the time of writing, NodeJS requires the --harmony_destructuring flag to allow destructuring, like [x, y] = z. In future versions of NodeJS, this flag will be removed and you can run the command without it.

Testing the AI

We will add some tests to check that the AI works properly. For a standard three-by-three game, the AI should never lose. That means that when an AI plays against an AI, the game should result in a draw. We can add a test for this. In lib/test/ai.ts, we import AVA and our own definitions.

import test from "ava";
import { createGrid, Grid, findWinner, isFull, getOptions, Player } from "../model";
import { minimax } from "../ai";
import { randomInt } from "../utils";

We create a function that simulates the whole gameplay.

type PlayerController = (grid: Grid) => Grid;
function run(grid: Grid, a: PlayerController, b: PlayerController): Player {
  const winner = findWinner(grid, 3);
  if (winner !== Player.None) return winner;
  if (isFull(grid)) return Player.None;
  return run(a(grid), b, a);
}
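The test file imports a randomInt helper from ../utils that is not shown in this excerpt. As a rough sketch under that assumption, such a helper could look like this in lib/utils.ts, returning an integer in the range [0, max):

// Hypothetical helper; the signature is inferred from how the tests use it.
export function randomInt(max: number) {
  return Math.floor(Math.random() * max);
}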
We write a function that executes a move for the AI.

const aiPlayer = (player: Player) => (grid: Grid) =>
  minimax(grid, 3, player)[0];

Now we create the test that validates that a game where the AI plays against the AI results in a draw.

test("AI vs AI", t => {
  const result = run(createGrid(3, 3), aiPlayer(Player.Player1), aiPlayer(Player.Player2));
  t.is(result, Player.None);
});

Testing with a random player

We can also test what happens when the AI plays against a random player, or when a random player plays against the AI. The AI should win or the game should result in a draw. We run these tests multiple times, which is something you should always do when a test involves randomization. We create a function that creates the random player.

const randomPlayer = (player: Player) => (grid: Grid) => {
  const options = getOptions(grid, player);
  return options[randomInt(options.length)];
};

We write the two tests that both run 20 games with a random player and an AI.

test("random vs AI", t => {
  for (let i = 0; i < 20; i++) {
    const result = run(createGrid(3, 3), randomPlayer(Player.Player1), aiPlayer(Player.Player2));
    t.not(result, Player.Player1);
  }
});

test("AI vs random", t => {
  for (let i = 0; i < 20; i++) {
    const result = run(createGrid(3, 3), aiPlayer(Player.Player1), randomPlayer(Player.Player2));
    t.not(result, Player.Player2);
  }
});

We have written different kinds of tests:

Tests that check the exact results of a single function
Tests that check a certain property of the results of a function
Tests that check a big component

Always start writing tests for small components. If the AI tests fail, that could be caused by a mistake in findWinner, isFull, or getOptions, so it is hard to find the location of the error. Testing only small components is not enough, however; bigger tests, such as the AI tests, are closer to what the user will do. Bigger tests are harder to create, especially when you want to test the user interface. You must also not forget that tests cannot guarantee that your whole program runs correctly; they only guarantee that the cases you actually test behave as expected.

Summary

In this article, we have written an AI for Tic-Tac-Toe. With the command line interface, you can play this game against the AI or another human. You can also see how the AI plays against the AI. We have written various tests for the application. You have learned how Minimax works for turn-based games, and you can apply it to other turn-based games as well. If you want to know more about strategies for such games, you can take a look at game theory, the mathematical study of these games.

Resources for Article:

Further resources on this subject:

Basic Website using Node.js and MySQL database [article]
Data Science with R [article]
Web Typography [article]
How to test node applications using Mocha framework
Sunith Shetty
20 Apr 2018
12 min read
In today's tutorial, you will learn how to create your very first test case that verifies whether your code is working as expected. If we make a function that's supposed to add two numbers together, we can automatically verify that it's doing that. And if we have a function that's supposed to fetch a user from the database, we can make sure it's doing that as well. Now to get started in this section, we'll look at the very basics of setting up a testing suite inside a Node.js project. We'll be testing a real-world function.

Installing the testing module

In order to get started, we will make a directory to store our code for this chapter. We'll make one on the desktop using mkdir and we'll call this directory node-tests:

mkdir node-tests

Then we'll change directory inside it using cd, so we can go ahead and run npm init. We'll be installing modules and this will require a package.json file:

cd node-tests
npm init

We'll run npm init using the default values for everything, simply hitting enter throughout every single step. Now once that package.json file is generated, we can open up the directory inside Atom. It's on the desktop and it's called node-tests. From here, we're ready to actually define a function we want to test. The goal in this section is to learn how to set up testing for a Node project, so the actual functions we'll be testing are going to be pretty trivial, but they will help illustrate exactly how to set up our tests.

Testing a Node project

To get started, let's make a fake module. This module will have some functions and we'll test those functions. In the root of the project, we'll create a brand new directory called utils. We can assume this will store some utility functions, such as adding a number to another number, or stripping out whitespace from a string; any kind of hodge-podge that doesn't really belong to any specific location. We'll make a new file in the utils folder called utils.js, and this is a similar pattern to what we did when we created the weather and location directories in our weather app. You're probably wondering why we have a folder and a file with the same name. This will be clear when we start testing.

Now before we can write our first test case to make sure something works, we need something to test. I'll make a very basic function that takes two numbers and adds them together. We'll create an adder function as shown in the following code block:

module.exports.add = () => {}

This arrow function (=>) will take two arguments, a and b, and inside the function, we'll return the value a + b. Nothing too complex here:

module.exports.add = (a, b) => { return a + b; };

Now since we just have one expression inside our arrow function (=>) and we want to return it, we can actually use the arrow function (=>) expression syntax, which lets us add our expression as shown in the following code, a + b, and it'll be implicitly returned:

module.exports.add = (a, b) => a + b;

There's no need to explicitly add a return keyword onto the function.
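To make that shorthand concrete, here is a quick sketch showing that the two forms behave identically:

// Explicit return with a statement body:
const addExplicit = (a, b) => { return a + b; };

// Implicit return with an expression body:
const addImplicit = (a, b) => a + b;

console.log(addExplicit(33, 11)); // 44
console.log(addImplicit(33, 11)); // 44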
Now that we have utils.js ready to go, let's explore testing. We'll be using a framework called Mocha in order to set up our test suite. This will let us configure our individual test cases and also run all of our test files, which will be really important for creating and running tests. The goal here is to make testing simple, and we'll use Mocha to do just that. Now that we have a file and a function we actually want to test, let's explore how to create and run a test suite.

Mocha – the testing framework

We'll be doing the testing using the super popular testing framework Mocha, which you can find at mochajs.org. This is a fantastic framework for creating and running test suites. It's super popular, and their page has all the information you'd ever want to know about setting it up, configuring it, and all the cool bells and whistles it has included. If you scroll down on this page, you'll be able to see a table of contents where you can explore everything Mocha has to offer. We'll be covering most of it in this article, but for anything we don't cover, I do want to make you aware that you can always learn about it on this page.

Now that we've explored the Mocha documentation page, let's install it and start using it. Inside the Terminal, we'll install Mocha. First up, let's clear the Terminal output. Then we'll install it using the npm install command. When you use npm install, you can also use the shortcut npm i. This has the exact same effect. I'll use npm i with mocha, specifying the version @3.0.0. This is the most recent version of the library as of this writing:

npm i mocha@3.0.0

Now we do want to save this into the package.json file. Previously, we've used the save flag, but we'll talk about a new flag, called save-dev. The save-dev flag will save this package for development purposes only, and that's exactly what Mocha will be for. We don't actually need Mocha to run our app on a service like Heroku. We just need Mocha locally on our machine to test our code. When you use the save-dev flag, it installs the module much the same way:

npm i mocha@3.0.0 --save-dev

But if you explore package.json, you'll see things are a little different. Inside our package.json file, instead of a dependencies attribute, we have a devDependencies attribute. In there we have Mocha, with the version number as the value. The devDependencies are fantastic because they're not going to be installed on Heroku, but they will be installed locally. This will keep the Heroku boot times really, really quick. It won't need to install modules that it's not going to actually need. We'll be installing both devDependencies and dependencies in most of our projects from here on out.

Creating a test file for the add function

Now that we have Mocha installed, we can go ahead and create a test file. In the utils folder, we'll make a new file called utils.test.js. This file will store our test cases. We'll not store our test cases in utils.js; that will be our application code. Instead, we'll make a file called utils.test.js. When we use this test.js extension, we're basically telling our app that this will store our test cases. When Mocha goes through our app looking for tests to run, it should run any file with this extension.

Now that we have a test file, the only thing left to do is create a test case. A test case is a function that runs some code, and if things go well, great, the test is considered to have passed. And if things do not go well, the test is considered to have failed. We can create a new test case using it, a function provided by Mocha. We'll be running our project test files through Mocha, so there's no reason to import it or do anything like that. We simply call it just like this:

it();

Now it lets us define a new test case, and it takes two arguments:

The first argument is a string
The second argument is a function

First up, we'll have a string description of what exactly the test is doing.
If we're testing that the adder function works, we might have something like:

it('should add two numbers');

Notice how it plays into the sentence. It should read like this: it should add two numbers, which describes exactly what the test will verify. This is called behavior-driven development, or BDD, and those are the principles Mocha was built on. Now that we've set up the test string, the next thing to do is add a function as the second argument:

it('should add two numbers', () => {

});

Inside this function, we'll add the code that tests that the add function works as expected. This means it will probably call add and check that the value that comes back is the appropriate value given the two numbers passed in. That means we do need to import the utils.js file up at the top. We'll create a constant, utils, setting it equal to the return result from requiring utils. We're using ./ since we will be requiring a local file. It's in the same directory, so I can simply type utils without the .js extension, as shown here:

const utils = require('./utils');

it('should add two numbers', () => {

});

Now that we have the utils library loaded in, inside the callback we can call it. Let's make a variable to store the return result. We'll call this one res, and we'll set it equal to utils.add, passing in two numbers. Let's use something like 33 and 11:

const utils = require('./utils');

it('should add two numbers', () => {
  var res = utils.add(33, 11);
});

We would expect to get 44 back. Now at this point, we do have some code inside of our test suite, so let's run it. We'll do that by configuring the test script. Currently, the test script simply prints a message to the screen saying that no tests exist. What we'll do instead is call Mocha. As shown in the following code, we'll be calling Mocha, passing in as the one and only argument the actual files we want to test. We can use a globbing pattern to specify multiple files. In this case, we'll be using ** to look in every single directory. We're looking for a file called utils.test.js:

"scripts": {
  "test": "mocha **/utils.test.js"
},

Now this is a very specific pattern, so it's not going to be particularly useful. Instead, we can swap out the file name with a star as well. Now we're looking for any file on the project that has a file name ending in .test.js:

"scripts": {
  "test": "mocha **/*.test.js"
},

And this is exactly what we want. From here, we can run our test suite by saving package.json and moving to the Terminal. We'll use the clear command to clear the Terminal output and then we can run our test script using the command shown as follows:

npm test

When we run this, we'll execute that Mocha command. It'll go off, fetch all of our test files, run all of them, and print the results on the screen inside the Terminal. Here we can see we have a green checkmark next to our test, should add two numbers. Next, we have a little summary: one passing test, and it happened in 8 milliseconds.

Now in our case, we don't actually assert anything about the number that comes back. It could be 700 and we wouldn't care. The test will always pass.
To make a test fail, we have to throw an error. That means we can throw a new error, passing into the constructor function whatever message we want to use as the error, as shown in the following code block. In this case, I could say something like Value not correct:

const utils = require('./utils');

it('should add two numbers', () => {
  var res = utils.add(33, 11);
  throw new Error('Value not correct')
});

Now with this in place, I can save the test file and rerun npm test from the Terminal. This time we have 0 tests passing and 1 test failing. Next we can see the one test is should add two numbers, and we get our error message, Value not correct. When we throw a new error, the test fails, and that's exactly what we want to do for add.

Creating the if condition for the test

Now, we'll create an if statement for the test. If the response value is not equal to 44, that means we have a problem on our hands and we'll throw an error:

const utils = require('./utils');

it('should add two numbers', () => {
  var res = utils.add(33, 11);
  if (res != 44) {

  }
});

Inside the if condition, we can throw a new error, and we'll use a template string as our message string because I do want to use the value that comes back in the error message. I'll say Expected 44, but got, then I'll inject the actual value, whatever happens to come back:

const utils = require('./utils');

it('should add two numbers', () => {
  var res = utils.add(33, 11);
  if (res != 44) {
    throw new Error(`Expected 44, but got ${res}.`);
  }
});

Now in our case, everything will line up great. But what if the add method wasn't working correctly? Let's simulate this by simply tacking on another addition, adding on something like 22 in utils.js:

module.exports.add = (a, b) => a + b + 22;

I'll save the file and rerun the test suite. Now we get an error message: Expected 44, but got 66. This error message is fantastic. It lets us know that something is going wrong with the test, and it even tells us exactly what we got back and what we expected. This will let us go into the add function, look for errors, and hopefully fix them.

Creating test cases doesn't need to be something super complex. In this case, we have a simple test case that tests a simple function. To summarize, we looked into basic testing of a Node app. We explored the testing framework Mocha, which can be used for creating and running test suites.

You read an excerpt from a book written by Andrew Mead, titled Learning Node.js Development. In this book, you will learn how to build, deploy, and test Node apps.

Developing Node.js Web Applications
How is Node.js Changing Web Development?
Developing Node.js Web Applications
Packt
18 Feb 2016
11 min read
Node.js is a platform that supports various types of applications, but the most popular kind is the development of web applications. Node's style of coding depends on the community to extend the platform through third-party modules; these modules are then built upon to create new modules, and so on. Companies and single developers around the globe are participating in this process by creating modules that wrap the basic Node APIs and deliver a better starting point for application development.

(For more resources related to this topic, see here.)

There are many modules to support web application development, but none as popular as the Connect module. The Connect module delivers a set of wrappers around the Node.js low-level APIs to enable the development of rich web application frameworks. To understand what Connect is all about, let's begin with a basic example of a basic Node web server. In your working folder, create a file named server.js, which contains the following code snippet:

var http = require('http');
http.createServer(function(req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello World');
}).listen(3000);
console.log('Server running at http://localhost:3000/');

To start your web server, use your command-line tool and navigate to your working folder. Then, run the node CLI tool and run the server.js file as follows:

$ node server

Now open http://localhost:3000 in your browser, and you'll see the Hello World response.

So how does this work? In this example, the http module is used to create a small web server listening to the 3000 port. You begin by requiring the http module and use the createServer() method to return a new server object. The listen() method is then used to listen to the 3000 port. Notice the callback function that is passed as an argument to the createServer() method. The callback function gets called whenever there's an HTTP request sent to the web server. The server object will then pass the req and res arguments, which contain the information and functionality needed to send back an HTTP response. The callback function will then do the following two steps:

First, it will call the writeHead() method of the response object. This method is used to set the response HTTP headers. In this example, it will set the Content-Type header value to text/plain. For instance, when responding with HTML, you just need to replace text/plain with text/html.

Then, it will call the end() method of the response object. This method is used to finalize the response. The end() method takes a single string argument that it will use as the HTTP response body. Another common way of writing this is to add a write() method before the end() method and then call the end() method, as follows:

res.write('Hello World');
res.end();

This simple application illustrates the Node coding style, where low-level APIs are used to simply achieve certain functionality. While this is a nice example, running a full web application using the low-level APIs will require you to write a lot of supplementary code to support common requirements. Fortunately, a company called Sencha has already created this scaffolding code for you in the form of a Node module called Connect.

Meet the Connect module

Connect is a module built to support interception of requests in a more modular approach. In the first web server example, you learned how to build a simple web server using the http module.
If you wish to extend this example, you'd have to write code that manages the different HTTP requests sent to your server, handles them properly, and responds to each request with the correct response. Connect creates an API exactly for that purpose. It uses a modular component called middleware, which allows you to simply register your application logic to predefined HTTP request scenarios. Connect middleware are basically callback functions that get executed when an HTTP request occurs. The middleware can then perform some logic, return a response, or call the next registered middleware. While you will mostly write custom middleware to support your application needs, Connect also includes some common middleware to support logging, static file serving, and more.

The way a Connect application works is by using an object called dispatcher. The dispatcher object handles each HTTP request received by the server and then decides, in a cascading way, the order of middleware execution. To understand Connect better, consider two calls made to the Connect application: the first one is handled by a custom middleware and the second is handled by the static files middleware. Connect's dispatcher initiates the process, moving on to the next handler using the next() method, until it gets to a middleware responding with the res.end() method, which ends the request handling.

Express is based on Connect's approach, so in order to understand how Express works, we'll begin with creating a Connect application. In your working folder, create a file named server.js that contains the following code snippet:

var connect = require('connect');
var app = connect();
app.listen(3000);
console.log('Server running at http://localhost:3000/');

As you can see, your application file is using the connect module to create a new web server. However, Connect isn't a core module, so you'll have to install it using NPM. As you already know, there are several ways of installing third-party modules. The easiest one is to install it directly using the npm install command. To do so, use your command-line tool, navigate to your working folder, and execute the following command:

$ npm install connect

NPM will install the connect module inside a node_modules folder, which will enable you to require it in your application file. To run your Connect web server, just use Node's CLI and execute the following command:

$ node server

Node will run your application, reporting the server status using the console.log() method. You can try reaching your application in the browser by visiting http://localhost:3000. However, you should get an empty response. What this empty response means is that there isn't any middleware registered to handle the GET HTTP request. This means two things:

You've successfully managed to install and use the Connect module
It's time for you to write your first Connect middleware

Connect middleware

Connect middleware is just a JavaScript function with a unique signature.
Each middleware function is defined with the following three arguments:

req: This is an object that holds the HTTP request information
res: This is an object that holds the HTTP response information and allows you to set the response properties
next: This is the next middleware function defined in the ordered set of Connect middleware

When you have a middleware defined, you'll just have to register it with the Connect application using the app.use() method. Let's revise the previous example to include your first middleware. Change your server.js file to look like the following code snippet:

var connect = require('connect');
var app = connect();

var helloWorld = function(req, res, next) {
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World');
};

app.use(helloWorld);

app.listen(3000);
console.log('Server running at http://localhost:3000/');

Then, start your Connect server again by issuing the following command in your command-line tool:

$ node server

Try visiting http://localhost:3000 again. You will now get the Hello World response. Congratulations, you've just created your first Connect middleware!

Let's recap. First, you added a middleware function named helloWorld(), which has three arguments: req, res, and next. In your middleware, you used the res.setHeader() method to set the response Content-Type header and the res.end() method to set the response text. Finally, you used the app.use() method to register your middleware with the Connect application.

Understanding the order of Connect middleware

One of Connect's greatest features is the ability to register as many middleware functions as you want. Using the app.use() method, you'll be able to set a series of middleware functions that will be executed in a row to achieve maximum flexibility when writing your application. Connect will then pass the next middleware function to the currently executing middleware function using the next argument. In each middleware function, you can decide whether to call the next middleware function or stop at the current one. Notice that each Connect middleware function will be executed in first-in-first-out (FIFO) order using the next argument until there are no more middleware functions to execute or the next middleware function is not called.

To understand this better, we will go back to the previous example and add a logger function that will log all the requests made to the server in the command line. To do so, go back to the server.js file and update it to look like the following code snippet:

var connect = require('connect');
var app = connect();

var logger = function(req, res, next) {
  console.log(req.method, req.url);
  next();
};

var helloWorld = function(req, res, next) {
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World');
};

app.use(logger);
app.use(helloWorld);

app.listen(3000);
console.log('Server running at http://localhost:3000/');

In the preceding example, you added another middleware called logger(). The logger() middleware uses the console.log() method to simply log the request information to the console. Notice how the logger() middleware is registered before the helloWorld() middleware. This is important, as it determines the order in which each middleware is executed. Another thing to notice is the next() call in the logger() middleware, which is responsible for calling the helloWorld() middleware.
Removing the next() call would stop the execution of the middleware chain at the logger() middleware, which means that the request would hang forever, as the response is never ended by calling the res.end() method. To test your changes, start your Connect server again by issuing the following command in your command-line tool:

$ node server

Then, visit http://localhost:3000 in your browser and notice the console output in your command-line tool.

Mounting Connect middleware

As you may have noticed, the middleware you registered responds to any request regardless of the request path. This does not comply with modern web application development because responding to different paths is an integral part of all web applications. Fortunately, Connect middleware supports a feature called mounting, which enables you to determine which request path is required for the middleware function to get executed. Mounting is done by adding the path argument to the app.use() method. To understand this better, let's revisit our previous example. Modify your server.js file to look like the following code snippet:

var connect = require('connect');
var app = connect();

var logger = function(req, res, next) {
  console.log(req.method, req.url);
  next();
};

var helloWorld = function(req, res, next) {
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World');
};

var goodbyeWorld = function(req, res, next) {
  res.setHeader('Content-Type', 'text/plain');
  res.end('Goodbye World');
};

app.use(logger);
app.use('/hello', helloWorld);
app.use('/goodbye', goodbyeWorld);

app.listen(3000);
console.log('Server running at http://localhost:3000/');

A few things have been changed in the previous example. First, you mounted the helloWorld() middleware to respond only to requests made to the /hello path. Then, you added another (a bit morbid) middleware called goodbyeWorld() that will respond to requests made to the /goodbye path. Notice how, as a logger should do, we left the logger() middleware to respond to all the requests made to the server. Another thing you should be aware of is that any requests made to the base path will not be answered by any middleware, because we mounted the helloWorld() middleware to a specific path.

Connect is a great module that supports various features of common web applications. Connect middleware is super simple, as it is built with a JavaScript style in mind. It allows the endless extension of your application logic without breaking the nimble philosophy of the Node platform. While Connect is a great improvement over writing your web application infrastructure, it deliberately lacks some basic features you're used to having in other web frameworks. The reason lies in one of the basic principles of the Node community: create your modules lean and let other developers build their modules on top of the module you created. The community is supposed to extend Connect with its own modules and create its own web infrastructures. In fact, one very energetic developer named TJ Holowaychuk did it better than most when he released a Connect-based web framework known as Express.
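To give a rough sense of where this leads, here is a sketch of how the same hello route might look in Express. This is an illustration only, not part of this article's project, and it assumes you have installed Express separately with npm install express:

var express = require('express');
var app = express();

// Express combines mounting and HTTP method matching in a single call:
app.get('/hello', function(req, res) {
  res.set('Content-Type', 'text/plain');
  res.send('Hello World');
});

app.listen(3000);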
Summary

In this article, you learned about the basic principles of Node.js web applications and discovered the Connect web module. You created your first Connect application and learned how to use middleware functions.

To learn more about Node.js and web development, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

Node.js Design Patterns (https://www.packtpub.com/web-development/nodejs-design-patterns)
Web Development with MongoDB and Node.js (https://www.packtpub.com/web-development/web-development-mongodb-and-nodejs)

Resources for Article:

Further resources on this subject:

So, what is Node.js? [article]
AngularJS [article]
So, what is MongoDB? [article]
Creating CSS via the Stylus preprocessor
Packt
26 Nov 2014
5 min read
Instead of manually typing out each line of CSS you're going to require for your Ghost theme, we're going to get you set up to become highly efficient in your development through use of the CSS preprocessor named Stylus. Stylus can be described as a way of making CSS smart. It gives you the ability to define variables, create blocks of code that can be easily reused, perform mathematical calculations, and more. After Stylus code is written, it is compiled into a regular CSS file that is then linked into your design in the usual fashion. It is an extremely powerful tool with many capabilities, so we won't go into them all here; however, we will cover some of the essential features that will feature heavily in our theme development process.

This article by Kezz Bracey, David Balderston, and Andy Boutte, authors of the book Getting Started with Ghost, covers how to create CSS via the Stylus preprocessor.

(For more resources related to this topic, see here.)

Variables

Stylus has the ability to create variables to hold any piece of information, from color codes to numerical values for use in your layout. For example, you could map out the color scheme of your design like this:

default_background_color = #F2F2F2
default_foreground_color = #333
default_highlight_color = #77b6f9

You could then use these variables all throughout your code instead of having to type them out multiple times:

body {
  background-color: default_background_color;
}
a {
  color: default_highlight_color;
}
hr {
  border-color: default_foreground_color;
}
.post {
  border-color: default_highlight_color;
  color: default_foreground_color;
}

After the preceding Stylus code was compiled into CSS, it would look like this:

body { background-color: #F2F2F2; }
a { color: #77b6f9; }
hr { border-color: #333; }
.post { border-color: #77b6f9; color: #333; }

So not only have you been saved the trouble of typing out these color code values repeatedly, which in a real style sheet means a lot of work, but you can also now easily update the color scheme of your site simply by changing the value of the variables you created. Variables come in very handy for many purposes, as you'll see when we get started on theme creation.

Stylus syntax

Stylus code uses a syntax that reads very much like CSS, but with the ability to take shortcuts in order to code faster and more smoothly. With Stylus, you don't need to include curly braces, colons, or semicolons. Instead, you use tab indentations, spaces, and new lines. For example, the code I used in the last section could actually be written like this in Stylus:

body
  background-color default_background_color

a
  color default_highlight_color

hr
  border-color default_foreground_color

.post
  border-color default_highlight_color
  color default_foreground_color

You may think at first glance that this code is more difficult to read than regular CSS; however, shortly we'll be getting you running with a syntax highlighting package. With syntax highlighting in place, you don't need punctuation to make your code readable, as the colors and emphasis allow you to easily differentiate between one thing and another. The chances are very high that you'll find coding in this manner much faster and easier than regular CSS syntax. However, if you're not comfortable, you can still choose to include the curly braces, colons, and semicolons you're used to, and your code will still compile just fine.
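As a quick illustration of that last point, both of the following forms compile to exactly the same CSS rule, so you can mix punctuation back in wherever it helps you:

// Indentation-based Stylus:
a
  color default_highlight_color

// The same rule with CSS-style punctuation also compiles fine:
a {
  color: default_highlight_color;
}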
The golden rules of writing in Stylus syntax are as follows:

After a class, ID, or element declaration, use a new line and then a tab indentation instead of curly braces
Ensure each line of a style is also subsequently tab indented
After a property, use a space instead of a colon
At the end of a line, after a value, use a new line instead of a semicolon

Mixins

Mixins are a very useful way of preventing yourself from having to repeat code, and also allow you to keep your code well organized and compartmentalized. The best way to understand what a mixin is is to see one in action. For example, you may want to apply the same font-family, font-weight, and color to each of your heading tags. So instead of writing the same thing out manually for each H tag level, you could create a mixin as follows:

header_settings()
  font-family Georgia
  font-weight 700
  color #454545

You could then call that mixin into the styles for your heading tags:

h1
  header_settings()
  font-size 3em

h2
  header_settings()
  font-size 2.25em

h3
  header_settings()
  font-size 1.5em

When compiled, you would get the following CSS:

h1 {
  font-family: Georgia;
  font-weight: 700;
  color: #454545;
  font-size: 3em;
}

h2 {
  font-family: Georgia;
  font-weight: 700;
  color: #454545;
  font-size: 2.25em;
}

h3 {
  font-family: Georgia;
  font-weight: 700;
  color: #454545;
  font-size: 1.5em;
}

As we move through the Ghost theme development process, you'll see just how useful and powerful Stylus is, and you'll never want to go back to hand-coding CSS again!

Summary

You now have everything in place and ready to begin your Ghost theme development process. You understand the essentials of Stylus, the means by which we'll be creating your theme's CSS.

Resources for Article:

Further resources on this subject:

Advanced SOQL Statements [Article]
Enabling your new theme in Magento [Article]
Introduction to a WordPress application's frontend [Article]
Getting Started with Spring Boot
Packt
03 Jan 2017
17 min read
In this article by Greg Turnquist, author of the book Learning Spring Boot – Second Edition, we will cover the following topics:

Introduction
Creating a bare project using http://start.spring.io
Seeing how to run our app straight inside our IDE with no standalone containers

(For more resources related to this topic, see here.)

Perhaps you've heard about Spring Boot? It has cultivated the most popular explosion in software development in years. Clocking millions of downloads per month, the community has exploded since its debut in 2013. I hope you're ready for some fun, because we are going to take things to the next level as we use Spring Boot to build a social media platform. We'll explore its many valuable features, all the way from tools designed to speed up development efforts to production-ready support as well as cloud native features. Despite some rapid fire demos you might have caught on YouTube, Spring Boot isn't just for quick demos. Built atop the de facto standard toolkit for Java, the Spring Framework, Spring Boot will help us build this social media platform with lightning speed AND stability.

In this article, we'll get a quick kick off with Spring Boot using Java the programming language. Maybe that makes you chuckle? People have been dinging Java for years as being slow, bulky, and not the means for agile shops. Together, we'll see how that is not the case. At any time, if you're interested in a more visual medium, feel free to check out my Learning Spring Boot [Video] at https://www.packtpub.com/application-development/learning-spring-boot-video.

What is step #1 when we get underway with a project? We visit Stack Overflow and look for an example project to build from! Seriously, the amount of time spent adapting another project's build file, picking dependencies, and filling in other details about our project adds up. At the Spring Initializr (http://start.spring.io), we can enter minimal details about our app, pick our favorite build system and the version of Spring Boot we wish to use, and then choose our dependencies off a menu. Click the Download button, and we have a free-standing, ready-to-run application.

In this article, let's take a quick test drive and build a small web app. We can start by picking Gradle from the dropdown. Then, select 1.4.1.BUILD-SNAPSHOT as the version of Spring Boot we wish to use. Next, we need to pick our application's coordinates:

Group - com.greglturnquist.learningspringboot
Artifact - learning-spring-boot

Now comes the fun part. We get to pick the ingredients for our application, like picking off a delicious menu. If we start typing, for example, "Web", into the Dependencies box, we'll see several options appear. To see all the available options, click on the Switch to the full version link toward the bottom. There are lots of overrides, such as switching from JAR to WAR, or using an older version of Java. You can also pick Kotlin or Groovy as the primary language for your application. For starters, in this day and age, there is no reason to use anything older than Java 8. And JAR files are the way to go. WAR files are only needed when applying Spring Boot to an old container.

To build our social media platform, we need a few ingredients, as shown:

Web (embedded Tomcat + Spring MVC)
WebSocket
JPA (Spring Data JPA)
H2 (embedded SQL data store)
Thymeleaf template engine
Lombok (to simplify writing POJOs)

With these items selected, click on Generate Project.
There are LOTS of other tools that leverage this site. For example, IntelliJ IDEA lets you create a new project inside the IDE, giving you the same options shown here. It invokes the web site's REST API and imports your new project. You can also interact with the site via cURL or any other REST-based tool.

Now let's unpack that ZIP file and see what we've got:

a build.gradle build file
a Gradle wrapper, so there's no need to install Gradle
a LearningSpringBootApplication.java application class
an application.properties file
a LearningSpringBootApplicationTests.java test class

We built an empty Spring Boot project. Now what? Before we sink our teeth into writing code, let's take a peek at the build file. It's quite terse, but carries some key bits. Starting from the top:

buildscript {
  ext {
    springBootVersion = '1.4.1.BUILD-SNAPSHOT'
  }
  repositories {
    mavenCentral()
    maven { url "https://repo.spring.io/snapshot" }
    maven { url "https://repo.spring.io/milestone" }
  }
  dependencies {
    classpath("org.springframework.boot:spring-boot-gradle-plugin:${springBootVersion}")
  }
}

This contains the basis for our project:

springBootVersion shows us we are using Spring Boot 1.4.1.BUILD-SNAPSHOT
The Maven repositories it will pull from are listed next
Finally, we see the spring-boot-gradle-plugin, a critical tool for any Spring Boot project

The first piece, the version of Spring Boot, is important. That's because Spring Boot comes with a curated list of 140 third-party library versions, extending well beyond the Spring portfolio and into some of the most commonly used libraries in the Java ecosystem. By simply changing the version of Spring Boot, we can upgrade all these libraries to newer versions known to work together.

There is an extra project, the Spring IO Platform (https://spring.io/platform), which includes an additional 134 curated versions, bringing the total to 274.

The repositories aren't as critical, but it's important to add milestones and snapshots if fetching a library not released to Maven central or hosted on some vendor's local repository. Thankfully, the Spring Initializr does this for us based on the version of Spring Boot selected on the site.

Finally, we have the spring-boot-gradle-plugin (and there is a corresponding spring-boot-maven-plugin for Maven users). This plugin is responsible for linking Spring Boot's curated list of versions with the libraries we select in the build file. That way, we don't have to specify the version number. Additionally, this plugin hooks into the build phase and bundles our application into a runnable über JAR, also known as a shaded or fat JAR.

Java doesn't provide a standardized way to load nested JAR files into the classpath. Spring Boot provides the means to bundle up third-party JARs inside an enclosing JAR file and properly load them at runtime. Read more at http://docs.spring.io/spring-boot/docs/1.4.1.BUILD-SNAPSHOT/reference/htmlsingle/#executable-jar.

With an über JAR in hand, we need only put it on a thumb drive and carry it to another machine, to a hundred virtual machines in the cloud or your data center, or anywhere else, and it simply runs wherever we can find a JVM.
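For instance, once the project is built, the über JAR can be launched straight from the command line. A quick sketch follows; the exact JAR file name depends on the artifact and version configured in your build file:

$ ./gradlew build
$ java -jar build/libs/learning-spring-boot-0.0.1-SNAPSHOT.jar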
Peeking a little further down in build.gradle, we can see the plugins that are enabled by default:

apply plugin: 'java'
apply plugin: 'eclipse'
apply plugin: 'spring-boot'

The java plugin indicates the various tasks expected for a Java project
The eclipse plugin helps generate project metadata for Eclipse users
The spring-boot plugin is where the actual spring-boot-gradle-plugin is activated

An up-to-date copy of IntelliJ IDEA can read a plain old Gradle build file just fine without extra plugins.

Which brings us to the final ingredient used to build our application: dependencies.

Spring Boot starters

No application is complete without specifying dependencies. A valuable facet of Spring Boot is its virtual packages. These are published packages that don't contain any code, but instead simply list other dependencies. The following list shows all the dependencies we selected on the Spring Initializr site:

dependencies {
  compile('org.springframework.boot:spring-boot-starter-data-jpa')
  compile('org.springframework.boot:spring-boot-starter-thymeleaf')
  compile('org.springframework.boot:spring-boot-starter-web')
  compile('org.springframework.boot:spring-boot-starter-websocket')
  compile('org.projectlombok:lombok')
  runtime('com.h2database:h2')
  testCompile('org.springframework.boot:spring-boot-starter-test')
}

If you'll notice, most of these packages are Spring Boot starters:

spring-boot-starter-data-jpa pulls in Spring Data JPA, Spring JDBC, Spring ORM, and Hibernate
spring-boot-starter-thymeleaf pulls in the Thymeleaf template engine along with Spring Web and the Spring dialect of Thymeleaf
spring-boot-starter-web pulls in Spring MVC, Jackson JSON support, embedded Tomcat, and Hibernate's JSR-303 validators
spring-boot-starter-websocket pulls in Spring WebSocket and Spring Messaging

These starter packages allow us to quickly grab the bits we need to get up and running. Spring Boot starters have gotten so popular that lots of other third-party library developers are crafting their own.

In addition to starters, we have three extra libraries:

Project Lombok makes it dead simple to define POJOs without getting bogged down in getters, setters, and other details.
H2 is an embedded database allowing us to write tests, noodle out solutions, and get things moving before getting involved with an external database.
spring-boot-starter-test pulls in Spring Boot Test, JSON Path, JUnit, AssertJ, Mockito, Hamcrest, JSON Assert, and Spring Test, all within test scope.

The value of this last starter, spring-boot-starter-test, cannot be overstated. With a single line, the most powerful test utilities are at our fingertips, allowing us to write unit tests, slice tests, and full-blown our-app-inside-embedded-Tomcat tests. It's why this starter is included in all projects without checking a box on the Spring Initializr site.

Now to get things off the ground, we need to shift focus to the tiny bit of code written for us by the Spring Initializr.
The @SpringBootApplication annotation tells Spring Boot, when launched, to scan recursively for Spring components inside this package and register them. It also tells Spring Boot to enable autoconfiguration, a process where beans are automatically created based on classpath settings, property settings, and other factors. Finally, it indicates that this class itself can be a source for Spring bean definitions. It holds a public static void main(), a simple method to run the application. There is no need to drop this code into an application server or servlet container. We can just run it straight up, inside our IDE. The amount of time saved by this feature, over the long haul, adds up fast. SpringApplication.run() points Spring Boot at the leap off point. In this case, this very class. But it's possible to run other classes. This little class is runnable. Right now! In fact, let's give it a shot. . ____ _ __ _ _ /\ / ___'_ __ _ _(_)_ __ __ _ ( ( )___ | '_ | '_| | '_ / _` | \/ ___)| |_)| | | | | || (_| | ) ) ) ) ' |____| .__|_| |_|_| |___, | / / / / =========|_|==============|___/=/_/_/_/ :: Spring Boot :: (v1.4.1.BUILD-SNAPSHOT) 2016-09-18 19:52:44.214: Starting LearningSpringBootApplication on ret... 2016-09-18 19:52:44.217: No active profile set, falling back to defaul... 2016-09-18 19:52:44.513: Refreshing org.springframework.boot.context.e... 2016-09-18 19:52:45.785: Bean 'org.springframework.transaction.annotat... 2016-09-18 19:52:46.188: Tomcat initialized with port(s): 8080 (http) 2016-09-18 19:52:46.201: Starting service Tomcat 2016-09-18 19:52:46.202: Starting Servlet Engine: Apache Tomcat/8.5.5 2016-09-18 19:52:46.323: Initializing Spring embedded WebApplicationCo... 2016-09-18 19:52:46.324: Root WebApplicationContext: initialization co... 2016-09-18 19:52:46.466: Mapping servlet: 'dispatcherServlet' to [/] 2016-09-18 19:52:46.469: Mapping filter: 'characterEncodingFilter' to:... 2016-09-18 19:52:46.470: Mapping filter: 'hiddenHttpMethodFilter' to: ... 2016-09-18 19:52:46.470: Mapping filter: 'httpPutFormContentFilter' to... 2016-09-18 19:52:46.470: Mapping filter: 'requestContextFilter' to: [/*] 2016-09-18 19:52:46.794: Building JPA container EntityManagerFactory f... 2016-09-18 19:52:46.809: HHH000204: Processing PersistenceUnitInfo [ name: default ...] 2016-09-18 19:52:46.882: HHH000412: Hibernate Core {5.0.9.Final} 2016-09-18 19:52:46.883: HHH000206: hibernate.properties not found 2016-09-18 19:52:46.884: javassist 2016-09-18 19:52:46.976: HCANN000001: Hibernate Commons Annotations {5... 2016-09-18 19:52:47.169: Using dialect: org.hibernate.dialect.H2Dialect 2016-09-18 19:52:47.358: HHH000227: Running hbm2ddl schema export 2016-09-18 19:52:47.359: HHH000230: Schema export complete 2016-09-18 19:52:47.390: Initialized JPA EntityManagerFactory for pers... 2016-09-18 19:52:47.628: Looking for @ControllerAdvice: org.springfram... 2016-09-18 19:52:47.702: Mapped "{[/error]}" onto public org.springfra... 2016-09-18 19:52:47.703: Mapped "{[/error],produces=[text/html]}" onto... 2016-09-18 19:52:47.724: Mapped URL path [/webjars/**] onto handler of... 2016-09-18 19:52:47.724: Mapped URL path [/**] onto handler of type [c... 2016-09-18 19:52:47.752: Mapped URL path [/**/favicon.ico] onto handle... 2016-09-18 19:52:47.778: Cannot find template location: classpath:/tem... 2016-09-18 19:52:48.229: Registering beans for JMX exposure on startup 2016-09-18 19:52:48.278: Tomcat started on port(s): 8080 (http) 2016-09-18 19:52:48.282: Started LearningSpringBootApplication in 4.57... 
Scrolling through the output, we can see several things:

The banner at the top gives us a readout of the version of Spring Boot. (By the way, you can create your own ASCII art banner by putting either banner.txt or banner.png into src/main/resources/.)
Embedded Tomcat is initialized on port 8080, indicating it's ready for web requests.
Hibernate is online with the H2 dialect enabled.
A few Spring MVC routes are registered, such as /error, /webjars, a favicon.ico, and a static resource handler.
And the wonderful Started LearningSpringBootApplication in 4.571 seconds message.

Spring Boot uses embedded Tomcat, so there's no need to install a container on our target machine. Non-web apps don't even require Apache Tomcat. The JAR itself is the new container that allows us to stop thinking in terms of old-fashioned servlet containers. Instead, we can think in terms of apps. All these factors add up to maximum flexibility in application deployment.

How does Spring Boot use embedded Tomcat, among other things? As mentioned earlier, it has autoconfiguration, meaning it has Spring beans that are created based on different conditions. When Spring Boot sees Apache Tomcat on the classpath, it creates an embedded Tomcat instance along with several beans to support that. When it spots Spring MVC on the classpath, it creates view resolution engines, handler mappers, and a whole host of other beans needed to support that, letting us focus on coding custom routes. With H2 on the classpath, it spins up an in-memory, embedded SQL data store. Spring Data JPA will cause Spring Boot to craft an EntityManager along with everything else needed to start speaking JPA, letting us focus on defining repositories.

At this stage, we have a running web application, albeit an empty one. There are no custom routes and no means to handle data. But we can add some real fast. Let's draft a simple REST controller:

package com.greglturnquist.learningspringboot;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HomeController {

  @GetMapping
  public String greeting(@RequestParam(required = false,
        defaultValue = "") String name) {
    return name.equals("") ? "Hey!" : "Hey, " + name + "!";
  }
}

Let's examine this tiny REST controller in detail:

The @RestController annotation indicates that we don't want to render views, but instead write the results straight into the response body.
@GetMapping is Spring's shorthand annotation for @RequestMapping(method = RequestMethod.GET, ...). In this case, it defaults the route to "/".
Our greeting() method has one argument: @RequestParam(required = false, defaultValue = "") String name. It indicates that this value can be requested via an HTTP query (?name=Greg), the query isn't required, and in case it's missing, supply an empty string.
Finally, we are returning one of two messages depending on whether or not name is empty, using Java's classic ternary operator.

If we re-launch the LearningSpringBootApplication in our IDE, we'll see a new entry in the console:

2016-09-18 20:13:08.149: Mapped "{[],methods=[GET]}" onto public java....

We can then ping our new route in the browser at http://localhost:8080 and http://localhost:8080?name=Greg. Try it out!
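If you prefer the command line, the same check can be done with curl; the expected output follows directly from the ternary expression in the controller:

$ curl http://localhost:8080
Hey!
$ curl "http://localhost:8080?name=Greg"
Hey, Greg!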
We can start out by defining a simple Chapter entity to capture book details, as shown in the following code:

    package com.greglturnquist.learningspringboot;

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;

    import lombok.Data;

    @Data
    @Entity
    public class Chapter {

        @Id
        @GeneratedValue
        private Long id;
        private String name;

        private Chapter() {
            // No one but JPA uses this.
        }

        public Chapter(String name) {
            this.name = name;
        }
    }

This little POJO lets us capture the details about the chapter of a book as follows:

- The @Data annotation from Lombok will generate getters, setters, a toString() method, a constructor for all required fields (those marked final), an equals() method, and a hashCode() method.
- The @Entity annotation flags this class as suitable for storing in a JPA data store.
- The id field is marked with JPA's @Id and @GeneratedValue annotations, indicating this is the primary key, and that writing new rows into the corresponding table will create a PK automatically. Spring Data JPA will, by default, create a table named CHAPTER with two columns, ID and NAME.
- The key field is name, which is populated by the publicly visible constructor.
- JPA requires a no-arg constructor, so we have included one, but marked it private so no one but JPA may access it.

To interact with this entity and its corresponding table in H2, we could dig in and start using the autoconfigured EntityManager supplied by Spring Boot. But why do that, when we can declare a repository-based solution? To do so, we'll create an interface defining the operations we need. Check out this simple interface:

    package com.greglturnquist.learningspringboot;

    import org.springframework.data.repository.CrudRepository;

    public interface ChapterRepository extends CrudRepository<Chapter, Long> {
    }

This declarative interface creates a Spring Data repository as follows:

- CrudRepository extends Repository, a Spring Data Commons marker interface that signals Spring Data to create a concrete implementation while also capturing domain information.
- CrudRepository, also from Spring Data Commons, has some pre-defined CRUD operations (save, delete, deleteAll, findOne, findAll).
- It specifies the entity type (Chapter) and the type of the primary key (Long).
- Spring Data JPA will automatically wire up a concrete implementation of our interface.

Spring Data doesn't engage in code generation. Code generation has a sordid history of being out of date at some of the worst times. Instead, Spring Data uses proxies and other mechanisms to support all these operations. Never forget - the code you don't write has no bugs.

With Chapter and ChapterRepository defined, we can now pre-load the database, as shown in the following code:

    package com.greglturnquist.learningspringboot;

    import org.springframework.boot.CommandLineRunner;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class LoadDatabase {

        @Bean
        CommandLineRunner init(ChapterRepository repository) {
            return args -> {
                repository.save(
                    new Chapter("Quick start with Java"));
                repository.save(
                    new Chapter("Reactive Web with Spring Boot"));
                repository.save(
                    new Chapter("...and more!"));
            };
        }
    }

This class will be automatically scanned in by Spring Boot and run in the following way:

- @Configuration marks this class as a source of beans.
- @Bean indicates that the return value of init() is a Spring Bean. In this case, a CommandLineRunner.
- Spring Boot runs all CommandLineRunner beans after the entire application is up and running.
- This bean definition is requesting a copy of the ChapterRepository.
- Using Java 8's ability to coerce the args -> {} lambda function into a CommandLineRunner, we are able to write several save() operations, pre-loading our data.

With this in place, all that's left is to write a REST controller to serve up the data!

    package com.greglturnquist.learningspringboot;

    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.RestController;

    @RestController
    public class ChapterController {

        private final ChapterRepository repository;

        public ChapterController(ChapterRepository repository) {
            this.repository = repository;
        }

        @GetMapping("/chapters")
        public Iterable<Chapter> listing() {
            return repository.findAll();
        }
    }

This controller is able to serve up our data as follows:

- @RestController indicates this is another REST controller.
- Constructor injection is used to automatically load it with a copy of the ChapterRepository. With Spring, if there is only one constructor, there is no need to include an @Autowired annotation.
- @GetMapping tells Spring that this is the place to route /chapters calls. In this case, it returns the results of the findAll() call found in CrudRepository.

If we re-launch our application and visit http://localhost:8080/chapters, we can see our pre-loaded data served up as a nicely formatted JSON document.

It's not very elaborate, but this small collection of classes has helped us quickly define a slice of functionality. And if you'll notice, we spent zero effort configuring JSON converters, route handlers, embedded settings, or any other infrastructure. Spring Boot is designed to let us focus on functional needs, not low-level plumbing.

Summary

In this article, we briefly introduced Spring Boot and rapidly crafted a Spring MVC application using the Spring stack on top of Apache Tomcat, with very little configuration on our end.

Resources for Article:

Further resources on this subject:

- Writing Custom Spring Boot Starters [article]
- Modernizing our Spring Boot app [article]
- Introduction to Spring Framework [article]
An Introduction to PHP-Nuke

Packt
22 Feb 2010
6 min read
This is the first article of the series in which we will cover the introduction to PHP-Nuke. PHP-Nuke is a free tool to manage the content of dynamic websites. To be more specific, PHP-Nuke is an open-source content management system. In fact, you could say it is 'the' open-source content management system. Since it is vastly popular, a number of other similar systems have sprung from it, and even similar systems based around very different technologies that owe nothing to it in terms of code have added 'Nuke' to their name as homage.

Although the first paragraph conveys something of the history and grandeur of PHP-Nuke, it doesn't answer the basic question of what it can actually do for you. PHP-Nuke allows you to create a dynamic, community-driven website with minimum effort and programming knowledge. To get the most out of PHP-Nuke, a knowledge of web development will prove to be useful, but even then, PHP-Nuke is written in the PHP scripting language (as can be deduced from the name), which is probably the most popular and straightforward language for creating websites and web applications.

The first PHP-Nuke release in June 2000 was created by a developer named Francisco Burzi to power his site, Linux Preview. Since then, PHP-Nuke has evolved under his guidance to the system it is today. PHP-Nuke is truly one of the Internet's legendary applications.

In this article, we will take our first look at PHP-Nuke, understand what it can do, find out where to go for further resources, and briefly discuss the site we will create in this article series.

What PHP-Nuke Can Do for You

PHP-Nuke is ideal for creating community-driven websites. The 'community' part of 'community-driven' means that the site is geared towards a particular group of people with similar interests. Maybe this community is concerned with wine making, flowers, programming, zombie films, or even dinosaurs. Maybe the community is actually a group of customers of a particular product. Of course, we are talking about an online community here. Whatever the community is into, the site can be structured to hold information relevant to the members; maybe it will be news stories about a forthcoming zombie film, links to other zombie sites, reviews, or synopses of other zombie films.

The 'driven' part of 'community-driven' suggests that the information available on this site can be extended or enhanced by the members of the community. Members of the community may be able to shape what is on the site by posting comments, contributing or rating stories, and participating in discussions. After all, communities are made up of people, and people have views and opinions, and often like to express them! This is exactly what PHP-Nuke enables. More than being just a website, a PHP-Nuke site provides a rich and interactive environment for its visitors. The best bit is, you don't have to be an expert programmer to achieve all this. With only rudimentary knowledge of HTML, you can engineer a unique-looking PHP-Nuke website.

The Visitor Experience

The standard installation of PHP-Nuke provides many features for its visitors.
Some of them are:

- Searchable news articles, organized into topics
- Ability of visitors to create an account on the site, and log in to their own personal area
- Ability of visitors to rate articles, and create discussions about them
- Straw polls and surveys
- Ability of visitors to submit their own stories to be published on the site
- An encyclopedia, in other words, a collection of entries organized alphabetically
- A catalog of web links or downloadable files
- Discussion forums
- Ability of visitors to select their own look for the site from a list of different 'themes'
- RSS syndication of your articles to share your content with other sites

This is not a complete list either. And these are only some of the features that come with the standard installation. PHP-Nuke is a modular system; it can be customized and extended, and there is a huge range of third-party customizations and extensions to be found on the Internet. Any of these can add to the range of features your site provides.

The Management Experience

As a potential 'manager' of a PHP-Nuke site, as you read through the list of features above, you may think they sound rather attractive, but you might also wonder how you will handle all of that. PHP-Nuke provides a web-based management interface. You, as the manager of the site, visit the site and log in with a special super user, or site administrator, account. After this, from the comfort of your web browser, you run the show:

- You can add new information, and edit, delete, or move existing pieces of information.
- You can approve articles submitted by the user to be shown on the site.
- You can decide the features of the site.
- You can control what is displayed on the pages.
- You can control who is able to see what.

What Exactly is PHP-Nuke?

PHP-Nuke is a collection of PHP scripts that run on a web server, connect to a database, and display the retrieved data in a systematic way. In other words, PHP-Nuke is a data-driven PHP web application. PHP-Nuke can be downloaded for free, and then installed to your local machine for testing and development. The files and the database can be uploaded to a web hosting service, so that your site will be available to anyone on the Internet. There are even web hosting services that offer PHP-Nuke installation at the click of a button.

Modular Structure

PHP-Nuke is built around a 'core' set of functions, which perform mundane tasks such as selecting what part of the application the user should be shown, checking who the user is, and what they can do on the site. The thing that makes PHP-Nuke exciting to the world is the set of modules that comes with it. These modules provide the real functionality of the site, and include ones for news and article management, downloads, and forums, among others. The modules can be switched on and off with ease, and other modules can be added to the system. There is no shortage of third-party modules on the Internet, and you can find a PHP-Nuke module for almost any imaginable purpose.

Themed Interface

The look of a PHP-Nuke site is controlled by a theme. This is a collection of images, colors, and other resources, and instructions that determine the layout of the page. A new theme can be selected, and your site will be transformed immediately. Visitors with a user account on the site are able to select their own personal theme.

Multi-Lingual Interface

PHP-Nuke comes with many language files. These contain translations of standard elements on the site interface.
The availability of these translations reflects the international nature of the PHP-Nuke community.

PHP-Nuke as an Open-Source Content Management System

We used the expression 'open-source content management system' earlier in the article to describe PHP-Nuke. Let's take a closer look at this term.
Types, Variables, and Function Techniques

Packt
16 Feb 2016
39 min read
This article is an introduction to the syntax used in the TypeScript language to apply strong typing to JavaScript. It is intended for readers who have not used TypeScript before, and covers the transition from standard JavaScript to TypeScript. We will cover the following topics in this article:

- Basic types and type syntax: strings, numbers, and booleans
- Inferred typing and duck-typing
- Arrays and enums
- The any type and explicit casting
- Functions and anonymous functions
- Optional and default function parameters
- Argument arrays
- Function callbacks and function signatures
- Function scoping rules and overloads

(For more resources related to this topic, see here.)

Basic types

JavaScript variables can hold a number of data types, including numbers, strings, arrays, objects, functions, and more. The type of an object in JavaScript is determined by its assignment–so if a variable has been assigned a string value, then it will be of type string. This can, however, introduce a number of problems in our code.

JavaScript is not strongly typed

JavaScript objects and variables can be changed or reassigned on the fly. As an example of this, consider the following JavaScript code:

    var myString = "test";
    var myNumber = 1;
    var myBoolean = true;

We start by defining three variables, named myString, myNumber and myBoolean. The myString variable is set to a string value of "test", and as such will be of type string. Similarly, myNumber is set to the value of 1, and is therefore of type number, and myBoolean is set to true, making it of type boolean. Now let's start assigning these variables to each other, as follows:

    myString = myNumber;
    myBoolean = myString;
    myNumber = myBoolean;

We start by setting the value of myString to the value of myNumber (which is the numeric value of 1). We then set the value of myBoolean to the value of myString (which would now be the numeric value of 1). Finally, we set the value of myNumber to the value of myBoolean. What is happening here, is that even though we started out with three different types of variables—a string, a number, and a boolean—we are able to reassign any of these variables to one of the other types. We can assign a number to a string, a string to a boolean, or a boolean to a number.

While this type of assignment in JavaScript is legal, it shows that the JavaScript language is not strongly typed. This can lead to unwanted behaviour in our code. Parts of our code may be relying on the fact that a particular variable is holding a string, and if we inadvertently assign a number to this variable, our code may start to break in unexpected ways.

TypeScript is strongly typed

TypeScript, on the other hand, is a strongly typed language. Once you have declared a variable to be of type string, you can only assign string values to it. All further code that uses this variable must treat it as though it has a type of string. This helps to ensure that code that we write will behave as expected. While strong typing may not seem to be of any use with simple strings and numbers—it certainly does become important when we apply the same rules to objects, groups of objects, function definitions and classes. If you have written a function that expects a string as the first parameter and a number as the second, you cannot be blamed if someone calls your function with a boolean as the first parameter and something else as the second.
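Before moving on, here is a small sketch of that scenario (the orderWidgets function and its callers are hypothetical examples of our own, not samples from this article) showing how a typed function surfaces exactly this kind of mistake:

    function orderWidgets(customerName: string, quantity: number): string {
        // both parameter types are enforced by the compiler
        return customerName + " ordered " + quantity + " widget(s)";
    }

    orderWidgets("Jane", 5);     // compiles cleanly
    orderWidgets(true, "five");  // generates a compile error

The second call is perfectly legal JavaScript, but the TypeScript compiler rejects it before the code ever runs, which is precisely the safety net described above.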
JavaScript programmers have always relied heavily on documentation to understand how to call functions, and the order and type of the correct function parameters. But what if we could take all of this documentation and include it within the IDE? Then, as we write our code, our compiler could point out to us—automatically—that we were using objects and functions in the wrong way. Surely this would make us more efficient, more productive programmers, allowing us to generate code with fewer errors?

TypeScript does exactly that. It introduces a very simple syntax to define the type of a variable or a function parameter to ensure that we are using these objects, variables, and functions in the correct manner. If we break any of these rules, the TypeScript compiler will automatically generate errors, pointing us to the lines of code that are in error. This is how TypeScript got its name. It is JavaScript with strong typing - hence TypeScript. Let's take a look at this very simple language syntax that enables the "Type" in TypeScript.

Type syntax

The TypeScript syntax for declaring the type of a variable is to include a colon (:) after the variable name, and then indicate its type. Consider the following TypeScript code:

    var myString : string = "test";
    var myNumber: number = 1;
    var myBoolean : boolean = true;

This code snippet is the TypeScript equivalent of our preceding JavaScript code. We can now see an example of the TypeScript syntax for declaring a type for the myString variable. By including a colon and then the keyword string (: string), we are telling the compiler that the myString variable is of type string. Similarly, the myNumber variable is of type number, and the myBoolean variable is of type boolean. TypeScript has introduced the string, number and boolean keywords for each of these basic JavaScript types.

If we attempt to assign a value to a variable that is not of the same type, the TypeScript compiler will generate a compile-time error. Given the variables declared in the preceding code, the following TypeScript code will generate some compile errors:

    myString = myNumber;
    myBoolean = myString;
    myNumber = myBoolean;

TypeScript build errors when assigning incorrect types

The TypeScript compiler is generating compile errors because we are attempting to mix these basic types. The first error is generated by the compiler because we cannot assign a number value to a variable of type string. Similarly, the second compile error indicates that we cannot assign a string value to a variable of type boolean. Again, the third error is generated because we cannot assign a boolean value to a variable of type number.

The strong typing syntax that the TypeScript language introduces means that we need to ensure that the types on the left-hand side of an assignment operator (=) are the same as the types on the right-hand side of the assignment operator. To fix the preceding TypeScript code, and remove the compile errors, we would need to do something similar to the following:

    myString = myNumber.toString();
    myBoolean = (myString === "test");
    if (myBoolean) {
        myNumber = 1;
    }

Our first line of code has been changed to call the .toString() function on the myNumber variable (which is of type number), in order to return a value that is of type string. This line of code, then, does not generate a compile error because both sides of the equal sign are of the same type.
Our second line of code has also been changed so that the right-hand side of the assignment operator returns the result of a comparison, myString === "test", which will return a value of type boolean. The compiler will therefore allow this code, because both sides of the assignment resolve to a value of type boolean. The last line of our code snippet has been changed to only assign the value 1 (which is of type number) to the myNumber variable, if the value of the myBoolean variable is true.

Anders Hejlsberg describes this feature as "syntactic sugar". With a little sugar on top of comparable JavaScript code, TypeScript has enabled our code to conform to strong typing rules. Whenever you break these strong typing rules, the compiler will generate errors for your offending code.

Inferred typing

TypeScript also uses a technique called inferred typing, in cases where you do not explicitly specify the type of your variable. In other words, TypeScript will find the first usage of a variable within your code, figure out what type the variable is first initialized to, and then assume the same type for this variable in the rest of your code block. As an example of this, consider the following code:

    var myString = "this is a string";
    var myNumber = 1;
    myNumber = myString;

We start by declaring a variable named myString, and assign a string value to it. TypeScript identifies that this variable has been assigned a value of type string, and will, therefore, infer any further usages of this variable to be of type string. Our second variable, named myNumber has a number assigned to it. Again, TypeScript is inferring the type of this variable to be of type number. If we then attempt to assign the myString variable (of type string) to the myNumber variable (of type number) in the last line of code, TypeScript will generate a familiar error message:

    error TS2011: Build: Cannot convert 'string' to 'number'

This error is generated because of TypeScript's inferred typing rules.

Duck-typing

TypeScript also uses a method called duck-typing for more complex variable types. Duck-typing means that if it looks like a duck, and quacks like a duck, then it probably is a duck. Consider the following TypeScript code:

    var complexType = { name: "myName", id: 1 };
    complexType = { id: 2, name: "anotherName" };

We start with a variable named complexType that has been assigned a simple JavaScript object with a name and id property. On our second line of code, we can see that we are re-assigning the value of this complexType variable to another object that also has an id and a name property. The compiler will use duck-typing in this instance to figure out whether this assignment is valid. In other words, if an object has the same set of properties as another object, then they are considered to be of the same type.

To further illustrate this point, let's see how the compiler reacts if we attempt to assign an object to our complexType variable that does not conform to this duck-typing:

    var complexType = { name: "myName", id: 1 };
    complexType = { id: 2 };
    complexType = { name: "anotherName" };
    complexType = { address: "address" };

The first line of this code snippet defines our complexType variable, and assigns to it an object that contains both an id and name property. From this point, TypeScript will use this inferred type on any value we attempt to assign to the complexType variable. On our second line of code, we are attempting to assign a value that has an id property but not the name property.
On the third line of code, we again attempt to assign a value that has a name property, but does not have an id property. On the last line of our code snippet, we have completely missed the mark. Compiling this code will generate the following errors:

    error TS2012: Build: Cannot convert '{ id: number; }' to '{ name: string; id: number; }':
    error TS2012: Build: Cannot convert '{ name: string; }' to '{ name: string; id: number; }':
    error TS2012: Build: Cannot convert '{ address: string; }' to '{ name: string; id: number; }':

As we can see from the error messages, TypeScript is using duck-typing to ensure type safety. In each message, the compiler gives us clues as to what is wrong with the offending code – by explicitly stating what it is expecting. The complexType variable has both an id and a name property. To assign a value to the complexType variable, then, this value will need to have both an id and a name property. Working through each of these errors, TypeScript is explicitly stating what is wrong with each line of code.

Note that the following code will not generate any error messages:

    var complexType = { name: "myName", id: 1 };
    complexType = { name: "name", id: 2, address: "address" };

Again, our first line of code defines the complexType variable, as we have seen previously, with an id and a name property. Now, look at the second line of this example. The object we are using actually has three properties: name, id, and address. Even though we have added a new address property, the compiler will only check to see if our new object has both an id and a name. Because our new object has these properties, and will therefore match the original type of the variable, TypeScript will allow this assignment through duck-typing.

Inferred typing and duck-typing are powerful features of the TypeScript language – bringing strong typing to our code, without the need to use explicit typing, that is, a colon : and then the type specifier syntax.

Arrays

Besides the base JavaScript types of string, number, and boolean, TypeScript has two other data types: Arrays and enums. Let's look at the syntax for defining arrays. An array is simply marked with the [] notation, similar to JavaScript, and each array can be strongly typed to hold a specific type, as seen in the code below:

    var arrayOfNumbers: number[] = [1, 2, 3];
    arrayOfNumbers = [3, 4, 5];
    arrayOfNumbers = ["one", "two", "three"];

On the first line of this code snippet, we are defining an array named arrayOfNumbers, and further specify that each element of this array must be of type number. The second line then reassigns this array to hold some different numerical values. The last line of this snippet, however, will generate the following error message:

    error TS2012: Build: Cannot convert 'string[]' to 'number[]':

This error message is warning us that the variable arrayOfNumbers is strongly typed to only accept values of type number. Our code tries to assign an array of strings to this array of numbers, and is, therefore, generating a compile error.

The any type

All this type checking is well and good, but JavaScript is flexible enough to allow variables to be mixed and matched. The following code snippet is actually valid JavaScript code:

    var item1 = { id: 1, name: "item 1" };
    item1 = { id: 2 };

Our first line of code assigns an object with an id property and a name property to the variable item1. The second line then re-assigns this variable to an object that has an id property but not a name property.
Unfortunately, as we have seen previously, TypeScript will generate a compile-time error for the preceding code:

    error TS2012: Build: Cannot convert '{ id: number; }' to '{ id: number; name: string; }'

TypeScript introduces the any type for such occasions. Specifying that an object has a type of any in essence relaxes the compiler's strict type checking. The following code shows how to use the any type:

    var item1 : any = { id: 1, name: "item 1" };
    item1 = { id: 2 };

Note how our first line of code has changed. We specify the type of the variable item1 to be of type : any so that our code will compile without errors. Without the type specifier of : any, the second line of code would normally generate an error.

Explicit casting

As with any strongly typed language, there comes a time where you need to explicitly specify the type of an object. An object can be cast to the type of another by using the < > syntax. This is not a cast in the strictest sense of the word; it is more of an assertion that is used at compile time by the TypeScript compiler. Any explicit casting that you use will be compiled away in the resultant JavaScript and will not affect the code at runtime. Let's modify our previous code snippet to use explicit casting:

    var item1 = <any>{ id: 1, name: "item 1" };
    item1 = { id: 2 };

Note that on the first line of this snippet, we have now replaced the : any type specifier on the left-hand side of the assignment, with an explicit cast of <any> on the right-hand side. This snippet of code is telling the compiler to explicitly cast, or to explicitly treat the { id: 1, name: "item 1" } object on the right-hand side as a type of any. So the item1 variable, therefore, also has the type of any (due to TypeScript's inferred typing rules). This then allows us to assign an object with only the { id: 2 } property to the variable item1 on the second line of code. This technique of using the < > syntax on the right-hand side of an assignment is called explicit casting.

While the any type is a necessary feature of the TypeScript language – its usage should really be limited as much as possible. It is a language shortcut that is necessary to ensure compatibility with JavaScript, but over-use of the any type will quickly lead to coding errors that will be difficult to find. Rather than using the type any, try to figure out the correct type of the object you are using, and then use this type instead. We use an acronym within our programming teams: S.F.I.A.T. (pronounced sviat or sveat). Simply Find an Interface for the Any Type. While this may sound silly – it brings home the point that the any type should always be replaced with an interface – so simply find it. Just remember that by actively trying to define what an object's type should be, we are building strongly typed code, and therefore protecting ourselves from future coding errors and bugs.

Enums

Enums are a special type that has been borrowed from other languages such as C#, and provide a solution to the problem of special numbers. An enum associates a human-readable name with a specific number. Consider the following code:

    enum DoorState {
        Open,
        Closed,
        Ajar
    }

In this code snippet, we have defined an enum called DoorState to represent the state of a door. Valid values for this door state are Open, Closed, or Ajar. Under the hood (in the generated JavaScript), TypeScript will assign a numeric value to each of these human-readable enum values. In this example, the DoorState.Open enum value will equate to a numeric value of 0.
Likewise, the enum value DoorState.Closed will equate to the numeric value of 1, and the DoorState.Ajar enum value will equate to 2. Let's have a quick look at how we would use these enum values:

    window.onload = () => {
        var myDoor = DoorState.Open;
        console.log("My door state is " + myDoor.toString());
    };

The first line within the window.onload function creates a variable named myDoor, and sets its value to DoorState.Open. The second line simply logs the value of myDoor to the console. The output of this console.log function would be:

    My door state is 0

This clearly shows that the TypeScript compiler has substituted the enum value of DoorState.Open with the numeric value 0. Now let's use this enum in a slightly different way:

    window.onload = () => {
        var openDoor = DoorState["Closed"];
        console.log("My door state is " + openDoor.toString());
    };

This code snippet uses a string value of "Closed" to look up the enum type, and assign the resulting enum value to the openDoor variable. The output of this code would be:

    My door state is 1

This sample clearly shows that the enum value of DoorState.Closed is the same as the enum value of DoorState["Closed"], because both variants resolve to the numeric value of 1. Finally, let's have a look at what happens when we reference an enum using an array type syntax:

    window.onload = () => {
        var ajarDoor = DoorState[2];
        console.log("My door state is " + ajarDoor.toString());
    };

Here, we assign the variable ajarDoor to an enum value based on the 2nd index value of the DoorState enum. The output of this code, though, is surprising:

    My door state is Ajar

You may have been expecting the output to be simply 2, but here we are getting the string "Ajar" – which is a string representation of our original enum name. This is actually a neat little trick – allowing us to access a string representation of our enum value. The reason that this is possible is down to the JavaScript that has been generated by the TypeScript compiler. Let's have a look, then, at the closure that the TypeScript compiler has generated:

    var DoorState;
    (function (DoorState) {
        DoorState[DoorState["Open"] = 0] = "Open";
        DoorState[DoorState["Closed"] = 1] = "Closed";
        DoorState[DoorState["Ajar"] = 2] = "Ajar";
    })(DoorState || (DoorState = {}));

This strange-looking syntax is building an object that has a specific internal structure. It is this internal structure that allows us to use this enum in the various ways that we have just explored. If we interrogate this structure while debugging our JavaScript, we will see the internal structure of the DoorState object is as follows:

    DoorState
    {...}
        [prototype]: {...}
        [0]: "Open"
        [1]: "Closed"
        [2]: "Ajar"
        [prototype]: []
        Ajar: 2
        Closed: 1
        Open: 0

The DoorState object has a property called "0", which has a string value of "Open". Unfortunately, in JavaScript the number 0 is not a valid property name, so we cannot access this property by simply using DoorState.0. Instead, we must access this property using either DoorState[0] or DoorState["0"]. The DoorState object also has a property named Open, which is set to the numeric value 0. The word Open IS a valid property name in JavaScript, so we can access this property using DoorState["Open"], or simply DoorState.Open, which equates to the same property in JavaScript. While the underlying JavaScript can be a little confusing, all we need to remember about enums is that they are a handy way of defining an easily remembered, human-readable name for a special number.
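To see why this matters in practice, here is a short sketch of our own (the checkDoor function is a hypothetical illustration, not one of the original samples) that uses the DoorState enum defined above instead of bare numbers:

    function checkDoor(state: DoorState) {
        // each branch reads as plain English, with no magic numbers
        if (state === DoorState.Open) {
            console.log("Please close the door.");
        } else if (state === DoorState.Ajar) {
            console.log("The door is not properly closed.");
        }
    }

    checkDoor(DoorState.Ajar); // far clearer than checkDoor(2)

Compare this with a version that tests state === 2 - the enum version makes the intent obvious at a glance.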
Using human-readable enums, instead of just scattering various special numbers around in our code, also makes the intent of the code clearer. Using an application-wide value named DoorState.Open or DoorState.Closed is far simpler than remembering to set a value to 0 for Open, 1 for Closed, and 2 for Ajar. As well as making our code more readable, and more maintainable, using enums also protects our code base whenever these special numeric values change – because they are all defined in one place.

One last note on enums – we can set the numeric value manually, if needs be:

    enum DoorState {
        Open = 3,
        Closed = 7,
        Ajar = 10
    }

Here, we have overridden the default values of the enum to set DoorState.Open to 3, DoorState.Closed to 7, and DoorState.Ajar to 10.

Const enums

With the release of TypeScript 1.4, we are also able to define const enums as follows:

    const enum DoorStateConst {
        Open,
        Closed,
        Ajar
    }
    var myState = DoorStateConst.Open;

These types of enums have been introduced largely for performance reasons, and the resultant JavaScript will not contain the full closure definition for the DoorStateConst enum as we saw previously. Let's have a quick look at the JavaScript that is generated from this DoorStateConst enum:

    var myState = 0 /* Open */;

Note how we do not have a full JavaScript closure for the DoorStateConst at all. The compiler has simply resolved the DoorStateConst.Open enum to its internal value of 0, and removed the const enum definition entirely. With const enums, we therefore cannot reference the internal string value of an enum, as we did in our previous code sample. Consider the following example:

    // generates an error
    console.log(DoorStateConst[0]);
    // valid usage
    console.log(DoorStateConst["Open"]);

The first console.log statement will now generate a compile-time error – as we do not have the full closure available with the property of [0] for our const enum. The second usage of this const enum is valid, however, and will generate the following JavaScript:

    console.log(0 /* "Open" */);

When using const enums, just keep in mind that the compiler will strip away all enum definitions and simply substitute the numeric value of the enum directly into our JavaScript code.

Functions

JavaScript defines functions using the function keyword, a set of parentheses, and then a set of curly braces. A typical JavaScript function would be written as follows:

    function addNumbers(a, b) {
        return a + b;
    }
    var result = addNumbers(1, 2);
    var result2 = addNumbers("1", "2");

This code snippet is fairly self-explanatory; we have defined a function named addNumbers that takes two variables and returns their sum. We then invoke this function, passing in the values of 1 and 2. The value of the variable result would then be 1 + 2, which is 3. Now have a look at the last line of code. Here, we are invoking the addNumbers function, passing in two strings as arguments, instead of numbers. The value of the variable result2 would then be a string, "12". This string value seems like it may not be the desired result, as the name of the function is addNumbers.
Copying the preceding code into a TypeScript file would not generate any errors, but let's insert some type rules into the preceding JavaScript to make it more robust:

    function addNumbers(a: number, b: number): number {
        return a + b;
    }
    var result = addNumbers(1, 2);
    var result2 = addNumbers("1", "2");

In this TypeScript code, we have added a :number type to both of the parameters of the addNumbers function (a and b), and we have also added a :number type just after the parentheses ( ). Placing a type descriptor here means that the return type of the function itself is strongly typed to return a value of type number. In TypeScript, the last line of code, however, will cause a compilation error:

    error TS2082: Build: Supplied parameters do not match any signature of call target:

This error message is generated because we have explicitly stated that the function should accept only numbers for both of the arguments a and b, but in our offending code, we are passing two strings. The TypeScript compiler, therefore, cannot match the signature of a function named addNumbers that accepts two arguments of type string.

Anonymous functions

The JavaScript language also has the concept of anonymous functions. These are functions that are defined on the fly and don't specify a function name. Consider the following JavaScript code:

    var addVar = function(a, b) {
        return a + b;
    };
    var result = addVar(1, 2);

This code snippet defines a function that has no name and adds two values. Because the function does not have a name, it is known as an anonymous function. This anonymous function is then assigned to a variable named addVar. The addVar variable can then be invoked as a function with two parameters, and the return value will be the result of executing the anonymous function. In this case, the variable result will have a value of 3.

Let's now rewrite the preceding JavaScript function in TypeScript, and add some type syntax, in order to ensure that the function only accepts two arguments of type number, and returns a value of type number:

    var addVar = function(a: number, b: number): number {
        return a + b;
    }
    var result = addVar(1, 2);
    var result2 = addVar("1", "2");

In this code snippet, we have created an anonymous function that accepts only arguments of type number for the parameters a and b, and also returns a value of type number. The types for both the a and b parameters, as well as the return type of the function, are now using the :number syntax. This is another example of the simple "syntactic sugar" that TypeScript injects into the language. If we compile this code, TypeScript will reject the code on the last line, where we try to call our anonymous function with two string parameters:

    error TS2082: Build: Supplied parameters do not match any signature of call target:

Optional parameters

When we call a JavaScript function that is expecting parameters, and we do not supply these parameters, then the value of the parameter within the function will be undefined. As an example of this, consider the following JavaScript code:

    var concatStrings = function(a, b, c) {
        return a + b + c;
    }
    console.log(concatStrings("a", "b", "c"));
    console.log(concatStrings("a", "b"));

Here, we have defined a function called concatStrings that takes three parameters, a, b, and c, and simply returns the sum of these values. If we call this function with all three parameters, as seen in the second last line of this snippet, we will end up with the string "abc" logged to the console.
If, however, we only supply two parameters, as seen in the last line of this snippet, the string "abundefined" will be logged to the console. Again, if we call a function and do not supply a parameter, then this parameter, c in our case, will be simply undefined.

TypeScript introduces the question mark ? syntax to indicate optional parameters. Consider the following TypeScript function definition:

    var concatStrings = function(a: string, b: string, c?: string) {
        return a + b + c;
    }
    console.log(concatStrings("a", "b", "c"));
    console.log(concatStrings("a", "b"));
    console.log(concatStrings("a"));

This is a strongly typed version of the original concatStrings JavaScript function that we were using previously. Note the addition of the ? character in the syntax for the third parameter: c?: string. This indicates that the third parameter is optional, and therefore, all of the preceding code will compile cleanly, except for the last line. The last line will generate an error:

    error TS2081: Build: Supplied parameters do not match any signature of call target.

This error is generated because we are attempting to call the concatStrings function with only a single parameter. Our function definition, though, requires at least two parameters, with only the third parameter being optional. The optional parameters must be the last parameters in the function definition. You can have as many optional parameters as you want, as long as non-optional parameters precede the optional parameters.

Default parameters

A subtle variant on the optional parameter function definition allows us to specify the value of a parameter if it is not passed in as an argument from the calling code. Let's modify our preceding function definition to use a default parameter:

    var concatStrings = function(a: string, b: string, c: string = "c") {
        return a + b + c;
    }
    console.log(concatStrings("a", "b", "c"));
    console.log(concatStrings("a", "b"));

This function definition has now dropped the ? optional parameter syntax, but instead has assigned a value of "c" to the last parameter: c:string = "c". By using default parameters, if we do not supply a value for the final parameter named c, the concatStrings function will substitute the default value of "c" instead. The argument c, therefore, will not be undefined. The output of the last two lines of code will both be "abc". Note that using the default parameter syntax will automatically make the parameter optional.

The arguments variable

The JavaScript language allows a function to be called with a variable number of arguments. Every JavaScript function has access to a special variable, named arguments, that can be used to retrieve all arguments that have been passed into the function. As an example of this, consider the following JavaScript code:

    function testParams() {
        if (arguments.length > 0) {
            for (var i = 0; i < arguments.length; i++) {
                console.log("Argument " + i + " = " + arguments[i]);
            }
        }
    }
    testParams(1, 2, 3, 4);
    testParams("first argument");

In this code snippet, we have defined a function named testParams that does not have any named parameters. Note, though, that we can use the special variable, named arguments, to test whether the function was called with any arguments. In our sample, we can simply loop through the arguments array, and log the value of each argument to the console, by using an array indexer: arguments[i].
The output of the console.log calls is as follows:

    Argument 0 = 1
    Argument 1 = 2
    Argument 2 = 3
    Argument 3 = 4
    Argument 0 = first argument

So, how do we express a variable number of function parameters in TypeScript? The answer is to use what are called rest parameters, or the three dots (...) syntax. Here is the equivalent testParams function, expressed in TypeScript:

    function testParams(...argArray: number[]) {
        if (argArray.length > 0) {
            for (var i = 0; i < argArray.length; i++) {
                console.log("argArray " + i + " = " + argArray[i]);
                console.log("arguments " + i + " = " + arguments[i]);
            }
        }
    }
    testParams(1);
    testParams(1, 2, 3, 4);
    testParams("one", "two");

Note the use of the ...argArray: number[] syntax for our testParams function. This syntax is telling the TypeScript compiler that the function can accept any number of arguments. This means that our usages of this function, i.e. calling the function with either testParams(1) or testParams(1,2,3,4), will both compile correctly. In this version of the testParams function, we have added two console.log lines, just to show that the arguments array can be accessed by either the named rest parameter, argArray[i], or through the normal JavaScript array, arguments[i]. The last line in this sample will, however, generate a compile error, as we have defined the rest parameter to only accept numbers, and we are attempting to call the function with strings.

The subtle difference between using argArray and arguments is the inferred type of the argument. Since we have explicitly specified that argArray is of type number, TypeScript will treat any item of the argArray array as a number. However, the internal arguments array does not have an inferred type, and so will be treated as the any type.

We can also combine normal parameters along with rest parameters in a function definition, as long as the rest parameters are the last to be defined in the parameter list, as follows:

    function testParamsTs2(arg1: string, arg2: number, ...argArray: number[]) {
    }

Here, we have two normal parameters named arg1 and arg2 and then an argArray rest parameter. Mistakenly placing the rest parameter at the beginning of the parameter list will generate a compile error.

Function callbacks

One of the most powerful features of JavaScript–and in fact the technology that Node was built on–is the concept of callback functions. A callback function is a function that is passed into another function. Remember that JavaScript is not strongly typed, so a variable can also be a function. This is best illustrated by having a look at some JavaScript code:

    function myCallBack(text) {
        console.log("inside myCallback " + text);
    }
    function callingFunction(initialText, callback) {
        console.log("inside CallingFunction");
        callback(initialText);
    }
    callingFunction("myText", myCallBack);

Here, we have a function named myCallBack that takes a parameter and logs its value to the console. We then define a function named callingFunction that takes two parameters: initialText and callback. The first line of this function simply logs "inside CallingFunction" to the console. The second line of the callingFunction is the interesting bit. It assumes that the callback argument is in fact a function, and invokes it. It also passes the initialText variable to the callback function. If we run this code, we will get two messages logged to the console, as follows:

    inside CallingFunction
    inside myCallback myText

But what happens if we do not pass a function as a callback?
There is nothing in the preceding code that signals to us that the second parameter of callingFunction must be a function. If we inadvertently called the callingFunction function with a string, instead of a function as the second parameter, as follows:

    callingFunction("myText", "this is not a function");

We would get a JavaScript runtime error:

    0x800a138a - JavaScript runtime error: Function expected

Defensive-minded programmers, however, would first check whether the callback parameter was in fact a function before invoking it, as follows:

    function callingFunction(initialText, callback) {
        console.log("inside CallingFunction");
        if (typeof callback === "function") {
            callback(initialText);
        } else {
            console.log(callback + " is not a function");
        }
    }
    callingFunction("myText", "this is not a function");

Note the third line of this code snippet, where we check the type of the callback variable before invoking it. If it is not a function, we then log a message to the console. On the last line of this snippet, we are executing the callingFunction, but this time passing a string as the second parameter. The output of the code snippet would be:

    inside CallingFunction
    this is not a function is not a function

When using function callbacks, then, JavaScript programmers need to do two things; firstly, understand which parameters are in fact callbacks, and secondly, code around the invalid use of callback functions.

Function signatures

The TypeScript "syntactic sugar" that enforces strong typing is not only intended for variables and types, but for function signatures as well. What if we could document our JavaScript callback functions in code, and then warn users of our code when they are passing the wrong type of parameter to our functions? TypeScript does this through function signatures. A function signature introduces a fat arrow syntax, () =>, to define what the function should look like. Let's re-write the preceding JavaScript sample in TypeScript:

    function myCallBack(text: string) {
        console.log("inside myCallback " + text);
    }
    function callingFunction(initialText: string, callback: (text: string) => void) {
        callback(initialText);
    }
    callingFunction("myText", myCallBack);
    callingFunction("myText", "this is not a function");

Our first function definition, myCallBack, now strongly types the text parameter to be of type string. Our callingFunction function has two parameters; initialText, which is of type string, and callback, which now has the new function signature syntax. Let's look at this function signature more closely:

    callback: (text: string) => void

What this function definition is saying, is that the callback argument is typed (by the : syntax) to be a function, using the fat arrow syntax () =>. Additionally, this function takes a parameter named text that is of type string. To the right of the fat arrow syntax, we can see a new TypeScript basic type, called void. Void is a keyword to denote that a function does not return a value. So, the callingFunction function will only accept, as its second argument, a function that takes a single string parameter and returns nothing.
Compiling the preceding code will correctly highlight an error in the last line of the code snippet, where we are passing a string as the second parameter, instead of a callback function:

    error TS2082: Build: Supplied parameters do not match any signature of call target:
    Type '(text: string) => void' requires a call signature, but type 'String' lacks one

Given the preceding function signature for the callback function, the following code would also generate compile-time errors:

    function myCallBackNumber(arg1: number) {
        console.log("arg1 = " + arg1);
    }
    callingFunction("myText", myCallBackNumber);

Here, we are defining a function named myCallBackNumber that takes a number as its only parameter. When we attempt to compile this code, we will get an error message indicating that the callback parameter, which is our myCallBackNumber function, also does not have the correct function signature:

    Call signatures of types 'typeof myCallBackNumber' and '(text: string) => void' are incompatible.

The function signature of myCallBackNumber would actually be (arg1:number) => void, instead of the required (text: string) => void, hence the error. In function signatures, the parameter name (arg1 or text) does not need to be the same. Only the number of parameters, their types, and the return type of the function need to be the same.

This is a very powerful feature of TypeScript — defining in code what the signatures of functions should be, and warning users when they do not call a function with the correct parameters. As we saw in our introduction to TypeScript, this is most significant when we are working with third-party libraries. Before we are able to use third-party functions, classes, or objects in TypeScript, we need to define what their function signatures are. These function definitions are put into a special type of TypeScript file, called a declaration file, and saved with a .d.ts extension.

Function callbacks and scope

JavaScript uses lexical scoping rules to define the valid scope of a variable. This means that the value of a variable is defined by its location within the source code. Nested functions have access to variables that are defined in their parent scope. As an example of this, consider the following TypeScript code:

    function testScope() {
        var testVariable = "myTestVariable";
        function print() {
            console.log(testVariable);
        }
    }
    console.log(testVariable);

This code snippet defines a function named testScope. The variable testVariable is defined within this function. The print function is a child function of testScope, so it has access to the testVariable variable. The last line of the code, however, will generate a compile error, because it is attempting to use the variable testVariable, which is lexically scoped to be valid only inside the body of the testScope function:

    error TS2095: Build: Could not find symbol 'testVariable'.

Simple, right? A nested function has access to variables depending on its location within the source code. This is all well and good, but in large JavaScript projects, there are many different files and many areas of the code are designed to be re-usable. Let's take a look at how these scoping rules can become a problem. For this sample, we will use a typical callback scenario—using jQuery to execute an asynchronous call to fetch some data.
Consider the following TypeScript code:

    var testVariable = "testValue";
    function getData() {
        var testVariable_2 = "testValue_2";
        $.ajax(
            {
                url: "/sample_json.json",
                success: (data, status, jqXhr) => {
                    console.log("success : testVariable is :" + testVariable);
                    console.log("success : testVariable_2 is :" + testVariable_2);
                },
                error: (message, status, stack) => {
                    alert("error " + message);
                }
            }
        );
    }
    getData();

In this code snippet, we are defining a variable named testVariable and setting its value. We then define a function called getData. The getData function sets another variable called testVariable_2, and then calls the jQuery $.ajax function. The $.ajax function is configured with three properties: url, success, and error. The url property is a simple string that points to a sample_json.json file in our project directory. The success property is an anonymous function callback that simply logs the values of testVariable and testVariable_2 to the console. Finally, the error property is also an anonymous function callback that simply pops up an alert.

This code runs as expected, and the success function will log the following results to the console:

    success : testVariable is :testValue
    success : testVariable_2 is :testValue_2

So far so good. Now, let's assume that we are trying to refactor the preceding code, as we are doing quite a few similar $.ajax calls, and want to reuse the success callback function elsewhere. We can easily switch out this anonymous function, and create a named function for our success callback, as follows:

    var testVariable = "testValue";
    function getData() {
        var testVariable_2 = "testValue_2";
        $.ajax(
            {
                url: "/sample_json.json",
                success: successCallback,
                error: (message, status, stack) => {
                    alert("error " + message);
                }
            }
        );
    }
    function successCallback(data, status, jqXhr) {
        console.log("success : testVariable is :" + testVariable);
        console.log("success : testVariable_2 is :" + testVariable_2);
    }
    getData();

In this sample, we have created a new function named successCallback with the same parameters as our previous anonymous function. We have also modified the $.ajax call to simply pass this function in, as a callback function for the success property: success: successCallback. If we were to compile this code now, TypeScript would generate an error, as follows:

    error TS2095: Build: Could not find symbol 'testVariable_2'.

Since we have changed the lexical scope of our code, by creating a named function, the new successCallback function no longer has access to the variable testVariable_2. It is fairly easy to spot this sort of error in a trivial example, but in larger projects, and when using third-party libraries, these sorts of errors become more difficult to track down. It is, therefore, worth mentioning that when using callback functions, we need to understand this lexical scope. If your code expects a property to have a value, and it does not have one after a callback, then remember to have a look at the context of the calling code.

Function overloads

As JavaScript is a dynamic language, we can often call the same function with different argument types. Consider the following JavaScript code:

    function add(x, y) {
        return x + y;
    }
    console.log("add(1,1)=" + add(1,1));
    console.log("add('1','1')=" + add("1", "1"));
    console.log("add(true,false)=" + add(true, false));

Here, we are defining a simple add function that returns the sum of its two parameters, x and y.
Function overloads

As JavaScript is a dynamic language, we can often call the same function with different argument types. Consider the following JavaScript code:

function add(x, y) {
    return x + y;
}

console.log("add(1,1)=" + add(1, 1));
console.log("add('1','1')=" + add("1", "1"));
console.log("add(true,false)=" + add(true, false));

Here, we are defining a simple add function that returns the sum of its two parameters, x and y. The last three lines of this code snippet simply log the result of the add function with different types: two numbers, two strings, and two boolean values. If we run this code, we will see the following output:

add(1,1)=2
add('1','1')=11
add(true,false)=1

TypeScript introduces a specific syntax to indicate multiple function signatures for the same function. If we were to replicate the preceding code in TypeScript, we would need to use the function overload syntax:

function add(arg1: string, arg2: string): string;
function add(arg1: number, arg2: number): number;
function add(arg1: boolean, arg2: boolean): boolean;
function add(arg1: any, arg2: any): any {
    return arg1 + arg2;
}

console.log("add(1,1)=" + add(1, 1));
console.log("add('1','1')=" + add("1", "1"));
console.log("add(true,false)=" + add(true, false));

The first line of this code snippet specifies a function overload signature for the add function that accepts two strings and returns a string. The second line specifies another function overload that uses numbers, and the third line uses booleans. The fourth line contains the actual body of the function and uses the type specifier of any. The last three lines of this snippet show how we would use these function signatures, and are similar to the JavaScript code that we have been using previously.

There are three points of interest in the preceding code snippet. Firstly, none of the function signatures on the first three lines of the snippet actually have a function body. Secondly, the final function definition uses the type specifier of any and eventually includes the function body. The function overload syntax must follow this structure, and the final function signature, which includes the body of the function, must use the any type specifier, as anything else will generate compile-time errors. The third point to note is that, by using these function overload signatures, we are limiting the add function to only accept two parameters that are of the same type. If we were to try and mix our types, for example, by calling the function with a boolean and a string, as follows:

console.log("add(true,'1')", add(true, "1"));

TypeScript would generate compile errors:

error TS2082: Build: Supplied parameters do not match any signature of call target:
error TS2087: Build: Could not select overload for 'call' expression.

This seems to contradict our final function definition, though. In the original TypeScript sample, we had a function signature that accepted (arg1: any, arg2: any); so, in theory, this should be called when we try to add a boolean and a string. The TypeScript syntax for function overloads, however, does not allow this. Remember that the function overload syntax must include the use of the any type for the function body, as all overloads eventually call this function body. However, the inclusion of the function overloads above the function body indicates to the compiler that these are the only signatures that should be available to the calling code.

Summary

To learn more about TypeScript, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

Learning TypeScript (https://www.packtpub.com/web-development/learning-typescript)
TypeScript Essentials (https://www.packtpub.com/web-development/typescript-essentials)

Resources for Article:

Further resources on this subject:

Introduction to TypeScript [article]
Writing SOLID JavaScript code with TypeScript [article]
JavaScript Execution with Selenium [article]
NGI0 Consortium to award grants worth 5.6 million euro to open internet projects that promote inclusivity, privacy, interoperability, and data protection

Bhagyashree R
03 Dec 2018
3 min read
NLnet Foundation, on Saturday, announced that it is now accepting project proposals for projects that will deliver "potential break-through contributions to the open internet". These projects will be judged on their technical merits, their strategic relevance to the Next Generation Internet, and overall value for money. For this, the foundation has created separate themes, such as NGI Zero PET and NGI Zero Discovery, under which you can list your projects.

The foundation will be investing 5.6 million euro in small to medium-size R&D grants aimed at improving search and discovery and privacy and trust enhancing technologies from 2018 to 2021. It is seeking project proposals between 5,000 and 50,000 euros, with the chance to scale them up if they show proven potential. The deadline for submitting these proposals is February 1st, 2019, 12:00 CET.

NLnet Foundation supports the open internet and the privacy and security of internet users. The foundation helps independent organizations and people that contribute to an open information society by providing them with microgrants, advice, and access to a global network.

Next Generation Internet (NGI): Creating open, trustworthy, and reliable internet for all

The European Commission launched the NGI initiative in 2016, aiming to make the internet an interoperable platform ecosystem. This future internet will respect human and societal values such as openness, inclusivity, transparency, privacy, cooperation, and protection of data. NGI wants to make the internet more human-centric while also driving the adoption of advanced concepts and methodologies in domains such as artificial intelligence, the Internet of Things, interactive technologies, and more.

To achieve these goals, NLnet has launched projects like NGI Zero Discovery and NGI Zero Privacy and Trust Enhancing Technologies (PET). NGI Zero Discovery aims to provide individual researchers and developers with an agile, effective, and low-threshold funding mechanism. This project will help researchers and developers bring in new ideas that contribute to the establishment of the Next Generation Internet. These new projects will be made available as free/libre/open source software. NGI Zero PET is the sister project of NGI Zero Discovery. The objective of this project is to equip people with new technologies that will provide them better privacy.

NLnet said on their website that these investments are meant to help researchers and developers create an open internet: "Trust is one of the key drivers for the Next Generation Internet, and an adequate level of privacy is a non-negotiable requirement for that. We want to assist independent researchers and developers to create powerful new technology and to help them put it in the hands of future generations as building blocks for a fair and democratic society and an open economy that benefits all."

To read more, check out the NLnet Foundation's official website.

The State of Mozilla 2017 report focuses on internet health and user privacy
Tim Berners-Lee plans to decentralize the web with 'Solid', an open-source project for "personal empowerment through data"
Has the EU just ended the internet as we know it?
Getting Started with React and Bootstrap

Packt
13 Dec 2016
18 min read
In this article by Mehul Bhat and Harmeet Singh, the authors of the book Learning Web Development with React and Bootstrap, you will learn to build two simple real-time examples:

Hello World! with ReactJS
A simple static form application with React and Bootstrap

There are many different ways to build modern web applications with JavaScript and CSS, including a lot of different tool choices, and there is a lot of new theory to learn. This article will introduce you to ReactJS and Bootstrap, which you will likely come across as you learn about modern web application development. It will then take you through the theory behind the Virtual DOM and managing state to create interactive, reusable, and stateful components ourselves, and then push it to a Redux architecture. You will have to learn very carefully about the concept of props and state management and where each belongs.

If you want to learn code, then you have to write code, in whichever way you feel comfortable. Try to create small components/code samples, which will give you more clarity and understanding of any technology. Now, let's see how this article is going to make your life easier when it comes to Bootstrap and ReactJS.

Facebook has really changed the way we think about frontend UI development with the introduction of React. One of the main advantages of this component-based approach is that it is easy to understand, as the view is just a function of the properties and state.

We're going to cover the following topics:

Setting up the environment
ReactJS setup
Bootstrap setup
Why Bootstrap?
Static form example with React and Bootstrap

(For more resources related to this topic, see here.)

ReactJS

React (sometimes called React.js or ReactJS) is an open-source JavaScript library that provides a view for data rendered as HTML. React views are typically built from components that contain additional components specified as custom HTML tags. React gives you a trivial virtual DOM, powerful views without templates, unidirectional data flow, and explicit mutation. It is very methodical in updating the HTML document when the data changes, and it provides a clean separation of components in a modern single-page application. The following example gives a clear idea of the difference between normal HTML encapsulation and a ReactJS custom HTML tag.

JavaScript code:

<section>
    <h2>Add your Ticket</h2>
</section>
<script>
    var root = document.querySelector('section').createShadowRoot();
    root.innerHTML = '<style>h2{ color: red; }</style>' +
        '<h2>Hello World!</h2>';
</script>

ReactJS code:

var sectionStyle = {
    color: 'red'
};

var AddTicket = React.createClass({
    render: function() {
        return (
            <section>
                <h2 style={sectionStyle}>Hello World!</h2>
            </section>
        );
    }
});

ReactDOM.render(<AddTicket/>, mountNode);

As your app comes into existence and develops, it's advantageous to ensure that your components are used in the right manner. The React app consists of reusable components, which makes code reuse, testing, and separation of concerns easy. React is not only the V in MVC; it has stateful components (a stateful component remembers everything within this.state). It handles mapping from input to state changes, and it renders components. In this sense, it does everything that an MVC does. Let's look at React's component life cycle and its different levels. Observe the following screenshot:

React isn't an MVC framework; it's a library for building a composable user interface and reusable components.
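To make the idea of a stateful component concrete, here is a small sketch of our own (it is not part of the book's walkthrough) in the same React.createClass style, where the component remembers a counter in this.state:

var TicketCounter = React.createClass({
    // getInitialState sets up the state the component remembers
    getInitialState: function() {
        return { count: 0 };
    },
    addTicket: function() {
        // setState triggers a re-render with the new state
        this.setState({ count: this.state.count + 1 });
    },
    render: function() {
        return (
            <section>
                <h2>Tickets: {this.state.count}</h2>
                <button onClick={this.addTicket}>Add Ticket</button>
            </section>
        );
    }
});

ReactDOM.render(<TicketCounter/>, mountNode);

Every click updates this.state.count, and React re-renders only the parts of the DOM that changed, which is exactly the mapping from input to state changes described above.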
React is used at Facebook in production, and https://www.instagram.com/ is built entirely on React.

Setting up the environment

When we start to make an application with ReactJS, we need to do some setup, which just involves an HTML page and a few included files. First, we create a directory (folder) called Chapter 1. Open it up in any code editor of your choice. Create a new file called index.html directly inside it and add the following HTML5 boilerplate code:

<!doctype html>
<html class="no-js" lang="">
    <head>
        <meta charset="utf-8">
        <title>ReactJS Chapter 1</title>
    </head>
    <body>
        <!--[if lt IE 8]>
            <p class="browserupgrade">You are using an <strong>outdated</strong> browser. Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your experience.</p>
        <![endif]-->
        <!-- Add your site or application content here -->
        <p>Hello world! This is HTML5 Boilerplate.</p>
    </body>
</html>

This is a standard HTML page that we can update once we have included the React and Bootstrap libraries. Now we need to create a couple of folders inside the Chapter 1 folder, named images, css, and js (JavaScript), to make the application manageable. Once you have completed the folder structure it will look like this:

Installing ReactJS and Bootstrap

Once we have finished creating the folder structure, we need to install both of our frameworks, ReactJS and Bootstrap. It's as simple as including JavaScript and CSS files in your page. We can do this via a content delivery network (CDN), such as Google or Microsoft, but we are going to fetch the files manually in our application so we don't have to be dependent on the Internet while working offline.

Installing React

First, we have to go to this URL https://facebook.github.io/react/ and hit the download button. This will give you a ZIP file of the latest version of ReactJS that includes the ReactJS library files and some sample code. For now, we will only need two files in our application: react.min.js and react-dom.min.js from the build directory of the extracted folder. Here are the steps we need to follow:

Copy react.min.js and react-dom.min.js to your project directory, the Chapter 1/js folder, and open up your index.html file in your editor. Now you just need to add the following script tags in your page's head section:

<script type="text/javascript" src="js/react.min.js"></script>
<script type="text/javascript" src="js/react-dom.min.js"></script>

Now we need to include the compiler in our project to build the code, because right now we are not using build tools such as npm. We will download the file from the following CDN path, https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.23/browser.min.js, or you can reference the CDN path directly. The head tag section will then look like this:

<script type="text/javascript" src="js/react.min.js"></script>
<script type="text/javascript" src="js/react-dom.min.js"></script>
<script type="text/javascript" src="js/browser.min.js"></script>

Here is what the final structure of your js folder will look like:

Bootstrap

Bootstrap is an open source frontend framework maintained by Twitter for developing responsive websites and web applications. It includes HTML, CSS, and JavaScript code to build user interface components. It's a fast and easy way to develop a powerful mobile-first user interface. The Bootstrap grid system allows you to create responsive 12-column grids, layouts, and components. It includes predefined classes for easy layout options (fixed-width and full width).
Bootstrap has dozens of pre-styled reusable components and custom jQuery plugins, such as button, alerts, dropdown, modal, tooltip, tab, pagination, carousel, badges, icons, and much more.

Installing Bootstrap

Now, we need to install Bootstrap. Visit http://getbootstrap.com/getting-started/#download and hit the Download Bootstrap button. This includes the compiled and minified versions of the CSS and JS for our app; we just need the CSS file bootstrap.min.css and the fonts folder. This style sheet will provide you with the look and feel of all of the components, and the responsive layout structure for our application. Previous versions of Bootstrap included icons as images but, in version 3, icons have been replaced by fonts. We can also customize the Bootstrap CSS style sheet as per the components used in your application:

Extract the ZIP folder and copy the Bootstrap CSS from the css folder to your project's css folder.
Now copy the fonts folder of Bootstrap into your project root directory.
Open your index.html in your editor and add this link tag in your head section:

<link rel="stylesheet" href="css/bootstrap.min.css">

That's it. Now we can open up index.html again, but this time in your browser, to see what we are working with. Here is the code that we have written so far:

<!doctype html>
<html class="no-js" lang="">
    <head>
        <meta charset="utf-8">
        <title>ReactJS Chapter 1</title>
        <link rel="stylesheet" href="css/bootstrap.min.css">
        <script type="text/javascript" src="js/react.min.js"></script>
        <script type="text/javascript" src="js/react-dom.min.js"></script>
        <script src="https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.23/browser.min.js"></script>
    </head>
    <body>
        <!--[if lt IE 8]>
            <p class="browserupgrade">You are using an <strong>outdated</strong> browser. Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your experience.</p>
        <![endif]-->
        <!-- Add your site or application content here -->
    </body>
</html>

Using React

So now we've got the ReactJS scripts and the Bootstrap style sheet in place, and we've initialized our app. Now let's start to write our first Hello World app using ReactDOM.render(). The first argument of the ReactDOM.render method is the component we want to render, and the second is the DOM node it should mount (append) to:

ReactDOM.render(ReactElement element, DOMElement container, [function callback])

In order for our React code to be translated to vanilla JavaScript, we wrap it in a <script type="text/babel"> tag; the transformation is then performed in the browser. Let's start out by putting one div tag in our body tag:

<div id="hello"></div>

Now, add the script tag with the React code:

<script type="text/babel">
    ReactDOM.render(
        <h1>Hello, world!</h1>,
        document.getElementById('hello')
    );
</script>

Let's open the HTML page in your browser. If you see Hello, world! in your browser, then we are on the right track. In the preceding screenshot, you can see that it shows Hello, world! in your browser. That's great. We have successfully completed our setup and built our first Hello World app. Here is the full code that we have written so far:

<!doctype html>
<html class="no-js" lang="">
    <head>
        <meta charset="utf-8">
        <title>ReactJS Chapter 1</title>
        <link rel="stylesheet" href="css/bootstrap.min.css">
        <script type="text/javascript" src="js/react.min.js"></script>
        <script type="text/javascript" src="js/react-dom.min.js"></script>
        <script src="https://cdnjs.cloudflare.com/ajax/libs/babel-core/5.8.23/browser.min.js"></script>
    </head>
    <body>
        <!--[if lt IE 8]>
            <p class="browserupgrade">You are using an <strong>outdated</strong> browser. Please <a href="http://browsehappy.com/">upgrade your browser</a> to improve your experience.</p>
        <![endif]-->
        <!-- Add your site or application content here -->
        <div id="hello"></div>
        <script type="text/babel">
            ReactDOM.render(
                <h1>Hello, world!</h1>,
                document.getElementById('hello')
            );
        </script>
    </body>
</html>
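As a side note of our own (this variation is not part of the book's walkthrough), the same output can be produced with a reusable component, which previews the component-based style used throughout the rest of the article:

<script type="text/babel">
    // A reusable component: the greeting target comes in via props
    var HelloWorld = React.createClass({
        render: function() {
            return <h1>Hello, {this.props.name}!</h1>;
        }
    });

    ReactDOM.render(
        <HelloWorld name="world"/>,
        document.getElementById('hello')
    );
</script>

Because the name is passed as a prop, the same component could be mounted several times with different greetings, which is the kind of reuse React components are designed for.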
Static form with React and Bootstrap

We have completed our first Hello World app with React and Bootstrap, and everything looks good and as expected. Now it's time to do more and create a static login form, applying the Bootstrap look and feel to it. Bootstrap is a great way to give your app a responsive grid system for different mobile devices and apply the fundamental styles on HTML elements with the inclusion of a few classes and divs. The responsive grid system is an easy, flexible, and quick way to make your web application responsive and mobile-first, appropriately scaling up to 12 columns per device and viewport size.

First, let's make an HTML structure that follows the Bootstrap grid system. Create a div and add a className of container (fixed width) or container-fluid (full width). Note that we use the className attribute instead of class:

<div className="container-fluid"></div>

As we know, class and for are discouraged as XML attribute names. Moreover, these are reserved words in many JavaScript libraries, so, to keep the difference clear, we use className and htmlFor instead of class and for. Next, create a div and add the className row. The row must be placed within .container-fluid:

<div className="container-fluid">
    <div className="row"></div>
</div>

Now create columns that must be immediate children of a row:

<div className="container-fluid">
    <div className="row">
        <div className="col-lg-6"></div>
    </div>
</div>

.row and the .col-* classes are predefined classes that are available for quickly making grid layouts. Add the h1 tag for the title of the page:

<div className="container-fluid">
    <div className="row">
        <div className="col-lg-6">
            <h1>Login Form</h1>
        </div>
    </div>
</div>

Grid columns are created by specifying the number of the 12 available columns you wish to span. For example, for a four-column layout, we would specify col-sm-3 for each of the equal columns:

col-sm-*: small devices
col-md-*: medium devices
col-lg-*: large devices

We use the col-sm-* prefix to resize our columns for small devices. Inside the columns, we need to wrap our form elements, the label and input tags, in a div tag with the form-group class:

<div className="form-group">
    <label htmlFor="emailInput">Email address</label>
    <input type="email" className="form-control" id="emailInput" placeholder="Email"/>
</div>

To get the Bootstrap styling, we need to add the form-control class to our input elements. If we need extra padding in our label tag, then we can add the control-label class to the label. Let's quickly add the rest of the elements. I am going to add a password field and a submit button.
In previous versions of Bootstrap, form elements were usually wrapped in an element with the form-actions class. However, in Bootstrap 3, we just need to use the same form-group class instead of form-actions. Here is our complete HTML code:

<div className="container-fluid">
    <div className="row">
        <div className="col-lg-6">
            <form>
                <h1>Login Form</h1>
                <hr/>
                <div className="form-group">
                    <label htmlFor="emailInput">Email address</label>
                    <input type="email" className="form-control" id="emailInput" placeholder="Email"/>
                </div>
                <div className="form-group">
                    <label htmlFor="passwordInput">Password</label>
                    <input type="password" className="form-control" id="passwordInput" placeholder="Password"/>
                </div>
                <button type="submit" className="btn btn-default col-xs-offset-9 col-xs-3">Submit</button>
            </form>
        </div>
    </div>
</div>

Now create an object inside the script tag and assign this HTML to it:

var loginFormHTML = <div className="container-fluid">
    <div className="row">
        <div className="col-lg-6">
            <form>
                <h1>Login Form</h1>
                <hr/>
                <div className="form-group">
                    <label htmlFor="emailInput">Email address</label>
                    <input type="email" className="form-control" id="emailInput" placeholder="Email"/>
                </div>
                <div className="form-group">
                    <label htmlFor="passwordInput">Password</label>
                    <input type="password" className="form-control" id="passwordInput" placeholder="Password"/>
                </div>
                <button type="submit" className="btn btn-default col-xs-offset-9 col-xs-3">Submit</button>
            </form>
        </div>
    </div>

We will pass this object to the ReactDOM.render() method instead of directly passing the HTML:

ReactDOM.render(loginFormHTML, document.getElementById('hello'));

Our form is ready. Now let's see how it looks in the browser:

The compiler is unable to parse our HTML, because we have not enclosed one of the div tags properly. You can see in our HTML that we have not closed the wrapper container-fluid at the end. Now close the wrapper tag at the end and open the file again in your browser.

Note: Whenever you hand-code (write) your HTML, please double check your start tags and end tags. They should be written and closed properly, otherwise it will break your UI/frontend look and feel.

Here is the HTML after closing the div tag:

<!doctype html>
<html class="no-js" lang="">
    <head>
        <meta charset="utf-8">
        <title>ReactJS Chapter 1</title>
        <link rel="stylesheet" href="css/bootstrap.min.css">
        <script type="text/javascript" src="js/react.min.js"></script>
        <script type="text/javascript" src="js/react-dom.min.js"></script>
        <script src="js/browser.min.js"></script>
    </head>
    <body>
        <!-- Add your site or application content here -->
        <div id="loginForm"></div>
        <script type="text/babel">
            var loginFormHTML = <div className="container-fluid">
                <div className="row">
                    <div className="col-lg-6">
                        <form>
                            <h1>Login Form</h1>
                            <hr/>
                            <div className="form-group">
                                <label htmlFor="emailInput">Email address</label>
                                <input type="email" className="form-control" id="emailInput" placeholder="Email"/>
                            </div>
                            <div className="form-group">
                                <label htmlFor="passwordInput">Password</label>
                                <input type="password" className="form-control" id="passwordInput" placeholder="Password"/>
                            </div>
                            <button type="submit" className="btn btn-default col-xs-offset-9 col-xs-3">Submit</button>
                        </form>
                    </div>
                </div>
            </div>;

            ReactDOM.render(loginFormHTML, document.getElementById('loginForm'));
        </script>
    </body>
</html>

Now, you can check your page in the browser and you will be able to see the form with the look and feel shown below. Now it's working fine and looks good.
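Before looking at a couple of finishing touches, here is an optional refactoring sketch of our own (it is not part of the book's code): the same markup wrapped in a component, which makes the form reusable and lets the title come in via props:

<script type="text/babel">
    var LoginForm = React.createClass({
        render: function() {
            return (
                <div className="container-fluid">
                    <div className="row">
                        <div className="col-lg-6">
                            <form>
                                {/* The heading is driven by a prop instead of being hardcoded */}
                                <h1>{this.props.title}</h1>
                                <hr/>
                                <div className="form-group">
                                    <label htmlFor="emailInput">Email address</label>
                                    <input type="email" className="form-control" id="emailInput" placeholder="Email"/>
                                </div>
                                <div className="form-group">
                                    <label htmlFor="passwordInput">Password</label>
                                    <input type="password" className="form-control" id="passwordInput" placeholder="Password"/>
                                </div>
                                <button type="submit" className="btn btn-default col-xs-offset-9 col-xs-3">Submit</button>
                            </form>
                        </div>
                    </div>
                </div>
            );
        }
    });

    ReactDOM.render(<LoginForm title="Login Form"/>, document.getElementById('loginForm'));
</script>

The rendered output is identical, but the form can now be mounted anywhere with a different title, which is a small taste of the component reuse discussed earlier.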
Bootstrap also provides two additional classes to make your elements smaller and larger: input-lg and input-sm. You can also check the responsive behavior by resizing the browser. That looks great. Our small static login form application is ready, with responsive behavior.

Some of the benefits are:

Rendering your component is very easy
Reading a component's code is very easy with the help of JSX
JSX also helps you to check your layout as well as how components plug in with each other
You can test your code easily, and it also allows other tools to be integrated for enhancement
As we know, React is a view layer; you can also use it with other JavaScript frameworks

Summary

Our simple static login form application and Hello World examples are looking great and working exactly how they should, so let's recap what we've learned in this article. To begin with, we saw just how easy it is to get ReactJS and Bootstrap installed with the inclusion of JavaScript files and a style sheet. We also looked at how a React application is initialized, and started building our first form application.

The Hello World app and the form application which we have created demonstrate some of React's and Bootstrap's basic features, such as the following:

ReactDOM
Render
Browserify
Bootstrap

With Bootstrap, we worked towards having a responsive grid system for different mobile devices and applied the fundamental styles of HTML elements with the inclusion of a few classes and divs. We also saw the framework's new mobile-first responsive design in action, without cluttering up our markup with unnecessary classes or elements.

Resources for Article:

Further resources on this subject:

Getting Started with ASP.NET Core and Bootstrap 4 [article]
Frontend development with Bootstrap 4 [article]
Gearing Up for Bootstrap 4 [article]
Building Our First App – 7 Minute Workout

Packt
14 Nov 2016
27 min read
In this article by Chandermani Arora and Kevin Hennessy, the authors of the book Angular 2 By Example, we will build a new app in Angular, and in the process, develop a better understanding of the framework. This app will also help us explore some new capabilities of the framework.

(For more resources related to this topic, see here.)

The topics that we will cover in this article include the following:

7 Minute Workout problem description: We detail the functionality of the app that we build.
Code organization: For our first real app, we will try to explain how to organize code, specifically Angular code.
Designing the model: One of the building blocks of our app is its model. We design the app model based on the app's requirements.
Understanding the data binding infrastructure: While building the 7 Minute Workout view, we will look at the data binding capabilities of the framework, which include property, attribute, class, style, and event bindings.

Let's get started! The first thing we will do is define the scope of our 7 Minute Workout app.

What is 7 Minute Workout?

We want everyone reading this article to be physically fit. Our purpose is to stimulate your grey matter. What better way to do it than to build an app that targets physical fitness! 7 Minute Workout is an exercise/workout plan that requires us to perform a set of twelve exercises in quick succession within a seven-minute time span. 7 Minute Workout has become quite popular due to its benefits and the short duration of the workout. We cannot confirm or refute the claims, but doing any form of strenuous physical activity is better than doing nothing at all. If you are interested to know more about the workout, then check out http://well.blogs.nytimes.com/2013/05/09/the-scientific-7-minute-workout/.

The technicalities of the app include performing a set of 12 exercises, dedicating 30 seconds to each of the exercises. This is followed by a brief rest period before starting the next exercise. For the app that we are building, we will be taking rest periods of 10 seconds each. So, the total duration comes out to be a little more than 7 minutes. Once the 7 Minute Workout app is ready, it will look something like this:

Downloading the code base

The code for this app can be downloaded from the GitHub site https://github.com/chandermani/angular2byexample dedicated to this article. Since we are building the app incrementally, we have created multiple checkpoints that map to GitHub branches such as checkpoint2.1, checkpoint2.2, and so on. During the narration, we will highlight the branch for reference. These branches will contain the work done on the app up to that point in time. The 7 Minute Workout code is available inside the repository folder named trainer. So let's get started!

Setting up the build

Remember that we are building on a modern platform for which browsers still lack support. Therefore, directly referencing script files in HTML is out of the question (while common, it's a dated approach that we should avoid anyway). The current browsers do not understand TypeScript; as a matter of fact, even ES2015 (also known as ES6) is not supported. This implies that there has to be a process that converts code written in TypeScript into standard JavaScript (ES5), which browsers can work with. Hence, having a build setup for almost any Angular 2 app becomes imperative. Having a build process may seem like overkill for a small application, but it has some other advantages as well.
If you are a frontend developer working on the web stack, you cannot avoid Node.js. It is the most widely used platform for web/JavaScript development. So, no prizes for guessing that the Angular 2 build setup too is supported on Node.js, with tools such as Grunt, Gulp, JSPM, and webpack. Since we are building on the Node.js platform, install Node.js before starting.

While there are quite elaborate build setup options available online, we go for a minimal setup using Gulp. The reason is that there is no one-size-fits-all solution out there. Also, the primary aim here is to learn about Angular 2 and not to worry too much about the intricacies of setting up and running a build. Some of the notable starter sites plus build setups created by the community are as follows:

angular2-webpack-starter: http://bit.ly/ng2webpack
angular2-seed: http://bit.ly/ng2seed
angular-cli (it allows us to generate the initial code setup, including the build configurations, and has good scaffolding capabilities too): http://bit.ly/ng2-cli

A natural question arises if you are very new to Node.js or the overall build process: what does a typical Angular build involve? It depends! To get an idea about this process, it would be beneficial if we look at the build setup defined for our app. Let's set up the app's build locally then. Follow these steps to have the boilerplate Angular 2 app up and running:

Download the base version of this app from http://bit.ly/ng2be-base and unzip it to a location on your machine. If you are familiar with how Git works, you can just check out the branch base:

git checkout base

This code serves as the starting point for our app. Navigate to the trainer folder from the command line and execute these commands:

npm i -g gulp
npm install

The first command installs Gulp globally so that you can invoke the Gulp command-line tool from anywhere and execute Gulp tasks. A Gulp task is an activity that Gulp performs during the build execution. If we look at the Gulp build script (which we will do shortly), we realize that it is nothing but a sequence of tasks performed whenever a build occurs. The second command installs the app's dependencies (in the form of npm packages). Packages in the Node.js world are third-party libraries that are either used by the app or support the app's building process. For example, Gulp itself is a Node.js package. npm is a command-line tool for pulling these packages from a central repository.

Once Gulp is installed and npm pulls the dependencies from the npm store, we are ready to build and run the application. From the command line, enter the following command:

gulp play

This compiles and runs the app. If the build process goes fine, the default browser window/tab will open with a rudimentary "Hello World" page (http://localhost:9000/index.html). We are all set to begin developing our app in Angular 2! But before we do that, it would be interesting to know what has happened under the hood.

The build internals

Even if you are new to Gulp, looking at gulpfile.js gives you a fair idea about what the build process is doing. A Gulp build is a set of tasks performed in a predefined order. The end result of such a process is some form of packaged code that is ready to be run. And if we are building our apps using TypeScript/ES2015 or some other similar language that browsers do not understand natively, then we need an additional build step, called transpilation.

Code transpiling

As it stands in 2016, browsers still cannot run ES2015 code.
While we are quick to embrace languages that hide the not-so-good parts of JavaScript (ES5), we are still limited by the browser's capabilities. When it comes to language features, ES5 is still the safest bet, as all browsers support it. Clearly, we need a mechanism to convert our TypeScript code into plain JavaScript (ES5). Microsoft has a TypeScript compiler that does this job. The TypeScript compiler takes the TypeScript code and converts it into ES5-format code that can run in all browsers. This process is commonly referred to as transpiling, and since the TypeScript compiler does it, it's called a transpiler. Interestingly, transpilation can happen at both build/compile time and runtime:

Build-time transpilation: Transpilation as part of the build process takes the script files (in our case, TypeScript .ts files) and compiles them into plain JavaScript. Our build setup uses build-time transpilation.
Runtime transpilation: This happens in the browser at runtime. We include the raw language-specific script files (.ts in our case), and the TypeScript compiler, which is loaded in the browser beforehand, compiles these script files on the fly. While runtime transpilation simplifies the build setup process, as a recommendation, it should be limited to development workflows only, considering the additional performance overhead involved in loading the transpiler and transpiling the code on the fly.

The process of transpiling is not limited to TypeScript. Every language targeted towards the Web, such as CoffeeScript, ES2015, or any other language that is not inherently understood by a browser, needs transpilation. There are transpilers for most languages, and the prominent ones (other than TypeScript) are traceur and babel. To compile TypeScript files, we can install the TypeScript compiler manually from the command line using this:

npm install -g typescript

Once installed, we can compile any TypeScript file into ES5 format using the compiler (tsc.exe). But for our build setup, this process is automated using the ts2js Gulp task (check out gulpfile.js). And if you are wondering when we installed TypeScript… well, we did it as part of the npm install step, when setting up the code for the first time. The gulp-typescript package downloads the TypeScript compiler as a dependency.

With this basic understanding of transpilation, we can summarize what happens with our build setup:

The gulp play command kicks off the build process. This command tells Gulp to start the build process by invoking the play task. Since the play task has a dependency on the ts2js task, ts2js is executed first.
The ts2js task compiles the TypeScript files (.ts) located in the src folder and outputs them to the dist folder at the root.
Post build, a static file server is started that serves all the app files, including static files (images, videos, and HTML) and script files (check the gulp.play task).
Thenceforth, the build process keeps a watch on any script file changes (the gulp.watch task) you make and recompiles the code on the fly.
livereload has also been set up for the app. Any changes to the code refresh the browser running the app automatically. In case the browser refresh fails, we can always do a manual refresh.

This is a rudimentary build setup required to run an Angular app. For complex build requirements, we can always look at the starter/seed projects that have a more complete and robust build setup, or build something of our own.
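To make the preceding description more tangible, here is a simplified sketch of what such a gulpfile might look like. This is an illustration of ours, not the exact file shipped in the book's repository; the task names ts2js and play match the text, but details such as the static server and livereload setup are elided:

var gulp = require('gulp');
var ts = require('gulp-typescript');
var sourcemaps = require('gulp-sourcemaps');

// Compile TypeScript (.ts) files from src into plain ES5 JavaScript in dist
gulp.task('ts2js', function () {
    var tsResult = gulp.src('src/**/*.ts')
        .pipe(sourcemaps.init())
        .pipe(ts({ noImplicitAny: true, module: 'system', target: 'ES5' }));
    return tsResult.js
        .pipe(sourcemaps.write())
        .pipe(gulp.dest('dist'));
});

// Build first (task dependency), then watch for changes and recompile;
// a static file server with livereload would also be started here
gulp.task('play', ['ts2js'], function () {
    gulp.watch('src/**/*.ts', ['ts2js']);
});

Running gulp play from the trainer folder would then trigger ts2js, deposit the compiled modules into dist, and keep recompiling as the source changes, which is exactly the sequence of steps summarized above.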
Next, let's look at the boilerplate app code already there and the overall code organization.

Organizing code

This is how we are going to organize our code and other assets for the app: the trainer folder is the root folder for the app, and it has a folder (static) for the static content (such as images, CSS, audio files, and others) and a folder (src) for the app's source code. The organization of the app's source code is heavily influenced by the design of Angular and the Angular style guide (http://bit.ly/ng2-style-guide) released by the Angular team. The components folder hosts all the components that we create. We will be creating subfolders in this folder for every major component of the application. Each component folder will contain artifacts related to that component, which include its template, its implementation, and other related items. We will also keep adding more top-level folders (inside the src folder) as we build the application.

If we look at the code now, the components/app folder defines a root-level component, TrainerAppComponent, and a root-level module, AppModule. bootstrap.ts contains code to bootstrap/load the application module (AppModule).

7 Minute Workout uses Just In Time (JIT) compilation to compile Angular views. This implies that views are compiled just before they are rendered in the browser. Angular has a compiler running in the browser that compiles these views. Angular also supports the Ahead Of Time (AoT) compilation model. With AoT, the views are compiled on the server side using a server version of the Angular compiler. The views returned to the browser are precompiled and ready to be used. For 7 Minute Workout, we stick to the JIT compilation model just because it is easy to set up as compared to AoT, which requires server-side tweaks and package installation. We highly recommend that you use AoT compilation for production apps, due to the numerous benefits it offers. AoT can improve the application's initial load time and reduce its size too. Look at the AoT platform documentation (cookbook) at http://bit.ly/ng2-aot to understand how AoT compilation can benefit you.

Time to start working on our first focus area, which is the app's model!

The 7 Minute Workout model

Designing the model for this app requires us to first detail the functional aspects of the 7 Minute Workout app, and then derive a model that satisfies those requirements. Based on the problem statement defined earlier, some of the obvious requirements are as follows:

Being able to start the workout.
Providing a visual clue about the current exercise and its progress. This includes providing a visual depiction of the current exercise, providing step-by-step instructions on how to do a specific exercise, and the time left for the current exercise.
Notifying the user when the workout ends.

Some other valuable features that we will add to this app are as follows:

The ability to pause the current workout.
Providing information about the next exercise to follow.
Providing audio clues so that the user can perform the workout without constantly looking at the screen. This includes a timer click sound, details about the next exercise, and signaling that the exercise is about to start.
Showing related videos for the exercise in progress and the ability to play them.

As we can see, the central theme for this app is workout and exercise. Here, a workout is a set of exercises performed in a specific order for a particular duration. So, let's go ahead and define the model for our workout and exercise.
Based on the requirements just mentioned, we will need the following details about an exercise:

The name. This should be unique.
The title. This is shown to the user.
The description of the exercise.
Instructions on how to perform the exercise.
Images for the exercise.
The name of the audio clip for the exercise.
Related videos.

With TypeScript, we can define the classes for our model. Create a folder called workout-runner inside the src/components folder and copy the model.ts file from the checkpoint2.1 branch folder workout-runner (http://bit.ly/ng2be-2-1-model-ts) to the corresponding local folder. model.ts contains the model definition for our app. The Exercise class looks like this:

export class Exercise {
    constructor(
        public name: string,
        public title: string,
        public description: string,
        public image: string,
        public nameSound?: string,
        public procedure?: string,
        public videos?: Array<string>) { }
}

TypeScript Tips: Passing constructor parameters with public or private is a shorthand for creating and initializing class members in one go. The ? suffix after nameSound, procedure, and videos implies that these are optional parameters.

For the workout, we need to track the following properties:

The name. This should be unique.
The title. This is shown to the user.
The exercises that are part of the workout.
The duration for each exercise.
The rest duration between two exercises.

So, the model class (WorkoutPlan) looks like this:

export class WorkoutPlan {
    constructor(
        public name: string,
        public title: string,
        public restBetweenExercise: number,
        public exercises: ExercisePlan[],
        public description?: string) { }

    totalWorkoutDuration(): number { … }
}

The totalWorkoutDuration function returns the total duration of the workout in seconds. WorkoutPlan has a reference to another class in the preceding definition: ExercisePlan. It tracks the exercise and the duration of the exercise in a workout, which is quite apparent once we look at the definition of ExercisePlan:

export class ExercisePlan {
    constructor(
        public exercise: Exercise,
        public duration: number) { }
}

These three classes constitute our base model, and we will decide in the future whether or not we need to extend this model as we start implementing the app's functionality. Since we have started with a preconfigured and basic Angular app, you just need to understand how this app bootstrapping is occurring.

App bootstrapping

Look at the src folder. There is a bootstrap.ts file with only the execution bit (other than imports):

platformBrowserDynamic().bootstrapModule(AppModule);

The bootstrapModule function call actually bootstraps the application by loading the root module, AppModule. The process is triggered by this call in index.html:

System.import('app').catch(console.log.bind(console));

The System.import statement sets off the app bootstrapping process by loading the first module from bootstrap.ts. Modules defined in the context of Angular 2 (using the @NgModule decorator) are different from the modules SystemJS loads. SystemJS modules are JavaScript modules, which can be in different formats adhering to CommonJS, AMD, or ES2015 specifications. Angular modules are constructs used by Angular to segregate and organize its artifacts. Unless the context of discussion is SystemJS, any reference to module implies an Angular module. Now we have details on how SystemJS loads our Angular app.

App loading with SystemJS

SystemJS starts loading the JavaScript module with the call to System.import('app') in index.html.
SystemJS starts by loading bootstrap.ts first. The imports defined inside bootstrap.ts cause SystemJS to then load the imported modules. If these module imports have further import statements, SystemJS loads them too, recursively. And finally, the platformBrowserDynamic().bootstrapModule(AppModule); function gets executed once all the imported modules are loaded.

For the SystemJS import function to work, it needs to know where the module is located. We define this in the file systemjs.config.js, and reference it in index.html, before the System.import script:

<script src="systemjs.config.js"></script>

This configuration file contains all of the necessary configuration for SystemJS to work correctly. Open systemjs.config.js; the app parameter to the import function points to a folder dist, as defined in the map object:

var map = {
    'app': 'dist',
    ...
};

And the next variable, packages, contains settings that hint to SystemJS how to load a module from a package when no filename/extension is specified. For app, the default module is bootstrap.js:

var packages = {
    'app': { main: 'bootstrap.js', defaultExtension: 'js' },
    ...
};

Are you wondering what the dist folder has to do with our application? Well, this is where our transpiled scripts end up. As we build our app in TypeScript, the TypeScript compiler converts the .ts script files in the src folder to JavaScript modules and deposits them into the dist folder. SystemJS then loads these compiled JavaScript modules. The transpiled code location has been configured as part of the build definition in gulpfile.js. Look for this excerpt in gulpfile.js:

return tsResult.js
    .pipe(sourcemaps.write())
    .pipe(gulp.dest('dist'))

The module specification used by our app can again be verified in gulpfile.js. Take a look at this line:

noImplicitAny: true, module: 'system', target: 'ES5',

These are TypeScript compiler options, with one being module, that is, the target module definition format. The system module type is a new module format designed to support the exact semantics of ES2015 modules within ES5. Once the scripts are transpiled and the module definitions created (in the target format), SystemJS can load these modules and their dependencies.

It's time to get into the thick of the action; let's build our first component.

Our first component – WorkoutRunnerComponent

To implement the WorkoutRunnerComponent, we need to outline the behavior of the application. What we are going to do in the WorkoutRunnerComponent implementation is as follows:

Start the workout.
Show the workout in progress and show the progress indicator.
After the time elapses for an exercise, show the next exercise.
Repeat this process until all the exercises are over.

Let's start with the implementation. The first thing that we will create is the WorkoutRunnerComponent implementation. Open the workout-runner folder in the src/components folder and add a new code file called workout-runner.component.ts to it. Add this chunk of code to the file:

import {WorkoutPlan, ExercisePlan, Exercise} from './model'

export class WorkoutRunnerComponent {
}

The import module declaration allows us to reference the classes defined in the model.ts file in WorkoutRunnerComponent. We first need to set up the workout data.
Let's do that by adding a constructor and related class properties to the WorkoutRunnerComponent class:

workoutPlan: WorkoutPlan;
restExercise: ExercisePlan;

constructor() {
    this.workoutPlan = this.buildWorkout();
    this.restExercise = new ExercisePlan(
        new Exercise("rest", "Relax!", "Relax a bit", "rest.png"),
        this.workoutPlan.restBetweenExercise);
}

The buildWorkout on WorkoutRunnerComponent sets up the complete workout, as we will see shortly. We also initialize a restExercise variable to track even the rest periods as exercises (note that restExercise is an object of type ExercisePlan).

The buildWorkout function is a lengthy function, so it's better if we copy the implementation from the workout runner's implementation available in the Git branch checkpoint2.1 (http://bit.ly/ng2be-2-1-workout-runner-component-ts). The buildWorkout code looks like this:

buildWorkout(): WorkoutPlan {
    let workout = new WorkoutPlan("7MinWorkout", "7 Minute Workout", 10, []);
    workout.exercises.push(
        new ExercisePlan(
            new Exercise(
                "jumpingJacks",
                "Jumping Jacks",
                "A jumping jack or star jump, also called side-straddle hop is a physical jumping exercise.",
                "JumpingJacks.png",
                "jumpingjacks.wav",
                `Assume an erect position, with feet together and arms at your side. …`,
                ["dmYwZH_BNd0", "BABOdJ-2Z6o", "c4DAnQ6DtF8"]),
            30));
    // (TRUNCATED) Other 11 workout exercise data.
    return workout;
}

This code builds the WorkoutPlan object and pushes the exercise data into the exercises array (an array of ExercisePlan objects), returning the newly built workout.

The initialization is complete; now it's time to actually implement the start of the workout. Add a start function to the WorkoutRunnerComponent implementation, as follows:

start() {
    this.workoutTimeRemaining = this.workoutPlan.totalWorkoutDuration();
    this.currentExerciseIndex = 0;
    this.startExercise(this.workoutPlan.exercises[this.currentExerciseIndex]);
}

Then declare the new variables used in the function at the top, with the other variable declarations:

workoutTimeRemaining: number;
currentExerciseIndex: number;

The workoutTimeRemaining variable tracks the total time remaining for the workout, and currentExerciseIndex tracks the currently executing exercise index. The call to startExercise actually starts an exercise. This is how the code for startExercise looks:

startExercise(exercisePlan: ExercisePlan) {
    this.currentExercise = exercisePlan;
    this.exerciseRunningDuration = 0;
    let intervalId = setInterval(() => {
        if (this.exerciseRunningDuration >= this.currentExercise.duration) {
            clearInterval(intervalId);
        }
        else {
            this.exerciseRunningDuration++;
        }
    }, 1000);
}

We start by initializing currentExercise and exerciseRunningDuration. The currentExercise variable tracks the exercise in progress, and exerciseRunningDuration tracks its duration. These two variables also need to be declared at the top:

currentExercise: ExercisePlan;
exerciseRunningDuration: number;

We use the setInterval JavaScript function with a delay of 1 second (1,000 milliseconds) to track the exercise progress by incrementing exerciseRunningDuration. The setInterval invokes the callback every second. The clearInterval call stops the timer once the exercise duration lapses.

TypeScript Arrow functions: The callback parameter passed to setInterval (() => {…}) is a lambda function (or an arrow function in ES2015). Lambda functions are short-form representations of anonymous functions, with added benefits. You can learn more about them at https://basarat.gitbooks.io/typescript/content/docs/arrow-functions.html.
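The practical benefit of the arrow function in our setInterval callback is how this is handled. Here is a small illustrative sketch of our own (not from the book) showing the pattern in isolation:

class Ticker {
    count = 0;

    start() {
        // An arrow function captures the enclosing 'this', so this.count
        // refers to the Ticker instance even inside the timer callback.
        // A plain 'function () {...}' callback would get its own 'this'
        // (window, or undefined in strict mode), silently breaking the counter.
        let intervalId = setInterval(() => {
            this.count++;
            if (this.count >= 5) {
                clearInterval(intervalId);
            }
        }, 1000);
    }
}

This is exactly why startExercise can safely increment this.exerciseRunningDuration from inside its interval callback.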
As of now, we have a WorkoutRunnerComponent class. We need to convert it into an Angular component and define the component view. Add the import for Component and a component decorator (highlighted code):

import {WorkoutPlan, ExercisePlan, Exercise} from './model'
import {Component} from '@angular/core';

@Component({
    selector: 'workout-runner',
    template: `
        <pre>Current Exercise: {{currentExercise | json}}</pre>
        <pre>Time Left: {{currentExercise.duration - exerciseRunningDuration}}</pre>`
})
export class WorkoutRunnerComponent {

You already know how to create an Angular component: you understand the role of the @Component decorator, what selector does, and how the template is used. The JavaScript generated for the @Component decorator contains enough metadata about the component. This allows the Angular framework to instantiate the correct component at runtime.

Strings enclosed in backticks (` `) are a new addition in ES2015. Also called template literals, such string literals can be multi-line and allow expressions to be embedded inside (not to be confused with Angular expressions). Look at the MDN article at http://bit.ly/template-literals for more details.

The preceding template HTML will render the raw ExercisePlan object and the exercise time remaining. It has an interesting expression inside the first interpolation: currentExercise | json. The currentExercise property is defined in WorkoutRunnerComponent, but what about the | symbol and what follows it (json)? In the Angular 2 world, it is called a pipe. The sole purpose of a pipe is to transform/format template data. The json pipe here does JSON data formatting. To get a general sense of what the json pipe does, we can remove the json pipe plus the | symbol and render the template; we are going to do this next.

As our app currently has only one module (AppModule), we add the WorkoutRunnerComponent declaration to it. Update app.module.ts by adding the highlighted code:

import {WorkoutRunnerComponent} from '../workout-runner/workout-runner.component';

@NgModule({
    imports: [BrowserModule],
    declarations: [TrainerAppComponent, WorkoutRunnerComponent],

Now WorkoutRunnerComponent can be referenced in the root component so that it can be rendered. Modify src/components/app/app.component.ts as highlighted in the following code:

@Component({
    ...
    template: `
        <div class="navbar ...>
            ...
        </div>
        <div class="container ...>
            <workout-runner></workout-runner>
        </div>`
})

We have changed the root component template and added the workout-runner element to it. This will render the WorkoutRunnerComponent inside our root component.

While the implementation may look complete, there is a crucial piece missing. Nowhere in the code do we actually start the workout. The workout should start as soon as we load the page. Component life cycle hooks are going to rescue us!

Component life cycle hooks

The life of an Angular component is eventful. Components get created, change state during their lifetime, and finally they are destroyed. Angular provides some life cycle hooks/functions that the framework invokes (on the component) when such an event occurs. Consider these examples:

When the component is initialized, Angular invokes ngOnInit.
When a component's input properties change, Angular invokes ngOnChanges.
When a component is destroyed, Angular invokes ngOnDestroy.

As developers, we can tap into these key moments and perform some custom logic inside the respective component.
Angular has TypeScript interfaces for each of these hooks that can be applied to the component class to clearly communicate the intent. For example:

class WorkoutRunnerComponent implements OnInit {
    ngOnInit() {
        ...
    }
    ...

The interface name can be derived by removing the prefix ng from the function name. The hook we are going to utilize here is ngOnInit. The ngOnInit function gets fired when the component's data-bound properties are initialized, but before the view initialization starts. Add the ngOnInit function to the WorkoutRunnerComponent class with a call to start the workout:

ngOnInit() {
    this.start();
}

And implement the OnInit interface on WorkoutRunnerComponent; it defines the ngOnInit method:

import {Component, OnInit} from '@angular/core';
…
export class WorkoutRunnerComponent implements OnInit {

There are a number of other life cycle hooks, including ngOnDestroy, ngOnChanges, and ngAfterViewInit, that components support, but we are not going to dwell on any of them here. Look at the developer guide (http://bit.ly/ng2-lifecycle) on life cycle hooks to learn more about them.

Time to run our app! Open the command line, navigate to the trainer folder, and type this line:

gulp play

If there are no compilation errors and the browser automatically loads the app (http://localhost:9000/index.html), we should see the following output:

The model data updates with every passing second! Now you'll understand why interpolations ({{ }}) are a great debugging tool. This would also be a good time to try rendering currentExercise without the json pipe (use {{currentExercise}}), and see what gets rendered.

We are not done yet! Wait long enough on the index.html page and you will realize that the timer stops after 30 seconds. The app does not load the next exercise data. Time to fix it! Update the code inside the setInterval if condition:

if (this.exerciseRunningDuration >= this.currentExercise.duration) {
    clearInterval(intervalId);
    let next: ExercisePlan = this.getNextExercise();
    if (next) {
        if (next !== this.restExercise) {
            this.currentExerciseIndex++;
        }
        this.startExercise(next);
    }
    else {
        console.log("Workout complete!");
    }
}

The if condition if (this.exerciseRunningDuration >= this.currentExercise.duration) is used to transition to the next exercise once the time duration of the current exercise lapses. We use getNextExercise to get the next exercise and call startExercise again to repeat the process. If no exercise is returned by the getNextExercise call, the workout is considered complete. During exercise transitioning, we increment currentExerciseIndex only if the next exercise is not a rest exercise. Remember that the original workout plan does not have a rest exercise. For the sake of consistency, we have created a rest exercise and are now swapping between rest and the standard exercises that are part of the workout plan. Therefore, currentExerciseIndex does not change when the next exercise is rest. Let's quickly add the getNextExercise function too. Add the function to the WorkoutRunnerComponent class:

getNextExercise(): ExercisePlan {
    let nextExercise: ExercisePlan = null;
    if (this.currentExercise === this.restExercise) {
        nextExercise = this.workoutPlan.exercises[this.currentExerciseIndex + 1];
    }
    else if (this.currentExerciseIndex < this.workoutPlan.exercises.length - 1) {
        nextExercise = this.restExercise;
    }
    return nextExercise;
}

The WorkoutRunnerComponent.getNextExercise returns the next exercise that needs to be performed.
Note that the object returned by getNextExercise is an ExercisePlan object that internally contains the exercise details and the duration for which the exercise runs. The implementation is quite self-explanatory: if the current exercise is the rest exercise, take the next exercise from the workoutPlan.exercises array (based on currentExerciseIndex); otherwise, the next exercise is rest, provided we are not on the last exercise (the else if condition check). With this, we are ready to test our implementation. So go ahead and refresh index.html. Exercises should flip after every 10 or 30 seconds. Great!

The current build setup automatically compiles any changes made to the script files when the files are saved; it also refreshes the browser after these changes. But just in case the UI does not update or things do not work as expected, refresh the browser window. If you are having a problem with running the code, look at the Git branch checkpoint2.1 for a working version of what we have done thus far. Or, if you are not using Git, download the snapshot of checkpoint2.1 (a zip file) from http://bit.ly/ng2be-checkpoint2-1. Refer to the README.md file in the trainer folder when setting up the snapshot for the first time. We are done with the component for now.

Summary
We started this article with the aim of creating a reasonably complex Angular app. The 7 Minute Workout app fitted the bill, and you learned a lot about the Angular framework while building it. We started by defining the functional specifications of the 7 Minute Workout app. We then focused our efforts on defining the code structure for the app. To build the app, we started off by defining the model of the app. Once the model was in place, we started the actual implementation by building an Angular component. Angular components are nothing but classes that are decorated with a framework-specific decorator, @Component. We now have a basic 7 Minute Workout app. For a better user experience, we have added a number of small enhancements to it too, but we are still missing some good-to-have features that would make our app more usable.

Resources for Article:
Further resources on this subject:
Angular.js in a Nutshell [article]
Introduction to JavaScript [article]
Create Your First React Element [article]
Setting Up the Environment

Packt
04 Jan 2017
14 min read
In this article by Sohail Salehi, the author of the book Angular 2 Services, we consider the two fundamental questions to ask when a new development tool is announced or launched: how different is the new tool from other competitor tools, and how enhanced is it when compared to its own previous versions? If we are going to invest our time in learning a new framework, common sense says we need to make sure we get a good return on our investment. There are so many good articles out there about the cons and pros of each framework. To me, choosing Angular 2 boils down to three aspects:

The foundation: Angular is introduced and supported by Google and targeted at "evergreen" modern browsers. This means we, as developers, don't need to look out for hacky solutions in each browser upgrade anymore. The browser will always be updated to the latest version available, letting Angular worry about the new changes and leaving us out of it. This way we can focus more on our development tasks.

The community: Think about the community as an asset. The bigger the community, the wider the range of solutions to a particular problem. Looking at the statistics, the Angular community is still way ahead of the others, and the good news is that this community is leaning towards being more involved and contributing more at all levels.

The solution: If you look at the previous JS frameworks, you will see most of them focusing on solving a problem for a browser first, and then for mobile devices. The argument for that could be simple: JS wasn't meant to be a language for mobile development. But things have changed to a great extent over recent years, and people now use mobile devices more than before. I personally believe a complex native mobile application, implemented in Java or C, is more performant than its equivalent implemented in JS. But not every mobile application needs to be complex, so business owners have started asking questions like: why do I need a machine gun to kill a fly?

With that question in mind, Angular 2 chose a different approach. It solves the performance challenges faced by mobile devices first. In other words, if your Angular 2 application is fast enough in mobile environments, then it is lightning fast inside the "evergreen" browsers. So that is what we are going to do in this article. First we are going to learn about Angular 2 and the main problem it is going to solve. Then we talk a little bit about JavaScript history and the differences between Angular 2 and AngularJS 1. Introducing "The Sherlock Project" is next, and finally we install the tools and libraries we need to implement this project.

Introducing Angular 2
The previous JS frameworks we've used already have a fluid and easy workflow towards building web applications rapidly. But as developers, what we are struggling with is the technical debt. In simple words, we could quickly build a web application with an impressive UI. But as the product kept growing and the change requests started kicking in, we had to deal with all the maintenance nightmares which force a long list of maintenance tasks and heavy costs onto the businesses. Basically, the framework that used to be an amazing asset turned into a hairy liability (or technical debt, if you like). One of the major "revamps" in Angular 2 is the removal of a lot of modules, resulting in a lighter and faster core.
For example, if you are coming from an Angular 1.x background and don't see $scope or $log in the new version, don't panic: they are still available to you via other means. But there is no need to add overhead to the loading time if we are not going to use all the modules, so taking the modules out of the core results in better performance. To answer the question, then, one of the main issues Angular 2 addresses is performance, and this is done through a lot of structural changes.

There is no backward compatibility
We don't have backward compatibility. If you have some Angular projects implemented with the previous version (v1.x), depending on the complexity of the project, I wouldn't recommend migrating to the new version. Most likely, you will end up hammering a lot of changes into your migrated Angular 2 project, and at the end you will realize it was more cost effective to just create a new project based on Angular 2 from scratch. Please keep in mind that the previous versions of AngularJS and Angular 2 share just a name; they have huge differences in their nature, and that is the price we pay for better performance.

Previous knowledge of AngularJS 1.x is not necessary
You might be wondering if you need to know AngularJS 1.x before diving into Angular 2. Absolutely not. To be more specific, it might be even better if you didn't have any experience using Angular at all, because your mind wouldn't be preoccupied with obsolete rules and syntaxes. For example, we will see a lot of annotations in Angular 2 which might make you feel uncomfortable if you come from an Angular 1 background. Also, there are different types of dependency injection and child injectors, which are totally new concepts introduced in Angular 2. Moreover, there are new features for templating and data-binding which help to improve loading time through asynchronous processing.

The relationship between ECMAScript, AtScript and TypeScript
The current edition of ECMAScript (ES5) is the one which is widely accepted among all well-known browsers. You can think of it as the traditional JavaScript. Whatever code is written in ES5 can be executed directly in the browsers. The problem is that most modern JavaScript frameworks contain features which require more than the traditional JavaScript capabilities. That is why ES6 was introduced. With this edition, and any future ECMAScript editions, we will be able to empower JavaScript with the features we need. Now, the challenge is running the new code in the current browsers. Most browsers nowadays recognize standard JavaScript code only, so we need a mediator to transform ES6 to ES5. That mediator is called a transpiler, and the technical term for the transformation is transpiling. There are many good transpilers out there and you are free to choose whatever you feel comfortable with. Apart from TypeScript, you might want to consider Babel (babeljs.io) as your main transpiler. Google originally planned to use AtScript to implement Angular 2, but later they joined forces with Microsoft and introduced TypeScript as the official transpiler for Angular 2. The following figure summarizes the relationship between the various editions of ECMAScript, AtScript and TypeScript. For more details about JavaScript, ECMAScript and how they evolved during the past decade, visit the following link: https://en.wikipedia.org/wiki/ECMAScript
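To make transpiling less abstract, here is a small hedged illustration of our own (not from the book): an ES6/TypeScript arrow function using a template literal, followed by the kind of ES5 code a transpiler such as TypeScript or Babel might emit for it:

// ES6/TypeScript source: arrow function plus a template literal
const greet = (name: string): string => `Hello, ${name}!`;

// Roughly what a transpiler emits as ES5, runnable in today's browsers
var greet = function (name) {
    return "Hello, " + name + "!";
};

The ES5 version runs everywhere, while we get to write the more expressive ES6/TypeScript form and let the transpiler do the translation.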
Setting up tools and getting started!
It is important to get the foundation right before installing anything else. Depending on your operating system, install Node.js and its package manager, npm. You can find a detailed installation manual on the Node.js official website: https://nodejs.org/en/ Make sure both Node.js and npm are installed globally (they are accessible system-wide) and have the right permissions. At the time of writing, npm comes with Node.js out of the box. But in case their policy changes in the future, you can always download npm and follow the installation process from the following link: https://npmjs.com

The next stop would be the IDE. Feel free to choose anything that you are comfortable with; even a simple text editor will do. I am going to use WebStorm because of its embedded TypeScript syntax support and Angular 2 features, which speed up the development process. Moreover, it is lightweight enough to handle the project we are about to develop. You can download it from here: https://jetbrains.com/webstorm/download

We are going to use simple objects and arrays as placeholders for the data. But at some stage we will need to persist the data in our application. That means we need a database. We will use Google's Firebase Realtime Database for our needs. It is fast, it doesn't require us to download or set up anything locally, and moreover it responds instantly to requests and aligns perfectly with Angular's two-way data-binding. For now, just leave the database as it is; you don't need to create any connections or database objects.

Setting up the seed project
The final requirement to get started would be an Angular 2 seed project to shape the initial structure of our project. If you look at the public source code repositories, you can find several versions of these seeds, but I prefer the official one for two reasons:

Custom-made seeds usually come with a personal twist, depending on the developer's taste. Although that might sometimes be a great help, since we are going to build everything from scratch and learn the fundamental concepts, they are not favorable for our project.

The official seeds are usually minimal. They are very slim and don't contain an overwhelming amount of third-party packages and environmental configuration.

Speaking about packages, you might be wondering what happened to the other JavaScript packages we need for this application, given that we didn't install anything other than Node and npm. The next section will answer this question.

Setting up an Angular 2 project in WebStorm
Assuming you have installed WebStorm, fire up the IDE and check out a new project from a git repository. Now set the repository URL to https://github.com/angular/angular2-seed.git and save the project in a folder called "the sherlock project", as shown in the figure below. Hit the Clone button and open the project in WebStorm. Next, click on the package.json file and observe the dependencies. As you can see, this is a very lean seed with minimal configuration. It contains Angular 2 plus the required modules to get the project up and running. The first thing we need to do is install all the required dependencies defined inside the package.json file. Right-click on the file and select the "run npm install" option. Installing the packages for the first time will take a while. In the meantime, explore the devDependencies section of package.json in the editor.
As you can see, we have all the required bells and whistles to run the project, including TypeScript, a web server, and so on:

"devDependencies": {
  "@types/core-js": "^0.9.32",
  "@types/node": "^6.0.38",
  "angular2-template-loader": "^0.4.0",
  "awesome-typescript-loader": "^1.1.1",
  "css-loader": "^0.23.1",
  "raw-loader": "^0.5.1",
  "to-string-loader": "^1.1.4",
  "typescript": "^2.0.2",
  "webpack": "^1.12.9",
  "webpack-dev-server": "^1.14.0",
  "webpack-merge": "^0.8.4"
},

We also have some nifty scripts defined inside package.json that automate useful processes. For example, to start a web server and see the seed in action, we can simply execute the following command in a terminal:

$ npm run server

or we can right-click on the package.json file and select "Show npm Scripts". This will open another side tab in WebStorm and show all the available scripts inside the current file. Basically, all npm-related commands (which you can run from the command line) are located inside the package.json file under the scripts key. That means if you have a special need, you can create your own script and add it there (a hedged illustration follows at the end of this section). You can also modify the current ones. Double-click on the start script and it will run the web server and load the seed application on port 3000. That means if you visit http://localhost:3000 you will see the seed application in your browser. If you are wondering where the port number comes from, look into the package.json file and examine the server key under the scripts section:

"server": "webpack-dev-server --inline --colors --progress --display-error-details --display-cached --port 3000 --content-base src",

There is one more thing before we move on to the next topic. If you open any .ts file, WebStorm will ask if you want it to transpile the code to JavaScript. Say No once and it will never show up again. We don't need WebStorm to transpile for us because the start script already contains a transpiler which takes care of all the transformations.
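As a minimal sketch of adding such a custom script (the lint name and its command are our own assumption, not part of the seed), the scripts section could be extended like this; it reuses the typescript compiler already present in devDependencies to type-check the sources without emitting any output:

"scripts": {
  "server": "webpack-dev-server --inline --colors --progress --display-error-details --display-cached --port 3000 --content-base src",
  "lint": "tsc --noEmit"
}

After saving package.json, the new entry shows up in WebStorm's npm Scripts panel and can be run from the command line with npm run lint.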
Front-end developers versus back-end developers
Recently, I had an interesting conversation with a couple of colleagues of mine which is worth sharing here. One of them is an avid front-end developer and the other is a seasoned back-end developer. You guessed what I'm going to talk about: the debate between back-end and front-end developers and which is the better half. We have seen these kinds of debates in development communities long enough. But the interesting thing which, in my opinion, will show up more often in the next few months (or years) is a fading border between the two ends. It feels like the reason that some traditional front-end developers are holding up their guard against the changes in Angular 2 is not just that the syntax has changed, causing a steep learning curve, but mostly that they now have to deal with concepts which have existed natively in back-end development for many years. Conversely, the reason that back-end developers are becoming more open to the changes introduced in Angular 2 is mostly that these changes seem natural to them. Annotations or child dependency injection, for example, are not a big deal to back-enders in the way they bother front-enders. I won't be surprised to see a noticeable shift in both camps in the years to come. Probably we will see more back-enders who are willing to use Angular as a good candidate for some, if not all, of their back-end projects, and probably we will see more front-enders taking object-oriented concepts and best practices more seriously. Given that JavaScript was originally a functional scripting language, they will probably try to catch up with the other camp as fast as they can.

There is no comparison here, and I am not saying which camp has an advantage over the other. My point is this: before modern front-end frameworks, JavaScript was open to being used in quick-and-dirty inline scripts to solve problems quickly. While this is a very convenient approach, it causes serious problems when you want to scale a web application. Imagine the time and effort you might have to spend finding all of those dependent pieces of code and refactoring them to reflect the required changes. When it comes to scalability, we need a full separation between layers, and that requires developers to move away from traditional JavaScript and embrace more OOP best practices in their day-to-day development tasks. That is what has been practiced in all modern front-end frameworks, and Angular 2 takes it to the next level by completely restructuring the model-view-* concept and opening doors to future features which will eventually be a native part of any web browser.

Introducing "The Sherlock Project"
During the course of this journey we are going to dive into all the new Angular 2 concepts by implementing a semi-AI project called "The Sherlock Project". This project will basically be about collecting facts, evaluating and ranking them, and making a decision about how truthful they are. To achieve this goal, we will implement a couple of services and inject them into the project wherever they are needed. We will discuss one aspect of the project at a time and focus on one related service. At the end, all the services will come together to act as one big project.

Summary
This article covered a brief introduction to Angular 2. We saw where Angular 2 comes from, what problems it is going to solve, and how we can benefit from it. We talked about the project that we will be creating with Angular 2. We also saw which other tools are needed for our project and how to install and configure them. Finally, we cloned an Angular 2 seed project, installed all its dependencies, and started a web server to examine the application output in a browser.

Resources for Article:
Further resources on this subject:
Introduction to JavaScript [article]
Third Party Libraries [article]
API with MongoDB and Node.js [article]
IBM WebSphere MQ commands

Packt
25 Aug 2010
10 min read
After reading this article, you will not be a WebSphere MQ expert, but you will have brought your knowledge of MQ to a level where you can have a sensible conversation with your site's MQ administrator about what the Q replication requirements are.

An introduction to MQ
In a nutshell, WebSphere MQ is an assured delivery mechanism, which consists of queues managed by Queue Managers. We can put messages onto, and retrieve messages from, queues, and the movement of messages between queues is facilitated by components called Channels and Transmission Queues. There are a number of fundamental points that we need to know about WebSphere MQ:

All objects in WebSphere MQ are case sensitive
We cannot read messages from a Remote Queue (only from a Local Queue)
We can only put a message onto a Local Queue (not a Remote Queue)

It does not matter at this stage if you do not understand the above points; all will become clear in the following sections. There are some standards regarding WebSphere MQ object names:

Queue names, processes, and Queue Manager names can have a maximum length of 48 characters
Channel names can have a maximum length of 20 characters
The following characters are allowed in names: A-Z, a-z, 0-9, and the . / _ % symbols
There is no implied structure in a name — dots are there for readability

Now let's move on to look at MQ queues in a little more detail.

MQ queues
MQ queues can be thought of as conduits to transport messages between Queue Managers. There are four different types of MQ queues and one related object. The four different types of queues are: Local Queue (QL), Remote Queue (QR), Transmission Queue (TQ), and Dead Letter Queue, and the related object is a Channel (CH). One of the fundamental processes of WebSphere MQ is the ability to move messages between Queue Managers. Let's take a high-level look at how messages are moved, as shown in the following diagram: When the application Appl1 wants to send a message to application Appl2, it opens a queue: the local Queue Manager (QM1) determines whether it is a Local Queue or a Remote Queue. When Appl1 issues an MQPUT command to put a message onto the queue, then if the queue is local, the Queue Manager puts the message directly onto that queue. If the queue is a Remote Queue, then the Queue Manager puts the message onto a Transmission Queue. The Transmission Queue sends the message using the Sender Channel on QM1 to the Receiver Channel on the remote Queue Manager (QM2). The Receiver Channel puts the message onto a Local Queue on QM2. Appl2 issues an MQGET command to retrieve the message from this queue.

Now let's move on to look at the queues used by Q replication, and in particular unidirectional replication, as shown in the following diagram. What we want to show here is the relationship between Remote Queues, Transmission Queues, Channels, and Local Queues. As an example, let's look at the path a message will take from Q Capture on QMA to Q Apply on QMB. Note that in this diagram the Listeners are not shown. Q Capture puts the message onto a remotely-defined queue on QMA (the local Queue Manager for Q Capture). This Remote Queue (CAPA.TO.APPB.SENDQ.REMOTE) is effectively a "place holder" and points to a Local Queue (CAPA.TO.APPB.RECVQ) on QMB, and specifies the Transmission Queue (QMB.XMITQ) it should use to get there. The Transmission Queue has, as part of its definition, the Channel (QMA.TO.QMB) to use. The Channel QMA.TO.QMB has, as part of its definition, the IP address and listening port number of the remote Queue Manager (note that we do not name the remote Queue Manager in this definition — it is specified in the definition for the Remote Queue).
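The following MQSC commands are a minimal hedged sketch of how the three QMA-side objects just described could be defined with runmqsc; the object names come from this article, while the connection name '10.1.1.2(1414)' (QMB's host and listener port) is purely an assumed placeholder:

* Remote Queue: the "place holder" that Q Capture writes to
DEFINE QREMOTE(CAPA.TO.APPB.SENDQ.REMOTE) +
       RNAME(CAPA.TO.APPB.RECVQ) RQMNAME(QMB) +
       XMITQ(QMB.XMITQ)

* Transmission Queue that feeds the Sender Channel
DEFINE QLOCAL(QMB.XMITQ) USAGE(XMITQ)

* Sender Channel pointing at QMB's IP address and listening port
DEFINE CHANNEL(QMA.TO.QMB) CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('10.1.1.2(1414)') XMITQ(QMB.XMITQ)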
The definition for the unidirectional Replication Queue Map (the circled queue names in the diagram) is:

SENDQ: CAPA.TO.APPB.SENDQ.REMOTE on the source
RECVQ: CAPA.TO.APPB.RECVQ on the target
ADMINQ: CAPA.ADMINQ.REMOTE on the target

Let's look at the Remote Queue definition for CAPA.TO.APPB.SENDQ.REMOTE, shown next. On the left-hand side are the definitions on QMA, which comprise the Remote Queue, the Transmission Queue, and the Channel definition. The definitions on QMB are on the right-hand side and comprise the Local Queue and the Receiver Channel. Let's break down these definitions to the core values to show the relationship between the different parameters. We define a Remote Queue by matching up the superscript numbers in the definitions in the two Queue Managers. For definitions on QMA, QMA is the local system and QMB is the remote system; for definitions on QMB, QMB is the local system and QMA is the remote system. The numbered values are:

Remote Queue Manager name
Name of the queue on the remote system
Transmission Queue name
Port number that the remote system is listening on
The IP address of the Remote Queue Manager
Local Queue Manager name
Channel name

Queue names:
QMB: Decide on the Local Queue name on QMB — CAPA.TO.APPB.RECVQ.
QMA: Decide on the Remote Queue name on QMA — CAPA.TO.APPB.SENDQ.REMOTE.

Channels:
QMB: Define a Receiver Channel on QMB, QMA.TO.QMB — make sure the channel type (CHLTYPE) is RCVR. The Channel names on QMA and QMB have to match: QMA.TO.QMB.
QMA: Define a Sender Channel, which takes the messages from the Transmission Queue QMB.XMITQ and which points to the IP address and listening port number of QMB. The Sender Channel name must be QMA.TO.QMB.

Let's move on from unidirectional replication to bidirectional replication. The bidirectional queue diagram is shown next; it is a cut-down version of the full diagram of the WebSphere MQ layer and just shows the queue names and types without the details. The principles in bidirectional replication are the same as for unidirectional replication. There are two Replication Queue Maps — one going from QMA to QMB (as in unidirectional replication) and one going from QMB to QMA.

MQ queue naming standards
The naming of the WebSphere MQ queues is an important part of Q replication setup. It may be that your site already has a naming standard for MQ queues, but if it does not, then here are some thoughts on the subject (WebSphere MQ naming standards were discussed in the Introduction to MQ section):

Queues are related to Q Capture and Q Apply programs, so it is useful to have that fact reflected in the names of the queues.
A Q Capture needs a local Restart Queue, and we use the name CAPA.RESTARTQ.
Each Queue Manager can have a Dead Letter Queue. We use the prefix DEAD.LETTER.QUEUE with a suffix of the Queue Manager name, giving DEAD.LETTER.QUEUE.QMA.
Receive Queues are related to Send Queues: for every Send Queue, we need a Receive Queue. Our Send Queue names are made up of where they are coming from, Q Capture on QMA (CAPA), and where they are going to, Q Apply on QMB (APPB); we also want to record that it is a Send Queue and that it is a remote definition, so we end up with CAPA.TO.APPB.SENDQ.REMOTE. The corresponding Receive Queue will be called CAPA.TO.APPB.RECVQ.
Transmission Queues should reflect the name of the "to" Queue Manager. Our Transmission Queue on QMA is called QMB.XMITQ, reflecting the Queue Manager that it is going to and that it is a Transmission Queue. Using this naming convention on QMB, the Transmission Queue is called QMA.XMITQ.
Channels should reflect the names of the "from" and "to" Queue Managers. Our Sender Channel definition on QMA is QMA.TO.QMB, reflecting that it is a channel from QMA to QMB, and the Receiver Channel on QMB is also called QMA.TO.QMB. The Receiver Channel on QMA is called QMB.TO.QMA, for a Sender Channel of the same name on QMB.
A Replication Queue Map definition requires a local Send Queue, a remote Receive Queue, and a remote Administration Queue. The Send Queue is the queue that Q Capture writes to, the Receive Queue is the queue that Q Apply reads from, and the Administration Queue is the queue that Q Apply uses to send messages back to Q Capture.

MQ queues required for different scenarios
This section lists the number of Local and Remote Queues and Channels that are needed for each type of replication scenario. The queues and channels required for unidirectional replication (including replicating to a Stored Procedure) and Event Publishing are shown in the following tables. Note that the queues and channels required for Event Publishing are a subset of those required for unidirectional replication, but creating extra queues and not using them is not a problem.

The queues and channels required for unidirectional replication (including replicating to a Stored Procedure) are:

QMA (7):
  3 Local Queues: CAPA.ADMINQ, CAPA.RESTARTQ, DEAD.LETTER.QUEUE.QMA
  1 Remote Queue: CAPA.TO.APPB.SENDQ.REMOTE
  1 Transmission Queue: QMB.XMITQ
  1 Sender Channel: QMA.TO.QMB
  1 Receiver Channel: QMB.TO.QMA

QMB (7):
  2 Local Queues: CAPA.TO.APPB.RECVQ, DEAD.LETTER.QUEUE.QMB
  1 Remote Queue: CAPA.ADMINQ.REMOTE
  1 Transmission Queue: QMA.XMITQ
  1 Sender Channel: QMB.TO.QMA
  1 Receiver Channel: QMA.TO.QMB
  1 Model Queue: IBMQREP.SPILL.MODELQ

The queues required for Event Publishing are:

QMA (7):
  3 Local Queues: CAPA.ADMINQ, CAPA.RESTARTQ, DEAD.LETTER.QUEUE.QMA
  1 Remote Queue: CAPA.TO.APPB.SENDQ.REMOTE
  1 Transmission Queue: QMB.XMITQ
  1 Sender Channel: QMA.TO.QMB
  1 Receiver Channel: QMB.TO.QMA

QMB (3):
  2 Local Queues: CAPA.TO.APPB.RECVQ, DEAD.LETTER.QUEUE.QMB
  1 Receiver Channel: QMA.TO.QMB

The queues and channels required for bidirectional/P2P two-way replication are:

QMA (10):
  4 Local Queues: CAPA.ADMINQ, CAPA.RESTARTQ, DEAD.LETTER.QUEUE.QMA, CAPB.TO.APPA.RECVQ
  2 Remote Queues: CAPA.TO.APPB.SENDQ.REMOTE, CAPB.ADMINQ.REMOTE
  1 Transmission Queue: QMB.XMITQ
  1 Sender Channel: QMA.TO.QMB
  1 Receiver Channel: QMB.TO.QMA
  1 Model Queue: IBMQREP.SPILL.MODELQ

QMB (10):
  4 Local Queues: CAPB.ADMINQ, CAPB.RESTARTQ, DEAD.LETTER.QUEUE.QMB, CAPA.TO.APPB.RECVQ
  2 Remote Queues: CAPB.TO.APPA.SENDQ.REMOTE, CAPA.ADMINQ.REMOTE
  1 Transmission Queue: QMA.XMITQ
  1 Sender Channel: QMB.TO.QMA
  1 Receiver Channel: QMA.TO.QMB
  1 Model Queue: IBMQREP.SPILL.MODELQ

The queues and channels required for P2P three-way replication are:

QMA (16):
  5 Local Queues: CAPA.ADMINQ, CAPA.RESTARTQ, DEAD.LETTER.QUEUE.QMA, CAPB.TO.APPA.RECVQ, CAPC.TO.APPA.RECVQ
  4 Remote Queues: CAPA.TO.APPB.SENDQ.REMOTE, CAPB.ADMINQ.REMOTE, CAPA.TO.APPC.SENDQ.REMOTE, CAPC.ADMINQ.REMOTE
  2 Transmission Queues: QMB.XMITQ, QMC.XMITQ
  2 Sender Channels: QMA.TO.QMB, QMA.TO.QMC
  2 Receiver Channels: QMB.TO.QMA, QMC.TO.QMA
  1 Model Queue: IBMQREP.SPILL.MODELQ

QMB (16):
  5 Local Queues: CAPB.ADMINQ, CAPB.RESTARTQ, DEAD.LETTER.QUEUE.QMB, CAPA.TO.APPB.RECVQ, CAPC.TO.APPB.RECVQ
  4 Remote Queues: CAPB.TO.APPA.SENDQ.REMOTE, CAPA.ADMINQ.REMOTE, CAPB.TO.APPC.SENDQ.REMOTE, CAPC.ADMINQ.REMOTE
  2 Transmission Queues: QMA.XMITQ, QMC.XMITQ
  2 Sender Channels: QMB.TO.QMA, QMB.TO.QMC
  2 Receiver Channels: QMA.TO.QMB, QMC.TO.QMB
  1 Model Queue: IBMQREP.SPILL.MODELQ

QMC (16):
  5 Local Queues: CAPC.ADMINQ, CAPC.RESTARTQ, DEAD.LETTER.QUEUE.QMC, CAPA.TO.APPC.RECVQ, CAPB.TO.APPC.RECVQ
  4 Remote Queues: CAPC.TO.APPA.SENDQ.REMOTE, CAPA.ADMINQ.REMOTE, CAPC.TO.APPB.SENDQ.REMOTE, CAPB.ADMINQ.REMOTE
  2 Transmission Queues: QMA.XMITQ, QMB.XMITQ
  2 Sender Channels: QMC.TO.QMA, QMC.TO.QMB
  2 Receiver Channels: QMA.TO.QMC, QMB.TO.QMC
  1 Model Queue: IBMQREP.SPILL.MODELQ
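To complete the unidirectional picture sketched earlier on QMA, here is a hedged MQSC sketch of the matching QMB-side objects from the first table; again, only the connection name '10.1.1.1(1414)' (QMA's host and listener port) is an assumed placeholder:

* Local Queue that Q Apply reads from
DEFINE QLOCAL(CAPA.TO.APPB.RECVQ)

* Remote Queue for administration messages flowing back to Q Capture
DEFINE QREMOTE(CAPA.ADMINQ.REMOTE) +
       RNAME(CAPA.ADMINQ) RQMNAME(QMA) XMITQ(QMA.XMITQ)

* Transmission Queue and Sender Channel for the return path to QMA
DEFINE QLOCAL(QMA.XMITQ) USAGE(XMITQ)
DEFINE CHANNEL(QMB.TO.QMA) CHLTYPE(SDR) TRPTYPE(TCP) +
       CONNAME('10.1.1.1(1414)') XMITQ(QMA.XMITQ)

* Receiver Channel matching QMA's Sender Channel name
DEFINE CHANNEL(QMA.TO.QMB) CHLTYPE(RCVR) TRPTYPE(TCP)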