How-To Tutorials

Getting Started with RStudio

Packt · 16 Feb 2016 · 5 min read
Adoption of the R programming language has grown rapidly in the last few years. The R console, however, is limited when it comes to managing many files or working with version control systems. This limitation, combined with the rising adoption rate, created the need for a better development environment. To serve this need, a team of R enthusiasts began developing an integrated development environment (IDE) to make it easier to work on bigger projects and to collaborate with others. This IDE is called RStudio. In this article, we will see how to work with RStudio and projects.

Working with RStudio and projects

Before RStudio, it was hard to manage bigger projects in the plain R console, as you had to create all the folder structures on your own. When you work with projects or open a project, RStudio takes several actions. For example, it starts a new and clean R session, sources the .Rprofile file in the project's main directory, and sets the current working directory to the project directory. So, you have a complete working environment individually for every project. RStudio will even adjust its own settings, such as active tabs, splitter positions, and so on, to where they were when the project was closed. But just because you can create projects with RStudio easily, it does not mean that you should create a project every single time you write R code. If you just want to do a small analysis, we would recommend that you create one project where you save all your smaller scripts.

Creating a project with RStudio

RStudio offers you an easy way to create projects. Navigate to File | New Project and a popup window will appear with several options. These options let you decide from where you want to create your project: you can start from scratch in a new directory, associate the new project with an existing directory, or create a project from a version control repository. For now, we will focus on creating a new directory.

Locating your project

A very important question to ask yourself when creating a new project is where you want to save it. There are several options and details to pay attention to, especially when it comes to collaboration and different people working on the same project. You can save your project locally, on cloud storage, or with the help of a revision control system such as Git.

Creating your first project

To begin your first project, choose the New Directory option described before and create an empty project. Then, choose a name for the directory and the location where you want to save it; for example, you could create a projects folder in your Dropbox. The first project will be a small data analysis based on a dataset that was extracted from the 1974 issue of the Motor Trend US magazine. It comprises fuel consumption and ten aspects of automobile design and performance, such as the weight or number of cylinders, for 32 automobiles, and it is included in base R. So, we do not have to install a separate package to work with this dataset, as it is automatically loaded when you start R.

We left the Use packrat with this project option unchecked. Packrat is a dependency management tool that makes your R code more isolated, portable, and reproducible by giving your project its own privately managed package library. This is especially important when you create projects in an organizational context, where the code has to run on various computer systems and be usable by many different users. This first project will just run locally and will not depend on a specific combination of package versions.

Organizing your folders

RStudio creates an empty directory for you that includes just one file, Motor-Car-Trend-Analysis.Rproj. This file stores all the information on your project that RStudio needs for loading. But to stay organized, we have to create some folders in the directory. Create the following folders:

data: all the data that we need for our analysis
code: all the code files for cleaning up data, generating plots, and so on
plots: all graphical outputs
reports: all the reports that we create from our dataset

Saving the data

The Motor Trend Car Road Tests dataset is part of the datasets package, one of the preinstalled packages in R. But we will save the data in a CSV file in our data folder, after extracting it from the mtcars variable, to make sure our analysis is reproducible. Put the following lines of code in a new R script and save it as data.R in the code folder:

# write data into csv file
write.csv(mtcars, file = "data/cars.csv", row.names = FALSE)

Analyzing the data

The analysis script will first have to load the data from the CSV file with the following line:

cars_data <- read.csv(file = "data/cars.csv", header = TRUE, sep = ",")
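What you do next is up to your analysis. As a minimal, hypothetical sketch (the file name, the plot, and the statistics here are illustrative and not prescribed by the article), an analysis script saved in the code folder could compute a few summary statistics and write a plot into the plots folder:

# code/analysis.R (illustrative name): a minimal follow-up analysis
cars_data <- read.csv(file = "data/cars.csv", header = TRUE, sep = ",")

# Basic summary statistics for fuel consumption and weight
summary(cars_data[, c("mpg", "wt")])

# Save a scatter plot of weight versus fuel consumption into the plots folder
png("plots/mpg_vs_wt.png")
plot(cars_data$wt, cars_data$mpg,
     xlab = "Weight (1000 lbs)", ylab = "Miles per gallon",
     main = "Motor Trend Car Road Tests")
dev.off()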
Summary

To learn more about RStudio, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

Mastering Machine Learning with R (https://www.packtpub.com/big-data-and-business-intelligence/mastering-machine-learning-r)
R Data Analysis Cookbook (https://www.packtpub.com/big-data-and-business-intelligence/r-data-analysis-cookbook)
Mastering Data Analysis with R (https://www.packtpub.com/big-data-and-business-intelligence/mastering-data-analysis-r)


Creating Our First App with Ionic

Packt · 16 Feb 2016 · 20 min read
There are many options for developing mobile applications today. Native applications require a unique implementation for each platform, such as iOS, Android, and Windows Phone. Native development is still required for some use cases, such as high-performance CPU and GPU processing with heavy memory consumption. However, any application that does not need over-the-top graphics and intensive CPU processing can benefit greatly from a cost-effective, write-once-run-everywhere HTML5 mobile implementation. In this article, we will cover:

Setting up a development environment
Creating a HelloWorld app via CLI
Creating a HelloWorld app via Ionic Creator
Copying examples from Ionic Codepen Demos
Viewing the app using your web browser
Viewing the app using iOS Simulator
Viewing the app using Xcode for iOS
Viewing the app using Genymotion for Android
Viewing the app using Ionic View
Customizing the app folder structure

For those who choose the HTML5 route, there are many great choices in this active market. Some options may be very easy to start with but hard to scale, or they may face performance problems. Commercial options are generally expensive for small developers still discovering product and market fit. It's a best practice to think of the users first. There are instances where a simple responsive website is the better choice; for example, when the business has mainly fixed content that requires minimal updating, or when the content is better off on the web for SEO purposes. Ionic has several advantages over its competitors:

It's written on top of AngularJS
UI performance is strong because of its use of the requestAnimationFrame() technique
It offers a beautiful and comprehensive set of default styles, similar to a mobile-focused Twitter Bootstrap
Sass is available for quick, easy, and effective theme customization

You will go through several HelloWorld examples to bootstrap your Ionic app. This process will give you a quick skeleton from which to start building more comprehensive apps. The majority of apps have similar user experience flows, such as tabs and a side menu.

Setting up a development environment

Before you create your first app, your environment must have the required components ready. Those components ensure a smooth process of development, build, and test. The default Ionic project folder is based on Cordova's, so you will need the Ionic CLI to automatically add the correct platform (that is, iOS, Android, or Windows Phone) and build the project. This ensures that all Cordova plugins are included properly. The tool has many options to run your app in the browser or in a simulator with live reload.

Getting ready

You need to install Ionic and its dependencies to get started. Ionic itself is just a collection of CSS styles and AngularJS directives and services. It also has a command-line tool to help manage all of the related technologies, such as Cordova and Bower. The installation process will give you a command line to generate initial code and build the app. Ionic uses npm as the installer, which is included when installing Node.js. Please install the latest version of Node.js from http://nodejs.org/download/. You will need Cordova, ios-sim (the iOS Simulator), and Ionic:

$ npm install -g cordova ionic ios-sim

This single command installs all three components instead of requiring three separate commands. The -g parameter installs the packages globally (not just in the current directory).
For Linux and Mac, you may need to use the sudo command to allow system access:

$ sudo npm install -g cordova ionic ios-sim

There are a few common options for an integrated development environment:

Xcode for iOS
Eclipse or Android Studio for Android
Microsoft Visual Studio Express or Visual Studio for Windows Phone
Sublime Text (http://www.sublimetext.com/) for web development

All of these have a free license. Sublime Text is free for non-commercial use only, so you have to purchase a license if you are a commercial developer. Most frontend developers prefer Sublime Text for coding HTML and JavaScript because it's very lightweight and comes with a well-supported developer community. You could code directly in Xcode, Eclipse, or Visual Studio Express, but those are somewhat heavy-duty for web apps, especially when you have a lot of windows open and just need something simple to code in.

How to do it…

If you decide to use Sublime Text, you will need Package Control (https://packagecontrol.io/installation), which is similar to a plugin manager. Since Ionic uses Sass, it's optional to install the Sass syntax highlighting package:

Select Sublime Text | Preferences | Package Control.
Select Package Control: Install Package. You can also type the command partially (that is, inst) and it will automatically select the right option.
Type Sass and the search results will show one option for TextMate & Sublime Text. Select that item to install.

See also

There are tons of packages that you may want to use, such as Haml, JSHint, JSLint, Tag, ColorPicker, and so on. You can browse this website for more information: https://sublime.wbond.net/browse/popular.

Creating a HelloWorld app via CLI

It's quickest to start your app from existing templates. Ionic gives you three standard templates out of the box via the command line:

Blank: This template has a simple single page with minimal JavaScript code.
Tabs: This template has multiple pages with routes. A route URL goes to one tab or tabs.
Sidemenu: This template has a left and/or right menu and a center content area.

There are two other additional templates, maps and salesforce, but these are very specific to apps using Google Maps or integrating with the Salesforce.com API.

How to do it…

To set up the app with a blank template from Ionic, use this command:

$ ionic start HelloWorld_Blank blank

If you don't have an account on http://ionic.io/, the command line will ask for one. You can press either y or n to continue; an account is not required at this step. If you replace blank with tabs, it will create a tab template:

$ ionic start HelloWorld_Tabs tabs

Similarly, this command will create an app with a side menu:

$ ionic start HelloWorld_Sidemenu sidemenu

The sidemenu template is the most common one, as it provides a very nice routing example with different pages in the templates folder under /www. Additional guidance for the Ionic CLI is available on the GitHub page: https://github.com/driftyco/ionic-cli

How it works…

This article shows you how to quickly start your codebase and visually see the result. The following are the core AngularJS concepts you will work with:

Controller: Manages variables and models in the scope and triggers others, such as services or states.
Directive: Where you manipulate the DOM, since the directive is bound to a DOM object.
Service: An abstraction to manage models or collections of complex logic, beyond simple get/set operations.
Filter: Mainly used to process an expression in the template and return data (for example, rounding a number or adding a currency symbol) using the format {{ expression | filter }}. For example, {{amount | currency}} will return $100 if the amount variable is 100.

The project follows the standard Cordova folder structure. You will spend most of your time in the /www folder, because that's where your application logic and views are placed. By default, from the Ionic template, the AngularJS module is named starter. You will see something like this in app.js, which is the bootstrap file for the entire app:

angular.module('starter', [
  'ionic',
  'ngCordova',
  'starter.controllers',
  'starter.services',
  'starter.directives',
  'starter.filters'
])

This declares starter to be included in ng-app="starter" of index.html. The ionic and ngCordova modules are always present; the other required modules are listed in the array of strings as well and can be defined in separate files. Note that if you double-click the index.html file to open it in the browser, it will show a blank page. This doesn't mean the app isn't working. The reason is that the AngularJS component of Ionic dynamically loads all the .js files, and this behavior requires server access via the http protocol (http://). If you open a file locally, the browser treats it via the file protocol (file://), and therefore AngularJS will not be able to load the additional .js modules to run the app properly. Several methods of running the app will be discussed later.

Creating a HelloWorld app via Ionic Creator

Another way to start your app codebase is to use Ionic Creator. This is a great interface builder that accelerates app development with a drag-and-drop style. You can quickly take existing components and position them to visualize how they should look in the app via a web-based interface. Most common components, like buttons, images, checkboxes, and so on, are available. Ionic Creator allows the user to export everything as a project with all .html, .css, and .js files. You should be able to edit the content in the /www folder to build on top of the interface.

Getting ready

Ionic Creator requires registration for a free account at https://creator.ionic.io/ to get started.

How to do it…

Create a new project called myApp. The center area is your app interface. The left side gives you a list of pages; each page is a single route. You also have access to a number of UI components that you would normally have to code by hand in an HTML file. The right panel shows the properties of any selected component. You're free to do whatever you need to do here by dropping components onto the center screen. If you need to create a new page, click the plus sign in the Pages panel. Each page is represented as a link, which is basically a route in AngularJS UI Router's definition. To navigate to another page (for example, after clicking a button), you can just change the Link property and point to that page. There is an Edit button on top where you can toggle back and forth between Edit Mode and Preview Mode; it's very useful for seeing how your app will look and behave. Once completed, click on the Export button in the top navigation. You have three options:

Use the Ionic CLI tool to get the code
Download the project as a zip file
Review the raw HTML

The best way to learn Ionic Creator is to play with it. You can add a new page and pick out any existing templates.
One of the available page templates is a Login page.

There's more...

To switch to Preview Mode, where you can see the UI in a device simulator, click the switch button on the top right to enable Test. In this mode, you should be able to interact with the components in the web browser as if the app were actually deployed on a device. If you break something, it's very simple to start a new project. It's a great tool for prototyping and getting an initial template or project scaffolding. You should continue to code in your regular IDE for the rest of the app. Ionic Creator doesn't do everything for you, yet. For example, if you want to access specific Cordova plugin features, you have to write that code separately. Also, if you want to tweak the interface outside of what is allowed within Ionic Creator, that will also require specific modifications to the .html and .css files.

Copying examples from Ionic Codepen Demos

Sometimes it's easier to just get snippets of code from the example library. Ionic Codepen Demos (http://codepen.io/ionic/public-list/) is a great website to visit. Codepen.io is a playground (or sandbox) for demonstrating and learning web development. There are other alternatives, such as http://plnkr.com or http://jsfiddle.com; which one to choose is just a developer's personal preference. However, all of Ionic's demos are already available on Codepen, where you can experiment with them and clone them to your own account. http://plnkr.com has an existing AngularJS boilerplate and can be used to practice specific AngularJS areas, because you can copy the link of sample code and post it on http://stackoverflow.com/ if you have questions.

How to do it…

There are several tags of interest to browse through if you want specific UI component examples. You don't need a Codepen account just to view the demos; however, if you need to save a custom pen and share it with others, free registration is required. The Ionic Codepen Demos site has a larger collection of demos compared to the CLI. Some are based on a nightly build of the platform, so they could be unstable to use.

There's more...

You can find the same side menu example on this site:

Navigate to http://codepen.io/ionic/public-list/ from your browser.
Select Tag: menus and then click on Side Menu and Navigation: Nightly.
Change the layout to fit a proper mobile screen by clicking on the first icon in the row of layout icons at the bottom right of the screen.

Viewing the app using your web browser

In order to "run" the web app, you need to turn your /www folder into a web server. Again, there are many methods to do this, and people tend to stick with one or two ways to keep things simple. A few other options, such as Sublime Text's live watch package or static page generators (for example, Jekyll, Middleman App, and so on), are unreliable: they are slow to detect changes and may freeze your IDE, so they won't be covered here.

Getting ready

The recommended method is to use the ionic serve command line. It basically launches an HTTP server so you can open your app in a desktop browser.

How to do it…

First you need to be in the project folder. Let's assume it is the Side Menu HelloWorld:

$ cd HelloWorld_Sidemenu

From there, just issue the simple command:

$ ionic serve

That's it! There is no need to go into the /www folder or figure out which port to use. While the web server is running, the command line offers several options; the most common are r to restart and q to quit when you are done.
There is an additional step to view the app with the correct device resolution:

Install Google Chrome if it's not already on your computer.
Open the link (for example, http://localhost:8100/#/app/playlists) from ionic serve in Google Chrome.
Turn on Developer Tools. For example, in Google Chrome on a Mac, select View | Developer | Developer Tools.
Click on the small mobile icon in the Chrome Developer Tools area and pick a device from the long list to emulate.
After selecting a device, refresh the page to ensure the UI is updated.

Chrome should give you the exact view resolution of the device. Most developers prefer to use this method to code, because you can debug the app using Chrome Developer Tools. It works exactly like any web application: you can create breakpoints or output variables to the console.

How it works...

Note that ionic serve is actually watching everything under the /www folder except the JavaScript modules in the /lib folder. This makes sense, because there is no need for the system to scan through every single file when the probability of changes is very small. People don't code directly in the /lib folder, but only update it when there is a new version of Ionic. However, there is some flexibility to change this. You can specify a watchPatterns property in the ionic.project file, located in your project root, to watch (or not watch) for specific changes:

{
  "name": "myApp",
  "app_id": "",
  "watchPatterns": [
    "www/**/*",
    "!www/css/**/*",
    "your_folder_here/**/*"
  ]
}

While the web server is running, you can go back to the IDE and continue coding. For example, let's open the playlists.html file under /www/templates and change the first line to this:

<ion-view view-title="Updated Playlists">

Go back to the web browser where Ionic opened the new page; the app interface will change the title bar right away, without requiring you to refresh the browser. This is a very nice feature when there is a lot of back and forth between changing code and checking how it works or looks in the app.

Viewing the app using iOS Simulator

So far you have been testing the web-app portion of Ionic. In order to view the app in the simulator, follow the next steps.

How to do it...

Add the specific platform using:

$ ionic platform add ios

Note that you need to do the platform add before building the app. The last step is to emulate the app:

$ ionic emulate ios

Viewing the app using Xcode for iOS

Depending on personal preference, you may find it more convenient to just deploy the app using ionic run ios --device on a regular basis. This command pushes the app to your physical device connected via USB without ever running Xcode. However, you can run the app using Xcode (on a Mac), too.

How to do it...

Go to the /platforms/ios folder.
Look for the folder with the .xcodeproj extension and open it in Xcode.
Click on the iOS Device icon and select your choice of iOS Simulator.
Click on the Run button and you should be able to see the app running in the simulator.

There's more...

You can connect a physical device via a USB port and it will show up in the iOS Device list for you to pick. Then you can deploy the app directly on your device. Note that an iOS Developer Membership is required for this. This method is more complex than just viewing the app via a web browser. However, it's a must when you want to test your code related to device features such as the camera or maps.
If you change code in the /www folder and want to run it again in Xcode, you have to do ionic build ios first, because the running code is in the Staging folder of your Xcode project.

For debugging, the Xcode console can output JavaScript logs as well. However, you can use the more advanced features of Safari's Web Inspector (which is similar to Google Chrome's Developer Tools) to debug your app. Note that only Safari can debug a web app running on a connected physical iOS device, because Chrome does not support this on a Mac. It's simple to enable this capability:

Allow remote debugging for an iOS device by going to Settings | Safari | Advanced and enabling Web Inspector.
Connect the physical iOS device to your Mac via USB and run the app.
Open the Safari browser.
Select Develop, click on your device's name (or iOS Simulator), and click on index.html. Note: If you don't see the Develop menu in Safari, navigate to Preferences | Advanced and check Show Develop menu in menu bar.

Safari will open a new console just for that specific device, as if the app were running within the computer's Safari.

Viewing the app using Genymotion for Android

Although it's possible to install the Google Android emulator, many developers have inconsistent experiences with it on a Mac. There are many commercial and free alternatives that offer more convenience and a wide range of device support. Genymotion provides some unique advantages, such as allowing users to switch the Android model and version, supporting networking from within the app, and allowing SD card simulation. You will learn how to set up an Android developer environment (on a Mac in this case) first. Then you will install and configure Genymotion for mobile app development.

How to do it...

The first step is to set up the Android environment properly for development:

Download and install Android Studio from https://developer.android.com/sdk/index.html.
Run Android Studio. You need to install all required packages, such as the Android SDK. Just click on Next twice at the Setup Wizard screen and select the Finish button to start the package installation.
After installation is complete, you need to install additional packages and other SDK versions. At the Quick Start screen, select Configure, then SDK Manager.
It's good practice to also install a previous version, such as Android 5.0.1 and 5.1.1. You may also want to install all Tools and Extras for later use.
Select the Install packages... button.
Check the Accept License box and click on Install.
The SDK Manager shows the SDK Path at the top. Make a copy of this path, because you need to modify the environment path.
Go to Terminal and type:

$ touch ~/.bash_profile; open ~/.bash_profile

This opens a text editor to edit your bash profile file. Insert the following lines, where /YOUR_PATH_TO/android-sdk is the SDK Path you copied earlier:

export ANDROID_HOME=/YOUR_PATH_TO/android-sdk
export PATH=$ANDROID_HOME/platform-tools:$PATH
export PATH=$ANDROID_HOME/tools:$PATH

Save and close the text editor. Go back to Terminal and type:

$ source ~/.bash_profile
$ echo $ANDROID_HOME

You should see your SDK Path as the output. This verifies that you have correctly configured the Android developer environment. The second step is to install and configure Genymotion:

Download and install Genymotion and Genymotion Shell from http://Genymotion.com.
Run Genymotion.
Select the Add button to start adding a new Android device.
Select a device you want to simulate.
In this case, let's select Samsung Galaxy S5. You will see the device being added to "Your virtual devices". Click on that device, then click on Start. The simulator will take a few seconds to start and will show another window. This is just a blank simulator, without your app running inside it yet. Then:

Run Genymotion Shell.
From Genymotion Shell, you need to get the device list and keep the IP address of the attached device, which is the Samsung Galaxy S5. Type devices list.
Type adb connect 192.168.56.101 (or whatever IP address you saw earlier from the devices list command).
Type adb devices to confirm that it is connected.
Type ionic platform add android to add Android as a platform for your app.
Finally, type ionic run android.

You should see the Genymotion window showing your app. Although there are many steps to get this working, it's a lot less likely that you will have to go through the same process again. Once your environment is set up, all you need to do is leave Genymotion running while writing code. If you need to test the app on different Android devices, it's simple to just add another virtual device in Genymotion and connect to it.

Summary

In this article, we learned how to create your first Ionic app. We also covered various ways to view the app on different platforms: web browser, iOS Simulator, Xcode, and Genymotion. You can also refer to the following books on similar topics:

Learning Ionic: https://www.packtpub.com/application-development/learning-ionic
Getting Started with Ionic: https://www.packtpub.com/application-development/getting-started-ionic
Ionic Framework By Example: https://www.packtpub.com/application-development/ionic-framework-example


Machine Learning with R

Packt · 16 Feb 2016 · 39 min read
If science fiction stories are to be believed, the invention of artificial intelligence inevitably leads to apocalyptic wars between machines and their makers. In the early stages, computers are taught to play simple games of tic-tac-toe and chess. Later, machines are given control of traffic lights and communications, followed by military drones and missiles. The machines' evolution takes an ominous turn once the computers become sentient and learn how to teach themselves. Having no more need for human programmers, humankind is then "deleted". Thankfully, at the time of this writing, machines still require user input.

Though your impressions of machine learning may be colored by these mass media depictions, today's algorithms are too application-specific to pose any danger of becoming self-aware. The goal of today's machine learning is not to create an artificial brain, but rather to assist us in making sense of the world's massive data stores. Putting popular misconceptions aside, by the end of this article, you will gain a more nuanced understanding of machine learning. You will also be introduced to the fundamental concepts that define and differentiate the most commonly used machine learning approaches. You will learn:

The origins and practical applications of machine learning
How computers turn data into knowledge and action
How to match a machine learning algorithm to your data

The field of machine learning provides a set of algorithms that transform data into actionable knowledge. Keep reading to see how easy it is to use R to start applying machine learning to real-world problems.

The origins of machine learning

Since birth, we are inundated with data. Our body's sensors—the eyes, ears, nose, tongue, and nerves—are continually assailed with raw data that our brain translates into sights, sounds, smells, tastes, and textures. Using language, we are able to share these experiences with others. From the advent of written language, human observations have been recorded. Hunters monitored the movement of animal herds, early astronomers recorded the alignment of planets and stars, and cities recorded tax payments, births, and deaths. Today, such observations, and many more, are increasingly automated and recorded systematically in ever-growing computerized databases. The invention of electronic sensors has additionally contributed to an explosion in the volume and richness of recorded data. Specialized sensors see, hear, smell, taste, and feel. These sensors process the data far differently than a human being would. Unlike a human's limited and subjective attention, an electronic sensor never takes a break and never lets its judgment skew its perception.

Although sensors are not clouded by subjectivity, they do not necessarily report a single, definitive depiction of reality. Some have an inherent measurement error due to hardware limitations. Others are limited by their scope: a black and white photograph provides a different depiction of its subject than one shot in color, and a microscope provides a far different depiction of reality than a telescope. Between databases and sensors, many aspects of our lives are recorded. Governments, businesses, and individuals are recording and reporting information, from the monumental to the mundane.
Weather sensors record temperature and pressure data, surveillance cameras watch sidewalks and subway tunnels, and all manner of electronic behaviors are monitored: transactions, communications, friendships, and many others. This deluge of data has led some to state that we have entered an era of Big Data, but this may be a bit of a misnomer. Human beings have always been surrounded by large amounts of data. What makes the current era unique is that we have vast amounts of recorded data, much of which can be directly accessed by computers. Larger and more interesting datasets are increasingly accessible at the tips of our fingers, only a web search away. This wealth of information has the potential to inform action, given a systematic way of making sense of it all.

The field of study interested in the development of computer algorithms to transform data into intelligent action is known as machine learning. This field originated in an environment where the available data, statistical methods, and computing power rapidly and simultaneously evolved. Growth in data necessitated additional computing power, which in turn spurred the development of statistical methods to analyze large datasets. This created a cycle of advancement, allowing even larger and more interesting data to be collected.

A closely related sibling of machine learning, data mining, is concerned with the generation of novel insights from large databases. As the name implies, data mining involves a systematic hunt for nuggets of actionable intelligence. Although there is some disagreement over how widely machine learning and data mining overlap, a potential point of distinction is that machine learning focuses on teaching computers how to use data to solve a problem, while data mining focuses on teaching computers to identify patterns that humans then use to solve a problem. Virtually all data mining involves the use of machine learning, but not all machine learning involves data mining. For example, you might apply machine learning to data mine automobile traffic data for patterns related to accident rates; on the other hand, if the computer is learning how to drive the car itself, this is purely machine learning without data mining. The phrase "data mining" is also sometimes used as a pejorative to describe the deceptive practice of cherry-picking data to support a theory.

Uses and abuses of machine learning

Most people have heard of the chess-playing computer Deep Blue—the first to win a game against a world champion—or Watson, the computer that defeated two human opponents on the television trivia game show Jeopardy. Based on these stunning accomplishments, some have speculated that computer intelligence will replace humans in many information technology occupations, just as machines replaced humans in the fields and robots replaced humans on the assembly line. The truth is that even as machines reach such impressive milestones, they are still relatively limited in their ability to thoroughly understand a problem. They are pure intellectual horsepower without direction. A computer may be more capable than a human of finding subtle patterns in large databases, but it still needs a human to motivate the analysis and turn the result into meaningful action. Machines are not good at asking questions, or even knowing what questions to ask. They are much better at answering them, provided the question is stated in a way the computer can comprehend.
Present-day machine learning algorithms partner with people much like a bloodhound partners with its trainer: the dog's sense of smell may be many times stronger than its master's, but without being carefully directed, the hound may end up chasing its tail.

To better understand the real-world applications of machine learning, we'll now consider some cases where it has been used successfully, some places where it still has room for improvement, and some situations where it may do more harm than good.

Machine learning successes

Machine learning is most successful when it augments, rather than replaces, the specialized knowledge of a subject-matter expert. It works with medical doctors at the forefront of the fight to eradicate cancer, assists engineers and programmers in our efforts to create smarter homes and automobiles, and helps social scientists build knowledge of how societies function. Toward these ends, it is employed in countless businesses, scientific laboratories, hospitals, and governmental organizations. Any organization that generates or aggregates data likely employs at least one machine learning algorithm to help make sense of it. Though it is impossible to list every use case of machine learning, a survey of recent success stories includes several prominent applications:

Identification of unwanted spam messages in e-mail
Segmentation of customer behavior for targeted advertising
Forecasts of weather behavior and long-term climate changes
Reduction of fraudulent credit card transactions
Actuarial estimates of the financial damage of storms and natural disasters
Prediction of popular election outcomes
Development of algorithms for auto-piloting drones and self-driving cars
Optimization of energy use in homes and office buildings
Projection of areas where criminal activity is most likely
Discovery of genetic sequences linked to diseases

The limits of machine learning

Although machine learning is used widely and has tremendous potential, it is important to understand its limits. Machine learning, at this time, is not in any way a substitute for a human brain. It has very little flexibility to extrapolate outside of the strict parameters it learned, and it knows no common sense. With this in mind, one should be extremely careful to recognize exactly what an algorithm has learned before setting it loose in real-world settings.

Without a lifetime of past experiences to build upon, computers are also limited in their ability to make simple common-sense inferences about logical next steps. Take, for instance, the banner advertisements seen on many websites. These may be served based on patterns learned by data mining the browsing history of millions of users. According to this data, someone who views websites selling shoes should see advertisements for shoes, and those viewing websites for mattresses should see advertisements for mattresses. The problem is that this becomes a never-ending cycle in which additional shoe or mattress advertisements are served, rather than advertisements for shoelaces and shoe polish, or bed sheets and blankets.

Many are familiar with the deficiencies of machine learning's ability to understand or translate language, or to recognize speech and handwriting. Perhaps the earliest example of this type of failure is in a 1994 episode of the television show The Simpsons, which featured a parody of the Apple Newton tablet. For its time, the Newton was known for its state-of-the-art handwriting recognition.
Unfortunately for Apple, it would occasionally fail to great effect. The television episode illustrated this through a sequence in which a bully's note to Beat up Martin was misinterpreted by the Newton as Eat up Martha (screenshots from "Lisa on Ice", The Simpsons, 20th Century Fox, 1994).

Machines' ability to understand language has improved enough since 1994 that Google, Apple, and Microsoft are all confident enough to offer virtual concierge services operated via voice recognition. Still, even these services routinely struggle to answer relatively simple questions. Furthermore, online translation services sometimes misinterpret sentences that a toddler would readily understand, and the predictive text feature on many devices has led to a number of humorous autocorrect fail sites that illustrate a computer's ability to understand basic language while completely misunderstanding context. Some of these mistakes are to be expected, for sure. Language is complicated, with multiple layers of text and subtext, and even human beings sometimes understand context incorrectly. That said, these types of failures in machines illustrate the important fact that machine learning is only as good as the data it learns from. If the context is not directly implicit in the input data, then, just like a human, the computer will have to make its best guess.

Machine learning ethics

At its core, machine learning is simply a tool that assists us in making sense of the world's complex data. Like any tool, it can be used for good or evil. Machine learning may lead to problems when it is applied so broadly or callously that humans are treated as lab rats, automata, or mindless consumers. A process that may seem harmless can lead to unintended consequences when automated by an emotionless computer. For this reason, those using machine learning or data mining would be remiss not to consider the ethical implications of the art.

Due to the relative youth of machine learning as a discipline and the speed at which it is progressing, the associated legal issues and social norms are often quite uncertain and constantly in flux. Caution should be exercised while obtaining or analyzing data in order to avoid breaking laws, violating terms of service or data use agreements, and abusing the trust or violating the privacy of customers or the public. The informal corporate motto of Google, an organization that collects perhaps more data on individuals than any other, is "don't be evil". While this seems clear enough, it may not be sufficient. A better approach may be to follow the Hippocratic Oath, a medical principle that states "above all, do no harm."

Retailers routinely use machine learning for advertising, targeted promotions, inventory management, or the layout of items in the store. Many have even equipped checkout lanes with devices that print coupons for promotions based on the customer's buying history. In exchange for a bit of personal data, the customer receives discounts on the specific products he or she wants to buy. At first, this appears relatively harmless. But consider what happens when this practice is taken a little bit further. One possibly apocryphal tale concerns a large retailer in the U.S. that employed machine learning to identify expectant mothers for coupon mailings.
The retailer hoped that if these mothers-to-be received substantial discounts, they would become loyal customers who would later purchase profitable items like diapers, baby formula, and toys. Equipped with machine learning methods, the retailer identified items in the customer purchase history that could be used to predict, with a high degree of certainty, not only whether a woman was pregnant, but also the approximate timing of when the baby was due. After the retailer used this data for a promotional mailing, an angry man contacted the chain and demanded to know why his teenage daughter had received coupons for maternity items. He was furious that the retailer seemed to be encouraging teenage pregnancy! As the story goes, when the retail chain's manager called to offer an apology, it was the father who ultimately apologized: after confronting his daughter, he discovered that she was indeed pregnant!

Whether completely true or not, the lesson of the preceding tale is that common sense should be applied before blindly acting on the results of a machine learning analysis. This is particularly true in cases where sensitive information, such as health data, is concerned. With a bit more care, the retailer could have foreseen this scenario and used greater discretion in choosing how to reveal the pattern its machine learning analysis had discovered.

Certain jurisdictions may prevent you from using racial, ethnic, religious, or other protected class data for business reasons. Keep in mind that excluding this data from your analysis may not be enough, because machine learning algorithms might inadvertently learn this information independently. For instance, if a certain segment of people generally live in a certain region, buy a certain product, or otherwise behave in a way that uniquely identifies them as a group, some machine learning algorithms can infer the protected information from these other factors. In such cases, you may need to fully "de-identify" these people by excluding any potentially identifying data in addition to the protected information.

Apart from the legal consequences, using data inappropriately may hurt the bottom line. Customers may feel uncomfortable or become spooked if aspects of their lives they consider private are made public. In recent years, several high-profile web applications have experienced a mass exodus of users who felt exploited when the applications' terms of service agreements changed and their data was used for purposes beyond what the users had originally agreed upon. The fact that privacy expectations differ by context, age cohort, and locale adds complexity to deciding on the appropriate use of personal data. It would be wise to consider the cultural implications of your work before you begin your project. The fact that you can use data for a particular end does not always mean that you should.

How machines learn

A formal definition of machine learning proposed by computer scientist Tom M. Mitchell states that a machine learns whenever it is able to utilize its experience such that its performance improves on similar experiences in the future. Although this definition is intuitive, it completely ignores the process of exactly how experience can be translated into future action—and of course, learning is always easier said than done! While human brains are naturally capable of learning from birth, the conditions necessary for computers to learn must be made explicit.
For this reason, although it is not strictly necessary to understand the theoretical basis of learning, this foundation helps you understand, distinguish, and implement machine learning algorithms. As you compare machine learning to human learning, you may find yourself examining your own mind in a different light. Regardless of whether the learner is a human or a machine, the basic learning process is similar. It can be divided into four interrelated components:

Data storage utilizes observation, memory, and recall to provide a factual basis for further reasoning.
Abstraction involves the translation of stored data into broader representations and concepts.
Generalization uses abstracted data to create knowledge and inferences that drive action in new contexts.
Evaluation provides a feedback mechanism to measure the utility of learned knowledge and inform potential improvements.

Keep in mind that although the learning process has been conceptualized as four distinct components, they are merely organized this way for illustrative purposes. In reality, the entire learning process is inextricably linked. In human beings, the process occurs subconsciously. We recollect, deduce, induct, and intuit within the confines of our mind's eye, and because this process is hidden, any differences from person to person are attributed to a vague notion of subjectivity. In contrast, with computers these processes are explicit, and because the entire process is transparent, the learned knowledge can be examined, transferred, and utilized for future action.

Data storage

All learning must begin with data. Humans and computers alike utilize data storage as a foundation for more advanced reasoning. In a human being, this consists of a brain that uses electrochemical signals in a network of biological cells to store and process observations for short- and long-term future recall. Computers have similar capabilities of short- and long-term recall using hard disk drives, flash memory, and random access memory (RAM) in combination with a central processing unit (CPU). It may seem obvious to say so, but the ability to store and retrieve data alone is not sufficient for learning. Without a higher level of understanding, knowledge is limited exclusively to recall, meaning exclusively what has been seen before and nothing else. The data is merely ones and zeros on a disk: stored memories with no broader meaning.

To better understand the nuances of this idea, it may help to think about the last time you studied for a difficult test, perhaps a university final exam or a career certification. Did you wish for an eidetic (photographic) memory? If so, you may be disappointed to learn that perfect recall is unlikely to be of much assistance. Even if you could memorize material perfectly, your rote learning would be of no use unless you knew in advance the exact questions and answers that would appear on the exam. Otherwise, you would be stuck attempting to memorize answers to every question that could conceivably be asked. Obviously, this is an unsustainable strategy. Instead, a better approach is to spend time selectively memorizing a small set of representative ideas while developing strategies for how the ideas relate and how to use the stored information. In this way, large ideas can be understood without needing to memorize them by rote.

Abstraction

This work of assigning meaning to stored data occurs during the abstraction process, in which raw data comes to have a more abstract meaning.
This type of connection, say between an object and its representation, is exemplified by the famous René Magritte painting The Treachery of Images (source: http://collections.lacma.org/node/239578). The painting depicts a tobacco pipe with the caption Ceci n'est pas une pipe ("this is not a pipe"). The point Magritte was illustrating is that a representation of a pipe is not truly a pipe. Yet, in spite of the fact that the pipe is not real, anybody viewing the painting easily recognizes it as a pipe. This suggests that the observer's mind is able to connect the picture of a pipe to the idea of a pipe, to a memory of a physical pipe that could be held in the hand. Abstracted connections like these are the basis of knowledge representation, the formation of logical structures that assist in turning raw sensory information into meaningful insight.

During a machine's process of knowledge representation, the computer summarizes stored raw data using a model, an explicit description of the patterns within the data. Just like Magritte's pipe, the model representation takes on a life beyond the raw data. It represents an idea greater than the sum of its parts. There are many different types of models, and you may already be familiar with some. Examples include:

Mathematical equations
Relational diagrams such as trees and graphs
Logical if/else rules
Groupings of data known as clusters

The choice of model is typically not left up to the machine. Instead, the learning task and the data on hand inform model selection. Later in this article, we will discuss methods for choosing the type of model in more detail. The process of fitting a model to a dataset is known as training. When the model has been trained, the data is transformed into an abstract form that summarizes the original information. You might wonder why this step is called training rather than learning. First, note that the process of learning does not end with data abstraction; the learner must still generalize and evaluate its training. Second, the word training better connotes the fact that the human teacher trains the machine student to understand the data in a specific way.

It is important to note that a learned model does not itself provide new data, yet it does result in new knowledge. How can this be? The answer is that imposing an assumed structure on the underlying data gives insight into the unseen by supposing a concept about how data elements are related. Take, for instance, the discovery of gravity. By fitting equations to observational data, Sir Isaac Newton inferred the concept of gravity. But the force we now know as gravity was always present. It simply wasn't recognized until Newton expressed it as an abstract concept that relates some data to others—specifically, by becoming the g term in a model that explains observations of falling objects.

Most models may not result in the development of theories that shake up scientific thought for centuries. Still, your model might result in the discovery of previously unseen relationships among data. A model trained on genomic data might find several genes that, when combined, are responsible for the onset of diabetes; banks might discover a seemingly innocuous type of transaction that systematically appears prior to fraudulent activity; and psychologists might identify a combination of personality characteristics indicating a new disorder. These underlying patterns were always present, but by simply presenting information in a different format, a new idea is conceptualized.
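Since the article's examples are conceptual, here is a small illustrative sketch in R (not taken from the original text) of training as abstraction: fitting a linear model summarizes the 32 observations in the built-in mtcars data as just two coefficients, an abstract representation of the relationship between weight and fuel economy.

# Train (fit) a simple model: fuel economy as a function of car weight.
# The 32 raw observations are abstracted into an intercept and a slope.
model <- lm(mpg ~ wt, data = mtcars)
coef(model)
# Approximately: (Intercept) 37.29, wt -5.34, that is, each additional
# 1000 lbs of weight is associated with a drop of about 5.3 mpg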
Generalization

The learning process is not complete until the learner is able to use its abstracted knowledge for future action. However, among the countless underlying patterns that might be identified during the abstraction process and the myriad ways to model these patterns, some will be more useful than others. Unless the production of abstractions is limited, the learner will be unable to proceed. It would be stuck where it started—with a large pool of information, but no actionable insight.

The term generalization describes the process of turning abstracted knowledge into a form that can be utilized for future action on tasks that are similar, but not identical, to those the learner has seen before. Generalization is a somewhat vague process that is a bit difficult to describe. Traditionally, it has been imagined as a search through the entire set of models (that is, theories or inferences) that could be abstracted during training. In other words, if you can imagine a hypothetical set containing every possible theory that could be established from the data, generalization involves the reduction of this set to a manageable number of important findings. In generalization, the learner is tasked with limiting the patterns it discovers to only those that will be most relevant to its future tasks. Generally, it is not feasible to reduce the number of patterns by examining them one by one and ranking them by future utility. Instead, machine learning algorithms employ shortcuts that reduce the search space more quickly. Toward this end, an algorithm will employ heuristics, which are educated guesses about where to find the most useful inferences. Because heuristics utilize approximations and other rules of thumb, they are not guaranteed to find the single best model. However, without taking these shortcuts, finding useful information in a large dataset would be infeasible.

Heuristics are routinely used by human beings to quickly generalize experience to new scenarios. If you have ever used your gut instinct to make a snap decision prior to fully evaluating your circumstances, you were intuitively using mental heuristics. The incredible human ability to make quick decisions often relies not on computer-like logic, but rather on heuristics guided by emotions. Sometimes, this can result in illogical conclusions. For example, more people express fear of airline travel than of automobile travel, despite automobiles being statistically more dangerous. This can be explained by the availability heuristic, which is the tendency of people to estimate the likelihood of an event by how easily examples of it can be recalled. Accidents involving air travel are highly publicized; being traumatic events, they are likely to be recalled very easily, whereas car accidents barely warrant a mention in the newspaper.

The folly of misapplied heuristics is not limited to human beings. The heuristics employed by machine learning algorithms also sometimes result in erroneous conclusions. An algorithm is said to have a bias if its conclusions are systematically erroneous, or wrong in a predictable manner. For example, suppose that a machine learning algorithm learned to identify faces by finding two dark circles representing eyes, positioned above a straight line indicating a mouth. The algorithm might then have trouble with, or be biased against, faces that do not conform to its model. Faces with glasses, turned at an angle, looking sideways, or with various skin tones might not be detected by the algorithm.
Similarly, it could be biased toward faces with certain skin tones, face shapes, or other characteristics that do not conform to its understanding of the world.
In modern usage, the word bias has come to carry quite negative connotations. Various forms of media frequently claim to be free from bias and to report the facts objectively, untainted by emotion. Still, consider for a moment the possibility that a little bias might be useful. Without a bit of arbitrariness, might it be a bit difficult to decide among several competing choices, each with distinct strengths and weaknesses? Indeed, some recent studies in the field of psychology have suggested that individuals born with damage to portions of the brain responsible for emotion are ineffectual in decision making, and might spend hours debating simple decisions such as what color shirt to wear or where to eat lunch. Paradoxically, bias is what blinds us from some information while also allowing us to utilize other information for action. It is how machine learning algorithms choose among the countless ways to understand a set of data.
Evaluation
Bias is a necessary evil associated with the abstraction and generalization processes inherent in any learning task. In order to drive action in the face of limitless possibility, each learner must be biased in a particular way. Consequently, each learner has its weaknesses and there is no single learning algorithm to rule them all. Therefore, the final step in the generalization process is to evaluate or measure the learner's success in spite of its biases, and use this information to inform additional training if needed.
Once you've had success with one machine learning technique, you might be tempted to apply it to everything. It is important to resist this temptation because no machine learning approach is the best for every circumstance. This fact is described by the No Free Lunch theorem, introduced by David Wolpert in 1996. For more information, visit: http://www.no-free-lunch.org.
Generally, evaluation occurs after a model has been trained on an initial training dataset. Then, the model is evaluated on a new test dataset in order to judge how well its characterization of the training data generalizes to new, unseen data. It's worth noting that it is exceedingly rare for a model to perfectly generalize to every unforeseen case. In part, models fail to perfectly generalize due to the problem of noise, a term that describes unexplained or unexplainable variations in data. Noisy data is caused by seemingly random events, such as:
Measurement error due to imprecise sensors that sometimes add or subtract a bit from the readings
Issues with human subjects, such as survey respondents reporting random answers to survey questions in order to finish more quickly
Data quality problems, including missing, null, truncated, incorrectly coded, or corrupted values
Phenomena that are so complex or so little understood that they impact the data in ways that appear to be unsystematic
Trying to model noise is the basis of a problem called overfitting. Because most noisy data is unexplainable by definition, attempting to explain the noise will result in erroneous conclusions that do not generalize well to new cases. Efforts to explain the noise will also typically result in more complex models that will miss the true pattern that the learner tries to identify.
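To see this effect concretely, here is a minimal base-R sketch on simulated data (the dataset and degrees of flexibility are illustrative assumptions). A very flexible polynomial chases the noise and scores well on the cases it was trained on, while a simpler model holds up better on held-out cases:
set.seed(1)
x <- runif(50)
y <- sin(2 * pi * x) + rnorm(50, sd = 0.3)   # a true pattern plus random noise
dat <- data.frame(x, y)
train <- 1:35; test <- 36:50
rmse <- function(m, idx) sqrt(mean((dat$y[idx] - predict(m, dat[idx, ]))^2))
simple  <- lm(y ~ poly(x, 3),  data = dat[train, ])   # modest flexibility
complex <- lm(y ~ poly(x, 20), data = dat[train, ])   # flexible enough to chase noise
c(train = rmse(simple, train),  test = rmse(simple, test))
c(train = rmse(complex, train), test = rmse(complex, test))   # low train error, high test error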
A model that seems to perform well during training, but does poorly during evaluation, is said to be overfitted to the training dataset, as it does not generalize well to the test dataset.
Solutions to the problem of overfitting are specific to particular machine learning approaches. For now, the important point is to be aware of the issue. How well the models are able to handle noisy data is an important source of distinction among them.
Machine learning in practice
So far, we've focused on how machine learning works in theory. To apply the learning process to real-world tasks, we'll use a five-step process. Regardless of the task at hand, any machine learning algorithm can be deployed by following these steps:
1. Data collection: The data collection step involves gathering the learning material an algorithm will use to generate actionable knowledge. In most cases, the data will need to be combined into a single source, such as a text file, spreadsheet, or database.
2. Data exploration and preparation: The quality of any machine learning project is based largely on the quality of its input data. Thus, it is important to learn more about the data and its nuances during a practice called data exploration. Additional work is required to prepare the data for the learning process. This involves fixing or cleaning so-called "messy" data, eliminating unnecessary data, and recoding the data to conform to the learner's expected inputs.
3. Model training: By the time the data has been prepared for analysis, you are likely to have a sense of what you are capable of learning from the data. The specific machine learning task chosen will inform the selection of an appropriate algorithm, and the algorithm will represent the data in the form of a model.
4. Model evaluation: Because each machine learning model results in a biased solution to the learning problem, it is important to evaluate how well the algorithm learns from its experience. Depending on the type of model used, you might be able to evaluate the accuracy of the model using a test dataset, or you may need to develop measures of performance specific to the intended application.
5. Model improvement: If better performance is needed, it becomes necessary to utilize more advanced strategies to augment the performance of the model. Sometimes, it may be necessary to switch to a different type of model altogether. You may need to supplement your data with additional data or perform additional preparatory work, as in step two of this process.
After these steps are completed, if the model appears to be performing well, it can be deployed for its intended task. Depending on the application, you might utilize your model to score new data for predictions (possibly in real time), to make projections of financial data, to generate useful insight for marketing or research, or to automate tasks such as mail delivery or flying aircraft. The successes and failures of the deployed model might even provide additional data to train your next generation learner.
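As a compact illustration, the following sketch walks through these five steps in R for a simple numeric prediction task. The built-in iris data and the linear model are illustrative assumptions, not a prescription:
# 1. Data collection: load a built-in dataset (standing in for gathered data)
data(iris)
# 2. Data exploration and preparation: inspect the data, then split it
summary(iris)
set.seed(7)
idx <- sample(nrow(iris), 100)
train <- iris[idx, ]
test <- iris[-idx, ]
# 3. Model training: fit a simple linear model to the training data
model <- lm(Petal.Length ~ Sepal.Length + Sepal.Width, data = train)
# 4. Model evaluation: measure error on data the model has not seen
sqrt(mean((test$Petal.Length - predict(model, test))^2))
# 5. Model improvement: add an informative feature and compare the error
model2 <- lm(Petal.Length ~ Sepal.Length + Sepal.Width + Species, data = train)
sqrt(mean((test$Petal.Length - predict(model2, test))^2))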
Types of input data
The practice of machine learning involves matching the characteristics of input data to the biases of the available approaches. Thus, before applying machine learning to real-world problems, it is important to understand the terminology that distinguishes among input datasets.
The phrase unit of observation is used to describe the smallest entity with measured properties of interest for a study. Commonly, the unit of observation is in the form of persons, objects or things, transactions, time points, geographic regions, or measurements. Sometimes, units of observation are combined to form units such as person-years, which denote cases where the same person is tracked over multiple years; each person-year comprises a person's data for one year.
The unit of observation is related, but not identical, to the unit of analysis, which is the smallest unit from which the inference is made. Although the observed and analyzed units are often the same, this is not always the case. For example, data observed from people might be used to analyze trends across countries.
Datasets that store the units of observation and their properties can be imagined as collections of data consisting of:
Examples: Instances of the unit of observation for which properties have been recorded
Features: Recorded properties or attributes of examples that may be useful for learning
It is easiest to understand features and examples through real-world cases. To build a learning algorithm to identify spam e-mail, the unit of observation could be e-mail messages, the examples would be specific messages, and the features might consist of the words used in the messages. For a cancer detection algorithm, the unit of observation could be patients, the examples might include a random sample of cancer patients, and the features may be the genomic markers from biopsied cells as well as the characteristics of the patient, such as weight, height, or blood pressure.
While examples and features do not have to be collected in any specific form, they are commonly gathered in matrix format, which means that each example has exactly the same features. The following spreadsheet shows a dataset in matrix format. In matrix data, each row in the spreadsheet is an example and each column is a feature. Here, the rows indicate examples of automobiles, while the columns record each automobile's features, such as price, mileage, color, and transmission type. Matrix format data is by far the most common form used in machine learning.
Features also come in various forms. If a feature represents a characteristic measured in numbers, it is unsurprisingly called numeric. Alternatively, if a feature is an attribute that consists of a set of categories, the feature is called categorical or nominal. A special case of categorical variables is called ordinal, which designates a nominal variable with categories falling in an ordered list. Some examples of ordinal variables include clothing sizes such as small, medium, and large; or a measurement of customer satisfaction on a scale from "not at all happy" to "very happy." It is important to consider what the features represent, as the type and number of features in your dataset will assist in determining an appropriate machine learning algorithm for your task.
Types of machine learning algorithms
Machine learning algorithms are divided into categories according to their purpose. Understanding the categories of learning algorithms is an essential first step towards using data to drive the desired action.
A predictive model is used for tasks that involve, as the name implies, the prediction of one value using other values in the dataset. The learning algorithm attempts to discover and model the relationship between the target feature (the feature being predicted) and the other features.
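As a minimal illustration of a predictive model, consider the following R sketch using the built-in mtcars data. The target feature is the transmission type (am); the choice of the other features here is an arbitrary assumption made for demonstration purposes:
data(mtcars)
# Target feature: transmission type (am); other features: weight (wt) and horsepower (hp)
model <- glm(am ~ wt + hp, data = mtcars, family = binomial)
# Compare the model's predictions to the actual target values
predicted <- as.numeric(predict(model, type = "response") > 0.5)
table(actual = mtcars$am, predicted = predicted)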
Despite the common use of the word "prediction" to imply forecasting, predictive models need not necessarily foresee events in the future. For instance, a predictive model could be used to predict past events, such as the date of a baby's conception using the mother's present-day hormone levels. Predictive models can also be used in real time to control traffic lights during rush hours.
Because predictive models are given clear instruction on what they need to learn and how they are intended to learn it, the process of training a predictive model is known as supervised learning. The supervision does not refer to human involvement, but rather to the fact that the target values provide a way for the learner to know how well it has learned the desired task. Stated more formally, given a set of data, a supervised learning algorithm attempts to optimize a function (the model) to find the combination of feature values that result in the target output.
The often-used supervised machine learning task of predicting which category an example belongs to is known as classification. It is easy to think of potential uses for a classifier. For instance, you could predict whether:
An e-mail message is spam
A person has cancer
A football team will win or lose
An applicant will default on a loan
In classification, the target feature to be predicted is a categorical feature known as the class, and is divided into categories called levels. A class can have two or more levels, and the levels may or may not be ordinal. Because classification is so widely used in machine learning, there are many types of classification algorithms, with strengths and weaknesses suited for different types of input data.
Supervised learners can also be used to predict numeric data such as income, laboratory values, test scores, or counts of items. To predict such numeric values, a common form of numeric prediction fits linear regression models to the input data. Although regression models are not the only type of numeric prediction models, they are, by far, the most widely used. Regression methods are widely used for forecasting, as they quantify in exact terms the association between the inputs and the target, including both the magnitude and the uncertainty of the relationship.
Since it is easy to convert numbers into categories (for example, ages 13 to 19 are teenagers) and categories into numbers (for example, assign 1 to all males, 0 to all females), the boundary between classification models and numeric prediction models is not necessarily firm.
A descriptive model is used for tasks that would benefit from the insight gained from summarizing data in new and interesting ways. As opposed to predictive models that predict a target of interest, in a descriptive model, no single feature is more important than any other. In fact, because there is no target to learn, the process of training a descriptive model is called unsupervised learning. Although it can be more difficult to think of applications for descriptive models—after all, what good is a learner that isn't learning anything in particular—they are used quite regularly for data mining.
For example, the descriptive modeling task called pattern discovery is used to identify useful associations within data. Pattern discovery is often used for market basket analysis on retailers' transactional purchase data. Here, the goal is to identify items that are frequently purchased together, such that the learned information can be used to refine marketing tactics. For instance, if a retailer learns that swimming trunks are commonly purchased at the same time as sunglasses, the retailer might reposition the items more closely in the store or run a promotion to "up-sell" customers on associated items.
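A sketch of what such an analysis might look like in R is shown below. It assumes the arules package has been installed separately (it is not part of base R) and uses the Groceries transaction data bundled with that package purely as an example:
library(arules)   # assumed installed beforehand, for example with install.packages("arules")
data(Groceries)   # sample retail transactions that ship with the package
rules <- apriori(Groceries, parameter = list(supp = 0.01, conf = 0.5))
inspect(head(sort(rules, by = "lift"), 3))   # show the strongest associations found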
Originally used only in retail contexts, pattern discovery is now starting to be used in quite innovative ways. For instance, it can be used to detect patterns of fraudulent behavior, screen for genetic defects, or identify hot spots for criminal activity.
The descriptive modeling task of dividing a dataset into homogeneous groups is called clustering. This is sometimes used for segmentation analysis, which identifies groups of individuals with similar behavior or demographic information so that advertising campaigns can be tailored to particular audiences. Although the machine is capable of identifying the clusters, human intervention is required to interpret them. For example, given five different clusters of shoppers at a grocery store, the marketing team will need to understand the differences among the groups in order to create a promotion that best suits each group.
Lastly, a class of machine learning algorithms known as meta-learners is not tied to a specific learning task, but is rather focused on learning how to learn more effectively. A meta-learning algorithm uses the results of past learning to inform additional learning. This can be beneficial for very challenging problems or when a predictive algorithm's performance needs to be as accurate as possible.
Machine learning with R
Many of the algorithms needed for machine learning with R are not included as part of the base installation. Instead, the algorithms needed for machine learning are available via a large community of experts who have shared their work freely. These must be installed on top of base R manually. Thanks to R's status as free open source software, there is no additional charge for this functionality.
A collection of R functions that can be shared among users is called a package. Free packages exist for each of the machine learning algorithms covered in this book. In fact, this book only covers a small portion of all of R's machine learning packages.
If you are interested in the breadth of R packages, you can view a list at the Comprehensive R Archive Network (CRAN), a collection of web and FTP sites located around the world that provide the most up-to-date versions of R software and packages. If you obtained the R software via download, it was most likely from CRAN at http://cran.r-project.org/index.html. If you do not already have R, the CRAN website also provides installation instructions and information on where to find help if you have trouble.
The Packages link on the left side of the page will take you to a page where you can browse packages alphabetically or by publication date. At the time of writing this, a total of 6,779 packages were available—a jump of over 60 percent in the time since the first edition was written, and this trend shows no sign of slowing!
The Task Views link on the left side of the CRAN page provides a curated list of packages by subject area. The task view for machine learning, which lists the packages covered in this book (and many more), is available at http://cran.r-project.org/web/views/MachineLearning.html.
Installing R packages
Despite the vast set of available R add-ons, the package format makes installation and use a virtually effortless process.
To demonstrate the use of packages, we will install and load the RWeka package, which was developed by Kurt Hornik, Christian Buchta, and Achim Zeileis (see Open-Source Machine Learning: R Meets Weka in Computational Statistics 24: 225-232 for more information). The RWeka package provides a collection of functions that give R access to the machine learning algorithms in the Java-based Weka software package by Ian H. Witten and Eibe Frank. More information on Weka is available at http://www.cs.waikato.ac.nz/~ml/weka/.
To use the RWeka package, you will need to have Java installed (many computers come with Java preinstalled). Java is a set of programming tools, available for free, that allow for the use of cross-platform applications such as Weka. For more information, and to download Java for your system, you can visit http://java.com.
The most direct way to install a package is via the install.packages() function. To install the RWeka package, at the R command prompt, simply type:
> install.packages("RWeka")
R will then connect to CRAN and download the package in the correct format for your OS. Some packages, such as RWeka, require additional packages to be installed before they can be used (these are called dependencies). By default, the installer will automatically download and install any dependencies.
The first time you install a package, R may ask you to choose a CRAN mirror. If this happens, choose the mirror residing at a location close to you. This will generally provide the fastest download speed.
The default installation options are appropriate for most systems. However, in some cases, you may want to install a package to another location. For example, if you do not have root or administrator privileges on your system, you may need to specify an alternative installation path. This can be accomplished using the lib option, as follows:
> install.packages("RWeka", lib="/path/to/library")
The installation function also provides additional options for installation from a local file, installation from source, or using experimental versions. You can read about these options in the help file, by using the following command:
> ?install.packages
More generally, the question mark operator can be used to obtain help on any R function. Simply type ? before the name of the function.
Loading and unloading R packages
In order to conserve memory, R does not load every installed package by default. Instead, packages are loaded by users as they are needed, using the library() function. The name of this function leads some people to incorrectly use the terms library and package interchangeably. However, to be precise, a library refers to the location where packages are installed and never to a package itself.
To load the RWeka package we installed previously, you can type the following:
> library(RWeka)
To unload an R package, use the detach() function. For example, to unload the RWeka package shown previously, use the following command:
> detach("package:RWeka", unload = TRUE)
This will free up any resources used by the package.
Summary
To learn more about Machine Learning, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:
Machine Learning with R - Second Edition
Machine Learning with R Cookbook
Mastering Machine Learning with R
Learning Data Mining with R
Resources for Article:
Further resources on this subject:
Getting Started with RStudio [article]
Machine learning and Python – the Dream Team [article]
Deep learning in R [article]
Developing Your First Cordova Application

Packt
16 Feb 2016
24 min read
In this article, you will develop, build, and deploy your first Apache Cordova application from scratch. The application you will develop is a Sound Recorder utility that you can use to record your voice or any sound and play it back. In this article, you will learn about the following topics:
Generating your initial Apache Cordova project artifacts by utilizing the Apache Cordova Command-line Interface (CLI)
Developing and building your mobile application from the initial Cordova-generated code
Deploying your developed mobile application to a real Android mobile device to see your application in action
(For more resources related to this topic, see here.)
An introduction to Cordova CLI
In order to create, develop, build, and test a Cordova application, you first need to use the Cordova CLI. Using this, you can create new Apache Cordova project(s), build them on mobile platforms such as iOS, Android, Windows Phone, and so on, and run them on real devices or within emulators. Note that in this article, we will focus on deploying our Sound Recorder application on Android devices only. In the next chapter of the book, we will learn how to deploy our Sound Recorder application on iOS and Windows Phone devices.
Installing Apache Cordova
Before installing the Apache Cordova CLI, you need to make sure that you install the following software:
Target platform SDK: For Android, you can download its SDK from http://developer.android.com/sdk/index.html (for other platforms, you need to download and install their corresponding SDKs)
Node.js: This is accessible at http://nodejs.org and can be downloaded and installed from http://nodejs.org/download/
After installing Node.js, you should be able to run Node.js or the node package manager (npm) from the command line. In order to install Apache Cordova using npm, run the following command (you can omit sudo if you are working in a Windows environment):
> sudo npm install -g cordova
It's worth mentioning that npm is the official package manager for Node.js and it is written completely in JavaScript. npm is a tool that allows users to install Node.js modules, which are available in the npm registry.
The sudo command allows a privileged Unix user to execute a command as the super user, or as any other user, according to the sudoers file. The sudo command, by default, requires you to authenticate with a password. Once you are authenticated, you can use the command without a password, by default, for 5 minutes.
After successfully installing Apache Cordova (Version 3.4.0), you should be able to execute Apache Cordova commands from the command line, for example, the following command will show you the currently installed version of Apache Cordova:
> cordova -version
In order to execute the Cordova commands without any problem, you also need to have Apache Ant installed and configured in your operating system. You can download Apache Ant from http://ant.apache.org. The complete instructions on how to install Ant are mentioned at https://ant.apache.org/manual/install.html.
Generating our Sound Recorder's initial code After installing Apache Cordova, we can start creating our Sound Recorder project by executing the following command: > cordova create soundRecorder com.jsmobile.soundrecorder SoundRecorder After successfully executing this command, you will find a message similar to the following one (note that the location path will be different on your machine): Creating a new cordova project with name "SoundRecorder" and id "com.jsmobile.soundrecorder" at location "/Users/xyz/projects/soundRecorder" If we analyze the cordova create command, we will find that its first parameter represents the path of your project. In this command, a soundRecorder directory will be generated for your project under the directory from which the cordova create command is executed. The second and third parameters are optional. The second parameter, com.jsmobile.soundrecorder, provides your project's namespace (it should be noted that in Android projects, this namespace will be translated to a Java package with this name), and the last parameter, SoundRecorder, provides the application's display text. You can edit both these values in the config.xml configuration file later, which will be illustrated soon. The following screenshot shows our SoundRecorder project's generated artifacts:   The Sound Recorder's initial structure As shown in the preceding screenshot, the generated Apache Cordova project contains the following main directories: www: This directory includes your application's HTML, JavaScript, and CSS code. You will also find the application's starting page (index.html), along with various subdirectories, which are as follows: css: This directory includes the default Apache Cordova application's CSS file (index.css) js: This directory includes the default Apache Cordova application's JavaScript file (index.js) img: This directory includes the default Apache Cordova application's logo file (logo.png) config.xml: This file contains the application configuration. The following code snippet shows the initial code of the config.xml file: <?xml version='1.0' encoding='utf-8'?> <widget id="com.jsmobile.soundrecorder" version="0.0.1" > <name>SoundRecorder</name> <description> A sample Apache Cordova application that responds to the deviceready event. </description> <author email="dev@cordova.apache.org" href="http://cordova.io"> Apache Cordova Team </author> <content src="index.html" /> <access origin="*" /> </widget> As shown in the preceding config.xml file, config.xml contains the following elements that are available on all the supported Apache Cordova platforms: The <widget> element's id attribute represents the application's namespace identifier as specified in our cordova create command, and the <widget> element's version attribute represents its full version number in the form of major.minor.patch. The <name> element specifies the application's name. The <description> and <author> elements specify the application's description and author, respectively. The <content> element (which is optional) specifies the application's starting page that is placed directly under the www directory. The default value is index.html. The <access> element(s) defines the set of external domains that the application is allowed to access. The default value is *, which means that the application is allowed to access any external server(s). 
Specifying the <access> element's origin to * is fine during application development, but it is considered a bad practice in production due to security concerns. Note that before moving your application to production, you should review its whitelist and declare its access to specific network domains and subdomains. There is another element that is not included in the default config.xml, and this is the <preference> element. The <preference> element(s) can be used to set the different preferences of the Cordova application and can work on all or a subset of the Apache Cordova-supported platforms. Take the example of the following code: <preference name="Fullscreen" value="true" /> If the Fullscreen preference is set to true, it means that the application will be in fullscreen mode on all Cordova-supported platforms (by default, this option is set to false). It is important to note that not all preferences work on all Cordova-supported platforms. Consider the following example: <preference name="HideKeyboardFormAccessoryBar" value="true"/> If the HideKeyboardFormAccessoryBar preference is set to true, then the additional helper toolbar, which appears above the device keyboard, will be hidden. This preference works only on iOS and BlackBerry platforms. platforms: This directory includes the application's supported platforms. After adding a new platform using Apache Cordova CLI, you will find a newly created directory that contains the platform-specific generated code under the platforms directory. The platforms directory is initially empty because we have not added any platforms yet. We will add support to the Android platform in the next step. plugins: This directory includes your application's used plugins. If you aren't already aware, a plugin is the mechanism to access the device's native functions in Apache Cordova. After adding a plugin (such as the Media plugin) to the project, you will find a newly created directory under the plugins directory, which contains the plugin code. Note that we will add three plugins in our Sound Recorder application example. merges: This directory can be used to override the common resources under the www directory. The files placed under the merges/[platform] directory will override the matching files (or add new files) under the www directory for the specified platform (the [platform] value can be iOS, Android, or any other valid supported platform). hooks: This directory contains scripts that can be used to customize Apache Cordova commands. A hook is a piece of code that executes before and/or after the Apache Cordova command runs. 
An insight into the www files If we look in the www directory, we will find that it contains the following three files: index.html: This file is placed under the application's www directory, and it contains the HTML content of the application page index.js: This file is placed under the www/js directory, and it contains a simple JavaScript logic that we will illustrate soon index.css: This file is placed under the www/css directory, and it contains the style classes of the HTML elements The following code snippet includes the most important part of the index.html page: <div class="app"> <h1>Apache Cordova</h1> <div id="deviceready" class="blink"> <p class="event listening">Connecting to Device</p> <p class="event received">Device is Ready</p> </div> </div> <script type="text/javascript" src="cordova.js"></script> <script type="text/javascript" src="js/index.js"></script> <script type="text/javascript"> app.initialize(); </script> The index.html page has a single div "app", which contains a child div "deviceready". The "deviceready" div has two paragraph elements, the "event listening" and "event received" paragraphs. The "event received" paragraph is initially hidden as indicated by index.css: .event.received { background-color:#4B946A; display:none; } In the index.html page, there are two main JavaScript-included files, as follows: cordova.js: This file contains Apache Cordova JavaScript APIs index.js: This file contains the application's simple logic Finally, the index.html page calls the initialize() method of the app object. Let's see the details of the app object in index.js: var app = { initialize: function() { this.bindEvents(); }, bindEvents: function() { document.addEventListener('deviceready', this.onDeviceReady, false); }, onDeviceReady: function() { app.receivedEvent('deviceready'); }, receivedEvent: function(id) { var parentElement = document.getElementById(id); var listeningElement = parentElement.querySelector('.listening'); var receivedElement = parentElement.querySelector('.received'); listeningElement.setAttribute('style', 'display:none;'); receivedElement.setAttribute('style', 'display:block;'); console.log('Received Event: ' + id); } }; The initialize() method calls the bindEvents() method, which adds an event listener for the 'deviceready' event. When the device is ready, the onDeviceReady() method is called, and this in turn calls the receivedEvent() method of the app object. In the receivedEvent() method, the "event listening" paragraph is hidden and the "event received" paragraph is shown to the user. This is to display the Device is Ready message to the user once Apache Cordova is fully loaded. It is important to note that you must not call any Apache Cordova API before the 'deviceready' event fires. This is because the 'deviceready' event fires only once Apache Cordova is fully loaded. Now you have an Apache Cordova project that has common cross-platform code, so we need to generate a platform-specific code in order to deploy our code on a real device. To generate Android platform code, you need to add the Android platform as follows: > cd soundRecorder > cordova platform add android In order to add any platform, you need to execute the cordova platform command from the application directory. 
Note that in order to execute the cordova platform command without problems, you need to perform the following instructions: Have Apache Ant installed and configured in your operating system as described in the Installing Apache Cordova section Make sure that the path to your Android SDK platform tools and the tools directory are added to your operating system's PATH environment variable After executing the cordova platform add command, you will find a new subdirectory Android added under the soundRecorder/platforms directory, which is added by Android. In order to build the project, use the following command: > cordova build Finally, you can run and test the generated Android project in the emulator by executing the following command: > cordova emulate android You might see the ERROR: No emulator images (avds) found message flash if no Android AVDs are available in your operating system. So, make sure you create one! The following screenshot shows our Sound Recorder application's initial screen:   It is recommended that you make your code changes in the root www directory, and not in the platforms/android/assets/www directory (especially if you are targeting multiple platforms) as the platforms directory will be overridden every time you execute the cordova build command, unless you are willing to use Apache Cordova CLI to initialize the project for a single platform only. Developing Sound Recorder application After generating the initial application code, it's time to understand what to do next. Sound Recorder functionality The following screenshot shows our Sound Recorder page:   When the user clicks on the Record Sound button, they will be able to record their voices; they can stop recording their voices by clicking on the Stop Recording button. You can see this in the following screenshot:   As shown in the following screenshot, when the user clicks on the Playback button, the recorded voice will be played back:   Sound Recorder preparation In order to implement this functionality using Apache Cordova, we need to add the following plugins using the indicated commands, which should be executed from the application directory: media: This plugin is used to record and play back sound files: > cordova plugin add https://git-wip-us.apache.org/repos/asf/cordova-plugin-media.git device: This plugin is required to access the device information: > cordova plugin add https://git-wip-us.apache.org/repos/asf/cordova-plugin-device.git file: This plugin is used to access the device's filesystem: > cordova plugin add https://git-wip-us.apache.org/repos/asf/cordova-plugin-file.git In order to apply these plugins to our Apache Cordova project, we need to run the cordova build command again from the project directory, as follows: > cordova build Sound Recorder details Now we are done with the preparation of our Sound Recorder application. Before moving to the code details, let's see the hierarchy of our Sound Recorder application, as shown in the following screenshot: The application's www directory contains the following directories: css: This directory contains the custom application CSS file(s) img: This directory contains the custom application image file(s) js: This directory contains the custom application JavaScript code jqueryMobile: This directory (which is a newly added one) contains jQuery mobile framework files Finally, the index.html file contains the application's single page whose functionality was illustrated earlier in this section. 
It is important to note that Apache Cordova does not require you to use a JavaScript mobile User Interface (UI) framework. However, it is recommended that you use a JavaScript mobile UI framework in addition to Apache Cordova. This is in order to facilitate building the application UI and speed up the application development process. Let's see the details of the index.html page of our Sound Recorder application. The following code snippet shows the included files in the page: <link rel="stylesheet" type="text/css" href="css/app.css" /> <link rel="stylesheet" href="jqueryMobile/jquery.mobile-1.4.0.min.css"> <script src="jqueryMobile/jquery-1.10.2.min.js"></script> <script src="jqueryMobile/jquery.mobile-1.4.0.min.js"></script> ... <script type="text/javascript" src="cordova.js"></script> <script type="text/javascript" src="js/app.js"></script> In the preceding code, the following files are included: app.css: This is the custom style file of our Sound Recorder application The files required by the jQuery mobile framework, which are: jquery.mobile-1.4.0.min.css jquery-1.10.2.min.js jquery.mobile-1.4.0.min.js cordova.js: This is the Apache Cordova JavaScript API's file app.js: This is the custom JavaScript file of our Sound Recorder application It is important to know that you can download the jQuery mobile framework files from http://jquerymobile.com/download/. The following code snippet shows the HTML content of our application's single page, whose id is "main": <div data-role="page" id="main"> <div data-role="header"> <h1>Sound Recorder</h1> </div> <div data-role="content"> <div data-role="fieldcontain"> <h1>Welcome to the Sound Recorder Application</h1> <p>Click 'Record Sound' button in order to start recording. You will be able to see the playback button once the sound recording finishes.<br/><br/></p> <input type="hidden" id="location"/> <div class="center-wrapper"> <input type="button" id="recordSound" data- icon="audio" value="Record Sound" class="center-button" data- inline="true"/> <input type="button" id="playSound" data- icon="refresh" value="Playback" class="center-button" data- inline="true"/><br/> </div> <div data-role="popup" id="recordSoundDialog" data- dismissible="false" style="width:250px"> <div data-role="header"> <h1>Recording</h1> </div> <div data-role="content"> <div class="center-wrapper"> <div id="soundDuration"></div> <input type="button" id="stopRecordingSound" value="Stop Recording" class="center-button" data- inline="true"/> </div> </div> </div> </div> </div> <div data-role="footer" data-position="fixed"> <h1>Powered by Apache Cordova</h1> </div> </div> Looking at the preceding code, our Sound Recording page ("main") is defined by setting a div's data-role attribute to "page". It has a header defined by setting a div's data-role to "header". It has content defined by setting a div's data-role to "content", which contains the recording and playback buttons. The content also contains a "recordSoundDialog" pop up, which is defined by setting a div's data-role to "popup". The "recordSoundDialog" pop up has a header and content. The pop-up content displays the recorded audio duration in the "soundDuration" div, and it has a "stopRecordingSound" button that stops recording the sound. Finally, the page has a footer defined by setting a div's data-role to "footer", which contains a statement about the application. 
Now, it's time to learn how we can define event handlers on the page's HTML elements and use the Apache Cordova API inside our defined event handlers to implement the application's functionality. The following code snippet shows the page initialization code: (function() { $(document).on("pageinit", "#main", function(e) { e.preventDefault(); function onDeviceReady() { $("#recordSound").on("tap", function(e) { // Action is defined here ... }); $("#recordSoundDialog").on("popupafterclose", function(event, ui) { // Action is defined here ... }); $("#stopRecordingSound").on("tap", function(e) { // Action is defined here ... }); $("#playSound").on("tap", function(e) { // Action is defined here ... }); } $(document).on('deviceready', onDeviceReady); initPage(); }); // Code is omitted here for simplicity function initPage() { $("#playSound").closest('.ui-btn').hide(); } })(); In jQuery mobile, the "pageinit" event is called once during page initialization. In this event, the event handlers are defined and the page is initialized. Note that all of the event handlers are defined after the 'deviceready' event fires. The event handlers are defined for the following:
Tapping the "recordSound" button
Closing the "recordSoundDialog" dialog
Tapping the "stopRecordingSound" button
Tapping the "playSound" button
In initPage(), the "playSound" button is hidden as no voice has been recorded yet. As you noticed, in order to hide an element in jQuery mobile, you just need to call its hide() method. We can now see the details of each event handler; the next code snippet shows the "recordSound" tap event handler: var recInterval; $("#recordSound").on("tap", function(e) { e.preventDefault(); var recordingCallback = {}; recordingCallback.recordSuccess = handleRecordSuccess; recordingCallback.recordError = handleRecordError; startRecordingSound(recordingCallback); var recTime = 0; $("#soundDuration").html("Duration: " + recTime + " seconds"); $("#recordSoundDialog").popup("open"); recInterval = setInterval(function() { recTime = recTime + 1; $("#soundDuration").html("Duration: " + recTime + " seconds"); }, 1000); }); The following actions are performed in the "recordSound" tap event handler:
A call to the startRecordingSound(recordingCallback) function is performed. The startRecordingSound(recordingCallback) function is a helper function that starts the sound recording process using the Apache Cordova Media API. Its recordingCallback parameter represents a JSON object, which has the recordSuccess and recordError callback attributes. The recordSuccess callback will be called if the recording operation is a success, and the recordError callback will be called if the recording operation is a failure.
Then, the "recordSoundDialog" dialog is opened and its "soundDuration" div is updated every second with the duration of the recorded sound.
The following code snippet shows the startRecordingSound(recordingCallback), stopRecordingSound(), and requestApplicationDirectory(callback) functions: var BASE_DIRECTORY = "CS_Recorder"; var recordingMedia; function startRecordingSound(recordingCallback) { var recordVoice = function(dirPath) { var basePath = ""; if (dirPath) { basePath = dirPath + "/"; } var mediaFilePath = basePath + (new Date()).getTime() + ".wav"; var recordingSuccess = function() { recordingCallback.recordSuccess(mediaFilePath); }; recordingMedia = new Media(mediaFilePath, recordingSuccess, recordingCallback.recordError); // Record audio recordingMedia.startRecord(); }; if (device.platform === "Android") { var callback = {}; callback.requestSuccess = recordVoice; callback.requestError = recordingCallback.recordError; requestApplicationDirectory(callback); } else { recordVoice(); } } function stopRecordingSound() { recordingMedia.stopRecord(); recordingMedia.release(); } function requestApplicationDirectory(callback) { var directoryReady = function (dirEntry) { callback.requestSuccess(dirEntry.toURL()); }; var fileSystemReady = function(fileSystem) { fileSystem.root.getDirectory(BASE_DIRECTORY, {create: true}, directoryReady); }; window.requestFileSystem(LocalFileSystem.PERSISTENT, 0, fileSystemReady, callback.requestError); } The next section illustrates the preceding code snippet. Recording and playing the audio files back In order to record the audio files using Apache Cordova, we need to create a Media object, as follows: recordingMedia = new Media(src, mediaSuccess, mediaError); The Media object constructor has the following parameters: src: This refers to the URI of the media file mediaSuccess: This refers to the callback that will be invoked if the media operation (play/record or stop function) succeeds mediaError: This refers to the callback that will be invoked if the media operation (again a play/record or stop function) fails In order to start recording an audio file, a call to the startRecord() method of the Media object must be performed. When the recording is over, a call to stopRecord() of the Media object method must be performed. In startRecordingSound(recordingCallback), the function gets the current device platform by using device.platform, as follows: If the current platform is Android, then a call to requestApplicationDirectory(callback) is performed in order to create an application directory (if it is not already created) called "CS_Recorder" under the device's SD card root directory using the Apache Cordova File API. If the directory creation operation succeeds, recordVoice() will be called by passing the application directory path as a parameter. The recordVoice() function starts recording the sound and saves the resulting audio file under the application directory. Note that if there is no SD card in your Android device, then the application directory will be created under the app's private data directory (/data/data/[app_directory]), and the audio file will be saved under it. In the else block which refers to the other supported platforms (Windows Phone 8 and iOS), recordVoice() is called without creating an application-specific directory. At the time of writing this article, in iOS and Windows Phone 8, every application has a private directory, and applications cannot store their files in any place other than this directory, using the Apache Cordova APIs. 
In the case of iOS, the audio files will be stored under the tmp directory of the application's sandbox directory (the application's private directory). In the case of Windows Phone 8, the audio files will be stored under the application's local directory. Note that using the native Windows Phone 8 API (Windows.Storage), you can read and write files in an SD card with some restrictions. However, at the moment, you cannot do this using Apache Cordova; hopefully, this capability will soon be supported by Cordova (http://msdn.microsoft.com/en-us/library/windows/apps/xaml/dn611857.aspx). In recordVoice(), the function starts creating a media file using the Media object's startRecord() function. After calling the media file's stopRecord() function and after the success of the recording operation, recordingCallback.recordSuccess will be called by recordingSuccess. The recordingCallback.recordSuccess function calls handleRecordSuccess, passing the audio file's full path, mediaFilePath, as a parameter. The following code snippet shows the handleRecordSuccess function: function handleRecordSuccess(currentFilePath) { $("#location").val(currentFilePath); $("#playSound").closest('.ui-btn').show(); } The handleRecordSuccess function stores the recorded audio file path in the "location" hidden field, which is used later by the playback button, and shows the "playSound" button. In requestApplicationDirectory(callback), which is called in the case of Android, the function does the following:
Calls window.requestFileSystem in order to request the device filesystem before performing any file operation(s)
Calls fileSystem.root.getDirectory when the filesystem is ready in order to create our custom application directory
When our custom application directory is created successfully, passes the path of the created directory, or the existing directory, to recordVoice(), which was illustrated earlier
In the other application actions, the following code snippet shows the "stopRecordingSound" tapping and "recordSoundDialog" closing event handlers: $("#recordSoundDialog").on("popupafterclose", function(event, ui) { clearInterval(recInterval); stopRecordingSound(); }); $("#stopRecordingSound").on("tap", function(e) { $("#recordSoundDialog").popup("close"); }); function stopRecordingSound() { recordingMedia.stopRecord(); recordingMedia.release(); } In the "stopRecordingSound" tapping event handler, it closes the open "recordSoundDialog" pop up. Generally, if "recordSoundDialog" is closed by the "stopRecordingSound" button's tapping action or by pressing special device keys, such as the back button in Android devices, then the recording timer stops as a result of calling clearInterval(recInterval), and then it calls the stopRecordingSound() function to stop recording the sound. The stopRecordingSound() function calls the Media object's stopRecord() method, and then releases it by calling the Media object's release() method.
The following code snippet shows the "playSound" tap event handler: var audioMedia; var recordingMedia; $("#playSound").on("tap", function(e) { e.preventDefault(); var playCallback = {}; playCallback.playSuccess = handlePlaySuccess; playCallback.playError = handlePlayError; playSound($("#location").val(), playCallback); }); function playSound(filePath, playCallback) { if (filePath) { cleanUpResources(); audioMedia = new Media(filePath, playCallback.playSuccess, playCallback.playError); // Play audio audioMedia.play(); } } function cleanUpResources() { if (audioMedia) { audioMedia.stop(); audioMedia.release(); audioMedia = null; } if (recordingMedia) { recordingMedia.stop(); recordingMedia.release(); recordingMedia = null; } } In the "playSound" tap event handler, it calls the playSound(filePath, playCallback) function by passing the audio file location, which is stored in the "location" hidden field and playCallback. The playSound(filePath, playCallback) function uses the Media object's play() method to play back the saved audio file after releasing used Media objects. Note that this is a requirement to avoid running out of system audio resources. Building and running Sound Recorder application Now, after developing our application code, we can start building our application using the following cordova build command: > cordova build In order to run the application in your Android mobile or tablet, just make sure you enable USB debugging in your Android device. Then, plug your Android device into your development machine and execute the following command from the application directory: > cordova run android Congratulations! After running this command, you will see the Sound Recorder application deployed in your Android device; you can now start testing it on your real device. Summary In this article, you developed your first Apache Cordova application. You now know how to use the Apache Cordova Device API at a basic level. You also know how to use the Media and File APIs along with jQuery mobile to develop the Sound Recorder application. You now understand how to use Apache Cordova CLI in order to manage your Cordova mobile application. In addition, you know how to create a Cordova project, add a new platform (in our case, Android), build your own Cordova mobile application, and deploy your Cordova mobile application to the emulator, and most importantly, to a real device! To learn more, refer to these books: Creating Mobile Apps with jQuery Mobile (https://www.packtpub.com/web-development/creating-mobile-apps-jquery-mobile) Building Mobile Applications Using Kendo UI Mobile and ASP. NET Web API (https://www.packtpub.com/application-development/building-mobile-applications-using-kendo-ui-mobile-and-aspnet-web-api) jQuery Mobile First Look (https://www.packtpub.com/web-development/jquery-mobile-first-look) jQuery Mobile Web Development Essentials (https://www.packtpub.com/web-development/jquery-mobile-web-development-essentials-second-edition) Resources for Article: Further resources on this subject: Understanding mutability and immutability in Python, C#, and JavaScript[article] Object Detection Using Image Features in JavaScript[article] Learning Node.js for Mobile Application Development[article]
Writing a Blog Application with Node.js and AngularJS

Packt
16 Feb 2016
35 min read
In this article, we are going to build a blog application by using Node.js and AngularJS. Our system will support adding, editing, and removing articles, so there will be a control panel. The MongoDB or MySQL database will handle the storing of the information and the Express framework will be used as the site base. It will deliver the JavaScript, CSS, and the HTML to the end user, and will provide an API to access the database. We will use AngularJS to build the user interface and control the client-side logic in the administration page. (For more resources related to this topic, see here.) This article will cover the following topics: AngularJS fundamentals Choosing and initializing a database Implementing the client-side part of an application with AngularJS Exploring AngularJS AngularJS is an open source, client-side JavaScript framework developed by Google. It's full of features and is really well documented. It has almost become a standard framework in the development of single-page applications. The official site of AngularJS, http://angularjs.org, provides a well-structured documentation. As the framework is widely used, there is a lot of material in the form of articles and video tutorials. As a JavaScript library, it collaborates pretty well with Node.js. In this article, we will build a simple blog with a control panel. Before we start developing our application, let's first take a look at the framework. AngularJS gives us very good control over the data on our page. We don't have to think about selecting elements from the DOM and filling them with values. Thankfully, due to the available data-binding, we may update the data in the JavaScript part and see the change in the HTML part. This is also true for the reverse. Once we change something in the HTML part, we get the new values in the JavaScript part. The framework has a powerful dependency injector. There are predefined classes in order to perform AJAX requests and manage routes. You could also read Mastering Web Development with AngularJS by Peter Bacon Darwin and Pawel Kozlowski, published by Packt Publishing. Bootstrapping AngularJS applications To bootstrap an AngularJS application, we need to add the ng-app attribute to some of our HTML tags. It is important that we pick the right one. Having ng-app somewhere means that all the child nodes will be processed by the framework. It's common practice to put that attribute on the <html> tag. In the following code, we have a simple HTML page containing ng-app: <html ng-app> <head> <script src="angular.min.js"></script> </head> <body> ... </body> </html>   Very often, we will apply a value to the attribute. This will be a module name. We will do this while developing the control panel of our blog application. Having the freedom to place ng-app wherever we want means that we can decide which part of our markup will be controlled by AngularJS. That's good, because if we have a giant HTML file, we really don't want to spend resources parsing the whole document. Of course, we may bootstrap our logic manually, and this is needed when we have more than one AngularJS application on the page. Using directives and controllers In AngularJS, we can implement the Model-View-Controller pattern. The controller acts as glue between the data (model) and the user interface (view). In the context of the framework, the controller is just a simple function. 
For example, the following HTML code illustrates that a controller is just a simple function: <html ng-app> <head> <script src="angular.min.js"></script> <script src="HeaderController.js"></script> </head> <body> <header ng-controller="HeaderController"> <h1>{{title}}</h1> </header> </body> </html>   In <head> of the page, we are adding the minified version of the library and HeaderController.js; a file that will host the code of our controller. We also set an ng-controller attribute in the HTML markup. The definition of the controller is as follows: function HeaderController($scope) { $scope.title = "Hello world"; } Every controller has its own area of influence. That area is called the scope. In our case, HeaderController defines the {{title}} variable. AngularJS has a wonderful dependency-injection system. Thankfully, due to this mechanism, the $scope argument is automatically initialized and passed to our function. The ng-controller attribute is called the directive, that is, an attribute, which has meaning to AngularJS. There are a lot of directives that we can use. That's maybe one of the strongest points of the framework. We can implement complex logic directly inside our templates, for example, data binding, filtering, or modularity. Data binding Data binding is a process of automatically updating the view once the model is changed. As we mentioned earlier, we can change a variable in the JavaScript part of the application and the HTML part will be automatically updated. We don't have to create a reference to a DOM element or attach event listeners. Everything is handled by the framework. Let's continue and elaborate on the previous example, as follows: <header ng-controller="HeaderController"> <h1>{{title}}</h1> <a href="#" ng-click="updateTitle()">change title</a> </header>   A link is added and it contains the ng-click directive. The updateTitle function is a function defined in the controller, as seen in the following code snippet: function HeaderController($scope) { $scope.title = "Hello world"; $scope.updateTitle = function() { $scope.title = "That's a new title."; } }   We don't care about the DOM element and where the {{title}} variable is. We just change a property of $scope and everything works. There are, of course, situations where we will have the <input> fields and we want to bind their values. If that's the case, then the ng-model directive can be used. We can see this as follows: <header ng-controller="HeaderController"> <h1>{{title}}</h1> <a href="#" ng-click="updateTitle()">change title</a> <input type="text" ng-model="title" /> </header>   The data in the input field is bound to the same title variable. This time, we don't have to edit the controller. AngularJS automatically changes the content of the h1 tag. Encapsulating logic with modules It's great that we have controllers. However, it's not a good practice to place everything into globally defined functions. That's why it is good to use the module system. The following code shows how a module is defined: angular.module('HeaderModule', []); The first parameter is the name of the module and the second one is an array with the module's dependencies. By dependencies, we mean other modules, services, or something custom that we can use inside the module. It should also be set as a value of the ng-app directive. 
The code so far could be translated to the following code snippet:

angular.module('HeaderModule', [])
.controller('HeaderController', function($scope) {
    $scope.title = "Hello world";
    $scope.updateTitle = function() {
        $scope.title = "That's a new title.";
    }
});

So, the first line defines a module. We can chain the different methods of the module, and one of them is the controller method. Following this approach, that is, putting our code inside a module, we are encapsulating logic. This is a sign of good architecture. And of course, with a module, we have access to different features such as filters, custom directives, and custom services.

Preparing data with filters

Filters are very handy when we want to prepare our data before it is displayed to the user. Let's say, for example, that we need to show our title in uppercase once it reaches a length of more than 20 characters:

angular.module('HeaderModule', [])
.filter('customuppercase', function() {
    return function(input) {
        if (input.length > 20) {
            return input.toUpperCase();
        } else {
            return input;
        }
    };
})
.controller('HeaderController', function($scope) {
    $scope.title = "Hello world";
    $scope.updateTitle = function() {
        $scope.title = "That's a new title.";
    }
});

That's the definition of the custom filter called customuppercase. It receives the input and performs a simple check. What it returns is what the user sees at the end. Here is how this filter could be used in HTML:

<h1>{{title | customuppercase}}</h1>

Of course, we may add more than one filter per variable. There are also predefined filters, for example, to limit the length of a value, convert JavaScript to JSON, or format a date.

Dependency injection

Dependency management can be very tough sometimes. We may split everything into different modules/components. They have nicely written APIs and they are very well documented. However, very soon, we may realize that we need to create a lot of objects. Dependency injection solves this problem by providing what we need, on the fly. We already saw this in action. The $scope parameter passed to our controller is actually created by the injector of AngularJS. To get something as a dependency, we need to define it somewhere and let the framework know about it. We do this as follows:

angular.module('HeaderModule', [])
.factory("Data", function() {
    return {
        getTitle: function() {
            return "A better title.";
        }
    }
})
.controller('HeaderController', function($scope, Data) {
    $scope.title = Data.getTitle();
    $scope.updateTitle = function() {
        $scope.title = "That's a new title.";
    }
});

The Module class has a method called factory. It registers a new service that can later be used as a dependency. The function returns an object with only one method, getTitle. Of course, the name of the service should match the name of the controller's parameter. Otherwise, AngularJS will not be able to find the dependency's source.

The model in the context of AngularJS

In the well-known Model-View-Controller pattern, the model is the part that stores the data in the application. AngularJS doesn't have a specific workflow to define models. The $scope variable could be considered a model. We keep the data in properties attached to the current scope. Later, we can use the ng-model directive and bind a property to the DOM element. We already saw how this works in the previous sections. The framework may not provide the usual form of a model, but it's made like that so that we can write our own implementation. The fact that AngularJS works with plain JavaScript objects makes this task easy.
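For example, a shared service can play the role of a model. The following short sketch keeps the data in a plain object that any controller can inject and bind to; the TitleModel name is our own invention:

angular.module('HeaderModule', [])
.factory('TitleModel', function() {
    // a plain JavaScript object acting as our hand-rolled model
    return { title: "Hello world" };
})
.controller('HeaderController', function($scope, TitleModel) {
    // expose the model on the scope; ng-model="model.title" now binds to it
    $scope.model = TitleModel;
});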
Final words on AngularJS

AngularJS is one of the leading frameworks, not only because it is made by Google, but also because it's really flexible. We could use just a small piece of it or build a solid architecture using its giant collection of features.

Selecting and initializing the database

To build a blog application, we need a database that will store the published articles. In most cases, the choice of the database depends on the current project. There are factors such as performance and scalability that we should keep in mind. In order to have a better look at the possible solutions, we will examine two of the most popular databases: MongoDB and MySQL. The first one is a NoSQL type of database. According to the Wikipedia entry (http://en.wikipedia.org/wiki/NoSQL) on NoSQL databases:

"A NoSQL or Not Only SQL database provides a mechanism for storage and retrieval of data that is modeled in means other than the tabular relations used in relational databases."

In other words, it's simpler than a SQL database, and very often stores information in key-value form. Usually, such solutions are used when handling and storing large amounts of data. It is also a very popular approach when we need a flexible schema or when we want to use JSON. It really depends on what kind of system we are building. In some cases, MySQL could be a better choice, while in some other cases, MongoDB. In our example blog, we're going to use both. In order to do this, we will need a layer that connects to the database server and accepts queries. To make things a bit more interesting, we will create a module that has only one API, but can switch between the two database models.

Using NoSQL with MongoDB

Let's start with MongoDB. Before we start storing information, we need a MongoDB server running. It can be downloaded from the official page of the database, https://www.mongodb.org/downloads. We are not going to handle the communication with the database manually. There is a driver specifically developed for Node.js. It's called mongodb and we should include it in our package.json file. After successful installation via npm install, the driver will be available in our scripts. We can check this as follows:

"dependencies": {
    "mongodb": "1.3.20"
}

We will stick to the Model-View-Controller architecture and place the database-related operations in a model called Articles. We can see this as follows:

var crypto = require("crypto"),
    type = "mongodb",
    client = require('mongodb').MongoClient,
    mongodb_host = "127.0.0.1",
    mongodb_port = "27017",
    collection;

module.exports = function() {
    if (type == "mongodb") {
        return {
            add: function(data, callback) { ... },
            update: function(data, callback) { ... },
            get: function(callback) { ... },
            remove: function(id, callback) { ... }
        }
    } else {
        return {
            add: function(data, callback) { ... },
            update: function(data, callback) { ... },
            get: function(callback) { ... },
            remove: function(id, callback) { ... }
        }
    }
}

It starts by defining a few dependencies and settings for the MongoDB connection. Line number one requires the crypto module. We will use it to generate unique IDs for every article. The type variable defines which database is currently accessed. The third line initializes the MongoDB driver. We will use it to communicate with the database server.
After that, we set the host and port for the connection and, at the end, a global collection variable, which will keep a reference to the collection with the articles. In MongoDB, the collections are similar to the tables in MySQL. The next logical step is to establish a database connection and perform the needed operations, as follows:

connection = 'mongodb://';
connection += mongodb_host + ':' + mongodb_port;
connection += '/blog-application';

client.connect(connection, function(err, database) {
    if (err) {
        throw new Error("Can't connect");
    } else {
        console.log("Connection to MongoDB server successful.");
        collection = database.collection('articles');
    }
});

We pass the host and the port, and the driver does everything else. Of course, it is a good practice to handle the error (if any) and throw an exception. In our case, this is especially needed, because without the information in the database, the frontend has nothing to show. The rest of the module contains methods to add, edit, retrieve, and delete records:

return {
    add: function(data, callback) {
        var date = new Date();
        data.id = crypto.randomBytes(20).toString('hex');
        // getMonth() is zero-based, so we add one to get the calendar month
        data.date = date.getFullYear() + "-" + (date.getMonth() + 1) + "-" + date.getDate();
        collection.insert(data, {}, callback || function() {});
    },
    update: function(data, callback) {
        collection.update(
            {id: data.id},
            data,
            {},
            callback || function() {}
        );
    },
    get: function(callback) {
        // normalize the result to callback(rows) so that both
        // database variants of the module expose the same API
        collection.find({}).toArray(function(err, rows) {
            callback(rows);
        });
    },
    remove: function(id, callback) {
        collection.findAndModify(
            {id: id},
            [],
            {},
            {remove: true},
            callback
        );
    }
}

The add and update methods accept the data parameter. That's a simple JavaScript object. For example, see the following code:

{
    title: "Blog post title",
    text: "Article's text here ..."
}

The records are identified by an automatically generated unique id. The update method needs it in order to find out which record to edit. All the methods also have a callback. That's important, because the module is meant to be used as a black box, that is, we should be able to create an instance of it, operate with the data, and at the end continue with the rest of the application's logic.

Using MySQL

We're going to use an SQL type of database with MySQL. We will add a few more lines of code to the already working Articles.js model. The idea is to have a class that supports the two databases as two different options. At the end, we should be able to switch from one to the other by simply changing the value of a variable. Similar to MongoDB, we first need to install the database to be able to use it. The official download page is http://www.mysql.com/downloads. MySQL requires another Node.js module. It should be added again to the package.json file. We can see the module as follows:

"dependencies": {
    "mongodb": "1.3.20",
    "mysql": "2.0.0"
}

Similar to the MongoDB solution, we first need to connect to the server. To do so, we need to know the values of the host, username, and password fields. And, because the data in MySQL is organized into databases, we also need the name of the database. The following code defines the needed variables:

var mysql = require('mysql'),
    mysql_host = "127.0.0.1",
    mysql_user = "root",
    mysql_password = "",
    mysql_database = "blog_application",
    connection;

The previous example leaves the password field empty, but we should set the proper value for our system. The MySQL database requires us to define a table and its fields before we start saving data.
So, the following code is a short dump of the table used in this article:

CREATE TABLE IF NOT EXISTS `articles` (
    `id` int(11) NOT NULL AUTO_INCREMENT,
    `title` longtext NOT NULL,
    `text` longtext NOT NULL,
    `date` varchar(100) NOT NULL,
    PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 AUTO_INCREMENT=1;

Once we have a database and its table set, we can continue with the database connection, as follows:

connection = mysql.createConnection({
    host: mysql_host,
    user: mysql_user,
    password: mysql_password
});
connection.connect(function(err) {
    if (err) {
        throw new Error("Can't connect to MySQL.");
    } else {
        connection.query("USE " + mysql_database, function(err, rows, fields) {
            if (err) {
                throw new Error("Missing database.");
            } else {
                console.log("Successfully selected database.");
            }
        })
    }
});

The driver provides a method to connect to the server and execute queries. The first executed query selects the database. If everything is ok, you should see Successfully selected database as an output in your console. Half of the job is done. What we should do now is replicate the methods returned by the MongoDB implementation; otherwise, the code using the class would break when we switch to MySQL. By replicating them, we mean that they should have the same names and should accept the same arguments. If we do everything correctly, at the end our application will support two types of databases. And all we have to do is change the value of the type variable:

return {
    add: function(data, callback) {
        var date = new Date();
        var query = "";
        query += "INSERT INTO articles (title, text, date) VALUES (";
        query += connection.escape(data.title) + ", ";
        query += connection.escape(data.text) + ", ";
        // getMonth() is zero-based, so we add one to get the calendar month
        query += "'" + date.getFullYear() + "-" + (date.getMonth() + 1) + "-" + date.getDate() + "'";
        query += ")";
        connection.query(query, callback);
    },
    update: function(data, callback) {
        var query = "UPDATE articles SET ";
        query += "title=" + connection.escape(data.title) + ", ";
        query += "text=" + connection.escape(data.text) + " ";
        query += "WHERE id=" + connection.escape(data.id);
        connection.query(query, callback);
    },
    get: function(callback) {
        var query = "SELECT * FROM articles ORDER BY id DESC";
        connection.query(query, function(err, rows, fields) {
            if (err) {
                throw new Error("Error getting.");
            } else {
                callback(rows);
            }
        });
    },
    remove: function(id, callback) {
        var query = "DELETE FROM articles WHERE id=" + connection.escape(id);
        connection.query(query, callback);
    }
}

The code is a little longer than the MongoDB variant. That's because we needed to construct MySQL queries from the passed data. Keep in mind that we have to escape any information that comes into the module, including the id values used in the WHERE clauses. That's why we use connection.escape(). With these lines of code, our model is completed. Now we can add, edit, remove, or get data.
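Because the model is meant to be used as a black box, a controller can consume it without knowing which database sits behind it. The following is a minimal hypothetical sketch; the sample article data is invented for illustration:

var articles = require("./models/Articles")();

// store a new article; the model generates the unique id and the date
articles.add({ title: "Hello", text: "Our first post." }, function() {
    // read everything back; both database variants invoke callback(rows)
    articles.get(function(rows) {
        console.log("Stored articles: " + rows.length);
    });
});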
Let's continue with the part that shows the articles to our users.

Developing the client side with AngularJS

Let's assume that there is some data in the database and we are ready to present it to the users. So far, we have only developed the model, which is the class that takes care of the access to the information. To simplify the process, we will use Express here. We first need to update the package.json file and include the framework, as follows:

"dependencies": {
    "express": "3.4.6",
    "jade": "0.35.0",
    "mongodb": "1.3.20",
    "mysql": "2.0.0"
}

We are also adding Jade, because we are going to use it as a template language. Writing markup in plain HTML is not very efficient nowadays. By using a template engine, we can split the data and the HTML markup, which makes our application much better structured. Jade's syntax is similar to HTML, but we can write tags without the need to close them:

body
    p(class="paragraph", data-id="12") Sample text here
    footer
        a(href="#") my site

The preceding code snippet is transformed to the following code snippet:

<body>
    <p data-id="12" class="paragraph">Sample text here</p>
    <footer><a href="#">my site</a></footer>
</body>

Jade relies on the indentation in the content to distinguish the tags. Let's now look at the project structure. We placed our already written class, Articles.js, inside the models directory. The public directory will contain CSS styles, and all the necessary client-side JavaScript: the AngularJS library, the AngularJS router module, and our custom code. We will skip some of the explanations about the following code. Our index.js file looks as follows:

var express = require('express');
var app = express();
var articles = require("./models/Articles")();

app.set('views', __dirname + '/views');
app.set('view engine', 'jade');
app.use(express.static(__dirname + '/public'));
app.use(function(req, res, next) {
    req.articles = articles;
    next();
});

app.get('/api/get', require("./controllers/api/get"));
app.get('/', require("./controllers/index"));

app.listen(3000);
console.log('Listening on port 3000');

At the beginning, we require the Express framework and our model. Maybe it's better to initialize the model inside the controller, but in our case this is not necessary. Just after that, we set up some basic options for Express and define our own middleware. It has only one job to do, and that is to attach the model to the request object. We are doing this because the request object is passed to all the route handlers. In our case, these handlers are actually the controllers. So, Articles.js becomes accessible everywhere via the req.articles property. At the end of the script, we placed two routes. The second one catches the usual requests that come from the users. The first one, /api/get, is a bit more interesting. We want to build our frontend on top of AngularJS, so the data that is stored in the database should not be rendered in the Node.js part, but on the client side, where we use Google's framework. To make this possible, we will create routes/controllers to get, add, edit, and delete records. Everything will be controlled by HTTP requests performed by AngularJS. In other words, we need an API. Before we start using AngularJS, let's take a look at the /controllers/api/get.js controller:

module.exports = function(req, res, next) {
    req.articles.get(function(rows) {
        res.send(rows);
    });
}

The main job is done by our model and the response is handled by Express. It's nice, because if we pass a JavaScript object, as we did (rows is actually an array of objects), the framework sets the response headers automatically. To test the result, we could run the application with node index.js and open http://localhost:3000/api/get. If we don't have any records in the database, we will get an empty array. Otherwise, the stored articles will be returned. So, that's the URL that we should hit from within the AngularJS controller in order to get the information. The code of the /controllers/index.js controller is also just a few lines.
We can see the code as follows:

module.exports = function(req, res, next) {
    res.render("list", { app: "" });
}

It simply renders the list view, which is stored in the list.jade file. That file should be saved in the /views directory. But before we see its code, we will check another file, which acts as a base for all the pages. Jade has a nice feature called blocks. We may define different partials and combine them into one template. The following is our layout.jade file:

doctype html
html(ng-app="#{app}")
    head
        title Blog
        link(rel='stylesheet', href='/style.css')
        script(src='/angular.min.js')
        script(src='/angular-route.min.js')
    body
        block content

There is only one variable passed to this template, which is #{app}. We will need it later to initialize the administration's module. The angular.min.js and angular-route.min.js files should be downloaded from the official AngularJS site, and placed in the /public directory. The body of the page contains a block placeholder called content, which we will later fill with the list of the articles. The following is the list.jade file:

extends layout

block content
    .container(ng-controller="BlogCtrl")
        section.articles
            article(ng-repeat="article in articles")
                h2 {{article.title}}
                br
                small published on {{article.date}}
                p {{article.text}}
    script(src='/blog.js')

The two lines at the beginning combine both templates into one page. The Express framework transforms the Jade template into HTML and serves it to the browser of the user. From there, the client-side JavaScript takes control. We are using the ng-controller directive to say that the div element will be controlled by an AngularJS controller called BlogCtrl. The same controller should have a variable, articles, filled with the information from the database. ng-repeat goes through the array and displays the content to the users. The blog.js file holds the code of the controller:

function BlogCtrl($scope, $http) {
    $scope.articles = [
        { title: "", text: "Loading ..." }
    ];
    $http({method: 'GET', url: '/api/get'})
    .success(function(data, status, headers, config) {
        $scope.articles = data;
    })
    .error(function(data, status, headers, config) {
        console.error("Error getting articles.");
    });
}

The controller has two dependencies. The first one, $scope, points to the current view. Whatever we assign as a property there is available as a variable in our HTML markup. Initially, we add only one element, which doesn't have a title, but has text. It is shown to indicate that we are still loading the articles from the database. The second dependency, $http, provides an API in order to make HTTP requests. So, all we have to do is query /api/get, fetch the data, and pass it to the $scope dependency. The rest is done by AngularJS and its magical two-way data binding. To make the application a little more interesting, we will add a search field, as follows:

// views/list.jade
header
    .search
        input(type="text", placeholder="type a filter here", ng-model="filterText")
    h1 Blog
    hr

The ng-model directive binds the value of the input field to a variable inside our $scope dependency. However, this time, we don't have to edit our controller, and can simply apply the same variable as a filter to ng-repeat:

article(ng-repeat="article in articles | filter:filterText")

As a result, the articles shown will be filtered based on the user's input. Two simple additions, but something really valuable is on the page. The filters of AngularJS can be very powerful.
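As a quick taste of that power, several ready-made filters ship with the framework and can be chained in the markup. The following hypothetical variation uses the built-in limitTo and uppercase filters alongside our search filter:

<article ng-repeat="article in articles | filter:filterText | limitTo:10">
    <!-- show at most ten matching articles, with uppercase titles
         and the text trimmed to its first 140 characters -->
    <h2>{{article.title | uppercase}}</h2>
    <p>{{article.text | limitTo:140}}</p>
</article>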
Implementing a control panel

The control panel is the place where we will manage the articles of the blog. Several things should be done in the backend before we continue with the user interface. They are as follows:

app.set("username", "admin");
app.set("password", "pass");
app.use(express.cookieParser('blog-application'));
app.use(express.session());

The previous lines of code should be added to /index.js. Our administration should be protected, so the first two lines define our credentials. We are using Express as simple key-value storage here. Later, if we need the username, we can get it with app.get("username"). The next two lines enable session support. We need that because of the login process. We already added a middleware that attaches the articles to the request object. We will do the same with the current user's status, as follows:

app.use(function(req, res, next) {
    if (( req.session && req.session.admin === true ) ||
        ( req.body &&
          req.body.username === app.get("username") &&
          req.body.password === app.get("password") )) {
        req.logged = true;
        req.session.admin = true;
    }
    next();
});

Our if statement is a little long, but it tells us whether the user is logged in or not. The first part checks whether there is a session created, and the second one checks whether the user submitted a form with the correct username and password. If these expressions are true, then we attach a variable, logged, to the request object and create a session that will be valid during the following requests. There is only one thing left that we need in the main application file: a few routes that will handle the control panel operations. In the following code, we define them along with the protecting middleware:

var protect = function(req, res, next) {
    if (req.logged) {
        next();
    } else {
        res.send(401, 'No Access.');
    }
}
app.post('/api/add', protect, require("./controllers/api/add"));
app.post('/api/edit', protect, require("./controllers/api/edit"));
app.post('/api/delete', protect, require("./controllers/api/delete"));
app.all('/admin', require("./controllers/admin"));

The three routes that start with /api will use the Articles.js model to add, edit, and remove articles from the database. These operations should be protected, so we add a middleware function that takes care of this. If the req.logged variable is not available, it simply responds with a 401 - Unauthorized status code. The last route, /admin, is a little different, because it shows a login form instead. The following is the controller to create new articles:

module.exports = function(req, res, next) {
    req.articles.add(req.body, function() {
        res.send({success: true});
    });
}

We transfer most of the logic to the frontend, so again, there are just a few lines. What is interesting here is that we pass req.body directly to the model. It actually contains the data submitted by the user.
The following code is how the req.articles.add method looks for the MongoDB implementation:

add: function(data, callback) {
    var date = new Date();
    data.id = crypto.randomBytes(20).toString('hex');
    data.date = date.getFullYear() + "-" + (date.getMonth() + 1) + "-" + date.getDate();
    collection.insert(data, {}, callback || function() {});
}

And the MySQL implementation is as follows:

add: function(data, callback) {
    var date = new Date();
    var query = "";
    query += "INSERT INTO articles (title, text, date) VALUES (";
    query += connection.escape(data.title) + ", ";
    query += connection.escape(data.text) + ", ";
    query += "'" + date.getFullYear() + "-" + (date.getMonth() + 1) + "-" + date.getDate() + "'";
    query += ")";
    connection.query(query, callback);
}

In both cases, we need title and text in the passed data object. Thanks to Express's bodyParser middleware, this is what we have in the req.body object. We can directly forward it to the model. The other route handlers are almost the same:

// api/edit.js
module.exports = function(req, res, next) {
    req.articles.update(req.body, function() {
        res.send({success: true});
    });
}

What we changed is the method of the Articles.js class. It is not add but update. The same technique is applied in the route to delete an article. We can see it as follows:

// api/delete.js
module.exports = function(req, res, next) {
    req.articles.remove(req.body.id, function() {
        res.send({success: true});
    });
}

What we need for deletion is not the whole body of the request, but only the unique ID of the record. Every API method sends {success: true} as a response. While we are dealing with API requests, we should always return a response, even if something goes wrong. The last thing in the Node.js part that we have to cover is the controller responsible for the user interface of the administration panel, that is, the ./controllers/admin.js file:

module.exports = function(req, res, next) {
    if (req.logged) {
        res.render("admin", { app: "admin" });
    } else {
        res.render("login", { app: "" });
    }
}

There are two templates that are rendered: /views/admin.jade and /views/login.jade. Based on the req.logged variable, which our middleware sets in /index.js, the script decides which one to show. If the user is not logged in, then a login form is sent to the browser, as follows:

extends layout

block content
    .container
        header
            h1 Administration
            hr
        section.articles
            article
                form(method="post", action="/admin")
                    span Username:
                    br
                    input(type="text", name="username")
                    br
                    span Password:
                    br
                    input(type="password", name="password")
                    br
                    br
                    input(type="submit", value="login")

There is no AngularJS code here. All we have is the good old HTML form, which submits its data via POST to the same URL, /admin. If the username and password are correct, the req.logged variable is set to true and the controller renders the other template:

extends layout

block content
    .container
        header
            h1 Administration
            hr
            a(href="/") Public
            span |
            a(href="#/") List
            span |
            a(href="#/add") Add
        section(ng-view)
    script(src='/admin.js')

The control panel needs several views to handle all the operations. AngularJS has a great router module, which works with hashtag-type URLs, that is, URLs such as /admin#/add. The same module requires a placeholder for the different partials. In our case, this is a section tag. The ng-view attribute tells the framework that this is the element prepared for that logic. At the end of the template, we are adding an external file, which keeps the whole client-side JavaScript code that is needed by the control panel.
While the client-side part of the application only needs to load the articles, the control panel requires a lot more functionality, so it is good to use the modular system of AngularJS. We need the routes and views to change, so the ngRoute module is needed as a dependency. This module is not part of the main angular.min.js build; it is placed in the angular-route.min.js file. The following code shows how our module starts:

var admin = angular.module('admin', ['ngRoute']);
admin.config(['$routeProvider',
    function($routeProvider) {
        $routeProvider
        .when('/', {})
        .when('/add', {})
        .when('/edit/:id', {})
        .when('/delete/:id', {})
        .otherwise({
            redirectTo: '/'
        });
    }
]);

We configured the router by mapping URLs to specific routes. At the moment, the routes are just empty objects, but we will fix that shortly. Every controller will need to make HTTP requests to the Node.js part of the application. It will be nice to have one service for this and use it all over our code. We can see an example as follows:

admin.factory('API', function($http) {
    var request = function(method, url) {
        return function(callback, data) {
            $http({
                method: method,
                url: url,
                data: data
            })
            .success(callback)
            .error(function(data, status, headers, config) {
                console.error("Error requesting '" + url + "'.");
            });
        }
    }
    return {
        get: request('GET', '/api/get'),
        add: request('POST', '/api/add'),
        edit: request('POST', '/api/edit'),
        remove: request('POST', '/api/delete')
    }
});

One of the best things about AngularJS is that it works with plain JavaScript objects. There are no unnecessary abstractions and no extending or inheriting special classes. We are using the .factory method to create a simple JavaScript object. It has four methods that can be called: get, add, edit, and remove. Each one of them calls a function that is defined in the helper function request. The service has only one dependency, $http. We already know this module; it handles HTTP requests nicely. The URLs that we are going to query are the same ones that we defined in the Node.js part. Now, let's create a controller that will show the articles currently stored in the database. First, we should replace the empty route object .when('/', {}) with the following object:

.when('/', {
    controller: 'ListCtrl',
    template: '\
        <article ng-repeat="article in articles">\
            <hr />\
            <strong>{{article.title}}</strong><br />\
            (<a href="#/edit/{{article.id}}">edit</a>)\
            (<a href="#/delete/{{article.id}}">remove</a>)\
        </article>\
    '
})

The object has to contain a controller and a template. The template is nothing more than a few lines of HTML markup. It looks a bit like the template used to show the articles on the client side. The difference is the links used to edit and delete. JavaScript doesn't allow new lines in string definitions, so the backslashes at the end of the lines prevent the syntax errors that would eventually be thrown by the browser. The following is the code for the controller. It is defined, again, in the module:

admin.controller('ListCtrl', function($scope, API) {
    API.get(function(articles) {
        $scope.articles = articles;
    });
});

And here is the beauty of the AngularJS dependency injection. Our custom-defined service, API, is automatically initialized and passed to the controller. The .get method fetches the articles from the database. Later, we send the information to the current $scope dependency and the two-way data binding does the rest. The articles are shown on the page.
Working with AngularJS is so easy that we can combine the controllers for adding and editing into one. Let's store the route object in an external variable, as follows:

var AddEditRoute = {
    controller: 'AddEditCtrl',
    template: '\
        <hr />\
        <article>\
            <form>\
                <span>Title</span><br />\
                <input type="text" ng-model="article.title"/><br />\
                <span>Text</span><br />\
                <textarea rows="7" ng-model="article.text"></textarea>\
                <br /><br />\
                <button ng-click="save()">save</button>\
            </form>\
        </article>\
    '
};

And later, assign it to both routes, as follows:

.when('/add', AddEditRoute)
.when('/edit/:id', AddEditRoute)

The template is just a form with the necessary fields and a button, which calls the save method in the controller. Notice that we bound the input field and the text area to variables inside the $scope dependency. This comes in handy because we don't need to access the DOM to get the values. We can see this as follows:

admin.controller('AddEditCtrl', function($scope, API, $location, $routeParams) {
    var editMode = $routeParams.id ? true : false;
    if (editMode) {
        API.get(function(articles) {
            articles.forEach(function(article) {
                if (article.id == $routeParams.id) {
                    $scope.article = article;
                }
            });
        });
    }
    $scope.save = function() {
        API[editMode ? 'edit' : 'add'](function() {
            $location.path('/');
        }, $scope.article);
    }
})

The controller receives four dependencies. We already know about $scope and API. The $location dependency is used when we want to change the current route or, in other words, to forward the user to another view. The $routeParams dependency is needed to fetch parameters from the URL. In our case, /edit/:id is a route with a variable inside. Inside the code, the id is available in $routeParams.id. The adding and editing of articles uses the same form. So, with a simple check, we know what the user is currently doing. If the user is in the edit mode, then we fetch the article based on the provided id and fill the form. Otherwise, the fields are empty and a new record will be created. The deletion of an article can be done by using a similar approach, which is adding a route object and defining a new controller. We can see the deletion as follows:

.when('/delete/:id', {
    controller: 'RemoveCtrl',
    template: ' '
})

We don't need a template in this case. Once the article is deleted from the database, we will forward the user to the list page. We have to call the remove method of the API. Here is how the RemoveCtrl controller looks:

admin.controller('RemoveCtrl', function($scope, $location, $routeParams, API) {
    API.remove(function() {
        $location.path('/');
    }, $routeParams);
});

The preceding code uses the same dependencies as the previous controller. This time, we simply forward the $routeParams dependency to the API. And because it is a plain JavaScript object, everything works as expected.

Summary

In this article, we built a simple blog by writing the backend of the application in Node.js. The module for database communication, which we wrote, can work with either a MongoDB or a MySQL database to store articles. The client-side part and the control panel of the blog were developed with AngularJS. We then defined a custom service using the built-in HTTP and routing mechanisms. Node.js works well with AngularJS, mainly because both are written in JavaScript. We found out that AngularJS is built to support the developer. It removes all those boring tasks such as DOM element referencing, attaching event listeners, and so on. It's a great choice for the modern client-side coding stack.
You can refer to the following books to learn more about Node.js: Node.js Essentials, Learning Node.js for Mobile Application Development, and Node.js Design Patterns.
Types, Variables, and Function Techniques

Packt
16 Feb 2016
39 min read
This article is an introduction to the syntax used in the TypeScript language to apply strong typing to JavaScript. It is intended for readers who have not used TypeScript before, and covers the transition from standard JavaScript to TypeScript. We will cover the following topics in this article: Basic types and type syntax: strings, numbers, and booleans Inferred typing and duck-typing Arrays and enums The any type and explicit casting Functions and anonymous functions Optional and default function parameters Argument arrays Function callbacks and function signatures Function scoping rules and overloads (For more resources related to this topic, see here.)

Basic types

JavaScript variables can hold a number of data types, including numbers, strings, arrays, objects, functions, and more. The type of an object in JavaScript is determined by its assignment, so if a variable has been assigned a string value, then it will be of type string. This can, however, introduce a number of problems in our code.

JavaScript is not strongly typed

JavaScript objects and variables can be changed or reassigned on the fly. As an example of this, consider the following JavaScript code:

var myString = "test";
var myNumber = 1;
var myBoolean = true;

We start by defining three variables, named myString, myNumber, and myBoolean. The myString variable is set to a string value of "test", and as such will be of type string. Similarly, myNumber is set to the value of 1, and is therefore of type number, and myBoolean is set to true, making it of type boolean. Now let's start assigning these variables to each other, as follows:

myString = myNumber;
myBoolean = myString;
myNumber = myBoolean;

We start by setting the value of myString to the value of myNumber (which is the numeric value of 1). We then set the value of myBoolean to the value of myString (which would now be the numeric value of 1). Finally, we set the value of myNumber to the value of myBoolean. What is happening here is that, even though we started out with three different types of variables (a string, a number, and a boolean), we are able to reassign any of these variables to one of the other types. We can assign a number to a string, a string to a boolean, or a boolean to a number. While this type of assignment in JavaScript is legal, it shows that the JavaScript language is not strongly typed. This can lead to unwanted behaviour in our code. Parts of our code may be relying on the fact that a particular variable is holding a string, and if we inadvertently assign a number to this variable, our code may start to break in unexpected ways.

TypeScript is strongly typed

TypeScript, on the other hand, is a strongly typed language. Once you have declared a variable to be of type string, you can only assign string values to it. All further code that uses this variable must treat it as though it has a type of string. This helps to ensure that the code we write will behave as expected. While strong typing may not seem to be of much use with simple strings and numbers, it certainly does become important when we apply the same rules to objects, groups of objects, function definitions, and classes. If you have written a function that expects a string as the first parameter and a number as the second, you cannot be blamed if someone calls your function with a boolean as the first parameter and something else as the second.
JavaScript programmers have always relied heavily on documentation to understand how to call functions, and the order and type of the correct function parameters. But what if we could take all of this documentation and include it within the IDE? Then, as we write our code, our compiler could automatically point out to us that we were using objects and functions in the wrong way. Surely this would make us more efficient, more productive programmers, allowing us to generate code with fewer errors? TypeScript does exactly that. It introduces a very simple syntax to define the type of a variable or a function parameter to ensure that we are using these objects, variables, and functions in the correct manner. If we break any of these rules, the TypeScript compiler will automatically generate errors, pointing us to the lines of code that are in error. This is how TypeScript got its name: it is JavaScript with strong typing, hence TypeScript. Let's take a look at this very simple language syntax that enables the "Type" in TypeScript.

Type syntax

The TypeScript syntax for declaring the type of a variable is to include a colon (:) after the variable name, and then indicate its type. Consider the following TypeScript code:

var myString : string = "test";
var myNumber: number = 1;
var myBoolean : boolean = true;

This code snippet is the TypeScript equivalent of our preceding JavaScript code. We can now see an example of the TypeScript syntax for declaring a type for the myString variable. By including a colon and then the keyword string (: string), we are telling the compiler that the myString variable is of type string. Similarly, the myNumber variable is of type number, and the myBoolean variable is of type boolean. TypeScript has introduced the string, number, and boolean keywords for each of these basic JavaScript types. If we attempt to assign a value to a variable that is not of the same type, the TypeScript compiler will generate a compile-time error. Given the variables declared in the preceding code, the following TypeScript code will generate some compile errors:

myString = myNumber;
myBoolean = myString;
myNumber = myBoolean;

TypeScript build errors when assigning incorrect types

The TypeScript compiler is generating compile errors because we are attempting to mix these basic types. The first error is generated by the compiler because we cannot assign a number value to a variable of type string. Similarly, the second compile error indicates that we cannot assign a string value to a variable of type boolean. Again, the third error is generated because we cannot assign a boolean value to a variable of type number. The strong typing syntax that the TypeScript language introduces means that we need to ensure that the types on the left-hand side of an assignment operator (=) are the same as the types on the right-hand side of the assignment operator. To fix the preceding TypeScript code and remove the compile errors, we would need to do something similar to the following:

myString = myNumber.toString();
myBoolean = (myString === "test");
if (myBoolean) {
    myNumber = 1;
}

Our first line of code has been changed to call the .toString() function on the myNumber variable (which is of type number), in order to return a value that is of type string. This line of code, then, does not generate a compile error because both sides of the equal sign are of the same type.
Our second line of code has also been changed so that the right hand side of the assignment operator returns the result of a comparison, myString === "test", which will return a value of type boolean. The compiler will therefore allow this code, because both sides of the assignment resolve to a value of type boolean. The last line of our code snippet has been changed to only assign the value 1 (which is of type number) to the myNumber variable, if the value of the myBoolean variable is true. Anders Hejlsberg describes this feature as "syntactic sugar". With a little sugar on top of comparable JavaScript code, TypeScript has enabled our code to conform to strong typing rules. Whenever you break these strong typing rules, the compiler will generate errors for your offending code. Inferred typing TypeScript also uses a technique called inferred typing, in cases where you do not explicitly specify the type of your variable. In other words, TypeScript will find the first usage of a variable within your code, figure out what type the variable is first initialized to, and then assume the same type for this variable in the rest of your code block. As an example of this, consider the following code: var myString = "this is a string"; var myNumber = 1; myNumber = myString; We start by declaring a variable named myString, and assign a string value to it. TypeScript identifies that this variable has been assigned a value of type string, and will, therefore, infer any further usages of this variable to be of type string. Our second variable, named myNumber has a number assigned to it. Again, TypeScript is inferring the type of this variable to be of type number. If we then attempt to assign the myString variable (of type string) to the myNumber variable (of type number) in the last line of code, TypeScript will generate a familiar error message: error TS2011: Build: Cannot convert 'string' to 'number' This error is generated because of TypeScript's inferred typing rules. Duck-typing TypeScript also uses a method called duck-typing for more complex variable types. Duck-typing means that if it looks like a duck, and quacks like a duck, then it probably is a duck. Consider the following TypeScript code: var complexType = { name: "myName", id: 1 }; complexType = { id: 2, name: "anotherName" }; We start with a variable named complexType that has been assigned a simple JavaScript object with a name and id property. On our second line of code, we can see that we are re-assigning the value of this complexType variable to another object that also has an id and a name property. The compiler will use duck-typing in this instance to figure out whether this assignment is valid. In other words, if an object has the same set of properties as another object, then they are considered to be of the same type. To further illustrate this point, let's see how the compiler reacts if we attempt to assign an object to our complexType variable that does not conform to this duck-typing: var complexType = { name: "myName", id: 1 }; complexType = { id: 2 }; complexType = { name: "anotherName" }; complexType = { address: "address" }; The first line of this code snippet defines our complexType variable, and assigns to it an object that contains both an id and name property. From this point, TypeScript will use this inferred type on any value we attempt to assign to the complexType variable. On our second line of code, we are attempting to assign a value that has an id property but not the name property. 
On the third line of code, we again attempt to assign a value that has a name property, but does not have an id property. On the last line of our code snippet, we have completely missed the mark. Compiling this code will generate the following errors: error TS2012: Build: Cannot convert '{ id: number; }' to '{ name: string; id: number; }': error TS2012: Build: Cannot convert '{ name: string; }' to '{ name: string; id: number; }': error TS2012: Build: Cannot convert '{ address: string; }' to '{ name: string; id: number; }': As we can see from the error messages, TypeScript is using duck-typing to ensure type safety. In each message, the compiler gives us clues as to what is wrong with the offending code – by explicitly stating what it is expecting. The complexType variable has both an id and a name property. To assign a value to the complexType variable, then, this value will need to have both an id and a name property. Working through each of these errors, TypeScript is explicitly stating what is wrong with each line of code. Note that the following code will not generate any error messages: var complexType = { name: "myName", id: 1 }; complexType = { name: "name", id: 2, address: "address" }; Again, our first line of code defines the complexType variable, as we have seen previously, with an id and a name property. Now, look at the second line of this example. The object we are using actually has three properties: name, id, and address. Even though we have added a new address property, the compiler will only check to see if our new object has both an id and a name. Because our new object has these properties, and will therefore match the original type of the variable, TypeScript will allow this assignment through duck-typing. Inferred typing and duck-typing are powerful features of the TypeScript language – bringing strong typing to our code, without the need to use explicit typing, that is, a colon : and then the type specifier syntax. Arrays Besides the base JavaScript types of string, number, and boolean, TypeScript has two other data types: Arrays and enums. Let's look at the syntax for defining arrays. An array is simply marked with the [] notation, similar to JavaScript, and each array can be strongly typed to hold a specific type as seen in the code below: var arrayOfNumbers: number[] = [1, 2, 3]; arrayOfNumbers = [3, 4, 5]; arrayOfNumbers = ["one", "two", "three"]; On the first line of this code snippet, we are defining an array named arrayOfNumbers, and further specify that each element of this array must be of type number. The second line then reassigns this array to hold some different numerical values. The last line of this snippet, however, will generate the following error message: error TS2012: Build: Cannot convert 'string[]' to 'number[]': This error message is warning us that the variable arrayOfNumbers is strongly typed to only accept values of type number. Our code tries to assign an array of strings to this array of numbers, and is therefore, generating a compile error. The any type All this type checking is well and good, but JavaScript is flexible enough to allow variables to be mixed and matched. The following code snippet is actually valid JavaScript code: var item1 = { id: 1, name: "item 1" }; item1 = { id: 2 }; Our first line of code assigns an object with an id property and a name property to the variable item1. The second line then re-assigns this variable to an object that has an id property but not a name property. 
Unfortunately, as we have seen previously, TypeScript will generate a compile-time error for the preceding code:

error TS2012: Build: Cannot convert '{ id: number; }' to '{ id: number; name: string; }'

TypeScript introduces the any type for such occasions. Specifying that an object has a type of any, in essence, relaxes the compiler's strict type checking. The following code shows how to use the any type:

var item1 : any = { id: 1, name: "item 1" };
item1 = { id: 2 };

Note how our first line of code has changed. We specify the type of the variable item1 to be of type : any so that our code will compile without errors. Without the type specifier of : any, the second line of code would normally generate an error.

Explicit casting

As with any strongly typed language, there comes a time when you need to explicitly specify the type of an object. An object can be cast to the type of another by using the < > syntax. This is not a cast in the strictest sense of the word; it is more of an assertion that is checked at compile time by the TypeScript compiler. Any explicit casting that you use will be compiled away in the resultant JavaScript and will not affect the code at runtime. Let's modify our previous code snippet to use explicit casting:

var item1 = <any>{ id: 1, name: "item 1" };
item1 = { id: 2 };

Note that on the first line of this snippet, we have now replaced the : any type specifier on the left-hand side of the assignment with an explicit cast of <any> on the right-hand side. This snippet of code is telling the compiler to explicitly cast, or to explicitly treat, the { id: 1, name: "item 1" } object on the right-hand side as a type of any. So the item1 variable, therefore, also has the type of any (due to TypeScript's inferred typing rules). This then allows us to assign an object with only the { id: 2 } property to the variable item1 on the second line of code. This technique of using the < > syntax on the right-hand side of an assignment is called explicit casting. While the any type is a necessary feature of the TypeScript language, its usage should really be limited as much as possible. It is a language shortcut that is necessary to ensure compatibility with JavaScript, but over-use of the any type will quickly lead to coding errors that will be difficult to find. Rather than using the type any, try to figure out the correct type of the object you are using, and then use this type instead. We use an acronym within our programming teams: S.F.I.A.T. (pronounced sviat or sveat), Simply Find an Interface for the Any Type. While this may sound silly, it brings home the point that the any type should always be replaced with an interface, so simply find it. Just remember that by actively trying to define what an object's type should be, we are building strongly typed code, and therefore protecting ourselves from future coding errors and bugs.

Enums

Enums are a special type that has been borrowed from other languages such as C#, and provide a solution to the problem of special numbers. An enum associates a human-readable name with a specific number. Consider the following code:

enum DoorState {
    Open,
    Closed,
    Ajar
}

In this code snippet, we have defined an enum called DoorState to represent the state of a door. Valid values for this door state are Open, Closed, or Ajar. Under the hood (in the generated JavaScript), TypeScript will assign a numeric value to each of these human-readable enum values. In this example, the DoorState.Open enum value will equate to a numeric value of 0.
Likewise, the enum value DoorState.Closed will equate to the numeric value of 1, and the DoorState.Ajar enum value will equate to 2. Let's have a quick look at how we would use these enum values:

window.onload = () => {
    var myDoor = DoorState.Open;
    console.log("My door state is " + myDoor.toString());
};

The first line within the window.onload function creates a variable named myDoor, and sets its value to DoorState.Open. The second line simply logs the value of myDoor to the console. The output of this console.log function would be:

My door state is 0

This clearly shows that the TypeScript compiler has substituted the enum value of DoorState.Open with the numeric value 0. Now let's use this enum in a slightly different way:

window.onload = () => {
    var openDoor = DoorState["Closed"];
    console.log("My door state is " + openDoor.toString());
};

This code snippet uses a string value of "Closed" to look up the enum type, and assigns the resulting enum value to the openDoor variable. The output of this code would be:

My door state is 1

This sample clearly shows that the enum value of DoorState.Closed is the same as the enum value of DoorState["Closed"], because both variants resolve to the numeric value of 1. Finally, let's have a look at what happens when we reference an enum using an array type syntax:

window.onload = () => {
    var ajarDoor = DoorState[2];
    console.log("My door state is " + ajarDoor.toString());
};

Here, we assign the variable ajarDoor to an enum value based on the index value 2 of the DoorState enum. The output of this code, though, is surprising:

My door state is Ajar

You may have been expecting the output to be simply 2, but here we are getting the string "Ajar", which is a string representation of our original enum name. This is actually a neat little trick, allowing us to access a string representation of our enum value. The reason that this is possible is down to the JavaScript that has been generated by the TypeScript compiler. Let's have a look, then, at the closure that the TypeScript compiler has generated:

var DoorState;
(function (DoorState) {
    DoorState[DoorState["Open"] = 0] = "Open";
    DoorState[DoorState["Closed"] = 1] = "Closed";
    DoorState[DoorState["Ajar"] = 2] = "Ajar";
})(DoorState || (DoorState = {}));

This strange-looking syntax is building an object that has a specific internal structure. It is this internal structure that allows us to use this enum in the various ways that we have just explored. If we interrogate this structure while debugging our JavaScript, we will see the internal structure of the DoorState object as follows:

DoorState
{...}
    [prototype]: {...}
    [0]: "Open"
    [1]: "Closed"
    [2]: "Ajar"
    [prototype]: []
    Ajar: 2
    Closed: 1
    Open: 0

The DoorState object has a property called "0", which has a string value of "Open". Unfortunately, in JavaScript we cannot use a number with the dot notation, so we cannot access this property by simply using DoorState.0. Instead, we must access this property using either DoorState[0] or DoorState["0"]. The DoorState object also has a property named Open, which is set to the numeric value 0. The word Open IS a valid property name in JavaScript, so we can access this property using DoorState["Open"], or simply DoorState.Open, which equate to the same property in JavaScript. While the underlying JavaScript can be a little confusing, all we need to remember about enums is that they are a handy way of defining an easily remembered, human-readable name for a special number.
Using human-readable enums, instead of just scattering various special numbers around in our code, also makes the intent of the code clearer. Using an application-wide value named DoorState.Open or DoorState.Closed is far simpler than remembering to set a value to 0 for Open, 1 for Closed, and 2 for Ajar. As well as making our code more readable and more maintainable, using enums also protects our code base whenever these special numeric values change, because they are all defined in one place. One last note on enums: we can set the numeric value manually, if needs be:

enum DoorState {
    Open = 3,
    Closed = 7,
    Ajar = 10
}

Here, we have overridden the default values of the enum to set DoorState.Open to 3, DoorState.Closed to 7, and DoorState.Ajar to 10.

Const enums

With the release of TypeScript 1.4, we are also able to define const enums as follows:

const enum DoorStateConst {
    Open,
    Closed,
    Ajar
}
var myState = DoorStateConst.Open;

These types of enums have been introduced largely for performance reasons, and the resultant JavaScript will not contain the full closure definition for the DoorStateConst enum as we saw previously. Let's have a quick look at the JavaScript that is generated from this DoorStateConst enum:

var myState = 0 /* Open */;

Note how we do not have a full JavaScript closure for the DoorStateConst at all. The compiler has simply resolved the DoorStateConst.Open enum to its internal value of 0, and removed the const enum definition entirely. With const enums, we therefore cannot reference the internal string value of an enum, as we did in our previous code sample. Consider the following example:

// generates an error
console.log(DoorStateConst[0]);
// valid usage
console.log(DoorStateConst["Open"]);

The first console.log statement will now generate a compile-time error, as we do not have the full closure available with the property of [0] for our const enum. The second usage of this const enum is valid, however, and will generate the following JavaScript:

console.log(0 /* "Open" */);

When using const enums, just keep in mind that the compiler will strip away all enum definitions and simply substitute the numeric value of the enum directly into our JavaScript code.

Functions

JavaScript defines functions using the function keyword, a set of parentheses, and then a set of curly braces. A typical JavaScript function would be written as follows:

function addNumbers(a, b) {
    return a + b;
}
var result = addNumbers(1, 2);
var result2 = addNumbers("1", "2");

This code snippet is fairly self-explanatory; we have defined a function named addNumbers that takes two parameters and returns their sum. We then invoke this function, passing in the values of 1 and 2. The value of the variable result would then be 1 + 2, which is 3. Now have a look at the last line of code. Here, we are invoking the addNumbers function, passing in two strings as arguments, instead of numbers. The value of the variable result2 would then be a string, "12". This string value seems like it may not be the desired result, as the name of the function is addNumbers.
Functions

JavaScript defines functions using the function keyword, a set of parentheses, and then a set of curly braces. A typical JavaScript function would be written as follows:

function addNumbers(a, b) {
    return a + b;
}
var result = addNumbers(1, 2);
var result2 = addNumbers("1", "2");

This code snippet is fairly self-explanatory; we have defined a function named addNumbers that takes two variables and returns their sum. We then invoke this function, passing in the values of 1 and 2. The value of the variable result would then be 1 + 2, which is 3. Now have a look at the last line of code. Here, we are invoking the addNumbers function, passing in two strings as arguments, instead of numbers. The value of the variable result2 would then be the string "12". This string value seems like it may not be the desired result, as the name of the function is addNumbers. Copying the preceding code into a TypeScript file would not generate any errors, but let's add some type rules to the preceding JavaScript to make it more robust:

function addNumbers(a: number, b: number): number {
    return a + b;
};
var result = addNumbers(1, 2);
var result2 = addNumbers("1", "2");

In this TypeScript code, we have added a :number type to both of the parameters of the addNumbers function (a and b), and we have also added a :number type just after the ( ) braces. Placing a type descriptor here means that the return type of the function itself is strongly typed to return a value of type number. In TypeScript, the last line of code, however, will cause a compilation error:

error TS2082: Build: Supplied parameters do not match any signature of call target:

This error message is generated because we have explicitly stated that the function should accept only numbers for both of the arguments a and b, but in our offending code, we are passing two strings. The TypeScript compiler, therefore, cannot match the signature of a function named addNumbers that accepts two arguments of type string.

Anonymous functions

The JavaScript language also has the concept of anonymous functions. These are functions that are defined on the fly and don't specify a function name. Consider the following JavaScript code:

var addVar = function(a, b) {
    return a + b;
};
var result = addVar(1, 2);

This code snippet defines a function that has no name and adds two values. Because the function does not have a name, it is known as an anonymous function. This anonymous function is then assigned to a variable named addVar. The addVar variable can then be invoked as a function with two parameters, and the return value will be the result of executing the anonymous function. In this case, the variable result will have a value of 3. Let's now rewrite the preceding JavaScript function in TypeScript, and add some type syntax, in order to ensure that the function only accepts two arguments of type number, and returns a value of type number:

var addVar = function(a: number, b: number): number {
    return a + b;
}
var result = addVar(1, 2);
var result2 = addVar("1", "2");

In this code snippet, we have created an anonymous function that accepts only arguments of type number for the parameters a and b, and also returns a value of type number. The types for both the a and b parameters, as well as the return type of the function, are now using the :number syntax. This is another example of the simple "syntactic sugar" that TypeScript injects into the language. If we compile this code, TypeScript will reject the code on the last line, where we try to call our anonymous function with two string parameters:

error TS2082: Build: Supplied parameters do not match any signature of call target:
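TypeScript also supports the arrow syntax for anonymous functions, which we have already been using in the window.onload samples. As a small sketch of my own, the same typed anonymous function can be written more compactly as:

var addVarArrow = (a: number, b: number): number => a + b;
console.log(addVarArrow(1, 2)); // 3
// addVarArrow("1", "2"); // would generate the same TS2082 compile error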
Optional parameters

When we call a JavaScript function that is expecting parameters, and we do not supply them, the value of each missing parameter within the function will be undefined. As an example of this, consider the following JavaScript code:

var concatStrings = function(a, b, c) {
    return a + b + c;
}
console.log(concatStrings("a", "b", "c"));
console.log(concatStrings("a", "b"));

Here, we have defined a function called concatStrings that takes three parameters, a, b, and c, and simply returns the concatenation of these values. If we call this function with all three parameters, as seen in the second last line of this snippet, we will end up with the string "abc" logged to the console. If, however, we only supply two parameters, as seen in the last line of this snippet, the string "abundefined" will be logged to the console. Again, if we call a function and do not supply a parameter, then this parameter, c in our case, will simply be undefined. TypeScript introduces the question mark ? syntax to indicate optional parameters. Consider the following TypeScript function definition:

var concatStrings = function(a: string, b: string, c?: string) {
    return a + b + c;
}
console.log(concatStrings("a", "b", "c"));
console.log(concatStrings("a", "b"));
console.log(concatStrings("a"));

This is a strongly typed version of the original concatStrings JavaScript function that we were using previously. Note the addition of the ? character in the syntax for the third parameter: c?: string. This indicates that the third parameter is optional, and therefore, all of the preceding code will compile cleanly, except for the last line. The last line will generate an error:

error TS2081: Build: Supplied parameters do not match any signature of call target.

This error is generated because we are attempting to call the concatStrings function with only a single parameter. Our function definition, though, requires at least two parameters, with only the third parameter being optional. Optional parameters must be the last parameters in the function definition. You can have as many optional parameters as you want, as long as non-optional parameters precede the optional parameters.

Default parameters

A subtle variant on the optional parameter function definition allows us to specify the value of a parameter if it is not passed in as an argument from the calling code. Let's modify our preceding function definition to use a default parameter:

var concatStrings = function(a: string, b: string, c: string = "c") {
    return a + b + c;
}
console.log(concatStrings("a", "b", "c"));
console.log(concatStrings("a", "b"));

This function definition has now dropped the ? optional parameter syntax, but instead has assigned a value of "c" to the last parameter: c: string = "c". By using default parameters, if we do not supply a value for the final parameter named c, the concatStrings function will substitute the default value of "c" instead. The argument c, therefore, will not be undefined. The output of the last two lines of code will both be "abc". Note that using the default parameter syntax will automatically make the parameter optional.
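Optional and default parameters can also be mixed in the same signature, as long as they come after all required parameters. The buildGreeting function below is my own illustration, not from the original samples:

function buildGreeting(name: string, greeting: string = "Hello", punctuation?: string): string {
    // fall back to "!" when the optional punctuation parameter is undefined
    return greeting + ", " + name + (punctuation ? punctuation : "!");
}
console.log(buildGreeting("TypeScript"));            // Hello, TypeScript!
console.log(buildGreeting("TypeScript", "Welcome")); // Welcome, TypeScript!
console.log(buildGreeting("TypeScript", "Hi", "?")); // Hi, TypeScript?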
The arguments variable

The JavaScript language allows a function to be called with a variable number of arguments. Every JavaScript function has access to a special variable, named arguments, that can be used to retrieve all arguments that have been passed into the function. As an example of this, consider the following JavaScript code:

function testParams() {
    if (arguments.length > 0) {
        for (var i = 0; i < arguments.length; i++) {
            console.log("Argument " + i + " = " + arguments[i]);
        }
    }
}
testParams(1, 2, 3, 4);
testParams("first argument");

In this code snippet, we have defined a function named testParams that does not have any named parameters. Note, though, that we can use the special variable named arguments to test whether the function was called with any arguments. In our sample, we can simply loop through the arguments array and log the value of each argument to the console by using an array indexer: arguments[i]. The output of the console.log calls is as follows:

Argument 0 = 1
Argument 1 = 2
Argument 2 = 3
Argument 3 = 4
Argument 0 = first argument

So, how do we express a variable number of function parameters in TypeScript? The answer is to use what are called rest parameters, or the three dots (...) syntax. Here is the equivalent testParams function, expressed in TypeScript:

function testParams(...argArray: number[]) {
    if (argArray.length > 0) {
        for (var i = 0; i < argArray.length; i++) {
            console.log("argArray " + i + " = " + argArray[i]);
            console.log("arguments " + i + " = " + arguments[i]);
        }
    }
}
testParams(1);
testParams(1, 2, 3, 4);
testParams("one", "two");

Note the use of the ...argArray: number[] syntax for our testParams function. This syntax is telling the TypeScript compiler that the function can accept any number of arguments. This means that our usages of this function, that is, calling the function with either testParams(1) or testParams(1, 2, 3, 4), will both compile correctly. In this version of the testParams function, we have added two console.log lines, just to show that the arguments can be accessed either through the named rest parameter, argArray[i], or through the normal JavaScript array, arguments[i]. The last line in this sample will, however, generate a compile error, as we have defined the rest parameter to only accept numbers, and we are attempting to call the function with strings. The subtle difference between using argArray and arguments is the inferred type of the argument. Since we have explicitly specified that argArray is of type number, TypeScript will treat any item of the argArray array as a number. However, the internal arguments array does not have an inferred type, and so will be treated as the any type. We can also combine normal parameters along with rest parameters in a function definition, as long as the rest parameter is the last to be defined in the parameter list, as follows:

function testParamsTs2(arg1: string, arg2: number, ...argArray: number[]) {
}

Here, we have two normal parameters named arg1 and arg2, and then an argArray rest parameter. Mistakenly placing the rest parameter at the beginning of the parameter list will generate a compile error.
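To make the combined signature concrete, here is a short usage sketch of my own, not from the original text:

function sumWithLabel(label: string, ...values: number[]): void {
    var total = 0;
    for (var i = 0; i < values.length; i++) {
        total += values[i];
    }
    console.log(label + " = " + total);
}
sumWithLabel("total", 1, 2, 3); // total = 6
// sumWithLabel("total", "1");  // compile error: the rest parameter only accepts numbers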
Function callbacks

One of the most powerful features of JavaScript – and in fact the technology that Node was built on – is the concept of callback functions. A callback function is a function that is passed into another function. Remember that JavaScript is not strongly typed, so a variable can also be a function. This is best illustrated by having a look at some JavaScript code:

function myCallBack(text) {
    console.log("inside myCallback " + text);
}
function callingFunction(initialText, callback) {
    console.log("inside CallingFunction");
    callback(initialText);
}
callingFunction("myText", myCallBack);

Here, we have a function named myCallBack that takes a parameter and logs its value to the console. We then define a function named callingFunction that takes two parameters: initialText and callback. The first line of this function simply logs "inside CallingFunction" to the console. The second line of the callingFunction is the interesting bit. It assumes that the callback argument is in fact a function, and invokes it. It also passes the initialText variable to the callback function. If we run this code, we will get two messages logged to the console, as follows:

inside CallingFunction
inside myCallback myText

But what happens if we do not pass a function as a callback? There is nothing in the preceding code that signals to us that the second parameter of callingFunction must be a function. If we inadvertently called the callingFunction function with a string instead of a function as the second parameter, as follows:

callingFunction("myText", "this is not a function");

We would get a JavaScript runtime error:

0x800a138a - JavaScript runtime error: Function expected

Defensive-minded programmers, however, would first check whether the callback parameter was in fact a function before invoking it, as follows:

function callingFunction(initialText, callback) {
    console.log("inside CallingFunction");
    if (typeof callback === "function") {
        callback(initialText);
    } else {
        console.log(callback + " is not a function");
    }
}
callingFunction("myText", "this is not a function");

Note the third line of this code snippet, where we check the type of the callback variable before invoking it. If it is not a function, we then log a message to the console. On the last line of this snippet, we are executing the callingFunction, but this time passing a string as the second parameter. The output of the code snippet would be:

inside CallingFunction
this is not a function is not a function

When using function callbacks, then, JavaScript programmers need to do two things: firstly, understand which parameters are in fact callbacks, and secondly, code around the invalid use of callback functions.
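Callbacks are most commonly seen in asynchronous APIs. As a quick sketch of my own (setTimeout is a standard browser and Node API; the surrounding functions are just an illustration), note the typed callback parameter, which anticipates the function signatures we cover next:

function fetchDataLater(callback: (message: string) => void) {
    // simulate an asynchronous operation completing after one second
    setTimeout(function () {
        callback("data is ready");
    }, 1000);
}
fetchDataLater(function (message: string) {
    console.log(message); // logs "data is ready" after roughly a second
});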
Function signatures

The TypeScript "syntactic sugar" that enforces strong typing is not only intended for variables and types, but for function signatures as well. What if we could document our JavaScript callback functions in code, and then warn users of our code when they are passing the wrong type of parameter to our functions? TypeScript does this through function signatures. A function signature uses the fat arrow syntax, () =>, to define what the function should look like. Let's rewrite the preceding JavaScript sample in TypeScript:

function myCallBack(text: string) {
    console.log("inside myCallback " + text);
}
function callingFunction(initialText: string, callback: (text: string) => void) {
    callback(initialText);
}
callingFunction("myText", myCallBack);
callingFunction("myText", "this is not a function");

Our first function definition, myCallBack, now strongly types the text parameter to be of type string. Our callingFunction function has two parameters: initialText, which is of type string, and callback, which now has the new function signature syntax. Let's look at this function signature more closely:

callback: (text: string) => void

What this function definition is saying is that the callback argument is typed (by the : syntax) to be a function, using the fat arrow syntax () =>. Additionally, this function takes a parameter named text that is of type string. To the right of the fat arrow syntax, we can see a new TypeScript basic type, called void. void is a keyword that denotes that a function does not return a value. So, the callingFunction function will only accept, as its second argument, a function that takes a single string parameter and returns nothing. Compiling the preceding code will correctly highlight an error in the last line of the code snippet, where we are passing a string as the second parameter, instead of a callback function:

error TS2082: Build: Supplied parameters do not match any signature of call target:
Type '(text: string) => void' requires a call signature, but type 'String' lacks one

Given the preceding function signature for the callback function, the following code would also generate compile time errors:

function myCallBackNumber(arg1: number) {
    console.log("arg1 = " + arg1);
}
callingFunction("myText", myCallBackNumber);

Here, we are defining a function named myCallBackNumber that takes a number as its only parameter. When we attempt to compile this code, we will get an error message indicating that the callback parameter, which is our myCallBackNumber function, does not have the correct function signature:

Call signatures of types 'typeof myCallBackNumber' and '(text: string) => void' are incompatible.

The function signature of myCallBackNumber would actually be (arg1: number) => void, instead of the required (text: string) => void, hence the error. In function signatures, the parameter name (arg1 or text) does not need to be the same. Only the number of parameters, their types, and the return type of the function need to be the same. This is a very powerful feature of TypeScript: defining in code what the signatures of functions should be, and warning users when they do not call a function with the correct parameters. As we saw in our introduction to TypeScript, this is most significant when we are working with third-party libraries. Before we are able to use third-party functions, classes, or objects in TypeScript, we need to define what their function signatures are. These function definitions are put into a special type of TypeScript file, called a declaration file, and saved with a .d.ts extension.

Function callbacks and scope

JavaScript uses lexical scoping rules to define the valid scope of a variable. This means that the value of a variable is defined by its location within the source code. Nested functions have access to variables that are defined in their parent scope. As an example of this, consider the following TypeScript code:

function testScope() {
    var testVariable = "myTestVariable";
    function print() {
        console.log(testVariable);
    }
}
console.log(testVariable);

This code snippet defines a function named testScope. The variable testVariable is defined within this function. The print function is a child function of testScope, so it has access to the testVariable variable. The last line of the code, however, will generate a compile error, because it is attempting to use the variable testVariable, which is lexically scoped to be valid only inside the body of the testScope function:

error TS2095: Build: Could not find symbol 'testVariable'.

Simple, right? A nested function has access to variables depending on its location within the source code.
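This lexical scoping is also what makes JavaScript closures work: an inner function keeps access to its parent's variables even after the parent has returned. Here is a tiny sketch of my own, not from the original text:

function makeCounter(): () => number {
    var count = 0;
    // the returned function closes over count, so it persists between calls
    return function (): number {
        count++;
        return count;
    };
}
var counter = makeCounter();
console.log(counter()); // 1
console.log(counter()); // 2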
This is all well and good, but in large JavaScript projects, there are many different files, and many areas of the code are designed to be re-usable. Let's take a look at how these scoping rules can become a problem. For this sample, we will use a typical callback scenario: using jQuery to execute an asynchronous call to fetch some data. Consider the following TypeScript code:

var testVariable = "testValue";
function getData() {
    var testVariable_2 = "testValue_2";
    $.ajax(
        {
            url: "/sample_json.json",
            success: (data, status, jqXhr) => {
                console.log("success : testVariable is :" + testVariable);
                console.log("success : testVariable_2 is :" + testVariable_2);
            },
            error: (message, status, stack) => {
                alert("error " + message);
            }
        }
    );
}
getData();

In this code snippet, we are defining a variable named testVariable and setting its value. We then define a function called getData. The getData function sets another variable called testVariable_2, and then calls the jQuery $.ajax function. The $.ajax function is configured with three properties: url, success, and error. The url property is a simple string that points to a sample_json.json file in our project directory. The success property is an anonymous function callback that simply logs the values of testVariable and testVariable_2 to the console. Finally, the error property is also an anonymous function callback that simply pops up an alert. This code runs as expected, and the success function will log the following results to the console:

success : testVariable is :testValue
success : testVariable_2 is :testValue_2

So far so good. Now, let's assume that we are trying to refactor the preceding code, as we are doing quite a few similar $.ajax calls, and want to reuse the success callback function elsewhere. We can easily switch out this anonymous function and create a named function for our success callback, as follows:

var testVariable = "testValue";
function getData() {
    var testVariable_2 = "testValue_2";
    $.ajax(
        {
            url: "/sample_json.json",
            success: successCallback,
            error: (message, status, stack) => {
                alert("error " + message);
            }
        }
    );
}
function successCallback(data, status, jqXhr) {
    console.log("success : testVariable is :" + testVariable);
    console.log("success : testVariable_2 is :" + testVariable_2);
}
getData();

In this sample, we have created a new function named successCallback with the same parameters as our previous anonymous function. We have also modified the $.ajax call to simply pass this function in as a callback function for the success property: success: successCallback. If we were to compile this code now, TypeScript would generate an error, as follows:

error TS2095: Build: Could not find symbol 'testVariable_2'.

Since we have changed the lexical scope of our code by creating a named function, the new successCallback function no longer has access to the variable testVariable_2. It is fairly easy to spot this sort of error in a trivial example, but in larger projects, and when using third-party libraries, these sorts of errors become more difficult to track down. It is, therefore, worth mentioning that when using callback functions, we need to understand this lexical scope. If your code expects a property to have a value, and it does not have one after a callback, then remember to have a look at the context of the calling code.
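One way to make this refactor work is to pass the locally scoped value into the shared callback explicitly, for example by wrapping it in a small arrow function. This is a sketch of one possible fix, not the book's code:

function successCallback(data, status, jqXhr, localValue: string) {
    console.log("success : testVariable is :" + testVariable);
    console.log("success : localValue is :" + localValue);
}
function getData() {
    var testVariable_2 = "testValue_2";
    $.ajax({
        url: "/sample_json.json",
        // the arrow function still sees testVariable_2, and forwards it on
        success: (data, status, jqXhr) => successCallback(data, status, jqXhr, testVariable_2),
        error: (message, status, stack) => alert("error " + message)
    });
}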
Function overloads

As JavaScript is a dynamic language, we can often call the same function with different argument types. Consider the following JavaScript code:

function add(x, y) {
    return x + y;
}
console.log("add(1,1)=" + add(1, 1));
console.log("add('1','1')=" + add("1", "1"));
console.log("add(true,false)=" + add(true, false));

Here, we are defining a simple add function that returns the sum of its two parameters, x and y. The last three lines of this code snippet simply log the result of the add function with different types: two numbers, two strings, and two boolean values. If we run this code, we will see the following output:

add(1,1)=2
add('1','1')=11
add(true,false)=1

TypeScript introduces a specific syntax to indicate multiple function signatures for the same function. If we were to replicate the preceding code in TypeScript, we would need to use the function overload syntax:

function add(arg1: string, arg2: string): string;
function add(arg1: number, arg2: number): number;
function add(arg1: boolean, arg2: boolean): boolean;
function add(arg1: any, arg2: any): any {
    return arg1 + arg2;
}
console.log("add(1,1)=" + add(1, 1));
console.log("add('1','1')=" + add("1", "1"));
console.log("add(true,false)=" + add(true, false));

The first line of this code snippet specifies a function overload signature for the add function that accepts two strings and returns a string. The second line specifies another function overload that uses numbers, and the third line uses booleans. The fourth line contains the actual body of the function and uses the type specifier of any. The last three lines of this snippet show how we would use these function signatures, and are similar to the JavaScript code that we have been using previously. There are three points of interest in the preceding code snippet. Firstly, none of the function signatures on the first three lines of the snippet actually have a function body. Secondly, the final function definition uses the type specifier of any and eventually includes the function body. The function overload syntax must follow this structure, and the final function signature, which includes the body of the function, must use the any type specifier, as anything else will generate compile-time errors. The third point to note is that by using these function overload signatures, we are limiting the add function to only accept two parameters that are of the same type. If we were to try and mix our types, for example, by calling the function with a boolean and a string, as follows:

console.log("add(true,'1')=" + add(true, "1"));

TypeScript would generate compile errors:

error TS2082: Build: Supplied parameters do not match any signature of call target:
error TS2087: Build: Could not select overload for 'call' expression.

This seems to contradict our final function definition, though. In the original TypeScript sample, we had a function signature that accepted (arg1: any, arg2: any); so, in theory, this should be called when we try to add a boolean and a string. The TypeScript syntax for function overloads, however, does not allow this. Remember that the function overload syntax must include the use of the any type for the function body, as all overloads eventually call this function body. However, the inclusion of the function overloads above the function body indicates to the compiler that these are the only signatures that should be available to the calling code.

Summary

To learn more about TypeScript, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

Learning TypeScript (https://www.packtpub.com/web-development/learning-typescript)
TypeScript Essentials (https://www.packtpub.com/web-development/typescript-essentials)

Further resources on this subject:

Introduction to TypeScript [article]
Writing SOLID JavaScript code with TypeScript [article]
JavaScript Execution with Selenium [article]
Looking Good – The Graphical Interface

Packt
16 Feb 2016
45 min read
We will start by creating a simple Tic-tac-toe game, using the basic pieces of GUI that Unity provides. Following this, we will discuss how we can change the styles of our GUI controls to improve the look of our game. We will also explore some tips and tricks to handle the many different screen sizes of Android devices. Finally, we will learn about a much quicker way to put our games on the device. With all that said, let's jump in. In this article, we will cover the following topics:

User preferences
Buttons, text, and images
Dynamic GUI positioning
Build and run

In this article, we will be creating a new project in Unity. The first section here will walk you through its creation and setup. (For more resources related to this topic, see here.)

Creating a Tic-tac-toe game

The project for this article is a simple Tic-tac-toe style game, similar to what any of us might play on paper. As with anything else, there are several ways in which you can make this game. We are going to use Unity's uGUI system in order to better understand how to create a GUI for any of our other games.

The game board

The basic Tic-tac-toe game involves two players and a 3 x 3 grid. The players take turns filling squares with Xs and Os. The player who first fills a line of three squares with their letter wins the game. If all squares are filled without a player achieving a line of three, the game is a tie. Let's start with the following steps to create our game board:

The first thing to do is to create a project for this article. So, start up Unity and we will do just that. If you have been following along so far, Unity should boot up into the last project that was open. This isn't a bad feature, but it can become extremely annoying. Think of it like this: you have been working on a project for a while and it has grown large. Now you need to quickly open something else, but Unity defaults to your huge project, and waiting for it to open before you can work on anything else can consume a lot of time. To change this behavior, go to the top of the Unity window and click on Edit followed by Preferences. This is the same place where we changed our script editor's preferences. This time, though, we are going to change settings in the General tab. At this moment, our primary concern is the Load Previous Project on Startup option; however, we will still cover all of the options in turn. All the options under the General tab are explained in detail as follows:

Auto Refresh: This is one of the best features of Unity. As an asset is changed outside of Unity, this option lets Unity automatically detect the change and refresh the asset inside your project.
Load Previous Project on Startup: This is a great option, and you should make sure that it is unchecked whenever you install Unity. When checked, Unity will immediately open the last project you worked on rather than the Project Wizard.
Compress Assets on Import: This is the checkbox for automatically compressing your game assets when they are first imported to Unity.
Editor Analytics: This checkbox is for Unity's anonymous usage statistics. Leave it checked and the Unity Editor will occasionally send information to Unity. It doesn't hurt anything to leave it on, and it helps the Unity team to make the Unity Editor better; however, it comes down to personal preference.
Show Asset Store search hits: This setting is only relevant if you plan to use the Asset Store.
The asset store can be a great source of assets and tools for any game; however, we are not going to use it here. The option does what the name suggests: when you search the asset store for something within the Unity Editor, the number of results displayed is based on this checkbox.
Verify Saving Assets: This is a good one to leave off. If this is on, every time you click on Save in Unity, a dialog box will pop up so that you can make sure to save any and all of the assets that have changed since your last save. This option is not so much about your models and textures; it is concerned with Unity's internal files, materials, and prefabs. It's best to leave it off for now.
Skin (Pro Only): This option only applies to Unity Pro users. It gives the option to switch between the light and dark versions of the Unity Editor. It is purely cosmetic, so go with your gut for this one.

With your preferences set, now go to File and then select New Project. Click on the Browse... button to pick a location and name for the new project. We will not be using any of the included packages, so click on Create and we can get on with it. By changing a few simple options, we can save ourselves a lot of trouble later. This may not seem like a big deal for the simple projects in this article but, for large and complex projects, not choosing the correct options can cause a lot of hassle, even if you just want to make a quick switch between projects.

Creating the board

With the new project created, we have a clean slate to create our game. Before we can create the core functionality, we need to set up some structure in our scene for our game to work and our players to interact with:

Once Unity finishes initializing the new project, we need to create a new canvas. We can do this by navigating to GameObject | UI | Canvas. The whole of Unity's uGUI system requires a canvas in order to draw anything on the screen. It has a few key components, as you can see in the following Inspector window, which allow it and everything else in your interface to work.

Rect Transform: This is a special type of the normal Transform component that you will find on nearly every other object that you will use in your games. It keeps track of the object's position on screen, its size, its rotation, the pivot point around which it will rotate, and how it will behave when the screen size changes. By default, the Rect Transform for a canvas is locked to include the whole screen's size.
Canvas: This component controls how it and the interface elements it controls interact with the camera and your scene. You can change this by adjusting Render Mode. The default mode, Screen Space – Overlay, means that everything will be drawn on screen and on top of everything else in the scene. The Screen Space – Camera mode will draw everything a specific distance away from the camera. This allows your interface to be affected by the perspective nature of the camera, but any models that might be closer to the camera will appear in front of it. The World Space mode ensures that the canvas and the elements it controls are drawn in the world just like any of the models in your scene.
Graphic Raycaster: This is the component that lets you actually interact with and click on your various interface elements.

When you added the canvas, an extra object called EventSystem was also created. This is what allows our buttons and other interface elements to interact with our scripts.
If you ever accidentally delete it, you can recreate it by going to the top of Unity and navigating to GameObject | UI | EventSystem. Next, we need to adjust the way the Unity Editor will display our game so that we can easily make our game board. To do this, switch to the Game view by clicking on its tab at the top of the Scene view. Then, click on the button that says Free Aspect and select the option near the bottom: 3:2 Landscape (3:2). Most of the mobile devices your games will be played on use a screen that approximates this ratio, and the rest will not see any distortion in your game.

To allow our game to adjust to various resolutions, we need to add a new component to our canvas object. With it selected in the Hierarchy panel, click on Add Component in the Inspector panel and navigate to Layout | Canvas Scaler. This component allows a base screen resolution to be worked from, letting it automatically scale our GUI as the devices change. To select a base resolution, select Scale With Screen Size from the UI Scale Mode drop-down list. Next, let's put 960 for X and 640 for Y. It is better to work from a larger resolution than a smaller one. If your resolution is too small, all your GUI elements will look fuzzy when they are scaled up for high-resolution devices.

To keep things organized, we need to create three empty GameObjects. Go back to the top of Unity and select Create Empty three times under GameObject. In the Hierarchy tab, click and drag them to our canvas to make them the canvas's children. To make each of them usable for organizing our GUI elements, we need to add the Rect Transform component. Find it by navigating to Add Component | Layout | Rect Transform in the Inspector for each. To rename them, click on their name at the top of the Inspector and type in a new name. Name one Board, another Buttons, and the last one Squares. Next, make Buttons and Squares children of Board. The Buttons element will hold all of the pieces of our game board that are clickable, while Squares will hold the squares that have already been selected.

To keep the Board element in the same place as devices change, we need to change the way it anchors to its parent. Click on the box with a red cross and a yellow dot in the center at the top right of Rect Transform to expand the Anchor Presets menu. Each of these options affects which corner of the parent the element will stick to as the screen changes size. We want to select the bottom-right option with four arrows, one in each direction. This will make it stretch with the parent element. Make the same change to Buttons and Squares as well. Set Left, Top, Right, and Bottom of each of these objects to 0. Also, make sure that Rotation is all set to 0 and Scale is set to 1. Otherwise, our interface may be scaled oddly when we work or play on it. Next, we need to change the anchor point of the board. If Anchor is not expanded, click on the little triangle on the left-hand side to expand it. Either way, the Max X value needs to be set to 0.667 so that our board will be a square and cover the left two-thirds of our screen.

This game board is the base around which the rest of our project will be created. Without it, the game won't be playable. The game squares use it to draw themselves on screen and anchor themselves to relevant places. Later, when we create menus, this is needed to make sure that a player only sees what we need them to be interacting with at that moment.
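If you ever need to apply the same anchor setup to many objects, the anchors can also be set from a script. This is an optional sketch using Unity's RectTransform API (anchorMin, anchorMax, and the offset properties are standard Unity; the helper itself is my own illustration):

using UnityEngine;

public class AnchorHelper : MonoBehaviour
{
    // Stretch a RectTransform across part of its parent, for example
    // StretchAcrossParent(board, 0f, 0f, 0.667f, 1f) for the left two-thirds.
    public static void StretchAcrossParent(RectTransform rect, float minX, float minY, float maxX, float maxY)
    {
        rect.anchorMin = new Vector2(minX, minY);
        rect.anchorMax = new Vector2(maxX, maxY);
        // Zero the offsets, equivalent to setting Left/Top/Right/Bottom to 0.
        rect.offsetMin = Vector2.zero;
        rect.offsetMax = Vector2.zero;
    }
}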
Game squares

Now that we have our base game board in place, we need the actual game squares. Without them, it is going to be kind of hard to play the game. We need to create nine buttons for the player to click on, nine images for the background of the selected squares, and nine texts to display which player controls each square. To create them and set them up, perform these steps:

Navigate to GameObject | UI just like we did for the canvas, but this time select Button, Image, and Text to create everything we need. Each of the image objects needs one of the text objects as a child. Then, all of the images must be children of the Squares object and the buttons must be children of the Buttons object. All of the buttons and images need a number in their name so that we can keep them organized. Name the buttons Button0 through Button8 and the images Square0 through Square8.

The next step is to lay out our board so that we can keep things organized and in sync with our programming. We need to set each numbered set specifically. But first, pick the crossed arrows from the bottom-right corner of Anchor Presets for all of them and ensure that their Left, Top, Right, and Bottom values are set to 0. To set each of our buttons and squares at the right place, just match the numbers to the following table. The result will be that all the squares will be in order, starting at the top left and ending at the bottom right:

Square | Min X | Min Y | Max X | Max Y
0      | 0     | 0.67  | 0.33  | 1
1      | 0.33  | 0.67  | 0.67  | 1
2      | 0.67  | 0.67  | 1     | 1
3      | 0     | 0.33  | 0.33  | 0.67
4      | 0.33  | 0.33  | 0.67  | 0.67
5      | 0.67  | 0.33  | 1     | 0.67
6      | 0     | 0     | 0.33  | 0.33
7      | 0.33  | 0     | 0.67  | 0.33
8      | 0.67  | 0     | 1     | 0.33

The last thing we need to add is an indicator to show whose turn it is. Create another Text object just like we did before and rename it Turn Indicator. After you make sure that the Left, Top, Right, and Bottom values are set to 0 again, set the Anchor Preset to the blue crossed arrows again. Finally, set Min X under Anchor to 0.67. We now have everything that we need to play the basic game of Tic-tac-toe. To check it out, select the Squares object and uncheck the box in the top-left corner of its Inspector to turn it off. When you hit play now, you should be able to see your whole game board and click on the buttons. You can even use Unity Remote to test it with the touch settings. If you have not already done so, it would be a good idea to save the scene before continuing.

The game squares are the last piece needed to set up our initial game. It almost looks like a playable game now. We just need to add a few scripts and we will be able to play all the games of Tic-tac-toe that we could ever desire.

Controlling the game

Having a game board is one of the most important parts of creating any game. However, it does us no good if we can't control what happens when its various buttons are pressed. Let's create some scripts and write some code to fix this now:

Create two new scripts in the Project panel. Name the new scripts TicTacToeControl and SquareState. Open them and clear out the default functions. The SquareState script will hold the possible states of each square of our game board. To do this, clear absolutely everything out of the script, including the using UnityEngine line and the public class SquareState line, so that we can replace it with a simple enumeration. An enumeration is just a list of potential values. This one is concerned with the player who controls the square. It will allow us to keep track of whether X is controlling it, O is controlling it, or if it is clear.
The Clear value comes first and is, therefore, the default state:

public enum SquareState {
    Clear,
    XControl,
    OControl
}

In our other script, TicTacToeControl, we need to start by adding an extra line at the very beginning, right under using UnityEngine. This line lets our code interact with the various GUI elements; most importantly for this game, it allows us to change the text of who controls a square and whose turn it is:

using UnityEngine.UI;

Next, we need two variables that will largely control the flow of the game. They need to be added in place of the two default functions. The first defines our game board. It is an array of nine squares to keep track of who owns what. The second keeps track of whose turn it is. When the Boolean is true, the X player gets a turn. When the Boolean is false, the O player gets a turn:

public SquareState[] board = new SquareState[9];
public bool xTurn = true;

The next variable will let us change the text on screen for whose turn it is:

public Text turnIndicatorLandscape;

These three variables will give us access to all of the GUI objects that we set up in the last section, allowing us to change the image and text based on who owns the square. We can also turn the buttons and squares on and off as they are clicked. All of them are marked with Landscape so that we will be able to keep them straight later, when we have a second board for the Portrait orientation of devices:

public GameObject[] buttonsLandscape;
public Image[] squaresLandscape;
public Text[] squareTextsLandscape;

The last two variables for now will give us access to the images that we need to change the backgrounds:

public Sprite oImage;
public Sprite xImage;

Our first function for this script will be called every time a button is clicked. It receives the number of the button clicked, and the first thing it does is turn the button off and the square on:

public void ButtonClick(int squareIndex) {
    buttonsLandscape[squareIndex].SetActive(false);
    squaresLandscape[squareIndex].gameObject.SetActive(true);

Next, the function checks the Boolean that we created earlier to see whose turn it is. If it is the X player's turn, the square is set to use the appropriate image and text, indicating their control. It then marks on the script's internal board who controls the square before finally switching to the O player's turn:

    if(xTurn) {
        squaresLandscape[squareIndex].sprite = xImage;
        squareTextsLandscape[squareIndex].text = "X";
        board[squareIndex] = SquareState.XControl;
        xTurn = false;
        turnIndicatorLandscape.text = "O's Turn";
    }

This next block of code does the same thing as the previous one, except it marks control for the O player and changes the turn to the X player:

    else {
        squaresLandscape[squareIndex].sprite = oImage;
        squareTextsLandscape[squareIndex].text = "O";
        board[squareIndex] = SquareState.OControl;
        xTurn = true;
        turnIndicatorLandscape.text = "X's Turn";
    }
}

That is it for the code right now.
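Incidentally, the two branches of ButtonClick differ only in their data, so the function could be collapsed. This is just an optional refactor sketch of my own, not the book's listing:

public void ButtonClick(int squareIndex) {
    buttonsLandscape[squareIndex].SetActive(false);
    squaresLandscape[squareIndex].gameObject.SetActive(true);
    // Pick the sprite, label, and owner for whoever's turn it is.
    squaresLandscape[squareIndex].sprite = xTurn ? xImage : oImage;
    squareTextsLandscape[squareIndex].text = xTurn ? "X" : "O";
    board[squareIndex] = xTurn ? SquareState.XControl : SquareState.OControl;
    xTurn = !xTurn;
    turnIndicatorLandscape.text = xTurn ? "X's Turn" : "O's Turn";
}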
Next, we need to return to the Unity Editor and set up our new script in the scene. You can do this by creating another empty GameObject and renaming it GameControl. Add our TicTacToeControl script to it by dragging the script from the Project panel and dropping it in the Inspector panel while the object is selected. We now need to attach all of the object references that our script needs in order to actually work. We don't need to touch the Board or XTurn slots in the Inspector panel, but the Turn Indicator object does need to be dragged from the Hierarchy tab to the Turn Indicator Landscape slot in the Inspector panel. Next, expand the Buttons Landscape, Squares Landscape, and Square Texts Landscape settings and set each Size slot to 9. To each of the new slots, we need to drag the relevant object from the Hierarchy tab. The Element 0 object under Buttons Landscape gets Button0, Element 1 gets Button1, and so on. Do this for all of the buttons, images, and texts. Ensure that you put them in the right order, or else our script will change the wrong things when the player is playing.

Next, we need a few images. If you have not already done so, import the starting assets for this article by going to the top of Unity, navigating to Assets | Import New Asset, and selecting the files to import them. You will need to navigate to and select each one, one at a time. We have ONormal and XNormal for indicating control of the square. The ButtonNormal image is used when the button is just sitting there, and ButtonActive is used when the player touches the button. The Title image is going to be used for our main menu a little bit later. In order to use any of these images in our game, we need to change their import settings. Select each of them in turn and find the Texture Type dropdown in the Inspector panel. We need to change them from Texture to Sprite (2D \ uGUI). We can leave the rest of the settings at their defaults. The Sprite Mode option is used if we have a sprite sheet with multiple elements in one image. The Packing Tag option is used for grouping and finding sprites in the sheet. The Pixels To Units option affects the size of the sprite when it is rendered in world space. The Pivot option simply changes the point around which the image will rotate.

For the four square images, we can click on Sprite Editor to change how the border appears when they are rendered. When clicked, a new window opens that shows our image with some green lines at the edges and some information about it in the lower right. We can drag these green lines to change the Border property. Anything outside the green lines will not be stretched with the image as it fills spaces that are larger than it. A setting of around 13 for each side will keep our whole border from stretching. Once you make any changes, ensure that you hit the Apply button to commit them.

Next, select the GameControl object once more and drag the ONormal image to the OImage slot and the XNormal image to the XImage slot. Each of the buttons needs to be connected to the script. To do this, select each of them from Hierarchy in turn and click on the plus sign at the bottom-right corner of the On Click () list in their Inspector. We then need to click on the little circle to the left of No Function and select GameControl from the list in the new window. Now navigate to No Function | TicTacToeControl | ButtonClick (int) to connect the function in our code to the button. Finally, for each of the buttons, put the number of the button in the number slot to the right of the function list. To keep everything organized, rename your Canvas object GameBoard_Landscape. Before we can test it out, be sure that the Squares object is turned on by checking the box in the top-left corner of its Inspector. Also, uncheck the box of each of its image children. This may not look like the best game in the world, but it is playable. We have buttons that call functions in our scripts. The turn indicator changes as we play. Also, each square indicates who controls it after it is selected. With a little more work, this game could look and work great.
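If you prefer wiring the buttons up in code rather than through the Inspector, Unity's Button component exposes an onClick event (Button.onClick.AddListener is standard Unity UI API; this helper script is my own sketch, not part of the article's project):

using UnityEngine;
using UnityEngine.UI;

public class ButtonWiring : MonoBehaviour
{
    public TicTacToeControl control; // drag the GameControl object here
    public Button[] buttons;         // Button0 through Button8, in order

    void Start()
    {
        for (int i = 0; i < buttons.Length; i++)
        {
            int index = i; // capture a copy so each listener gets its own number
            buttons[i].onClick.AddListener(() => control.ButtonClick(index));
        }
    }
}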
Messing with fonts

Now that we have a basic working game, we need to make it look a little better. We are going to add our button images and pick some new font sizes and colors to make everything more readable:

Let's start with the buttons. Select one of the Button elements and you will see in the Inspector that it is made of an Image (Script) component and a Button (Script) component. The first component controls how the GUI element will appear when it just sits there. The second controls how it changes when a player interacts with it, and what bit of functionality this triggers.

Source Image: This is the base image that is displayed when the element just sits there, untouched by the player.
Color: This controls the tinting and fading of the image that is being used.
Material: This lets you use a texture or shader that might otherwise be used on 3D models.
Image Type: This determines how the image will be stretched to fill the available space. Usually, it will be set to Sliced, which is for images that use a border and can be optionally filled with a color based on the Fill Center checkbox. Otherwise, it will often be set to Simple, for example when you are using a normal image, in which case you can use the Preserve Aspect checkbox to keep it from being stretched by odd-sized Rect Transforms.
Interactable: This simply toggles whether or not the player is able to click on the button and trigger functionality.
Transition: This changes how the button will react as the player interacts with it. ColorTint causes the button to change color as it is interacted with. SpriteSwap will change the image when it is interacted with. Animation will let you define more complex animation sequences for the transitions between states.
Target Graphic: This is a reference to the base image used for drawing the button on screen.
The Normal slot, Highlighted slot, Pressed slot, and Disabled slot define the effects or images to use when the button is not being interacted with, is moused over, is clicked by the player, and when the button has been turned off.

For each of our buttons, we need to drag our ButtonNormal image from the Project panel to the Source Image slot. Next, click on the white box to the right of the Color slot to open the color picker. To stop our buttons from being faded, we need to move the A slider all the way to the right or set the box to 255. We want to change images when our buttons are pressed, so change the Transition to SpriteSwap. Mobile devices have almost no way of hovering over GUI elements, so we do not need to worry about the Highlighted state. However, we do want to add our ButtonActive image to the Pressed Sprite slot so that it will switch when the player touches the button. The button squares should be blank until someone clicks on them, so we need to get rid of the text element. The easiest way to do this is to select each one under the button and delete it.

Next, we need to change the Text child of each of the image elements. It is the Text (Script) component that allows us to control how text is drawn on screen.

Text: This is the area where we can change the text that will be drawn on screen.
Font: This allows us to pick any font file that is in our project to use for the text.
Font Style: This will let you adjust the bold and italic nature of the text.
Font Size: This is the size of the text.
This is just like picking a font size in your favorite word processor.
Line Spacing: This is the distance between each line of text.
Rich Text: This will let you use a few special HTML-style tags to affect only part of the text with a color, italics, and so on.
Alignment: This changes where the text will be positioned in the box. The first three boxes adjust the horizontal position. The second three change the vertical position.
Horizontal Overflow / Vertical Overflow: These adjust whether the text can be drawn outside the box, wrapped to a new line, or clipped off.
Best Fit: This will automatically adjust the size of the text to fit a dynamically size-changing element, within a Min and Max value.
Color / Material: These change the color and texture of the text as and when it is drawn.
Shadow (Script): This component adds a drop shadow to the text, just like what you might add in Photoshop.

For each of our text elements, we need to use a Font Size of 120, and the Alignment should be centered. For the Turn Indicator text element, we also need to use a Font Size of 120, and it also needs to be centered. The last thing to do is to change the Color of the text elements to a dark gray so that we can easily see it against the color of our buttons. Now, our board works and looks good too. Try taking a stab at adding your own images for the buttons. You will need two images: one for when the button sits there and one for when the button is pressed. Also, the default Arial font is boring. Find a new font to use for your game; you can import it just like any other asset.

Rotating devices

If you have been testing your game so far, you have probably noticed that the game only looks good when we hold the device in the landscape mode. When it is held in the portrait mode, everything becomes squished as the squares and turn indicator try to share the little amount of horizontal space that is available. As we have already set up our game board in one layout mode, it becomes a fairly simple matter to duplicate it for the other mode. However, it does require duplicating a good portion of our code to make it all work properly:

To make a copy of our game board, right-click on it and select Duplicate from the menu. Rename the duplicate game board GameBoard_Portrait. This will be the board used when our player's device is in the portrait mode. To see our changes while we are making them, turn off the landscape game board and select 3:2 Portrait (2:3) from the drop-down list at the top left of the Game window. Select the Board object that is a child of GameBoard_Portrait. In its Inspector panel, we need to change the anchors to use the top two-thirds of the screen rather than the left two-thirds. Values of 0 for Min X, 0.33 for Min Y, and 1 for both Max X and Max Y will make this happen. Next, Turn Indicator needs to be selected and moved to the bottom third of the screen. Values of 0 for Min X and Min Y, 1 for Max X, and 0.33 for Max Y will work well here.

Now that we have our second board set up, we need to make a place for it in our code. So, open the TicTacToeControl script and scroll to the top so that we can start with some new variables. The first variable that we are going to add will give us access to the turn indicator for the portrait mode of our screen:

public Text turnIndicatorPortrait;

The next three variables will keep track of the buttons, square images, and owner text information.
These are just like the three lists that we created earlier to keep track of the board while it is in the landscape mode:

public GameObject[] buttonsPortrait;
public Image[] squaresPortrait;
public Text[] squareTextsPortrait;

The last two variables that we are going to add to the top of our script are for keeping track of the two canvas objects that actually draw our game boards. We need these so that we can switch between them as the user turns their device around:

public GameObject gameBoardGroupLandscape;
public GameObject gameBoardGroupPortrait;

Next, we need to update a few of our functions so that they make changes to both boards and not just the landscape board. These first two lines turn the portrait board's buttons off and the squares on when the player clicks on them. They need to go at the beginning of our ButtonClick function. Put them right after the two lines where we use SetActive on the buttons and squares for the landscape set:

buttonsPortrait[squareIndex].SetActive(false);
squaresPortrait[squareIndex].gameObject.SetActive(true);

These two lines change the image and text of the controlling square in favor of the X player for the portrait set. They go inside the if statement of our ButtonClick function, right after the two lines that do the same thing for the landscape set:

squaresPortrait[squareIndex].sprite = xImage;
squareTextsPortrait[squareIndex].text = "X";

This line goes at the end of that same if statement and changes the portrait set's turn indicator text:

turnIndicatorPortrait.text = "O's Turn";

The next two lines change the image and text in favor of the O player. They go after the same lines for the landscape set, inside the else statement of our ButtonClick function:

squaresPortrait[squareIndex].sprite = oImage;
squareTextsPortrait[squareIndex].text = "O";

This is the last line that we need to add to our ButtonClick function; it needs to be put at the end of the else statement. It simply changes the text indicating whose turn it is:

turnIndicatorPortrait.text = "X's Turn";

Next, we need to create a new function to control the changing of our game boards when the player changes the orientation of their device. We will start by defining the Update function. This is a special function called by Unity every single frame. It will allow us to check for a change in orientation on every frame:

public void Update() {

The function begins with an if statement that uses Input.deviceOrientation to find out how the player's device is currently being held. It compares the finding to the LandscapeLeft orientation to see whether the device is being held sideways, with the home button on the left side. If the result is true, the Portrait set of GUI elements is turned off while the Landscape set is turned on:

    if(Input.deviceOrientation == DeviceOrientation.LandscapeLeft) {
        gameBoardGroupPortrait.SetActive(false);
        gameBoardGroupLandscape.SetActive(true);
    }

The next else if statement checks for the Portrait orientation, that is, when the home button is at the bottom.
It turns the Portrait set on and the Landscape set off if true:

    else if(Input.deviceOrientation == DeviceOrientation.Portrait) {
        gameBoardGroupPortrait.SetActive(true);
        gameBoardGroupLandscape.SetActive(false);
    }

This else if statement checks LandscapeRight, when the home button is on the right side:

    else if(Input.deviceOrientation == DeviceOrientation.LandscapeRight) {
        gameBoardGroupPortrait.SetActive(false);
        gameBoardGroupLandscape.SetActive(true);
    }

Finally, we check the PortraitUpsideDown orientation, which is when the home button is at the top of the device. Don't forget the extra bracket to close off and end the function:

    else if(Input.deviceOrientation == DeviceOrientation.PortraitUpsideDown) {
        gameBoardGroupPortrait.SetActive(true);
        gameBoardGroupLandscape.SetActive(false);
    }
}

We now need to return to Unity and select our GameControl object so that we can set up our new Inspector properties. Drag and drop the various pieces from the portrait game board in Hierarchy to the relevant slots in the Inspector: Turn Indicator to the Turn Indicator Portrait slot, the buttons to the Buttons Portrait list in order, the squares to Squares Portrait, and their text children to Square Texts Portrait. Finally, drop the GameBoard_Portrait object in the Game Board Group Portrait slot and the GameBoard_Landscape object in the Game Board Group Landscape slot. We should now be able to play our game and see the board switch when we change the orientation of our device. You will have to either build your project to your device or connect using Unity Remote, because the Editor itself and your computer simply don't have a device orientation like your mobile device does. Be sure to set the display mode of your Game window to Remote in its top-left corner so that it will update along with your device while using Unity Remote.

Menus and victory

Our game is nearly complete. The last things that we need are as follows:

An opening menu where players can start a new game
A bit of code for checking whether anybody has won the game
A game over menu for displaying who won the game

Setting up the elements

Our two new menus will be quite simple when compared to the game board. The opening menu will consist of our game's title graphic and a single button, while the game over menu will have a text element to display the victory message and a button to go back to the main menu. Let's perform the following steps to set up the elements:

Let's start with the opening menu by creating a new Canvas, just like we did before, and renaming it OpeningMenu. This will allow us to keep it separate from the other screens that we have created. Next, the menu needs an Image element and a Button element as children. To make everything easier to work with, turn off the game boards with the checkbox at the top of their Inspector windows. For our image object, we can drag our Title image to the Source Image slot. For the image's Rect Transform, we need to set the Pos X and Pos Y values to 0. We also need to adjust the Width and Height. We are going to match the dimensions of the original image so that it will not be stretched. Put a value of 320 for Width and 160 for Height. To move the image to the top half of the screen, put a 0 in the Pivot Y slot. This changes the point from which the image's position is measured. For the button's Rect Transform, we again need a value of 0 for both Pos X and Pos Y. We again need a value of 320 for the Width, but this time we want a value of 100 for the Height. To move it to the bottom half of the screen, we need a value of 1 in the Pivot Y slot.
Next up is to set the images for the button, just like we did earlier for the game board. Put the ButtonNormal image in the Source Image slot. Change Transition to SpriteSwap and put the ButtonActive image in the Pressed Sprite slot. Do not forget to change Color to have an A value of 255 in the color picker so that our button is not partially faded.
Finally for this menu, to change the button text, expand Button in the Hierarchy and select the Text child object. Right underneath Text in the Inspector panel for this object is a text field where we can change the text displayed on the button. A value of New Game here will work well. Also, change Font Size to 45 so that we can actually read it.
Next, we need to create the game over menu, so turn off our opening menu and create a new canvas for the game over menu. Rename it GameOverMenu so that we can continue to be organized.
For this menu, we need a Text element and a Button element as its children. We will set this one up in an almost identical way to the previous one. Both the text and the button need values of 0 for the Pos X and Pos Y slots, with a value of 320 for Width.
The text will use a Height of 160 and a Pivot Y of 0. We also need to set its Font Size to 80. You can change the default text, but it will be overwritten by our code anyway. To center the text in the menu, select the middle buttons from the two sets next to the Alignment property.
The button will use a Height of 100 and a Pivot Y of 1. Also, be sure to set the Source Image, Color, Transition, and Pressed Sprite to the proper images and settings. The last thing to set is the button's text child. Set the default text to Main Menu and give it a Font Size of 45.

That is it for setting up our menus. We have all the screens that we need to allow the player to interact with our game. The only problem is that we don't have any of the functionality to make them actually do anything.

Adding the code

To make our game board buttons work, we had to create a function in our script that they could reference and call when they are touched. The main menu's button will start a new game, while the game over menu's button will change screens back to the main menu. We will also need to create a little bit of code to clear out and reset the game board when a new game starts. If we don't, it will be impossible for the player to play more than one round without restarting the whole app.

Open the TicTacToeControl script so that we can make some more changes to it. We will start with the addition of three variables at the top of the script. The first two will keep track of the two new menus, allowing us to turn them on and off as needed. The third is for the text object in the game over screen, which gives us the ability to display a message based on the result of the game.

Next, we need to create a new function. The NewGame function will be called by the button in the main menu. Its purpose is to reset the board so that we can continue to play without having to reset the whole application.

public void NewGame() {

The function starts by setting the game to start on the X player's turn. It then creates a new array of SquareStates, which effectively wipes out the old game board.
It then sets the turn indicators for both the Landscape and Portrait sets of controls:

xTurn = true;
board = new SquareState[9];
turnIndicatorLandscape.text = "X's Turn";
turnIndicatorPortrait.text = "X's Turn";

We next loop through the nine buttons and squares for both the Portrait and Landscape controls. All of the buttons are turned on and the squares are turned off using SetActive, which is the same as clicking on the little checkbox at the top-left corner of the Inspector panel:

for(int i = 0; i < 9; i++) {
  buttonsPortrait[i].SetActive(true);
  squaresPortrait[i].gameObject.SetActive(false);
  buttonsLandscape[i].SetActive(true);
  squaresLandscape[i].gameObject.SetActive(false);
}

The last three lines of code control which screens are visible when we change over to the game board. By default, it turns on the Landscape board and makes sure that the Portrait board is turned off. It then turns off the main menu. Don't forget the last curly bracket to close off the function:

gameBoardGroupPortrait.SetActive(false);
gameBoardGroupLandscape.SetActive(true);
mainMenuGroup.SetActive(false);
}

Next, we need to add a single line of code to the end of the ButtonClick function. It is a simple call to check whether anyone has won the game after the buttons and squares have been dealt with:

CheckVictory();

The CheckVictory function runs through the possible combinations for victory in the game. If it finds a run of three matching squares, the SetWinner function will be called and the current game will end:

public void CheckVictory() {

A victory in this game is a run of three matching squares. We start by checking the column marked by our loop: if the first square is not Clear, we compare it to the square below it; if they match, we check it against the square below that. Our board is stored as a list but drawn as a grid, so we have to add three to the index to go down a square. The else if statement follows with checks of each row. By multiplying our loop value by three, we skip down a row on each loop. We again compare the square to SquareState.Clear, then to the square to its right, and finally to the square two to its right. If either set of conditions is met, we send the first square in the set to another function to change our game screen:

for(int i = 0; i < 3; i++) {
  if(board[i] != SquareState.Clear && board[i] == board[i + 3] && board[i] == board[i + 6]) {
    SetWinner(board[i]);
    return;
  }
  else if(board[i * 3] != SquareState.Clear && board[i * 3] == board[(i * 3) + 1] && board[i * 3] == board[(i * 3) + 2]) {
    SetWinner(board[i * 3]);
    return;
  }
}

The following code snippet is largely the same as the if statements that we just saw; however, these lines check the diagonals. If the conditions are true, we again send out to the other function to change the game screen. You probably also noticed the returns after the function calls. If we have found a winner at any point, there is no need to check any more of the board, so we exit the CheckVictory function early:

if(board[0] != SquareState.Clear && board[0] == board[4] && board[0] == board[8]) {
  SetWinner(board[0]);
  return;
}
else if(board[2] != SquareState.Clear && board[2] == board[4] && board[2] == board[6]) {
  SetWinner(board[2]);
  return;
}

This is the last little bit of our CheckVictory function. If no one has won the game, as determined by the previous parts of this function, we have to check for a tie. This is done by checking all the squares of the game board.
If any one of them is Clear, the game has yet to finish and we exit the function. But if we make it through the entire loop without finding a Clear square, we set the winner by declaring a tie:

for(int i = 0; i < board.Length; i++) {
  if(board[i] == SquareState.Clear)
    return;
}
SetWinner(SquareState.Clear);
}

Next, we create the SetWinner function that is called from several places in our CheckVictory function. This function is passed who has won the game; it initially turns on the game over screen and turns off the game boards:

public void SetWinner(SquareState toWin) {
  gameOverGroup.SetActive(true);
  gameBoardGroupPortrait.SetActive(false);
  gameBoardGroupLandscape.SetActive(false);

The function then checks to see who won and picks an appropriate message for the victorText object:

  if(toWin == SquareState.Clear) {
    victorText.text = "Tie!";
  }
  else if(toWin == SquareState.XControl) {
    victorText.text = "X Wins!";
  }
  else {
    victorText.text = "O Wins!";
  }
}

Finally, we have the BackToMainMenu function. This one is short and sweet; it is simply called by the button on the game over screen to switch back to the main menu:

public void BackToMainMenu() {
  gameOverGroup.SetActive(false);
  mainMenuGroup.SetActive(true);
}

That is all of the code in our game. We have all of the visual pieces that make up our game, and now we also have all of the functional pieces. The last step is to put them together and finish the game.

Putting them together

We have our code and our menus. Once we connect them, our game will be complete. To put it all together, perform the following steps:

Go back to the Unity Editor and select the GameControl object from the Hierarchy panel. The three new properties in its Inspector window need to be filled in.
Drag the OpeningMenu canvas to the Main Menu Group slot and GameOverMenu to the Game Over Group slot. Also, find the text object child of GameOverMenu and drag it to the Victor Text slot.
Next, we need to connect the button functionality for each of our menus. Let's start by selecting the button object child of our OpeningMenu canvas.
Click on the little plus sign at the bottom right of its Button (Script) component to add a new functionality slot.
Click on the circle in the center of the new slot and select GameControl from the pop-up window, just like we did for each of our game board buttons.
The drop-down list that currently says No Function is our next target. Click on it and navigate to TicTacToeControl | NewGame ().
Repeat these few steps to add the functionality to the Button child of GameOverMenu, except this time select BackToMainMenu() from the list.
The very last thing to do is to turn off both the game boards and the game over menu, using the checkbox in the top left of the Inspector. Leave only the opening menu on so that our game will start there when we play it.

Congratulations! This is our game. All of our buttons are set, we have multiple menus, and we even created a game board that changes based on the orientation of the player's device. The last thing to do is to build it for our devices and go show it off.

A better way to build to device

Now for the part of the build process that everyone itches to learn. There is a quicker and easier way to build your game and play it on your Android device. The long and complicated way is still very good to know: should this shorter method fail, and it will at some point, it is helpful to know the long method so that you can debug any errors. Also, the short path is only good for building for a single device.
If you have multiple devices and a large project, it will take significantly more time to load them all with the short build process. Follow these steps:

Start by opening the Build Settings window. Remember, it can be found under File at the top of the Unity Editor.
If you have not already done so, save your scene. The option to save your scene is also found under File at the top of the Unity Editor.
Click on the Add Current button to add our current scene, also the only scene, to the Scenes In Build list. If this list is empty, there is no game.
Be sure to change your Platform to Android, if you haven't already done so.
Do not forget to set the Player Settings. Click on the Player Settings button to open them up in the Inspector window.
At the top, set the Company Name and Product Name fields. Values of TomPacktAndroid and Ch2 TicTacToe respectively for these fields will match the included completed project. Remember, these fields will be seen by the people playing your game.
The Bundle Identifier field under Other Settings needs to be set as well. The format is still com.CompanyName.ProductName, so com.TomPacktAndroid.Ch2.TicTacToe will work well.
In order to see our cool dynamic GUI in action on a device, there is one other setting that should be changed. Click on Resolution and Presentation to expand the options. We are interested in Default Orientation. The default is Portrait, but this option means that the game will be fixed in the portrait display mode. Click on the drop-down menu and select Auto Rotation. This option tells Unity to automatically adjust the game to be upright irrespective of the orientation in which the device is being held.
The new set of options that popped up when Auto Rotation was selected allows you to limit the orientations that are supported. Perhaps you are making a game that needs to be wider and held in landscape orientation. By unchecking Portrait and Portrait Upside Down, Unity will still adjust, but only for the remaining orientations.
On your Android device, the controls are along one of the shorter sides; these are usually the home, menu, and back or recent apps buttons. This side is generally recognized as the bottom of the device, and it is the position of these buttons that dictates what each orientation is. The Portrait mode is when these buttons are down relative to the screen. The Landscape Right mode is when they are to the right. The pattern begins to become clear, does it not?
For now, leave all of the orientation options checked and go back to Build Settings.
The next step (and this is very important) is to connect your device to your computer and give it a moment to be recognized. If your device is not the first one connected to your computer, this shorter build path will fail.
In the bottom-right corner of the Build Settings window, click on the Build And Run button.
You will be asked to give the application file, the APK, a relevant name and to save it to an appropriate location. A name such as Ch2_TicTacToe.apk will be fine, and it is suitable enough to save the file to the desktop.
Click on Save and sit back to watch the wonderful loading bar that is provided. After the application is built, there is a pushing-to-device step. This means that the build was successful and Unity is now putting the application on your device and installing it. Once this is done, the game will start on the device and the loading will be done.

We just learned about the Build And Run button provided by the Build Settings window.
This is quick, easy, and free from the pain of using the command prompt; isn't the short build path wonderful? However, if the build process fails for any reason, including being unable to find the device, the application file will not be saved. You will have to go through the entire build process again if you want to try installing again. This isn't so bad for our simple Tic-tac-toe game, but it could consume a lot of time for a larger project. Also, you can only have one Android device connected to your computer while building; any more devices and the build process is a guaranteed failure. Unity also doesn't check for multiple devices until after it has gone through the rest of the potentially long build process. Other than these words of caution, the Build And Run option is really quite nice. Let Unity handle the hard part of getting the game to your device. This gives us much more time to focus on testing and making a great game.

If you are up for a challenge, this is a tough one: create a single-player mode (a starting sketch follows at the end of this article). You will have to start by adding an extra button to the opening screen for selecting the second game mode. Any logic for the computer player should go in the Update function. Also, take a look at Random.Range for randomly selecting a square to take control of. Otherwise, you could do a little more work and make the computer search for a square where it can win or create a line of two matches.

Summary

To learn more about Unity game development for Android, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

Unity Android Game Development by Example Beginner's Guide
Unity 5 for Android Essentials

Resources for Article:

Further resources on this subject:
Finding Your Way [article]
The Blueprint Class [article]
Editor Tool, Prefabs, and Main Menu [article]
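As promised, here is a possible starting point for the single-player challenge. Treat it as a sketch rather than the book's solution: the singlePlayer flag and the NewSinglePlayerGame function are hypothetical additions that you would wire to a second menu button, and it assumes the ButtonClick function takes the index of the square being claimed, as the earlier snippets suggest.

// Sketch only: a random computer opponent for O.
// singlePlayer and NewSinglePlayerGame are hypothetical additions;
// board, xTurn, SquareState, NewGame, and ButtonClick are the members
// created earlier in this chapter.
private bool singlePlayer = false;

public void NewSinglePlayerGame() {
  singlePlayer = true;
  NewGame();
}

public void Update() {
  // ...the orientation checks written earlier in this chapter go here...

  // On O's turn in single-player mode, try one random square per frame.
  // If the chosen square is already taken, we simply try again next frame.
  if(singlePlayer && !xTurn) {
    int squareIndex = Random.Range(0, board.Length);
    if(board[squareIndex] == SquareState.Clear) {
      ButtonClick(squareIndex);
    }
  }
}

Picking one random square per frame keeps the code tiny; a smarter opponent would first scan the board for a winning or blocking line of two before falling back to a random pick, and you would also want to skip this check while the game over menu is showing.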

Your First Swift 2 Project

Packt
16 Feb 2016
29 min read
After the release of Xcode 6 in 2014, it has been possible to build Swift applications for iOS and OS X and submit them to the App Store for publication. This article will present both a single-view application and a master-detail application, and use these to explain the concepts behind iOS applications, as well as introduce classes in Swift. In this article, we will present the following topics:

How iOS applications are structured
Single-view iOS applications
Creating classes in Swift
Protocols and enums in Swift
Using XCTest to test Swift code
Master-detail iOS applications
The AppDelegate and ViewController classes

(For more resources related to this topic, see here.)

Understanding iOS applications

An iOS application is a compiled executable along with a set of supporting files in a bundle. The application bundle is packaged into an archive file to be installed onto a device or uploaded to the App Store. Xcode can be used to run iOS applications in a simulator, as well as to test them on a local device. Submitting an application to the App Store requires a developer signing key, which is included as part of the Apple Developer Program at https://developer.apple.com.

Most iOS applications to date have been written in Objective-C, a crossover between C and Smalltalk. With the advent of Swift, it is likely that many developers will move at least parts of their applications to Swift for performance and maintenance reasons. Although Objective-C is likely to be around for a while, it is clear that Swift is the future of iOS development, and probably of OS X as well.

Applications contain a number of different types of files, which are used both at compile time and at runtime. These files include the following:

The Info.plist file, which contains information about which languages the application is localized for, what the identity of the application is, and the configuration requirements, such as the supported interface types (iPad, iPhone, and Universal) and orientations (Portrait, Upside Down, Landscape Left, and Landscape Right)
Zero or more interface builder files with a .xib extension, which contain user interface screens (and which supersede the previous .nib files)
Zero or more image asset files with a .xcassets extension, which store groups of related icons at different sizes, such as the application icon or graphics for display on screen (and which supersede the previous .icns files)
Zero or more storyboard files with a .storyboard extension, which are used to coordinate between different screens in an application
One or more .swift files that contain application code

Creating a single-view iOS application

A single-view iOS application is one where the application is presented in a single screen, without any transitions or other views. This section will show how to create an application that uses a single view without storyboards.

When Xcode starts, it displays a welcome message that includes the ability to create a new project. This welcome message can be redisplayed at any time by navigating to Window | Welcome to Xcode or by pressing Command + Shift + 1.

Using the welcome dialog's Create a new Xcode project option, or navigating to File | New | Project..., or by pressing Command + Shift + N, create a new iOS project with Single View Application as the template, as shown in the following screenshot:

When the Next button is pressed, the new project dialog will ask for more details. The product name here is SingleView, with appropriate values for Organization Name and Identifier.
Ensure that the language selected is Swift and the device type is Universal:

The Organization Identifier is a reverse domain name representation of the organization, and the Bundle Identifier is the concatenation of the Organization Identifier with the Product Name. Publishing to the App Store requires that the Organization Identifier be owned by the publisher; it is managed in the online developer center at https://developer.apple.com/membercenter/.

When Next is pressed, Xcode will ask where to save the project and whether a repository should be created. The selected location will be used to create the product directory, and an option to create a Git repository will be offered.

In 2014, Git became the most widely used version control system, surpassing all other distributed and centralized version-control systems. It would be foolish not to create a Git repository when creating a new Xcode project.

When Create is pressed, Xcode will create the project, set up template files, and then initialize the Git repository locally or on a shared server.

Press the triangular play button at the top-left of Xcode to launch the simulator:

If everything has been set up correctly, the simulator will start with a white screen and the time and battery shown at the top of the screen:

Removing the storyboard

The default template for a single-view application includes a storyboard. This creates the view for the first (only) screen and performs some additional setup behind the scenes. To understand what happens, the storyboard will be removed and replaced with code instead. Most applications are built with one or more storyboards.

The storyboard can be deleted by going to the project navigator, finding the Main.storyboard file, and pressing the Delete key or selecting Delete from the context-sensitive menu. When the confirmation dialog is shown, select the Move to Trash option to ensure that the file is deleted rather than just being removed from the list of files that Xcode knows about.

To see the project navigator, press Command + 1 or navigate to View | Navigators | Show Project Navigator.

Once the Main.storyboard file has been deleted, it needs to be removed from Info.plist to prevent iOS from trying to open it at startup. Open the Info.plist file under the Supporting Files folder of SingleView. A set of key-value pairs will be displayed; clicking on the Main storyboard file base name row will present the (+) and (-) options. Clicking on the delete icon (-) will remove the line:

Now, when the application is started, a black screen will be displayed.

There are multiple Info.plist files created by Xcode's template; one file is used for the real application, while the other files are used for the test applications that get built when running tests.

Setting up the view controller

The view controller is responsible for setting up the view when it is activated. Typically, this is done through either the storyboard or the interface file. As these have been removed, the window and the view controller need to be instantiated manually.

When iOS applications start, application:didFinishLaunchingWithOptions: is called on the corresponding UIApplicationDelegate. The optional window variable is initialized automatically when it is loaded from an interface file or a storyboard, but it needs to be explicitly initialized if the user interface is being implemented in code.
Implement the application:didFinishLaunchingWithOptions: method in the AppDelegate class as follows:

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
  var window: UIWindow?
  func application(application: UIApplication,
   didFinishLaunchingWithOptions launchOptions:
   [NSObject:AnyObject]?) -> Bool {
    window = UIWindow()
    window?.rootViewController = ViewController()
    window?.makeKeyAndVisible()
    return true
  }
}

To open a class by name, press Command + Shift + O and type in the class name. Alternatively, navigate to File | Open Quickly...

The final step is to create the view's content, which is typically done in the viewDidLoad method of the ViewController class. As an example user interface, a UILabel will be created and added to the view. Each view controller has an associated view property, and child views can be added with the addSubview method. To make the view stand out, the background of the view will be changed to black and the text color will be changed to white:

class ViewController: UIViewController {
  override func viewDidLoad() {
    super.viewDidLoad()
    view.backgroundColor = UIColor.blackColor()
    let label = UILabel(frame:view.bounds)
    label.textColor = UIColor.whiteColor()
    label.textAlignment = .Center
    label.text = "Welcome to Swift"
    view.addSubview(label)
  }
}

This creates a label, which is sized to the full size of the screen, with a white text color and a centered text alignment. When run, this displays Welcome to Swift on the screen.

Typically, views will be implemented in their own class rather than being in-lined into the view controller. This allows the views to be reused in other controllers.

When the screen is rotated, the label will be rotated off screen. Logic would need to be added in a real application to handle rotation changes in the view controller, such as willRotateToInterfaceOrientation, and to appropriately add rotations to the views using the transform property of the view. Usually, an interface builder file or storyboard would be used so that this is handled automatically.

Swift classes, protocols, and enums

Almost all Swift applications will be object oriented. Classes, such as Process from the CoreFoundation framework, and UIColor and UIImage from the UIKit framework, were used to demonstrate how classes can be used in applications. This section describes how to create classes, protocols, and enums in Swift.

Classes in Swift

A class is created in Swift using the class keyword, and braces are used to enclose the class body. The body can contain variables called properties, as well as functions called methods, which are collectively referred to as members. Instance members are unique to each instance, while static members are shared between all instances of that class.

Classes are typically defined in a file named for the class; so a GitHubRepository class would typically be defined in a GitHubRepository.swift file. A new Swift file can be created by navigating to File | New | File… and selecting the Swift File option under iOS. Ensure that it is added to the Tests and UITests targets as well.
Once created, implement the class as follows:

class GitHubRepository {
  var id:UInt64 = 0
  var name:String = ""
  func detailsURL() -> String {
    return "https://api.github.com/repositories/\(id)"
  }
}

This class can be instantiated and used as follows:

let repo = GitHubRepository()
repo.id = 1
repo.name = "Grit"
repo.detailsURL() // returns https://api.github.com/repositories/1

It is possible to create static members, which are the same for all instances of a class. In the GitHubRepository class, the api URL is likely to remain the same for all invocations, so it can be refactored into a static property:

class GitHubRepository {
  // does not work in Swift 1.0 or 1.1
  static let api = "https://api.github.com"
  …
  class func detailsURL(id:String) -> String {
    return "\(api)/repositories/\(id)"
  }
}

Now, if the api URL needs to be changed (for example, to support mock testing or to support an in-house GitHub Enterprise server), there is a single place to change it.

Before Swift 2, a class variables are not yet supported error message may be displayed. To use static variables in Swift prior to version 2, a different approach must be used.

It is possible to define computed properties, which are not stored but are calculated on demand. These have a getter (also known as an accessor) and optionally a setter (also known as a mutator). The previous example can be rewritten as follows:

class GitHubRepository {
  class var api:String {
    get {
      return "https://api.github.com"
    }
  }
  func detailsURL() -> String {
    return "\(GitHubRepository.api)/repositories/\(id)"
  }
}

Although this is logically a read-only constant (there is no associated set block), it is not possible to define let constants with accessors. To refer to a class variable, use the type name, which in this case is GitHubRepository. When the GitHubRepository.api expression is evaluated, the body of the getter is called.

Subclasses and testing in Swift

A simple Swift class with no explicit parent is known as a base class. However, classes in Swift frequently inherit from another class by specifying a superclass after the class name. The syntax for this is class SubClass:SuperClass{...}.

Tests in Swift are written using the XCTest framework, which is included by default in Xcode templates. This allows an application to have tests written and then executed in place to confirm that no bugs have been introduced. XCTest replaces the previous testing framework, OCUnit.

The XCTest framework has a base class called XCTestCase that all tests inherit from. Methods beginning with test (and that take no arguments) in the test case class are invoked automatically when the tests are run. Test code can indicate success or failure by calling the XCTAssert* functions, such as XCTAssertEqual and XCTAssertGreaterThan.

Tests for the GitHubRepository class conventionally exist in a corresponding GitHubRepositoryTest class, which will be a subclass of XCTestCase. Create a new Swift file by navigating to File | New | File... and choosing Swift File under the Source category for iOS. Ensure that the Tests and UITests targets are selected but the application target is not.
It can be implemented as follows:

import XCTest
class GitHubRepositoryTest: XCTestCase {
  func testRepository() {
    let repo = GitHubRepository()
    repo.id = 1
    repo.name = "Grit"
    XCTAssertEqual(
      repo.detailsURL(),
      "https://api.github.com/repositories/1",
      "Repository details"
    )
  }
}

Make sure that the GitHubRepositoryTest class is added to the test targets. If it was not added when the file was created, it can be done by selecting the file and pressing Command + Option + 1 to show the File Inspector. The checkbox next to the test target should be selected. Tests should never be added to the main target. The GitHubRepository class should be added to both test targets:

When the tests are run by pressing Command + U or by navigating to Product | Test, the results of the test will be displayed. Changing either the implementation or the expected test result will demonstrate whether the test is being executed correctly.

Always check whether a failing test causes the build to fail; this will confirm that the test is actually being run. For example, in the GitHubRepositoryTest class, modify the URL to remove https from the front and check whether a test failure is shown. There is nothing more useless than a correctly implemented test that never runs.

Protocols in Swift

A protocol is similar to an interface in other languages; it is a named type that has method signatures but no method implementations. Classes can implement zero or more protocols; when they do, they are said to adopt or conform to the protocol. A protocol may have a number of methods that are either required (the default) or optional (marked with the optional keyword).

Optional protocol methods are only supported when the protocol is marked with the @objc attribute. This declares that the class will be backed by an NSObject class for interoperability with Objective-C. Pure Swift protocols cannot have optional methods.

The syntax to define a protocol looks similar to the following:

protocol GitHubDetails {
  func detailsURL() -> String
  // protocol needs @objc if using optional protocols
  // optional doNotNeedToImplement()
}

Protocols cannot have functions with default arguments. Protocols can be used with the struct, class, and enum types unless the @objc class attribute is used, in which case they can only be used against Objective-C classes or enums.

Classes conform to protocols by listing the protocol names after the class name, similar to a superclass. When a class has both a superclass and one or more protocols, the superclass must be listed first.

class GitHubRepository: GitHubDetails {
  func detailsURL() -> String {
    // implementation as before
  }
}

The GitHubDetails protocol can be used as a type in the same places as an existing Swift type, such as a variable type, method return type, or argument type.

Protocols are widely used in Swift to allow callbacks from frameworks that would otherwise not know about specific callback handlers. If a superclass were required instead, then a single class could not be used to implement multiple callbacks. Common protocols include UIApplicationDelegate, Printable, and Comparable.

Enums in Swift

The final concept to understand in Swift is enumeration, or enum for short. An enum is a closed set of values, such as North, East, South, and West, or Up and Down.
An enumeration is defined using the enum keyword, followed by a type name, and a block that contains the case keywords followed by comma-separated values, as follows:

enum Suit {
  case Clubs, Diamonds, Hearts // many on one line
  case Spades // or each on separate lines
}

Unlike C, enumerated values do not have a specific type by default, so they cannot generally be converted to and from an integer value. Enumerations can be defined with raw values that allow conversion to and from integer values.

Enum values are assigned to variables using the type name and the enum name:

var suit:Suit = Suit.Clubs

However, if the type of the expression is known, then the type prefix does not need to be explicitly specified; the following form is much more common in Swift code:

var suit:Suit = .Clubs

Raw values

For enum values that have specific meanings, it is possible to extend the enum from a different type, such as Int. These are known as raw values:

enum Rank: Int {
  case Two = 2, Three, Four, Five, Six, Seven, Eight, Nine, Ten
  case Jack, Queen, King, Ace
}

A raw value enum can be converted to and from its raw value with the rawValue property and the failable initializer Rank(rawValue:) as follows:

Rank.Two.rawValue == 2
Rank(rawValue:14)! == .Ace

The failable initializer returns an optional enum value, because the equivalent Rank may not exist. The expression Rank(rawValue:0) will return nil, for example.

Associated values

Enums can also have associated values, such as a value or case class in other languages. For example, a Suit and a Rank can be combined to form a Card:

enum Card {
  case Face(Rank, Suit)
  case Joker
}

Instances can be created by passing values into an enum initializer:

var aceOfSpades: Card = .Face(.Ace,.Spades)
var twoOfHearts: Card = .Face(.Two,.Hearts)
var theJoker: Card = .Joker

The associated values of an enum instance cannot be extracted (as they can with properties of a struct), but the enum value can be accessed by pattern matching in a switch statement:

var card = aceOfSpades // or theJoker or twoOfHearts ...
switch card {
  case .Face(let rank, let suit):
    print("Got a face card \(rank) of \(suit)");
  case .Joker:
    print("Got the joker card")
}

The Swift compiler will require that the switch statement be exhaustive. As the enum only contains these two cases, no default block is needed. If another enum value is added to Card in the future, the compiler will report an error in this switch statement.

Creating a master-detail iOS application

Having covered how classes, protocols, and enums are defined in Swift, a more complex master-detail application can be created. A master-detail application is a specific type of iOS application that initially presents a master table view, and when an individual element is selected, a secondary details view will show more information about the selected item.
Using the Create a new Xcode project option from the welcome screen, or by navigating to File | New | Project…, or by pressing Command + Shift + N, create a new project and select Master-Detail Application from the iOS Application category:

In the subsequent dialog, enter appropriate values for the project, such as the name (MasterDetail) and the organization identifier (typically based on the reverse DNS name), and ensure that the Language dropdown reads Swift and that it is targeted for Universal devices:

When the project is created, an Xcode window will open containing all the files that are created by the wizard itself, including the MasterDetail.app and MasterDetailTests.xctest products. The MasterDetail.app is a bundle that is executed by the simulator or a connected device, while the MasterDetailTests.xctest and MasterDetailUITests.xctest products are used to execute unit tests for the application's code.

The application can be launched by pressing the triangular play button on the top-left corner of Xcode or by pressing Command + R, which will run the application against the currently selected target. After a brief compile and build cycle, the iOS Simulator will open with a master page that contains an empty table, as shown in the following screenshot:

The default MasterDetail application can be used to add items to the list by clicking on the add (+) button on the top-right corner of the screen. This will add a new timestamped entry to the list. When this item is clicked, the screen will switch to the details view, which, in this case, presents the time in the center of the screen:

This kind of master-detail application is common in iOS applications for displaying a top-level list (such as a shopping list, a set of contacts, to-do notes, and so on) while allowing the user to tap to see the details.

There are three main classes in the master-detail application:

The AppDelegate class is defined in the AppDelegate.swift file, and it is responsible for starting the application and setting up the initial state
The MasterViewController class is defined in the MasterViewController.swift file, and it is used to manage the first (master) screen's content and interactions
The DetailViewController class is defined in the DetailViewController.swift file, and it is used to manage the second (detail) screen's content

In order to understand what the classes do in more detail, the next three sections will present each of them in turn.

The code that is generated in this section was created from Xcode 7.0, so the templates might differ slightly if using a different version of Xcode. An exact copy of the corresponding code can be acquired from the Packt website or from this book's GitHub repository at https://github.com/alblue/com.packtpub.swift.essentials/.

The AppDelegate class

The AppDelegate class is the main entry point to the application. When a set of Swift source files are compiled, if the main.swift file exists, it is used as the entry point for the application by running that code. However, to simplify setting up an application for iOS, a @UIApplicationMain special attribute exists that will both synthesize the main method and set up the associated class as the application delegate.

The AppDelegate class for iOS extends the UIResponder class, which is the parent of all the UI content on iOS.
It also adopts two protocols, UIApplicationDelegate and UISplitViewControllerDelegate, which are used to provide callbacks when certain events occur:

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate,
   UISplitViewControllerDelegate {
  var window: UIWindow?
  ...
}

On OS X, the AppDelegate class will be a subclass of NSApplication and will adopt the NSApplicationDelegate protocol.

The synthesized main function calls the UIApplicationMain method, which reads the Info.plist file. If the UILaunchStoryboardName key exists and points to a suitable file (the LaunchScreen.xib interface file in this case), it will be shown as a splash screen before doing any further work. After the rest of the application has loaded, if the UIMainStoryboardFile key exists and points to a suitable file (the Main.storyboard file in this case), the storyboard is launched and the initial view controller is shown. The storyboard has references to the MasterViewController and DetailViewController classes. The window variable is assigned to the storyboard's window.

The application:didFinishLaunchingWithOptions method is called once the application has started. It is passed a reference to the UIApplication instance and a dictionary of options that indicates how the application has been started:

func application(
 application: UIApplication,
 didFinishLaunchingWithOptions launchOptions:
  [NSObject: AnyObject]?) -> Bool {
  // Override point for customization after application launch.
  ...
}

In the sample MasterDetail application, the application:didFinishLaunchingWithOptions method acquires a reference to the splitViewController from the explicitly unwrapped optional window, and the AppDelegate is set as its delegate:

let splitViewController =
 self.window!.rootViewController as! UISplitViewController
splitViewController.delegate = self

The … as! UISplitViewController syntax performs a type cast so that the generic rootViewController can be assigned to the more specific type; in this case, UISplitViewController. An alternative version, as?, provides a runtime-checked cast that returns an optional value, which either contains the value with the correctly casted type or nil otherwise. The difference with as! is that a runtime error will occur if the item is not of the correct type.

Finally, a navigationController is acquired from the splitViewController, which stores an array of viewControllers. This allows the DetailView to display a button on the left-hand side to expand the details view if necessary:

let navigationController = splitViewController.viewControllers
 [splitViewController.viewControllers.count-1]
 as! UINavigationController
navigationController.topViewController
 .navigationItem.leftBarButtonItem =
 splitViewController.displayModeButtonItem()

The only difference this makes is when running on a wide-screen device, such as an iPhone 6 Plus or an iPad, where the views are displayed side by side in landscape mode. This is a new feature in iOS 8 applications. Otherwise, when the device is in portrait mode, it will be rendered as a standard back button:

The method concludes with return true to let the OS know that the application has opened successfully.

The MasterViewController class

The MasterViewController class is responsible for coordinating the data that is shown on the first screen (when the device is in portrait orientation) or the left half of the screen (when a large device is in landscape orientation).
This is rendered with a UITableView, and data is coordinated through the parent UITableViewController class:

class MasterViewController: UITableViewController {
  var detailViewController: DetailViewController? = nil
  var objects = [AnyObject]()
  override func viewDidLoad() {…}
  func insertNewObject(sender: AnyObject) {…}
  …
}

The viewDidLoad method is used to set up or initialize the view after it has loaded. In this case, a UIBarButtonItem is created so that the user can add new entries to the table. The UIBarButtonItem takes a @selector in Objective-C, which in Swift is treated as a string literal convertible (so that "insertNewObject:" will result in a call to the insertNewObject method). Once created, the button is added to the navigation on the right-hand side, using the standard .Add type, which will be rendered as a + sign on the screen:

override func viewDidLoad() {
  super.viewDidLoad()
  self.navigationItem.leftBarButtonItem = self.editButtonItem()
  let addButton = UIBarButtonItem(
    barButtonSystemItem: .Add, target: self,
    action: "insertNewObject:")
  self.navigationItem.rightBarButtonItem = addButton
  if let split = self.splitViewController {
    let controllers = split.viewControllers
    self.detailViewController = (controllers[controllers.count-1]
      as! UINavigationController).topViewController
      as? DetailViewController
  }
}

The objects are NSDate values, and are stored inside the class as an Array of AnyObject elements. The insertNewObject method is called when the + button is pressed, and it creates a new NSDate instance, which is then inserted into the array. The sender event is passed as an argument of the AnyObject type, which will be a reference to the UIBarButtonItem (although it is not needed or used here):

func insertNewObject(sender: AnyObject) {
  objects.insert(NSDate(), atIndex: 0)
  let indexPath = NSIndexPath(forRow: 0, inSection: 0)
  self.tableView.insertRowsAtIndexPaths(
   [indexPath], withRowAnimation: .Automatic)
}

The UIBarButtonItem class was created before blocks were available on iOS devices, so it uses the older Objective-C @selector mechanism. A future release of iOS may provide an alternative that takes a block, in which case Swift functions can be passed instead.

The parent class contains a reference to the tableView, which is automatically created by the storyboard. When an item is inserted, the tableView is notified that a new object is available. Standard UITableViewController methods are used to access the data from the array:

override func numberOfSectionsInTableView(
 tableView: UITableView) -> Int {
  return 1
}
override func tableView(tableView: UITableView,
 numberOfRowsInSection section: Int) -> Int {
  return objects.count
}
override func tableView(tableView: UITableView,
 cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
  let cell = tableView.dequeueReusableCellWithIdentifier(
   "Cell", forIndexPath: indexPath)
  let object = objects[indexPath.row] as! NSDate
  cell.textLabel!.text = object.description
  return cell
}

The numberOfSectionsInTableView function returns 1 in this case, but a tableView can have multiple sections; for example, to permit a contacts application to have a different section for A, B, C through Z. The numberOfRowsInSection method returns the number of elements in each section; in this case, as there is only one section, the number of objects in the array.
The reason why each method is called tableView and takes a tableView argument is a result of the Objective-C heritage of UIKit. The Objective-C convention combined the method name with the first named argument, so the original method was [delegate tableView:UITableView, numberOfRowsInSection:NSInteger]. As a result, the name of the first argument is reused as the name of the method in Swift.

The cellForRowAtIndexPath method is expected to return a UITableViewCell for an object. In this case, a cell is acquired from the tableView using the dequeueReusableCellWithIdentifier method (which caches cells as they go off screen to save object instantiation), and then the textLabel is populated with the object's description (which is a String representation of the object; in this case, the date).

This is enough to display elements in the table, but in order to permit editing (or just removal, as in the sample application), there are some additional protocol methods that are required:

override func tableView(tableView: UITableView,
 canEditRowAtIndexPath indexPath: NSIndexPath) -> Bool {
  return true
}
override func tableView(tableView: UITableView,
 commitEditingStyle editingStyle: UITableViewCellEditingStyle,
 forRowAtIndexPath indexPath: NSIndexPath) {
  if editingStyle == .Delete {
    objects.removeAtIndex(indexPath.row)
    tableView.deleteRowsAtIndexPaths([indexPath],
     withRowAnimation: .Fade)
  }
}

The canEditRowAtIndexPath method returns true if the row is editable; if all the rows can be edited, then this will return true for all the values. The commitEditingStyle method takes a table, a path, and a style, which is an enumeration that indicates which operation occurred. In this case, UITableViewCellEditingStyle.Delete is passed in order to delete the item from both the underlying object array and the tableView. (The enumeration can be abbreviated to .Delete because the type of editingStyle is known to be UITableViewCellEditingStyle.)

The DetailViewController class

The detail view is shown when an element is selected in the MasterViewController. The transition is managed by the storyboard controller; the views are connected with a segue (pronounced seg-way; the product of the same name took its name from the word segue, which is derived from the Italian word for "follows").

To pass the selected item between controllers, a property called detailItem exists in the DetailViewController class. When the value is changed, additional code is run, which is implemented in a didSet property notification:

class DetailViewController: UIViewController {
  var detailItem: AnyObject? {
    didSet {
      self.configureView()
    }
  }
  …
}

When DetailViewController has the detailItem set, the configureView method will be invoked. The didSet body is run after the value has been changed, but before the setter returns to the caller. This is triggered by the segue in the MasterViewController:

class MasterViewController: UITableViewController {
  …
  override func prepareForSegue(
   segue: UIStoryboardSegue, sender: AnyObject?) {
    super.prepareForSegue(segue, sender: sender)
    if segue.identifier == "showDetail" {
      if let indexPath =
       self.tableView.indexPathForSelectedRow() {
        let object = objects[indexPath.row] as! NSDate
        let controller = (segue.destinationViewController
         as! UINavigationController)
         .topViewController as! DetailViewController
        controller.detailItem = object
        controller.navigationItem.leftBarButtonItem =
         self.splitViewController?.displayModeButtonItem()
        controller.navigationItem.leftItemsSupplementBackButton =
         true
      }
    }
  }
}

The prepareForSegue method is called when the user selects an item in the table. In this case, it grabs the selected row index from the table and uses this to acquire the selected date object. The navigation controller hierarchy is searched to acquire the DetailViewController, and once this has been obtained, the selected value is set with controller.detailItem = object, which triggers the update.

The label is ultimately displayed in the DetailViewController through the configureView method, which stamps the description of the object onto the label in the center:

class DetailViewController {
  ...
  @IBOutlet weak var detailDescriptionLabel: UILabel!
  func configureView() {
    if let detail: AnyObject = self.detailItem {
      if let label = self.detailDescriptionLabel {
        label.text = detail.description
      }
    }
  }
}

The configureView method is called both when the detailItem is changed and when the view is loaded for the first time. If the detailItem has not been set, then this has no effect. The implementation introduces some new concepts, which are worth highlighting:

The @IBOutlet attribute indicates that the property will be exposed in interface builder and can be wired up to the object instance.
The weak attribute indicates that the property will not store a strong reference to the object; in other words, the detail view will not own the object but merely reference it. Generally, all @IBOutlet references should be declared as weak to avoid cyclic dependency references.
The type is defined as UILabel!, which is an implicitly unwrapped optional. When accessed, it performs an explicit unwrapping of the optional value; otherwise, the @IBOutlet will be wired up as a UILabel? optional type. Implicitly unwrapped optional types are used when the variable is known to never be nil at runtime, which is usually the case for @IBOutlet references. Generally, all @IBOutlet references should be implicitly unwrapped optionals.

Summary

In this article, we saw two sample iOS applications: one in which the UI was created programmatically, and another in which the UI was loaded from a storyboard. Together with an overview of classes, protocols, and enums, and an explanation of how iOS applications start, this article gives a springboard to understand the Xcode templates that are frequently used to start new projects.

To learn more about Swift 2, you can refer to the following books published by Packt Publishing (https://www.packtpub.com/):

Swift 2 Blueprints (https://www.packtpub.com/application-development/swift-2-blueprints)
Mastering Swift 2 (https://www.packtpub.com/application-development/mastering-swift-2)
Swift 2 Design Patterns (https://www.packtpub.com/application-development/swift-2-design-patterns)

Resources for Article:

Further resources on this subject:
Your First Swift App [article]
C-Quence – A Memory Game [article]
Exploring Swift [article]
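To tie the protocol and enum material above together, here is a small recap sketch (an addition, not from the original article): the Card enum adopts the standard library's CustomStringConvertible protocol via an extension, so printing a card uses our own switch-based description.

extension Card: CustomStringConvertible {
  var description: String {
    switch self {
    case .Face(let rank, let suit):
      // rank and suit render using their default case names here
      return "\(rank) of \(suit)"
    case .Joker:
      return "Joker"
    }
  }
}

print(aceOfSpades) // prints the card using the description above

Declaring the conformance in an extension keeps the original enum definition unchanged, which is a common way to retrofit a protocol onto an existing type.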

Training and Visualizing a neural network with R

Oli Huggins
16 Feb 2016
8 min read
The development of a neural network is inspired by human brain activities. As such, this type of network is a computational model that mimics the pattern of the human mind. In contrast to this, support vector machines first map input data into a high-dimensional feature space defined by the kernel function, and find the optimum hyperplane that separates the training data by the maximum margin. In short, we can think of support vector machines as a linear algorithm in a high-dimensional space. In this article, we will cover:

Training a neural network with neuralnet
Visualizing a neural network trained by neuralnet

(For more resources related to this topic, see here.)

Training a neural network with neuralnet

A neural network is constructed with an interconnected group of nodes, which involves the input, connected weights, processing element, and output. Neural networks can be applied to many areas, such as classification, clustering, and prediction. To train a neural network in R, you can use neuralnet, which is built to train multilayer perceptrons in the context of regression analysis, and contains many flexible functions to train forward neural networks. In this recipe, we will introduce how to use neuralnet to train a neural network.

Getting ready

In this recipe, we will use the iris dataset as our example dataset. We will first split the iris dataset into training and testing datasets.

How to do it...

Perform the following steps to train a neural network with neuralnet:

First, load the iris dataset and split the data into training and testing datasets:

> data(iris)
> ind <- sample(2, nrow(iris), replace = TRUE, prob=c(0.7, 0.3))
> trainset = iris[ind == 1,]
> testset = iris[ind == 2,]

Then, install and load the neuralnet package:

> install.packages("neuralnet")
> library(neuralnet)

Add the columns versicolor, setosa, and virginica based on the name matched value in the Species column:

> trainset$setosa = trainset$Species == "setosa"
> trainset$virginica = trainset$Species == "virginica"
> trainset$versicolor = trainset$Species == "versicolor"

Next, train the neural network with the neuralnet function with three hidden neurons in each layer. Notice that the results may vary with each training, so you might not get the same result:

> network = neuralnet(versicolor + virginica + setosa ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width, trainset, hidden=3)
> network
Call: neuralnet(formula = versicolor + virginica + setosa ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width, data = trainset, hidden = 3)
1 repetition was calculated.

          Error  Reached Threshold  Steps
1  0.8156100175     0.009994274769  11063

Now, you can view the summary information by accessing the result.matrix attribute of the built neural network model:

> network$result.matrix
                                        1
error                      0.815610017474
reached.threshold          0.009994274769
steps                  11063.000000000000
Intercept.to.1layhid1      1.686593311644
Sepal.Length.to.1layhid1   0.947415215237
Sepal.Width.to.1layhid1   -7.220058260187
Petal.Length.to.1layhid1   1.790333443486
Petal.Width.to.1layhid1    9.943109233330
Intercept.to.1layhid2      1.411026063895
Sepal.Length.to.1layhid2   0.240309549505
Sepal.Width.to.1layhid2    0.480654059973
Petal.Length.to.1layhid2   2.221435192437
Petal.Width.to.1layhid2    0.154879347818
Intercept.to.1layhid3     24.399329878242
Sepal.Length.to.1layhid3   3.313958088512
Sepal.Width.to.1layhid3    5.845670010464
Petal.Length.to.1layhid3  -6.337082722485
Petal.Width.to.1layhid3  -17.990352566695
Intercept.to.versicolor   -1.959842102421
1layhid.1.to.versicolor    1.010292389835
1layhid.2.to.versicolor    0.936519720978
1layhid.3.to.versicolor    1.023305801833
Intercept.to.virginica    -0.908909982893
1layhid.1.to.virginica    -0.009904635231
1layhid.2.to.virginica     1.931747950462
1layhid.3.to.virginica    -1.021438938226
Intercept.to.setosa        1.500533827729
1layhid.1.to.setosa       -1.001683936613
1layhid.2.to.setosa       -0.498758815934
1layhid.3.to.setosa       -0.001881935696

Lastly, you can view the generalized weights by accessing them in the network:

> head(network$generalized.weights[[1]])

How it works...

The neural network is a network made up of artificial neurons (or nodes). There are three types of neurons within the network: input neurons, hidden neurons, and output neurons. In the network, neurons are connected; the connection strength between neurons is called a weight. If the weight is greater than zero, it is in an excitation status; otherwise, it is in an inhibition status. Input neurons receive the input information; the higher the input value, the greater the activation. Then, the activation value is passed through the network in regard to the weights and transfer functions in the graph. The hidden neurons (or output neurons) then sum up the activation values and modify the summed values with the transfer function. The activation value then flows through hidden neurons and stops when it reaches the output nodes. As a result, one can use the output value from the output neurons to classify the data.

Artificial Neural Network

The advantages of a neural network are: firstly, it can detect a nonlinear relationship between the dependent and independent variables. Secondly, one can efficiently train large datasets using the parallel architecture. Thirdly, it is a nonparametric model, so one can eliminate errors in the estimation of parameters. The main disadvantages of a neural network are that it often converges to a local minimum rather than the global minimum, and that it might over-fit when the training process goes on for too long.

In this recipe, we demonstrate how to train a neural network. First, we split the iris dataset into training and testing datasets, and then install the neuralnet package and load the library into an R session. Next, we add the columns versicolor, setosa, and virginica based on the name matched value in the Species column, respectively. We then use the neuralnet function to train the network model. Besides specifying the label (the columns where the name equals versicolor, virginica, and setosa) and the training attributes in the function, we also configure the number of hidden neurons (vertices) as three in each layer.
Turning back to the output, we can examine the basic information about the training process and the trained network saved in network. The output message shows that the training process needed 11,063 steps until all the absolute partial derivatives of the error function were lower than 0.01 (specified in the threshold). The error is the value of the error function, which by default is the sum of squared errors. To see detailed information, you can access the result.matrix of the built neural network to see the estimated weights. The output reveals that the estimated weights range from -18 to 24.40; the intercepts of the first hidden layer are 1.69, 1.41, and 24.40, and the four weights leading to the first hidden neuron are estimated as 0.95 (Sepal.Length), -7.22 (Sepal.Width), 1.79 (Petal.Length), and 9.94 (Petal.Width). Lastly, the trained neural network includes generalized weights, which express the effect of each covariate. In this recipe, the model generates 12 generalized weights, which are the combinations of the four covariates (Sepal.Length, Sepal.Width, Petal.Length, Petal.Width) with the three responses (setosa, virginica, versicolor).

See also

For a more detailed introduction to neuralnet, one can refer to the following paper: Günther, F., and Fritsch, S. (2010). neuralnet: Training of neural networks. The R Journal, 2(1), 30-38.

Visualizing a neural network trained by neuralnet

The neuralnet package provides the plot function to visualize a built neural network and the gwplot function to visualize the generalized weights. In the following recipe, we will cover how to use these two functions.

Getting ready

You need to have completed the previous recipe by training a neural network and have all the basic information saved in network.

How to do it...

Perform the following steps to visualize the neural network and the generalized weights:

You can visualize the trained neural network with the plot function:

> plot(network)

Figure 10: The plot of the trained neural network

Furthermore, you can use gwplot to visualize the generalized weights, one panel per covariate:

> par(mfrow=c(2,2))
> gwplot(network, selected.covariate="Petal.Width")
> gwplot(network, selected.covariate="Sepal.Width")
> gwplot(network, selected.covariate="Petal.Length")
> gwplot(network, selected.covariate="Sepal.Length")

Figure 11: The plot of generalized weights

How it works...

In this recipe, we demonstrate how to visualize the trained neural network and the generalized weights of each trained attribute. The plot of the network includes the estimated weights, intercepts, and basic information about the training process. At the bottom of the figure, one can find the overall error and the number of steps required to converge. If all the generalized weights of a covariate are close to zero on the plot, it means the covariate has little effect. However, if the overall variance is greater than one, it means the covariate has a nonlinear effect.

See also

For more information about gwplot, one can use the help function to access the following document:

> ?gwplot

Summary

To learn more about machine learning with R, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

Machine Learning with R (Second Edition)
Mastering Machine Learning with R

Resources for Article: Further resources on this subject: Introduction to Machine Learning with R [article] Hive Security [article] Spark - Architecture and First Program [article]
Swift for Open Source Developers

Packt
16 Feb 2016
43 min read
Apple announced Swift at WWDC 2014 as a new programming language that combines experience with the Objective-C platform and advances in dynamic and statically typed languages over the last few decades. Before Swift, most code written for iOS and OS X applications was in Objective-C, a set of object-oriented extensions to the C programming language. Swift aims to build upon the patterns and frameworks of Objective-C but with a more modern runtime and automatic memory management. In December 2015, Apple open sourced Swift at https://swift.org and made binaries available for Linux as well as OS X. The content in this article can be run on either Linux or OS X. Developing iOS applications requires Xcode and OS X. In this article, we will present the following topics:

How to use the Swift REPL to evaluate Swift code
The different types of Swift literals
How to use arrays and dictionaries
Functions and the different types of function arguments
Compiling and running Swift from the command line

(For more resources related to this topic, see here.)

Open source Swift

Apple released Swift as an open source project in December 2015, hosted at https://github.com/apple/swift/ and related repositories. Information about the open source version of Swift is available from the https://swift.org site. The open-source version of Swift is similar from a runtime perspective on both Linux and OS X; however, the set of libraries available differs between the two platforms. For example, the Objective-C runtime was not present in the initial release of Swift for Linux; as a result, several methods that are delegated to Objective-C implementations are not available. "hello".hasPrefix("he") compiles and runs successfully on OS X and iOS but is a compile error in the first Swift release for Linux. In addition to missing functions, there is also a different set of modules (frameworks) between the two platforms. The base functionality on OS X and iOS is provided by the Darwin module, but on Linux, the base functionality is provided by the Glibc module. The Foundation module, which provides many of the data types that are outside of the base collections library, is implemented in Objective-C on OS X and iOS, but on Linux, it is a clean-room reimplementation in Swift. As Swift on Linux evolves, more of this functionality will be filled in, but it is worth testing on both OS X and Linux, specifically if cross-platform functionality is required. Finally, although the Swift language and core libraries have been open sourced, this does not apply to the iOS libraries or other functionality in Xcode. As a result, it is not possible to compile iOS or OS X applications from Linux, and building iOS applications and editing user interfaces is something that must be done in Xcode on OS X.

Getting started with Swift

Swift provides a runtime interpreter that executes statements and expressions. Swift is open source, and precompiled binaries can be downloaded from https://swift.org/download/ for both OS X and Linux platforms. Ports are in progress to other platforms and operating systems but are not supported by the Swift development team. The Swift interpreter is called swift and on OS X can be launched using the xcrun command in a Terminal.app shell:

$ xcrun swift
Welcome to Swift version 2.2!  Type :help for assistance.
>

The xcrun command allows a toolchain command to be executed; in this case, it finds /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/swift.
The swift command sits alongside other compilation tools, such as clang and ld, and permits multiple versions of the commands and libraries on the same machine without conflicting. On Linux, the swift binary can be executed provided that it and the dependent libraries are in a suitable location.

The Swift prompt displays > for new statements and . for a continuation. Statements and expressions that are typed into the interpreter are evaluated and displayed. Anonymous values are given references so that they can be used subsequently:

> "Hello World"
$R0: String = "Hello World"
> 3 + 4
$R1: Int = 7
> $R0
$R2: String = "Hello World"
> $R1
$R3: Int = 7

Numeric literals

Numeric types in Swift can represent both signed and unsigned integral values with sizes of 8, 16, 32, or 64 bits, as well as signed 32 or 64 bit floating point values. Numbers can include underscores to provide better readability; so, 68_040 is the same as 68040:

> 3.141
$R0: Double = 3.141
> 299_792_458
$R1: Int = 299792458
> -1
$R2: Int = -1
> 1_800_123456
$R3: Int = 1800123456

Numbers can also be written in binary, octal, or hexadecimal using the prefixes 0b, 0o (zero and the letter "o"), or 0x. Please note that Swift does not inherit C's use of a leading zero (0) to represent an octal value, unlike Java and JavaScript, which do. Examples include:

> 0b1010011
$R0: Int = 83
> 0o123
$R1: Int = 83
> 0123
$R2: Int = 123
> 0x7b
$R3: Int = 123

Floating point literals

There are three floating point types available in Swift, which use the IEEE754 floating point standard. The Double type represents 64 bits worth of data, while Float stores 32 bits of data. In addition, Float80 is a specialized type that stores 80 bits worth of data (Float32 and Float64 are available as aliases for Float and Double, respectively, although they are not commonly used in Swift programs). Some CPUs internally use 80 bit precision to perform math operations, and the Float80 type allows this accuracy to be used in Swift. Not all architectures support Float80 natively, so this should be used sparingly.

By default, floating point values in Swift use the Double type. As floating point representation cannot represent some numbers exactly, some values will be displayed with a rounding error; for example:

> 3.141
$R0: Double = 3.141
> Float(3.141)
$R1: Float = 3.1400003

Floating point values can be specified in decimal or hexadecimal. Decimal floating point uses e as the exponent for base 10, whereas hexadecimal floating point uses p as the exponent for base 2. A value of AeB has the value A*10^B and a value of 0xApB has the value A*2^B. For example:

> 299.792458e6
$R0: Double = 299792458
> 299.792_458_e6
$R1: Double = 299792458
> 0x1p8
$R2: Double = 256
> 0x1p10
$R3: Double = 1024
> 0x4p10
$R4: Double = 4096
> 1e-1
$R5: Double = 0.10000000000000001
> 1e-2
$R6: Double = 0.01
> 0x1p-1
$R7: Double = 0.5
> 0x1p-2
$R8: Double = 0.25
> 0xAp-1
$R9: Double = 5

String literals

Strings can contain escaped characters, Unicode characters, and interpolated expressions. Escaped characters start with a backslash (\) and can be one of the following:

\\: This is a literal backslash
R vs Pandas

Packt
16 Feb 2016
1 min read
This article focuses on comparing pandas with R, the statistical package on which much of pandas' functionality is modeled. It is intended as a guide for R users who wish to use pandas, and for users who wish to replicate in pandas functionality that they have seen in R code. It focuses on some key features available to R users and shows how to achieve similar functionality in pandas through some illustrative examples. This article assumes that you have the R statistical package installed. If not, it can be downloaded and installed from here: http://www.r-project.org/. By the end of the article, data analysis users should have a good grasp of the data analysis capabilities of R as compared to pandas, enabling them to transition to or use pandas, should they need to. The various topics addressed in this article include the following:

R data types and their pandas equivalents
Slicing and selection
Arithmetic operations on datatype columns
Aggregation and GroupBy
Matching
Split-apply-combine
Melting and reshaping
Factors and categorical data
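As a quick taste of the style of comparison this article makes, here is a small sketch (not an excerpt from the article itself; the df object and its column names are purely illustrative) of an aggregation in R alongside its pandas counterpart:

> df <- data.frame(key = c("a", "a", "b"), value = c(1, 2, 3))
> aggregate(value ~ key, data = df, FUN = mean)   # mean of value per key
  key value
1   a  1.5
2   b  3.0
> # pandas equivalent (illustrative): df.groupby('key')['value'].mean()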
Metal API: Get closer to the bare metal with Metal API

Packt
16 Feb 2016
7 min read
The Metal framework supports 3D graphics rendering and other data computing commands. Metal is used in game design to reduce CPU overhead. In this article, we'll cover:

CPU/GPU framework levels
Graphics pipeline overview

(For more resources related to this topic, see here.)

The Apple Metal API and the graphics pipeline

One of the rules, if not the golden rule, of modern video game development is to keep our games running constantly at 60 frames per second or greater. If developing for VR devices and applications, this is of even more importance, as dropped frame rates could lead to a sickening and game-ending experience for the player. In the past, being lean was the name of the game; hardware limitations constrained not only how much could be drawn to the screen but also how much memory a game could hold. This limited the number of scenes, characters, effects, and levels. Game development was built more with an engineering mindset, so developers made things work with what little they had. Many of the games on 8-bit systems and earlier had levels and characters that were only different because of elaborate sprite slicing and recoloring. Over time, advances in hardware, particularly in GPUs, allowed for richer graphical experiences. This led to the advent of computation-heavy 3D models, real-time lighting, robust shaders, and other effects that we can use to make our games present an even greater player experience, all while trying to stuff everything into that precious 0.016666-second (60 Hz) window. To get everything out of the hardware and combat the clash between a designer's need to make the best-looking experience and the engineering reality of hardware limitations in even today's CPUs/GPUs, Apple developed the Metal API.

CPU/GPU framework levels

Metal is what's known as a low-level GPU API. When we build our games on the iOS platform, there are different levels between the machine code in our GPU/CPU hardware and what we use to design our games. This goes for any piece of computer hardware we work with, be it Apple or others. For example, on the CPU side of things, at the very base of it all is the machine code. The next level up is the assembly language of the chipset. Assembly language differs based on the CPU chipset and allows the programmer to be as detailed as determining the individual registers to swap data in and out of in the processor. Just a few lines of a for loop in C/C++ would take a decent number of lines to code in assembly. The benefit of working in the lower levels of code is that we can make our games run much faster. However, most mid-to-upper-level languages/APIs are made to work well enough that this isn't a necessity anymore.

Game developers have coded in assembly even long after the very early days of game development. In the late 1990s, the game developer Chris Sawyer created his game, RollerCoaster Tycoon™, almost entirely in the x86 assembly language! Assembly can be a great challenge for any enthusiastic developer who loves to tinker with the inner workings of computer hardware.

Moving up the chain, we have where C/C++ code would be, and just above that is where we'd find Swift and Objective-C code. Languages such as Ruby and JavaScript, which some developers can use in Xcode, are yet another level up. That was about the CPU; now on to the GPU. The Graphics Processing Unit (GPU) is the coprocessor that works with the CPU to make the calculations for the visuals we see on the screen.
The following diagram shows the GPU, the APIs that work with the GPU, and possible iOS games that can be made based on which framework/API is chosen. Like the CPU, the lowest level is the processor's machine code. To work as close to the GPU's machine code as possible, many developers would use Silicon Graphics' OpenGL API. For mobile devices, such as the iPhone and iPad, it would be the OpenGL subset, OpenGL ES. Apple provides a helper framework/library to OpenGL ES named GLKit. GLKit helps simplify some of the shader logic and lessen the manual work that goes into working with the GPU at this level. For many game developers, this was originally practically the only option for making 3D games on the iOS device family, though some use of iOS's Core Graphics, Core Animation, and UIKit frameworks was perfectly fine for simpler games.

Not too long into the lifespan of the iOS device family, third-party frameworks aimed at game development came into play. Using OpenGL ES as its base, and thus sitting directly one level above it, is the Cocos2D framework. This was actually the framework used in the original release of Rovio's Angry Birds™ series of games back in 2009. Eventually, Apple realized how important gaming was to the success of the platform and made its own game-centric frameworks: the SpriteKit and SceneKit frameworks. They too, like Cocos2D/3D, sat directly above OpenGL ES. When we made SKSpriteNode or SCNNode objects in our Xcode projects, up until the introduction of Metal, OpenGL operations were being used to draw these objects in the update/render cycle behind the scenes. As of iOS 9, SpriteKit and SceneKit use Metal's rendering pipeline to process graphics to the screen. If the device is older, they revert to OpenGL ES as the underlying graphics API.

Graphics pipeline overview

Let's take a look at the graphics pipeline to get an idea, at least at an upper level, of what the GPU is doing during a single rendered frame. We can imagine the graphical data of our games being divided into two main categories:

Vertex data: This is the position information of where on the screen this data can be rendered. Vector/vertex data can be expressed as points, lines, or triangles. Remember the old saying about video game graphics: "everything is a triangle." All of those polygons in a game are just collections of triangles described by their point/vector positions. The GPU's Vertex Processing Unit (VPU) handles this data.

Rendering/pixel data: Controlled by the GPU's rasterizer, this is the data that tells the GPU how the objects, positioned by the vertex data, will be colored/shaded on the screen. For example, this is where color channels, such as RGB and alpha, are handled. In short, it's the pixel data and what we actually see on the screen.

Here's a diagram showing the graphics pipeline overview: The graphics pipeline is the sequence of steps it takes to have our data rendered to the screen. The previous diagram is a simplified example of this process. Here are the main sections that can make up the pipeline:

Buffer objects: These are known as Vertex Buffer Objects in OpenGL and are of the class MTLBuffer in the Metal API. These are the objects we create in our code that are sent from the CPU to the GPU for primitive processing. These objects contain data such as positions, normal vectors, alphas, colors, and more.
Primitive processing: These are the steps in the GPU that take our buffer objects, break down the various vertex and rendering data in those objects, and then draw this information to the frame buffer, which is the screen output we see on the device.

Before we go over the steps of primitive processing done in Metal, we should first understand the history and basics of shaders.

Summary

This article gave us an overview of CPU/GPU framework levels and the graphics pipeline. We also learned that, to overcome hardware limitations even with today's CPUs/GPUs, Apple developed the Metal API. To learn more about iOS for game development, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended:

iOS Game Development By Example: https://www.packtpub.com/game-development/ios-game-development-example
Sparrow iOS Game Framework Beginner's Guide: https://www.packtpub.com/game-development/sparrow-ios-game-framework-beginner%E2%80%99s-guide

Resources for Article: Further resources on this subject: Android and iOS Apps Testing at a Glance [article] Signing up to be an iOS developer [article] Introduction to GameMaker: Studio [article]
Machine learning and Python – the Dream Team

Packt
16 Feb 2016
3 min read
In this article, we will be learning more about machine learning and Python. Machine learning (ML) teaches machines how to carry out tasks by themselves. It is that simple. The complexity comes with the details, and that is most likely the reason you are reading this article. (For more resources related to this topic, see here.)

Machine learning and Python – the dream team

The goal of machine learning is to teach machines (software) to carry out tasks by providing them a couple of examples (how to do or not do a task). Let us assume that each morning when you turn on your computer, you perform the same task of moving e-mails around so that only those e-mails belonging to a particular topic end up in the same folder. After some time, you feel bored and think of automating this chore. One way would be to start analyzing your brain and writing down all the rules your brain processes while you are shuffling your e-mails. However, this will be quite cumbersome and always imperfect. While you will miss some rules, you will over-specify others. A better and more future-proof way would be to automate this process by choosing a set of e-mail meta information and body/folder name pairs and let an algorithm come up with the best rule set. The pairs would be your training data, and the resulting rule set (also called a model) could then be applied to future e-mails, which we have not yet seen. This is machine learning in its simplest form.

Of course, machine learning (often also referred to as data mining or predictive analysis) is not a brand new field in itself. Quite the contrary, its success over recent years can be attributed to the pragmatic way of using rock-solid techniques and insights from other successful fields, for example, statistics. There, the purpose is for us humans to get insights into the data by learning more about the underlying patterns and relationships. As you read more and more about successful applications of machine learning (you have checked out kaggle.com already, haven't you?), you will see that applied statistics is a common field among machine learning experts.

As you will see later, the process of coming up with a decent ML approach is never a waterfall-like process. Instead, you will see yourself going back and forth in your analysis, trying out different versions of your input data on diverse sets of ML algorithms. It is this explorative nature that lends itself perfectly to Python. Being an interpreted high-level programming language, it may seem that Python was designed specifically for the process of trying out different things. What is more, it does this very fast. Sure enough, it is slower than C or similar statically typed programming languages; nevertheless, with a myriad of easy-to-use libraries that are often written in C, you don't have to sacrifice speed for agility.

Summary

In this article, we learned about machine learning and its goals. To learn more, please refer to the following books:

Building Machine Learning Systems with Python - Second Edition (https://www.packtpub.com/big-data-and-business-intelligence/building-machine-learning-systems-python-second-edition)
Expert Python Programming (https://www.packtpub.com/application-development/expert-python-programming)

Resources for Article: Further resources on this subject: Python Design Patterns in Depth – The Observer Pattern [article] Python Design Patterns in Depth: The Factory Pattern [article] Customizing IPython [article]
Audio and Animation: Hand in Hand

Packt
16 Feb 2016
5 min read
In this article, we are going to learn techniques to match audio pitch to animation speed. This is crucial when editing videos and creating animated content. (For more resources related to this topic, see here.)

Matching the audio pitch to the animation speed

Many objects sound higher in pitch when accelerated and lower when slowed down: car engines, fan coolers, vinyl record players; the list goes on. If you want to simulate this kind of sound effect in an animated object that can have its speed changed dynamically, follow this recipe.

Getting ready

For this, you'll need an animated 3D object and an audio clip. Please use the files animatedRocket.fbx and engineSound.wav, available in the 1362_09_01 folder, which you can find in the code bundle of the book Unity 5.x Cookbook at https://www.packtpub.com/game-development/unity-5x-cookbook.

How to do it...

To change the pitch of an audio clip according to the speed of an animated object, please follow these steps:

Import the animatedRocket.fbx file into your Project. Select the animatedRocket.fbx file in the Project view. Then, from the Inspector view, check its Import Settings. Select Animations, then select the clip Take 001, and make sure to check the Loop Time option. Click on the Apply button, shown as follows, to save the changes:

The reason why we didn't need to check the Loop Pose option is that our animation already loops in a seamless fashion. If it didn't, we could have checked that option to automatically create a seamless transition from the last to the first frame of the animation.

Add the animatedRocket GameObject to the scene by dragging it from the Project view into the Hierarchy view. Import the engineSound.wav audio clip. Select the animatedRocket GameObject. Then, drag engineSound from the Project view into the Inspector view, adding it as an Audio Source for that object. In the Audio Source component of animatedRocket, check the box for the Loop option, as shown in the following screenshot:

We need to create a Controller for our object. In the Project view, click on the Create button and select Animator Controller. Name it rocketController. Double-click on the rocketController object to open the Animator window, as shown. Then, right-click on the gridded area and select the Create State | Empty option from the contextual menu. Name the new state spin and set Take 001 as its motion in the Motion field:

From the Hierarchy view, select animatedRocket. Then, in the Animator component (in the Inspector view), set rocketController as its Controller and make sure that the Apply Root Motion option is unchecked, as shown:

In the Project view, create a new C# Script and rename it ChangePitch.
Open the script in your editor and replace everything with the following code:

using UnityEngine;

public class ChangePitch : MonoBehaviour
{
    public float accel = 0.05f;              // speed change applied per frame while a key is held
    public float minSpeed = 0.0f;
    public float maxSpeed = 2.0f;
    public float animationSoundRatio = 1.0f; // how strongly the pitch follows the animation speed
    private float speed = 0.0f;
    private Animator animator;
    private AudioSource audioSource;

    void Start()
    {
        // Cache the components and initialize the pitch from the current speed
        animator = GetComponent<Animator>();
        audioSource = GetComponent<AudioSource>();
        speed = animator.speed;
        AccelRocket(0f);
    }

    void Update()
    {
        if (Input.GetKey(KeyCode.Alpha1))
            AccelRocket(accel);   // accelerate
        if (Input.GetKey(KeyCode.Alpha2))
            AccelRocket(-accel);  // decelerate
    }

    public void AccelRocket(float accel)
    {
        speed += accel;
        speed = Mathf.Clamp(speed, minSpeed, maxSpeed);
        animator.speed = speed;
        float soundPitch = animator.speed * animationSoundRatio;
        audioSource.pitch = Mathf.Abs(soundPitch);
    }
}

Save your script and add it as a component to the animatedRocket GameObject. Play the scene and change the animation speed by pressing the 1 (accelerate) and 2 (decelerate) keys on your alphanumeric keyboard. The audio pitch will change accordingly.

How it works...

In the Start() method, besides storing the Animator and AudioSource components in variables, we call the AccelRocket() function, passing 0 as an argument, only so that the function calculates the resulting pitch for the Audio Source. In the Update() function, the if (Input.GetKey(KeyCode.Alpha1)) and if (Input.GetKey(KeyCode.Alpha2)) lines detect whenever the 1 or 2 keys are being pressed on the alphanumeric keyboard and call the AccelRocket() function, passing +accel or -accel as an argument. The AccelRocket() function, in its turn, increments speed with the received argument. It then uses the Mathf.Clamp() command to limit the new speed value between the minimum and maximum speeds set by the user. Then, it changes the Animator speed and the Audio Source pitch according to the new speed's absolute value (the reason for making it an absolute value is to keep the pitch a positive number, even when the animation is reversed by a negative speed value). Also, please note that setting the animation speed, and therefore the sound pitch, to 0 will cause the sound to stop, making it clear that stopping the object's animation also prevents the engine sound from playing.

There's more...

Here is some information on how to fine-tune and customize this recipe.

Changing the Animation/Sound Ratio

If you want the audio clip pitch to be more or less affected by the animation speed, change the value of the Animation Sound Ratio parameter.

Accessing the function from other scripts

The AccelRocket() function was made public so that it can be accessed from other scripts. As an example, we have included the ExtChangePitch.cs script in the 1362_09_01 folder. Try attaching this script to the Main Camera object and use it to control the speed by clicking the left and right mouse buttons.

Summary

In this article, we learned how to match audio pitch to animation speed and how to change the Animation/Sound Ratio. To learn more, please refer to the following books:

Learning Unity 2D Game Development by Example: https://www.packtpub.com/game-development/learning-unity-2d-game-development-example
Unity Game Development Blueprints: https://www.packtpub.com/game-development/unity-game-development-blueprints
Getting Started with Unity: https://www.packtpub.com/game-development/getting-started-unity
Resources for Article:   Further resources on this subject: The Vertex Functions [article] Lights and Effects [article] Virtual Machine Concepts [article]

Data mining

Packt
16 Feb 2016
11 min read
Let's talk about data mining. What is data mining? Data mining is the discovery of a model in data; it's also called exploratory data analysis, and it discovers useful, valid, unexpected, and understandable knowledge from data. Some goals are shared with other sciences, such as statistics, artificial intelligence, machine learning, and pattern recognition. Data mining is frequently treated as an algorithmic problem. Clustering, classification, association rule learning, anomaly detection, regression, and summarization are all tasks belonging to data mining. (For more resources related to this topic, see here.)

Data mining methods can be summarized into two main categories of data mining problems: feature extraction and summarization.

Feature extraction

This is to extract the most prominent features of the data and ignore the rest. Here are some examples:

Frequent itemsets: This model makes sense for data that consists of baskets of small sets of items.
Similar items: Sometimes your data looks like a collection of sets, and the objective is to find pairs of sets that have a relatively large fraction of their elements in common. It's a fundamental problem of data mining.

Summarization

The target is to summarize the dataset succinctly and approximately, for example by clustering, which is the process of examining a collection of points (data) and grouping the points into clusters according to some measure. The goal is that points in the same cluster have a small distance from one another, while points in different clusters are at a large distance from one another.

The data mining process

There are two popular processes that define the data mining process from different perspectives, and the more widely adopted one is CRISP-DM:

Cross-Industry Standard Process for Data Mining (CRISP-DM)
Sample, Explore, Modify, Model, Assess (SEMMA), which was developed by the SAS Institute, USA

CRISP-DM

There are six phases in this process, as shown in the following figure; it is not rigid, but often involves a great deal of backtracking:

Let's look at the phases in detail:

Business understanding: This task includes determining business objectives, assessing the current situation, establishing data mining goals, and developing a plan.
Data understanding: This task evaluates data requirements and includes initial data collection, data description, data exploration, and the verification of data quality.
Data preparation: Data resources identified in the last step are selected, cleaned, and then built into the desired form and format.
Modeling: Visualization and cluster analysis are useful for initial analysis. The initial association rules can be developed by applying tools such as generalized rule induction. This is a data mining technique to discover knowledge represented as rules that illustrate the data in view of the causal relationship between conditional factors and a given decision/outcome. Models appropriate to the data type can also be applied.
Evaluation: The results should be evaluated in the context specified by the business objectives in the first step. This leads to the identification of new needs and, in turn, reverts to the prior phases in most cases.
Deployment: Data mining can be used both to verify previously held hypotheses and for knowledge discovery.
SEMMA

Here is an overview of the SEMMA process. Let's look at its steps in detail:

Sample: In this step, a portion of a large dataset is extracted
Explore: To gain a better understanding of the dataset, unanticipated trends and anomalies are searched for in this step
Modify: The variables are created, selected, and transformed to focus the model construction process
Model: A variable combination of models is searched to predict a desired outcome
Assess: The findings from the data mining process are evaluated for their usefulness and reliability

Social network mining

As we mentioned before, data mining finds a model in data, and the mining of a social network finds the model in the graph data in which the social network is represented. Social network mining is one application of web data mining; the popular applications are social sciences and bibliometry, PageRank and HITS, shortcomings of the coarse-grained graph model, enhanced models and techniques, evaluation of topic distillation, and measuring and modeling the Web.

Social network

When it comes to the discussion of social networks, you will think of Facebook, Google+, LinkedIn, and so on. The essential characteristics of a social network are as follows:

There is a collection of entities that participate in the network. Typically, these entities are people, but they could be something else entirely.
There is at least one relationship between the entities of the network. On Facebook, this relationship is called friends. Sometimes, the relationship is all-or-nothing; two people are either friends or they are not. However, in other examples of social networks, the relationship has a degree. This degree could be discrete, for example, friends, family, acquaintances, or none as in Google+. It could be a real number; an example would be the fraction of the average day that two people spend talking to each other.
There is an assumption of nonrandomness or locality. This condition is the hardest to formalize, but the intuition is that relationships tend to cluster. That is, if entity A is related to both B and C, then there is a higher probability than average that B and C are related.

Here are some varieties of social networks:

Telephone networks: The nodes in this network are phone numbers, which represent individuals
E-mail networks: The nodes represent e-mail addresses, which represent individuals
Collaboration networks: The nodes here represent individuals who published research papers; an edge connecting two nodes represents two individuals who published one or more papers jointly

Social networks are modeled as undirected graphs. The entities are the nodes, and an edge connects two nodes if the nodes are related by the relationship that characterizes the network. If there is a degree associated with the relationship, this degree is represented by labeling the edges.

Here is an example in which Coleman's High School Friendship Data from the sna R package is used for analysis. The data is from research on friendship ties between 73 boys in a high school in one chosen academic year; reported ties for all informants are provided for two time points (fall and spring). The dataset's name is coleman, which is an array type in the R language. A node denotes a specific student and a line represents the tie between two students.
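To make this concrete, here is a minimal sketch of how the coleman data could be loaded and drawn with the sna package (assuming sna is installed; the data is stored as a 2 x 73 x 73 array holding the fall and spring adjacency matrices):

> library(sna)
> data(coleman)
> dim(coleman)          # two time points, 73 students each
[1]  2 73 73
> gplot(coleman[1,,])   # draw the fall-term friendship ties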
Text mining

Text mining is based on text data and is concerned with extracting relevant information from large natural language texts, and searching for interesting relationships, syntactical correlations, or semantic associations between the extracted entities or terms. It is also defined as the automatic or semiautomatic processing of text. The related algorithms include text clustering, text classification, natural language processing, and web mining.

One of the characteristics of text mining is text mixed with numbers, or, from another point of view, the hybrid data types contained in the source dataset. The text is usually a collection of unstructured documents, which will be preprocessed and transformed into a numerical and structured representation. After the transformation, most data mining algorithms can be applied with good effect.

The process of text mining is described as follows:

Text mining starts with preparing the text corpus, which comprises reports, letters, and so forth
The second step is to build a semistructured text database based on the text corpus
The third step is to build a term-document matrix that includes the term frequencies
The final result is further analysis, such as text analysis, semantic analysis, information retrieval, and information summarization

Information retrieval and text mining

Information retrieval helps users find information, most commonly associated with online documents. It focuses on the acquisition, organization, storage, retrieval, and distribution of information. The task of Information Retrieval (IR) is to retrieve relevant documents in response to a query. The fundamental technique of IR is measuring similarity. The key steps in IR are as follows:

Specify a query. The following are some of the types of queries:

Keyword query: This is expressed by a list of keywords to find documents that contain at least one keyword
Boolean query: This is constructed with Boolean operators and keywords
Phrase query: This is a query that consists of a sequence of words that makes up a phrase
Proximity query: This is a relaxed version of the phrase query and can be a combination of keywords and phrases
Full document query: This query is a full document, used to find other documents similar to the query document
Natural language questions: This query helps to express users' requirements as a natural language question

Search the document collection.
Return the subset of relevant documents.

Mining text for prediction

Prediction from text is just as ambitious as prediction from numerical data and has similar problems to those associated with numerical classification. It is generally a classification issue. Prediction from text needs prior experience, from samples, to learn how to draw a prediction on new documents. Once text is transformed into numeric data, prediction methods can be applied.

Web data mining

Web mining aims to discover useful information or knowledge from the web hyperlink structure, page content, and usage data. The Web is one of the biggest data sources to serve as input for data mining applications. Web data mining is based on IR, machine learning (ML), statistics, pattern recognition, and data mining. Web mining is not purely a data mining problem because of the heterogeneous and semistructured or unstructured web data, although many data mining approaches can be applied to it.
Web mining tasks can be divided into at least three types:

Web structure mining: This helps to find useful information or valuable structural summaries about sites and pages from hyperlinks
Web content mining: This helps to mine useful information from web page contents
Web usage mining: This helps to discover user access patterns from web logs to detect intrusion, fraud, and attempted break-ins

The algorithms applied to web data mining originate from classical data mining algorithms and share many similarities, such as the mining process; however, differences exist too. The characteristics of web data make it different from ordinary data mining for the following reasons:

The data is unstructured
The information on the Web keeps changing and the amount of data keeps growing
Any data type is available on the Web, both structured and unstructured
Heterogeneous information is on the Web; redundant pages are present too
Vast amounts of information on the Web are linked
The data is noisy

Web data mining is differentiated from data mining by the huge dynamic volume of the source dataset, the big variety of data formats, and so on. The most popular data mining tasks related to the Web are as follows:

Information extraction (IE): The task of IE consists of a couple of steps: tokenization, sentence segmentation, part-of-speech assignment, named entity identification, phrasal parsing, sentential parsing, semantic interpretation, discourse interpretation, template filling, and merging.

Natural language processing (NLP): This researches the linguistic characteristics of human-human and human-machine interaction, models of linguistic competence and performance, frameworks to implement processes with such models, iterative refinement of these processes/models, and evaluation techniques for the resulting systems. Classical NLP tasks related to web data mining are tagging, knowledge representation, ontologies, and so on.

Question answering: The goal is to find the answer to questions in natural language format from a collection of text. It can be categorized into slot filling, limited domain, and open domain, with greater difficulty for the latter. One simple example is answering customer queries based on a predefined FAQ.

Resource discovery: The popular applications are collecting important pages preferentially; similarity search using link topology, topical locality, and focused crawling; and discovering communities.

Summary

We have looked at the broad aspects of data mining here. In case you are wondering what to look at next, check out how to "data mine" in R with Learning Data Mining with R (https://www.packtpub.com/big-data-and-business-intelligence/learning-data-mining-r). If R is not your taste, you can "data mine" with Python as well. Check out Learning Data Mining with Python (https://www.packtpub.com/big-data-and-business-intelligence/learning-data-mining-python).

Resources for Article: Further resources on this subject: Machine Learning with R [Article] Machine learning and Python – the Dream Team [Article] Machine Learning in Bioinformatics [Article]