Developing a Basic Site with Node.js and Express

Packt
17 Feb 2016
21 min read
In this article, we will continue with the Express framework. It's one of the most popular frameworks available and is certainly a pioneering one. Express is still widely used and several developers use it as a starting point. (For more resources related to this topic, see here.)

Getting acquainted with Express

Express (http://expressjs.com/) is a web application framework for Node.js. It is built on top of Connect (http://www.senchalabs.org/connect/), which means that it implements middleware architecture. In the previous chapter, when exploring Node.js, we discovered the benefit of such a design decision: the framework acts as a plugin system. Thus, we can say that Express is suitable for not only simple but also complex applications because of its architecture. We may use only some of the popular types of middleware or add a lot of features and still keep the application modular.

In general, most projects in Node.js perform two functions: run a server that listens on a specific port, and process incoming requests. Express is a wrapper for these two functionalities. The following is basic code that runs the server:

var http = require('http');
http.createServer(function (req, res) {
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello World\n');
}).listen(1337, '127.0.0.1');
console.log('Server running at http://127.0.0.1:1337/');

This is an example extracted from the official documentation of Node.js. As shown, we use the native module http and run a server on port 1337. There is also a request handler function, which simply sends the Hello World string to the browser. Now, let's implement the same thing but with the Express framework, using the following code:

var express = require('express');
var app = express();
app.get("/", function(req, res, next) {
  res.send("Hello world");
}).listen(1337);
console.log('Server running at http://127.0.0.1:1337/');

It's pretty much the same thing. However, we don't need to specify the response headers or add a new line at the end of the string because the framework does it for us. In addition, we have a bunch of middleware available, which will help us process the requests easily. Express is like a toolbox. We have a lot of tools to do the boring stuff, allowing us to focus on the application's logic and content. That's what Express is built for: saving time for the developer by providing ready-to-use functionalities.
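To get a feel for this plugin-style architecture before going further, here is a minimal sketch (the timestamp logger and the /about route are illustrative additions, not part of the original example) that extends the previous snippet with a custom middleware registered through app.use:

var express = require('express');
var app = express();

// A custom middleware: runs for every request before the route handlers.
app.use(function(req, res, next) {
  console.log(new Date().toISOString() + ' ' + req.method + ' ' + req.url);
  next(); // pass control to the next middleware or route handler
});

app.get('/', function(req, res) {
  res.send('Hello world');
});

app.get('/about', function(req, res) {
  res.send('A second route, handled by the same pipeline');
});

app.listen(1337);
console.log('Server running at http://127.0.0.1:1337/');

Every request now passes through the logging function before reaching a route handler, which is exactly the mechanism the framework's bundled middleware relies on.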
Installing Express

There are two ways to install Express. We'll start with the simple one and then proceed to the more advanced technique. The simpler approach generates a template, which we may use to start writing the business logic directly. In some cases, this can save us time. From another viewpoint, if we are developing a custom application, we need to use custom settings. We can also use the boilerplate, which we get with the advanced technique; however, it may not work for us.

Using package.json

Express is like every other module. It has its own place in the packages register. If we want to use it, we need to add the framework to the package.json file. The ecosystem of Node.js is built on top of the Node Package Manager. It uses the JSON file to find out what we need and installs it in the current directory. So, the content of our package.json file looks like the following code:

{
  "name": "projectname",
  "description": "description",
  "version": "0.0.1",
  "dependencies": {
    "express": "3.x"
  }
}

These are the required fields that we have to add. To be more accurate, we have to say that the mandatory fields are name and version. However, it is always good to add descriptions to our modules, particularly if we want to publish our work in the registry, where such information is extremely important. Otherwise, the other developers will not know what our library is doing. Of course, there are a bunch of other fields, such as contributors, keywords, or development dependencies, but we will stick to limited options so that we can focus on Express.

Once we have our package.json file placed in the project's folder, we have to call npm install in the console. By doing so, the package manager will create a node_modules folder and will store Express and its dependencies there. At the end of the command's execution, the output shows the installed version on the first line, followed by the modules that Express depends on. Now, we are ready to use Express. If we type require('express'), Node.js will start looking for that library inside the local node_modules directory. Since we are not using absolute paths, this is normal behavior. If we skip running the npm install command, we will be prompted with Error: Cannot find module 'express'.

Using a command-line tool

There is a command-line instrument called express-generator. Once we run npm install -g express-generator, we can use it like every other command in our terminal. If you use the framework in several projects, you will notice that some things are repeated. We can even copy and paste them from one application to another, and this is perfectly fine. We may even end up with our own boilerplate and can always start from there. The command-line version of Express does the same thing. It accepts a few arguments and, based on them, creates a skeleton for us. This can be very handy in some cases and will definitely save some time. Let's have a look at the available arguments:

- -h, --help: This outputs usage information.
- -V, --version: This shows the version of Express.
- -e, --ejs: This argument adds EJS template engine support. Normally, we need a library to deal with our templates; writing pure HTML is not very practical. The default engine is set to Jade.
- -H, --hogan: This argument enables Hogan (another template engine).
- -c, --css: If we want to use a CSS preprocessor, this option lets us use LESS (short for Leaner CSS) or Stylus. The default is plain CSS.
- -f, --force: This forces Express to operate on a nonempty directory.

Let's try to generate an Express application skeleton with LESS as a CSS preprocessor. We use the following command:

express --css less myapp

A new myapp folder is created with the generated file structure. We still need to install the dependencies, so cd myapp && npm install is required. We will skip the explanation of the generated directories for now and will move to the created app.js file.
It starts with initializing the module dependencies, as follows:

var express = require('express');
var path = require('path');
var favicon = require('static-favicon');
var logger = require('morgan');
var cookieParser = require('cookie-parser');
var bodyParser = require('body-parser');
var routes = require('./routes/index');
var users = require('./routes/users');
var app = express();

Our framework is express, and path is a native Node.js module. The middleware modules are favicon, logger, cookieParser, and bodyParser. The routes and users modules are custom-made, placed in the project's local folders. Similar to the Model-View-Controller (MVC) pattern, these are the controllers for our application. Immediately after, an app variable is created; this represents the Express library. We use this variable to configure our application. The script continues by setting some key-value pairs. The next code snippet defines the path to our views and the default template engine:

app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'jade');

The framework uses the methods set and get to define the internal properties. In fact, we may use these methods to define our own variables. If the value is a Boolean, we can replace set and get with enable and disable. For example, see the following code:

app.set('color', 'red');
app.get('color'); // red
app.enable('isAvailable');

The next code adds middleware to the framework. We can see the code as follows:

app.use(favicon());
app.use(logger('dev'));
app.use(bodyParser.json());
app.use(bodyParser.urlencoded());
app.use(cookieParser());
app.use(require('less-middleware')({ src: path.join(__dirname, 'public') }));
app.use(express.static(path.join(__dirname, 'public')));

The first middleware serves the favicon of our application. The second is responsible for the output in the console. If we remove it, we will not get information about the incoming requests to our server. The following is a simple output produced by logger:

GET / 200 554ms - 170b
GET /stylesheets/style.css 200 18ms - 110b

The json and urlencoded middleware are related to the data sent along with the request. We need them because they convert the information into an easy-to-use format. There is also a middleware for the cookies. It populates the request object, so we later have access to the required data. The generated app uses LESS as a CSS preprocessor, and we need to configure it by setting the directory containing the .less files. Eventually, we define our static resources, which should be delivered by the server. These are just a few lines, but we've configured the whole application. We may remove or replace some of the modules, and the others will continue working. The next code in the file maps two defined routes to two different handlers, as follows:

app.use('/', routes);
app.use('/users', users);

If the user tries to open a missing page, Express still processes the request by forwarding it to the error handler, as follows:

app.use(function(req, res, next) {
  var err = new Error('Not Found');
  err.status = 404;
  next(err);
});

The framework suggests two types of error handling: one for the development environment and another for the production server. The difference is that the second one hides the stack trace of the error, which should be visible only to the developers of the application.
As we can see in the following code, we are checking the value of the env property and handling the error differently:

// development error handler
if (app.get('env') === 'development') {
  app.use(function(err, req, res, next) {
    res.status(err.status || 500);
    res.render('error', { message: err.message, error: err });
  });
}

// production error handler
app.use(function(err, req, res, next) {
  res.status(err.status || 500);
  res.render('error', { message: err.message, error: {} });
});

At the end, the app.js file exports the created Express instance, as follows:

module.exports = app;

To run the application, we need to execute node ./bin/www. The code requires app.js and starts the server, which by default listens on port 3000.

#!/usr/bin/env node
var debug = require('debug')('my-application');
var app = require('../app');
app.set('port', process.env.PORT || 3000);
var server = app.listen(app.get('port'), function() {
  debug('Express server listening on port ' + server.address().port);
});

The process.env declaration provides access to variables defined in the current development environment. If there is no PORT setting, Express uses 3000 as the value. The required debug module uses a similar approach to find out whether it has to show messages in the console.

Managing routes

The routes are the input of our application. The user visits our page at a specific URL and we have to map this URL to specific logic. In the context of Express, this can be done easily, as follows:

var controller = function(req, res, next) {
  res.send("response");
}
app.get('/example/url', controller);

We even have control over the HTTP method, that is, we are able to catch POST, PUT, or DELETE requests. This is very handy if we want to retain the address path but apply different logic. For example, see the following code:

var getUsers = function(req, res, next) {
  // ...
}
var createUser = function(req, res, next) {
  // ...
}
app.get('/users', getUsers);
app.post('/users', createUser);

The path is still the same, /users, but if we make a POST request to that URL, the application will try to create a new user. Otherwise, if the method is GET, it will return a list of all the registered members. There is also a method, app.all, which we can use to handle all the method types at once. We can see this method in the following code snippet:

app.all('/', serverHomePage);

There is something interesting about the routing in Express. We may pass not just one but many handlers. This means that we can create a chain of functions that correspond to one URL. For example, if we need to know whether the user is logged in, there is a module for that. We can add another method that validates the current user and attaches a variable to the request object, as follows:

var isUserLogged = function(req, res, next) {
  req.userLogged = Validator.isCurrentUserLogged();
  next();
}
var getUser = function(req, res, next) {
  if(req.userLogged) {
    res.send("You are logged in. Hello!");
  } else {
    res.send("Please log in first.");
  }
}
app.get('/user', isUserLogged, getUser);

The Validator class is a class that checks the current user's session. The idea is simple: we add another handler, which acts as an additional middleware. After performing the necessary actions, we call the next function, which passes the flow to the next handler, getUser. Because the request and response objects are the same for all the middleware functions, we have access to the userLogged variable. This is what makes Express really flexible.
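To make the chain easier to experiment with, here is a self-contained sketch of the same pattern; since the Validator class is not shown in this article, a hypothetical in-memory token check stands in for it:

var express = require('express');
var app = express();

// Hypothetical stand-in for the Validator class mentioned above.
var activeTokens = { 'token-123': true };

var isUserLogged = function(req, res, next) {
  // A real check would inspect a session or a signed cookie instead.
  req.userLogged = activeTokens[req.query.token] === true;
  next();
};

var getUser = function(req, res, next) {
  if (req.userLogged) {
    res.send("You are logged in. Hello!");
  } else {
    res.send("Please log in first.");
  }
};

app.get('/user', isUserLogged, getUser);
app.listen(1337);

Opening /user?token=token-123 runs both handlers in order, while any other request falls through to the "Please log in first." branch.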
There are a lot of great features available, but they are optional. At the end of this chapter, we will make a simple website that implements the same logic.

Handling dynamic URLs and the HTML forms

The Express framework also supports dynamic URLs. Let's say we have a separate page for every user in our system. The address of those pages looks like the following:

/user/45/profile

Here, 45 is the unique number of the user in our database. It is, of course, normal to use one route handler for this functionality. We can't really define different functions for every user. The problem can be solved by using the following syntax:

var getUser = function(req, res, next) {
  res.send("Show user with id = " + req.params.id);
}
app.get('/user/:id/profile', getUser);

The route is actually like a regular expression with variables inside. Later, that variable is accessible in the req.params object. We can have more than one variable. Here is a slightly more complex example:

var getUser = function(req, res, next) {
  var userId = req.params.id;
  var actionToPerform = req.params.action;
  res.send("User (" + userId + "): " + actionToPerform)
}
app.get('/user/:id/profile/:action', getUser);

If we open http://localhost:3000/user/451/profile/edit, we see User (451): edit as a response. This is how we can get a nice-looking, SEO-friendly URL.

Of course, sometimes we need to pass data via the GET or POST parameters. We may have a request like http://localhost:3000/user?action=edit. To parse it easily, we need to use the native url module, which has a few helper functions to parse URLs:

var getUser = function(req, res, next) {
  var url = require('url');
  var url_parts = url.parse(req.url, true);
  var query = url_parts.query;
  res.send("User: " + query.action);
}
app.get('/user', getUser);

Once the module parses the given URL, our GET parameters are stored in the .query object. The POST variables are a bit different. We need a new middleware to handle that. Thankfully, Express has one, which is as follows:

app.use(express.bodyParser());
var getUser = function(req, res, next) {
  res.send("User: " + req.body.action);
}
app.post('/user', getUser);

The express.bodyParser() middleware populates the req.body object with the POST data. Of course, we have to change the HTTP method from .get to .post or .all. If we want to read cookies in Express, we may use the cookieParser middleware. Similar to the body parser, it should also be installed and added to the package.json file. The following example sets up the middleware and demonstrates its usage:

var cookieParser = require('cookie-parser');
app.use(cookieParser('optional secret string'));
app.get('/', function(req, res, next){
  var prop = req.cookies.propName
});

Returning a response

Our server accepts requests, does some stuff, and finally sends the response to the client's browser. This can be HTML, JSON, XML, or binary data, among others. As we know, by default, every middleware in Express accepts two objects, request and response. The response object has methods that we can use to send an answer to the client. Every response should have a proper content type or length. Express simplifies the process by providing functions to set HTTP headers and send content to the browser. In most cases, we will use the .send method, as follows:

res.send("simple text");

When we pass a string, the framework sets the Content-Type header to text/html. It's great to know that if we pass an object or array, the content type is application/json.
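For instance, a small sketch like the following (the /api/user route and its payload are made up for illustration) returns JSON without any manual header work:

var express = require('express');
var app = express();

app.get('/api/user', function(req, res) {
  // Passing an object makes Express serialize it to JSON and set
  // Content-Type: application/json automatically.
  res.send({ name: 'admin', roles: ['editor', 'reviewer'] });
});

app.listen(1337);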
If we develop an API, the response status code is probably going to be important for us. With Express, we are able to set it as in the following code snippet:

res.send(404, 'Sorry, we cannot find that!');

It's even possible to respond with a file from our hard disk. If we don't use the framework, we will need to read the file, set the correct HTTP headers, and send the content. However, Express offers the .sendfile method, which wraps all these operations as follows:

res.sendfile(__dirname + "/images/photo.jpg");

Again, the content type is set automatically; this time it is based on the filename's extension.

When building websites or applications with a user interface, we normally need to serve HTML. Sure, we can write it manually in JavaScript, but it's good practice to use a template engine. This means we save everything in external files and the engine reads the markup from there. It populates them with some data and, at the end, provides ready-to-show content. In Express, the whole process is summarized in one method, .render. However, to work properly, we have to instruct the framework regarding which template engine to use. We already talked about this at the beginning of this chapter. The following two lines of code set the path to our views and the template engine:

app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'jade');

Let's say we have the following template (/views/index.jade):

h1= title
p Welcome to #{title}

Express provides a method to serve templates. It accepts the path to the template, the data to be applied, and a callback. To render the previous template, we should use the following code:

res.render("index", {title: "Page title here"});

The HTML produced looks as follows:

<h1>Page title here</h1><p>Welcome to Page title here</p>

If we pass a third parameter, a function, we will have access to the generated HTML. However, it will not be sent as a response to the browser.
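As a quick illustration, the following sketch assumes the views directory and Jade engine configured earlier, plus the index.jade template shown above; with the callback in place, we decide ourselves what to do with the markup:

var express = require('express');
var path = require('path');
var app = express();
app.set('views', path.join(__dirname, 'views'));
app.set('view engine', 'jade');

app.get('/', function(req, res, next) {
  res.render('index', { title: 'Page title here' }, function(err, html) {
    if (err) return next(err);
    // The markup is only generated here; nothing has been sent yet.
    console.log('Generated ' + html.length + ' characters of HTML');
    res.send(html); // send it manually, cache it, or post-process it first
  });
});

app.listen(1337);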
The example: a logging system

We've seen the main features of Express. Now let's build something real. The next few pages present a simple website where users can read content only if they are logged in. Let's start and set up the application. We are going to use Express' command-line instrument. It should be installed using npm install -g express-generator. We create a new folder for the example, navigate to it via the terminal, and execute express --css less site. A new directory, site, will be created. If we go there and run npm install, npm will download all the required dependencies. As we saw earlier, by default, we have two routes and two controllers. To simplify the example, we will use only the first one: app.use('/', routes). Let's change the views/index.jade file content to the following template:

doctype html
html
  head
    title= title
    link(rel='stylesheet', href='/stylesheets/style.css')
  body
    h1= title
    hr
    p That's a simple application using Express.

Now, if we run node ./bin/www and open http://127.0.0.1:3000, we will see the page. Jade uses indentation to parse our template, so we should not mix tabs and spaces. Otherwise, we will get an error.

Next, we need to protect our content. We check whether the current user has a session created; if not, a login form is shown. It's the perfect time to create a new middleware. To use sessions in Express, we install an additional module: express-session. We need to open our package.json file and add the following line of code:

"express-session": "~1.0.0"

Once we do that, a quick run of npm install will bring the module to our application. All we have to do is use it. The following code goes to app.js:

var session = require('express-session');
app.use(session({ secret: 'app', cookie: { maxAge: 60000 }}));
var verifyUser = function(req, res, next) {
  if(req.session.loggedIn) {
    next();
  } else {
    res.send("show login form");
  }
}
app.use('/', verifyUser, routes);

Note that we changed the original app.use('/', routes) line. The session middleware is initialized and added to Express. The verifyUser function is called before the page rendering. It uses the req.session object and checks whether there is a loggedIn variable defined and whether its value is true. If we run the script again, we will see that the show login form text is shown for every request. This is because no code sets up the session the way we want it yet. We need a form where users can type their username and password. We will process the result of the form and, if the credentials are correct, the loggedIn variable will be set to true. Let's create a new Jade template, /views/login.jade:

doctype html
html
  head
    title= title
    link(rel='stylesheet', href='/stylesheets/style.css')
  body
    h1= title
    hr
    form(method='post')
      label Username:
      br
      input(type='text', name='username')
      br
      label Password:
      br
      input(type='password', name='password')
      br
      input(type='submit')

Instead of sending just a text with res.send("show login form");, we should render the new template, as follows:

res.render("login", {title: "Please log in."});

We chose POST as the method for the form, so we need to add the middleware that populates the req.body object with the user's data, as follows:

app.use(bodyParser());

Process the submitted username and password as follows:

var verifyUser = function(req, res, next) {
  if(req.session.loggedIn) {
    next();
  } else {
    var username = "admin", password = "admin";
    if(req.body.username === username && req.body.password === password) {
      req.session.loggedIn = true;
      res.redirect('/');
    } else {
      res.render("login", {title: "Please log in."});
    }
  }
}

The valid credentials are set to admin/admin. In a real application, we may need to access a database or get this information from another place. It's not really a good idea to place the username and password in the code; however, for our little experiment, it is fine. The previous code checks whether the passed data matches our predefined values. If everything is correct, it sets the session, after which the user is forwarded to the home page.

Once you log in, you should be able to log out. Let's add a link for that just after the content on the index page (views/index.jade):

a(href='/logout') logout

Once users click on this link, they will be forwarded to a new page. We just need to create a handler for the new route, remove the session, and forward them to the index page where the login form is shown. Here is what our logout handler looks like:

// in app.js
var logout = function(req, res, next) {
  req.session.loggedIn = false;
  res.redirect('/');
}
app.all('/logout', logout);

Setting loggedIn to false is enough to make the session invalid. The redirect sends users to the same content page they came from. However, this time, the content is hidden and the login form pops up.
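For reference, here is one way to condense the whole experiment into a single self-contained sketch. It is not the generated project: it inlines an HTML login form instead of the Jade templates, uses body-parser's urlencoded() helper, and keeps the hard-coded admin/admin credentials purely for demonstration:

var express = require('express');
var bodyParser = require('body-parser');
var session = require('express-session');

var app = express();
app.use(bodyParser.urlencoded({ extended: false }));
app.use(session({ secret: 'app', cookie: { maxAge: 60000 } }));

// Inline form so the sketch runs without the Jade templates.
var loginForm = '<h1>Please log in.</h1><form method="post" action="/">' +
  'Username: <input type="text" name="username"><br>' +
  'Password: <input type="password" name="password"><br>' +
  '<input type="submit"></form>';

var verifyUser = function(req, res, next) {
  if (req.session.loggedIn) return next();
  if (req.body && req.body.username === 'admin' && req.body.password === 'admin') {
    req.session.loggedIn = true;
    return res.redirect('/');
  }
  res.send(loginForm);
};

app.all('/', verifyUser, function(req, res) {
  res.send("<h1>That's a simple application using Express.</h1>" +
           '<a href="/logout">logout</a>');
});

app.all('/logout', function(req, res) {
  req.session.loggedIn = false;
  res.redirect('/');
});

app.listen(3000);
console.log('Listening on http://127.0.0.1:3000/');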
Summary

In this article, we learned about one of the most widely used Node.js frameworks, Express. We discussed its fundamentals, how to set it up, and its main characteristics. The middleware architecture, which we mentioned in the previous chapter, is the base of the library and gives us the power to write complex but, at the same time, flexible applications. The example we used was a simple one: we required a valid session to provide page access. However, it illustrates the usage of the body parser middleware and the process of registering new routes. We also updated the Jade templates and saw the results in the browser.

For more information on Node.js, refer to the following URLs:

- https://www.packtpub.com/web-development/instant-nodejs-starter-instant
- https://www.packtpub.com/web-development/learning-nodejs-net-developers
- https://www.packtpub.com/web-development/nodejs-essentials

Resources for Article:

Further resources on this subject:
- Writing a Blog Application with Node.js and AngularJS [article]
- Testing in Node and Hapi [article]
- Learning Node.js for Mobile Application Development [article]

Create Your First React Element

Packt
17 Feb 2016
22 min read
As many of you know, creating a simple web application today involves writing the HTML, CSS, and JavaScript code. The reason we use three different technologies is that we want to separate three different concerns:

- Content (HTML)
- Styling (CSS)
- Logic (JavaScript)

(For more resources related to this topic, see here.)

This separation works great for creating a web page because, traditionally, we had different people working on different parts of our web page: one person structured the content using HTML and styled it using CSS, and then another person implemented the dynamic behavior of various elements on that web page using JavaScript. It was a content-centric approach.

Today, we mostly don't think of a website as a collection of web pages anymore. Instead, we build web applications that might have only one web page, and that web page does not represent the layout for our content; it represents a container for our web application. Such a web application with a single web page is called (unsurprisingly) a Single Page Application (SPA). You might be wondering, how do we represent the rest of the content in a SPA? Surely, we need to create an additional layout using HTML tags? Otherwise, how does a web browser know what to render? These are all valid questions. Let's take a look at how it works in this article.

Once you load your web page in a web browser, it creates a Document Object Model (DOM) of that web page. A DOM represents your web page in a tree structure, and at this point, it reflects the structure of the layout that you created with only HTML tags. This is what happens regardless of whether you're building a traditional web page or a SPA. The difference between the two is what happens next. If you are building a traditional web page, then you would finish creating your web page's layout. On the other hand, if you are building a SPA, then you would need to start creating additional elements by manipulating the DOM with JavaScript. A web browser provides you with the JavaScript DOM API to do this. You can learn more about it at https://developer.mozilla.org/en-US/docs/Web/API/Document_Object_Model.

However, manipulating (or mutating) the DOM with JavaScript has two issues:

- Your programming style will be imperative if you decide to use the JavaScript DOM API directly. This programming style leads to a code base that is harder to maintain.
- DOM mutations are slow because they cannot be optimized for speed, unlike other JavaScript code.

Luckily, React solves both these problems for us.

Understanding virtual DOM

Why do we need to manipulate the DOM in the first place? Because our web applications are not static. They have a state represented by the user interface (UI) that a web browser renders, and that state can be changed when an event occurs. What kind of events are we talking about? There are two types of events that we're interested in:

- User events: When a user types, clicks, scrolls, resizes, and so on
- Server events: When an application receives data or an error from a server, among others

What happens while handling these events? Usually, we update the data that our application depends on, and that data represents a state of our data model. In turn, when a state of our data model changes, we might want to reflect this change by updating a state of our UI.
Looks like what we want is a way of syncing two different states: the UI state and the data model state. We want one to react to the changes in the other and vice versa. How can we achieve this? One of the ways to sync your application's UI state with an underlying data model's state is two-way data binding. There are different types of two-way data binding. One of them is key-value observing (KVO), which is used in Ember.js, Knockout, Backbone, and iOS, among others. Another one is dirty checking, which is used in Angular.

Instead of two-way data binding, React offers a different solution called the virtual DOM. The virtual DOM is a fast, in-memory representation of the real DOM, and it's an abstraction that allows us to treat JavaScript and the DOM as if they were reactive. Let's take a look at how it works:

1. Whenever the state of your data model changes, React rerenders your UI to a virtual DOM representation.
2. React then calculates the difference between the two virtual DOM representations: the previous virtual DOM representation that was computed before the data was changed and the current virtual DOM representation that was computed after the data was changed. This difference between the two virtual DOM representations is what actually needs to be changed in the real DOM.
3. React updates only what needs to be updated in the real DOM.

The process of finding the difference between the two representations of the virtual DOM and rerendering only the updated patches in the real DOM is fast. Also, the best part is that, as a React developer, you don't need to worry about what actually needs to be rerendered. React allows you to write your code as if you were rerendering the entire DOM every time your application's state changes. If you would like to learn more about the virtual DOM, the rationale behind it, and how it can be compared to data binding, then I would strongly recommend that you watch this very informative talk by Pete Hunt from Facebook at https://www.youtube.com/watch?v=-DX3vJiqxm4.

Now that we've learned about the virtual DOM, let's mutate a real DOM by installing React and creating our first React element.

Installing React

To start using the React library, we need to first install it. I am going to show you two ways of doing this: the simplest one and the one using the npm install command. The simplest way is to add the <script> tag to our ~/snapterest/build/index.html file.

For the development version of React, add the following line:

<script src="https://cdnjs.cloudflare.com/ajax/libs/react/0.14.0-beta3/react.js"></script>

For the production version of React, add the following line:

<script src="https://cdnjs.cloudflare.com/ajax/libs/react/0.14.0-beta3/react.min.js"></script>

For our project, we'll be using the development version of React. At the time of writing, the latest version of the React library is 0.14.0-beta3. Over time, React gets updated, so make sure you use the latest version that is available to you, unless it introduces breaking changes that are incompatible with the code samples provided in this article. Visit https://github.com/fedosejev/react-essentials to learn about any compatibility issues between the code samples and the latest version of React. We all know that Browserify allows us to import all the dependency modules for our application using the require() function.
We'll be using require() to import the React library as well, which means that, instead of adding a <script> tag to our index.html, we'll be using the npm install command to install React. Navigate to the ~/snapterest/ directory and run this command:

npm install --save react@0.14.0-beta3 react-dom@0.14.0-beta3

Then, open the ~/snapterest/source/app.js file in your text editor and import the React and ReactDOM libraries into the React and ReactDOM variables, respectively:

var React = require('react');
var ReactDOM = require('react-dom');

The react package contains methods that are concerned with the key idea behind React, that is, describing what you want to render in a declarative way. On the other hand, the react-dom package offers methods that are responsible for rendering to the DOM. You can read more about why developers at Facebook think it's a good idea to separate the React library into two packages at https://facebook.github.io/react/blog/2015/07/03/react-v0.14-beta-1.html#two-packages. Now we're ready to start using the React library in our project. Next, let's create our first React element!

Creating React Elements with JavaScript

We'll start by familiarizing ourselves with fundamental React terminology. It will help us build a clear picture of what the React library is made of. This terminology will most likely be updated over time, so keep an eye on the official documentation at http://facebook.github.io/react/docs/glossary.html. Just like the DOM is a tree of nodes, React's virtual DOM is a tree of React nodes. One of the core types in React is called ReactNode. It's a building block for a virtual DOM, and it can be any one of these core types:

- ReactElement: This is the primary type in React. It's a light, stateless, immutable, virtual representation of a DOM element.
- ReactText: This is a string or a number. It represents textual content and it's a virtual representation of a text node in the DOM.

ReactElements and ReactTexts are ReactNodes. An array of ReactNodes is called a ReactFragment. You will see examples of all of these in this article. Let's start with an example of a ReactElement. Add the following code to your ~/snapterest/source/app.js file:

var reactElement = React.createElement('h1');
ReactDOM.render(reactElement, document.getElementById('react-application'));

Now your app.js file should look exactly like this:

var React = require('react');
var ReactDOM = require('react-dom');
var reactElement = React.createElement('h1');
ReactDOM.render(reactElement, document.getElementById('react-application'));

Navigate to the ~/snapterest/ directory and run Gulp's default task:

gulp

You will see the following output:

Starting 'default'...
Finished 'default' after 1.73 s

Navigate to the ~/snapterest/build/ directory, and open index.html in a web browser. You will see a blank web page. Open Developer Tools in your web browser and inspect the HTML markup for your blank web page. You should see this line, among others:

<h1 data-reactid=".0"></h1>

Well done! We've just created your first React element. Let's see exactly how we did it. The entry point to the React library is the React object. This object has a method called createElement() that takes three parameters: type, props, and children:

React.createElement(type, props, children);

Let's take a look at each parameter in more detail.

The type parameter

The type parameter can be either a string or a ReactClass. A string could be an HTML tag name such as 'div', 'p', 'h1', and so on.
React supports all the common HTML tags and attributes. For a complete list of HTML tags and attributes supported by React, you can refer to http://facebook.github.io/react/docs/tags-and-attributes.html. A ReactClass is created via the React.createClass() method. The type parameter describes how an HTML tag or a ReactClass is going to be rendered. In our example, we're rendering the h1 HTML tag.

The props parameter

The props parameter is a JavaScript object passed from a parent element to a child element (and not the other way around) with some properties that are considered immutable, that is, those that should not be changed. While creating DOM elements with React, we can pass the props object with properties that represent HTML attributes such as class, style, and so on. For example, run the following code:

var React = require('react');
var ReactDOM = require('react-dom');
var reactElement = React.createElement('h1', { className: 'header' });
ReactDOM.render(reactElement, document.getElementById('react-application'));

The preceding code will create an h1 HTML element with a class attribute set to header:

<h1 class="header" data-reactid=".0"></h1>

Notice that we name our property className rather than class. The reason is that the class keyword is reserved in JavaScript. If you use class as a property name, it will be ignored by React, and a helpful warning message will be printed on the web browser's console:

Warning: Unknown DOM property class. Did you mean className? Use className instead.

You might be wondering what this data-reactid=".0" attribute is doing in our h1 tag. We didn't pass it to our props object, so where did it come from? It is added and used by React to track the DOM nodes; it might be removed in a future version of React.

The children parameter

The children parameter describes what child elements this element should have, if any. A child element can be any type of ReactNode: a virtual DOM element represented by a ReactElement, a string or a number represented by a ReactText, or an array of other ReactNodes, which is also called a ReactFragment. Let's take a look at this example:

var React = require('react');
var ReactDOM = require('react-dom');
var reactElement = React.createElement('h1', { className: 'header' }, 'This is React');
ReactDOM.render(reactElement, document.getElementById('react-application'));

The preceding code will create an h1 HTML element with a class attribute and a text node, This is React:

<h1 class="header" data-reactid=".0">This is React</h1>

The h1 tag is represented by a ReactElement, while the This is React string is represented by a ReactText. Next, let's create a React element with a number of other React elements as its children:

var React = require('react');
var ReactDOM = require('react-dom');

var h1 = React.createElement('h1', { className: 'header', key: 'header' }, 'This is React');
var p = React.createElement('p', { className: 'content', key: 'content' }, "And that's how it works.");
var reactFragment = [ h1, p ];
var section = React.createElement('section', { className: 'container' }, reactFragment);

ReactDOM.render(section, document.getElementById('react-application'));

We've created three React elements: h1, p, and section. h1 and p both have child text nodes, "This is React" and "And that's how it works.", respectively. The section has a child that is an array of two ReactElements, h1 and p, called reactFragment. This is also an array of ReactNodes.
Each ReactElement in the reactFragment array must have a key property that helps React to identify that ReactElement. As a result, we get the following HTML markup:

<section class="container" data-reactid=".0">
  <h1 class="header" data-reactid=".0.$header">This is React</h1>
  <p class="content" data-reactid=".0.$content">And that's how it works.</p>
</section>

Now we understand how to create React elements. What if we want to create a number of React elements of the same type? Does it mean that we need to call React.createElement('type') over and over again for each element of the same type? We can, but we don't need to, because React provides us with a factory function called React.createFactory(). A factory function is a function that creates other functions. This is exactly what React.createFactory(type) does: it creates a function that produces a ReactElement of a given type. Consider the following example:

var React = require('react');
var ReactDOM = require('react-dom');

var listItemElement1 = React.createElement('li', { className: 'item-1', key: 'item-1' }, 'Item 1');
var listItemElement2 = React.createElement('li', { className: 'item-2', key: 'item-2' }, 'Item 2');
var listItemElement3 = React.createElement('li', { className: 'item-3', key: 'item-3' }, 'Item 3');

var reactFragment = [ listItemElement1, listItemElement2, listItemElement3 ];
var listOfItems = React.createElement('ul', { className: 'list-of-items' }, reactFragment);

ReactDOM.render(listOfItems, document.getElementById('react-application'));

The preceding example produces this HTML:

<ul class="list-of-items" data-reactid=".0">
  <li class="item-1" data-reactid=".0.$item-1">Item 1</li>
  <li class="item-2" data-reactid=".0.$item-2">Item 2</li>
  <li class="item-3" data-reactid=".0.$item-3">Item 3</li>
</ul>

We can simplify it by first creating a factory function:

var React = require('react');
var ReactDOM = require('react-dom');
var createListItemElement = React.createFactory('li');
var listItemElement1 = createListItemElement({ className: 'item-1', key: 'item-1' }, 'Item 1');
var listItemElement2 = createListItemElement({ className: 'item-2', key: 'item-2' }, 'Item 2');
var listItemElement3 = createListItemElement({ className: 'item-3', key: 'item-3' }, 'Item 3');
var reactFragment = [ listItemElement1, listItemElement2, listItemElement3 ];
var listOfItems = React.createElement('ul', { className: 'list-of-items' }, reactFragment);
ReactDOM.render(listOfItems, document.getElementById('react-application'));

In the preceding example, we're first calling the React.createFactory() function and passing the li HTML tag name as a type parameter. Then, the React.createFactory() function returns a new function that we can use as a convenient shorthand to create elements of type li. We store a reference to this function in a variable called createListItemElement. Then, we call this function three times, and each time we only pass the props and children parameters, which are unique for each element. Notice that React.createElement() and React.createFactory() both expect an HTML tag name string (such as li) or a ReactClass object as a type parameter. React provides us with a number of built-in factory functions to create the common HTML tags. You can call them from the React.DOM object; for example, React.DOM.ul(), React.DOM.li(), React.DOM.div(), and so on.
Using them, we can simplify our previous example even further:

var React = require('react');
var ReactDOM = require('react-dom');

var listItemElement1 = React.DOM.li({ className: 'item-1', key: 'item-1' }, 'Item 1');
var listItemElement2 = React.DOM.li({ className: 'item-2', key: 'item-2' }, 'Item 2');
var listItemElement3 = React.DOM.li({ className: 'item-3', key: 'item-3' }, 'Item 3');

var reactFragment = [ listItemElement1, listItemElement2, listItemElement3 ];
var listOfItems = React.DOM.ul({ className: 'list-of-items' }, reactFragment);

ReactDOM.render(listOfItems, document.getElementById('react-application'));

Now we know how to create a tree of ReactNodes. However, there is one important line of code that we need to discuss before we can progress further:

ReactDOM.render(listOfItems, document.getElementById('react-application'));

As you might have already guessed, it renders our ReactNode tree to the DOM. Let's take a closer look at how it works.

Rendering React Elements

The ReactDOM.render() method takes three parameters: a ReactElement, a regular DOMElement, and a callback function:

ReactDOM.render(ReactElement, DOMElement, callback);

The ReactElement is the root element in the tree of ReactNodes that you've created. The regular DOMElement is a container DOM node for that tree. The callback is a function executed after the tree is rendered or updated. It's important to note that if this ReactElement was previously rendered to a parent DOM element, then ReactDOM.render() will perform an update on the already rendered DOM tree and only mutate the DOM as necessary to reflect the latest version of the ReactElement. This is why a virtual DOM requires fewer DOM mutations.

So far, we've assumed that we're always creating our virtual DOM in a web browser. This is understandable because, after all, React is a user interface library, and all the user interfaces are rendered in a web browser. Can you think of a case when rendering a user interface on a client would be slow? Some of you might have already guessed that I am talking about the initial page load. The problem with the initial page load is the one I mentioned at the beginning of this article: we're not creating static web pages anymore. Instead, when a web browser loads our web application, it receives only the bare minimum HTML markup that is usually used as a container or a parent element for our web application. Then, our JavaScript code creates the rest of the DOM, but in order for it to do so it often needs to request extra data from the server. However, getting this data takes time. Once this data is received, our JavaScript code starts to mutate the DOM. We know that DOM mutations are slow. How can we solve this problem?

The solution is somewhat unexpected. Instead of mutating the DOM in a web browser, we mutate it on a server, just like we would with our static web pages. A web browser will then receive HTML that fully represents the user interface of our web application at the time of the initial page load. Sounds simple, but we can't mutate the DOM on a server because it doesn't exist outside a web browser. Or can we? We have a virtual DOM that is just JavaScript, and, as you know, using Node.js we can run JavaScript on a server. So technically, we can use the React library on a server, and we can create our ReactNode tree on a server. The question is, how can we render it to a string that we can send to a client?
React has a method called ReactDOMServer.renderToString() just to do this:

var ReactDOMServer = require('react-dom/server');
ReactDOMServer.renderToString(ReactElement);

It takes a ReactElement as a parameter and renders it to its initial HTML. Not only is this faster than mutating a DOM on a client, but it also improves the Search Engine Optimization (SEO) of your web application. Speaking of generating static web pages, we can do this too with React:

var ReactDOMServer = require('react-dom/server');
ReactDOMServer.renderToStaticMarkup(ReactElement);

Similar to ReactDOMServer.renderToString(), this method also takes a ReactElement as a parameter and outputs an HTML string. However, it doesn't create the extra DOM attributes that React uses internally, so it produces shorter HTML strings that we can transfer over the wire quickly.
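To make the server-side flow concrete, here is a small sketch assuming the same react and react-dom 0.14 packages; the plain http server and the h1 element are illustrative choices, not part of the snapterest project:

var http = require('http');
var React = require('react');
var ReactDOMServer = require('react-dom/server');

http.createServer(function (req, res) {
  // Build the ReactNode tree on the server...
  var element = React.createElement('h1', { className: 'header' }, 'This is React');
  // ...and render it to an HTML string instead of mutating a browser DOM.
  var html = ReactDOMServer.renderToString(element);
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end('<div id="react-application">' + html + '</div>');
}).listen(3000);
console.log('Server running at http://127.0.0.1:3000/');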
Now you know not only how to create a virtual DOM tree using React elements, but you also know how to render it on a client and on a server. Our next question is whether we can do it quickly and in a more visual manner.

Creating React Elements with JSX

When we build our virtual DOM by constantly calling the React.createElement() method, it becomes quite hard to visually translate these multiple function calls into a hierarchy of HTML tags. Don't forget that, even though we're working with a virtual DOM, we're still creating a structural layout for our content and user interface. Wouldn't it be great to be able to visualize that layout easily by simply looking at our React code? JSX is an optional HTML-like syntax that allows us to create a virtual DOM tree without using the React.createElement() method. Let's take a look at the previous example that we created without JSX:

var React = require('react');
var ReactDOM = require('react-dom');

var listItemElement1 = React.DOM.li({ className: 'item-1', key: 'item-1' }, 'Item 1');
var listItemElement2 = React.DOM.li({ className: 'item-2', key: 'item-2' }, 'Item 2');
var listItemElement3 = React.DOM.li({ className: 'item-3', key: 'item-3' }, 'Item 3');

var reactFragment = [ listItemElement1, listItemElement2, listItemElement3 ];
var listOfItems = React.DOM.ul({ className: 'list-of-items' }, reactFragment);

ReactDOM.render(listOfItems, document.getElementById('react-application'));

Now let's translate it to the version with JSX:

var React = require('react');
var ReactDOM = require('react-dom');

var listOfItems = <ul className="list-of-items">
                    <li className="item-1">Item 1</li>
                    <li className="item-2">Item 2</li>
                    <li className="item-3">Item 3</li>
                  </ul>;
ReactDOM.render(listOfItems, document.getElementById('react-application'));

As you can see, JSX allows us to write HTML-like syntax in our JavaScript code. More importantly, we can now clearly see what our HTML layout will look like once it's rendered. JSX is a convenience tool and it comes with a price in the form of an additional transformation step. Transformation of the JSX syntax into valid JavaScript syntax must happen before our "invalid" JavaScript code is interpreted. We know that the babelify module transforms our JSX syntax into JavaScript. This transformation happens every time we run our default task from gulpfile.js:

gulp.task('default', function () {
  return browserify('./source/app.js')
        .transform(babelify)
        .bundle()
        .pipe(source('snapterest.js'))
        .pipe(gulp.dest('./build/'));
});

As you can see, the .transform(babelify) function call transforms JSX into JavaScript before bundling it with the other JavaScript code. To test our transformation, run this command:

gulp

Then, navigate to the ~/snapterest/build/ directory, and open index.html in a web browser. You will see a list of three items. The React team has built an online JSX Compiler that you can use to test your understanding of how JSX works at http://facebook.github.io/react/jsx-compiler.html. Using JSX might feel very unusual in the beginning, but it can become a very intuitive and convenient tool to use. The best part is that you can choose whether to use it or not. I found that JSX saves me development time, so I chose to use it in this project that we're building. If you choose not to use it, then I believe that you have learned enough in this article to be able to translate the JSX syntax into JavaScript code with the React.createElement() function calls. If you have a question about what we have discussed in this article, then you can refer to https://github.com/fedosejev/react-essentials and create a new issue.

Summary

We started this article by discussing the issues with single-page applications and how they can be addressed. Then, we learned what a virtual DOM is and how React allows us to build it. We also installed React and created our first React element using only JavaScript. Then, we also learned how to render React elements in a web browser and on a server. Finally, we looked at a simpler way of creating React elements with JSX.

Resources for Article:

Further resources on this subject:
- Changing Views [article]
- Introduction to Akka [article]
- ECMAScript 6 Standard [article]

Using a collider-based system

Packt
17 Feb 2016
10 min read
In this article by Jorge Palacios, the author of the book Unity 5.x Game AI Programming Cookbook, you will learn how to implement agent awareness using a mixed approach that builds on the previously learned sensory-level algorithms. (For more resources related to this topic, see here.)

Seeing using a collider-based system

This is probably the easiest way to simulate vision. We take a collider, be it a mesh or a Unity primitive, and use it as the tool to determine whether an object is inside the agent's vision range or not.

Getting ready

It's important to have a collider component attached to the same game object that uses the script in this recipe, as well as the other collider-based algorithms in this chapter. In this case, it's recommended that the collider be a pyramid-based one in order to simulate a vision cone. The fewer the polygons, the faster it will be in the game.

How to do it…

We will create a component that is able to see enemies nearby by performing the following steps:

1. Create the Visor component, declaring its member variables. It is important to add the corresponding tags into Unity's configuration:

using UnityEngine;
using System.Collections;

public class Visor : MonoBehaviour
{
    public string tagWall = "Wall";
    public string tagTarget = "Enemy";
    public GameObject agent;
}

2. Implement the function for initializing the game object in case the component is already assigned to it:

void Start()
{
    if (agent == null)
        agent = gameObject;
}

3. Declare the function for checking collisions for every frame and build it in the following steps:

public void OnCollisionStay(Collision coll)
{
    // next steps here
}

4. Discard the collision if it is not a target:

string tag = coll.gameObject.tag;
if (!tag.Equals(tagTarget))
    return;

5. Get the game object's position and compute its direction from the Visor:

GameObject target = coll.gameObject;
Vector3 agentPos = agent.transform.position;
Vector3 targetPos = target.transform.position;
Vector3 direction = targetPos - agentPos;

6. Compute its length and create a new ray to be shot soon:

float length = direction.magnitude;
direction.Normalize();
Ray ray = new Ray(agentPos, direction);

7. Cast the created ray and retrieve all the hits:

RaycastHit[] hits;
hits = Physics.RaycastAll(ray, length);

8. Check for any wall between the visor and the target. If there is none, we can proceed to call our functions or develop our behaviors to be triggered:

int i;
for (i = 0; i < hits.Length; i++)
{
    GameObject hitObj;
    hitObj = hits[i].collider.gameObject;
    tag = hitObj.tag;
    if (tag.Equals(tagWall))
        return;
}
// TODO
// target is visible
// code your behaviour below

How it works…

The collider component checks every frame to know whether it is colliding with any game object in the scene. We leverage the optimizations to Unity's scene graph and engine, and focus only on how to handle valid collisions. After checking whether a target object is inside the vision range represented by the collider, we cast a ray in order to check whether it is really visible or there is a wall in between.

Hearing using a collider-based system

In this recipe, we will emulate the sense of hearing by developing two entities: a sound emitter and a sound receiver. It is based on the principles proposed by Millington for simulating a hearing system, and uses the power of Unity colliders to detect receivers near an emitter.

Getting ready

As with the other recipes based on colliders, we will need collider components attached to every object to be checked and rigid body components attached to either emitters or receivers.
How to do it…

We will create the SoundReceiver class for our agents and SoundEmitter for things such as alarms:

1. Create the class for the SoundReceiver object:

using UnityEngine;
using System.Collections;

public class SoundReceiver : MonoBehaviour
{
    public float soundThreshold;
}

2. Define the function for our own behavior to handle the reception of sound:

public virtual void Receive(float intensity, Vector3 position)
{
    // TODO
    // code your own behavior here
}

3. Now, let's create the class for the SoundEmitter object:

using UnityEngine;
using System.Collections;
using System.Collections.Generic;

public class SoundEmitter : MonoBehaviour
{
    public float soundIntensity;
    public float soundAttenuation;
    public GameObject emitterObject;
    private Dictionary<int, SoundReceiver> receiverDic;
}

4. Initialize the dictionary of receivers nearby and emitterObject in case the component is attached directly:

void Start()
{
    receiverDic = new Dictionary<int, SoundReceiver>();
    if (emitterObject == null)
        emitterObject = gameObject;
}

5. Implement the function for adding new receivers to the list when they enter the emitter bounds:

public void OnCollisionEnter(Collision coll)
{
    SoundReceiver receiver;
    receiver = coll.gameObject.GetComponent<SoundReceiver>();
    if (receiver == null)
        return;
    int objId = coll.gameObject.GetInstanceID();
    receiverDic.Add(objId, receiver);
}

6. Also, implement the function for removing receivers from the list when they are out of reach:

public void OnCollisionExit(Collision coll)
{
    SoundReceiver receiver;
    receiver = coll.gameObject.GetComponent<SoundReceiver>();
    if (receiver == null)
        return;
    int objId = coll.gameObject.GetInstanceID();
    receiverDic.Remove(objId);
}

7. Define the function for emitting sound waves to nearby agents:

public void Emit()
{
    GameObject srObj;
    Vector3 srPos;
    float intensity;
    float distance;
    Vector3 emitterPos = emitterObject.transform.position;
    // next step here
}

8. Compute sound attenuation for every receiver:

foreach (SoundReceiver sr in receiverDic.Values)
{
    srObj = sr.gameObject;
    srPos = srObj.transform.position;
    distance = Vector3.Distance(srPos, emitterPos);
    intensity = soundIntensity;
    intensity -= soundAttenuation * distance;
    if (intensity < sr.soundThreshold)
        continue;
    sr.Receive(intensity, emitterPos);
}

How it works…

The collider triggers help register agents in the list of agents assigned to an emitter. The sound emission function then takes into account the agent's distance from the emitter in order to decrease its intensity, using the concept of sound attenuation.

There is more…

We can develop a more flexible algorithm by defining different types of walls that affect sound intensity.
It works by casting rays and adding up their values to the sound attenuation: Create a dictionary to store wall types as strings (using tags) and their corresponding attenuation: public Dictionary<string, float> wallTypes; Reduce sound intensity this way: intensity -= GetWallAttenuation(emitterPos, srPos); Define the function called in the previous step: public float GetWallAttenuation(Vector3 emitterPos, Vector3 receiverPos) { // next steps here } Compute the necessary values for ray casting: float attenuation = 0f; Vector3 direction = receiverPos - emitterPos; float distance = direction.magnitude; direction.Normalize(); Cast the ray and retrieve the hits: Ray ray = new Ray(emitterPos, direction); RaycastHit[] hits = Physics.RaycastAll(ray, distance); For every wall type found via tags, add up its value (stored in the dictionary): int i; for (i = 0; i < hits.Length; i++) { GameObject obj; string tag; obj = hits[i].collider.gameObject; tag = obj.tag; if (wallTypes.ContainsKey(tag)) attenuation += wallTypes[tag]; } return attenuation; Smelling using a collider-based system Smelling can be simulated by computing collisions between an agent and odor particles scattered throughout the game level. Getting ready As with the other recipes based on colliders, we will need collider components attached to every object to be checked. How to do it… We will develop the scripts needed to represent odor particles and agents able to smell: Create the particle's script and define its member variables for computing its lifespan: using UnityEngine; using System.Collections; public class OdorParticle : MonoBehaviour { public float timespan; private float timer; } Implement the Start function for proper validations: void Start() { if (timespan < 0f) timespan = 0f; timer = timespan; } Implement the timer and destroy the object after its life cycle: void Update() { timer -= Time.deltaTime; if (timer < 0f) Destroy(gameObject); } Create the class for representing the sniffer agent: using UnityEngine; using System.Collections; using System.Collections.Generic; public class Smeller : MonoBehaviour { private Vector3 target; private Dictionary<int, GameObject> particles; } Initialize the dictionary for storing odor particles: void Start() { particles = new Dictionary<int, GameObject>(); } Add to the dictionary the colliding objects that have the odor-particle component attached: public void OnCollisionEnter(Collision coll) { GameObject obj = coll.gameObject; OdorParticle op; op = obj.GetComponent<OdorParticle>(); if (op == null) return; int objId = obj.GetInstanceID(); particles.Add(objId, obj); UpdateTarget(); } Release the odor particles from the local dictionary when they are out of the agent's range or are destroyed: public void OnCollisionExit(Collision coll) { GameObject obj = coll.gameObject; int objId = obj.GetInstanceID(); bool isRemoved; isRemoved = particles.Remove(objId); if (!isRemoved) return; UpdateTarget(); } Create the function for computing the odor centroid (the average position) according to the current elements in the dictionary: private void UpdateTarget() { Vector3 centroid = Vector3.zero; foreach (GameObject p in particles.Values) { Vector3 pos = p.transform.position; centroid += pos; } if (particles.Values.Count > 0) centroid /= particles.Values.Count; // average the accumulated positions to get the centroid target = centroid; } Implement the function for retrieving the odor centroid, if any: public Vector3?
GetTargetPosition() { if (particles.Keys.Count == 0) return null; return target; } How it works… Just like the hearing recipe based on colliders, we use the trigger colliders to register odor particles to an agent's perception (implemented using a dictionary). When a particle is included or removed, the odor centroid is computed. However, we implement a function to retrieve that centroid because when no odor particle is registered, the internal centroid position is not updated. There is more… The particle emission logic is left to be implemented according to our game's needs, and it basically instantiates odor-particle prefabs. Also, it is recommended to attach the rigid body components to the agents. Odor particles are prone to be massively instantiated, reducing the game's performance. Seeing using a graph-based system We will now start the recipes oriented toward using graph-based logic in order to simulate senses. Again, we will start by developing the sense of vision. Getting ready It is important to grasp the chapter regarding path finding in order to understand the inner workings of the graph-based recipes. How to do it… We will just implement a new file: Create the class for handling vision: using UnityEngine; using System.Collections; using System.Collections.Generic; public class VisorGraph : MonoBehaviour { public int visionReach; public GameObject visorObj; public Graph visionGraph; } Validate the visor object: void Start() { if (visorObj == null) visorObj = gameObject; } Define and start building the function needed to detect visibility of a given set of nodes: public bool IsVisible(int[] visibilityNodes) { int vision = visionReach; int src = visionGraph.GetNearestVertex(visorObj); HashSet<int> visibleNodes = new HashSet<int>(); Queue<int> queue = new Queue<int>(); queue.Enqueue(src); } Implement a breadth-first search algorithm: while (queue.Count != 0) { if (vision == 0) break; int v = queue.Dequeue(); List<int> neighbours = visionGraph.GetNeighbors(v); foreach (int n in neighbours) { if (visibleNodes.Contains(n)) continue; queue.Enqueue(n); visibleNodes.Add(n); } } Compare the set of nodes to be checked with the set of nodes reached by the vision system: foreach (int vn in visibilityNodes) { if (visibleNodes.Contains(vn)) return true; } Return false if there is no match between the two sets of nodes: return false; How it works… The recipe uses the breadth-first search algorithm in order to discover nodes within its vision reach, and then compares this set of nodes with the set of nodes where the agents reside. Summary In this article, we explained some algorithms for simulating senses and agent awareness. Resources for Article: Further resources on this subject: Animation and Unity3D Physics[article] Unity 3-0 Enter the Third Dimension[article] Animation features in Unity 5[article]

article-image-setting-kubernetes-cluster
Packt
17 Feb 2016
12 min read
Save for later

Setting up a Kubernetes Cluster

Packt
17 Feb 2016
12 min read
In this article, we will cover the following recipes: Setting up a Kubernetes cluster Scaling up and down in a Kubernetes cluster Setting up WordPress with a Kubernetes cluster (For more resources related to this topic, see here.) Introduction Running Docker on a single host may be good for the development environment, but the real value comes when we span multiple hosts. However, this is not an easy task. You have to orchestrate these containers. So, in this article, we'll discuss Kubernetes, an orchestration tool. Google started Kubernetes (http://kubernetes.io/) for Docker orchestration. Kubernetes provides mechanisms for application deployment, scheduling, updating, maintenance, and scaling. Setting up a Kubernetes cluster Kubernetes is an open source container orchestration tool across multiple nodes in the cluster. Currently, it only supports Docker. It was started by Google, and now developers from other companies are contributing to it. It provides mechanisms for application deployment, scheduling, updating, maintenance, and scaling. Kubernetes' auto-placement, auto-restart, auto-replication features make sure that the desired state of the application is maintained, which is defined by the user. Users define applications through YAML or JSON files, which we'll see later in the recipe. These YAML and JSON files also contain the API Version (the apiVersion field) to identify the schema. The following is the architectural diagram of Kubernetes: https://raw.githubusercontent.com/GoogleCloudPlatform/kubernetes/master/docs/architecture.png Let's look at some of the key components and concepts of Kubernetes. Pods: A pod, which consists of one or more containers, is the deployment unit of Kubernetes. Each container in a pod shares different namespaces with other containers in the same pod. For example, each container in a pod shares the same network namespace, which means they can all communicate through localhost. Node/Minion: A node, which was previously known as a minion, is a worker node in the Kubernetes cluster and is managed through master. Pods are deployed on a node, which has the necessary services to run them: docker, to run containers kubelet, to interact with master proxy (kube-proxy), which connects the service to the corresponding pod Master: Master hosts cluster-level control services such as the following: API server: This has RESTful APIs to interact with master and nodes. This is the only component that talks to the etcd instance. Scheduler: This schedules jobs in clusters, such as creating pods on nodes. Replication controller: This ensures that the user-specified number of pod replicas is running at any given time. To manage replicas with replication controller, we have to define a configuration file with the replica count for a pod. Master also communicates with etcd, which is a distributed key-value pair. etcd is used to store the configuration information, which is used by both master and nodes. The watch functionality of etcd is used to notify the changes in the cluster. etcd can be hosted on master or on a different set of systems. Services: In Kubernetes, each pod gets its own IP address, and pods are created and destroyed every now and then based on the replication controller configuration. So, we cannot rely on a pod's IP address to cater an app. To overcome this problem, Kubernetes defines an abstraction, which defines a logical set of pods and policies to access them. This abstraction is called a service. 
Labels are used to define the logical set, which a service manages. Labels: Labels are key-value pairs that can be attached to objects like, using which we select a subset of objects. For example, a service can select all pods with the label mysql. Volumes: A volume is a directory that is accessible to the containers in a pod. It is similar to Docker volumes but not the same. Different types of volumes are supported in Kubernetes, some of which are EmptyDir (ephemeral), HostDir, GCEPersistentDisk, and NFS. Active development is happening to support more types of volumes. More details can be found at https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/volumes.md. Kubernetes can be installed on VMs, physical machines, and the cloud. For the complete matrix, take a look at https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs/getting-started-guides. In this recipe, we'll see how to install it on VMs, using Vagrant with VirtualBox provider. This recipe and the following recipes on Kubernetes, were tried on v0.17.0 of Kubernetes. Getting ready Install latest Vagrant >= 1.6.2 from http://www.vagrantup.com/downloads.html. Install the latest VirtualBox from https://www.virtualbox.org/wiki/Downloads. How to do it… Run the following command to set up Kubernetes on Vagrant VMs: $ export KUBERNETES_PROVIDER=vagrant $ export VAGRANT_DEFAULT_PROVIDER=virtualbox $ curl -sS https://get.k8s.io | bash How it works… The bash script downloaded from the curl command, first downloads the latest Kubernetes release and then runs the ./kubernetes/cluster/kube-up.sh bash script to set up the Kubernetes environment. As we have specified Vagrant as KUBERNETES_PROVIDER, the script first downloads the Vagrant images and then, using Salt (http://saltstack.com/), configures one master and one node (minion) VM. Initial setup takes a few minutes to run. Vagrant creates a credential file in ~/.kubernetes_vagrant_auth for authentication. There's more… Similar to ./cluster/kube-up.sh, there are other helper scripts to perform different operations from the host machine itself. Make sure you are in the kubernetes directory, which was created with the preceding installation, while running the following commands: Get the list of nodes: $ ./cluster/kubectl.sh get nodes Get the list of pods: $ ./cluster/kubectl.sh get pods Get the list of services: $ ./cluster/kubectl.sh get services Get the list of replication controllers: $ ./cluster/kubectl.sh get replicationControllers Destroy the vagrant cluster: $ ./cluster/kube-down.sh Then bring back the vagrant cluster: $ ./cluster/kube-up.sh You will see some pods, services, and replicationControllers listed, as Kubernetes creates them for internal use. See also Setting up the Vagrant environment at https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/vagrant.md The Kubernetes user guide at https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/user-guide.md Kubernetes API conventions at https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/api-conventions.md Scaling up and down in a Kubernetes cluster In the previous section, we mentioned that the replication controller ensures that the user-specified number of pod replicas is running at any given time. To manage replicas with the replication controller, we have to define a configuration file with the replica count for a pod. This configuration can be changed at runtime. 
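Before changing the replica count, it helps to see what such a configuration file can look like. The following is a minimal, hedged sketch of a replication controller definition for an nginx pod; the names, labels, and apiVersion are illustrative assumptions rather than values taken from this recipe (the schema shown is the later v1 form, which is structurally very close to the v1beta3 schema used by the release covered here):
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx
spec:
  replicas: 3            # the user-specified number of pod replicas to keep running
  selector:
    app: my-nginx        # pods carrying this label are managed by this controller
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
        - name: my-nginx
          image: nginx
          ports:
            - containerPort: 80
Changing the replicas value in such a file, or resizing the controller as shown in the following steps, is all that is needed for Kubernetes to create or remove pods until the desired count is met.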
Getting ready Make sure the Kubernetes setup is running as described in the preceding recipe and that you are in the kubernetes directory, which was created with the preceding installation. How to do it… Start the nginx container with a replica count of 3: $ ./cluster/kubectl.sh run-container my-nginx --image=nginx --replicas=3 --port=80 This will start three replicas of the nginx container. List the pods to get the status: $ ./cluster/kubectl.sh get pods Get the replication controller configuration: $ ./cluster/kubectl.sh get replicationControllers As you can see, we have a my-nginx controller, which has a replica count of 3. There is a replication controller for kube-dns, which we will explore in the next recipe. Request the replication controller service to scale down to a replica count of 1 and update the replication controller: $ ./cluster/kubectl.sh resize rc my-nginx --replicas=1 $ ./cluster/kubectl.sh get rc Get the list of pods to verify; you should see only one pod for nginx: $ ./cluster/kubectl.sh get pods How it works… We request the replication controller service running on master to update the replicas for a pod, which updates the configuration and requests nodes/minions to act accordingly to honor the resizing. There's more… Get the services: $ ./cluster/kubectl.sh get services As you can see, we don't have any service defined for our nginx containers started earlier. This means that though we have a container running, we cannot access it from outside because the corresponding service is not defined. See also Setting up the Vagrant environment at https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/vagrant.md The Kubernetes user guide at https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/user-guide.md Setting up WordPress with a Kubernetes cluster In this recipe, we will use the WordPress example given in the Kubernetes GitHub (https://github.com/GoogleCloudPlatform/kubernetes/tree/master/examples/mysql-wordpress-pd). The given example requires some changes, as we'll be running it on the Vagrant environment instead of the default Google Compute Engine. Also, instead of using the helper functions (for example, <kubernetes>/cluster/kubectl.sh), we'll log in to master and use the kubectl binary. Getting ready Make sure the Kubernetes cluster has been set up as described in the previous recipe. In the kubernetes directory that was downloaded during the setup, you will find an examples directory that contains many examples. Let's go to the mysql-wordpress-pd directory: $ cd kubernetes/examples/mysql-wordpress-pd $ ls *.yaml mysql-service.yaml mysql.yaml wordpress-service.yaml wordpress.yaml These .yaml files describe pods and services for mysql and wordpress respectively. In the pods files (mysql.yaml and wordpress.yaml), you will find the section on volumes and the corresponding volumeMounts. The original example assumes that you have access to Google Compute Engine and that you have the corresponding storage setup. For simplicity, we will not set that up and will instead use ephemeral storage with the EmptyDir volume option. For reference, our mysql.yaml will look similar to the sketch shown a little further below; make a similar change to wordpress.yaml. How to do it… With SSH, log in to the master node and look at the running pods: $ vagrant ssh master $ kubectl get pods The kube-dns-7eqp5 pod consists of three containers: etcd, kube2sky, and skydns, which are used to configure an internal DNS server for service name to IP resolution.
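Referring back to the Getting ready section, the following is a rough, hedged sketch of what mysql.yaml can look like once the persistent disk volume is swapped for emptyDir. It is not copied from the recipe: the apiVersion, the root password, and some field names are assumptions based on the upstream mysql-wordpress-pd example and may need adjusting for your Kubernetes release; the important part is the emptyDir entry under volumes.
apiVersion: v1
kind: Pod
metadata:
  name: mysql
  labels:
    name: mysql
spec:
  containers:
    - name: mysql
      image: mysql
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: yourpassword   # placeholder value, not from the recipe
      ports:
        - containerPort: 3306
      volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
  volumes:
    - name: mysql-persistent-storage
      emptyDir: {}              # ephemeral storage instead of a GCE persistent disk
emptyDir gives the pod scratch storage that lives only as long as the pod, which is enough for this walkthrough but means the database content is lost when the pod goes away.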
We'll see kube-dns in action later in this recipe. The Vagrantfile used in this example is created so that the kubernetes directory that we created earlier is shared under /vagrant on the VM, which means that the changes we made to the host system will be visible here as well. From the master node, create the mysql pod and check the running pods: $ kubectl create -f /vagrant/examples/mysql-wordpress-pd/mysql.yaml $ kubectl get pods As we can see, a new pod with the mysql name has been created and it is running on host 10.245.1.3, which is our node (minion). Now let's create the service for mysql and look at all the services: $ kubectl create -f /vagrant/examples/mysql-wordpress-pd/mysql-service.yaml $ kubectl get services As we can see, a service named mysql has been created. Each service has a virtual IP. Other than the kubernetes services, we see a service named kube-dns, which is used as the service name for the kube-dns pod we saw earlier. Similar to mysql, let's create a pod for wordpress: $ kubectl create -f /vagrant/examples/mysql-wordpress-pd/wordpress.yaml With this command, there are a few things happening in the background: The wordpress image gets downloaded from the official Docker registry and the container runs. By default, whenever a pod starts, information about all the existing services is exported as environment variables. For example, if we log in to the wordpress pod and look for MYSQL-specific environment variables, we will see variables such as MYSQL_SERVICE_HOST and MYSQL_SERVICE_PORT. When the WordPress container starts, it runs the /entrypoint.sh script, which looks for the environment variables mentioned earlier to start the service. https://github.com/docker-library/wordpress/blob/master/docker-entrypoint.sh. With the kube-dns service, the PHP scripts of wordpress are able to resolve the mysql service name and proceed forward. After starting the pod, the last step here is to set up the wordpress service. In the default example, you will see an entry like the following in the service file (/vagrant/examples/mysql-wordpress-pd/wordpress-service.yaml): createExternalLoadBalancer: true This has been written keeping in mind that this example will run on Google Compute Engine, so it is not valid here. In place of that, we will need to make an entry like the following: publicIPs: - 10.245.1.3 We have replaced the load-balancer entry with the public IP of the node, which in our case is the IP address of the node (minion). So, the wordpress service file will contain the publicIPs entry instead of the load-balancer entry. To start the wordpress service, run the following command from the master node: $ kubectl create -f /vagrant/examples/mysql-wordpress-pd/wordpress-service.yaml We can see here that our service is also available through the node (minion) IP. To verify that everything works fine, we can install the links package on master, with which we can browse a URL through the command line and connect to the public IP we mentioned: $ sudo yum install links -y $ links 10.245.1.3 With this, you should see the wordpress installation page. How it works… In this recipe, we first created a mysql pod and service. Later, we connected it to a wordpress pod, and to access it, we created a wordpress service. Each YAML file has a kind key that defines the type of object it is. For example, in pod files, the kind is set to pod and in service files, it is set to service. There's more… In this example setup, we have only one Node (minion).
If you log in to it, you will see all the running containers: $ vagrant ssh minion-1 $ sudo docker ps In this example, we have not configured replication controllers. We can extend this example by creating them. See also Setting up the Vagrant environment at https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/getting-started-guides/vagrant.md The Kubernetes User Guide at https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/user-guide.md The documentation on kube-dns at https://github.com/GoogleCloudPlatform/kubernetes/tree/master/cluster/addons/dns Summary Docker and its ecosystem are evolving at a very high pace, so it is very important to understand the basics and build from the ground up in order to adapt to new concepts and tools. For more information you can refer to: https://www.packtpub.com/virtualization-and-cloud/beginning-docker-video https://www.packtpub.com/virtualization-and-cloud/getting-started-kubernetes Resources for Article: Further resources on this subject: Integration with Continuous Delivery [Article] Network and Data Management for Containers [Article] CoreOS Networking and Flannel Internals [Article]

Packt
17 Feb 2016
14 min read
Save for later

"D-J-A-N-G-O... The D is silent." - Authentication in Django

Packt
17 Feb 2016
14 min read
The authentication module saves a lot of time in creating space for users. The following are the main advantages of this module: The main actions related to users are simplified (connection, account activation, and so on) Using this system ensures a certain level of security Access restrictions to pages can be done very easily (For more resources related to this topic, see here.) It's such a useful module that we have already used it without noticing. Indeed, access to the administration module is performed by the authentication module. The user we created during the generation of our database was the first user of the site. This article greatly alters the application we wrote earlier. At the end of this article, we will have: Modified our UserProfile model to make it compatible with the module Created a login page Modified the addition of developer and supervisor pages Added the restriction of access to connected users How to use the authentication module In this section, we will learn how to use the authentication module by making our application compatible with the module. Configuring the Django application There is normally nothing special to do for the administration module to work in our TasksManager application. Indeed, by default, the module is enabled and allows us to use the administration module. However, it is possible to work on a site where the web Django authentication module has been disabled. We will check whether the module is enabled. In the INSTALLED_APPS section of the settings.py file, we have to check the following line: 'django.contrib.auth', Editing the UserProfile model The authentication module has its own User model. This is also the reason why we have created a UserProfile model and not just User. It is a model that already contains some fields, such as nickname and password. To use the administration module, you have to use the User model on the Python33/Lib/site-package/django/contrib/auth/models.py file. We will modify the UserProfile model in the models.py file that will become the following: class UserProfile(models.Model): user_auth = models.OneToOneField(User, primary_key=True) phone = models.CharField(max_length=20, verbose_name="Phone number", null=True, default=None, blank=True) born_date = models.DateField(verbose_name="Born date", null=True, default=None, blank=True) last_connexion = models.DateTimeField(verbose_name="Date of last connexion", null=True, default=None, blank=True) years_seniority = models.IntegerField(verbose_name="Seniority", default=0) def __str__(self): return self.user_auth.username We must also add the following line in models.py: from django.contrib.auth.models import User In this new model, we have: Created a OneToOneField relationship with the user model we imported Deleted the fields that didn't exist in the user model The OneToOne relation means that for each recorded UserProfile model, there will be a record of the User model. In doing all this, we deeply modify the database. Given these changes and because the password is stored as a hash, we will not perform the migration with South. It is possible to keep all the data and do a migration with South, but we should develop a specific code to save the information of the UserProfile model to the User model. The code should also generate a hash for the password, but it would be long. 
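To make the new OneToOne relationship concrete, here is a short, hypothetical Django shell session; the username, password, and seniority value are made up for illustration and are not part of the application code. It shows that a UserProfile record now hangs off a User record, and that create_user() stores only a hash of the password:
from django.contrib.auth.models import User
from TasksManager.models import UserProfile
user = User.objects.create_user(username="jdoe", password="secret")  # the password is stored as a hash, never in clear text
profile = UserProfile(user_auth=user, years_seniority=3)  # the remaining fields are nullable or have defaults
profile.save()
print(profile)  # prints "jdoe", because __str__() returns self.user_auth.username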
To reset South, we must do the following: Delete the TasksManager/migrations folder and all the files contained in this folder Delete the database.db file To use the migration system, we have to use the following commands: manage.py schemamigration TasksManager --initial manage.py syncdb --migrate After the deletion of the database, we must remove the initial data in create_developer.py. We must also delete the URL developer_detail and the following line in index.html: <a href="{% url "developer_detail" "2" %}">Detail second developer (The second user must be a developer)</a><br /> Adding a user The pages that allow you to add a developer and supervisor no longer work because they are not compatible with our recent changes. We will change these pages to integrate our style changes. The view contained in the create_supervisor.py file will contain the following code: from django.shortcuts import render from TasksManager.models import Supervisor from django import forms from django.http import HttpResponseRedirect from django.core.urlresolvers import reverse from django.contrib.auth.models import User def page(request): if request.POST: form = Form_supervisor(request.POST) if form.is_valid(): name = form.cleaned_data['name'] login = form.cleaned_data['login'] password = form.cleaned_data['password'] specialisation = form.cleaned_data['specialisation'] email = form.cleaned_data['email'] new_user = User.objects.create_user(username = login, email = email, password=password) # In this line, we create an instance of the User model with the create_user() method. It is important to use this method because it stores a hash of the password in the database. In this way, the password cannot be retrieved from the database. Django uses the PBKDF2 algorithm to generate the password hash of the user. new_user.is_active = True # In this line, the is_active attribute defines whether the user can connect or not. This attribute is false by default, which allows you to create a system of account verification by email or another user validation system. new_user.last_name=name # In this line, we define the name of the new user. new_user.save() # In this line, we register the new user in the database. new_supervisor = Supervisor(user_auth = new_user, specialisation=specialisation) # In this line, we create the new supervisor with the form data. We do not forget to create the relationship with the User model by setting the user_auth property to the new_user instance.
new_supervisor.save() return HttpResponseRedirect(reverse('public_empty')) else: return render(request, 'en/public/create_supervisor.html', {'form' : form}) else: form = Form_supervisor() form = Form_supervisor() return render(request, 'en/public/create_supervisor.html', {'form' : form}) class Form_supervisor(forms.Form): name = forms.CharField(label="Name", max_length=30) login = forms.CharField(label = "Login") email = forms.EmailField(label = "Email") specialisation = forms.CharField(label = "Specialisation") password = forms.CharField(label = "Password", widget = forms.PasswordInput) password_bis = forms.CharField(label = "Password", widget = forms.PasswordInput) def clean(self): cleaned_data = super (Form_supervisor, self).clean() password = self.cleaned_data.get('password') password_bis = self.cleaned_data.get('password_bis') if password and password_bis and password != password_bis: raise forms.ValidationError("Passwords are not identical.") return self.cleaned_data The create_supervisor.html template remains the same, as we are using a Django form. You can change the page() method in the create_developer.py file to make it compatible with the authentication module (you can refer to downloadable Packt code files for further help): def page(request): if request.POST: form = Form_inscription(request.POST) if form.is_valid(): name = form.cleaned_data['name'] login = form.cleaned_data['login'] password = form.cleaned_data['password'] supervisor = form.cleaned_data['supervisor'] new_user = User.objects.create_user(username = login, password=password) new_user.is_active = True new_user.last_name=name new_user.save() new_developer = Developer(user_auth = new_user, supervisor=supervisor) new_developer.save() return HttpResponse("Developer added") else: return render(request, 'en/public/create_developer.html', {'form' : form}) else: form = Form_inscription() return render(request, 'en/public/create_developer.html', {'form' : form}) We can also modify developer_list.html with the following content: {% extends "base.html" %} {% block title_html %} Developer list {% endblock %} {% block h1 %} Developer list {% endblock %} {% block article_content %} <table> <tr> <td>Name</td> <td>Login</td> <td>Supervisor</td> </tr> {% for dev in object_list %} <tr> <!-- The following line displays the __str__ method of the model. In this case it will display the username of the developer --> <td><a href="">{{ dev }}</a></td> <!-- The following line displays the last_name of the developer --> <td>{{ dev.user_auth.last_name }}</td> <!-- The following line displays the __str__ method of the Supervisor model. In this case it will display the username of the supervisor --> <td>{{ dev.supervisor }}</td> </tr> {% endfor %} </table> {% endblock %} Login and logout pages Now that you can create users, you must create a login page to allow the user to authenticate. We must add the following URL in the urls.py file: url(r'^connection$', 'TasksManager.views.connection.page', name="public_connection"), You must then create the connection.py view with the following code: from django.shortcuts import render from django import forms from django.contrib.auth import authenticate, login # This line allows you to import the necessary functions of the authentication module. def page(request): if request.POST: # This line is used to check if the Form_connection form has been posted. If mailed, the form will be treated, otherwise it will be displayed to the user. 
form = Form_connection(request.POST) if form.is_valid(): username = form.cleaned_data["username"] password = form.cleaned_data["password"] user = authenticate(username=username, password=password) # This line verifies that the username exists and the password is correct. if user: # In this line, the authenticate function returns None if authentication has failed, otherwise it returns an object that validates the condition. login(request, user) # In this line, the login() function allows the user to connect. else: return render(request, 'en/public/connection.html', {'form' : form}) else: form = Form_connection() return render(request, 'en/public/connection.html', {'form' : form}) class Form_connection(forms.Form): username = forms.CharField(label="Login") password = forms.CharField(label="Password", widget=forms.PasswordInput) def clean(self): cleaned_data = super(Form_connection, self).clean() username = self.cleaned_data.get('username') password = self.cleaned_data.get('password') if not authenticate(username=username, password=password): raise forms.ValidationError("Wrong login or password") return self.cleaned_data You must then create the connection.html template with the following code: {% extends "base.html" %} {% block article_content %} {% if user.is_authenticated %} <!-- This line checks if the user is connected. --> <h1>You are connected.</h1> <p> Your email : {{ user.email }} <!-- In this line, if the user is connected, this line will display his/her e-mail address. --> </p> {% else %} <!-- In this line, if the user is not connected, we display the login form. --> <h1>Connexion</h1> <form method="post" action="{{ public_connection }}"> {% csrf_token %} <table> {{ form.as_table }} </table> <input type="submit" class="button" value="Connection" /> </form> {% endif %} {% endblock %} When the user logs in, Django will save his/her connection data in session variables. This example has allowed us to verify that the login and password check is transparent to the user. Indeed, the authenticate() and login() methods allow the developer to save a lot of time. Django also provides convenient shortcuts for the developer, such as the user.is_authenticated attribute that checks if the user is logged in. Users prefer when a logout link is present on the website, especially when connecting from a public computer. We will now create the logout page. First, we need to create the logout.py file with the following code: from django.shortcuts import render from django.contrib.auth import logout def page(request): logout(request) return render(request, 'en/public/logout.html') In the previous code, we imported the logout() function of the authentication module and used it with the request object. This function will remove the user identifier from the request object and flush their session data. When the user logs out, he/she needs to know that he/she has actually been disconnected from the site. Let's create the following template in the logout.html file: {% extends "base.html" %} {% block article_content %} <h1>You are not connected.</h1> {% endblock %} Restricting access to the connected members When developers implement an authentication system, it's usually to limit access to anonymous users. In this section, we'll see two ways to control access to our web pages. Restricting access in views The authentication module provides simple ways to prevent anonymous users from accessing some pages. Indeed, there is a very convenient decorator to restrict access to a view. This decorator is called login_required.
In the example that follows, we will use the decorator to limit access to the page() view from the create_developer module in the following manner: First, we must import the decorator with the following line: from django.contrib.auth.decorators import login_required Then, we will add the decorator just before the declaration of the view: @login_required def page(request): # This line already exists. Do not copy it. With the addition of these two lines, the page that lets you add a developer is only available to the logged-in users. If you try to access the page without being connected, you will realize that it is not very practical because the obtained page is a 404 error. To improve this, simply tell Django what the connection URL is by adding the following line in the settings.py file: LOGIN_URL = 'public_connection' With this line, if the user tries to access a protected page, he/she will be redirected to the login page. You may have noticed that if you're not logged in and you click the Create a developer link, the URL contains a parameter named next. This parameter contains the URL that the user tried to consult. The authentication module redirects the user to that page when he/she connects. To do this, we will modify the connection.py file we created. We change the line that imports the render() function so that it also imports the redirect() function: from django.shortcuts import render, redirect To redirect the user after they log in, we must add two lines after the line that contains the code login(request, user). These are the two lines to be added: if request.GET.get('next') is not None: return redirect(request.GET['next']) This system is very useful when the user session has expired and he/she wants to see a specific page. Restricting access to URLs The system that we have just seen cannot easily limit access to pages generated by CBVs. For this, we will use the same decorator, but this time in the urls.py file. We will add the following line to import the decorator: from django.contrib.auth.decorators import login_required We need to change the line that corresponds to the URL named create_project: url (r'^create_project$', login_required(CreateView.as_view(model=Project, template_name="en/public/create_project.html", success_url = 'index')), name="create_project"), The use of the login_required decorator is very simple and allows the developer to not waste too much time. Summary In this article, we modified our application to make it compatible with the authentication module. We created pages that allow the user to log in and log out. We then learned how to restrict access to some pages to logged-in users. To learn more about Django, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended: Django By Example (https://www.packtpub.com/web-development/django-example) Learning Django Web Development (https://www.packtpub.com/web-development/learning-django-web-development) Resources for Article: Further resources on this subject: Code Style in Django [Article] Adding a developer with Django forms [Article] Enhancing Your Blog with Advanced Features [Article]

article-image-asking-permission-getting-your-head-around-marshmallows-runtime-permissions
Packt
17 Feb 2016
8 min read
Save for later

Asking Permission: Getting your head around Marshmallow's Runtime Permissions

Packt
17 Feb 2016
8 min read
In Android, each application runs with distinct system IDs known as the Linux user ID and group ID. The system parts are also separated into distinct IDs, forming isolated zones for applications, keeping them apart from each other and from the system. As part of this isolated life cycle scheme, accessing services or other applications' data requires that you declare this desire in advance by requesting a permission. This is done by adding the uses-permission element to your AndroidManifest.xml file. Your manifest may have zero or more uses-permission elements, and all of them must be the direct children of the root <manifest> element. Trying to access data or features without the proper permission should result in a security exception (a SecurityException) being thrown, informing you about the missing permission in most cases. The sendBroadcast(Intent) method is exceptional as it checks permissions after the method call has returned, so you will not receive an exception if there are permission failures. A permission failure should be printed to the system log. Note that in Android versions prior to Marshmallow, missing permissions were due to missing declarations in the manifest. Hence, it is important that you keep permissions in mind when you come up with the feature list for your app. (For more resources related to this topic, see here.) Understanding Android Marshmallow permissions Android Marshmallow introduces a new application permissions model, allowing a simpler process for users when installing and/or upgrading applications. Applications running on Marshmallow should work according to a new permissions model, where the user can grant or revoke permissions after the installation; permissions are not given until there is user acceptance. Supporting the new permissions model is backward-compatible, which means your apps can still be installed and run on devices with older Android versions, using the old permissions model on those devices. An overview With the Android Marshmallow version, a new application permissions model has been introduced. Let's review it a bit more thoroughly: Declaring permissions: All permissions an app needs are declared in the manifest, which is done to preserve backward compatibility in a manner similar to earlier Android platform versions. Permission groups: As discussed previously, permissions are divided into permission groups based on their functionalities: PROTECTION_NORMAL permissions: Some of the permissions are granted when users install the app. Upon the installation, the system checks your app's manifest and automatically grants permissions that match the PROTECTION_NORMAL group. INTERNET permission: One important permission is the INTERNET permission, which will be granted upon the installation, and the user can't revoke it. App signature permissions granted: The user is not prompted to grant any permissions at the time of installation. Permissions granted by users at runtime: You as an app developer need to request a permission in your app; a system dialog is shown to the user, and the user response is passed back to your app, notifying whether the permission is granted. Permissions can be revoked: Users can revoke permissions that were granted previously. We must handle these cases, as we'll see later on. If an app targets an Android Marshmallow version, it must use the new permissions model. Permission groups When working with permissions, we divide them into groups. This division is done for fast user interaction when reviewing and approving permissions. Granting is done only once per permission group.
If you add a new permission or request a new permission from the same permission group and the user has already approved that group, the system will grant you the added permission without bothering the user about the approval. For more information on this, visit https://developer.android.com/reference/android/content/pm/PermissionInfo.html#constants[GS1] . When the user installs an app, the app is granted only those permissions that are listed in the manifest that belongs to the PROTECTION_NORMAL group. Requesting permissions from the PROTECTION_SIGNATURE group will be granted only if the application is signed with the same certificate as the app with the declared permission. Apps cannot request signature permissions at runtime. System components automatically receive all the permissions listed in their manifests. Runtime permissions Android Marshmallow showcased a new permissions model where users were able to directly manage app permissions at application runtime. Google has altered the old permissions model, mostly to enable easier and frictionless installations and auto-updates for users as well as for app developers. This allows users to install the app without the need to preapprove each permission the application needs. The user can install the app without going through the phase of checking each permission and declining the installation due to a single permission. Users can grant or revoke permissions for installed apps, leaving the tweaking and the freedom of choice in the users' hands. Most of the applications will need to address these issues when updating the target API to 23. Taking coding permissions into account Well, after all the explanations, we've reached the coding part, and this is where we will get our coding hands dirty. The following are the methods used for coding permissions: Context.checkSelfPermission(): This checks whether your app has been granted a permission Activity.requestPermission(): This requests a permission at runtime Even if your app is not yet targeting Android Marshmallow, you should test your app and prepare to support it. Testing permissions In the Android Marshmallow permissions model, your app must ask the user for individual permissions at runtime. There is limited compatibility support for legacy apps, and you should test your app and also test a version to make sure it's supported. You can use the following test guide and conduct app testing with the new behavior: Map your app's permissions Test flows with permissions granted and revoked The adb command shell can be quite helpful to check for permissions: Listing application permissions and status by group can be done using the following adb command: adb shell pm list permissions -g You can grant or revoke permissions using the following adb syntax: adb shell pm [grant|revoke] <permission.name> You can grant permissions and install apk using the following adb command: adb install -g <path_to_apk> Coding for runtime permissions When we want to adjust our application to the new model, we need to make sure that we organize our steps and leave no permission stranded: Check what platform the app is running on: When running a piece of code that is sensitive at the API level, we start by checking the version/API level that we are running on. By now, you should be familiar with Build.VERSION.SDK_INT. 
Check whether the app has the required permission: Here, we get ourselves a brand new API call: Context.checkSelfPermission(String permission_name) With this, we silently check whether permissions are granted or not. This method returns immediately, so any permission-related controls/flows should be dealt with by checking this first. Prompting for permissions: We have a new API call, Activity.requestPermissions (String[] permissions, int requestCode). This call triggers the system to show the dialog requesting a permission. This method functions asynchronously. You can request more than one permission at once. The second argument is a simple request code returned in the callback so that you can recognize the calls. This is just like how we've been dealing with startActivityForResult() and onActivityResult() for years. Another new API is Activity.shouldShowRequestPermissionRationale(String permission). This method returns true when you have requested a permission and the user denied the request. It's considered a good practice after verifying that you explain to the user why you need that exact permission. The user can decide to turn down the permission request and select the Don't ask again option; then, this method will return false. The following sample code checks whether the app has permission to read the user's contacts. It requests the permission if required, and the result callback returns to onRequestPermissionsResult: if (checkSelfPermission(Manifest.permission.READ_CONTACTS) != PackageManager.PERMISSION_GRANTED) { requestPermissions(new String[]{Manifest.permission.READ_CONTACTS}, SAMPLE_MATRIXY_READ_CONTACTS); } //Now this is our callback @Override public void onRequestPermissionsResult(int requestCode, String permissions[], int[] grantResults) { switch (requestCode) { case SAMPLE_MATRIXY_READ_CONTACTS: if (grantResults[0] == PackageManager.PERMISSION_GRANTED) { // permission granted - we can continue the feature flow. } else { // permission denied! - we should disable the functionality that depends on this permission. } } } Just to make sure we all know the constants used, here's the explanation: public static final int PERMISSION_DENIED=-1: Since it's API level 1, permission has not been granted to the given package public static final int PERMISSION_GRANTED=0: Since it's API level 1. permission has been granted to the given package. If the user denies your permission request, your app should take the appropriate action, such as notifying the user why this permission is required or explaining that the feature can't work without it. Your app cannot assume user interaction has taken place because the user can choose to reject granting a permission along with the do not show again option; your permission request is automatically rejected and onRequestPermissionsResult gets the result back. Summary To learn more about Android 6 Essentials, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended: Android User Interface Development: Beginner's Guide (https://www.packtpub.com/application-development/android-user-interface-development-beginners-guide) Android Native Development Kit Cookbook (https://www.packtpub.com/application-development/android-native-development-kit-cookbook) Resources for Article: Further resources on this subject: Practical How-To Recipes for Android [article] Working with Xamarin.Android [article] Detecting Shapes Employing Hough Transform [article]
article-image-trends-ux-design
Packt
17 Feb 2016
10 min read
Save for later

Trends UX Design

Packt
17 Feb 2016
10 min read
In this article written by Scott Faranello, the author of the book Practical UX Design, we will see that UX has come a long way since the early days of the Internet and is still evolving. UX may be more popular today than ever, but that doesn't mean it's perfected. There are still many unknowns in our field and we are still a long way off from any kind of universal understanding of what UX is, its value and its importance. There are many "truths" about UX – many of which were presented in this book – but with each new project and each new experience it becomes clear that the practice of UX is evolving and must continue to do so in order to make what we do even easier to understand. The more popular something becomes, the more those outside of the practice become interested. This is both a great thing and a bad thing. Good because it becomes better understood and accepted, and bad because it can also become a "shiny object" that companies suddenly feel they must have without having any idea what it is and what it means to incorporate it into their culture. (For more resources related to this topic, see here.) Learning the fundamentals, as presented in this book, is the first step. Following that, staying abreast of what's new will keep you moving forward and relevant as a practitioner. There is a classic movie called Glengarry Glen Ross in which the phrase Always Be Closing (ABC) came to define the practice of selling and salesmanship. Well, UX has a phrase as well, one I call ABL or Always Be Learning. Trends may come and go, but learning is lifelong. As for trends, I recommend doing a Google search at the beginning and end of each year where you type "UX Trends" followed by the upcoming year. If you haven't done so, do it for this year and make it a habit to do that going forward. You will find plenty of information about what's coming, and also about what has passed, to see how much of it actually came true. Look at UX trends as more than predictions, though. Trends in UX often become so because of what has already been happening but is starting to appear more often. Read about them, share them with your colleagues, notice them and seek them out in others' work. Where some things like patterns and IA might be universal, the way we approach them can evolve. For example, here are some common areas of UX that, while fundamental to our understanding of the practice, are ever changing: Presentation of Design Uniformity of design UX's Role in Design Advancement in Technologies Generational Changes and Expectations Presentation of design Today there are two ways to present design in terms of UX: visibly and invisibly. Visible design: Visible design is what we know and see when we go online and look at our phones, our watches and anything else that requires our attention to interactive screens using typical design elements (colors, fonts, patterns, and so on). Invisible design: Invisible design doesn't require an interface in the traditional sense. We still need to interact, but none of the typical design elements are necessary. All that's needed is an ability to read and to follow very easy instructions. For example: https://digit.co/ Digit is an application that automatically saves the user money by transferring small amounts, or whatever Digit deems "safe" to transfer, without the user even noticing.
It does this by tracking the users spending habits and then every few days transferring small amounts ($5 up to $50) into a Digit account where it sits until the user is ready to transfer it back to their account to spend on whatever they choose. There is no fee to use Digit. To understand how it makes money, please read about it on their website. The point is, the “interface” of digit is text on the users' phone. Invisible design is a “conversation” between itself and the user that works seamlessly and without the need for a traditional app or screen and it’s becoming more real every day. Invisible design is less about the app and more about what the app provides to the user without the user having to do anything except receive the message. Recall Dieter Rams’ Ten Principles of Design from Chapter 4 when he pointed out that good design is unobtrusive; good design does not distract or rely on the user stopping to figure it out; good design just works. This is true of all good design, both visible and invisible, but while some require an interface to engage and interact, truly invisible design does not. Some additional examples of invisible design: Automated airline text messages for flight delays Weather alerts Auto updates of apps on any device Knock (which is something you need to see to understand and even then, but it’s intriguing: http://www.knocktounlock.com/). Internet of Things (IoT) Invisible design is still is still evolving. Will it one day overtake traditional apps or has it already? Uniformity of design Ask designers today what they think of the state of design and you will get varied answers. One that seems to be getting louder is the notion that design is losing its identity. Google articles with titles such as “Where Has All The Soul Gone”; “Fall of the Designer”; “Why Web Design Is Dead”, “The End of Design As We Know It” and you will find arguments about how templates have taken over. Sites like WordPress, Drupal and others like Medium that allow content to be created by the user and seen my millions without any of the heavy lifting of web design, has created, in some eyes, a world of bland design where creativity suffers, patterns have matured and where “up and running” has become more important than what we are actually running with. There may be something to the argument, but I will leave it to you to decide. UX’s Role in Design As UX becomes more prevalent and as companies become more aware of it there is a risk of UX becoming less of a stand-alone discipline and more of something that everyone just does. A similar thing happened with Customer Experience, or UX, where now companies just consider themselves “customer centric” even if they aren’t and where the mantra is “Customer First”, even though that might not be the case in reality. Mantras are easy; living up to them at a high level is something else. As a result of these changes, UX as a practice has to clarify itself and its value more than ever. Where once just being good at design and making wireframes were a priority, today these are skills that even developers can lay claim to. To remain relevant means being able to address the more challenging concepts and activities that many may not be able to grasp or have the time to focus on, like ethnography, human factors and on going, in-depth research around usability, user needs and analyzing usage patterns and analytics as they relate to productivity. 
While these may not be solely within the UX domain, they are areas of focus that many companies are not spending enough time on, to their detriment. They are, however, areas that UX can easily focus on and, in so doing, close a widening gap, which benefits everyone involved. Advancement in Technologies As technology becomes easier to use, it usually requires greater complexity to make that happen. As a result, UX practitioners need to be more aware of and on top of the latest trends as well as increasing their understanding of the technologies underlying them. This is not to suggest that UX practitioners become programmers or even gain a strong understanding of how to code. What is suggested is that UX practitioners become more aware of how technology is becoming more integral and intertwined into the everyday. For example, we will continue to hear about the Internet of Things (IoT) for some time, at least in the near future and probably longer. The more invisible the user experience becomes and the more untethered we become from standalone devices, the more important it is going to be for UX practitioners to think more conceptually and more creatively to see how seemingly disparate objects and ideas can go together to make something entirely new. Some examples of IoT technology: Prescription refill and regimen reminders (glowcaps.com) Remote biometric readers to track a patient's heart rate, blood pressure, safe activity levels, etc. (preventicesolutions.com) Controlling the home from afar with smart outlets that can be used to turn appliances and thermostats on and off (nest.com, belkin.com) Smart trash cans (bigbelly.com) Efficient "smart" street light systems (Echelon) The list literally goes on and on because there really is no end to what technology can provide to almost anything, and all of it will require user engagement in one way or another. Knowing what, where, when and how these changes will take place and how they will affect the end user is our role, and it is our responsibility as UX practitioners to get it right the first time. Generational Changes and Expectations Baby Boomers, Gen X, Gen Y, Millennials, Gen Z. With each passing year and decade it gets harder and harder to know what the next generation will expect and demand from technology. One thing is for sure, they will expect things to work without question and without the need to stop and think about it. Every company will be required to meet such demands over at least the next decade, and anyone not thinking about this and the workforce requirements to meet these demands is going to disappear. Even today we see it happening as institutions like WalMart fall to Amazon, taxis to Uber, TV networks to Netflix, and who knows what else. Trends may come and go, but change, whether we are ready for it or not, is going to happen faster than we've ever experienced it. Will you be ready?
Resources for Article: Further resources on this subject: Performance by Design[article] Unity 3.x Scripting-Character Controller versus Rigidbody[article] Using Specular in Unity[article]

Probabilistic Graphical Models

Packt
17 Feb 2016
6 min read
Probabilistic graphical models, or simply graphical models as we will refer to them in this article, are models that use the representation of a graph to describe the conditional independence relationships between a series of random variables. This topic has received an increasing amount of attention in recent years and probabilistic graphical models have been successfully applied to tasks ranging from medical diagnosis to image segmentation. In this article, we'll present some of the necessary background that will pave the way to understanding the most basic graphical model, the Naïve Bayes classifier. We will then look at a slightly more complicated graphical model, known as the Hidden Markov Model, or HMM for short. To get started in this field, we must first learn about graphs. (For more resources related to this topic, see here.)
A Little Graph Theory
Graph theory is a branch of mathematics that deals with mathematical objects known as graphs. Here, a graph does not have the everyday meaning that we are more used to talking about, in the sense of a diagram or plot with an x and y axis. In graph theory, a graph consists of two sets. The first is a set of vertices, which are also referred to as nodes. We typically use integers to label and enumerate the vertices. The second set consists of edges between these vertices. Thus, a graph is nothing more than a description of some points and the connections between them. The connections can have a direction so that an edge goes from the source or tail vertex to the target or head vertex. In this case, we have a directed graph. Alternatively, the edges can have no direction, so that the graph is undirected. A common way to describe a graph is via the adjacency matrix. If we have V vertices in the graph, an adjacency matrix is a V×V matrix whose entries are 0 if the vertex represented by the row number is not connected to the vertex represented by the column number. If there is a connection, the entry is 1. With undirected graphs, both nodes at each edge are connected to each other so the adjacency matrix is symmetric. For directed graphs, a vertex vi is connected to a vertex vj via an edge (vi,vj); that is, an edge where vi is the tail and vj is the head. Here is an example adjacency matrix for a graph with seven nodes:
> adjacency_m
  1 2 3 4 5 6 7
1 0 0 0 0 0 1 0
2 1 0 0 0 0 0 0
3 0 0 0 0 0 0 1
4 0 0 1 0 1 0 1
5 0 0 0 0 0 0 0
6 0 0 0 1 1 0 1
7 0 0 0 0 1 0 0
This matrix is not symmetric, so we know that we are dealing with a directed graph. The first 1 value in the first row of the matrix denotes the fact that there is an edge starting from vertex 1 and ending on vertex 6. When the number of nodes is small, it is easy to visualize a graph. We simply draw circles to represent the vertices and lines between them to represent the edges. For directed graphs, we use arrows on the lines to denote the directions of the edges. It is important to note that we can draw the same graph in an infinite number of different ways on the page. This is because the graph tells us nothing about the positioning of the nodes in space; we only care about how they are connected to each other. Here are two different but equally valid ways to draw the graph described by the adjacency matrix we just saw: Two vertices are said to be connected with each other if there is an edge between them (taking note of the order when talking about directed graphs).
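If you want to experiment with this graph yourself, here is a small illustrative sketch in Python using the NetworkX library. It is an addition for this article rather than code from it (the listing above comes from an R session), so the variable names and the choice of NetworkX are assumptions of mine; it simply rebuilds the directed graph from the adjacency matrix and queries a few of the connections just described.
import networkx as nx

# The adjacency matrix shown above: row i, column j is 1 when there is
# a directed edge from vertex i to vertex j (vertices are labelled 1 to 7).
adjacency = [
    [0, 0, 0, 0, 0, 1, 0],
    [1, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0, 0, 1],
    [0, 0, 1, 0, 1, 0, 1],
    [0, 0, 0, 0, 0, 0, 0],
    [0, 0, 0, 1, 1, 0, 1],
    [0, 0, 0, 0, 1, 0, 0],
]

G = nx.DiGraph()
G.add_nodes_from(range(1, 8))
for i, row in enumerate(adjacency, start=1):
    for j, value in enumerate(row, start=1):
        if value == 1:
            G.add_edge(i, j)

print(G.has_edge(1, 6))   # True: the 1 in row 1, column 6
print(G.has_edge(6, 1))   # False: order matters in a directed graph
print(nx.shortest_path(G, source=2, target=6))   # [2, 1, 6]
The last call already hints at the idea of a path, which is exactly what the next paragraphs formalize.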
If we can move from vertex vi to vertex vj by starting at the first vertex and finishing at the second vertex, by moving on the graph along the edges and passing through an arbitrary number of graph vertices, then these intermediate edges form a path between these two vertices. Note that this definition requires that all the vertices and edges along the path are distinct from each other (with the possible exception of the first and last vertex). For example, in our graph, vertex 6 can be reached from vertex 2 by a path leading through vertex 1. Sometimes, there can be many such possible paths through the graph, and we are often interested in the shortest path, which moves through the fewest number of intermediary vertices. We can define the distance between two nodes in the graph as the length of the shortest path between them. A path that begins and ends at the same vertex is known as a cycle. A graph that does not have any cycles in it is known as an acyclic graph. If an acyclic graph has directed edges, it is known as a directed acyclic graph, which is often abbreviated as a DAG. There are many excellent references on graph theory available. One such reference which is available online, is Graph Theory, Reinhard Diestel, Springer. This landmark reference is now in its 4th edition and can be found at http://diestel-graph-theory.com/. It might not seem obvious at first, but it turns out that a large number of real world situations can be conveniently described using graphs. For example, the network of friendships on social media sites, such as Facebook, or followers on Twitter, can be represented as graphs. On Facebook, the friendship relation is reciprocal, and so the graph is undirected. On Twitter, the follower relation is not, and so the graph is directed. Another graph is the network of websites on the Web, where links from one web page to the next form directed edges. Transport networks, communication networks, and electricity grids can be represented as graphs. For the predictive modeler, it turns out that a special class of models known as probabilistic graphical models, or graphical models for short, are models that involve a graph structure. In a graphical model, the nodes represent random variables and the edges in between represent the dependencies between them. Before we can go into further detail, we'll need to take a short detour in order to visit Bayes' Theorem, a classic theorem in statistics that despite its simplicity has implications both profound and practical when it comes to statistical inference and prediction. Summary In this article, we learned that graphs are consist of nodes and edges. We also learned the way of describing a graph is via the adjacency matrix. For more information on graphical models, you can refer to the books published by Packt (https://www.packtpub.com/): Mastering Predictive Analytics with Python (https://www.packtpub.com/big-data-and-business-intelligence/mastering-predictive-analytics-python) R Graphs Cookbook Second Edition (https://www.packtpub.com/big-data-and-business-intelligence/r-graph-cookbook-%E2%80%93-second-edition) Resources for Article: Further resources on this subject: Data Analytics[article] Big Data Analytics[article] Learning Data Analytics with R and Hadoop[article]

Setting up a Project Atomic host

Packt
17 Feb 2016
6 min read
With Docker™, containers are becoming mainstream and enterprises are ready to use them. Docker and its ecosystem are evolving at a very high pace, so it is very important to understand the basics and build your knowledge from the ground up in order to adapt to new concepts and tools. In this article, we will cover the following recipes: Setting up a Project Atomic host Doing atomic update/rollback with Project Atomic (For more resources related to this topic, see here.)
Setting up a Project Atomic host
Project Atomic facilitates application-centric IT architecture by providing an end-to-end solution to deploy containerized applications quickly and reliably, with atomic update and rollback for the application and host alike. This is achieved by running applications in containers on a Project Atomic host, which is a lightweight operating system specially designed to run containers. The hosts can be based on Fedora, CentOS, or Red Hat Enterprise Linux. Next, we will elaborate on the building blocks of the Project Atomic host.
OSTree and rpm-OSTree
OSTree (https://wiki.gnome.org/action/show/Projects/OSTree) is a tool to manage bootable, immutable, and versioned filesystem trees. Using this, we can build a client-server architecture in which the server hosts an OSTree repository and the client subscribed to it can incrementally replicate the content. rpm-OSTree is a system to decompose RPMs on the server side into the OSTree repository to which the client can subscribe and perform updates. With each update, a new root is created, which is used for the next reboot. During updates, /etc is rebased and /var is untouched.
Container runtime
As of now, Project Atomic only supports Docker as the container runtime.
systemd
Project Atomic uses Kubernetes (http://kubernetes.io/) for application deployment over clusters of container hosts. Project Atomic can be installed on bare metal, cloud providers, VMs, and so on. In this recipe, let's see how we can install it on a VM using virt-manager on Fedora.
Getting ready
Download the image:
$ wget http://download.fedoraproject.org/pub/fedora/linux/releases/test/22_Beta/Cloud/x86_64/Images/Fedora-Cloud-Atomic-22_Beta-20150415.x86_64.raw.xz
I have downloaded the beta of the Fedora 22 Cloud image for Containers. You should look for the latest cloud image for Containers at https://getfedora.org/en/cloud/download/. Uncompress this image by using the following command:
$ xz -d Fedora-Cloud-Atomic-22_Beta-20150415.x86_64.raw.xz
How to do it…
We downloaded the cloud image that does not have any password set for the default user fedora. While booting the VM, we have to provide a cloud configuration file through which we can customize the VM. To do this, we need to create two files, meta-data and user-data, as follows:
$ cat meta-data
instance-id: iid-local01
local-hostname: atomichost
$ cat user-data
#cloud-config
password: atomic
ssh_pwauth: True
chpasswd: { expire: False }
ssh_authorized_keys:
  - ssh-rsa AAAAB3NzaC1yc.........
In the preceding code, we need to provide the complete SSH public key. We then need to create an ISO image consisting of these files, which we will use to boot the VM. As we are using a cloud image, our settings will be applied to the VM during the boot process. This means the hostname will be set to atomichost, the password will be set to atomic, and so on. To create the ISO, run the following command:
$ genisoimage -output init.iso -volid cidata -joliet -rock user-data meta-data
Start virt-manager. Select New Virtual Machine and then import the existing disk image.
Enter the image path of the Project Atomic image we downloaded earlier. Select OS type as Linux and Version as Fedora 20/Fedora 21 (or later), and click on Forward. Next, assign CPU and Memory and click on Forward. Then, give a name to the VM and select Customize configuration before install. Finally, click on Finish and review the details. Next, click on Add Hardware, and after selecting Storage, attach the ISO (init.iso) file we created to the VM and select Begin Installation: Once booted, you can see that its hostname is correctly set and you will be able to log in through the password given in the cloud init file. The default user is fedora and the password is atomic, as set in the user-data file.
How it works…
In this recipe, we took a Project Atomic Fedora cloud image and booted it using virt-manager after supplying the cloud init file.
There's more…
After logging in, if you do a file listing at /, you will see that most of the traditional directories are linked to /var because it is preserved across upgrades. After logging in, you can run Docker commands as usual:
$ sudo docker run -it fedora bash
See also
The virt-manager documentation at https://virt-manager.org/documentation/
More information on package systems, image systems, and RPM-OSTree at https://github.com/projectatomic/rpm-ostree/blob/master/doc/background.md
The quick-start guide on the Project Atomic website at http://www.projectatomic.io/docs/quickstart/
The resources on cloud images at https://www.technovelty.org//linux/running-cloud-images-locally.html and http://cloudinit.readthedocs.org/en/latest/
How to set up Kubernetes with an Atomic host at http://www.projectatomic.io/blog/2014/11/testing-kubernetes-with-an-atomic-host/ and https://github.com/cgwalters/vagrant-atomic-cluster
Doing atomic update/rollback with Project Atomic
To get to the latest version or to roll back to the older version of Project Atomic, we use the atomic host command, which internally calls rpm-ostree.
Getting ready
Boot and log in to the Atomic host.
How to do it…
Just after the boot, run the following command:
$ atomic host status
You will see details about one deployment that is in use now. To upgrade, run the following command:
$ sudo atomic host upgrade
This changes and/or adds new packages. After the upgrade, we will need to reboot the system to use the new update. Let's reboot and see the outcome: As we can see, the system is now booted with the new update. The *, which is at the beginning of the first line, specifies the active build. To roll back, run the following command:
$ sudo atomic host rollback
We will have to reboot again if we want to use the older bits.
How it works…
For updates, the Atomic host connects to the remote repository hosting the newer build, which is downloaded and used from the next reboot onwards, until the user upgrades or rolls back. In the case of a rollback, the older build available on the system is used after the reboot.
See also
The documentation on the Project Atomic website, which can be found at http://www.projectatomic.io/docs/os-updates/
Additional information on Docker can be gained by referring to: https://www.packtpub.com/networking-and-servers/docker-high-performance https://www.packtpub.com/virtualization-and-cloud/monitoring-docker Resources for Article: Further resources on this subject: Understanding Docker [article] Hands On with Docker Swarm [article] Introduction to Docker [article]

Understanding PHP basics

Packt
17 Feb 2016
27 min read
In this article by Antonio Lopez Zapata, the author of the book Learning PHP 7, you need to understand not only the syntax of the language, but also its grammatical rules, that is, when and why to use each element of the language. Luckily, for you, some languages come from the same root. For example, Spanish and French are romance languages as they both evolved from spoken Latin; this means that these two languages share a lot of rules, and learning Spanish if you already know French is much easier. (For more resources related to this topic, see here.) Programming languages are quite the same. If you already know another programming language, it will be very easy for you to go through this chapter. If it is your first time though, you will need to understand from scratch all the grammatical rules, so it might take some more time. But fear not! We are here to help you in this endeavor. In this chapter, you will learn about these topics: PHP in web applications Control structures Functions PHP in web applications Even though the main purpose of this chapter is to show you the basics of PHP, doing so in a reference-manual way is not interesting enough. If we were to copy paste what the official documentation says, you might as well go there and read it by yourself. Instead, let's not forget the main purpose of this book and your main goal—to write web applications with PHP. We will show you how can you apply everything you are learning as soon as possible, before you get too bored. In order to do that, we will go through the journey of building an online bookstore. At the very beginning, you might not see the usefulness of it, but that is just because we still haven't seen all that PHP can do. Getting information from the user Let's start by building a home page. In this page, we are going to figure out whether the user is looking for a book or just browsing. How do we find this out? The easiest way right now is to inspect the URL that the user used to access our application and extract some information from there. Save this content as your index.php file: <?php $looking = isset($_GET['title']) || isset($_GET['author']); ?> <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Bookstore</title> </head> <body> <p>You lookin'? <?php echo (int) $looking; ?></p> <p>The book you are looking for is</p> <ul> <li><b>Title</b>: <?php echo $_GET['title']; ?></li> <li><b>Author</b>: <?php echo $_GET['author']; ?></li> </ul> </body> </html> And now, access http://localhost:8000/?author=Harper Lee&title=To Kill a Mockingbird. You will see that the page is printing some of the information that you passed on to the URL. For each request, PHP stores in an array—called $_GET- all the parameters that are coming from the query string. Each key of the array is the name of the parameter, and its associated value is the value of the parameter. So, $_GET contains two entries: $_GET['author'] contains Harper Lee and $_GET['title'] contains To Kill a Mockingbird. On the first highlighted line, we are assigning a Boolean value to the $looking variable. If either $_GET['title'] or $_GET['author'] exists, this variable will be true; otherwise, false. Just after that, we close the PHP tag and then we start printing some HTML, but as you can see, we are actually mixing HTML with PHP code. Another interesting line here is the second highlighted line. We are printing the content of $looking, but before that, we cast the value. Casting means forcing PHP to transform a type of value to another one. 
Casting a Boolean to an integer means that the resultant value will be 1 if the Boolean is true or 0 if the Boolean is false. As $looking is true since $_GET contains valid keys, the page shows 1. If we try to access the same page without sending any information, as in http://localhost:8000, the browser will say "You lookin'? 0". Depending on the settings of your PHP configuration, you will see two notice messages complaining that you are trying to access keys of the array that do not exist.
Casting versus type juggling
We already knew that when PHP needs a specific type of variable, it will try to transform it, which is called type juggling. But PHP is quite flexible, so sometimes, you have to be the one specifying the type that you need. When printing something with echo, PHP tries to transform everything it gets into strings. Since the string version of the false Boolean is an empty string, this would not be useful for our application. Casting the Boolean to an integer first assures that we will see a value, even if it is just "0".
HTML forms
HTML forms are one of the most popular ways to collect information from users. They consist of a series of fields, called inputs in the HTML world, and a final submit button. In HTML, the form tag contains two attributes: action, which points to where the form will be submitted, and method, which specifies the HTTP method the form will use—GET or POST. Let's see how it works. Save the following content as login.html and go to http://localhost:8000/login.html: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Bookstore - Login</title> </head> <body> <p>Enter your details to login:</p> <form action="authenticate.php" method="post"> <label>Username</label> <input type="text" name="username" /> <label>Password</label> <input type="password" name="password" /> <input type="submit" value="Login"/> </form> </body> </html> This form contains two fields, one for the username and one for the password. You can see that they are identified by the name attribute. If you try to submit this form, the browser will show you a Page Not Found message, as it is trying to access http://localhost:8000/authenticate.php and the web server cannot find it. Let's create it then: <?php $submitted = !empty($_POST); ?> <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Bookstore</title> </head> <body> <p>Form submitted? <?php echo (int) $submitted; ?></p> <p>Your login info is</p> <ul> <li><b>username</b>: <?php echo $_POST['username']; ?></li> <li><b>password</b>: <?php echo $_POST['password']; ?></li> </ul> </body> </html> As with $_GET, $_POST is an array that contains the parameters received by POST. In this piece of code, we are first asking whether that array is not empty—note the ! operator. Afterwards, we just display the information received, just as in index.php. Note that the keys of the $_POST array are the values for the name argument of each input field.
Control structures
So far, our files have been executed line by line. Due to that, we are getting some notices in some scenarios, such as when the array does not contain what we are looking for. Would it not be nice if we could choose which lines to execute? Control structures to the rescue! A control structure is like a traffic diversion sign. It directs the execution flow depending on some predefined conditions. There are different control structures, but we can categorize them into conditionals and loops.
A conditional allows us to choose whether to execute a statement or not. A loop will execute a statement as many times as you need. Let's take a look at each one of them.
Conditionals
A conditional evaluates a Boolean expression, that is, something that returns a Boolean value. If the expression is true, it will execute everything inside its block of code. A block of code is a group of statements enclosed by {}. Let's see how it works: <?php echo "Before the conditional."; if (4 > 3) { echo "Inside the conditional."; } if (3 > 4) { echo "This will not be printed."; } echo "After the conditional."; In this piece of code, we are using two conditionals. A conditional is defined by the keyword if followed by a Boolean expression in parentheses and by a block of code. If the expression is true, it will execute the block; otherwise, it will skip it. You can increase the power of conditionals by adding the keyword else. This tells PHP to execute a block of code if the previous conditions were not satisfied. Let's see an example: if (2 > 3) { echo "Inside the conditional."; } else { echo "Inside the else."; } This will execute the code inside else as the condition of if was not satisfied. Finally, you can also add an elseif keyword followed by another condition and block of code to continue asking PHP for more conditions. You can add as many elseif clauses as you need after if. If you add else, it has to be the last one of the chain of conditions. Also keep in mind that as soon as PHP finds a condition that resolves to true, it will stop evaluating the rest of the conditions: <?php if (4 > 5) { echo "Not printed"; } elseif (4 > 4) { echo "Not printed"; } elseif (4 == 4) { echo "Printed."; } elseif (4 > 2) { echo "Not evaluated."; } else { echo "Not evaluated."; } if (4 == 4) { echo "Printed"; } In this last example, the first condition that evaluates to true is the one that is highlighted. After that, PHP does not evaluate any more conditions until a new if starts. With this knowledge, let's try to clean up a bit of our application, executing statements only when needed. Copy this code to your index.php file: <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Bookstore</title> </head> <body> <p> <?php if (isset($_COOKIE['username'])) { echo "You are " . $_COOKIE['username']; } else { echo "You are not authenticated."; } ?> </p> <?php if (isset($_GET['title']) && isset($_GET['author'])) { ?> <p>The book you are looking for is</p> <ul> <li><b>Title</b>: <?php echo $_GET['title']; ?></li> <li><b>Author</b>: <?php echo $_GET['author']; ?></li> </ul> <?php } else { ?> <p>You are not looking for a book?</p> <?php } ?> </body> </html> In this new code, we are mixing conditionals and HTML code in two different ways. The first one opens a PHP tag and adds an if-else clause that will print whether we are authenticated or not with echo. No HTML is merged within the conditionals, which makes it clear. The second option—the second highlighted block—shows an uglier solution, but this is sometimes necessary. When you have to print a lot of HTML code, echo is not that handy, and it is better to close the PHP tag, print all the HTML you need, and then open the tag again. You can do that even inside the code block of an if clause, as you can see in the code.
Mixing PHP and HTML
If you feel like the last file we edited looks rather ugly, you are right. Mixing PHP and HTML is confusing, and you have to avoid it by all means.
Let's edit our authenticate.php file too, as it is trying to access $_POST entries that might not be there. The new content of the file would be as follows: <?php $submitted = isset($_POST['username']) && isset($_POST['password']); if ($submitted) { setcookie('username', $_POST['username']); } ?> <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <title>Bookstore</title> </head> <body> <?php if ($submitted): ?> <p>Your login info is</p> <ul> <li><b>username</b>: <?php echo $_POST['username']; ?></li> <li><b>password</b>: <?php echo $_POST['password']; ?></li> </ul> <?php else: ?> <p>You did not submitted anything.</p> <?php endif; ?> </body> </html> This code also contains conditionals, which we already know. We are setting a variable to know whether we've submitted a login or not and to set the cookies if we have. However, the highlighted lines show you a new way of including conditionals with HTML. This way, tries to be more readable when working with HTML code, avoiding the use of {} and instead using : and endif. Both syntaxes are correct, and you should use the one that you consider more readable in each case. Switch-case Another control structure similar to if-else is switch-case. This structure evaluates only one expression and executes the block depending on its value. Let's see an example: <?php switch ($title) { case 'Harry Potter': echo "Nice story, a bit too long."; break; case 'Lord of the Rings': echo "A classic!"; break; default: echo "Dunno that one."; break; } The switch case takes an expression; in this case, a variable. It then defines a series of cases. When the case matches the current value of the expression, PHP executes the code inside it. As soon as PHP finds break, it will exit switch-case. In case none of the cases are suitable for the expression, if there is a default case         , PHP will execute it, but this is optional. You also need to know that breaks are mandatory if you want to exit switch-case. If you do not specify any, PHP will keep on executing statements, even if it encounters a new case. Let's see a similar example but without breaks: <?php $title = 'Twilight'; switch ($title) { case 'Harry Potter': echo "Nice story, a bit too long."; case 'Twilight': echo 'Uh...'; case 'Lord of the Rings': echo "A classic!"; default: echo "Dunno that one."; } If you test this code in your browser, you will see that it is printing "Uh...A classic!Dunno that one.". PHP found that the second case is valid so it executes its content. But as there are no breaks, it keeps on executing until the end. This might be the desired behavior sometimes, but not usually, so we need to be careful when using it! Loops Loops are control structures that allow you to execute certain statements several times—as many times as you need. You might use them on several different scenarios, but the most common one is when interacting with arrays. For example, imagine you have an array with elements but you do not know what is in it. You want to print all its elements so you loop through all of them. There are four types of loops. Each of them has their own use cases, but in general, you can transform one type of loop into another. Let's see them closely While While is the simplest of the loops. It executes a block of code until the expression to evaluate returns false. Let's see one example: <?php $i = 1; while ($i < 4) { echo $i . " "; $i++; } Here, we are defining a variable with the value 1. Then, we have a while clause in which the expression to evaluate is $i < 4. 
This loop will execute the content of the block of code until that expression is false. As you can see, inside the loop we are incrementing the value of $i by 1 each time, so after three iterations, the loop will end. Check out the output of that script, and you will see "1 2 3". The last value printed is 3, so by that time, $i was 3. After that, we increased its value to 4, so when the while was evaluating whether $i < 4, the result was false.
Whiles and infinite loops
One of the most common problems with while loops is creating an infinite loop. If you do not add any code inside while that updates any of the variables considered in the while expression, so that it can be false at some point, PHP will never exit the loop!
For
This is the most complex of the four loops. For defines an initialization expression, an exit condition, and the end of the iteration expression. When PHP first encounters the loop, it executes what is defined as the initialization expression. Then, it evaluates the exit condition, and if it resolves to true, it enters the loop. After executing everything inside the loop, it executes the end of the iteration expression. Once this is done, it will evaluate the exit condition again, going through the loop code and the end of iteration expression until it evaluates to false. As always, an example will help clarify this: <?php for ($i = 1; $i < 10; $i++) { echo $i . " "; } The initialization expression is $i = 1 and is executed only the first time. The exit condition is $i < 10, and it is evaluated at the beginning of each iteration. The end of the iteration expression is $i++, which is executed at the end of each iteration. This example prints numbers from 1 to 9. Another more common usage of the for loop is with arrays: <?php $names = ['Harry', 'Ron', 'Hermione']; for ($i = 0; $i < count($names); $i++) { echo $names[$i] . " "; } In this example, we have an array of names. As it is defined as a list, its keys will be 0, 1, and 2. The loop initializes the $i variable to 0, and it will iterate until the value of $i is no longer less than the number of elements in the array (3). On the first iteration $i is 0, on the second it will be 1, and on the third it will be 2. When $i is 3, it will not enter the loop as the exit condition evaluates to false. On each iteration, we are printing the content of the $i position of the array; hence, the result of this code will be all three names in the array.
Be careful with exit conditions
It is very common to set an exit condition that is not exactly what we need, especially with arrays. Remember that arrays start with 0 if they are a list, so an array of 3 elements will have entries 0, 1, and 2. Defining the exit condition as $i <= count($array) will cause an error in your code, as when $i is 3, it also satisfies the exit condition and will try to access the key 3, which does not exist.
Optionally, you can specify a variable that will contain the key of each iteration, as in the second loop. Foreach loops are also useful with maps, where the keys are not necessarily numeric. The order in which PHP will iterate the array will be the same order in which you used to insert the content in the array. Let's use some loops in our application. We want to show the available books in our home page. We have the list of books in an array, so we will have to iterate all of them with a foreach loop, printing some information from each one. Append the following code to the body tag in index.php: <?php endif; $books = [ [ 'title' => 'To Kill A Mockingbird', 'author' => 'Harper Lee', 'available' => true, 'pages' => 336, 'isbn' => 9780061120084 ], [ 'title' => '1984', 'author' => 'George Orwell', 'available' => true, 'pages' => 267, 'isbn' => 9780547249643 ], [ 'title' => 'One Hundred Years Of Solitude', 'author' => 'Gabriel Garcia Marquez', 'available' => false, 'pages' => 457, 'isbn' => 9785267006323 ], ]; ?> <ul> <?php foreach ($books as $book): ?> <li> <i><?php echo $book['title']; ?></i> - <?php echo $book['author']; ?> <?php if (!$book['available']): ?> <b>Not available</b> <?php endif; ?> </li> <?php endforeach; ?> </ul> The highlighted code shows a foreach loop using the : notation, which is better when mixing it with HTML. It iterates all the $books arrays, and for each book, it will print some information as a HTML list. Also note that we have a conditional inside a loop, which is perfectly fine. Of course, this conditional will be executed for each entry in the array, so you should keep the block of code of your loops as simple as possible. Functions A function is a reusable block of code that, given an input, performs some actions and optionally returns a result. You already know several predefined functions, such as empty, in_array, or var_dump. These functions come with PHP so you do not have to reinvent the wheel, but you can create your own very easily. You can define functions when you identify portions of your application that have to be executed several times or just to encapsulate some functionality. Function declaration Declaring a function means to write it down so that it can be used later. A function has a name, takes arguments, and has a block of code. Optionally, it can define what kind of value is returning. The name of the function has to follow the same rules as variable names; that is, it has to start by a letter or underscore and can contain any letter, number, or underscore. It cannot be a reserved word. Let's see a simple example: function addNumbers($a, $b) { $sum = $a + $b; return $sum; } $result = addNumbers(2, 3); Here, the function's name is addNumbers, and it takes two arguments: $a and $b. The block of code defines a new variable $sum that is the sum of both the arguments and then returns its content with return. In order to use this function, you just need to call it by its name, sending all the required arguments, as shown in the highlighted line. PHP does not support overloaded functions. Overloading refers to the ability of declaring two or more functions with the same name but different arguments. As you can see, you can declare the arguments without knowing what their types are, so PHP would not be able to decide which function to use. Another important thing to note is the variable scope. We are declaring a $sum variable inside the block of code, so once the function ends, the variable will not be accessible any more. 
This means that the scope of variables declared inside the function is just the function itself. Furthermore, if you had a $sum variable declared outside the function, it would not be affected at all since the function cannot access that variable unless we send it as an argument. Function arguments A function gets information from outside via arguments. You can define any number of arguments—including 0. These arguments need at least a name so that they can be used inside the function, and there cannot be two arguments with the same name. When invoking the function, you need to send the arguments in the same order as we declared them. A function may contain optional arguments; that is, you are not forced to provide a value for those arguments. When declaring the function, you need to provide a default value for those arguments, so in case the user does not provide a value, the function will use the default one: function addNumbers($a, $b, $printResult = false) { $sum = $a + $b; if ($printResult) { echo 'The result is ' . $sum; } return $sum; } $sum1 = addNumbers(1, 2); $sum1 = addNumbers(3, 4, false); $sum1 = addNumbers(5, 6, true); // it will print the result This new function takes two mandatory arguments and an optional one. The default value is false, and is used as a normal value inside the function. The function will print the result of the sum if the user provides true as the third argument, which happens only the third time that the function is invoked. For the first two times, $printResult is set to false. The arguments that the function receives are just copies of the values that the user provided. This means that if you modify these arguments inside the function, it will not affect the original values. This feature is known as sending arguments by a value. Let's see an example: function modify($a) { $a = 3; } $a = 2; modify($a); var_dump($a); // prints 2 We are declaring the $a variable with the value 2, and then we are calling the modify method, sending $a. The modify method modifies the $a argument, setting its value to 3. However, this does not affect the original value of $a, which reminds to 2 as you can see in the var_dump function. If what you want is to actually change the value of the original variable used in the invocation, you need to pass the argument by reference. To do that, you add & in front of the argument when declaring the function: function modify(&$a) { $a = 3; } Now, after invoking the modify function, $a will be always 3. Arguments by value versus by reference PHP allows you to do it, and in fact, some native functions of PHP use arguments by reference—remember the array sorting functions; they did not return the sorted array; instead, they sorted the array provided. But using arguments by reference is a way of confusing developers. Usually, when someone uses a function, they expect a result, and they do not want their provided arguments to be modified. So, try to avoid it; people will be grateful! The return statement You can have as many return statements as you want inside your function, but PHP will exit the function as soon as it finds one. This means that if you have two consecutive return statements, the second one will never be executed. Still, having multiple return statements can be useful if they are inside conditionals. Add this function inside your functions.php file: function loginMessage() { if (isset($_COOKIE['username'])) { return "You are " . 
$_COOKIE['username']; } else { return "You are not authenticated."; } } Let's use it in your index.php file by replacing the highlighted content—note that to save some trees, I replaced most of the code that was not changed at all with //…: //... <body> <p><?php echo loginMessage(); ?></p> <?php if (isset($_GET['title']) && isset($_GET['author'])): ?> //... Additionally, you can omit the return statement if you do not want the function to return anything. In this case, the function will end once it reaches the end of the block of code.
Type hinting and return types
With the release of PHP 7, the language allows developers to be more specific about what functions get and return. You can—always optionally—specify the type of argument that the function needs, known as type hinting, and the type of result the function will return, known as the return type. Let's first see an example: <?php declare(strict_types=1); function addNumbers(int $a, int $b, bool $printSum): int { $sum = $a + $b; if ($printSum) { echo 'The sum is ' . $sum; } return $sum; } addNumbers(1, 2, true); addNumbers(1, '2', true); // it fails when strict_types is 1 addNumbers(1, 'something', true); // it always fails This function states that the first two arguments need to be integers, the third one a Boolean, and that the result will be an integer. Now, you know that PHP has type juggling, so it can usually transform a value of one type to its equivalent value of another type, for example, the string 2 can be used as integer 2. To stop PHP from using type juggling with the arguments and results of functions, you can declare the strict_types directive as shown in the first highlighted line. This directive has to be declared at the top of each file where you want to enforce this behavior. The three invocations work as follows: The first invocation sends two integers and a Boolean, which is what the function expects. So, regardless of the value of strict_types, it will always work. The second invocation sends an integer, a string, and a Boolean. The string has a valid integer value, so if PHP was allowed to use type juggling, the invocation would work just fine. But in this example, it will fail because of the declaration on top of the file. The third invocation will always fail, as the something string cannot be transformed into a valid integer. Let's try to use a function within our project. In our index.php file, we have a foreach loop that iterates the books and prints them. The code inside the loop is kind of hard to understand as it is mixing HTML with PHP, and there is a conditional too. Let's try to abstract the logic inside the loop into a function. First, create the new functions.php file with the following content: <?php function printableTitle(array $book): string { $result = '<i>' . $book['title'] . '</i> - ' . $book['author']; if (!$book['available']) { $result .= ' <b>Not available</b>'; } return $result; } This file will contain our functions. The first one, printableTitle, takes an array representing a book and builds a string with a nice representation of the book in HTML. The code is the same as before, just encapsulated in a function. Now, index.php will have to include the functions.php file and then use the function inside the loop. Let's see how this is done: <?php require_once 'functions.php' ?> <!DOCTYPE html> <html lang="en"> //... ?> <ul> <?php foreach ($books as $book): ?> <li><?php echo printableTitle($book); ?> </li> <?php endforeach; ?> </ul> //...
Also, if we need to print the title of the book somewhere else, we can reuse the function instead of duplicating code! Summary In this article, we went through all the basics of procedural PHP while writing simple examples in order to practice them. You now know how to use variables and arrays with control structures and functions and how to get information from HTTP requests among others. Resources for Article: Further resources on this subject: Getting started with Modernizr using PHP IDE[article] PHP 5 Social Networking: Implementing Public Messages[article] Working with JSON in PHP jQuery[article]

Python Data Analysis Utilities

Packt
17 Feb 2016
13 min read
After the success of the book Python Data Analysis, Packt's acquisition editor Prachi Bisht gauged the interest of the author, Ivan Idris, in publishing Python Data Analysis Cookbook. According to Ivan, Python Data Analysis is one of his best books. Python Data Analysis Cookbook is meant for a bit more experienced Pythonistas and is written in the cookbook format. In the year after the release of Python Data Analysis, Ivan has received a lot of feedback—mostly positive, as far as he is concerned. Although Python Data Analysis covers a wide range of topics, Ivan still managed to leave out a lot of subjects. He realized that he needed a library as a toolbox. Named dautil for data analysis utilities, the API was distributed by him via PyPi so that it is installable via pip/easy_install. As you know, Python 2 will no longer be supported after 2020, so dautil is based on Python 3. For the sake of reproducibility, Ivan also published a Docker repository named pydacbk (for Python Data Analysis Cookbook). The repository represents a virtual image with preinstalled software. For practical reasons, the image doesn't contain all the software, but it still contains a fair percentage. This article has the following sections: Data analysis, data science, big data – what is the big deal? A brief history of data analysis with Python A high-level overview of dautil IPython notebook utilities Downloading data Plotting utilities Demystifying Docker Future directions (For more resources related to this topic, see here.) Data analysis, data science, big data – what is the big deal? You've probably seen Venn diagrams depicting data science as the intersection of mathematics/statistics, computer science, and domain expertise. Data analysis is timeless and was there before data science and computer science. You could perform data analysis with a pen and paper and, in more modern times, with a pocket calculator. Data analysis has many aspects with goals such as making decisions or coming up with new hypotheses and questions. The hype, status, and financial rewards surrounding data science and big data remind me of the time when data warehousing and business intelligence were the buzzwords. The ultimate goal of business intelligence and data warehousing was to build dashboards for management. This involved a lot of politics and organizational aspects, but on the technical side, it was mostly about databases. Data science, on the other hand, is not database-centric, and leans heavily on machine learning. Machine learning techniques have become necessary because of the bigger volumes of data. Data growth is caused by the growth of the world's population and the rise of new technologies such as social media and mobile devices. Data growth is in fact probably the only trend that we can be sure will continue. The difference between constructing dashboards and applying machine learning is analogous to the way search engines evolved. Search engines (if you can call them that) were initially nothing more than well-organized collections of links created manually. Eventually, the automated approach won. Since more data will be created in time (and not destroyed), we can expect an increase in automated data analysis. A brief history of data analysis with Python The history of the various Python software libraries is quite interesting. 
I am not a historian, so the following notes are written from my own perspective:
1989: Guido van Rossum implements the very first version of Python at the CWI in the Netherlands as a Christmas hobby project.
1995: Jim Hugunin creates Numeric, the predecessor to NumPy.
1999: Pearu Peterson writes f2py as a bridge between Fortran and Python.
2000: Python 2.0 is released.
2001: The SciPy library is released. Also, Numarray, a competing library of Numeric, is created. Fernando Perez releases IPython, which starts out as an afternoon hack. NLTK is released as a research project.
2002: John Hunter creates the matplotlib library.
2005: NumPy is released by Travis Oliphant. Initially, NumPy is Numeric extended with features inspired by Numarray.
2006: NumPy 1.0 is released. The first version of SQLAlchemy is released.
2007: The scikit-learn project is initiated as a Google Summer of Code project by David Cournapeau. Cython is forked from Pyrex. Cython is later intensively used in pandas and scikit-learn to improve performance.
2008: Wes McKinney starts working on pandas. Python 3.0 is released.
2011: The IPython 0.12 release introduces the IPython notebook. Packt releases NumPy 1.5 Beginner's Guide.
2012: Packt releases NumPy Cookbook.
2013: Packt releases NumPy Beginner's Guide - Second Edition.
2014: Fernando Perez announces Project Jupyter, which aims to make a language-agnostic notebook. Packt releases Learning NumPy Array and Python Data Analysis.
2015: Packt releases NumPy Beginner's Guide - Third Edition and NumPy Cookbook - Second Edition.
A high-level overview of dautil
The dautil API that Ivan made for this book is a humble toolbox, which he found useful. It is released under the MIT license. This license is very permissive, so you could in theory use the library in a production system. He doesn't recommend doing this currently (as of January, 2016), but he believes that the unit tests and documentation are of acceptable quality. The library has 3000+ lines of code and 180+ unit tests with a reasonable coverage. He has fixed as many issues reported by pep8 and flake8 as possible. Some of the functions in dautil are on the short side and are of very low complexity. This is on purpose. If there is a second edition (knock on wood), dautil will probably be completely transformed. The API evolved as Ivan wrote the book under high time pressure, so some of the decisions he made may not be optimal in retrospect. However, he hopes that people find dautil useful and, ideally, contribute to it. The dautil modules are summarized in the following table:
dautil.collect: Contains utilities related to collections (331 LOC)
dautil.conf: Contains configuration utilities (48 LOC)
dautil.data: Contains utilities to download and load data (468 LOC)
dautil.db: Contains database-related utilities (98 LOC)
dautil.log_api: Contains logging utilities (204 LOC)
dautil.nb: Contains IPython/Jupyter notebook widgets and utilities (609 LOC)
dautil.options: Configures dynamic options of several libraries related to data analysis (71 LOC)
dautil.perf: Contains performance-related utilities (162 LOC)
dautil.plotting: Contains plotting utilities (382 LOC)
dautil.report: Contains reporting utilities (232 LOC)
dautil.stats: Contains statistical functions and utilities (366 LOC)
dautil.ts: Contains utilities for time series and dates (217 LOC)
dautil.web: Contains utilities for web mining and HTML processing (47 LOC)
IPython notebook utilities
The IPython notebook has become a standard tool for data analysis.
The dautil.nb has several interactive IPython widgets to help with Latex rendering, the setting of matplotlib properties, and plotting. Ivan has defined a Context class, which represents the configuration settings of the widgets. The settings are stored in a pretty-printed JSON file in the current working directory, which is named dautil.json. This could be extended, maybe even with a database backend. The following is an edited excerpt (so that it doesn't take up a lot of space) of an example dautil.json: { ... "calculating_moments": { "figure.figsize": [ 10.4, 7.7 ], "font.size": 11.2 }, "calculating_moments.latex": [ 1, 2, 3, 4, 5, 6, 7 ], "launching_futures": { "figure.figsize": [ 11.5, 8.5 ] }, "launching_futures.labels": [ [ {}, { "legend": "loc=best", "title": "Distribution of Means" } ], [ { "legend": "loc=best", "title": "Distribution of Standard Deviation" }, { "legend": "loc=best", "title": "Distribution of Skewness" } ] ], ... }  The Context object can be constructed with a string—Ivan recommends using the name of the notebook, but any unique identifier will do. The dautil.nb.LatexRenderer also uses the Context class. It is a utility class, which helps you number and render Latex equations in an IPython/Jupyter notebook, for instance, as follows: import dautil as dl lr = dl.nb.LatexRenderer(chapter=12, context=context) lr.render(r'delta! = x - m') lr.render(r'm' = m + frac{delta}{n}') lr.render(r'M_2' = M_2 + delta^2 frac{ n-1}{n}') lr.render(r'M_3' = M_3 + delta^3 frac{ (n - 1) (n - 2)}{n^2}/ - frac{3delta M_2}{n}') lr.render(r'M_4' = M_4 + frac{delta^4 (n - 1) / (n^2 - 3n + 3)}{n^3} + frac{6delta^2 M_2}/ {n^2} - frac{4delta M_3}{n}') lr.render(r'g_1 = frac{sqrt{n} M_3}{M_2^{3/2}}') lr.render(r'g_2 = frac{n M_4}{M_2^2}-3.') The following is the result:   Another widget you may find useful is RcWidget, which sets matplotlib settings, as shown in the following screenshot: Downloading data Sometimes, we require sample data to test an algorithm or prototype a visualization. In the dautil.data module, you will find many utilities for data retrieval. Throughout this book, Ivan has used weather data from the KNMI for the weather station in De Bilt. A couple of the utilities in the module add a caching layer on top of existing pandas functions, such as the ones that download data from the World Bank and Yahoo! Finance (the caching depends on the joblib library and is currently not very configurable). You can also get audio, demographics, Facebook, and marketing data. The data is stored under a special data directory, which depends on the operating system. On the machine used in the book, it is stored under ~/Library/Application Support/dautil. The following example code loads data from the SPAN Facebook dataset and computes the clique number: import networkx as nx import dautil as dl fb_file = dl.data.SPANFB().load() G = nx.read_edgelist(fb_file, create_using=nx.Graph(), nodetype=int) print('Graph Clique Number', nx.graph_clique_number(G.subgraph(list(range(2048)))))  To understand what is going on in detail, you will need to read the book. In a nutshell, we load the data and use the NetworkX API to calculate a network metric. Plotting utilities Ivan visualizes data very often in the book. Plotting helps us get an idea about how the data is structured and helps you form hypotheses or research questions. Often, we want to chart multiple variables, but we want to easily see what is what. The standard solution in matplotlib is to cycle colors. 
However, Ivan prefers to cycle line widths and line styles as well. The following unit test demonstrates his solution to this issue: def test_cycle_plotter_plot(self): m_ax = Mock() cp = plotting.CyclePlotter(m_ax) cp.plot([0], [0]) m_ax.plot.assert_called_with([0], [0], '-', lw=1) cp.plot([0], [1]) m_ax.plot.assert_called_with([0], [1], '--', lw=2) cp.plot([1], [0]) m_ax.plot.assert_called_with([1], [0], '-.', lw=1) The dautil.plotting module currently also has a helper tool for subplots, histograms, regression plots, and dealing with color maps. The following example code (the code for the labels has been omitted) demonstrates a bar chart utility function and a utility function from dautil.data, which downloads stock price data: import dautil as dl import numpy as np import matplotlib.pyplot as plt ratios = [] STOCKS = ['AAPL', 'INTC', 'MSFT', 'KO', 'DIS', 'MCD', 'NKE', 'IBM'] for symbol in STOCKS: ohlc = dl.data.OHLC() P = ohlc.get(symbol)['Adj Close'].values N = len(P) mu = (np.log(P[-1]) - np.log(P[0]))/N var_a = 0 var_b = 0 for k in range(1, N): var_a = (np.log(P[k]) - np.log(P[k - 1]) - mu) ** 2 var_a = var_a / N for k in range(1, N//2): var_b = (np.log(P[2 * k]) - np.log(P[2 * k - 2]) - 2 * mu) ** 2 var_b = var_b / N ratios.append(var_b/var_a - 1) _, ax = plt.subplots() dl.plotting.bar(ax, STOCKS, ratios) plt.show() Refer to the following screenshot for the end result: The code performs a random walk test and calculates the corresponding ratio for a list of stock prices. The data is retrieved whenever you run the code, so you may get different results. Some of you have a finance aversion, but rest assured that this book has very little finance-related content. The following script demonstrates a linear regression utility and caching downloader for World Bank data (the code for the watermark and plot labels has been omitted): import dautil as dl import matplotlib.pyplot as plt import numpy as np wb = dl.data.Worldbank() countries = wb.get_countries()[['name', 'iso2c']] inf_mort = wb.get_name('inf_mort') gdp_pcap = wb.get_name('gdp_pcap') df = wb.download(country=countries['iso2c'], indicator=[inf_mort, gdp_pcap], start=2010, end=2010).dropna() loglog = df.applymap(np.log10) x = loglog[gdp_pcap] y = loglog[inf_mort] dl.options.mimic_seaborn() fig, [ax, ax2] = plt.subplots(2, 1) ax.set_ylim([0, 200]) ax.scatter(df[gdp_pcap], df[inf_mort]) ax2.scatter(x, y) dl.plotting.plot_polyfit(ax2, x, y) plt.show()  The following image should be displayed by the code: The program downloads World Bank data for 2010 and plots the infant mortality rate against the GDP per capita. Also shown is a linear fit of the log-transformed data. Demystifying Docker Docker uses Linux kernel features to provide an extra virtualization layer. It was created in 2013 by Solomon Hykes. Boot2Docker allows us to install Docker on Windows and Mac OS X as well. Boot2Docker uses a VirtualBox VM that contains a Linux environment with Docker. Ivan's Docker image, which is mentioned in the introduction, is based on the continuumio/miniconda3 Docker image. The Docker installation docs are at https://docs.docker.com/index.html. Once you install Boot2Docker, you need to initialize it. This is only necessary once, and Linux users don't need this step: $ boot2docker init The next step for Mac OS X and Windows users is to start the VM: $ boot2docker start Check the Docker environment by starting a sample container: $ docker run hello-world Docker images are organized in a repository, which resembles GitHub. 
A producer pushes images and a consumer pulls images. You can pull Ivan's repository with the following command (the size is currently 387 MB):

$ docker pull ivanidris/pydacbk

Future directions

The dautil API consists of items Ivan thinks will be useful outside of the context of this book. Certain functions and classes that he felt were only suitable for a particular chapter are placed in separate per-chapter modules, such as ch12util.py. In retrospect, parts of those modules may need to be included in dautil as well. In no particular order, Ivan has the following ideas for future dautil development:

He is playing with the idea of creating a parallel library with "Cythonized" code, but this depends on how dautil is received
Adding more data loaders as required
There is a whole range of streaming (or online) algorithms that he thinks should be included in dautil as well
The GUI of the notebook widgets should be improved and extended
The API should have more configuration options and be easier to configure

Summary

In this article, Ivan roughly sketched what data analysis, data science, and big data are about. This was followed by a brief history of data analysis with Python. Then, he started explaining dautil—the API he made to help him with this book. He gave a high-level overview and some examples of the IPython notebook utilities, the features to download data, and the plotting utilities. He used Docker for testing and giving readers a reproducible data analysis environment, so he spent some time on that topic too. Finally, he mentioned the possible future directions that could be taken for the library in order to guide anyone who wants to contribute.

Resources for Article:

Further resources on this subject:
Recommending Movies at Scale (Python) [article]
Python Data Science Up and Running [article]
Making Your Data Everything It Can Be [article]
Introduction to Object-Oriented Programming using Python, JavaScript, and C#

Packt
17 Feb 2016
5 min read
In this extract from Learning Object-Oriented Programming by Gaston Hillar, we will show you the benefits of thinking in an object-oriented way. From encapsulation and inheritance to polymorphism or object-overloading, we will hammer home the thought process that must be mastered in order to truly make the most of the object-oriented approach. In this article, we will see how to generate blueprints for objects and we will design classes which include the attributes or fields that provide data for each instance. We will explore the different object-oriented approaches in different languages, such as Python, JavaScript, and C#. Generating blueprints for objects Imagine that you want to draw and calculate the areas of four different rectangles. You will end up with four rectangles each with different widths, heights, and areas. Now imagine having a blueprint to simplify the process of drawing each different rectangle. In object-oriented programming, a class is a blueprint or a template definition from which the objects are created. Classes are models that define the state and behavior of an object. After defining a class that determines the state and behavior of a rectangle, we can use it to generate objects that represent the state and behavior of each real-world rectangle. Objects are also known as instances. For example, we can say each rectangle object is an instance of the rectangle class. The following image shows four rectangle instances, with their widths and heights specified: Rectangle #1, Rectangle #2, Rectangle #3, and Rectangle #4. We can use a rectangle class as a blueprint to generate the four different rectangle instances. It is very important to understand the difference between a class and the objects or instances generated through its usage. Object-oriented programming allows us to discover the blueprint we used to generate a specific object. Thus, we are able to infer that each object is an instance of the rectangle class. Recognizing attributes/fields Now, we need to design the classes to include the attributes that provide the required data to each instance. In other words, we have to make sure that each class has the necessary variables that encapsulate all the data required by the objects to perform all the tasks. Let's start with the Square class. It is necessary to know the length of the sides for each instance of this class - for each square object. Therefore, we need an encapsulated variable that allows each instance of this class to specify the value of the length of a side. The variables defined in a class to encapsulate data for each instance of the class are known as attributes or fields. Each instance has its own independent value for the attributes or fields defined in the class. The Square class defines a floating point attribute named LengthOfSide whose initial value is equal to 0 for any new instance of the class. After you create an instance of the Square class, it is possible to change the value of the LengthOfSide attribute. For example, imagine that you create two instances of the Square class. One of the instances is named square1, and the other is square2. The instance names allow you to access the encapsulated data for each object, and therefore, you can use them to change the values of the exposed attributes. Imagine that our object-oriented programming language uses a dot (.) to allow us to access the attributes of the instances. 
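For instance, a minimal Python sketch of the Square blueprint described here might look like the following (the attribute name mirrors the article's examples; this is an illustration rather than code taken from the book):

class Square:
    def __init__(self):
        # Every instance starts with its own LengthOfSide attribute set to 0
        self.LengthOfSide = 0

square1 = Square()
square2 = Square()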
So, square1.LengthOfSide provides access to the length of side for the Square instance named square1, and square2.LengthOfSide does the same for the Square instance named square2. You can assign the value 10 to square1.LengthOfSide and 20 to square2.LengthOfSide. This way, each Square instance is going to have a different value for its LengthOfSide attribute. Now, let's move to the Rectangle class. We can define two floating-point attributes for this class: Width and Height. Their initial values are also going to be 0. Then, you can create two instances of the Rectangle class: rectangle1 and rectangle2. You can assign the value 10 to rectangle1.Width and 20 to rectangle1.Height. This way, rectangle1 represents a 10 x 20 rectangle. You can assign the value 30 to rectangle2.Width and 50 to rectangle2.Height to make the second Rectangle instance represent a 30 x 50 rectangle.

Object-oriented approaches in Python, JavaScript, and C#

Python, JavaScript, and C# support object-oriented programming, also known as OOP. However, each programming language takes a different approach. Both Python and C# support classes and inheritance. Therefore, you can use the different syntax provided by each of these programming languages to declare the Shape class and its four subclasses. Then, you can create instances of each of the subclasses and call the different methods. On the other hand, JavaScript uses an object-oriented model that doesn't use classes. This object-oriented model is known as prototype-based programming. However, don't worry. Everything you have learned so far in your simple object-oriented design journey can be coded in JavaScript. Instead of using inheritance to achieve behavior reuse, we can expand upon existing objects. Thus, we can say that objects serve as prototypes in JavaScript. Instead of focusing on classes, we work with instances and decorate them to emulate inheritance in class-based languages. The object-oriented model named prototype-based programming is also known by other names, such as classless programming, instance-based programming, or prototype-oriented programming. There are other important differences between Python, JavaScript, and C#. They have a great impact on the way you can code object-oriented designs. Carry on reading Learning Object-Oriented Programming to learn the different ways to code the same object-oriented design in three programming languages. So what are you waiting for? Take a look at what else the book offers now!
Putting the Fun in Functional Python

Packt
17 Feb 2016
21 min read
Functional programming defines a computation using expressions and evaluation—often encapsulated in function definitions. It de-emphasizes or avoids the complexity of state change and mutable objects. This tends to create programs that are more succinct and expressive. In this article, we'll introduce some of the techniques that characterize functional programming. We'll identify some of the ways to map these features to Python. Finally, we'll also address some ways in which the benefits of functional programming accrue when we use these design patterns to build Python applications. Python has numerous functional programming features. It is not a purely functional programming language. It offers enough of the right kinds of features that it confers to the benefits of functional programming. It also retains all optimization power available from an imperative programming language. We'll also look at a problem domain that we'll use for many of the examples in this book. We'll try to stick closely to Exploratory Data Analysis (EDA) because its algorithms are often good examples of functional programming. Furthermore, the benefits of functional programming accrue rapidly in this problem domain. Our goal is to establish some essential principles of functional programming. We'll focus on Python 3 features in this book. However, some of the examples might also work in Python 2. (For more resources related to this topic, see here.) Identifying a paradigm It's difficult to be definitive on what fills the universe of programming paradigms. For our purposes, we will distinguish between just two of the many programming paradigms: Functional programming and Imperative programming. One important distinguishing feature between these two is the concept of state. In an imperative language, like Python, the state of the computation is reflected by the values of the variables in the various namespaces. The values of the variables establish the state of a computation; each kind of statement makes a well-defined change to the state by adding or changing (or even removing) a variable. A language is imperative because each statement is a command, which changes the state in some way. Our general focus is on the assignment statement and how it changes state. Python has other statements, such as global or nonlocal, which modify the rules for variables in a particular namespace. Statements like def, class, and import change the processing context. Other statements like try, except, if, elif, and else act as guards to modify how a collection of statements will change the computation's state. Statements like for and while, similarly, wrap a block of statements so that the statements can make repeated changes to the state of the computation. The focus of all these various statement types, however, is on changing the state of the variables. Ideally, each statement advances the state of the computation from an initial condition toward the desired final outcome. This "advances the computation" assertion can be challenging to prove. One approach is to define the final state, identify a statement that will establish this final state, and then deduce the precondition required for this final statement to work. This design process can be iterated until an acceptable initial state is derived. In a functional language, we replace state—the changing values of variables—with a simpler notion of evaluating functions. Each function evaluation creates a new object or objects from existing objects. 
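As a small illustration (not taken from the book), here is the same sum written first as a series of state changes and then as a single expression over values:

# Imperative style: total and n carry the state of the computation
total = 0
for n in range(1, 6):
    total = total + n
print(total)    # 15

# Functional style: nothing is reassigned; the result is the value of one expression
print(sum(range(1, 6)))    # 15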
Since a functional program is a composition of a function, we can design lower-level functions that are easy to understand, and we will design higher-level compositions that can also be easier to visualize than a complex sequence of statements. Function evaluation more closely parallels mathematical formalisms. Because of this, we can often use simple algebra to design an algorithm, which clearly handles the edge cases and boundary conditions. This makes us more confident that the functions work. It also makes it easy to locate test cases for formal unit testing. It's important to note that functional programs tend to be relatively succinct, expressive, and efficient when compared to imperative (object-oriented or procedural) programs. The benefit isn't automatic; it requires a careful design. This design effort is often easier than functionally similar procedural programming. Subdividing the procedural paradigm We can subdivide imperative languages into a number of discrete categories. In this section, we'll glance quickly at the procedural versus object-oriented distinction. What's important here is to see how object-oriented programming is a subset of imperative programming. The distinction between procedural and object-orientation doesn't reflect the kind of fundamental difference that functional programming represents. We'll use code examples to illustrate the concepts. For some, this will feel like reinventing a wheel. For others, it provides a concrete expression of abstract concepts. For some kinds of computations, we can ignore Python's object-oriented features and write simple numeric algorithms. For example, we might write something like the following to get the range of numbers: s = 0 for n in range(1, 10): if n % 3 == 0 or n % 5 == 0: s += n print(s) We've made this program strictly procedural, avoiding any explicit use of Python's object features. The program's state is defined by the values of the variables s and n. The variable, n, takes on values such that 1 ≤ n < 10. As the loop involves an ordered exploration of values of n, we can prove that it will terminate when n == 10. Similar code would work in C or Java using their primitive (non-object) data types. We can exploit Python's Object-Oriented Programming (OOP) features and create a similar program: m = list() for n in range(1, 10): if n % 3 == 0 or n % 5 == 0: m.append(n) print(sum(m)) This program produces the same result but it accumulates a stateful collection object, m, as it proceeds. The state of the computation is defined by the values of the variables m and n. The syntax of m.append(n) and sum(m) can be confusing. It causes some programmers to insist (wrongly) that Python is somehow not purely Object-oriented because it has a mixture of the function()and object.method() syntax. Rest assured, Python is purely Object-oriented. Some languages, like C++, allow the use of primitive data type such as int, float, and long, which are not objects. Python doesn't have these primitive types. The presence of prefix syntax doesn't change the nature of the language. To be pedantic, we could fully embrace the object model, the subclass, the list class, and add a sum method: class SummableList(list): def sum( self ): s= 0 for v in self.__iter__(): s += v return s If we initialize the variable, m, with the SummableList() class instead of the list() method, we can use the m.sum() method instead of the sum(m) method. This kind of change can help to clarify the idea that Python is truly and completely object-oriented. 
The use of prefix function notation is purely syntactic sugar. All three of these examples rely on variables to explicitly show the state of the program. They rely on the assignment statements to change the values of the variables and advance the computation toward completion. We can insert the assert statements throughout these examples to demonstrate that the expected state changes are implemented properly. The point is not that imperative programming is broken in some way. The point is that functional programming leads to a change in viewpoint, which can, in many cases, be very helpful. We'll show a function view of the same algorithm. Functional programming doesn't make this example dramatically shorter or faster. Using the functional paradigm In a functional sense, the sum of the multiples of 3 and 5 can be defined in two parts: The sum of a sequence of numbers A sequence of values that pass a simple test condition, for example, being multiples of three and five The sum of a sequence has a simple, recursive definition: def sum(seq): if len(seq) == 0: return 0 return seq[0] + sum(seq[1:]) We've defined the sum of a sequence in two cases: the base case states that the sum of a zero length sequence is 0, while the recursive case states that the sum of a sequence is the first value plus the sum of the rest of the sequence. Since the recursive definition depends on a shorter sequence, we can be sure that it will (eventually) devolve to the base case. The + operator on the last line of the preceeding example and the initial value of 0 in the base case characterize the equation as a sum. If we change the operator to * and the initial value to 1, it would just as easily compute a product. Similarly, a sequence of values can have a simple, recursive definition, as follows: def until(n, filter_func, v): if v == n: return [] if filter_func(v): return [v] + until( n, filter_func, v+1 ) else: return until(n, filter_func, v+1) In this function, we've compared a given value, v, against the upper bound, n. If v reaches the upper bound, the resulting list must be empty. This is the base case for the given recursion. There are two more cases defined by the given filter_func() function. If the value of v is passed by the filter_func() function, we'll create a very small list, containing one element, and append the remaining values of the until() function to this list. If the value of v is rejected by the filter_func() function, this value is ignored and the result is simply defined by the remaining values of the until() function. We can see that the value of v will increase from an initial value until it reaches n, assuring us that we'll reach the base case soon. Here's how we can use the until() function to generate the multiples of 3 or 5. First, we'll define a handy lambda object to filter values: mult_3_5= lambda x: x%3==0 or x%5==0 (We will use lambdas to emphasize succinct definitions of simple functions. Anything more complex than a one-line expression requires the def statement.) We can see how this lambda works from the command prompt in the following example: >>> mult_3_5(3) True >>> mult_3_5(4) False >>> mult_3_5(5) True This function can be used with the until() function to generate a sequence of values, which are multiples of 3 or 5. The until() function for generating a sequence of values works as follows: >>> until(10, lambda x: x%3==0 or x%5==0, 0) [0, 3, 5, 6, 9] We can use our recursive sum() function to compute the sum of this sequence of values. 
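Putting these pieces together, the whole computation can be written as one composition of the functions defined above:

>>> sum(until(10, mult_3_5, 0))
23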
The various functions, such as sum(), until(), and mult_3_5() are defined as simple recursive functions. The values are computed without restoring to use intermediate variables to store state. We'll return to the ideas behind this purely functional recursive function definition in several places. It's important to note here that many functional programming language compilers can optimize these kinds of simple recursive functions. Python can't do the same optimizations. Using a functional hybrid We'll continue this example with a mostly functional version of the previous example to compute the sum of the multiples of 3 and 5. Our hybrid functional version might look like the following: print( sum(n for n in range(1, 10) if n%3==0 or n%5==0) ) We've used nested generator expressions to iterate through a collection of values and compute the sum of these values. The range(1, 10) method is an iterable and, consequently, a kind of generator expression; it generates a sequence of values . The more complex expression, n for n in range(1, 10) if n%3==0 or n%5==0, is also an iterable expression. It produces a set of values . A variable, n, is bound to each value, more as a way of expressing the contents of the set than as an indicator of the state of the computation. The sum() function consumes the iterable expression, creating a final object, 23. The bound variable doesn't change once a value is bound to it. The variable, n, in the loop is essentially a shorthand for the values available from the range() function. The if clause of the expression can be extracted into a separate function, allowing us to easily repurpose this with other rules. We could also use a higher-order function named filter() instead of the if clause of the generator expression. As we work with generator expressions, we'll see that the bound variable is at the blurry edge of defining the state of the computation. The variable, n, in this example isn't directly comparable to the variable, n, in the first two imperative examples. The for statement creates a proper variable in the local namespace. The generator expression does not create a variable in the same way as a for statement does: >>> sum( n for n in range(1, 10) if n%3==0 or n%5==0 ) 23 >>> n Traceback (most recent call last): File "<stdin>", line 1, in <module> NameError: name 'n' is not defined Because of the way Python uses namespaces, it might be possible to write a function that can observe the n variable in a generator expression. However, we won't. Our objective is to exploit the functional features of Python, not to detect how those features have an object-oriented implementation under the hood. Looking at object creation In some cases, it might help to look at intermediate objects as a history of the computation. What's important is that the history of a computation is not fixed. When functions are commutative or associative, then changes to the order of evaluation might lead to different objects being created. This might have performance improvements with no changes to the correctness of the results. Consider this expression: >>> 1+2+3+4 10 We are looking at a variety of potential computation histories with the same result. Because the + operator is commutative and associative, there are a large number of candidate histories that lead to the same result. Of the candidate sequences, there are two important alternatives, which are as follows: >>> ((1+2)+3)+4 10 >>> 1+(2+(3+4)) 10 In the first case, we fold in values working from left to right. 
This is the way Python works implicitly. Intermediate objects 3 and 6 are created as part of this evaluation. In the second case, we fold from right-to-left. In this case, intermediate objects 7 and 9 are created. In the case of simple integer arithmetic, the two results have identical performance; there's no optimization benefit. When we work with something like the list append, we might see some optimization improvements when we change the association rules. Here's a simple example: >>> import timeit >>> timeit.timeit("((([]+[1])+[2])+[3])+[4]") 0.8846941249794327 >>> timeit.timeit("[]+([1]+([2]+([3]+[4])))") 1.0207440659869462 In this case, there's some benefit in working from left to right. What's important for functional design is the idea that the + operator (or add() function) can be used in any order to produce the same results. The + operator has no hidden side effects that restrict the way this operator can be used. The stack of turtles When we use Python for functional programming, we embark down a path that will involve a hybrid that's not strictly functional. Python is not Haskell, OCaml, or Erlang. For that matter, our underlying processor hardware is not functional; it's not even strictly object-oriented—CPUs are generally procedural. All programming languages rest on abstractions, libraries, frameworks and virtual machines. These abstractions, in turn, may rely on other abstractions, libraries, frameworks and virtual machines. The most apt metaphor is this: the world is carried on the back of a giant turtle. The turtle stands on the back of another giant turtle. And that turtle, in turn, is standing on the back of yet another turtle. It's turtles all the way down.                                                                                                             – Anonymous
There's no practical end to the layers of abstractions. More importantly, the presence of abstractions and virtual machines doesn't materially change our approach to designing software to exploit the functional programming features of Python. Even within the functional programming community, there are more pure and less pure functional programming languages. Some languages make extensive use of monads to handle stateful things like filesystem input and output. Other languages rely on a hybridized environment that's similar to the way we use Python. We write software that's generally functional with carefully chosen procedural exceptions. Our functional Python programs will rely on the following three stacks of abstractions:

Our applications will be functions—all the way down—until we hit the objects
The underlying Python runtime environment that supports our functional programming is objects—all the way down—until we hit the turtles
The libraries that support Python are a turtle on which Python stands

The operating system and hardware form their own stack of turtles. These details aren't relevant to the problems we're going to solve.

A classic example of functional programming

As part of our introduction, we'll look at a classic example of functional programming. This is based on the classic paper Why Functional Programming Matters by John Hughes. The article appeared in a book called Research Topics in Functional Programming, edited by D. Turner, published by Addison-Wesley in 1990. Here's a link to the paper Why Functional Programming Matters: http://www.cs.kent.ac.uk/people/staff/dat/miranda/whyfp90.pdf

This discussion of functional programming in general is profound. There are several examples given in the paper. We'll look at just one: the Newton-Raphson algorithm for locating the roots of a function. In this case, the function is the square root. It's important because many versions of this algorithm rely on the explicit state managed via loops. Indeed, the Hughes paper provides a snippet of the Fortran code that emphasizes stateful, imperative processing. The backbone of this approximation is the calculation of the next approximation from the current approximation. The next_() function takes x, an approximation of sqrt(n), and calculates a next value that brackets the proper root. Take a look at the following example:

def next_(n, x):
    return (x+n/x)/2

This function computes a series of values, where each new approximation is computed from the previous one as a_{i+1} = (a_i + n/a_i)/2. The distance between the values is halved each time, so they'll quickly converge on a value a such that a = (a + n/a)/2, which means a = sqrt(n). We don't want to call the method next() because this name would collide with a built-in function. We call it the next_() method so that we can follow the original presentation as closely as possible. Here's how the function looks when used in the command prompt:

>>> n= 2
>>> f= lambda x: next_(n, x)
>>> a0= 1.0
>>> [ round(x,4) for x in (a0, f(a0), f(f(a0)), f(f(f(a0))),) ]
[1.0, 1.5, 1.4167, 1.4142]

We've defined the f() method as a lambda that will converge on sqrt(2). We started with 1.0 as the initial value for a0. Then we evaluated a sequence of recursive evaluations: a1 = f(a0), a2 = f(f(a0)), and so on. We evaluated these functions using a list comprehension so that we could round off each value. This makes the output easier to read and easier to use with doctest. The sequence appears to converge rapidly on sqrt(2), which is approximately 1.4142.
We can write a function, which will (in principle) generate an infinite sequence of values converging on the proper square root:

def repeat(f, a):
    yield a
    for v in repeat(f, f(a)):
        yield v

This function will generate approximations using a function, f(), and an initial value, a. If we provide the next_() function defined earlier, we'll get a sequence of approximations to the square root of the n argument. The repeat() function expects the f() function to have a single argument; however, our next_() function has two arguments. We can use a lambda object, lambda x: next_(n, x), to create a partial version of the next_() function with one of the two variables bound. The Python generator functions can't be trivially recursive; they must explicitly iterate over the recursive results, yielding them individually. Attempting to use a simple return repeat(f, f(a)) will end the iteration, returning a generator expression instead of yielding the sequence of values. We have two ways to return all the values instead of returning a generator expression, which are as follows:

We can write an explicit for loop as follows: for x in some_iter: yield x.
We can use the yield from statement as follows: yield from some_iter.

Both techniques of yielding the values of a recursive generator function are equivalent. We'll try to emphasize yield from. In some cases, however, the yield with a complex expression will be more clear than the equivalent mapping or generator expression. Of course, we don't want the entire infinite sequence. We will stop generating values when two values are so close to each other that we can call either one the square root we're looking for. The common symbol for the value, which is close enough, is the Greek letter epsilon, ε, which can be thought of as the largest error we will tolerate. In Python, we'll have to be a little clever about taking items from an infinite sequence one at a time. It works out well to use a simple interface function that wraps a slightly more complex recursion. Take a look at the following code snippet:

def within(ε, iterable):
    def head_tail(ε, a, iterable):
        b = next(iterable)
        if abs(a-b) <= ε:
            return b
        return head_tail(ε, b, iterable)
    return head_tail(ε, next(iterable), iterable)

We've defined an internal function, head_tail(), which accepts the tolerance, ε, an item from the iterable sequence, a, and the rest of the iterable sequence, iterable. The next item from the iterable is bound to the name b. If abs(a-b) <= ε, then the two values are close enough together that we've found the square root. Otherwise, we use the b value in a recursive invocation of the head_tail() function to examine the next pair of values. Our within() function merely seeks to properly initialize the internal head_tail() function with the first value from the iterable parameter. Some functional programming languages offer a technique that will put a value back into an iterable sequence. In Python, this might be a kind of unget() or previous() method that pushes a value back into the iterator. Python iterables don't offer this kind of rich functionality. We can use the three functions next_(), repeat(), and within() to create a square root function, as follows:

def sqrt(a0, ε, n):
    return within(ε, repeat(lambda x: next_(n,x), a0))

We've used the repeat() function to generate a (potentially) infinite sequence of values based on the next_(n,x) function. Our within() function will stop generating values in the sequence when it locates two values with a difference of less than ε.
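For example, at the command prompt, the finished function can be exercised as follows (the exact digits shown depend on the tolerance; the result here is rounded for readability):

>>> round(sqrt(1.0, .0001, 3), 4)
1.7321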
When we use this version of the sqrt() method, we need to provide an initial seed value, a0, and an ε value. An expression like sqrt(1.0, .0001, 3) will start with an approximation of 1.0 and compute the value of sqrt(3) to within 0.0001. For most applications, the initial a0 value can be 1.0. However, the closer it is to the actual square root, the more rapidly this method converges. The original example of this approximation algorithm was shown in the Miranda language. It's easy to see that there are few profound differences between Miranda and Python. The biggest difference is Miranda's ability to cons a value back into an iterable, doing a kind of unget. This parallelism between Miranda and Python gives us confidence that many kinds of functional programming can be easily done in Python.

Summary

We've looked at programming paradigms with an eye toward distinguishing the functional paradigm from two common imperative paradigms in detail. For more information, kindly take a look at the following books, also by Packt Publishing:

Learning Python (https://www.packtpub.com/application-development/learning-python)
Mastering Python (https://www.packtpub.com/application-development/mastering-python)
Mastering Object-oriented Python (https://www.packtpub.com/application-development/mastering-object-oriented-python)

Resources for Article:

Further resources on this subject:
Saying Hello to Unity and Android [article]
Using Specular in Unity [article]
Unity 3.x Scripting-Character Controller versus Rigidbody [article]
Speaking Java – Your First Game

Packt
17 Feb 2016
72 min read
In this article, we will start writing our very own Java code at the same time as we begin understanding Java syntax. We will learn how to store, retrieve, and manipulate different types of values stored in the memory. We will also look at making decisions and branching the flow of our code based on the values of this data. In this order, we will:

Learn some Java syntax and see how it is turned into a running app by the compiler
Store data and use it with variables
Express yourself in Java with expressions
Continue with the math game by asking a question
Learn about decisions in Java
Continue with the math game by getting and checking the answer

(For more resources related to this topic, see here.)

Acquiring the preceding Java skills will enable us to build the next two phases of our math game. This game will be able to ask the player a question on multiplication, check the answer, and give feedback based on the answer given, as shown in the following screenshot:

Java syntax

Throughout this book, we will use plain English to discuss some fairly technical things. You will never be asked to read a technical explanation of a Java or Android concept that has not been previously explained in a non-technical way. Occasionally, I might ask or imply that you accept a simplified explanation in order to offer a fuller explanation at a more appropriate time, like the Java class as a black box; however, never will you need to scurry to Google in order to get your head around a big word or a jargon-filled sentence. Having said that, the Java and Android communities are full of people who speak in technical terms and to join in and learn from these communities, you need to understand the terms they use. So the approach this book takes is to learn a concept or appreciate an idea using an entirely plain speaking language, but at the same time, it introduces the jargon as part of the learning. Then, much of the jargon will begin to reveal its usefulness, usually as a way of clarification or keeping the explanation/discussion from becoming longer than it needs to be. The very term, "Java syntax," could be considered technical or jargon. So what is it? The Java syntax is the way we put together the language elements of Java in order to produce code that works in the Java/Dalvik virtual machine. Syntax should also be as clear as possible to a human reader, not least ourselves when we revisit our programs in the future. The Java syntax is a combination of the words we use and the formation of those words into sentence-like structures. These Java elements or words are many in number, but when taken in small chunks are almost certainly easier to learn than any human-spoken language. The reason for this is that the Java language and its syntax were specifically designed to be as straightforward as possible. We also have Android Studio on our side, which will often let us know if we make a mistake and will even sometimes think ahead and prompt us. I am confident that if you can read, you can learn Java, because learning Java is much easier than you might think. What then separates someone who has finished an elementary Java course from an expert programmer? The same things that separate a student of language from a master poet. Mastery of the language comes through practice and further study.

The compiler

The compiler is what turns our human-readable Java code into another piece of code that can be run in a virtual machine. This is called compiling.
The Dalvik virtual machine will run this compiled code when our players tap on our app icon. Besides compiling Java code, the compiler will also check for mistakes. Although we might still have mistakes in our released app, many are caught at the point when our code is compiled.

Making code clear with comments

As you become more advanced in writing Java programs, the solutions you use to create your programs will become longer and more complicated. Java was designed to manage complexity by having us divide our code into separate chunks, very often across multiple files. Comments are a part of the Java program that do not have any function in the program itself. The compiler ignores them. They serve to help the programmer to document, explain, and clarify their code to make it more understandable to themselves at a later date or to other programmers who might need to use or modify the code. So, a good piece of code will be liberally sprinkled with lines like this:

//this is a comment explaining what is going on

The preceding comment begins with the two forward slash characters, //. The comment ends at the end of the line. It is known as a single-line comment. So anything on that line is for humans only, whereas anything on the next line (unless it's another comment) needs to be syntactically correct Java code:

//I can write anything I like here
but this line will cause an error

We can use multiple single-line comments:

//Below is an important note
//I am an important note
//We can have as many single line comments like this as we like

Single-line comments are also useful if we want to temporarily disable a line of code. We can put // in front of the code and it will not be included in the program. Recall this code, which tells Android to load our menu UI:

//setContentView(R.layout.activity_main);

In the preceding situation, the menu will not be loaded and the app will have a blank screen when run, as the entire line of code is ignored by the compiler. There is another type of comment in Java—the multiline comment. This is useful for longer comments and also to add things such as copyright information at the top of a code file. Also like the single-line comment, it can be used to temporarily disable code, in this case usually multiple lines. Everything in between the leading /* signs and the ending */ signs is ignored by the compiler. Here are some examples:

/*
This program was written by a Java expert
You can tell I am good at this because
my code has so many helpful comments in it.
*/

There is no limit to the number of lines in a multiline comment. Which type of comment is best to use will depend upon the situation. In this book, I will always explain every line of code explicitly, but you will often find liberally sprinkled comments within the code itself that add further explanation, insight, or clarification. So it's always a good idea to read all of the code:

/*
The winning lottery numbers for next Saturday are
9,7,12,34,29,22
But you still want to learn Java?
Right?
*/

All the best Java programmers liberally sprinkle their code with comments.

Storing data and using it with variables

We can think of a variable as a labeled storage box. They are also like a programmer's window to the memory of the Android device, or whatever device we are programming. Variables can store data in memory (the storage box), ready to be recalled or altered when necessary by using the appropriate label.
Computer memory has a highly complex system of addressing that we, fortunately, do not need to interact with in Java. Java variables allow us to make up convenient names for all the data that we want our program to work with; the JVM will handle all the technicalities that interact with the operating system, which in turn, probably through several layers of buck passing, will interact with the hardware. So we can think of our Android device's memory as a huge warehouse. When we assign names to our variables, they are stored in the warehouse, ready when we need them. When we use our variable's name, the device knows exactly what we are referring to. We can then tell it to do things such as "get box A and add it to box C, delete box B," and so on. In a game, we will likely have a variable named something along the lines of score. It would be this score variable through which we manage anything related to the user's score, such as adding to it, subtracting from it, or perhaps just showing it to the player. Some situations that might arise are as follows:

The player gets a question right, so add 10 to their existing score
The player views their stats screen, so print score on the screen
The player gets the best score ever, so make hiScore the same as their current score

These are fairly arbitrary examples of names for variables and, as long as you don't use any of the characters or keywords that Java restricts, you can actually call your variables whatever you like. However, in practice, it is best to adopt a naming convention so that your variable names will be consistent. In this book, we will use a loose convention of variable names starting with a lowercase letter. When there is more than one word in the variable's name, the second word will begin with an uppercase letter. This is called "camel casing." Here are some examples of camel casing:

score
hiScore
playersPersonalBest

Before we look at some real Java code with some variables, we need to first look at the types of variables we can create and use.

Types of variables

It is not hard to imagine that even a simple game will probably have quite a few variables. What if the game has a high score table that remembers the names of the top 10 players? Then we might need variables for each player. And what about the case when a game needs to know if a playable character is dead or alive, or perhaps has any lives/retries left? We might need code that tests for life and then ends the game with a nice blood spurt animation if the playable character is dead. Another common requirement in a computer program, including games, is the right or wrong calculation: true or false. To cover these and almost every other type of information you might want to keep track of, Java has types. There are many types of variables, and we can also invent our own types or use other people's types. But for now, we will look at the built-in Java types. To be fair, they cover just about every situation we are likely to run into for a while. Some examples are the best way to explain this type of stuff. We have already discussed the hypothetical but highly likely score variable. Well, the score is likely to be a number, so we have to convey this (that the score is a number) to the Java compiler by giving the score an appropriate type. The hypothetical but equally likely playerName will, of course, hold the characters that make up the player's name.
Jumping ahead a couple of paragraphs, the type that holds a regular number is called an int, and the type that holds name-like data is called a string. And if we try and store a player name, perhaps "Ada Lovelace" in score, which is meant for numbers, we will certainly run into trouble. The compiler says no! Actually, the error would say this: As we can see, Java was designed to make it impossible for such errors to make it to a running program. Did you also spot in the previous screenshot that I had forgotten the semicolon at the end of the line? With this compiler identifying our errors, what could possibly go wrong? Here are the main types in Java. Later, we will see how to start using them: int: This type is used to store integers. It uses 32 pieces (bits) of memory and can therefore store values with a magnitude a little in excess of 2 billion, including negative values. long: As the name hints at, this data types can be used when even larger numbers are required. A long data type uses 64 bits of memory and 2 to the power of 63 is what we can store in this type. If you want to see what that looks like, try this: 9,223,372,036,854,775,807. Perhaps surprisingly, there are uses for long variables but if a smaller variable will do, we should use it so that our program uses less memory. You might be wondering when you might use numbers of this magnitude. The obvious examples would be math or science applications that do complex calculations but another use might be for timing. When you time how long something takes, the Java Date class uses the number of milliseconds since January 1, 1970. The long data type could be useful to subtract a start time from an end time to determine an elapsed time. float: This is for floating-point numbers, that is, numbers where there is precision beyond the decimal point. As the fractional part of a number takes memory space just as the whole number portion, the range of numbers possible in a float is therefore decreased compared to non-floating-point numbers. So, unless our variable will definitely use the extra precision, float would not be our data type of choice. double: When the precision in float is not enough we have double. short: When even an int data type is overkill, the super-skinny short fits into the tiniest of storage boxes, but we can only store around 64,000 values, from -32,768 to 32,767. byte: This is an even smaller storage box than a short type. There is plenty of room for these in memory but a byte can only store values from -128 to 127. boolean: We will be using plenty of Booleans throughout the book. A Boolean variable can be either true or false—nothing else. Perhaps Boolean answer questions such as: Is the player alive? Has a new high score been reached? Are two examples for a Boolean variable enough? char: This stores a single alphanumeric character. It's not going to change anything on its own but it could be useful if we put lots of them together. I have kept this discussion of data types to a practical level that is useful in the context of this book. If you are interested in how a data type's value is stored and why the limits are what they are, visit the Oracle Java tutorials site at http://docs.oracle.com/javase/tutorial/java/nutsandbolts/datatypes.html. Note that you do not need any more information than we have already discussed to continue with this book. As we just learned, each type of data that we might want to store will require a specific amount of memory. 
So we must let the Java compiler know the type of the variable before we begin to use it. The preceding variables are known as the primitive types. They use predefined amounts of memory and so, using our storage analogy, fit into predefined sizes of the storage box. As the "primitive" label suggests, they are not as sophisticated as the reference types. Reference types You might have noticed that we didn't cover the string variable type that we previously used to introduce the concept of variables. Strings are a special type of variable known as a reference type. They quite simply refer to a place in memory where the storage of the variable begins, but the reference type itself does not define a specific amount of memory. The reason for this is fairly straightforward: we don't always know how much data will need to be stored until the program is actually run. We can think of strings and other reference types as continually expanding and contracting storage boxes. So won't one of these string reference types bump into another variable eventually? If you think about the devices memory as a huge warehouse full of racks of labeled storage boxes, then you can think of the Dalvik virtual machine as a super-efficient forklift truck driver that puts the different types of storage boxes in the most appropriate place. And if it becomes necessary, the virtual machine will quickly move stuff around in a fraction of a second to avoid collisions. It will even incinerate unwanted storage boxes when appropriate. This happens at the same time as constantly unloading new storage boxes of all types and placing them in the best place, for that type of variable. Dalvik tends to keep reference variables in a part of the warehouse that is different from the part for the primitive variables. So strings can be used to store any keyboard character, like a char data type but of almost any length. Anything from a player's name to an entire book can be stored in a single string. There are a couple more reference types we will explore. Arrays are a way to store lots of variables of the same type, ready for quick and efficient access. Think of an array as an aisle in our warehouse with all the variables of a certain type lined up in a precise order. Arrays are reference types, so Dalvik keeps these in the same part of the warehouse as strings. So we know that each type of data that we might want to store will require an amount of memory. Hence, we must let the Java compiler know the type of the variable before we begin to use it. Declaration That's enough of theory. Let's see how we would actually use our variables and types. Remember that each primitive type requires a specific amount of real device memory. This is one of the reasons that the compiler needs to know what type a variable will be of. So we must first declare a variable and its type before we attempt to do anything with it. To declare a variable of type int with the name score, we would type: int score; That's it! Simply state the type, in this case int, then leave a space, and type the name you want to use for this variable. Also note the semicolon on the end of the line as usual to show the compiler that we are done with this line and what follows, if anything, is not part of the declaration. For almost all the other variable types, declaration would occur in the same way. Here are some examples. The variable names are arbitrary. 
This is like reserving a labeled storage box in the warehouse:

long millisecondsElapsed;
float gravity;
double accurateGravity;
boolean isAlive;
char playerInitial;
String playerName;

Initialization

Here, for each type, we initialize a value to the variable. Think about placing a value inside the storage box, as shown in the following code:

score = 0;
millisecondsElapsed = 1406879842000L;//1st Aug 2014 08:57:22
gravity = 1.256f;
accurateGravity = 1.256098;
isAlive = true;
playerInitial = 'C';
playerName = "Charles Babbage";

Notice that the char variable uses single quotes (') around the initialized value while the string uses double quotes ("). We can also combine the declaration and initialization steps. In the following snippet of code, we declare and initialize the same variables as we did previously, but in one step each:

int score = 0;
long millisecondsElapsed = 1406879842000L;//1st Aug 2014 08:57:22
float gravity = 1.256f;
double accurateGravity = 1.256098;
boolean isAlive = true;
char playerInitial = 'C';
String playerName = "Charles Babbage";

Whether we declare and initialize separately or together is probably dependent upon the specific situation. The important thing is that we must do both:

int a;
//The line below attempts to output a to the console
Log.i("info", "int a = " + a);

The preceding code would cause the following result:

Compiler Error: Variable a might not have been initialized

There is a significant exception to this rule. Under certain circumstances variables can have default values.

Changing variables with operators

Of course, in almost any program, we are going to need to do something with these values. Here is a list of perhaps the most common Java operators that allow us to manipulate variables. You do not need to memorize them as we will look at every line of code when we use them for the first time:

The assignment operator (=): This makes the variable to the left of the operator the same as the value to the right. For example, hiScore = score; or score = 100;.
The addition operator (+): This adds the values on either side of the operator. It is usually used in conjunction with the assignment operator, such as score = aliensShot + wavesCleared; or score = score + 100;. Notice that it is perfectly acceptable to use the same variable simultaneously on both sides of an operator.
The subtraction operator (-): This subtracts the value on the right side of the operator from the value on the left. It is usually used in conjunction with the assignment operator, such as lives = lives - 1; or balance = income - outgoings;.
The division operator (/): This divides the number on the left by the number on the right. Again, it is usually used in conjunction with the assignment operator, as shown in fairShare = numSweets / numChildren; or recycledValueOfBlock = originalValue / .9;.
The multiplication operator (*): This multiplies variables and numbers, such as answer = 10 * 10; or biggerAnswer = 10 * 10 * 10;.
The increment operator (++): This is a really neat way to add 1 to the value of a variable. The myVariable = myVariable + 1; statement is the same as myVariable++;.
The decrement operator (--): You guessed it: a really neat way to subtract 1 from something. The myVariable = myVariable - 1; statement is the same as myVariable--;.

The formal names for these operators are slightly different from the names used here for explanation. For example, the division operator is actually one of the multiplicative operators.
But the preceding names are far more useful for the purpose of learning Java and if you used the term "division operator", while conversing with someone from the Java community, they would know exactly what you mean. There are actually many more operators than these in Java. If you are curious about operators there is a complete list of them on the Java website at http://docs.oracle.com/javase/tutorial/java/nutsandbolts/operators.html. All the operators required to complete the projects in this book will be fully explained in this book. The link is provided for the curious among us. Expressing yourself in Java Let's try using some declarations, assignments and operators. When we bundle these elements together into some meaningful syntax, we call it an expression. So let's write a quick app to try some out. Here we will make a little side project so we can play with everything we have learned so far. Instead, we will simply write some Java code and examine its effects by outputting the values of variables to the Android console, called LogCat. We will see exactly how this works by building the simple project and examining the code and the console output: The following is a quick reminder of how to create a new project. Close any currently open projects by navigating to File | Close Project. Click on Start a new Android Studio project. The Create New Project configuration window will appear. Fill in the Application name field and Company Domain with packtpub.com or you could use your own company website name here instead. Now click on the Next button. On the next screen, make sure the Phone and Tablet checkbox has a tick in it. Now we have to choose the earliest version of Android we want to build our app for. Go ahead and play with a few options in the drop-down selector. You will see that the earlier the version we select, the greater is the percentage of devices our app can support. However, the trade-off here is that the earlier the version we select, the less are cutting-edge Android features available in our apps. A good balance is to select API 8: Android 2.2 (Froyo). Go ahead and do that now as shown in the next screenshot. Click on Next. Now select Blank Activity as shown in the next screenshot and click on Next again. On the next screen, simply change Activity Name to MainActivity and click on Finish. To keep our code clear and simple, you can delete the two unneeded methods (onCreateOptionsMenu and onOptionsItemSelected) and their associated @override and @import statements. However, this is not necessary for the example to work. As with all the examples and projects in this book, you can copy or review the code from the download bundle. Just create the project as described previously and paste the code from MainActivity.java file from the download bundle to the MainActivity.java file that was generated when you created the project in Android Studio. Just ensure that the package name is the same as the one you chose when the project was created. However, I strongly recommend going along with the tutorial so that we can learn how to do everything for ourselves. As this app uses the LogCat console to show its output, you should run this app on the emulator only and not on a real Android device. The app will not harm a real device, but you just won't be able to see anything happening. Create a new blank project called Expressions In Java. 
Now, in the onCreate method just after the line where we use the setContentView method, add this code to declare and initialize some variables:

//first we declare and initialize a few variables
int a = 10;
String b = "Alan Turing";
boolean c = true;

Now add the following code. This code simply outputs the value of our variables in a form where we can closely examine them in a minute. If Android Studio prompts you to import the Log class, go ahead and let it:

//Let's look at how Android 'sees' these variables
//by outputting them, one at a time to the console
Log.i("info", "a = " + a);
Log.i("info", "b = " + b);
Log.i("info", "c = " + c);

Now let's change our variables using the addition operator and another new operator. See if you can work out the output values for variables a, b, and c before looking at the output and the code explanation:

//Now let's make some changes
a++;
a = a + 10;
b = b + " was smarter than the average bear Booboo";
b = b + a;
c = (1 + 1 == 3);//1 + 1 is definitely 2! So false.

Let's output the values once more in the same way we did in step 3, but this time, the output should be different:

//Now to output them all again
Log.i("info", "a = " + a);
Log.i("info", "b = " + b);
Log.i("info", "c = " + c);

Run the program on an emulator in the usual way. You can see the output by clicking on the Android tab from our "useful tabs" area below the Project Explorer. Here is the output, with some of the unnecessary formatting stripped off:

info? a = 10
info? b = Alan Turing
info? c = true
info? a = 21
info? b = Alan Turing was smarter than the average bear Booboo21
info? c = false

Now let's discuss what happened. In step 2, we declared and initialized three variables:

a: This is an int that holds the value 10
b: This is a String that holds the name of an eminent computer scientist
c: This is a boolean that holds the value true

So when we output the values in step 3, it should be no surprise that we get the following:

info? a = 10
info? b = Alan Turing
info? c = true

In step 4, all the fun stuff happens. We add 1 to the value of our int a using the increment operator like this: a++;. Remember that a++ is the same as a = a + 1. We then add 10 to a. Note that we are adding 10 to a after having already added 1. So we get this output for the 10 + 1 + 10 operation:

info? a = 21

Now let's examine our string, b. We appear to be using the addition operator on our eminent scientist. What is happening is what you could probably guess. We are adding together the two strings "Alan Turing" and " was smarter than the average bear Booboo". When you add two strings together, it is called concatenating, and the + symbol doubles as the concatenation operator. Finally, for our string, we appear to be adding the int a to it. This is allowed, and the value of a is concatenated to the end of b:

info? b = Alan Turing was smarter than the average bear Booboo21

This does not work the other way round; you cannot add a string to an int, so a line such as a = a + c; would not compile. This makes sense, as there is no logical answer.

Finally, let's look at the code that changes our boolean, c, from true to false: c = (1 + 1 == 3);. Here, we are assigning to c the value of the expression contained within the brackets. This would be straightforward, but why the double equals sign (==)? We have jumped ahead of ourselves a little. The double equals sign is another operator called the comparison operator. So we are really asking, does 1 + 1 equal 3? Clearly the answer is false. You might ask, "Why use == instead of =?" Simply to make it clear to the compiler when we mean to assign and when we mean to compare.
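To make the difference concrete, here is a tiny, hedged sketch that could sit alongside the exercise code above; the lives variable is invented purely for illustration, and the first, commented-out test would not even compile, because in Java an assignment is not a true-or-false question:

int lives = 3;

//if (lives = 0) { }   //compiler error: incompatible types:
                       //int cannot be converted to boolean

if (lives == 0) {      //comparison: "is lives equal to 0?" - false here
  Log.i("info", "Game over");
}

boolean outOfLives = (lives == 0); //we can also store the result: false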
Inadvertently using = instead of == is a very common error. The assignment operator (=) assigns the value on the right to the value on the left, while the comparison operator (==) compares the values on either side. The compiler will warn us with an error when we do this but at first glance you might swear the compiler is wrong. Now let's use everything we know and a bit more to make our Math game project. Math game – asking a question Now that we have all that knowledge under our belts, we can use it to improve our math game. First, we will create a new Android activity to be the actual game screen as opposed to the start menu screen. We will then use the UI designer to lay out a simple game screen so that we can use our Java skills with variables, types, declaration, initialization, operators, and expressions to make our math game generate a question for the player. We can then link the start menu and game screens together with a push button. If you want to save typing and just review the finished project, you can use the code downloaded from the Packt website. If you have any trouble getting any of the code to work, you can review, compare, or copy and paste the code from the already completed code provided in the download bundle. The completed code is in the following files that correspond to the filenames we will be using in this tutorial: Chapter 3/MathGameChapter3a/java/MainActivity.java Chapter 3/MathGameChapter3a/java/GameActivity.java Chapter 3/MathGameChapter3a/layout/activity_main.xml Chapter 3/MathGameChapter3a/layout/activity_game.xml As usual, I recommend following this tutorial to see how we can create all of the code for ourselves. Creating the new game activity We will first need to create a new Java file for the game activity code and a related layout file to hold the game Activity UI. Run Android Studio and select your Math Game Chapter 2 project. It might have been opened by default. Now we will create the new Android Activity that will contain the actual game screen, which will run when the player taps the Play button on our main menu screen. To create a new activity, we know need another layout file and another Java file. Fortunately Android Studio will help us do this. To get started with creating all the files we need for a new activity, right-click on the src folder in the Project Explorer and then go to New | Activity. Now click on Blank Activity and then on Next. We now need to tell Android Studio a little bit about our new activity by entering information in the above dialog box. Change the Activity Name field to GameActivity. Notice how the Layout Name field is automatically changed for us to activity_game and the Title field is automatically changed to GameActivity. Click on Finish. Android Studio has created two files for us and has also registered our new activity in a manifest file, so we don't need to concern ourselves with it. If you look at the tabs at the top of the editor window, you will see that GameActivity.java has been opened up ready for us to edit, as shown in the following screenshot: Ensure that GameActivity.java is active in the editor window by clicking on the GameActivity.java tab shown previously. Android overrides some methods for us by default, and that most of them were not necessary. Here again, we can see the code that is unnecessary. If we remove it, then it will make our working environment simpler and cleaner. To avoid this here, we will simply use the code from MainActivity.java as a template for GameActivity.java. 
We can then make some minor changes. Click on the MainActivity.java tab in the editor window. Highlight all of the code in the editor window using Ctrl + A on the keyboard. Now copy all of the code in the editor window using the Ctrl + C on the keyboard. Now click on the GameActivity.java tab. Highlight all of the code in the editor window using Ctrl + A on the keyboard. Now paste the copied code and overwrite the currently highlighted code using Ctrl + V on the keyboard. Notice that there is an error in our code denoted by the red underlining as shown in the following screenshot. This is because we pasted the code referring to MainActivity in our file which is called GameActivity. Simply change the text MainActivity to GameActivity and the error will disappear. Take a moment to see if you can work out what the other minor change is necessary, before I tell you. Remember that setContentView loads our UI design. Well what we need to do is change setContentView to load the new design (that we will build next) instead of the home screen design. Change setContentView(R.layout.activity_main); to setContentView(R.layout.activity_game);. Save your work and we are ready to move on. Note the Project Explorer where Android Studio puts the two new files it created for us. I have highlighted two folders in the next screenshot. In future, I will simply refer to them as our java code folder or layout files folder. You might wonder why we didn't simply copy and paste the MainActivity.java file to begin with and saved going through the process of creating a new activity? The reason is that Android Studio does things behind the scenes. Firstly, it makes the layout template for us. It also registers the new Activity for use through a file we will see later, called AndroidManifest.xml. This is necessary for the new activity to be able to work in the first place. All things considered, the way we did it is probably the quickest. The code at this stage is exactly the same as the code for the home menu screen. We state the package name and import some useful classes provided by Android: package com.packtpub.mathgamechapter3a.mathgamechapter3a; import android.app.Activity; import android.os.Bundle; We create a new activity, this time called GameActivity: public class GameActivity extends Activity { Then we override the onCreate method and use the setContentView method to set our UI design as the contents of the player's screen. Currently, however, this UI is empty: super.onCreate(savedInstanceState); setContentView(R.layout.activity_main); We can now think about the layout of our actual game screen. Laying out the game screen UI As we know, our math game will ask questions and offer the player some multiple choices to choose answers from. There are lots of extra features we could add, such as difficulty levels, high scores, and much more. But for now, let's just stick to asking a simple, predefined question and offering a choice of three predefined possible answers. Keeping the UI design to the bare minimum suggests a layout. Our target UI will look somewhat like this: The layout is hopefully self-explanatory, but let's ensure that we are really clear; when we come to building this layout in Android Studio, the section in the mock-up that displays 2 x 2 is the question and will be made up of three text views (both numbers, and the = sign is also a separate view). Finally, the three options for the answer are made up of Button layout elements. 
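Purely as an illustration (we will not build the screen this way; the next steps use the visual designer), the same structure can be pictured as a view tree in code. This is a hedged sketch that assumes it sits inside an Activity's onCreate method with the android.widget classes imported, and every variable name here is made up:

//A sketch only - the tutorial builds this screen in the visual designer.
//It just shows the same structure as a tree of views.
LinearLayout screen = new LinearLayout(this);
screen.setOrientation(LinearLayout.VERTICAL);

LinearLayout questionRow = new LinearLayout(this); //holds 2 x 2
LinearLayout answerRow = new LinearLayout(this);   //holds the three choices

TextView textPartA = new TextView(this);
TextView textOperator = new TextView(this);
TextView textPartB = new TextView(this);
TextView textEquals = new TextView(this);
Button choice1 = new Button(this);
Button choice2 = new Button(this);
Button choice3 = new Button(this);

questionRow.addView(textPartA);
questionRow.addView(textOperator);
questionRow.addView(textPartB);

answerRow.addView(choice1);
answerRow.addView(choice2);
answerRow.addView(choice3);

screen.addView(questionRow);
screen.addView(textEquals);
screen.addView(answerRow);

setContentView(screen);

Seeing the hierarchy written out like this can make it easier to picture what the designer produces for us behind the scenes.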
We used all of these UI elements, but this time, as we are going to be controlling them using our Java code, there are a few extra things we need to do to them. So let's go through it step by step: Open the file that will hold our game UI in the editor window. Do this by double-clicking on activity_game.xml. This is located in our UI layout folder which can be found in the project explorer. Delete the Hello World TextView, as it is not required. Find the Large Text element on the palette. It can be found under the Widgets section. Drag three elements onto the UI design area and arrange them near the top of the design as shown in the next screenshot. It does not have to be exact; just ensure that they are in a row and not overlapping, as shown in the following screenshot: Notice in the Component Tree window that each of the three TextViews has been assigned a name automatically by Android Studio. They are textView , textView2 and textView3: Android Studio refers to these element names as an id. This is an important concept that we will be making use of. So to confirm this, select any one of the textViews by clicking on its name (id), either in the component tree as shown in the preceding screenshot or directly on it in the UI designer shown previously. Now look at the Properties window and find the id property. You might need to scroll a little to do this: Notice that the value for the id property is textView. It is this id that we will use to interact with our UI from our Java code. So we want to change all the IDs of our TextViews to something useful and easy to remember. If you look back at our design, you will see that the UI element with the textView id is going to hold the number for the first part of our math question. So change the id to textPartA. Notice the lowercase t in text, the uppercase P in Part, and the uppercase A. You can use any combination of cases and you can actually name the IDs anything you like. But just as with naming conventions with Java variables, sticking to conventions here will make things less error-prone as our program gets more complicated. Now select textView2 and change id to textOperator. Select the element currently with id textView3 and change it to textPartB. This TextView will hold the later part of our question. Now add another Large Text from the palette. Place it after the row of the three TextViews that we have just been editing. This Large Text will simply hold our equals to sign and there is no plan to ever change it. So we don't need to interact with it in our Java code. We don't even need to concern ourselves with changing the ID or knowing what it is. If this situation changed, we could always come back at a later time and edit its ID. However, this new TextView currently displays Large Text and we want it to display an equals to sign. So in the Properties window, find the text property and enter the value =. We have changed the text property, and you might also like to change the text property for textPartA, textPartB, and textOperator. This is not absolutely essential because we will soon see how we can change it via our Java code; however, if we change the text property to something more appropriate, then our UI designer will look more like it will when the game runs on a real device. So change the text property of textPartA to 2, textPartB to 2, and textOperator to x. Your UI design and Component tree should now look like this: For the buttons to contain our multiple choice answers, drag three buttons in a row, below the = sign. 
Line them up neatly like our target design. Now, just as we did for the TextViews, find the id properties of each button, and from left to right, change the id properties to buttonChoice1, buttonChoice2, and buttonChoice3. Why not enter some arbitrary numbers for the text property of each button so that the designer more accurately reflects what our game will look like, just as we did for our other TextViews? Again, this is not absolutely essential as our Java code will control the button appearance. We are now actually ready to move on. But you probably agree that the UI elements look a little lost. It would look better if the buttons and text were bigger. All we need to do is adjust the textSize property for each TextView and for each Button. Then, we just need to find the textSize property for each element and enter a number with the sp syntax. If you want your design to look just like our target design from earlier, enter 70sp for each of the TextView textSize properties and 40sp for each of the Buttons textSize properties. When you run the game on your real device, you might want to come back and adjust the sizes up or down a bit. But we have a bit more to do before we can actually try out our game. Save the project and then we can move on. As before, we have built our UI. This time, however, we have given all the important parts of our UI a unique, useful, and easy to identify ID. As we will see we are now able to communicate with our UI through our Java code. Coding a question in Java With our current knowledge of Java, we are not yet able to complete our math game but we can make a significant start. We will look at how we can ask the player a question and offer them some multiple choice answers (one correct and two incorrect). At this stage, we know enough of Java to declare and initialize some variables that will hold the parts of our question. For example, if we want to ask the times tables question 2 x 2, we could have the following variable initializations to hold the values for each part of the question: int partA = 2; int partB = 2; The preceding code declares and initializes two variables of the int type, each to the value of 2. We use int because we will not be dealing with any decimal fractions. Remember that the variable names are arbitrary and were just chosen because they seemed appropriate. Clearly, any math game worth downloading is going to need to ask more varied and advanced questions than 2 x 2, but it is a start. Now we know that our math game will offer multiple choices as answers. So, we need a variable for the correct answer and two variables for two incorrect answers. Take a look at these combined declarations and initializations: int correctAnswer = partA * partB; int wrongAnswer1 = correctAnswer - 1; int wrongAnswer2 = correctAnswer + 1; Note that the initialization of the variables for the wrong answers depends on the value of the correct answer, and the variables for the wrong answers are initialized after initializing the correctAnswer variable. Now we need to put these values, held in our variables, into the appropriate elements on our UI. The question variables (partA and partB) need to be displayed in our UI elements, textPartA and textPartB, and the answer variables (correctAnswer, wrongAnswer1, and wrongAnswer2) need to be displayed in our UI elements with the following IDs: buttonChoice1, buttonChoice2, and buttonChoice3. We will see how we do this in the next step-by-step tutorial. 
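Before wiring these values into the UI, it can help to see the whole question-and-answers idea in isolation. The following hedged sketch simply uses the relationships described above and prints them with Log.i (assuming android.util.Log is imported); in the tutorial that follows, the values end up on the TextViews and Buttons instead:

int partA = 2;
int partB = 2;

int correctAnswer = partA * partB;    //2 x 2 = 4
int wrongAnswer1 = correctAnswer - 1; //3
int wrongAnswer2 = correctAnswer + 1; //5

Log.i("info", partA + " x " + partB + " = ?");
Log.i("info", "Choices: " + correctAnswer + ", "
  + wrongAnswer1 + ", " + wrongAnswer2);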
We will also implement the variable declaration and initialization code that we discussed a moment ago: First, open GameActivity.java in the editor window. Remember that you can do this by double-clicking on GameActivity in our java folder or clicking on its tab above the editor window if GameActivity.java is already open. All of our code will go into the onCreate method. It will go after the setContentView(R.layout.activity_game); line but before the closing curly brace } of the onCreate method. Perhaps, it's a good idea to leave a blank line for clarity and a nice explanatory comment as shown in the following code. We can see the entire onCreate method as it stands after the latest amendments. The parts in bold are what you need to add. Feel free to add helpful comments like mine if you wish: @Override     protected void onCreate(Bundle savedInstanceState) {         super.onCreate(savedInstanceState);         //The next line loads our UI design to the screen         setContentView(R.layout.activity_game);           //Here we initialize all our variables         int partA = 9;         int partB = 9;         int correctAnswer = partA * partB;         int wrongAnswer1 = correctAnswer - 1;         int wrongAnswer2 = correctAnswer + 1;       }//onCreate ends here Now we need to add the values contained within the variables to the TextView and Button of our UI. But first, we need to get access to the UI elements we created. We do that by creating a variable of the appropriate class and linking it via the ID property of the appropriate UI element. We already know the class of our UI elements: TextView and Button. Here is the code that creates our special class variables for each of the necessary UI elements. Take a close look at the code, but don't worry if you don't understand all of it now. We will dissect the code in detail once everything is working. Enter the code immediately after the code entered in the previous step. You can leave a blank line for clarity if you wish. Just before you proceed, note that at two points while typing in this code, you will be prompted to import another class. Go ahead and do so on both occasions: /*Here we get a working object based on either the button or TextView class and base as well as link our new   objects directly to the appropriate UI elements that we created previously*/   TextView textObjectPartA =   (TextView)findViewById(R.id.textPartA);   TextView textObjectPartB =   (TextView)findViewById(R.id.textPartB);   Button buttonObjectChoice1 =   (Button)findViewById(R.id.buttonChoice1);   Button buttonObjectChoice2 =   (Button)findViewById(R.id.buttonChoice2);   Button buttonObjectChoice3 =   (Button)findViewById(R.id.buttonChoice3); In the preceding code, if you read the multiline comment, you will see that I used the term object. When we create a variable type based on a class, we call it an object. Once we have an object of a class, we can do anything that that class was designed to do. Now we have five new objects linked to the elements of our UI that we need to manipulate. What precisely are we going to do with them? We need to display the values of our variables in the text of the UI elements. We can use the objects we just created combined with a method provided by the class, and use our variables as values for that text. As usual, we will dissect this code further at the end of this tutorial. Here is the code to enter directly after the code in the previous step. 
Try and work out what is going on before we look at it together:

//Now we use the setText method of the class on our objects
//to show our variable values on the UI elements.
//Just like when we output to the console in the exercise -
//Expressions in Java, only now we use the setText method
//to put the values in our variables onto the actual UI.
textObjectPartA.setText("" + partA);
textObjectPartB.setText("" + partB);

//which button receives which answer, at this stage, is arbitrary.
buttonObjectChoice1.setText("" + correctAnswer);
buttonObjectChoice2.setText("" + wrongAnswer1);
buttonObjectChoice3.setText("" + wrongAnswer2);

Save your work. If you play with the assignment values for partA and partB, you can make them whatever you like and the game adjusts the answers accordingly. Obviously, we shouldn't need to reprogram our game each time we want a new question, and we will solve that problem soon. All we need to do now is link the game section we have just made to the start screen menu. We will do that in the next tutorial. Now let's explore the trickier and newer parts of our code in more detail.

In step 2, we declared and initialized the variables required so far:

//Here we initialize all our variables
int partA = 9;
int partB = 9;
int correctAnswer = partA * partB;
int wrongAnswer1 = correctAnswer - 1;
int wrongAnswer2 = correctAnswer + 1;

Then in step 3, we got a reference to our UI design through our Java code. For the TextViews, it was done like this:

TextView textObjectPartA = (TextView)findViewById(R.id.textPartA);

For each of the buttons, a reference to our UI design was obtained like this:

Button buttonObjectChoice1 = (Button)findViewById(R.id.buttonChoice1);

In step 4, we did something new. We used the setText method to show the values of our variables on our UI elements (TextView and Button) to the player. Let's break down one line completely to see how it works. Here is the code that shows the correctAnswer variable being displayed on buttonObjectChoice1:

buttonObjectChoice1.setText("" + correctAnswer);

By typing buttonObjectChoice1 and adding a period, we get access to all the preprogrammed methods of that object's class type that are provided by Android.

The power of Button and the Android API

There are actually lots of methods that we can call on an object of the Button type. If you are feeling brave, try this to get a feeling of just how much functionality there is in Android. Type the following code, and be sure to type the period on the end:

buttonObjectChoice1.

Android Studio will pop up a list of possible methods to use on this object. Scroll through the list and get a feel for the number and variety of options. If a mere button can do all of this, think of the possibilities for our games once we have mastered all the classes contained in Android. A collection of classes designed to be used by others is collectively known as an Application Programming Interface (API). Welcome to the Android API!

In this case, we just want to set the button's text. So, we use setText and concatenate the value stored in our correctAnswer variable to the end of an empty string, like this: setText("" + correctAnswer);. We do this for each of the UI elements we require to show our variables.

Playing with autocomplete

If you tried the previous tip, The power of Button and the Android API, and explored the methods available for objects of the Button type, you will already have some insight into autocomplete.
Note that as you type, Android Studio is constantly making suggestions for what you might like to type next. If you pay attention to this, you can save a lot of time. Simply select the correct code completion statement that is suggested and press Enter. You can even see how much time you saved by selecting Help | Productivity Guide from the menu bar. Here you will see statistics for every aspect of code completion and more. Here are a few entries from mine: As you can see, if you get used to using shortcuts early on, you can save a lot of time in the long run. Linking our game from the main menu At the moment, if we run the app, we have no way for the player to actually arrive at our new game activity. We want the game activity to run when the player clicks on the Play button on the main MainActivity UI. Here is what we need to do to make that happen: Open the file activity_main.xml, either by double-clicking on it in the project explorer or by clicking its tab in the editor window. Now, just like we did when building the game UI, assign an ID to the Play button. As a reminder, click on the Play button either on the UI design or in the component tree. Find the id property in the Properties window. Assign the buttonPlay value to it. We can now make this button do stuff by referring to it in our Java code. Open the file MainActivity.java, either by double-clicking on it in the project explorer or clicking on its tab in the editor window. In our onCreate method, just after the line where we setContentView, add the following highlighted line of code: setContentView(R.layout.activity_main); Button buttonPlay = (Button)findViewById(R.id.buttonPlay); We will dissect this code in detail once we have got this working. Basically we are making a connection to the Play button by creating a reference variable to a Button object. Notice that both words are highlighted in red indicating an error. Just as before, we need to import the Button class to make this code work. Use the Alt + Enter keyboard combination. Now click on Import class from the popped-up list of options. This will automatically add the required import directive at the top of our MainActivity.java file. Now for something new. We will give the button the ability to listen to the user clicking on it. Type this immediately after the last line of code we entered: buttonPlay.setOnClickListener(this); Notice how the this keyword is highlighted in red indicating an error. Setting that aside, we need to make a modification to our code now in order to allow the use of an interface that is a special code element that allows us to add a functionality, such as listening for button clicks. Edit the line as follows. When prompted to import another class, click on OK: public class MainActivity extends Activity { to public class MainActivity extends Activity implements View.OnClickListener{ Now we have the entire line underlined in red. This indicates an error but it's where we should be at this point. We mentioned that by adding implements View.OnClickListener, we have implemented an interface. We can think of this like a class that we can use but with extra rules. The rules of the OnClickListener interface state that we must implement/use one of its methods. Notice that until now, we have optionally overridden/used methods as and when they have suited us. If we wish to use the functionality this interface provides, namely listening for button presses, then we have to add/implement the onClick method. This is how we do it. 
Notice the opening curly brace, {, and the closing curly brace, }. These denote the start and end of the method. Notice that the method is empty and it doesn't do anything, but an empty method is enough to comply with the rules of the OnClickListener interface, and the red line indicating an that our code has an error has gone. Make sure that you type the following code, outside the closing curly brace (}) of the onCreate method but inside the closing curly brace of our MainActivity class: @Override     public void onClick(View view) {               } Notice that we have an empty line between { and } of the onClick method. We can now add code in here to make the button actually do something. Type the following highlighted code between { and } of onClick: @Override     public void onClick(View view) {         Intent i;         i = new Intent(this, GameActivity.class);         startActivity(i);     } OK, so that code is a bit of a mouthful to comprehend all at once. See if you can guess what is happening. The clue is in the method named startActivity and the hopefully familiar term, GameActivity. Notice that we are assigning something to i. We will quickly get our app working and then diagnose the code in full. Notice that we have an error: all instances of the word Intent are red. We can solve this by importing the classes required to make Intent work. As before press Alt + Enter. Run the game in the emulator or on your device. Our app will now work. This is what the new game screen looks like after pressing Play on the menu screen: Almost every part of our code has changed a little and we have added a lot to it as well. Let's go over the contents of MainActivity.java and look at it line by line. For context, here it is in full: package com.packtpub.mathgamechapter3a.mathgamechapter3a;   import android.app.Activity; import android.content.Intent; import android.os.Bundle; import android.view.View; import android.widget.Button;     public class MainActivity extends Activity implements View.OnClickListener{       @Override     protected void onCreate(Bundle savedInstanceState) {         super.onCreate(savedInstanceState);         setContentView(R.layout.activity_main);         final Button buttonPlay =           (Button)findViewById(R.id.buttonPlay);         buttonPlay.setOnClickListener(this);     }       @Override     public void onClick(View view) {         Intent i;         i = new Intent(this, GameActivity.class);         startActivity(i);     }   } We have seen much of this code before, but let's just go over it a chunk at a time before moving on so that it is absolutely clear. The code works like this: package com.packtpub.mathgamechapter3a.mathgamechapter3a; import android.app.Activity; import android.content.Intent; import android.os.Bundle; import android.view.View; import android.widget.Button; You would probably remember that this first block of code defines what our package is called and makes available all the Android API stuff we need for Button, TextView, and Activity. From our MainActivity.java file, we have this: public class MainActivity extends Activity implements View.OnClickListener{ Our MainActivity declaration with our new bit of code implements View.OnClickListener that gives us the ability to detect button clicks. 
Next in our code is this: @Override     protected void onCreate(Bundle savedInstanceState) {         super.onCreate(savedInstanceState);         setContentView(R.layout.activity_main); It is at the start of our onCreate method where we first ask the hidden code of onCreate to do its stuff using super.onCreate(savedInstanceState);. Then we set our UI to the screen with setContentView(R.layout.activity_main);. Next, we get a reference to our button with an ID of buttonPlay: Button buttonPlay = (Button)findViewById(R.id.buttonPlay); buttonPlay.setOnClickListener(this); Finally, our onClick method uses the Intent class to send the player to our GameActivity class and the related UI when the user clicks on the Play button: @Override     public void onClick(View view) {         Intent i;         i = new Intent(this, GameActivity.class);         startActivity(i);     } If you run the app, you will notice that we can now click on the Play button and our math game will ask us a question. Of course, we can't answer it yet. Although we have very briefly looked at how to deal with button presses, we need to learn more of Java in order to intelligently react to them. We will also reveal how to write code to handle presses from several buttons. This will be necessary to receive input from our multiple-choice-centric game_activity UI. Decisions in Java We can now summon enough of Java prowess to ask a question but a real math game must obviously do much more than this. We need to capture the player's answer, and we are nearly there with that—we can detect button presses. From there, we need to be able to decide whether their answer is right or wrong. Then, based on this decision, we have to choose an appropriate course of action. Let's leave the math game aside for now and look at how Java might help us by learning some more fundamentals and syntax of the Java language. More operators Let's look at some more operators: we can already add (+), take away (-), multiply (*), divide (/), assign (=), increment (++), compare (==), and decrement (--) with operators. Let's introduce some more super-useful operators, and then we will go straight to actually understanding how to use them in Java. Don't worry about memorizing every operator given here. Glance at them and their explanations. There, we will put some operators to use and they will become much clearer as we see a few examples of what they allow us to do. They are presented here in a list just to make the variety and scope of operators plain from the start. The list will also be more convenient to refer back to when not intermingled with the discussion about implementation that follows it. ==: This is a comparison operator we saw this very briefly before. It tests for equality and is either true or false. An expression like (10 == 9);, for example, is false. !: The logical NOT operator. The expression, ! (2+2==5).), is true because 2+2 is NOT 5. !=: This is another comparison operator which tests if something is NOT equal. For example, the expression, (10 != 9);), is true, that is, 10 is not equal to 9. >: This is another comparison operator which tests if something is greater than something else. The expression, (10 > 9);), is true. There are a few more comparison operators as well. <: You guessed it. This tests whether the value to the left is less than the value to the right or not. The expression, (10 < 9);, is false. >=: This operator tests whether one value is greater than or equal to the other, and if either is true, the result is true. 
For example, the expression, (10 >= 9);, is true. The expression, (10 >= 10);, is also true. <=: Like the preceding operator, this operator tests for two conditions but this time, less than and equal to. The expression, (10 <= 9);, is false. The expression, (10 <= 10);, is true. &&: This operator is known as logical AND. It tests two or more separate parts of an expression and all parts must be true in order for the result to be true. Logical AND is usually used in conjunction with the other operators to build more complex tests. The expression, ((10 > 9) && (10 < 11));, is true because both parts are true. The expression, ((10 > 9) && (10 < 9));, is false because only one part of the expression is true and the other is false. ||: This operator is called logical OR. It is just like logical AND except that only one of two or more parts of an expression need to be true for the expression to be true. Let's look at the last example we used but replace the && sign with ||. The expression, ((10 > 9) || (10 < 9));, is now true because one part of the expression is true. All of these operators are virtually useless without a way of properly using them to make real decisions that affect real variables and code. Let's look at how to make decisions in Java. Decision 1 – If they come over the bridge, shoot them As we saw, operators serve hardly any purpose on their own but it was probably useful to see just a part of the wide and varied range available to us. Now, when we look at putting the most common operator, ==, to use, we can start to see the powerful yet fine control that operators offer us. Let's make the previous examples less abstract using the Java if keyword, a few conditional operators, and a small story: in use with some code and a made up military situation that will hopefully make the following examples less abstract. The captain is dying and, knowing that his remaining subordinates are not very experienced, he decides to write a Java program to convey his last orders after he has died. The troops must hold one side of a bridge while awaiting reinforcements. The first command the captain wants to make sure his troops understand is this: If they come over the bridge, shoot them. So how do we simulate this situation in Java? We need a Boolean variable isComingOverBridge. The next bit of code assumes that the isComingOverBridge variable has been declared and initialized. We can then use it like this: if(isComingOverBridge){   //Shoot them } If the isComingOverBridge Boolean is true, the code inside the opening and closing curly braces will run. If not, the program continues after the if block without running it. Decision 2 – Else, do this The captain also wants to tell his troops what to do (stay put) if the enemy is not coming over the bridge. Now we introduce another Java keyword, else. When we want to explicitly do something and the if block does not evaluate to true, we can use else. For example, to tell the troops to stay put if the enemy is not coming over the bridge, we use else: if(isComingOverBridge){   //Shoot them }else{   //Hold position } The captain then realized that the problem wasn't as simple as he first thought. What if the enemy comes over the bridge and has more troops? His squad will be overrun. 
So, he came up with this code (we'll use some variables as well this time):

boolean isComingOverTheBridge;
int enemyTroops;
int friendlyTroops;
//Code that initializes the above variables one way or another

//Now the if
if(isComingOverTheBridge && friendlyTroops > enemyTroops){
  //shoot them
}else if(isComingOverTheBridge && friendlyTroops < enemyTroops) {
  //blow the bridge
}else{
  //Hold position
}

Finally, the captain's last concern was that if the enemy came over the bridge waving the white flag of surrender and were promptly slaughtered, then his men would end up as war criminals. The Java code needed was obvious. Using the wavingWhiteFlag Boolean variable, he wrote this test:

if (wavingWhiteFlag){
  //Take prisoners
}

But where to put this code was less clear. In the end, the captain opted for the following nested solution, changing the test for wavingWhiteFlag to logical NOT, like this:

if (!wavingWhiteFlag){//not surrendering so check everything else

  if(isComingOverTheBridge && friendlyTroops > enemyTroops){
    //shoot them
  }else if(isComingOverTheBridge && friendlyTroops < enemyTroops) {
    //blow the bridge
  }

}else{//this is the else for our first if
  //Take prisoners
}

This demonstrates that we can nest if and else statements inside one another to create even deeper decisions. We could go on making more and more complicated decisions, but what we have seen is more than sufficient as an introduction. Take the time to reread this if anything is unclear. It is also important to point out that very often, there are two or more ways to arrive at a solution. The right way will usually be the one that solves the problem in the clearest and simplest manner.

Switching to make decisions

We have seen the vast and virtually limitless possibilities of combining the Java operators with if and else statements. But sometimes a decision in Java can be better made in other ways. When we have to make a decision based on a clear list of possibilities that doesn't involve complex combinations, then switch is usually the way to go. We start a switch decision like this:

switch(argument){

}

In the previous example, an argument could be an expression or a variable. Then, within the curly braces, we can make decisions based on the argument with case and break elements:

case x:
  //code for x
  break;

case y:
  //code for y
  break;

You can see that in the previous example, each case states a possible result and each break denotes the end of that case, as well as the point at which no further case statements should be evaluated. The first break encountered takes us out of the switch block to proceed with the next line of code. We can also use default without a value to run some code if none of the case statements evaluate to true, like this:

default://Look no value
  //Do something here if no other case statements are true
  break;

Supposing we are writing an old-fashioned text adventure game, the kind of game where the player types commands such as "Go East", "Go West", "Take Sword", and so on.
In this case, switch could handle that situation like the following example code, and we could use default to handle the case of the player typing a command that is not specifically handled:

//get input from user in a String variable called command
switch(command){

  case "Go East":
    //code to go east
    break;

  case "Go West":
    //code to go west
    break;

  case "Take sword":
    //code to take the sword
    break;

  //more possible cases

  default:
    //Sorry I don't understand your command
    break;

}

We will use switch so that our onClick method can handle the different multiple-choice buttons of our math game. Java has even more operators than we have covered here. We have looked at all the operators we are going to need in this book, and probably the most used in general. If you want the complete lowdown on operators, take a look at the official Java documentation at http://docs.oracle.com/javase/tutorial/java/nutsandbolts/operators.html.

Math game – getting and checking the answer

Here we will detect the right or wrong answer and provide a pop-up message to the player. Our Java is getting quite good now, so let's dive in and add these features. I will explain things as we go and then, as usual, dissect the code thoroughly at the end. The already completed code is in the download bundle, in the following files that correspond to the filenames we will create/autogenerate in Android Studio in a moment:

Chapter 3/MathGameChapter3b/java/MainActivity.java
Chapter 3/MathGameChapter3b/java/GameActivity.java
Chapter 3/MathGameChapter3b/layout/activity_main.xml
Chapter 3/MathGameChapter3b/layout/activity_game.xml

As usual, I recommend following this tutorial step by step to see how we can create all of the code for ourselves.

Open the GameActivity.java file visible in the editor window. Now we need to add the click detection functionality to our GameActivity, just as we did for our MainActivity. However, we will go a little further than last time. So let's do it step by step as if it is totally new. Once again, we will give the buttons the ability to listen to the user clicking on them. Type this immediately after the last line of code we entered in the onCreate method but before the closing }. This time, of course, we need to add some code to listen to three buttons:

buttonObjectChoice1.setOnClickListener(this);
buttonObjectChoice2.setOnClickListener(this);
buttonObjectChoice3.setOnClickListener(this);

Notice how the this keyword is highlighted in red, indicating an error. Again, we need to make a modification to our code in order to allow the use of an interface, the special code element that allows us to add functionalities such as listening to button clicks. Edit the line as follows. When prompted to import another class, click on OK. Consider this line of code:

public class GameActivity extends Activity {

Change it to the following line:

public class GameActivity extends Activity implements View.OnClickListener{

Now we have the entire preceding line underlined in red. This indicates an error, but it is where we should be at this point. We mentioned that by adding implements View.OnClickListener, we have implemented an interface. We can think of this like a class that we can use, but with extra rules. One of the rules of the OnClickListener interface, as you might remember, is that we must implement one of its methods if we wish to use the useful functionality this interface provides (listening to button presses). Now we will add the onClick method. Type the following code.
Notice the opening curly brace, {, and the closing curly brace, }. These denote the start and end of the method. Notice that the method is empty; it doesn't do anything, but an empty method is enough to comply with the rules of the OnClickListener interface, and the red line that indicated an error has gone. Make sure that you type the following code outside the closing curly brace (}) of the onCreate method but inside the closing curly brace of our GameActivity class:

@Override
public void onClick(View view) {

}

Notice that we have an empty line between the { and } braces of our onClick method. We can now put some code in here to make the buttons actually do something. Type the following in between the { and } of onClick. This is where things get different from our code in MainActivity. We need to differentiate between the three possible buttons that could be pressed. We will do this with the switch statement that we discussed earlier. Look at the case criteria; they should look familiar. Here is the code that uses the switch statement:

switch (view.getId()) {

  case R.id.buttonChoice1:
    //button 1 stuff goes here
    break;

  case R.id.buttonChoice2:
    //button 2 stuff goes here
    break;

  case R.id.buttonChoice3:
    //button 3 stuff goes here
    break;

}

Each case element handles a different button. For each button case, we need to get the value stored in the button that was just pressed and see if it matches our correctAnswer variable. If it does, we must tell the player they got it right, and if not, we must tell them they got it wrong. However, there is still one problem we have to solve. The onClick method is separate from the onCreate method and the Button objects. In fact, all the variables are declared in the onCreate method. If you try typing the code from step 9 now, you will get lots of errors. We need to make all the variables that we use in onClick available to it. To do this, we will move their declarations from inside the onCreate method to just below the opening { of GameActivity. This means that these variables become variables of the GameActivity class and can be seen anywhere within GameActivity. Declare the following variables like this:

int correctAnswer;
Button buttonObjectChoice1;
Button buttonObjectChoice2;
Button buttonObjectChoice3;

Now change the initialization of these variables within onCreate as follows. The actual parts of code that need to be changed are highlighted.
The rest is shown for context: //Here we initialize all our variables int partA = 9; int partB = 9; correctAnswer = partA * partB; int wrongAnswer1 = correctAnswer - 1; int wrongAnswer2 = correctAnswer + 1; and TextView textObjectPartA =   (TextView)findViewById(R.id.textPartA);   TextView textObjectPartB =   (TextView)findViewById(R.id.textPartB);   buttonObjectChoice1 = (Button)findViewById(R.id.buttonChoice1);         buttonObjectChoice2 = (Button)findViewById(R.id.buttonChoice2);         buttonObjectChoice3 = (Button)findViewById(R.id.buttonChoice3);   Here is the top of our onClick method as well as the first case statement for our onClick method: @Override     public void onClick(View view) {         //declare a new int to be used in all the cases         int answerGiven=0;         switch (view.getId()) {               case R.id.buttonChoice1:             //initialize a new int with the value contained in buttonObjectChoice1             //Remember we put it there ourselves previously                 answerGiven = Integer.parseInt("" +                     buttonObjectChoice1.getText());                   //is it the right answer?                 if(answerGiven==correctAnswer) {//yay it's the right answer                     Toast.makeText(getApplicationContext(),                       "Well done!",                       Toast.LENGTH_LONG).show();                 }else{//uh oh!                     Toast.makeText(getApplicationContext(),"Sorry that's     wrong", Toast.LENGTH_LONG).show();                 }                 break;   Here are the rest of the case statements that do the same steps as the code in the previous step except handling the last two buttons. Enter the following code after the code entered in the previous step:   case R.id.buttonChoice2:                 //same as previous case but using the next button                 answerGiven = Integer.parseInt("" +                   buttonObjectChoice2.getText());                 if(answerGiven==correctAnswer) {                     Toast.makeText(getApplicationContext(), "Well done!", Toast.LENGTH_LONG).show();                 }else{                     Toast.makeText(getApplicationContext(),"Sorry that's wrong", Toast.LENGTH_LONG).show();                 }                 break;               case R.id.buttonChoice3:                 //same as previous case but using the next button                 answerGiven = Integer.parseInt("" +                     buttonObjectChoice3.getText());                 if(answerGiven==correctAnswer) {                     Toast.makeText(getApplicationContext(), "Well done!", Toast.LENGTH_LONG).show();                 }else{                     Toast.makeText(getApplicationContext(),"Sorry that's wrong", Toast.LENGTH_LONG).show();                 }                 break;           } Run the program, and then we will look at the code carefully, especially that odd-looking Toast thing. Here is what happens when we click on the leftmost button: This is how we did it: In steps 1 through 6, we set up handling for our multi-choice buttons, including adding the ability to listen to clicks using the onClick method and a switch block to handle decisions depending on the button pressed. In steps 7 and 8, we had to alter our code to make our variables available in the onClick method. We did this by making them member variables of our GameActivity class. When we make a variable a member of a class, we call it a field. 
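As a quick aside on that last point, here is a small hedged sketch (the class and variable names are invented) of why promoting a variable to a field matters: a local variable exists only inside the method that declares it, while a field can be seen by every method of the class:

public class ScoreKeeper {

  int hiScore; //a field: visible to every method of ScoreKeeper

  void startGame() {
    int lives = 3; //a local variable: visible only inside startGame
    hiScore = 0;   //fine - fields can be used from any method
  }

  void endGame() {
    hiScore = hiScore + 100; //fine
    //lives = lives - 1;     //compiler error: cannot find symbol
  }
}

This is exactly why correctAnswer and the three Button objects were moved out of onCreate and up to the top of GameActivity.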
In steps 9 and 10, we implemented the code that actually does the work in our switch statement in onClick. Let's take a line-by-line look at the code that runs when button1 is pressed.

case R.id.buttonChoice1:

First, the case statement is true when the button with an id of buttonChoice1 is pressed. Then the next line of code to execute is this:

answerGiven = Integer.parseInt("" + buttonObjectChoice1.getText());

The preceding line gets the value on the button using two methods. First, getText gets the number as a string, and then Integer.parseInt converts it to an int. The value is stored in our answerGiven variable. The following code executes next:

if(answerGiven==correctAnswer) {//yay it's the right answer
  Toast.makeText(getApplicationContext(), "Well done!",
    Toast.LENGTH_LONG).show();
}else{//uh oh!
  Toast.makeText(getApplicationContext(), "Sorry that's wrong",
    Toast.LENGTH_LONG).show();
}

The if statement tests to see if the answerGiven variable is the same as correctAnswer using the == operator. If so, the makeText method of the Toast class is used to display a congratulatory message. If the values of the two variables are not the same, the message displayed is a little more negative. The Toast line of code is possibly the most evil thing we have seen thus far. It looks exceptionally complicated, and it does need a greater knowledge of Java than we have at the moment to understand. All we need to know for now is that we can use the code as it is and just change the message, and that it is a great tool to announce something to the player. If you really want an explanation now, you can think of it like this: when we made button objects, we got to use all the button methods. But with Toast, we used the class directly to access its makeText method without creating an object first. We can do this when the class and its methods are designed to allow it. Finally, we break out of the whole switch statement as follows:

break;

Self-test questions

Q1) What does this code do?

// setContentView(R.layout.activity_main);

Q2) Which of these lines causes an error?

String a = "Hello";
String b = " Vinton Cerf";
int c = 55;
a = a + b;
c = c + c + 10;
a = a + c;
c = c + a;

Q3) We talked a lot about operators and how different operators can be used together to build complicated expressions. Expressions, at a glance, can sometimes make the code look complicated. However, when looked at closely, they are not as tough as they seem. Usually, it is just a case of splitting the expressions into smaller pieces to work out what is going on. Here is an expression that is more convoluted than anything else you will ever see in this book. As a challenge, can you work out what isTrueOrFalse will be at the end?

int x = 10;
int y = 9;
boolean isTrueOrFalse = false;
isTrueOrFalse = (((x <= y) || (x == 10)) && ((!isTrueOrFalse) || (isTrueOrFalse)));

Summary

We went from knowing nothing about Java syntax to learning about comments, variables, operators, and decision making. As with any language, mastery of Java can be achieved by simply practicing, learning, and increasing our vocabulary. At this point, the temptation might be to hold back until mastery of the current Java syntax has been achieved, but the best way is to move on to new syntax at the same time as revisiting what we have already begun to learn. We will finally finish our math game by adding random questions of multiple difficulties, as well as using more appropriate and random wrong answers for the multiple choice buttons.
To enable us to do this, we will first learn some more new Java syntax.

Resources for Article:

Further resources on this subject:

Asking Permission: Getting your head around Marshmallow's Runtime Permissions [article]
Android and iOS Apps Testing at a Glance [article]
Practical How-To Recipes for Android [article]

Your First Swift 2 Project

Packt
16 Feb 2016
29 min read
After the release of Xcode 6 in 2014, it has been possible to build Swift applications for iOS and OS X and submit them to the App Store for publication. This article will present both a single view application and a master-detail application, and use these to explain the concepts behind iOS applications, as well as introduce classes in Swift. In this article, we will present the following topics: How iOS applications are structured Single-view iOS applications Creating classes in Swift Protocols and enums in Swift Using XCTest to test Swift code Master-detail iOS applications The AppDelegate and ViewController classes (For more resources related to this topic, see here.) Understanding iOS applications An iOS application is a compiled executable along with a set of supporting files in a bundle. The application bundle is packaged into an archive file to be installed onto a device or upload to the App Store. Xcode can be used to run iOS applications in a simulator, as well as testing them on a local device. Submitting an application to the App Store requires a developer signing key, which is included as part of the Apple Developer Program at https://developer.apple.com. Most iOS applications to date have been written in Objective-C, a crossover between C and Smalltalk. With the advent of Swift, it is likely that many developers will move at least parts of their applications to Swift for performance and maintenance reasons. Although Objective-C is likely to be around for a while, it is clear that Swift is the future of iOS development and probably OS X as well. Applications contain a number of different types of files, which are used both at compile time and also at runtime. These files include the following: The Info.plist file, which contains information about which languages the application is localized for, what the identity of the application is, and the configuration requirements, such as the supported interface types (iPad, iPhone, and Universal), and orientations (Portrait, Upside Down, Landscape Left, and Landscape Right) Zero or more interface builder files with a .xib extension, which contain user interface screens (which supersedes the previous .nib files) Zero or more image asset files with a .xcassets extension, which store groups of related icons at different sizes, such as the application icon or graphics for display on screen (which supersedes the previous .icns files) Zero or more storyboard files with a .storyboard extension, which are used to coordinate between different screens in an application One or more .swift files that contain application code Creating a single-view iOS application A single-view iOS application is one where the application is presented in a single screen, without any transitions or other views. This section will show how to create an application that uses a single view without storyboards. When Xcode starts, it displays a welcome message that includes the ability to create a new project. This welcome message can be redisplayed at any time by navigating to Window | Welcome to Xcode or by pressing Command + Shift + 1. Using the welcome dialog's Create a new Xcode project option, or navigating to File | New | Project..., or by pressing Command + Shift + N, create a new iOS project with Single View Application as the template, as shown in the following screenshot: When the Next button is pressed, the new project dialog will ask for more details. The product name here is SingleView with appropriate values for Organization Name and Identifier. 
Ensure that the language selected is Swift and the device type is Universal:

The Organization Identifier is a reverse domain name representation of the organization, and the Bundle Identifier is the concatenation of the Organization Identifier with the Product Name. Publishing to the App Store requires that the Organization Identifier be owned by the publisher and is managed in the online developer center at https://developer.apple.com/membercenter/.

When Next is pressed, Xcode will ask where to save the project and whether a repository should be created. The selected location will be used to create the product directory, and an option to create a Git repository will be offered.

In 2014, Git became the most widely used version control system, surpassing all other distributed and centralized version-control systems. It would be foolish not to create a Git repository when creating a new Xcode project.

When Create is pressed, Xcode will create the project, set up template files, and then initialize the Git repository locally or on a shared server.

Press the triangular play button at the top-left of Xcode to launch the simulator:

If everything has been set up correctly, the simulator will start with a white screen and the time and battery shown at the top of the screen:

Removing the storyboard

The default template for a single-view application includes a storyboard. This creates the view for the first (only) screen and performs some additional setup behind the scenes. To understand what happens, the storyboard will be removed and replaced with code instead. Most applications are built with one or more storyboards.

The storyboard can be deleted by going to the project navigator, finding the Main.storyboard file, and pressing the Delete key or selecting Delete from the context-sensitive menu. When the confirmation dialog is shown, select the Move to Trash option to ensure that the file is deleted rather than just being removed from the list of files that Xcode knows about.

To see the project navigator, press Command + 1 or navigate to View | Navigators | Show Project Navigator.

Once the Main.storyboard file has been deleted, it needs to be removed from Info.plist to prevent iOS from trying to open it at startup. Open the Info.plist file under the Supporting Files folder of SingleView. A set of key-value pairs will be displayed; clicking on the Main storyboard file base name row will present the (+) and (-) options. Clicking on the delete icon (-) will remove the line:

Now, when the application is started, a black screen will be displayed.

There are multiple Info.plist files that are created by Xcode's template; one file is used for the real application, while the other files are used for the test applications that get built when running tests.

Setting up the view controller

The view controller is responsible for setting up the view when it is activated. Typically, this is done through either the storyboard or the interface file. As these have been removed, the window and the view controller need to be instantiated manually.

When iOS applications start, application:didFinishLaunchingWithOptions: is called on the corresponding UIApplicationDelegate. The optional window variable is initialized automatically when it is loaded from an interface file or a storyboard, but it needs to be explicitly initialized if the user interface is being implemented in code.
Implement the application:didFinishLaunchingWithOptions: method in the AppDelegate class as follows:

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate {
  var window: UIWindow?
  func application(application: UIApplication,
   didFinishLaunchingWithOptions launchOptions:
   [NSObject:AnyObject]?) -> Bool {
    window = UIWindow()
    window?.rootViewController = ViewController()
    window?.makeKeyAndVisible()
    return true
  }
}

To open a class by name, press Command + Shift + O and type in the class name. Alternatively, navigate to File | Open Quickly...

The final step is to create the view's content, which is typically done in the viewDidLoad method of the ViewController class. As an example user interface, a UILabel will be created and added to the view. Each view controller has an associated view property, and child views can be added with the addSubview method. To make the view stand out, the background of the view will be changed to black and the text color will be changed to white:

class ViewController: UIViewController {
  override func viewDidLoad() {
    super.viewDidLoad()
    view.backgroundColor = UIColor.blackColor()
    let label = UILabel(frame:view.bounds)
    label.textColor = UIColor.whiteColor()
    label.textAlignment = .Center
    label.text = "Welcome to Swift"
    view.addSubview(label)
  }
}

This creates a label, which is sized to the full size of the screen, with a white text color and a centered text alignment. When run, this displays Welcome to Swift on the screen.

Typically, views will be implemented in their own class rather than being in-lined into the view controller. This allows the views to be reused in other controllers.

When the screen is rotated, the label will be rotated off screen. Logic would need to be added in a real application to handle rotation changes in the view controller, such as willRotateToInterfaceOrientation, and to appropriately add rotations to the views using the transform property of the view. Usually, an interface builder file or storyboard would be used so that this is handled automatically.
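One simple way to handle this without a storyboard is to keep the label in a property and reset its frame whenever the view lays out its subviews, which happens after a rotation. The following is a minimal sketch rather than the template's code; the stored label property and the use of viewWillLayoutSubviews are choices made here purely for illustration:

class ViewController: UIViewController {
  // Keeping the label as a property allows it to be resized later
  let label = UILabel()

  override func viewDidLoad() {
    super.viewDidLoad()
    view.backgroundColor = UIColor.blackColor()
    label.textColor = UIColor.whiteColor()
    label.textAlignment = .Center
    label.text = "Welcome to Swift"
    view.addSubview(label)
  }

  // Called whenever the view's bounds change, including after a rotation
  override func viewWillLayoutSubviews() {
    super.viewWillLayoutSubviews()
    label.frame = view.bounds
  }
}

Re-applying the frame in viewWillLayoutSubviews keeps the label filling the screen in both orientations without manipulating the transform property directly.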
Swift classes, protocols, and enums

Almost all Swift applications will be object oriented. Classes, such as Process from the CoreFoundation framework, and UIColor and UIImage from the UIKit framework, were used to demonstrate how classes can be used in applications. This section describes how to create classes, protocols, and enums in Swift.

Classes in Swift

A class is created in Swift using the class keyword, and braces are used to enclose the class body. The body can contain variables called properties, as well as functions called methods, which are collectively referred to as members. Instance members are unique to each instance, while static members are shared between all instances of that class.

Classes are typically defined in a file named for the class; so a GitHubRepository class would typically be defined in a GitHubRepository.swift file. A new Swift file can be created by navigating to File | New | File… and selecting the Swift File option under iOS. Ensure that it is added to the Tests and UITests targets as well. Once created, implement the class as follows:

class GitHubRepository {
  var id:UInt64 = 0
  var name:String = ""
  func detailsURL() -> String {
    return "https://api.github.com/repositories/\(id)"
  }
}

This class can be instantiated and used as follows:

let repo = GitHubRepository()
repo.id = 1
repo.name = "Grit"
repo.detailsURL() // returns https://api.github.com/repositories/1

It is possible to create static members, which are the same for all instances of a class. In the GitHubRepository class, the api URL is likely to remain the same for all invocations, so it can be refactored into a static property:

class GitHubRepository {
  // does not work in Swift 1.0 or 1.1
  static let api = "https://api.github.com"
  …
  class func detailsURL(id:String) -> String {
    return "\(api)/repositories/\(id)"
  }
}

Now, if the api URL needs to be changed (for example, to support mock testing or to support an in-house GitHub Enterprise server), there is a single place to change it.

Before Swift 2, an error message saying class variables are not yet supported may be displayed. To use static variables in Swift prior to version 2, a different approach must be used.

It is possible to define computed properties, which are not stored but are calculated on demand. These have a getter (also known as an accessor) and optionally a setter (also known as a mutator). The previous example can be rewritten as follows:

class GitHubRepository {
  class var api:String {
    get {
      return "https://api.github.com"
    }
  }
  func detailsURL() -> String {
    return "\(GitHubRepository.api)/repositories/\(id)"
  }
}

Although this is logically a read-only constant (there is no associated set block), it is not possible to define let constants with accessors. To refer to a class variable, use the type name, which in this case is GitHubRepository. When the GitHubRepository.api expression is evaluated, the body of the getter is called.

Subclasses and testing in Swift

A simple Swift class with no explicit parent is known as a base class. However, classes in Swift frequently inherit from another class by specifying a superclass after the class name. The syntax for this is class SubClass:SuperClass{...}.

Tests in Swift are written using the XCTest framework, which is included by default in Xcode templates. This allows an application to have tests written and then executed in place to confirm that no bugs have been introduced. XCTest replaces the previous testing framework, OCUnit.

The XCTest framework has a base class called XCTestCase that all tests inherit from. Methods beginning with test (and that take no arguments) in the test case class are invoked automatically when the tests are run. Test code can indicate success or failure by calling the XCTAssert* functions, such as XCTAssertEqual and XCTAssertGreaterThan.

Tests for the GitHubRepository class conventionally exist in a corresponding GitHubRepositoryTest class, which will be a subclass of XCTestCase. Create a new Swift file by navigating to File | New | File... and choosing a Swift File under the Source category for iOS. Ensure that the Tests and UITests targets are selected but the application target is not.
It can be implemented as follows:

import XCTest

class GitHubRepositoryTest: XCTestCase {
  func testRepository() {
    let repo = GitHubRepository()
    repo.id = 1
    repo.name = "Grit"
    XCTAssertEqual(
      repo.detailsURL(),
      "https://api.github.com/repositories/1",
      "Repository details"
    )
  }
}

Make sure that the GitHubRepositoryTest class is added to the test targets. If it was not added when the file was created, this can be done by selecting the file and pressing Command + Option + 1 to show the File Inspector. The checkbox next to the test target should be selected. Tests should never be added to the main target. The GitHubRepository class should be added to both test targets:

When the tests are run by pressing Command + U or by navigating to Product | Test, the results of the test will be displayed. Changing either the implementation or the expected test result will demonstrate whether the test is being executed correctly.

Always check whether a failing test causes the build to fail; this will confirm that the test is actually being run. For example, in the GitHubRepositoryTest class, modify the URL to remove https from the front and check whether a test failure is shown. There is nothing more useless than a correctly implemented test that never runs.

Protocols in Swift

A protocol is similar to an interface in other languages; it is a named type that has method signatures but no method implementations. Classes can implement zero or more protocols; when they do, they are said to adopt or conform to the protocol. A protocol may have a number of methods that are either required (the default) or optional (marked with the optional keyword).

Optional protocol methods are only supported when the protocol is marked with the @objc attribute. This declares that the class will be backed by an NSObject class for interoperability with Objective-C. Pure Swift protocols cannot have optional methods.

The syntax to define a protocol looks similar to the following:

protocol GitHubDetails {
  func detailsURL() -> String
  // protocol needs @objc if using optional protocols
  // optional doNotNeedToImplement()
}

Protocols cannot have functions with default arguments. Protocols can be used with the struct, class, and enum types unless the @objc class attribute is used, in which case they can only be used against Objective-C classes or enums.

Classes conform to protocols by listing the protocol names after the class name, similar to a superclass. When a class has both a superclass and one or more protocols, the superclass must be listed first.

class GitHubRepository: GitHubDetails {
  func detailsURL() -> String {
    // implementation as before
  }
}

The GitHubDetails protocol can be used as a type in the same places as an existing Swift type, such as a variable type, method return type, or argument type, as the short sketch below shows. Protocols are widely used in Swift to allow callbacks from frameworks that would otherwise not know about specific callback handlers. If a superclass were required instead, then a single class could not be used to implement multiple callbacks. Common protocols include UIApplicationDelegate, Printable, and Comparable.
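As a small sketch of using the protocol as a type, the following function accepts any conforming value; the printDetails function and the GitHubGist class are hypothetical names introduced here only for illustration:

// A function that accepts anything conforming to GitHubDetails
func printDetails(item: GitHubDetails) {
  print(item.detailsURL())
}

// A second, hypothetical conforming type
class GitHubGist: GitHubDetails {
  func detailsURL() -> String {
    return "https://api.github.com/gists"
  }
}

printDetails(GitHubRepository()) // prints the repository URL
printDetails(GitHubGist())       // prints the gist URL

Because both classes conform to GitHubDetails, either can be passed to the same function.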
Enums in Swift

The final concept to understand in Swift is enumeration, or enum for short. An enum is a closed set of values, such as North, East, South, and West, or Up and Down.

An enumeration is defined using the enum keyword, followed by a type name, and a block, which contains the case keyword followed by comma-separated values as follows:

enum Suit {
  case Clubs, Diamonds, Hearts // many on one line
  case Spades // or each on separate lines
}

Unlike C, enumerated values do not have a specific type by default, so they cannot generally be converted to and from an integer value. Enumerations can be defined with raw values that allow conversion to and from integer values.

Enum values are assigned to variables using the type name and the enum name:

var suit:Suit = Suit.Clubs

However, if the type of the expression is known, then the type prefix does not need to be explicitly specified; the following form is much more common in Swift code:

var suit:Suit = .Clubs

Raw values

For the enum values that have specific meanings, it is possible to extend the enum from a different type, such as Int. These are known as raw values:

enum Rank: Int {
  case Two = 2, Three, Four, Five, Six, Seven, Eight, Nine, Ten
  case Jack, Queen, King, Ace
}

A raw value enum can be converted to and from its raw value with the rawValue property and the failable initializer Rank(rawValue:) as follows:

Rank.Two.rawValue == 2
Rank(rawValue:14)! == .Ace

The failable initializer returns an optional enum value, because the equivalent Rank may not exist. The expression Rank(rawValue:0) will return nil, for example.

Associated values

Enums can also have associated values, such as a value or case class in other languages. For example, a Suit and a Rank can be combined to form a Card:

enum Card {
  case Face(Rank, Suit)
  case Joker
}

Instances can be created by passing values into an enum initializer:

var aceOfSpades: Card = .Face(.Ace,.Spades)
var twoOfHearts: Card = .Face(.Two,.Hearts)
var theJoker: Card = .Joker

The associated values of an enum instance cannot be extracted directly (as they can be with the properties of a struct), but the enum value can be accessed by pattern matching in a switch statement:

var card = aceOfSpades // or theJoker or twoOfHearts ...
switch card {
  case .Face(let rank, let suit):
    print("Got a face card \(rank) of \(suit)")
  case .Joker:
    print("Got the joker card")
}

The Swift compiler will require that the switch statement be exhaustive. As the enum only contains these two cases, no default block is needed. If another enum value is added to Card in the future, the compiler will report an error in this switch statement.
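Returning briefly to the failable initializer mentioned under Raw values, it combines naturally with if let. The following is a minimal sketch; describeRank is a hypothetical helper name chosen here for illustration:

// Safely convert an integer into a Rank, falling back when out of range
func describeRank(cardValue: Int) -> String {
  if let rank = Rank(rawValue: cardValue) {
    return "Rank with raw value \(rank.rawValue)"
  } else {
    return "No rank exists for \(cardValue)"
  }
}

describeRank(14) // "Rank with raw value 14"
describeRank(0)  // "No rank exists for 0"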
Creating a master-detail iOS application

Having covered how classes, protocols, and enums are defined in Swift, a more complex master-detail application can be created. A master-detail application is a specific type of iOS application that initially presents a master table view, and when an individual element is selected, a secondary details view will show more information about the selected item.

Using the Create a new Xcode project option from the welcome screen, or by navigating to File | New | Project… or by pressing Command + Shift + N, create a new project and select Master-Detail Application from the iOS Application category:

In the subsequent dialog, enter appropriate values for the project, such as the name (MasterDetail) and the organization identifier (typically based on the reverse DNS name), and ensure that the Language dropdown reads Swift and that it is targeted for Universal devices:

When the project is created, an Xcode window will open containing all the files that are created by the wizard itself, including the MasterDetail.app and MasterDetailTests.xctest products. The MasterDetail.app is a bundle that is executed by the simulator or a connected device, while the MasterDetailTests.xctest and MasterDetailUITests.xctest products are used to execute unit tests for the application's code.

The application can be launched by pressing the triangular play button on the top-left corner of Xcode or by pressing Command + R, which will run the application against the currently selected target. After a brief compile and build cycle, the iOS Simulator will open with a master page that contains an empty table, as shown in the following screenshot:

The default MasterDetail application can be used to add items to the list by clicking on the add (+) button on the top-right corner of the screen. This will add a new timestamped entry to the list. When this item is clicked, the screen will switch to the details view, which, in this case, presents the time in the center of the screen:

This kind of master-detail application is common in iOS applications for displaying a top-level list (such as a shopping list, a set of contacts, to-do notes, and so on) while allowing the user to tap to see the details.

There are three main classes in the master-detail application:

- The AppDelegate class is defined in the AppDelegate.swift file, and it is responsible for starting the application and setting up the initial state
- The MasterViewController class is defined in the MasterViewController.swift file, and it is used to manage the first (master) screen's content and interactions
- The DetailViewController class is defined in the DetailViewController.swift file, and it is used to manage the second (detail) screen's content

In order to understand what the classes do in more detail, the next three sections will present each of them in turn. The code that is generated in this section was created from Xcode 7.0, so the templates might differ slightly if using a different version of Xcode. An exact copy of the corresponding code can be acquired from the Packt website or from this book's GitHub repository at https://github.com/alblue/com.packtpub.swift.essentials/.

The AppDelegate class

The AppDelegate class is the main entry point to the application. When a set of Swift source files are compiled, if the main.swift file exists, it is used as the entry point for the application by running that code. However, to simplify setting up an application for iOS, a special @UIApplicationMain attribute exists that will both synthesize the main method and set up the associated class as the application delegate.

The AppDelegate class for iOS extends the UIResponder class, which is the parent of all the UI content on iOS.
It also adopts two protocols, UIApplicationDelegate and UISplitViewControllerDelegate, which are used to provide callbacks when certain events occur:

@UIApplicationMain
class AppDelegate: UIResponder, UIApplicationDelegate,
   UISplitViewControllerDelegate {
  var window: UIWindow?
  ...
}

On OS X, the AppDelegate class will be a subclass of NSApplication and will adopt the NSApplicationDelegate protocol.

The synthesized main function calls the UIApplicationMain method that reads the Info.plist file. If the UILaunchStoryboardName key exists and points to a suitable file (the LaunchScreen.xib interface file in this case), it will be shown as a splash screen before doing any further work. After the rest of the application has loaded, if the UIMainStoryboardFile key exists and points to a suitable file (the Main.storyboard file in this case), the storyboard is launched and the initial view controller is shown. The storyboard has references to the MasterViewController and DetailViewController classes. The window variable is assigned to the storyboard's window.

The application:didFinishLaunchingWithOptions: method is called once the application has started. It is passed a reference to the UIApplication instance and a dictionary of options that notifies how the application has been started:

func application(
 application: UIApplication,
 didFinishLaunchingWithOptions launchOptions:
  [NSObject: AnyObject]?) -> Bool {
  // Override point for customization after application launch.
  ...
}

In the sample MasterDetail application, the application:didFinishLaunchingWithOptions: method acquires a reference to the splitViewController from the explicitly unwrapped optional window, and the AppDelegate is set as its delegate:

let splitViewController =
 self.window!.rootViewController as! UISplitViewController
splitViewController.delegate = self

The … as! UISplitViewController syntax performs a type cast so that the generic rootViewController can be assigned to the more specific type; in this case, UISplitViewController. An alternative version, as?, provides a runtime-checked cast, and it returns an optional value that either contains the value with the correctly casted type or nil otherwise. The difference with as! is that a runtime error will occur if the item is not of the correct type. A short sketch at the end of this section illustrates the difference.

Finally, a navigationController is acquired from the splitViewController, which stores an array of viewControllers. This allows the DetailView to display a button on the left-hand side to expand the details view if necessary:

let navigationController = splitViewController.viewControllers[
 splitViewController.viewControllers.count-1] as! UINavigationController
navigationController.topViewController
 .navigationItem.leftBarButtonItem =
 splitViewController.displayModeButtonItem()

The only difference this makes is when running on a wide-screen device, such as an iPhone 6 Plus or an iPad, where the views are displayed side-by-side in landscape mode. This is a new feature in iOS 8 applications. Otherwise, when the device is in portrait mode, it will be rendered as a standard back button:

The method concludes with return true to let the OS know that the application has opened successfully.
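To make the difference between as! and as? concrete, the following is a minimal sketch of how the same cast could be written with as? inside application:didFinishLaunchingWithOptions:; it is not part of the template, and the printed message is illustrative only:

if let splitViewController =
    self.window!.rootViewController as? UISplitViewController {
  // The cast succeeded, so the delegate can be set safely
  splitViewController.delegate = self
} else {
  // With as?, a type mismatch produces nil instead of a runtime error
  print("The root view controller is not a UISplitViewController")
}

In the template, as! is reasonable because the storyboard guarantees the root view controller's type; as? is the safer form when the type is not known in advance.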
The MasterViewController class

The MasterViewController class is responsible for coordinating the data that is shown on the first screen (when the device is in portrait orientation) or the left-half of the screen (when a large device is in landscape orientation). This is rendered with a UITableView, and data is coordinated through the parent UITableViewController class:

class MasterViewController: UITableViewController {
  var detailViewController: DetailViewController? = nil
  var objects = [AnyObject]()
  override func viewDidLoad() {…}
  func insertNewObject(sender: AnyObject) {…}
  …
}

The viewDidLoad method is used to set up or initialize the view after it has loaded. In this case, a UIBarButtonItem is created so that the user can add new entries to the table. The UIBarButtonItem takes a @selector in Objective-C, and in Swift the selector is treated as a string literal convertible (so that "insertNewObject:" will result in a call to the insertNewObject method). Once created, the button is added to the navigation on the right-hand side, using the standard .Add type, which will be rendered as a + sign on the screen:

override func viewDidLoad() {
  super.viewDidLoad()
  self.navigationItem.leftBarButtonItem = self.editButtonItem()
  let addButton = UIBarButtonItem(
    barButtonSystemItem: .Add, target: self,
    action: "insertNewObject:")
  self.navigationItem.rightBarButtonItem = addButton
  if let split = self.splitViewController {
    let controllers = split.viewControllers
    self.detailViewController = (controllers[controllers.count-1]
      as! UINavigationController).topViewController as? DetailViewController
  }
}

The objects are NSDate values, and they are stored inside the class as an Array of AnyObject elements. The insertNewObject method is called when the + button is pressed, and it creates a new NSDate instance, which is then inserted into the array. The sender event is passed as an argument of the AnyObject type, which will be a reference to the UIBarButtonItem (although it is not needed or used here):

func insertNewObject(sender: AnyObject) {
  // objects is a Swift Array, so insert(_:atIndex:) is used to add the date
  objects.insert(NSDate(), atIndex: 0)
  let indexPath = NSIndexPath(forRow: 0, inSection: 0)
  self.tableView.insertRowsAtIndexPaths(
   [indexPath], withRowAnimation: .Automatic)
}

The UIBarButtonItem class was created before blocks were available on iOS devices, so it uses the older Objective-C @selector mechanism. A future release of iOS may provide an alternative that takes a block, in which case Swift functions can be passed instead.

The parent class contains a reference to the tableView, which is automatically created by the storyboard. When an item is inserted, the tableView is notified that a new object is available. Standard UITableViewController methods are used to access the data from the array:

override func numberOfSectionsInTableView(
 tableView: UITableView) -> Int {
  return 1
}

override func tableView(tableView: UITableView,
 numberOfRowsInSection section: Int) -> Int {
  return objects.count
}

override func tableView(tableView: UITableView,
 cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
  let cell = tableView.dequeueReusableCellWithIdentifier(
   "Cell", forIndexPath: indexPath)
  let object = objects[indexPath.row] as! NSDate
  cell.textLabel!.text = object.description
  return cell
}

The numberOfSectionsInTableView function returns 1 in this case, but a tableView can have multiple sections; for example, to permit a contacts application having a different section for A, B, C through Z. The numberOfRowsInSection method returns the number of elements in each section; in this case, as there is only one section, it is the number of objects in the array.
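As a brief illustration of the multiple-section case mentioned above, the following sketch assumes a hypothetical sections array of arrays in place of the template's single objects array:

// Hypothetical grouped data: one inner array per section
var sections: [[String]] = [["Alice", "Anna"], ["Bob"], ["Carol", "Craig"]]

override func numberOfSectionsInTableView(
 tableView: UITableView) -> Int {
  return sections.count
}

override func tableView(tableView: UITableView,
 numberOfRowsInSection section: Int) -> Int {
  return sections[section].count
}

Each section then reports its own row count, and cellForRowAtIndexPath would use both indexPath.section and indexPath.row to locate the element.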
The reason why each method is called tableView and takes a tableView argument is a result of the Objective-C heritage of UIKit. The Objective-C convention combined the method name with the first named argument, so the original method was [delegate tableView:UITableView, numberOfRowsInSection:NSInteger]. As a result, the name of the first argument is reused as the name of the method in Swift.

The cellForRowAtIndexPath method is expected to return a UITableViewCell for an object. In this case, a cell is acquired from the tableView using the dequeueReusableCellWithIdentifier method (which caches cells as they go off screen to save object instantiation), and then the textLabel is populated with the object's description (which is a String representation of the object; in this case, the date).

This is enough to display elements in the table, but in order to permit editing (or just removal, as in the sample application), there are some additional protocol methods that are required:

override func tableView(tableView: UITableView,
 canEditRowAtIndexPath indexPath: NSIndexPath) -> Bool {
  return true
}

override func tableView(tableView: UITableView,
 commitEditingStyle editingStyle: UITableViewCellEditingStyle,
 forRowAtIndexPath indexPath: NSIndexPath) {
  if editingStyle == .Delete {
    // objects is a Swift Array, so removeAtIndex is used to delete the item
    objects.removeAtIndex(indexPath.row)
    tableView.deleteRowsAtIndexPaths([indexPath],
     withRowAnimation: .Fade)
  }
}

The canEditRowAtIndexPath method returns true if the row is editable; if all the rows can be edited, then this will return true for all the values. The commitEditingStyle method takes a table, a path, and a style, which is an enumeration that indicates which operation occurred. In this case, UITableViewCellEditingStyle.Delete is passed in order to delete the item from both the underlying object array and also from the tableView. (The enumeration can be abbreviated to .Delete because the type of editingStyle is known to be UITableViewCellEditingStyle.)

The DetailViewController class

The detail view is shown when an element is selected in the MasterViewController. The transition is managed by the storyboard controller; the views are connected with a segue (pronounced seg-way; the product of the same name based it on the word segue, which is derived from the Italian word for follows).

To pass the selected item between controllers, a property exists in the DetailViewController class called detailItem. When the value is changed, additional code is run, which is implemented in a didSet property observer:

class DetailViewController: UIViewController {
  var detailItem: AnyObject? {
    didSet {
      self.configureView()
    }
  }
  …
}

When DetailViewController has the detailItem set, the configureView method will be invoked. The didSet body is run after the value has been changed, but before the setter returns to the caller. This is triggered by the segue in the MasterViewController:
class MasterViewController: UITableViewController {
  …
  override func prepareForSegue(
   segue: UIStoryboardSegue, sender: AnyObject?) {
    super.prepareForSegue(segue, sender: sender)
    if segue.identifier == "showDetail" {
      if let indexPath =
       self.tableView.indexPathForSelectedRow() {
        let object = objects[indexPath.row] as! NSDate
        let controller = (segue.destinationViewController
         as! UINavigationController)
         .topViewController as! DetailViewController
        controller.detailItem = object
        controller.navigationItem.leftBarButtonItem =
         self.splitViewController?.displayModeButtonItem()
        controller.navigationItem.leftItemsSupplementBackButton = true
      }
    }
  }
}

The prepareForSegue method is called when the user selects an item in the table. In this case, it grabs the selected row index from the table and uses this to acquire the selected date object. The navigation controller hierarchy is searched to acquire the DetailViewController, and once this has been obtained, the selected value is set with controller.detailItem = object, which triggers the update.

The label is ultimately displayed in the DetailViewController through the configureView method, which stamps the description of the object onto the label in the center:

class DetailViewController: UIViewController {
  ...
  @IBOutlet weak var detailDescriptionLabel: UILabel!
  func configureView() {
    if let detail: AnyObject = self.detailItem {
      if let label = self.detailDescriptionLabel {
        label.text = detail.description
      }
    }
  }
}

The configureView method is called both when the detailItem is changed and when the view is loaded for the first time. If the detailItem has not been set, then this has no effect. The implementation introduces some new concepts, which are worth highlighting:

- The @IBOutlet attribute indicates that the property will be exposed in interface builder and can be wired up to the object instance.
- The weak attribute indicates that the property will not store a strong reference to the object; in other words, the detail view will not own the object but merely reference it. Generally, all @IBOutlet references should be declared as weak to avoid cyclic dependency references.
- The type is defined as UILabel!, which is an implicitly unwrapped optional. When accessed, it performs an explicit unwrapping of the optional value; otherwise, the @IBOutlet would have to be wired up as a UILabel? optional type. Implicitly unwrapped optional types are used when the variable is known to never be nil at runtime, which is usually the case for @IBOutlet references. Generally, all @IBOutlet references should be implicitly unwrapped optionals.

Summary

In this article, we saw two sample iOS applications: one in which the UI was created programmatically, and another in which the UI was loaded from a storyboard. Together with an overview of classes, protocols, and enums, and an explanation of how iOS applications start, this article gives a springboard to understand the Xcode templates that are frequently used to start new projects.

To learn more about Swift 2, you can refer to the following books published by Packt Publishing (https://www.packtpub.com/):

- Swift 2 Blueprints (https://www.packtpub.com/application-development/swift-2-blueprints)
- Mastering Swift 2 (https://www.packtpub.com/application-development/mastering-swift-2)
- Swift 2 Design Patterns (https://www.packtpub.com/application-development/swift-2-design-patterns)

Resources for Article:

Further resources on this subject:

- Your First Swift App [article]
- C-Quence – A Memory Game [article]
- Exploring Swift [article]