Getting Started with Multiplayer Game Programming

Packt
04 Jun 2015
33 min read
This article is by Rodrigo Silveira, author of the book Multiplayer Game Development with HTML5. If you're reading this, chances are pretty good that you are already a game developer. That being the case, you already know just how exciting it is to program your own games, whether professionally or as a highly gratifying (and very time-consuming) hobby. Now you're ready to take your game programming skills to the next level: you're ready to implement multiplayer functionality into your JavaScript-based games.

If you have already set out to create multiplayer games for the Open Web Platform using HTML5 and JavaScript, you may have realized that a desktop computer, laptop, or mobile device is not a particularly convenient device to share with another human player when two or more players inhabit the same game world at the same time. What we need in order to create exciting multiplayer games with JavaScript, then, is some form of networking technology. We will be discussing the following principles and concepts:

- The basics of networking and network programming paradigms
- Socket programming with HTML5
- Programming a game server and game clients
- Turn-based multiplayer games

Understanding the basics of networking

It is said that one cannot program games that make use of networking without first understanding all about the discipline of computer networking and network programming. Although a deep understanding of any topic can only benefit the person working on it, I don't believe that you must know everything there is to know about game networking in order to program some pretty fun and engaging multiplayer games. Saying so is like saying that one needs to be a scholar of the Spanish language in order to cook a simple burrito.
Thus, let us take a look at the most basic and fundamental concepts of networking. After you finish reading this article, you will know enough about computer networking to get started, and you will feel comfortable adding multiplayer aspects to your games.

One thing to keep in mind is that, even though networked games are not nearly as old as single-player games, computer networking is actually a very old and well-studied subject. Some of the earliest computer network systems date back to the 1950s. Though some of the techniques have improved over the years, the basic idea remains the same: two or more computers are connected together to establish communication between the machines. By communication, I mean data exchange, such as sending messages back and forth between the machines, or one machine only sending data and the other only receiving it.

With this brief introduction, you now know enough about networking to understand what is required to network your games: two or more computers talking to each other as close to real time as possible. By now, it should be clear how this simple concept makes it possible to connect multiple players into the same game world. In essence, we need a way to share the global game data among all the players connected to a game session, and then to continually update each player about every other player. There are several techniques commonly used to achieve this, but the two most common approaches are peer-to-peer and client-server. Both techniques present different opportunities, with their own advantages and disadvantages. In general, neither is strictly better than the other, but different situations and use cases may be better suited to one or the other.

Peer-to-peer networking

A simple way to connect players into the same virtual game world is through the peer-to-peer architecture.
Although the name might suggest that only two peers ("nodes") are involved, by definition a peer-to-peer network system is one in which two or more nodes are connected directly to each other without a centralized system orchestrating the connection or information exchange. In a typical peer-to-peer setup, each peer serves the same function as every other: they all consume the same data and share whatever data they produce so that the others can stay synchronized.

In the case of a peer-to-peer game, we can illustrate this architecture with a simple game of Tic-tac-toe. Once both players have established a connection between themselves, whoever starts the game makes a move by marking a cell on the game board. This information is relayed across the wire to the other peer, who is now aware of the decision made by his or her opponent and can update their own game world accordingly. Once the second player receives the latest game state resulting from the first player's move, the second player makes a move of their own by marking an available space on the board. This information is then copied over to the first player, who updates their own world and continues the process by making the next desired move. The process goes on until one of the peers disconnects or the game ends because some condition in the game's own business logic is met. In Tic-tac-toe, the game ends once one of the players has marked three spaces forming a straight line, or once all nine cells are filled without either player connecting three cells in a straight path.

Some of the benefits of peer-to-peer networked games are as follows:

- Fast data transmission: The data goes directly to its intended target. In other architectures, the data could go to some centralized node first, and only then would the central node (or "server") contact the other peer with the necessary updates.
- Simpler setup: You only need to think about one instance of your game that, generally speaking, handles its own input, sends its input to the other connected peers, and handles their output as input to its own system. This can be especially handy in turn-based games, for example most board games, such as Tic-tac-toe.
- More reliability: A single peer going offline typically won't affect the other peers. In the simple case of a two-player game, of course, if one player drops out the game will likely cease to be playable. But imagine a game with dozens or hundreds of connected peers: if a handful of them suddenly lose their Internet connection, the others can continue to play. With a server connecting all the nodes, by contrast, if the server goes down, none of the players will know how to talk to each other, and nobody will know what is going on.

On the other hand, some of the more obvious drawbacks of the peer-to-peer architecture are as follows:

- Incoming data cannot be trusted: You don't know for sure whether the sender modified the data. Data sent to a game server suffers from the same problem, but once the server has validated the data and broadcast it to the other peers, each peer can be more confident that what it receives from the server has at least been sanitized and verified, and is therefore more credible.
- Fault tolerance can be very low: If enough players share the game world, one or more crashes won't make the game unplayable for the remaining peers. But in the many cases where a player suddenly crashing out of the game does negatively affect the rest of the players, a server-based setup could recover from the crash far more easily.
- Data duplication when broadcasting to other peers: Imagine that your game is a simple 2D side-scroller, and many other players share that game world with you.
Every time one of those players moves to the right, you receive the new (x, y) coordinates from that player, and you can update your own game world. Now, imagine that you move your own player a few pixels to the right; you would have to send that data out to every other node in the system.

Overall, peer-to-peer is a very powerful networking architecture and is still widely used by many games in the industry. Since current peer-to-peer web technologies are still in their infancy, however, most JavaScript-powered games today do not make use of peer-to-peer networking. For this and other reasons that should become apparent soon, we will focus almost exclusively on the other popular networking paradigm, namely the client-server architecture.

Client-server networking

The idea behind the client-server networking architecture is very simple. If you squint your eyes hard enough, you can almost see a peer-to-peer graph. The most obvious difference is that, instead of every node being an equal peer, one of the nodes is special: rather than every node connecting to every other node, every node (client) connects to a main centralized node called the server.

While the concept of a client-server network seems clear enough, a simple metaphor might make it easier to understand the role of each type of node, as well as to differentiate this format from peer-to-peer. A peer-to-peer network is like a group of friends (peers) having a conversation at a party: each has access to all the other peers in the conversation and can talk to them directly. A client-server network, on the other hand, is like a group of friends having dinner at a restaurant.
If a client of the restaurant wishes to order an item from the menu, he or she must talk to the waiter, who is the only person in the group with access to the desired products and the ability to serve them to the clients.

In short, the server is in charge of providing data and services to one or more clients. In the context of game development, the most common scenario is two or more clients connecting to the same server, with the server keeping track of the game as well as the distributed players. Thus, if two players are to exchange information that is only pertinent to the two of them, the communication goes from the first player, through the server, and ends up at the second player.

Following the example of the two players engaged in a game of Tic-tac-toe, we can see how similar the flow of events is in a client-server model. Again, the main difference is that the players are unaware of each other and only know what the server tells them. While you can very easily mimic a peer-to-peer model by using the server merely to connect the two players, most often the server is used much more actively than that. There are two ways to engage the server in a networked game: an authoritative way and a non-authoritative way. That is, you can enforce the game's logic strictly on the server, or you can have the clients handle the game logic, input validation, and so on. Today, most games using the client-server architecture actually use a hybrid of the two (authoritative and non-authoritative servers). For all intents and purposes, however, the server's purpose in life is to receive input from each of the clients and distribute that input throughout the pool of connected clients.
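That distribution step can be sketched in a few lines. The following is a toy model, not tied to any socket library: each client is represented by a plain object with an id and a send function (an assumption made purely for illustration), and the hypothetical relay function forwards a sender's message to every other connected client.

```javascript
// Toy model of a game server's relay loop. In a real server, each
// entry in `clients` would wrap a live socket connection.
function relay(clients, senderId, message) {
  var delivered = [];
  clients.forEach(function (client) {
    if (client.id !== senderId) {      // don't echo back to the sender
      client.send(message);
      delivered.push(client.id);
    }
  });
  return delivered;                    // ids that received the message
}

// Example: three connected clients, client 1 makes a move.
var log = [];
var clients = [1, 2, 3].map(function (id) {
  return { id: id, send: function (msg) { log.push(id + ':' + msg); } };
});
relay(clients, 1, 'move:4'); // clients 2 and 3 receive 'move:4'
```

In a real server, relay would be invoked from each connection's message handler, and each client's send would write to the underlying socket.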
Now, regardless of whether you go with an authoritative or a non-authoritative server, you will notice that one of the challenges of a client-server game is that you need to program both ends of the stack. Even if your clients do nothing more than take input from the user, forward it to the server, and render whatever data they receive back, and even if your game server does nothing more than forward the input it receives from each client to every other client, you will still need to write both a game client and a game server. We will discuss game clients and servers later. For now, all we really need to know is that these two components are what set this networking model apart from peer-to-peer.

Some of the benefits of client-server networked games are as follows:

- Separation of concerns: If you know anything about software development, you know that this is something you should always aim for. Good, maintainable software is written as discrete components, each of which does one "thing", and does it well. Writing individual, specialized components lets you focus on one task at a time, making your game easier to design, code, test, reason about, and maintain.
- Centralization: While this can be argued against as well as in favor of, having one central place through which all communication must flow makes it easier to manage that communication, enforce any required rules, control access, and so forth.
- Less work for the client: Instead of having each client (peer) take input from the user as well as from other peers, validate all of that input, share data among the other peers, render the game, and so on, the client can focus on doing only a few of these things, allowing the server to offload some of the work. This is particularly handy in mobile gaming, where even subtle divisions of labor can impact the overall player experience.
For example, imagine a game where 10 players are engaged in the same game world. In a peer-to-peer setup, every time one player takes an action, he or she needs to send that action to nine other players (that is, nine network calls, boiling down to more mobile data usage). In a client-server configuration, one player only needs to send his or her action to a single peer, the server, which is then responsible for sending that data to the remaining nine players.

Common drawbacks of the client-server architecture, whether or not the server is authoritative, are as follows:

- Communication takes longer to propagate: In the very best possible scenario, every message sent from the first player to the second player takes twice as long to be delivered as over a peer-to-peer connection: the message is first sent from the first player to the server, and then from the server to the second player. There are many techniques in use today to mitigate the latency problem, some of which we will discuss in much more depth later, but the underlying dilemma will always be there.
- More complexity due to more moving parts: It doesn't really matter how you slice the pizza; the more code you need to write (and trust me, when you build two separate modules for a game, you will write more code), the bigger your mental model has to be. While much of your code can be reused between the client and the server (especially if you use well-established programming techniques, such as object-oriented programming), at the end of the day you need to manage a greater level of complexity.
- Single point of failure and network congestion: Up until now, we have mostly discussed the case where only a handful of players participate in the same game. More commonly, though, many separate groups of players play different games at the same time.
Using the same example of a two-player game of Tic-tac-toe, imagine that thousands of players are facing each other in individual games. In a peer-to-peer setup, once a pair of players has been connected, it is as though there were no other players enjoying the game at all; the only thing that can keep these two players from finishing their game is their own connection to each other. On the other hand, if those same thousands of players are connected to each other through a server sitting between them, two paired-off players may notice severe delays between messages simply because the server is busy handling messages from and to all of the other people playing their own isolated games. Worse yet, each pair of players must now not only worry about maintaining their connection to the server, but also hope that the server's connection to their opponent remains active.

All in all, many of the challenges involved in client-server networking are well studied and understood, and many of the problems you're likely to face during your multiplayer game development will already have been solved by someone else. Client-server is a very popular and powerful game networking model, and the required technology, available to us through HTML5 and JavaScript, is well developed and widely supported.

Networking protocols – UDP and TCP

In discussing the ways in which your players can talk to each other across some form of network, we have so far only skimmed over how that communication is actually done. Let us now describe what protocols are and how they apply to networking and, more importantly, to multiplayer game development. The word protocol can be defined as a set of conventions or a detailed plan of a procedure [Def. 3, 4. (n.d.). In Merriam-Webster Online. Retrieved February 12, 2015, from http://www.merriam-webster.com/dictionary/protocol].
In computer networking, a protocol describes to the receiver of a message how the data is organized so that it can be decoded. For example, imagine that you have a multiplayer beat 'em up game, and you want to tell the game server that your player just issued a kick command and moved 3 units to the left. What exactly do you send to the server? Do you send the string "kick", followed by the number 3? Or do you send the number first, followed by a capital letter "K", indicating that the action taken was a kick? The point is that, without a well-understood and agreed-upon protocol, it is impossible to communicate with another computer successfully and predictably.

The two networking protocols that we'll discuss in this section, which are also the two most widely used protocols in multiplayer networked games, are the Transmission Control Protocol (TCP) and the User Datagram Protocol (UDP). Both protocols provide communication services between clients in a network system. In simple terms, they are protocols that allow us to send and receive packets of data in such a way that the data can be identified and interpreted in a predictable way.

When data is sent through TCP, the application running on the source machine first establishes a connection with the destination machine. Once a connection has been established, data is transmitted in packets in such a way that the receiving application can put the data back together in the appropriate order. TCP also provides built-in error checking, so that if a packet is lost, the target application can notify the sender, and any missing packets are sent again until the entire message has been received. In short, TCP is a connection-based protocol that guarantees the delivery of the full data in the correct order. Use cases where this behavior is desirable are all around us.
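To make the kick example above concrete, here is one way to settle the question: agree that every message is a JSON object with an action field and a params field. This shape is an illustrative convention, not a standard; the point is only that both ends must share it.

```javascript
// An illustrative wire convention: every message is { action, params }.
function encodeMessage(action, params) {
  return JSON.stringify({ action: action, params: params });
}

function decodeMessage(raw) {
  return JSON.parse(raw);
}

// "The player issued a kick and moved 3 units to the left."
var wire = encodeMessage('kick', { dx: -3 });
var msg = decodeMessage(wire);
// msg.action === 'kick' and msg.params.dx === -3 on the receiving end
```

Whatever convention you pick, both sender and receiver must implement it identically; that shared agreement is the protocol.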
When you download a game from a web server, for example, you want to be sure that the data comes in correctly. You want your game assets to be properly and completely downloaded before your users start playing your game. While this guarantee of delivery may sound very reassuring, it also makes TCP a comparatively slow process, and, as we'll see shortly, speed may sometimes matter more than knowing that the data will arrive in full.

In contrast, UDP transmits packets of data (called datagrams) without the use of a pre-established connection. The main goal of the protocol is to be a very fast and frictionless way of sending data towards some target application. In essence, you can think of UDP as the brave employees who dress up as their company's mascot and stand outside the store waving a large banner, in the hope that at least some of the people driving by will see them and give the company their business.

While UDP may at first seem like a reckless protocol, the use cases that make it so desirable and effective include the many situations where you care more about speed than about occasionally losing packets, receiving duplicate packets, or receiving packets out of order. You may also want to choose UDP over TCP when you don't care about a reply from the receiver. With TCP, whether or not you need some form of confirmation or reply from the receiver of your message, it will still take the time to reply back to you, at least acknowledging that the message was received. Sometimes, you may not care whether or not the server received the data at all. A more concrete example of a scenario where UDP is a far better choice than TCP is a heartbeat from the client letting the server know whether the player is still there. If you need to let your server know every so often that the session is still active, and you don't care if one of the heartbeats gets lost every now and again, then it would be wise to use UDP.
In short, for any data that is not mission-critical and that you can afford to lose, UDP might be the best option. In closing, keep in mind that, just as peer-to-peer and client-server models can be built side by side, and just as your game server can be a hybrid of authoritative and non-authoritative, there is absolutely no reason why your multiplayer games should use only TCP or only UDP. Use whichever protocol a particular situation calls for.

Network sockets

There is one other protocol that we'll cover, very briefly, but only so that you can see the need for network sockets in game development. As a JavaScript programmer, you are doubtless familiar with the Hypertext Transfer Protocol (HTTP). This is the application-layer protocol that web browsers use to fetch your games from a web server. While HTTP is a great protocol for reliably retrieving documents from web servers, it was not designed for real-time games, and it is therefore not ideal for that purpose.

The way HTTP works is very simple: a client sends a request to a server, which then returns a response to the client. The response includes a completion status code, indicating to the client that the request is either in process, needs to be forwarded to another address, or has finished successfully or erroneously. There are a handful of things to note about HTTP that make it clear that a better protocol is needed for real-time communication between client and server. First, after each response is received by the requester, the connection is closed. Thus, before making each and every request, a new connection must be established with the server. Most of the time, an HTTP request is sent over TCP, which, as we've seen, can be relatively slow. Second, HTTP is by design a stateless protocol. This means that, every time you request a resource from a server, the server has no idea who you are or what the context of the request is.
(It doesn't know whether this is your first request ever or whether you're a frequent requester.) A common solution to this problem is to include a unique string with every HTTP request, which the server keeps track of and uses to provide information about each individual client on an ongoing basis. You may recognize this as a standard session. The major downside of this solution, at least with regard to real-time gaming, is that mapping a session cookie to the user's session takes additional time.

Finally, the major factor that makes HTTP unsuitable for multiplayer game programming is that the communication is one-way: only the client can connect to the server, and the server replies back through the same connection. In other words, the game client can tell the game server that a punch command has been entered by the user, but the game server cannot push that information along to the other clients. Think of it like a vending machine. As clients of the machine, we can request specific items that we wish to buy. We formalize this request by inserting money into the vending machine and pressing the appropriate button. Under no circumstance will a vending machine issue commands to a person standing nearby. That would be like a vending machine dispensing food first and expecting the people nearby to deposit money afterwards.

The answer to this lack of functionality in HTTP is pretty straightforward. A network socket is an endpoint in a connection that allows for two-way communication between the client and the server. Think of it more like a telephone call than a vending machine. During a telephone call, either party can say whatever they want at any given time. Most importantly, the connection between both parties remains open for the duration of the conversation, making the communication process highly efficient. WebSocket is a protocol built on top of TCP that allows web-based applications to have two-way communication with a server.
Creating a WebSocket involves several steps, including a protocol upgrade from HTTP to WebSocket. Thankfully, all of the heavy lifting is done behind the scenes by the browser and JavaScript. For now, the key takeaway is that with a TCP socket (yes, there are other types of socket, including UDP sockets), we can reliably communicate with a server, and the server can talk back to us as needed.

Socket programming in JavaScript

Let's now bring the conversation about network connections, protocols, and sockets to a close by talking about the tools—JavaScript and WebSockets—that tie everything together and allow us to program awesome multiplayer games in the language of the open Web.

The WebSocket protocol

Modern browsers and other JavaScript runtime environments have implemented the WebSocket protocol in JavaScript. Don't make the mistake of thinking that just because we can create WebSocket objects in JavaScript, WebSockets are part of JavaScript. The standard that defines the WebSocket protocol is language-agnostic and can be implemented in any programming language. Thus, before you deploy JavaScript games that make use of WebSockets, ensure that the environment that will run your game implements WebSockets alongside its implementation of the ECMAScript standard. In other words, not all browsers will know what to do when you ask for a WebSocket connection. For the most part, though, the latest versions, as of this writing, of today's most popular browsers (namely, Google Chrome, Safari, Mozilla Firefox, Opera, and Internet Explorer) implement the latest revision of RFC 6455. Previous versions of WebSockets (such as protocol versions 76, 7, and 10) are slowly being deprecated and have been removed from some of the previously mentioned browsers. Probably the most confusing thing about the WebSocket protocol is the way each version of it is named.
The very first draft, which dates back to 2010, was named draft-hixie-thewebsocketprotocol-75. The next version was named draft-hixie-thewebsocketprotocol-76. Some people refer to these versions as 75 and 76, which can be quite confusing, especially since the fourth version of the protocol is named draft-ietf-hybi-thewebsocketprotocol-07, yet is referred to in the draft itself as WebSocket Version 7. The current version of the protocol (RFC 6455) is 13.

Let us take a quick look at the application programming interface (API) that we'll use within our JavaScript code to interact with a WebSocket server. Keep in mind that we'll need to write both the JavaScript clients that use WebSockets to consume data and the WebSocket server itself, which also uses WebSockets but plays the role of the server. The difference between the two will become apparent as we go over some examples.

Creating a client-side WebSocket

The following code snippet creates a new object of type WebSocket, connecting the client to some backend server. The constructor takes two parameters: the first is required and represents the URL at which the WebSocket server is running and expecting connections; the second is an optional list of sub-protocols that the server may implement.

```javascript
var socket = new WebSocket('ws://www.game-domain.com');
```

Although this one line of code may seem simple and harmless enough, here are a few things to keep in mind:

- We are no longer in HTTP territory. The address of your WebSocket server now starts with ws:// instead of http://. Similarly, when we work with secure (encrypted) sockets, we specify the server's URL as wss://, just as in https://.
- It may seem obvious, but a common pitfall for those getting started with WebSockets is that, before you can establish a connection with the previous code, you need a WebSocket server running at that domain.
- WebSockets implement the same-origin security model.
As you may have already seen with other HTML5 features, the same-origin policy states that you can only access a resource through JavaScript if both the client and the server are in the same domain. For those unfamiliar with the same-domain (also known as same-origin) policy, the three things that constitute a domain in this context are the protocol, host, and port of the resource being accessed. In the previous example, the protocol, host, and port number were, respectively, ws (and not wss, http, or ssh), www.game-domain.com (any other sub-domain, such as game-domain.com or beta.game-domain.com, would violate the same-origin policy), and 80 (by default, WebSocket connects to port 80, or to port 443 when it uses wss). Since the server in the previous example binds to port 80, we don't need to specify the port number explicitly. Had the server been configured to run on a different port, say 2667, then the URL string would need to include a colon followed by the port number at the end of the host name, as in ws://www.game-domain.com:2667.

As with everything else in JavaScript, WebSocket instances attempt to connect to the backend server asynchronously. Thus, you should not attempt to issue commands on your newly created socket until you're sure that the server has connected; otherwise, JavaScript will throw an error that may crash your entire game. You can ensure this by registering a callback function on the socket's onopen event as follows:

```javascript
var socket = new WebSocket('ws://www.game-domain.com');

socket.onopen = function(event) {
    // socket ready to send and receive data
};
```

Once the socket is ready to send and receive data, you can send messages to the server by calling the socket object's send method, which takes a string as the message to be sent.
```javascript
// Assuming a connection was previously established
socket.send('Hello, WebSocket world!');
```

Most often, however, you will want to send more meaningful data to the server, such as objects, arrays, and other data structures that carry more meaning on their own. In these cases, we can simply serialize our data as JSON strings.

```javascript
var player = {
    nickname: 'Juju',
    team: 'Blue'
};

socket.send(JSON.stringify(player));
```

Now, the server can receive that message and reconstruct the same object structure that the client sent by running it through the parse method of the JSON object.

```javascript
var player = JSON.parse(event.data);
player.nickname === 'Juju'; // true
player.team === 'Blue'; // true
player.id === undefined; // true
```

If you look at the previous example closely, you will notice that we extract the message sent through the socket from the data attribute of some event object. Where did that event object come from, you ask? Good question! The way we receive messages from the socket is the same on both the client and server sides: we simply register a callback function on the socket's onmessage event, and the callback is invoked whenever a new message is received. The argument passed into the callback function contains an attribute named data, which holds the raw string with the message that was sent.

```javascript
socket.onmessage = function(event) {
    event instanceof MessageEvent; // true

    var msg = JSON.parse(event.data);
};
```

Other events on the socket object on which you can register callbacks include onerror, which is triggered whenever an error related to the socket occurs, and onclose, which is triggered whenever the state of the socket changes to CLOSED; in other words, whenever the server closes the connection with the client for any reason, or the connected client closes its connection.
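In practice, a client rarely handles every message in one monolithic onmessage callback. A common pattern is to route each parsed message to a handler based on a type field. The sketch below is an illustration: the { type, ... } message shape and the handler names are assumptions, not part of the WebSocket API.

```javascript
// A tiny dispatcher: handlers are looked up by the message's type field.
function makeDispatcher(handlers) {
  return function (raw) {
    var msg = JSON.parse(raw);
    var handler = handlers[msg.type];
    if (handler) {
      handler(msg);
      return true;    // message was recognized and handled
    }
    return false;     // unknown message type; ignore or log it
  };
}

var state = { players: {} };
var dispatch = makeDispatcher({
  join: function (msg) { state.players[msg.id] = { x: 0, y: 0 }; },
  move: function (msg) { state.players[msg.id].x += msg.dx; }
});

// Wiring it to a socket would look like:
//   socket.onmessage = function (event) { dispatch(event.data); };
dispatch(JSON.stringify({ type: 'join', id: 'Juju' }));
dispatch(JSON.stringify({ type: 'move', id: 'Juju', dx: 5 }));
// state.players.Juju.x is now 5
```

Keeping the dispatch logic separate from the socket wiring also makes it easy to test without a live connection.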
As mentioned previously, the socket object will also have a property called readyState, which behaves in a similar manner to the identically named attribute in AJAX objects (or more appropriately, XMLHttpRequest objects). This attribute represents the current state of the connection and can have one of four values at any point in time. This value is an unsigned integer between 0 and 3, inclusive. For clarity, there are four accompanying constants on the WebSocket class that map to the four numerical values of the instance's readyState attribute. The constants are as follows:

WebSocket.CONNECTING: This has a value of 0 and means that the connection between the client and the server has not yet been established.
WebSocket.OPEN: This has a value of 1 and means that the connection between the client and the server is open and ready for use. Whenever the object's readyState attribute changes from CONNECTING to OPEN, which will only happen once in the object's life cycle, the onopen callback will be invoked.
WebSocket.CLOSING: This has a value of 2 and means that the connection is being closed.
WebSocket.CLOSED: This has a value of 3 and means that the connection is now closed (or could not be opened to begin with).

Once the readyState has changed to a new value, it will never return to a previous state in the same instance of the socket object. Thus, if a socket object is CLOSING or has already become CLOSED, it will never OPEN again. In this case, you would need a new instance of WebSocket if you would like to continue to communicate with the server.
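When debugging connection issues, it helps to log these states by name rather than by number. The following is a minimal sketch of a helper that maps a readyState value to a readable name (the numeric values are fixed by the WebSocket specification, so they are safe to hard-code):

```javascript
// Map the four readyState values to readable names for logging.
var READY_STATE_NAMES = {
    0: 'CONNECTING',
    1: 'OPEN',
    2: 'CLOSING',
    3: 'CLOSED'
};

function describeReadyState(state) {
    return READY_STATE_NAMES[state] || 'UNKNOWN';
}

// Usage (in a browser, where WebSocket is available):
// console.log(describeReadyState(socket.readyState)); // for example, 'OPEN'
```

This keeps connection logs readable without having to remember which number corresponds to which state.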
To summarize, let us bring together the simple WebSocket API features that we discussed previously and create a convenient function that simplifies data serialization, error checking, and error handling when communicating with the game server:

function sendMsg(socket, data) {
   if (socket.readyState === WebSocket.OPEN) {
       socket.send(JSON.stringify(data));

       return true;
   }

   return false;
}

Game clients

Earlier, we talked about the architecture of a multiplayer game that was based on the client-server pattern. Since this is the approach we will take for the games that we'll be developing, let us define some of the main roles that the game client will fulfill. From a high level, a game client will be the interface between the human player and the rest of the game universe (which includes the game server and other human players who are connected to it). Thus, the game client will be in charge of taking input from the player, communicating this to the server, receiving any further instructions and information from the server, and then rendering the final output to the human player again. Depending on the type of game server used, the client can be more sophisticated than just an input application that renders static data received from the server. For example, the client could very well simulate what the game server will do and present the result of this simulation to the user while the server performs the real calculations and sends the results to the client. The biggest selling point of this technique is that the game will seem a lot more dynamic and real-time to the user, since the client responds to input almost instantly.

Game servers

The game server is primarily responsible for connecting all the players to the same game world and keeping the communication going between them. However, as you will soon realize, there may be cases where you will want the server to be more sophisticated than a routing application.
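Because every server message arrives through the same onmessage callback, a game client needs some way to tell the different kinds of messages apart. One common approach, sketched below with made-up action names (not part of the WebSocket API), is to give every message an action field and route it to a registered handler:

```javascript
// Route incoming messages to handlers based on an "action" field.
// The action names used here are illustrative, not part of any API.
var handlers = {};

function on(action, fn) {
    handlers[action] = fn;
}

function dispatch(rawMessage) {
    var msg = JSON.parse(rawMessage);

    if (handlers[msg.action]) {
        handlers[msg.action](msg);
        return true;
    }

    return false; // no handler registered for this action
}

// Typical usage: register handlers, then wire dispatch to the socket.
on('player-joined', function(msg) {
    console.log(msg.nickname + ' joined the game');
});

// socket.onmessage = function(event) { dispatch(event.data); };
```

This keeps the onmessage callback tiny and lets each part of the game register only for the messages it cares about.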
For example, just because one of the players tells the server that the game is over and that he or she is the winner, we may still want to confirm the information before deciding that the game is in fact over. With this idea in mind, we can label a game server as one of two kinds: authoritative or non-authoritative. In an authoritative game server, the game's logic is actually running in memory (although it normally doesn't render any graphical output like the game clients certainly will) all the time. As each client reports information to the server by sending messages through its corresponding socket, the server updates the current game state and sends the updates back to all of the players, including the original sender. This way, we can be more certain that any data coming from the server has been verified and is accurate. In a non-authoritative server, the clients take on a much more involved part in the game logic enforcement, which means the clients are given a lot more trust. As suggested previously, we can take the best of both worlds and create a mix of both techniques. What we will do is have a strictly authoritative server, but clients that are smart and can do some of the work on their own. Since the server has the ultimate say in the game, however, any messages received by clients from the server are considered the ultimate truth and supersede any conclusions a client came to on its own.

Summary

Overall, we discussed the basics of networking and network programming paradigms. We saw how WebSockets makes it possible to develop real-time, multiplayer games in HTML5. Finally, we implemented a simple game client and game server using widely supported web technologies and built a fun game of Tic-tac-toe.
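To make the authoritative approach concrete, consider the Tic-tac-toe game mentioned above: instead of trusting a client's claim that it has won, the server can recompute the winner from its own copy of the board. The following sketch assumes a board represented as an array of nine cells holding 'X', 'O', or null (this representation is an illustration, not the only way to model it):

```javascript
// Check a 3x3 Tic-tac-toe board (an array of 9 cells holding 'X', 'O',
// or null) and return the winning marker, or null if there is no winner.
var LINES = [
    [0, 1, 2], [3, 4, 5], [6, 7, 8], // rows
    [0, 3, 6], [1, 4, 7], [2, 5, 8], // columns
    [0, 4, 8], [2, 4, 6]             // diagonals
];

function findWinner(board) {
    for (var i = 0; i < LINES.length; i++) {
        var a = LINES[i][0], b = LINES[i][1], c = LINES[i][2];

        if (board[a] !== null && board[a] === board[b] && board[a] === board[c]) {
            return board[a];
        }
    }

    return null;
}

// The server checks its own board before accepting a "game over" claim:
// if (findWinner(serverBoard) !== claimedWinner) { /* reject the message */ }
```

Because the server runs this check against its own game state, a client that sends a false "I won" message is simply ignored.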
Resources for Article: Further resources on this subject: HTML5 Game Development – A Ball-shooting Machine with Physics Engine [article] Creating different font files and using web fonts [article] HTML5 Canvas [article]
Deploying New Hosts with vCenter

Packt
04 Jun 2015
8 min read
In this article by Konstantin Kuminsky, author of the book VMware vCenter Cookbook, we will review some options and features available in vCenter to improve an administrator's efficiency. (For more resources related to this topic, see here.)

Deploying new hosts faster with scripted installation

Scripted installation is an alternative way to deploy ESXi hosts. It can be used when several hosts need to be deployed or upgraded. The installation script contains ESXi settings and can be accessed by a host during the ESXi boot from the following locations:

FTP
HTTP or HTTPS
NFS
USB flash drive or CD-ROM

How to do it...

The following sections describe the process of creating an installation script and using it to boot the ESXi host.

Creating an installation script

An installation script contains installation options for ESXi. It's a text file with the .cfg extension. The best way to create an installation script is to use the default script supplied with the ESXi installer and modify it. The default script is located in the /etc/vmware/weasel/ folder and is called ks.cfg. Commands that can be modified include, but are not limited to:

The install, installorupgrade, or upgrade commands, which define the disk location where ESXi will be installed or upgraded. The available options are:
--disk: This option is the disk name, which can be specified as a path (/vmfs/devices/disks/vmhbaX:X:X), a VML name (vml.xxxxxxxx), or a LUN UID (vmkLUM_UID)
--overwritevmfs: This option wipes the existing datastore.
--preservevmfs: This option keeps the existing datastore.
--novmfsondisk: This option prevents a new partition from being created.
The Network command, which specifies the network settings.
Most of the available options are self-explanatory: --bootproto=[dhcp|static] --device: MAC address of NIC to use --ip --gateway --nameserver --netmask --hostname --vlanid A full list of installation and upgrade commands can be found in the vSphere5 documentation on the VMware website at https://www.vmware.com/support/pubs/. Use the installation script to configure ESXi In order to use the installation script, you will need to use additional ESXi boot options. Boot a host from the ESXi installation disk. When the ESXi installer screen appears, press Shift + O to provide additional boot options. In the command prompt, type the following: ks=<location of the script> <additional boot options> The valid locations are as follows: ks=cdrom:/path ks=file://path ks=protocol://path ks=usb:/path The additional options available are as follows: gateway: This option is the default gateway ip: This option is the IP address nameserver: This option is the DNS server netmask: This option is the subnet mask vlanid: This option is the VLAN ID netdevice: This option is the MAC address of NIC to use bootif: This option is the MAC address of NIC to use in PXELINUX format For example, for the HTTP location, the command will look like this: ks=http://XX.XX.XX.XX/scripts/ks-v1.cfg nameserver=XX.XX.XX.XX ip=XX.XX.XX.XX netmask=255.255.255.0 gateway=XX.XX.XX.XX Deploying new hosts faster with auto deploy vSphere Auto Deploy is VMware's solution to simplify the deployment of large numbers of ESXi hosts. It is one of the available options for ESXi deployment along with an interactive and scripted installation. The main difference of Auto Deploy compared to other deployment options is that the ESXi configuration is not stored on the host's disk. Instead, it's managed with image and host profiles by the Auto Deploy server. Getting ready Before using Auto Deploy, confirm the following: The Auto Deploy server is installed and registered with vCenter. 
It can be installed as a standalone server or as part of the vCenter installation.
The DHCP server exists in the environment.
The DHCP server is configured to point to the TFTP server for PXE boot (option 66) and to use the boot filename undionly.kpxe.vmw-hardwired (option 67).
The TFTP server that will be used for PXE boot exists and is configured properly.
The machine where the Auto Deploy cmdlets will run has the following installed:
Microsoft .NET 2.0 or later
PowerShell 2.0 or later
PowerCLI, including the Auto Deploy cmdlets
New hosts that will be provisioned with Auto Deploy must:
Meet the hardware requirements for ESXi 5
Have network connectivity to vCenter, preferably 1 Gbps or higher
Have PXE boot enabled

How to do it...

Once the prerequisites are met, the following steps are required to start deploying hosts.

Configuring the TFTP server

In order to configure the TFTP server with the correct boot image for ESXi, execute the following steps:

In vCenter, go to Home | Auto Deploy.
Switch to the Administration tab.
From the Auto Deploy page, click on Download TFTP Boot ZIP.
Download the file and unzip it to the appropriate folder on the TFTP server.

Creating an image profile

Image profiles are created using Image Builder PowerCLI cmdlets. Image Builder requires PowerCLI and can be installed on a machine that's used to run administrative tasks. It doesn't have to be a vCenter server or Auto Deploy server; the only requirement for this machine is that it must have access to the software depot, a file server that stores image profiles. Image profiles can be created from scratch or by cloning an existing profile. The following steps outline the process of creating an image profile by cloning. The steps assume that:

The Image Builder has been installed.
The appropriate software depot has been downloaded from the VMware website by going to http://www.vmware.com/downloads and searching for the software depot.
Cloning an existing profile included in the depot is the easiest way to create a new profile. The steps to do so are as follows:

Add a depot with the image profile to be cloned:
Add-EsxSoftwareDepot -DepotUrl <Path to software depot>
Find the name of the profile to be cloned using Get-EsxImageProfile.
Clone the profile:
New-EsxImageProfile -CloneProfile <Existing profile name> -Name <New profile name>
Add a software package to the new image profile:
Add-EsxSoftwarePackage -ImageProfile <New profile name> -SoftwarePackage <Package>

At this point, the software package will be validated, and in case of errors, or if there are any dependencies that need to be resolved, an appropriate message will be displayed.

Assigning an image profile to hosts

To create a rule that assigns an image profile to a host, execute the following steps:

Connect to vCenter with PowerCLI:
Connect-VIServer <vCenter IP or FQDN>
Add the software depot with the correct image profile to the PowerCLI session:
Add-EsxSoftwareDepot <depot URL>
Locate the image profile using the Get-EsxImageProfile cmdlet.
Define a rule that assigns hosts with certain attributes to an image profile. For example, for hosts with IP addresses in a given range, run the following commands:
New-DeployRule -Name <Rule name> -Item <Profile name> -Pattern "ipv4=192.168.1.10-192.168.1.20"
Add-DeployRule <Rule name>

Assigning a host profile to hosts

Optionally, an existing host profile can be assigned to hosts. To accomplish this, execute the following steps:

Connect to vCenter with PowerCLI:
Connect-VIServer <vCenter IP or FQDN>
Locate the host profile name using the Get-VMHostProfile cmdlet.
Define a rule that assigns hosts with certain attributes to a host profile.
For example, for hosts with IP addresses for a range, run the following command: New-DeployRule -Name <Rule name> -Item <Profile name> -Pattern "ipv4=192.168.1.10-192.168.1.20" Add-DeployRule <Rule name> Assigning a host to a folder or cluster in vCenter To make sure a host is placed in a certain folder or cluster once it boots, do the following: Connect to vCenter with PowerCLI: Connect-VIServer <vCenter IP or FQDN> Define a rule that assigns hosts with certain attributes to a folder or cluster. For example, for hosts with IP addresses for a range, run the following command: New-DeployRule -Name <Rule name> -Item <Folder name> -Pattern "ipv4=192.168.1.10-192.168.1.20" Add-DeployRule <Rule name> If a host is assigned to a cluster it inherits that cluster's host profile. How it works... Auto Deploy utilizes the PXE boot to connect to the Auto Deploy server and get an image profile, vCenter location, and optionally, host profiles. The detailed process is as follows: The host gets gPXE executable and gPXE configuration files from the PXE TFTP server. As gPXE executes, it uses instructions from the configuration file to query the Auto Deploy server for specific information. The Auto Deploy server returns the requested information specified in the image and host profiles. The host boots using this information. Auto Deploy adds a host to the specified vCenter server. The host is placed in maintenance mode when additional information such as IP address is required from the administrator. To exit maintenance mode, the administrator will need to provide this information and reapply the host profile. When a new host boots for the first time, vCenter creates a new object and stores it together with the host and image profiles in the database. For any subsequent reboots, the existing object is used to get the correct host profile and any changes that have been made. 
More details can be found in the vSphere 5 documentation on the VMware website at https://www.vmware.com/support/pubs/. Summary In this article we learnt how new hosts can be deployed with scripted installation and auto deploy techniques. Resources for Article: Further resources on this subject: VMware vRealize Operations Performance and Capacity Management [Article] Backups in the VMware View Infrastructure [Article] Application Packaging in VMware ThinApp 4.7 Essentials [Article]
Regex in Practice

Packt
04 Jun 2015
24 min read
Knowing Regex's syntax allows you to model text patterns, but sometimes coming up with a good, reliable pattern can be more difficult, so taking a look at some actual use cases can really help you learn some common design patterns. So, in this article by Loiane Groner and Gabriel Manricks, coauthors of the book JavaScript Regular Expressions, we will develop a form, and we will explore the following topics:

Validating a name
Validating e-mails
Validating a Twitter username
Validating passwords
Validating URLs
Manipulating text

(For more resources related to this topic, see here.)

Regular expressions and form validation

By far, one of the most common uses for regular expressions on the frontend is with user-submitted forms, so this is what we will be building. The form we will be building will have all the common fields, such as name, e-mail, website, and so on, but we will also experiment with some text processing besides all the validations. In real-world applications, you usually are not going to implement the parsing and validation code manually. You can create a regular expression and rely on some JavaScript libraries, such as:

jQuery Validation: Refer to http://jqueryvalidation.org/
Parsley.js: Refer to http://parsleyjs.org/

Even the most popular frameworks support the usage of regular expressions in their native validation engines, such as AngularJS (refer to http://www.ng-newsletter.com/posts/validations.html).

Setting up the form

This demo will be for a site that allows users to create an online bio, and as such, it consists of different types of fields. However, before we get into this (since we won't be building a backend to handle the form), we are going to set up some HTML and JavaScript code to catch the form submission and extract/validate the data entered in it. To keep the code neat, we will create an array with all the validation functions, and a data object where all the final data will be kept.
Here is a basic outline of the HTML code, to which we will begin by adding fields:

<!DOCTYPE HTML>
<html>
   <head>
       <title>Personal Bio Demo</title>
   </head>
   <body>
       <form id="main_form">
           <input type="submit" value="Process" />
       </form>

       <script>
           // js goes here
       </script>
   </body>
</html>

Next, we need to write some JavaScript to catch the form and run through the list of functions that we will be writing. If a function returns false, it means that the verification did not pass and we will stop processing the form. In the event that we get through the entire list of functions and no problems arise, we will log the data object, which contains all the fields we extracted, to the console:

<script>
   var fns = [];
   var data = {};

   var form = document.getElementById("main_form");

   form.onsubmit = function(e) {
       e.preventDefault();

       data = {};

       for (var i = 0; i < fns.length; i++) {
           if (fns[i]() == false) {
               return;
           }
       }

       console.log("Verified Data: ", data);
   }
</script>

The JavaScript starts by creating the two variables I mentioned previously; we then pull the form's object from the DOM and set the submit handler. The submit handler begins by preventing the page from actually submitting (as we don't have any backend code in this example), and then we go through the list of functions, running them one by one.

Validating fields

In this section, we will explore how to validate different types of fields manually, such as name, e-mail, website URL, and so on.

Matching a complete name

To get our feet wet, let's begin with a simple name field. It's something we have gone through briefly in the past, so it should give you an idea of how our system will work.
The following code goes inside the script tags, but only after everything we have written so far:

function process_name() {
   var field = document.getElementById("name_field");
   var name = field.value;

   var name_pattern = /^(\S+) (\S*) ?\b(\S+)$/;

   if (name_pattern.test(name) === false) {
       alert("Name field is invalid");
       return false;
   }

   var res = name_pattern.exec(name);
   data.first_name = res[1];
   data.last_name = res[3];

   if (res[2].length > 0) {
       data.middle_name = res[2];
   }

   return true;
}

fns.push(process_name);

We get the name field in a similar way to how we got the form; then we extract the value and test it against a pattern to match a full name. If the name doesn't match the pattern, we simply alert the user and return false to let the form handler know that the validations have failed. If the name field is in the correct format, we set the corresponding fields on the data object (remember, the middle name is optional here). The last line just adds this function to the array of functions, so it will be called when the form is submitted. The last thing required to get this working is to add the HTML for this form field, so inside the form tags (right before the submit button), you can add this text input:

Name: <input type="text" id="name_field" /><br />

Opening this page in your browser, you should be able to test it out by entering different values into the Name box.
If you enter a valid name, you should get the data object printed out with the correct parameters; otherwise, you will see an alert message telling you that the name field is invalid.

Understanding the complete name Regex

Let's go back to the regular expression used to match the name entered by a user:

/^(\S+) (\S*) ?\b(\S+)$/

The following is a brief explanation of the Regex:

The ^ character asserts its position at the beginning of the string
The first capturing group (\S+)
\S+ matches a non-whitespace character [^\r\n\t\f ]
The + quantifier between one and unlimited times
The second capturing group (\S*)
\S* matches any non-whitespace character [^\r\n\t\f ]
The * quantifier between zero and unlimited times
" ?" matches the whitespace character
The ? quantifier between zero and one time
\b asserts its position at a (^\w|\w$|\W\w|\w\W) word boundary
The third capturing group (\S+)
\S+ matches a non-whitespace character [^\r\n\t\f ]
The + quantifier between one and unlimited times
$ asserts its position at the end of the string

Matching an e-mail with Regex

The next type of field we may want to add is an e-mail field. E-mails may look pretty simple at first glance, but there are a large variety of e-mails out there. You may just think of creating a word@word.word pattern, but the first section can contain many additional characters besides just letters, the domain can be a subdomain, or the suffix could have multiple parts (such as .co.uk for the UK). Our pattern will simply look for a group of characters that are not spaces or instances where the @ symbol has been used in the first section. We will then want an @ symbol, followed by another set of characters that have at least one period, followed by the suffix, which in itself could contain another suffix. So, this can be accomplished in the following manner:

/[^\s@]+@[^\s@.]+\.[^\s@]+/

The pattern in our example is very simple and will not match every valid e-mail address. There is an official standard that defines the format of a valid e-mail address, called RFC 5322.
For more information, please read http://www.regular-expressions.info/email.html. So, let's add the field to our page:

Email: <input type="text" id="email_field" /><br />

We can then add this function to verify it:

function process_email() {
   var field = document.getElementById("email_field");
   var email = field.value;

   var email_pattern = /^[^\s@]+@[^\s@.]+\.[^\s@]+$/;

   if (email_pattern.test(email) === false) {
       alert("Email is invalid");
       return false;
   }

   data.email = email;
   return true;
}

fns.push(process_email);

There is an HTML5 field type specifically designed for e-mails, but here we are verifying manually, as this is a Regex book. For more information, please refer to http://www.w3.org/TR/html-markup/input.email.html.

Understanding the e-mail Regex

Let's go back to the regular expression used to match the e-mail entered by the user:

/^[^\s@]+@[^\s@.]+\.[^\s@]+$/

The following is a brief explanation of the Regex:

^ asserts its position at the beginning of the string
[^\s@]+ matches a single character that is not present in the following list:
The + quantifier between one and unlimited times
\s matches any whitespace character [\r\n\t\f ]
@ matches the @ character literally
[^\s@.]+ matches a single character that is not present in the following list:
The + quantifier between one and unlimited times
\s matches a whitespace character [\r\n\t\f ]
@. are the @ and . characters, literally
\. matches the . character literally
[^\s@]+ matches a single character that is not present in the following list:
The + quantifier between one and unlimited times
\s matches a whitespace character [\r\n\t\f ]
@ is the @ literal character
$ asserts its position at the end of the string

Matching a Twitter name

The next field we are going to add is a field for a Twitter username.
For the unfamiliar, a Twitter username is in the @username format, but when people enter this in, they sometimes include the preceding @ symbol and on other occasions, they only write the username by itself. Obviously, internally we would like everything to be stored uniformly, so we will need to extract the username, regardless of the @ symbol, and then manually prepend it with one, so regardless of whether it was there or not, the end result will look the same. So again, let's add a field for this:

Twitter: <input type="text" id="twitter_field" /><br />

Now, let's write the function to handle it:

function process_twitter() {
   var field = document.getElementById("twitter_field");
   var username = field.value;

   var twitter_pattern = /^@?(\w+)$/;

   if (twitter_pattern.test(username) === false) {
       alert("Twitter username is invalid");
       return false;
   }

   var res = twitter_pattern.exec(username);
   data.twitter = "@" + res[1];
   return true;
}

fns.push(process_twitter);

If a user inputs the @ symbol, it will be ignored, as we will add it manually after checking the username.

Understanding the Twitter username Regex

Let's go back to the regular expression used to match the username entered by the user:

/^@?(\w+)$/

This is a brief explanation of the Regex:

^ asserts its position at the start of the string
@? matches the @ character, literally
The ? quantifier between zero and one time
First capturing group (\w+)
\w+ matches a [a-zA-Z0-9_] word character
The + quantifier between one and unlimited times
$ asserts its position at the end of the string

Matching passwords

Another popular field, which can have some unique constraints, is a password field. Now, not every password field is interesting; you may just allow just about anything as a password, as long as the field isn't left blank. However, there are sites where you need to have at least one letter from each case, a number, and at least one other character.
Considering all the ways these can be combined, creating a pattern that can validate this could be quite complex. A much better solution for this, and one that allows us to be a bit more verbose with our error messages, is to create four separate patterns and make sure the password matches each of them. For the input, it's almost identical: Password: <input type="password" id="password_field" /><br /> The process_password function is not very different from the previous example as we can see its code as follows: function process_password() {    var field = document.getElementById("password_field");    var password = field.value;      var contains_lowercase = /[a-z]/;    var contains_uppercase = /[A-Z]/;    var contains_number = /[0-9]/;    var contains_other = /[^a-zA-Z0-9]/;      if (contains_lowercase.test(password) === false) {        alert("Password must include a lowercase letter");        return false;    }      if (contains_uppercase.test(password) === false) {        alert("Password must include an uppercase letter");        return false;    }      if (contains_number.test(password) === false) {        alert("Password must include a number");        return false;    }      if (contains_other.test(password) === false) {        alert("Password must include a non-alphanumeric character");        return false;    }      data.password = password;    return true; }   fns.push(process_password); All in all, you may say that this is a pretty basic validation and something we have already covered, but I think it's a great example of working smart as opposed to working hard. Sure, we probably could have created one long pattern that would check everything together, but it would be less clear and less flexible. So, by breaking it into smaller and more manageable validations, we were able to make clear patterns, and at the same time, improve their usability with more helpful alert messages. 
Matching URLs

Next, let's create a field for the user's website; the HTML for this field is:

Website: <input type="text" id="website_field" /><br />

A URL can have many different protocols, but for this example, let's restrict it to only http or https links. Next, we have the domain name with an optional subdomain, and we need to end it with a suffix. The suffix itself can be a single word, such as .com, or it can have multiple segments, such as .co.uk. All in all, our pattern looks similar to this:

/^(?:https?:\/\/)?\w+(?:\.\w+)?(?:\.[A-Z]{2,3})+$/i

Here, we are using multiple noncapture groups, both for when sections are optional and for when we want to repeat a segment. You may have also noticed that we are using the case-insensitive flag (/i) at the end of the regular expression, as links can be written in lowercase or uppercase. Now, we'll implement the actual function:

function process_website() {
   var field = document.getElementById("website_field");
   var website = field.value;

   var pattern = /^(?:https?:\/\/)?\w+(?:\.\w+)?(?:\.[A-Z]{2,3})+$/i;

   if (pattern.test(website) === false) {
       alert("Website is invalid");
       return false;
   }

   data.website = website;
   return true;
}

fns.push(process_website);

At this point, you should be pretty familiar with the process of adding fields to our form and adding a function to validate them. So, for our remaining examples, let's shift our focus a bit from validating inputs to manipulating data.

Understanding the URL Regex

Let's go back to the regular expression used to match the URL entered by the user:

/^(?:https?:\/\/)?\w+(?:\.\w+)?(?:\.[A-Z]{2,3})+$/i

This is a brief explanation of the Regex:

^ asserts its position at the start of the string
(?:https?:\/\/)? is a non-capturing group
The ? quantifier between zero and one time
http matches the http characters literally (case-insensitive)
s? matches the s character literally (case-insensitive)
The ?
quantifier between zero and one time
: matches the : character literally
\/ matches the / character literally
\/ matches the / character literally
\w+ matches a [a-zA-Z0-9_] word character
The + quantifier between one and unlimited times
(?:\.\w+)? is a non-capturing group
The ? quantifier between zero and one time
\. matches the . character literally
\w+ matches a [a-zA-Z0-9_] word character
The + quantifier between one and unlimited times
(?:\.[A-Z]{2,3})+ is a non-capturing group
The + quantifier between one and unlimited times
\. matches the . character literally
[A-Z]{2,3} matches a single character present in this list
The {2,3} quantifier between 2 and 3 times
A-Z is a single character in the range between A and Z (case-insensitive)
$ asserts its position at the end of the string
i modifier: insensitive; case-insensitive matching, meaning the pattern will match both a-z and A-Z

Manipulating data

We are going to add one more input to our form, which will be for the user's description. In the description, we will parse for things such as e-mails, and then create both a plain text and an HTML version of the user's description. The HTML for this form is pretty straightforward; we will be using a standard textbox and give it an appropriate field:

Description: <br />
<textarea id="description_field"></textarea><br />

Next, let's start with the bare scaffold needed to begin processing the form data:

function process_description() {
   var field = document.getElementById("description_field");
   var description = field.value;

   data.text_description = description;

   // More Processing Here

   data.html_description = "<p>" + description + "</p>";

   return true;
}

fns.push(process_description);

This code gets the text from the textbox on the page and then saves both a plain text version and an HTML version of it. At this stage, the HTML version is simply the plain text version wrapped between a pair of paragraph tags, but this is what we will be working on now.
The first thing I want to do is split the text into paragraphs. In a textarea, the user may separate text in different ways: single line breaks within a paragraph, or blank lines between paragraphs. For our example, let's say that if the user entered a single newline character, we will add a <br /> tag, and if there is more than one newline character in a row, we will create a new paragraph using the <p> tag.

Using the String.replace method

We are going to use JavaScript's replace method on the string object. This function can accept a Regex pattern as its first parameter, and a function as its second; each time it finds the pattern, it will call the function, and anything returned by the function will be inserted in place of the matched text. So, for our example, we will be looking for newline characters, and in the function, we will decide whether we want to replace the newline with a break tag or an actual new paragraph, based on how many newline characters it was able to pick up:

var line_pattern = /\n+/g;
description = description.replace(line_pattern, function(match) {
    if (match == "\n") {
        return "<br />";
    } else {
        return "</p><p>";
    }
});

The first thing you may notice is that we need to use the g flag in the pattern, so that it will look for all possible matches as opposed to only the first. Besides this, the rest is pretty straightforward. Consider this form:

If you take a look at the output from the console of the preceding code, you should get something similar to this:

Matching a description field

The next thing we need to do is try and extract e-mails from the text and automatically wrap them in a link tag. We have already covered a Regex pattern to capture e-mails, but we will need to modify it slightly, as our previous pattern expects that an e-mail is the only thing present in the text. In this situation, we are interested in all the e-mails included in a large body of text.
If you were simply looking for a word, you would be able to use the \b matcher, which matches any word boundary (that can be the end of a word or the end of a sentence), so instead of the dollar sign, which we used before to denote the end of the string, we would place the boundary character to denote the end of a word. However, in our case it isn't quite good enough, as there are boundary characters that are valid e-mail characters; for example, the period character is valid. To get around this, we can use the boundary character in conjunction with a lookahead group and say we want it to end with a word boundary, but only if it is followed by a space or the end of a sentence/string. This will ensure we aren't cutting off a subdomain or a part of a domain, if there is some invalid information midway through the address.

Now, we aren't creating something that will try and parse e-mails no matter how they are entered; the point of creating validators and patterns is to force the user to enter something logical. That said, we assume that if the user wrote an e-mail address and then a period, he/she didn't enter an invalid address; rather, he/she entered an address and then ended a sentence (the period is not part of the address). In our code, we assume that to end an address, the user is either going to have a space after it, or some kind of punctuation, or he/she is ending the string/line. We no longer have to deal with lines because we converted them to HTML, but we do have to worry that our pattern doesn't pick up an HTML tag in the process. At the end of this, our pattern will look similar to this:

/\b[^\s<>@]+@[^\s<>@.]+\.[^\s<>@]+\b(?=\.?(?:\s|<|$))/g

We start off with a word boundary, then, we look for the pattern we had before. I added both the (>) greater-than and the (<) less-than characters to the group of disallowed characters, so that it will not pick up any HTML tags.
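As a quick sanity check (this snippet is ours, not part of the original form code), the assembled pattern can be run against a sample string with String.match; the backslash-escaped form below is our reconstruction of the pattern:

```javascript
// Reconstructed (backslash-escaped) form of the e-mail pattern built above
var emailPattern = /\b[^\s<>@]+@[^\s<>@.]+\.[^\s<>@]+\b(?=\.?(?:\s|<|$))/g;

// A sentence-ending period after an address is not captured as part of it
var sample = "Write to me@example.com. Or to admin@test.org when it breaks.";
console.log(sample.match(emailPattern)); // [ 'me@example.com', 'admin@test.org' ]
```

Note how the lookahead lets the first address end cleanly before the sentence-ending period.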
At the end of the pattern, you can see that we want to end on a word boundary, but only if it is followed by a space, an HTML tag, or the end of a string. The complete function, which does all the matching, is as follows:

function process_description() {
    var field = document.getElementById("description_field");
    var description = field.value;

    data.text_description = description;

    var line_pattern = /\n+/g;
    description = description.replace(line_pattern, function(match) {
        if (match == "\n") {
            return "<br />";
        } else {
            return "</p><p>";
        }
    });

    var email_pattern = /\b[^\s<>@]+@[^\s<>@.]+\.[^\s<>@]+\b(?=\.?(?:\s|<|$))/g;
    description = description.replace(email_pattern, function(match) {
        return "<a href='mailto:" + match + "'>" + match + "</a>";
    });

    data.html_description = "<p>" + description + "</p>";

    return true;
}

We can continue to add fields, but I think the point has been understood. You have a pattern that matches what you want, and with the extracted data, you are able to extract and manipulate the data into any format you may need.

Understanding the description Regex

Let's go back to the regular expression used to match e-mail addresses in the description entered by the user:

/\b[^\s<>@]+@[^\s<>@.]+\.[^\s<>@]+\b(?=\.?(?:\s|<|$))/g

This is a brief explanation of the Regex:

\b asserts its position at a (^\w|\w$|\W\w|\w\W) word boundary
[^\s<>@]+ matches a single character not present in this list; the + quantifier matches it between one and unlimited times:
    \s matches a [\r\n\t\f ] whitespace character
    <>@ matches a single character in the <>@ list (case-sensitive)
@ matches the @ character literally
[^\s<>@.]+ matches a single character not present in this list; the + quantifier matches it between one and unlimited times:
    \s matches a [\r\n\t\f ] whitespace character
    <>@. matches a single character in the <>@. list (case-sensitive)
\. matches the .
character literally
[^\s<>@]+ matches a single character not present in this list; the + quantifier matches it between one and unlimited times:
    \s matches a [\r\n\t\f ] whitespace character
    <>@ matches a single character in the <>@ list (case-sensitive)
\b asserts its position at a (^\w|\w$|\W\w|\w\W) word boundary
(?=\.?(?:\s|<|$)) Positive lookahead - asserts that the following Regex can be matched:
    \. matches the . character literally; the ? quantifier matches it between zero and one time
    (?:\s|<|$) is a non-capturing group:
        First alternative: \s matches a [\r\n\t\f ] whitespace character
        Second alternative: < matches the character < literally
        Third alternative: $ asserts its position at the end of the string
The g modifier: global match. Returns all matches of the regular expression, not only the first one.

Explaining a Markdown example

More examples of regular expressions can be seen with the popular Markdown syntax (refer to http://en.wikipedia.org/wiki/Markdown). This is a situation where a user is forced to write things in a custom format, although it's still a format which saves typing and is easier to understand. For example, to create a link in Markdown, you would type something similar to this:

[Click Me](http://gabrielmanricks.com)

This would then be converted to:

<a href="http://gabrielmanricks.com">Click Me</a>

Disregarding any validation on the URL itself, this can easily be achieved using this pattern:

/\[([^\]]*)\]\(([^(]*)\)/g

It looks a little complex, because both the square brackets and the parentheses are special characters that need to be escaped. Basically, what we are saying is that we want an open square bracket, anything up to the closing square bracket, then we want an open parenthesis, and again, anything until the closing parenthesis.

A good website to write markdown documents is http://dillinger.io/.
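Before wiring the pattern into a replace call, the capture groups can be inspected directly; the backslash-escaped form below is our reconstruction of the pattern:

```javascript
// Escaped form of the Markdown link pattern (without the g flag, so that
// match() returns the capture groups rather than all full matches)
var linkPattern = /\[([^\]]*)\]\(([^(]*)\)/;

var parts = "[Click Me](http://gabrielmanricks.com)".match(linkPattern);
console.log(parts[1]); // "Click Me"
console.log(parts[2]); // "http://gabrielmanricks.com"
```

parts[0] holds the entire match, while parts[1] and parts[2] hold the link text and the URL, in the order the groups appear in the pattern.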
Since we wrapped each section into its own capture group, we can write this function:

text.replace(/\[([^\]]*)\]\(([^(]*)\)/g, function(match, text, link) {
    return "<a href='" + link + "'>" + text + "</a>";
});

We haven't been using capture groups in our manipulation examples, but if you use them, then the first parameter to the callback is the entire match (similar to the ones we have been working with) and then all the individual groups are passed as subsequent parameters, in the order that they appear in the pattern.

Summary

In this article, we covered a couple of examples that showed us how to both validate user inputs and manipulate them. We also took a look at some common design patterns and saw how it's sometimes better to simplify the problem instead of using brute force in one pattern for the purpose of creating validations.

Resources for Article: Further resources on this subject: Getting Started with JSON [article] Function passing [article] YUI Test [article]
Predicting Hospital Readmission Expense Using Cascading

Packt
04 Jun 2015
10 min read
In this article by Michael Covert, author of the book Learning Cascading, we will look at a system that allows health care providers to create complex predictive models that can assess who is most at risk of such readmission, using Cascading. (For more resources related to this topic, see here.)

Overview

Hospital readmission is an event that health care providers are attempting to reduce, and it is the primary target of new regulations of the Affordable Care Act, passed by the US government. A readmission is defined as any reentry to a hospital 30 days or less from a prior discharge. The financial impact of this is that US Medicare and Medicaid will either not pay or will reduce the payment made to hospitals for expenses incurred. By the end of 2014, over 2,600 hospitals will have incurred these losses from a Medicare and Medicaid tab that is thought to exceed $24 billion annually. Hospitals are seeking ways to predict when a patient is susceptible to readmission so that actions can be taken to fully treat the patient before discharge. Many of them are using big data and machine learning-based predictive analytics. One such predictive engine is MedPredict from Analytics Inside, a company based in Westerville, Ohio. MedPredict is the predictive modeling component of the MedMiner suite of health care products. These products use Concurrent's Cascading products to perform nightly rescoring of inpatients using a highly customizable calculation known as LACE, which stands for the following:

Length of stay: This refers to the number of days a patient has been in hospital
Acute admissions through the emergency department: This refers to whether a patient has arrived through the ER
Comorbidities: A comorbidity refers to the presence of two or more individual conditions in a patient. Each condition is designated by a diagnosis code. Diagnosis codes can also indicate complications and severity of a condition.
In LACE, certain conditions are associated with the probability of readmission through statistical analysis. For instance, a diagnosis of AIDS, COPD, diabetes, and so on will each increase the probability of readmission. So, each diagnosis code is assigned points, with other points indicating the "seriousness" of the condition.

Diagnosis codes: These refer to the International Classification of Disease codes. Version 9 (ICD-9) and now version 10 (ICD-10) standards are available as well.
Emergency visits: This refers to the number of emergency room visits the patient has made in a particular window of time.

The LACE engine looks at a patient's history and computes a score that is a predictor of readmissions. In order to compute the comorbidity score, the Charlson Comorbidity Index (CCI) calculation is used. It is a statistical calculation that factors in the age and complexity of the patient's condition.

Using Cascading to control predictive modeling

The full data workflow to compute the probability of readmissions is as follows:

Read all hospital records and reformat them into patient records, diagnosis records, and discharge records.
Read all data related to patient diagnosis and diagnosis records, that is, ICD-9/10, date of diagnosis, complications, and so on.
Read all tracked diagnosis records and join them with patient data to produce a diagnosis (comorbidity) score by summing up comorbidity "points".
Read all data related to patient admissions, that is, records associated with admission and discharge, length of stay, hospital, admittance location, stay type, and so on.
Read the patient profile record, that is, age, race, gender, ethnicity, eye color, body mass indicator, and so on.
Compute all intermediate scores for age, emergency visits, and comorbidities.
Calculate the LACE score (refer to Figure 2). Assign a date and time to it.
Take all the patient information, as mentioned in the preceding points, and run it through MedPredict to produce the following metrics:

Expected length of stay
Expected expense
Expected outcome
Probability of readmission

Figure 1 – The data workflow

The Cascading LACE engine

The calculational aspects of computing LACE scores make them ideal for Cascading as a series of reusable subassemblies. Firstly, the extraction, transformation, and loading (ETL) of patient data is complex and costly. Secondly, the calculations are data-intensive. The CCI alone has to examine a patient's medical history and must find all matching diagnosis codes (such as ICD-9 or ICD-10) to assign a score. This score must be augmented by the patient's age, and lastly, a patient's inpatient discharge records must be examined for admittance to the ER as well as emergency room visits. Also, many hospitals desire to customize these calculations. The LACE engine supports and facilitates this, since scores are adjustable at the diagnosis code level, and MedPredict automatically produces metrics about how significant an individual feature is to the resulting score. Medical data is quite complex too. For instance, the particular diagnosis codes that represent cancer are many, and their meanings are quite nuanced. In some cases, metastasis (spreading of cancer to other locations in the body) may have occurred, and this is treated as a more severe situation. In other situations, measured values may be "bucketed"; this implies that we track the number of emergency room visits over 1 year, 6 months, 90 days, and 30 days. The Cascading LACE engine performs these calculations easily. It is customized through a set of hospital-supplied parameters, and it has the capability to perform full calculations nightly due to its usage of Hadoop. Using this capability, a patient's record can track the full history of the LACE index over time.
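As an illustration of the kind of calculation described above, here is a minimal LACE-style scoring function in JavaScript. The point values, caps, and comorbidity table are hypothetical stand-ins, not MedPredict's actual parameters, which hospitals tune per deployment:

```javascript
// Hypothetical comorbidity point table (ICD-9 code -> points); real tables
// are far larger and hospital-customized
var comorbidityPoints = { "250": 1, "428": 2, "496": 2 };

function laceScore(patient) {
    // L: length of stay, bucketed and capped (illustrative buckets)
    var l = patient.lengthOfStayDays >= 14 ? 7
          : patient.lengthOfStayDays >= 7 ? 5
          : Math.min(patient.lengthOfStayDays, 4);
    // A: acute admission through the emergency department
    var a = patient.admittedThroughER ? 3 : 0;
    // C: summed comorbidity points, capped at 5
    var c = Math.min(5, patient.diagnosisCodes.reduce(function (sum, code) {
        return sum + (comorbidityPoints[code] || 0);
    }, 0));
    // E: emergency room visits in a recent window, capped at 4
    var e = Math.min(patient.erVisitsLast6Months, 4);
    return l + a + c + e;
}

var score = laceScore({
    lengthOfStayDays: 5,            // -> 4 points
    admittedThroughER: true,        // -> 3 points
    diagnosisCodes: ["428", "250"], // -> 3 points
    erVisitsLast6Months: 2          // -> 2 points
});
console.log(score); // 12
```

A nightly rescoring job would apply a function like this to every inpatient record and append the dated score to the patient's history.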
Additionally, different sets of LACE indices can be computed simultaneously; maybe one used for diabetes, another for Chronic Obstructive Pulmonary Disorder (COPD), and so on.

Figure 2 – The LACE subassembly

MedPredict tracking

The LACE engine metrics feed into MedPredict along with many other variables cited previously. These records are rescored nightly and the patient history is updated. This patient history is then used to analyze trends and generate alerts when the patient is showing an increased likelihood of variance from the desired metric values.

What Cascading does for us

We chose Cascading to help reduce the complexity of our development efforts. MapReduce provided us with the scalability that we desired, but we found that we were developing massive amounts of code to do so. Reusability was difficult, and the Java code library was becoming large. By shifting to Cascading, we found that we could encapsulate our code better and achieve significantly greater reusability. Additionally, we reduced complexity as well. The Cascading API provides simplification and understandability, which accelerates our development velocity metrics and also reduces bugs and maintenance cycles. We allow Cascading to control the end-to-end workflow of these nightly calculations. It handles preprocessing and formatting of data. Then, it handles running these calculations in parallel, allowing high-speed hash joins to be performed, and also for each leg of the calculation to be split into a parallel pipe. Next, all these calculations are merged and the final score is produced. The last step is to analyze the patient trends and generate alerts where potential problems are likely to occur. Cascading has allowed us to produce a reusable assembly that is highly parameterized, thereby allowing hospitals to customize their usage.
Not only can thresholds, scores, and bucket sizes be varied, but, if desired, additional information could be included for things such as medical procedures performed on the patient. The local mode of Cascading allows for easy testing, and it also provides a scaled-down version that can be run against a small number of patients. However, by using Cascading in the Hadoop mode, massive scalability can be achieved against very large patient populations and ICD-9/10 code sets. Concurrent also provides an excellent framework for predictive modeling using machine learning through its Pattern component. MedPredict uses this to integrate its predictive engine, which is written using Cascading, MapReduce, and Mahout. Pattern provides an interface for the integration of other external analysis products through the exchange of Predictive Model Markup Language (PMML), an XML dialect that allows many of the MedPredict proprietary machine learning algorithms to be directly incorporated into the full Cascading LACE workflow. MedPredict then produces a variety of predictive metrics in a single pass of the data. The LACE scores (current and historical trends) are used as features for these predictions. Additionally, Concurrent provides a product called Driven that greatly reduces the development cycle time for such large, complex applications. Their Lingual product provides seamless integration with relational databases, which is also key to enterprise integration.

Results

Numerous studies have now been performed using LACE risk estimates. Many hospitals have shown the ability to reduce readmission rates by 5-10 percent due to early intervention and specific guidance given to a patient as a result of an elevated LACE score. Other studies are examining the efficacy of additional metrics, and of segmentation of the patients into better identifying groups, such as heart failure, cancer, diabetes, and so on.
Additional effort is being put in to study the ability to modify the values of the comorbidity scores, taking into account combinations and complications. In some cases, even more dramatic improvements have taken place using these techniques. For up-to-date information, search for LACE readmissions, which will provide current information about implementations and results.

Analytics Inside LLC

Analytics Inside is based in Westerville, Ohio. It was founded in 2005 and specializes in advanced analytical solutions and services. Analytics Inside produces the RelMiner family of relationship mining systems. These systems are based on machine learning, big data, graph theories, data visualizations, and Natural Language Processing (NLP). For further information, visit our website at http://www.AnalyticsInside.us, or e-mail us at info@AnalyticsInside.us.

MedMiner Advanced Analytics for Health Care is an integrated software system designed to help an organization or patient care team in the following ways:

Predicting the outcomes of patient cases and tracking these predictions over time
Generating alerts based on patient case trends that will help direct remediation
Complying better with ARRA value-based purchasing and meaningful use guidelines
Providing management dashboards that can be used to set guidelines and track performance
Tracking performance of drug usage, interactions, potentials for drug diversion, and pharmaceutical fraud
Extracting medical information contained within text documents
Treating data security as a key design point: PHI can be hidden through external linkages, so data exchange is not required; if PHI is required, it is kept safe through heavy encryption, virus scanning, and data isolation
Using both cloud-based and on-premise capabilities to meet client needs

Concurrent Inc.

Concurrent Inc. is the leader in big data application infrastructure, delivering products that help enterprises create, deploy, run, and manage data applications at scale.
The company's flagship enterprise solution, Driven, was designed to accelerate the development and management of enterprise data applications. Concurrent is the team behind Cascading, the most widely deployed technology for data applications, with more than 175,000 user downloads a month. Used by thousands of businesses, including eBay, Etsy, The Climate Corporation, and Twitter, Cascading is the de facto standard in open source application infrastructure technology. Concurrent is headquartered in San Francisco and can be found online at http://concurrentinc.com.

Summary

Hospital readmission is an event that health care providers are attempting to reduce, and it is a primary target of new regulation from the Affordable Care Act, passed by the US government. This article described a system that allows health care providers to create complex predictive models that can assess who is most at risk of such readmission, using Cascading.

Resources for Article: Further resources on this subject: Hadoop Monitoring and its aspects [article] Introduction to Hadoop [article] YARN and Hadoop [article]
Preparing Optimizations

Packt
04 Jun 2015
11 min read
In this article by Mayur Pandey and Suyog Sarda, authors of LLVM Cookbook, we will look into the following recipes:

Various levels of optimization
Writing your own LLVM pass
Running your own pass with the opt tool
Using another pass in a new pass

(For more resources related to this topic, see here.)

Once the source code transformation completes, the output is in the LLVM IR form. This IR serves as a common platform for converting into assembly code, depending on the backend. However, before converting into assembly code, the IR can be optimized to produce more effective code. The IR is in the SSA form, where every new assignment to a variable is a new variable itself: a classic case of an SSA representation. In the LLVM infrastructure, a pass serves the purpose of optimizing LLVM IR. A pass runs over the LLVM IR, processes the IR, analyzes it, identifies the optimization opportunities, and modifies the IR to produce optimized code. The command-line interface opt is used to run optimization passes on LLVM IR.

Various levels of optimization

There are various levels of optimization, starting at 0 and going up to 3 (there is also s for space optimization). The code gets more and more optimized as the optimization level increases. Let's try to explore the various optimization levels.

Getting ready...

Various optimization levels can be understood by running the opt command-line interface on LLVM IR. For this, an example C program can first be converted to IR using the Clang frontend.
Open an example.c file and write the following code in it:

$ vi example.c

int main(int argc, char **argv) {
  int i, j, k, t = 0;
  for(i = 0; i < 10; i++) {
    for(j = 0; j < 10; j++) {
      for(k = 0; k < 10; k++) {
        t++;
      }
    }
    for(j = 0; j < 10; j++) {
      t++;
    }
  }
  for(i = 0; i < 20; i++) {
    for(j = 0; j < 20; j++) {
      t++;
    }
    for(j = 0; j < 20; j++) {
      t++;
    }
  }
  return t;
}

Now convert this into LLVM IR using the clang command, as shown here:

$ clang -S -O0 -emit-llvm example.c

A new file, example.ll, will be generated, containing LLVM IR. This file will be used to demonstrate the various optimization levels available.

How to do it…

Do the following steps:

The opt command-line tool can be run on the IR-generated example.ll file:

$ opt -O0 -S example.ll

The -O0 syntax specifies the least optimization level. Similarly, you can run other optimization levels:

$ opt -O1 -S example.ll
$ opt -O2 -S example.ll
$ opt -O3 -S example.ll

How it works…

The opt command-line interface takes the example.ll file as the input and runs the series of passes specified in each optimization level. It can repeat some passes in the same optimization level. To see which passes are being used in each optimization level, you have to add the --debug-pass=Structure command-line option to the previous opt commands.

See also

To know more about the various other options that can be used with the opt tool, refer to http://llvm.org/docs/CommandGuide/opt.html

Writing your own LLVM pass

All LLVM passes are subclasses of the Pass class, and they implement functionality by overriding the virtual methods inherited from Pass. LLVM applies a chain of analyses and transformations on the target program. A pass is an instance of the LLVM Pass class.

Getting ready

Let's see how to write a pass. Let's name the pass function block counter; once done, it will simply display the name of the function and count the basic blocks in that function when run.
First, a Makefile needs to be written for the pass. Follow the given steps to write a Makefile:

Open a Makefile in the llvm lib/Transform folder:

$ vi Makefile

Specify the path to the LLVM root folder and the library name, and make this pass a loadable module by specifying it in the Makefile, as follows:

LEVEL = ../../..
LIBRARYNAME = FuncBlockCount
LOADABLE_MODULE = 1
include $(LEVEL)/Makefile.common

This Makefile specifies that all the .cpp files in the current directory are to be compiled and linked together in a shared object.

How to do it…

Do the following steps:

Create a new .cpp file called FuncBlockCount.cpp:

$ vi FuncBlockCount.cpp

In this file, include some header files from LLVM:

#include "llvm/Pass.h"
#include "llvm/IR/Function.h"
#include "llvm/Support/raw_ostream.h"

Include the llvm namespace to enable access to LLVM functions:

using namespace llvm;

Then start with an anonymous namespace:

namespace {

Next, declare the pass:

struct FuncBlockCount : public FunctionPass {

Then declare the pass identifier, which will be used by LLVM to identify the pass:

static char ID;
FuncBlockCount() : FunctionPass(ID) {}

This step is one of the most important steps in writing a pass: writing a run function. Since this pass inherits FunctionPass and runs on a function, a runOnFunction is defined to be run on a function:

bool runOnFunction(Function &F) override {
    errs() << "Function " << F.getName() << '\n';
    return false;
}
};
}

This function prints the name of the function that is being processed.
The next step is to initialize the pass ID:

char FuncBlockCount::ID = 0;

Finally, the pass needs to be registered, with a command-line argument and a name:

static RegisterPass<FuncBlockCount> X("funcblockcount", "Function Block Count", false, false);

Putting everything together, the entire code looks like this:

#include "llvm/Pass.h"
#include "llvm/IR/Function.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

namespace {
struct FuncBlockCount : public FunctionPass {
  static char ID;
  FuncBlockCount() : FunctionPass(ID) {}

  bool runOnFunction(Function &F) override {
    errs() << "Function " << F.getName() << '\n';
    return false;
  }
};
}

char FuncBlockCount::ID = 0;
static RegisterPass<FuncBlockCount> X("funcblockcount", "Function Block Count", false, false);

How it works

A simple gmake command compiles the file, so a new file, FuncBlockCount.so, is generated at the LLVM root directory. This shared object file can be dynamically loaded into the opt tool to run it on a piece of LLVM IR code. How to load and run it will be demonstrated in the next section.

See also

To know more about how a pass can be built from scratch, visit http://llvm.org/docs/WritingAnLLVMPass.html

Running your own pass with the opt tool

The pass written in the previous recipe, Writing your own LLVM pass, is ready to be run on the LLVM IR. This pass needs to be loaded dynamically for the opt tool to recognize and execute it.

How to do it…

Do the following steps:

Write the C test code in the sample.c file, which we will convert into an .ll file in the next step:

$ vi sample.c

int foo(int n, int m) {
  int sum = 0;
  int c0;
  for (c0 = n; c0 > 0; c0--) {
    int c1 = m;
    for (; c1 > 0; c1--) {
      sum += c0 > c1 ? 1 : 0;
    }
  }
  return sum;
}

Convert the C test code into LLVM IR using the following command:

$ clang -O0 -S -emit-llvm sample.c -o sample.ll

This will generate a sample.ll file.
Run the new pass with the opt tool, as follows:

$ opt -load (path_to_.so_file)/FuncBlockCount.so -funcblockcount sample.ll

The output will look something like this:

Function foo

How it works…

As seen in the preceding code, the shared object loads dynamically into the opt command-line tool and runs the pass. It goes over the function and displays its name. It does not modify the IR. Further enhancement of the new pass is demonstrated in the next recipe.

See also

To know more about the various types of the Pass class, visit http://llvm.org/docs/WritingAnLLVMPass.html#pass-classes-and-requirements

Using another pass in a new pass

A pass may require another pass to get some analysis data, heuristics, or any such information to decide on a further course of action. The pass may just require some analysis such as memory dependencies, or it may require the altered IR as well. The new pass that you just saw simply prints the name of the function. Let's see how to enhance it to count the basic blocks in a loop, which also demonstrates how to use the results of another pass.

Getting ready

The code used in the previous recipe remains the same. Some modifications are required, however, to enhance it, as demonstrated in the next section, so that it counts the number of basic blocks in the IR.

How to do it…

The getAnalysis function is used to specify which other pass will be used:

Since the new pass will be counting the number of basic blocks, it requires loop information. This is specified using the getAnalysis loop function:

LoopInfo *LI = &getAnalysis<LoopInfoWrapperPass>().getLoopInfo();

This will call the LoopInfo pass to get information on the loop. Iterating through this object gives the basic block information:

unsigned num_Blocks = 0;
Loop::block_iterator bb;
for(bb = L->block_begin(); bb != L->block_end(); ++bb)
  num_Blocks++;
errs() << "Loop level " << nest << " has " << num_Blocks << " blocks\n";

This will go over the loop to count the basic blocks inside it.
However, it counts only the basic blocks in the outermost loop. To get information on the innermost loop, recursive calling of the getSubLoops function will help. Putting the logic in a separate function and calling it recursively makes more sense: void countBlocksInLoop(Loop *L, unsigned nest) { unsigned num_Blocks = 0; Loop::block_iterator bb; for(bb = L->block_begin(); bb != L->block_end();++bb)    num_Blocks++; errs() << "Loop level " << nest << " has " << num_Blocks << " blocksn"; std::vector<Loop*> subLoops = L->getSubLoops(); Loop::iterator j, f; for (j = subLoops.begin(), f = subLoops.end(); j != f; ++j)    countBlocksInLoop(*j, nest + 1); } virtual bool runOnFunction(Function &F) { LoopInfo *LI = &getAnalysis<LoopInfoWrapperPass>().getLoopInfo(); errs() << "Function " << F.getName() + "n"; for (Loop *L : *LI)    countBlocksInLoop(L, 0); return false; } How it works… The newly modified pass now needs to run on a sample program. Follow the given steps to modify and run the sample program: Open the sample.c file and replace its content with the following program: int main(int argc, char **argv) { int i, j, k, t = 0; for(i = 0; i < 10; i++) {    for(j = 0; j < 10; j++) {      for(k = 0; k < 10; k++) {        t++;      }    }    for(j = 0; j < 10; j++) {      t++;    } } for(i = 0; i < 20; i++) {    for(j = 0; j < 20; j++) {      t++;    }    for(j = 0; j < 20; j++) {      t++;    } } return t; } Convert it into a .ll file using Clang: $ clang –O0 –S –emit-llvm sample.c –o sample.ll Run the new pass on the previous sample program: $ opt -load (path_to_.so_file)/FuncBlockCount.so - funcblockcount sample.ll The output will look something like this: Function main Loop level 0 has 11 blocks Loop level 1 has 3 blocks Loop level 1 has 3 blocks Loop level 0 has 15 blocks Loop level 1 has 7 blocks Loop level 2 has 3 blocks Loop level 1 has 3 blocks There's more… The LLVM's pass manager provides a debug pass option that gives us the chance to see which passes interact 
with our analyses and optimizations, as follows:

$ opt -load (path_to_.so_file)/FuncBlockCount.so -funcblockcount sample.ll -disable-output -debug-pass=Structure

Summary

In this article, you explored the various optimization levels and the optimization techniques that kick in at each level. We also saw a step-by-step approach to writing our own LLVM pass.

Resources for Article:

Further resources on this subject: Integrating a D3.js visualization into a simple AngularJS application [article] Getting Up and Running with Cassandra [article] Cassandra Architecture [article]
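As a closing aside to this recipe: the recursive descent that countBlocksInLoop performs is easy to study outside LLVM. The sketch below models a loop nest with a hypothetical ToyLoop class (a stand-in for LLVM's Loop; the block counts are taken from the second loop nest in the sample output above) and reproduces the level numbering:

```python
# Illustrative stand-in for LLVM's Loop: each ToyLoop knows its basic-block
# count and its directly nested sub-loops.
class ToyLoop:
    def __init__(self, num_blocks, sub_loops=None):
        self.num_blocks = num_blocks
        self.sub_loops = sub_loops or []

def count_blocks_in_loop(loop, nest, out):
    # Mirrors countBlocksInLoop: report this nesting level, then recurse.
    out.append(f"Loop level {nest} has {loop.num_blocks} blocks")
    for sub in loop.sub_loops:
        count_blocks_in_loop(sub, nest + 1, out)

# A nest shaped like the second 'for' block of sample.c: an outer loop
# containing a doubly nested loop and a sibling inner loop.
outer = ToyLoop(15, [ToyLoop(7, [ToyLoop(3)]), ToyLoop(3)])
lines = []
count_blocks_in_loop(outer, 0, lines)
print("\n".join(lines))
```

The real pass obtains num_blocks by iterating from Loop::block_begin() to Loop::block_end() and the children from getSubLoops(); here they are plain fields, but the traversal order, and hence the level numbering in the output, is the same.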

Upgrading VMware Virtual Infrastructure Setups

Packt
04 Jun 2015
13 min read
In this article by Kunal Kumar and Christian Stankowic, authors of the book VMware vSphere Essentials, you will learn how to correctly upgrade VMware virtual infrastructure setups. (For more resources related to this topic, see here.)

This article will cover the following topics:

Prerequisites and preparations
Upgrading vCenter Server
Upgrading ESXi hosts
Additional steps after upgrading

An example scenario

Let's start with a realistic scenario that is often found in data centers these days. I assume that your virtual infrastructure consists of components such as:

Multiple VMware ESXi hosts
Shared storage (NFS or Fibre-channel)
VMware vCenter Server and vSphere Update Manager

In this example, a cluster consisting of two ESXi hosts (esxi1 and esxi2) is running VMware ESXi 5.5. On a virtual machine (vc1), a Microsoft Windows Server system is running vCenter Server and vSphere Update Manager (vUM) 5.5. This article is written as a step-by-step guide to upgrading these particular vSphere components to the most recent version, which is 6.0.

Example scenario consisting of two ESXi hosts with shared storage and vCenter Server

Prerequisites and preparations

Before we start the upgrade, we need to fulfill the following prerequisites:

Ensure that the hardware vendor supports the new ESXi version
Guarantee that VMware supports the new ESXi version on the hardware in use
Create a backup of the ESXi images and vCenter Server

First of all, we need to refer to our hardware vendor's support matrix to ensure that our physical hosts running VMware ESXi are supported in the new release. Hardware vendors evaluate their systems before approving upgrades to customers. As an example, Dell offers a comprehensive list for their PowerEdge servers at http://topics-cdn.dell.com/pdf/vmware-esxi-6.x_Reference%20Guide2_en-us.pdf.
Here are some additional links for alternative hardware vendors:

Hewlett-Packard: http://h17007.www1.hp.com/us/en/enterprise/servers/supportmatrix/vmware.aspx
IBM: http://www-03.ibm.com/systems/info/x86servers/serverproven/compat/us/nos/vmware.html
Cisco UCS: http://www.cisco.com/web/techdoc/ucs/interoperability/matrix/matrix.html

When using Fibre-channel-based storage systems, you might also need to check that vendor's support matrix. Please check your vendor's website or contact support for this information. VMware also offers a comprehensive list of tested hardware setups at http://www.vmware.com/resources/compatibility/pdf/vi_systems_guide.pdf. In their Compatibility Guide portal, VMware enables customers to browse for particular server systems—this information might be more recent than the aforementioned PDF file.

Creating a backup of ESXi

Before upgrading our ESXi hosts, we also need to make sure that we have a valid backup. In case things go wrong, we might need this backup to restore the previous ESXi version. For creating a backup of the hard disk ESXi is installed on, there are plenty of tools in the market that implement image-based backups. One possible solution, which is free, is Clonezilla. Clonezilla is a Linux-based live medium that can easily create backup images of hard disks. To create a backup using Clonezilla, proceed with the following steps:

Download the Clonezilla ISO image from their website. Make sure you select the AMD64 architecture and the ISO file format.
Enable maintenance mode for the particular ESXi host. Make sure you migrate virtual machines to alternative nodes or power them off.
Connect the ISO file to the ESXi host and boot from CD. Also, connect a USB drive to the host. This drive will be used to store the backup.
Boot from CD and select Clonezilla live. Wait until the boot process completes.
When prompted, select your keyboard layout (for example, en_US.utf8) and select Don't touch keymap.
In the Start Clonezilla menu, select Start_Clonezilla and device-image. This mode creates an image of the medium ESXi is running on and stores it in the USB storage.
Select local_dev and choose the USB storage connected to the host from the list in the next step.
Select a folder for storing the backup (optional).
Select Beginner and savedisk to store the entire disk ESXi resides on as an image.
Enter a name for the backup. Select the hard disk containing the ESXi installation and proceed. You can also specify whether Clonezilla should check the image after creating it (highly recommended).
Afterwards, confirm the backup process. The backup job will start immediately.
Once the backup completes, select reboot from the menu to reboot the host.

A running backup job in Clonezilla

To restore a backup using Clonezilla, perform the following steps after booting the Clonezilla media:

Complete steps 1 to 8 from the previous guide.
Select Beginner and restoredisk to restore the entire disk.
Select the image from the USB storage and the hard drive the image should be restored on.
Acknowledge the restore process.
Once the restoration completes, select reboot from the menu to reboot the host.

For the system running vCenter Server, we can simply create a VM snapshot, or use Clonezilla if a physical machine is used instead.

The upgrade path

It is very important to execute the particular upgrade tasks in the following order:

Upgrade VMware vCenter Server
Upgrade the particular ESXi hosts
Reformat or upgrade the VMFS data stores (if applicable)
Upgrade additional components, such as distributed virtual switches or additional appliances

The first step is to upgrade vCenter Server. This is necessary to ensure that we are able to manage our ESXi hosts after upgrading them. Newer vCenter Server versions are backward compatible with numerous ESXi versions.
To double-check this, we can look up the particular version support by browsing VMware's Product Interoperability Matrix on their website. Click on Solution Interoperability, choose VMware vCenter Server from the drop-down menu, and select the version you want to upgrade to. In our example, we will choose the most recent release, 6.0, and select VMware ESX/ESXi from the Add Platform/Solution drop-down menu.

VMware Product Interoperability Matrix for vCenter Server and ESXi

vCenter Server 6.0 supports management of VMware ESXi 5.0 and higher. We need to ensure the same version support for any other products in use, such as these:

VMware vSphere Update Manager
VMware vCenter Operations (if applicable)
VMware vSphere Data Protection

In other words, we need to upgrade all additional vSphere and vCenter Server components to ensure full functionality.

Upgrading vCenter Server

Upgrading vCenter Server is the most crucial step, as this is our central management platform. The upgrade process varies according to the chosen architecture. Upgrading Windows-based vCenter Server installations is quite easy, as the installer supports in-place upgrades. When using the vCenter Server Appliance (vCSA), there is no in-place upgrade; it is necessary to deploy a new vCSA and import the settings from the old installation. This process varies between the particular vCSA versions. For upgrading from vCSA 5.0 or 5.1 to 5.5, VMware offers a comprehensive article at http://kb.vmware.com/kb/2058441.

To upgrade vCenter Server 5.x on Windows to 6.0 using the Easy Install method, proceed with the following steps:

Mount the vCenter Server 6.x installation media (VMware-VIMSetup-all-6.0.0-xxx.iso) on the server running vCenter Server.
Wait until the installation wizard starts; if it doesn't start, double-click on the CD/DVD icon in Windows Explorer.
Select vCenter Server for Windows and click on Install to start the installation utility.
Accept the End-User License Agreement (EULA).
Enter the current vCenter Single Sign-On password and proceed with the next step.
The installation utility begins to execute pre-upgrade checks; this might take some time. If you're running vCenter Server along with Microsoft SQL Server Express Edition, the database will be migrated to VMware vPostgres.
Review and change (if necessary) the network ports of your vCenter Server installation.
If needed, change the directories for vCenter Server and the embedded Platform Services Controller (PSC).
Carefully review the upgrade information displayed in the wizard. Also verify that you have created a backup of your system and the database. Then click on Upgrade to start the upgrade.

After the upgrade, vSphere Web Client can be used to connect to the upgraded vCenter Server system. Also note that the Microsoft SQL Server Express Edition database is not used anymore.

Upgrading ESXi hosts

Upgrading ESXi hosts can be done using two methods:

Using the installation media from the VMware website
vSphere Update Manager

If you need to upgrade a large number of ESXi hosts, I recommend that you use vSphere Update Manager to save time, as it can automate the particular steps. For smaller landscapes, using the installation media is easier. For using vUM to upgrade ESXi hosts, VMware offers a guide on their knowledge base at http://kb.vmware.com/kb/1019545.

In order to upgrade an ESXi host using the installation media, perform the following steps:

First of all, enable maintenance mode for the particular ESXi host. Make sure you migrate the virtual machines to alternative nodes or power them off.
Connect the installation media to the ESXi host and boot from CD.
Once the setup utility becomes available, press Enter to start the installation wizard.
Accept the End-User License Agreement (EULA) by pressing F11.
Select the disk containing the current ESXi installation.
In the ESXi found dialog, select Upgrade.
Review the installation information and press F11 to start the upgrade.
After the installation completes, press Enter to reboot the system.

After the system has rebooted, it will automatically reconnect to vCenter Server. Select the particular ESXi host to see whether the version has changed. In this example, the ESXi host has been successfully upgraded to version 6.0:

Version information of an updated ESXi host running release 6.0

Repeat all of these steps for all the remaining ESXi hosts. Note that running an ESXi cluster with mixed versions should only be a temporary solution. It is not recommended to mix various ESXi releases in production usage, as the various features of ESXi might not perform as expected in mixed clusters.

Additional steps

After upgrading vCenter Server and our ESXi hosts, there are additional steps that can be done:

Reformatting or upgrading VMFS data stores
Upgrading distributed virtual switches
Upgrading virtual machines' hardware versions

Upgrading VMFS data stores

VMware's VMFS (Virtual Machine File System) is the most used filesystem for shared storage. It can be used along with local storage, iSCSI, or Fibre-channel storage. Particular ESX(i) releases support particular versions of VMFS. Let's take a look at the major differences:

VMFS 2: supported by ESX 2.x and ESXi 3.x/4.x (read-only); block sizes of 1, 8, 64, or 256 MB; maximum file sizes of 456 MB (1 MB block size), 2.5 TB (8 MB), 28.5 TB (64 MB), and 64 TB (256 MB); ca. 256 files per volume (no directories supported)
VMFS 3: supported by ESX(i) 3.x and higher; block sizes of 1, 2, 4, or 8 MB; maximum file sizes of 256 GB (1 MB block size), 512 GB (2 MB), 1 TB (4 MB), and 2 TB (8 MB); ca. 37,720 files per volume
VMFS 5: supported by ESXi 5.x and higher; fixed 1 MB block size; maximum file size of 62 TB; ca. 130,690 files per volume

When migrating from an ESXi version such as 4.x or older, it is possible to upgrade VMFS data stores to version 5. VMFS 2 cannot be upgraded to VMFS 5; it first needs to be upgraded to VMFS 3.
To enable the upgrade, a VMFS 2 volume must not have a block size of more than 8 MB, as VMFS 3 only supports block sizes up to 8 MB. In comparison with older VMFS versions, VMFS 5 supports larger file sizes and more files per volume. I highly recommend that you reformat VMFS data stores instead of upgrading them, as the upgrade does not change the filesystem's block size. Because of this limitation, you won't benefit from all the new VMFS 5 features after an upgrade.

To upgrade a VMFS 3 volume to VMFS 5, perform these steps:

Log in to vSphere Web Client.
Go to the Storage pane.
Click on the data store to upgrade and go to Settings under the Manage tab.
Click on Upgrade to VMFS5. Then click on OK to start the upgrade.

VMware vNetwork Distributed Switch

When using vNetwork Distributed Switches (also often called dvSwitches), it is recommended to upgrade to the latest version. In comparison with vNetwork Standard Switches (also called vSwitches), dvSwitches are created at the vCenter Server level and replicated to all subscribed ESXi hosts. When creating a dvSwitch, the administrator can choose between various dvSwitch versions. After upgrading vCenter Server and the ESXi hosts, additional features can be unlocked by upgrading the dvSwitch.
Let's take a look at some commonly used dvSwitch versions. All of them share a set of common features: Network I/O Control, load-based teaming, traffic shaping, VM port blocking, PVLANs (private VLANs), network vMotion, and port policies.

vDS 5.0: compatible with ESXi 5.0 and higher; additionally offers network resource pools, NetFlow, and port mirroring
vDS 5.1: compatible with ESXi 5.1 and higher; the vDS 5.0 features plus management network rollback, network health checks, enhanced port mirroring, and LACP (Link Aggregation Control Protocol)
vDS 5.5: compatible with ESXi 5.5 and higher; the vDS 5.1 features plus traffic filtering and enhanced LACP functionality
vDS 6.0: compatible with ESXi 6.0; the vDS 5.5 features plus multicast snooping and Network I/O Control version 3 (bandwidth guarantees)

It is also possible to keep using the old version, as vCenter Server is backward compatible with numerous dvSwitch versions. Upgrading a dvSwitch is a task that cannot be undone. During the upgrade, it is possible that virtual machines will lose their network connectivity for a few seconds. After the upgrade, older ESXi hosts will no longer be able to participate in the distributed switch setup.

To upgrade a dvSwitch, perform the following steps:

Log in to vSphere Web Client.
Go to the Networking pane and select the dvSwitch to upgrade.
Right-click on the dvSwitch, navigate to Upgrade | Upgrade Distributed Switch, select the target version, and complete the wizard.

After upgrading the dvSwitch, you will notice that the version has changed:

Version information of a dvSwitch running VDS 6.0

Virtual machine hardware version

Every virtual machine is created with a specified virtual machine hardware version (also called VMHW or vHW). A vHW version defines a set of particular limitations and features, such as controller types or network cards. To benefit from new virtual machine features, it is sufficient to upgrade the vHW version. ESXi hosts support a range of vHW versions, but it is always advisable to use the most recent one. Once a vHW version is upgraded, the particular virtual machine cannot be started on older ESXi versions that don't support that vHW version.
Let's take a deeper look at some popular vHW versions. The values below are listed in ascending order from vSphere 4.1 through vSphere 6.0; where a value spans several releases, it is listed only once:

Maximum vHW version: 7 (vSphere 4.1), 9 (vSphere 5.1), 10 (vSphere 5.5), 11 (vSphere 6.0)
Virtual CPUs: 8, then 64, up to 128
Virtual RAM: 255 GB, then 1 TB, up to 4 TB
vDisk size: 2 TB, later 62 TB
SCSI adapters/targets: 4/60 across all versions
SATA adapters/targets: not supported at first, later 4/30
Parallel/serial ports: 3/4, later 3/32
USB controllers/devices per VM: 1/20 (USB 1.x + 2.x), later 1/20 (USB 1.x, 2.x + 3.x)

The upgrade cannot be undone. Also, it might be necessary to update VMware Tools and the drivers of the operating system running in the virtual machine.

Summary

In this article we learnt how to correctly upgrade VMware virtual infrastructure setups. If you want to know more about VMware vSphere and virtual infrastructure setups, go ahead and get your copy of Packt Publishing's book VMware vSphere Essentials.

Resources for Article:

Further resources on this subject: Networking [article] The Design Documentation [article] VMware View 5 Desktop Virtualization [article]

Data Analysis Using R

Packt
04 Jun 2015
17 min read
In this article by Viswa Viswanathan and Shanthi Viswanathan, the authors of the book R Data Analysis Cookbook, we discover how R can be used in various ways such as comparison, classification, applying different functions, and so on. We will cover the following recipes: Creating charts that facilitate comparisons Building, plotting, and evaluating – classification trees Using time series objects Applying functions to subsets of a vector (For more resources related to this topic, see here.) Creating charts that facilitate comparisons In large datasets, we often gain good insights by examining how different segments behave. The similarities and differences can reveal interesting patterns. This recipe shows how to create graphs that enable such comparisons. Getting ready If you have not already done so, download the code files and save the daily-bike-rentals.csv file in your R working directory. Read the data into R using the following command: > bike <- read.csv("daily-bike-rentals.csv") > bike$season <- factor(bike$season, levels = c(1,2,3,4),   labels = c("Spring", "Summer", "Fall", "Winter")) > attach(bike) How to do it... We base this recipe on the task of generating histograms to facilitate the comparison of bike rentals by season. 
Using base plotting system We first look at how to generate histograms of the count of daily bike rentals by season using R's base plotting system: Set up a 2 X 2 grid for plotting histograms for the four seasons: > par(mfrow = c(2,2)) Extract data for the seasons: > spring <- subset(bike, season == "Spring")$cnt > summer <- subset(bike, season == "Summer")$cnt > fall <- subset(bike, season == "Fall")$cnt > winter <- subset(bike, season == "Winter")$cnt Plot the histogram and density for each season: > hist(spring, prob=TRUE,   xlab = "Spring daily rentals", main = "") > lines(density(spring)) >  > hist(summer, prob=TRUE,   xlab = "Summer daily rentals", main = "") > lines(density(summer)) >  > hist(fall, prob=TRUE,   xlab = "Fall daily rentals", main = "") > lines(density(fall)) >  > hist(winter, prob=TRUE,   xlab = "Winter daily rentals", main = "") > lines(density(winter)) You get the following output that facilitates comparisons across the seasons: Using ggplot2 We can achieve much of the preceding results in a single command: > qplot(cnt, data = bike) + facet_wrap(~ season, nrow=2) +   geom_histogram(fill = "blue") You can also combine all four into a single histogram and show the seasonal differences through coloring: > qplot(cnt, data = bike, fill = season) How it works... When you plot a single variable with qplot, you get a histogram by default. Adding facet enables you to generate one histogram per level of the chosen facet. By default, the four histograms will be arranged in a single row. Use facet_wrap to change this. There's more... You can use ggplot2 to generate comparative boxplots as well. 
Creating boxplots with ggplot2 Instead of the default histogram, you can get a boxplot with either of the following two approaches: > qplot(season, cnt, data = bike, geom = c("boxplot"), fill = season) >  > ggplot(bike, aes(x = season, y = cnt)) + geom_boxplot() The preceding code produces the following output: The second line of the preceding code produces the following plot: Building, plotting, and evaluating – classification trees You can use a couple of R packages to build classification trees. Under the hood, they all do the same thing. Getting ready If you do not already have the rpart, rpart.plot, and caret packages, install them now. Download the data files and place the banknote-authentication.csv file in your R working directory. How to do it... This recipe shows you how you can use the rpart package to build classification trees and the rpart.plot package to generate nice-looking tree diagrams: Load the rpart, rpart.plot, and caret packages: > library(rpart) > library(rpart.plot) > library(caret) Read the data: > bn <- read.csv("banknote-authentication.csv") Create data partitions. We need two partitions—training and validation. 
Rather than copying the data into the partitions, we will just keep the indices of the cases that represent the training cases and subset as and when needed: > set.seed(1000) > train.idx <- createDataPartition(bn$class, p = 0.7, list = FALSE) Build the tree: > mod <- rpart(class ~ ., data = bn[train.idx, ], method = "class", control = rpart.control(minsplit = 20, cp = 0.01)) View the text output (your result could differ if you did not set the random seed as in step 3): > mod n= 961   node), split, n, loss, yval, (yprob)      * denotes terminal node   1) root 961 423 0 (0.55983351 0.44016649)    2) variance>=0.321235 511 52 0 (0.89823875 0.10176125)      4) curtosis>=-4.3856 482 29 0 (0.93983402 0.06016598)        8) variance>=0.92009 413 10 0 (0.97578692 0.02421308) *        9) variance< 0.92009 69 19 0 (0.72463768 0.27536232)        18) entropy< -0.167685 52   6 0 (0.88461538 0.11538462) *        19) entropy>=-0.167685 17   4 1 (0.23529412 0.76470588) *      5) curtosis< -4.3856 29   6 1 (0.20689655 0.79310345)      10) variance>=2.3098 7   1 0 (0.85714286 0.14285714) *      11) variance< 2.3098 22   0 1 (0.00000000 1.00000000) *    3) variance< 0.321235 450 79 1 (0.17555556 0.82444444)      6) skew>=6.83375 76 18 0 (0.76315789 0.23684211)      12) variance>=-3.4449 57   0 0 (1.00000000 0.00000000) *      13) variance< -3.4449 19   1 1 (0.05263158 0.94736842) *      7) skew< 6.83375 374 21 1 (0.05614973 0.94385027)      14) curtosis>=6.21865 106 16 1 (0.15094340 0.84905660)        28) skew>=-3.16705 16   0 0 (1.00000000 0.00000000) *       29) skew< -3.16705 90   0 1 (0.00000000 1.00000000) *      15) curtosis< 6.21865 268   5 1 (0.01865672 0.98134328) * Generate a diagram of the tree (your tree might differ if you did not set the random seed as in step 3): > prp(mod, type = 2, extra = 104, nn = TRUE, fallen.leaves = TRUE, faclen = 4, varlen = 8, shadow.col = "gray") The following output is obtained as a result of the preceding command: Prune the tree: > # First 
see the cptable > # !!Note!!: Your table can be different because of the > # random aspect in cross-validation > mod$cptable            CP nsplit rel error   xerror       xstd 1 0.69030733     0 1.00000000 1.0000000 0.03637971 2 0.09456265     1 0.30969267 0.3262411 0.02570025 3 0.04018913     2 0.21513002 0.2387707 0.02247542 4 0.01891253     4 0.13475177 0.1607565 0.01879222 5 0.01182033     6 0.09692671 0.1347518 0.01731090 6 0.01063830     7 0.08510638 0.1323877 0.01716786 7 0.01000000     9 0.06382979 0.1276596 0.01687712   > # Choose CP value as the highest value whose > # xerror is not greater than minimum xerror + xstd > # With the above data that happens to be > # the fifth one, 0.01182033 > # Your values could be different because of random > # sampling > mod.pruned = prune(mod, mod$cptable[5, "CP"]) View the pruned tree (your tree will look different): > prp(mod.pruned, type = 2, extra = 104, nn = TRUE, fallen.leaves = TRUE, faclen = 4, varlen = 8, shadow.col = "gray") Use the pruned model to predict for a validation partition (note the minus sign before train.idx to consider the cases in the validation partition): > pred.pruned <- predict(mod, bn[-train.idx,], type = "class") Generate the error/classification-confusion matrix: > table(bn[-train.idx,]$class, pred.pruned, dnn = c("Actual", "Predicted"))      Predicted Actual   0   1      0 213 11      1 11 176 How it works... Steps 1 to 3 load the packages, read the data, and identify the cases in the training partition, respectively. In step 3, we set the random seed so that your results should match those that we display. 
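Incidentally, the selection rule quoted in the comments above, choose the highest CP value whose xerror does not exceed the minimum xerror plus its standard deviation, is easy to verify by hand. The following sketch applies that one-standard-error rule to the cptable numbers printed above (plain arithmetic, not rpart's internals):

```python
# Rows of the cptable from the recipe: (CP, xerror, xstd)
cptable = [
    (0.69030733, 1.0000000, 0.03637971),
    (0.09456265, 0.3262411, 0.02570025),
    (0.04018913, 0.2387707, 0.02247542),
    (0.01891253, 0.1607565, 0.01879222),
    (0.01182033, 0.1347518, 0.01731090),
    (0.01063830, 0.1323877, 0.01716786),
    (0.01000000, 0.1276596, 0.01687712),
]

# 1-SE rule: threshold = minimum xerror plus its xstd; among the rows at or
# below the threshold, take the largest CP (the simplest qualifying tree).
min_cp, min_xerror, min_xstd = min(cptable, key=lambda row: row[1])
threshold = min_xerror + min_xstd
chosen_cp = max(cp for cp, xerror, _ in cptable if xerror <= threshold)
print(chosen_cp)  # 0.01182033, the fifth row, as chosen in the recipe
```

With these numbers the rule lands on CP = 0.01182033, matching the fifth row chosen in the recipe; because cross-validation is random, your own cptable may select a different row.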
Step 4 builds the classification tree model: > mod <- rpart(class ~ ., data = bn[train.idx, ], method = "class", control = rpart.control(minsplit = 20, cp = 0.01)) The rpart() function builds the tree model based on the following:   Formula specifying the dependent and independent variables   Dataset to use   A specification through method="class" that we want to build a classification tree (as opposed to a regression tree)   Control parameters specified through the control = rpart.control() setting; here we have indicated that the tree should only consider nodes with at least 20 cases for splitting and use the complexity parameter value of 0.01—these two values represent the defaults and we have included these just for illustration Step 5 produces a textual display of the results. Step 6 uses the prp() function of the rpart.plot package to produce a nice-looking plot of the tree: > prp(mod, type = 2, extra = 104, nn = TRUE, fallen.leaves = TRUE, faclen = 4, varlen = 8, shadow.col = "gray")   use type=2 to get a plot with every node labeled and with the split label below the node   use extra=4 to display the probability of each class in the node (conditioned on the node and hence summing to 1); add 100 (hence extra=104) to display the number of cases in the node as a percentage of the total number of cases   use nn = TRUE to display the node numbers; the root node is node number 1 and node n has child nodes numbered 2n and 2n+1   use fallen.leaves=TRUE to display all leaf nodes at the bottom of the graph   use faclen to abbreviate class names in the nodes to a specific maximum length   use varlen to abbreviate variable names   use shadow.col to specify the color of the shadow that each node casts Step 7 prunes the tree to reduce the chance that the model too closely models the training data—that is, to reduce overfitting. Within this step, we first look at the complexity table generated through cross-validation. 
We then use the table to determine the cutoff complexity level: the largest CP value whose xerror (cross-validation error) is not greater than one standard deviation above the minimum cross-validation error.

Steps 8 through 10 display the pruned tree, use the pruned tree to predict the class for the validation partition, and then generate the error matrix for the validation partition.

There's more...

In the following, we discuss an important variation on predictions using classification trees.

Computing raw probabilities

We can generate probabilities in place of classifications by specifying type="prob":

> pred.pruned <- predict(mod, bn[-train.idx,], type = "prob")

Creating the ROC chart

Using the preceding raw probabilities and the class labels, we can generate a ROC chart. The prediction and performance functions come from the ROCR package, so load it first:

> library(ROCR)
> pred <- prediction(pred.pruned[,2], bn[-train.idx,"class"])
> perf <- performance(pred, "tpr", "fpr")
> plot(perf)

Using time series objects

In this recipe, we look at various features to create and plot time series objects. We will consider data with both single and multiple time series.

Getting ready

If you have not already downloaded the data files, do it now and ensure that the files are in your R working directory.

How to do it...

Read the data.
The file has 100 rows and a single column named sales: > s <- read.csv("ts-example.csv") Convert the data to a simplistic time series object without any explicit notion of time: > s.ts <- ts(s) > class(s.ts) [1] "ts" Plot the time series: > plot(s.ts) Create a proper time series object with proper time points: > s.ts.a <- ts(s, start = 2002) > s.ts.a Time Series: Start = 2002 End = 2101 Frequency = 1        sales [1,]   51 [2,]   56 [3,]   37 [4,]   101 [5,]   66 (output truncated) > plot(s.ts.a) > # results show that R treated this as an annual > # time series with 2002 as the starting year The result of the preceding commands is seen in the following graph: To create a monthly time series run the following command: > # Create a monthly time series > s.ts.m <- ts(s, start = c(2002,1), frequency = 12) > s.ts.m        Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec 2002 51 56 37 101 66 63 45 68 70 107 86 102 2003 90 102 79 95 95 101 128 109 139 119 124 116 2004 106 100 114 133 119 114 125 167 149 165 135 152 2005 155 167 169 192 170 180 175 207 164 204 180 203 2006 215 222 205 202 203 209 200 199 218 221 225 212 2007 250 219 242 241 267 249 253 242 251 279 298 260 2008 269 257 279 273 275 314 288 286 290 288 304 291 2009 314 290 312 319 334 307 315 321 339 348 323 342 2010 340 348 354 291 > plot(s.ts.m) # note x axis on plot The following plot can be seen as a result of the preceding commands: > # Specify frequency = 4 for quarterly data > s.ts.q <- ts(s, start = 2002, frequency = 4) > s.ts.q        Qtr1 Qtr2 Qtr3 Qtr4 2002   51   56   37 101 2003   66   63   45   68 2004   70 107   86 102 2005   90 102   79   95 2006   95 101 128 109 (output truncated) > plot(s.ts.q) Query time series objects (we use the s.ts.m object we created in the previous step): > # When does the series start? > start(s.ts.m) [1] 2002   1 > # When does it end? > end(s.ts.m) [1] 2010   4 > # What is the frequency? 
> frequency(s.ts.m) [1] 12 Create a time series object with multiple time series. This data file contains US monthly consumer prices for white flour and unleaded gas for the years 1980 through 2014 (downloaded from the website of the US Bureau of Labor Statistics): > prices <- read.csv("prices.csv") > prices.ts <- ts(prices, start=c(1980,1), frequency = 12) Plot a time series object with multiple time series: > plot(prices.ts) The plot in two separate panels appears as follows: > # Plot both series in one panel with suitable legend > plot(prices.ts, plot.type = "single", col = 1:2) > legend("topleft", colnames(prices.ts), col = 1:2, lty = 1) Two series plotted in one panel appear as follow: How it works... Step 1 reads the data. Step 2 uses the ts function to generate a time series object based on the raw data. Step 3 uses the plot function to generate a line plot of the time series. We see that the time axis does not provide much information. Time series objects can represent time in more friendly terms. Step 4 shows how to create time series objects with a better notion of time. It shows how we can treat a data series as an annual, monthly, or quarterly time series. The start and frequency parameters help us to control these data series. Although the time series we provide is just a list of sequential values, in reality our data can have an implicit notion of time attached to it. For example, the data can be annual numbers, monthly numbers, or quarterly ones (or something else, such as 10-second observations of something). Given just the raw numbers (as in our data file, ts-example.csv), the ts function cannot figure out the time aspect and by default assumes no secondary time interval at all. We can use the frequency parameter to tell ts how to interpret the time aspect of the data. The frequency parameter controls how many secondary time intervals there are in one major time interval. 
If we do not explicitly specify it, by default frequency takes on a value of 1. Thus, the following code treats the data as an annual sequence, starting in 2002:

> s.ts.a <- ts(s, start = 2002)

The following code, on the other hand, treats the data as a monthly time series, starting in January 2002. If we specify the start parameter as a number, then R treats it as starting at the first subperiod, if any, of the specified start period. When we specify frequency as different from 1, then the start parameter can be a vector such as c(2002,1) to specify the major period and the subperiod where the series starts. c(2002,1) represents January 2002:

> s.ts.m <- ts(s, start = c(2002,1), frequency = 12)

Similarly, the following code treats the data as a quarterly sequence, starting in the first quarter of 2002:

> s.ts.q <- ts(s, start = 2002, frequency = 4)

The frequency values of 12 and 4 have a special meaning—they represent monthly and quarterly time sequences. We can supply start and end, just one of them, or none. If we do not specify either, then R treats the start as 1 and figures out end based on the number of data points. If we supply one, then R figures out the other based on the number of data points. While start and end do not play a role in computations, frequency plays a big role in determining seasonality, which captures periodic fluctuations. If we have some other specialized time series, we can specify the frequency parameter appropriately. Here are two examples:

With measurements taken every 10 minutes and seasonality pegged to the hour, we should specify frequency as 6
With measurements taken every 10 minutes and seasonality pegged to the day, use frequency = 24*6 (6 measurements per hour times 24 hours per day)

Step 5 shows the use of the functions start, end, and frequency to query time series objects. Steps 6 and 7 show that R can handle data files that contain multiple time series.
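The bookkeeping implied by start and frequency can be made concrete with a little integer arithmetic. The helper below is hypothetical (it is not how ts is implemented) and simply maps a 0-based observation index to the (major period, subperiod) label that a ts object would attach to it:

```python
def time_of(index, start_period, start_sub, frequency):
    """Map a 0-based observation index to (major period, subperiod), the way
    ts(start = c(start_period, start_sub), frequency = ...) labels it."""
    offset = (start_sub - 1) + index  # subperiods elapsed since period start
    return (start_period + offset // frequency, offset % frequency + 1)

# A monthly series starting in January 2002 (frequency = 12):
print(time_of(0, 2002, 1, 12))   # first observation  -> (2002, 1), January 2002
print(time_of(11, 2002, 1, 12))  # twelfth observation -> (2002, 12)
print(time_of(12, 2002, 1, 12))  # thirteenth -> (2003, 1), rolls into 2003
```

The same arithmetic with frequency = 4 yields quarter labels, which is why the printed s.ts.q object arranges the values under Qtr1 through Qtr4.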
Applying functions to subsets of a vector

The tapply function applies a function to each partition of the dataset. Hence, when we need to evaluate a function over subsets of a vector defined by a factor, tapply comes in handy.

Getting ready

Download the files and store the auto-mpg.csv file in your R working directory. Read the data and create factors for the cylinders variable:

> auto <- read.csv("auto-mpg.csv", stringsAsFactors=FALSE)
> auto$cylinders <- factor(auto$cylinders, levels = c(3,4,5,6,8), labels = c("3cyl", "4cyl", "5cyl", "6cyl", "8cyl"))

How to do it...

To apply functions to subsets of a vector, follow these steps:

Calculate mean mpg for each cylinder type:

> tapply(auto$mpg,auto$cylinders,mean)
    3cyl     4cyl     5cyl     6cyl     8cyl
20.55000 29.28676 27.36667 19.98571 14.96311

We can even specify multiple factors as a list. The following example shows only one factor since our data file has only one, but it serves as a template that you can adapt:

> tapply(auto$mpg,list(cyl=auto$cylinders),mean)
cyl
    3cyl     4cyl     5cyl     6cyl     8cyl
20.55000 29.28676 27.36667 19.98571 14.96311

How it works...

In step 1, the mean function is applied to the auto$mpg vector grouped according to the auto$cylinders vector. The grouping factor should be of the same length as the input vector so that each element of the first vector can be associated with a group. The tapply function creates groups of the first argument based on each element's group affiliation as defined by the second argument and passes each group to the user-specified function. Step 2 shows that we can actually group by several factors specified as a list. In this case, tapply applies the function to each unique combination of the specified factors.

There's more...

The by function is similar to tapply and applies the function to a group of rows in a dataset, but by passing in the entire data frame. The following examples clarify this.
Applying a function on groups from a data frame

In the following example, we find the correlation between mpg and weight for each cylinder type:

> by(auto, auto$cylinders, function(x) cor(x$mpg, x$weight))
auto$cylinders: 3cyl
[1] 0.6191685
---------------------------------------------------
auto$cylinders: 4cyl
[1] -0.5430774
---------------------------------------------------
auto$cylinders: 5cyl
[1] -0.04750808
---------------------------------------------------
auto$cylinders: 6cyl
[1] -0.4634435
---------------------------------------------------
auto$cylinders: 8cyl
[1] -0.5569099

Summary

Being an extensible system, R's functionality is divided across numerous packages, with each one exposing large numbers of functions. Even experienced users cannot expect to remember all the details off the top of their head. In this article, we went through a few techniques that R provides for analyzing data and visualizing the results.

Resources for Article: Further resources on this subject: Combining Vector and Raster Datasets [article] Factor variables in R [article] Big Data Analysis (R and Hadoop) [article]
Installing OpenStack Swift

Packt
04 Jun 2015
10 min read
In this article by Amar Kapadia, Sreedhar Varma, and Kris Rajana, authors of the book OpenStack Object Storage (Swift) Essentials, we will see how IT administrators can install OpenStack Swift. The version discussed here is the Juno release of OpenStack. Installation of Swift has several steps and requires careful planning before beginning the process. A simple installation consists of installing all Swift components on a single node, and a complex installation consists of installing Swift on several proxy server nodes and storage server nodes. The number of storage nodes can be in the order of thousands across multiple zones and regions. Depending on your installation, you need to decide on the number of proxy server nodes and storage server nodes that you will configure. This article demonstrates a manual installation process; advanced users may want to use utilities such as Puppet or Chef to simplify the process. This article walks you through an OpenStack Swift cluster installation that contains one proxy server and five storage servers. (For more resources related to this topic, see here.) Hardware planning This section describes the various hardware components involved in the setup. Since Swift deals with object storage, disks are going to be a major part of hardware planning. The size and number of disks required should be calculated based on your requirements. Networking is also an important component, where factors such as a public or private network and a separate network for communication between storage servers need to be planned. Network throughput of at least 1 GB per second is suggested, while 10 GB per second is recommended. The servers we set up as proxy and storage servers are dual quad-core servers with 12 GB of RAM. In our setup, we have a total of 15 x 2 TB disks for Swift storage; this gives us a total size of 30 TB. However, with in-built replication (with a default replica count of 3), Swift maintains three copies of the same data. 
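The replica arithmetic can be sketched in Python. This is a hypothetical helper for illustration only; the actual usable space is further reduced by filesystem overhead and less-than-full utilization, as noted below:

```python
def effective_capacity_tb(disk_count, disk_size_tb, replica_count):
    """Raw capacity shrinks by the replica count, since Swift keeps
    replica_count full copies of every object."""
    raw_tb = disk_count * disk_size_tb
    return raw_tb / replica_count

# 15 disks of 2 TB each, with Swift's default replica count of 3
print(effective_capacity_tb(15, 2, 3))  # 10.0 TB before filesystem overhead
```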
Therefore, the effective capacity for storing files and objects is approximately 10 TB, taking filesystem overhead into consideration. This is further reduced due to less than 100 percent utilization. The following figure depicts the nodes of our Swift cluster configuration: The storage servers have container, object, and account services running in them. Server setup and network configuration All the servers are installed with the Ubuntu server operating system (64-bit LTS version 14.04). You'll need to configure three networks, which are as follows: Public network: The proxy server connects to this network. This network provides public access to the API endpoints within the proxy server. Storage network: This is a private network and it is not accessible to the outside world. All the storage servers and the proxy server will connect to this network. Communication between the proxy server and the storage servers and communication between the storage servers take place within this network. In our configuration, the IP addresses assigned in this network are 172.168.10.0 and 172.168.10.99. Replication network: This is also a private network that is not accessible to the outside world. It is dedicated to replication traffic, and only storage servers connect to it. All replication-related communication between storage servers takes place within this network. In our configuration, the IP addresses assigned in this network are 172.168.9.0 and 172.168.9.99. This network is optional, and if it is set up, the traffic on it needs to be monitored closely. Pre-installation steps In order for various servers to communicate easily, edit the /etc/hosts file and add the host names of each server in it. This has to be done on all the nodes. The following screenshot shows an example of the contents of the /etc/hosts file of the proxy server node: Install the Network Time Protocol (NTP) service on the proxy server node and storage server nodes. 
This helps all the nodes to synchronize their services effectively without any clock delays. The pre-installation steps to be performed are as follows:

Run the following command to install the NTP service:

# apt-get install ntp

Configure the proxy server node to be the reference server for the storage server nodes to set their time from the proxy server node. Make sure that the following line is present in /etc/ntp.conf for NTP configuration in the proxy server node:

server ntp.ubuntu.com

For NTP configuration in the storage server nodes, add the following line to /etc/ntp.conf. Comment out the remaining lines with server addresses such as 0.ubuntu.pool.ntp.org, 1.ubuntu.pool.ntp.org, 2.ubuntu.pool.ntp.org, and 3.ubuntu.pool.ntp.org:

# server 0.ubuntu.pool.ntp.org
# server 1.ubuntu.pool.ntp.org
# server 2.ubuntu.pool.ntp.org
# server 3.ubuntu.pool.ntp.org
server s-swift-proxy

Restart the NTP service on each server with the following command:

# service ntp restart

Downloading and installing Swift

The Ubuntu Cloud Archive is a special repository that provides users with the ability to install new releases of OpenStack. The steps required to download and install Swift are as follows: Enable the capability to install new releases of OpenStack, and install the latest version of Swift on each node using the following commands.
The second command shown here creates a file named cloudarchive-juno.list in /etc/apt/sources.list.d, whose content is "deb http://ubuntu-cloud.archive.canonical.com/ubuntu":

Now, update the OS using the following command:

# apt-get update && apt-get dist-upgrade

On all the Swift nodes, we will install the prerequisite software and services using this command:

# apt-get install swift rsync memcached python-netifaces python-xattr python-memcache

Next, we create a Swift folder under /etc and give users the permission to access this folder, using the following commands:

# mkdir -p /etc/swift/
# chown -R swift:swift /etc/swift

Download the /etc/swift/swift.conf file from GitHub using this command:

# curl -o /etc/swift/swift.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/swift.conf-sample

Modify the /etc/swift/swift.conf file and add a variable called swift_hash_path_suffix in the swift-hash section. We then create a unique hash string using # python -c "from uuid import uuid4; print uuid4()" or # openssl rand -hex 10, and assign it to this variable, as shown in the following configuration option:

We then add another variable called swift_hash_path_prefix to the swift-hash section, and assign to it another hash string created using the method described in the preceding step. These strings will be used in the hashing process to determine the mappings in the ring. The swift.conf file should be identical on all the nodes in the cluster.

Setting up storage server nodes

This section explains additional steps to set up the storage server nodes, which will contain the object, container, and account services.

Installing services

The first step required to set up the storage server node is installing services.
Let's look at the steps involved:

On each storage server node, install the packages for swift-account services, swift-container services, swift-object services, and xfsprogs (XFS Filesystem) using this command:

# apt-get install swift-account swift-container swift-object xfsprogs

Download the account-server.conf, container-server.conf, and object-server.conf samples from GitHub, using the following commands:

# curl -o /etc/swift/account-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/account-server.conf-sample
# curl -o /etc/swift/container-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/container-server.conf-sample
# curl -o /etc/swift/object-server.conf https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/object-server.conf-sample

Edit the /etc/swift/account-server.conf file with the following section: Edit the /etc/swift/container-server.conf file with this section: Edit the /etc/swift/object-server.conf file with the following section:

Formatting and mounting hard disks

On each storage server node, we need to identify the hard disks that will be used to store the data. We will then format the hard disks and mount them on a directory, which Swift will then use to store data. We will not create any RAID levels or subpartitions on these hard disks because they are not necessary for Swift. They will be used as entire disks. The operating system will be installed on separate disks, which will be RAID configured. First, identify the hard disks that are going to be used for storage and format them. In our storage server, we have identified sdb, sdc, and sdd to be used for storage. We will perform the following operations on sdb. These four steps should be repeated for sdc and sdd as well:

Carry out the partitioning for sdb and create the filesystem using this command:

# fdisk /dev/sdb
# mkfs.xfs /dev/sdb1

Then let's create a directory in /srv/node/sdb1 that will be used to mount the filesystem.
Give the permission to the swift user to access this directory. These operations can be performed using the following commands:

# mkdir -p /srv/node/sdb1
# chown -R swift:swift /srv/node/sdb1

We set up an entry in fstab for the sdb1 partition in the sdb hard disk, as follows. This will automatically mount sdb1 on /srv/node/sdb1 upon every boot. Add the following line to the /etc/fstab file:

/dev/sdb1 /srv/node/sdb1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2

Mount sdb1 on /srv/node/sdb1 using the following command:

# mount /srv/node/sdb1

RSYNC and RSYNCD

In order for Swift to perform the replication of data, we need to configure rsync by configuring rsyncd.conf. This is done by performing the following steps:

Create the rsyncd.conf file in the /etc folder with the following content:

# vi /etc/rsyncd.conf

We are setting up synchronization within the network by including the following lines in the configuration file: 172.168.9.52 is the IP address that is on the replication network for this storage server. Use the appropriate replication network IP addresses for the corresponding storage servers.

We then have to edit the /etc/default/rsync file and set RSYNC_ENABLE to true using the following configuration option:

RSYNC_ENABLE=true

Next, we restart the rsync service using this command:

# service rsync restart

Then we create the swift, recon, and cache directories using the following commands, and then set their permissions:

# mkdir -p /var/cache/swift
# mkdir -p /var/swift/recon

Setting permissions is done using these commands:

# chown -R swift:swift /var/cache/swift
# chown -R swift:swift /var/swift/recon

Repeat these steps on every storage server.

Setting up the proxy server node

This section explains the steps required to set up the proxy server node, which are as follows:

Install the following services only on the proxy server node:

# apt-get install python-swiftclient python-keystoneclient python-keystonemiddleware swift-proxy

Swift doesn't support HTTPS.
OpenSSL has already been installed as part of the operating system installation to support HTTPS. We are going to use the OpenStack Keystone service for authentication. In order to set up the proxy-server.conf file for this, we download the configuration file from the following link and edit it:

https://raw.githubusercontent.com/openstack/swift/stable/juno/etc/proxy-server.conf-sample

# vi /etc/swift/proxy-server.conf

The proxy-server.conf file should be edited to get the correct auth_host, admin_token, admin_tenant_name, admin_user, and admin_password values:

admin_token = 01d8b673-9ebb-41d2-968a-d2a85daa1324
admin_tenant_name = admin
admin_user = admin
admin_password = changeme

Next, we create a keystone-signing directory and give permissions to the swift user using the following commands:

# mkdir -p /home/swift/keystone-signing
# chown -R swift:swift /home/swift/keystone-signing

Summary

In this article, you learned how to install and set up the OpenStack Swift service to provide object storage, and install and set up the Keystone service to provide authentication for users to access the Swift object storage.

Resources for Article: Further resources on this subject: Troubleshooting in OpenStack Cloud Computing [Article] Using OpenStack Swift [Article] Playing with Swift [Article]
Plotting in Haskell

Packt
04 Jun 2015
10 min read
In this article by James Church, author of the book Learning Haskell Data Analysis, we will see the different methods of data analysis by plotting data using Haskell. The other topics that this article covers are using GHCi, scaling data, and comparing stock prices. (For more resources related to this topic, see here.)

Can you perform data analysis in Haskell? Yes, and you might even find that you enjoy it. We are going to take a few snippets of Haskell and put some plots of the stock market data together. To get started, the following software needs to be installed:

The Haskell platform (http://www.haskell.org/platform)
Gnuplot (http://www.gnuplot.info/)

The cabal command-line tool is used to install packages in Haskell. There are three packages that we may need in order to analyze the stock market data. To use cabal, you will use the cabal install [package names] command. Run the following command to install the CSV parsing package, the EasyPlot package, and the Either package:

$ cabal install csv easyplot either

Once you have the necessary software and packages installed, we are all set for some introductory analysis in Haskell.

We need data

It is difficult to perform an analysis of data without data. The Internet is rich with sources of data. Since this tutorial looks at the stock market data, we need a source. Visit the Yahoo! Finance website to find the history of every publicly traded stock on the New York Stock Exchange that has been adjusted to reflect splits over time. The good folks at Yahoo! provide this resource in the csv file format. We begin with downloading the entire history of the Apple company from Yahoo! Finance (http://finance.yahoo.com). You can find the content for Apple by performing a quote look up from the Yahoo! Finance home page for the AAPL symbol (that is, 2 As, not 2 Ps). On this page, you can find the link for Historical Prices. On the Historical Prices page, identify the link that says Download to Spreadsheet.
The complete link to Apple's historical prices can be found at the following link: http://real-chart.finance.yahoo.com/table.csv?s=AAPL. We should take a moment to explore our dataset. Here are the column headers in the csv file:

Date: This is a string that represents a particular date in Apple's history
Open: This is the opening value of one share
High: This is the high trade value over the course of this day
Low: This is the low trade value over the course of this day
Close: This is the final price of the share at the end of this trading day
Volume: This is the total number of shares traded on this day
Adj Close: This is a variation on the closing price that adjusts for dividend payouts and company splits

Another feature of this dataset is that the rows are written in reverse chronological order. The most recent date in the table is the first. The oldest is the last. Yahoo! Finance provides this table (Apple's historical prices) under the unhelpful name table.csv. I renamed the csv file provided by Yahoo! Finance to aapl.csv.

Start GHCi

The interactive prompt for Haskell is GHCi. On the command line, type ghci. We begin with importing our newly installed libraries from the prompt:

> import Data.List
> import Text.CSV
> import Data.Either.Combinators
> import Graphics.EasyPlot

Parse the csv file that you just downloaded using the parseCSVFromFile command. This command will return an Either type, which represents one of the two things that happened: your file was parsed (Right) or something went wrong (Left). We can inspect the type of our result with the :t command:

> eitherErrorOrCells <- parseCSVFromFile "aapl.csv"
> :t eitherErrorOrCells
eitherErrorOrCells :: Either Text.Parsec.Error.ParseError CSV

Did we get an error for our result? For this, we are going to use the fromRight and fromLeft commands. Remember, Right is right and Left is wrong.
When we run the fromLeft' command, we should see this message saying that our content is in the Right:

> fromLeft' eitherErrorOrCells
*** Exception: Data.Either.Combinators.fromLeft: Argument takes form 'Right _'

Pull the cells of our csv file into cells. We can see the first four rows of our content using take 5 (which will pull our header line and the first four cells):

> let cells = fromRight' eitherErrorOrCells
> take 5 cells
[["Date","Open","High","Low","Close","Volume","Adj Close"],["2014-11-10","552.40","560.63","551.62","558.23","1298900","558.23"],["2014-11-07","555.60","555.60","549.35","551.82","1589100","551.82"],["2014-11-06","555.50","556.80","550.58","551.69","1649900","551.69"],["2014-11-05","566.79","566.90","554.15","555.95","1645200","555.95"]]

The last column in our csv file is Adj Close, which is the column we would like to plot. Count the columns (starting with 0), and you will find that Adj Close is number 6. Everything else can be dropped. (Here, we are also using the init function to drop the last row of the data, which is an empty list. Grabbing the 6th element of an empty list will not work in Haskell.):

> map (\x -> x !! 6) (take 5 (init cells))
["Adj Close","558.23","551.82","551.69","555.95"]

We know that this column represents the adjusted close prices. We should drop our header row. Since we use tail to drop the header row, take 5 returns the first five adjusted close prices:

> map (\x -> x !! 6) (take 5 (tail (init cells)))
["558.23","551.82","551.69","555.95","564.19"]

We should store all of our adjusted close prices in a value called adjCloseAAPLOriginal:

> let adjCloseAAPLOriginal = map (\x -> x !! 6) (tail (init cells))

These are still raw strings. We need to convert these to a Double type with the read function:

> let adjCloseAAPL = map read adjCloseAAPLOriginal :: [Double]

We are almost done massaging our data. We need to make sure that every value in adjCloseAAPL is paired with an index position for the purpose of plotting.
Remember that our adjusted closes are in reverse chronological order. This will create a list of tuples, which can be passed to the plot function:

> let aapl = zip (reverse [1.0..genericLength adjCloseAAPL]) adjCloseAAPL
> take 5 aapl
[(2577.0,558.23),(2576.0,551.82),(2575.0,551.69),(2574.0,555.95),(2573.0,564.19)]

Plotting

> plot (PNG "aapl.png") $ Data2D [Title "AAPL"] [] aapl
True

The following chart is the result of the preceding command:

Open aapl.png, which should be newly created in your current working directory. This is a typical default chart created by EasyPlot. We can see the entire history of the Apple stock price. For most of this history, the adjusted share price was less than $10 per share. At about the 6,000 trading day, we see the quick ascension of the share price to over $100 per share. Most of the time, when we take a look at a share price, we are only interested in the tail portion (say, the last year of changes). Our data is already reversed, so the newest close prices are at the front. There are 252 trading days in a year, so we can take the first 252 elements in our value and plot them. While we are at it, we are going to change the style of the plot to a line plot:

> let aapl252 = take 252 aapl
> plot (PNG "aapl_oneyear.png") $ Data2D [Title "AAPL", Style Lines] [] aapl252
True

The following chart is the result of the preceding command:

Scaling data

Looking at the share price of a single company over the course of a year will tell you whether the price is trending upward or downward. While this is good, we can get better information about the growth by scaling the data. To scale a dataset to reflect the percent change, we subtract each value by the first element in the list, divide that by the first element, and then multiply by 100. Here, we create a simple function called percentChange. We then scale the values 100 to 105, using this new function.
(Using the :t command is not necessary, but I like to use it to make sure that I have at least the desired type signature correct.):

> let percentChange first value = 100.0 * (value - first) / first
> :t percentChange
percentChange :: Fractional a => a -> a -> a
> map (percentChange 100) [100..105]
[0.0,1.0,2.0,3.0,4.0,5.0]

We will use this new function to scale our Apple dataset. Our tuple of values can be split using the fst (for the first value containing the index) and snd (for the second value containing the adjusted close) functions:

> let firstValue = snd (last aapl252)
> let aapl252scaled = map (\pair -> (fst pair, percentChange firstValue (snd pair))) aapl252
> plot (PNG "aapl_oneyear_pc.png") $ Data2D [Title "AAPL PC", Style Lines] [] aapl252scaled
True

The following chart is the result of the preceding command:

Let's take a look at the preceding chart. Notice that it looks identical to the one we just made, except that the y axis is now changed. The values on the left-hand side of the chart are now the fluctuating percent changes of the stock from a year ago. To the investor, this information is more meaningful.

Comparing stock prices

Every publicly traded company has a different stock price. When you hear that Company A has a share price of $10 and Company B has a price of $100, there is almost no meaningful content to this statement. We can arrive at a meaningful analysis by plotting the scaled history of the two companies on the same plot. Our Apple dataset uses an index position of the trading day for the x axis. This is fine for a single plot, but in order to combine plots, we need to make sure that all plots start at the same index. In order to prepare our existing data of Apple stock prices, we will adjust our index variable to begin at 0:

> let firstIndex = fst (last aapl252scaled)
> let aapl252scaled = map (\pair -> (fst pair - firstIndex, percentChange firstValue (snd pair))) aapl252

We will compare Apple to Google.
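Before moving on to the comparison, note that the percentChange scaling used above is language-agnostic; here is a hedged Python sketch of the same computation (illustrative only, not part of the article's Haskell code):

```python
def percent_change(first, value):
    """Percent change of value relative to the baseline first,
    mirroring the Haskell percentChange function above."""
    return 100.0 * (value - first) / first

# Scale the values 100..105, as in the GHCi session
prices = [100, 101, 102, 103, 104, 105]
print([percent_change(prices[0], p) for p in prices])
# [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
```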
Google uses the symbol GOOGL (spelled Google without the e). I downloaded the history of Google from Yahoo! Finance and performed the same steps that I previously performed with our Apple dataset:

> -- Prep Google for analysis
> eitherErrorOrCells <- parseCSVFromFile "googl.csv"
> let cells = fromRight' eitherErrorOrCells
> let adjCloseGOOGLOriginal = map (\x -> x !! 6) (tail (init cells))
> let adjCloseGOOGL = map read adjCloseGOOGLOriginal :: [Double]
> let googl = zip (reverse [1.0..genericLength adjCloseGOOGL]) adjCloseGOOGL
> let googl252 = take 252 googl
> let firstValue = snd (last googl252)
> let firstIndex = fst (last googl252)
> let googl252scaled = map (\pair -> (fst pair - firstIndex, percentChange firstValue (snd pair))) googl252

Now, we can plot the share prices of Apple and Google on the same chart, Apple plotted in red and Google plotted in blue:

> plot (PNG "aapl_googl.png") [Data2D [Title "AAPL PC", Style Lines, Color Red] [] aapl252scaled, Data2D [Title "GOOGL PC", Style Lines, Color Blue] [] googl252scaled]
True

The following chart is the result of the preceding command:

You can compare for yourself the growth rate of the stock price for these two competing companies; I believe that the contrast is enough to let the image speak for itself. This type of analysis is useful in the investment strategy known as growth investing. I am not recommending this as a strategy, nor am I recommending either of these two companies for the purpose of an investment. I am recommending Haskell as your language of choice for performing data analysis.

Summary

In this article, we read data from a csv file and plotted it. The other topics covered in this article were using GHCi and EasyPlot for plotting, scaling data, and comparing stock prices.

Resources for Article: Further resources on this subject: The Hunt for Data [article] Getting started with Haskell [article] Driving Visual Analyses with Automobile Data (Python) [article]
Getting Started with LiveCode Mobile

Packt
03 Jun 2015
34 min read
In this article written by Joel Gerdeen, author of the book LiveCode Mobile Development: Beginner's Guide - Second Edition, we will learn the following topics:

Sign up for Google Play
Sign up for Amazon Appstore
Download and install the Android SDK
Configure LiveCode so that it knows where to look for the Android SDK
Become an iOS developer with Apple
Download and install Xcode
Configure LiveCode so that it knows where to look for iOS SDKs
Set up simulators and physical devices
Test a stack in a simulator and physical device

(For more resources related to this topic, see here.)

Disclaimer

This article references many Internet pages that are not under our control. Where we do show screenshots or URLs, remember that the content may have changed since we wrote this. The suppliers may also have changed some of the details, but in general, our description of procedures should still work the way we have described them. Here we go...

iOS, Android, or both?

It could be that you only have interest in iOS or Android. You should be able to easily skip to the sections you're interested in unless you're intrigued about how the other half works! If, like me, you're a capitalist, then you should be interested in both the operating systems. Far fewer steps are needed to get the Android SDK than the iOS developer tools because for iOS, we have to sign up as a developer with Apple. However, the configuration for Android is more involved. We'll go through all the steps for Android and then the ones for iOS. If you're an iOS-only kind of person, skip the next few pages and start up again at the Becoming an iOS Developer section.

Becoming an Android developer

It is possible to develop Android OS apps without signing up for anything. We'll try to be optimistic and assume that within the next 12 months, you will find time to make an awesome app that will make you rich!
To that end, we'll go over everything that is involved in the process of signing up to publish your apps in both Google Play (formerly known as Android Market) and Amazon Appstore.

Google Play

The starting location to open Google Play is http://developer.android.com/: We will come back to this page again shortly to download the Android SDK, but for now, click on the Distribute link in the menu bar and then on the Developer Console button on the following screen. Since Google changes these pages occasionally, you can use the URL https://play.google.com/apps/publish/ or search for "Google Play Developer Console". The screens you will progress through are not shown here since they tend to change with time. There will be a sign-in page; sign in using your usual Google details.

Which e-mail address to use? Some Google services are easier to sign up for if you have a Gmail account. Creating a Google+ account, or signing up for some of their cloud services, requires a Gmail address (or so it seemed to me at the time!). If you have previously set up Google Wallet as part of your account, some of the steps in signing up become simpler. So, use your Gmail address and if you don't have one, create one!

Google charges you a $25 fee to sign up for Google Play. At least now, you know about this! Enter the developer name, e-mail address, website URL (if you have one), and your phone number. The payment of $25 will be done through Google Wallet, which will save you from entering the billing details yet again. Now, you're all signed up and ready to make your fortune!

Amazon Appstore

Although the rules and costs for Google Play are fairly relaxed, Amazon has a more Apple-like approach, both in the amount they charge you to register and in the review process to accept app submissions. The URL to open Amazon Appstore is http://developer.amazon.com/public: Follow these steps to start with Amazon Appstore: When you select Get Started, you need to sign in to your Amazon account.
Which e-mail address to use? This feels like déjà vu! There is no real advantage of using your Google e-mail address when signing up for the Amazon Appstore Developer Program, but if you happen to have an account with Amazon, sign in with that one. It will simplify the payment stage, and your developer account and the general Amazon account will be associated with each other.

You are then asked to agree to the Appstore Distribution Agreement terms before learning about the costs. These costs are $99 per year, but the first year is free. So that's good! Unlike the Google Android Market, Amazon asks for your bank details up front, ready to send you lots of money later, we hope! That's it, you're ready to make another fortune to go along with the one that Google sent you!

Pop quiz – when is something too much?

You're at the end of developing your mega app, it's 49.5 MB in size, and you just need to add title screen music. Why would you not add the two-minute epic tune you have lined up?

1. It would take too long to load.
2. People tend to skip the title screen soon anyway.
3. The file size is going to be over 50 MB.
4. Heavy metal might not be appropriate for a children's storybook app!

Answer: 3

The other answers are valid too, though you could play the music as an external sound to reduce loading time, but if your file size goes over 50 MB, you would then cut out potential sales from people who are connected by cellular and not wireless networks. At the time of writing this article, all the stores require that you be connected to the site via a wireless network if you intend to download apps that are over 50 MB.

Downloading the Android SDK

Head back to http://developer.android.com/ and click on the Get the SDK link or go straight to http://developer.android.com/sdk/index.html. This link defaults to the OS that you are running on.
Click on the Other Download Options link to see the full set of options for other systems, as shown here. In this article, we're only going to cover Windows and Mac OS X (Intel), and only as much as is needed to make LiveCode work with the Android and iOS SDKs. If you intend to make native Java-based applications, you may be interested in reading through all the steps described at http://developer.android.com/sdk/installing.html. Click on the SDK download link for your platform. Note that you don't need the ADT Bundle unless you plan to develop outside the LiveCode IDE. The steps you'll have to go through are different for Mac and Windows. Let's start with Mac.

Installing the Android SDK on Mac OS X (Intel)

LiveCode itself doesn't require an Intel Mac; you can develop stacks using a PowerPC-based Mac, but both the Android SDK and some of the iOS tools require an Intel-based Mac, which sadly means that if you're reading this as you sit next to your Mac G4 or G5, you're not going to get very far!

The Android SDK requires the Java Runtime Environment (JRE). Since Apple stopped including the JRE in more recent OS X systems, you should check whether you have it in your system by typing java -version in a Terminal window. The terminal will display the version of Java installed. If it is not installed, you may get a message prompting you to install it. Click on the More Info button and follow the instructions to install the JRE and verify its installation. At the time of writing this article, JRE 8 didn't work with OS X 10.10 and I had to use the JRE 6 obtained from http://support.apple.com/kb/DL1572.

The file that you just downloaded will automatically expand to show a folder named android-sdk-macosx. It may be in your Downloads folder right now, but a more natural place for it would be your Documents folder, so move it there before performing the next steps. There is an SDK readme text file that lists the steps you need to follow during the installation.
If these steps differ from what we have here, follow the steps in the readme file, in case they have been updated since this procedure was written.

1. Open the Terminal application, which is in Applications/Utilities.
2. You need to change directory to the android-sdk-macosx folder. One handy trick with Terminal is that you can drag items into the Terminal window to get the file path to that item. Using this trick, you can type cd and a space in the Terminal window and then drag the android-sdk-macosx folder after the space character. You'll end up with this line if your username is fred:

new-host-3:~ fred$ cd /Users/fred/Documents/android-sdk-macosx

Of course, the first part of the line and the user folder will match yours, not Fred's! Whatever your name is, press the Return or Enter key after entering the preceding line. The prompt now changes to look like this:

new-host-3:android-sdk-macosx fred$

3. Either carefully type or copy and paste the following line from the readme file:

tools/android update sdk --no-ui

4. Press Return or Enter again.

How long the download takes depends on your Internet connection. Even with a very fast Internet connection, it could still take over an hour. If you care to follow the update progress, you can run the android file in the tools directory. This will open the Android SDK Manager, which is similar to the Windows version shown a couple of pages further on in this article.

Installing the Android SDK on Windows

The downloads page recommends that you use the .exe download link, as it gives extra services to you, such as checking whether you have the Java Development Kit (JDK) installed. When you click on the link, either use the Run or Save options, as you would with any download of a Windows installer. Here, we've opted to use Run; if you use Save, then you need to open the file after it has been saved to your hard drive.
In the following case, as the JDK wasn't installed, a dialog box appears telling you to go to Oracle's site to get the JDK. If you see this screen too, you can leave the dialog box open and click on the Visit java.oracle.com button. On the Oracle page, click on the checkbox to agree to their terms and then on the download link that corresponds to your platform. Choose the 64-bit option if you are running a 64-bit version of Windows or the x86 option if you are running a 32-bit version of Windows. Either way, you're greeted with another installer that you can Run or Save as you prefer. Naturally, it takes a while for this installer to do its thing too! When the installation is complete, you will see a JDK registration page; it's up to you to register or not.

Back at the Android SDK installer dialog box, you can click on the Back button and then the Next button to get back to the JDK checking stage; only now, it sees that you have the JDK installed. Complete the remaining steps of the SDK installer as you would with any Windows installer. One important thing to note is that the last screen of the installer offers to open the SDK Manager. You should do that, so resist the temptation to uncheck that box! Click on Finish and you'll be greeted with a command-line window for a few moments, as shown in the following screenshot, and then the Android SDK Manager will appear and do its thing. As with the Mac version, it takes a very long time for all these add-ons to download.

Pointing LiveCode to the Android SDK

After all the installation and command-line work, it's a refreshing change to get back to LiveCode! Open the LiveCode Preferences and choose Mobile Support. We will set the two iOS entries after we get iOS going (these options will be grayed out on Windows). For now, click on the … button next to the Android development SDK root field and navigate to where the SDK is installed.
If you've followed the earlier steps correctly, the SDK will be in the Documents folder on Mac, or you can navigate to C:\Program Files (x86)\Android to find it on Windows (or somewhere else, if you chose a custom location). Depending on the APIs that were loaded in the SDK Manager, you may get a message that the path does not include support for Android 2.2 (API 8). If so, use the Android SDK Manager to install it. LiveCode seems to want API 8 even though, at the time of writing, Android 5.0 uses API 21. Phew! Now, let's do the same for iOS…

Pop quiz – tasty code names

The Android OS uses some curious code names for each version. At the time of writing this article, we were on Android OS 5, which had a code name of Lollipop. Version 4.1 was Jelly Bean and version 4.4 was KitKat. Which of these is most likely to be the code name for the next Android OS?

1. Lemon Cheesecake
2. Munchies
3. Noodle
4. Marshmallow

Answer: 4

The pattern, if it isn't obvious, is that each code name takes the next letter of the alphabet and is a kind of food, but more specifically, a dessert. "Munchies" almost works for Android OS 6, but "Marshmallow" or "Meringue Pie" would be a better choice!

Becoming an iOS developer

Creating iOS LiveCode applications requires that LiveCode has access to the iOS SDK. This is installed as part of the Xcode developer tools, a Mac-only program. Also, the application used to upload an app to the iOS App Store is Mac only and is part of the Xcode installation. If you are a Windows-based developer and wish to develop and publish for iOS, you need either an actual Mac-based system or a virtual machine that can run Mac OS. You can even use VirtualBox to run a Mac-based virtual machine, but performance will be an issue. Refer to http://apple.stackexchange.com/questions/63147/is-mac-os-x-in-a-virtualbox-vm-suitable-for-ios-development for more information.
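As a quick aside before we move on to iOS: if LiveCode ever rejects the Android SDK path you select, it is worth confirming that the folder really is the SDK root rather than one of its subfolders. The sketch below is only illustrative — the /tmp skeleton stands in for a real installation such as ~/Documents/android-sdk-macosx, which is where you would actually point SDK_ROOT:

```shell
# Sanity-check a candidate "Android development SDK root" folder:
# a valid root contains tools/ and platform-tools/ subfolders.
# A throwaway skeleton under /tmp is built purely for demonstration;
# point SDK_ROOT at your real install in practice.
SDK_ROOT=/tmp/android-sdk-demo
mkdir -p "$SDK_ROOT/tools" "$SDK_ROOT/platform-tools"
if [ -d "$SDK_ROOT/tools" ] && [ -d "$SDK_ROOT/platform-tools" ]; then
  echo "looks like an SDK root: $SDK_ROOT"
else
  echo "missing tools/ or platform-tools/ under $SDK_ROOT" >&2
fi
```

If the check fails on a real install, you have most likely selected a folder one level too deep (such as the tools folder itself) or too shallow.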
The biggest difference between becoming an Android developer and becoming an iOS developer is that you have to sign up with Apple for their developer program even if you never produce an app for the iOS App Store, but no such signing up is required when becoming an Android developer. If things go well and you end up making an app for various stores, then this isn't such a big deal. It will cost you $25 to submit an app to the Android Market, $99 a year (with the first year free) to submit an app to the Amazon Appstore, and $99 a year (including the first year) to be an iOS developer with Apple. Just try to sell more than 300 copies of your amazing $0.99 app and you'll find that it has paid for itself! Note that there is a free iOS App Store and app licensing included, with LiveCode Membership, which also costs $99 per year. As a LiveCode member, you can submit your free non-commercial app to RunRev who will provide a license that will allow you to submit your app as "closed source" to iOS App Store. This service is exclusively available for LiveCode members. The first submission each year is free; after that, there is a $25 administration fee per submission. Refer to http://livecode.com/membership/ for more information. You can enroll yourself in the iOS Developer Program for iOS at http://developer.apple.com/programs/ios/: While signing up to be an iOS developer, there are a number of possibilities when it comes to your current status. If you already have an Apple ID, which you use with your iTunes or Apple online store purchases, you could choose the I already have an Apple ID… option. In order to illustrate all the steps to sign up, we will start as a brand new user, as shown in the following screenshot: You can choose whether you want to sign up as an individual or as a company. 
We will choose Individual, as shown in the following screenshot. With any such sign-up process, you need to enter your personal details, set a security question, and enter your postal address. Most Apple software and services have their own legal agreement for you to sign; the one shown in the following screenshot is the general Registered Apple Developer Agreement.

In order to verify the e-mail address you have used, a verification code is sent to you with a link in the e-mail; you can click this or enter the code manually. Once you have completed the verification code step, you can enter your billing details. It could be that you might go on to make LiveCode applications for the Mac App Store, in which case you will need to add the Mac Developer Program product. For our purpose, we only need to sign up for the iOS Developer Program, as shown in the following screenshot. Each product that you sign up for has its own agreement. Lots of small print to read!

The actual purchase of the iOS developer account is handled through the Apple Store of your own region. As you can see in the next screenshot, it is going to cost you $99 per year, or $198 per year if you also sign up for the Mac Developer account. Most LiveCode users won't need the Mac Developer account unless their plan is to submit desktop apps to the Mac App Store. After submitting the order, you are rewarded with a message that tells you that you are now registered as an Apple developer! Sadly, you won't get an instant approval, as was the case with Android Market or Amazon Appstore; you have to wait up to five days for approval. In the early iPhone Developer days, the approval could take a month or more, so even five days is an improvement!

Pop quiz – iOS code names

You had it easy with the pop quiz about Android OS code names! Not so with iOS. Which of these names is more likely to be a code name for a future version of iOS?
1. Las Vegas
2. Laguna Beach
3. Hunter Mountain
4. Death Valley

Answer: 3

Although not publicized, Apple does use code names for each version of iOS. Previous examples include Big Bear, Apex, Kirkwood, and Telluride. These, and all the others, are apparently ski resorts. Hunter Mountain is a relatively small mountain (3,200 feet), so if it does get used, perhaps it would be a minor update!

Installing Xcode

Once you receive confirmation of becoming an iOS developer, you will be able to log in to the iOS Dev Center at https://developer.apple.com/devcenter/ios/index.action. This same page is used by iOS developers who are not using LiveCode and is full of support documents that can help you create native applications using Xcode and Objective-C. We don't need all the support documents, but we do need to download Xcode. In the downloads area of the iOS Dev Center page, you will see a link to the current version of Xcode and a link to the older versions as well. The current version is delivered via Mac App Store; when you follow the link, you will see a button that takes you to the App Store application.

Installing Xcode from Mac App Store is very straightforward. It's just like buying any other app from the store, except that it's free! It does require you to use the latest version of Mac OS X. Xcode will show up in your Applications folder. If you are using an older system, then you need to download one of the older versions from the developer page. The older Xcode installation process is much like that of any other Mac application. The older version of Xcode takes a long time to install, but in the end, you should have the Developer folder or a new Xcode application ready for LiveCode.

Coping with newer and older devices

In early 2012, Apple brought to market a new version of iPad. The main selling point of this one compared to iPad 2 is that it has a Retina display.
The original iPads have a resolution of 1024 x 768 and the Retina version has a resolution of 2048 x 1536. If you wish to build applications to take advantage of this, you must get the current version of Xcode from Mac App Store and not one of the older versions from the developer page. The new version of Xcode demands that you work on Mac OS 10.10 or its later versions. So, to fully support the latest devices, you may have to update your system software more than you were expecting! But wait, there's more… By taking a later version of Xcode, you are missing the iOS SDK versions needed to support older iOS devices, such as the original iPhone and iPhone 3G. Fortunately, you can go to Preferences in Xcode where there is a Downloads tab where you can get these older SDKs downloaded in the new version of Xcode. Typically, Apple only allows you to download one version older than the one that is currently provided in Xcode. There are older versions available, but are not accepted by Apple for App Store submission. Pointing LiveCode to the iOS SDKs Open the LiveCode Preferences and choose Mobile Support: Click on the Add Entry button in the upper-right section of the window to see a dialog box that asks whether you are using Xcode 4.2 or 4.3 or a later version. If you choose 4.2, then go on to select the folder named Developer at the root of your hard drive. For 4.3 or later versions, choose the Xcode application itself in your Applications folder. LiveCode knows where to find the SDKs for iOS. Before we make our first mobile app… Now that the required SDKs are installed and LiveCode knows where they are, we can make a stack and test it in a simulator or on a physical device. 
We do, however, have to get the simulators and physical devices warmed up…

Getting ready for test development on an Android device

Simulating on iOS is easier than it is on Android, and testing on a physical device is easier on Android than on iOS, but the setting up of physical Android devices can be horrendous!

Time for action – starting an Android Virtual Device

You will have to dig a little deep into the Android SDK folders to find the Android Virtual Device setup program. You might as well create a shortcut or an alias to it for quicker access. The following steps will help you set up and start an Android virtual device:

1. Navigate to the Android SDK tools folder, located at C:\Program Files (x86)\Android\android-sdk on Windows, or to your Documents/android-sdk-macosx/tools folder on Mac.
2. Open AVD Manager on Windows or android on Mac (the latter looks like a Unix executable file; just double-click on it and the application will open via a command-line window).
3. If you're on Mac, select Manage AVDs… from the Tools menu.
4. Select Tablet from the list of devices if there is one. If not, you can add your own custom devices, as described in the following section.
5. Click on the Start button.
6. Sit patiently while the virtual device starts up!
7. Open LiveCode, create a new Mainstack, and click on Save to save the stack to your hard drive.
8. Navigate to File | Standalone Application Settings….
9. Click on the Android icon and click on the Build for Android checkbox to select it.
10. Close the settings dialog box and take a look at the Development menu. If the virtual machine is up and running, you should see it listed in the Test Target submenu.

Creating an Android Virtual Device

If there are no devices listed when you open the Android Virtual Device (AVD) Manager, you can create your own. To create a device, click on the Create button. The following screenshot will appear when you do so.
Further explanation of the various fields can be found at https://developer.android.com/tools/devices/index.html. After you have created a device, you can click on Start to start the virtual device and change some of the Launch Options. You should typically select Scale display to real size, unless it is too big for your development screen. Then, click on Launch to fire up the emulator. Further information on how to run the emulator can be found at http://developer.android.com/tools/help/emulator.html.

What just happened?

Now that you've opened an Android virtual device, LiveCode will be able to test stacks using this device. Once it has finished loading, that is!

Connecting a physical Android device

Connecting a physical Android device can be extremely straightforward:

1. Connect your device to the system by USB.
2. Select your device from the Development | Test Target submenu.
3. Select Test from the Development menu or click on the Test button in the Tool Bar.

There can be problem cases though, and Google Search will become your best friend before you are done solving these problems! We should look at an example problem case, so that you get an idea of how to solve similar situations that you may encounter.

Using Kindle Fire

When it comes to finding Android devices, the Android SDK recognizes a lot of them automatically. Some devices are not recognized, and you have to do something to help Android Debug Bridge (ADB) find them. ADB is the part of the Android SDK that acts as an intermediary between your device and any software that needs to access the device. In some cases, you will need to go to the Android settings on the device to allow access for development purposes. For example, on an Android 3 (Honeycomb) device, you need to go to the Settings | Applications | Development menu and activate the USB debugging mode.
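The Kindle Fire fix described next involves hand-editing ADB's vendor-ID list, and ADB is fussy about stray blank lines at the end of that file. If you want to rehearse the edit safely first, here is a sketch using a scratch file; the /tmp path and header comments are made up for illustration, and the real file lives at ~/.android/adb_usb.ini:

```shell
# Rehearse the adb_usb.ini edit on a scratch copy -- NOT the real file.
INI=/tmp/adb_usb_demo.ini
printf '# ADB USB VENDOR ID LIST\n# ONE HEX VENDOR ID PER LINE\n' > "$INI"
# Append the Kindle Fire vendor ID with no trailing newline,
# so that no blank line ends up after it.
printf '0x1949' >> "$INI"
grep -c '^0x1949$' "$INI"               # the ID appears exactly once
if [ -n "$(tail -c 1 "$INI")" ]; then
  echo "no blank line after the vendor ID"
fi
```

After making the real edit, restarting ADB with adb kill-server and adb start-server (as shown in the steps that follow) makes it re-read the file.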
Before ADB connects to a Kindle Fire device, that device must first be configured so that it allows the connection. This is enabled by default on the first generation Kindle Fire device. On all other Kindle Fire models, go to the device settings screen, select Security, and set Enable ADB to On. The original Kindle Fire model comes with USB debugging already enabled, but the ADB system doesn't know about the device at all. You can fix this!

Time for action – adding Kindle Fire to ADB

It only takes one line of text to add Kindle Fire to the list of devices that ADB knows about. The hard part is tracking down the text file to edit and getting ADB to restart after making the required changes. Things are more involved on Windows than on Mac, because you also have to configure the USB driver, so the two systems are shown here as separate sets of steps. The steps for adding a Kindle Fire to ADB on Windows are as follows:

1. In Windows Explorer, navigate to C:\Users\yourusername\.android, where the adb_usb.ini file is located.
2. Open the adb_usb.ini text file in a text editor. The file has no visible line breaks, so it is better to use WordPad than Notepad.
3. On the line after the three instruction lines, type 0x1949. Make sure that there are no blank lines; the last character in the text file should be the 9 at the end of 0x1949. Now, save the file.
4. Navigate to C:\Program Files (x86)\Android\android-sdk\extras\google\usb_driver, where android_winusb.inf is located.
5. Right-click on the file and, in Properties | Security, select Users from the list and click on Edit to set the permissions so that you are allowed to write to the file.
6. Open the android_winusb.inf file in Notepad.
7. Add the following three lines to both the [Google.NTx86] and [Google.NTamd64] sections and save the file:

;Kindle Fire
%SingleAdbInterface% = USB_Install, USB\VID_1949&PID_0006
%CompositeAdbInterface% = USB_Install, USB\VID_1949&PID_0006&MI_01

8. You need to set the Kindle so that it uses the Google USB driver that you just edited. In the Windows control panel, navigate to Device Manager and find the Kindle entry in the list under USB.
9. Right-click on the Kindle entry and choose Update Driver Software….
10. Choose the option that lets you find the driver on your local drive, navigate to the google\usb_driver folder, and then select it to be the new driver.
11. When the driver is updated, open a command window (a handy trick to open a command window is to Shift-right-click on the desktop and choose "Open command window here"). Change directory to where the ADB tool is located by typing:

cd C:\Program Files (x86)\Android\android-sdk\platform-tools

12. Type the following three lines of code and press Enter after each line:

adb kill-server
adb start-server
adb devices

You should see the Kindle Fire listed (as an obscure-looking number), as well as the virtual device if you still have that running.

The steps for a Mac (MUCH simpler!) system are as follows:

1. Navigate to where the adb_usb.ini file is located. On Mac, in Finder, select Go | Go to Folder… and type ~/.android/.
2. Open the adb_usb.ini file in a text editor.
3. On the line after the three instruction lines, type 0x1949. Make sure that there are no blank lines; the last character in the text file should be the 9 at the end of 0x1949.
4. Save the adb_usb.ini file.
5. Open Terminal, which is in Applications/Utilities.
6. You can let OS X know how to find ADB from anywhere by typing the following line (replace yourusername with your actual username and change the path if you've installed the Android SDK to some other location):

export PATH=$PATH:/Users/yourusername/Documents/android-sdk-macosx/platform-tools

7. Now, try the same three lines as we did on Windows:

adb kill-server
adb start-server
adb devices

Again, you should see the Kindle Fire listed here.

What just happened?

I suspect that you're going to have nightmares about all these steps! It took a lot of research on the Web to find out some of these obscure hacks. The general case with Android devices on Windows is that you have to modify the USB driver for the device to be handled by the Google USB driver, and you may have to modify the adb_usb.ini file (on Mac too) for the device to be considered an ADB-compatible device.

Getting ready for test development on an iOS device

If you carefully went through all these Android steps, especially on Windows, you will hopefully be amused by the brevity of this section! There is a catch though; you can't test directly on an iOS device from LiveCode. We'll look at what you have to do instead in a moment, but first, we'll look at the steps required to test an app in the iOS simulator.

Time for action – using the iOS simulator

The initial steps are much like what we did for Android apps, but the process becomes a lot quicker in later steps. Remember, this only applies to Mac OS; you can only do these things on Windows if you are running Mac OS in a virtual machine, which may have performance issues (and is most likely not covered by the Mac OS user agreement!). In other words, get a Mac if you intend to develop for iOS. The following steps will help you achieve that:

1. Open LiveCode, create a new Mainstack, and save the stack to your hard drive.
2. Select File and then Standalone Application Settings….
3. Click on the iOS icon to select the Build for iOS checkbox.
4. Close the settings dialog box and take a look at the Test Target menu under Development. You will see a list of simulator options for iPhone and iPad and different versions of iOS.
5. To start the iOS simulator, select an option and click on the Test button.

What just happened?

This was all it took for us to get the testing done using the iOS simulators! To test on a physical iOS device, we need to create an application file first. Let's do that.

Appiness at last!

At this point, you should be able to create a new Mainstack, save it, select either iOS or Android in the Standalone Settings dialog box, and see simulators or virtual devices in the Development | Test menu item. In the case of an Android app, you will also see your device listed if it is connected via USB at the time.

Time for action – testing a simple stack in the simulators

Feel free to make things that are more elaborate than the ones we make in these steps! The following instructions assume that you know how to find things by yourself in the object inspector palette:

1. Open LiveCode, create a new Mainstack, and save it someplace where it is easy to find a moment from now.
2. Set the card window to the size 480 x 320 and uncheck the Resizable checkbox.
3. Drag a label field to the top-left corner of the card window and set its contents to something appropriate. Hello World might do.
4. If you're developing on Windows, skip to step 11.
5. Open the Standalone Application Settings dialog box, click on the iOS icon, and click on the Build for iOS checkbox.
6. Under Orientation Options, set the iPhone Initial Orientation to Landscape Left.
7. Close the dialog box.
8. Navigate to the Development | Test Target submenu and choose an iPhone Simulator.
9. Select Test from the Development menu. You should now be able to see your test stack running in the iOS simulator!
10. As discussed earlier, launch the Android virtual device.
11. Open the Standalone Application Settings dialog box, click on the Android icon, and click on the Build for Android checkbox.
12. Under User Interface Options, set the Initial Orientation to Landscape.
13. Close the dialog box. If the virtual device is running by now, do whatever it takes to get past the locked home screen, if that's what it is showing.
14. From the Development | Test Target submenu, choose the Android emulator.
15. Select Test from the Development menu. You should now see your test stack running in the Android emulator!

What just happened?

All being well, you just made and ran your first mobile app on both Android and iOS! For an encore, we should try this on physical devices, if only to give Android a chance to show how easily it can be done. There is a whole can of worms we haven't opened yet that has to do with getting an iOS device configured so that it can be used for testing. You could visit the iOS Provisioning Portal at https://developer.apple.com/ios/manage/overview/index.action and look at the How To tab in each of the different sections.

Time for action – testing a simple stack on devices

Now, let's try running our tests on physical devices. Get your USB cables ready and connect the devices to your computer. Let's go through the steps for an Android device first:

1. You should still have Android selected in Standalone Application Settings.
2. Get your device to its home screen, past the initial lock screen if there is one.
3. Choose Development | Test Target and select your Android device. It may well say "Android" and a very long number.
4. Choose Development | Test. The stack should now be running on your Android device.

Now, we'll go through the steps to test a simple stack on an iOS device:

1. Change the Standalone Application Settings back to iOS.
2. Under Basic Application Settings of the iOS settings is a Profile drop-down menu of the provisioning files that you have installed. Choose one that is configured for the device you are going to test.
3. Close the dialog box and choose Save as Standalone Application… from the File menu.
4. In Finder, locate the folder that was just created and open it to reveal the app file itself. As we didn't give the stack a sensible name, it will be named Untitled 1.
5. Open Xcode, which is in the Developer folder you installed earlier, in the Applications subfolder.
6. In Xcode, choose Devices from the Window menu if it isn't already selected. You should see your device listed. Select it, and if you see a button labeled Use for Development, click on that button.
7. Drag the app file straight from the Finder window to your device in the Devices window. You should see a green circle with a + sign. You can also click on the + sign below Installed Apps and locate your app file in the Finder window. You can also replace or delete an installed app from this window.
8. You can now open the app on your iOS device!

What just happened?

In addition to getting a test stack to work on real devices, we also saw how easy it is, once everything is configured, to test a stack straight on an Android device. If you are developing an app that is to be deployed on both Android and iOS, you may find that the fastest way to work is to use the iOS Simulator for iOS tests and to test directly on an Android device instead of using the Android SDK virtual devices.

Have a go hero – Nook

Until recently, the Android support for the Nook Color from Barnes & Noble wasn't good enough to install LiveCode apps. It seems to have improved, though, and the Nook store could well be another worthwhile app store for you to target. Investigate the sign-up process, download their SDK, and so on. With any luck, some of the processes that you've learned while signing up for the other stores will also apply to the Nook store. You can start the signing-up process at https://nookdeveloper.barnesandnoble.com.
Further reading

The SDK providers, Google and Apple, have extensive pages of information on how to set up development environments, create certificates and provisioning files, and so on. The information covers a lot of topics that don't apply to LiveCode, so try not to get lost! These URLs are good starting points if you want to read further:

http://developer.android.com/
http://developer.apple.com/ios/

Summary

Signing up for programs, downloading files, using command lines all over the place, and patiently waiting for the Android emulator to launch. Fortunately, you only have to go through it once.

In this article, we worked through a number of tasks that you have to do before you create a mobile app in LiveCode. We had to sign up as an iOS developer before we could download and install Xcode and the iOS SDKs. We then downloaded and installed the Android SDK and configured LiveCode for devices and simulators. We also covered some topics that will be useful once you are ready to upload a finished app, showing how to sign up for the Android Market and Amazon Appstore. There will be a few more mundane things that we have to cover at the end of the article, but not for a while! Next up, we will start to play with some of the special abilities of mobile devices.

Resources for Article:

Further resources on this subject:
LiveCode: Loops and Timers [article]
Creating Quizzes [article]
Getting Started with LiveCode for Mobile [article]
Packt
03 Jun 2015
12 min read

Adding a Graphical User Interface

In this article by Dr. Edward Lavieri, the author of Getting Started with Unity 5, you will learn how to use Unity 5's new User Interface (UI) system. (For more resources related to this topic, see here.)

An overview of graphical user interfaces

A Graphical User Interface or GUI (pronounced gooey) is a collection of visual components, such as text, buttons, and images, that facilitates a user's interaction with software. GUIs are also used to provide feedback to players. In the case of our game, the GUI allows players to interact with our game. Without a GUI, the user would have no visual indication of how to use the game. Imagine software without any on-screen indicators of how to use it. The following image shows how early user interfaces were anything but intuitive.

We use GUIs all the time and might not pay too close attention to them, unless they are poorly designed. If you've ever tried to figure out how to use an app on your smartphone or could not figure out how to perform a specific action with desktop software, you've most likely encountered a poorly designed GUI.

Functions of a GUI

Our goal is to create a GUI for our game that both informs the user and allows for interaction between the game and the user. To that end, GUIs have two primary purposes: feedback and control. Feedback is generated by the game for the user, and control is given to the user and managed by user input. Let's look at each of these more closely.

Feedback

Feedback can come in many forms. The most common forms of game feedback are visual and audio. Visual feedback can be something as simple as text on the game screen. An example would be a game player's current score, ever-present on the game screen. Games that include dialog systems, where the player interacts with non-player characters (NPCs), usually have text feedback on the screen that shows the NPC's responses. Visual feedback can also be non-textual, such as smoke, fire, explosions, or other graphic effects.
Audio feedback can be as simple as a click sound when the user clicks or taps on a button, or as complex as a radar ping when an enemy submarine is detected on a long-distance sonar scan. You can probably think of all the audio feedback your favorite game provides. When you run your cart over a coin, a sound effect is played so there is no question that you earned the coin. If you take a moment to consider all of the audio feedback you are exposed to in games, you'll begin to appreciate its significance. Less typical feedback includes device vibration, which is sometimes used with smartphone applications and console games. Some attractions have taken feedback to another level through seat movement and vibration, dispensing liquid and vapor, and introducing chemicals that provide olfactory input.

Control

Giving players control of the game is the second function of GUIs. There is a wide gamut of types of control. The simplest is using buttons or menus in a game. A game might have a graphical icon of a backpack that, when clicked, gives the user access to the game's inventory management system. Control seems like an easy concept, and it is. Interestingly, many popular console games lack good GUI interfaces, especially when it comes to control. If you play console games, think about how many times you have to refer to the printed or in-game manual. Do you intuitively know all of the controller key mappings? How do you jump, switch weapons, crouch, throw a grenade, or go into stealth mode? In defense of the game studios that publish these games, there is a lot of control, and it can be difficult to make it all intuitive. By extension, control is often physical in addition to graphical. Physical components of control include keyboards, mice, trackballs, console controllers, microphones, and other devices.

Feedback and control

Feedback and control GUI elements are often paired.
When you click or tap a button, it usually has both visual and audio effects, as well as executing the user's action. When you click (control) on a treasure chest, it opens (visual feedback) and you hear the creak of the old wooden hinges (audio feedback). This example shows the power of adding feedback to control actions.

Game Layers

At a primitive level, there are three layers to every game. The core or base level is the Game Layer. The top layer is the User Layer; this is the actual person playing your game. The layer in between, the GUI Layer, serves as an intermediary between a game and its player. It becomes clear that designing and developing intuitive, well-functioning GUIs is important to a game's functionality, the user experience, and a game's success.

Unity 5's UI system

Unity's UI system has recently been re-engineered and is now more powerful than ever. Perhaps the most important concept to grasp is the Canvas object. All UI elements are contained in a canvas. Projects and scenes can have more than one canvas. You can think of a canvas as a container for UI elements.

Canvas

To create a canvas, you simply navigate to and select GameObject | UI | Canvas from the drop-down menu. You can see from the GameObject | UI menu pop-up that there are 11 different UI elements. Alternatively, you can create your first UI element, such as a button, and Unity will automatically create a canvas for you and add it to your Hierarchy view. When you create subsequent UI elements, simply highlight the canvas in the Hierarchy view and then navigate to the GameObject | UI menu to select a new UI element.
Here is a brief description of each of the UI elements:

- Panel: A frame object
- Button: A standard button that can be clicked
- Text: Text with standard text formatting
- Image: Images can be simple, sliced, tiled, and filled
- Raw Image: A texture file
- Slider: A slider with minimum and maximum values
- Scrollbar: A scrollbar with values between 0 and 1
- Toggle: A standard checkbox; can also be grouped
- Input Field: A text input field
- Canvas: The game object container for UI elements
- Event System: Allows us to trigger scripts from UI elements; an Event System is automatically created when you create a canvas

You can have multiple canvases in your game. As you start building larger games, you'll likely find a use for more than one canvas.

Render mode

There are a few settings in the Inspector view that you should be aware of regarding your canvas game object. The first setting is the render mode. There are three settings: Screen Space – Overlay, Screen Space – Camera, and World Space. In the first render mode, Screen Space – Overlay, the canvas is automatically resized when the user changes the size or resolution of the game screen. The second render mode, Screen Space – Camera, has a plane distance property that determines how far the canvas is rendered from the camera. The third render mode, World Space, gives you the most control and can be manipulated much like any other game object. I recommend experimenting with the different render modes so you know which one you like best and when to use each one.

Creating a GUI

Creating a GUI in Unity is a relatively easy task. We first create a canvas, or have Unity create one for us when we create our first UI element. Next, we simply add the desired UI elements to our canvas. Once all the necessary elements are in our canvas, you can arrange and format them. It is often best to switch to 2D mode in the Scene view when placing UI elements on the canvas. This simply makes the task a bit easier.
If you have used earlier versions of Unity, you'll note that several things have changed regarding creating and referencing GUI elements. For example, you'll need to include the using UnityEngine.UI; statement before referencing UI components. Also, instead of referencing GUI text as public GUIText waterHeld;, you now use public Text waterHeld;.

Heads-up displays

A game's heads-up display (HUD) is graphical and textual information available to the user at all times. No action should be required of the user other than to look at a specific region of the screen to read the displays. For example, if you are playing a car-racing game, you might have an odometer, speedometer, compass, fuel tank level, air pressure, and other visual indicators always on the screen.

Creating a HUD

Here are the basic steps to create a HUD:

1. Open the game project and load the scene.
2. Navigate to and select the GameObject | UI | Text option from the drop-down menu. This will result in a Canvas game object being added to the Hierarchy view, along with a text child item.
3. Select the child item in the Hierarchy view. Then, in the Inspector view, change the text to what you want displayed on the screen.
4. In the Inspector view, you can change the font size. If necessary, you can change the Horizontal Overflow option from Wrap to Overflow.
5. Zoom out in the Scene view until you can see the GUI canvas. Use the transform tools to place your new GUI element in the top-left corner of the screen. Depending on how you are viewing the scene in the Scene view, you might need to use the hand tool to rotate the scene. So, if your GUI text appears backwards, just rotate the scene until it is correct.
6. Repeat steps 2 through 5 until you've created all the HUD elements you need for your game.

Mini-maps

Miniature maps, or mini-maps, provide game players with a small visual aid that helps them maintain perspective and direction in a game. These mini-maps can be used for many different purposes, depending on the game.
Some examples include the ability to view a mini-map that overlooks an enemy encampment; a zoomed-out view of the game map with friendly and enemy force indicators; and a mini-map that shows the overall tunnel map while the main game screen views the current section of tunnel.

Creating a Mini-Map

Here are the steps used to create a mini-map for our game:

1. Navigate to and select GameObject | Camera from the top menu.
2. In the Hierarchy view, change the name from Camera to Mini-Map.
3. With the mini-map camera selected, go to the Inspector view and click on the Layer button, then Add Layer in the pop-up menu. In the next available User Layer, add the name Mini-Map.
4. Select the Mini-Map option in the Hierarchy view, and then select Layer | Mini-Map. Now the new mini-map camera is assigned to the Mini-Map layer.
5. Next, we'll ensure the main camera is not rendering the Mini-Map layer. Select the Main Camera option in the Hierarchy view. In the Inspector view, select Culling Mask, and then deselect Mini-Map from the pop-up menu.
6. Now we are ready to finish the configuration of our mini-map camera. Select the Mini-Map camera in the Hierarchy view. Using the transform tools in the Scene view, adjust the camera object so that it shows the area of the game environment you want visible via the mini-map.
7. In the Inspector view, under Camera, make the settings match the following values:
   - Clear Flags: Depth only
   - Culling Mask: Everything
   - Projection: Orthographic
   - Size: 25
   - Clipping Planes: Near 0.3; Far 1000
   - Viewport Rect: X 0.75; Y 0.75; W 1; H 1
   - Depth: 1
   - Rendering Path: Use Player Settings
   - Target Texture: None
   - Occlusion Culling: Selected
   - HDR: Not selected
8. With the Mini-Map camera still selected, right-click on each of the Flare Layer, GUI Layer, and Audio Listener components in the Inspector view and select Remove Component.
9. Save your scene and your project. You are ready to test your mini-map.

Mini-maps can be very powerful game components.
There are a few things to keep in mind if you are going to use mini-maps in your games:

- Make sure the mini-map's size does not obstruct too much of the game environment. There is nothing worse than getting shot by an enemy that you could not see because a mini-map was in the way.
- The mini-map should have a purpose; we do not include them in games simply because they are cool. They take up screen real estate and should only be used if needed, such as helping the player make informed decisions. In our game, the player is able to keep an eye on Colt's farm animals while he is out gathering water and corn.
- Items should be clearly visible on the mini-map. Many games use red dots for enemies, yellow for neutral forces, and blue for friendlies. This type of color coding provides users with a lot of information at a very quick glance.
- Ideally, the user should have the flexibility to move the mini-map to a corner of their choosing and toggle it on and off. In our game, we placed the mini-map in the top-right corner of the game screen so that the HUD objects would not be in the way.

Summary

In this article, you learned about the UI system in Unity 5. You gained an appreciation for the importance of GUIs in the games we create.

Resources for Article:

Further resources on this subject:
- Bringing Your Game to Life with AI and Animations [article]
- Looking Back, Looking Forward [article]
- Introducing the Building Blocks for Unity Scripts [article]
Microsoft Azure – Developing Web API for Mobile Apps

Packt
03 Jun 2015
9 min read
Azure Websites is an excellent platform for deploying and managing Web APIs; however, Microsoft Azure provides another alternative in the form of Azure Mobile Services, which targets mobile application developers. In this article by Nikhil Sachdeva, coauthor of the book Building Web Services with Microsoft Azure, we delve into the capabilities of Azure Mobile Services and how it provides a quick and easy development ecosystem to develop Web APIs that support mobile apps. (For more resources related to this topic, see here.)

Creating a Web API using Mobile Services

In this section, we will create a Mobile Services-enabled Web API using Visual Studio 2013. For our fictitious scenario, we will create an Uber-like service, but for medical emergencies. In the case of a medical emergency, users will have the option to send a request using their mobile device. Additionally, third-party applications and services can integrate with the Web API to display doctor availability. All requests sent to the Web API will follow this process flow:

1. The request will be persisted to a data store.
2. An algorithm will find a doctor that matches the incoming request based on availability and proximity.
3. Push notifications will be sent to update the physician and patient.

Creating the project

Mobile Services provides two options to create a project:

- Using the Management portal, we can create a new Mobile Service and download a preassembled package that contains the Web API as well as the targeted mobile platform project
- Using Visual Studio templates

The Management portal approach is easier to implement and does give you a jumpstart by creating and configuring the project. However, for the scope of this article, we will use the Visual Studio template approach. For more information on creating a Mobile Services Web API using the Azure Management Portal, please refer to http://azure.microsoft.com/en-us/documentation/articles/mobile-services-dotnet-backend-windows-store-dotnet-get-started/.
Azure Mobile Services provides a Visual Studio 2013 template to create a .NET Web API; we will use this template for our scenario. Note that the Azure Mobile Services template is only available from Visual Studio 2013 Update 2 onward. Creating a Mobile Service in Visual Studio 2013 requires the following steps:

1. Create a new Azure Mobile Service project and assign it a Name, Location, and Solution. Click OK.
2. In the next tab, we have a familiar ASP.NET project type dialog. However, we notice a few differences from the traditional ASP.NET dialog, which are as follows:
   - The Web API option is enabled by default and is the only choice available
   - The Authentication tab is disabled by default
   - The Test project option is disabled
   - The Host in the cloud option automatically suggests Mobile Services and is currently the only choice
3. Select the default settings and click on OK. Visual Studio 2013 prompts developers to enter their Azure credentials in case they are not already logged in. For more information on Azure tools for Visual Studio, please visit https://msdn.microsoft.com/en-us/library/azure/ee405484.aspx.
4. Since we are building a new Mobile Service, the next screen gathers information about how to configure the service. We can specify existing Azure resources in our subscription or create new ones from within Visual Studio. Select the appropriate options and click on Create.

The options are described here:

- Subscription: The name of the Azure subscription where the service will be deployed. Select from the dropdown if multiple subscriptions are available.
- Name: The name of the Mobile Services deployment; this will eventually become the root DNS URL for the mobile service unless a custom domain is specified (for example, contoso.azure-mobile.net).
- Runtime: This allows selection of the runtime.
  Note that, as of writing this book, only the .NET framework was supported in Visual Studio, so this option is currently prepopulated and disabled.
- Region: Select the Azure data center where the Web API will be deployed. As of writing this book, Mobile Services is available in the following regions: West US, East US, North Europe, East Asia, and West Japan. For details on the latest regional availability, please refer to http://azure.microsoft.com/en-us/regions/#services.
- Database: By default, a SQL Azure database gets associated with every Mobile Services deployment. It comes in handy if SQL is being used as the data store. However, in scenarios where different data stores such as table storage or MongoDB may be used, we still create this SQL database. We can select a free 20 MB SQL database or an existing paid standard SQL database. For more information about SQL tiers, please visit http://azure.microsoft.com/en-us/pricing/details/sql-database.
- Server user name: Provide the server user name for the Azure SQL database.
- Server password: Provide a password for the Azure SQL database.

This process creates the required entities in the configured Azure subscription. Once completed, we have a new Web API project in the Visual Studio solution. The following screenshot is a representation of a new Mobile Service project:

When we create a Mobile Service Web API project, the following NuGet packages are referenced in addition to the default ASP.NET Web API NuGet packages:

- WindowsAzure MobileServices Backend: This package enables developers to build a scalable and secure .NET mobile backend hosted in Microsoft Azure. We can also incorporate structured storage, user authentication, and push notifications. (Assembly: Microsoft.WindowsAzure.Mobile.Service)
- Microsoft Azure Mobile Services .NET Backend Tables: This package contains the common infrastructure needed when exposing structured storage as part of the .NET mobile backend hosted in Microsoft Azure.
  (Assembly: Microsoft.WindowsAzure.Mobile.Service.Tables)
- Microsoft Azure Mobile Services .NET Backend Entity Framework Extension: This package contains all types necessary to surface structured storage (using Entity Framework) as part of the .NET mobile backend hosted in Microsoft Azure. (Assembly: Microsoft.WindowsAzure.Mobile.Service.Entity)

Additionally, the following third-party packages are installed:

- EntityFramework: Since Mobile Services provides a default SQL database, it leverages Entity Framework to provide an abstraction for the data entities.
- AutoMapper: AutoMapper is a convention-based object-to-object mapper. It is used to map legacy custom entities to DTO objects in Mobile Services.
- OWIN Server and related assemblies: Mobile Services uses OWIN as the default hosting mechanism. The current template also adds Microsoft OWIN Katana packages to run the solution in IIS, and OWIN security packages for Google, Azure AD, Twitter, and Facebook.
- Autofac: This is the favorite Inversion of Control (IoC) framework.
- Azure Service Bus: Microsoft Azure Service Bus provides the Notification Hub functionality.

We now have our Mobile Services Web API project created. The default project added by Visual Studio is not an empty project but a sample implementation of a Mobile Service-enabled Web API. In fact, a controller and an Entity Data Model are already defined in the project. If we hit F5 now, we can see a running sample in the local dev environment.

Note that Mobile Services modifies the WebApiConfig file under the App_Start folder to accommodate some initialization and configuration changes:

{
    ConfigOptions options = new ConfigOptions();

    HttpConfiguration config = ServiceConfig.Initialize(new ConfigBuilder(options));
}

In the preceding code, the ServiceConfig.Initialize method defined in the Microsoft.WindowsAzure.Mobile.Service assembly is called to load the hosting provider for our mobile service.
It loads all assemblies from the current application domain and searches for types with the HostConfigProviderAttribute. If it finds one, the custom host provider is loaded; otherwise, the default host provider is used. Let's extend the project to develop our scenario.

Defining the data model

We now create the required entities and data model. Note that while the entities have been kept simple for this article, in a real-world application it is recommended to define a data architecture before creating any data entities. For our scenario, we create two entities that inherit from EntityData. These are described here.

Record

Record is an entity that represents data for the medical emergency. We use the Record entity when invoking CRUD operations using our controller. We also use this entity to update doctor allocation and the status of the request, as shown:

namespace Contoso.Hospital.Entities
{
    /// <summary>
    /// Emergency Record for the hospital
    /// </summary>
    public class Record : EntityData
    {
        public string PatientId { get; set; }
        public string InsuranceId { get; set; }
        public string DoctorId { get; set; }
        public string Emergency { get; set; }
        public string Description { get; set; }
        public string Location { get; set; }
        public string Status { get; set; }
    }
}

Doctor

The Doctor entity represents the doctors that are registered practitioners in the area; the service will search for the availability of a doctor based on the properties of this entity. We will also assign the primary DoctorId to the Record type when a doctor is assigned to an emergency.
The schema for the Doctor entity is as follows:

namespace Contoso.Hospital.Entities
{
    public class Doctor : EntityData
    {
        public string Speciality { get; set; }
        public string Location { get; set; }
        public bool Availability { get; set; }
    }
}

Summary

In this article, we looked at a solution for developing a Web API that targets mobile developers.

Resources for Article:

Further resources on this subject:
- Security in Microsoft Azure [article]
- Azure Storage [article]
- High Availability, Protection, and Recovery using Microsoft Azure [article]
Playing with Physics

Packt
03 Jun 2015
26 min read
In this article by Maxime Barbier, author of the book SFML Blueprints, we will add physics to this game and turn it into a new one. By doing this, we will learn:

- What a physics engine is
- How to install and use the Box2D library
- How to pair the physics engine with SFML for the display
- How to add physics to the game

In this article, we will learn the magic of physics. We will also do some mathematics, but relax, it's for conversion only. Now, let's go! (For more resources related to this topic, see here.)

A physics engine – késako?

We will speak about physics engines, but the first question is "what is a physics engine?", so let's explain it. A physics engine is a software component or library that is able to simulate physics, for example, the Newton-Euler equations that describe the movement of a rigid body. A physics engine is also able to manage collisions, and some of them can deal with soft bodies and even fluids. There are different kinds of physics engines, mainly categorized into real-time and non-real-time engines. The first kind is mostly used in video games or simulators, and the second in high-performance scientific simulation, in the conception of special effects in cinema, and in animation. As our goal is to use the engine in a video game, let's focus on real-time engines. Here again, there are two important types of engines: one for 2D and the other for 3D. Of course, you can use a 3D engine in a 2D world, but it's preferable to use a 2D engine for optimization purposes. There are plenty of engines, but not all of them are open source.

3D physics engines

For 3D games, I advise you to use the Bullet physics library. It was integrated into the Blender software, and was used in the creation of some commercial games and also in the making of films. It is a really good engine written in C/C++ that can deal with rigid and soft bodies, fluids, collisions, forces... and everything that you need.
2D physics engines

As previously said, in a 2D environment you can use a 3D physics engine; you just have to ignore the depth (Z axis). However, the most interesting option is to use an engine optimized for the 2D environment. There are several engines like this, and the most famous ones are Box2D and Chipmunk. Both of them are really good and neither is better than the other, but I had to make a choice, which was Box2D. I made this choice not only because of its C++ API, which allows you to use overloading, but also because of the big community involved in the project.

Physics engines compared to game engines

Do not mistake a physics engine for a game engine. A physics engine only simulates a physical world, without anything else. There are no graphics and no game logic, only physics simulation. On the contrary, a game engine most of the time includes a physics engine paired with a rendering technology (such as OpenGL or DirectX), some predefined logic depending on the goal of the engine (RPG, FPS, and so on), and sometimes artificial intelligence. So as you can see, a game engine is more complete than a physics engine. The two best-known engines are Unity and Unreal Engine, which are both very complete. Moreover, they are free for non-commercial usage. So why don't we directly use a game engine? This is a good question. Sometimes, it's better to use something that is already made instead of reinventing it. However, do we really need all the functionality of a game engine for this project? More importantly, what do we need it for? Let's see:

- A graphic output
- A physics engine that can manage collisions

Nothing else is required. So as you can see, using a game engine for this project would be like killing a fly with a bazooka. I hope that you have understood the aim of a physics engine, the differences between a game engine and a physics engine, and the reason for the choices made for the project.

Using Box2D

As previously said, Box2D is a physics engine.
It has a lot of features, but the most important ones for the project are the following (taken from the Box2D documentation):

- Collision: This functionality is very interesting as it allows our tetriminos to interact with each other
  - Continuous collision detection
  - Rigid bodies (convex polygons and circles)
  - Multiple shapes per body
- Physics: This functionality will allow a piece to fall down, and more
  - Continuous physics with a time-of-impact solver
  - Joint limits, motors, and friction
  - Fairly accurate reaction forces/impulses

As you can see, Box2D provides all that we need in order to build our game. There are a lot of other features usable with this engine, but they don't interest us right now, so I will not describe them in detail. However, if you are interested, you can take a look at the official website for more details on the Box2D features (http://box2d.org/about/). It's important to note that Box2D uses meters, kilograms, seconds, and radians for angles as units, while SFML uses pixels, seconds, and degrees. So we will need to make some conversions. I will come back to this later.

Preparing Box2D

Now that Box2D has been introduced, let's install it. You will find the list of available versions on the Google Code project page at https://code.google.com/p/box2d/downloads/list. Currently, the latest stable version is 2.3. Once you have downloaded the source code (from the compressed file or using SVN), you will need to build it.

Install

Once you have successfully built your Box2D library, you will need to configure your system or IDE to find the Box2D library and headers. The newly built library can be found in the /path/to/Box2D/build/Box2D/ directory and is named libBox2D.a. On the other hand, the headers are located in the /path/to/Box2D/Box2D/ directory. If everything is okay, you will find a Box2D.h file in that folder.
On Linux, the following command installs Box2D system-wide without requiring any further configuration:

    sudo make install

Pairing Box2D and SFML

Now that Box2D is installed and your system is configured to find it, let's build the physics "hello world": a falling square. As noted earlier, Box2D uses meters, kilograms, seconds, and radians, while SFML uses pixels, seconds, and degrees. Converting radians to degrees or vice versa is not difficult, but pixels to meters... that is another story. In fact, there is no way to convert a pixel to a meter unless the number of pixels per meter is fixed. This is the technique that we will use. So let's start by creating some utility functions. We should be able to convert radians to degrees, degrees to radians, meters to pixels, and finally pixels to meters. We will also need to fix the pixels-per-meter value. As we don't need any class for these functions, we will define them in a converter namespace. This results in the following code snippet:

namespace converter
{
    constexpr double PIXELS_PER_METERS = 32.0;
    constexpr double PI = 3.14159265358979323846;

    template<typename T>
    constexpr T pixelsToMeters(const T& x) { return x / PIXELS_PER_METERS; }

    template<typename T>
    constexpr T metersToPixels(const T& x) { return x * PIXELS_PER_METERS; }

    template<typename T>
    constexpr T degToRad(const T& x) { return PI * x / 180.0; }

    template<typename T>
    constexpr T radToDeg(const T& x) { return 180.0 * x / PI; }
}

As you can see, there is no difficulty here. We start by defining some constants and then the conversion functions. I've chosen to make the functions templates to allow the use of any number type. In practice, it will mostly be double or int. The conversion functions are also declared constexpr to allow the compiler to calculate the value at compile time when possible (for example, with a constant as a parameter).
It's interesting because we will use this primitive a lot.

Box2D, how does it work?

Now that we can convert SFML units to Box2D units and vice versa, we can pair Box2D with SFML. But first, how exactly does Box2D work? Box2D works like most physics engines:

1. You start by creating an empty world with some gravity.
2. Then, you create some object patterns. Each pattern contains the shape of the object, its position, its type (static or dynamic), and some other characteristics such as its density, friction, and energy restitution.
3. You ask the world to create a new object defined by the pattern.
4. In each game loop, you have to update the physical world with a small step, just like the world in the games we've already made.

Because the physics engine does not display anything on the screen, we will need to loop over all the objects and display them ourselves. Let's start by creating a simple scene with two kinds of objects: a ground and squares. The ground will be fixed and the squares will not. The squares will be generated by a user event: a mouse click. This project is very simple, but the goal is to show you how to use Box2D and SFML together in a simple case study. A more complex one will come later. We will need three functionalities for this small project:

- Create a shape
- Display the world
- Update/fill the world

Of course, there is also the initialization of the world and window. Let's start with the main function:

- As always, we create a window for the display and we limit the FPS to 60. I will come back to this point with the displayWorld function.
- We create the physical world from Box2D, with gravity as a parameter.
- We create a container that will store all the physical objects, for memory-cleanup purposes.
- We create the ground by calling the createBox function (explained just after).
- Now it is time for the minimalist game loop:
  - Close event management
  - Create a box by detecting that the left button of the mouse is pressed
- Finally, we clean the memory before exiting the program:

int main(int argc, char* argv[])
{
    sf::RenderWindow window(sf::VideoMode(800, 600, 32), "04_Basic");
    window.setFramerateLimit(60);

    b2Vec2 gravity(0.f, 9.8f);
    b2World world(gravity);

    std::list<b2Body*> bodies;
    bodies.emplace_back(book::createBox(world, 400, 590, 800, 20, b2_staticBody));

    while(window.isOpen())
    {
        sf::Event event;
        while(window.pollEvent(event))
        {
            if (event.type == sf::Event::Closed)
                window.close();
        }
        if (sf::Mouse::isButtonPressed(sf::Mouse::Left))
        {
            int x = sf::Mouse::getPosition(window).x;
            int y = sf::Mouse::getPosition(window).y;
            bodies.emplace_back(book::createBox(world, x, y, 32, 32));
        }
        displayWorld(world, window);
    }

    for(b2Body* body : bodies)
    {
        delete static_cast<sf::RectangleShape*>(body->GetUserData());
        world.DestroyBody(body);
    }
    return 0;
}

For the moment, except for the Box2D world, nothing should surprise you, so let's continue with the box creation. This function is under the book namespace.
b2Body* createBox(b2World& world,int pos_x,int pos_y,int size_x,int size_y,b2BodyType type = b2_dynamicBody)
{
    b2BodyDef bodyDef;
    bodyDef.position.Set(converter::pixelsToMeters<double>(pos_x),
                         converter::pixelsToMeters<double>(pos_y));
    bodyDef.type = type;

    b2PolygonShape b2shape;
    b2shape.SetAsBox(converter::pixelsToMeters<double>(size_x/2.0),
                     converter::pixelsToMeters<double>(size_y/2.0));

    b2FixtureDef fixtureDef;
    fixtureDef.density = 1.0;
    fixtureDef.friction = 0.4;
    fixtureDef.restitution = 0.5;
    fixtureDef.shape = &b2shape;

    b2Body* res = world.CreateBody(&bodyDef);
    res->CreateFixture(&fixtureDef);

    sf::Shape* shape = new sf::RectangleShape(sf::Vector2f(size_x,size_y));
    shape->setOrigin(size_x/2.0,size_y/2.0);
    shape->setPosition(sf::Vector2f(pos_x,pos_y));
    if(type == b2_dynamicBody)
        shape->setFillColor(sf::Color::Blue);
    else
        shape->setFillColor(sf::Color::White);
    res->SetUserData(shape);

    return res;
}

This function contains a lot of new functionality. Its goal is to create a rectangle of a specific size at a predefined position. The type of this rectangle (dynamic or static) is also set by the caller. Here again, let's explain the function step by step:

- We create b2BodyDef. This object contains the definition of the body to create, so we set its position and its type. The position is relative to the object's center of gravity.
- Then, we create b2Shape. This is the physical shape of the object; in our case, a box. Note that the SetAsBox() method doesn't take the same parameters as sf::RectangleShape: the parameters are half the size of the box. This is why we need to divide the values by two.
- We create b2FixtureDef and initialize it. This object holds all the physical characteristics of the object: its density, friction, restitution, and shape.
- Then, we properly create the object in the physical world.
- Now, we create the display of the object. This will be more familiar because we only use SFML. We create a rectangle and set its position, origin, and color.
- As we need to associate an SFML shape with the physical object in order to display it, we use a functionality of Box2D: the SetUserData() function. This function takes a void* as a parameter and holds it internally, so we use it to keep track of our SFML shape.
- Finally, the body is returned by the function. This pointer has to be stored so that we can clean the memory later. This is the reason for the bodies container in main().

Now we have the capability to simply create a box and add it to the world, so let's render it to the screen. This is the goal of the displayWorld function:

void displayWorld(b2World& world,sf::RenderWindow& render)
{
    world.Step(1.0/60,int32(8),int32(3));
    render.clear();
    for (b2Body* body=world.GetBodyList(); body!=nullptr; body=body->GetNext())
    {
        sf::Shape* shape = static_cast<sf::Shape*>(body->GetUserData());
        shape->setPosition(converter::metersToPixels(body->GetPosition().x),
                           converter::metersToPixels(body->GetPosition().y));
        shape->setRotation(converter::radToDeg<double>(body->GetAngle()));
        render.draw(*shape);
    }
    render.display();
}

This function takes the physical world and the window as parameters. Here again, let's explain it step by step:

- We update the physical world. If you remember, we have set the frame rate to 60. This is why we use 1.0/60 as the time step here. The two other parameters only tune the precision of the simulation. In good code, the time step should not be hardcoded like this; we should use a clock to be sure that the value always matches the real elapsed time. It hasn't been done here in order to keep the focus on the important part: the physics.
- We reset the screen, as usual.
- Here is the new part: we loop over the bodies stored by the world and get back the SFML shapes.
- We update each SFML shape with the information taken from its physical body and then render it on the screen.
- Finally, we render the result on the screen.

As you can see, it's not really difficult to pair SFML with Box2D, and it's not a pain to add it. However, we have to take care of the data conversions. This is the real trap: pay attention to the precision required (int, float, double) and everything should be fine. Now that you have all the keys in hand, let's build a real game with physics.

Adding physics to a game

Now that Box2D has been introduced with a basic project, let's focus on the real one. We will modify our basic Tetris to get Gravity-Tetris, alias Gravitris. The game controls will be the same as in Tetris, but the game engine will not be: we will replace the board with a real physics engine. With this project, we will reuse a lot of work previously done. As already said, the goal of some of our classes is to be reusable in any game using SFML. Here, this will be done without any difficulty, as you will see. The classes concerned are those that deal with user events (Action, ActionMap, and ActionTarget), but also Configuration and ResourceManager. There are still some changes to make in the Configuration class, more precisely in its enums and initialization methods, because we don't use the exact same sounds and events that were used in the Asteroid game, so we need to adjust them to our needs.
Enough with explanations, let's do it with the following code: class Configuration {    public:        Configuration() = delete;        Configuration(const Configuration&) = delete;        Configuration& operator=(const Configuration&) = delete;               enum Fonts : int {Gui};        static ResourceManager<sf::Font,int> fonts;               enum PlayerInputs : int { TurnLeft,TurnRight, MoveLeft, MoveRight,HardDrop};        static ActionMap<int> playerInputs;               enum Sounds : int {Spawn,Explosion,LevelUp,};        static ResourceManager<sf::SoundBuffer,int> sounds;               enum Musics : int {Theme};        static ResourceManager<sf::Music,int> musics;               static void initialize();           private:        static void initTextures();        static void initFonts();        static void initSounds();        static void initMusics();        static void initPlayerInputs(); }; As you can see, the changes are in the enum, more precisely in Sounds and PlayerInputs. We change the values into more adapted ones to this project. We still have the font and music theme. Now, take a look at the initialization methods that have changed: void Configuration::initSounds() {    sounds.load(Sounds::Spawn,"media/sounds/spawn.flac");    sounds.load(Sounds::Explosion,"media/sounds/explosion.flac");    sounds.load(Sounds::LevelUp,"media/sounds/levelup.flac"); } void Configuration::initPlayerInputs() {    playerInputs.map(PlayerInputs::TurnRight,Action(sf::Keyboard::Up));    playerInputs.map(PlayerInputs::TurnLeft,Action(sf::Keyboard::Down));    playerInputs.map(PlayerInputs::MoveLeft,Action(sf::Keyboard::Left));    playerInputs.map(PlayerInputs::MoveRight,Action(sf::Keyboard::Right));  playerInputs.map(PlayerInputs::HardDrop,Action(sf::Keyboard::Space,    Action::Type::Released)); } No real surprises here. We simply adjust the resources to our needs for the project. As you can see, the changes are really minimalistic and easily done. 
This is the aim of all reusable modules or classes. Here is a piece of advice, however: keep your code as modular as possible. This will allow you to change a part very easily, and also to import any generic part of your project into another one.

The Piece class

Now that the Configuration class is done, the next step is the Piece class. This class is the most modified one; actually, as there is too much change involved, let's build it from scratch. A piece has to be considered as an ensemble of four squares that are independent of one another. This will allow us to split a piece at runtime. Each of these squares will be a different fixture attached to the same body: the piece. We will also need to apply some forces to a piece, especially to the current piece, which is controlled by the player. These forces can move the piece horizontally or rotate it. Finally, we will need to draw the piece on the screen. The result is the following code snippet:

constexpr int BOOK_BOX_SIZE = 32;
constexpr int BOOK_BOX_SIZE_2 = BOOK_BOX_SIZE / 2;

class Piece : public sf::Drawable
{
    public:
        Piece(const Piece&) = delete;
        Piece& operator=(const Piece&) = delete;

        enum TetriminoTypes {O=0,I,S,Z,L,J,T,SIZE};
        static const sf::Color TetriminoColors[TetriminoTypes::SIZE];

        Piece(b2World& world,int pos_x,int pos_y,TetriminoTypes type,float rotation);
        ~Piece();

        void update();
        void rotate(float angle);
        void moveX(int direction);
        b2Body* getBody()const;

    private:
        virtual void draw(sf::RenderTarget& target, sf::RenderStates states) const override;
        b2Fixture* createPart(int pos_x,int pos_y,TetriminoTypes type); ///< position is relative to the piece in the matrix coordinates (0 to 3)

        b2Body* _body;
        b2World& _world;
};

Some parts of the class don't change, such as the TetriminoTypes and TetriminoColors enums.
This is normal because we don't change any piece's shape or colors. The implementation of the class, on the other hand, is very different from the previous version. Let's see it:

Piece::Piece(b2World& world,int pos_x,int pos_y,TetriminoTypes type,float rotation) : _world(world)
{
    b2BodyDef bodyDef;
    bodyDef.position.Set(converter::pixelsToMeters<double>(pos_x),
                         converter::pixelsToMeters<double>(pos_y));
    bodyDef.type = b2_dynamicBody;
    bodyDef.angle = converter::degToRad(rotation);
    _body = world.CreateBody(&bodyDef);

    switch(type)
    {
        case TetriminoTypes::O : {
            createPart(0,0,type); createPart(0,1,type);
            createPart(1,0,type); createPart(1,1,type);
        }break;
        case TetriminoTypes::I : {
            createPart(0,0,type); createPart(1,0,type);
            createPart(2,0,type); createPart(3,0,type);
        }break;
        case TetriminoTypes::S : {
            createPart(0,1,type); createPart(1,1,type);
            createPart(1,0,type); createPart(2,0,type);
        }break;
        case TetriminoTypes::Z : {
            createPart(0,0,type); createPart(1,0,type);
            createPart(1,1,type); createPart(2,1,type);
        }break;
        case TetriminoTypes::L : {
            createPart(0,1,type); createPart(0,0,type);
            createPart(1,0,type); createPart(2,0,type);
        }break;
        case TetriminoTypes::J : {
            createPart(0,0,type); createPart(1,0,type);
            createPart(2,0,type); createPart(2,1,type);
        }break;
        case TetriminoTypes::T : {
            createPart(0,0,type); createPart(1,0,type);
            createPart(1,1,type); createPart(2,0,type);
        }break;
        default:break;
    }
    _body->SetUserData(this);
    update();
}

The constructor is the most important method of this class. It initializes the physical body and adds each square to it by calling createPart(). Then, we set the user data to the piece itself.
This will allow us to navigate from the physics to SFML and vice versa. Finally, we synchronize the physical object with the drawable one by calling the update() function:

Piece::~Piece()
{
    for(b2Fixture* fixture=_body->GetFixtureList(); fixture!=nullptr; fixture=fixture->GetNext())
    {
        sf::ConvexShape* shape = static_cast<sf::ConvexShape*>(fixture->GetUserData());
        fixture->SetUserData(nullptr);
        delete shape;
    }
    _world.DestroyBody(_body);
}

The destructor loops over all the fixtures attached to the body, destroys all the SFML shapes, and then removes the body from the world:

b2Fixture* Piece::createPart(int pos_x,int pos_y,TetriminoTypes type)
{
    b2PolygonShape b2shape;
    b2shape.SetAsBox(converter::pixelsToMeters<double>(BOOK_BOX_SIZE_2),
                     converter::pixelsToMeters<double>(BOOK_BOX_SIZE_2),
                     b2Vec2(converter::pixelsToMeters<double>(BOOK_BOX_SIZE_2+(pos_x*BOOK_BOX_SIZE)),
                            converter::pixelsToMeters<double>(BOOK_BOX_SIZE_2+(pos_y*BOOK_BOX_SIZE))),
                     0);

    b2FixtureDef fixtureDef;
    fixtureDef.density = 1.0;
    fixtureDef.friction = 0.5;
    fixtureDef.restitution = 0.4;
    fixtureDef.shape = &b2shape;

    b2Fixture* fixture = _body->CreateFixture(&fixtureDef);

    sf::ConvexShape* shape = new sf::ConvexShape((unsigned int)b2shape.GetVertexCount());
    shape->setFillColor(TetriminoColors[type]);
    shape->setOutlineThickness(1.0f);
    shape->setOutlineColor(sf::Color(128,128,128));
    fixture->SetUserData(shape);

    return fixture;
}

This method adds a square to the body at a specific place. It starts by creating a physical shape as the desired box and then adds it to the body. It also creates the SFML square that will be used for the display and attaches it as user data to the fixture. We don't set the initial position because the constructor will do it.
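Both createBox() and createPart() lean on the converter helpers (pixelsToMeters, metersToPixels, degToRad, radToDeg), which are defined earlier in the book. For a self-contained sketch, they might look like the following; note that the 32 pixels-per-meter scale is an assumption here, so use whatever ratio your game adopted:

```cpp
namespace converter {
    // Assumed scale: how many pixels represent one Box2D meter.
    constexpr double PIXELS_PER_METER = 32.0;
    constexpr double PI = 3.14159265358979323846;

    template<typename T = double>
    constexpr T pixelsToMeters(double pixels) {
        return static_cast<T>(pixels / PIXELS_PER_METER);
    }

    template<typename T = double>
    constexpr T metersToPixels(double meters) {
        return static_cast<T>(meters * PIXELS_PER_METER);
    }

    template<typename T = double>
    constexpr T degToRad(double degrees) {
        return static_cast<T>(degrees * PI / 180.0);
    }

    template<typename T = double>
    constexpr T radToDeg(double radians) {
        return static_cast<T>(radians * 180.0 / PI);
    }
}
```

With this scale, the 800x20 pixel ground of the first project becomes a 25x0.625 meter box, which is in the size range where Box2D behaves well.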
void Piece::update()
{
    const b2Transform& xf = _body->GetTransform();

    for(b2Fixture* fixture = _body->GetFixtureList(); fixture != nullptr; fixture=fixture->GetNext())
    {
        sf::ConvexShape* shape = static_cast<sf::ConvexShape*>(fixture->GetUserData());
        const b2PolygonShape* b2shape = static_cast<b2PolygonShape*>(fixture->GetShape());
        const uint32 count = b2shape->GetVertexCount();
        for(uint32 i=0;i<count;++i)
        {
            b2Vec2 vertex = b2Mul(xf,b2shape->m_vertices[i]);
            shape->setPoint(i,sf::Vector2f(converter::metersToPixels(vertex.x),
                                           converter::metersToPixels(vertex.y)));
        }
    }
}

This method synchronizes the position and rotation of all the SFML shapes with the physical positions and rotations calculated by Box2D. Because each piece is composed of several parts (fixtures), we need to iterate through them and update them one by one.

void Piece::rotate(float angle)
{
    _body->ApplyTorque((float32)converter::degToRad(angle),true);
}

void Piece::moveX(int direction)
{
    _body->ApplyForceToCenter(b2Vec2(converter::pixelsToMeters(direction),0),true);
}

These two methods apply some force to the object to move or rotate it. We forward the job to the Box2D library.

b2Body* Piece::getBody()const {return _body;}

void Piece::draw(sf::RenderTarget& target, sf::RenderStates states) const
{
    for(const b2Fixture* fixture=_body->GetFixtureList(); fixture!=nullptr; fixture=fixture->GetNext())
    {
        sf::ConvexShape* shape = static_cast<sf::ConvexShape*>(fixture->GetUserData());
        if(shape)
            target.draw(*shape,states);
    }
}

This function draws the entire piece. However, because the piece is composed of several parts, we need to iterate over them and draw them one by one in order to display the entire piece. This is done by using the user data saved in the fixtures.
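The only subtle line in update() is b2Mul(xf, b2shape->m_vertices[i]): it takes a vertex stored in the body's local coordinates, rotates it by the body angle, and translates it by the body position. Stripped of the Box2D types, the math reduces to the following standalone sketch:

```cpp
#include <cmath>

struct Vec2 { double x, y; };

// What b2Mul(xf, v) computes: rotate the local-space vertex by the
// body angle, then translate it by the body position (world units).
Vec2 applyTransform(const Vec2& position, double angle, const Vec2& vertex) {
    const double c = std::cos(angle);
    const double s = std::sin(angle);
    return { position.x + c * vertex.x - s * vertex.y,
             position.y + s * vertex.x + c * vertex.y };
}
```

Each transformed vertex is already in world meters, so metersToPixels is all that remains before handing it to setPoint(), which is exactly what update() does.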
Summary

A physics engine has its own particularities, such as its units and its game loop, and we have learned how to deal with them. Finally, we learned how to pair Box2D with SFML, integrated our fresh knowledge into our existing Tetris project, and built a new, fun game.

Resources for Article:

Further resources on this subject:
- Skinning a character [article]
- Audio Playback [article]
- Sprites in Action [article]
Packt
03 Jun 2015
14 min read

Pointers and references

In this article by Ivo Balbaert, author of the book Rust Essentials, we will go through pointers and memory safety. (For more resources related to this topic, see here.)

The stack and the heap

When a program starts, by default a 2 MB chunk of memory called the stack is granted to it. The program will use its stack to store all its local variables and function parameters; for example, an i32 variable takes 4 bytes of the stack. When our program calls a function, a new stack frame is allocated to it. Through this mechanism, the stack knows the order in which the functions are called, so that the functions return correctly to the calling code and possibly return values as well. Dynamically sized types, such as strings or arrays, can't be stored on the stack. For these values, a program can request memory space on its heap, which is a potentially much bigger piece of memory than the stack. When possible, stack allocation is preferred over heap allocation because accessing the stack is a lot more efficient.

Lifetimes

All variables in Rust code have a lifetime. Suppose we declare an n variable with the let n = 42u32; binding. Such a value is valid from where it is declared to where it is no longer referenced, which is called the lifetime of the variable. This is illustrated in the following code snippet:

fn main() {
    let n = 42u32;
    let n2 = n; // a copy of the value from n to n2
    life(n);
    println!("{}", m); // error: unresolved name `m`.
    println!("{}", o); // error: unresolved name `o`.
}

fn life(m: u32) -> u32 {
    let o = m;
    o
}

The lifetime of n ends when main() ends; in general, the start and end of a lifetime happen in the same scope. The words lifetime and scope are synonymous, but we generally use the word lifetime to refer to the extent of a reference. As in other languages, local variables or parameters declared in a function do not exist anymore after the function has finished executing; in Rust, we say that their lifetime has ended.
This is the case for the m and o variables in the preceding code snippet, which are only known in the life function. Likewise, the lifetime of a variable declared in a nested block is restricted to that block, like phi in the following example:

{
    let phi = 1.618;
}
println!("The value of phi is {}", phi); // is error

Trying to use phi when its lifetime is over results in the error: unresolved name `phi`. The lifetime of a value can be indicated in the code by an annotation, for example 'a, which reads as "the lifetime a", where a is simply an indicator; it could also be written as 'b, 'n, or 'life. It's common to see single letters being used to represent lifetimes. In the preceding example, an explicit lifetime indication was not necessary since there were no references involved. All values tagged with the same lifetime have the same maximum lifetime. In the following example, we have a transform function that explicitly declares the lifetime of its s parameter to be 'a:

fn transform<'a>(s: &'a str) { /* ... */ }

Note the <'a> indication after the name of the function. In nearly all cases, this explicit indication is not needed because the compiler is smart enough to deduce the lifetimes, so we can simply write this:

fn transform_without_lifetime(s: &str) { /* ... */ }

Here is an example where, even when we indicate a lifetime specifier 'a, the compiler does not allow our code. Let's suppose that we define a Magician struct as follows:

struct Magician {
    name: &'static str,
    power: u32
}

We will get an error message if we try to construct the following function:

fn return_magician<'a>() -> &'a Magician {
    let mag = Magician { name: "Gandalf", power: 4625 };
    &mag
}

The error message is error: `mag` does not live long enough. Why does this happen? The lifetime of the mag value ends when the return_magician function ends, but this function nevertheless tries to return a reference to the Magician value, which no longer exists.
Such an invalid reference is known as a dangling pointer. This is a situation that would clearly lead to errors and cannot be allowed. The lifespan of a pointer must always be shorter than or equal to that of the value it points to, thus avoiding dangling (or null) references. In some situations, the decision to determine whether the lifetime of an object has ended is complicated, but in almost all cases, the borrow checker does this for us automatically by inserting lifetime annotations in the intermediate code, so we don't have to do it. This is known as lifetime elision. For example, when working with structs, we can safely assume that the struct instance and its fields have the same lifetime. Only when the borrow checker is not completely sure do we need to indicate the lifetime explicitly; however, this happens only on rare occasions, mostly when references are returned. One example is when we have a struct with fields that are references. The following code snippet explains this:

struct MagicNumbers {
    magn1: &u32,
    magn2: &u32
}

This won't compile and will give us the following error: missing lifetime specifier [E0106]. Therefore, we have to change the code as follows:

struct MagicNumbers<'a> {
    magn1: &'a u32,
    magn2: &'a u32
}

This specifies that both the struct and the fields have the lifetime 'a.
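A common way out of the return_magician problem, sketched here rather than taken from the book, is to return the Magician by value: ownership moves to the caller, so no reference has to outlive its data:

```rust
struct Magician {
    name: &'static str,
    power: u32,
}

// Returning the struct by value moves ownership to the caller;
// no reference to a dead local variable is ever created.
fn return_magician() -> Magician {
    Magician { name: "Gandalf", power: 4625 }
}

fn main() {
    let mag = return_magician();
    println!("{} has power {}", mag.name, mag.power);
}
```

Since the caller now owns the value, its lifetime is the caller's concern and no lifetime annotation is needed on the function at all.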
Perform the following exercise: explain why the following code won't compile:

fn main() {
    let m: &u32 = {
        let n = &5u32;
        &*n
    };
    let o = *m;
}

Answer the same question for this code snippet as well:

let mut x = &3;
{
    let mut y = 4;
    x = &y;
}

Copying values and the Copy trait

In the code that we discussed in the earlier section, the value of n is copied to a new location each time n is assigned via a new let binding or passed as a function argument:

let n = 42u32;
// no move, only a copy of the value:
let n2 = n;
life(n);

fn life(m: u32) -> u32 {
    let o = m;
    o
}

At a certain moment in the program's execution, we would have four memory locations that contain the copied value 42, which we can visualize as follows: each value disappears (and its memory location is freed) when the lifetime of its corresponding variable ends, which happens at the end of the function or code block in which it is defined. Nothing much can go wrong with this Copy behavior, in which the value (its bits) is simply copied to another location on the stack. Many built-in types, such as u32 and i64, work like this, and this copy-value behavior is defined in Rust as the Copy trait, which u32 and i64 implement. You can also implement the Copy trait for your own type, provided all of its fields or items implement Copy. For example, the MagicNumber struct, which contains a field of the u64 type, can have the same behavior.
There are two ways to indicate this. One way is to explicitly name the Copy implementation as follows:

struct MagicNumber {
    value: u64
}
impl Copy for MagicNumber {}

Otherwise, we can annotate it with a Copy attribute:

#[derive(Copy)]
struct MagicNumber {
    value: u64
}

This now means that we can create two different copies, mag and mag2, of a MagicNumber by assignment:

let mag = MagicNumber {value: 42};
let mag2 = mag;

They are copies because they have different memory addresses (the values shown will differ at each execution):

println!("{:?}", &mag as *const MagicNumber); // address is 0x23fa88
println!("{:?}", &mag2 as *const MagicNumber); // address is 0x23fa80

The *const is a so-called raw pointer. A type that does not implement the Copy trait is called non-copyable. Another way to accomplish this is by letting MagicNumber implement the Clone trait:

#[derive(Clone)]
struct MagicNumber {
    value: u64
}

Then, we can clone() mag into a different object called mag3, effectively making a copy as follows:

let mag3 = mag.clone();
println!("{:?}", &mag3 as *const MagicNumber); // address is 0x23fa78

mag3 is a new pointer referencing a new copy of the value of mag.

Pointers

The n variable in the let n = 42i32; binding is stored on the stack. Values on the stack or the heap can be accessed by pointers. A pointer is a variable that contains the memory address of some value. To access the value it points to, dereference the pointer with *. This happens automatically in simple cases, such as in println! or when a pointer is given as a parameter to a method. For example, in the following code, m is a pointer containing the address of n:

let m = &n;
println!("The address of n is {:p}", m);
println!("The value of n is {}", *m);
println!("The value of n is {}", m);

This prints out the following output, which differs for each program run:

The address of n is 0x23fb34
The value of n is 42
The value of n is 42

So, why do we need pointers?
When we work with dynamically allocated values, such as a String, that can change in size, the memory address of that value is not known at compile time. Due to this, the memory address needs to be calculated at runtime. So, to be able to keep track of it, we need a pointer for it, whose value will change when the location of the String in memory changes. The compiler automatically takes care of the memory allocation of pointers and the freeing up of memory when their lifetime ends. You don't have to do this yourself like in C/C++, where you could mess up by freeing memory at the wrong moment or multiple times. The incorrect use of pointers in languages such as C++ leads to all kinds of problems. However, Rust enforces a strong set of rules at compile time, called the borrow checker, so we are protected against them. We have already seen them in action, but from here onwards, we'll explain the logic behind their rules. Pointers can also be passed as arguments to functions, and they can be returned from functions, but the compiler severely restricts their usage. When passing a pointer value to a function, it is always better to use the reference-dereference &* mechanism, as shown in this example:

let q = &42;
println!("{}", square(q)); // 1764

fn square(k: &i32) -> i32 {
    *k * *k
}

References

In our previous example, m, which had the &n value, is the simplest form of pointer, and it is called a reference (or borrowed pointer); m is a reference to the stack-allocated n variable and has the &i32 type because it points to a value of the i32 type. In general, when n is a value of the T type, then the &n reference is of the &T type. Here, n is immutable, so m is also immutable; for example, if you try to change the value of n through m with *m = 7;, you will get a cannot assign to immutable borrowed content `*m` error. Contrary to C, Rust does not let you change an immutable variable via its pointer.
Since there is no danger of changing the value of n through a reference, multiple references to an immutable value are allowed; they can only be used to read the value, for example:

let o = &n;
println!("The address of n is {:p}", o);
println!("The value of n is {}", *o);

It prints out as described earlier:

The address of n is 0x23fb34
The value of n is 42

We could represent this situation in the memory as follows: it is clear that working with pointers such as this, or in much more complex situations, necessitates much stricter rules than the Copy behavior. For example, the memory can only be freed when there are no variables or pointers associated with it anymore. And when the value is mutable, can it be changed through any of its pointers? Mutable references do exist, and they are declared as let m = &mut n. However, n also has to be a mutable value. When n is immutable, the compiler rejects the m mutable reference binding with the error: cannot borrow immutable local variable `n` as mutable. This makes sense, since immutable variables cannot be changed even when you know their memory location. To reiterate, in order to change a value through a reference, both the variable and its reference have to be mutable, as shown in the following code snippet:

let mut u = 3.14f64;
let v = &mut u;
*v = 3.15;
println!("The value of u is now {}", *v);

This will print: The value of u is now 3.15. Now, the value at the memory location of u is changed to 3.15. However, note that we now cannot change (or even print) that value anymore by using the u variable: u = u * 2.0; gives us a compiler error: cannot assign to `u` because it is borrowed. We say that borrowing a variable (by making a reference to it) freezes that variable; the original u variable is frozen (and no longer usable) until the reference goes out of scope. In addition, we can only have one mutable reference: let w = &mut u; results in the error: cannot borrow `u` as mutable more than once at a time.
The compiler even adds the following note to the previous code line, let v = &mut u;: note: previous borrow of `u` occurs here; the mutable borrow prevents subsequent moves, borrows, or modification of `u` until the borrow ends. This is logical; the compiler is (rightfully) concerned that a change to the value of u through one reference might change its memory location, because u might change in size and would then no longer fit within its previous location, having to be relocated to another address. This would render all other references to u invalid, and even dangerous, because through them we might inadvertently change another variable that has taken up the previous location of u! A mutable value can also be changed by passing its address as a mutable reference to a function, as shown in this example:

let mut m = 7;
add_three_to_magic(&mut m);
println!("{}", m); // prints out 10

With the add_three_to_magic function declared as follows:

fn add_three_to_magic(num: &mut i32) {
    *num += 3; // value is changed in place through +=
}

To summarize, when n is a mutable value of the T type, then only one mutable reference to it (of the &mut T type) can exist at any time. Through this reference, the value can be changed.

Using ref in a match

If you want to get a reference to a matched variable inside a match expression, use the ref keyword, as shown in the following example:

fn main() {
    let n = 42;
    match n {
        ref r => println!("Got a reference to {}", r),
    }
    let mut m = 42;
    match m {
        ref mut mr => {
            println!("Got a mutable reference to {}", mr);
            *mr = 43;
        },
    }
    println!("m has changed to {}!", m);
}

This prints out:

Got a reference to 42
Got a mutable reference to 42
m has changed to 43!

The r variable inside the match has the &i32 type. In other words, the ref keyword creates a reference for use in the pattern. If you need a mutable reference, use ref mut.
We can also use ref to get a reference to a field of a struct or tuple in a destructuring via a let binding. For example, while reusing the Magician struct, we can extract the name of mag by using ref and then return it from the block:

let mag = Magician { name: "Gandalf", power: 4625 };
let name = {
    let Magician { name: ref ref_to_name, power: _ } = mag;
    *ref_to_name
};
println!("The magician's name is {}", name);

This prints: The magician's name is Gandalf. References are the most common pointer type and have the most possibilities; other pointer types should only be applied in very specific use cases.

Summary

In this article, we learned about the intelligence behind the Rust compiler, which is embodied in the principles of ownership, moving values, and borrowing.

Resources for Article:

Further resources on this subject:
- Getting Started with NW.js [article]
- Creating Random Insults [article]
- Creating Man-made Materials in Blender 2.5 [article]

Packt
03 Jun 2015
6 min read

Running Cucumber

In this article by Shankar Garg, author of the book Cucumber Cookbook, we will cover the following topics:

- Integrating Cucumber with Maven
- Running Cucumber from the Terminal
- Overriding options from the Terminal

(For more resources related to this topic, see here.)

Integrating Cucumber with Maven

Maven has a lot of advantages over other build tools, such as dependency management, a large number of plugins, and the convenience of running integration tests. So let's also integrate our framework with Maven. Maven will allow our test cases to be run in different flavors, such as from the Terminal, integrated with Jenkins, or in parallel execution. So how do we integrate with Maven? Let's find out in the next section.

Getting ready

I am assuming that we know the basics of Maven (the basics of Maven are out of the scope of this book). Follow the upcoming instructions to install Maven on your system and to create a sample Maven project:

- We need to install Maven on our system first, so follow the instructions mentioned on the following blogs:
  - For Windows: http://www.mkyong.com/maven/how-to-install-maven-in-windows/
  - For Mac: http://www.mkyong.com/maven/install-maven-on-mac-osx/
- We can also install the Maven Eclipse plugin by following the instructions mentioned on this blog: http://theopentutorials.com/tutorials/eclipse/installing-m2eclipse-maven-plugin-for-eclipse/.
- To import a Maven project into Eclipse, follow the instructions on this blog: http://www.tutorialspoint.com/maven/maven_eclispe_ide.htm.

How to do it…

Since it is a Maven project, we are going to change the pom.xml file to add the Cucumber dependencies.
First, we are going to declare some custom properties, which we will use to manage the dependency versions:

```xml
<properties>
    <junit.version>4.11</junit.version>
    <cucumber.version>1.2.2</cucumber.version>
    <selenium.version>2.45.0</selenium.version>
    <maven.compiler.version>2.3.2</maven.compiler.version>
</properties>
```

Next, we add the dependency for Cucumber-Java with scope test:

```xml
<!-- Cucumber-Java -->
<dependency>
    <groupId>info.cukes</groupId>
    <artifactId>cucumber-java</artifactId>
    <version>${cucumber.version}</version>
    <scope>test</scope>
</dependency>
```

Then we add the dependency for Cucumber-JUnit with scope test:

```xml
<!-- Cucumber-JUnit -->
<dependency>
    <groupId>info.cukes</groupId>
    <artifactId>cucumber-junit</artifactId>
    <version>${cucumber.version}</version>
    <scope>test</scope>
</dependency>
```

That's it! We have integrated Cucumber and Maven.

How it works…

By following these steps, we have created a Maven project and added the Cucumber-Java dependency. At the moment, this project only has a pom.xml file, but it can be used for adding different modules such as Feature files and Step Definitions. The advantage of using properties is that the version of each dependency is declared in one place in the pom.xml file; otherwise, we might declare a dependency in multiple places and end up with a discrepancy between versions. The Cucumber-Java dependency is the main dependency necessary for the different building blocks of Cucumber. The Cucumber-JUnit dependency is for Cucumber's JUnit Runner, which we use to run Cucumber test cases.

Running Cucumber from the Terminal

Now that we have integrated Cucumber with Maven, running Cucumber from the Terminal will not be a problem. Running any test framework from the Terminal has its own advantages, such as overriding the run configurations mentioned in the code. So how do we run Cucumber test cases from the Terminal?
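As context for the commands that follow: the JUnit Runner class that `mvn test` picks up is conventionally a small annotated class placed under src/test/java. The sketch below uses the Cucumber 1.x API; only the class name RunCukesTest appears in the text, while the package, paths, and options here are illustrative assumptions:

```java
package com.features;

import org.junit.runner.RunWith;
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;

// @RunWith tells JUnit to delegate the run to Cucumber's runtime.
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/java/com/features",  // where the Feature files live (assumed path)
        glue = "com.features",                    // package containing the Step Definitions (assumed)
        plugin = {"pretty"}                       // console report plugin
)
public class RunCukesTest {
}
```

Maven's Surefire plugin runs classes whose names end in Test during `mvn test`, which is how this Runner (and, through it, every Feature) gets executed.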
Let's find out in the next section.

How to do it…

Open the command prompt and cd into the project root directory. First, let's run all the Cucumber Scenarios from the command prompt. Since it's a Maven project, Cucumber has been added as a test-scoped dependency, and all Features sit in the test packages, so run the following command:

```shell
mvn test
```

This is the output:

The previous command runs everything as mentioned in the JUnit Runner class. However, if we want to override the configurations mentioned in the Runner, we need to use the following command:

```shell
mvn test -Dcucumber.options="<<OPTIONS>>"
```

If you need help with these Cucumber options, enter the following command in the command prompt and look at the output:

```shell
mvn test -Dcucumber.options="--help"
```

How it works…

mvn test runs Cucumber Features using Cucumber's JUnit Runner. The @RunWith(Cucumber.class) annotation on the RunCukesTest class tells JUnit to kick off Cucumber. The Cucumber runtime parses the command-line options to know what Feature to run, where the Glue Code lives, what plugins to use, and so on. When you use the JUnit Runner, these options are generated from the @CucumberOptions annotation on your test.

Overriding Options from the Terminal

When it is necessary to override the options mentioned in the JUnit Runner, we need -Dcucumber.options from the Terminal. Let's look at some practical examples.

How to do it…

If we want to run a Scenario by specifying the filesystem path, run the following command and look at the output:

```shell
mvn test -Dcucumber.options="src/test/java/com/features/sample.feature:5"
```

In the preceding command, "5" is the line number in the Feature file at which a Scenario starts.
If we want to run the test cases using Tags, we run the following command and notice the output:

```shell
mvn test -Dcucumber.options="--tags @sanity"
```

The following is the output of the preceding command:

If we want to generate a different report, we can use the following command and see the JUnit report generated at the location mentioned:

```shell
mvn test -Dcucumber.options="--plugin junit:target/cucumber-junit-report.xml"
```

How it works…

When you override the options with -Dcucumber.options, you completely override whatever options are hardcoded in your @CucumberOptions. There is one exception to this rule: the --plugin option. It does not override; instead, it adds a plugin.

Summary

In this article, we learned that for the successful implementation of any testing framework, it is mandatory that test cases can be run in multiple ways, so that people with different competency levels can use it the way they need to. We also covered more advanced topics, such as running Cucumber test cases in parallel through a combination of Cucumber and Maven.

Resources for Article:

Further resources on this subject:
- Signing an application in Android using Maven [article]
- Apache Maven and m2eclipse [article]
- Understanding Maven [article]