
Controlling the Movement of a Robot with Legs

Packt
06 May 2015
18 min read
In this article by Richard Grimmett, author of the book Raspberry Pi Robotics Projects - Second Edition, we will add the ability to move the entire project using legs. In this article, you will be introduced to some of the basics of servo motors and to using Raspberry Pi to control the speed and direction of your legged platform. (For more resources related to this topic, see here.) Even though you've learned to make your robot mobile by adding wheels or tracks, these platforms will only work well on smooth, flat surfaces. Often, you'll want your robot to work in environments where the path is not smooth or flat; perhaps, you'll even want your robot to go upstairs or over other barriers. In this article, you'll learn how to attach your board, both mechanically and electrically, to a platform with legs so that your projects can be mobile in many more environments. Robots that can walk! What could be more amazing than this? In this article, we will cover the following topics: Connecting Raspberry Pi to a two-legged mobile platform using a servo motor controller Creating a program in Linux so that you can control the movement of the two-legged mobile platform Making your robot truly mobile by adding voice control Gathering the hardware In this article, you'll need to add a legged platform to make your project mobile. For a legged robot, there are a lot of choices for hardware. Some robots are completely assembled and others require some assembly; you may even choose to buy the components and construct your own custom mobile platform. Also, I'm going to assume that you don't want to do any soldering or mechanical machining yourself, so let's look at several choices of hardware that are available completely assembled or can be assembled using simple tools (a screwdriver and/or pliers). One of the simplest legged mobile platforms is one that has two legs and four servo motors. The following is an image of this type of platform: You'll use this legged mobile platform in this article because it is the simplest to program and the least expensive, requiring only four servos. To construct this platform, you must purchase the parts and then assemble them yourself. Find the instructions and parts list at http://www.lynxmotion.com/images/html/build112.htm. Another easy way to get all the mechanical parts (except servos) is by purchasing a biped robot kit with six degrees of freedom (DOFs). This will contain the parts needed to construct a six-servo biped, but you can use a subset of the parts for your four-servo biped. These six DOF bipeds can be purchased on eBay or at http://www.robotshop.com/2-wheeled-development-platforms-1.html. You'll also need to purchase the servo motors. Servo motors are similar to the DC motors, except that servo motors are designed to move at specific angles based on the control signals that you send. For this type of robot, you can use standard-sized servos. I like Hitec HS-311 for this robot. They are inexpensive but powerful enough for the operations you'll use for this robot. You can get them on Amazon or eBay. The following is an image of an HS-311 servo: I personally like the 5-V cell phone rechargeable batteries that are available at almost any place that supplies cell phones. Choose one that comes with two USB connectors; you can use the second port to power your servo controller. The mobile power supply shown in the following image mounts well on the biped hardware platform: You'll also need a USB cable to connect your battery to Raspberry Pi. 
You should already have one of these. Now that you have the mechanical parts for your legged mobile platform, you'll need some hardware that will turn the control signals from your Raspberry Pi into voltage levels that can control the servo motors. Servo motors are controlled using a signal called PWM. For a good overview of this type of control, see http://pcbheaven.com/wikipages/How_RC_Servos_Works/ or https://www.ghielectronics.com/docs/18/pwm. Although the Raspberry Pi's GPIO pins do support some limited software pulse-width modulation (SW PWM) signals, unfortunately these signals are not stable enough to accurately control servos. In order to control servos reliably, you should purchase a servo controller that can talk over USB and control the servo motors. These controllers protect your board and make controlling many servos easy. My personal favorite for this application is a simple USB servo motor controller from Pololu that can control six servo motors—the Micro Maestro 6-Channel USB Servo Controller (assembled). This is available at www.pololu.com. The following is an image of the unit:

Make sure you order the assembled version. This piece of hardware will turn USB commands into voltage levels that control your servo motors. Pololu makes a number of different versions of this controller, each able to control a certain number of servos. Once you've chosen your legged platform, simply count the number of servos you need to control and choose a controller that can control that many servos. In this article, you will use a two-legged, four-servo robot, so you'll build the robot by using the six-servo version. Since you are going to connect this controller to Raspberry Pi through USB, you'll also need a USB A to mini-B cable. You'll also need a power cable running from the battery to your servo controller. You'll want to purchase a USB to FTDI cable adapter that has female connectors, for example, the PL2303HX USB to TTL to UART RS232 COM cable available at www.amazon.com. The particular TTL to UART RS232 cable isn't important; what matters is that the cable provides individual connectors to each of the four wires in a USB cable. The following is an image of the cable:

Now that you have all the hardware, let's walk through a quick tutorial of how a two-legged system with servos works and then some step-by-step instructions to make your project walk.

Connecting Raspberry Pi to the mobile platform using a servo controller

Now that you have a legged platform and a servo motor controller, you are ready to make your project walk! Before you begin, you'll need some background on servo motors. Servo motors are somewhat similar to DC motors. However, there is an important difference: while DC motors are generally designed to rotate continuously through 360 degrees at a given speed, servo motors are generally designed to move to specific angles within a limited range. In other words, in the DC motor world, you generally want your motors to spin at a continuous rotation speed that you control. In the servo world, you want to limit the movement of your motor to a specific position. For more information on how servos work, visit http://www.seattlerobotics.org/guide/servos.html or http://www.societyofrobots.com/actuators_servos.shtml.

Connecting the hardware

To make your project walk, you first need to connect the servo motor controller to the servos. There are two connections you need to make: the first is to the servo motors, and the second is to the battery.
In this section, before connecting your controller to your Raspberry Pi, you'll first connect your servo controller to your PC or Linux machine to check whether or not everything is working. The steps for doing so are as follows:

Connect the servos to the controller. The following is an image of your two-legged robot and the four different servo connections:

In order to be consistent, let's connect your four servos to the connections marked from 0 to 3 on the controller by using the following configuration:

0: Left foot
1: Left hip
2: Right foot
3: Right hip

The following is an image of the back of the controller; it will show you where to connect your servos:

Connect these servos to the servo motor controller as follows:

The left foot to 0 (the top connector) and the black cable to the outside (-)
The left hip to connector 1 and the black cable out
The right foot to connector 2 and the black cable out
The right hip to connector 3 and the black cable out

See the following image indicating how to connect servos to the controller:

Now, you need to connect the servo motor controller to your battery. You'll use the USB to FTDI UART cable; plug the red and black cables into the power connector on the servo controller, as shown in the following image:

Now, plug the other end of the USB cable into one of the battery outputs.

Configuring the software

Now, you can connect the motor controller to your PC or Linux machine to see whether or not you can talk to it. Once the hardware is connected, you will use some of the software provided by Pololu to control the servos. The steps to do so are as follows:

Download the Pololu software from http://www.pololu.com/docs/0J40/3.a and install it using the instructions on the website. Once it is installed, run the software; you should see the window shown in the following screenshot:

You will first need to change the Serial mode configuration in Serial Settings, so select the Serial Settings tab; you should see the window shown in the following screenshot:

Make sure that USB Chained is selected; this will allow you to connect to and control the motor controller over USB. Now, go back to the main screen by selecting the Status tab; you can now turn on the four servos. The screen should look as shown in the following screenshot:

Now, you can use the sliders to control the servos. Enable the four servos and make sure that servo 0 moves the left foot; 1, the left hip; 2, the right foot; and 3, the right hip. You've checked the motor controllers and the servos, and you'll now connect the motor controller to Raspberry Pi to control the servos from there. Remove the USB cable from the PC and connect it to Raspberry Pi. The entire system will look as shown in the following image:

Let's now talk to the motor controller from your Raspberry Pi by downloading the Linux code from Pololu at http://www.pololu.com/docs/0J40/3.b. Perhaps the best way to do this is by logging on to Raspberry Pi using vncserver and opening a VNC Viewer window on your PC. To do this, log in to your Raspberry Pi by using PuTTY, and then type vncserver at the prompt to make sure vncserver is running. Then, perform the following steps:

On your PC, open the VNC Viewer application, enter your IP address, and then click on Connect.
Then, enter the password that you created for the vncserver; you should see the Raspberry Pi viewer screen, which should look as shown in the following screenshot:

Open a browser window and go to http://www.pololu.com/docs/0J40/3.b. Click on the Maestro Servo Controller Linux Software link. You will need to download the maestro_linux_100507.tar.gz file to the Download folder. You can also use wget to get this software by typing wget http://www.pololu.com/file/download/maestro-linux-100507.tar.gz?file_id=0J315 in a terminal window.

Go to your Download folder, move the file to your home folder by typing mv maestro_linux_100507.tar.gz .., and then go back to your home folder. Unpack the file by typing tar -xzvf maestro_linux_100507.tar.gz. This will create a folder called maestro_linux. Go to this folder by typing cd maestro_linux and then type ls. You should see the output as shown in the following screenshot:

The document README.txt will give you explicit instructions on how to install the software. Unfortunately, you can't run Maestro Control Center on your Raspberry Pi. The standard version of Maestro Control Center doesn't support the Raspberry Pi graphical system, but you can control your servos by using the UscCmd command-line application. First, type ./UscCmd --list; you should see the following screenshot:

The software now recognizes that you have a servo controller. If you just type ./UscCmd, you can see all the commands you could send to your controller. When you run this command, you can see the result as shown in the following screenshot:

Notice that you can send a servo to a specific target position, although the target is not expressed in degrees, which makes it a bit difficult to know where you are sending your servo. Try typing ./UscCmd --servo 0, 10. The servo will most likely move to its full angle position. Type ./UscCmd --servo 0, 0 and it will prevent the servo from trying to move. In the next section, you'll write some software that will translate your angles to the electronic signals that will move the servos. If you haven't run the Maestro Controller tool and set the Serial Settings setting to USB Chained, your motor controller may not respond.
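The commands above are typed interactively; if you want to script the same checks instead of typing them by hand, a small wrapper such as the following works. This is a sketch, not code from the book: the UscCmd path assumes the folder you just unpacked, and on the Maestro a non-zero target is a pulse width expressed in quarter-microsecond units, with 0 releasing the servo.

```python
#!/usr/bin/env python
# usc_cmd_demo.py - script the UscCmd calls shown above instead of typing them.
import subprocess

# Adjust this to wherever you unpacked maestro_linux.
USC_CMD = "/home/pi/maestro_linux/UscCmd"

def set_target(channel, target):
    """Send one channel to a raw target value via UscCmd (0 releases the servo)."""
    subprocess.check_call([USC_CMD, "--servo", "%d,%d" % (channel, target)])

if __name__ == "__main__":
    set_target(0, 6000)  # roughly a 1.5 ms pulse, near mid-travel
    set_target(0, 0)     # stop driving the servo
```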
Creating a program in Linux to control the mobile platform

Now that you can control your servos by using a basic command-line program, let's control them by programming some movement in Python. In this section, you'll create a Python program that will let you talk to your servos a bit more intuitively. You'll issue commands that tell a servo to go to a specific angle and it will go to that angle. You can then add a set of such commands to allow your legged mobile robot to lean left or right and even take a step forward.

Let's start with a simple program that will make your legged mobile robot's servos turn to 90 degrees; this should be somewhere close to the middle of the 180-degree range you can work within. However, the center, maximum, and minimum values can vary from one servo to another, so you may need to calibrate them. To keep things simple, we will not cover that here. The following screenshot shows the code required for turning the servos:

The following is an explanation of the code:

The #!/usr/bin/python line allows you to make this Python file available for execution from the command line. It will allow you to call this program from your voice command program. We'll talk about this in the next section.

The import serial and import time lines include the serial and time libraries. You need the serial library to talk to your unit via USB. If you have not installed this library, type sudo apt-get install python-serial. You will use the time library later to wait between servo commands.

The PololuMicroMaestro class holds the methods that will allow you to communicate with your motor controller.

The __init__ method opens the USB port associated with your servo motor controller.

The setAngle method converts your desired settings for the servo and angle to the serial command that the servo motor controller needs. The values, such as minTarget and maxTarget, and the structure of the communications—channelByte, commandByte, lowTargetByte, and highTargetByte—come from the manufacturer.

The close method closes the serial port.

Now that you have the class, the __main__ statement of the program instantiates an instance of your servo motor controller class so that you can call it. Now, you can set each servo to the desired position. The default would be to set each servo to 90 degrees. However, the servos weren't exactly centered, so I found that I needed to set the angle of each servo so that my robot has both feet on the ground and both hips centered.

Once you have the basic home position set, you can ask your robot to do different things; the following screenshot shows some examples in simple Python code:

In this case, you are using your setAngle command to set your servos to manipulate your robot. This set of commands first sets your robot to the home position. Then, you can use the feet to lean to the right and then to the left, and then you can use a combination of commands to make your robot step forward with the left and then the right foot. Once you have the program working, you'll want to package all your hardware onto the mobile robot. By following these principles, you can make your robot do many amazing things, such as walk forward and backward, dance, and turn around—any number of movements are possible. The best way to learn these movements is to try positioning the servos in new and different ways.
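The book shows its listing only as screenshots, so the following is a hedged reconstruction of what a class like PololuMicroMaestro can look like, built from the description above and the Maestro's publicly documented "Set Target" compact protocol. The serial port name, the 1-2 ms pulse range, and the example angles are assumptions that you would calibrate against your own servos.

```python
#!/usr/bin/env python
# A sketch of a PololuMicroMaestro-style class (not the book's exact listing).
import serial
import time

class PololuMicroMaestro(object):
    def __init__(self, port="/dev/ttyACM0"):
        # The Maestro's command port usually appears as a USB serial device.
        self.ser = serial.Serial(port=port, baudrate=9600)

    def setAngle(self, channel, angle):
        """Map 0-180 degrees onto a pulse width and send a Set Target command."""
        minTarget, maxTarget = 4000, 8000       # quarter-microseconds, i.e. 1-2 ms
        target = int(minTarget + (maxTarget - minTarget) * angle / 180.0)
        commandByte = 0x84                      # "Set Target", compact protocol
        channelByte = channel
        lowTargetByte = target & 0x7F           # low 7 bits of the target
        highTargetByte = (target >> 7) & 0x7F   # high 7 bits of the target
        self.ser.write(bytearray([commandByte, channelByte,
                                  lowTargetByte, highTargetByte]))

    def close(self):
        self.ser.close()

if __name__ == "__main__":
    servo = PololuMicroMaestro()
    # Home position: both feet flat and both hips centred (tune per servo).
    for channel in (0, 1, 2, 3):
        servo.setAngle(channel, 90)
    time.sleep(1)
    # Example move: shift the feet to lean the robot to one side.
    servo.setAngle(0, 60)
    servo.setAngle(2, 120)
    time.sleep(1)
    servo.close()
```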
system("/home/pi/maestro_linux/robot.py"): This indicates the name of the program you will execute. In this case, your mobile platform will do whatever the robot.py program tells it to. After doing this, you will need to recompile the program, so type make and the pocketsphinx_continuous executable will be created. Run the program by typing ./pocketsphinx_continuous. Disconnect the LAN cable and the mobile platform will now take the forward voice command and execute your program. You should now have a complete mobile platform! When you execute your program, the mobile platform can now move around based on what you have programmed it to do. You can use the command-line arguments, to make your robot do many different actions. Perhaps one voice command can move your robot forward, a different one can move it backwards, and another can turn it right or left. Congratulations! Your robot should now be able to move around in any way you program it to move. You can even have the robot dance. You have now built a two-legged robot and you can easily expand on this knowledge to create robots with even more legs. The following is an image of the mechanical structure of a four-legged robot that has eight DOFs and is fairly easy to create by using many of the parts that you have used to create your two-legged robot; this is my personal favorite because it doesn't fall over and break the electronics: You'll need eight servos and lots of batteries. If you search eBay, you can often find kits for sale for four-legged robots with 12 DOFs, but remember that the battery will need to be much bigger. For this application, you can use an RC (which stands for remote control) battery. RC batteries are nice as they are rechargeable and can provide lots of power, but make sure you either purchase one that is 5 V to 6 V or include a way to regulate the voltage. The following is an image of such a battery, available at most hobby stores: If you use this type of battery, don't forget its charger. The hobby store can help with choosing an appropriate match. Summary Now, you have the ability to build not only wheeled robots but also robots with legs. It is also easy to expand this ability to robots with arms; controlling the servos for an arm is the same as controlling them for legs. Resources for Article: Further resources on this subject: Penetration Testing [article] Testing Your Speed [article] Making the Unit Very Mobile – Controlling the Movement of a Robot with Legs [article]


Why Big Data in the Financial Sector?

Packt
06 May 2015
7 min read
In this article by Rajiv Tiwari, author of the book Hadoop for Finance Essentials, we will see how big data is not just changing the data landscape of the healthcare, human science, telecom, and online retail industries, but is also transforming the way financial organizations treat their massive data. (For more resources related to this topic, see here.)

As shown in the following figure, a study by McKinsey on big data use cases claims that the financial services industry is poised to gain the most from this remarkable technology:

The data in financial organizations is relatively easier to capture compared to other industries, as it is easily accessible from internal systems—transactions and customer details—as well as external systems—FX rates, legal entity data, and so on. Quite simply, the gain per byte of data is the maximum for financial services.

Where do we get the big data in finance?

The data is collected at every stage—be it onboarding of new customers, call center records, or financial transactions. The financial industry is rapidly moving online, and so it has never been easier to capture the data. There are other reasons as well, such as:

Customers no longer need to visit branches to withdraw and deposit money or make investments. They can discuss their requirements with the bank online or over the phone instead of in physical meetings. According to SNL Financial, institutions shut 2,599 branches in 2014 against 1,137 openings, a net loss of 1,462 branches that is just off 2013's record full-year net loss of 1,487 branches. The move brings total US branches down to 94,752, a decline of 1.5 percent. The trend is global and not just in the US.

Electronic channels such as debit/credit cards and mobile devices, through which customers can interact with financial organizations, have increased in the UK, as shown in the following figure. The trend is global and not just in the UK.

Mobile equipment such as computers, smartphones, telephones, or tablets makes it easier and less expensive for customers to transact, which means customers will transact more and generate more data.

Since customer profiles and transaction patterns are rapidly changing, risk models based on smaller data sets are not very accurate. We need to analyze data for longer durations and be able to write complex data algorithms without worrying about computing and data storage capabilities.

When financial organizations combine structured data with unstructured data on social media, the data analysis becomes very powerful. For example, they can get feedback on their new products or TV advertisements by analyzing Twitter, Facebook, and other social media comments.

Big data use cases in the financial sector

The financial sector is also sometimes called the BFSI sector; that is, banking, financial services, and insurance:

Banking includes retail, corporate, business, investment (including capital markets), cards, and other core banking services
Financial services include brokering, payment channels, mutual funds, asset management, and other services
Insurance covers life and general insurance

Financial organizations have been actively using big data platforms for the last few years and their key objectives are:

Complying with regulatory requirements
Better risk analytics
Understanding customer behavior and improving services
Understanding transaction patterns and monetizing using cross-selling of products

Now I will define a few use cases within the financial services industry with real tangible business benefits.
Data archival on HDFS

Archiving data on HDFS is one of the basic use cases for Hadoop in financial organizations and is a quick win. It is likely to provide a very high return on investment. The data is archived on Hadoop and is still available to query (although not in real time), which is far more efficient than archiving on tape and far less expensive than keeping it on databases. Some of the use cases are:

Migrate expensive and inefficient legacy mainframe data and load jobs to the Hadoop platform
Migrate expensive older transaction data from high-end expensive databases to Hadoop HDFS
Migrate unstructured legal, compliance, and onboarding documents to Hadoop HDFS

Regulatory

Financial organizations must comply with regulatory requirements. In order to meet these requirements, the use of traditional data processing platforms is becoming increasingly expensive and unsustainable. A couple of such use cases are:

Checking customer names against a sanctions blacklist is very complicated due to the same or similar names. It is even more complicated when financial organizations have different names or aliases across different systems. With Hadoop, we can apply complex fuzzy matching on name and contact information across massive data sets at a much lower cost.

The BCBS239 regulation states that financial organizations must be able to aggregate risk exposures across the whole group quickly and accurately. With Hadoop, financial organizations can consolidate and aggregate data on a single platform in the most efficient and cost-effective way.
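As a rough illustration of the fuzzy-matching use case (this sketch is not from the book), the screening step can be written as a Hadoop Streaming mapper in Python. The input record format, the sanctions file name, and the similarity threshold are all assumptions; a real implementation would use proper name normalization and a tuned matching library.

```python
#!/usr/bin/env python
# fuzzy_match_mapper.py - sanctions screening as a Hadoop Streaming mapper.
# Assumes customer records arrive on stdin as "customer_id<TAB>full_name" and
# that sanctions_names.txt (one name per line) is shipped to each node,
# e.g. with the -files option of the streaming job.
import sys
import difflib

with open("sanctions_names.txt") as f:
    SANCTIONED = [name.strip().lower() for name in f if name.strip()]

THRESHOLD = 0.85  # illustrative cut-off; would be tuned in practice

for line in sys.stdin:
    try:
        customer_id, name = line.rstrip("\n").split("\t", 1)
    except ValueError:
        continue  # skip malformed records
    candidate = name.strip().lower()
    for target in SANCTIONED:
        score = difflib.SequenceMatcher(None, candidate, target).ratio()
        if score >= THRESHOLD:
            # Emit potential hits for downstream review.
            print("%s\t%s\t%s\t%.2f" % (customer_id, name, target, score))
```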
Fraud detection

Fraud is estimated to cost the financial industry billions of US dollars per year. Financial organizations have invested in Hadoop platforms to identify fraudulent transactions by picking up unusual behavior patterns. Complex algorithms that need to be run on large volumes of transaction data to identify outliers are now possible on the Hadoop platform at a much lower expense.

Tick data

Stock market tick data is real-time data and generated on a massive scale. Live data streams can be processed using real-time streaming technology on the Hadoop infrastructure for quick trading decisions, and older tick data can be used for trending and forecasting using batch Hadoop tools.

Risk management

Financial organizations must be able to measure risk exposures for each customer and effectively aggregate it across entire business divisions. They should be able to score the credit risk for each customer using internal rules. They need to build risk models with intensive calculation on the underlying massive data. All these risk management requirements have two things in common—massive data and intensive calculation. Hadoop can handle both, given its inexpensive commodity hardware and parallel execution of jobs.

Customer behavior prediction

Once the customer data has been consolidated from a variety of sources on a Hadoop platform, it is possible to analyze data and:

Predict mortgage defaults
Predict spending for retail customers
Analyze patterns that lead to customers leaving and customer dissatisfaction

Sentiment analysis – unstructured

Sentiment analysis is one of the best use cases to test the power of unstructured data analysis using Hadoop. Here are a few use cases:

Analyze all e-mail text and call recordings from customers, which indicates whether they feel positive or negative about the products offered to them
Analyze Facebook and Twitter comments to make buy or sell recommendations—analyze the market sentiments on which sectors or organizations will be a better buy for stock investments
Analyze Facebook and Twitter comments to assess the feedback on new products

Other use cases

Big data has the potential to create new non-traditional income streams for financial organizations. As financial organizations store all the payment details of their retailers, they know exactly where, when, and how their customers are spending money. By analyzing this information, financial organizations can develop deep insight into customer intelligence and spending patterns, which they will be able to monetize. A few such possibilities include:

Partner with a retailer to understand where the retailer's customers live, where and when they buy, what they buy, and how much they spend. This information will be used to recommend a sales strategy.
Partner with a retailer to recommend discount offers to loyalty cardholders who use their loyalty cards in the vicinity of the retailer's stores.

Summary

In this article, we learned the use cases of Hadoop across different industry sectors and then detailed a few use cases within the financial sector.

Resources for Article: Further resources on this subject: Hive in Hadoop [article] Hadoop and MapReduce [article] Hadoop Monitoring and its aspects [article]


Introduction to Hadoop

Packt
06 May 2015
11 min read
In this article by Shiva Achari, author of the book Hadoop Essentials, you'll get an introduction to Hadoop, its uses, and its advantages. (For more resources related to this topic, see here.)

Hadoop

In big data, the most widely used system is Hadoop. Hadoop is an open source implementation of big data, which is widely accepted in the industry, and benchmarks for Hadoop are impressive and, in some cases, unmatched by other systems. Hadoop is used in the industry for large-scale, massively parallel, and distributed data processing. Hadoop is highly fault tolerant and configurable to as many levels of fault tolerance as we need, which directly determines the number of times the data is replicated across the cluster.

As we have already touched upon big data systems, the architecture revolves around two major components: distributed storage and parallel processing. In Hadoop, distributed storage is handled by HDFS, and parallel processing is handled by MapReduce. In short, we can say that Hadoop is a combination of HDFS and MapReduce, as shown in the following image:

Hadoop history

Hadoop began from a project called Nutch, an open source, crawler-based search engine that runs on a distributed system. In 2003–2004, Google released its GFS and MapReduce papers, and MapReduce was then implemented in Nutch. Doug Cutting and Mike Cafarella are the creators of Hadoop. When Doug Cutting joined Yahoo!, a new project was created along similar lines to Nutch, which we call Hadoop, and Nutch remained as a separate sub-project. Then, there were different releases, and other separate sub-projects started integrating with Hadoop, which we call the Hadoop ecosystem. The following figure and description depict the history with timelines and milestones achieved in Hadoop:

2002.8: The Nutch project was started
2003.2: The first MapReduce library was written at Google
2003.10: The Google File System paper was published
2004.12: The Google MapReduce paper was published
2005.7: Doug Cutting reported that Nutch now uses the new MapReduce implementation
2006.2: Hadoop code moved out of Nutch into a new Lucene sub-project
2006.11: The Google Bigtable paper was published
2007.2: The first HBase code was dropped by Mike Cafarella
2007.4: Yahoo! ran Hadoop on a 1,000-node cluster
2008.1: Hadoop was made an Apache top-level project
2008.7: Hadoop broke the terabyte data sort benchmark
2008.11: Hadoop 0.19 was released
2011.12: Hadoop 1.0 was released
2012.10: Hadoop 2.0 was released as an alpha
2013.10: Hadoop 2.2.0 was released
2014.10: Hadoop 2.6.0 was released

Advantages of Hadoop

Hadoop has a lot of advantages, and some of them are as follows:

Low cost—runs on commodity hardware: Hadoop can run on average-performing commodity hardware and doesn't require a high-performance system, which can help in controlling cost and achieving scalability and performance. Adding or removing nodes from the cluster is simple, as and when we require. The cost per terabyte is lower for storage and processing in Hadoop.

Storage flexibility: Hadoop can store data in raw format in a distributed environment. Hadoop can process unstructured data and semi-structured data better than most of the available technologies. Hadoop gives full flexibility to process the data and we will not have any loss of data.

Open source community: Hadoop is open source and supported by many contributors with a growing network of developers worldwide.
Many organizations, such as Yahoo!, Facebook, Hortonworks, and others, have contributed immensely toward the progress of Hadoop and other related sub-projects.

Fault tolerant: Hadoop is massively scalable and fault tolerant. Hadoop is reliable in terms of data availability, and even if some nodes go down, Hadoop can recover the data. Hadoop's architecture assumes that nodes can go down and the system should still be able to process the data.

Complex data analytics: With the emergence of big data, data science has also grown leaps and bounds, and we have complex and computation-intensive algorithms for data analysis. Hadoop can process such scalable algorithms for very large-scale data and can process the algorithms faster.

Uses of Hadoop

Some examples of use cases where Hadoop is used are as follows:

Searching/text mining
Log processing
Recommendation systems
Business intelligence/data warehousing
Video and image analysis
Archiving
Graph creation and analysis
Pattern recognition
Risk assessment
Sentiment analysis

Hadoop ecosystem

A Hadoop cluster can consist of thousands of nodes, and it is complex and difficult to manage manually, hence there are some components that assist in the configuration, maintenance, and management of the whole Hadoop system. In this article, we will touch upon the following components, listed by layer and utility/tool name:

Distributed filesystem: Apache HDFS
Distributed programming: Apache MapReduce, Apache Hive, Apache Pig, Apache Spark
NoSQL databases: Apache HBase
Data ingestion: Apache Flume, Apache Sqoop, Apache Storm
Service programming: Apache Zookeeper
Scheduling: Apache Oozie
Machine learning: Apache Mahout
System deployment: Apache Ambari

All the components above are helpful in managing Hadoop tasks and jobs.

Apache Hadoop

The open source Hadoop is maintained by the Apache Software Foundation. The official website for Apache Hadoop is http://hadoop.apache.org/, where the packages and other details are described elaborately. The current Apache Hadoop project (version 2.6) includes the following modules:

Hadoop Common: the common utilities that support other Hadoop modules
Hadoop Distributed File System (HDFS): a distributed filesystem that provides high-throughput access to application data
Hadoop YARN: a framework for job scheduling and cluster resource management
Hadoop MapReduce: a YARN-based system for parallel processing of large datasets
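To make the MapReduce module more concrete, here is a minimal word count written for Hadoop Streaming in Python (a sketch, not from the book; input/output paths and the location of the streaming JAR vary between distributions). The mapper emits a (word, 1) pair per word, and the reducer sums the counts for each word, relying on Hadoop to deliver the mapper output sorted by key.

```python
#!/usr/bin/env python
# mapper.py - emit "word<TAB>1" for every word read from stdin.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print("%s\t%d" % (word.lower(), 1))
```

```python
#!/usr/bin/env python
# reducer.py - sum the counts per word; keys arrive already sorted.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print("%s\t%d" % (current_word, current_count))
        current_word, current_count = word, int(count)
if current_word is not None:
    print("%s\t%d" % (current_word, current_count))
```

Such a job would typically be submitted with something along the lines of hadoop jar hadoop-streaming-*.jar -files mapper.py,reducer.py -mapper mapper.py -reducer reducer.py -input /data/in -output /data/out, with the exact JAR path depending on your distribution.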
Apache Hadoop can be deployed in the following three modes:

Standalone: It is used for simple analysis or debugging.
Pseudo-distributed: It helps you to simulate a multi-node installation on a single node. In pseudo-distributed mode, each of the component processes runs in a separate JVM. Instead of installing Hadoop on different servers, you can simulate it on a single server.
Distributed: A cluster with multiple worker nodes, in tens, hundreds, or thousands of nodes.

In a Hadoop ecosystem, along with Hadoop, there are many utility components that are separate Apache projects, such as Hive, Pig, HBase, Sqoop, Flume, Zookeeper, Mahout, and so on, which have to be configured separately. We have to be careful with the compatibility of subprojects with Hadoop versions as not all versions are inter-compatible.

Apache Hadoop is an open source project that has a lot of benefits, as the source code can be updated and contributions with improvements keep coming in. One downside of being an open source project is that companies usually offer support for their own products, not for an open source project. Customers prefer support and adopt Hadoop distributions supported by the vendors. Let's look at some of the Hadoop distributions available.

Hadoop distributions

Hadoop distributions are supported by the companies managing the distribution, and some distributions have license costs as well. Companies such as Cloudera, Hortonworks, Amazon, MapR, and Pivotal have their respective Hadoop distributions in the market that offer Hadoop with the required sub-packages and projects, which are compatible and provide commercial support. This greatly reduces effort, not just for operations, but also for deployment, monitoring, and the tools and utilities needed for easy and faster development of the product or project. For managing the Hadoop cluster, Hadoop distributions provide graphical web UI tooling for the deployment, administration, and monitoring of Hadoop clusters, which can be used to set up, manage, and monitor complex clusters, saving a lot of effort and time. Some Hadoop distributions that are available are as follows:

Cloudera: According to The Forrester Wave™: Big Data Hadoop Solutions, Q1 2014, this is the most widely used Hadoop distribution with the biggest customer base, as it provides good support and has some good utility components, such as Cloudera Manager, which can create, manage, and maintain a cluster and manage job processing, and Impala, which was developed and contributed by Cloudera and has real-time processing capability.

Hortonworks: Hortonworks' strategy is to drive all innovation through the open source community and create an ecosystem of partners that accelerates Hadoop adoption among enterprises. It uses the open source Hadoop project and is a major contributor to Hadoop enhancements in Apache Hadoop. Ambari was developed and contributed to Apache by Hortonworks. Hortonworks offers a very good, easy-to-use sandbox for getting started. Hortonworks contributed changes that made Apache Hadoop run natively on Microsoft Windows platforms, including Windows Server and Microsoft Azure.

MapR: The MapR distribution of Hadoop uses different concepts than plain open source Hadoop and its competitors, especially support for a network file system (NFS) instead of HDFS for better performance and ease of use. With NFS, native Unix commands can be used instead of Hadoop commands. MapR has high-availability features such as snapshots, mirroring, and stateful failover.

Amazon Elastic MapReduce (EMR): AWS's Elastic MapReduce (EMR) leverages its comprehensive cloud services, such as Amazon EC2 for compute, Amazon S3 for storage, and other services, to offer a very strong Hadoop solution for customers who wish to implement Hadoop in the cloud. EMR is advisable for infrequent big data processing, where it can save you a lot of money.

Pillars of Hadoop

Hadoop is designed to be highly scalable, distributed, massively parallel, fault tolerant, and flexible, and the key aspects of the design are HDFS, MapReduce, and YARN. HDFS and MapReduce can perform very large-scale batch processing at a much faster rate. Due to contributions from various organizations and institutions, the Hadoop architecture has undergone a lot of improvements, and one of them is YARN. YARN has overcome some limitations of Hadoop and allows Hadoop to integrate with different applications and environments easily, especially in streaming and real-time analysis. Two such examples that we are going to discuss are Storm and Spark; they are well known in streaming and real-time analysis, and both can integrate with Hadoop via YARN.
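As a hedged illustration of that last point (not an example from the book), the following PySpark Streaming sketch counts stock-tick messages per symbol in small micro-batches; when submitted with spark-submit --master yarn, the same job runs on a YARN-managed Hadoop cluster. The socket source, port, and the "SYMBOL,price" message format are assumptions.

```python
#!/usr/bin/env python
# tick_stream_sketch.py - count incoming ticks per symbol every 5 seconds.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="tick-stream-sketch")
ssc = StreamingContext(sc, 5)  # 5-second micro-batches

# Assumed source: a TCP socket emitting lines like "ACME,101.37"
ticks = ssc.socketTextStream("localhost", 9999)
per_symbol = (ticks.map(lambda line: (line.split(",")[0], 1))
                   .reduceByKey(lambda a, b: a + b))
per_symbol.pprint()  # print each micro-batch's counts to the driver log

ssc.start()
ssc.awaitTermination()
```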
Data access components MapReduce is a very powerful framework, but has a huge learning curve to master and optimize a MapReduce job. For analyzing data in a MapReduce paradigm, a lot of our time will be spent in coding. In big data, the users come from different backgrounds such as programming, scripting, EDW, DBA, analytics, and so on, for such users there are abstraction layers on top of MapReduce. Hive and Pig are two such layers, Hive has a SQL query-like interface and Pig has Pig Latin procedural language interface. Analyzing data on such layers becomes much easier. Data storage component HBase is a column store-based NoSQL database solution. HBase's data model is very similar to Google's BigTable framework. HBase can efficiently process random and real-time access in a large volume of data, usually millions or billions of rows. HBase's important advantage is that it supports updates on larger tables and faster lookup. The HBase data store supports linear and modular scaling. HBase stores data as a multidimensional map and is distributed. HBase operations are all MapReduce tasks that run in a parallel manner. Data ingestion in Hadoop In Hadoop, storage is never an issue, but managing the data is the driven force around which different solutions can be designed differently with different systems, hence managing data becomes extremely critical. A better manageable system can help a lot in terms of scalability, reusability, and even performance. In a Hadoop ecosystem, we have two widely used tools: Sqoop and Flume, both can help manage the data and can import and export data efficiently, with a good performance. Sqoop is usually used for data integration with RDBMS systems, and Flume usually performs better with streaming log data. Streaming and real-time analysis Storm and Spark are the two new fascinating components that can run on YARN and have some amazing capabilities in terms of processing streaming and real-time analysis. Both of these are used in scenarios where we have heavy continuous streaming data and have to be processed in, or near, real-time cases. The example which we discussed earlier for traffic analyzer is a good example for use cases of Storm and Spark. Summary In this article, we explored a bit about Hadoop history, finally migrating to the advantages and uses of Hadoop. Hadoop systems are complex to monitor and manage, and we have separate sub-projects' frameworks, tools, and utilities that integrate with Hadoop and help in better management of tasks, which are called a Hadoop ecosystem. Resources for Article: Further resources on this subject: Hive in Hadoop [article] Hadoop and MapReduce [article] Evolution of Hadoop [article]


Ensuring Five-star Rating in the MarketPlace

Packt
05 May 2015
43 min read
In this article written by Feroz Pearl Louis and Gaurav Gupta, authors of the book Mastering Mobile Test Automation, we will learn that the star rating system on mobile marketplaces, such as Google Play and the App Store, is a source of positive as well as negative feedback for the applications deployed by any organization. This system is used to measure various aspects of the application, such as functionality and usability, and it is a way to quantify the all-elusive, measurement-defying factor that organizations yearn to measure, called "user experience", besides the obvious ones, such as the appeal and aesthetics of an application's graphical user interface (GUI). If an organization does not spend time in testing the functionality adequately, then it may suffer the consequences and lose market share to competitors. The challenge of enabling different channels, such as web applications through mobile browsers, as well as providing different native or hybrid applications to service customers as per their preferences, often leads to a situation where organizations have to develop both a web version and a hybrid version of the application. (For more resources related to this topic, see here.)

At any given point of time, it is almost impossible to test an application completely, and to cover the various permutations and combinations of operating systems, their versions, device manufacturers, device specifications with various screen sizes, and application types, with solely manual testing techniques. This is where automation comes to the rescue. However, mobile automation in itself is very complex because of the fragmentation issue explained later in this article. In this article, you will learn how not to fall into the trap of using different tools, frameworks, and techniques to address these differences.

In this article, we will cover the following topics:

Introduction to mobile test automation
Types of mobile application packages
Mobile test automation overview
Some common factors to be considered during mobile testing, including interrupt testing, form factor testing, layout testing, and more
Overview of different types of mobile automation testing approaches
Selection of the best mobile testing approach depending on the project
Troubleshooting and best practices

Introduction to mobile test automation

Before we start learning about mobile test automation, let's understand what functional test automation is. Test automation has always been a fundamental part of the software testing lifecycle for any project. Organizations invariably look to automate repetitive testing actions in order to utilize the manual effort thus saved for more dynamic and productive tasks. The use of automation tools also allows the utilization of system idle time more effectively. To address these needs, there is a plethora of tools available in the market along with various frameworks and implementation techniques. There are both open source and licensed tools available in the market. Tools such as HP's Unified Functional Testing (UFT), formerly known as QuickTest Professional (QTP), TestComplete, Selenium, eggPlant, Ranorex, SilkTest, IBM Rational Functional Tester, and numerous others provide various capabilities for functional automation.
However, almost all of these tools are designed to support only a single operating system (predominantly Windows—owing to its popularity and the coverage it enjoys across industry verticals), although a few provide support for other lesser-used operating systems, such as Unix, Linux, Sun Solaris, and Apple Macintosh. As far as functional automation is concerned, you don't need to even consider the implications of supporting multiple operating systems in most cases. With Windows as the only operating system that is supported, there aren't any considerations for different operating systems. If the application is a web application, then there may be a need to do cross-browser testing, that is, testing automation on various browser types (Chrome, Firefox, and Safari besides Internet Explorer) and their respective versions. Also, as far as functional automation is considered, there is a very clear demarcation between nonfunctional and functional requirements. So, an automated solution for functional testing is not required to consider factors such as how others processes running on the machine would impact it, or any of the hardware aspects, such as the screen resolution of monitors and the make of the machines (IBM, Lenovo, and others). When it comes to mobile automation, there is an impact on the test suite design due to various other aspects, such as operating systems (Android, iOS, Blackberry, Windows) on which the application is supposed to be accessed, the mode of access (Wi-Fi, 3G, LTE, and so on), the form factor of the devices (tablets, phones, phablets, and so on), and the behavior of the application in various orientation modes (portrait, landscape, and so on). So, apart from normal automation challenges, a robust mobile automation suite should be able to address all these challenges in a reliable way. Fragmentation of the mobile ecosystem is an aspect that compounds this manifold problem. An application should be able to service different operating systems and their flavors provided by original equipment manufacturers (OEMs), such as Apple with iOS, Google's Android with Samsung, HTC, Xiaomi, and numerous others, Windows with Nokia and HTC, and even Blackberry and other lesser-used operating systems and devices. Add to this the complexity of dealing with various form factors, such as phones, tablets, phablets, and their various hybrids. The following figure is a visualization of the Android market fragmentation over various equipment manufacturers, form factors, and OS versions: As we know, test automation is the use of software to automate and control the setting up of test preconditions, execution of tests, test control, and test reporting functions with minimum, or ideally zero, user intervention. Automating the testing for any mobile application is the best way to ensure quality, and to achieve the quick and precise results that are needed to accommodate fast development cycles. Organizations look toward functional test automation primarily to reduce the total cost of ownership over a period of time, and to ensure the quality of the product or application being developed. These advantages are compounded many times for mobile test automation and hence it provides the same advantages, but to a much greater degree. 
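The article does not name a specific mobile automation tool at this point, so the following is only a hedged illustration of the idea that one script can service the fragmented device matrix described above. It assumes an Appium server running locally, the Appium Python client in its older desired-capabilities style, and placeholder package, activity, and element names.

```python
# device_matrix_demo.py - run one identical smoke test against several
# Android device/OS combinations by swapping the capability dictionary.
from appium import webdriver

DEVICE_MATRIX = [
    {"platformName": "Android", "platformVersion": "10", "deviceName": "Pixel 3",
     "automationName": "UiAutomator2",
     "appPackage": "com.example.app", "appActivity": ".MainActivity"},
    {"platformName": "Android", "platformVersion": "13", "deviceName": "Galaxy S22",
     "automationName": "UiAutomator2",
     "appPackage": "com.example.app", "appActivity": ".MainActivity"},
]

def smoke_test(capabilities):
    """The same validation, regardless of which device the capabilities describe."""
    driver = webdriver.Remote("http://localhost:4723/wd/hub", capabilities)
    try:
        # "accessibility id" is a standard Appium locator strategy.
        driver.find_element("accessibility id", "Login").click()
        print("Launched on %(deviceName)s (Android %(platformVersion)s)" % capabilities)
    finally:
        driver.quit()

if __name__ == "__main__":
    for caps in DEVICE_MATRIX:
        smoke_test(caps)
```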
The following are the various advantages of mobile test automation for any project: Improved testing efficiency: The same scripts can be used to run uniform validations across different devices, operating systems, and application types (of the same application), thereby reducing the test execution effort considerably. This also means that the return on investment (RoI), which typically takes about 3-5 cycles of executing the conventional functional automation to achieve breakeven, is viable in most cases within the first release itself, as mobile testing is typically repeated on many devices. So, in this case, fragmentation acts as a positive factor if the automation is employed properly, whereas, with pure manual testing, it greatly increases the costs. Consistent and repeatable testing process: Human beings tend to get bored with repetitive tasks and this makes such a test prone to errors. Due to the effect of fragmentation in the mobile world, the same application functionality needs to be validated across various combinations of operating systems, application types, device manufacturers, network conditions, and many more. Hence, the use of automation, which is basically a program, ensures that the same scripts run without any modifications every time. Improved regression testing coverage: The use of automation scripts allows the regression tests to be iterated over multiple combinations of test data. Such data-driven scripts allow the same flow to be validated against different test data combinations. For example, if an application allows users to search for the nearest ATMs in a given area, basically, the same flow would need to be tested with various zip codes as inputs. Hence, the use of automated scripts would instantly allow the test coverage to be increased dramatically. More tests can be run in less time: Since automated scripts can be run in parallel over various devices, the same amount of testing can be compacted inside a much smaller time window in comparison to the manually executed functional testing. With the use of automation scripts that include device setups as preconditions, the execution window can be exponentially reduced, which otherwise would take a manual tester considerable time to complete. 24/7 operation: Although any functional automation suite can lead to better resource utilization in terms of executing more number of scripts in lesser time, with respect to mobile automation, the resources are often expensive mobile devices. If functional testing is done manually, then more of the same devices need to be procured to allow manual testers to carry out tests, and especially, more so in the case of geographically distributed testing teams. Mobile automation scripts, on the other hand, can be triggered remotely and can run unattended, reducing the overall cost of ownership and allowing 24/7 utilization of devices and tools. Human resources are free to perform advanced manual tests: Having automation scripts perform repetitive regression testing tasks frees up the bandwidth of manual testing teams for exploratory tests that are expensive to automate and cumbersome to manage. Hence, the use of automation leads to a balanced approach, where testers can perform more meaningful work and thereby improve the quality of delivered applications. 
In mobiles, since regression is more repetitive on account of the fragmentation problem, the amount of effort saved is manifold, and hence, testers can generally focus on testing aspects such as user interface (UI) testing and user experience testing. Simple reproduction of found defects: Since automation scripts can be executed multiple times on demand and are usually accompanied with reports and screenshots, defect triangulation is easy and is just a matter of re-execution of automation scripts. With pure manual testing, a tester would have to spend effort on manually recreating the defect, capturing all the required details, and then reporting it for defect tracking. With mobile automation, the same flow can be triggered multiple times on a multitude of devices hence, the same defect can be replicated and isolated if it occurs only on a specific set of devices. Accurate and realistic real-life mobile scenarios: Since a mobile requires tests to be specifically designed for variable network conditions and other considerations, such as device screen sizes, orientation, and more, which are difficult to recreate accurately with pure manual testing effort, automation scripts can be developed that accurately to recreate these real-world scenarios in a reliable way. These types of tests are mainly not required to be developed for functional automation suites, and hence, this is one of the major differences. For the most realistic results, conventional wisdom is to test automation on actual devices—without optical recognition, emulation, jailbreaking, or tethering. It is impractical to try to automate everything, especially for mobile devices. However, leveraging commercial off-the-shelf (COTS) tools can vastly reduce the cost of automation and thereby enhance the benefits of the automation process. In the following section, we will discuss in detail the challenges that make mobile automation vastly different from conventional functional automation. The following are some of the issues that make the effective testing automation of mobile applications challenging: Restricted access to native methods to enable automation tools: Traditional functional automation tools utilize native operating system methods to emulate user interactions. This is comparatively easy to do as the operating system allows access. However, the same level of access is not available with a mobile operating system. Also, inter-application interactions are restricted in a mobile operating system and each application is treated as an individual thread. This is normally only allowed when a phone is rooted or when the application under test is modified to allow instrumentation access. So, using other software (the test automation tool) to control user inputs in a mobile application is much more difficult to achieve and consequently slower or more error prone. For example, if an Android application under test makes a call to the photo gallery, then the automated test would not be able to continue because a new application comes to the foreground. Lack of prediction techniques for UI synchronization in a Mobile environment: In addition to the restricted access mentioned in the previous point, mobile application user interface response times are dependent on many variables, such as connection speed and device configuration other than the server response times. Hence, it is much harder to predict the synchronization response in a mobile application. 
Due to this, automation of a mobile application is more prone to being unstable unless hardcoded wait times are included in the automation scripts.

Handling location-specific changes in the application behavior: Many mobile applications are designed to interact with the user's location, and behave differently as per the change in GPS coordinates. Since network strengths cannot be controlled externally, it is very difficult to predict the application behavior and to replicate the preconditions of a network-strength-specific use case through the use of automation. So, this is another aspect that every automation solution has to address appropriately. Some automation tools allow the simulation of such network conditions, which should be specifically handled while developing the automation suite.

Supporting application behavior changes for varied form factors: As explained earlier, since there are different screen sizes available for mobile devices, the behavior of the application is often specific to the screen size owing to responsive design techniques that are now quite widely used. Even with the change in the orientation of the devices, application use cases have alternative behavior. For example, an application interface loaded in the portrait mode would appear different, with objects in different locations than they would appear in the landscape mode. Hence, automation solutions need to factor this in and ensure that such changes are handled in a robust and scalable way.

Scripting complexity due to diversity in OS: Since many applications are developed to support various OSes, especially mobile web applications, it is a key challenge to handle application differences, such as mobile device input methods for various devices, as devices differ in keystrokes, input methods, menu structures, and display properties. With different mobile operating systems in the market, such as Android, iOS, Brew, Symbian, Tizen, Windows, and BlackBerry (RIM), each having its own limitations and variations, the creation of a single script for every device is a challenge that needs to be adequately tackled in order to make the automation solution more robust, maintainable, and scalable to support newer devices in the future.

Mobile application packages

With the advancement in wireless technology, big technology companies, such as Apple, Amazon, Google, and so on, came out with a solution that provides users with a more realistic approach to finding information, making decisions, shopping, and countless other things at their fingertips by developing mobile applications for their products. The main purpose of developing mobile applications was actually to retrieve information using various productivity tools, which include the calculator, e-mail, calendar, contacts, and many more. However, with more demand for and the availability of resources, there was rapid growth and expansion in other categories, such as mobile games, shopping, GPS and location-based services, banking, order tracking, ticket purchases, and recently, mobile medical applications. The distribution platforms, such as the Apple App Store, Google Play, Windows Phone Store, Nokia Store, and BlackBerry App World, are operated by the owners of the mobile operating systems, and mobile applications are made available to users through them. We usually hear terms such as native application, hybrid application, and web application, so did you ever wonder what they are and what the difference is between them?
Moving ahead, we will discuss the different mobile packages available for use and the salient features that influence the selection of an automation strategy and testing tool. The different mobile packages available are:

Native applications
Web applications
Hybrid applications

Native applications

Any mobile application needs to be installed through a distribution system, such as the App Store or Google Play. Native applications are applications developed specifically for one platform, such as iOS, Android, Windows, and so on. They can interact with, and take full advantage of, operating system features and other software that is typically installed on that platform. They have the ability to use device-specific hardware and software, such as the GPS, compass, camera, contact book, and so on. These applications can also incorporate gestures, either standard operating system gestures or new application-defined gestures. Native applications have their entire code developed for a particular operating system and hence have no reusability across operating systems. A native application for iOS would thus have its application handles built specifically in Objective-C or Swift and hence would not work on an Android device. If the same application needs to be used across different operating systems, which is a very logical requirement for any successful application, then developers have to write a whole new repository of code for the other mobile operating system. This makes application maintenance cumbersome, and keeping features uniform across platforms becomes another challenge that is difficult to manage. However, having different code bases for different operating systems allows the flexibility to build and deploy operating-system-specific customizations easily. Also, there is a need today to follow very strict "look and feel" guidelines for each operating system, and using a native application might be the best way to keep this presentation correct for each OS. Finally, testing native applications is usually limited to the operating system in question, so the fragmentation is usually limited in impact; only manufacturers and operating system versions need to be considered.

Mobile web applications

A mobile web application is actually not an installed application but, in essence, a website that is accessed via a mobile interface; it has design features specific to the smaller screen and builds in user interactions such as swipe, scroll, pinch, and zoom. These mobile web applications are accessed via a mobile browser and are typically developed using HTML or HTML5. Users first access them as they would access any web page: they navigate to a special URL and then have the option of installing them on their home screen by creating a bookmark for that page. So, in many ways, a web application is hard to differentiate from a native application, because on mobile screens there are usually no visible browser buttons or bars even though it runs in a mobile browser. A user can perform various native-application-like actions, such as swiping to move on to new sections of the application. Most native application features are available to an HTML5 web application; for example, it can use the tap-to-call feature, GPS, compass, camera, contact book, and so on.
However, there are still some native features that are inaccessible (at least for now) from a browser, such as push notifications, running an application in the background, accelerometer information (other than detecting landscape or portrait orientation), complex gestures, and more. While web applications are generally very quick to develop, with a lot of ready-to-use libraries and tools such as AngularJS, Sencha, and jQuery, and also provide a single code base for all operating systems, there is an added testing complexity that adds to the fragmentation problem discussed earlier. There is no dearth of good mobile browsers, and on a mobile device application developers have very limited control, so users are free to use any mobile browser of their choice, such as Chrome, Safari, UC Browser, Opera Mobile, Opera Mini, Firefox, and many more. Consequently, these applications are generally development-light and testing-heavy. Hence, while developing automation scripts, the solution has to consider this impact, and the tool and technique selected should be able to run scripts on all these different browsers. Of course, it could be argued that many applications (native or otherwise) do not take advantage of the extra features provided by native applications. However, if an application really requires native features, you will have to create a native application or, at least, a hybrid application.

Hybrid applications

Hybrid applications are combinations of both native applications and web applications, and because of that, many people incorrectly call them web applications. Like native applications, they are installed on a device through an application store and can take advantage of the many device features available. Like web applications, hybrid applications depend on HTML being rendered in a browser, with the caveat that the browser is embedded within the application. So, for an existing web page, companies can build hybrid applications as wrappers without spending significant effort and resources, and they can get their existence known in the application store and have a star rating! Web applications usually do not have one and hence have the added disadvantage of lacking the automatic publicity that a five-star rating provides in the mobile stores. Because of cross-platform development and significantly lower development costs, hybrid applications are becoming popular, as the same HTML code components are reusable on different mobile operating systems. The other added advantage is that hybrid applications can have the same code base wrapped inside an operating-system-specific shell, thereby making them development-light. By removing the problem posed by various device browsers, hybrid applications can be more tightly controlled, making them less prone to fragmentation, at least on the browser side. However, since they are hybrid applications, any automation testing solution should be able to test across different operating system and version combinations, and to differentiate between various operating-system-specific functionality differences. Tools such as PhoneGap and Sencha allow developers to code and design an application across various platforms just by using the power of HTML.

Factors to be considered during mobile testing

In many aspects, the approach to mobile automation testing is not so different from any other type of testing.
In terms of methodology, and experience with the actual testing tools, what testers have learned elsewhere can be applied to mobile automation testing. So, a question might come to your mind: where does the difference lie, and how should you accommodate these differences? In the rest of this topic, we will see some of the factors that are highly relevant to mobile automation testing and require particular attention; if handled correctly, they help ensure a successful mobile testing effort. Some of the factors that need to be taken care of in testing mobile applications are as follows:

Testing for cross-device and platform coverage: It is not feasible to test an application on each and every available device because of the plethora of devices that support the application across different platforms, which means you have to strategically choose only a limited, but sufficient, set of physical devices. You need to remember that testing on one device, irrespective of whether another device is of the same make, the same operating system version, or the same platform, cannot ensure that the application will work on that other device. So, it is important that, at the very least, most of the critical features, if not all, are tested on a physical device. Otherwise, the application always runs the risk of potential failure on an untested device, especially when the target audience for the application is widespread, as it is for a game or banking application. Use of emulated devices is one of the common ways to overcome the issue of testing on numerous physical devices. Although this approach is generally less expensive, we cannot rely completely on emulated devices for the results they present, and with emulators it may well be that test conditions are not close enough to real-life scenarios. So, adequate coverage of different physical devices is required to negate the effects of fragmentation and to sufficiently represent the following variations:

Varying screen sizes
Different form factors
Different pixel densities and resolutions
Different input methods, such as QWERTY, touch screen, and more
Different user input methods, such as swipes, gestures, scrolling, and many more

Testing different versions of an operating system of the same platform: For thorough testing, we need to test the application on all major platforms, such as Android, iOS, Windows, and others, for the target customer base, but each one of them has numerous versions available that keep on growing regularly. Most commonly, automated testing on the latest version of an operating system can be sufficient, as operating systems are generally backward compatible. However, due to the fragmentation of the Android OS, the application would still need to be tested on at least the most commonly used versions besides the latest ones, which in some cases may be significantly behind the latest version. This is because there may be many Android devices that are on an earlier version of Android and are not supported by the latest versions of Android.

Testing on various network types and network providers: Most mobile applications, such as banking- or information-search-related applications, require network connectivity, such as CDMA or GSM, at least partially, if not completely. If the application talks to a server to exchange information, testing on various (at least all major) network providers is important.
The network infrastructure used by network providers may affect data communication between the application and the backend. Apart from the different network providers, an application needs to be tested on other modes of network communication, such as a Wi-Fi network, as well.

Testing for mobile-environment-specific constraints: The mobile environment is very dynamic and has constraints, such as limited computing resources, available memory, in-between calls or messages, network switching, battery life, and a lot of other sensors and features present in the device, such as the accelerometer, gyroscope, GPS, memory cards, and camera, and an application's behavior depends on these factors. An application should integrate or interact (if required) with these features gracefully, and sufficient testing needs to be carried out in various situations to ensure this. However, it is often not practically feasible to recreate all permutations and combinations of these factors, and hence a strategic approach needs to be taken to ensure sufficient coverage.

Testing for the unpredictability of a mobile user: A tester has to be more cautious and should expand the horizon while testing mobile applications. They should make sure that an application provides a good response and a good user experience to all users; hence, User Experience (UX) testing invariably needs to be performed to a certain degree for all mobile applications. A mobile application's audience comprises various people, ranging from non-technical people to skilled technical users and from children to middle-aged users, and each of them has their own style of using the application and their own expectations of it. A middle-aged or older user will be much calmer about the performance of an application than someone who is young. In general, we can say that mobile users have set incredibly high expectations of the applications available in the marketplace.

Mobile automation testing approaches

In this section, you will understand the different approaches used for the automation of a mobile application and their salient points. There are, broadly speaking, four different approaches or techniques available for mobile application test automation:

Test automation using physically present real devices
Test automation using emulators and simulators
Mobile web application test automation through the user agent simulation technique
Cloud-solutions-based test automation

Automation using real devices

As the name suggests, this technique is based on the usage of real devices that are physically present with the test automation team. Since the technique uses real devices, a natural consequence is that the Application Under Test (AUT) is also tested over a real network (GSM, CDMA, or Wi-Fi). To establish connectivity between the automation tool and the devices, any of the communication mechanisms such as USB, Bluetooth, or Wi-Fi can be used; however, the most commonly used and the most reliable one is the USB connection. After the connection is established between the machine on which the automation tool is installed and the Device Under Test (DUT), the automation scripts can capture object properties of the AUT, and the developed scripts can later be executed on other devices as well, with minor modifications. There are numerous automation tools, both licensed and open source, available for mobile automation.
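Before looking at specific tools, the following is a rough Python sketch of what driving a USB-connected real device typically looks like with the Appium client; the device serial, package, activity, and server URL are purely illustrative assumptions and not values taken from the article.

    from appium import webdriver  # Appium Python client

    # Hypothetical capabilities for a real Android handset attached over USB;
    # the udid is the serial reported by `adb devices`.
    caps = {
        "platformName": "Android",
        "deviceName": "Android Device",
        "udid": "0123456789ABCDEF",
        "appPackage": "com.example.bankapp",
        "appActivity": ".LoginActivity",
    }

    driver = webdriver.Remote("http://127.0.0.1:4723/wd/hub", caps)
    try:
        # Object properties captured here can usually be reused on other
        # devices with only minor, device-specific adjustments.
        print(driver.current_activity)
    finally:
        driver.quit()

The same script, pointed at a different udid, is what makes cross-device re-execution relatively cheap compared with repeating the flow manually.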
Some commonly used licensed tools are:

Experitest SeeTest
TestPlant eggPlant Mobile/eggOn
Jamo Solutions M-eux Test
ZAP-fiX

Prominent tools for Android and iOS automation are:

Selenium, with solutions such as Selendroid and Appium along with iOS and Android drivers
MonkeyTalk (formerly FoneMonkey)

The following are the salient features of this approach:

The AUT is accessed on devices either by using a real mobile network or a Wi-Fi network, and can also be accessed through the intranet of the machine to which the device is connected
The automation testing tool is installed on the desktop, which uses USB or Wi-Fi connectivity to control the devices under test

Steps to set up automation

For automation on real devices, scripts are required to be executed on the devices with a USB or Wi-Fi connection, which is used to send commands via the execution tool to the devices. The following is a step-by-step description of how to perform the automation on real devices:

1. Determine the device connectivity solution (USB or Wi-Fi connectivity) based on the available setup. In some cases, USB connectivity is not enabled due to security policies, and only in those cases is a Wi-Fi connection utilized.
2. Identify the tool to be used for the automation based on the tool feasibility study of the application.
3. Procure the required licenses (seat or concurrent) if a licensed tool is selected. License procurement might mean that lengthy agreements need to be signed by both parties, besides arranging for the payment of services such as support. So, this step should be well planned with enough buffer time.
4. If the existing automation setup is to be leveraged, then an additional license needs to be acquired that corresponds to the tool (such as Quick Test Professional, Quality Center, and more). In some cases, you might also have to integrate the existing automation scripts developed with tools such as Quick Test Professional/Unified Functional Testing with the automation scripts developed for mobile. In such a case, the framework already in place needs to be modified.
5. Install the tools on the automation computer and establish the connectivity with the real devices. Installation may not be as simple as just running an executable file when it comes to mobile automation. There are various network-level settings and additional drivers that are needed to connect the computer and to control various mobile devices from the computer. Hence, all this should be done and planned well in advance.
6. Script the test cases and execute them on real devices.

Limitations of this automation

This approach has the following limitations:

The overall cost can be high, as multiple devices are required to be procured for different teams and testers
Maintenance and physical security can be an overhead
Script maintenance can be delayed if testing cycles are overlapping with functional and automation teams

Emulators-based automation

Emulators are programs that replicate the behavior of a mobile operating system and, to some extent, the device features on a computer. So, in essence, these programs are used to create virtual devices, and any mobile application can be deployed on such virtual devices and then tested without the use of a real device. Ideally speaking, there are two types of mobile device virtualization programs: emulators and simulators. From a purely theoretical standpoint, the following are the differences between an emulator and a simulator.
A device emulator is a desktop application that emulates both the mobile device hardware and its operating system, which allows applications to be tested with a tighter tolerance and better accuracy. There are also operating system emulators that don't represent any real device hardware but rather the operating system as a whole; these exist for Windows Mobile and Android. A simulator, on the other hand, is a simpler application that simulates some of the behavior of a device, does not emulate hardware, and does not run the real operating system. These tools are simpler and less useful than emulators. A simulator may be created by the device manufacturer or by some other company that offers a simulation environment for developers. Thus, simulator programs are less accurate than emulator programs. For the sake of keeping the discussion simple, we will refer to both as emulators in this article.

Since this technique does not use real devices, a natural consequence is that the AUT is not tested over a real network (GSM, CDMA, or Wi-Fi); the network connection of the machine is used to connect to the application server (if the application connects to a server, which around 90 percent of mobile applications do). Since the virtual devices are available on the computer, no external connection is required between the device's operating system and the automation tool. However, automating an emulator is not as simple as automating any other program, because the actual AUT runs inside the shell of the virtual device. So, a special configuration needs to be enabled in the automation tools to allow automation on the virtual device. The following is a diagram depicting an Android emulator running on a Windows 7 computer.

In most projects, this technique is used for prelaunch testing of the application, but there are cases where emulators are automated to a great extent. However, since an emulator is essentially more limited in scope than a real device, mobile-network-specific behavior and certain other aspects, such as memory utilization, cannot be relied upon when automating tests on emulators. There are numerous automation tools, both licensed and open source, available for mobile automation on these virtual devices, and emulators for the various mobile platforms can generally be automated with most of the tools that support real-device automation.

The prominent licensed tools are:

ExperiTest SeeTest
TestPlant eggPlant Mobile/eggOn
Jamo Solutions M-eux Test

Tools such as Selenium and ExperiTest SeeTest can be used to launch device platform emulators and execute scripts on the AUT. The prominent free-to-use tools for emulator automation are:

Selenium WebDriver
Appium
MonkeyTalk (formerly FoneMonkey)

Since emulators are themselves software that runs on other machines, device-specific configurations need to be performed prior to test automation and have to be handled in the scripts. Conceptually, the emulator and simulator programs are installed on a computer with a given operating system, such as Windows, Linux, or Mac, which then virtualizes the mobile operating system, such as Android, iOS, RIM, or Windows, and can subsequently be used to run scripts that exercise the behavior of the application as it would run on real devices.

Steps to set up automation

The following are the steps to set up the automation process for this approach:

1. Identify the various platforms for which the AUT needs to be automated.
2. Establish connectivity to the AUT by enabling firewall access in the required network for mobile applications.
3. Identify the various devices, platforms, emulators, and device configurations against which testing needs to be carried out.
4. Install emulators/simulators for the various platforms.
5. Create scripts and execute them across multiple emulators/simulators.

Advantages

This approach has the following advantages:

Standalone emulators can be utilized when real devices are not available
No additional connectivity is required for automation
It provides support for iOS and Android with freeware
It provides support for all platforms and types of applications with licensed tools, such as Jamo Solutions M-eux and ExperiTest SeeTest

Limitations

This approach has the following limitations:

It can be difficult to automate, as the emulators and simulators are themselves not thoroughly tested software and might have unknown bugs
Selenium WebDriver cannot be used to automate Android applications in some versions due to a bug in the Android emulator
It can sometimes be difficult to triangulate a defect detected on a virtual device, and you may need to recreate it on a real device first; in many cases, defects caught on emulators are not reproducible on real devices
For iOS simulators, access to a Mac machine with Xcode is required, which can be difficult to set up in a secure Offshore Development Center (ODC) due to security restrictions

User agent-simulation-based automation

The third technique is the simplest of all. However, it is also very limited in its scope of applicability: it can be used only for mobile web applications and only to a very limited extent. Hence, it is generally used only to automate the functional regression testing of mobile web applications and rarely used for GUI validations. The user agent is the string that web servers use to identify information about the requester, such as its operating system and the browser that is accessing the server. This string is normally sent with the HTTP/HTTPS request to identify the requester's details to the server, and based on this information, the server presents the required interface to the requesting browser. This approach utilizes the browser user agent manipulation technique, which works as follows.

In this approach, an external program or a browser add-on is used to override the user agent information that is sent to the web application server, so that the requesting system identifies itself as a mobile device instead of sending its real information. So, for example, when a web application URL such as https://www.yahoo.com is accessed from a mobile device, the application server detects the requester to be a mobile device and redirects it to https://mobile.yahoo.com/, thereby presenting the mobile view. If the user agent information sent from a desktop is overridden to indicate that the request is coming from a Safari browser on an iPhone, then the desktop browser will likewise be presented with the mobile view, because the application server responds as if the request really came from an iPhone. Since the mobile web application is accessed entirely from the computer, automation can be done using traditional web browser automation tools, such as Quick Test Professional/Unified Functional Testing or Selenium.
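As a concrete illustration, the following is a small Python sketch that drives desktop Chrome through Selenium with a spoofed iPhone user agent; the exact user-agent string and the target site are only assumptions for demonstration, not values prescribed by the article.

    from selenium import webdriver

    # Hypothetical Mobile Safari user-agent string; any representative
    # iPhone string will do for the purpose of the illustration.
    IPHONE_UA = ("Mozilla/5.0 (iPhone; CPU iPhone OS 8_0 like Mac OS X) "
                 "AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 "
                 "Mobile/12A365 Safari/600.1.4")

    options = webdriver.ChromeOptions()
    options.add_argument("--user-agent=" + IPHONE_UA)

    driver = webdriver.Chrome(options=options)
    try:
        driver.get("https://www.yahoo.com")
        # If the server honors the spoofed user agent, it serves the mobile
        # view even though the request actually comes from a desktop browser.
        print(driver.current_url)
    finally:
        driver.quit()

Because the test still runs in a desktop browser, this kind of script is quick to put together, but, as discussed below, it says nothing about device-specific behavior.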
The salient features of this technique are as follows:

With browser user agent manipulation, any mobile platform can be simulated
Browser user agent manipulation is limited to mobile web applications only and does not extend to native and hybrid applications
Browser simulation can be done using freeware tools that are available for all leading web browsers

The common user agent switching tools are:

Bayden UAPick for IE
User Agent Switcher add-on for Firefox
Fiddler for IE
Modify Headers for Firefox
UA Spoofer add-on for Chrome
The built-in device emulation in Chrome, which can be accessed from the developer tools

Steps to set up the automation

The following are the steps to set up the automation process for this approach:

1. Identify the various platforms for which the AUT needs to be validated.
2. Identify the user-agent switcher tool that corresponds to the browser that is to be leveraged for testing.
3. Identify the user-agent strings for all platforms in scope and configure them in the user-agent switcher tool.
4. Leverage any functional testing tool that can test through a web browser, for example, Quick Test Professional, RFT, SilkTest, or Selenium WebDriver.

Advantages

This approach has the following advantages:

All platforms can be automated with little modification of scripts
The automation solution can be implemented quickly
Open source software, such as Selenium, can be leveraged for automation
An existing automation setup can be leveraged

Limitations

This approach has the following limitations:

It is the least representative of real device-based tests
Device-specific issues cannot be captured through this approach
It cannot be used for UI-related test cases
It supports only web-based mobile applications

Cloud-based automation

This technique provides most of the capabilities needed for test automation, but it is also one of the more expensive techniques. In this technique, automation is done on real devices connected to real networks that are accessed remotely through cloud-based solutions, such as Perfecto Mobile, Mobile Labs, Sauce Labs, and DeviceAnywhere. The salient features of this technique are as follows:

Cloud-based tools, such as Perfecto Mobile and DeviceAnywhere, provide a WYSIWYG (What You See Is What You Get) solution for automation
Both OCR (Optical Character Recognition) and native object recognition and analysis are utilized in these tools for automation
These tools also provide simple high-level keywords, such as Launch Browser, Call Me, and many more, that can be used to design test cases
The scripts thus created need to be re-recorded for every new type of device, due to differences between the interfaces and GUI objects
Mobile devices are accessed via a web interface or a thick client by teams in various regions
The devices are connected to real networks, using Wi-Fi or various mobile network operators (AT&T, Vodafone, and more)
The AUT is accessed via the Internet or through a secure intranet connection
The approach offers integration with common automation tools such as Quick Test Professional/UFT and Selenium

Steps to set up the automation

The following are the steps to set up the automation process for this approach:

1. Identify the various platforms and devices for which the AUT needs to be automated.
2. Establish connectivity to the AUT by enabling firewall access for mobile web applications.
3. Open an account with the chosen cloud solution provider and negotiate the licenses for automation, or set up a private cloud infrastructure within your company premises.
4. Install the cloud service provider's client-side software along with the automation plugin for the relevant tool of choice (UFT or Selenium).
5. Book the devices as per the testing needs (this usage normally has a cost associated with it).
6. Create scripts and execute them across multiple devices.

Advantages

This approach has the following advantages:

It allows test automation on multiple devices from various manufacturers (hardware), for example, Samsung, Apple, Sony, and Nokia
A script can be executed on multiple mobile devices (models) from the same manufacturer, for example, Galaxy SII, Galaxy SIII, iPhone 4S, iPhone 5, and iPad 2
Scripts can be tested on different platforms (software), for example, Android 2.3 to 4.4, iOS 4 to 8, Symbian, and Bada

Limitations

This approach has the following limitations:

Network latency may be experienced
The cost can be high, as fees depend on device usage
Setting up a private mobile lab is costly, but may be necessary due to an organization's security policies, particularly in legally regulated industries, such as BFSI organizations

Types of mobile application tests

Apart from the usual functional tests, which ensure that the application is working as per the requirements, there are a few more types of testing that need to be handled by an automation solution:

Interrupt testing: A mobile application, while functioning, may face several interruptions that can affect its performance or functionality. The different types of interruptions that can adversely affect the functionality of an application are:

Incoming calls and SMS or MMS
Receiving notifications, such as push notifications
Sudden removal of the battery
Transfer of data through a data cable, by inserting or removing the cable
Network/data loss or recovery
Turning a media player off or on

Ideally, an application should be able to handle these interruptions, for example, by going into a suspended state whenever an interruption occurs and resuming afterwards. So, we should design automation scripts in such a way that they can not only test these interrupts but also reliably reproduce them at the requisite step of the flow.

UI testing: The user interface of a mobile application is designed to support various screen sizes, and hence the various components of a mobile application screen appear differently, or in some cases even behave differently, depending on the OS or device make. Hence, any automation script needs to be able to work with varying components and also be able to verify each component's behavior. Automation ensures that the application is quickly tested and that fixes are regression tested across different applications. Since the UI is where the end users interact with the application, a robust automation suite is the best way to ensure the application is thoroughly tested and rolled out to users in the most cost-effective manner. A properly tested application makes the end user experience more seamless; the application under test is thereby more likely to get a better star rating, which is key to its commercial success.

Installation testing: Installation testing ensures that the installation process goes smoothly without the user facing any difficulty. This type of testing includes not only installing an application but also updating and uninstalling it.
Using automation to install and uninstall an application as per the defined process is one of the most cost-effective ways to do this type of testing.

Form factor testing: Applications may behave differently (especially in terms of the user interface) on smartphones and tablets. If the application under test supports both smartphones and tablets, it should be tested on both form factors. This can be treated as an extension of the UI testing type.

Selection of the best mobile testing approach

While selecting a suitable mobile testing approach, you need to look at the following important considerations:

Availability of automation tools: The availability of a relevant mobile automation tool plays a big role in the selection and implementation of the mobile automation approach.

Mode of connection of devices: This is one of the primary, if not the most important, aspects that play a pivotal role in the selection of a mobile automation approach. There are different ways in which devices can be connected to the automation tools, such as:

Using a USB connection
Using a Wi-Fi connection
Using Bluetooth connectivity (only for a very limited set of tools)
Using localized hotspots, that is, having one device act as a hotspot with other devices riding its network for access
Using a cloud connection
Using emulators and simulators

All these approaches need specific configurations on the machines and with the automation tools, which may sometimes be restricted, so any automation solution should be able to work around the constraints in the various setups.

Degree of tolerance: The key consideration is the degree of tolerance of the automation solution. The four different approaches that we discussed earlier in this article each have a different level of accuracy. The least accurate is the user agent-based approach, because it relies just on a web browser's rendering on a Windows machine rather than a real device. The most accurate approach, in terms of closeness to the real-world situation, is the use of real devices; however, this approach suffers from restrictions in terms of scalability, that is, supporting multiple devices simultaneously. Use of emulators and simulators is also prone to inaccuracies with respect to real-device characteristics, such as RAM, screen resolutions, pixel sizes, and many more. While working with cloud-based solutions, a remote connection is established with the devices, but there can be unwanted signal delays and screen refresh issues due to network bandwidth limitations. So, whichever approach is selected for automation should factor in the degree of tolerance that is acceptable for the automation suite. For example, for a mobile application that makes heavy use of graphics and advanced HTML5 controls, such as embedded videos and music, automation should not be carried out with an emulator solution, as the degree of accuracy would suffer adversely and usually fall beyond the acceptable tolerance limit. Consider, instead, a simple mobile web application with no complex controls that does not rely on any mobile-device-specific controls, such as the camera, or touch-screen-sensitive controls, such as pinch and zoom; such an application can easily be automated with the user agent-based approach without any significant impact on the degree of accuracy. If an application uses network bandwidth very heavily, then the cloud-based approach is not recommended, as it will suffer more severely from network issues and lead to unhandled exceptions in the automation suite.
Conversely, the cloud-based approach is most suitable for organizations that have geographically and logically dispersed teams that can use remotely connected devices from a single web interface. This approach is also very suitable when there are restrictions on the usage of other device connection approaches, such as USB, Wi-Fi, or Bluetooth. Although it does need additional tools to enable cloud access, it is a worthwhile investment for organizations that have a high need for system and network security, such as banking and financial organizations.

Troubleshooting and best practices

The following best practices should ideally be followed for any mobile automation project:

The mode of connectivity between the AUT, the DUT, and the computer on which the automation tool is installed should be clearly established, with all the considerations of the organization's security policies taken into account. In most cases, there is no way to work around the absence of USB connectivity other than to use cloud-based automation solutions. So, before starting a project, the physical setup should be thoroughly vetted.

The various operating systems and versions, mobile equipment manufacturers, and different form factors that need to be supported by the application should be identified, and consequently the automation solution should be designed to support all of them. If you start automating before identifying all the supported devices, there will invariably be a lot of rework required to make the scripts work with other devices. Hence, automation scripts should be designed for all supported OSes and devices right from the design stage.

User agent-based automation can only be implemented for mobile web applications. It is a cost-effective and quick way to implement a solution, since it involves automating just a few basic features. However, this technique should not be relied upon for validating GUI components and should always be accompanied by a round of device testing.

If any simulation or emulation technique (user agent or emulators/simulators) is used for automation, it should strictly be used for functional regression testing on different device configurations. Ideally, projects utilizing these solutions should also have a GUI testing round with real devices, at least for the first release.

If a geographically distributed team is to utilize the automation solution, for example, an offshore-onsite team that needs to use the same devices, then the most cost-effective solution in the long run is cloud-based automation. Even though the initial setup cost of the cloud solution is generally the highest of the four techniques, different teams can multiplex and use devices from different locations, so the overall cost is offset by using fewer devices overall.

When using emulators/simulators, the automation scripts should be designed to trigger the virtualization program with the required settings for memory, RAM, and the requisite version of the operating system, so that no manual intervention is required to start the programs before triggering the execution. This way, scripts can also be triggered remotely and in an unmonitored way.

Irrespective of the technique utilized, a proper framework should be implemented with the automation solution.

Summary

In this article, we learned what mobile test automation is, what different mobile packages are available, and what factors should be considered during mobile automation testing.
We then moved on to learn about the different types of approaches and the selection of the best approach according to specific project requirements. It is evident that using automation to test a mobile application helps ensure defect-free software and a good user experience, from which a good star rating can be expected for the AUT.

Resources for Article:

Further resources on this subject:

DOM and QTP [article]
Automated testing using Robotium [article]
Managing Test Structure with Robot Framework [article]

Solving Some Not-so-common vCenter Issues

Packt
05 May 2015
7 min read
In this article by Chuck Mills, author of the book vCenter Troubleshooting, we will review some of the not-so-common vCenter issues that administrators could face while they work with the vSphere environment. The article will cover the following issues and provide their solutions:

The vCenter inventory shows no objects after you log in
You get the VPXD must be stopped to perform this operation message
Removing the vCenter plugins when they are no longer needed

(For more resources related to this topic, see here.)

Solving the problem of no objects in vCenter

After successfully completing the vSphere 5.5 installation (not an upgrade) process with no error messages whatsoever, you log in to vCenter with the account you used for the installation; in this case, it is the local administrator account. Surprisingly, you are presented with an inventory of 0. The first thing is to make sure you have given vCenter enough time to start. Considering the previously mentioned account was the account used to install vCenter, you would assume the account was granted the appropriate rights to manage your vCenter Server, yet you can log in and receive no objects from vCenter. You might then try logging in with your domain administrator account, with the same result. This makes you wonder, what is going on here?

After installing vCenter 5.5 using the Windows option, remember that the administrator@vsphere.local user has administrator privileges for both the vCenter Single Sign-On Server and vCenter Server. You log in using the administrator@vsphere.local account with the password you defined during the installation of the SSO server. vSphere attaches the permissions, and assigns the role of administrator, to the default account administrator@vsphere.local. These privileges are given for both the vCenter Single Sign-On server and the vCenter Server system. You must log in with this account after the installation is complete. After logging in with this account, you can configure your domain as an identity source, and you can also give your domain administrator access to vCenter Server. Remember, the installation does not assign any administrator rights to the user account that was used to install vCenter. For additional information, review the Prerequisites for Installing vCenter Single Sign-On, Inventory Service, and vCenter Server document found at https://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.install.doc%2FGUID-C6AF2766-1AD0-41FD-B591-75D37DDB281F.html.

Now that you understand what is going on with the vCenter account, use the following steps to enable the use of your Active Directory account for managing vCenter.

Add or verify your AD domain as an identity source using the following procedure:

1. Log in with administrator@vsphere.local.
2. Select Administration from the menu.
3. Choose Configuration under the Single Sign-On option. You will see the Single Sign-On | Configuration option only when you log in using the administrator@vsphere.local account.
4. Select the Identity Sources tab and verify that the AD domain is listed. If not, choose Active Directory (Integrated Windows Authentication), found at the top of the window.
5. Enter your domain name and click on OK at the bottom of the window.
6. Verify that your domain was added to Identity Sources, as shown in the following screenshot:

Add the permissions for the AD account using the following steps:

1. Click on Home at the top left of the window.
2. Select vCenter from the menu options.
3. Select vCenter Servers and then choose the vCenter Server object.
4. Select the Manage tab and then the Permissions tab in the vCenter object window. Review the screenshot that follows these steps to verify the process.
5. Click on the green + icon to add a permission.
6. Choose the Add button located at the bottom of the window.
7. Select the AD domain from the drop-down option at the top of the window.
8. Choose a user or group you want to assign the permission to (the account named Chuck was selected for this example).
9. Verify that the user or group is selected in the window.
10. Use the drop-down options to choose the level of permissions (verify that Propagate to children is checked).

Now, you should be able to log into vCenter with your AD account. See the results of the successful login in the following screenshot:

By adding the permissions to the account, you are able to log into vCenter using your AD credentials. The preceding screenshot shows the results of the changes, which are much different from the earlier attempt.

Fixing the VPXD must be stopped to perform this operation message

It has been mentioned several times in this article that the vCenter Server Appliance (VCSA) is the direction VMware is moving in when it comes to managing vCenter. As the number of administrators using it keeps increasing, the number of problems will also increase. One of the components an administrator might have problems with is the vCenter Server service (VPXD). This service should not be running during any changes to the database or the account settings. However, as with most vSphere components, there are times when something happens and you need to stop or start a service in order to fix the problem. There are times when an administrator working within the VCSA appliance encounters the following error.

This service can be stopped using the web console by performing the following steps:

1. Log into the console using https://ip-of-vcsa:5480.
2. Enter your username and password.
3. Choose vCenter Server after logging in.
4. Make sure the Summary tab is selected.
5. Click on the Stop button to stop the server.

This should work most of the time, but if you find that using the web console is not working, then you need to log into the VCSA appliance directly and use the following procedure to stop the server:

1. Connect to the appliance by using an SSH client such as PuTTY or mRemote.
2. Type the command chkconfig. This will list all the services and their current status.
3. Verify that vmware-vpxd is on.
4. Stop the server by using the service vmware-vpxd stop command.

After completing your work, you can start the server using one of the following methods:

Restart the VCSA appliance
Use the web console by clicking on the Start button on the vCenter Summary page
Type service vmware-vpxd start on the SSH command line

This should fix the issues that occur when you see the VPXD must be stopped to perform this operation message.

Removing unwanted plugins in vSphere

Administrators add and remove tools from their environment based on their needs and the life of the tool. This is no different for the vSphere environment: as the needs of the administrator change, so does the usage of the plugins used in vSphere. The following section can be used to remove any unwanted plugins from your current vCenter.
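If you would rather script this cleanup than click through the Managed Object Browser steps described next, the same unregistration call can be made with the pyVmomi Python bindings. The following is only a rough sketch under assumed details: the vCenter address, credentials, and plugin key are placeholders, and certificate verification is disabled purely for brevity.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect

    # Placeholder connection details; replace them with your own vCenter
    # hostname, credentials, and the key of the plugin you want to remove.
    context = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.local",
                      user="administrator@vsphere.local",
                      pwd="password",
                      sslContext=context)
    try:
        ext_mgr = si.RetrieveContent().extensionManager
        for ext in ext_mgr.extensionList:
            print(ext.key)        # list the keys of all registered plugins
        ext_mgr.UnregisterExtension("com.example.unwanted.plugin")  # hypothetical key
    finally:
        Disconnect(si)

The browser-based procedure below achieves the same result and is easier to follow the first time, so use whichever fits your workflow.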
So, if you have lots of plugins that are no longer needed, use the following procedure to remove them:

1. Log into your vCenter using http://vCenter_name or IP_address/mob and enter your username and password.
2. Click on the content link under Properties.
3. Click on ExtensionManager, which is found in the VALUE column.
4. Highlight, right-click, and copy the extension to be removed. Check out Knowledge Base article 1025360, found at http://Kb.vmware.com/kb/1025360, to get an overview of the plugins and their names.
5. Select UnregisterExtension near the bottom of the page.
6. Right-click on the plugin name and paste it into the Value field.
7. Click on Invoke Method to remove the plugin.

This will give you the Method Invocation Result: void message, which informs you that the selected plugin has been removed. You can repeat this process for each plugin that you want to remove.

Summary

In this article, we covered some of the not-so-common challenges an administrator could encounter in the vSphere environment. It provided the troubleshooting steps along with the solutions to the following issues:

Seeing no objects after logging into vCenter with the account you used to install it
Getting past the VPXD must be stopped error when you are performing certain tasks within vCenter
Removing unwanted plugins from vCenter Server

Resources for Article:

Further resources on this subject:

Availability Management [article]
The Design Documentation [article]
Design, Install, and Configure [article]

Installation and Upgrade

Packt
05 May 2015
8 min read
In this article by Robert Hedblom, author of the book Microsoft System Center Data Protection Manager Cookbook, we will cover the installation and upgrade of SQL Server on the DPM server. We will also understand the prerequisites to start your upgrade process. You will learn how to:

Install a SQL Server locally on the DPM server
Prepare a remote SQL Server for DPMDB

(For more resources related to this topic, see here.)

The final result of an installation will never be better than the dependent application design and implementation. A common mistake discovered frequently is the misconfiguration of the SQL configurations that the System Center applications depend on. If you provide System Center a poorly configured SQL Server or insufficient resources, you will end up with quite a bad installation of the application that could be part of the services you would like to provision within your modern data center. In the end, a System Center application can never work faster than what the underlying dependent architecture or technology allows. With proper planning and decent design, you can also provide a scalable scenario for your installation that will make your System Center application applicable for future scenarios.

One important note regarding the upgrade scenario for the System Center Data Protection Manager software is the fact that there is no rollback feature built in. If your upgrade fails, you will not be able to restore your DPM server to its former running state easily. Always remember to provide supported scenarios for your solution. Never take any shortcuts, because there aren't any.

Installing a SQL Server locally on the DPM server

This recipe will cover the installation process of a local SQL Server that is collocated with the DPM server on the same operating system.

Getting ready

SQL Server is a core component for System Center Data Protection Manager. It is of major importance that the installation and design of SQL Server is well planned and implemented. If you have an undersized installation of SQL Server, it will provide you with a negative experience while operating System Center Data Protection Manager.

How to do it…

Make sure that your operating system is fully patched and rebooted before you start the installation of SQL Server 2012, and that the DPM Admins group is a member of the local administrators group. Now take the following steps:

Insert the SQL Server media and start the SQL Server setup.
In the SQL Server Installation Center, click on New SQL Server stand-alone installation…
The Setup Support Rules check will start and will identify any problems that might occur during the SQL Server installation. When the operation is complete, click on OK to continue.
In the Product Key step, enter the product key and click on Next > to continue.
Next is the License Terms step, where you check the I accept the license terms checkbox if you agree with the license terms. Click on Next > to continue.
The SQL Server installation will verify whether there are any product updates available from the Microsoft Update service. Check the Include SQL Server product updates checkbox and click on Next > to continue.
Next is the Install Setup Files step, which initializes the actual installation. When the tasks have finished, click on Install to continue.
Verify that all the rules have passed in the Setup Support Rules step of the SQL Server installation process. Resolve any warnings or errors and click the Re-run button to run the verification again.
If all the rules have passed, click on Next > to continue.
In the Setup Role step, select SQL Server Feature Installation and click on Next >.
In the Feature Selection step, choose the SQL Server features that you would like to install. System Center Data Protection Manager requires: Database Engine Services, Full-Text and Semantic Extractions for Search, and Reporting Services – Native. As an option, you can also install SQL Server Management Studio on the same operating system as the DPM server; those components are found under Management Tools (check both Basic and Complete). Click on Next > to continue.
Verify the Installation Rules step, resolve any errors, and click on Next > to continue.
In the Instance Configuration step, select Named instance and type in a suitable name for your SQL Server instance. Click on the button next to the Instance root directory and select the volume that should host the DPMDB. Click on Next > to continue.
Verify that there are no problems in the Disk Space Requirements step, resolve any issues, and click on Next > to continue.
In the Server Configuration step, type in the credentials for the dedicated service account you would like to use for this SQL Server. Switch the Startup Type to Automatic for the SQL Server Agent. When all the credentials are filled in, click on the Collation tab.
In the Collation tab, you must enter the collation for the database engine. System Center Data Protection Manager requires the SQL_Latin1_General_CP1_CI_AS collation. Click on the Customize… button to choose the correct collation and then Next > to continue.
The next step is the Database Engine Configuration step; here you enter the authentication security mode, administrators, and directories. In the Authentication Mode section, choose Windows authentication mode. In the Specify SQL Server administrators section, add the DPM Admins group and click on the Data Directories tab to verify that all your SQL Server configurations point to the dedicated disk. Click on Next > to continue.
In the Reporting Services Configuration step, configure SSRS (SQL Server Reporting Services). For the Reporting Services Native Mode, choose Install and configure and click on Next > to continue.
The next step is Error Reporting. Choose the defaults and click on Next > to continue.
In the Installation Configuration Rules step, verify that all operations pass the rules. Resolve any warnings or errors and click the Re-run button for another verification. When all operations have passed, click on Next > to continue.
Verify the configuration in the Ready to Install step and click on Install to start the installation.
The Installation Progress step will show the current status of the installation process.
When the installation is done, the SQL Server 2012 Setup will show you a summary in the Complete step, which is the final page of the SQL Server 2012 installation wizard. Click on the Close button to end the SQL Server 2012 Setup.

How it works…

SQL Server is a very important component for the System Center family. If the SQL Server is undersized or misconfigured in any way, it will reflect negatively in many ways on the performance of System Center. It is crucial to plan, design, and measure the performance of the SQL Server so that you know it will fit the scale you are planning for, and the workloads that it should host.

Preparing a remote SQL Server for DPMDB

This recipe will cover the procedure to prepare a remote SQL Server for hosting the DPMDB.
Getting ready

In a scenario where you build a large hosted DPM service solution delivering BaaS (Backup as a Service), RaaS (Restore as a Service), or DRaaS (Disaster Recovery as a Service) within your modern data center, you may want to use a dedicated backend SQL Server that is either a standalone SQL Server or a clustered one, for high availability. It is not advisable to use SQL Server AlwaysOn to host the DPMDB. Regardless of whether you put the DPMDB on a cluster or a backend standalone SQL Server, you still need to perform some initial configurations prior to the actual DPM server installation.

How to do it…

After installing your backend SQL Server solution, you must prepare it for hosting the DPMDB:

Insert the DPM 2012 R2 media and run the setup.
In the setup screen, click on the DPM Remote SQL Prep link.
The installation wizard will start and install the DPM 2012 R2 Support Files; this is a very quick installation.
When the installation has finished, a message box prompts that the installation has finished and that the System Center 2012 R2 DPM Support Files have been successfully installed.

How it works…

The support files for SQL Server will be installed on the backend SQL Server box and will be used when the DPM server connects and creates its database.

There's more…

For the DPM server installation to be successful when you place the DPMDB on a backend SQL Server solution, you need to install the SQL 2012 SERVICE PACK 1 Tools, which are located in the catalogueSCDPMSQLSRV2012SP1 directory on the DPM media.

Summary

In this article, we learned how to install a SQL Server on a local DPM server and prepared the remote SQL Server for hosting the DPMDB. We also got to know the prerequisites to start the upgrade process for System Center Data Protection Manager.

Resources for Article:

Further resources on this subject:

Mobility [article]
Planning a Compliance Program in Microsoft System Center 2012 [article]
Wireless and Mobile Hacks [article]

Symmetric Messages and Asynchronous Messages (Part 1)

Packt
05 May 2015
31 min read
In this article by Kingston Smiler S., author of the book OpenFlow Cookbook, we describe the steps involved in sending and processing symmetric messages and asynchronous messages in the switch, covering the following recipes: Sending and processing a hello message Sending and processing an echo request and a reply message Sending and processing an error message Sending and processing an experimenter message Handling a Get Asynchronous Configuration message from the controller, which is used to fetch a list of asynchronous events that will be sent from the switch Sending a Packet-In message to the controller Sending a Flow-removed message to the controller Sending a port-status message to the controller Sending a controller-role status message to the controller Sending a table-status message to the controller Sending a request-forward message to the controller Handling a packet-out message from the controller Handling a barrier message from the controller (For more resources related to this topic, see here.) Symmetric messages can be sent from both the controller and the switch without any solicitation between them. The OpenFlow switch should be able to send and process the following symmetric messages to or from the controller, but error messages will not be processed by the switch: Hello message Echo request and echo reply message Error message Experimenter message Asynchronous messages are sent by both the controller and the switch when there is any state change in the system. Like symmetric messages, asynchronous messages also should be sent without any solicitation between the switch and the controller. The switch should be able to send the following asynchronous messages to the controller: Packet-in message Flow-removed message Port-status message Table-status message Controller-role status message Request-forward message Similarly, the switch should be able to receive and process the following controller-to-switch messages: Packet-out message Barrier message The controller can program or instruct the switch to send a subset of interested asynchronous messages using an asynchronous configuration message. Based on this configuration, the switch should send only that subset of asynchronous messages via the communication channel. The switch should replicate and send asynchronous messages to all the controllers based on the information present in the asynchronous configuration message sent from each controller. The switch should maintain asynchronous configuration information on a per communication channel basis. Sending and processing a hello message The OFPT_HELLO message is used by both the switch and the controller to identify and negotiate the OpenFlow version supported by both devices. Hello messages should be sent from the switch once the TCP/TLS connection is established and are considered part of the communication channel establishment procedure. The switch should send a hello message to the controller immediately after establishing the TCP/TLS connection with the controller. How to do it... As hello messages are transmitted by both the switch and the controller, the switch should be able to send, receive, and process the hello message. The following section explains these procedures in detail. Sending the OFPT_HELLO message The message format to be used to send the hello message from the switch is as follows. This message includes the OpenFlow header along with zero or more elements that have variable size: /* OFPT_HELLO. 
This message includes zero or more hello elements having variable size. */ struct ofp_hello { struct ofp_header header; /* Hello element list */ struct ofp_hello_elem_header elements[0]; /* List of elements */ }; The version field in the ofp_header should be set with the highest OpenFlow protocol version supported by the switch. The elements field is an optional field and might contain the element definition, which takes the following TLV format: /* Version bitmap Hello Element */ struct ofp_hello_elem_versionbitmap { uint16_t type;           /* OFPHET_VERSIONBITMAP. */ uint16_t length;         /* Length in bytes of this element. */        /* Followed by:          * - Exactly (length - 4) bytes containing the bitmaps,          * then exactly (length + 7)/8*8 - (length) (between 0          * and 7) bytes of all-zero bytes */ uint32_t bitmaps[0]; /* List of bitmaps - supported versions */ }; The type field should be set with OFPHET_VERSIONBITMAP. The length field should be set to the length of this element. The bitmaps field should be set with the list of the OpenFlow versions the switch supports. The number of bitmaps included in the field depends on the highest version number supported by the switch. The ofp_versions 0 to 31 should be encoded in the first bitmap, ofp_versions 32 to 63 should be encoded in the second bitmap, and so on. For example, if the switch supports only version 1.0 (ofp_version = 0x01) and version 1.3 (ofp_version = 0x04), then the first bitmap should be set to 0x00000012. Refer to the send_hello_message() function in the of/openflow.c file for the procedure to build and send the OFPT_HELLO message. Receiving the OFPT_HELLO message The switch should be able to receive and process the OFPT_HELLO messages that are sent from the controller. The controller also uses the same message format, structures, and enumerations as defined in the previous section of this recipe. Once the switch receives the hello message, it should calculate the protocol version to be used for messages exchanged with the controller. The procedure required to calculate the protocol version to be used is as follows: If the received hello message contains an optional OFPHET_VERSIONBITMAP element and the bitmap field contains a valid value, then the negotiated version should be the highest common version between the protocol versions supported by the switch and the versions advertised in the bitmap field of the OFPHET_VERSIONBITMAP element. If the hello message doesn't contain any OFPHET_VERSIONBITMAP element, then the negotiated version should be the smaller of the highest switch-supported protocol version and the version field set in the OpenFlow header of the received hello message. If the negotiated version is supported by the switch, then the OpenFlow connection between the controller and the switch continues. Otherwise, the switch should send an OFPT_ERROR message with the type field set as OFPET_HELLO_FAILED, the code field set as OFPHFC_INCOMPATIBLE, and an optional ASCII string explaining the situation in the data, and terminate the connection. There's more… Once the switch and the controller negotiate the OpenFlow protocol version to be used, the connection setup procedure is complete. From then on, both the controller and the switch can send OpenFlow protocol messages to each other. 
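The version negotiation rule above is easy to get wrong, so here is a minimal C sketch of how a switch might compute the negotiated version from a received hello message. The negotiate_version() helper and its arguments are assumptions made for this illustration; real code would work directly on the structures from the OpenFlow headers.

```c
#include <stdint.h>

/* Hypothetical helper: pick the negotiated OpenFlow version.
 * local_bitmap and peer_bitmap encode versions 0..31 (bit n == ofp_version n);
 * peer_sent_bitmap is non-zero when the peer included OFPHET_VERSIONBITMAP;
 * peer_header_version is the version field of the received ofp_header.
 * Returns the negotiated version, or 0 when the devices are incompatible
 * (the caller then sends OFPET_HELLO_FAILED / OFPHFC_INCOMPATIBLE). */
static uint8_t negotiate_version(uint32_t local_bitmap, uint32_t peer_bitmap,
                                 int peer_sent_bitmap,
                                 uint8_t local_max_version,
                                 uint8_t peer_header_version)
{
    if (peer_sent_bitmap) {
        uint32_t common = local_bitmap & peer_bitmap;
        for (int v = 31; v >= 0; v--) {      /* highest common version wins */
            if (common & (1u << v))
                return (uint8_t)v;
        }
        return 0;                            /* no version in common */
    }
    /* No bitmap element: use the smaller of the two header versions. */
    return (peer_header_version < local_max_version) ? peer_header_version
                                                      : local_max_version;
}
```

With the bitmap from the example above (0x00000012 on both sides), this sketch returns 0x04, that is, OpenFlow 1.3.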
Sending and processing an echo request and a reply message Echo request and reply messages are used by both the controller and the switch to maintain and verify the liveliness of the controller-switch connection. Echo messages are also used to calculate the latency and bandwidth of the controller-switch connection. On reception of an echo request message, the switch should respond with an echo reply message. How to do it... As echo messages are transmitted by both the switch and the controller, the switch should be able to send, receive, and process them. The following section explains these procedures in detail. Sending the OFPT_ECHO_REQUEST message The OpenFlow specification doesn't specify how frequently this echo message has to be sent from the switch. However, the switch might choose to send an echo request message periodically to the controller with the configured interval. Similarly, the OpenFlow specification doesn't mention what the timeout (the longest period of time the switch should wait) for receiving echo reply message from the controller should be. After sending an echo request message to the controller, the switch should wait for the echo reply message for the configured timeout period. If the switch doesn't receive the echo reply message within this period, then it should initiate the connection interruption procedure. The OFPT_ECHO_REQUEST message contains an OpenFlow header followed by an undefined data field of arbitrary length. The data field might be filled with the timestamp at which the echo request message was sent, various lengths or values to measure the bandwidth, or be zero-size for just checking the liveliness of the connection. In most open source implementations of OpenFlow, the echo request message only contains the header field and doesn't contain any body. Refer to the send_echo_request() function in the of/openflow.c file for the procedure to build and send the echo_request message. Receiving OFPT_ECHO_REQUEST The switch should be able to receive and process OFPT_ECHO_REQUEST messages that are sent from the controller. The controller also uses the same message format, structures, and enumerations as defined in the previous section of this recipe. Once the switch receives the echo request message, it should build the OFPT_ECHO_REPLY message. This message consists of ofp_header and an arbitrary-length data field. While forming the echo reply message, the switch should copy the content present in the arbitrary-length field of the request message to the reply message. Refer to the process_echo_request() function in the of/openflow.c file for the procedure to handle and process the echo request message and send the echo reply message. Processing OFPT_ECHO_REPLY message The switch should be able to receive the echo reply message from the controller. If the switch sends the echo request message to calculate the latency or bandwidth, on receiving the echo reply message, it should parse the arbitrary-length data field and can calculate the bandwidth, latency, and so on. There's more… If the OpenFlow switch implementation is divided into multiple layers, then the processing of the echo request and reply should be handled in the deepest possible layer. For example, if the OpenFlow switch implementation is divided into user-space processing and kernel-space processing, then the echo request and reply message handling should be in the kernel space. 
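As a rough illustration of the "copy the arbitrary data back" rule, the following C sketch builds an echo reply from a received echo request. The simplified header layout, the OFPT_ECHO_REPLY_TYPE value, and the build_echo_reply() helper are assumptions made for this example; a real switch would use the definitions from its OpenFlow headers and its own send path.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <arpa/inet.h>   /* htons() */

/* Simplified OpenFlow header used only for this sketch. */
struct ofp_header_sketch {
    uint8_t  version;
    uint8_t  type;
    uint16_t length;   /* total message length, network byte order */
    uint32_t xid;      /* copied unchanged into the reply */
};

#define OFPT_ECHO_REPLY_TYPE 3   /* assumed numeric value for the sketch */

/* Build an echo reply: same xid, same arbitrary payload, reply type. */
static uint8_t *build_echo_reply(const uint8_t *request, uint16_t req_len,
                                 uint16_t *reply_len)
{
    uint8_t *reply = malloc(req_len);
    if (reply == NULL)
        return NULL;

    memcpy(reply, request, req_len);   /* copies the header and the payload */
    struct ofp_header_sketch *hdr = (struct ofp_header_sketch *)reply;
    hdr->type   = OFPT_ECHO_REPLY_TYPE;
    hdr->length = htons(req_len);      /* same length as the request */
    *reply_len  = req_len;
    return reply;                      /* the caller sends and frees it */
}
```

If the request carried a timestamp in its data field, this copy-back behaviour is what lets the sender measure round-trip latency when the reply arrives.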
Sending and processing an error message Error messages are used by both the controller and the switch to notify the other end of the connection about any problem. Error messages are typically used by the switch to inform the controller about a failure to execute a request sent from the controller. How to do it... Whenever the switch wants to send an error message to the controller, it should build the OFPT_ERROR message, which takes the following message format: /* OFPT_ERROR: Error message (datapath -> the controller). */ struct ofp_error_msg { struct ofp_header header; uint16_t type; uint16_t code; uint8_t data[0]; /* Variable-length data. Interpreted based on the type and code. No padding. */ }; The type field indicates a high-level type of error. The code value is interpreted based on the type. The data field is a piece of variable-length data that is interpreted based on both the type and the code. The data field should contain an ASCII text string that adds details about why the error occurred. Unless specified otherwise, the data field should contain at least 64 bytes of the failed message that caused this error. If the failed message is shorter than 64 bytes, then the data field should contain the full message without any padding. If the switch needs to send an error message in response to a specific message from the controller (say, OFPET_BAD_REQUEST, OFPET_BAD_ACTION, OFPET_BAD_INSTRUCTION, OFPET_BAD_MATCH, or OFPET_FLOW_MOD_FAILED), then the xid field of the OpenFlow header in the error message should be set to the xid of the offending request message. Refer to the send_error_message() function in the of/openflow.c file for the procedure to build and send an error message. If the switch sends an error message for a request message from the controller (because of an error condition), then the switch need not send the reply message to that request. Sending and processing an experimenter message Experimenter messages provide a way for the switch to offer additional vendor-defined functionalities. How to do it... The controller sends the experimenter message in the experimenter message format defined by the OpenFlow specification. Once the switch receives this message, it should invoke the appropriate vendor-specific functions. Handling a Get Asynchronous Configuration message from the controller The OpenFlow specification provides a mechanism for the controller to fetch the list of asynchronous events that can be sent from the switch on the controller channel. This is achieved by sending the Get Asynchronous Configuration message (OFPT_GET_ASYNC_REQUEST) to the switch. How to do it... The Get Asynchronous Configuration message (OFPT_GET_ASYNC_REQUEST) doesn't have any body other than ofp_header. On receiving this OFPT_GET_ASYNC_REQUEST message, the switch should respond with the OFPT_GET_ASYNC_REPLY message. The switch should fill the property list with the list of asynchronous configuration events / property types that the relevant controller channel is preconfigured to receive. The switch should get this information from its internal data structures. Refer to the process_async_config_request() function in the of/openflow.c file for the procedure to process the Get Asynchronous Configuration request message from the controller. Sending a packet-in message to the controller Packet-in messages (OFPT_PACKET_IN) are sent from the switch to the controller to transfer a packet received on one of the switch ports to the controller for further processing. 
By default, a packet-in message should be sent to all the controllers that are in the equal (OFPCR_ROLE_EQUAL) and master (OFPCR_ROLE_MASTER) roles. This message should not be sent to controllers that are in the slave state. There are three ways by which the switch can send a packet-in event to the controller: Table-miss entry: When there is no matching flow entry for the incoming packet, the switch can send the packet to the controller. TTL checking: When the TTL value in a packet reaches zero, the switch can send the packet to the controller. Controller action: When the matching entry of the packet (either a flow table entry or a group table entry) contains a "send to the controller" action. How to do it... When the switch wants to send a packet received in its data path to the controller, the following message format should be used: /* Packet received on port (datapath -> the controller). */ struct ofp_packet_in { struct ofp_header header; uint32_t buffer_id; /* ID assigned by datapath. */ uint16_t total_len; /* Full length of frame. */ uint8_t reason;     /* Reason packet is being sent                      * (one of OFPR_*) */ uint8_t table_id;   /* ID of the table that was looked up */ uint64_t cookie;   /* Cookie of the flow entry that was                      * looked up. */ struct ofp_match match; /* Packet metadata. Variable size. */ /* The variable size and padded match is always followed by: * - Exactly 2 all-zero padding bytes, then * - An Ethernet frame whose length is inferred from header.length. * The padding bytes preceding the Ethernet frame ensure that IP * header (if any) following the Ethernet header is 32-bit aligned. */ uint8_t pad[2]; /* Align to 64 bit + 16 bit */ uint8_t data[0]; /* Ethernet frame */ }; The buffer_id field should be set to the opaque value generated by the switch. When the packet is buffered, the data portion of the packet-in message should contain some bytes of data from the incoming packet. If the packet is sent to the controller because of the "send to the controller" action of a table entry, then the max_len field of ofp_action_output should be used as the size of the packet to be included in the packet-in message. If the packet is sent to the controller for any other reason, then the miss_send_len field of the OFPT_SET_CONFIG message should be used to determine the size of the packet. If the packet is not buffered, either because of unavailability of buffers or because of an explicit configuration via OFPCML_NO_BUFFER, then the entire packet should be included in the data portion of the packet-in message with the buffer_id value set to OFP_NO_BUFFER. The data field should be set to the complete packet or a fraction of the packet. The total_len field should be set to the length of the packet included in the data field. The reason field should be set with any one of the following values defined in the enumeration, based on the context that triggers the packet-in event: /* Why is this packet being sent to the controller? */ enum ofp_packet_in_reason { OFPR_TABLE_MISS = 0,   /* No matching flow (table-miss                        * flow entry). */ OFPR_APPLY_ACTION = 1, /* Output to the controller in                        * apply-actions. */ OFPR_INVALID_TTL = 2, /* Packet has invalid TTL */ OFPR_ACTION_SET = 3,   /* Output to the controller in action set. */ OFPR_GROUP = 4,       /* Output to the controller in group bucket. */ OFPR_PACKET_OUT = 5,   /* Output to the controller in packet-out. */ }; If the packet-in message was triggered by the flow-entry "send to the controller" action, then the cookie field should be set with the cookie of the flow entry that caused the packet to be sent to the controller. This field should be set to -1 if the cookie cannot be associated with a particular flow. When the packet-in message is triggered by the "send to the controller" action of a table entry, there is a possibility that some changes have already been applied to the packet in previous stages of the pipeline. This information needs to be carried along with the packet-in message, and it can be carried in the match field of the packet-in message as a set of OXM (short for OpenFlow Extensible Match) TLVs. If the switch includes an OXM TLV in the packet-in message, then the match field should contain a set of OXM TLVs that include context fields. The standard context fields that can be added into the OXM TLVs are OFPXMT_OFB_IN_PORT, OFPXMT_OFB_IN_PHY_PORT, OFPXMT_OFB_METADATA, and OFPXMT_OFB_TUNNEL_ID. When the switch receives the packet on a physical port and this packet information needs to be carried in the packet-in message, then OFPXMT_OFB_IN_PORT and OFPXMT_OFB_IN_PHY_PORT should have the same value, which is the OpenFlow port number of that physical port. When the switch receives the packet on a logical port and this packet information needs to be carried in the packet-in message, then the switch should set the logical port's port number in OFPXMT_OFB_IN_PORT and the physical port's port number in OFPXMT_OFB_IN_PHY_PORT. For example, consider a packet received on a tunnel interface defined over a Link Aggregation Group (LAG) with two member ports. Then the packet-in message should carry the tunnel interface's port_no in the OFPXMT_OFB_IN_PORT field and the physical interface's port_no in the OFPXMT_OFB_IN_PHY_PORT field. Refer to the send_packet_in_message() function in the of/openflow.c file for the procedure to send a packet-in message event to the controller. How it works... The switch can send either the entire packet it receives from the switch port to the controller, or a fraction of the packet. When the switch is configured to send only a fraction of the packet, it should buffer the packet in its memory and send a portion of the packet data. This is controlled by the switch configuration. If the switch is configured to buffer the packet, and it has sufficient memory to buffer it, then the packet-in message should contain the following: A fraction of the packet. This is the size of the packet to be included in the packet-in message, configured via the switch configuration message. By default, it is 128 bytes. When the packet-in message results from a table-entry action, the output action itself can specify the size of the packet to be sent to the controller. For all other packet-in messages, it is defined in the switch configuration. The buffer ID to be used by the controller when the controller wants to forward the message at a later point in time. There's more… A switch that implements buffering is expected to expose some details, such as the amount of available buffers, the period of time the buffered data will be available, and so on, through documentation. The switch should implement a procedure to release the buffered packet when there is no response from the controller to the packet-in event. 
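To make the buffering rules concrete, here is a small C sketch of the decision a switch might take before filling the data portion of a packet-in message. The plan_packet_in() helper and the OFP_NO_BUFFER_ID constant are illustrative assumptions; real code would use the definitions from its OpenFlow headers and its own buffer pool.

```c
#include <stdint.h>

#define OFP_NO_BUFFER_ID 0xffffffffu   /* assumed value of OFP_NO_BUFFER */

struct packet_in_plan {
    uint32_t buffer_id;   /* switch buffer slot, or OFP_NO_BUFFER_ID */
    uint16_t data_len;    /* bytes of the frame copied into the message */
};

/* Decide how much of the frame goes into the packet-in message.
 * max_len comes from the output action when the packet-in results from a
 * "send to the controller" action, or from miss_send_len otherwise;
 * 0xffff is assumed to mean "do not buffer, send the whole packet".
 * can_buffer is non-zero when the switch has a free buffer slot. */
static struct packet_in_plan plan_packet_in(uint16_t frame_len,
                                            uint16_t max_len,
                                            int can_buffer,
                                            uint32_t free_buffer_id)
{
    struct packet_in_plan plan;

    if (!can_buffer || max_len == 0xffff) {
        /* Not buffered: the entire frame travels inside the message. */
        plan.buffer_id = OFP_NO_BUFFER_ID;
        plan.data_len  = frame_len;
    } else {
        /* Buffered: keep the frame and send only the first max_len bytes. */
        plan.buffer_id = free_buffer_id;
        plan.data_len  = (frame_len < max_len) ? frame_len : max_len;
    }
    return plan;
}
```

In either branch, the total_len field of the packet-in message is still set to the full frame length, so the controller can tell whether the data portion was truncated.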
Sending a flow-removed message to the controller A flow-removed message (OFPT_FLOW_REMOVED) is sent from the switch to the controller when a flow entry is removed from the flow table. This message should be sent to the controller only when the OFPFF_SEND_FLOW_REM flag in the flow entry is set. The switch should send this message only on the controller channels through which the controller has requested to receive this event. The controller can express its interest in receiving this event by sending the switch configuration message to the switch. By default, OFPT_FLOW_REMOVED should be sent to all the controllers that are in the equal (OFPCR_ROLE_EQUAL) and master (OFPCR_ROLE_MASTER) roles. This message should not be sent to a controller that is in the slave state. How to do it... When the switch removes an entry from the flow table, it should build an OFPT_FLOW_REMOVED message with the following format and send this message to the controllers that have already shown interest in this event: /* Flow removed (datapath -> the controller). */ struct ofp_flow_removed { struct ofp_header header; uint64_t cookie;       /* Opaque controller-issued identifier. */ uint16_t priority;     /* Priority level of flow entry. */ uint8_t reason;         /* One of OFPRR_*. */ uint8_t table_id;       /* ID of the table */ uint32_t duration_sec; /* Time flow was alive in seconds. */ uint32_t duration_nsec; /* Time flow was alive in nanoseconds                          * beyond duration_sec. */ uint16_t idle_timeout; /* Idle timeout from original flow mod. */ uint16_t hard_timeout; /* Hard timeout from original flow mod. */ uint64_t packet_count; uint64_t byte_count; struct ofp_match match; /* Description of fields. Variable size. */ }; The cookie field should be set with the cookie of the flow entry, the priority field should be set with the priority of the flow entry, and the reason field should be set with one of the following values defined in the enumeration: /* Why was this flow removed? */ enum ofp_flow_removed_reason { OFPRR_IDLE_TIMEOUT = 0,/* Flow idle time exceeded idle_timeout. */ OFPRR_HARD_TIMEOUT = 1, /* Time exceeded hard_timeout. */ OFPRR_DELETE = 2,       /* Evicted by a DELETE flow mod. */ OFPRR_GROUP_DELETE = 3, /* Group was removed. */ OFPRR_METER_DELETE = 4, /* Meter was removed. */ OFPRR_EVICTION = 5,     /* Switch eviction to free resources. */ }; The duration_sec and duration_nsec fields should be set with the elapsed time of the flow entry in the switch. The total duration in nanoseconds can be computed as duration_sec * 10^9 + duration_nsec. All the other fields, such as idle_timeout, hard_timeout, and so on, should be set with the appropriate value from the flow entry, that is, these values can be directly copied from the flow mod that created this entry. The packet_count and byte_count fields should be set with the packet count and the byte count associated with the flow entry, respectively. If the values are not available, then these fields should be set with the maximum possible value. Refer to the send_flow_removed_message() function in the of/openflow.c file for the procedure to send a flow-removed event message to the controller. Sending a port-status message to the controller Port-status messages (OFPT_PORT_STATUS) are sent from the switch to the controller when there is any change in the port status or when a new port is added, removed, or modified in the switch's data path. 
The switch should send this message only on the controller channels through which the controller has requested to receive it. The controller can express its interest in receiving this event by sending an asynchronous configuration message to the switch. By default, the port-status message should be sent to all configured controllers in the switch, including the controller in the slave role (OFPCR_ROLE_SLAVE). How to do it... The switch should construct an OFPT_PORT_STATUS message with the following format and send this message to the controllers that have already shown interest in this event: /* A physical port has changed in the datapath */ struct ofp_port_status { struct ofp_header header; uint8_t reason; /* One of OFPPR_*. */ uint8_t pad[7]; /* Align to 64-bits. */ struct ofp_port desc; }; The reason field should be set to one of the following values as defined in the enumeration: /* What changed about the physical port */ enum ofp_port_reason { OFPPR_ADD = 0,   /* The port was added. */ OFPPR_DELETE = 1, /* The port was removed. */ OFPPR_MODIFY = 2, /* Some attribute of the port has changed. */ }; The desc field should be set to the port description. In the port description, all properties need not be filled by the switch. The switch should fill the properties that have changed, whereas the unchanged properties can be included optionally. Refer to the send_port_status_message() function in the of/openflow.c file for the procedure to send a port-status message to the controller. Sending a controller role-status message to the controller Controller role-status messages (OFPT_ROLE_STATUS) are sent from the switch to the set of controllers when the role of a controller is changed as a result of an OFPT_ROLE_REQUEST message. For example, if there are three controllers connected to a switch (say controller1, controller2, and controller3) and controller1 sends an OFPT_ROLE_REQUEST message to the switch, then the switch should send an OFPT_ROLE_STATUS message to controller2 and controller3. How to do it... The switch should build the OFPT_ROLE_STATUS message with the following format and send it to all the other controllers: /* Role status event message. */ struct ofp_role_status { struct ofp_header header; /* Type OFPT_ROLE_STATUS. */ uint32_t role;           /* One of OFPCR_ROLE_*. */ uint8_t reason;           /* One of OFPCRR_*. */ uint8_t pad[3];           /* Align to 64 bits. */ uint64_t generation_id;   /* Master Election Generation Id */ /* Role Property list */ struct ofp_role_prop_header properties[0]; }; The reason field should be set with one of the following values as defined in the enumeration: /* What changed about the controller role */ enum ofp_controller_role_reason { OFPCRR_MASTER_REQUEST = 0, /* Another controller asked                            * to be master. */ OFPCRR_CONFIG = 1,         /* Configuration changed on the                            * switch. */ OFPCRR_EXPERIMENTER = 2,   /* Experimenter data changed. */ }; The role field should be set to the new role of the controller. The generation_id field should be set with the generation ID of the OFPT_ROLE_REQUEST message that triggered the OFPT_ROLE_STATUS message. If the reason code is OFPCRR_EXPERIMENTER, then the role property list should be set in the following format: /* Role property types. */ enum ofp_role_prop_type { OFPRPT_EXPERIMENTER = 0xFFFF, /* Experimenter property. */ };   /* Experimenter role property */ struct ofp_role_prop_experimenter { uint16_t type;         /* One of OFPRPT_EXPERIMENTER. */ uint16_t length;       /* Length in bytes of this property. */ uint32_t experimenter; /* Experimenter ID which takes the same                        * form as struct                        * ofp_experimenter_header. */ uint32_t exp_type;     /* Experimenter defined. */ /* Followed by: * - Exactly (length - 12) bytes containing the experimenter data, * - Exactly (length + 7)/8*8 - (length) (between 0 and 7) * bytes of all-zero bytes */ uint32_t experimenter_data[0]; }; The experimenter field should be set to the experimenter ID, which takes the same form as in the ofp_experimenter_header structure. Refer to the send_role_status_message() function in the of/openflow.c file for the procedure to send a role status message to the controller. Sending a table-status message to the controller Table-status messages (OFPT_TABLE_STATUS) are sent from the switch to the controller when there is any change in the table status; for example, the number of entries in the table crosses a threshold value, called the vacancy threshold. The switch should send this message only on the controller channels through which the controller has requested to receive it. The controller can express its interest in receiving this event by sending the asynchronous configuration message to the switch. How to do it... The switch should build an OFPT_TABLE_STATUS message with the following format and send this message to the controllers that have already shown interest in this event: /* A table config has changed in the datapath */ struct ofp_table_status { struct ofp_header header; uint8_t reason;             /* One of OFPTR_*. */ uint8_t pad[7];             /* Pad to 64 bits */ struct ofp_table_desc table; /* New table config. */ }; The reason field should be set with one of the following values defined in the enumeration: /* What changed about the table */ enum ofp_table_reason { OFPTR_VACANCY_DOWN = 3, /* Vacancy down threshold event. */ OFPTR_VACANCY_UP = 4,   /* Vacancy up threshold event. */ }; When the number of free entries in the table crosses the vacancy_down threshold, the switch should set the reason code as OFPTR_VACANCY_DOWN. Once the vacancy_down event is generated by the switch, the switch should not generate any further vacancy down events until a vacancy up event is generated. When the number of free entries in the table crosses the vacancy_up threshold value, the switch should set the reason code as OFPTR_VACANCY_UP. Again, once the vacancy up event is generated by the switch, the switch should not generate any further vacancy up events until a vacancy down event is generated. The table field should be set with the table description. Refer to the send_table_status_message() function in the of/openflow.c file for the procedure to send a table status message to the controller. Sending a request-forward message to the controller When the switch receives a modify request message from the controller to modify the state of group or meter entries, after successful modification of the state, the switch should forward this request message to all other controllers as a request forward message (OFPT_REQUESTFORWARD). The switch should send this message only on the controller channels through which the controller has requested to receive this event. The controller can express its interest in receiving this event by sending an asynchronous configuration message to the switch. How to do it… 
The switch should build the OFPT_REQUESTFORWARD message with the following format, and send this message to the controllers that have already shown interest in this event: /* Group/Meter request forwarding. */ struct ofp_requestforward_header { struct ofp_header header; /* Type OFPT_REQUESTFORWARD. */ struct ofp_header request; /* Request being forwarded. */ }; The request field should be set with the request that was received from the controller. Refer to the send_request_forward_message() function in the of/openflow.c file for the procedure to send a request-forward message to the controller. Handling a packet-out message from the controller Packet-out (OFPT_PACKET_OUT) messages are sent from the controller to the switch when the controller wishes to send a packet out through the switch's data path via a switch port. How to do it... There are two ways in which the controller can send a packet-out message to the switch: Construct the full packet: In this case, the controller generates the complete packet and adds the action list field to the packet-out message. The action field contains a list of actions defining how the packet should be processed by the switch. If the switch receives a packet_out message with buffer_id set as OFP_NO_BUFFER, then the switch should look into the action list, and based on the action to be performed, it can do one of the following: Modify the packet and send it via the switch port mentioned in the action list Hand over the packet to OpenFlow's pipeline processing, based on the OFPP_TABLE specified in the action list Use a packet buffer in the switch: In this mechanism, the switch should use the buffer that was created at the time of sending the packet-in message to the controller. While sending the packet_in message to the controller, the switch adds the buffer_id to the packet_in message. When the controller wants to send a packet_out message that uses this buffer, the controller includes this buffer_id in the packet_out message. On receiving the packet_out message with a valid buffer_id, the switch should fetch the packet from the buffer and send it via the switch port. Once the packet is sent out, the switch should free the memory allocated to the buffer, which was cached. Handling a barrier message from the controller The switch implementation could arbitrarily reorder the messages sent from the controller to maximize its performance. So, if the controller wants to enforce the processing of the messages in order, then barrier messages are used. Barrier messages (OFPT_BARRIER_REQUEST) are sent from the controller to the switch to ensure message ordering. The switch should not reorder any messages across the barrier message. For example, if the controller is sending a group add message, followed by a flow add message referencing the group, then the message order should be preserved by sending a barrier message between them. How to do it... When the controller wants to send messages that are related to each other, it sends a barrier message between these messages. The switch should process these messages as follows: Messages before a barrier request should be processed fully before the barrier, including sending any resulting replies or errors. The barrier request message should then be processed and a barrier reply should be sent. While sending the barrier reply message, the switch should copy the xid value from the barrier request message. The switch should process the remaining messages. Both the barrier request and barrier reply messages don't have any body. 
They only have the ofp_header. Summary This article covers the list of symmetric and asynchronous messages sent and received by the OpenFlow switch, along with the procedure for handling these messages. Resources for Article: Further resources on this subject: The OpenFlow Controllers [article] Untangle VPN Services [article] Getting Started [article]

Writing a Fully Native Application

Packt
05 May 2015
15 min read
In this article written by Sylvain Ratabouil, author of Android NDK Beginner's Guide - Second Edition, we have breached Android NDK's surface using JNI. But there is much more to find inside! The NDK includes its own set of specific features, one of them being Native Activities. Native activities allow creating applications based only on native code, without a single line of Java. No more JNI! No more references! No more Java! (For more resources related to this topic, see here.) In addition to native activities, the NDK brings some APIs for native access to Android resources, such as display windows, assets, and device configuration. These APIs help in getting rid of the tortuous JNI bridge often necessary to embed native code. Although there is a lot still missing, and not likely to be available (Java remains the main platform language for GUIs and most frameworks), multimedia applications are a perfect target to apply them. Here, we initiate a native C++ project developed progressively throughout this article: DroidBlaster. Based on a top-down viewpoint, this sample scrolling shooter will feature 2D graphics, and, later on, 3D graphics, sound, input, and sensor management. We will be creating its base structure and main game components. Let's now enter the heart of the Android NDK by: Creating a fully native activity Handling main activity events Accessing display window natively Retrieving time and calculating delays Creating a native Activity The NativeActivity class provides a facility to minimize the work necessary to create a native application. It lets the developer get rid of all the boilerplate code to initialize and communicate with native code and concentrate on core functionalities. This glue Activity is the simplest way to write applications, such as games, without a line of Java code. The resulting project is provided with this book under the name DroidBlaster_Part1. Time for action – creating a basic native Activity We are now going to see how to create a minimal native activity that runs an event loop. Create a new hybrid Java/C++ project:      Name it DroidBlaster.      Turn the project into a native project. Name the native module droidblaster.      Remove the native source and header files that have been created by ADT.      Remove the reference to the Java src directory in Project Properties | Java Build Path | Source. Then, remove the directory itself on disk.      Get rid of all layouts in the res/layout directory.      Get rid of jni/droidblaster.cpp if it has been created. In AndroidManifest.xml, use Theme.NoTitleBar.Fullscreen as the application theme. 
Declare a NativeActivity that refers to the native module named droidblaster (that is, the native library we will compile) using the meta-data property android.app.lib_name: <?xml version="1.0" encoding="utf-8"?> <manifest xmlns:android="http://schemas.android.com/apk/res/android"    package="com.packtpub.droidblaster2d" android:versionCode="1"    android:versionName="1.0">    <uses-sdk        android:minSdkVersion="14"        android:targetSdkVersion="19"/>      <application android:icon="@drawable/ic_launcher"        android:label="@string/app_name"        android:allowBackup="false"        android:theme="@android:style/Theme.NoTitleBar.Fullscreen">        <activity android:name="android.app.NativeActivity"            android:label="@string/app_name"            android:screenOrientation="portrait">            <meta-data android:name="android.app.lib_name"                android:value="droidblaster"/>            <intent-filter>                <action android:name="android.intent.action.MAIN"/>            <category                    android:name="android.intent.category.LAUNCHER"/>            </intent-filter>        </activity>    </application> </manifest> Create the file jni/Types.hpp. This header will contain common types and the header cstdint: #ifndef _PACKT_TYPES_HPP_ #define _PACKT_TYPES_HPP_   #include <cstdint>   #endif Let's write a logging class to get some feedback in the Logcat.      Create jni/Log.hpp and declare a new class Log.      Define the packt_Log_debug macro to allow the activating or deactivating of debug messages with a simple compile flag: #ifndef _PACKT_LOG_HPP_ #define _PACKT_LOG_HPP_   class Log { public:    static void error(const char* pMessage, ...);    static void warn(const char* pMessage, ...);    static void info(const char* pMessage, ...);    static void debug(const char* pMessage, ...); };   #ifndef NDEBUG    #define packt_Log_debug(...) Log::debug(__VA_ARGS__) #else    #define packt_Log_debug(...) #endif   #endif Implement the jni/Log.cpp file and implement the info() method. To write messages to Android logs, the NDK provides a dedicated logging API in the android/log.h header, which can be used similarly to printf() or vprintf() (with varArgs) in C: #include "Log.hpp"   #include <stdarg.h> #include <android/log.h>   void Log::info(const char* pMessage, ...) {    va_list varArgs;    va_start(varArgs, pMessage);    __android_log_vprint(ANDROID_LOG_INFO, "PACKT", pMessage,        varArgs);    __android_log_print(ANDROID_LOG_INFO, "PACKT", "\n");    va_end(varArgs); } ... Write the other log methods, error(), warn(), and debug(), which are almost identical, except for the level macro, which is ANDROID_LOG_ERROR, ANDROID_LOG_WARN, and ANDROID_LOG_DEBUG, respectively. Application events in NativeActivity can be processed with an event loop. So, create jni/EventLoop.hpp to define a class with a unique method run(). Include the android_native_app_glue.h header, which defines the android_app structure. It represents what could be called an applicative context, which holds all the information related to the native activity: its state, its window, its event queue, and so on: #ifndef _PACKT_EVENTLOOP_HPP_ #define _PACKT_EVENTLOOP_HPP_   #include <android_native_app_glue.h>   class EventLoop { public:    EventLoop(android_app* pApplication);      void run();   private:    android_app* mApplication; }; #endif Create jni/EventLoop.cpp and implement the activity event loop in the run() method. Include a few log events to get some feedback in Android logs. 
During the whole activity lifetime, the run() method loops continuously over events until it is requested to terminate. When an activity is about to be destroyed, the destroyRequested value in the android_app structure is changed internally to indicate to the client code that it must exit. Also, call app_dummy() to ensure the glue code that ties native code to NativeActivity is not stripped by the linker. #include "EventLoop.hpp" #include "Log.hpp"   EventLoop::EventLoop(android_app* pApplication):        mApplication(pApplication) {}   void EventLoop::run() {    int32_t result; int32_t events;    android_poll_source* source;      // Makes sure native glue is not stripped by the linker.    app_dummy();      Log::info("Starting event loop");    while (true) {        // Event processing loop.        while ((result = ALooper_pollAll(-1, NULL, &events,                (void**) &source)) >= 0) {            // An event has to be processed.            if (source != NULL) {                source->process(mApplication, source);            }            // Application is getting destroyed.            if (mApplication->destroyRequested) {                Log::info("Exiting event loop");                return;            }        }    } } Finally, create jni/Main.cpp to define the program entry point android_main(), which runs the event loop in a new file Main.cpp: #include "EventLoop.hpp" #include "Log.hpp"   void android_main(android_app* pApplication) {    EventLoop(pApplication).run(); } Edit the jni/Android.mk file to define the droidblaster module (the LOCAL_MODULE directive). Describe the C++ files to compile the LOCAL_SRC_FILES directive with the help of the LS_CPP macro. Link droidblaster with the native_app_glue module (the LOCAL_STATIC_LIBRARIES directive) and android (required by the Native App Glue module), as well as the log libraries (the LOCAL_LDLIBS directive): LOCAL_PATH := $(call my-dir)   include $(CLEAR_VARS)   LS_CPP=$(subst $(1)/,,$(wildcard $(1)/*.cpp)) LOCAL_MODULE := droidblaster LOCAL_SRC_FILES := $(call LS_CPP,$(LOCAL_PATH)) LOCAL_LDLIBS := -landroid -llog LOCAL_STATIC_LIBRARIES := android_native_app_glue   include $(BUILD_SHARED_LIBRARY)   $(call import-module,android/native_app_glue)   Create jni/Application.mk to compile the native module for multiple ABIs. We will use the most basic ones, as shown in the following code: APP_ABI := armeabi armeabi-v7a x86 What just happened? Build and run the application. Of course, you will not see anything tremendous when starting this application. Actually, you will just see a black screen! However, if you look carefully at the LogCat view in Eclipse (or the adb logcat command), you will discover a few interesting messages that have been emitted by your native application in reaction to activity events. We initiated a Java Android project without a single line of Java code! Instead of referencing a child of Activity in AndroidManifest, we referenced the android.app.NativeActivity class provided by the Android framework. NativeActivity is a Java class, launched like any other Android activity and interpreted by the Dalvik Virtual Machine like any other Java class. However, we never faced it directly. NativeActivity is in fact a helper class provided with Android SDK, which contains all the necessary glue code to handle application events (lifecycle, input, sensors, and so on) and broadcasts them transparently to native code. Thus, a native activity does not eliminate the need for JNI. It just hides it under the cover! 
However, the native C/C++ module run by NativeActivity is executed outside Dalvik boundaries in its own thread, entirely natively (using the Posix Thread API)! NativeActivity and native code are connected together through the native_app_glue module. The Native App Glue has the responsibility of: Launching the native thread, which runs our own native code Receiving events from NativeActivity Routing these events to the native thread event loop for further processing The Native glue module code is located in ${ANDROID_NDK}/sources/android/native_app_glue and can be analyzed, modified, or forked at will. The headers related to native APIs, such as looper.h, can be found in ${ANDROID_NDK}/platforms/<Target Platform>/<Target Architecture>/usr/include/android/. Let's see in more detail how it works. More about the Native App Glue Our own native code entry point is declared inside the android_main() method, which is similar to the main methods in desktop applications. It is called only once when NativeActivity is instantiated and launched. It loops over application events until NativeActivity is terminated by the user (for example, when pressing a device's back button) or until it exits by itself. The android_main() method is not the real native application entry point. The real entry point is the ANativeActivity_onCreate() method hidden in the android_native_app_glue module. The event loop we implemented in android_main() is in fact a delegate event loop, launched in its own native thread by the glue module. This design decouples native code from the NativeActivity class, which is run on the UI thread on the Java side. Thus, even if your code takes a long time to handle an event, NativeActivity is not blocked and your Android device still remains responsive. The delegate native event loop in android_main() is itself composed, in our example, of two nested while loops. The outer one is an infinite loop, terminated only when activity destruction is requested by the system (indicated by the destroyRequested flag). It executes an inner loop, which processes all pending application events. ... int32_t result; int32_t events; android_poll_source* source; while (true) {    while ((result = ALooper_pollAll(-1, NULL, &events,            (void**) &source)) >= 0) {        if (source != NULL) {            source->process(mApplication, source);        }        if (mApplication->destroyRequested) {            return;        }    } } ... The inner while loop polls events by calling ALooper_pollAll(). This method is part of the Looper API, which can be described as a general-purpose event loop manager provided by Android. When the timeout is set to -1, like in the preceding example, ALooper_pollAll() remains blocked while waiting for events. When at least one is received, ALooper_pollAll() returns and the code flow continues. The android_poll_source structure describing the event is filled and is then used by client code for further processing. This structure looks as follows: struct android_poll_source {    int32_t id; // Source identifier    struct android_app* app; // Global android application context    void (*process)(struct android_app* app,            struct android_poll_source* source); // Event processor }; The process() function pointer can be customized to process application events manually. As we saw in this part, the event loop receives an android_app structure as a parameter. 
This structure, described in android_native_app_glue.h, contains some contextual information, as shown in the following table:
void* userData: Pointer to any data you want. This is essential in giving some contextual information to the activity or input event callbacks.
void (*onAppCmd)(…) and int32_t (*onInputEvent)(…): These member variables represent the event callbacks triggered by the Native App Glue when an activity or an input event occurs.
ANativeActivity* activity: Describes the Java native activity (its class as a JNI object, its data directories, and so on) and gives the necessary information to retrieve a JNI context.
AConfiguration* config: Describes the current hardware and system state, such as the current language and country, the current screen orientation, density, size, and so on.
void* savedState and size_t savedStateSize: Used to save a buffer of data when an activity (and thus its native thread) is destroyed and later restored.
AInputQueue* inputQueue: Provides input events (used internally by the native glue).
ALooper* looper: Allows attaching and detaching event queues used internally by the native glue. Listeners poll and wait for events sent on a communication pipe.
ANativeWindow* window and ARect contentRect: Represents the "drawable" area on which graphics can be drawn. The ANativeWindow API, declared in native_window.h, allows retrieval of the window width, height, and pixel format, and the changing of these settings.
int activityState: Current activity state, that is, APP_CMD_START, APP_CMD_RESUME, APP_CMD_PAUSE, and so on.
int destroyRequested: When equal to 1, it indicates that the application is about to be destroyed and the native thread must be terminated immediately. This flag has to be checked in the event loop.
The android_app structure also contains some additional data for internal use only, which should not be changed. Knowing all these details is not essential for programming native applications, but it can help you understand what's going on behind your back. Let's now see how to handle these activity events. Summary The Android NDK allows us to write fully native applications without a line of Java code. NativeActivity provides a skeleton to implement an event loop that processes application events. Associated with the Posix time management API, the NDK provides the required base to build complex multimedia applications or games. In summary, we created a NativeActivity that polls activity events to start or stop native code accordingly. We accessed the display window natively, like a bitmap, to display raw graphics. Finally, we retrieved time to make the application adapt to device speed using a monotonic clock. Resources for Article: Further resources on this subject: Android Native Application API [article] Organizing a Virtual Filesystem [article] Android Fragmentation Management [article]

Git Teaches – Great Tools Don't Make Great Craftsmen

Packt
04 May 2015
7 min read
This article is written by Ferdinando Santacroce, author of the book, Git Essentials. (For more resources related to this topic, see here.) Git is a powerful tool. In case you need to retain multiple versions of files—even if you may not be a software developer—Git can perform this task easily. As a Git user, in my humble career, I have never found a dead-end street—a circumstance where I had to give up because of a lack of solutions. Git always offers a wide range of alternatives even when you make a mistake; you can use either git revert to revert your change, or git reset if there is no need to preserve the previous commit. Another key strength of Git is its ability to let your project grow and take different ways when needed. Git branching is a killer feature of this tool. Every versioning system is able to manage branches. However in Git, using this feature is a pleasure; it is super-fast (it does all the work locally), and it does not require a great amount of space. For those who are used to work with other versioning systems such as Subversion, this probably makes a difference. In my career as a developer, I have assisted in situations where developers wouldn't create new branches for new features, because branching was a time-consuming process. Their versioning system, on large repositories, required 5-6 minutes to create a new branch. Git usually doesn't concede alibis. The git branch command and the consecutive git merge operations are fast and very reliable. Even when you commit, git commit doesn't allow you to store a new commit without a message to protect you from our laziness and grow a talking repository with a clear history and not a mute one. However, Git can't perform miracles. So, to get the most out of it, we need a little discipline. This discipline distinguishes an apprentice from a good craftsman. One of the most difficult things in software development is with regard to the sharing of a common code base. Often, programmers are solitary people who love instructions that are typed in their preferred editor, which helps them make working software without any hassles. However, in professional software development, you usually deal with big projects that require more than a single developer at a time; everyone contributes their own code. At this point, if you don't have an effective tool to share code like Git and a little bit of discipline, you can easily screw up. When I talk about discipline, I talk about two main concepts—writing good commits and using the right workflow. Let's start with the first point. What is a good commit? What makes a commit either good or bad? Well, you will come across highly opinionated answers to this question. So here, I will provide mine. First of all, good commits are those commits that do not mix apples and oranges; when you commit something, you have to focus on resolving one problem at a time (fix a bug and implement a new feature or make a clear step forward towards the final target) without modifying anything that is not strictly related to the task you are working on. While writing some code, especially when you have to refactor or modify the existing code, you may too often fall into the temptation to fix here and there some other things. This is just your nature, I know. Developers hate ugly code, even though they often are the ones who wrote it some time ago; they can't leave it there even for a minute. It's a compulsive reaction. 
So, in a matter of a few minutes, you end up with a ton of modified files with dozens of cross-modifications that are quite difficult to comment on in a commit message. They are also hard to merge and quite impossible to cherry-pick, if necessary. So, one of the first things that you have to learn is to make consistent commits. It has to become a habit, and we all know that habits are hard to grow and hard to break. There are some simple tricks that have helped me become a better committer day by day (yes, I'm still far from becoming a good committer). One of the most effective tricks that you can use to make consistent commits is to have a pencil and paper with you; when you find something wrong with your code that is not related to what you are working on at the moment, pick up the pencil and write down a note on a piece of paper. Don't work on it immediately. However, remind yourself that there is something that you have to fix in the next commit. In the same manner, when you feel that the feature you're going to implement is either too wide for a single commit or requires more than a few hours to complete (I tend to avoid long coding sessions), make an effort and try to split the work into two or three parts, writing down these steps on the paper. Thus, you are unconsciously wrapping up your next commits. Another way to avoid a loss of focus is to write the commit message before you start coding. This may sound a little weird, but having the target of your actual work in front of your eyes helps a lot. If you practice Test Driven Development (TDD), or even better, Behavior Driven Development (BDD), you probably already know that they have a huge side effect beyond their main testing purpose—they force you to look at the final result, maintaining the focus on what the code has to do, and not the implementation details. Writing preemptive commit messages is the same thing. When the target of your commit is clear and you can keep an eye on it every time you look at your paper notebook, then you know that you can code peacefully because you will not go off the rails. Now that we have a clear vision of what makes a good commit, let's turn our attention to good workflows. Generally speaking, sharing a common way to work is the most taken-for-granted advice that you can give. However, it often turns out to be exactly the biggest problem when you look at underperforming firms. Having a versioning workflow that is decided by common agreement is the most important thing for a development team (even for a team of one) because it lets you feel comfortable even in case of emergency. When I talk about emergencies, I talk about common hitches for a software developer—urgently fixing a bug on a specific software version, developing different features in parallel, and building beta or testing versions to let testers and users give you feedback. There are plenty of good Git workflows out there. You can take inspiration from them. You can use a workflow as it is, or you can take some inspiration and adapt it to fit your project's peculiarities. However, the important thing is that you have to keep on using it, not only to be consistent (don't cheat!), but also to adapt it when premises change. Don't blindly follow a workflow if you don't feel comfortable with it, and don't even try to use the same workflow every time. 
There are good workflows for web projects, where there's usually no need to keep multiple versions of the same software, and others that fit desktop applications better, where multiple versions are the order of the day. Every kind of project needs its own perfectly tailored workflow. The last thing I wish to suggest to developers interested in Git is to share common sense. Good developers share coding standards, and a good team has to share the same committing policy and the same workflow. Lone cowboys and outlaws are a problem even in software development, and not just in Spaghetti Western movies. Resources for Article: Further resources on this subject: Configuration [article] Maintaining Your GitLab Instance [article] Searching and Resolving Conflicts [article]

Using your smart watch to control networked LEDs

Andrew Fisher
04 May 2015
15 min read
Introduction In the middle of this year, the hacker's watchmaker, Pebble, will launch their next products. These watches, and the non-color ones before them, are remarkably capable devices that can easily talk with other services - whether it's to get notified of your latest mention on Twitter or to get the latest weather report. Pebble have also made building applications easy by allowing you to build in JavaScript. In this post I'll use JavaScript to show you how easy it is to build a watch application that can interact with an external service. I could build a simple watch application that gets your latest train times or Yelp reviews, but where's the fun in that? Instead I'm going to pair a Pebble with one of the other current darlings of the hacker community, the ESP8266 WiFi module. The ESP8266 wireless module made its entrance into the hardware community in late 2014 and things haven't been quite the same at hacker spaces around the world. This is a device that has more memory than an Arduino, natively uses WiFi, can be programmed in C or Lua (with a custom firmware), has low power consumption and costs less than $5 per module. It is still early days in the ESP8266 community so the edges are a bit rough, but things are stable enough to start playing, and what better combination of things to play with than making some LEDs wirelessly controllable, using your smart watch as the interface? Design approach To make the wireless LED device I'm going to use an ESP8266 with the NodeMCU firmware on it, plus some custom code that exposes a web server. The web server will accept a POST request with values for red, green and blue (between 0, fully off, and 255, fully on, for each channel). These values will be interpreted and used to set the color of a strip of WS2812 controllable LEDs (aka NeoPixels). Once the web service is exposed, the Pebble watch application simply needs to make an HTTP request to the ESP8266 web server, POSTing the colour data that the pixels should be set to. Bill of materials You'll need the following to build this project:

- 1 x Pebble Watch, $99. Any Pebble watch will work.
- 1 x USB to Serial programmer, $15. A 3.3 V and 5 V switchable USB-Serial converter is really useful (http://www.dfrobot.com/index.php?route=product/product&product_id=147). This project requires a 3.3 V one.
- 1 x ESP8266, $5. The ESP-01 is the most readily available board, even on Amazon (http://www.amazon.com/ESP8266-ESP-01-Serial-Wireless-Transceiver/dp/B00NF0FCW8).
- Jumper wires M-M and M-F, $1. Get a mix.
- 1 x 2xAA battery holder, $1 (http://www.jameco.com/1/1/1260-bh-321-4a-r-2xaa-battery-holder-wires-mounting-tabs.html).
- 1 x NeoPixel strip, $10 (http://www.adafruit.com/neopixel). Strips of any size work well, for example http://www.adafruit.com/products/1426, or the circular rings at http://www.adafruit.com/products/1643.

In addition you'll probably also want the usual hacker tools of hot glue, solder and band aids. Prerequisites There are a few things you'll need to set up before you get started which will be specific to your system. This may take a little while (30-60 minutes) but the documentation linked below is exhaustive for Linux, Mac and Windows.

- Sign up for CloudPebble and get your developer account (it's free).
- Download the Pebble SDK and install it following these directions - you need this to be able to install your JS application on your watch.
- Install ESP Tool (for flashing your ESP8266 module) - this uses Python, so you may need a Python environment if you're using Windows.
- Install ESPlora (for uploading your application to the ESP8266 module) - this uses Java.

If you want some additional background on the ESP8266 and the development process, this excellent presentation / documentation by Andy Gelme is well worth a read. Once your environment is ready to go, grab the files that go along with this post with the following commands:

mkdir ~/watch-led
git clone https://gist.github.com/ee6fadcd837a0f46be8d.git ~/watch-led && cd ~/watch-led

Configuring the ESP8266 In order to configure the wireless module you will need to do the following things:

- Wire the module up.
- Flash a binary file with the NodeMCU firmware onto the module.
- Configure the application and upload it to the NodeMCU environment.
- Test that the code is working.

Wire the module The ESP-01 module is relatively simple inasmuch as it only has 8 pins; however, the use of a double pin header means you can't just plug it into a breadboard. You have a few options here: many people make a converter that uses a 2x4 female header and converts this to a 1x8 strip of male header that can be plugged into a breadboard. Others (like me) make custom cables for different applications - most people just use jumper wires to join to a breadboard or other modules. Choose whatever works best for you; the wiring, pin by pin on the ESP-01, is as follows:

- Pin 1, RXD: connect to TXD on the USB-Serial adapter.
- Pin 2, VCC: 3.3V (make sure it is in the 3-3.5V range).
- Pin 3, GPIO 0: connect to ground when flashing firmware.
- Pin 4, RESET: 3.3V resets (don't connect).
- Pin 5, GPIO 2: signal line on the WS2812 strip.
- Pin 6, CH_PD: 3.3V (enables the module).
- Pin 7, GND: ground.
- Pin 8, TXD: connect to RXD on the USB-Serial adapter.

With the LEDs, connect VCC to 3.3V and ground to ground as well. These LEDs are happy to run off as little as 3V. Install NodeMCU firmware Before you install the firmware you need to put the module into "flash mode". You do this by joining GPIO 0 (zero) to ground, as you can see in the photo below, illustrated by the pink wire. If you don't do this, you can't put new firmware on your module and esptool will tell you it can't connect. Next, issue the following commands from a terminal (assuming esptool.py is in your path):

cd ~/watch-led
esptool.py -p <<PORT>> write_flash 0x00000 nodemcu_dev_0x00000.bin 0x10000 nodemcu_dev_0x10000.bin

Note that you need to change <<PORT>> to the path to your serial port (for example, /dev/ttyUSB0). esptool will now go through the process of erasing and then flashing the NodeMCU firmware onto the module. esptool is very simple and fully automated. Assuming you have installed esptool and wired your module correctly, you should not expect to see any errors in this process. If you do, check your wiring and your setup and then head to the GitHub repo for more information. Once the firmware upload is complete, power down your module, remove the jumper from GPIO 0 to ground and then power the module back up again. The module now uses a 9600 baud rate - if you connect using screen or minicom you will be dropped into a Lua interpreter where you can issue commands and get responses:

screen /dev/ttyUSB0 9600

Change /dev/ttyUSB0 to your port. The firmware has a full Lua interpreter you can play with. Once you've had your fill playing with the Lua interpreter, disconnect your serial connection. Configure application Now that the module has NodeMCU on it, it's possible to use the ESPlora IDE to talk to it, upload files to the module and write applications using Lua. Open the ESPlora IDE and connect to your module. Select your connection from the marked area and then hit "open". From here, you can upload files to the module.
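If you prefer the command line to ESPlora, you can also sanity-check the freshly flashed firmware from the serial prompt before uploading anything. This is an optional check, assuming your adapter shows up as /dev/ttyUSB0; both calls below are part of the stock NodeMCU API:

screen /dev/ttyUSB0 9600
-- then, at the Lua prompt:
print(node.heap())       -- prints the free heap in bytes; getting a number back means the firmware is alive
print(wifi.sta.getip())  -- prints nil until the WiFi credentials have been configured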
Open up the five Lua files in the code folder you downloaded earlier. You should have setup.lua, server.lua, init.lua, config.lua and application.lua - these files can be edited directly from the code screen and then uploaded onto the ESP8266. Two files that are of interest are config and server. Set your wireless details Switch to the config.lua tab and update the line: module.SSID["SSID"] = "PASSWORD" And change SSID and PASSWORD to the details for your network. You’ll notice when you hit save (CTRL+S) that it will save and then upload the file to the module as well and confirm that is complete. In addition you can hit the Save to ESP button which will upload it as well. Go ahead now and upload config.lua, setup.lua, init.lua and application.lua. Configure your LEDs Open up the server.lua file and look for the line: local NO_PIXELS = 17 Change this to however many LEDs you have on your strip (the maximum limit is probably 50 or so before you have memory issues). There are other config options at the top of this file as well which should be fairly self explanatory. Save this file to the module as well. Test application is working As a check before we test if the module is working properly, hit the “File List” button and you will get the files that are uploaded. Check that all five are there and if they are hit the “Reset ESP” button. This will cause the module to restart. You’ll see a bootup sequence and after 10-15 seconds you’ll get a message showing the application has started, and the IP address of the module and the server. You can now issue commands from curl: curl --data "red=255&green=0&blue=255" http://<IP> Where <<IP>> is the IP address of the module. You should at this point have some LEDs turning a nice shade of magenta. If not, backtrack a few steps and make sure each component is working - the debug log will tell you if there’s any serious issues and you can use print() statements in server.lua to print out messages. Also try things like ping IP from a command line to make sure you can see the module on your network from your computer. Once you have your LED web service running it’s time to make it accessible from your Pebble watch. Building the Pebble watch app Building a Pebble watch app using a combination of the cloud pebble IDE and JavaScript makes things super easy. If you’ve done any sort of web development with JavaScript before you’ll find building apps this way really fast. Whilst the amount of access to the hardware is a bit reduced compared to C, you can make calls to external services, create interfaces and have access to the sensors so there’s plenty to play with. If you haven’t already, link your Pebble account to a developer account and login to CloudPebble.net - create a new project and select Pebble JS from the project type. Give your project a descriptive name. Once created you’ll be taken to a blank workspace. Click on the app.js file under source files. The app.js file is the main file you build your application in - you can add more but ours is really simple so we only need the one. Open up the app.js file in the repository folder on your computer and copy and paste the contents of it into the IDE. Before you do anything else, change the HOST IP address to the IP of the ESP8266 at the top of the file. The application code is broken down with further explanations below. var UI = require('ui'); var ajax = require('ajax'); Sets up the required UI library to draw things to the screen and gets an ajax library to make requests to web services. 
function colour_request(r, g, b) { var req = { url: HOST, method: 'post', data: {red:r, green:g, blue:b} }; ajax(req, function(data, status, request) { console.log(data); }, function(data, status, request) { console.log('The ajax request failed: ' + data + status + JSON.stringify(request)); } ); } This function makes the AJAX request to our ESP8266 service posting the data. This is set up so it can be called from any interaction point within the application as we need it, taking a value for red, green and blue. var colours = [ { title: "OFF", r: 0, g: 0, b: 0, }, { title: "RED", r: 255, g: 0, b: 0, }, { title: "GREEN", r: 0, g: 255, b: 0, }, { title: "BLUE", r: 0, g: 0, b: 255, }, { title: "YELLOW", r: 255, g: 255, b: 0, }, { title: "MAGENTA", r: 255, g: 0, b: 255, }, { title: "CYAN", r: 0, g: 255, b: 255, }, { title: "WHITE", r: 255, g: 255, b: 255, }, ]; Next we define an array of objects that represent the menu items in our application. The title’s will be used for the text on the menu items and then the r,g,b values will be used in order to provide the colour values for the colour_request function. var menu = new UI.Menu({ sections: [{ title: 'Choose LED colour', items: colours }] }); menu.on('select', function(e) { colour_request(e.item.r, e.item.g, e.item.b); }); menu.show(); The last part of the code creates the UI menu items and then defines an event handler for each menu item to call the colour_request function when it is selected and then finally menu.show() puts everything in the screen. Compile and deploy the application When it comes time to test or deploy your application you can either download a PBW file which can then be installed on the Pebble using command line tools or else you can use the emulator that is in the IDE. To use the emulator simple save and then hit the “Play” button to run your app within a watch emulator in the IDE. The great thing about the emulator is that you can test things like UI interactions and ensure configuration happens properly in a nice tight development loop. Unfortunately, because the emulator runs in Pebble’s network, it more than likely won’t have access to your ESP8266 which is sitting on your LAN. One way to fix this is simply create a route through your firewall and map it to your ESP8266. I won’t get into how to do this for your particular router. Simply change the HOST IP address in the app.js file to your public Internet Address and then configure a route to your internal LAN address. If you do this you can test directly from the emulator quite happily. The other option, and one you’ll need to do at some point anyway, is to install the application on your watch directly. Ensure your HOST IP address is updated in app.js and your phone is on the same network as the ESP8266. Now, from the CloudPebble IDE, click “Compilation” and then run a build. This will sit as pending for a little while and then if everything works it will go into the build log with the status “succeeded”. Download the PBW file by clicking on the “PBW” button and make note of the name of the file as it saves it to your downloads. Open up your phone and ensure developer mode is selected as instructed in the build tools set up at the top of this post. Get the IP address from the developer connection screen, as shown below. Open up a terminal and make sure you’ve activated the Pebble development environment. Export your phone’s IP address as shown from the developer connection tool in the Pebble app. 
export PEBBLE_PHONE=<<IP ADDRESS>> Now you should be able to do things like this: pebble ping And you’ll receive a notification on your watch. If that’s the case then simply install the application with: pebble install ~/Downloads/Watch_LED.pbw The Pebble will vibrate when it is working. Make it wireless Once you have everything working, the last step is to get onto battery power. This is easy - simply remove the USB-Serial adapter then plug in the Ground and VCC of your battery pack into the ground and VCC of your setup. At that point the module will power up - wait approximately 15 seconds and you should be able to ping it (and then control the LEDs from your watch). Going further The Pebble Watch and the ESP8266 are both extremely interesting devices. With a small amount of JS, the watch can be connected to any service that talks standard web protocols and with only $5 and a little bit of Lua it’s possible to make a wireless hardware service that the watch can interact with. Using this basic model you can make all sorts of internet connected devices. Here are some other things you could try now you’ve got this working: Extend the watch app to be able to talk to multiple ESP8266s scattered around your workspace. Change the event model so that the LEDs update on different notifications from your watch such as blue for a new tweet, red for a new email etc. Flip the service around - instead of controlling hardware, attach a sensor to the ESP8266 and then pass the data back to the watch periodically where it can display what’s happening. About the Author Andrew Fisher is a creator and destroyer of things that combine mobile web, ubicomp and lots of data. He is a sometime programmer, interaction researcher and CTO at JBA, a data consultancy in Melbourne, Australia. He can be found on Twitter @ajfisher.

Hadoop Monitoring and its aspects

Packt
04 May 2015
8 min read
In this article, Gurmukh Singh, the author of the book Monitoring Hadoop, tells us about the importance of monitoring Hadoop. It also explains various other concepts of Hadoop, such as its architecture and Ganglia (a tool used to monitor Hadoop). (For more resources related to this topic, see here.) In any enterprise, however big or small it may be, it is very important to monitor the health of all its components, such as servers, network devices, and databases, and make sure things are working as intended. Monitoring is a critical part of any business dependent upon infrastructure, as it gives the signals needed to take the necessary actions in case of any failures. Monitoring can be very complex in a real production environment, with many components and configurations: there might be different security zones, different ways in which servers are set up, or the same database used in many different ways with servers listening on various service ports. Before diving into setting up monitoring and logging for Hadoop, it is very important to understand the basics of monitoring, how it works, and some commonly used tools in the market. In Hadoop, we can monitor resources and services and also collect metrics from various Hadoop counters. There are many tools available in the market, and one widely used tool is Nagios. Nagios is a powerful monitoring system that provides you with instant awareness of your organization's mission-critical IT infrastructure. By using Nagios, you can: plan release cycles and rollouts before things get outdated; detect issues early, before they cause an outage; and have automation and a better response across the organization. Nagios Architecture Nagios is based on a simple server-client architecture, in which the server can execute checks remotely using NRPE agents on the Linux clients. The results of execution are captured by the server, which raises alerts accordingly. The checks can cover memory, disk, CPU utilization, network, database connections, and many more, and you have the flexibility to use either active or passive checks. Ganglia Ganglia is a beautiful tool for aggregating stats and plotting them nicely. Where Nagios gives you events and alerts, Ganglia aggregates the data and presents it in a meaningful way. What if you want to look at the total CPU or memory across a cluster of 2000 nodes, or the total free disk space on 1000 nodes? These are some of the key features of Ganglia: view historical and real-time metrics of a single node or of the entire cluster, and use the data to make decisions on cluster sizing and performance. Ganglia Components Ganglia Monitoring Daemon (gmond): This runs on the nodes that need to be monitored, captures state changes, and sends updates to a central daemon using XDR. Ganglia Meta Daemon (gmetad): This collects data from gmond and other gmetad daemons. The data is indexed and stored to disk in a round-robin fashion. There is also a Ganglia frontend for a meaningful display of the information collected. All these tools can be integrated with Hadoop to monitor it and capture its metrics. Integration with Hadoop There are many important components in Hadoop that need to be monitored, such as NameNode uptime, disk space, memory utilization, and heap size. Similarly, on the DataNode we need to monitor disk usage and memory utilization, and across the MapReduce components we need to monitor the job execution flow. To know what to monitor, we must understand how Hadoop daemons communicate with each other.
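A quick way to get a feel for this communication surface is to query the HTTP interface each daemon exposes. The following is only an illustrative check, assuming a NameNode reachable under the hostname namenode and listening on the default 1.x/2.x web UI port of 50070; the /jmx endpoint returns the daemon's metrics as JSON:

curl http://namenode:50070/jmx | head
# the same /jmx endpoint is served by the other daemons (DataNode, JobTracker or
# ResourceManager, NodeManager) on their respective web ports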
There are lots of ports used in Hadoop; some are for internal communication, such as job scheduling and replication, while others are for user interactions. They may be exposed using TCP or HTTP. The Hadoop daemons provide information over HTTP about logs, stack traces, and metrics that can be used for troubleshooting: the NameNode can expose information about the filesystem and the live or dead nodes, the DataNodes report their blocks, and the JobTracker exposes the running jobs. Hadoop uses TCP, HTTP, IPC, or sockets for communication among the nodes and daemons. YARN Framework YARN (Yet Another Resource Negotiator) is the new MapReduce framework. It is designed to scale to large clusters and performs much better compared to the old framework. There is a new set of daemons in the new framework, and it is good to understand how they communicate with each other. The diagram that follows explains the daemons and the ports on which they talk. Logging in Hadoop In Hadoop, each daemon writes its own logs, and the severity of logging is configurable. The logs in Hadoop can be related to the daemons or to the jobs submitted. They are useful to troubleshoot slowness, issues with MapReduce tasks, connectivity issues, and platform bugs. The logs generated can be user-level, like the TaskTracker logs on each node, or can be related to master daemons like the NameNode and JobTracker. In the newer YARN platform, there is a feature to move the logs to HDFS after the initial logging. In Hadoop 1.x, user log management is done using UserLogManager, which cleans and truncates logs according to retention and size parameters such as mapred.userlog.retain.hours and mapreduce.cluster.map.userlog.retain-size respectively. The tasks' standard out and error streams are piped to the Unix tail program, so only the required size is retained. The following are some of the challenges of log management in Hadoop: Excessive logging: The truncation of logs is not done till the tasks finish; for many jobs this could cause disk space issues, as the amount of data written is quite large. Truncation: We cannot always say what to log and how much is good enough. For some users 500 KB of logs might be enough, but for others 10 MB might not suffice. Retention: How long should logs be retained, 1 month or 6? There is no rule, but there are best practices and governance issues. In many countries there are regulations in place requiring data to be kept for 1 year. A good practice for any organization is to keep logs for at least 6 months. Analysis: What if we want to look at historical data and aggregate logs onto a central system for analysis? In Hadoop, logs are served over HTTP for a single node by default. Some of the above issues have been addressed in the YARN framework. Rather than truncating logs, and that too on individual nodes, the logs can be moved to HDFS and processed using other tools. The logs are written at the per-application level into per-application directories, and the user can access them through the command line or the web UI, for example with $HADOOP_YARN_HOME/bin/yarn logs. Hadoop metrics In Hadoop, there are many daemons running, such as the DataNode, NameNode, and JobTracker, and each of these daemons captures a lot of information about the components they work on. Similarly, in the YARN framework we have the ResourceManager, NodeManager, and Application Manager, each of which exposes metrics; these are explained in the following sections under Metrics2.
For example, the DataNode collects metrics such as the number of blocks it has (for advertising to the NameNode), the number of replicated blocks, and metrics about the various reads or writes from clients. In addition to this, there can be metrics related to events, and so on. Hence, it is very important to gather this information for the working of the Hadoop cluster, and it also helps in debugging if something goes wrong. For this, Hadoop has a metrics system for collecting all this information. There are two versions of the metrics system, Metrics and Metrics2, for Hadoop 1.x and Hadoop 2.x respectively. They are configured through the hadoop-metrics.properties and hadoop-metrics2.properties files for each Hadoop version respectively. Configuring Metrics2 For Hadoop version 2, which uses the YARN framework, the metrics can be configured using hadoop-metrics2.properties, under the $HADOOP_HOME directory:

*.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
*.period=10
namenode.sink.file.filename=namenode-metrics.out
datanode.sink.file.filename=datanode-metrics.out
jobtracker.sink.file.filename=jobtracker-metrics.out
tasktracker.sink.file.filename=tasktracker-metrics.out
maptask.sink.file.filename=maptask-metrics.out
reducetask.sink.file.filename=reducetask-metrics.out

Hadoop metrics configuration for Ganglia Firstly, we need to define a sink class for Ganglia:

*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31

Secondly, we need to define how often the source should be polled for data. Here we are polling every 30 seconds:

*.sink.ganglia.period=30

Then, define the retention for the metrics:

*.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40

Summary In this article, we learned about Hadoop monitoring and its importance, and also about various other concepts of Hadoop. Resources for Article: Further resources on this subject: Hadoop and MapReduce [article] YARN and Hadoop [article] Hive in Hadoop [article]

Getting started with Codeception

Packt
04 May 2015
17 min read
In this article by Matteo Pescarin, the author of Learning Yii Testing, we will get introduced to Codeception. Not everyone has been exposed to testing. The ones who actually have are aware of the quirks and limitations of the testing tools they've used. Some might be more efficient than others, and in either case, you had to rely on the situation that was presented to you: legacy code, hard to test architectures, no automation, no support whatsoever on the tools, and other setup problems, just to name a few. Only certain companies, because they have either the right skillsets or the budget, invest in testing, but most of them don't have the capacity to see beyond the point that quality assurance is important. Getting the testing infrastructure and tools in place is the immediate step following getting developers to be responsible for their own code and to test it. (For more resources related to this topic, see here.) Even if testing is something not particularly new in the programming world, PHP always had a weak point regarding it. Its history is not the one of a pure-bred programming language done with all the nice little details, and only just recently has PHP found itself in a better position and started to become more appreciated. Because of this, the only and most important tool that came out has been PHPUnit, which was released just 10 years ago, in 2004, thanks to the efforts of Sebastian Bergmann. PHPUnit was and sometimes is still difficult to master and understand. It requires time and dedication, particularly if you are coming from a non-testing experience. PHPUnit simply provided a low-level framework to implement unit tests and, up to a certain point, integration tests, with the ability to create mocks and fakes when needed. Although it still is the quickest way to discover bugs, it didn't cover everything and using it to create large integration tests will end up being an almost impossible task. On top of this, PHPUnit since version 3.7, when it switched to a different autoloading mechanism and moved away from PEAR, caused several headaches rendering most of the installations unusable. Other tools developed since mostly come from other environments and requirements, programming languages, and frameworks. Some of these tools were incredibly strong and well-built, but they came with their own way of declaring tests and interacting with the application, set of rules, and configuration specifics. A modular framework rather than just another tool Clearly, mastering all these tools required a bit of understanding, and the learning curve wasn't promised to be the same among all of them. So, if this is the current panorama, why create another tool if you will end up in the same situation we were in before? Well, one of the most important things to be understood about Codeception is that it's not just a tool, rather a full stack, as noted on the Codeception site, a suite of frameworks, or if you want to go meta, a framework for frameworks. Codeception provides a uniform way to design different types of test by using as much as possible the same semantic and logic, a way to make the whole testing infrastructure more coherent and approachable. Outlining concepts behind Codeception Codeception has been created with the following basic concepts in mind: Easy to read: By using a declarative syntax close to the natural language, tests can be read and interpreted quite easily, making them an ideal candidate to be used as documentation for the application. 
Any stakeholder and engineer close to the project can ensure that tests are written correctly and cover the required scenarios without knowing any special lingo. It can also generate BDD-style test scenarios from code test cases. Easy to write: As we already underlined, every testing framework uses its own syntax or language to write tests, resulting in some degree of difficulty when switching from one suite to the other, without taking into account the learning curve each one has. Codeception tries to bridge this gap of knowledge by using a common declarative language. Further, abstractions provide a comfortable environment that makes maintenance simple. Easy to debug: Codeception is born with the ability to see what's behind the scenes without messing around with the configuration files or doing random print_r around your code. On top of this all, Codeception has also been written with modularity and extensibility in mind, so that organizing your code is simple while also promoting code reuse throughout your tests. But let's see what's provided by Codeception in more detail. Types of tests As we've seen, Codeception provides three basic types of test: Unit tests Functional tests Acceptance tests Each one of them is self-contained in its own folder where you can find anything needed, from the configuration and the actual tests to any additional piece of information that is valuable, such as the fixtures, database snapshots, or specific data to be fed to your tests. In order to start writing tests, you need to initialize all the required classes that will allow you to run your tests, and you can do this by invoking codecept with the build argument: $ cd tests $ ../vendor/bin/codecept build Building Actor classes for suites: functional, acceptance, unit FunctionalTester includes modules: Filesystem, Yii2 FunctionalTester.php generated successfully. 61 methods added AcceptanceTester includes modules: PhpBrowser AcceptanceTester.php generated successfully. 47 methods added UnitTester includes modules: UnitTester.php generated successfully. 0 methods added $ The codecept build command needs to be run every time you modify any configuration file owned by Codeception when adding or removing any module, in other words, whenever you modify any of the .suite.yml files available in the /tests folder. What you have probably already noticed in the preceding output is the presence of a very peculiar naming system for the test classes. Codeception introduces the Guys that have been renamed in Yii terminology as Testers, and are as follows: AcceptanceTester: This is used for acceptance tests FunctionalTester: This is used for functional tests UnitTester: This is used for unit tests These will become your main interaction points with (most of) the tests and we will see why. By using such nomenclature, Codeception shifts the point of attention from the code itself to the person that is meant to be acting the tests you will be writing. This way we will become more fluent in thinking in a more BDD-like mindset rather than trying to figure out all the possible solutions that could be covered, while losing the focus of what we're trying to achieve. Once again, BDD is an improvement over TDD, because it declares in a more detailed way what needs to be tested and what doesn't. AcceptanceTester AcceptanceTester can be seen as a person who does not have any knowledge of the technologies used and tries to verify the acceptance criteria that have been defined at the beginning. 
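Before writing any scenarios, it's worth knowing how to execute them. The commands below are just a minimal illustration, assuming the default tests/ layout used for the build step above; the --steps flag prints each scenario step as it is executed:

cd tests
../vendor/bin/codecept run acceptance --steps
# or run every suite (unit, functional and acceptance) in one go
../vendor/bin/codecept run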
If we want to re-write our previously defined acceptance tests in a more standardized BDD way, we need to remember the structure of a so-called user story. The story should have a clear title, a short introduction that specifies the role that is involved in obtaining a certain result or effect, and the value that this will reflect. Following this, we will then need to specify the various scenarios or acceptance criteria, which are defined by outlining the initial scenario, the trigger event, and the expected outcome in one or more clauses. Let's discuss login using a modal window, which is one of the two features we are going to implement in our application. Story title – successful user login I, as an acceptance tester, want to log in into the application from any page. Scenario 1: Log in from the homepage      I am on the homepage.      I click on the login link.      I enter my username.      I enter my password.      I press submit.      The login link now reads "logout (<username>)" and I'm still on the homepage. Scenario 2: Log in from a secondary page      I am on a secondary page.     I click on the login link.     I enter my username.     I enter my password.     I press Submit.     The login link now reads "logout (<username>)" and I'm still on the secondary page. As you might have noticed I am limiting the preceding example to successful cases. The preceding story can be immediately translated into something along the lines of the following code: // SuccessfulLoginAcceptanceTest.php   $I = new AcceptanceTester($scenario); $I->wantTo("login into the application from any page");   // scenario 1 $I->amOnPage("/"); $I->click("login"); $I->fillField("username", $username); $I->fillField("password", $password); $I->click("submit"); $I->canSee("logout (".$username.")"); $I->seeInCurrentUrl("/");   // scenario 2 $I->amOnPage("/"); $I->click("about"); $I->seeLink("login"); $I->click("login"); $I->fillField("username", $username); $I->fillField("password", $password); $I->click("submit"); $I->canSee("logout (".$username.")"); $I->amOnPage("about"); As you can see this is totally straightforward and easy to read, to the point that anyone in the business should be able to write any case scenario (this is an overstatement, but you get the idea). Clearly, the only thing that is needed to understand is what the AcceptanceTester is able to do: The class generated by the codecept build command can be found in tests/codeception/acceptance/AcceptanceTester.php, which contains all the available methods. You might want to skim through it if you need to understand how to assert a particular condition or perform an action on the page. The online documentation available at http://codeception.com/docs/04-AcceptanceTests will also give you a more readable way to get this information. Don't forget that at the end AcceptanceTester is just a name of a class, which is defined in the YAML file for the specific test type: $ grep class tests/codeception/acceptance.suite.yml class_name: AcceptanceTester Acceptance tests are the topmost level of tests, as some sort of high-level user-oriented integration tests. Because of this, acceptance tests end up using an almost real environment, where no mocks or fakes are required. Clearly, we would need some sort of initial state that we can revert to, particularly if we're causing actions that modify the state of the database. As per Codeception documentation, we could have used a snapshot of the database to be loaded at the beginning of each test. 
Unfortunately, I didn't have much luck in finding this feature working. So later on, we'll be forced to use the fixtures. Everything will then make more sense. When we will write our acceptance tests, we will also explore the various modules that you can also use with it, such as PHPBrowser and Selenium WebDriver and their related configuration options. FunctionalTester As we said earlier, FunctionalTester represents our character when dealing with functional tests. You might think of functional tests as a way to leverage on the correctness of the implementation from a higher standpoint. The way to implement functional tests bears the same structure as that of acceptance tests, to the point that most of the time the code we've written for an acceptance test in Codeception can be easily swapped with that for a functional test, so you might ask yourself: "where are the differences?" It must be noted that the concept of functional tests is something specific to Codeception and can be considered almost the same as that of integration tests for the mid-layer of your application. The most important thing is that functional tests do not require a web server to run, and they're called headless: For this reason, they are not only quicker than acceptance tests, but also less "real" with all the implications of running on a specific environment. And it's not the case that the acceptance tests provided by default by the basic application are, almost, the same as the functional tests. Because of this, we will end up having more functional tests that will cover more use cases for specific parts of our application. FunctionalTester is somehow setting the $_GET, $_POST and $_REQUEST variables and running the application from within a test. For this reason, Codeception ships with modules that let it interact with the underlying framework, be it Symfony2, Laravel4, Zend, or, in our case, Yii 2. In the configuration file, you will notice the module for Yii 2 already enabled: # tests/functional.suite.yml   class_name: FunctionalTester modules:    enabled:      - Filesystem      - Yii2 # ... FunctionalTester has got a better understanding of the technologies used although he might not have the faintest idea of how the various features he's going to test have been implemented in detail; he just knows the specifications. This makes a perfect case for the functional tests to be owned or written by the developers or anyone that is close to the knowledge of how the various features have been exposed for general consumption. The base functionality of the REST application, exposed through the API, will also be heavily tested, and in this case, we will have the following scenarios: I can use POST to send correct authentication data and will receive a JSON containing the successful authentication I can use POST to send bad authentication data and will receive a JSON containing the unsuccessful authentication After a correct authentication, I can use GET to retrieve the user data After a correct authentication, I will receive an error when doing a GET for a user stating that it's me I can use POST to send my updated hashed password Without a correct authentication, I cannot perform any of the preceding actions The most important thing to remember is that at the end of each test, it's your responsibility to keep the memory clean: The PHP application will not terminate after processing a request. All requests happening in the same memory container are not isolated. 
If you see your tests failing for some unknown reason when they shouldn't, try to execute a single test separately. UnitTester I've left UnitTester for the end as it's a very special guy. From what we have seen until now, Codeception must be using some other framework to cover unit tests, and we can be pretty sure that PHPUnit is the only candidate to achieve this. If any of you have already worked with PHPUnit, you will remember the learning curve together with the initial problem of understanding its syntax and performing even the simplest of tasks. I found that most developers have a love-and-hate relationship with PHPUnit: either you learn its syntax or you spend half of the time looking at the manual to get to a single point. And I won't blame you. We will see that Codeception will come to our aid once again if we're struggling with tests: remember that these unit tests are the simplest and most atomic part of the work we're going to test. Together with them come the integration tests that cover the interaction of different components, most likely with the use of fake data and fixtures. If you're used to working with PHPUnit, you won't find any particular problems writing tests; otherwise, you can make use of UnitTester and implement the same tests by using the Verify and Specify syntax. UnitTester assumes a deep understanding of the signatures and of how the infrastructure and framework work, so these tests can be considered the cornerstone of testing. They are super fast to run compared to any other type of test, and they should also be relatively easy to write. You can start with adequately simple assertions and move to data providers before needing to deal with fixtures. Other features provided by Codeception On top of the types of tests, Codeception provides some more aids to help you organize, modularize, and extend your test code. As we've seen, functional and acceptance tests have a very plain and declarative structure, and all the code and the scenarios related to specific acceptance criteria are kept in the same file at the same level and are executed linearly. In most situations, as in our case, this is good enough, but when your code starts growing and the number of components and features becomes more and more complex, the list of scenarios and steps to perform an acceptance or functional test can be quite lengthy. Further, some tests might end up depending on others, so you might want to start writing more compact scenarios and promoting code reuse throughout your tests, or splitting your test into two or more tests. If you feel your code needs better organization and structure, you might want to start generating CEST classes instead of the normal tests, which are instead called CEPT. A CEST class groups the scenarios together as methods, as highlighted in the following snippet: <?php // SuccessfulLoginCest.php class SuccessfulLoginCest { public function _before(\Codeception\Event\TestEvent $event) {} public function _after(\Codeception\Event\TestEvent $event) {} public function _fail(\Codeception\Event\TestEvent $event) {} // tests public function loginIntoTheApplicationTest(AcceptanceTester $I) { $I->wantTo("login into the application from any page"); $I->amOnPage("/"); $I->click("login"); $I->fillField("username", $username); $I->fillField("password", $password); $I->click("submit"); $I->canSee("logout (".$username.")"); $I->seeInCurrentUrl("/"); // ...
} } ?> Any method that is not preceded by the underscore is considered a test, and the reserved methods _before and _after are executed at the beginning and at the end of the list of tests contained in the test class, while the _fail method is used as a cleanup method in case of failure. This alone might not be enough, and you can use document annotations to create reusable code to be run before and after the tests with the use of @before <methodName> and @after <methodName>. You can also be stricter and require a specific test to pass before any other by using the document annotation @depends <methodName>. We're going to use some of these document annotations, but before we start installing Codeception, I'd like to highlight two more features: PageObjects and StepObjects. The PageObject is a common pattern amongst test automation engineers. It represents a web page as a class, where its DOM elements are properties of the class, and methods instead provide some basic interactions with the page. The main reason for using PageObjects is to avoid hardcoding CSS and XPATH locators in your tests. Yii provides some example implementation of the PageObjects used in /tests/codeception/_pages. StepObject is another way to promote code reuse in your tests: It will define some common actions that can be used in several tests. Together with PageObjects, StepObjects can become quite powerful. StepObject extends the Tester class and can be used to interact with the PageObject. This way your tests will become less dependent on a specific implementation and will save you the cost of refactoring when the markup and the way to interact with each component in the page changes. For future reference, you can find all of these in the Codeception documentation in the section regarding the advanced use at http://codeception.com/docs/07-AdvancedUsage together with other features, like grouping and an interactive console that you can use to test your scenarios at runtime. Summary In this article, we got hands-on with Codeception and looked at the different types of tests available. Resources for Article: Further resources on this subject: Building a Content Management System [article] Creating an Extension in Yii 2 [article] Database, Active Record, and Model Tricks [article]

Saying Hello to Unity and Android

Packt
04 May 2015
51 min read
Welcome to the wonderful world of mobile game development. Whether you are still looking for the right development kit or have already chosen one, this article will be very important. In this article by Tom Finnegan, the author of Learning Unity Android Game Development, we explore the various features that come with choosing Unity as your development environment and Android as the target platform. Through comparison with major competitors, it is discovered why Unity and Android stand at the top of the pile. Following this, we will examine how Unity and Android work together. Finally, the development environment will be set up and we will create a simple Hello World application to test whether everything is set up correctly. (For more resources related to this topic, see here.) In this article, we will cover the following topics: Major Unity features Major Android features Unity licensing options Installing the JDK Installing the Android software development kit (SDK) Installing Unity 3D Installing Unity Remote Understanding what makes Unity great Perhaps the greatest feature of Unity is how open-ended it is. Nearly all game engines currently on the market are limited in what one can build with them. It makes perfect sense but it can limit the capabilities of a team. The average game engine has been highly optimized for creating a specific game type. This is great if all you plan on making is the same game again and again. It can be quite frustrating when one is struck with inspiration for the next great hit, only to find that the game engine can't handle it and everyone has to retrain in a new engine or double the development time to make the game engine capable. Unity does not suffer from this problem. The developers of Unity have worked very hard to optimize every aspect of the engine, without limiting what types of games can be made using it. Everything ranging from simple 2D platformers to massive online role-playing games are possible in Unity. A development team that just finished an ultrarealistic first-person shooter can turn right around and make 2D fighting games without having to learn an entirely new system. Being so open-ended does, however, bring a drawback. There are no default tools that are optimized for building the perfect game. To combat this, Unity grants the ability to create any tool one can imagine, using the same scripting that creates the game. On top of that, there is a strong community of users that have supplied a wide selection of tools and pieces, both free and paid, that can be quickly plugged in and used. This results in a large selection of available content that is ready to jump-start you on your way to the next great game. When many prospective users look at Unity, they think that, because it is so cheap, it is not as good as an expensive AAA game engine. This is simply not true. Throwing more money at the game engine is not going to make a game any better. Unity supports all of the fancy shaders, normal maps, and particle effects that you could want. The best part is that nearly all of the fancy features that you could want are included in the free version of Unity, and 90 percent of the time beyond that, you do not even need to use the Pro-only features. One of the greatest concerns when selecting a game engine, especially for the mobile market, is how much girth it will add to the final build size. Most game engines are quite hefty. With Unity's code stripping, the final build size of the project becomes quite small. 
Code stripping is the process by which Unity removes every extra little bit of code from the compiled libraries. A blank project compiled for Android that utilizes full code stripping ends up being around 7 megabytes. Perhaps one of the coolest features of Unity is its multi-platform compatibility. With a single project, one can build for several different platforms. This includes the ability to simultaneously target mobiles, PCs, and consoles. This allows you to focus on real issues, such as handling inputs, resolution, and performance. In the past, if a company desired to deploy their product on more than one platform, they had to nearly double the development costs in order to essentially reprogram the game. Every platform did, and still does, run by its own logic and language. Thanks to Unity, game development has never been simpler. We can develop games using simple and fast scripting, letting Unity handle the complex translation to each platform. Unity – the best among the rest There are of course several other options for game engines. Two major ones that come to mind are cocos2d and Unreal Engine. While both are excellent choices, you will find them to be a little lacking in certain respects. The engine of Angry Birds, cocos2d, could be a great choice for your next mobile hit. However, as the name suggests, it is pretty much limited to 2D games. A game can look great in it, but if you ever want that third dimension, it can be tricky to add it to cocos2d; you may need to select a new game engine. A second major problem with cocos2d is how bare bones it is. Any tool for building or importing assets needs to be created from scratch, or it needs to be found. Unless you have the time and experience, this can seriously slow down development. Then there is the staple of major game development, Unreal Engine. This game engine has been used successfully by developers for many years, bringing great games to the world Unreal Tournament and Gears of War not the least among them. These are both, however, console and computer games, which is the fundamental problem with the engine. Unreal is a very large and powerful engine. Only so much optimization can be done on it for mobile platforms. It has always had the same problem; it adds a lot of girth to a project and its final build. The other major issue with Unreal is its rigidity in being a first-person shooter engine. While it is technically possible to create other types of games in it, such tasks are long and complex. A strong working knowledge of the underlying system is a must before achieving such a feat. All in all, Unity definitely stands strong amidst game engines. But these are still great reasons for choosing Unity for game development. Unity projects can look just as great as AAA titles. The overhead and girth in the final build are small and this is very important when working on mobile platforms. The system's potential is open enough to allow you to create any type of game that you might want, where other engines tend to be limited to a single type of game. In addition, should your needs change at any point in the project's life cycle, it is very easy to add, remove, or change your choice of target platforms. Understanding what makes Android great With over 30 million devices in the hands of users, why would you not choose the Android platform for your next mobile hit? Apple may have been the first one out of the gate with their iPhone sensation, but Android is definitely a step ahead when it comes to smartphone technology. 
One of its best features is its blatant ability to be opened up so that you can take a look at how the phone works, both physically and technically. One can swap out the battery and upgrade the micro SD card on nearly all Android devices, should the need arise. Plugging the phone into a computer does not have to be a huge ordeal; it can simply function as a removable storage media. From the point of view of the cost of development as well, the Android market is superior. Other mobile app stores require an annual registration fee of about 100 dollars. Some also have a limit on the number of devices that can be registered for development at one time. The Google Play market has a one-time registration fee of 25 dollars, and there is no concern about how many Android devices or what type of Android devices you are using for development. One of the drawbacks of some of the other mobile development kits is that you have to pay an annual registration fee before you have access to the SDK. With some, registration and payment are required before you can view their documentation. Android is much more open and accessible. Anybody can download the Android SDK for free. The documentation and forums are completely viewable without having to pay any fee. This means development for Android can start earlier, with device testing being a part of it from the very beginning. Understanding how Unity and Android work together As Unity handles projects and assets in a generic way, there is no need to create multiple projects for multiple target platforms. This means that you could easily start development with the free version of Unity and target personal computers. Then, at a later date, you can switch targets to the Android platform with the click of a button. Perhaps, shortly after your game is launched, it takes the market by storm and there is a great call to bring it to other mobile platforms. With just another click of the button, you can easily target iOS without changing anything in your project. Most systems require a long and complex set of steps to get your project running on a device. However, once your device is set up and recognized by the Android SDK, a single button click will allow Unity to build your application, push it to a device, and start running it. There is nothing that has caused more headaches for some developers than trying to get an application on a device. Unity makes this simple. With the addition of a free Android application, Unity Remote, it is simple and easy to test mobile inputs without going through the whole build process. While developing, there is nothing more annoying than waiting for 5 minutes for a build every time you need to test a minor tweak, especially in the controls and interface. After the first dozen little tweaks, the build time starts to add up. Unity Remote makes it simple and easy to test everything without ever having to hit the Build button. These are the big three reasons why Unity works well with Android: Generic projects A one-click build process Unity Remote We could, of course, come up with several more great ways in which Unity and Android can work together. However, these three are the major time and money savers. You could have the greatest game in the world, but if it takes 10 times longer to build and test, what is the point? Differences between the Pro and Basic versions of Unity Unity comes with two licensing options, Pro and Basic, which can be found at https://store.unity3d.com. 
If you are not quite ready to spend the 3,000 dollars that is required to purchase a full Unity Pro license with the Android add-on, there are other options. Unity Basic is free and comes with a 30-day free trial of Unity Pro. This trial is full and complete, as if you have purchased Unity Pro, the only downside being a watermark in the bottom-right corner of your game stating Demo Use Only. It is also possible to upgrade your license at a later date. Where Unity Basic comes with mobile options for free, Unity Pro requires the purchase of Pro add-ons for each of the mobile platforms. An overview of license comparison License comparisons can be found at http://unity3d.com/unity/licenses. This section will cover the specific differences between Unity Android Pro and Unity Android Basic. We will explore what the features are and how useful each one is in the following points: NavMeshes, pathfinding, and crowd simulation This feature is Unity's built-in pathfinding system. It allows characters to find their way from a point to another around your game. Just bake your navigation data in the editor and let Unity take over at runtime. Until recently, this was a Unity Pro only feature. Now the only part of it that is limited in Unity Basic is the use of off-mesh links. The only time you are going to need them is when you want your AI characters to be able to jump across and otherwise navigate around gaps. LOD support LOD (short for level of detail) lets you control how complex a mesh is, based on its distance from the camera. When the camera is close to an object, you can render a complex mesh with a bunch of detail in it. When the camera is far from that object, you can render a simple mesh because all that detail is not going to be seen anyway. Unity Pro provides a built-in system to manage this. However, this is another system that could be created in Unity Basic. Whether or not you are using the Pro version, this is an important feature for game efficiency. By rendering less complex meshes at a distance, everything can be rendered faster, leaving more room for awesome gameplay. The audio filter Audio filters allow you to add effects to audio clips at runtime. Perhaps you created gravel footstep sounds for your character. Your character is running and we can hear the footsteps just fine, when suddenly they enter a tunnel and a solar flare hits, causing a time warp and slowing everything down. Audio filters would allow us to warp the gravel footstep sounds to sound as if they were coming from within a tunnel and were slowed by a time warp. Of course, you could also just have the audio guy create a new set of tunnel gravel footsteps in the time warp sounds, although this might double the amount of audio in your game and limit how dynamic we can be with it at runtime. We either are or are not playing the time warp footsteps. Audio filters would allow us to control how much time warp is affecting our sounds. Video playback and streaming When dealing with complex or high-definition cut scenes, being able to play videos becomes very important. Including them in a build, especially with a mobile target, can require a lot of space. This is where the streaming part of this feature comes in. This feature not only lets us play videos but also lets us stream a video from the Internet. There is, however, a drawback to this feature. On mobile platforms, the video has to go through the device's built-in video-playing system. 
This means that the video can only be played in fullscreen and cannot be used as a texture for effects such as moving pictures on a TV model. Theoretically, you could break your video into individual pictures for each frame and flip through them at runtime, but this is not recommended for build size and video quality reasons. Fully-fledged streaming with asset bundles Asset bundles are a great feature provided by Unity Pro. They allow you to create extra content and stream it to users without ever requiring an update to the game. You could add new characters, levels, or just about any other content you can think of. Their only drawback is that you cannot add more code. The functionality cannot change, but the content can. This is one of the best features of Unity Pro. The 100,000 dollar turnover This one isn't so much a feature as it is a guideline. According to Unity's End User License Agreement, the basic version of Unity cannot be licensed by any group or individual who made $100,000 in the previous fiscal year. This basically means that if you make a bunch of money, you have to buy Unity Pro. Of course, if you are making that much money, you can probably afford it without an issue. This is the view of Unity at least and the reason why there is a 100,000 dollar turnover. Mecanim – IK Rigs Unity's new animation system, Mecanim, supports many exciting new features, one of which is IK (short form for Inverse Kinematics). If you are unfamiliar with the term, IK allows one to define the target point of an animation and let the system figure out how to get there. Imagine you have a cup sitting on a table and a character that wants to pick it up. You could animate the character to bend over and pick it up; but, what if the character is slightly to the side? Or any number of other slight offsets that a player could cause, completely throwing off your animation? It is simply impractical to animate for every possibility. With IK, it hardly matters that the character is slightly off. We just define the goal point for the hand and leave the animation of the arm to the IK system. It calculates how the arm needs to move in order to get the hand to the cup. Another fun use is making characters look at interesting things as they walk around a room: a guard could track the nearest person, the player's character could look at things that they can interact with, or a tentacle monster could lash out at the player without all the complex animation. This will be an exciting one to play with. Mecanim – sync layers and additional curves Sync layers, inside Mecanim, allow us to keep multiple sets of animation states in time with each other. Say you have a soldier that you want to animate differently based on how much health he has. When he is at full health, he walks around briskly. After a little damage to his health, the walk becomes more of a trudge. If his health is below half, a limp is introduced into his walk, and when he is almost dead he crawls along the ground. With sync layers, we can create one animation state machine and duplicate it to multiple layers. By changing the animations and syncing the layers, we can easily transition between the different animations while maintaining the state machine. The additional curves feature is simply the ability to add curves to your animation. This means we can control various values with the animation. For example, in the game world, when a character picks up its feet for a jump, gravity will pull them down almost immediately. 
By adding an extra curve to that animation, in Unity, we can control how much gravity is affecting the character, allowing them to actually be in the air when jumping. This is a useful feature for controlling such values alongside the animations, but you could just as easily create a script that holds and controls the curves. The custom splash screen Though pretty self-explanatory, it is perhaps not immediately evident why this feature is specified, unless you have worked with Unity before. When an application that is built in Unity initializes on any platform, it displays a splash screen. In Unity Basic, this will always be the Unity logo. By purchasing Unity Pro, you can substitute for the Unity logo with any image you want. Real-time spot/point and soft shadows Lights and shadows add a lot to the mood of a scene. This feature allows us to go beyond blob shadows and use realistic-looking shadows. This is all well and good if you have the processing space for it. However, most mobile devices do not. This feature should also never be used for static scenery; instead, use static lightmaps, which is what they are for. However, if you can find a good balance between simple needs and quality, this could be the feature that creates the difference between an alright and an awesome game. If you absolutely must have real-time shadows, the directional light supports them and is the fastest of the lights to calculate. It is also the only type of light available to Unity Basic that supports real-time shadows. HDR and tone mapping HDR (short for high dynamic range) and tone mapping allow us to create more realistic lighting effects. Standard rendering uses values from zero to one to represent how much of each color in a pixel is on. This does not allow for a full spectrum of lighting options to be explored. HDR lets the system use values beyond this range and processes them using tone mapping to create better effects, such as a bright morning room or the bloom from a car window reflecting the sun. The downside of this feature is in the processor. The device can still only handle values between zero and one, so converting them takes time. Additionally, the more complex the effect, the more time it takes to render it. It would be surprising to see this used well on handheld devices, even in a simple game. Maybe the modern tablets could handle it. Light probes Light probes are an interesting little feature. When placed in the world, light probes figure out how an object should be lit. Then, as a character walks around, they tell it how to be shaded. The character is, of course, lit by the lights in the scene, but there are limits on how many lights can shade an object at once. Light probes do all the complex calculations beforehand, allowing for better shading at runtime. Again, however, there are concerns about processing power. Too little power and you won't get a good effect; too much and there will be no processing power left for playing the game. Lightmapping with global illumination and area lights All versions of Unity support lightmaps, allowing for the baking of complex static shadows and lighting effects. With the addition of global illumination and area lights, you can add another touch of realism to your scenes. However, every version of Unity also lets you import your own lightmaps. This means that you could use some other program to render the lightmaps and import them separately. Static batching This feature speeds up the rendering process. 
Instead of spending time grouping objects for faster rendering on each frame , this allows the system to save the groups generated beforehand. Reducing the number of draw calls is a powerful step towards making a game run faster. That is exactly what this feature does. Render-to-texture effects This is a fun feature, but of limited use. It allows you to use the output from a camera in your game as a texture. This texture could then, in its most simple form, be put onto a mesh and act as a surveillance camera. You could also do some custom post processing, such as removing the color from the world as the player loses their health. However, this option could become very processor-intensive. Fullscreen post-processing effects This is another processor-intensive feature that probably will not make it into your mobile game. However, you can add some very cool effects to your scene, such as adding motion blur when the player is moving really fast or a vortex effect to warp the scene as the ship passes through a warped section of space. One of the best effects is using the bloom effect to give things a neon-like glow. Occlusion culling This is another great optimization feature. The standard camera system renders everything that is within the camera's view frustum, the view space. Occlusion culling lets us set up volumes in the space our camera can enter. These volumes are used to calculate what the camera can actually see from those locations. If there is a wall in the way, what is the point of rendering everything behind it? Occlusion culling calculates this and stops the camera from rendering anything behind that wall. Deferred rendering If you desire the best looking game possible, with highly detailed lighting and shadows, this is a feature of interest for you. Deferred rendering is a multi-pass process for calculating your game's light and shadow detail. This is, however, an expensive process and requires a decent graphics card to fully maximize its use. Unfortunately, this makes it a little outside of our use for mobile games. Stencil buffer access Custom shaders can use the stencil buffer to create special effects by selectively rendering over specific pixels. It is similar to how one might use an alpha channel to selectively render parts of a texture. GPU skinning This is a processing and rendering method by which the calculations for how a character or object appears, when using a skeleton rig, is given to the graphics card rather than getting it done by the central processor. It is significantly faster to render objects in this way. However, this is only supported on DirectX 11 and OpenGL ES 3.0, leaving it a bit out of reach for our mobile games. Navmesh – dynamic obstacles and priority This feature works in conjunction with the pathfinding system. In scripts, we can dynamically set obstacles, and characters will find their way around them. Being able to set priorities means that different types of characters can take different types of objects into consideration when finding their way around. For example, a soldier must go around the barricades to reach his target. The tank, however, could just crash through, should the player desire. Native code plugins' support If you have a custom set of code in the form of a Dynamic Link Library (DLL), this is the Unity Pro feature you need access to. Otherwise, the native plugins cannot be accessed by Unity for use with your game. Profiler and GPU profiling This is a very useful feature. 
The profiler provides tons of information about how much load your game puts on the processor. With this information, we can get right down into the nitty-gritties and determine exactly how long a script takes to process. Script access to the asset pipeline This is an alright feature. With full access to the pipeline, there is a lot of custom processing that can be done on assets and builds. The full range of possibilities is beyond the scope of this article. However, you can think of it as something that can make tint all of the imported textures slightly blue. Dark skin This is entirely a cosmetic feature. Its point and purpose are questionable. However, if a smooth, dark-skinned look is what you desire, this is the feature that you want. There is an option in the editor to change it to the color scheme used in Unity Basic. For this feature, whatever floats your boat goes. Setting up the development environment Before we can create the next great game for Android, we need to install a few programs. In order to make the Android SDK work, we will first install the Java Development Kit (JDK). Then we will install the Android SDK. After that, we will install Unity. We then have to install an optional code editor. To make sure everything is set up correctly, we will connect to our devices and take a look at some special strategies if the device is a tricky one. Finally, we will install Unity Remote, a program that will become invaluable in your mobile development. Installing the JDK Android's development language of choice is Java; so, to develop for it, we need a copy of the Java SE Development Kit on our computer. The process of installing the JDK is given in the following steps: The latest version of the JDK can be downloaded from http://www.oracle.com/technetwork/java/javase/downloads/index.html. So open the site in a web browser, and you will be able to see the screen showed in the following screenshot: Select Java Platform (JDK) from the available versions and you will be brought to a page that contains the license agreement and allows you to select the type of file you wish to download. Accept the license agreement and select your appropriate Windows version from the list at the bottom. If you are unsure about which version to choose, then Windows x86 is usually a safe choice. Once the download is completed, run the new installer. After a system scan, click on Next two times, the JDK will initialize, and then click on the Next button one more time to install the JDK to the default location. It is as good there as anywhere else, so once it is installed, hit the Close button. We have just finished installing the JDK. We need this so that our Android development kit will work. Luckily, the installation process for this keystone is short and sweet. Installing the Android SDK In order to actually develop and connect to our devices, we need to have installed the Android SDK. Having the SDK installed fulfills two primary requirements. First, it makes sure that we have the bulk of the latest drivers for recognizing devices. Second, we are able to use the Android Debug Bridge (ADB). ADB is the system used for actually connecting to and interacting with a device. The process of installing the Android SDK is given in the following steps: The latest version of the Android SDK can be found at http://developer.android.com/sdk/index.html, so open a web browser and go to the given site. Once there, scroll to the bottom and find the SDK Tools Only section. 
This is where we can get just the SDK, which we need to make Android games with Unity, without dealing with the fancy fluff of the Android Studio. We need to select the .exe package with (Recommended) underneath it (as shown in the following screenshot): You will then be sent to a Terms and Conditions page. Read it if you prefer, but agree to it to continue. Then hit the Download button to start downloading the installer. Once it has finished downloading, start it up. Hit the first Next button and the installer will try to find an appropriate version of the JDK. You will come to a page that will notify you about not finding the JDK if you do not have it installed. If you skipped ahead and do not have the JDK installed, hit the Visit java.oracle.com button in the middle of the page and go back to the previous section for guidance on installing it. If you do have it, continue with the process. Hitting Next again will bring you to a page that will ask you about the person for whom you are installing the SDK . Select Install for anyone using this computer because the default install location is easier to get to for later purposes. Hit Next twice, followed by Install to install the SDK to the default location. Once this is done, hit Next and Finish to complete the installation of the Android SDK Manager. If Android SDK Manager does not start right away, start it up. Either way, give it a moment to initialize. The SDK Manager makes sure that we have the latest drivers, systems, and tools for developing with the Android platform. However, we have to actually install them first (which can be done from the following screen): By default, the SDK manager should select a number of options to install. If not, select the latest Android API (Android L (API 20) as of the time of writing this article), Android Support Library and Google USB Driver found in Extras. Be absolutely sure that Android SDK Platform-tools is selected. This will be very important later. It actually includes the tools that we need to connect to our device. Once everything is selected, hit Install packages at the bottom-right corner. The next screen is another set of license agreements. Every time a component is installed or updated through the SDK manager, you have to agree to the license terms before it gets installed. Accept all of the licenses and hit Install to start the process. You can now sit back and relax. It takes a while for the components to be downloaded and installed. Once this is all done, you can close it out. We have completed the process, but you should occasionally come back to it. Periodically checking the SDK manager for updates will make sure that you are using the latest tools and APIs. The installation of the Android SDK is now finished. Without it, we would be completely unable to do anything on the Android platform. Aside from the long wait to download and install components, this was a pretty easy installation. Installing Unity 3D Perform the following steps to install Unity: The latest version of Unity can be found at http://www.unity3d.com/unity/download. As of the time of writing this article, the current version is 5.0. Once it is downloaded, launch the installer and click on Next until you reach the Choose Components page, as shown in the following screenshot: Here, we are able to select the features of Unity installation:      Example Project: This is the current project built by Unity to show off some of its latest features. 
If you want to jump in early and take a look at what a complete Unity game can look like, leave this checked.      Unity Development Web Player: This is required if you plan on developing browser applications with Unity. As this article is focused on Android development, it is entirely optional. It is, however, a good one to check. You never know when you may need a web demo and since it is entirely free to develop for the web using Unity, there is no harm in having it.      MonoDevelop: It is a wise choice to leave this option unchecked. There is more detail in the next section, but it will suffice for now to say that it just adds an extra program for script editing that is not nearly as useful as it should be. Once you have selected or deselected your desired options, hit Next. If you wish to follow this article, note that we will uncheck MonoDevelop and leave the rest checked. Next is the location of installation. The default location works well, so hit Install and wait. This will take a couple of minutes, so sit back, relax, and enjoy your favorite beverage. Once the installation is complete, the option to run Unity will be displayed. Leave it checked and hit Finish. If you have never installed Unity before, you will be presented with a license activation page (as shown in the following screenshot): While Unity does provide a feature-rich, free version, in order to follow the entirety of this article, one is required to make use of some of the Unity Pro features. At https://store.unity3d.com, you have the ability to buy a variety of licenses. Once they are purchased, you will receive an e-mail containing your new license key. Enter that in the provided text field. If you are not ready to make a purchase, you have two alternatives. We will go over how to reset your license in the Building a simple application section later in the article. The alternatives are as follows:      The first alternative is that you can check the Activate the free version of Unity checkbox. This will allow you to use the free version of Unity. As discussed earlier, there are many reasons to choose this option. The most notable at the moment is cost.      Alternatively, you can select the Activate a free 30-day trial of Unity Pro option. Unity offers a fully functional, one-time installation and a free 30-day trial of  Unity Pro. This trial also includes the Android Pro add-on. Anything produced during the 30 days is completely yours, as if you had purchased a full Unity Pro license. They want you to see how great it is, so you will come back and make a purchase. The downside is that the Trial Version watermark will be constantly displayed at the corner of the game. After the 30 days, Unity will revert to the free version. This is a great option, should you choose to wait before making a purchase. Whatever your choice is, hit OK once you have made it. The next page simply asks you to log in with your Unity account. This will be the same account that you used to make your purchase. Just fill out the fields and hit OK. If you have not yet made a purchase, you can hit Create Account and have it ready for when you do make a purchase. The next page is a short survey on your development interests. Fill it out and hit OK or scroll straight to the bottom and hit Not right now. Finally, there is a thank you page. Hit Start using Unity. After a short initialization, the project wizard will open and we can start creating the next great game. However, there is still a bunch of work to do to connect the development device. 
So for now, hit the X button in the top-right corner to close the project wizard. We will cover how to create a new project in the Building a simple application section later on. We just completed installing Unity 3D. We also had to make a choice about licenses. The alternatives, though, will have a few shortcomings. You will either not have full access to all of the features or be limited to the length of the trial period while making due with a watermark in your games. The optional code editor Now a choice has to be made about code editors. Unity comes with a system called MonoDevelop. It is similar in many respects to Visual Studio. And like Visual Studio, it adds many extra files and much girth to a project, all of which it needs to operate. All this extra girth makes it take an annoying amount of time to start up, before one can actually get to the code. Technically, you can get away with a plain text editor, as Unity doesn't really care. This article recommends using Notepad++, which is found at http://notepad-plus-plus.org/download. It is free to use and it is essentially Notepad with code highlighting. There are several fancy widgets and add-ons for Notepad++ that add even greater functionality to it, but they are not necessary for following this article. If you choose this alternative, installing Notepad++ to the default location will work just fine. Connecting to a device Perhaps the most annoying step in working with Android devices is setting up the connection to your computer. Since there are so many different kinds of devices, it can get a little tricky at times just to have the device recognized by your computer. A simple device connection The simple device connection method involves changing a few settings and a little work in the command prompt. It may seem a little scary, but if all goes well you will be connected to your device shortly: The first thing you need to do is turn on the phone's Developer options. In the latest version of Android, these have been hidden. Go to your phone's settings page and find the About phone page. Next, you need to find the Build number information slot and tap it several times. At first, it will appear to do nothing, but it will shortly display that you need to press the button a few more times to activate the Developer options. The Android team did this so that the average user does not accidentally make changes. Now go back to your settings page and there should be a new Developer options page; select it now. This page controls all of the settings you might need to change while developing your applications. The only checkbox we are really concerned with checking right now is USB debugging. This allows us to actually detect our device from the development environment. If you are using Kindle, be sure to go into Security and turn on Enable ADB as well. There are several warning pop-ups that are associated with turning on these various options. They essentially amount to the same malicious software warnings associated with your computer. Applications with immoral intentions can mess with your system and get to your private information. All these settings need to be turned on if your device is only going to be used for development. However, as the warnings suggest, if malicious applications are a concern, turn them off when you are not developing. Next, open a command prompt on your computer. This can be done most easily by hitting your Windows key, typing cmd.exe, and then hitting Enter. We now need to navigate to the ADB commands. 
If you did not install the SDK to the default location, replace the path in the following commands with the path where you installed it. If you are running a 32-bit version of Windows and installed the SDK to the default location, type the following in the command prompt: cd c:\program files\android\android-sdk\platform-tools If you are running a 64-bit version, type the following in the command prompt: cd c:\program files (x86)\android\android-sdk\platform-tools Now, connect your device to your computer, preferably using the USB cable that came with it. Wait for your computer to finish recognizing the device. There should be a Device drivers installed type of message pop-up when it is done. The following command lets us see which devices are currently connected and recognized by the ADB system. Emulated devices will show up as well. Type the following in the command prompt: adb devices After a short pause for processing, the command prompt will display a list of attached devices along with the unique IDs of all the attached devices. If this list now contains your device, congratulations! You have a developer-friendly device. If it is not completely developer-friendly, there is one more thing that you can try before things get tricky. Go to the top of your device and open your system notifications. There should be one that looks like the USB symbol. Selecting it will open the connection settings. There are a few options here and by default Android selects to connect the Android device as a Media Device. We need to connect our device as a Camera. The reason is the connection method used. Usually, this will allow your computer to connect. We have completed our first attempt at connecting to our Android devices. For most, this should be all that you need to connect to your device. For some, this process is not quite enough. The next little section covers solutions to resolve the issue for connecting trickier devices. For trickier devices, there are a few general things that we can try; if these steps fail to connect your device, you may need to do some special research. Start by typing the following commands. These will restart the connection system and display the list of devices again: adb kill-server adb start-server adb devices If you are still not having any luck, try the following commands. These commands force an update and restart the connection system: cd ../tools android update adb cd ../platform-tools adb kill-server adb start-server adb devices If your device is still not showing up, you have one of the most annoying and tricky devices. Check the manufacturer's website for data syncing and management programs. If you have had your device for quite some time, you have probably been prompted to install this more than once. If you have not already done so, install the latest version even if you never plan on using it. The point is to obtain the latest drivers for your device, and this is the easiest way. Restart the connection system again using the first set of commands and cross your fingers! If you are still unable to connect, the best, professional recommendation that can be made is to google for the solution to your problem. Conducting a search for your device's brand with adb at the end should turn up a step-by-step tutorial that is specific to your device in the first couple of results. Another excellent resource for finding out all about the nitty-gritties of Android devices can be found at http://www.xda-developers.com/. 
Some of the devices that you will encounter while developing will not connect easily. We just covered some quick steps and managed to connect these devices. If we could have covered the processes for every device, we would have. However, the variety of devices is just too large and the manufacturers keep making more. Unity Remote Unity Remote is a great application created by the Unity team. It allows developers to connect their Android-powered devices to the Unity Editor and provide mobile inputs for testing. This is a definite must for any aspiring Unity and Android developer. If you are using a non-Amazon device, acquiring Unity Remote is quite easy. At the time of writing this article, it could be found on Google Play at https://play.google.com/store/apps/details?id=com.unity3d.genericremote. It is free and does nothing but connects your Android device to the Unity Editor, so the app permissions are negligible. In fact, there are currently two versions of Unity Remote. To connect to Unity 4.5 and later versions, we must use Unity Remote 4. If, however, you like the ever-growing Amazon market or seek to target Amazon's line of Android devices, adding Unity Remote will become a little trickier. First, you need to download a special Unity Package from the Unity Asset Store. It can be found at https://www.assetstore.unity3d.com/en/#!/content/18106. You will need to import the package into a fresh project and build it from there. Import the package by going to the top of Unity, navigate to Assets | Import Package | Custom Package, and then navigate to where you saved it. In the next section, we will build a simple application and put it on our device. After you have imported the package, follow along from the step where we open the Build Settings window, replacing the simple application with the created APK. Building a simple application We are now going to create a simple Hello World application. This will familiarize you with the Unity interface and how to actually put an application on your device. Hello World To make sure everything is set up properly, we need a simple application to test with and what better to do that with than a Hello World application? To build the application, perform the following steps: The first step is pretty straightforward and simple: start Unity. If you have been following along so far, once this is done you should see a screen resembling the next screenshot. As the tab might suggest, this is the screen through which we open our various projects. Right now, though, we are interested in creating one; so, select New Project from the top-right corner and we will do just that: Use the Project name* field to give your project a name; Ch1_HelloWorld fits well for a project name. Then use the three dots to the right of the Location* field to choose a place on your computer to put the new project. Unity will create a new folder in this location, based on the project name, to store your project and all of its related files: For now, we can ignore the 3D and 2D buttons. These let us determine the defaults that Unity will use when creating a new scene and importing new assets. We can also ignore the Asset packages button. This lets you select from the bits of assets and functionality that is provided by Unity. They are free for you to use in your projects. Hit the Create Project button, and Unity will create a brand-new project for us. 
The following screenshot shows the windows of the Unity Editor: The default layout of Unity contains a decent spread of windows that are needed to create a game:      Starting from the left-hand side, Hierarchy contains a list of all the objects that currently exist in our scene. They are organized alphabetically and are grouped under parent objects.      Next to this is the Scene view. This window allows us to edit and arrange objects in the 3D space. In the top left-hand side, there are two groups of buttons. These affect how you can interact with the Scene view.      The button on the far left that looks like a hand lets you pan around when you click and drag with the mouse.      The next button, the crossed arrows, lets you move objects around. Its behavior and the gizmo it provides will be familiar if you have made use of any modeling programs.      The third button changes the gizmo to rotation. It allows you to rotate objects.      The fourth button is for scale. It changes the gizmo as well.      The fifth button lets you adjust the position and the scale based on the bounding box of the object and its orientation relative to how you are viewing it.      The second to last button toggles between Pivot and Center. This will change the position of the gizmo used by the last three buttons to be either at the pivot point of the selected object, or at the average position point of all the selected objects.      The last button toggles between Local and Global. This changes whether the gizmo is orientated parallel with the world origin or rotated with the selected object.      Underneath the Scene view is the Game view. This is what is currently being rendered by any cameras in the scene. This is what the player will see when playing the game and is used for testing your game. There are three buttons that control the playback of the Game view in the upper-middle section of the window.      The first is the Play button. It toggles the running of the game. If you want to test your game, press this button.      The second is the Pause button. While playing, pressing this button will pause the whole game, allowing you to take a look at the game's current state.      The third is the Step button. When paused, this button will let you progress through your game one frame at a time.      On the right-hand side is the Inspector window. This displays information about any object that is currently selected.      In the bottom left-hand side is the Project window. This displays all of the assets that are currently stored in the project.      Behind this is Console. It will display debug messages, compile errors, warnings, and runtime errors. At the top, underneath Help, is an option called Manage License.... By selecting this, we are given options to control the license. The button descriptions cover what they do pretty well, so we will not cover them in more detail at this point. The next thing we need to do is connect our optional code editor. At the top, go to Edit and then click on Preferences..., which will open the following window: By selecting External Tools on the left-hand side, we can select other software to manage asset editing. If you do not want to use MonoDevelop, select the drop-down list to the right of External Script Editor and navigate to the executable of Notepad++, or any other code editor of your choice. Your Image application option can also be changed here to Adobe Photoshop or any other image-editing program that you prefer, in the same way as the script editor. 
If you installed the Android SDK to the default location, do not worry about it. Otherwise, click on Browse... and find the android-sdk folder. Now, for the actual creation of this application, right-click inside your Project window. From the new window that pops up, select Create and C# Script from the menu. Type in a name for the new script (HelloWorld will work well) and hit Enter twice: once to confirm the name and once to open it. In this article, this will be a simple Hello World application. Unity supports C#, JavaScript, and Boo as scripting languages. For consistency, this article will be using C#. If you, instead, wish to use JavaScript for your scripts, copies of all of the projects can be found with the other resources for this article, under a _JS suffix for JavaScript. Every script that is going to attach to an object extends the functionality of the MonoBehaviour class. JavaScript does this automatically, but C# scripts must define it explicitly. However, as you can see from the default code in the script, we do not have to worry about setting this up initially; it is done automatically. Extending the MonoBehaviour class lets our scripts access various values of the game object, such as the position, and lets the system automatically call certain functions during specific events in the game, such as the Update cycle and the GUI rendering. For now, we will delete the Start and Update functions that Unity insists on including in every new script. Replace them with a bit of code that simply renders the words Hello World in the top-left corner of the screen; you can now close the script and return to Unity: public void OnGUI() { GUILayout.Label("Hello World"); } Drag the HelloWorld script from the Project window and drop it on the Main Camera object in the Hierarchy window. Congratulations! You have just added your first bit of functionality to an object in Unity. If you select Main Camera in Hierarchy, then Inspector will display all of the components attached to it. At the bottom of the list is your brand-new HelloWorld script. Before we can test it, we need to save the scene. To do this, go to File at the top and select Save Scene. Give it the name HelloWorld and hit Save. A new icon will appear in your Project window, indicating that you have saved the scene. You are now free to hit the Play button in the upper-middle section of the editor and witness the magic of Hello World. We now get to build the application. At the top, select File and then click on Build Settings.... By default, the target platform is PC. Under Platform, select Android and hit Switch Platform in the bottom-left corner of the Build Settings window. Underneath the Scenes In Build box, there is a button labeled Add Current. Click on it to add our currently opened scene to the build. Only scenes that are in this list and checked will be added to the final build of your game. The scene with the number zero next to it will be the first scene that is loaded when the game starts. There is one last group of things to change before we can hit the Build button. Select Player Settings... at the bottom of the Build Settings window. The Inspector window will open Player Settings (shown in the following screenshot) for the application. From here, we can change the splash screen, icon, screen orientation, and a handful of other technical options: At the moment, there are only a few options that we care about. At the top, Company Name is the name that will appear under the information about the application. 
Product Name is the name that will appear underneath the icon on your Android device. You can largely set these to anything you want, but they do need to be set immediately. The important setting is Bundle Identifier, underneath Other Settings and Identification. This is the unique identifier that singles out your application from all other applications on the device. The format is com.CompanyName.ProductName, and it is a good practice to use the same company name across all of your products. For this article, we will be using com.TomPacktAndBegin.Ch1.HelloWorld for Bundle Identifier and opt to use an extra dot (period) for the organization. Go to File and then click on Save again. Now you can hit the Build button in the Build Settings window. Pick a location to save the file, and a file name ( Ch1_HelloWorld.apk works well). Be sure to remember where it is and hit Save. If during the build process Unity complains about where the Android SDK is, select the android-sdk folder inside the location where it was installed. The default would be C:\Program Files\Android\android-sdk for a 32-bit Windows system and C:\Program Files (x86)\Android\android-sdk for a 64-bit Windows system. Once loading is done, which should not be very long, your APK will have been made and we are ready to continue. We are through with Unity for this article. You can close it down and open a command prompt. Just as we did when we were connecting our devices, we need to navigate to the platform-tools folder in order to connect to our device. If you installed the SDK to the default location, use:      For a 32-bit Windows system: cd c:\program files\android\android-sdk\platform-tools      For a 64-bit Windows system: cd c:\program files (x86)\android\android-sdk\platform-tools Double-check to make sure that the device is connected and recognized by using the following command: adb devices Now we will install the application. This command tells the system to install an application on the connected device. The -r indicates that it should override if an application is found with the same Bundle Identifier as the application we are trying to install. This way you can just update your game as you develop, rather than uninstalling before installing the new version each time you need to make an update. The path to the .apk file that you wish to install is shown in quotes as follows: adb install -r "c:\users\tom\desktop\packt\book\ch1_helloworld.apk" Replace it with the path to your APK file; capital letters do not matter, but be sure to have all the correct spacing and punctuations. If all goes well, the console will display an upload speed when it has finished pushing your application to the device and a success message when it has finished the installation. The most common causes for errors at this stage are not being in the platform-tools folder when issuing commands and not having the correct path to the .apk file, surrounded by quotes. Once you have received your success message, find the application on your phone and start it up. Now, gaze in wonder at your ability to create Android applications with the power of Unity. We have created our very first Unity and Android application. Admittedly, it was just a simple Hello World application, but that is how it always starts. This served very well for double-checking the device connection and for learning about the build process without all the clutter from a game. If you are looking for a further challenge, try changing the icon for the application. 
It is a fairly simple procedure that you will undoubtedly want to perform as your game develops. How to do this was mentioned earlier in this section, but, as a reminder, take a look at Player Settings. Also, you will need to import an image. Take a look under Assets, in the menu bar, to know how to do this. Summary There were a lot of technical things in this article. First, we discussed the benefits and possibilities when using Unity and Android. That was followed by a whole lot of installation; the JDK, the Android SDK, Unity 3D, and Unity Remote. We then figured out how to connect to our devices through the command prompt. Our first application was quick and simple to make. We built it and put it on a device. Resources for Article: Further resources on this subject: What's Your Input? [article] That's One Fancy Hammer! [article] Saying Hello to Unity and Android [article]
Using Firebase: Learn how and why to use Firebase

Packt
04 May 2015
8 min read
In this article by Manoj Waikar, author of the book Data-oriented Development with AngularJS, we will learn a brief description about various types of persistence mechanisms, local versus hosted databases, what Firebase is, why to use it, and different use cases where Firebase can be useful. (For more resources related to this topic, see here.) We can write web applications by using the frameworks of our choice—be it server-side MVC frameworks, client-side MVC frameworks, or some combination of these. We can also use a persistence store (a database) of our choice—be it an RDBMS or a more modern NoSQL store. However, making our applications real time (meaning, if you are viewing a page and data related to that page gets updated, then the page should be updated or at least you should get a notification to refresh the page) is not a trivial task and we have to start thinking about push notifications and what not. This does not happen with Firebase. Persistence One of the very early decisions a developer or a team has to make when building any production-quality application is the choice of a persistent storage mechanism. Until a few years ago, this choice, more often than not, boiled down to a relational database such as Oracle, SQL Server, or PostgreSQL. However, the rise of NoSQL solutions such as MongoDB (http://www.mongodb.org/) and CouchDB (http://couchdb.apache.org/)—document-oriented databases or Redis (http://redis.io/), Riak (http://basho.com/riak/), keyvalue stores, Neo4j (http://www.neo4j.org/), and a graph database—has widened the choice for us. Please check the Wikipedia page on NoSQL (http://en.wikipedia.org/wiki/NoSQL) solutions for a detailed list of various NoSQL solutions including their classification and performance characteristics. There is one more buzzword that everyone must have already heard of, Cloud, the short form for cloud computing. Cloud computing briefly means that shared resources (or software) are provided to consumers on a paid/free basis over a network (typically, the Internet). So, we now have the luxury of choosing our preferred RDBMS or NoSQL database as a hosted solution. Consequently, we have one more choice to make—whether to install the database locally (on our own machine or inside the corporate network) or use a hosted solution (in the cloud). As with everything else, there are pros and cons to each of the approaches. The pros of a local database are fast access and one-time buying cost (if it's not an open source database), and the cons include the initial setup time. If you have to evaluate some another database, then you'll have to install the other database as well. The pros of a hosted solution are ease of use and minimal initial setup time, and the cons are the need for a reliable Internet connection, cost (again, if it's not a free option), and so on. Considering the preceding pros and cons, it's a safe bet to use a hosted solution when you are still evaluating different databases and only decide later between a local or a hosted solution, when you've finally zeroed in on your database of choice. What is Firebase? So, where does Firebase fit into all of this? Firebase is a NoSQL database that stores data as simple JSON documents. We can, therefore, compare it to other document-oriented databases such as CouchDB (which also stores data as JSON) or MongoDB (which stores data in the BSON, which stands for binary JSON, format). 
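To make this more concrete, the following is a purely illustrative sketch of the kind of JSON tree Firebase stores; the node names and values are invented for this example:

{
  "messages": {
    "-JRHTHaIs0jNPLXO": {
      "author": "alice",
      "text": "Hello, Firebase!"
    }
  }
}

Because the data is plain JSON, every node is also addressable through its own URL via the REST API, so the text field above could be read from a URL such as https://your-app.firebaseio.com/messages/-JRHTHaIs0jNPLXO/text.json, where your-app is a placeholder for your own Firebase app name.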
Although Firebase is a database with a RESTful API, it's also a real-time database, which means that the data is synchronized between different clients and with the backend server almost instantaneously. This implies that if the underlying data is changed by one of the clients, it gets streamed in real time to every connected client; hence, all the other clients automatically get updates with the newest set of data (without anyone having to refresh these clients manually). So, to summarize, Firebase is an API and a cloud service that gives us a real-time and scalable (NoSQL) backend. It has libraries for most server-side languages/frameworks such as Node.js, Java, Python, PHP, Ruby, and Clojure. It has official libraries for Node.js and Java and unofficial third-party libraries for Python, Ruby, and PHP. It also has libraries for most of the leading client-side frameworks such as AngularJS, Backbone, Ember, React, and mobile platforms such as iOS and Android. Firebase – Benefits and why to use? Firebase offers us the following benefits: It is a cloud service (a hosted solution), so there isn't any setup involved. Data is stored as native JSON, so what you store is what you see (on the frontend, fetched through a REST API)—WYSIWYS. Data is safe because Firebase requires 2048-bit SSL encryption for all data transfers. Data is replicated and backed-up to multiple secure locations, so there are minimal chances of data loss. When data changes, apps update instantly across devices. Our apps can work offline—as soon as we get connectivity, the data is synchronized instantly. Firebase gives us lightning fast data synchronization. So, combined with AngularJS, it gives us three-way data binding between HTML, JavaScript, and our backend (data). With two-way data binding, whenever our (JavaScript) model changes, the view (HTML) updates itself and vice versa. But, with three-way data binding, even when the data in our database changes, our JavaScript model gets updated, and consequently, the view gets updated as well. Last but not the least, it has libraries for the most popular server-side languages/frameworks (such as Node.js, Ruby, Java, and Python) as well as the popular client-side frameworks (such as Backbone, Ember, and React), including AngularJS. The Firebase binding for AngularJS is called AngularFire (https://www.firebase.com/docs/web/libraries/angular/). Firebase use cases Now that you've read how Firebase makes it easy to write applications that update in real time, you might still be wondering what kinds of applications are most suited for use with Firebase. Because, as often happens in the enterprise world, either you are not at liberty to choose all the components of your stack or you might have an existing application and you just have to add some new features to it. So, let's study the three main scenarios where Firebase can be a good fit for your needs. Apps with Firebase as the only backend This scenario is feasible if: You are writing a brand-new application or rewriting an existing one from scratch You don't have to integrate with legacy systems or other third-party services Your app doesn't need to do heavy data processing or it doesn't have complex user authentication requirements In such scenarios, Firebase is the only backend store you'll need and all dynamic content and user data can be stored and retrieved from it. 
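As a minimal sketch of what such a Firebase-only app could look like on the client, the following AngularFire snippet binds one node of the database to the scope; the Firebase URL, the module name, and the scope properties are placeholders, and error handling is omitted:

var app = angular.module('sampleApp', ['firebase']);

app.controller('MessagesCtrl', ['$scope', '$firebaseArray',
  function ($scope, $firebaseArray) {
    // Connect to one node of the Firebase JSON tree (placeholder URL).
    var ref = new Firebase('https://your-app.firebaseio.com/messages');

    // $firebaseArray keeps $scope.messages synchronized in real time,
    // so the view updates whenever any connected client changes the data.
    $scope.messages = $firebaseArray(ref);

    // Adding an item pushes it to Firebase and to every other client.
    $scope.addMessage = function (text) {
      $scope.messages.$add({ text: text });
    };
  }
]);

With a binding like this, there is no separate application server to write: the template simply repeats over messages with ng-repeat, while Firebase handles storage, synchronization, and the REST API.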
Existing apps with some features powered by Firebase This scenario is feasible if you already have a site and want to add some real-time capabilities to it without touching other parts of the system. For example, you have a working website and just want to add chat capabilities, or maybe, you want to add a comment feed that updates in real time or you have to show some real-time notifications to your users. In this case, the clients can connect to your existing server (for existing features) and they can connect to Firebase for the newly added real-time capabilities. So, you can use Firebase together with the existing server. Both client and server code powered by Firebase In some use cases, there might be computationally intensive code that can't be run on the client. In situations like these, Firebase can act as an intermediary between the server and your clients. So, the server talks to the clients by manipulating data in Firebase. The server can connect to Firebase using either the Node.js library (for Node.js-based server-side applications) or through the REST API (for other server-side languages). Similarly, the server can listen to the data changes made by the clients and can respond appropriately. For example, the client can place tasks in a queue that the server will process later. One or more servers can then pick these tasks from the queue and do the required processing (as per their availability) and then place the results back in Firebase so that the clients can read them. Firebase is the API for your product You might not have realized by now (but you will once you see some examples) that as soon as we start saving data in Firebase, the REST API keeps building side-by-side for free because of the way data is stored as a JSON tree and is associated on different URLs. Think for a moment if you had a relational database as your persistence store; you would then need to specially write REST APIs (which are obviously preferable to old RPC-style web services) by using the framework available for your programming language to let external teams or customers get access to data. Then, if you wanted to support different platforms, you would need to provide libraries for all those platforms whereas Firebase already provides real-time SDKs for JavaScript, Objective-C, and Java. So, Firebase is not just a real-time persistence store, but it doubles up as an API layer too. Summary In this article, we learned a brief description about Firebase is, why to use it, and different use cases where Firebase can be useful. Resources for Article: Further resources on this subject: AngularJS Performance [article] An introduction to testing AngularJS directives [article] Our App and Tool Stack [article]
Less with External Applications and Frameworks

Packt
30 Apr 2015
11 min read
In this article by Bass Jobsen, author of the book Less Web Development Essentials - Second Edition, we will cover the following topics: WordPress and Less Using Less with the Play framework, AngularJS, Meteor, and Rails (For more resources related to this topic, see here.) WordPress and Less Nowadays, WordPress is not only used for weblogs, but it can also be used as a content management system for building a website. The WordPress system, written in PHP, has been split into the core system, plugins, and themes. The plugins add additional functionalities to the system, and the themes handle the look and feel of a website built with WordPress. They work independently of each other and are also independent of the theme. The theme does not depend on plugins. WordPress themes define the global CSS for a website, but every plugin can also add its own CSS code. The WordPress theme developers can use Less to compile the CSS code of the themes and the plugins. Using the Sage theme by Roots with Less Sage is a WordPress starter theme. You can use it to build your own theme. The theme is based on HTML5 Boilerplate (http://html5boilerplate.com/) and Bootstrap. Visit the Sage theme website at https://roots.io/sage/. Sage can also be completely built using Gulp. More information about how to use Gulp and Bower for the WordPress development can be found at https://roots.io/sage/docs/theme-development/. After downloading Sage, the Less files can be found at assets/styles/. These files include Bootstrap's Less files. The assets/styles/main.less file imports the main Bootstrap Less file, bootstrap.less. Now, you can edit main.less to customize your theme. You will have to rebuild the Sage theme after the changes you make. You can use all of the Bootstrap's variables to customize your build. JBST with a built-in Less compiler JBST is also a WordPress starter theme. JBST is intended to be used with the so-called child themes. More information about the WordPress child themes can be found at https://codex.wordpress.org/Child_Themes. After installing JBST, you will find a Less compiler under Appearance in your Dashboard pane, as shown in the following screenshot: JBST's built-in Less compiler in the WordPress Dashboard The built-in Less compiler can be used to fully customize your website using Less. Bootstrap also forms the skeleton of JBST, and the default settings are gathered by the a11y bootstrap theme mentioned earlier. JBST's Less compiler can be used in the following different ways: First, the compiler accepts any custom-written Less (and CSS) code. For instance, to change the color of the h1 elements, you should simply edit and recompile the code as follows: h1 {color: red;} Secondly, you can edit Bootstrap's variables and (re)use Bootstrap's mixins. To set the background color of the navbar component and add a custom button, you can use the code block mentioned here in the Less compiler: @navbar-default-bg:             blue; .btn-colored { .button-variant(blue;red;green); } Thirdly, you can set JBST's built-in Less variables as follows: @footer_bg_color: black; Lastly, JBST has its own set of mixins. To set a custom font, you can edit the code as shown here: .include-custom-font(@family: arial,@font-path, @path:   @custom-font-dir, @weight: normal, @style: normal); In the preceding code, the parameters mentioned were used to set the font name (@family) and the path name to the font files (@path/@font-path). The @weight and @style parameters set the font's properties. 
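For example, a call to this mixin entered in the built-in compiler might look like the following; the font name and file name are placeholders for your own font files, and the default @custom-font-dir path is assumed:

.include-custom-font(@family: 'MyWebFont', @font-path: 'mywebfont');

body {
  font-family: 'MyWebFont', arial, sans-serif;
}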
For more information, visit https://github.com/bassjobsen/Boilerplate-JBST-Child-Theme. More Less code blocks can also be added to a special file (wpless2css/wpless2css.less or less/custom.less); these files will give you the option to add, for example, a library of prebuilt mixins. After adding the library using this file, the mixins can also be used with the built-in compiler. The Semantic UI WordPress theme The Semantic UI, as discussed earlier, offers its own WordPress plugin. The plugin can be downloaded from https://github.com/ProjectCleverWeb/Semantic-UI-WordPress. After installing and activating this theme, you can use your website directly with the Semantic UI. With the default setting, your website will look like the following screenshot: Website built with the Semantic UI WordPress theme WordPress plugins and Less As discussed earlier, the WordPress plugins have their own CSS. This CSS will be added to the page like a normal style sheet, as shown here: <link rel='stylesheet' id='plugin-name'   href='//domain/wp-content/plugin-name/plugin-name.css?ver=2.1.2'     type='text/css' media='all' /> Unless a plugin provides the Less files for their CSS code, it will not be easy to manage its styles with Less. The WP Less to CSS plugin The WP Less to CSS plugin, which can be found at http://wordpress.org/plugins/wp-less-to-css/, offers the possibility of styling your WordPress website with Less. As seen earlier, you can enter the Less code along with the built-in compiler of JBST. This code will then be compiled into the website's CSS. This plugin compiles Less with the PHP Less compiler, Less.php. Using Less with the Play framework The Play framework helps you in building lightweight and scalable web applications by using Java or Scala. It will be interesting to learn how to integrate Less with the workflow of the Play framework. You can install the Play framework from https://www.playframework.com/. To learn more about the Play framework, you can also read, Learning Play! Framework 2, Andy Petrella, Packt Publishing. To read Petrella's book, visit https://www.packtpub.com/web-development/learning-play-framework-2. To run the Play framework, you need JDK 6 or later. The easiest way to install the Play framework is by using the Typesafe activator tool. After installing the activator tool, you can run the following command: > activator new my-first-app play-scala The preceding command will install a new app in the my-first-app directory. Using the play-java option instead of the play-scala option in the preceding command will lead to the installation of a Java-based app. Later on, you can add the Scala code in a Java app or the Java code in a Scala app. After installing a new app with the activator command, you can run it by using the following commands: cd my-first-app activator run Now, you can find your app at http://localhost:9000. To enable the Less compilation, you should simply add the sbt-less plugin to your plugins.sbt file as follows: addSbtPlugin("com.typesafe.sbt" % "sbt-less" % "1.0.6") After enabling the plugin, you can edit the build.sbt file so as to configure Less. You should save the Less files into app/assets/stylesheets/. Note that each file in app/assets/stylesheets/ will compile into a separate CSS file. 
The CSS files will be saved in public/stylesheets/ and should be called in your templates with the HTML code shown here:

<link rel="stylesheet"
  href="@routes.Assets.at("stylesheets/main.css")">

In case you are using a library with more files imported into the main file, you can define filters in the build.sbt file. The filters for these so-called partial source files can look like the following code:

includeFilter in (Assets, LessKeys.less) := "*.less"
excludeFilter in (Assets, LessKeys.less) := "_*.less"

The preceding filters ensure that files starting with an underscore are not compiled into CSS.

Using Bootstrap with the Play framework
Bootstrap is a CSS framework whose Less code includes many files, so keeping your code up-to-date by using partials, as described in the preceding section, will not work well. Alternatively, you can use WebJars with Play for this purpose. To enable the Bootstrap WebJar, you should add the code shown here to your build.sbt file:

libraryDependencies += "org.webjars" % "bootstrap" % "3.3.2"

When using the Bootstrap WebJar, you can import Bootstrap into your project as follows:

@import "lib/bootstrap/less/bootstrap.less";

AngularJS and Less
AngularJS is a structural framework for dynamic web apps. It extends the HTML syntax, which enables you to create dynamic web views. Of course, you can use AngularJS with Less. You can read more about AngularJS at https://angularjs.org/. The HTML code shown here gives you an example of what repeating HTML elements with AngularJS looks like:

<!doctype html>
<html ng-app>
<head>
   <title>My Angular App</title>
</head>
<body>
   <ul>
     <li ng-repeat="item in [1,2,3]">{{ item }}</li>
   </ul>
   <script
     src="https://ajax.googleapis.com/ajax/libs/angularjs/1.3.12/angular.min.js"></script>
</body>
</html>

This code should make your page look like the following screenshot:

Repeating the HTML elements with AngularJS

The ngBoilerplate system
The ngBoilerplate system is an easy way to start a project with AngularJS. The project comes with a directory structure for your application and a Grunt build process, including a Less task and other useful libraries. To start your project, simply run the following commands on your console:

> git clone git://github.com/ngbp/ngbp
> cd ngbp
> sudo npm -g install grunt-cli karma bower
> npm install
> bower install
> grunt watch

Then, open file:///path/to/ngbp/build/index.html in your browser. After installing ngBoilerplate, you can write your Less code in src/less/main.less. By default, only src/less/main.less will be compiled into CSS; other libraries and other code should be imported into this file.

Meteor and Less
Meteor is a complete open-source platform for building web and mobile apps in pure JavaScript. Meteor focuses on fast development, and you can publish your apps for free on Meteor's servers. Meteor is available for Linux and OS X, and you can also install it on Windows. Installing Meteor is as simple as running the following command on your console:

> curl https://install.meteor.com | /bin/sh

To compile the app's CSS code with Less, you should install the Less package by running the command shown here:

> meteor add less

Note that the Less package compiles every file with the .less extension into CSS. For each file with the .less extension, a separate CSS file is created.
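For instance, a small style sheet saved as client/main.less in your Meteor app will be picked up and compiled automatically; the file name, variable, and selector below are only examples:

// client/main.less -- compiled automatically by the less package
@accent: #8e44ad;

h1 {
  color: @accent;
}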
When you use partial Less files that should only be imported (with the @import directive) and not compiled into CSS themselves, you should give these partials the .import.less extension. When using CSS frameworks or libraries with many partials, renaming the files by adding the .import.less extension will hinder you in keeping your code up to date. Also, running postprocess tasks on the CSS code is not always possible.

Many packages for Meteor are available at https://atmospherejs.com/. Some of these packages can help you solve the issues with partials mentioned earlier. To use Bootstrap, you can use the meteor-bootstrap package, which can be found at https://github.com/Nemo64/meteor-bootstrap; it requires the installation of the Less package. Other packages provide postprocess tasks, such as autoprefixing your code.

Ruby on Rails and Less
Ruby on Rails, or Rails for short, is a web application development framework written in the Ruby language. Those who want to start developing with Ruby on Rails can read the Getting Started with Rails guide, which can be found at http://guides.rubyonrails.org/getting_started.html. In this section, you can read how to integrate Less into a Ruby on Rails app.

After installing the tools and components required for starting with Rails, you can create a new application by running the following command on your console:

> rails new blog

Now, you should integrate Less with Rails. You can use less-rails (https://github.com/metaskills/less-rails) to bring Less to Rails. Open the Gemfile file, comment out the sass-rails gem, and add the less-rails gem, as shown here:

#gem 'sass-rails', '~> 5.0'
gem 'less-rails' # Less
gem 'therubyracer' # Ruby

Then, create a controller called welcome with an action called index by running the following command:

> bin/rails generate controller welcome index

The preceding command will generate app/views/welcome/index.html.erb. Open app/views/welcome/index.html.erb and make sure that it contains the HTML code shown here:

<h1>Welcome#index</h1>
<p>Find me in app/views/welcome/index.html.erb</p>

The next step is to create a file, app/assets/stylesheets/welcome.css.less, with the Less code. The Less code in app/assets/stylesheets/welcome.css.less looks as follows:

@color: red;
h1 { color: @color; }

Now, start a web server with the following command:

> bin/rails server

Finally, you can visit the application at http://localhost:3000/. The application should look like the example shown here:

The Rails app

Summary
In this article, you learned how to use Less with WordPress, the Play framework, AngularJS, Meteor, and Ruby on Rails.

Resources for Article:

Further resources on this subject:

Media Queries with Less [article]
Bootstrap 3 and other applications [article]
Getting Started with Bootstrap [article]