
How-To Tutorials - Mobile

213 Articles
Speeding up Gradle builds for Android

Packt
16 Jul 2015
7 min read
In this article by Kevin Pelgrims, the author of the book Gradle for Android, we will cover a few tips and tricks that will help speed up Gradle builds. A lot of Android developers who start using Gradle complain about the prolonged compilation time. Builds can take longer than they do with Ant, because Gradle goes through three phases of the build lifecycle every time you execute a task. This makes the whole process very configurable, but also quite slow. Luckily, there are several ways to speed up Gradle builds.

Gradle properties

One way to tweak the speed of a Gradle build is to change some of the default settings. You can enable parallel builds by setting a property in a gradle.properties file placed in the root of the project. All you need to do is add the following line:

```
org.gradle.parallel=true
```

Another easy win is to enable the Gradle daemon, which starts a background process when you run a build for the first time. Any subsequent builds then reuse that background process, cutting out the startup cost. The process is kept alive as long as you use Gradle and is terminated after three hours of idle time. Using the daemon is particularly useful when you use Gradle several times in a short time span. You can enable the daemon in the gradle.properties file like this:

```
org.gradle.daemon=true
```

In Android Studio, the Gradle daemon is enabled by default. This means that after the first build from inside the IDE, the next builds are a bit faster. If you build from the command-line interface, however, the Gradle daemon is disabled unless you enable it in the properties.

To speed up the compilation itself, you can tweak parameters on the Java Virtual Machine (JVM). There is a Gradle property called jvmargs that enables you to set different values for the memory allocation pool of the JVM. The two parameters that have a direct influence on your build speed are Xms and Xmx. The Xms parameter sets the initial amount of memory to be used, while the Xmx parameter sets a maximum. You can manually set these values in the gradle.properties file like this:

```
org.gradle.jvmargs=-Xms256m -Xmx1024m
```

You need to set the desired amount and a unit, which can be k for kilobytes, m for megabytes, or g for gigabytes. By default, the maximum memory allocation (Xmx) is set to 256 MB, and the starting memory allocation (Xms) is not set at all. The optimal settings depend on the capabilities of your computer.

The last property you can configure to influence build speed is org.gradle.configureondemand. This property is particularly useful if you have complex projects with several modules, as it tries to limit the time spent in the configuration phase by skipping modules that are not required for the task being executed. If you set this property to true, Gradle will try to figure out which modules have configuration changes and which do not before it runs the configuration phase. This feature will not be very useful if you only have an Android app and a library in your project. If you have a lot of loosely coupled modules, though, it can save you a lot of build time.

System-wide Gradle properties

If you want to apply these properties system-wide to all your Gradle-based projects, you can create a gradle.properties file in the .gradle folder in your home directory. On Microsoft Windows, the full path to this directory is %UserProfile%\.gradle; on Linux and Mac OS X, it is ~/.gradle.
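Putting these options together, a complete gradle.properties for a local development machine might look like the following sketch; the memory values are illustrative and should be tuned to your own hardware:

```
# gradle.properties (project root, or ~/.gradle/ for system-wide use)
org.gradle.parallel=true
org.gradle.daemon=true
org.gradle.configureondemand=true
org.gradle.jvmargs=-Xms256m -Xmx1024m
```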
It is good practice to set these properties in your home directory rather than at the project level. The reason is that you usually want to keep memory consumption down on build servers, where build time is of less importance.

Android Studio

The Gradle properties you can change to speed up the compilation process are also configurable in the Android Studio settings. To find the compiler settings, open the Settings dialog, and then navigate to Build, Execution, Deployment | Compiler. On that screen, you can find settings for parallel builds, JVM options, configure on demand, and so on. These settings only show up for Gradle-based Android modules. Configuring these settings from Android Studio is easier than configuring them manually in the build configuration file, and the settings dialog makes it easy to find properties that influence the build process.

Profiling

If you want to find out which parts of the build are slowing the process down, you can profile the entire build process. You can do this by adding the --profile flag whenever you execute a Gradle task. When you provide this flag, Gradle creates a profiling report, which can tell you which parts of the build process are the most time consuming. Once you know where the bottlenecks are, you can make the necessary changes. The report is saved as an HTML file in your module in build/reports/profile.

The profiling report shows an overview of the time spent in each phase while executing the task. Below that summary is an overview of how much time Gradle spent on the configuration phase for each module. The Dependency Resolution section shows how long it took to resolve dependencies, per module. Lastly, the Task Execution section contains an extremely detailed task execution overview, with the timing for every single task, ordered by execution time from high to low.

Jack and Jill

If you are willing to use experimental tools, you can enable Jack and Jill to speed up builds. Jack (Java Android Compiler Kit) is a new Android build toolchain that compiles Java source code directly to the Android Dalvik executable (dex) format. It has its own .jack library format and takes care of packaging and shrinking as well. Jill (Jack Intermediate Library Linker) is a tool that can convert .aar and .jar files to .jack libraries. These tools are still quite experimental, but they were made to improve build times and to simplify the Android build process. It is not recommended to use Jack and Jill for production versions of your projects, but they are made available so that you can try them out.

To be able to use Jack and Jill, you need build tools version 21.1.1 or higher, and the Android plugin for Gradle version 1.0.0 or higher. Enabling Jack and Jill is as easy as setting one property in the defaultConfig block:

```
android {
    buildToolsRevision '22.0.1'
    defaultConfig {
        useJack = true
    }
}
```

You can also enable Jack and Jill on a certain build type or product flavor.
This way, you can continue using the regular build toolchain, and have an experimental build on the side:

```
android {
    productFlavors {
        regular {
            useJack = false
        }
        experimental {
            useJack = true
        }
    }
}
```

As soon as you set useJack to true, minification and obfuscation will no longer go through ProGuard, but you can still use the ProGuard rules syntax to specify certain rules and exceptions. Use the same proguardFiles method that we mentioned before, when talking about ProGuard.

Summary

This article covered different ways to speed up builds: we first saw how to tweak the settings and configure Gradle and the JVM, then how to detect the parts that are slowing the process down, and finally we learned about the Jack and Jill toolchain.

Resources for Article:

Further resources on this subject:
Android Tech Page
Android Virtual Device Manager
Apache Maven and m2eclipse
Saying Hello to Unity and Android
Application Development Workflow

Packt
08 Sep 2015
15 min read
In this article by Ivan Turkovic, author of the book PhoneGap Essentials, you will learn some of the basics of working with PhoneGap application development and how to start building an application. We will go over some useful steps and tips to get the most out of your PhoneGap application. In this article, you will learn about the following topics:

- An introduction to a development workflow
- Best practices
- Testing

(For more resources related to this topic, see here.)

An introduction to a development workflow

PhoneGap solves the great problem of developing mobile applications for multiple platforms at the same time, but it is still pretty open about how you want to approach the creation of an application. You do not get any predefined frameworks out of the box by default. It simply allows you to use standard web technologies such as HTML5, CSS3, and JavaScript for hybrid mobile application development. The applications are executed in wrappers that are custom-built to work on every platform, and the underlying web view behaves in the same way on all the platforms. For accessing device APIs, it relies on standard API bindings to access every device's sensors and other features.

The developers who start using PhoneGap usually come from different backgrounds:

- Mobile developers who want to expand the functionality of their application to other platforms but do not want to learn a new language for each platform
- Web developers who want to port their existing desktop web application to a mobile application; if they are using a responsive design, it is quite simple to do this
- Experienced mobile developers who want to use both native and web components in their application, so that the web components can communicate with the internal native application code as well

The PhoneGap project itself is pretty simple. By default, it opens an index.html page and loads the initial CSS file, JavaScript, and other resources needed to run it. Besides the user's resources, it needs to refer to the cordova.js file, which provides the API bindings for all the plugins. From here onwards, you can take different steps, but usually the process falls into two main workflows: web project development and native platform development.

Web project development

A web project development workflow can be used when you want to create a PhoneGap application that runs on many mobile operating systems with as few changes for any specific one as possible. So there is a single codebase that works across all the different devices. This has become possible in the latest versions since the introduction of the command-line interface (CLI). It automates the tedious work involved in a lot of the functionality while taking care of each platform, such as building the app, copying the web assets to the correct location for every supported platform, adding platform-specific changes, and finally running the build scripts to generate binaries. This process can be automated even further with task runners such as Gulp or Grunt. You can run these tasks before running PhoneGap commands; this way, you can optimize the assets before they are used. You can also run JSLint automatically on any change, or do automatic builds for every available platform.

Native platform development

A native platform development workflow can be thought of as focusing on building an application for a single platform, where you need to change the lower-level platform details. The benefit of this approach is that it gives you more flexibility: you can mix native code with WebView code and set up communication between them. This is appropriate for functionality where some features are hard to reproduce with web views only; for example, a video app where you do the video editing in native code, while all the social features and interaction are done with web views. Even if you want to start with this approach, it is better to begin the new project with the web project development workflow and then separate the code for your specific needs. One thing to keep in mind is that, to develop with this approach, it is better to build the application in the more advanced IDE environments that you would usually use for building native applications.

Best practices

Running hybrid mobile applications requires some sacrifices in terms of performance and functionality, so it is good to go over some useful tips for new PhoneGap developers.

Use local assets for the UI

As mobile devices are limited by connection speeds and mobile data plans are not generous with bandwidth, you need to prepare all the UI components in the application before deploying to the app store. Nobody wants to use an application that takes a few seconds to load a server-rendered UI when the same thing could be done on the client. For example, Google Fonts or other non-UI assets that are usually loaded from a server are good enough for the development process, but for production you need to store all the assets in the application's container and not download them while it runs. You do not want the application to wait while an important part is being loaded.

The best advice on the UI that I can give you is to adopt the Single Page Application (SPA) design; it is a client-side application that is run from one request to a web page. Initial loading means taking care of loading all the assets that are required for the application to function, and any further updates are done via AJAX (such as loading data). When you use an SPA, not only do you minimize the amount of interaction with the server, you also organize your application in a more efficient manner. One of the benefits is that the application doesn't need to wait for a deviceready event for each additional page that it loads.

Network access for data

As you have seen in the previous section, there are many limitations that mobile applications face with the network connection, from mobile data plans to network latency. So you do not want to rely on it for crucial elements, unless real-time communication is required for the application. Try to keep network access only for crucial data; everything else that is used frequently can be packed into the assets. If the received data does not change often, it is advisable to cache it for offline use. There are many ways to achieve this, such as localStorage, sessionStorage, WebSQL, or a file. When loading data, try to load only the data you need at that moment. If you have a comment section, it does not make sense to load all thousand comments; the first twenty comments should be enough to start with.
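As a minimal sketch of this caching idea using localStorage (the key name and data shape here are illustrative, not from the book):

```
// Cache fetched data for offline use, with a timestamp to judge staleness.
function cacheComments(comments) {
    localStorage.setItem('comments', JSON.stringify({
        savedAt: Date.now(),
        items: comments
    }));
}

function loadCachedComments() {
    var cached = localStorage.getItem('comments');
    return cached ? JSON.parse(cached).items : [];
}
```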
Non-blocking UI

When you are loading additional data to show in the application, don't pause the application until you receive all the data that you need. You can add an animation or a spinner to show progress. Do not let the user stare at the same screen after pressing a button, and try to disable actions once they are in motion, in order to prevent sending the same action multiple times.

CSS animations

As most modern mobile platforms now support CSS3 with a more or less consistent feature set, it is better to create animations and transitions with CSS rather than with plain JavaScript DOM manipulation, which is how it was done before CSS3. CSS3 is much faster, as the browser engine supports hardware acceleration of CSS animations, and it is more fluid than JavaScript animations. CSS3 supports translations and full keyframe animations as well, so you can be really creative in making your application more interactive.

Click events

You should avoid click events at any cost and use only touch events. Click events work the same way as they do in the desktop browser, but they take longer to process, as the mobile browser engine needs to process the touch or touchhold events before firing a click event. This usually takes 300 ms, which is more than enough to give the impression of a slow response. So try to use touchstart or touchend events instead. There is also a solution for this called FastClick.js. It is a simple, easy-to-use library for eliminating the 300 ms delay between a physical tap and the firing of a click event on mobile browsers.
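A minimal FastClick setup might look like the following sketch; in a Cordova app you would typically attach it once the deviceready event has fired:

```
// Eliminate the ~300 ms click delay by attaching FastClick to the body.
document.addEventListener('deviceready', function () {
    FastClick.attach(document.body);
}, false);
```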
Performance

The performance that we get on desktops isn't reflected in mobile devices. Most developers assume that the performance doesn't change much, especially as most of them test their applications on the latest mobile devices, while the vast majority of users use mobile devices that are two to three years old. You have to keep in mind that even the latest mobile devices have a slower CPU, less RAM, and a weaker GPU. Mobile devices have recently been catching up in the raw numbers of these components but, in reality, they are slower, and their maximum performance is limited by a battery that prevents them from sustaining peak performance for a prolonged time.

Optimize the image assets

We are no longer limited by the size of the app that we need to deploy. However, you need to optimize the assets, especially images, as they make up a large part of the assets, and make them appropriate for the device. You should prepare images in the right size; do not ship the biggest image you have and force the mobile device to scale it in HTML. Choosing the right image size is not an easy task if you are developing an application that should support a wide array of screens, especially for Android, which has a very fragmented market with different screen sizes. Scaled images might show additional artifacts on the screen and might not look crisp, and you will be hogging additional memory for an image that could have a smaller footprint. You should remember that mobile devices still have limited resources and the battery doesn't last forever. If you are going to use PhoneGap Build, you will also need to make sure you do not exceed the size limit of the service.

Offline status

As we all know, network access is slow and limited, and network coverage is not perfect, so it is quite possible that your application will have to work in offline mode even in the usual locations. Bad reception can be caused by being inside a building with thick walls or in a basement, and some weather conditions can affect reception too. The application should be able to handle this situation and respond to it properly, for example by limiting the parts of the application that require a network connection, or by caching data and syncing it once you are online again. This is one of the aspects that developers usually forget to test: running the app in offline mode to see how it behaves under such conditions. A plugin is available for detecting the current state and the events fired when the app passes between the two modes.
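With the Cordova network-information plugin, detecting those transitions might look like the following sketch:

```
// Fired by Cordova when the device loses or regains connectivity.
document.addEventListener('offline', function () {
    // Disable network-dependent features and fall back to cached data.
}, false);

document.addEventListener('online', function () {
    // Re-enable network features and sync any queued changes.
}, false);
```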
Load only what you need

A lot of developers do this, including myself. We need some part of a library or a widget from a framework, which we don't need for anything else, and yet instead of loading only that specific element, we lazily include the full framework. This can load an immense number of resources that we will never need, but they will still run in the background. It might also be the root cause of some problems, as some libraries do not mix well, and we can spend hours trying to solve them.

Transparency

You should use as few elements with transparent parts as possible, as they are quite processor-intensive: the screen has to be updated on every change behind them. The same applies to other processor-intensive visual elements such as shadows or gradients. The great thing is that all the major platforms have moved away from flashy graphical elements and started using flat UI design.

JSHint

If you use JSHint throughout development, it will save you a lot of time when working in JavaScript. It is a static code analysis tool that checks whether JavaScript source code complies with coding rules. It will detect the common mistakes made in JavaScript; as JavaScript is not a compiled language, you can't see an error until you run the code. At the same time, JSHint can be a very restrictive and demanding tool. Many beginners in JavaScript, PhoneGap, or mobile programming can be overwhelmed by the number of errors or bad practices that JSHint points out.

Testing

Testing is an important aspect of building applications, and mobile applications are no exception. For most development that doesn't require native device APIs, you can use the platform simulators and see the results. However, if you are using native device APIs that are not supported by simulators, then you need a real device to run tests on. It is not unusual to use desktop browsers resized to a mobile device's screen resolution to emulate its screen while you are developing the application, just to test the UI screens, since it is much faster and easier than building and running the application on a simulator or real device for every small change.

There is a great plugin for the Google Chrome browser called Apache Ripple. It can be run without any additional tools; the Apache Ripple simulator runs as a web app in the Google Chrome browser. With Cordova, it can be used to simulate your app on a number of iOS and Android devices, and it provides basic support for the core Cordova plugins such as Geolocation and Device Orientation.

You can run the application in a real device's browser or use the PhoneGap developer app. This simplifies the workflow, as you can test the application on your mobile device without the need to re-sign, recompile, or reinstall your application to test the code. The only disadvantage is that, as with simulators, you cannot access the device APIs that aren't available in regular web browsers. The PhoneGap developer app allows you to access device APIs as long as you are using one of the supplied APIs.

Remember to always test the application on real devices, at least before deploying to the app store. Computers have almost unlimited resources compared to mobile devices, so an application that runs flawlessly on the computer might fail on mobile devices due to low memory. As simulators are faster than real devices, you might get the impression that the app will work equally fast on every device, but it won't, especially on older devices. So, if you have an older device, it is better to test the responsiveness on it. Another reason to use a mobile device instead of the simulator is that it is hard to get a good usability impression from clicking on the interface on a computer screen, without your fingers interfering and blocking the view as they would on the device.

Even though it is rare that you would hit bugs in plain PhoneGap when a new version is introduced, it can still happen. If you use a UI framework, it is good to try it on different versions of the operating systems, as it might not work flawlessly on each of them. Even though hybrid mobile application development has been available for some time, it is still evolving, and as yet there are no default UI frameworks to use; even PhoneGap itself is still evolving. The same applies to the different plugins: some features might get deprecated or might not be supported, so it is good to implement alternatives or give feedback to the users about why something will not work. From experience, the average PhoneGap application will use at least ten plugins or different libraries for the final deployment, and every additional plugin or library installed can cause conflicts with another one.

Summary

In this article, we covered more advanced topics that any PhoneGap developer should look into in more detail once he or she has mastered the essentials.

Resources for Article:

Further resources on this subject:
Building the Middle-Tier [article]
Working with the sharing plugin [article]
Getting Ready to Launch Your PhoneGap App in the Real World [article]
Learning Node.js for Mobile Application Development

Packt
23 Sep 2015
5 min read
In Learning Node.js for Mobile Application Development by Christopher Svanefalk and Stefan Buttigieg, the overarching goal of this article is to give you the tools and know-how to install Node.js on multiple OS platforms and to verify the installation. After reading this article, you will know how to install, configure, and use the fundamental software components. You will also have a good understanding of why these tools are appropriate for developing modern applications. (For more resources related to this topic, see here.)

Why Node.js?

Modern apps have several requirements that cannot be provided by the app itself, such as central data storage, communication routing, and user management. In order to provide such services, apps rely on an external software component known as a backend. The backend we will use for this is Node.js, a powerful but strange beast in its category. Node.js is known for being both reliable and high performing. Node.js comes with its own package management system, NPM (Node Package Manager), through which you can easily install, remove, and manage packages for your project.

What this article covers

This article covers the installation of Node.js on multiple OS platforms and how to verify the installation.

The installation

Node.js is delivered as a set of JavaScript libraries, executing on a C/C++ runtime built around the Google V8 JavaScript Engine. The two come bundled together for most major operating systems, and we will look at the specifics of installing it. The Google V8 JavaScript Engine is the same JavaScript engine that is used in the Chrome browser, built for speed and efficiency.

Windows

For Windows, there is a dedicated MSI wizard to install Node.js, which can be downloaded from the project's official website. To do so, go to the main page, navigate to Downloads, and then select Windows Installer. After it is downloaded, run the MSI, follow the steps given to select the installation options, and conclude the install. Keep in mind that you will need to restart your system in order to make the changes effective.

Linux

Most major Linux distributions provide convenient installs of Node.js through their own package management systems. However, it is important to keep in mind that for many of them, NPM will not come bundled with the main Node.js package; rather, it is provided as a separate package. We will show how to install both in the following section.

Ubuntu/Debian

Open a terminal and issue sudo apt-get update to make sure that you have the latest package listings. After this, issue apt-get install nodejs npm in order to install both Node.js and NPM in one swoop.

Fedora/RHEL/CentOS

On Fedora 18 or later, open a terminal and issue sudo yum install nodejs npm. The system will perform the full setup for you. If you are running RHEL or CentOS, you will need to enable the optional EPEL repository. This can be done in conjunction with the install process (so that you do not need to do it again when upgrading) by issuing the command sudo yum install nodejs npm --enablerepo=epel.
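Before opening the Node.js shell, you can also do a quick check from the terminal to confirm that both binaries are on your path; the exact version numbers printed will vary by release:

```
node -v
npm -v
```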
Verifying your installation

Now that we have finished the install, let's do a sanity check and make sure that everything works as expected. To do so, we can use the Node.js shell, which is an interactive runtime environment for executing JavaScript code. To open it, first open a terminal, and then issue the following on it:

```
node
```

This will start the interpreter, which will appear as a shell, with the input line starting with the > sign. Once you are in it, type the following:

```
console.log("Hello world!");
```

Then, press Enter. The Hello world! phrase will appear on the next line. Congratulations, your system is now set up to run Node.js!

Mac OS X

For Mac OS X, you can find a ready-to-install PKG file by going to www.nodejs.org, navigating to Downloads, and selecting the Mac OS X Installer option. Otherwise, you can click on Install, and your package file will automatically be downloaded. Once you have downloaded the file, run it and follow the instructions on the screen. It is recommended that you keep all the offered default settings, unless there are compelling reasons for you to change something with regard to your specific machine.

Verifying your installation for Mac OS X

After the install finishes, open a terminal and start the Node.js shell by issuing the following command:

```
node
```

This will start the interactive Node shell, where you can execute JavaScript code. To make sure that everything works, try issuing the following command to the interpreter:

```
console.log("Hello world!");
```

After pressing Enter, the Hello world! phrase will appear on your screen. Congratulations, Node.js is all set up and good to go!

Who this article is written for

This article is intended for web developers of all levels of expertise who want to dive deep into cross-platform mobile application development without going through the pains of understanding the languages and native frameworks that form an integral part of developing for different mobile platforms. It will provide readers with the basics needed to develop mobile applications with near-native functionality and help them understand the process of developing a successful cross-platform mobile application.

Summary

In this article, we learned the different techniques that can be used to install Node.js across different platforms. Read Learning Node.js for Mobile Application Development to dive into cross-platform mobile application development. The following are some other related titles:

Node.js Design Patterns
Web Development with MongoDB and Node.js
Deploying Node.js
Node Security

Resources for Article:

Further resources on this subject:
Welcome to JavaScript in the full stack [article]
Introduction and Composition [article]
Deployment and Maintenance [article]
Apps for Different Platforms

Packt
01 Oct 2015
9 min read
In this article by Hoc Phan, the author of the book Ionic Cookbook, we will cover tasks related to building and publishing apps, such as:

- Building and publishing an app for iOS
- Building and publishing an app for Android
- Using PhoneGap Build for cross-platform

(For more resources related to this topic, see here.)

Introduction

In the past, it used to be very cumbersome to build and successfully publish an app. However, there is plenty of documentation, along with unofficial instructions on the Internet today, that can pretty much address any problem you may run into. In addition, Ionic comes with its own CLI to assist in this process. This article will guide you through the app building and publishing steps at a high level. You will learn how to:

- Build iOS and Android apps via the Ionic CLI
- Publish an iOS app using Xcode via iTunes Connect
- Build a Windows Phone app using PhoneGap Build

The purpose of this article is to provide ideas on what to look for, plus some "gotchas". Apple, Google, and Microsoft are constantly updating their platforms and processes, so the steps may not look exactly the same over time.

Building and publishing an app for iOS

Publishing on the App Store can be a frustrating process if you are not well prepared upfront. In this section, you will walk through the steps to properly configure everything in the Apple Developer Center, iTunes Connect, and the local Xcode project.

Getting ready

You must register for the Apple Developer Program in order to access https://developer.apple.com and https://itunesconnect.apple.com, because those websites require an approved account. In addition, the instructions given next use the latest versions of these components:

- Mac OS X Yosemite 10.10.4
- Xcode 6.4
- Ionic CLI 1.6.4
- Cordova 5.1.1

How to do it

Here are the instructions:

1. Make sure you are in the app folder and build for the iOS platform:

```
$ ionic build ios
```

2. Go to the ios folder under platforms/ to open the .xcodeproj file in Xcode.
3. Go through the General tab to make sure you have the correct information for everything, especially Bundle Identifier and Version. Change and save as needed.
4. Visit the Apple Developer website and click on Certificates, Identifiers & Profiles. For iOS apps, you just have to go through the steps on the website to fill out the necessary information. The important part to get right here is Identifiers | App IDs, because it must match the Bundle Identifier in Xcode.
5. Visit iTunes Connect and click on the My Apps button. Select the plus (+) icon and click on New iOS App. Fill out the form, making sure to select the right Bundle Identifier for your app. There are several additional steps for providing information about the app, such as screenshots, icons, addresses, and so on. If you just want to test the app, you can provide some placeholder information initially and come back to edit it later.
6. That's it for preparing your Developer and iTunes Connect accounts. Now open Xcode and select iOS Device as the archive target; otherwise, the archive feature will not be enabled. You will need to archive your app before you can submit it to the App Store.
7. Navigate to Product | Archive in the top menu.
8. After the archive process completes, click on Submit to App Store to finish the publishing process.
9. At first, the app could take an hour to appear in iTunes Connect. However, subsequent submissions will go faster. You should look for the app in the Prerelease tab in iTunes Connect. iTunes Connect has very nice integration with TestFlight for testing your app; you can switch this feature on and off.
Note that for each publish, you have to change the version number in Xcode so that it won't conflict with the existing version in iTunes Connect. For publishing, select Submit for Beta App Review. You may want to go through other tabs, such as Pricing and In-App Purchases, to configure your own requirements.

How it works

Obviously, this section does not cover every detail of the publishing process. In general, you just need to make sure your app is tested thoroughly and locally on a physical device (either via USB or TestFlight) before submitting it to the App Store. If for some reason the Archive feature doesn't build, you can manually go to your local Xcode folder and delete that specific temporary archived app to clear the cache:

```
~/Library/Developer/Xcode/Archives
```

See also

TestFlight is a separate subject by itself. The benefit of TestFlight is that you don't need your app to be approved by Apple in order to install it on a physical device for testing and development. You can find more information about TestFlight here: https://developer.apple.com/library/prerelease/ios/documentation/LanguagesUtilities/Conceptual/iTunesConnect_Guide/Chapters/BetaTestingTheApp.html

Building and publishing an app for Android

Building and publishing an Android app is a little more straightforward than iOS, because you just interface with the command line to build the .apk file and upload it to Google Play's Developer Console. The Ionic Framework documentation also has a great instruction page for this: http://ionicframework.com/docs/guide/publishing.html.

Getting ready

The requirement is to have your Google Developer account ready and to log in to https://play.google.com/apps/publish. Your local environment should also have the right SDK, as well as the keytool, jarsigner, and zipalign command-line tools for that specific version.

How to do it

Here are the instructions:

1. Go to your app folder and build for Android using this command:

```
$ ionic build --release android
```

2. You will see the android-release-unsigned.apk file in the apk folder under /platforms/android/build/outputs. Go to that folder in the Terminal.
3. If this is the first time you are creating this app, you must create a keystore file. This file is used to identify your app for publishing; if you lose it, you cannot update your app later on. To create a keystore, type the following command, making sure keytool comes from the same version of the SDK:

```
$ keytool -genkey -v -keystore my-release-key.keystore -alias alias_name -keyalg RSA -keysize 2048 -validity 10000
```

4. Once you fill out the information on the command line, make a copy of this file somewhere safe, because you will need it later.
5. The next step is to use that file to sign your app, which creates a new .apk file that Google Play allows users to install:

```
$ jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore my-release-key.keystore HelloWorld-release-unsigned.apk alias_name
```

6. To prepare the final .apk before upload, you must package it using zipalign:

```
$ zipalign -v 4 HelloWorld-release-unsigned.apk HelloWorld.apk
```

7. Log in to the Google Developer Console and click on Add new application. Fill out as much information as possible for your app using the left menu.
8. Now you are ready to upload your .apk file. First, perform a beta test. Once you have completed beta testing, you can follow the Developer Console instructions to push the app to Production.
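As an optional sanity check between signing and uploading, jarsigner can also verify the packaged .apk; a quick sketch, using the same file name as in the steps above:

```
$ jarsigner -verify -verbose -certs HelloWorld.apk
```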
How it works

This section does not cover other Android marketplaces, such as the Amazon Appstore, because each of them has a different process. However, the common idea is that you need to build the unsigned version of the .apk, sign it using an existing or new keystore file, and finally zipalign it to prepare it for upload.

Using PhoneGap Build for cross-platform

Adobe PhoneGap Build is a very useful product that provides build-as-a-service in the cloud. If you have trouble building the app locally on your computer, you can upload the entire Ionic project to PhoneGap Build, and it will build the app for Apple, Android, and Windows Phone automatically.

Getting ready

Go to https://build.phonegap.com and register for a free account. You will be able to build one private app for free. For additional private apps, there is a monthly fee associated with the account.

How to do it

Here are the instructions:

1. Zip your entire /www folder and replace cordova.js with phonegap.js in index.html, as described at http://docs.build.phonegap.com/en_US/introduction_getting_started.md.html#Getting%20Started%20with%20Build.
2. You may have to edit config.xml to ensure all plugins are included. Detailed changes are in the PhoneGap documentation: http://docs.build.phonegap.com/en_US/configuring_plugins.md.html#Plugins.
3. Select Upload a .zip file under the private tab, and upload the .zip file of the www folder.
4. Make sure to upload the appropriate key for each platform. For Windows Phone, upload the publisher ID file.
5. After that, you just build the app and download the completed build file for each platform.

How it works

In a nutshell, PhoneGap Build is a convenient option when you are only familiar with one platform during the development process but want your app to be built quickly for other platforms. Under the hood, PhoneGap Build has its own environment to automate the process for each user. However, the user still has the responsibility of providing the key file for signing the app; PhoneGap Build just helps attach the key to your app.

See also

The common issue people usually face when using PhoneGap Build is a failed build. You may want to refer to the documentation for troubleshooting: http://docs.build.phonegap.com/en_US/support_failed-builds.md.html#Failed%20Builds

Summary

This article provided you with general information about tasks related to building and publishing apps for Android, for iOS, and cross-platform using PhoneGap Build, wherein you came to know how to publish an app in various places such as the App Store and Google Play.

Resources for Article:

Further resources on this subject:
Our App and Tool Stack [article]
Directives and Services of Ionic [article]
AngularJS Project [article]
Creating Quizzes

Packt
24 Oct 2013
9 min read
(For more resources related to this topic, see here.)

Creating a short-answer question

For this task, we will create a card to host an interface for a short-answer question. This type of question allows the user to input their answers via the keyboard. Evaluating this type of answer can be especially challenging, since there could be several correct answers and users are prone to making spelling mistakes.

Engage Thrusters

1. Create a new card and name it SA.
2. Copy the Title label from the Main card and paste it onto the new SA card. This will ensure the title label field has a consistent format and location.
3. Copy the Question label from the TF card and paste it onto the new SA card.
4. Copy the Progress label from the TF card and paste it onto the new SA card.
5. Copy the Submit button from the Sequencing card and paste it onto the new SA card.
6. Drag a text entry field onto the card and make the following modifications:
   - Change the name to answer.
   - Set the size to 362 by 46.
   - Set the location to 237, 185.
   - Change the text size to 14.
7. We are now ready to program our interface. Enter the following code at the card level:

```
on preOpenCard
   global qNbr, qArray
   # Section 1
   put 1 into qNbr
   # Section 2
   put "" into fld "question"
   put "" into fld "answer"
   put "" into fld "progress"
   # Section 3
   put "What farm animal eats shrubs, can be eaten, and are smaller than cows?" into qArray["1"]["question"]
   put "goat" into qArray["1"]["answer"]
   --
   put "What is used in pencils for writing?" into qArray["2"]["question"]
   put "lead" into qArray["2"]["answer"]
   --
   put "What programming language are you learning" into qArray["3"]["question"]
   put "livecode" into qArray["3"]["answer"]
end preOpenCard
```

In section 1 of this code, we reset the question counter (qNbr) variable to 1. Section 2 contains the code to clear the question, answer, and progress fields. Section 3 populates the question/answer array (qArray). As you can see, this is the simplest array we have used; it only contains a question and answer pairing for each row.

8. Our last step for the short-answer question interface is to program the Submit button. Here is the code for that button:

```
on mouseUp
   global qNbr, qArray
   local tResult
   # Section 1
   if the text of fld "answer" contains qArray[qNbr]["answer"] then
      put "correct" into tResult
   else
      put "incorrect" into tResult
   end if
   # Section 2
   switch tResult
      case "correct"
         if qNbr < 3 then
            answer "Very Good." with "Next" titled "Correct"
         else
            answer "Very Good." with "Okay" titled "Correct"
         end if
         break
      case "incorrect"
         if qNbr < 3 then
            answer "The correct answer is: " & qArray[qNbr]["answer"] & "." with "Next" titled "Wrong Answer"
         else
            answer "The correct answer is: " & qArray[qNbr]["answer"] & "." with "Okay" titled "Wrong Answer"
         end if
         break
   end switch
   # Section 3
   if qNbr < 3 then
      add 1 to qNbr
      nextQuestion
   else
      go to card "Main"
   end if
end mouseUp
```

Our Submit button script is divided into three sections. The first section (section 1) checks whether the answer contained in the array (qArray) is part of the answer the user entered. This is a simple string comparison and is not case sensitive. Section 2 of this button's code contains a switch statement based on the local variable tResult. Here, we provide the user with the actual answer if they do not get it right on their own. The final section (section 3) navigates to the next question or to the main card, depending upon which question set the user is on.

Objective Complete - Mini Debriefing

We have successfully coded our short-answer quiz card.
Our approach was to use a simple question and data input design with a Submit button.

Creating a picture question card

Using pictures as part of a quiz, poll, or other interface can be fun for the user. It might also be more appropriate than simply using text. Let's create a card that uses pictures as part of a quiz.

Engage Thrusters

1. Create a new card and name it Pictures.
2. Copy the Title label from the Main card and paste it onto the new Pictures card. This will ensure the title label field has a consistent format and location.
3. Copy the Question label from the TF card and paste it onto the new Pictures card.
4. Copy the Progress label from the TF card and paste it onto the new Pictures card.
5. Drag a Rectangle Button onto the card and make the following customizations:
   - Change the name to picture1.
   - Set the size to 120 x 120.
   - Set the location to 128, 196.
6. Drag a second Rectangle Button onto the card and make the following customizations:
   - Change the name to picture2.
   - Set the size to 120 x 120.
   - Set the location to 336, 196.
7. Upload the following listed files into your mobile application's Image Library. This LiveCode function is available by selecting the Development pull-down menu, then selecting Image Library. Near the bottom of the Image Library dialog is an Import File button. Once your files are uploaded, take note of the ID numbers assigned by LiveCode:
   - q1a1.png
   - q1a2.png
   - q2a1.png
   - q2a2.png
   - q3a1.png
   - q3a2.png
8. With our interface fully constructed, we are now ready to add LiveCode script to the card. Here is the code you will enter at the card level:

```
on preOpenCard
   global qNbr, qArray
   # Section 1
   put 1 into qNbr
   set the icon of btn "picture1" to empty
   set the icon of btn "picture2" to empty
   # Section 2
   put "" into fld "question"
   put "" into fld "progress"
   # Section 3
   put "Which puppy is real?" into qArray["1"]["question"]
   put "2175" into qArray["1"]["pic1"]
   put "2176" into qArray["1"]["pic2"]
   put "q1a1" into qArray["1"]["answer"]
   --
   put "Which puppy looks bigger?" into qArray["2"]["question"]
   put "2177" into qArray["2"]["pic1"]
   put "2178" into qArray["2"]["pic2"]
   put "q2a2" into qArray["2"]["answer"]
   --
   put "Which scene is likely to make her owner more upset?" into qArray["3"]["question"]
   put "2179" into qArray["3"]["pic1"]
   put "2180" into qArray["3"]["pic2"]
   put "q3a1" into qArray["3"]["answer"]
end preOpenCard
```

In section 1 of this code, we set the qNbr to 1. This is our question counter. We also ensure that there is no image visible in the two buttons; we do this by setting the icon of the buttons to empty. In section 2, we empty the contents of the two onscreen fields (Question and Progress). In the third section, we populate the question set array (qArray). Each question has an answer that corresponds with the filename of the images you added to your stack in the previous step. The ID numbers of the six images you uploaded are also added to the array, so you will need to refer to your notes from step 7.

9. Our next step is to program the picture1 and picture2 buttons. Here is the code for the picture1 button:

```
on mouseUp
   global qNbr, qArray
   # Section 1
   if qArray[qNbr]["answer"] contains "a1" then
      if qNbr < 3 then
         answer "Very Good." with "Next" titled "Correct"
      else
         answer "Very Good." with "Okay" titled "Correct"
      end if
   else
      if qNbr < 3 then
         answer "That is not correct." with "Next" titled "Wrong Answer"
      else
         answer "That is not correct." with "Okay" titled "Wrong Answer"
      end if
   end if
   # Section 2
   if qNbr < 3 then
      add 1 to qNbr
      nextQuestion
   else
      go to card "Main"
   end if
end mouseUp
```

In section 1 of our code, we check to see if the answer from the array contains a1, which indicates that the picture on the left is the correct answer. Based on the answer evaluation, one of two text feedbacks is provided to the user. The name of the button on the feedback dialog is either Next or Okay, depending upon which question set the user is currently on. The second section of this code routes the user to either the main card (if they finished all three questions) or to the next question.

Copy the code you entered in the picture1 button and paste it onto the picture2 button. Only one piece of code needs to change. On the first line of the section 1 code, change the string from a1 to a2. That line of code should be as follows:

```
if qArray[qNbr]["answer"] contains "a2" then
```

Objective Complete - Mini Debriefing

In just 9 easy steps, we created a picture-based question type that uses images we uploaded to our stack's image library and a question set array.

Adding navigational scripting

In this task, we will add scripts to the interface buttons on the Main card.

Engage Thrusters

1. Navigate to the Main card.
2. Add the following script to the true-false button:

```
on mouseUp
   set the disabled of me to true
   go to card "TF"
end mouseUp
```

3. Add the following script to the m-choice button:

```
on mouseUp
   set the disabled of me to true
   go to card "MC"
end mouseUp
```

4. Add the following script to the sequence button:

```
on mouseUp
   set the disabled of me to true
   go to card "Sequencing"
end mouseUp
```

5. Add the following script to the short-answer button:

```
on mouseUp
   set the disabled of me to true
   go to card "SA"
end mouseUp
```

6. Add the following script to the pictures button:

```
on mouseUp
   set the disabled of me to true
   go to card "Pictures"
end mouseUp
```

7. The last step in this task is to program the Reset button. Here is the code for that button:

```
on mouseUp
   global theScore, totalQuestions, totalCorrect
   # Section 1
   set the disabled of btn "true-false" to false
   set the disabled of btn "m-choice" to false
   set the disabled of btn "sequence" to false
   set the disabled of btn "short-answer" to false
   set the disabled of btn "pictures" to false
   # Section 2
   set the backgroundColor of grc "progress1" to empty
   set the backgroundColor of grc "progress2" to empty
   set the backgroundColor of grc "progress3" to empty
   set the backgroundColor of grc "progress4" to empty
   set the backgroundColor of grc "progress5" to empty
   # Section3
   put 0 into theScore
   put 0 into totalQuestions
   put 0 into totalCorrect
   put theScore & "%" into fld "Score"
end mouseUp
```

There are three sections to this code. In section 1, we enable each of the buttons. In the second section, we clear out the background color of each of the five progress circles in the bottom-center of the screen. In the final section, section 3, we reset the score and the score display.

Objective Complete - Mini Debriefing

That is all there was to this task, seven easy steps. There are no visible changes to the mobile application's interface.

Summary

In this article, we saw how to create a couple of quiz question types for mobile: short-answer questions and picture question cards.

Resources for Article:

Further resources on this subject:
Creating and configuring a basic mobile application [Article]
Creating mobile friendly themes [Article]
So, what is XenMobile? [Article]
User Interactivity – Mini Golf

Packt
23 Apr 2014
7 min read
(For more resources related to this topic, see here.)

Using user input and touch events

A touch event occurs each time a user touches the screen, drags, or releases the screen, and also during an interruption. Touch events begin with a user touching the screen. The touches are all handled by the UIResponder class, which causes a UIEvent object to be generated, which in turn passes your code a UITouch object for each finger touching the screen.

For most of the code you work on, you will be concerned with the basic information. Where did the user start to touch the screen? Did they drag their finger across the screen? And where did they let go? You will also want to know whether the touch event got interrupted or canceled by another event. All of these situations can be handled in your view controller code using the following four methods (you can probably figure out what each one does):

```
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
- (void)touchesCancelled:(NSSet *)touches withEvent:(UIEvent *)event
```

These methods are structured pretty simply, and we will show you how to implement them a little later in the article. When a touch event occurs on an object, let's say a button, that object maintains the touch event until it has ended. So, if you touch a button and drag your finger outside the button, the touch-ended event is still sent to the button, as a touch up outside event. This is very important to know when you are setting up touches for your code. If you have a set of touch events associated with your background and another set associated with a button in the foreground, you need to understand that if touchesBegan begins on the button and touchesEnded ends on the background, touchesEnded is still passed to your button.

In this article, we will be creating a simple Mini Golf game. We start with the ball in a standard start position on the screen. The user can touch anywhere on the screen, drag their finger in any direction, and release. The ball will move in the direction the player dragged their finger, and the longer the distance from the original touch position, the more powerful the shot will be. In the Mini Golf game, we put the touch events on the background and not on the ball for one simple reason: when the user's ball stops close to the edge of the screen, we still want them to be able to drag a long distance for more power.
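As a minimal sketch of how the first of these methods might be implemented for the drag-to-shoot mechanic (the dragStart property here is an illustrative helper, not part of the book's code):

```
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    // Record where the drag started; the shot direction and power
    // can be derived from the release point in touchesEnded:.
    self.dragStart = [touch locationInView:self.view];
}
```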
Using gestures in iOS apps

There are special types of touch events known as gestures. Gestures are used all over in iOS apps, but most notably in maps. The simplest gesture is the tap, which is used for selecting items, while gestures such as pinch and rotate are used for scaling and, well, rotating. The most common gestures have been put together in Apple's Gesture Recognizer classes, and they work on all iOS devices. These gestures are tap, pinch, rotate, swipe, pan, and long press. You will see all of the following items listed in your Object Library:

- Tap: This is a simple touch and release on the screen. A tap may also include multiple taps.
- Pinch: This gesture involves a two-finger touch and dragging your fingers together or apart.
- Rotate: This gesture involves a two-finger touch and rotating your fingers in a circle.
- Swipe: This gesture involves holding a touch down and then dragging to a release point.
- Pan: This gesture involves holding a touch and dragging. A pan does not have an end point like the swipe but instead recognizes direction.
- Long press: This gesture involves a simple touch and hold for a prespecified minimum amount of time.

Let's get started on our new game as we show you how to integrate a gesture into your game:

1. We'll start by making our standard Single View Application project. Let's call this one MiniGolf and save it to a new directory.
2. Once your project has been created, select it and uncheck Landscape Left and Landscape Right in your Deployment Info.
3. Create a new Objective-C class file for our game view controller with a subclass of UIViewController, which we will call MiniGolfViewController so that we don't confuse it with our other game view controllers.
4. Next, you will need to drag it into the Resources folder for the Mini Golf game.
5. Select your Main.storyboard, and let's get our initial menu set up. Drag in a new View Controller object from your Object Library. We are only going to be working with two screens for this game.
6. For the new View Controller object, set the custom class to your new MiniGolfViewController.
7. Let's add a background and a button to segue into our new Mini Golf View Controller scene. Drag out an Image View object from the Object Library and set it to full screen. Drag out a Button object and name it Play Game.
8. Press control, click on the Play Game button, and drag over to the Mini Golf View Controller scene to create a modal segue. We wouldn't normally do this with a game, but let's use gestures to create a segue from the main menu to start the game.
9. In the ViewController.m file, add in your unwind code:

```
- (IBAction)unwindToThisView:(UIStoryboardSegue *)unwindSegue {
}
```

10. In your ViewController.h file, add in the corresponding action declaration:

```
- (IBAction)unwindToThisView:(UIStoryboardSegue *)unwindSegue;
```

11. In the Mini Golf View Controller, drag out an Image View object, set it to full screen, and set the image to gameBackground.png. Drag out a Button object and call it Exit Game, and a Label object with its text set to Strokes: 0.
12. Press control and click-and-drag your Exit Game button to the Exit unwind segue button.

If you are planning on running this game on multiple screen sizes, you should pin your images and buttons in place. If you are just running your code in the iPhone 4 simulator, you should be fine. That should just about do it for the usual prep work. Now, let's get into some gestures.

If you run your game right now, you should be able to go into your game area and exit back out. But what if we wanted to set it up so that when you touch the screen and swipe right, a different segue is used to enter the gameplay area? The actions performed on the gestures are as follows:

1. Select your view and drag out a Swipe Gesture Recognizer from your Object Library onto your view.
2. Press control and click-and-drag your Swipe Gesture Recognizer to your Mini Golf View Controller, just like you did with your Play Game button. Choose a modal segue and you are done.

For each of the different types of gestures offered in the Object Library, there are unique settings in the Attributes Inspector. For the swipe gesture, you can choose the swipe direction and the number of touches required to run.
For example, you could set up a two-finger left swipe just by changing a few settings, as shown in the following screenshot:

You could just as easily control-click and drag from your Swipe Gesture Recognizer to your header file to add an IBAction associated with your swipe; this way, you can control what happens when you swipe. You will need to set Type to UISwipeGestureRecognizer. Once you have done this, a new IBAction method will be added to both your header file and your implementation file:

- (IBAction)SwipeDetected:(UISwipeGestureRecognizer *)sender;

and

- (IBAction)SwipeDetected:(UISwipeGestureRecognizer *)sender {
    //Code for when a user swipes
}

This works the same way for each of the gestures in the Object Library.
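To tie the two approaches together, here is a minimal sketch of the drag-to-shoot handling described earlier, written with the plain touch methods. The startPoint property and the applyShotWithDirection:power: helper are hypothetical names introduced for illustration; they are not part of the finished MiniGolf project.

// Hypothetical sketch: record where the drag starts, then compute the
// shot vector and power when the finger is released.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    self.startPoint = [touch locationInView:self.view]; // hypothetical CGPoint property
}

- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
    UITouch *touch = [touches anyObject];
    CGPoint endPoint = [touch locationInView:self.view];

    // The drag vector gives the direction of the shot; its length gives the power.
    CGFloat dx = endPoint.x - self.startPoint.x;
    CGFloat dy = endPoint.y - self.startPoint.y;
    CGFloat power = sqrt(dx * dx + dy * dy);

    [self applyShotWithDirection:CGPointMake(dx, dy) power:power]; // hypothetical helper
}

Because the touch events live on the background view, this works no matter where the ball has come to rest, which is exactly why the article attaches them there rather than to the ball itself.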

Flappy Swift

Packt
16 Jun 2015
15 min read
Let's start using the first framework by implementing a nice clone of Flappy Bird with the help of this article by Giordano Scalzo, the author of Swift by Example.

(For more resources related to this topic, see here.)

The app is…

Only someone who has been living under a rock for the past two years may not have heard of Flappy Bird, but to be sure that everybody understands the game, let's go through a brief introduction. Flappy Bird is a simple, but addictive, game where the player controls a bird that must fly between a series of pipes. Gravity pulls the bird down, but by touching the screen, the player can make the bird flap and move towards the sky, driving the bird through the gap between a pair of pipes. The goal is to pass through as many pipes as possible.

Our implementation will be a high-fidelity tribute to the original game, with the same simplicity and difficulty level. The app will consist of only two screens—a clean menu screen and the game itself—as shown in the following screenshot:

Building the skeleton of the app

Let's start implementing the skeleton of our game using the SpriteKit game template.

Creating the project

For implementing a SpriteKit game, Xcode provides a convenient template, which prepares a project with all the useful settings:

Go to New | Project and select the Game template, as shown in this screenshot:
On the following screen, after filling in all the fields, be sure to select SpriteKit under Game Technology, like this:

By running the app and touching the screen, you will be delighted by the cute, rotating airplanes!

Implementing the menu

First of all, let's add CocoaPods; write the following code in the Podfile:

use_frameworks!

target 'FlappySwift' do
    pod 'Cartography', '~> 0.5'
    pod 'HTPressableButton', '~> 1.3'
end

Then install CocoaPods by running the pod install command.

As usual, we are going to implement the UI without using Interface Builder and the storyboards. Go to AppDelegate and add these lines to create the main ViewController:

func application(application: UIApplication, didFinishLaunchingWithOptions launchOptions: [NSObject: AnyObject]?) -> Bool {
    let viewController = MenuViewController()

    let mainWindow = UIWindow(frame: UIScreen.mainScreen().bounds)
    mainWindow.backgroundColor = UIColor.whiteColor()
    mainWindow.rootViewController = viewController
    mainWindow.makeKeyAndVisible()
    window = mainWindow

    return true
}

The MenuViewController, as the name suggests, implements a nice menu to choose between the game and the Game Center:

import UIKit
import HTPressableButton
import Cartography

class MenuViewController: UIViewController {
    private let playButton = HTPressableButton(frame: CGRectMake(0, 0, 260, 50), buttonStyle: .Rect)
    private let gameCenterButton = HTPressableButton(frame: CGRectMake(0, 0, 260, 50), buttonStyle: .Rect)

    override func viewDidLoad() {
        super.viewDidLoad()
        setup()
        layoutView()
        style()
        render()
    }
}

As you can see, we are using the usual structure. Just for the sake of making the UI prettier, we are using HTPressableButtons instead of the default buttons.
Despite the fact that we are using Auto Layout, the implementation of this custom button requires that we instantiate the button by passing a frame to it:

// MARK: Setup
private extension MenuViewController {
    func setup() {
        playButton.addTarget(self, action: "onPlayPressed:", forControlEvents: .TouchUpInside)
        view.addSubview(playButton)
        gameCenterButton.addTarget(self, action: "onGameCenterPressed:", forControlEvents: .TouchUpInside)
        view.addSubview(gameCenterButton)
    }

    @objc func onPlayPressed(sender: UIButton) {
        let vc = GameViewController()
        vc.modalTransitionStyle = .CrossDissolve
        presentViewController(vc, animated: true, completion: nil)
    }

    @objc func onGameCenterPressed(sender: UIButton) {
        println("onGameCenterPressed")
    }
}

The only thing to note is that, because we are setting the function to be called when the button is pressed using the addTarget() function, we must prefix the target methods with @objc. Otherwise, it will be impossible for the Objective-C runtime to find the correct method when the button is pressed. This is because they are implemented in a private extension; of course, you can make the extension internal or public, and then you won't need to prepend @objc to the functions:

// MARK: Layout
extension MenuViewController {
    func layoutView() {
        layout(playButton) { view in
            view.bottom == view.superview!.centerY - 60
            view.centerX == view.superview!.centerX
            view.height == 80
            view.width == view.superview!.width - 40
        }
        layout(gameCenterButton) { view in
            view.bottom == view.superview!.centerY + 60
            view.centerX == view.superview!.centerX
            view.height == 80
            view.width == view.superview!.width - 40
        }
    }
}

The layout functions simply put the two buttons in the correct places on the screen:

// MARK: Style
private extension MenuViewController {
    func style() {
        playButton.buttonColor = UIColor.ht_grapeFruitColor()
        playButton.shadowColor = UIColor.ht_grapeFruitDarkColor()
        gameCenterButton.buttonColor = UIColor.ht_aquaColor()
        gameCenterButton.shadowColor = UIColor.ht_aquaDarkColor()
    }
}

// MARK: Render
private extension MenuViewController {
    func render() {
        playButton.setTitle("Play", forState: .Normal)
        gameCenterButton.setTitle("Game Center", forState: .Normal)
    }
}

Finally, we set the colors and text for the titles of the buttons. The following screenshot shows the complete menu:

You will notice that, on pressing Play, the app crashes. This is because the template is using the view defined in the storyboard, while we are instantiating the controllers directly. Let's change the code in GameViewController:

class GameViewController: UIViewController {
    private let skView = SKView()

    override func viewDidLoad() {
        super.viewDidLoad()
        skView.frame = view.bounds
        view.addSubview(skView)
        if let scene = GameScene.unarchiveFromFile("GameScene") as? GameScene {
            scene.size = skView.frame.size
            skView.showsFPS = true
            skView.showsNodeCount = true
            skView.ignoresSiblingOrder = true
            scene.scaleMode = .AspectFill
            skView.presentScene(scene)
        }
    }
}

We are basically creating the SKView programmatically, and setting its size just as we did for the main view. If the app is run now, everything will work fine.
You can find the code for this version at https://github.com/gscalzo/FlappySwift/tree/the_menu_is_ready.

A stage for a bird

Let's kick-start the game by implementing the background, which is not as straightforward as it might sound.

SpriteKit in a nutshell

SpriteKit is a powerful, but easy-to-use, game framework introduced in iOS 7. It basically provides the infrastructure to move images onto the screen and interact with them. It also provides a physics engine (based on Box2D), a particle engine, and basic sound playback support, making it particularly suitable for casual games.

The content of the game is drawn inside an SKView, which is a particular kind of UIView, so it can be placed inside a normal hierarchy of UIViews. The content of the game is organized into scenes, represented by subclasses of SKScene. Different parts of the game, such as the menu, levels, and so on, must be implemented in different SKScenes. You can consider an SKScene in SpriteKit as the equivalent of a UIViewController.

Inside an SKScene, the elements of the game are grouped in a tree of SKNodes, which tells the SKScene how to render the components. An SKNode can be either a drawable node, such as SKSpriteNode or SKShapeNode, or something to be applied to the subtree of its descendants, such as SKEffectNode or SKCropNode. Note that SKScene is an SKNode itself.

Nodes are animated using SKAction. An SKAction is a change that must be applied to a node, such as a move to a particular position, a change of scale, or a change in the way the node appears. The actions can be grouped together to create actions that run in parallel, or that wait for the end of a previous action. Finally, we can define physics-based relations between objects, defining mass, gravity, and how the nodes interact with each other.

That said, the best way to understand and learn SpriteKit is by starting to play with it. So, without further ado, let's move on to the implementation of our tiny game. In this way, you'll get a complete understanding of the most important features of SpriteKit.

Explaining the code

In the previous section, we implemented the menu view, leaving the code similar to what was created by the template. With basic knowledge of SpriteKit, you can now start understanding the code:

class GameViewController: UIViewController {
    private let skView = SKView()

    override func viewDidLoad() {
        super.viewDidLoad()
        skView.frame = view.bounds
        view.addSubview(skView)
        if let scene = GameScene.unarchiveFromFile("GameScene") as? GameScene {
            scene.size = skView.frame.size
            skView.showsFPS = true
            skView.showsNodeCount = true
            skView.ignoresSiblingOrder = true
            scene.scaleMode = .AspectFill
            skView.presentScene(scene)
        }
    }
}

This is the UIViewController that starts the game; it creates an SKView that fills the screen. Then it instantiates the scene from GameScene.sks, which can be considered the equivalent of a storyboard. Next, it enables some debug information before presenting the scene. It's now clear that we must implement the game inside the GameScene class.

Simulating a three-dimensional world using parallax

To simulate the depth of the in-game world, we are going to use the technique of parallax scrolling, a really popular method wherein the farther images on the game screen move more slowly than the closer images.
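To make the action mechanics concrete before we wire up the full effect, here is a minimal, self-contained sketch of a single node being scrolled by a repeating SKAction sequence. It is not from the book's project: scrollNode is an arbitrary placeholder name, and the numbers are purely illustrative.

// Minimal illustration: scroll one sprite left by its own width,
// snap it back to its origin instantly, and repeat forever.
let scrollNode = SKSpriteNode(imageNamed: "ground")
let scrollLeft = SKAction.moveToX(-scrollNode.size.width, duration: 5.0)
let snapBack = SKAction.moveToX(0, duration: 0)
scrollNode.runAction(SKAction.repeatActionForever(SKAction.sequence([scrollLeft, snapBack])))

The ParallaxNode class below generalizes exactly this pattern—two copies of the texture laid side by side, with a different duration per layer to create the illusion of depth.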
In our case, we have three different levels, and we'll use three different speeds.

Before implementing the scrolling background, we must import the images into our project, remembering to set each image as 2x in the assets. The assets can be downloaded from https://github.com/gscalzo/FlappySwift/blob/master/assets.zip?raw=true.

The GameScene class basically sets up the background levels:

import SpriteKit

class GameScene: SKScene {
    private var screenNode: SKSpriteNode!
    private var actors: [Startable]!

    override func didMoveToView(view: SKView) {
        screenNode = SKSpriteNode(color: UIColor.clearColor(), size: self.size)
        addChild(screenNode)
        let sky = Background(textureNamed: "sky", duration: 60.0).addTo(screenNode)
        let city = Background(textureNamed: "city", duration: 20.0).addTo(screenNode)
        let ground = Background(textureNamed: "ground", duration: 5.0).addTo(screenNode)
        actors = [sky, city, ground]

        for actor in actors {
            actor.start()
        }
    }
}

The only implemented function is didMoveToView(), which can be considered the equivalent of viewDidAppear for a UIViewController. We define an array of Startable objects, where Startable is a protocol for making the life cycle of the scene elements uniform:

import SpriteKit

protocol Startable {
    func start()
    func stop()
}

This will be handy for giving us an easy way to stop the game later, when either we reach the final goal or our character dies.

The Background class holds the behavior for a scrollable level:

import SpriteKit

class Background {
    private let parallaxNode: ParallaxNode
    private let duration: Double

    init(textureNamed textureName: String, duration: Double) {
        parallaxNode = ParallaxNode(textureNamed: textureName)
        self.duration = duration
    }

    func addTo(parentNode: SKSpriteNode) -> Self {
        parallaxNode.addTo(parentNode)
        return self
    }
}

As you can see, the class saves the requested duration of a cycle, and then it forwards the calls to a class called ParallaxNode:

// Startable
extension Background : Startable {
    func start() {
        parallaxNode.start(duration: duration)
    }

    func stop() {
        parallaxNode.stop()
    }
}

The Startable protocol is implemented by forwarding the methods to ParallaxNode.

How to implement the scrolling

The idea behind the scrolling is really straightforward: we build a node where we put two copies of the same image in a tiled format. We then place the node such that the left half is fully visible. Then we move the entire node to the left until the right half is fully presented. Finally, we reset the position to the original one and restart the cycle. The following figure explains this algorithm:

import SpriteKit

class ParallaxNode {
    private let node: SKSpriteNode!
    init(textureNamed: String) {
        let leftHalf = createHalfNodeTexture(textureNamed, offsetX: 0)
        let rightHalf = createHalfNodeTexture(textureNamed, offsetX: leftHalf.size.width)

        let size = CGSize(width: leftHalf.size.width + rightHalf.size.width,
            height: leftHalf.size.height)

        node = SKSpriteNode(color: UIColor.whiteColor(), size: size)
        node.anchorPoint = CGPointZero
        node.position = CGPointZero
        node.addChild(leftHalf)
        node.addChild(rightHalf)
    }

    func zPosition(zPosition: CGFloat) -> ParallaxNode {
        node.zPosition = zPosition
        return self
    }

    func addTo(parentNode: SKSpriteNode) -> ParallaxNode {
        parentNode.addChild(node)
        return self
    }
}

The init() method simply creates the two halves, puts them side by side, and sets the position of the node:

// Mark: Private
private func createHalfNodeTexture(textureNamed: String, offsetX: CGFloat) -> SKSpriteNode {
    let node = SKSpriteNode(imageNamed: textureNamed,
                            normalMapped: true)
    node.anchorPoint = CGPointZero
    node.position = CGPoint(x: offsetX, y: 0)
    return node
}

The half node is just a node with the correct offset for the x-coordinate:

// Mark: Startable
extension ParallaxNode {
    func start(#duration: NSTimeInterval) {
        node.runAction(SKAction.repeatActionForever(SKAction.sequence(
            [
                SKAction.moveToX(-node.size.width/2.0, duration: duration),
                SKAction.moveToX(0, duration: 0)
            ]
        )))
    }

    func stop() {
        node.removeAllActions()
    }
}

Finally, the Startable protocol is implemented using two actions in a sequence. First, we move the node left by half its size—which means one image width—and then we move it back to the original position to start the cycle again. This is what the final result looks like:

You can find the code for this version at https://github.com/gscalzo/FlappySwift/tree/stage_with_parallax_levels.

Summary

This article has given you an idea of the direction you need to take to build a clone of the Flappy Bird app. For the complete exercise and a lot more, please refer to Swift by Example by Giordano Scalzo.

Resources for Article:

Further resources on this subject:

Playing with Swift [article]
Configuring Your Operating System [article]
Updating data in the background [article]


Detecting Touchscreen Gestures

Packt
06 Aug 2015
18 min read
In this article by Kyle Mew, author of the book Android 5 Programming by Example, we will learn how to:

Add a GestureDetector to a view
Add an OnTouchListener and an OnGestureListener
Detect and refine fling gestures
Use the DDMS Logcat to observe the MotionEvent class
Edit the Logcat filter configuration
Simplify code with a SimpleOnGestureListener
Add a GestureDetector to an Activity
Edit the Manifest to control launch behavior
Hide UI elements
Create a splash screen
Lock screen orientation

(For more resources related to this topic, see here.)

Adding a GestureDetector to a view

Together, view.GestureDetector and view.View.OnTouchListener are all that are required to provide our ImageView with gesture functionality. The listener contains an onTouch() callback that relays each MotionEvent to the detector. We are going to program the large ImageView so that it can display a small gallery of related pictures that can be accessed by swiping left or right on the image. There are two steps to this task because, before we implement our gesture detector, we need to provide the data for it to work on.

Adding the gallery data

As this app is for demonstration and learning purposes, and so that we can progress as quickly as possible, we will only provide extra images for one or two of the ancient sites in the project. Here is how it's done:

Open the Ancient Britain project.
Open the MainData.java file.
Add the following arrays:

static Integer[] hengeArray = {R.drawable.henge_large, R.drawable.henge_2, R.drawable.henge_3, R.drawable.henge_4};
static Integer[] horseArray = {};
static Integer[] wallArray = {R.drawable.wall_large, R.drawable.wall_2};
static Integer[] skaraArray = {};
static Integer[] towerArray = {};
static Integer[][] galleryArray = {hengeArray, horseArray, wallArray, skaraArray, towerArray};

Either download the project files from the Packt website or find four of your own images (around 640 x 480 px). Name them henge_2, henge_3, henge_4, and wall_2, and place them in your res/drawable directory.

This is all very straightforward, and the accompanying code allows you to have individual arrays of any length. This is all we need for our gallery data. Now, we need to code our GestureDetector and OnTouchListener.

Adding the GestureDetector

Along with the OnTouchListener that we will define for our ImageView, the GestureDetector has its own listeners. Here we will use GestureDetector.OnGestureListener to detect a fling gesture and collect the MotionEvents that describe it. Follow these steps to program your ImageView to respond to fling gestures:

Open the DetailActivity.java file.
Declare the following class fields:

private static final int MIN_DISTANCE = 150;
private static final int OFF_PATH = 100;
private static final int VELOCITY_THRESHOLD = 75;
private GestureDetector detector;
View.OnTouchListener listener;
private int ImageIndex;

In the onCreate() method, assign both the detector and listener like this:

detector = new GestureDetector(this, new GalleryGestureDetector());
listener = new View.OnTouchListener() {
    @Override
    public boolean onTouch(View v, MotionEvent event) {
        return detector.onTouchEvent(event);
    }
};

Beneath this, add the following line:

ImageIndex = 0;

Beneath the line detailImage = (ImageView) findViewById(R.id.detail_image);, add the following line:

detailImage.setOnTouchListener(listener);

Create the following inner class:

class GalleryGestureDetector implements GestureDetector.OnGestureListener {
}

Before dealing with the errors this generates, add the following field to the class:

private int item;
{
    item = MainActivity.currentItem;
}

Click anywhere on the line registering the error and press Alt + Enter. Then select Implement Methods, making sure that you have the Copy JavaDoc and Insert @Override boxes checked.
Complete the onDown() method like this:

@Override
public boolean onDown(MotionEvent e) {
    return true;
}

Fill in the onShowPress() method:

@Override
public void onShowPress(MotionEvent e) {
    detailImage.setElevation(4);
}

Then fill in the onFling() method:

@Override
public boolean onFling(MotionEvent event1, MotionEvent event2, float velocityX, float velocityY) {
    if (Math.abs(event1.getY() - event2.getY()) > OFF_PATH)
        return false;
    if (MainData.galleryArray[item].length != 0) {
        // Swipe left
        if (event1.getX() - event2.getX() > MIN_DISTANCE
                && Math.abs(velocityX) > VELOCITY_THRESHOLD) {
            ImageIndex++;
            if (ImageIndex == MainData.galleryArray[item].length)
                ImageIndex = 0;
            detailImage.setImageResource(MainData.galleryArray[item][ImageIndex]);
        } else {
            // Swipe right
            if (event2.getX() - event1.getX() > MIN_DISTANCE
                    && Math.abs(velocityX) > VELOCITY_THRESHOLD) {
                ImageIndex--;
                if (ImageIndex < 0)
                    ImageIndex = MainData.galleryArray[item].length - 1;
                detailImage.setImageResource(MainData.galleryArray[item][ImageIndex]);
            }
        }
    }
    detailImage.setElevation(0);
    return true;
}

Test the project on an emulator or handset.

The process of gesture detection in the preceding code begins when the OnTouchListener's onTouch() method is called. It then passes the MotionEvent to our gesture detector class, GalleryGestureDetector, which monitors motion events, sometimes stringing them together and timing them, until one of the recognized gestures is detected. At this point, we can enter our own code to control how our app responds, as we did here with the onDown(), onShowPress(), and onFling() callbacks. It is worth taking a quick look at these methods in turn.

It may seem, at first glance, that the onDown() method is redundant; after all, it's the fling gesture that we are trying to catch. In fact, overriding the onDown() method and returning true from it is essential in all gesture detections, as all gestures begin with an onDown() event.

The purpose of the onShowPress() method may also appear unclear, as it seems to do little more than onDown(). As the JavaDoc states, this method is handy for adding some form of feedback to the user, acknowledging that their touch has been received. The Material Design guidelines strongly recommend such feedback, and here we have raised the view's elevation slightly.
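If you want slightly richer feedback than a static elevation change, the same callback can drive a quick property animation. The following is a hedged sketch, not part of the book's project; the scale and duration values are arbitrary:

// Hypothetical variation on onShowPress(): raise the elevation and add a
// brief shrink animation so the user sees the press being acknowledged.
@Override
public void onShowPress(MotionEvent e) {
    detailImage.setElevation(4);
    detailImage.animate()
            .scaleX(0.98f)
            .scaleY(0.98f)
            .setDuration(100)
            .start();
}

If you try this, remember to animate the scale back to 1.0f (and lower the elevation) when the gesture completes, just as onFling() resets the elevation in the code above.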
Without including our own code, the onFling() method will recognize almost any movement across the bounding view that ends in the user's finger being raised, regardless of direction or speed. We do not want very small or very slow motions to result in action; furthermore, we want to be able to differentiate between vertical and horizontal movement, as well as between left and right swipes. The MIN_DISTANCE and OFF_PATH constants are in pixels, and VELOCITY_THRESHOLD is in pixels per second. These values will need tweaking according to the target device and personal preference.

The first MotionEvent argument in onFling() refers to the preceding onDown() event and, like any MotionEvent, its coordinates are available through its getX() and getY() methods. The MotionEvent class contains dozens of useful methods for querying various event properties—for example, getDownTime(), which returns the time, in milliseconds of system uptime, at which the current gesture's initial down event occurred.

In this example, we used GestureDetector.OnGestureListener to capture our gesture. However, the GestureDetector has three such nested listener classes, the other two being SimpleOnGestureListener and OnDoubleTapListener. SimpleOnGestureListener provides a more convenient way to detect gestures, as we only need to implement those methods that relate to the gestures we are interested in capturing. We will shortly edit our Activity so that it implements the SimpleOnGestureListener instead, allowing us to tidy our code and remove the four callbacks that we do not need.

The reason for taking this detour, rather than applying the simple listener to begin with, was to see all of the gestures available to us through a gesture listener and to demonstrate how useful JavaDoc comments can be, particularly if we are new to the framework. For example, take a look at the following screenshot:

Another very handy tool is the Dalvik Debug Monitor Server (DDMS), which allows us to see what is going on inside our apps while they are running. The workings of our gesture listener are a good place to use it, as most of its methods operate invisibly.

Viewing gesture activity with DDMS

To view the workings of our OnGestureListener with DDMS, we need to first create a tag to identify our messages, and then a filter to view them. The following steps demonstrate how to do this:

Open the DetailActivity.java file.
Declare the following constant:

private static final String DEBUG_TAG = "tag";

Add the following line inside the onDown() method:

Log.d(DEBUG_TAG, "onDown");

Add the line Log.d(DEBUG_TAG, "onShowPress"); to the onShowPress() method, and do the same for each of our OnGestureListener methods.
Add the following lines to the appropriate clauses in onFling():

Log.d(DEBUG_TAG, "left");
Log.d(DEBUG_TAG, "right");

Open the Android DDMS pane from the Android tab at the bottom of the window or by pressing Alt + 6. If the logcat is not visible, it can be opened with the icon to the right of the top-right drop-down menu.
Click on this drop-down menu and select Edit Filter Configuration.
Complete the dialog as shown in the following screenshot:

You can now run the project on a handset or emulator and view, in the Logcat, which gestures are being triggered and how.
Your output should resemble the one here:

02-17 14:39:00.990 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onDown
02-17 14:39:01.039 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onSingleTapUp
02-17 14:39:03.503 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onDown
02-17 14:39:03.601 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onShowPress
02-17 14:39:04.101 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onLongPress
02-17 14:39:10.484 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onDown
02-17 14:39:10.541 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onScroll
02-17 14:39:11.091 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onScroll
02-17 14:39:11.232 1430-1430/com.example.kyle.ancientbritain D/tag﹕ onFling
02-17 14:39:11.680 1430-1430/com.example.kyle.ancientbritain D/tag﹕ right

DDMS is an invaluable tool when it comes to debugging our apps and seeing what is going on beneath the hood. Once a log tag has been defined in the code, we can create a filter for it so that we see only the messages we are interested in. The Log class contains several methods to report information based on its level of importance. We used Log.d, which stands for debug. All these methods take the same two parameters: Log.[method](String tag, String message). The full list of these methods is as follows:

Log.v: Verbose
Log.d: Debug
Log.i: Information
Log.w: Warning
Log.e: Error
Log.wtf: Unexpected error

It is worth noting that verbose messages should never be compiled into a distributed application, while debug messages, though compiled in, are stripped out at runtime; thus, it is essential to remove verbose logging before your final build.

Having seen a little more of the inner workings of our gesture detector and listener, we can now strip our code of unused methods by implementing GestureDetector.SimpleOnGestureListener.

Implementing a SimpleOnGestureListener

It is very simple to convert our gesture detector from one class of listener to another. All we need to do is change the class declaration and delete the unwanted methods. To do this, perform the following steps:

Open the DetailActivity file.
Change the class declaration for our gesture detector class to the following:

class GalleryGestureDetector extends GestureDetector.SimpleOnGestureListener {

Delete the onShowPress(), onSingleTapUp(), onScroll(), and onLongPress() methods.

This is all you need to do to switch to the SimpleOnGestureListener. We have now successfully constructed and edited a gesture detector to allow the user to browse a series of images.

You will have noticed that there is no onDoubleTap() method in the gesture listener. Double-taps can, in fact, be handled with the third GestureDetector listener, OnDoubleTapListener, which operates in a very similar way to the other two. However, Google, in its UI guidelines, recommends that a long press should be used instead whenever possible.

Before moving on to multitouch events, we will take a look at how to attach a GestureDetector listener to an entire Activity by adding a splash screen to our project. In the process, we will also see how to create a full-screen Activity and how to edit the Manifest file so that our app launches with the splash screen.

Adding a GestureDetector to an Activity

The method we have employed so far allows us to attach a GestureDetector listener to any view or views and this, of course, applies to ViewGroups such as Layouts.
There are times when we may want gestures to be applied to the whole screen. For this purpose, we will create a splash screen that can be dismissed with a long press. There are two things we need to do before implementing the gesture detector: creating a layout and editing the Manifest file so that the app launches with our splash screen.

Designing the splash screen layout

The main difference between processing gestures for a whole Activity and for an individual widget is that we do not need an OnTouchListener, as we can override the Activity's own onTouchEvent(). Here is how it is done:

Create a new Blank Activity from the Project Explorer context menu called SplashActivity.java.
The Activity wizard should have created an associated XML layout called activity_splash.xml. Open this and view it using the Text tab.
Remove all the padding properties from the root layout so that it looks similar to this:

<RelativeLayout
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context="com.example.kyle.ancientbritain.SplashActivity">

Here we will need an image to act as the background for our splash screen. If you have not downloaded the project files from the Packt website, find an image roughly of the size and aspect of your target device's screen, upload it to the project drawable folder, and call it splash. The file I used is 480 x 800 px.
Remove the TextView that the wizard placed inside the layout and replace it with this ImageView:

<ImageView
    android:id="@+id/splash_image"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:src="@drawable/splash"/>

Create a TextView beneath this, such as the following:

<TextView
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:layout_alignParentBottom="true"
    android:layout_centerHorizontal="true"
    android:layout_marginBottom="40dp"
    android:gravity="center_horizontal"
    android:textAppearance="?android:attr/textAppearanceLarge"
    android:textColor="#fffcfcbd"/>

Add the following text property:

android:text="Welcome to <b>Ancient Britain</b>\npress and hold\nanywhere on the screen\nto start"

To save time adding string resources to the strings.xml file, enter a hardcoded string such as the preceding one and heed the warning from the editor to have the string extracted for you like this:

There is nothing in this layout that we have not encountered before. We removed all the padding so that our splash image will fill the layout; however, you will see from the preview that this does not appear to be the case. We will deal with this next in our Java code, but we need to edit our Manifest first so that the app gets launched with our SplashActivity.

Editing the Manifest

It is very simple to configure the AndroidManifest file so that an app will be launched with whichever Activity we choose; this is done with an intent. While we are editing the Manifest, we will also configure the display to fill the screen. Simply follow these steps:

Open the res/values-v21/styles.xml file and add the following style:

<style name="SplashTheme" parent="android:Theme.Material.NoActionBar.Fullscreen">
</style>

Open the AndroidManifest.xml file.
Cut-and-paste the <intent-filter> element from MainActivity to SplashActivity.
Include the following properties so that the entire <activity> node looks similar to this:

<activity
    android:name=".SplashActivity"
    android:theme="@style/SplashTheme"
    android:screenOrientation="portrait"
    android:configChanges="orientation|screenSize"
    android:label="Old UK" >
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>

We have encountered themes and styles before and, here, we took advantage of a built-in theme designed for full-screen activities. In many cases, we might have designed a landscape layout here but, as is often the case with splash screens, we locked the orientation with the android:screenOrientation property.

The android:configChanges line is not actually needed here, but it is included as it is useful to know about. Configuring an attribute such as this prevents the system from automatically reloading the Activity whenever the device is rotated or the screen size changes. Instead of the Activity restarting, the onConfigurationChanged() method is called. This was not needed here, as the screen size and orientation were taken care of in the previous lines of code; this line was only included as a point of interest.

Finally, we changed the value of android:label. You may have noticed that, depending on the screen size of the device you are using, the name of our app is not displayed in full on the home screen or in the apps drawer. In such cases, when you want to use a shortened name for your app, it can be inserted here.

With everything else in place, we can get on with adding our gesture detector. This is not dissimilar to the way we did it before but, this time, we will apply the detector to the whole screen and will be listening for a long press rather than a fling.

Adding the GestureDetector

Along with implementing a gesture detector for the entire Activity here, we will also take the final step in configuring our splash screen so that the image fills the screen but maintains its aspect ratio. Follow these steps to complete the app splash screen:

Open the SplashActivity file.
Declare a GestureDetector as we did in the earlier exercise:

private GestureDetector detector;

In the onCreate() method, assign and configure our splash image and gesture detector like this:

ImageView imageView = (ImageView) findViewById(R.id.splash_image);
imageView.setScaleType(ImageView.ScaleType.CENTER_CROP);
detector = new GestureDetector(this, new SplashListener());

Now, override the Activity's onTouchEvent() like this:

@Override
public boolean onTouchEvent(MotionEvent event) {
    this.detector.onTouchEvent(event);
    return super.onTouchEvent(event);
}

Create the following SimpleOnGestureListener class:

private class SplashListener extends GestureDetector.SimpleOnGestureListener {
    @Override
    public boolean onDown(MotionEvent e) {
        return true;
    }

    @Override
    public void onLongPress(MotionEvent e) {
        startActivity(new Intent(getApplicationContext(), MainActivity.class));
    }
}

Build and run the app on your phone or an emulator.

The way a gesture detector is implemented across an entire Activity should be familiar by this point, as should the capturing of the long press event. The ImageView.setScaleType(ImageView.ScaleType) method is essential here; it is a very useful method in general. The CENTER_CROP constant scales the image to fill the view while maintaining the aspect ratio, cropping the edges when necessary.
There are several similar ScaleTypes, such as CENTER_INSIDE, which scales the image to the maximum size possible without cropping it, and CENTER, which does not scale the image at all. The beauty of CENTER_CROP is that it means we don't have to design a separate image for every possible aspect ratio on the numerous devices our apps will end up running on. Provided that we make allowances for very wide or very narrow screens by not including essential information too close to the edges, we only need to provide a handful of images of varying pixel densities to maintain image quality on large, high-resolution devices.

The scale type of an ImageView can also be set from within XML with, for example, android:scaleType="centerCrop".

You may have wondered why we did not use the built-in Full-Screen Activity from the wizard; we could easily have done so. The template code the wizard creates for a Full-Screen Activity provides far more features than we needed for this exercise. Nevertheless, the template is worth taking a look at, especially if you want a full screen that brings the status bar and other components into view when the user interacts with the Activity.

That brings us to the end of this article. Not only have we seen how to make our apps interact with touch events and gestures, but also how to send debug messages to the IDE and make a Full-Screen Activity.

Summary

We began this article by adding a GestureDetector to our project. We then edited it so that we could filter out meaningful touch events (swipe right and left, in this case). We went on to see how the SimpleOnGestureListener can save us a lot of time when we are only interested in catching a subset of the recognized gestures. We also saw how to use DDMS to pass debug messages during runtime and how, through a combination of XML and Java, the status and action bars can be hidden and the entire screen be filled with a single view or view group.

Resources for Article:

Further resources on this subject:

Speeding up Gradle builds for Android [Article]
Saying Hello to Unity and Android [Article]
Testing with the Android SDK [Article]


Introducing an Android platform

Packt
12 Sep 2013
9 min read
(For more resources related to this topic, see here.)

Introducing an Android app

A mobile software application that runs on Android is an Android app. Apps use .apk as the installer file extension. There are several popular examples of mobile apps, such as Foursquare, Angry Birds, and Fruit Ninja.

Primarily in an Eclipse environment, we use Java, which is then compiled into Dalvik bytecode (not ordinary Java bytecode). Android provides the Dalvik virtual machine (DVM) rather than the Java virtual machine (JVM). The Dalvik VM does not align with the Java SE and Java ME libraries, and is built on an Apache Harmony Java implementation.

What is Dalvik virtual machine?

The Dalvik VM is a register-based architecture, authored by Dan Bornstein. It is optimized for low memory requirements, and the virtual machine was slimmed down to use less space and less power.

Preparing for Android development

Eclipse ADT

In this part of the article, we will see how to install the development environment for Android on Eclipse Juno (4.2). Eclipse is a major IDE for Android development. We need to install an Eclipse extension, the Android Development Toolkit (ADT), for development of Android applications.

Debugging an Android project

It is advisable to use the Log class for this purpose, the reason being that we can filter, print in different colors, and define log types. This is one way of debugging your program, by displaying variable values or parameters. To use Log, import android.util.Log, and use one of the following methods to print messages to LogCat:

v(String, String) (verbose)
d(String, String) (debug)
i(String, String) (information)
w(String, String) (warning)
e(String, String) (error)

LogCat is used to view the internal log of the Android system. It is useful for tracing any activity happening inside the device or emulator through the Android Debug Bridge (ADB).

The Android project structure

The following table gives a brief description of the important folders and files available in an Android project:

Folder: Functions
/src: The Java code is placed in this folder.
/gen: It is generated automatically.
/assets: You can put your fonts, videos, and sounds here. It is more like a filesystem, and can also hold CSS, JavaScript files, and so on.
/libs: It holds external libraries (normally in JAR form).
/res: It contains images, layouts, and global variables.
/drawable-xhdpi: It is used for extra-high-specification devices (for example, Tablet, Galaxy SIII, HTC One X).
/drawable-hdpi: It is used for high-specification phones (for example, SGSI, SGSII).
/drawable-mdpi: It is used for medium-specification phones (for example, Galaxy W and HTC Desire).
/drawable-ldpi: It is used for low-specification phones (for example, Galaxy Y and HTC WildFire).
/layout: It includes all the XML files for the screen(s) layout.
/menu: XML files for screen menus.
/values: It includes global constants.
/values-v11: These are template style definitions for devices with Honeycomb (Android API level 11).
/values-v14: These are template style definitions for devices with ICS (Android API level 14).
AndroidManifest.xml: This is one of the most important files in the project. It is the first file located by the Android OS in order to run the app. It contains the app's properties, activity declarations, and list of permissions.

Dalvik Debug Monitor Server (DDMS)

DDMS is a must-have tool for viewing emulator/device activities.
To access DDMS in Eclipse, navigate to Windows | Open Perspective | Other, and then choose DDMS. By default, it is available in the Android SDK (as the ddms file inside the android-sdk/tools folder). From this perspective, the following aspects are available:

Devices: The list of the devices and AVDs that are connected to ADB
Emulator control: It helps to carry out device functions
LogCat: It views real-time system log messages
Threads: It gives an idea of currently running threads within a VM
Heap: It shows heap usage by an application
Allocation tracker: It provides information on the memory allocation of objects
File explorer: It explores the device filesystem

Creating a new Android project using Eclipse ADT

To create a new Android project in Eclipse, navigate to File | New | Project. A new project window will appear; from there, choose Android | Android Application Project from the list. Then click on the Next button.

Application Name: This is the name of your application; it will appear next to the launcher icon. Choose a project name relevant to your application.
Project Name: This is typically similar to your application name. Avoid using the same name as an existing project in Eclipse; it is not permitted.
Package Name: This is the package name of the application. It will act as an ID in the Google Play app store if we wish to publish the app. Typically, it will be the reverse of your domain name if we have one (since this is unique), followed by the application name, and it must be a valid Java package name. Otherwise, we can use anything for now and refactor it before publishing.

Running the application on an Android device

To run and deploy on a real device, first install the driver for the device. This varies by device model and manufacturer. These are a few links you could refer to:

For Google Android devices, refer to http://developer.android.com/sdk/win-usb.html.
For others, refer to http://www.teamandroid.com/download-android-usb-drivers/.

Make sure the Android phone is connected to the computer through the USB cable. To check whether the phone is properly connected to your PC and in debug mode, please switch to the DDMS perspective.

Adding multiple activities in an Android application

This exercise adds an information screen to the SimpleNumb3r5 app. Information regarding the developer—e-mail, Facebook fan page, and so on—is displayed. Since the screen contains a lot of text information, including several pictures, we make use of an HTML page as our approach here:

Create an activity class to handle the new screen. Open the src folder, right-click on the package name (net.kerul.SimpleNumb3r5), choose New | Other... from the selections, choose to add a new Android activity, and click on the Next button. Then, choose a blank activity and click on Next.
Set the activity name as Info, and the wizard will suggest the screen layout as info_activity. Click on the Finish button.

Adding the RadioGroup or RadioButton controls

The Android SDK provides two types of radio controls to be used in conjunction, where only one control can be chosen at a given time. RadioGroup (Android widget RadioGroup) is used to encapsulate a set of RadioButton controls for this purpose.

Defining the preference screen

Preferences are an important aspect of Android applications. They give users the choice to modify and personalize the app.
Preferences can be set in two ways: the first method is to create a preferences.xml file in the res/xml directory, and the second method is to set the preferences from code. We will use the former, which is also the easier one, by creating the preferences.xml file. Usually, there are five different preference views, as listed in the following table:

Views: Description
CheckBoxPreference: It is a simple checkbox, which returns true/false
ListPreference: It shows a RadioGroup, where only one item can be selected
EditTextPreference: It shows a dialog box with an editable text field, and returns a String
RingTonePreference: It is a RadioGroup that shows ringtones
PreferenceCategory: It is a category with preferences

Fragment

A fragment is an independent component that can be connected to an Activity or, simply put, a subactivity. Typically, it defines a part of the UI, but it can also exist with no user interface, that is, headless. An instance of a fragment must exist within an activity.

Fragments ease the reuse of components for different layouts. Fragments are the way to support UI variations across different types of screens. The most popular use is building single-pane layouts for phones and multi-pane layouts for tablets (large screens).

Adding an external library to an Android project – AdMob

An Android application cannot achieve everything on its own; it will always need the company of external JARs/libraries to achieve different goals and serve various purposes. Almost every free Android application published on the store has advertising embedded in it, which makes use of an external component to achieve this. Incorporating advertisements in an Android application is a vital aspect of today's application development. In this article, we will continue with our DistanceConverter application, and make use of the external library AdMob to incorporate advertisements in our application.

Adding the AdMob SDK to the project

Let's extract the previously downloaded AdMob SDK zip file, and we should get the folder GoogleAdMobAdsSdkAndroid-6.*.*; under that folder there is GoogleAdMobAdsSdk-6.x.x.jar. You should copy this JAR file into the libs folder of the project.

Signing and distributing the APK

The Android package (APK), in simple terms, is similar to a runnable JAR or executable file (on Windows OS) which consists of everything that is needed to run the application. The Android ecosystem uses a virtual machine, that is, the Dalvik virtual machine (DVM), to run the Java applications. Dalvik uses its own bytecode, which is quite different from Java bytecode.

Generating a private key

An Android application must be signed with our own private key. It identifies the person, corporation, or entity associated with the application. This key can be generated using the keytool program from the Java SDK. The following command is used for generating the key:

keytool -genkey -v -keystore <filename>.keystore -alias <key-name> -keyalg RSA -keysize 2048 -validity 10000

We can use a different key for each published application, and specify a different name to identify it. Also, Google expects a validity of 25 years or more. A very important thing to consider is to keep a backup of the key and store it securely, because once it is compromised, it is impossible to update an already published application.

Publishing to Google Play

Publishing on Google Play is very simple and involves registering for Google Play. You just have to visit and register at https://play.google.com/. It costs $25 USD to register, is fairly straightforward, and can take a few days until you get final access.
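Returning to the key-generation step for a moment, a concrete invocation of the command above might look like the following; the keystore and alias names here are purely illustrative placeholders, not values from the book:

keytool -genkey -v -keystore distanceconverter.keystore -alias dcrelease -keyalg RSA -keysize 2048 -validity 10000

The tool will then prompt interactively for a keystore password and the identity details (name, organization, and so on) that get embedded in the certificate.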
Summary

In this article, we learned how to install Eclipse Juno (the IDE), the Android SDK, and the testing platform. Also, we learned about the fragment and its usage, and used it to provide different layouts for landscape mode in our application, DistanceConverter. We also learned about handling different screen types and persisting state during screen mode changes.

Resources for Article:

Further resources on this subject:

Installing Alfresco Software Development Kit (SDK) [Article]
JBoss AS plug-in and the Eclipse Web Tools Platform [Article]
Creating a pop-up menu [Article]


Using Resources

Packt
06 Oct 2015
6 min read
In this article, written by Mathieu Nayrolles, author of the book Xamarin Studio for Android Programming: A C# Cookbook, we will learn how to play a sound and how to play a movie, either in a user-interactive way—at the press of a button—or programmatically.

(For more resources related to this topic, see here.)

Playing sound

There are an infinite number of occasions on which you may want your applications to play sound. In this section, we will learn how to play a song from our application, in a user-interactive way—at the press of a button—or programmatically.

Getting ready

To use this recipe, you need to create a project on your own.

How to do it…

Add the following using directive to your MainActivity.cs file:

using Android.Media;

Create a class variable named _myPlayer of the MediaPlayer class:

MediaPlayer _myPlayer;

Create a subfolder named raw under Resources.
Place the sound you wish to play inside the newly created folder. We will use the Mario theme, free of rights for non-commercial use, downloaded from http://mp3skull.com.
Add the following lines at the end of the OnCreate() method of your MainActivity:

_myPlayer = MediaPlayer.Create (this, Resource.Raw.mario);
Button button = FindViewById<Button> (Resource.Id.myButton);
button.Click += delegate {
    _myPlayer.Start();
};

In the preceding code sample, the first line creates an instance of the MediaPlayer, using this as the Context and Resource.Raw.mario as the file to play with this MediaPlayer. The rest of the code is trivial; we just acquire a reference to the button and create a handler for the button's Click event. In this handler, we call the Start() method of the _myPlayer variable.

Run your application and press the button, as shown in the following screenshot:

You should hear the Mario theme playing right after you press the button, even if you are running the application on the emulator.

How it works...

Playing sound (and video) is handled by the MediaPlayer class of the Android platform. This class involves some serious implementation work and a multitude of states, in the same way as activities. However, as Android application developers, and not Android platform developers, we only require a little background on this.

The Android multimedia framework includes—through the MediaPlayer class—support for playing a very large variety of media, such as MP3s, from the filesystem or from the Internet. Also, you can only play a song on the current sound device, which could be the phone speakers, a headset, or even a Bluetooth-enabled speaker. In other words, even if there are many sound outputs available on the phone, the current default set by the user is the one where your sound will be played. Finally, you cannot play a sound during a call.

There's more...

Playing sound which is not stored locally

Obviously, you may want to play sounds that are not stored locally—in the raw folder—but anywhere else on the phone, such as the SD card. To do so, you have to use the following code sample:

Uri myUri = new Uri ("uriString");
_myPlayer = new MediaPlayer();
_myPlayer.SetAudioStreamType (AudioManager.UseDefaultStreamType);
_myPlayer.SetDataSource(myUri);
_myPlayer.Prepare();

The first line defines a Uri for the target file to play. The three following lines set the stream type, set the Uri as the data source, and prepare the MediaPlayer. Prepare() is a method which prepares the player for playback in a synchronous manner, meaning that this instruction blocks the program until the player has loaded the file and is ready to play.
You could also call PrepareAsync(), which returns immediately and performs the loading in an asynchronous way.

Playing online sound

Using code very similar to that required to play sounds stored on the phone, we can play sound from the Internet. Basically, we just have to replace the Uri parameter with an HTTP address, just like the following:

String url = "http://myWebsite/mario.mp3";
_myPlayer = new MediaPlayer();
_myPlayer.SetAudioStreamType (AudioManager.UseDefaultStreamType);
_myPlayer.SetDataSource(url);
_myPlayer.Prepare();

Also, you must request permission to access the Internet with your application. This is done in the manifest by adding a <uses-permission> tag, as shown in the following code sample:

<uses-permission android:name="android.permission.INTERNET" />
<application android:icon="@drawable/Icon" android:label="Splash">
</application>

See also

See the next recipe for playing video.

Playing a movie

As a final recipe in this article, we will see how to play a movie with your Android application. Playing video, unlike playing audio, involves special views for displaying it to the users.

Getting ready

For the last time, we will reuse the same project; more specifically, we will play a Mario video below the button that plays the Mario theme, seen in the previous recipe.

How to do it...

Add the following code to your Main.axml file under the Layout folder:

<VideoView
    android:id="@+id/myVideoView"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">
</VideoView>

As a result, the content of your Main.axml file should look like the following screenshot:

Add the following code to the MainActivity.cs file in the OnCreate() method:

var videoView = FindViewById<VideoView> (Resource.Id.myVideoView);
var uri = Android.Net.Uri.Parse ("url of your video");
videoView.SetVideoURI (uri);

Finally, invoke the videoView.Start() method.

Note that playing an Internet-based video, even a short one, will take a very long time, as the video needs to be fully loaded when using this technique.

How it works...

Playing video involves a special view for displaying it to users. This special view is named VideoView and is used in the same way as a simple TextView:

<VideoView
    android:id="@+id/myVideoView"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">
</VideoView>

As you can see in the preceding code sample, you can apply the same parameters to a VideoView as to a TextView, such as layout-based options. The VideoView, like the MediaPlayer for audio, has a method to set the video URI, named SetVideoURI, and another one to start the video, named Start().

Summary

In this article, we learned how to play a sound clip as well as a video in the Android application we've developed.

Resources for Article:

Further resources on this subject:

Heads up to MvvmCross [article]
XamChat – a Cross-platform App [article]
Gesture [article]

Sending and Syncing Data

Packt
10 Aug 2015
4 min read
This article, by Steven F. Daniel, author of the book Android Wearable Programming, will provide you with the background and understanding of how you can effectively build applications that communicate between the Android handheld device and the Android wearable.

Android Wear comes with a number of APIs that will help to make communicating between the handheld and the wearable a breeze. We will be learning the differences between using MessageAPI, which is sometimes referred to as a "fire and forget" type of message; DataLayerAPI, which supports syncing of data between a handheld and a wearable; and NodeAPI, which handles events related to each of the local and connected device nodes.

(For more resources related to this topic, see here.)

Creating a wearable send and receive application

In this section, we will take a look at how to create an Android wearable application that will send an image and a message, and display this on our wearable device. In the next sections, we will take a look at the steps required to send data to the Android wearable using the DataAPI, NodeAPI, and MessageAPIs.

Firstly, create a new project in Android Studio by following these simple steps:

Launch Android Studio, and then click on the File | New Project menu option.
Next, enter SendReceiveData for the Application name field.
Then, provide the name for the Company Domain field.
Now, choose Project location and select where you would like to save your application code:
Click on the Next button to proceed to the next step.

Next, we will need to specify the form factors for the phone/tablet and Android Wear devices on which our application will run. On this screen, we will need to choose the minimum SDK version for our phone/tablet and Android Wear:

Click on the Phone and Tablet option and choose API 19: Android 4.4 (KitKat) for Minimum SDK.
Click on the Wear option and choose API 21: Android 5.0 (Lollipop) for Minimum SDK.
Click on the Next button to proceed to the next step.

In our next step, we will need to add Blank Activity to our application project for the mobile section of our app. From the Add an activity to Mobile screen, choose the Add Blank Activity option from the list of activities shown and click on the Next button to proceed to the next step.

Next, we need to customize the properties for Blank Activity so that it can be used by our application. Here we will need to specify the name of our activity, layout information, title, and menu resource file. From the Customize the Activity screen, enter MobileActivity for Activity Name and click on the Next button to proceed to the next step in the wizard.

In the next step, we will need to add Blank Wear Activity to our application project for the Android wearable section of our app. From the Add an activity to Wear screen, choose the Blank Wear Activity option from the list of activities shown and click on the Next button to proceed to the next step.

Next, we need to customize the properties for Blank Wear Activity so that our Android wearable can use it. Here we will need to specify the name of our activity and the layout information. From the Customize the Activity screen, enter WearActivity for Activity Name and click on the Next button to proceed to the next step in the wizard.

Finally, click on the Finish button and the wizard will generate your project; after a few moments, the Android Studio window will appear with your project displayed.
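The excerpt stops at project setup, but as a hedged taste of where the messaging code leads, the following sketch sends a small message from the handheld to every connected wearable node using the GoogleApiClient-based Google Play services APIs of that era. The path string and payload are arbitrary examples, error handling is omitted, and this is not code from the book's finished project:

// Hedged sketch: broadcast a message to all connected wearable nodes.
// Must run off the main thread, since blockingConnect() and await() block.
GoogleApiClient client = new GoogleApiClient.Builder(context)
        .addApi(Wearable.API)
        .build();
client.blockingConnect();

NodeApi.GetConnectedNodesResult nodes =
        Wearable.NodeApi.getConnectedNodes(client).await();
for (Node node : nodes.getNodes()) {
    // "/send-receive-demo" is an arbitrary example path the wearable
    // side would listen for in its MessageListener.
    Wearable.MessageApi.sendMessage(
            client, node.getId(), "/send-receive-demo",
            "Hello wearable".getBytes()).await();
}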
Summary

In this article, we learned about three new APIs, DataAPI, NodeAPI, and MessageAPI, and how we can use them and their associated methods to transmit information between the handheld mobile and the wearable. If, for whatever reason, the connected wearable node gets disconnected from the paired handheld device, the DataApi class is smart enough to automatically try sending again once the connection is reestablished.

Resources for Article:

Further resources on this subject:

Speeding up Gradle builds for Android [article]
Saying Hello to Unity and Android [article]
Testing with the Android SDK [article]

A Command-line Companion Called Artisan

Packt
06 May 2015
17 min read
In this article by Martin Bean, author of the book Laravel 5 Essentials, we will see how Laravel's command-line utility has far more capabilities and can be used to run and automate all sorts of tasks. In the next pages, you will learn how Artisan can help you:

- Inspect and interact with your application
- Enhance the overall performance of your application
- Write your own commands

By the end of this tour of Artisan's capabilities, you will understand how it can become an indispensable companion in your projects. (For more resources related to this topic, see here.)

Keeping up with the latest changes

New features are constantly being added to Laravel. If a few days have passed since you first installed it, try running a composer update command from your terminal. You should see the latest versions of Laravel and its dependencies being downloaded. Since you are already in the terminal, finding out about the latest features is just one command away:

$ php artisan changes

This saves you from going online to find a change log or reading through a long history of commits on GitHub. It can also help you learn about features that you were not aware of. You can also find out which version of Laravel you are running by entering the following command:

$ php artisan --version
Laravel Framework version 5.0.16

All Artisan commands have to be run from your project's root directory. With the help of a short script such as Artisan Anywhere, available at https://github.com/antonioribeiro/artisan-anywhere, it is also possible to run Artisan from any subfolder in your project.

Inspecting and interacting with your application

With the route:list command, you can see at a glance which URLs your application will respond to, what their names are, and if any middleware has been registered to handle requests. This is probably the quickest way to get acquainted with a Laravel application that someone else has built. To display a table with all the routes, all you have to do is enter the following command:

$ php artisan route:list

In some applications, you might see /{v1}/{v2}/{v3}/{v4}/{v5} appended to particular routes. This is because the developer has registered a controller with implicit routing, and Laravel will try to match and pass up to five parameters to the controller.

Fiddling with the internals

When developing your application, you will sometimes need to run short, one-off commands to inspect the contents of your database, insert some data into it, or check the syntax and results of an Eloquent query. One way you could do this is by creating a temporary route with a closure that is going to trigger these actions. However, this is less than practical, since it requires you to switch back and forth between your code editor and your web browser. To make these small changes easier, Artisan provides a command called tinker, which boots up the application and lets you interact with it. Just enter the following command:

$ php artisan tinker

This will start a Read-Eval-Print Loop (REPL) similar to what you get when running the php -a command, which starts an interactive shell.
In this REPL, you can enter PHP commands in the context of the application and immediately see their output:

> $cat = 'Garfield';
> App\Cat::create(['name' => $cat, 'date_of_birth' => new DateTime]);
> echo App\Cat::whereName($cat)->get();
[{"id":"4","name":"Garfield 2","date_of_birth":…}]
> dd(Config::get('database.default'));

Version 5 of Laravel leverages PsySH, a PHP-specific REPL that provides a more robust shell with support for keyboard shortcuts and history.

Turning the engine off

Whether it is because you are upgrading a database or waiting to push a fix for a critical bug to production, you may want to manually put your application on hold to avoid serving a broken page to your visitors. You can do this by entering the following command:

$ php artisan down

This will put your application into maintenance mode. You can determine what to display to users when they visit your application in this mode by editing the template file at resources/views/errors/503.blade.php (since maintenance mode sends an HTTP status code of 503 Service Unavailable to the client). To exit maintenance mode, simply run the following command:

$ php artisan up

Fine-tuning your application

For every incoming request, Laravel has to load many different classes, and this can slow down your application, particularly if you are not using a PHP accelerator such as APC, eAccelerator, or XCache. In order to reduce disk I/O and shave off precious milliseconds from each request, you can run the following command:

$ php artisan optimize

This will trim and merge many common classes into one file located inside storage/framework/compiled.php. The optimize command is something you could, for example, include in a deployment script. By default, Laravel will not compile your classes if app.debug is set to true. You can override this by adding the --force flag to the command, but bear in mind that this will make your error messages less readable.

Caching routes

Apart from caching class maps to improve the response time of your application, you can also cache the routes of your application. This is something else you can include in your deployment process. The command? Simply enter the following:

$ php artisan route:cache

The advantage of caching routes is that your application will get a little faster, as its routes will have been pre-compiled instead of evaluating the URL and any matching routes on each request. However, as the routing process now refers to a cache file, any new routes added will not be parsed. You will need to re-cache them by running the route:cache command again. Therefore, this is not suitable during development, where routes might be changing frequently.

Generators

Laravel 5 ships with various commands to generate new files of different types. If you run $ php artisan list under the make namespace, you will find the following entries:

- make:command
- make:console
- make:controller
- make:event
- make:middleware
- make:migration
- make:model
- make:provider
- make:request

These commands create a stub file in the appropriate location in your Laravel application containing boilerplate code ready for you to get started with. This saves keystrokes compared to creating these files from scratch. All of these commands require a name to be specified, as shown in the following command:

$ php artisan make:model Cat

This will create an Eloquent model class called Cat at app/Cat.php, as well as a corresponding migration to create a cats table.
If you do not need to create a migration when making a model (for example, if the table already exists), then you can pass the --no-migration option as follows:

$ php artisan make:model Cat --no-migration

A new model class will look like this:

<?php namespace App;

use Illuminate\Database\Eloquent\Model;

class Cat extends Model {
    //
}

From here, you can define your own properties and methods. The other commands may have options. The best way to check is to append --help after the command name, as shown in the following command:

$ php artisan make:command --help

You will see that this command has --handler and --queued options to modify the class stub that is created.

Rolling out your own Artisan commands

At this stage, you might be thinking about writing your own bespoke commands. As you will see, this is surprisingly easy to do with Artisan. If you have used Symfony's Console component, you will be pleased to know that an Artisan command is simply an extension of it with a slightly more expressive syntax. This means the various helpers that prompt for input, show a progress bar, or format a table are all available from within Artisan. The command that we are going to write depends on the application we built. It will allow you to export all cat records present in the database as a CSV file, with or without a header line. If no output file is specified, the command will simply dump all records onto the screen in a formatted table.

Creating the command

There are only two required steps to create a command. First, you need to create the command itself, and then you need to register it manually. We can make use of the make:console command we saw previously to create our console command:

$ php artisan make:console ExportCatsCommand

This will generate a class inside app/Console/Commands. We will then need to register this command with the console kernel, located at app/Console/Kernel.php:

protected $commands = [
    'App\Console\Commands\ExportCatsCommand',
];

If you now run php artisan, you should see a new command called command:name. This command does not do anything yet. However, before we start writing the functionality, let's briefly look at how it works internally.

The anatomy of a command

Inside the newly created command class, you will find some code that has been generated for you. We will walk through the different properties and methods and see what their purpose is. The first two properties are the name and description of the command. Nothing exciting here; this is only the information that will be shown in the command line when you run Artisan. The colon is used to namespace the commands, as shown here:

protected $name = 'export:cats';

protected $description = 'Export all cats';

Then you will find the fire method. This is the method that gets called when you run a particular command. From there, you can retrieve the arguments and options passed to the command, or run other methods:

public function fire()

Lastly, there are two methods that are responsible for defining the list of arguments or options that are passed to the command:

protected function getArguments() { /* Array of arguments */ }

protected function getOptions() { /* Array of options */ }

Each argument or option can have a name, a description, and a default value, and can be mandatory or optional. Additionally, options can have a shortcut.
To understand the difference between arguments and options, consider the following command, where options are prefixed with two dashes:

$ command --option_one=value --option_two -v=1 argument_one argument_two

In this example, option_two does not have a value; it is only used as a flag. The -v flag only has one dash since it is a shortcut. In your console commands, you'll need to verify any option and argument values the user provides (for example, if you're expecting a number, ensure the value passed is actually a numerical value). Arguments can be retrieved with $this->argument($arg), and options, as you might guess, with $this->option($opt). If these methods do not receive any parameters, they simply return the full list of parameters. You refer to arguments and options via their names, that is, $this->argument('argument_name');.

Writing the command

We are going to start by writing a method that retrieves all cats from the database and returns them as an array:

protected function getCatsData() {
    $output = [];
    $cats = \App\Cat::with('breed')->get();
    foreach ($cats as $cat) {
        $output[] = [
            $cat->name,
            $cat->date_of_birth,
            $cat->breed->name,
        ];
    }
    return $output;
}

There should not be anything new here. We could have used the toArray() method, which turns an Eloquent collection into an array, but we would have had to flatten the array and exclude certain fields. Then we need to define what arguments and options our command expects:

protected function getArguments() {
    return [
        ['file', InputArgument::OPTIONAL, 'The output file', null],
    ];
}

To specify additional arguments, just add an additional element to the array with the same parameters:

return [
    ['arg_one', InputArgument::OPTIONAL, 'Argument 1', null],
    ['arg_two', InputArgument::OPTIONAL, 'Argument 2', null],
];

The options are defined in a similar way:

protected function getOptions() {
    return [
        ['headers', 'h', InputOption::VALUE_NONE, 'Display headers?', null],
    ];
}

The last parameter is the default value that the argument or option should have if it is not specified. In both cases, we want it to be null. Lastly, we write the logic for the fire method:

public function fire() {
    $output_path = $this->argument('file');

    $headers = ['Name', 'Date of Birth', 'Breed'];
    $rows = $this->getCatsData();

    if ($output_path) {
        $handle = fopen($output_path, 'w');
        if ($this->option('headers')) {
            fputcsv($handle, $headers);
        }
        foreach ($rows as $row) {
            fputcsv($handle, $row);
        }
        fclose($handle);
    } else {
        $table = $this->getHelperSet()->get('table');
        $table->setHeaders($headers)->setRows($rows);
        $table->render($this->getOutput());
    }
}

While the bulk of this method is relatively straightforward, there are a few novelties. The first one is the use of the $this->info() method, which writes an informative message to the output. If you need to show an error message in a different color, you can use the $this->error() method. Further down in the code, you will see some functions that are used to generate a table. As we mentioned previously, an Artisan command extends the Symfony Console component and, therefore, inherits all of its helpers. These can be accessed with $this->getHelperSet(). Then it is only a matter of passing arrays for the header and rows of the table, and calling the render method.
To see the output of our command, we will run the following commands:

$ php artisan export:cats
$ php artisan export:cats --headers file.csv

Scheduling commands

Traditionally, if you wanted a command to run periodically (hourly, daily, weekly, and so on), you would have to set up a Cron job in Linux-based environments, or a scheduled task in Windows environments. However, this comes with drawbacks. It requires the user to have server access and familiarity with creating such schedules. Also, in cloud-based environments, the application may not be hosted on a single machine, or the user might not have the privileges to create Cron jobs. The creators of Laravel saw this as something that could be improved, and have come up with an expressive way of scheduling Artisan tasks. Your schedule is defined in app/Console/Kernel.php, and with your schedule being defined in this file, it has the added advantage of being present in source control. If you open the Kernel class file, you will see a method named schedule. Laravel ships with one by default that serves as an example:

$schedule->command('inspire')->hourly();

If you've set up a Cron job in the past, you will see that this is instantly more readable than the crontab equivalent:

0 * * * * /path/to/artisan inspire

Specifying the task in code also means we can easily change the console command to be run without having to update the crontab entry. By default, scheduled commands will not run. To do so, you need a single Cron job that runs the scheduler each and every minute:

* * * * * php /path/to/artisan schedule:run 1>> /dev/null 2>&1

When the scheduler is run, it will check for any jobs whose schedules match and then run them. If no schedules match, then no commands are run in that pass. You are free to schedule as many commands as you wish, and there are various methods to schedule them that are expressive and descriptive:

$schedule->command('foo')->everyFiveMinutes();
$schedule->command('bar')->everyTenMinutes();
$schedule->command('baz')->everyThirtyMinutes();
$schedule->command('qux')->daily();

You can also specify a time for a scheduled command to run:

$schedule->command('foo')->dailyAt('21:00');

Alternatively, you can create less frequent scheduled commands:

$schedule->command('foo')->weekly();
$schedule->command('bar')->weeklyOn(1, '21:00');

The first parameter in the second example is the day, with 0 representing Sunday and 1 through 6 representing Monday through Saturday, and the second parameter is the time, again specified in 24-hour format. You can also explicitly specify the day on which to run a scheduled command:

$schedule->command('foo')->mondays();
$schedule->command('foo')->tuesdays();
$schedule->command('foo')->wednesdays();
// And so on
$schedule->command('foo')->weekdays();

If you have a potentially long-running command, then you can prevent it from overlapping:

$schedule->command('foo')->everyFiveMinutes()
         ->withoutOverlapping();

Along with the schedule, you can also specify the environment under which a scheduled command should run, as shown in the following command:

$schedule->command('foo')->weekly()->environments('production');

You could use this to run commands in a production environment, for example, archiving data or running a report periodically. By default, scheduled commands won't execute if maintenance mode is enabled.
This behavior can be easily overridden:

$schedule->command('foo')->weekly()->evenInMaintenanceMode();

Viewing the output of scheduled commands

For some scheduled commands, you will probably want to view the output somehow, whether via e-mail, logged to a file on disk, or via a callback sent to a pre-defined URL. All of these scenarios are possible in Laravel. You can send the output of a job via e-mail by using the following command:

$schedule->command('foo')->weekly()
         ->emailOutputTo('someone@example.com');

If you wish to write the output of a job to a file on disk, that is easy enough too:

$schedule->command('foo')->weekly()->sendOutputTo($filepath);

You can also ping a URL after a job is run:

$schedule->command('foo')->weekly()->thenPing($url);

This will execute a GET request to the specified URL, at which point you could send a message to your favorite chat client to notify you that the command has run. Finally, you can chain the preceding methods to send multiple notifications:

$schedule->command('foo')->weekly()
         ->sendOutputTo($filepath)
         ->emailOutputTo('someone@example.com');

However, note that if you wish to do both, you have to send the output to a file before it can be e-mailed.

Summary

In this article, you have learned the different ways in which Artisan can assist you in the development, debugging, and deployment process. We have also seen how easy it is to build a custom Artisan command and adapt it to your own needs. If you are relatively new to the command line, you will have had a glimpse into the power of command-line utilities. If, on the other hand, you are a seasoned user of the command line and you have written scripts with other programming languages, you can surely appreciate the simplicity and expressiveness of Artisan.

Resources for Article:

Further resources on this subject:

Your First Application [article]
Creating and Using Composer Packages [article]
Eloquent relationships [article]

The AsyncTask and HardwareTask Classes

Packt
17 Feb 2015
10 min read
This article is written by Andrew Henderson, the author of Android for the BeagleBone Black. This article will cover the usage of the AsyncTask and HardwareTask classes. (For more resources related to this topic, see here.)

Understanding the AsyncTask class

HardwareTask extends the AsyncTask class, and using it provides a major advantage over the way hardware interfacing is implemented in the gpio app. AsyncTask allows you to perform complex and time-consuming hardware-interfacing tasks without your app becoming unresponsive while the tasks are executed. Each instance of an AsyncTask class can create a new thread of execution within Android. This is similar to how multithreaded programs found on other OSes spin new threads to handle file and network I/O, manage UIs, and perform parallel processing.

The gpio app only used a single thread during its execution. This thread is the main UI thread that is part of all Android apps. The UI thread is designed to handle UI events as quickly as possible. When you interact with a UI element, that element's handler method is called by the UI thread. For example, clicking a button causes the UI thread to invoke the button's onClick() handler. The onClick() handler then executes a piece of code and returns to the UI thread. Android is constantly monitoring the execution of the UI thread. If a handler takes too long to finish its execution, Android shows an Application Not Responding (ANR) dialog to the user. You never want an ANR dialog to appear to the user. It is a sign that your app is running inefficiently (or even not at all!) by spending too much time in handlers within the UI thread.

The Application Not Responding dialog in Android

The gpio app performed reads and writes of the GPIO states very quickly from within the UI thread, so the risk of triggering the ANR was very small. Interfacing with the FRAM is a much slower process. With the BBB's I2C bus clocked at its maximum speed of 400 KHz, it takes approximately 25 microseconds to read or write a byte of data when using the FRAM. While this is not a major concern for small writes, reading or writing the entire 32,768 bytes of the FRAM can take close to a full second to execute! Multiple reads and writes of the full FRAM can easily trigger the ANR dialog, so it is necessary to move these time-consuming activities out of the UI thread. By placing your hardware interfacing into its own AsyncTask class, you decouple the execution of these time-intensive tasks from the execution of the UI thread. This prevents your hardware interfacing from potentially triggering the ANR dialog.

Learning the details of the HardwareTask class

The AsyncTask base class of HardwareTask provides many different methods, which you can further explore by referring to the Android API documentation. The four AsyncTask methods that are of immediate interest for our hardware-interfacing efforts are:

- onPreExecute()
- doInBackground()
- onPostExecute()
- execute()

Of these four methods, only the doInBackground() method executes within its own thread. The other three methods all execute within the context of the UI thread. Only the methods that execute within the UI thread context are able to update screen UI elements, as the skeleton after this list illustrates.
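To make these thread boundaries concrete before diving into HardwareTask, here is a minimal, hypothetical AsyncTask skeleton; the class name and the <Void, Void, Boolean> type parameters are placeholders, not code from the fram app:

import android.os.AsyncTask;

public class ExampleTask extends AsyncTask<Void, Void, Boolean> {

    @Override
    protected void onPreExecute() {
        // UI thread: runs before the background thread starts.
        // A safe place to disable buttons and other UI elements.
    }

    @Override
    protected Boolean doInBackground(Void... params) {
        // Background thread: the only method that runs off the UI thread.
        // Slow hardware I/O belongs here; never touch UI elements from here.
        return true;
    }

    @Override
    protected void onPostExecute(Boolean result) {
        // UI thread: runs after doInBackground() returns.
        // A safe place to re-enable UI elements and display results.
    }
}

// Started from the UI thread with: new ExampleTask().execute();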
The thread contexts in which the HardwareTask methods and the PacktHAL functions are executed

Much like the MainActivity class of the gpio app, the HardwareTask class provides four native methods that are used to call PacktHAL JNI functions related to FRAM hardware interfacing:

public class HardwareTask extends AsyncTask<Void, Void, Boolean> {

    private native boolean openFRAM(int bus, int address);
    private native String readFRAM(int offset, int bufferSize);
    private native void writeFRAM(int offset, int bufferSize, String buffer);
    private native boolean closeFRAM();

The openFRAM() method initializes your app's access to a FRAM located on a logical I2C bus (the bus parameter) and at a particular bus address (the address parameter). Once the connection to a particular FRAM is initialized via an openFRAM() call, all readFRAM() and writeFRAM() calls will be applied to that FRAM until a closeFRAM() call is made. The readFRAM() method will retrieve a series of bytes from the FRAM and return it as a Java String. A total of bufferSize bytes are retrieved, starting at an offset of offset bytes from the start of the FRAM. The writeFRAM() method will store a series of bytes to the FRAM. A total of bufferSize characters from the Java string buffer are stored in the FRAM, starting at an offset of offset bytes from the start of the FRAM.

In the fram app, the onClick() handlers for the Load and Save buttons in the MainActivity class each instantiate a new HardwareTask. Immediately after the instantiation of HardwareTask, either the loadFromFRAM() or saveToFRAM() method is called to begin interacting with the FRAM:

public void onClickSaveButton(View view) {
    hwTask = new HardwareTask();
    hwTask.saveToFRAM(this);
}

public void onClickLoadButton(View view) {
    hwTask = new HardwareTask();
    hwTask.loadFromFRAM(this);
}

Both the loadFromFRAM() and saveToFRAM() methods in the HardwareTask class call the base AsyncTask class execute() method to begin the new thread creation process:

public void saveToFRAM(Activity act) {
    mCallerActivity = act;
    isSave = true;
    execute();
}

public void loadFromFRAM(Activity act) {
    mCallerActivity = act;
    isSave = false;
    execute();
}

Each AsyncTask instance can only have its execute() method called once. If you need to run an AsyncTask a second time, you must instantiate a new instance of it and call the execute() method of the new instance. This is why we instantiate a new instance of HardwareTask in the onClick() handlers of the Load and Save buttons, rather than instantiating a single HardwareTask instance and then calling its execute() method many times.

The execute() method automatically calls the onPreExecute() method of the HardwareTask class. The onPreExecute() method performs any initialization that must occur prior to the start of the new thread. In the fram app, this requires disabling various UI elements and calling openFRAM() to initialize the connection to the FRAM via PacktHAL:

protected void onPreExecute() {
    // Some setup goes here
    ...

    if ( !openFRAM(2, 0x50) ) {
        Log.e("HardwareTask", "Error opening hardware");
        isDone = true;
    }
    // Disable the Buttons and TextFields while talking to the hardware
    saveText.setEnabled(false);
    saveButton.setEnabled(false);
    loadButton.setEnabled(false);
}

Disabling your UI elements

When you are performing a background operation, you might wish to keep your app's user from providing more input until the operation is complete.
During a FRAM read or write, we do not want the user to press any UI buttons or change the data held within the saveText text field. If your UI elements remain enabled all the time, the user can launch multiple AsyncTask instances simultaneously by repeatedly hitting the UI buttons. To prevent this, disable any UI elements required to restrict user input until that input is necessary.

Once the onPreExecute() method finishes, the AsyncTask base class spins a new thread and executes the doInBackground() method within that thread. The lifetime of the new thread is only for the duration of the doInBackground() method. Once doInBackground() returns, the new thread will terminate. As everything that takes place within the doInBackground() method is performed in a background thread, it is the perfect place to perform any time-consuming activities that would trigger an ANR dialog if they were executed from within the UI thread. This means that the slow readFRAM() and writeFRAM() calls that access the I2C bus and communicate with the FRAM should be made from within doInBackground():

protected Boolean doInBackground(Void... params) {
    ...
    Log.i("HardwareTask", "doInBackground: Interfacing with hardware");
    try {
        if (isSave) {
            writeFRAM(0, saveData.length(), saveData);
        } else {
            loadData = readFRAM(0, 61);
        }
    } catch (Exception e) {
    ...

The loadData and saveData string variables used in the readFRAM() and writeFRAM() calls are both class variables of HardwareTask. The saveData variable is populated with the contents of the saveEditText text field via a saveEditText.toString() call in the HardwareTask class' onPreExecute() method.

How do I update the UI from within an AsyncTask thread?

While the fram app does not make use of them in this example, the AsyncTask class provides two special methods, publishProgress() and onProgressUpdate(), that are worth mentioning. The AsyncTask thread uses these methods to communicate with the UI thread while the AsyncTask thread is running. The publishProgress() method executes within the AsyncTask thread and triggers the execution of onProgressUpdate() within the UI thread. These methods are commonly used to update progress meters (hence the name publishProgress) or other UI elements that cannot be directly updated from within the AsyncTask thread.

After doInBackground() has completed, the AsyncTask thread terminates. This triggers the calling of onPostExecute() from the UI thread. The onPostExecute() method is used for any post-thread cleanup and updating any UI elements that need to be modified. The fram app uses the closeFRAM() PacktHAL function to close the current FRAM context that it opened with openFRAM() in the onPreExecute() method:

protected void onPostExecute(Boolean result) {
    if (!closeFRAM()) {
        Log.e("HardwareTask", "Error closing hardware");
    }
    ...

The user must now be notified that the task has been completed. If the Load button was pressed, then the string displayed in the loadTextField widget is updated via the MainActivity class updateLoadedData() method. If the Save button was pressed, a Toast message is displayed to notify the user that the save was successful:
Log.i("HardwareTask", "onPostExecute: Completed."); if (isSave) {    Toast toast = Toast.makeText(mCallerActivity.getApplicationContext(),      "Data stored to FRAM", Toast.LENGTH_SHORT);    toast.show(); } else {    ((MainActivity)mCallerActivity).updateLoadedData(loadData); } Giving Toast feedback to the user The Toast class is a great way to provide quick feedback to your app's user. It pops up a small message that disappears after a configurable period of time. If you perform a hardware-related task in the background and you want to notify the user of its completion without changing any UI elements, try using a Toast message! Toast messages can only be triggered by methods that are executing from within the UI thread. An example of the Toast message Finally, the onPostExecute() method will re-enable all of the UI elements that were disabled in onPreExecute(): saveText.setEnabled(true);saveButton.setEnabled(true); loadButton.setEnabled(true); The onPostExecute() method has now finished its execution and the app is back to patiently waiting for the user to make the next fram access request by pressing either the Load or Save button. Are you ready for a challenge? Now that you have seen all of the pieces of the fram app, why not change it to add new functionality? For a challenge, try adding a counter that indicates to the user how many more characters can be entered into the saveText text field before the 60-character limit is reached. Summary The fram app in this article demonstrated how to use the AsyncTask class to perform time-intensive hardware interfacing tasks without stalling the app's UI thread and triggering the ANR dialog. Resources for Article: Further resources on this subject: Sound Recorder for Android [article] Reversing Android Applications [article] Saying Hello to Unity and Android [article]

Gesture

Packt
25 Oct 2013
5 min read
(For more resources related to this topic, see here.)

Gestures

While there are ongoing arguments in the courts of America at the time of writing over who invented the likes of dragging images, it is without a doubt that a key feature of iOS is the ability to use gestures. To put it simply, when you tap the screen to start an app, or select a part of an image to enlarge it, or anything like that, you are using gestures. A gesture (in terms of iOS) is any touch interaction between the UI and the device. With iOS 6, there are six gestures the user has the ability to use:

- UIPanGestureRecognizer (PanGesture; continuous type): Pan images or over-sized views by dragging across the screen
- UISwipeGestureRecognizer (SwipeGesture; continuous type): Similar to panning, except it is a swipe
- UITapGestureRecognizer (TapGesture; discrete type): Tap the screen a number of times (configurable)
- UILongPressGestureRecognizer (LongPressGesture; discrete type): Hold the finger down on the screen
- UIPinchGestureRecognizer (PinchGesture; continuous type): Zoom by pinching an area and moving your fingers in or out
- UIRotationGestureRecognizer (RotationGesture; continuous type): Rotate by moving your fingers in opposite directions

Gestures can be added by programming or via Xcode. The available gestures are listed in the following screenshot with the rest of the widgets on the right-hand side of the designer:

To add a gesture, drag the gesture you want to use under the view on the View bar (shown in the following screenshot):

Design the UI as you want and, while pressing the Ctrl key, drag the gesture to what you want to recognize using the gesture. In my example, the object you want to recognize is anywhere on the screen. Once you have connected the gesture to what you want to recognize, you will see the configurable options of the gesture. The Taps field is the number of taps required before the Recognizer is triggered, and the Touches field is the number of points onscreen required to be touched for the Recognizer to be triggered. When you come to connect up the UI, the gesture must also be added.

Gesture code

When using Xcode, it is simple to code gestures. The class defined in the Xcode design for the tapping gesture is called tapGesture and is used in the following code:

private int tapped = 0;

public override void ViewDidLoad()
{
    base.ViewDidLoad();
    tapGesture.AddTarget(this, new Selector("screenTapped"));
    View.AddGestureRecognizer(tapGesture);
}

[Export("screenTapped")]
public void SingleTap(UIGestureRecognizer s)
{
    tapped++;
    lblCounter.Text = tapped.ToString();
}

There is nothing really amazing to the code; it just displays how many times the screen has been tapped. The Selector method is called by the code when the tap has been seen. The method name doesn't make any difference as long as the Selector and Export names are the same.

Types

When the gesture types were originally described, they were given a type. The type reflects the number of messages sent to the Selector method. A discrete one generates a single message. A continuous one generates multiple messages, which requires the Selector method to be more complex. The complexity is added by the Selector method having to check the State of the gesture to decide what to do with each message and whether the gesture has been completed.

Adding a gesture in code

It is not a requirement that Xcode be used to add a gesture.
To perform the same task in code as my preceding code did in Xcode is easy. The code will be as follows:

UITapGestureRecognizer tapGesture = new UITapGestureRecognizer()
{
    NumberOfTapsRequired = 1
};

The rest of the code from AddTarget can then be used.

Continuous types

The following code, a Pinch Recognizer, shows a simple rescaling. There are a couple of other states that I'll explain after the code. The only difference in the designer code is that I have a UIImageView instead of a label and a UIPinchGestureRecognizer class instead of a UITapGestureRecognizer class.

public override void ViewDidLoad()
{
    base.ViewDidLoad();
    uiImageView.Image = UIImage.FromFile("graphics/image.jpg")
        .Scale(new SizeF(160f, 160f));
    pinchGesture.AddTarget(this, new Selector("screenTapped"));
    uiImageView.AddGestureRecognizer(pinchGesture);
}

[Export("screenTapped")]
public void SingleTap(UIGestureRecognizer s)
{
    UIPinchGestureRecognizer pinch = (UIPinchGestureRecognizer)s;
    float scale = 0f;
    PointF location;
    switch (s.State)
    {
        case UIGestureRecognizerState.Began:
            Console.WriteLine("Pinch begun");
            location = s.LocationInView(s.View);
            break;
        case UIGestureRecognizerState.Changed:
            Console.WriteLine("Pinch value changed");
            scale = pinch.Scale;
            uiImageView.Image = UIImage.FromFile("graphics/image.jpg")
                .Scale(new SizeF(160f, 160f), scale);
            break;
        case UIGestureRecognizerState.Cancelled:
            Console.WriteLine("Pinch cancelled");
            uiImageView.Image = UIImage.FromFile("graphics/image.jpg")
                .Scale(new SizeF(160f, 160f));
            scale = 0f;
            break;
        case UIGestureRecognizerState.Recognized:
            Console.WriteLine("Pinch recognized");
            break;
    }
}

Other UIGestureRecognizerState values

The following list gives the other Recognizer states:

- Possible: The default state; the gesture hasn't been recognized. Used by all gestures.
- Failed: The gesture failed. No messages are sent for this state.
- Translation: The direction of the pan. Used in the pan gesture.
- Velocity: The speed of the pan. Used in the pan gesture.

In addition to these, it should be noted that discrete types only use the Possible and Recognized states.

Summary

Gestures certainly can add a lot to your apps. They can enable the user to speed around an image, move about a map, enlarge and reduce, as well as select areas of anything on a view. Their flexibility underpins why iOS is recognized as being an extremely versatile device for users to manipulate images, video, and anything else on-screen.

Resources for Article:

Further resources on this subject:

Creating and configuring a basic mobile application [Article]
So, what is XenMobile? [Article]
Building HTML5 Pages from Scratch [Article]

Signing an application in Android using Maven

Packt
18 Mar 2015
10 min read
In this article written by Patroklos Papapetrou and Jonathan LALOU, authors of the book Android Application Development with Maven, we'll learn the different modes of digital signing, and also how to use Maven to digitally sign applications. The topics that we will explore in this article are: (For more resources related to this topic, see here.)

Signing an application

Android requires that all packages, in order to be valid for installation on devices, are digitally signed with a certificate. This certificate is used by the Android ecosystem to validate the author of the application. Thankfully, the certificate is not required to be issued by a certificate authority. That would be a total nightmare for every Android developer and would increase the cost of developing applications. However, if you want the certificate to be signed by a trusted authority, like the majority of the certificates used in web servers, you are free to do so.

Android supports two modes of signing: debug and release. Debug mode is used by default during the development of the application, and release mode when we are ready to release and publish it. In debug mode, when building and packaging an application, the Android SDK automatically generates a certificate and signs the package. So don't worry; even though we haven't told Maven to do anything about signing, Android knows what to do and, behind the scenes, signs the package with the autogenerated key.

When it comes to distributing an application, debug mode is not enough, so we need to prepare our own self-signed certificate and instruct Maven to use it instead of the default one. Before we dive into the Maven configuration, let us quickly remind you how to issue your own certificate. Open a command window, and type the following command:

keytool -genkey -v -keystore my-android-release-key.keystore -alias my-android-key -keyalg RSA -keysize 2048 -validity 10000

If the keytool command-line utility is not in your path, then it's a good idea to add it. It's located under the %JAVA_HOME%/bin directory. Alternatively, you can execute the command inside this directory.

Let us explain some parameters of this command. We use the keytool command-line utility to create a new keystore file under the name my-android-release-key.keystore inside the current directory. The -alias parameter is used to define an alias name for this key, and it will be used later on in the Maven configuration. We also specify the algorithm, RSA, and the key size, 2048; finally, we set the validity period in days. The generated key will be valid for 10,000 days, long enough for many, many new versions of our application!

After running the command, you will be prompted to answer a series of questions. First, type a password for the keystore file twice. It's a good idea to note it down, because we will use it again in our Maven configuration. Type the word secret in both prompts. Then, we need to provide some identification data, like name, surname, organization details, and location. Finally, we need to set a password for the key. If we want to keep the same password as the keystore file, we can just hit RETURN. If everything goes well, we will see the final message that informs us that the key is being stored in the keystore file with the name we just defined. After this, our key is ready to be used to sign our Android application.
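Before wiring the keystore into Maven, you can double-check what was generated. This is a standard keytool invocation, shown here with the file name and alias chosen above:

keytool -list -v -keystore my-android-release-key.keystore -alias my-android-key

After you enter the keystore password, the output should show a single PrivateKeyEntry, along with the certificate fingerprints and a validity window of roughly 27 years, corresponding to the 10,000 days we specified.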
The key used in debug mode can be found in the file ~/.android/debug.keystore and contains the following information:

Keystore name: "debug.keystore"
Keystore password: "android"
Key alias: "androiddebugkey"
Key password: "android"
CN: "CN=Android Debug,O=Android,C=US"

Now, it's time to let Maven use the key we just generated. Before we add the necessary configuration to our pom.xml file, we need to add a Maven profile to the global Maven settings. The profiles defined in the user settings.xml file can be used by all Maven projects on the same machine. This file is usually located in the ~/.m2 directory (the global settings file lives under %M2_HOME%/conf/settings.xml). One fundamental advantage of defining global profiles in the user's Maven settings is that this configuration is not shared through the pom.xml file with all developers that work on the application. The settings.xml file should never be kept under the Source Control Management (SCM) tool. Users can safely enter personal or critical information like passwords and keys, which is exactly the case in our example. Now, edit the settings.xml file and add the following lines inside the <profiles> element:

<profile>
  <id>release</id>
  <properties>
    <sign.keystore>/path/to/my/keystore/my-android-release-key.keystore</sign.keystore>
    <sign.alias>my-android-key</sign.alias>
    <sign.storepass>secret</sign.storepass>
    <sign.keypass>secret</sign.keypass>
  </properties>
</profile>

Keep in mind that the keystore name, the alias name, the keystore password, and the key password should be the ones we used when we created the keystore file. Clearly, storing passwords in plain text, even in a file that is normally protected from other users, is not a very good practice. A quick way to make the password slightly less easy to read is to use XML entities to write the value. Some sites on the Internet, like http://coderstoolbox.net/string/#!encoding=xml&action=encode&charset=none, provide such encodings. The value will be resolved as plain text when the file is loaded, so Maven won't even notice it. In this case, the password property would become:

<sign.storepass>&#115;&#101;&#99;&#114;&#101;&#116;</sign.storepass>

We have prepared our global profile and the corresponding properties, so we can now edit the pom.xml file of the parent project and do the proper configuration. Adding common configuration to the parent file for all Maven submodules is a good practice in our case, because at some point we would like to release both free and paid versions, and it's preferable to avoid duplicating the same configuration in two files. We want to create a new profile and add all the necessary settings there, because the release process is not something that runs every day during the development phase. It should run only at a final stage, when we are ready to deploy our application. Our first priority is to tell Maven to disable debug mode. Then, we need to specify a new Maven plugin: maven-jarsigner-plugin, which is responsible for driving the verification and signing process for custom/private certificates.
You can find the complete release profile as follows:

<profiles>
  <profile>
    <id>release</id>
    <build>
      <plugins>
        <plugin>
          <groupId>com.jayway.maven.plugins.android.generation2</groupId>
          <artifactId>android-maven-plugin</artifactId>
          <extensions>true</extensions>
          <configuration>
            <sdk>
              <platform>19</platform>
            </sdk>
            <sign>
              <debug>false</debug>
            </sign>
          </configuration>
        </plugin>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-jarsigner-plugin</artifactId>
          <executions>
            <execution>
              <id>signing</id>
              <phase>package</phase>
              <goals>
                <goal>sign</goal>
                <goal>verify</goal>
              </goals>
              <inherited>true</inherited>
              <configuration>
                <removeExistingSignatures>true</removeExistingSignatures>
                <archiveDirectory />
                <includes>
                  <include>${project.build.directory}/${project.artifactId}.apk</include>
                </includes>
                <keystore>${sign.keystore}</keystore>
                <alias>${sign.alias}</alias>
                <storepass>${sign.storepass}</storepass>
                <keypass>${sign.keypass}</keypass>
                <verbose>true</verbose>
              </configuration>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

We instruct the JAR signer plugin to be triggered during the package phase and to run the sign and verify goals. Furthermore, we tell the plugin to remove any existing signatures from the package and to use the variable values we have defined in our global profile: ${sign.keystore}, ${sign.alias}, ${sign.storepass}, and ${sign.keypass}. The verbose setting is used here to verify that the private key is used instead of the debug key.

Before we run our new profile, for comparison purposes, let's package our application without using the signing capability. Open a terminal window, and type the following Maven command:

mvn clean package

When the command finishes, navigate to the paid version target directory, /PaidVersion/target, and take a look at its contents. You will notice that there are two packaging files: PaidVersion.jar (size 14KB) and PaidVersion.apk (size 46KB).

Since we haven't yet discussed releasing an application, we can just run the following command in a terminal window and see how the private key is used for signing the package:

mvn clean package -Prelease

You may have noticed that we use only one profile name, and that is the beauty of Maven. Profiles with the same ID are merged together, and so it's easier to understand and maintain the build scripts. If you want to double-check that the package is signed with your private certificate, you can monitor the Maven output, and at some point you will see something similar to the following image:

This output verifies that the classes have been properly signed through the execution of the Maven JAR signer plugin. To better understand how signing and optimization affect package generation, we can navigate again to the /PaidVersion/target directory and take a look at the files created. You will be surprised to see that the same packages exist again but with different sizes.
The PaidVersion.jar file has a size of 18KB, which is greater than the file generated without signing. However, the PaidVersion.apk is smaller (44KB) than the first version. These differences happen because the .jar file is signed with the new certificate, so its size gets slightly bigger. But what about the .apk file? Shouldn't it also be bigger, since every file in it is signed with the certificate? The answer can easily be found if we open both .apk files and compare them. They are compressed files, so any well-known tool that opens compressed files can do this. If you take a closer look at the contents of the two .apk files, you will notice that the contents of the .apk file generated with the private certificate are slightly larger, except for the resources.arsc file. This file, in the case of custom signing, is compressed, whereas in debug signing mode it is in raw format. This explains why the signed version of the .apk file is smaller than the original one.

There's also one last thing that verifies the correct completion of signing. Keep the compressed .apk files open and navigate to their META-INF directories. These directories contain a couple of different files. The package signed with our personal certificate contains key files named after the alias we used when we created the certificate, and the package signed in debug mode contains the default certificate used by Android.

Summary

We know that Android developers struggle when it comes to proper packaging and release of an application to the public. We have analyzed in detail the necessary steps for a correct and complete packaging of the Maven configuration. After reading this article, you should have a basic knowledge of digitally signing packages in Android, with and without the help of Maven.

Resources for Article:

Further resources on this subject:

Best Practices [article]
Installing Apache Karaf [article]
Apache Maven and m2eclipse [article]