Task Automation

Packt
05 Nov 2015
33 min read
In this article by Kerri Shotts, author of Mastering PhoneGap Mobile Application Development, you will learn about the following topics:

- Logology, our demonstration app
- Why Gulp for task automation
- Setting up your app's directory structure
- Installing Gulp
- Creating your first Gulp configuration file
- Performing substitutions
- Executing Cordova tasks
- Managing version numbers
- Supporting ES2015
- Linting your code
- Minifying/uglifying your code

(For more resources related to this topic, see here.)

Before we begin

Before you continue with this article, ensure that you have the following tools installed. The version that was used in this article is listed as well, for your reference:

- Git (http://git-scm.com, v2.8.3)
- Node.js (http://nodejs.org, v0.12.2)
- npm, short for Node Package Manager (typically installed with Node.js, v2.7.4)
- Cordova 5.x (http://cordova.apache.org, v5.2.0) or PhoneGap 5.x (http://www.phonegap.com, v5.2.2)

You'll need to execute the following in each directory in order to build the projects:

```
# On Linux / Mac OS X
$ npm install && gulp init

# On Windows
> npm install
> gulp init
```

If you're not intending to build the sample application in the code bundle, be sure to create a new directory that can serve as a container for all the work you'll be doing in this article. Just remember, each time you create a new directory and copy the prior version to it, you'll need to execute npm install and gulp init to set things up.

About Logology

I'm calling it Logology, and if you're familiar with any Greek words, you might have already guessed what the app will be: a dictionary. Now, I understand that this is not necessarily the coolest app, but it is sufficient for our purposes. It will help you learn how advanced mobile development is done.
By the time we're done, the app will have the following features:

- Search: the user will be able to search for a term
- Browse: the user will be able to browse the dictionary
- Responsive design: the app will size itself appropriately to any display size
- Accessibility: the app will be usable even if the user has visual difficulties
- Persistent storage: the app will persist settings and other user-generated information
- File downloads: the app will be able to download new content

Although the app sounds relatively simple, it's complex enough to benefit from task automation. Since it is useful to have task automation in place from the very beginning, we'll install Gulp and verify that it is working with some simple files first, before we really get to the meat of implementing Logology. As such, the app we build in this article is very simple: it exists to verify that our tasks are working correctly. Once we have verified our workflow, we can go on to the more complicated project at hand. You may think that working through this is very time-consuming, but it pays off in the long run. Once you have a workflow that you like, you can take that workflow and apply it to other apps you build in the future. This means that future apps can be started almost immediately (just copy the configuration from a previous app). Even if you don't write other apps, the time saved by having a task runner outweighs the initial setup time.

Why Gulp for task automation?

Gulp (http://gulpjs.com) is a task automation utility built on the Node.js platform. Unlike some other task runners, you configure Gulp by writing JavaScript code. The configuration for Gulp is just like any other JavaScript file, which means that if you know JavaScript, you can start defining tasks quickly. Gulp also uses the concept of "streams" (again, from Node.js), which makes Gulp very efficient.
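The pipe-and-transform idea is easy to model without Gulp at all. The sketch below is plain JavaScript with hypothetical file objects and plugin names: each "plugin" is simply a function that transforms a list of file objects, the way a Gulp plugin transforms a stream of virtual files.

```javascript
// A sketch of the piping concept using plain JavaScript only.
// The file paths and plugin names here are hypothetical stand-ins.
var files = [
  { path: "src/a.js", contents: "var a = 1;" },
  { path: "src/b.js", contents: "var b = 2;" }
];

// pipe(): feed the output of one step into the next, like .pipe() in Gulp
function pipe(input) {
  var steps = Array.prototype.slice.call(arguments, 1);
  return steps.reduce(function (data, step) { return step(data); }, input);
}

// "concat" plugin: bundle every file's contents into a single file
function concat(name) {
  return function (fileList) {
    return [{
      path: name,
      contents: fileList.map(function (f) { return f.contents; }).join("\n")
    }];
  };
}

// "rename" plugin: rewrite each file's path
function rename(fn) {
  return function (fileList) {
    return fileList.map(function (f) {
      return { path: fn(f.path), contents: f.contents };
    });
  };
}

var output = pipe(files,
  concat("app.js"),
  rename(function (p) { return "build/" + p; }));
// output: [{ path: "build/app.js", contents: "var a = 1;\nvar b = 2;" }]
```

Gulp's real streams are asynchronous and operate on virtual (vinyl) file objects, but the shape of the workflow is the same: a source, a chain of transformations, and a destination.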
Plugins can be inserted within these streams to perform many different transformations, including beautification or uglification, transpilation (for example, ECMAScript 2015 to ECMAScript 5), concatenation, packaging, and much more. If you've performed any sort of piping on the command line, Gulp should feel familiar, because it operates on a similar concept: the output from one process is piped to the next process, which performs any number of transformations, and so on, until the final output is written to another location. Gulp also tries to run as many dependent tasks in parallel as possible. Ideally, this makes running Gulp tasks faster, although it really depends on how your tasks are structured. Other task runners such as Grunt perform their task steps in sequence, which may result in slower builds, although tracing the steps from input to output may be easier when the steps are performed sequentially. That's not to say that Gulp is the best task runner; there are many that are quite good, and you may find that you prefer one of them over Gulp. The skills you learn in this article can easily be transferred to other task running and build systems. Here are some other useful task runners:

- Grunt (http://www.gruntjs.com): configuration is specified through settings, not code. Tasks are performed sequentially.
- Cake (http://coffeescript.org/documentation/docs/cake.html): uses CoffeeScript, and configuration is specified via code, like Gulp. If you like using CoffeeScript, you might prefer this over Gulp.
- Broccoli (https://github.com/broccolijs/broccoli): configuration is also specified through code.

Installing Gulp

Installing Gulp is easy, but it is actually a two-step process. The first step is to install Gulp globally. This installs the command-line utility, but Gulp won't work without also being installed locally within our project.
If you aren't familiar with Node.js, packages can be installed locally and/or globally. A locally installed package is local to the project's root directory, while a globally installed package is specific to the developer's machine. Project dependencies are tracked in package.json, which makes it easy to replicate your development setup on another machine. Assuming you have Node.js installed and package.json created in your project directory, the installation of Gulp will go very easily. Be sure to be positioned in your project's root directory, and then execute the following:

```
$ npm install -g gulp
$ npm install --save-dev gulp
```

If you receive an error while running these commands on OS X, you may need to run them with sudo, for example: sudo npm install -g gulp. You can usually ignore any WARN messages. It's a good idea to be positioned in your project's root directory any time you execute an npm or gulp command. On Linux and OS X, these commands generally locate the project's root directory automatically, but this isn't guaranteed on all platforms, so it's better to be safe than sorry. That's it! Gulp itself is very easy to install, but most workflows will require additional plugins that work with Gulp. In addition, we'll also install the Cordova dependencies for this project. First, let's install the Cordova dependencies:

```
$ npm install --save-dev cordova-lib cordova-ios cordova-android
```

cordova-lib allows us to programmatically interact with Cordova. We can create projects, build them, and emulate them: everything we can do with the Cordova command line, we can do with cordova-lib. cordova-ios and cordova-android refer to the iOS and Android platforms that cordova platform add ios android would add. We've made them dependencies for our project so that we can easily control the version we build with. While starting a new project, it's wise to start with the most recent version of Cordova and the requisite platforms.
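Since later snippets read settings such as pkg.cordova.id, pkg.cordova.name, pkg.cordova.template, pkg.cordova.plugins, and pkg.cordova.platforms from package.json, it may help to see roughly what that file could look like. The fragment below is purely illustrative: the version numbers, app id, plugin list, and template path are assumptions for the sake of the example, not values from the book.

```json
{
  "name": "logology",
  "version": "1.0.0",
  "devDependencies": {
    "gulp": "^3.9.0",
    "cordova-lib": "^5.2.0",
    "cordova-ios": "^3.9.0",
    "cordova-android": "^4.1.0"
  },
  "cordova": {
    "id": "com.example.logology",
    "name": "Logology",
    "template": "../blank",
    "plugins": ["cordova-plugin-whitelist"],
    "platforms": ["ios", "android"]
  }
}
```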
Once you begin, it's usually a good practice to stick with a specific platform version unless there are serious bugs or the like. Next, let's install the Gulp plugins we'll need:

```
$ npm install --save-dev babel-eslint cordova-android cordova-ios cordova-lib cordova-tasks gulp gulp-babel gulp-bump gulp-concat gulp-eslint gulp-jscs gulp-notify gulp-rename gulp-replace-task gulp-sourcemaps gulp-uglify gulp-util merge-stream rimraf
```

These will take a few moments to install, but when you're done, take a look in package.json. Notice that all the dependencies we added were also added to devDependencies. This makes it easy to install all the project's dependencies at a later date (say, on a new machine) simply by executing npm install. Before we go on, let's quickly go over what each of the above utilities does. We'll cover them in more detail as we progress through the remainder of this article.

- gulp-babel: converts ES2015 JavaScript into ES5. If you aren't familiar with ES2015, it has several new features and an improved syntax that make writing mobile apps that much easier. Unfortunately, because most browsers don't yet natively support the ES2015 features and syntax, it must be transpiled to ES5. Of course, if you prefer other languages that can be compiled to ES5 JavaScript, you could use those as well (these would include CoffeeScript and similar).
- gulp-bump: a small utility that manages version numbers in package.json.
- gulp-concat: concatenates streams together. We can use this to bundle files together.
- gulp-jscs: performs JavaScript code style checks against your code. Supports ES2015.
- gulp-eslint: lints your JavaScript code. Supports ES2015.
- babel-eslint: provides ES2015 support to gulp-eslint.
- gulp-notify: an optional plugin, but handy, especially when some of your tasks take a few seconds to run. This plugin sends a notification to your computer's notification panel when something of import occurs. If the plugin can't send it to your notification panel, it logs to the console.
- gulp-rename: renames streams.
- gulp-replace-task: performs search and replace within streams.
- gulp-sourcemaps: when transpiling ES2015 to ES5, it can be helpful to have a map between the original source and the transpiled source. This plugin creates them as a part of the workflow.
- gulp-uglify: uglifies/minifies code. While useful for code obfuscation, it also reduces the size of your code.
- gulp-util: additional utilities for Gulp, such as logging.
- merge-stream: merges multiple streams.
- rimraf: easy file deletion; akin to rm on the command line.

Creating your first Gulp configuration file

Gulp tasks are defined by the contents of the project's gulpfile.js file. This is a JavaScript program, so the same skills you have with JavaScript apply here. Furthermore, it's executed by Node.js, so if you have any Node.js knowledge, you can use it to your advantage. This file should be placed in the root directory of your project, and must be named gulpfile.js. The first few lines of your Gulp configuration file will require the Gulp plugins that you'll need in order to complete your tasks. The lines after that specify how to perform various tasks. For example, a very simple configuration might look like this:

```
var gulp = require("gulp");

gulp.task("copy-files", function () {
    return gulp.src(["./src/**/*"])
               .pipe(gulp.dest("./build"));
});
```

This configuration only performs one task: it moves all the files contained within src/ to build/. In many ways, this is the simplest form of a build workflow, but it's a bit too simple for our purposes. Note the pattern we use to match all the files; if you need to see the documentation on what patterns are supported, see https://www.npmjs.com/package/glob. To execute the task, one can execute gulp copy-files. Gulp will then execute the task and copy all the files from src/ to build/. What makes Gulp so powerful is the concept of task composition.
Tasks can depend on any number of other tasks, and those tasks can depend on yet more tasks. This makes it easy to create complex workflows out of simpler pieces. Furthermore, each task is asynchronous, so it is possible for many tasks with no shared dependencies to operate in parallel. Each task, as you can see in the prior code, comprises selecting a series of source files (src()), optionally performing some additional processing on each file (via pipe()), and then writing those files to a destination path (dest()). If no additional processing is specified (as in the prior example), Gulp will simply copy the files that match the wildcard pattern. The beauty of streams, however, is that one can execute any number of transformations before the final data is saved to storage, and so workflows can become very complex. Now that you've seen a simple task, let's get into some more complicated tasks in the next section.

How to execute Cordova tasks

It's tempting to use the Cordova command-line interface directly, but there's a problem with this: there's no great way to ensure that what you write will work across multiple platforms. If you are certain you'll only work with a specific platform, you can go ahead and execute shell commands instead, but what we're going to do is a bit more flexible. The code in this section is inspired by https://github.com/kamrik/CordovaGulpTemplate. The Cordova CLI is really just a thin wrapper around the cordova-lib project: everything the Cordova CLI can do, cordova-lib can do as well. Because the Cordova project will be a build artifact, we need to be able to create a Cordova project in addition to building it. We'll also need to emulate and run the app.
To do this, we first require cordova-lib at the top of our Gulp configuration file (following the other require statements):

```
var cordovaLib = require("cordova-lib");
var cordova = cordovaLib.cordova.raw;
var rimraf = require("rimraf");
```

Next, let's create the code to create a new Cordova project in the build directory:

```
var cordovaTasks = {
    // CLI: cordova create ./build com.example.app app_name
    //              --copy-from template_path
    create: function create() {
        return cordova.create(BUILD_DIR, pkg.cordova.id, pkg.cordova.name,
            { lib: { www: { url: path.join(__dirname, pkg.cordova.template),
                            link: false } } });
    }
};
```

Although it's a bit more complicated than cordova create is on the command line, you should be able to see the parallels. The lib object that is passed simply provides a template for the project (equivalent to --copy-from on the command line). In our case, package.json specifies that this should come from the blank/ directory. If we didn't do this, all our apps would be created with the sample Hello World app that Cordova installs by default. Our blank project template resides in ../blank, relative to the project root. Yours may reside elsewhere (since you're apt to reuse the same template), so package.json can use whatever path you need. Or, you might want the template to be within your project's root; in which case, package.json should use a path inside your project's root directory. We won't create a task to use this just yet; we need to define several other methods to build and emulate Cordova apps:

```
var gutil = require("gulp-util");

var PLATFORM = gutil.env.platform ? gutil.env.platform : "ios";    // or android
var BUILD_MODE = gutil.env.mode ? gutil.env.mode : "debug";        // or release
var BUILD_PLATFORMS = (gutil.env.for ? gutil.env.for : "ios,android").split(",");
var TARGET_DEVICE = gutil.env.target ? "--target=" + gutil.env.target : "";

var cordovaTasks = {
    create: function create() { /* as above */ },
    cdProject: function cdProject() {
        process.chdir(path.join(BUILD_DIR, "www"));
    },
    cdUp: function cdUp() {
        process.chdir("..");
    },
    copyConfig: function copyConfig() {
        return gulp.src([path.join(SOURCE_DIR, "config.xml")])
                   .pipe(performSubstitutions())
                   .pipe(gulp.dest(BUILD_DIR));
    },
    // cordova plugin add ...
    addPlugins: function addPlugins() {
        cordovaTasks.cdProject();
        return cordova.plugins("add", pkg.cordova.plugins)
                      .then(cordovaTasks.cdUp);
    },
    // cordova platform add ...
    addPlatforms: function addPlatforms() {
        cordovaTasks.cdProject();
        function transformPlatform(platform) {
            return path.join(__dirname, "node_modules", "cordova-" + platform);
        }
        return cordova.platforms("add",
                   pkg.cordova.platforms.map(transformPlatform))
                      .then(cordovaTasks.cdUp);
    },
    // cordova build <platforms> --release|--debug --target=...|--device
    build: function build() {
        var target = TARGET_DEVICE;
        cordovaTasks.cdProject();
        if (!target || target === "" || target === "--target=device") {
            target = "--device";
        }
        return cordova.build({ platforms: BUILD_PLATFORMS,
                               options: ["--" + BUILD_MODE, target] })
                      .then(cordovaTasks.cdUp);
    },
    // cordova emulate ios|android --release|--debug
    emulate: function emulate() {
        cordovaTasks.cdProject();
        return cordova.emulate({ platforms: [PLATFORM],
                                 options: ["--" + BUILD_MODE, TARGET_DEVICE] })
                      .then(cordovaTasks.cdUp);
    },
    // cordova run ios|android --release|--debug
    run: function run() {
        cordovaTasks.cdProject();
        return cordova.run({ platforms: [PLATFORM],
                             options: ["--" + BUILD_MODE, "--device",
                                       TARGET_DEVICE] })
                      .then(cordovaTasks.cdUp);
    },
    init: function init() {
        return this.create()
                   .then(cordovaTasks.copyConfig)
                   .then(cordovaTasks.addPlugins)
                   .then(cordovaTasks.addPlatforms);
    }
};
```

Place cordovaTasks prior to projectTasks in your Gulp configuration. If you aren't familiar with promises, you might want to learn more about them; http://www.html5rocks.com/en/tutorials/es6/promises/ is a fantastic resource. Before we explain the preceding code, there's another change you need to make, and that's to projectTasks.copyConfig, because we moved copyConfig into cordovaTasks:

```
var projectTasks = {
    ...,
    copyConfig: function () {
        return cordovaTasks.copyConfig();
    },
    ...
};
```

Most of the above tasks should be fairly self-explanatory; they correspond directly with their Cordova CLI counterparts. A few, however, need a little more explanation:

- cdProject / cdUp: these change the current working directory. All the cordova-lib commands after create need to be executed from within the Cordova project directory, not our project's root directory. You'll notice them in several of the tasks.
- addPlatforms: the platforms are added directly from our project's dependencies, rather than from the Cordova CLI. This allows us to control the platform versions we are using. As such, addPlatforms has to do a little more work to specify the actual directory name of each platform.
- build: this executes the cordova build command. By default, the CLI will build every platform, but it's possible that we might want to control the platforms that are built, hence the use of BUILD_PLATFORMS. On iOS, the build for an emulator is different from the build for a physical device, so we also need a way to specify that, which is what TARGET_DEVICE is for. This will look for emulators with the name specified by TARGET_DEVICE, but we might want to build for a physical device; in that case, we look for a target of device (or no target at all) and switch over to the --device flag, which forces Cordova to build for a physical device.
- init: this does the hard work of creating the Cordova project, copying the configuration file (and performing substitutions), adding the plugins to the Cordova project, and then adding the platforms.

Now is also a good time to mention that we can specify various settings with switches on the Gulp command line. In the earlier snippet, we're supporting the use of --platform to specify the platform to emulate or run, --mode to specify the build mode (debug or release), --for to determine which platforms Cordova will build for, and --target to specify the target device. The code specifies sane defaults if these switches aren't given, but they also allow the developer extra control over the workflow, which is very useful. For example, we'll be able to use commands like these:

```
$ gulp build --for ios,android --target device
$ gulp emulate --platform ios --target iPhone-6s
$ gulp run --platform ios --mode release
```

Next, let's write the code to actually perform the various Cordova tasks; it's pretty simple:

```
var projectTasks = {
    ...,
    init: function init() {
        return cordovaTasks.init();
    },
    emulateCordova: function emulateCordova() {
        return cordovaTasks.emulate();
    },
    runCordova: function runCordova() {
        return cordovaTasks.run();
    },
    buildCordova: function buildCordova() {
        return cordovaTasks.build();
    },
    clean: function clean(cb) {
        rimraf(BUILD_DIR, cb);
    },
    ...
};

...

gulp.task("clean", projectTasks.clean);
gulp.task("init", ["clean"], projectTasks.init);
gulp.task("build", ["copy"], projectTasks.buildCordova);
gulp.task("emulate", ["copy"], projectTasks.emulateCordova);
gulp.task("run", ["copy"], projectTasks.runCordova);
```

There's a catch with the cordovaTasks.create method: it will fail if anything is already in the build/ directory. As you can guess, this could easily happen, so we also created a projectTasks.clean method. This clean method uses rimraf to delete a specified directory; it is equivalent to using rm -rf build. We then build a Gulp task named init that depends on clean. So, whenever we execute gulp init, the old Cordova project will be removed and a new one will be created for us. Finally, note that the build (and other) tasks all depend on copy. This means that all our files in src/ will be copied (and transformed, if necessary) to build/ prior to executing the desired Cordova command. As you can see, our tasks are already becoming very complex, while each remains graspable when taken singly.
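The dependency mechanics just described can be sketched in a few lines of plain JavaScript. This is a toy model, not Gulp's actual scheduler (which is asynchronous and runs independent tasks in parallel), but it shows how a registration like gulp.task("init", ["clean"], fn) causes clean to run first:

```javascript
// A toy task registry (hypothetical, not Gulp's real implementation)
// showing how task dependencies resolve into an execution order:
// each task's dependencies run before the task itself, at most once.
var tasks = {};
var log = [];

function task(name, deps, fn) {
  tasks[name] = { deps: deps, fn: fn };
}

function run(name, done) {
  done = done || {};
  if (done[name]) { return; }   // never run the same task twice
  done[name] = true;
  tasks[name].deps.forEach(function (dep) { run(dep, done); });
  tasks[name].fn();
}

// Mirror the dependencies from the text: init depends on clean,
// and build depends on copy.
task("clean", [], function () { log.push("clean"); });
task("copy",  [], function () { log.push("copy"); });
task("init",  ["clean"], function () { log.push("init"); });
task("build", ["copy"],  function () { log.push("build"); });

run("init");   // runs clean, then init
run("build");  // runs copy, then build
// log: ["clean", "init", "copy", "build"]
```

Gulp builds the same kind of dependency graph from its gulp.task(name, deps, fn) registrations, with the added wrinkle that every task is asynchronous.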
This means we can now use the following tasks in Gulp:

```
$ gulp init                   # create the cordova project,
                              # cleaning first if needed
$ gulp clean                  # remove the cordova project
$ gulp build                  # copy src to build; apply
                              # transformations; cordova build
$ gulp build --mode release   # do the above, but build in
                              # release mode
$ gulp build --for ios        # only build for iOS
$ gulp build --target=device  # build device versions instead of
                              # emulator versions
$ gulp emulate --platform ios # copy src to build; apply
                              # transformations; cordova emulate ios
$ gulp emulate --platform ios --target iPhone-6
                              # same as above, but open the
                              # iPhone 6 emulator
$ gulp run --platform ios     # copy src to build; apply
                              # transformations; cordova run ios --device
```

Now, you're welcome to use the earlier code as it is, or you can use an npm package that takes care of the cordovaTasks portion for you. This has the benefit of drastically shortening your Gulp configuration. We've already included this package in our package.json file; it's named cordova-tasks, was created by the author, and shares a lot of similarities with the earlier code. To use it, the following needs to go at the top of our configuration file, below all the other require statements:

```
var cordova = require("cordova-tasks");
var cordovaTasks = new cordova.CordovaTasks({
    pkg: pkg,
    basePath: __dirname,
    buildDir: "build",
    sourceDir: "src",
    gulp: gulp,
    replace: replace
});
```

Then, you can remove the entire cordovaTasks object from your configuration file as well.
The projectTasks section needs to change only slightly:

```
var projectTasks = {
    init: function init() {
        return cordovaTasks.init();
    },
    emulateCordova: function emulateCordova() {
        return cordovaTasks.emulate({ buildMode: BUILD_MODE,
                                      platform: PLATFORM,
                                      options: [TARGET_DEVICE] });
    },
    runCordova: function runCordova() {
        return cordovaTasks.run({ buildMode: BUILD_MODE,
                                  platform: PLATFORM,
                                  options: [TARGET_DEVICE] });
    },
    buildCordova: function buildCordova() {
        var target = TARGET_DEVICE;
        if (!target || target === "" || target === "--target=device") {
            target = "--device";
        }
        return cordovaTasks.build({ buildMode: BUILD_MODE,
                                    platforms: BUILD_PLATFORMS,
                                    options: [target] });
    },
    ...
};
```

There's one last thing to do: in copyCode, change .pipe(performSubstitutions()) to .pipe(cordovaTasks.performSubstitutions()). This is because the cordova-tasks package automatically takes care of all the substitutions we need, including version numbers, plugins, platforms, and more. This was one of the more complex sections, so if you've come this far, take a coffee break. Next, we'll worry about managing version numbers.

Supporting ES2015

We've already mentioned ES2015 (or ECMAScript 2015) in this article. Now is the moment we actually get to start using it. First, though, we need to modify our copy-code task to transpile from ES2015 to ES5; otherwise, our code wouldn't run on any browser that doesn't support the new syntax (and that is still quite a few mobile platforms). There are several transpilers available; I prefer Babel (https://babeljs.io). There is a Gulp plugin that makes this transpilation transformation extremely simple. To use it, we need to add the following to the top of our Gulp configuration:

```
var babel = require("gulp-babel");
var sourcemaps = require("gulp-sourcemaps");
```

Source maps are an important piece of the debugging puzzle.
Because our code will have been transformed by the time it is running on our device, debugging becomes a little more difficult, since line numbers and the like no longer match. Source maps provide the browser with a map between your ES2015 code and the final result, so that debugging is a lot easier. Next, let's modify our projectTasks.copyCode method:

```
var projectTasks = {
    ...,
    copyCode: function copyCode() {
        var isRelease = (BUILD_MODE === "release");
        return gulp.src(CODE_FILES)
            .pipe(cordovaTasks.performSubstitutions())
            .pipe(isRelease ? gutil.noop() : sourcemaps.init())
            .pipe(babel())
            .pipe(concat("app.js"))
            .pipe(isRelease ? gutil.noop() : sourcemaps.write())
            .pipe(gulp.dest(CODE_DEST));
    },
    ...
};
```

Our task is now a little more complex, but that's only because we want to control when the source maps are generated. When babel() is called, it converts ES2015 code to ES5 and also generates a source map of those changes. This makes debugging easier, but it also increases the file size by quite a large amount. As such, when we're building in release mode, we don't want to include the source maps, so we call gutil.noop() instead, which simply does nothing. The source map functionality requires us to call sourcemaps.init prior to any Gulp plugin that might generate source maps. After the plugin that creates the source maps executes, we also have to call sourcemaps.write to save the source map back to the stream. We could also write the source map to a separate .map file by calling sourcemaps.write("."), but you do need to be careful about cleaning that file up when creating a release build. babel is doing the actual hard work of converting ES2015 code to ES5, but it does need a little help in the form of a small support library.
We'll add this library to src/www/js/lib/ by copying it from the babel-core module:

```
$ cp node_modules/babel-core/browser-polyfill.js src/www/js/lib
```

If you don't have the src/www/js/lib/ directory yet, you'll need to create it before executing the previous command. Next, we need to edit src/www/index.html to include this script. While we're at it, let's make a few other changes:

```
<!DOCTYPE html>
<html>
<head>
    <script src="cordova.js" type="text/javascript"></script>
    <script src="./js/lib/browser-polyfill.js" type="text/javascript"></script>
    <script src="./js/app/app.js" type="text/javascript"></script>
</head>
<body>
    <p>This is static content..., but below is dynamic content.</p>
    <div id="demo"></div>
</body>
</html>
```

Finally, let's write some ES2015 code in src/www/js/app/index.js:

```
function h(elType, ...children) {
    let el = document.createElement(elType);
    for (let child of children) {
        if (typeof child !== "object") {
            el.textContent = child;
        } else if (child instanceof Array) {
            child.forEach(el.appendChild.bind(el));
        } else {
            el.appendChild(child);
        }
    }
    return el;
}

function startApp() {
    document.querySelector("#demo").appendChild(
        h("div",
            h("ul",
                h("li", "Some information about this app..."),
                h("li", "App name: {{{NAME}}}"),
                h("li", "App version: {{{VERSION}}}")
            )
        )
    );
}

document.addEventListener("deviceready", startApp, false);
```

This article isn't about how to write ES2015 code, so I won't bore you with all the details. Suffice it to say, the previous code generates a few list items when the app runs, using a very simple form of DOM templating. But it does so using the ... (spread) syntax for variable parameters, the for...of loop, and let instead of var. Although it looks a lot like the JavaScript you know, it's definitely different enough that it will take some time to learn how best to use the new features.

Linting your code

You could execute gulp emulate --platform ios (or android) right now, and the app should work.
But how do we know our code will work when built? Better yet, how can we prevent a build if the code isn't valid? We do this by adding lint tasks to our Gulp configuration file. Linting is a lot like compiling: the linter checks your code for obvious errors and aborts if it finds any. There are various linters available (some better than others), but not all of them support ES2015 syntax yet. The best one that does is ESLint (http://www.eslint.org), and thankfully there's a very simple Gulp plugin that uses it. We could stop at linting and be done, but code style is also important and can catch potentially serious issues as well. As such, we're also going to use the JavaScript Code Style checker, or JSCS (https://github.com/jscs-dev/node-jscs). Let's create tasks to lint and check our coding style. First, add the following to the top of our Gulp configuration:

```
var eslint = require("gulp-eslint");
var jscs = require("gulp-jscs");

var CONFIG_DIR = path.join(__dirname, "config");
var CODE_STYLE_FILES = [path.join(SOURCE_DIR, "www", "js", "app", "**", "*.js")];
var CODE_LINT_FILES = [path.join(SOURCE_DIR, "www", "js", "app", "**", "*.js")];
```

Now, let's create the tasks:

```
var projectTasks = {
    ...,
    checkCodeStyle: function checkCodeStyle() {
        return gulp.src(CODE_STYLE_FILES)
            .pipe(jscs({
                configPath: path.join(CONFIG_DIR, "jscs.json"),
                esnext: true
            }));
    },
    lintCode: function lintCode() {
        return gulp.src(CODE_LINT_FILES)
            .pipe(eslint(path.join(CONFIG_DIR, "eslint.json")))
            .pipe(eslint.format())
            .pipe(eslint.failOnError());
    },
    ...
};

...

gulp.task("lint", projectTasks.lintCode);
gulp.task("code-style", projectTasks.checkCodeStyle);
```

Before you run these, you'll need two configuration files to tell each task what should be an error and what shouldn't be. If you want to change the settings, you can do so; the sites for ESLint and JSCS have information on how to modify the configuration files.
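As a starting point, the two files might look like the following. The specific ESLint rules shown here are illustrative assumptions, not settings from the book; only the "parser" entry is required for ES2015 support. config/eslint.json:

```json
{
  "parser": "babel-eslint",
  "rules": {
    "semi": 2,
    "no-undef": 2
  }
}
```

and config/jscs.json, which only needs to exist and be non-empty (an empty rule set is enough):

```json
{}
```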
config/eslint.json must contain "parser": "babel-eslint" in order to force ESLint to use ES2015 syntax. (For JSCS, this is instead set via the esnext option in the Gulp configuration.) config/jscs.json must exist and must not be empty; if you don't need to specify any rules, use an empty JSON object ({}). Now, if you were to execute gulp lint and our source code had a syntax error, you would receive an error message. The same goes for code style: gulp code-style will generate an error if it doesn't like the look of the code. Modify the build, emulate, and run tasks in the Gulp configuration as follows:

```
gulp.task("build", ["lint", "code-style", "copy"], projectTasks.buildCordova);
gulp.task("emulate", ["lint", "code-style", "copy"], projectTasks.emulateCordova);
gulp.task("run", ["lint", "code-style", "copy"], projectTasks.runCordova);
```

Now, if you execute gulp build and there is a linting or code style error, the build will fail with an error. This gives a little more assurance that our code is at least syntactically valid prior to distributing or running it. Note that linting and style checks do not guarantee that your code works logically; they only ensure that there are no syntax or style errors. If your program responds incorrectly to a gesture or processes some data incorrectly, a linter won't necessarily catch those issues.

Uglifying your code

Code uglification (or minification) sounds a bit painful, but it's a really simple step we can add to our workflow, and it will reduce the size of our applications when we build in release mode. Uglification also tends to obfuscate our code a little, but don't rely on this for any security: obfuscation can easily be undone.
To add code uglification, add the following to the top of our Gulp configuration:

var uglify = require("gulp-uglify");

We can then uglify our code by adding the following line to our projectTasks.copyCode method, immediately after .pipe(concat("app.js")):

.pipe(isRelease ? uglify({preserveComments: "some"}) : gutil.noop())

Notice that we call the uglify method only if the build mode is release. This means that we'll only trigger it if we execute gulp build --mode release. You can, of course, specify additional options; if you want to see all the documentation, visit https://github.com/mishoo/UglifyJS2/. Our options preserve certain comments (the ones most likely to be license-related) while stripping out all the other comments.

Putting it all together

You've accomplished quite a bit, but there's one last thing we want to mention: the default task. If gulp is run with no parameters, it looks for a default task to perform. This can be anything you want. To specify it, just add the following to your Gulp configuration:

gulp.task("default", ["build"]);

Now, if you execute gulp with no specific task, you'll actually start the build task instead. What you use for your default task is largely dependent upon your preferences.

Your Gulp configuration is now quite large and complex. We've added a few additional features to it (mostly for config.xml), as well as several other features that you might want to investigate further:

BrowserSync for rapid iteration and testing
The ability to control whether or not errors prevent further tasks from being executed
Help text

Summary

In this article, you've learned why a task runner is useful, how to install Gulp, and how to create several tasks of varying complexity to automate building your project and other useful tasks.

Packt
20 Jan 2014
7 min read

Intents for Mobile Components

(For more resources related to this topic, see here.)

Common mobile components

Due to the open source nature of the Android operating system, many different companies, such as HTC and Samsung, have ported the Android OS to their devices with many different functionalities and styles. Each Android phone is unique in some way or the other and possesses many unique features and components different from other brands and phones. But there are some components that are found to be common to all Android phones.

We are using two key terms here: components and features. A component is the hardware part of an Android phone, such as the camera, Bluetooth, and so on. A feature is the software part of an Android phone, such as the SMS feature, e-mail feature, and so on. This article is all about hardware components, their access, and their use through intents. These common components can generally be used and implemented independently of any mobile phone or model. And there is no doubt that intents are the best asynchronous messages to activate these Android components. These intents are used to trigger the Android OS when some event occurs and some action should be taken. Android, on the basis of the data received, determines the receiver for the intent and triggers it. Here are a few common components found in every Android phone:

The Wi-Fi component

Each Android phone comes with complete support for the Wi-Fi connectivity component. Newer Android phones, with Android version 4.1 and above, support the Wi-Fi Direct feature as well. This allows the user to connect to nearby devices without the need to connect with a hotspot or network access point.

The Bluetooth component

An Android phone includes Bluetooth network support that allows the users of Android phones to exchange data wirelessly at low range with other devices. The Android application framework provides developers with access to Bluetooth functionality through the Android Bluetooth APIs.
The Cellular component

No mobile phone is complete without a cellular component. Each Android phone has a cellular component for mobile communication through SMS, calls, and so on. The Android system provides very flexible APIs to utilize telephony and cellular components to create very interesting and innovative apps.

Global Positioning System (GPS) and geo-location

GPS is a very useful but battery-consuming component in any Android phone. It is used for developing location-based apps for Android users. Google Maps is the best feature related to GPS and geo-location. Developers have produced many innovative apps and games utilizing Google Maps and the GPS component in Android.

The Geomagnetic field component

The geomagnetic field component is found in most Android phones. This component is used to estimate the magnetic field of an Android phone at a given point on the Earth and, in particular, to compute the magnetic declination from the North. The geomagnetic field component uses the World Magnetic Model produced by the United States National Geospatial-Intelligence Agency. The current model that is being used for the geomagnetic field is valid until 2015. Newer Android phones will have the newer version of the geomagnetic field model.

Sensor components

Most Android devices have built-in sensors that measure motion, orientation, environmental conditions, and so on. These sensors sometimes act as the brains of the app. For example, they take actions on the basis of the mobile's surroundings (weather) and allow users to have an automatic interaction with the app. These sensors provide raw data with high precision and accuracy for measuring the respective sensor values. For example, the gravity sensor can be used to track gestures and motions, such as tilt, shake, and so on, in any app or game.
Similarly, a temperature sensor can be used to detect the mobile's temperature, or a geomagnetic sensor (as introduced in the previous section) can be used in any travel application to track the compass bearing. Broadly, there are three categories of sensors in Android: motion, position, and environmental sensors. The following subsections discuss these types of sensors briefly.

Motion sensors

Motion sensors let the Android user monitor the motion of the device. There are both hardware-based sensors, such as the accelerometer and gyroscope, and software-based sensors, such as the gravity, linear acceleration, and rotation vector sensors. Motion sensors are used to detect a device's motion, including tilt, shake, rotation, swing, and so on. If used properly, these effects can make any app or game very interesting and flexible, and can provide a great user experience.

Position sensors

The two position sensors, the geomagnetic sensor and the orientation sensor, are used to determine the position of the mobile device. Another sensor, the proximity sensor, lets the user determine how close the face of a device is to an object. For example, when we get a call on an Android phone, placing the phone on the ear shuts off the screen, and when we hold the phone back in our hands, the screen display appears automatically. This simple application uses the proximity sensor to detect the ear (the object) with the face of the device (the screen).

Environmental sensors

These sensors are not used much in Android apps, but are used widely by the Android system to detect a lot of little things. For example, the temperature sensor is used to detect the temperature of the phone, and can be used to help preserve battery and device life. At the time of writing this article, the Samsung Galaxy S4 Android phone had just been launched.
The phone has shown a great use of environmental gestures by allowing users to perform actions, such as making calls, with no-touch gestures such as moving your hand or face in front of the phone.

Components and intents

Android phones contain a large number of components and features. This is beneficial to both Android developers and users. Android developers can use these mobile components and features to customize the user experience. For most components, developers get two options: either they extend the components and customize them according to their application requirements, or they use the built-in interfaces provided by the Android system. We won't cover the first choice of extending components, as it is beyond the scope of this article. However, we will study the other option of using the built-in interfaces for mobile components.

Generally, to use any mobile component from our Android app, developers send intents to the Android system, and then Android takes the action accordingly to call the respective component. Intents are asynchronous messages sent to the Android OS to perform some functionality. Most of the mobile components can be triggered by intents with just a few lines of code and can be utilized fully by developers in their apps. In the following sections of this article, we will see a few components and how they are used and triggered by intents, with practical examples. We have divided the components in three ways: communication components, media components, and motion components. Now, let's discuss these components in the following sections.

Communication components

Any mobile phone's core purpose is communication. Android phones provide a lot of features other than communication features. Android phones contain SMS/MMS, Wi-Fi, and Bluetooth for communication purposes. This article focuses on the hardware components, so we will discuss only Wi-Fi and Bluetooth.
The Android system provides built-in APIs to manage and use Bluetooth devices, settings, discoverability, and much more. It offers full network APIs not only for Bluetooth but also for Wi-Fi, hotspots, configuring settings, Internet connectivity, and much more. More importantly, these APIs and components can be used very easily through intents by writing a few lines of code. We will start by discussing Bluetooth, and how we can use Bluetooth through intents, in the next section.

Packt
10 Feb 2014
8 min read

XamChat – a Cross-platform App

(For more resources related to this topic, see here.)

Describing our sample application concept

The concept is simple: a chat application that uses a standard Internet connection as an alternative to sending text messages. There are several popular applications like this in the Apple App Store, probably due to the cost of text messaging and support for devices such as the iPod Touch or iPad. This should be a neat real-world example that could be useful for users, and it will cover specific topics in developing applications for iOS and Android.

Before starting with the development, let's list the set of screens that we'll need:

Login / sign up: This screen will include a standard login and sign-up process for the user
List of conversations: This screen will include a button to start a new conversation
List of friends: This screen will provide a way to add new friends when we start a new conversation
Conversation: This screen will have a list of messages between you and another user, and an option to reply

A quick wireframe layout of the application will help us gain a better understanding of the layout of the app. The following figure shows the set of screens to be included in your app:

Developing our model layer

Since we have a good idea of what the application is, the next step is to develop the business objects, or model layer, of this application. Let's start out by defining a few classes that will contain the data to be used throughout the app. It is recommended, for the sake of organization, to add these to a Models folder in your project. Let's begin with a class representing a user.
The class can be created as follows:

public class User
{
    public int Id { get; set; }
    public string Username { get; set; }
    public string Password { get; set; }
}

Pretty straightforward so far; let's move on to create classes representing a conversation and a message as follows:

public class Conversation
{
    public int Id { get; set; }
    public int UserId { get; set; }
    public string Username { get; set; }
}

public class Message
{
    public int Id { get; set; }
    public int ConversationId { get; set; }
    public int UserId { get; set; }
    public string Username { get; set; }
    public string Text { get; set; }
}

Notice that we are using integers as identifiers for the various objects. UserId is the value that would be set by the application to identify the user that the object is associated with.

Now let's go ahead and set up our solution by performing the following steps:

Start by creating a new solution and a new C# Library project.
Name the project XamChat.Core and the solution XamChat.
Next, let's set the library to a Mono / .NET 4.5 project. This setting is found in the project options dialog under Build | General | Target Framework. You could also choose to use Portable Library for this project.

Writing a mock web service

Many times when developing a mobile application, you may need to begin the development of your application before the real backend web service is available. To prevent the development from halting entirely, a good approach would be to develop a mock version of the service.

First, let's break down the operations our app will perform against a web server. The operations are as follows:

Log in with a username and password.
Register a new account.
Get the user's list of friends.
Add friends by their usernames.
Get a list of the existing conversations for the user.
Get a list of messages in a conversation.
Send a message.

Now let's define an interface that offers a method for each scenario.
The interface is as follows:

public interface IWebService
{
    Task<User> Login(string username, string password);
    Task<User> Register(User user);
    Task<User[]> GetFriends(int userId);
    Task<User> AddFriend(int userId, string username);
    Task<Conversation[]> GetConversations(int userId);
    Task<Message[]> GetMessages(int conversationId);
    Task<Message> SendMessage(Message message);
}

As you can see, we're using asynchronous communication with the TPL (Task Parallel Library) technology. Since communicating with a web service can be a lengthy process, it is always a good idea to use the Task<T> class for these operations. Otherwise, you could inadvertently run a lengthy task on the user interface thread, which would prevent user input during the operation. Task is definitely needed for web requests, since users could easily be using a cellular Internet connection on iOS and Android, and it will give us the ability to use the async and await keywords down the road.

Now let's implement a fake service that implements this interface. Place classes such as FakeWebService in the Fakes folder of the project. Let's start with the class declaration and the first method of the interface:

public class FakeWebService : IWebService
{
    public int SleepDuration { get; set; }

    public FakeWebService()
    {
        SleepDuration = 1;
    }

    private Task Sleep()
    {
        return Task.Delay(SleepDuration);
    }

    public async Task<User> Login(
        string username, string password)
    {
        await Sleep();
        return new User { Id = 1, Username = username };
    }
}

We started off with a SleepDuration property to store a number in milliseconds. This is used to simulate an interaction with a web server, which can take some time. It is also useful to change the SleepDuration value in different situations. For example, you might want to set this to a small number when writing unit tests so that the tests execute quickly.
Next, we implemented a simple Sleep method that returns a task introducing a delay of a number of milliseconds. This method will be used throughout the fake service to cause a delay in each operation. Finally, the Login method merely used an await call on the Sleep method and returned a new User object with the appropriate Username. For now, any username or password combination will work; however, you may wish to write some code here to check specific credentials.

Now, let's implement a few more methods to continue our FakeWebService class as follows:

public async Task<User> Register(User user)
{
    await Sleep();
    return user;
}

public async Task<User[]> GetFriends(int userId)
{
    await Sleep();
    return new[]
    {
        new User { Id = 2, Username = "bobama" },
        new User { Id = 3, Username = "bobloblaw" },
        new User { Id = 4, Username = "gmichael" },
    };
}

public async Task<User> AddFriend(
    int userId, string username)
{
    await Sleep();
    return new User { Id = 5, Username = username };
}

For each of these methods, we followed exactly the same pattern as the Login method. Each method will delay and return some sample data. Feel free to mix in your own values. Now, let's implement the GetConversations method required by the interface as follows:

public async Task<Conversation[]> GetConversations(int userId)
{
    await Sleep();
    return new[]
    {
        new Conversation { Id = 1, UserId = 2 },
        new Conversation { Id = 2, UserId = 3 },
        new Conversation { Id = 3, UserId = 4 },
    };
}

Basically, we just create a new array of Conversation objects with arbitrary IDs. We also make sure to match up the UserId values with the IDs we've used for the User objects so far.
Next, let's implement GetMessages to retrieve a list of messages as follows:

public async Task<Message[]> GetMessages(int conversationId)
{
    await Sleep();
    return new[]
    {
        new Message
        {
            Id = 1,
            ConversationId = conversationId,
            UserId = 2,
            Text = "Hey",
        },
        new Message
        {
            Id = 2,
            ConversationId = conversationId,
            UserId = 1,
            Text = "What's Up?",
        },
        new Message
        {
            Id = 3,
            ConversationId = conversationId,
            UserId = 2,
            Text = "Have you seen that new movie?",
        },
        new Message
        {
            Id = 4,
            ConversationId = conversationId,
            UserId = 1,
            Text = "It's great!",
        },
    };
}

Once again, we are adding some arbitrary data here, mainly making sure that UserId and ConversationId match our existing data so far. And finally, we will write one more method to send a message as follows:

public async Task<Message> SendMessage(Message message)
{
    await Sleep();
    return message;
}

Most of these methods are very straightforward. Note that the service doesn't have to work perfectly; it should merely complete each operation successfully with a delay. Each method should also return test data of some kind to be displayed in the UI. This will give us the ability to implement our iOS and Android applications while filling in the web service later.

Next, we need to implement a simple interface for persisting application settings. Let's define an interface named ISettings as follows:

public interface ISettings
{
    User User { get; set; }
    void Save();
}

Note that you might want to set up the Save method to be asynchronous and return Task if you plan on storing settings in the cloud. We don't really need this with our application, since we will only be saving our settings locally. Later on, we'll implement this interface on each platform using Android and iOS APIs. For now, let's just implement a fake version that will be used later when we write unit tests.
The fake version is created by the following lines of code:

public class FakeSettings : ISettings
{
    public User User { get; set; }
    public void Save() { }
}

Note that the fake version doesn't actually need to do anything; we just need to provide a class that implements the interface and doesn't throw any unexpected errors. This completes the model layer of the application. Here is a final class diagram of what we have implemented so far:

Packt
07 Aug 2015
4 min read

The Camera API

In this article by Purusothaman Ramanujam, the author of PhoneGap Beginner's Guide Third Edition, we will look at the Camera API. The Camera API provides access to the device's camera application using the Camera plugin, identified by the cordova-plugin-camera key. With this plugin installed, an app can take a picture or gain access to a media file stored in the photo library and albums that the user created on the device. The Camera API exposes the following two methods, defined in the navigator.camera object:

getPicture: This opens the default camera application or allows the user to browse the media library, depending on the options specified in the configuration object that the method accepts as an argument
cleanup: This cleans up any intermediate photo file available in the temporary storage location (supported only on iOS)

(For more resources related to this topic, see here.)

As arguments, the getPicture method accepts a success handler, a failure handler, and optionally an object used to specify several camera options through its properties, as follows:

quality: This is a number between 0 and 100 used to specify the quality of the saved image.
destinationType: This is a number used to define the format of the value returned in the success handler. The possible values are stored in the following Camera.DestinationType pseudo constants:
DATA_URL (0): This indicates that the getPicture method will return the image as a Base64-encoded string
FILE_URI (1): This indicates that the method will return the file URI
NATIVE_URI (2): This indicates that the method will return a platform-dependent file URI (for example, assets-library:// on iOS or content:// on Android)
sourceType: This is a number used to specify where the getPicture method can access an image.
The possible values are stored in the Camera.PictureSourceType pseudo constants: PHOTOLIBRARY (0), CAMERA (1), and SAVEDPHOTOALBUM (2):
PHOTOLIBRARY: This indicates that the method will get an image from the device's library
CAMERA: This indicates that the method will grab a picture from the camera
SAVEDPHOTOALBUM: This indicates that the user will be prompted to select an album before picking an image
allowEdit: This is a Boolean value (true by default) used to indicate that the user can make small edits to the image before confirming the selection; it works only on iOS.
encodingType: This is a number used to specify the encoding of the returned file. The possible values are stored in the Camera.EncodingType pseudo constants: JPEG (0) and PNG (1).
targetWidth and targetHeight: These are the width and height, in pixels, to which you want the captured image to be scaled; it's possible to specify only one of the two options. When both are specified, the image will be scaled to the value that results in the smallest aspect ratio (the aspect ratio of an image describes the proportional relationship between its width and height).
mediaType: This is a number used to specify what kind of media files have to be returned when the getPicture method is called using the Camera.PictureSourceType.PHOTOLIBRARY or Camera.PictureSourceType.SAVEDPHOTOALBUM pseudo constants as sourceType; the possible values are stored in the Camera.MediaType object as pseudo constants and are PICTURE (0), VIDEO (1), and ALLMEDIA (2).
correctOrientation: This is a Boolean value that forces the device camera to correct the device orientation during the capture.
cameraDirection: This is a number used to specify which device camera has to be used during the capture. The values are stored in the Camera.Direction object as pseudo constants and are BACK (0) and FRONT (1).
popoverOptions: This is an object supported on iOS to specify the anchor element location and arrow direction of the popover used on the iPad when selecting images from the library or an album.
saveToPhotoAlbum: This is a Boolean value (false by default) used to save the captured image in the device's default photo album.

The success handler receives an argument that contains either the URI to the file or the data stored in a Base64-encoded string, depending on the value stored in the destinationType property of the options object. The failure handler receives a string containing the device's native error message as an argument.

Similarly, the cleanup method accepts a success handler and a failure handler. The only difference between the two is that the success handler doesn't receive any argument. The cleanup method is supported only on iOS and can be used when the sourceType property value is Camera.PictureSourceType.CAMERA and the destinationType property value is Camera.DestinationType.FILE_URI.

Summary

In this article, we looked at the various properties available with the Camera API.
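To make the options above concrete, here is a hedged sketch. The pseudo-constant values are hard-coded from the list above (DATA_URL = 0, FILE_URI = 1, and so on) only so the sketch can run outside a device; on a device you would use the Camera.* objects the plugin provides. buildCameraOptions and fitWithin are our own illustrative helpers, not part of the plugin.

```javascript
// Numeric values as documented for the plugin's pseudo constants.
const DestinationType = { DATA_URL: 0, FILE_URI: 1, NATIVE_URI: 2 };
const PictureSourceType = { PHOTOLIBRARY: 0, CAMERA: 1, SAVEDPHOTOALBUM: 2 };
const EncodingType = { JPEG: 0, PNG: 1 };

// Build an options object for getPicture, with example defaults that any
// call site can override.
function buildCameraOptions(overrides) {
  return Object.assign({
    quality: 75,
    destinationType: DestinationType.FILE_URI,
    sourceType: PictureSourceType.CAMERA,
    encodingType: EncodingType.JPEG,
    correctOrientation: true,
    saveToPhotoAlbum: false
  }, overrides || {});
}

// Illustrative helper: when both targetWidth and targetHeight are given,
// the image is scaled to fit inside that box while keeping its aspect
// ratio, which is equivalent to this calculation.
function fitWithin(width, height, targetWidth, targetHeight) {
  const scale = Math.min(targetWidth / width, targetHeight / height);
  return {
    width: Math.round(width * scale),
    height: Math.round(height * scale)
  };
}

// On a device (after the deviceready event), the call would look like:
// navigator.camera.getPicture(
//   uri => console.log("picture at " + uri),   // success: file URI
//   msg => console.log("failed: " + msg),      // failure: native message
//   buildCameraOptions({ targetWidth: 500, targetHeight: 500 }));
```

For example, a 2000x1000 capture constrained to a 500x500 box comes out as 500x250: the width is the limiting dimension, and the height follows from the aspect ratio.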

Packt
03 Jun 2015
34 min read

Getting Started with LiveCode Mobile

In this article, written by Joel Gerdeen, author of the book LiveCode Mobile Development: Beginner's Guide - Second Edition, we will learn about the following topics:

Sign up for Google Play
Sign up for Amazon Appstore
Download and install the Android SDK
Configure LiveCode so that it knows where to look for the Android SDK
Become an iOS developer with Apple
Download and install Xcode
Configure LiveCode so that it knows where to look for iOS SDKs
Set up simulators and physical devices
Test a stack in a simulator and physical device

(For more resources related to this topic, see here.)

Disclaimer

This article references many Internet pages that are not under our control. Where we do show screenshots or URLs, remember that the content may have changed since we wrote this. The suppliers may also have changed some of the details, but in general, our description of the procedures should still work the way we have described them. Here we go...

iOS, Android, or both?

It could be that you only have an interest in iOS or Android. You should be able to easily skip to the sections you're interested in, unless you're intrigued about how the other half works! If, like me, you're a capitalist, then you should be interested in both operating systems. Far fewer steps are needed to get the Android SDK than the iOS developer tools, because for iOS we have to sign up as a developer with Apple. However, the configuration for Android is more involved. We'll go through all the steps for Android and then the ones for iOS. If you're an iOS-only kind of person, skip the next few pages and start up again at the Becoming an iOS Developer section.

Becoming an Android developer

It is possible to develop Android OS apps without signing up for anything. We'll try to be optimistic and assume that within the next 12 months, you will find time to make an awesome app that will make you rich!
To that end, we'll go over everything that is involved in the process of signing up to publish your apps in both Google Play (formerly known as Android Market) and the Amazon Appstore.

Google Play

The starting location for Google Play is http://developer.android.com/:

We will come back to this page again shortly to download the Android SDK, but for now, click on the Distribute link in the menu bar and then on the Developer Console button on the following screen. Since Google changes these pages occasionally, you can use the URL https://play.google.com/apps/publish/ or search for "Google Play Developer Console". The screens you will progress through are not shown here, since they tend to change with time. There will be a sign-in page; sign in using your usual Google details.

Which e-mail address to use?

Some Google services are easier to sign up for if you have a Gmail account. Creating a Google+ account, or signing up for some of their cloud services, requires a Gmail address (or so it seemed to me at the time!). If you have previously set up Google Wallet as part of your account, some of the steps in signing up become simpler. So, use your Gmail address, and if you don't have one, create one!

Google charges you a $25 fee to sign up for Google Play. At least now you know about this! Enter the developer name, e-mail address, website URL (if you have one), and your phone number. The payment of $25 will be done through Google Wallet, which will save you from entering the billing details yet again. Now you're all signed up and ready to make your fortune!

Amazon Appstore

Although the rules and costs for Google Play are fairly relaxed, Amazon has a more Apple-like approach, both in the amount they charge you to register and in the review process used to accept app submissions. The URL to open the Amazon Appstore developer portal is http://developer.amazon.com/public:

Follow these steps to start with the Amazon Appstore: When you select Get Started, you need to sign in to your Amazon account.
Which e-mail address to use?

This feels like déjà vu! There is no real advantage to using your Google e-mail address when signing up for the Amazon Appstore Developer Program, but if you happen to have an account with Amazon, sign in with that one. It will simplify the payment stage, and your developer account and your general Amazon account will be associated with each other.

You are then asked to agree to the Appstore Distribution Agreement terms before learning about the costs. The cost is $99 per year, but the first year is free. So that's good! Unlike the Google Android Market, Amazon asks for your bank details up front, ready to send you lots of money later, we hope! That's it; you're ready to make another fortune to go along with the one that Google sent you!

Pop quiz – when is something too much?

You're at the end of developing your mega app, it's 49.5 MB in size, and you just need to add title screen music. Why would you not add the two-minute epic tune you have lined up?

1. It would take too long to load.
2. People tend to skip the title screen soon anyway.
3. The file size is going to be over 50 MB.
4. Heavy metal might not be appropriate for a children's storybook app!

Answer: 3

The other answers are valid too, though you could play the music as an external sound to reduce loading time; but if your file size goes over 50 MB, you would cut out potential sales from people who are connected by cellular and not wireless networks. At the time of writing this article, all the stores require that you be connected to the site via a wireless network if you intend to download apps that are over 50 MB.

Downloading the Android SDK

Head back to http://developer.android.com/ and click on the Get the SDK link, or go straight to http://developer.android.com/sdk/index.html. This page defaults to the OS that you are running on.
Click on the Other Download Options link to see the full set of options for other systems, as shown here:

In this article, we're only going to cover Windows and Mac OS X (Intel), and only as much as is needed to make LiveCode work with the Android and iOS SDKs. If you intend to make native Java-based applications, you may be interested in reading through all the steps that are described on the web page http://developer.android.com/sdk/installing.html. Click on the SDK download link for your platform. Note that you don't need the ADT Bundle unless you plan to develop outside the LiveCode IDE. The steps you'll have to go through are different for Mac and Windows. Let's start with Mac.

Installing the Android SDK on Mac OS X (Intel)

LiveCode itself doesn't require an Intel Mac; you can develop stacks using a PowerPC-based Mac, but both the Android SDK and some of the iOS tools require an Intel-based Mac, which sadly means that if you're reading this as you sit next to your Mac G4 or G5, you're not going to get too far!

The Android SDK requires the Java Runtime Environment (JRE). Since Apple stopped including the JRE in more recent OS X systems, you should check whether you have it in your system by typing java -version in a Terminal window. The terminal will display the version of Java installed. If not, you may get a message like the following:

Click on the More Info button and follow the instructions to install the JRE and verify its installation. At the time of writing this article, JRE 8 doesn't work with OS X 10.10, and I had to use JRE 6, obtained from http://support.apple.com/kb/DL1572.

The file that you just downloaded will automatically expand to show a folder named android-sdk-macosx. It may be in your Downloads folder right now, but a more natural place for it would be in your Documents folder, so move it there before performing the next steps. There is an SDK readme text file that lists the steps you need to follow during the installation.
If these steps are different to what we have here, then follow the steps in the readme file in case they have been updated since the procedure here was written. Open the Terminal application, which is in Applications/Utilities. You need to change directory to the android-sdk-macosx folder. One handy trick, using Terminal, is that you can drag items into the Terminal window to get the file path to that item. Using this trick, you can type cd and a space in the Terminal window and then drag the android-sdk-macosx folder after the space character. You'll end up with this line if your username is Fred:

new-host-3:~ fred$ cd /Users/fred/Documents/android-sdk-macosx

Of course, the first part of the line and the user folder will match yours, not Fred's! Whatever your name is, press the Return or Enter key after entering the preceding line. The location line now changes to look like this:

new-host-3:android-sdk-macosx fred$

Either carefully type or copy and paste the following line from the readme file:

tools/android update sdk --no-ui

Press Return or Enter again. How long the download takes depends on your Internet connection. Even with a very fast Internet connection, it could still take over an hour. If you care to follow the update progress, you can just run the android file in the tools directory. This will open the Android SDK Manager, which is similar to the Windows version shown a couple of pages further on in this article.
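The Terminal sequence above can be condensed into a short script. This is a hedged sketch, not part of the SDK itself: SDK_ROOT assumes you moved the SDK into Documents as suggested, and the update call is echoed rather than executed so the sketch is safe to dry-run.

```shell
# Condensed sketch of the Mac setup steps above.
# SDK_ROOT is an assumption -- adjust if you unpacked the SDK elsewhere.
SDK_ROOT="$HOME/Documents/android-sdk-macosx"

# Check for a Java runtime first (java -version reports on stderr).
if java -version 2>/dev/null; then
  echo "Java runtime found"
else
  echo "No Java runtime; install the JRE first (see above)"
fi

# The actual update call; echoed here so the sketch is safe to dry-run.
UPDATE_CMD="$SDK_ROOT/tools/android update sdk --no-ui"
echo "Next, run: $UPDATE_CMD"
```

If you keep the SDK somewhere else, only SDK_ROOT needs to change; the tools/android path is fixed inside the SDK folder.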
In the following case, as the JDK wasn't installed, a dialog box appears telling you to go to Oracle's site to get the JDK: If you see this screen too, you can leave the dialog box open and click on the Visit java.oracle.com button. On the Oracle page, click on a checkbox to agree to their terms and then on the download link that corresponds with your platform. Choose the 64-bit option if you are running a 64-bit version of Windows or the x86 option if you are running a 32-bit version of Windows. Either way, you're greeted with another installer that you can Run or Save as you prefer. Naturally, it takes a while for the installer to do its thing too! When the installation is complete, you will see a JDK registration page and it's up to you to register or not. Back at the Android SDK installer dialog box, you can click on the Back button and then the Next button to get back to the JDK checking stage; only now, it sees that you have the JDK installed. Complete the remaining steps of the SDK installer as you would with any Windows installer. One important thing to note is that the last screen of the installer offers to open the SDK Manager. You should do that, so resist the temptation to uncheck that box! Click on Finish and you'll be greeted with a command-line window for a few moments, as shown in the following screenshot, and then, the Android SDK Manager will appear and do its thing: As with the Mac version, it takes a very long time for all these add-ons to download.

Pointing LiveCode to the Android SDK

After all the installation and command-line work, it's a refreshing change to get back to LiveCode! Open the LiveCode Preferences and choose Mobile Support: We will set the two iOS entries after we get iOS going (but these options will be grayed out in Windows). For now, click on the … button next to the Android development SDK root field and navigate to where the SDK is installed.
If you've followed the earlier steps correctly, then the SDK will be in the Documents folder on Mac or you can navigate to C:\Program Files (x86)\Android to find it on Windows (or somewhere else, if you chose to use a custom location). Depending on the APIs that were loaded in the SDK Manager, you may get a message that the path does not include support for Android 2.2 (API 8). If so, use the Android SDK Manager to install it. LiveCode seems to want API 8 even though at this time Android 5.0 uses API 21. Phew! Now, let's do the same for iOS…

Pop quiz – tasty code names

Android OS uses some curious code names for each version. At the time of writing this article, we were on Android OS 5, which had a code name of Lollipop. Version 4.1 was Jelly Bean and version 4.4 was KitKat. Which of these is most likely to be the code name for the next Android OS?

1. Lemon Cheesecake
2. Munchies
3. Noodle
4. Marshmallow

Answer: 4

The pattern, if it isn't obvious, is that the code name takes on the next letter of the alphabet, is a kind of food, but more specifically, it's a dessert. "Munchies" almost works for Android OS 6, but "Marshmallow" or "Meringue Pie" would be better choices!

Becoming an iOS developer

Creating iOS LiveCode applications requires that LiveCode have access to the iOS SDK. This is installed as part of the Xcode developer tools, a Mac-only program. Also, when you upload an app to the iOS App Store, the application used is Mac only and is part of the Xcode installation. If you are a Windows-based developer and wish to develop and publish for iOS, you need either an actual Mac-based system or a virtual machine that can run the Mac OS. We can even use VirtualBox for running a Mac-based virtual machine, but performance will be an issue. Refer to http://apple.stackexchange.com/questions/63147/is-mac-os-x-in-a-virtualbox-vm-suitable-for-ios-development for more information.
The biggest difference between becoming an Android developer and becoming an iOS developer is that you have to sign up with Apple for their developer program even if you never produce an app for the iOS App Store; no such signing up is required to become an Android developer. If things go well and you end up making an app for various stores, then this isn't such a big deal. It will cost you $25 to submit an app to the Android Market, $99 a year (with the first year free) to submit an app to the Amazon Appstore, and $99 a year (including the first year) to be an iOS developer with Apple. Just try to sell more than 300 copies of your amazing $0.99 app and you'll find that it has paid for itself! Note that there is free iOS App Store app licensing included with LiveCode Membership, which also costs $99 per year. As a LiveCode member, you can submit your free non-commercial app to RunRev, who will provide a license that will allow you to submit your app as "closed source" to the iOS App Store. This service is exclusively available for LiveCode members. The first submission each year is free; after that, there is a $25 administration fee per submission. Refer to http://livecode.com/membership/ for more information. You can enroll yourself in the iOS Developer Program at http://developer.apple.com/programs/ios/: While signing up to be an iOS developer, there are a number of possibilities when it comes to your current status. If you already have an Apple ID, which you use with your iTunes or Apple online store purchases, you could choose the I already have an Apple ID… option. In order to illustrate all the steps to sign up, we will start as a brand new user, as shown in the following screenshot: You can choose whether you want to sign up as an individual or as a company.
We will choose Individual, as shown in the following screenshot: With any such sign up process, you need to enter your personal details, set a security question, and enter your postal address: Most Apple software and services have their own legal agreement for you to sign. The one shown in the following screenshot is the general Registered Apple Developer Agreement: In order to verify the e-mail address you have used, a verification code is sent to you with a link in the e-mail; you can click this or enter the code manually. Once you have completed the verification code step, you can then enter your billing details. It could be that you might go on to make LiveCode applications for the Mac App Store, in which case, you will need to add the Mac Developer Program product. For our purpose, we only need to sign up for the iOS Developer Program, as shown in the following screenshot: Each product that you sign up for has its own agreement. Lots of small print to read! The actual purchasing of the iOS developer account is handled through the Apple Store of your own region, shown as follows: As you can see in the next screenshot, it is going to cost you $99 per year or $198 per year if you also sign up for the Mac Developer account. Most LiveCode users won't need to sign up for the Mac Developer account unless their plan is to submit desktop apps to the Mac App Store. After submitting the order, you are rewarded with a message that tells you that you are now registered as an Apple developer! Sadly, you won't get an instant approval, as was the case with Android Market or Amazon Appstore. You have to wait for the approval for five days. In the early iPhone Developer days, the approval could take a month or more, so five days is an improvement!

Pop quiz – iOS code names

You had it easy with the pop quiz about Android OS code names! Not so with iOS. Which of these names is more likely to be a code name for a future version of iOS?
1. Las Vegas
2. Laguna Beach
3. Hunter Mountain
4. Death Valley

Answer: 3

Although not publicized, Apple does use code names for each version of iOS. Previous examples included Big Bear, Apex, Kirkwood, and Telluride. These, and all the others, are apparently ski resorts. Hunter Mountain is a relatively small mountain (3,200 feet), so if it does get used, perhaps it would be a minor update!

Installing Xcode

Once you receive confirmation of becoming an iOS developer, you will be able to log in to the iOS Dev Center at https://developer.apple.com/devcenter/ios/index.action. This same page is used by iOS developers who are not using LiveCode and is full of support documents that can help you create native applications using Xcode and Objective-C. We don't need all the support documents, but we do need to download Xcode itself. In the downloads area of the iOS Dev Center page, you will see a link to the current version of Xcode and a link to get to the older versions as well. The current version is delivered via Mac App Store; when you try the given link, you will see a button that takes you to the App Store application. Installing Xcode from Mac App Store is very straightforward. It's just like buying any other app from the store, except that it's free! It does require you to use the latest version of Mac OS X. Xcode will show up in your Applications folder. If you are using an older system, then you need to download one of the older versions from the developer page. The older Xcode installation process is much like the installation process of any other Mac application: The older version of Xcode takes a long time to get installed, but in the end, you should have the Developer folder or a new Xcode application ready for LiveCode.

Coping with newer and older devices

In early 2012, Apple brought to the market a new version of iPad. The main selling point of this one compared to iPad 2 is that it has a Retina display.
The original iPads have a resolution of 1024 x 768 and the Retina version has a resolution of 2048 x 1536. If you wish to build applications to take advantage of this, you must get the current version of Xcode from Mac App Store and not one of the older versions from the developer page. The new version of Xcode demands that you work on Mac OS 10.10 or its later versions. So, to fully support the latest devices, you may have to update your system software more than you were expecting! But wait, there's more… By using a later version of Xcode, you are missing the iOS SDK versions needed to support older iOS devices, such as the original iPhone and iPhone 3G. Fortunately, the Preferences window in Xcode has a Downloads tab, from which you can install these older SDKs into the new version of Xcode. Typically, Apple only allows you to download one version older than the one that is currently provided in Xcode. There are older versions available, but they are not accepted by Apple for App Store submission.

Pointing LiveCode to the iOS SDKs

Open the LiveCode Preferences and choose Mobile Support: Click on the Add Entry button in the upper-right section of the window to see a dialog box that asks whether you are using Xcode 4.2 or 4.3 or a later version. If you choose 4.2, then go on to select the folder named Developer at the root of your hard drive. For 4.3 or later versions, choose the Xcode application itself in your Applications folder. LiveCode knows where to find the SDKs for iOS.

Before we make our first mobile app…

Now that the required SDKs are installed and LiveCode knows where they are, we can make a stack and test it in a simulator or on a physical device.
We do, however, have to get the simulators and physical devices warmed up…

Getting ready for test development on an Android device

Simulating on iOS is easier than it is on Android, and testing on a physical device is easier on Android than on iOS, but the setting up of physical Android devices can be horrendous!

Time for action – starting an Android Virtual Device

You will have to dig a little deep in the Android SDK folders to find the Android Virtual Device setup program. You might as well provide a shortcut or an alias to it for quicker access. The following steps will help you set up and start an Android virtual device:

1. Navigate to the Android SDK tools folder located at C:\Program Files (x86)\Android\android-sdk on Windows or at Documents/android-sdk-macosx/tools on Mac.
2. Open AVD Manager on Windows or android on Mac (the latter looks like a Unix executable file; just double-click on it and the application will open via a command-line window). If you're on Mac, select Manage AVDs… from the Tools menu.
3. Select Tablet from the list of devices if there is one. If not, you can add your own custom devices as described in the following section.
4. Click on the Start button. Sit patiently while the virtual device starts up!
5. Open LiveCode, create a new Mainstack, and click on Save to save the stack to your hard drive.
6. Navigate to File | Standalone Application Settings….
7. Click on the Android icon and click on the Build for Android checkbox to select it.
8. Close the settings dialog box and take a look at the Development menu. If the virtual machine is up and running, you should see it listed in the Test Target submenu.

Creating an Android Virtual Device

If there are no devices listed when you open the Android Virtual Device (AVD) Manager, you may wish to create a device, so click on the Create button. The following screenshot will appear when you do so.
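If you prefer the command line, the same android tool can create and launch a virtual device directly. The sketch below is hedged: the AVD name is made up for illustration, the target ID varies per installation (list the real IDs first), and the calls are guarded so the script degrades gracefully if the SDK tools are not on your PATH.

```shell
AVD_NAME="LiveCodeTest"   # hypothetical name used only in this sketch

if command -v android >/dev/null 2>&1; then
  android list targets                               # find a valid target ID
  # Target ID 1 is an assumption; "echo no" declines the custom
  # hardware-profile prompt so the command runs non-interactively.
  echo no | android create avd --name "$AVD_NAME" --target 1
  emulator -avd "$AVD_NAME" &                        # boot it in the background
else
  echo "SDK tools not on PATH; use the AVD Manager GUI described above"
fi
```

The GUI route described in this section produces the same result; the command line is just faster once you know your target IDs.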
Further explanation of the various fields can be found at https://developer.android.com/tools/devices/index.html. After you have created a device, you can click on Start to start the virtual device and change some of the Launch Options. You should typically select Scale display to real size unless it is too big for your development screen. Then, click on Launch to fire up the emulator. Further information on how to run the emulator can be found at http://developer.android.com/tools/help/emulator.html.

What just happened?

Now that you've opened an Android virtual device, LiveCode will be able to test stacks using this device. Once it has finished loading, that is!

Connecting a physical Android device

Connecting a physical Android device can be extremely straightforward:

1. Connect your device to the system by USB.
2. Select your device from the Development | Test Target submenu.
3. Select Test from the Development menu or click on the Test button in the Tool Bar.

There can be problem cases though, and Google Search will become your best friend before you are done solving these problems! We should look at an example problem case, so that you get an idea of how to solve similar situations that you may encounter.

Using Kindle Fire

When it comes to finding Android devices, the Android SDK recognizes a lot of them automatically. Some devices are not recognized, and you have to do something to help Android Debug Bridge (ADB) find these devices. ADB is the part of the Android SDK that acts as an intermediary between your device and any software that needs to access the device. In some cases, you will need to go to the Android system on the device to tell it to allow access for development purposes. For example, on an Android 3 (Honeycomb) device, you need to go to the Settings | Applications | Development menu and activate the USB debugging mode.
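For the Mac side, the whole fix described in the next section boils down to a few shell lines. This is a hedged sketch: the SDK location is the Documents folder assumed throughout this article, 0x1949 is Amazon's USB vendor ID, and the file in question is adb_usb.ini (note the b).

```shell
# Make adb reachable from anywhere (path assumes the Documents install).
SDK_ROOT="$HOME/Documents/android-sdk-macosx"
export PATH="$PATH:$SDK_ROOT/platform-tools"

# Register the Kindle Fire vendor ID; skip the append if it is already there.
ADB_INI="$HOME/.android/adb_usb.ini"
mkdir -p "$(dirname "$ADB_INI")"
grep -q '^0x1949$' "$ADB_INI" 2>/dev/null || printf '0x1949\n' >> "$ADB_INI"

# Restart ADB so it rereads the file (left commented for dry runs):
# adb kill-server && adb start-server && adb devices
```

On Windows, the same vendor ID edit applies, but you also have to point the device at the Google USB driver, as the following steps show.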
Before ADB connects to a Kindle Fire device, that device must first be configured so that it allows connection. This is enabled by default on the first generation Kindle Fire device. On all other Kindle Fire models, go to the device settings screen, select Security, and set Enable ADB to On. The original Kindle Fire model comes with USB debugging already enabled, but the ADB system doesn't know about the device at all. You can fix this!

Time for action – adding Kindle Fire to ADB

It only takes one line of text to add Kindle Fire to the list of devices that ADB knows about. The hard part is tracking down the text file to edit and getting ADB to restart after making the required changes. Things are more involved when using Windows than with Mac because you also have to configure the USB driver, so the two systems are shown here as separate steps. The steps to be followed for adding a Kindle Fire to ADB for a Windows OS are as follows: In Windows Explorer, navigate to C:\Users\yourusername\.android where the adb_usb.ini file is located. Open the adb_usb.ini text file in a text editor. The file has no visible line breaks, so it is better to use WordPad than NotePad. On the line after the three instruction lines, type 0x1949. Make sure that there are no blank lines; the last character in the text file would be 9 at the end of 0x1949. Now, save the file. Navigate to C:\Program Files (x86)\Android\android-sdk\extras\google\usb_driver where android_winusb.inf is located. Right-click on the file and in Properties, Security, select Users from the list and click on Edit to set the permissions, so that you are allowed to write the file. Open the android_winusb.inf file in NotePad.
Add the following three lines to the [Google.NTx86] and [Google.NTamd64] sections and save the file:

;Kindle Fire
%SingleAdbInterface% = USB_Install, USB\VID_1949&PID_0006
%CompositeAdbInterface% = USB_Install, USB\VID_1949&PID_0006&MI_01

You need to set the Kindle so that it uses the Google USB driver that you just edited. In the Windows control panel, navigate to Device Manager and find the Kindle entry in the list that is under USB. Right-click on the Kindle entry and choose Update Driver Software…. Choose the option that lets you find the driver on your local drive, navigate to the google\usb_driver folder, and then select it to be the new driver. When the driver is updated, open a command window (a handy trick to open a command window is to use Shift-right-click on the desktop and to choose "Open command window here"). Change directories to where the ADB tool is located by typing:

cd C:\Program Files (x86)\Android\android-sdk\platform-tools

Type the following three lines of code and press Enter after each line:

adb kill-server
adb start-server
adb devices

You should see the Kindle Fire listed (as an obscure looking number) as well as the virtual device if you still have that running.

The steps to be followed for a Mac (MUCH simpler!) system are as follows: Navigate to where the adb_usb.ini file is located. On Mac, in Finder, select the menu by navigating to Go | Go to Folder… and type ~/.android in the box. Open the adb_usb.ini file in a text editor. On the line after the three instruction lines, type 0x1949. Make sure that there are no blank lines; the last character in the text file would be 9 at the end of 0x1949. Save the adb_usb.ini file. Navigate to Utilities | Terminal.
You can let OS X know how to find ADB from anywhere by typing the following line (replace yourusername with your actual username and also change the path if you've installed the Android SDK to some other location):

export PATH=$PATH:/Users/yourusername/Documents/android-sdk-macosx/platform-tools

Now, try the same three lines as we did with Windows:

adb kill-server
adb start-server
adb devices

Again, you should see the Kindle Fire listed here.

What just happened?

I suspect that you're going to have nightmares about all these steps! It took a lot of research on the Web to find out some of these obscure hacks. The general case with Android devices on Windows is that you have to modify the USB driver for the device to be handled using the Google USB driver, and you may have to modify the adb_usb.ini file (on Mac too) for the device to be considered as an ADB-compatible device.

Getting ready for test development on an iOS device

If you carefully went through all these Android steps, especially on Windows, you will hopefully be amused by the brevity of this section! There is a catch though; you can't really test on an iOS device from LiveCode. We'll look at what you have to do instead in a moment, but first, we'll look at the steps required to test an app in the iOS simulator.

Time for action – using the iOS simulator

The initial steps are much like what we did for Android apps, but the process becomes a lot quicker in later steps. Remember, this only applies to a Mac OS; you can only do these things on Windows if you are using a Mac OS in a virtual machine, which may have performance issues. This is most likely not covered by the Mac OS's user agreement! In other words, get a Mac OS if you intend to develop for iOS. The following steps will help you achieve that: Open LiveCode and create a new Mainstack and save the stack to your hard drive. Select File and then Standalone Application Settings…. Click on the iOS icon to select the Build for iOS checkbox.
Close the settings dialog box and take a look at the Test Target menu under Development. You will see a list of simulator options for iPhone and iPad and different versions of iOS. To start the iOS simulator, select an option and click on the Test button. What just happened? This was all it took for us to get the testing done using the iOS simulators! To test on a physical iOS device, we need to create an application file first. Let's do that. Appiness at last! At this point, you should be able to create a new Mainstack, save it, select either iOS or Android in the Standalone Settings dialog box, and be able to see simulators or virtual devices in the Development/Test menu item. In the case of an Android app, you will also see your device listed if it is connected via USB at the time. Time for action – testing a simple stack in the simulators Feel free to make things that are more elaborate than the ones we have made through these steps! The following instructions make an assumption that you know how to find things by yourself in the object inspector palette: Open LiveCode, create a new Mainstack, and save it someplace where it is easy to find in a moment from now. Set the card window to the size 480 x 320 and uncheck the Resizable checkbox. Drag a label field to the top-left corner of the card window and set its contents to something appropriate. Hello World might do. If you're developing on Windows, skip to step 11. Open the Standalone Application Settings dialog box, click on the iOS icon, and click on the Build for iOS checkbox. Under Orientation Options, set the iPhone Initial Orientation to Landscape Left. Close the dialog box. Navigate to the Development | Test Target submenu and choose an iPhone Simulator. Select Test from the Development menu. You should now be able to see your test stack running in the iOS simulator! As discussed earlier, launch the Android virtual device. 
Open the Standalone Application Settings dialog box, click on the Android icon, and click on the Build for Android checkbox. Under User Interface Options, set the Initial Orientation to Landscape. Close the dialog box. If the virtual device is running by now, do whatever it takes to get past the locked home screen, if that's what it is showing. From the Development/Test Target submenu, choose the Android emulator. Select Test from the Development menu. You should now see your test stack running in the Android emulator!

What just happened?

All being well, you just made and ran your first mobile app on both Android and iOS! For an encore, we should try this on physical devices, if only to give Android a chance to show how easily it can be done. There is a whole can of worms we didn't open yet that has to do with getting an iOS device configured, so that it can be used for testing. You could visit the iOS Provisioning Portal at https://developer.apple.com/ios/manage/overview/index.action and look at the How To tab in each of the different sections.

Time for action – testing a simple stack on devices

Now, let's try running our tests on physical devices. Get your USB cables ready and connect the devices to your computer. Let's go through the steps for an Android device first: You should still have Android selected in Standalone Application Settings. Get your device to its home screen past the initial Lock screen if there is one. Choose Development/Test Target and select your Android device. It may well say "Android" and a very long number. Choose Development/Test. The stack should now be running on your Android device. Now, we'll go through the steps to test a simple stack on an iOS device: Change the Standalone Application Settings back to iOS. Under Basic Application Settings of the iOS settings is a Profile drop-down menu of the provisioning files that you have installed. Choose one that is configured for the device you are going to test.
Close the dialog box and choose Save as Standalone Application… from the File menu. In Finder, locate the folder that was just created and open it to reveal the app file itself. As we didn't give the stack a sensible name, it will be named Untitled 1. Open Xcode, which is in your Applications folder (or in the Developer folder if you installed an older version). In Xcode, choose Devices from the Window menu if it isn't already selected. You should see your device listed. Select it and if you see a button labeled Use for Development, click on that button. Drag the app file straight from the Finder window to your device in the Devices window. You should see a green circle with a + sign. You can also click on the + sign below Installed Apps and locate your app file in the Finder window. You can also replace or delete an installed app from this window. You can now open the app on your iOS device!

What just happened?

In addition to getting a test stack to work on real devices, we also saw how easy it is, once it's all configured, to test a stack straight on an Android device. If you are developing an app that is to be deployed on both Android and iOS, you may find that the fastest way to work is to use the iOS Simulator for iOS tests, and to test directly on an Android device instead of using the Android SDK virtual devices.

Have a go hero – Nook

Until recently, the Android support for the Nook Color from Barnes & Noble wasn't good enough to install LiveCode apps. It seems to have improved though and could well be another worthwhile app store for you to target. Investigate about the sign up process, download their SDK, and so on. With any luck, some of the processes that you've learned while signing up for the other stores will also apply to the Nook store. You can start the signing up process at https://nookdeveloper.barnesandnoble.com.
Further reading The SDK providers, Google and Apple, have extensive pages of information on how to set up development environments, create certificates and provisioning files, and so on. The information covers a lot of topics that don't apply to LiveCode, so try not to get lost! These URLs would be good starting points if you want to read further: http://developer.android.com/ http://developer.apple.com/ios/ Summary Signing up for programs, downloading files, using command lines all over the place, and patiently waiting for the Android emulator to launch. Fortunately, you only have to go through it once. In this article, we worked through a number of tasks that you have to do before you create a mobile app in LiveCode. We had to sign up as an iOS developer before we could download and install Xcode and iOS SDKs. We then downloaded and installed the Android SDK and configured LiveCode for devices and simulators. We also covered some topics that will be useful once you are ready to upload a finished app. We showed you how to sign up for the Android Market and Amazon Appstore. There will be a few more mundane things that we have to cover at the end of the article, but not for a while! Next up, we will start to play with some of the special abilities of mobile devices. Resources for Article: Further resources on this subject: LiveCode: Loops and Timers [article] Creating Quizzes [article] Getting Started with LiveCode for Mobile [article]

Making POIApp Location Aware
Packt
10 Jan 2014
4 min read
(For more resources related to this topic, see here.)

Location services

While working with location services on the Android platform, you will primarily work with an instance of LocationManager. The process is fairly straightforward, as follows:

1. Obtain a reference to an instance of LocationManager.
2. Use the instance of LocationManager to request location change notifications, either ongoing or a single notification.
3. Process OnLocationChanged() callbacks.

Android devices generally provide two different means for determining a location: GPS and Network. When requesting location change notifications, you must specify the provider you wish to receive updates from. The Android platform defines a set of string constants for the following providers:

GPS_PROVIDER (gps): This provider determines a location using satellites. Depending on conditions, this provider may take a while to return a location fix. This requires the ACCESS_FINE_LOCATION permission.

NETWORK_PROVIDER (network): This provider determines a location based on the availability of a cell tower and Wi-Fi access points. Its results are retrieved by means of a network lookup.

PASSIVE_PROVIDER (passive): This provider can be used to passively receive location updates when other applications or services request them, without actually having to request the locations yourself. It requires the ACCESS_FINE_LOCATION permission, although if the GPS is not enabled, this provider might only return coarse fixes.

You will notice specific permissions in the provider descriptions that must be set on an app for it to be used.

Setting app permissions

App permissions are specified in the AndroidManifest.xml file. To set the appropriate permissions, perform the following steps: Double-click on Properties/AndroidManifest.xml in the Solution pad. The file will be opened in the manifest editor.
There are two tabs at the bottom of the screen, Application and Source, which can be used to toggle between viewing a form for editing the file or the raw XML, as follows: In the Required permissions list, check AccessCoarseLocation, AccessFineLocation, and Internet. Select File | Save. Switch to the Source view to view the XML as follows:

Configuring the emulator

To use an emulator for development, this article will require the emulator to be configured with Google APIs so that the address lookup and navigation to the map app works. To install and configure Google APIs, perform the following steps:

1. From the main menu, select Tools | Open Android SDK Manager.
2. Select the platform version you are using, check Google APIs, and click on Install 1 package…, as seen in the following screenshot:
3. After the installation is complete, close the Android SDK Manager and from the main menu, select Tools | Open Android Emulator Manager.
4. Select the emulator you want to configure and click on Edit.
5. For Target, select the Google APIs entry for the API level you want to work with.
6. Click on OK to save.

Obtaining an instance of LocationManager

The LocationManager class is a system service that provides access to the location and bearing of a device, if the device supports these services. You do not explicitly create an instance of LocationManager; instead, you request an instance from a Context object using the GetSystemService() method. In most cases, the Context object is a subtype of Activity. The following code depicts declaring a reference of a LocationManager class and requesting an instance:

LocationManager _locMgr;
// ...
_locMgr = GetSystemService (Context.LocationService) as LocationManager;

Requesting location change notifications

The LocationManager class provides a series of overloaded methods that can be used to request location update notifications. If you simply need a single update, you can call RequestSingleUpdate(); to receive ongoing updates, call RequestLocationUpdates().
Prior to requesting location updates, you must identify the location provider that should be used. In our case, we simply want to use the most accurate provider available at the time. This can be accomplished by specifying the criteria for the desired provider using an instance of Android.Location.Criteria. The following code example shows how to specify the minimum criteria:

Criteria criteria = new Criteria();
criteria.Accuracy = Accuracy.NoRequirement;
criteria.PowerRequirement = Power.NoRequirement;

Now that we have the criteria, we are ready to request updates as follows:

_locMgr.RequestSingleUpdate (criteria, this, null);

Summary

In this article, we stepped through integrating POIApp with location services and the Google map app. We covered the options developers have for making their apps location aware, added logic to determine a device's location and the address of that location, and displayed a location within the map app.

Resources for Article: Further resources on this subject: Creating and configuring a basic mobile application [Article] Creating Dynamic UI with Android Fragments [Article] So, what is Spring for Android? [Article]
Packt
26 Dec 2014
10 min read

Application Connectivity and Network Events

 In this article by Kerri Shotts, author of PhoneGap for Enterprise, we will see how an app reacts to the network changes and activities. In an increasingly connected world, mobile devices aren't always connected to the network. As such, the app needs to be sensitive to changes in the device's network connectivity. It also needs to be sensitive to the type of network (for example, cellular versus wired), not to mention being sensitive to the device the app itself is running on. Given all this, we will cover the following topics: Determining network connectivity Getting the current network type Detecting changes in connectivity Handling connectivity issues (For more resources related to this topic, see here.) Determining network connectivity In a perfect world, we'd never have to worry if the device was connected to the Internet or not, and if our backend was reachable. Of course, we don't live in that world, so we need to respond appropriately when the device's network connectivity changes. What's critical to remember is that having a network connection in no way determines the reachability of a host. That is to say, it's entirely possible for a device to be connected to a Wi-Fi network or a mobile hotspot and yet is unable to contact your servers. This can happen for several reasons (any of which can prevent proper communication with your backend). In short, determining the network status and being sensitive to changes in the status really tells you only one thing: whether or not it is futile to attempt communication. After all, if the device isn't connected to any network, there's no reason to attempt communication over a nonexistent network. On the other hand, if a network is available, the only way to determine if your hosts are reachable or not is to try and contact them. The ability to determine the device's network connectivity and respond to changes in the status is not available in Cordova/PhoneGap by default. 
You'll need to add a plugin before you can use this particular feature. You can install the plugin as follows:

cordova plugin add org.apache.cordova.network-information

The plugin's complete documentation is available at: https://github.com/apache/cordova-plugin-network-information/blob/master/doc/index.md.

Getting the current network type

Anytime after the deviceready event fires, you can query the plugin for the status of the current network connection by querying navigator.connection.type:

var networkType = navigator.connection.type;
switch (networkType) {
  case Connection.UNKNOWN:
    console.log("Unknown connection.");
    break;
  case Connection.ETHERNET:
    console.log("Ethernet connection.");
    break;
  case Connection.WIFI:
    console.log("Wi-Fi connection.");
    break;
  case Connection.CELL_2G:
    console.log("Cellular (2G) connection.");
    break;
  case Connection.CELL_3G:
    console.log("Cellular (3G) connection.");
    break;
  case Connection.CELL_4G:
    console.log("Cellular (4G) connection.");
    break;
  case Connection.CELL:
    console.log("Cellular connection.");
    break;
  case Connection.NONE:
    console.log("No network connection.");
    break;
}

If you executed the preceding code on a typical mobile device, you'd probably either see some variation of the Cellular connection or the Wi-Fi connection message. If your device was on Wi-Fi and you proceeded to disable it and rerun the app, the Wi-Fi notice will be replaced with the Cellular connection notice. Now, if you put the device into airplane mode and rerun the app, you should see No network connection. Based on the available network type constants, it's clear that we can use this information in various ways:

We can tell if it makes sense to attempt a network request: if the type is Connection.NONE, there's no point in trying as there's no network to service the request.
We can tell if we are on a wired network, a Wi-Fi network, or a cellular network.
Consider a streaming video app; such an app can permit full-quality video on a wired/Wi-Fi network, but fall back to a lower-quality video stream when running on a cellular connection.

Although tempting, there's one thing the earlier code does not tell us: the speed of the network. That is, we can't use the type of the network as a proxy for the available bandwidth, even though it feels like we can. After all, aren't Ethernet connections typically faster than Wi-Fi connections? Also, isn't a 4G cellular connection faster than a 2G connection? In ideal circumstances, you'd be right. Unfortunately, it's possible for a fast 4G cellular network to be very congested, thus resulting in poor throughput. Likewise, it is possible for an Ethernet connection to communicate over a noisy wire and interact with a heavily congested network. This can also slow throughput.

It's also important to recognize that while you can learn something about the network the device is connected to, you can't use this to learn anything about the network conditions beyond that network. The device might indicate that it is attached to a Wi-Fi network, but this Wi-Fi network might actually be a mobile hotspot. It could be connected to a satellite with high latency, or to a blazing fast fiber network. As such, the only two things we can know for sure are whether or not it makes sense to attempt a request, and whether or not we need to limit the bandwidth if the device knows it is on a cellular connection. That's it. Any other use of this information is an abuse of the plugin, and is likely to cause undesirable behavior.

Detecting changes in connectivity

Determining the type of network connection once does little good, as the device can lose the connection or join a new network at any time. This means that we need to respond properly to these events in order to provide a good user experience. Do not rely on the following events being fired when your app starts up for the first time.
On some devices, it might take several seconds for the first event to fire; however, in some cases, the events might never fire (specifically, if testing in a simulator).

There are two events our app needs to listen to: the online event and the offline event. Their names are indicative of their function, so chances are good you already know what they do. The online event is fired when the device connects to a network, assuming it wasn't connected to a network before. The offline event does the opposite: it is fired when the device loses a connection to a network, but only if the device was previously connected to a network. This means that you can't depend on these events to detect changes in the type of the network: a move from a Wi-Fi network to a cellular network might not elicit any events at all. In order to listen to these events, you can use the following code:

document.addEventListener("online", handleOnlineEvent, false);
document.addEventListener("offline", handleOfflineEvent, false);

The event listener doesn't receive any information, so you'll almost certainly want to check the network type when handling an online event. The offline event will always correspond to a Connection.NONE network type. Having the ability to detect changes in the connectivity status means that our app can be more intelligent about how it handles network requests, but it doesn't tell us if a request is guaranteed to succeed.

Handling connectivity issues

As the only way to know if a network request might succeed is to actually attempt the request, we need to know how to properly handle the errors that might arise from such an attempt. Between the mobile and the middle tier, the following are the possible errors that you might encounter while connecting to a network:

TimeoutError: This error is thrown when the XHR times out. (The default is 30 seconds for our wrapper, but if the XHR's timeout isn't otherwise set, it will attempt to wait forever.)
HTTPError: This error is thrown when the XHR completes and receives a response other than 200 OK. This can indicate any number of problems, but it does not indicate a network connectivity issue.

JSONParseError: This error is thrown when the XHR completes, but the JSON response from the server cannot be parsed. Something is clearly wrong on the server, of course, but this does not indicate a connectivity issue.

XHRError: This error is thrown when an error occurs while executing the XHR. This is definitely indicative of something going very wrong (not necessarily a connectivity issue, but there's a good chance).

MaxRetryAttemptsReached: This error is thrown when the XHR wrapper has given up retrying the request. The wrapper automatically retries in the case of TimeoutError and XHRError.

In all the earlier cases, the catch method in the promise chain is called. At this point, you can attempt to determine the type of error in order to determine what to do next:

function sendFailRequest() {
  XHR.send("GET", "http://www.really-bad-host-name.com/this/will/fail")
    .then(function (response) {
      console.log(response);
    })
    .catch(function (err) {
      if (err instanceof XHR.XHRError ||
          err instanceof XHR.TimeoutError ||
          err instanceof XHR.MaxRetryAttemptsReached) {
        if (navigator.connection.type === Connection.NONE) {
          // we could try again once we have a network connection
          var retryRequest = function () {
            sendFailRequest();
            APP.removeGlobalEventListener("networkOnline", retryRequest);
          };
          // wait for the network to come online (we'll cover this method in a moment)
          APP.addGlobalEventListener("networkOnline", retryRequest);
        } else {
          // we have a connection, but can't get through;
          // something's going on that we can't fix
          alert("Notice: can't connect to the server.");
        }
      }
      if (err instanceof XHR.HTTPError) {
        switch (err.HTTPStatus) {
          case 401: // unauthorized, log the user back in
            break;
          case 403: // forbidden, user doesn't have access
            break;
          case 404: // not found
            break;
          case 500: // internal server error
            break;
          default:
            console.log("unhandled error: ", err.HTTPStatus);
        }
      }
      if (err instanceof XHR.JSONParseError) {
        console.log("Issue parsing XHR response from server.");
      }
    }).done();
}
sendFailRequest();

Once a connection error is encountered, it's largely up to you and the type of app you are building to determine what to do next, but there are several options to consider as your next course of action:

Fail loudly and let the user know that their last action failed. It might not be terribly great for the user experience, but it might be the only sensible thing to do.
Check whether there is a network connection present; if not, hold on to the request until an online event is received and then send the request again. This makes sense only if the request you are sending is a request for data, not a request to change data, as the data might have changed in the interim.

Summary

In this article, you learned how an app built using PhoneGap/Cordova can react to changing network conditions, and how to handle the connectivity issues that you might encounter.

Resources for Article: Further resources on this subject: Configuring the ChildBrowser plugin [article] Using Location Data with PhoneGap [article] Working with the sharing plugin [article]
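The XHR wrapper above retries automatically in the case of TimeoutError and XHRError. As a standalone illustration of that idea, here is a sketch of retrying with an exponential backoff. Everything here is illustrative: retryWithBackoff and attempt are made-up names, not part of Cordova or the book's XHR wrapper.

```javascript
// Sketch of automatic retry with exponential backoff. `attempt` is any
// function that returns a Promise; none of these names come from Cordova
// or the XHR wrapper used above.
function retryWithBackoff(attempt, maxRetries, baseDelayMs) {
  return new Promise(function (resolve, reject) {
    var tries = 0;
    function run() {
      attempt().then(resolve, function (err) {
        tries += 1;
        if (tries > maxRetries) {
          // mirrors the wrapper's MaxRetryAttemptsReached condition
          reject(new Error("MaxRetryAttemptsReached: " + err.message));
          return;
        }
        // wait twice as long after each failure: base, 2x, 4x, ...
        setTimeout(run, baseDelayMs * Math.pow(2, tries - 1));
      });
    }
    run();
  });
}
```

In a real app, you would only schedule the retry when navigator.connection.type is not Connection.NONE; otherwise, wait for the online event as shown earlier.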

Packt
17 Sep 2013
6 min read

Kendo UI Mobile – Exploring Mobile Widgets

(For more resources related to this topic, see here.)

Kendo Mobile widgets basics

All Kendo Mobile widgets inherit from the base class kendo.mobile.ui.Widget, which in turn inherits from the base class of all Kendo widgets (both Web and Mobile), kendo.ui.Widget. The complete inheritance chain of the mobile widget class is shown in the following figure: kendo.Class acts as the base class for most of the Kendo UI objects, while the kendo.Observable object contains methods for events. kendo.data.ObservableObject, which is the building block of Kendo MVVM, is inherited from kendo.Observable.

Mobile widget base methods

From the inheritance chain, all Kendo Mobile widgets inherit a set of common methods. A thorough understanding of these methods is required while building high-performing, complex mobile apps.

bind

The bind() method, defined in the kendo.Observable class, attaches a handler to an event. Using this method, we can attach custom methods to any mobile widget. The bind() method takes the following two input parameters:

eventName: The name of the event
handler: The function to be fired when the event is raised

The following example shows how to create a new mobile widget and attach a custom event to the widget:

//create a new mobile widget
var mobileWidget = new kendo.mobile.ui.Widget();
//attach a custom event
mobileWidget.bind("customEvent", function(e) {
  // mobileWidget object can be accessed inside this function as
  // 'e.sender' and 'this'.
  console.log('customEvent fired');
});

The event data is available in the object e. The object which raised the event is accessible inside the function as e.sender or using the this keyword.

trigger

The trigger() method executes all event handlers attached to the fired event.
This method has two input parameters:

eventName: The name of the event to be triggered
eventData (optional): The event-specific data to be passed into the event handler

Let's see how trigger works by modifying the code sample provided for bind:

//create a mobile widget
var mobileWidget = new kendo.mobile.ui.Widget();
//attach a custom event
mobileWidget.bind("customEvent", function(e) {
  // mobileWidget object can be accessed inside this function as
  // 'e.sender' and 'this'.
  console.log('customEvent fired');
  //read event specific data if it exists
  if (e.eventData !== undefined) {
    console.log('customEvent fired with data: ' + e.eventData);
  }
});
//trigger the event with some data
mobileWidget.trigger("customEvent", { eventData: 'Kendo UI is cool!' });

Here we are triggering the custom event which is attached using the bind() method and sending some data along. This data is read inside the event and written to the console. When this code is run, we can see the following output in the console:

customEvent fired
customEvent fired with data: Kendo UI is cool!

unbind

The unbind() method detaches a previously attached event handler from the widget. It takes the following input parameters:

eventName: The name of the event to be detached. If an event name is not specified, all handlers of all events will be detached.
handler: The handler function to be detached. If a function is not specified, all functions attached to the event will be detached.
The following code attaches an event to a widget and detaches it when the event is triggered:

//create a mobile widget
var mobileWidget = new kendo.mobile.ui.Widget();
//attach a custom event
mobileWidget.bind("customEvent", function(e) {
  console.log('customEvent fired');
  this.unbind("customEvent");
});
//trigger the event first time
mobileWidget.trigger("customEvent");
//trigger the event second time
mobileWidget.trigger("customEvent");

Output:

customEvent fired

As seen from the output, even though we trigger the event twice, the event handler is invoked only the first time.

one

The one() method is identical to the bind() method with only one exception: the handler is unbound after its first invocation, so the handler will be fired only once. To see this method in action, let's add a count variable to the existing sample code and track the number of times the handler is invoked. For this, we will bind the event handler with one() and then trigger the event twice, as shown in the following code:

//create a mobile widget
var mobileWidget = new kendo.mobile.ui.Widget();
var count = 0;
//attach a custom event
mobileWidget.one("customEvent", function(e) {
  count++;
  console.log('customEvent fired. count: ' + count);
});
//trigger the event first time
mobileWidget.trigger("customEvent");
//trigger the event second time
mobileWidget.trigger("customEvent");

Output:

customEvent fired. count: 1

If you replace the one() method with the bind() method, you can see that the handler will be invoked twice.

destroy

The destroy() method is inherited from the kendo.ui.Widget base object. The destroy() method kills all the event handler attachments and removes the widget object in the jquery.data() attribute so that the widget can be safely removed from the DOM without memory leaks. If there is a child widget available, the destroy() method of the child widget will also be invoked.
Let's see how the destroy() method works using the Kendo Mobile Button widget and your browser's developer tools' console. Create an HTML file, add the following code along with the Kendo UI Mobile file references, and open it in your browser:

<div data-role="view">
  <a class="button" data-role="button" id="btnHome" data-click="buttonClick">Home</a>
</div>
<script>
  var app = new kendo.mobile.Application(document.body);
  function buttonClick(e) {
    console.log('Inside button click event handler...');
    $("#btnHome").data().kendoMobileButton.destroy();
  }
</script>

In this code block, we created a Kendo Button widget, and on the click event, we are invoking the destroy() method of the button. Now open up your browser's developer tools' Console window, type $("#btnHome").data(), and press Enter. If you click on the Object link in the output, a detailed view of all properties can be seen, including the kendoMobileButton object. Now click on the button once, then in the Console, type $("#btnHome").data() again and hit Enter. We can see that the kendoMobileButton object has been removed from the object list. Even though the data object is gone, the button stays in the DOM without any data or events associated with it.

view

The view() method is specific to mobile widgets, and it returns the view object in which the widget is loaded. In the previous example, we can assign an ID, mainView, to the view and then retrieve it in the button's click event using this.view().id, as shown in the following code snippet:

<div data-role="view" id="mainView">
  <a class="button" data-role="button" id="btnHome" data-click="buttonClick">Home</a>
</div>
<script>
  var app = new kendo.mobile.Application(document.body);
  function buttonClick(e) {
    console.log("View id: " + this.view().id);
  }
</script>

Output:

View id: #mainView
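The bind/trigger/unbind/one contract described above can be summarized with a small stand-alone sketch. This is not Kendo's actual implementation (kendo.Observable does considerably more); it only illustrates the semantics of the four methods:

```javascript
// Minimal stand-in for the bind/trigger/unbind/one contract. Illustrative
// only; kendo.Observable's real implementation differs.
function MiniObservable() {
  this._handlers = {};
}
MiniObservable.prototype.bind = function (eventName, handler) {
  (this._handlers[eventName] = this._handlers[eventName] || []).push(handler);
};
MiniObservable.prototype.unbind = function (eventName, handler) {
  var list = this._handlers[eventName] || [];
  // no handler given: detach every handler for the event
  this._handlers[eventName] = handler
    ? list.filter(function (h) { return h !== handler; })
    : [];
};
MiniObservable.prototype.one = function (eventName, handler) {
  var self = this;
  function once(e) {
    self.unbind(eventName, once);  // unbound after the first invocation
    handler.call(self, e);
  }
  this.bind(eventName, once);
};
MiniObservable.prototype.trigger = function (eventName, eventData) {
  var e = { sender: this, eventData: eventData };
  (this._handlers[eventName] || []).slice().forEach(function (h) {
    h.call(this, e);  // handler sees the widget as `this` and `e.sender`
  }, this);
};
```

Triggering an event twice fires a bind() handler twice but a one() handler only once, matching the outputs shown in the examples above.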

Packt
15 Feb 2012
10 min read

Creating a Simple Application in Sencha Touch

(For more resources on this topic, see here.)

Setting up your folder structure

Before we get started, you need to be sure that you've set up your development environment properly.

Root folder

You will need to have the folders and files for your application located in the correct web server folder on your local machine. On the Mac, this will be the Sites folder in your Home folder. On Windows, this will be C:\xampp\htdocs (assuming you installed XAMPP).

Setting up your application folder

Before we can start writing code, we have to perform some initial setup, copying in a few necessary resources and creating the basic structure of our application folder. This section will walk you through the basic setup for the Sencha Touch files, creating your style sheets folder, and creating the index.html file:

1. Locate the Sencha Touch folder you downloaded.
2. Create a folder in the root folder of your local web server. You may name it whatever you like; I have used the folder name TouchStart in this article.
3. Create three empty subfolders called lib, app, and css in your TouchStart folder.
4. Now, copy the resources and src folders from the Sencha Touch folder you downloaded earlier into the TouchStart/lib folder.
5. Copy the following files from your Sencha Touch folder to your TouchStart/lib folder:

sencha-touch.js
sencha-touch-debug.js
sencha-touch-debug-w-comments.js

6. Create an empty file in the TouchStart/css folder called TouchStart.css. This is where we will put custom styles for our application.
7. Create an empty index.html file in the main TouchStart folder. We will flesh this out in the next section.

Icon files

Both iOS and Android applications use image icon files for display. This creates the pretty rounded launch buttons found on most touch-style applications. If you are planning on sharing your application, you should also create PNG image files for the launch image and application icon.
Generally, there are two launch images: one with a resolution of 320 x 460 px for iPhones, and one at 768 x 1004 px for iPads. The application icon should be 72 x 72 px. See Apple's iOS Human Interface Guidelines for specifics, at http://developer.apple.com/library/ios/#documentation/userexperience/conceptual/mobilehig/IconsImages/IconsImages.html. When you're done, your folder structure should match the layout described in the steps above.

Creating the HTML application file

Using your favorite HTML editor, open the index.html file we created when we were setting up our application folder. This HTML file is where you specify links to the other files we will need in order to run our application. The following code sample shows how the HTML should look:

<!DOCTYPE html>
<html>
<head>
  <meta http-equiv="Content-Type" content="text/html;charset=utf-8">
  <title>TouchStart Application - My Sample App</title>

  <!-- Sencha Touch CSS -->
  <link rel="stylesheet" href="lib/resources/css/sencha-touch.css" type="text/css">

  <!-- Sencha Touch JS -->
  <script type="text/javascript" src="lib/sencha-touch-debug.js"></script>

  <!-- Application JS -->
  <script type="text/javascript" src="app/TouchStart.js"></script>

  <!-- Custom CSS -->
  <link rel="stylesheet" href="css/TouchStart.css" type="text/css">
</head>
<body></body>
</html>

Comments

In HTML, anything between <!-- and --> is a comment, and it will not be displayed in the browser. These comments are to tell you what is going on in the file. It's a very good idea to add comments into your own files, in case you need to come back later and make changes.

Let's take a look at this HTML code piece-by-piece, to see what is going on in this file.
The first five lines are just the basic set-up lines for a typical web page:

<!DOCTYPE html>
<html>
<head>
  <meta http-equiv="Content-Type" content="text/html;charset=utf-8">
  <title>TouchStart Application - My Sample App</title>

With the exception of the last line containing the title, you should not need to change this code for any of your applications. The title line should contain the title of your application. In this case, TouchStart Application - My Sample App is our title. The next few lines are where we begin loading the files to create our application, starting with the Sencha Touch files. The first file is the default CSS file for the Sencha Touch library, sencha-touch.css:

<link rel="stylesheet" href="lib/resources/css/sencha-touch.css" type="text/css">

CSS files

CSS or Cascading Style Sheet files contain style information for the page, such as which items are bold or italic, which font sizes to use, and where items are positioned in the display. The Sencha Touch style library is very large and complex. It controls the default display of every single component in Sencha Touch. It should not be edited directly.

The next file is the actual Sencha Touch JavaScript library. During development and testing, we use the debug version of the Sencha Touch library, sencha-touch-debug.js:

<script type="text/javascript" src="lib/sencha-touch-debug.js"></script>

The debug version of the library is not compressed and contains comments and documentation. This can be helpful if an error occurs, as it allows you to see exactly where in the library the error occurred. When you have completed your development and testing, you should edit this line to use sencha-touch.js instead. This alternate file is the version of the library that is optimized for production environments and takes less bandwidth and memory to use; but it has no comments and is very hard to read. Neither the sencha-touch-debug.js nor the sencha-touch.js files should ever be edited directly.
The next two lines are where we begin to include our own application files. The names of these files are totally arbitrary, as long as they match the names of the files you create later in the next section of this chapter. It's usually a good idea to name the file the same as your application name, but that is entirely up to you. In this case, our files are named TouchStart.js and TouchStart.css.

<script type="text/javascript" src="app/TouchStart.js"></script>

This first file, TouchStart.js, is the file that will contain our JavaScript application code. The last file we need to include is our own custom CSS file, called TouchStart.css. This file will contain any style information we need for our application. It can also be used to override some of the existing Sencha Touch CSS styles.

<link rel="stylesheet" href="css/TouchStart.css" type="text/css">

This closes out the </head> area of the index.html file. The rest of the index.html file contains the <body></body> tags and the closing </html> tag. If you have any experience with traditional web pages, it may seem a bit odd to have empty <body></body> tags in this fashion. In a traditional web page, this is where all the information for display would normally go. For our Sencha Touch application, the JavaScript we create will populate this area automatically. No further content is needed in the index.html file, and all of our code will live in our TouchStart.js file. So, without further delay, let's write some code!

Starting from scratch with TouchStart.js

Let's start by opening the TouchStart.js file and adding the following:

new Ext.Application({
  name: 'TouchStart',
  launch: function() {
    var hello = new Ext.Container({
      fullscreen: true,
      html: '<div id="hello">Hello World</div>'
    });
    this.viewport = hello;
  }
});

This is probably the most basic application you can possibly create: the ubiquitous "Hello World" application.
Once you have saved the code, use the Safari web browser to navigate to the TouchStart folder in the root folder of your local web server. The address should look like the following:

http://localhost/TouchStart/, on the PC
http://127.0.0.1/~username/TouchStart, on the Mac (username should be replaced with the username for your Mac)

As you can see, all that this bit of code does is create a single window with the words Hello World. However, there are a few important elements to note in this example. The first line, new Ext.Application({, creates a new application for Sencha Touch. Everything listed between the curly braces is a configuration option of this new application. While there are a number of configuration options for an application, most consist of at least the application's name and a launch function.

Namespace

One of the biggest problems with using someone else's code is the issue of naming. For example, if the framework you are using has an object called "Application", and you create your own object called "Application", the two functions will conflict. JavaScript uses the concept of namespaces to keep these conflicts from happening. In this case, Sencha Touch uses the namespace Ext. It is simply a way to eliminate potential conflicts between the framework's objects and code, and your own objects and code. Sencha will automatically set up a namespace for your own code as part of the new Ext.Application object. Ext is also part of the name of Sencha's web application framework called ExtJS. Sencha Touch uses the same namespace convention to allow developers familiar with one library to easily understand the other.

When we create a new application, we need to pass it some configuration options. This will tell the application how to look and what to do. These configuration options are contained within the curly braces ({}) and separated by commas.
The first option is as follows:

name: 'TouchStart'

The launch configuration option is actually a function that will tell the application what to do once it starts up. Let's start backwards on this last bit of code for the launch configuration and explain this.viewport. By default, a new application has a viewport. The viewport is a pseudo-container for your application. It's where you will add everything else for your application. Typically, this viewport will be set to a particular kind of container object. At the beginning of the launch function, we start out by creating a basic container, which we call hello:

launch: function() {
  var hello = new Ext.Container({
    fullscreen: true,
    html: '<div id="hello">Hello World</div>'
  });
  this.viewport = hello;
}

Like the Application class, a new Ext.Container class is passed a configuration object consisting of a set of configuration options, contained within the curly braces ({}) and separated by commas. The Container object has over 40 different configuration options, but for this simple example, we only use two:

fullscreen sets the size of the container to fill the entire screen (no matter which device is being used).
html sets the content of the container itself. As the name implies, this can be a string containing either HTML or plain text.

Admittedly, this is a very basic application, without much in the way of style. Let's add something extra using the container's layout configuration option.

My application didn't work!

When you are writing code, it is an absolute certainty that you will, at some point, encounter errors. Even a simple error can cause your application to behave in a number of interesting and aggravating ways. When this happens, it is important to keep in mind the following: Don't Panic. Retrace your steps and use the tools to track down the error and fix it.
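The configuration-object pattern used by both Ext.Application and Ext.Container can be sketched in isolation. FakeContainer and applyConfig below are made-up names for illustration (Sencha provides its own merging helpers); the point is only how the caller's options override a widget's defaults:

```javascript
// Made-up stand-ins showing how a config object overrides a widget's
// defaults. This is NOT Sencha's implementation; it only illustrates
// the pattern.
function applyConfig(defaults, config) {
  var merged = {}, key;
  for (key in defaults) { merged[key] = defaults[key]; }
  for (key in config) { merged[key] = config[key]; }  // caller wins
  return merged;
}

function FakeContainer(config) {
  var settings = applyConfig(
    { fullscreen: false, html: "" },  // the widget's defaults
    config || {}
  );
  this.fullscreen = settings.fullscreen;
  this.html = settings.html;
}

var hello = new FakeContainer({
  fullscreen: true,
  html: '<div id="hello">Hello World</div>'
});
```

Options the caller omits (here, nothing) fall back to the defaults, which is why a Container works even when you pass only one or two of its 40-plus options.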

Packt
30 Jul 2013
9 min read

Coding for the Real-time Web

(For more resources related to this topic, see here.)

As the lines between web apps and traditional desktop apps blur, our users have come to expect real-time behavior in our web apps, something that is traditionally the domain of the desktop. One cannot really blame them. Real-time interaction with data, services, and even other users has driven the connected revolution, and we are now connected in more ways than ever before. However valid this desire to be always connected and immediately informed of an event, there are inherent challenges in real-time interactions within web apps.

The first challenge is that the Web is stateless. The Web is built on HTTP, a protocol that is request/response; for each request a browser makes, there is one and only one response. There are frameworks and techniques we can use to mask the statelessness of the Web, but there is no true state built into the Web or HTTP. This is further complicated as the Web is client/server. As it's stateless, a server only knows of the clients connected at any one given moment, and clients can only display data to the user based upon the last interaction with the server. The only time the client and server have any knowledge of the other is during an active request/response, and this action may change the state of the client or the server. Any change to the server's state is not reflected to the other clients until they connect to the server with a new request. It's somewhat like the uncertainty principle in that the more one tries to pin down one data point of the relationship, the more uncertain one becomes about the other points.

All hope is not lost. There are several techniques that can be used to enable real-time (or near real-time) data exchange between the web server and any active client.

Simulating a connected state

In traditional web development, there has not been a way to maintain a persistent connection between a client browser and the web server.
Web developers have gone to great lengths to try and simulate a connected world in the request/response world of HTTP. Several developers have met with success using creative thinking and loopholes within the standard itself to develop techniques such as long polling and the forever frame. Now, thanks to the realization that such a technique is needed, the organizations overseeing the next generation of web standards are also heeding the call with server-sent events and web sockets.

Long polling

Long polling is the default fallback for any client and server content exchange. It is not reliant on anything but HTTP—no special standards checklists or other chicanery are required. Long polling is like getting the silent treatment from your partner. You ask a question and you wait indefinitely for an answer. After some known period of time and what may seem like an eternity, you finally receive an answer or the request eventually times out. The process repeats again and again until the request is fully satisfied or the relationship terminates. So, yeah, it's exactly like the silent treatment.

Forever Frame

The Forever Frame technique relies on the HTTP 1.1 standard and a hidden iframe. When the page loads, it contains (or constructs) a hidden iframe used to make a request back to the server. The actual exchange between the client and the server leverages a feature of HTTP 1.1 known as Chunked Encoding. Chunked Encoding is identified by a value of chunked in the HTTP Transfer-Encoding header. This method of data transfer is intended to allow the server to begin sending portions of data to the client before the entire length of the content is known. When simulating a real-time connection between a browser and web server, the server can dispatch messages to the client as individual chunks on the request made by the iframe.

Server-Sent Events

Server-Sent Events (SSE) provide a mechanism for a server to raise DOM events within a client web browser.
This means that to use SSE, the browser must support it. As of this writing, support for SSE is minimal, but it has been submitted to the W3C for inclusion into the HTML5 specification.

The use of SSE begins by declaring an EventSource variable:

    var source = new EventSource('/my-data-source');

If you then want to listen to any and all messages sent by the source, you simply treat it as a DOM event and handle it in JavaScript:

    source.onmessage = function(event) {
        // Process the event.
    };

SSE supports the raising of specific events and complex event messaging. The message format is a simple text-based format derivative of JSON. Two newline characters separate each message within the stream, and each message may have an id, data, and event property. SSE also supports setting the retry time using the retry keyword within a message.

    : simple message
    data: "this string is my message"

    : complex message targeting an event
    event: thatjusthappened
    data: { "who":"Professor Plum", "where":"Library", "with":"candlestick" }

As of this writing, SSE is not supported in Internet Explorer and is only partially implemented in a few mobile browsers.

WebSockets

The coup de grâce of real-time communication on the Web is WebSockets. WebSockets support a bidirectional stream between a web browser and web server and only leverage HTTP 1.1 to request a connection upgrade. Once a connection upgrade has been granted, WebSockets communicate in full-duplex using the WebSocket protocol over a TCP connection, literally creating a client-server connection within the browser that can be used for real-time messaging. All major desktop browsers and almost all mobile browsers support WebSockets. However, WebSocket usage requires support from the web server, and a WebSocket connection may have trouble working successfully behind a proxy.

With all the tools and techniques available to enable real-time connections between our mobile web app and the web server, how does one make the choice?
We could write our code to support long polling, but that would obviously use up resources on the server and require us to do some pretty extensive plumbing on our end. We could try and use WebSockets, but for browsers lacking support or for users behind proxies, we might be introducing more problems than we would solve. If only there was a framework to handle all of this for us, to try the best option available and degrade to the almost guaranteed functionality of long polling when required.

Wait. There is. It's called SignalR.

SignalR provides a framework that abstracts all the previously mentioned real-time connection options into one cohesive communication platform supporting both web development and traditional desktop development. When establishing a connection between the client and server, SignalR will negotiate the best connection technique/technology possible based upon client and server capability. The actual transport used is hidden beneath a higher-level communication framework that exposes endpoints on the server and allows those endpoints to be invoked by the client. Clients, in turn, may register with the server and have messages pushed to them.

Each client is uniquely identified to the server via a connection ID. This connection ID can be used to send messages explicitly to a client or away from a client. In addition, SignalR supports the concept of groups, each group being a collection of connection IDs. These groups, just like individual connections, can be specifically included or excluded from a communication exchange.

All of these capabilities in SignalR are provided to us by two client/server communication mechanisms: persistent connections and hubs.

Persistent connections

Persistent connections are the low-level connections of SignalR. That's not to say they provide access to the actual communication technique being used by SignalR, but to illustrate their primary usage as raw communication between client and server.
Persistent connections behave much as sockets do in traditional network application development. They provide an abstraction above the lower-level communication mechanisms and protocols, but offer little more than that.

When creating an endpoint to handle persistent connection requests over HTTP, the class for handling the connection requests must reside within the Controllers folder (or any other folder containing controllers) and extend the PersistentConnection class:

    public class MyPersistentConnection : PersistentConnection
    {
    }

The PersistentConnection class manages connections from the client to the server by way of events. To handle these connection events, any class that is derived from PersistentConnection may override the methods defined within the PersistentConnection class. Client interactions with the server raise the following events:

OnConnected: This is invoked by the framework when a new connection to the server is made.
OnReconnected: This is invoked when a client connection that has been terminated has reestablished a connection to the server.
OnRejoiningGroups: This is invoked when a client connection that has timed out is being reestablished, so that the connection may be rejoined to the appropriate groups.
OnReceived: This is invoked when data is received from the client.
OnDisconnected: This is invoked when the connection between the client and server has been terminated.

Interaction with the client occurs through the Connection property of the PersistentConnection class. When an event is raised, the implementing class can determine if it wishes to broadcast a message using Connection.Broadcast, respond to a specific client using Connection.Send, or add the client that triggered the message to a group using Connection.Groups.

Hubs

Hubs provide us an abstraction over the PersistentConnection class by masking some of the overhead involved in managing raw connections between client and server.
Similar to a persistent connection, a hub is contained within the Controllers folder of your project but instead extends the Hub base class:

    public class MyHub : Hub
    {
    }

While a hub supports the ability to be notified of connection, reconnection, and disconnection events, unlike the event-driven persistent connection, a hub handles the event dispatching for us. Any publicly available method on the Hub class is treated as an endpoint and is addressable by any client by name:

    public class MyHub : Hub
    {
        public void SendMeAMessage(string message) { /* ... */ }
    }

A hub can communicate with any of its clients using the Clients property of the Hub base class. This property supports methods, just like the Connection property of PersistentConnection, to communicate with specific clients, all clients, or groups of clients.

Rather than break down all the functionality available to us in the Hub class, we will instead learn from an example.
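Before that example, the "addressable by name" idea can be illustrated with a tiny JavaScript sketch. This is purely illustrative: SignalR performs this routing internally, and the dispatchToHub function and the { method, args } message shape here are assumptions invented for the sketch.

```javascript
// Dispatch an incoming { method, args } message to the matching method
// on a hub object, mimicking how a hub routes client calls by name.
function dispatchToHub(hub, message) {
  var handler = hub[message.method];
  if (typeof handler !== 'function') {
    throw new Error('No such hub method: ' + message.method);
  }
  return handler.apply(hub, message.args);
}
```

A real hub would also exclude non-public members and deserialize arguments from the wire format; the sketch only shows the lookup-by-name step.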
Working with Xamarin.Android

Packt
03 Nov 2015
10 min read
In this article, written by Matthew Leibowitz, author of the book Xamarin Mobile Development for Android Cookbook, we will learn which Android versions to target and how to support older versions in your project.

Supporting all Android versions

As the Android operating system evolves, many new features are added and older devices are often left behind.

How to do it...

In order to add the new features of the later versions of Android to the older versions of Android, all we need to do is add a small package:

An Android app has three platform versions to be set. The first is the API features that are available to code against. We set this to always be the latest in the Target Framework dropdown of the project options. The next version to set (via Minimum Android version) is the lowest version of the OS that the app can be installed on. When using the support libraries, we can usually target versions down to version 2.3. Lastly, the Target Android version dropdown specifies how the app should behave when installed on a later version of the OS. Typically, this should always be the latest so that the app will always function as the user expects.

If we want to add support for the new UI paradigm that uses fragments and action bars, we need to install two of the Android support packages:

1. Create or open a project in Xamarin Studio.
2. Right-click on the project folder in the Solution Explorer list.
3. Select Add and then Add Packages….
4. In the Add Packages dialog that is displayed, search for Xamarin.Android.Support.
5. Select both Xamarin Support Library v4 and Xamarin Support Library v7 AppCompat.
6. Click on Add Package.

There are several support library packages, each adding other types of forward compatibility, but these two are the most commonly used.
Once the packages are installed, our activities can inherit from the AppCompatActivity type instead of the usual Activity type:

    public class MyActivity : AppCompatActivity
    {
    }

We specify that the activity theme be one of the AppCompat derivatives using the Theme property in the [Activity] attribute:

    [Activity(..., Theme = "@style/Theme.AppCompat", ...)]

If we need to access the ActionBar instance, it is available via the SupportActionBar property on the activity:

    SupportActionBar.Title = "Xamarin Cookbook";

By simply using the action bar, all the options menu items are added as action items. However, all of them are added under the action bar overflow menu. The XML for action bar items is exactly the same as the options menu:

    <menu ... >
      <item
        android:id="@+id/action_refresh"
        android:icon="@drawable/ic_action_refresh"
        android:title="@string/action_refresh"/>
    </menu>

To get the menu items out of the overflow and onto the actual action bar, we can customize the items to be displayed and how they are displayed. To add action items with images to the actual action bar, as well as more complex items, all that is needed is an attribute in the XML, showAsAction:

    <menu ... >
      <item ... app:showAsAction="ifRoom"/>
    </menu>

Sometimes, we may wish to only display the icon initially and then, when the user taps the icon, expand the item to display the action view:

    <menu ... >
      <item ... app:showAsAction="ifRoom|collapseActionView"/>
    </menu>

If we wish to add custom views, such as a search box, to the action bar, we make use of the actionViewClass attribute:

    <menu ... >
      <item ... app:actionViewClass="android.support.v7.widget.SearchView"/>
    </menu>

If the view is in a layout resource file, we use the actionLayout attribute:

    <menu ... >
      <item ... app:actionLayout="@layout/action_rating"/>
    </menu>

How it works...

As Android is developed, new features are added and designs change.
We want to always provide the latest features to our users, but some users either haven't upgraded or can't upgrade to the latest version of Android. Xamarin.Android provides three version numbers to specify which types can be used and how they can be used.

The target framework version specifies what types are available for consumption as well as what toolset to use during compilation. This should be the latest, as we always want to use the latest tools. However, this will make some types and members available to apps even if they aren't actually available on the Android version that the user is using. For example, it will make the ActionBar type available to apps running on Android version 2.3. If the user were to run the app, it would probably crash.

In these instances, we can set the minimum Android version to be a version that supports these types and members. But this will then reduce the number of devices that we can install our app on. This is why we use the support libraries; they allow the types to be used on most versions of Android. Setting the minimum Android version for an app will prevent the app from being installed on devices with earlier versions of the OS.

The support libraries

By including the Android Support Libraries in our app, we can make use of the new features but still support the old versions. Types from the Android Support Library are available to almost all versions of Android currently in use. The Android Support Libraries provide us with a type that we know we can use everywhere, and then that base type manages the features to ensure that they function as expected. For example, we can use the ActionBar type on most versions of Android because the support library made it available through the AppCompatActivity type. Because the AppCompatActivity type is an adaptive extension for the traditional Activity type, we have to use a different theme.
This theme adjusts so that the new look and feel of the UI gets carried all the way back to the old Android versions. When using the AppCompatActivity type, the activity theme must be one of the AppCompat theme variations.

There are a few differences in use when using the support library. With native support for the action bar, the AppCompatActivity type has a property named ActionBar; however, in the support library, the property is named SupportActionBar. This is just a property name change, but the functionality is the same.

Sometimes, features have to be added to the existing types that are not in the support libraries. In these cases, static methods are provided. The native support for custom views in menu items includes a method named SetActionView():

    menuItem.SetActionView(someView);

This method does not exist on the IMenuItem type for the older versions of Android, so we make use of the static method on the MenuItemCompat type:

    MenuItemCompat.SetActionView(menuItem, someView);

The action bar

When adding an action bar on older Android versions, it is important to inherit from the AppCompatActivity type. This type includes all the logic required for including an action bar in the app. It also provides many different methods and properties for accessing and configuring the action bar. In newer versions of Android, all the features are included in the Activity type.

Although the functionality is the same, we do have to access the various pieces using the support members when using the support libraries. An example would be to use the SupportActionBar property instead of the ActionBar property. If we use the ActionBar property, the app will crash on devices that don't natively support the ActionBar property.

In order to render the action bar, the activity needs to use a theme that contains a style for the action bar, or one that inherits from such a theme. For the older versions of Android, we can use the AppCompat themes, such as Theme.AppCompat.
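The MenuItemCompat pattern above, using the instance member where the platform provides it and falling back to a static helper everywhere else, can be sketched generically. The following JavaScript is only an analogy; compatCall and fallback are hypothetical names, not part of any Android or Xamarin API.

```javascript
// Generic compat shim: prefer the object's own method when it exists,
// otherwise route the call through a supplied static fallback.
function compatCall(obj, methodName, fallback) {
  var args = Array.prototype.slice.call(arguments, 3);
  if (typeof obj[methodName] === 'function') {
    return obj[methodName].apply(obj, args);
  }
  return fallback.apply(null, [obj].concat(args));
}
```

The shim keeps calling code identical on every platform version, which is exactly the convenience the support library's static helpers provide.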
The toolbar

With the release of Android version 5.0, Google introduced a new style of action bar. The new Toolbar type performs the same function as the action bar but can be placed anywhere on the screen. The action bar is always placed at the top of the screen, but a toolbar is not restricted to that location and can even be placed inside other layouts.

To make use of the Toolbar type, we can either use the native type, or we can use the type found in the support libraries. Like any Android View, we can add the Toolbar type to the layout:

    <android.support.v7.widget.Toolbar
      android:id="@+id/my_toolbar"
      android:layout_width="match_parent"
      android:layout_height="?attr/actionBarSize"
      android:background="?attr/colorPrimary"
      android:elevation="4dp"
      android:theme="@style/ThemeOverlay.AppCompat.ActionBar"
      app:popupTheme="@style/ThemeOverlay.AppCompat.Light"/>

The difference is in how the activity is set up. First, as we are not going to use the default ActionBar property, we can use the Theme.AppCompat.NoActionBar theme. Then, we have to let the activity know which view is used as the Toolbar type:

    var toolbar = FindViewById<Toolbar>(Resource.Id.my_toolbar);
    SetSupportActionBar(toolbar);

The action bar items

Action item buttons are just traditional options menu items, but are optionally always visible on the action bar. The underlying logic to handle item selections is the same as that for the traditional options menu. No change is required to the existing code inside the OnOptionsItemSelected() method.

The value of the showAsAction attribute can be ifRoom, never, or always. This value can optionally be combined, using a pipe, with withText and/or collapseActionView.

There's more...

Besides using the Android Support Libraries to handle different versions, there is another way to handle different versions at runtime. Android provides the version number of the current operating system through the Build.VERSION type.
This type has a property, SdkInt, which we use to detect the current version. It represents the API level of the version. Each version of Android has a series of updates and new features; for example, Android 4 has had numerous updates since its initial release, with new features being added each time. Sometimes, the support library cannot cover all the cases, and we have to write specific code for particular versions:

    int apiLevel = (int)Build.VERSION.SdkInt;
    if (Build.VERSION.SdkInt >= BuildVersionCodes.IceCreamSandwich)
    {
        // Android version 4.0 and above
    }
    else
    {
        // Android versions below version 4.0
    }

Although the preceding can be done, it introduces spaghetti code and should be avoided. In addition to requiring different code, the app may behave differently on different versions, even if the support library could have handled it. We will then have to manage these differences ourselves each time a new version of Android is released.

Summary

In this article, we learned that as Android grows, new features are added and older devices are often left behind. By installing the support packages and following the simple steps given here, we can bring the new features of the later versions of Android to the older versions.
Getting Started with PlayStation Mobile

Packt
26 Apr 2013
7 min read
The PlayStation Mobile (PSM) SDK represents an exciting opportunity for game developers of all stripes, from hobbyists to indie and professional developers. It contains everything you need to quickly develop a game using the C# programming language. Perhaps more importantly, it provides a market for those games. If you are currently using XNA, you will feel right at home with the PSM SDK.

You may be wondering at this point, why develop for PlayStation Mobile at all? Obviously, the easiest answer is, so you can develop for PlayStation Vita, which of itself will be enough for many people. Perhaps, though, the most important reason is that it represents a group of dedicated gamers hungry for games. While there are a wealth of games available for Android, finding them on the App Store is a mess, while supporting the literally thousands of devices is a nightmare. With PlayStation Mobile, you have a common development environment, targeting powerful devices with a dedicated store catering to gamers.

We are now going to jump right in and get those tools up and running. Of course, we will also write some code and show how easy it is to get it running on your device. PlayStation Mobile allows you to target a number of different devices, and we will cover the three major targets (the Simulator, PlayStation Vita, and Android). You do not need to have a device to follow along, although certain functionality will not be available on the Simulator.

One thing to keep in mind with the PlayStation Mobile SDK is that it is essentially two SDKs in one. There is a much lower-level set of libraries for accessing graphics, audio, and input, as well as a higher-level layer built over the top of this layer, mostly with the complete source available. Of course, underneath it all there is the .NET framework. In this article, we are going to deal with the lower-level graphics interface.
If the code seems initially quite long or daunting for what seems like a simple task, don't worry! There is a much easier way that we will cover later in the book.

Accessing the PlayStation Mobile portal

This recipe looks at creating a PSM portal account, which is mandatory in order to download and use the PSM SDK.

Getting ready

You need to have a Sony Entertainment Network (SEN) account to register with the PSM portal. This is the standard account you use to bring your PlayStation device online, so you may already have one. If not, create one at http://bit.ly/Yiglfk before continuing.

How to do it...

1. Open a web browser and log in to http://psm.playstation.net.
2. Locate and click on the Register button.
3. Sign in using the SEN account.
4. Agree to the Terms and Conditions. You need to scroll to the bottom of the text before the Agree button is enabled. But, you always read the fine print anyways... don't you?
5. Finally, select the e-mail address and language you want for the PlayStation Mobile portal. You can use the same e-mail you used for your SEN account. Click on Register.
6. An e-mail will be sent to the e-mail account you used to sign up. Locate the activation link and either click on it, or copy and paste it into a browser window.

Your account is now complete, and you can log in to the PSM developer portal.

How it works...

A PlayStation Mobile account is mandatory to download the PSM tools. Many of the links to the portal require you to be logged in before they will work. It is very important that you create and activate your account and log in to the portal before continuing on with the book! All future recipes assume you are logged in to the portal.

Installing the PlayStation Mobile SDK

This recipe demonstrates how to install the PlayStation Mobile SDK.

Getting ready

First you need to download the PlayStation Mobile SDK; you can download it from http://bit.ly/W8rhhx.

How to do it...
1. Locate the installation file you downloaded earlier and double-click to launch the installer. Say yes to any security-related questions.
2. Take the default settings when prompted, making sure to install the runtimes and GTK# libraries.
3. The installer for the Vita drivers will now launch. There is no harm in installing them even if you do not have a Vita.
4. Installation is now complete; a browser window with the current release notes will open.

How it works...

The SDK is now installed on your machine. Assuming you used default directories, the SDK will be installed to C:\Program Files (x86)\SCE\PSM if you are running 64-bit Windows, or to C:\Program Files\SCE\PSM if you are running 32-bit Windows. Additionally, all of the documentation and samples have been installed under the Public account, located in C:\Users\Public\Documents\PSM.

There's more...

There are a number of samples available in the samples directory and you should certainly take a moment to check them out. They range in complexity from simple Hello World applications up to a full-blown third-person 3D role playing game (RPG). They are, however, often documented in Japanese and often rely on other samples, making learning from them a frustrating experience at times, at least for those of us who do not understand Japanese!

Creating a simple game loop

We are now going to create our first PSM SDK application, which is the main loop of your application. Actually, all the code in this sample is going to be generated by PSM Studio for us.

Getting ready

From the start menu, locate and launch PSM Studio in the PlayStation Mobile folder.

How to do it...

1. In PSM Studio, select the File | New | Solution... menu.
2. In the resulting dialog box, in the left-hand panel expand C# and select PlayStation Suite, then in the right-hand panel, select PlayStation Suite Application.
3. Fill in the Name field, which will automatically populate the Solution name field. Click on OK.
Your workspace and boilerplate code will now be created; hit the F5 key or select the Run | Start Debugging menu to run your code in the Simulator. Not much to look at, but it's your first running PlayStation Mobile application! Now let's take a quick look at the code it generated:

    using System;
    using System.Collections.Generic;

    using Sce.PlayStation.Core;
    using Sce.PlayStation.Core.Environment;
    using Sce.PlayStation.Core.Graphics;
    using Sce.PlayStation.Core.Input;

    namespace Ch1_Example1
    {
        public class AppMain
        {
            private static GraphicsContext graphics;

            public static void Main (string[] args)
            {
                Initialize ();
                while (true) {
                    SystemEvents.CheckEvents ();
                    Update ();
                    Render ();
                }
            }

            public static void Initialize ()
            {
                graphics = new GraphicsContext ();
            }

            public static void Update ()
            {
                var gamePadData = GamePad.GetData (0);
            }

            public static void Render ()
            {
                graphics.SetClearColor (0.0f, 0.0f, 0.0f, 0.0f);
                graphics.Clear ();
                graphics.SwapBuffers ();
            }
        }
    }

How it works...

This recipe shows us the very basic skeleton of an application. Essentially it loops forever, displaying a black screen.

    private static GraphicsContext graphics;

The GraphicsContext variable represents the underlying OpenGL context. It is used to perform almost every graphically related action. Additionally, it contains the capabilities (resolution, pixel depth, and so on) of the underlying graphics device.

All C#-based applications have a Main() function, and this one is no exception. Within Main() we call our Initialize() method, then loop forever, checking for events, updating, and finally rendering the frame. The Initialize() method simply creates a new GraphicsContext variable. The Update() method polls the first gamepad for updates. Finally, Render() uses our GraphicsContext variable to first set the clear color to black using an RGBA color value, then clears the screen and swaps the buffers, making the frame visible. Graphic operations in the PSM SDK are generally drawn to a back buffer.

There's more...
The same process is used to create PlayStation Suite library projects, which will generate a DLL file. You can use almost any C# library that doesn't rely on native code (pInvoke or unsafe); however, such libraries need to be recompiled into a PSM-compatible DLL format.

Color in the PSM SDK is normally represented as an RGBA value. The RGBA acronym stands for red, green, blue, and alpha. Each component is an int value ranging from 0 to 255, representing the strength of that primary color. Alpha represents the level of transparency, with 0 being completely transparent and 255 being opaque.
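As a quick illustration of the RGBA description above, the following JavaScript helper clamps each component into the 0 to 255 range. It is illustrative only and not part of the PSM SDK.

```javascript
// Clamp each RGBA component to 0..255; alpha 0 is fully transparent
// and 255 is fully opaque.
function rgba(r, g, b, a) {
  function clamp(v) { return Math.min(255, Math.max(0, Math.round(v))); }
  return { r: clamp(r), g: clamp(g), b: clamp(b), a: clamp(a) };
}
```

For example, rgba(255, 128, 0, 300) clamps the out-of-range alpha down to 255.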
Working with the sharing plugin

Packt
23 May 2014
11 min read
Now that we've dealt with the device events, let's get to the real meat of the project: let's add the sharing plugin and see how to use it.

Getting ready

Before continuing, be sure to add the plugin to your project:

    cordova plugin add https://github.com/leecrossley/cordova-plugin-social-message.git

Getting on with it

This particular plugin is one of many social network plugins. Each one has its benefits and each one has its problems, and the available plugins are changing rapidly. This particular plugin is very easy to use, and supports a reasonable number of social networks. On iOS, Facebook, Twitter, Mail, and Flickr are supported. On Android, any installed app that registers for the share intent is supported. The full documentation is available at https://github.com/leecrossley/cordova-plugin-social-message at the time of writing this. It is easy to follow if you need to know more than what we cover here.

To show a sharing sheet (the appearance varies based on platform and operating system), all we have to do is this:

    window.socialmessage.send ( message );

message is an object that contains any of the following properties:

text: This is the main content of the message.
subject: This is the subject of the message. This is only applicable while sending e-mails; most other social networks will ignore this value.
url: This is a link to attach to the message.
image: This is an absolute path to the image in order to attach it to the message. It must begin with file:/// and the path should be properly escaped (that is, spaces should become %20, and so on).
activityTypes (only for iOS): This supports activities on various social networks. Valid values are: PostToFacebook, PostToTwitter, PostToWeibo, Message, Mail, Print, CopyToPasteboard, AssignToContact, and SaveToCameraRoll.
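Since the image property must begin with file:/// and have characters such as spaces escaped as %20, it can be handy to normalize the path before building the message. The helper below is hypothetical and not part of the plugin; it simply leans on JavaScript's built-in encodeURI:

```javascript
// Escape unsafe characters (for example, spaces become %20) in a
// file:/// path so it can be used as the image property of a message.
function escapeFilePath(path) {
  if (path.indexOf('file:///') !== 0) {
    throw new Error('image paths must begin with file:///');
  }
  // encodeURI escapes spaces but leaves "/" and ":" intact
  return encodeURI(path);
}
```

You could then build the message as, for example, { text: "the caption", image: escapeFilePath(somePath) }.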
In order to create a simple message to share, we can use the following code:

    var message = {
        text: "something to send"
    };
    window.socialmessage.send ( message );

To add an image, we can go a step further, shown as follows:

    var message = {
        text: "the caption",
        image: "file:///var/mobile/…/image.png"
    };
    window.socialmessage.send ( message );

Once this method is called, the sharing sheet will appear. On iOS 7, you'll see something like the following screenshot:

On Android, you will see something like the following screenshot:

What did we do?

In this section, we installed the sharing plugin and we learned how to use it. In the next sections, we'll cover the modifications required to use this plugin.

Modifying the text note edit view

We've dispatched most of the typical sections in this project—there's not really any user interface to design, nor are there any changes to the actual note models. All we need to do is modify the HTML template a little to include a share button and add the code to use the plugin.

Getting on with it

First, let's alter the template in www/html/textNoteEditView.html.
I've highlighted the changes:

    <html>
      <body>
        <div class="ui-navigation-bar">
          <div class="ui-title" contenteditable="true">%NOTE_NAME%</div>
          <div class="ui-bar-button-group ui-align-left">
            <div class="ui-bar-button ui-tint-color ui-back-button">%BACK%</div>
          </div>
          <div class="ui-bar-button-group ui-align-right">
            <div class="ui-bar-button ui-destructive-color">%DELETE_NOTE%</div>
          </div>
        </div>
        <div class="ui-scroll-container ui-avoid-navigation-bar ui-avoid-tool-bar">
          <textarea class="ui-text-box">%NOTE_CONTENTS%</textarea>
        </div>
        <div class="ui-tool-bar">
          <div class="ui-bar-button-group ui-align-left"></div>
          <div class="ui-bar-button-group ui-align-center"></div>
          <div class="ui-bar-button-group ui-align-right">
            <div class="ui-bar-button ui-background-tint-color ui-glyph ui-glyph-share share-button"></div>
          </div>
        </div>
      </body>
    </html>

Now, let's make the modifications to the view in www/js/app/views/textNoteEditView.js. First, we need to add an internal property that references the share button:

    self._shareButton = null;

Next, we need to add code to renderToElement so that we can add an event handler to the share button. We'll do a little bit of checking here to see if we've found the icon, because we don't support sharing of videos and sounds and we don't include that asset in those views. If we didn't have the null check, those views would fail to work. Consider the following code snippet:

    self.renderToElement = function () {
        …
        self._shareButton = self.element.querySelector ( ".share-button" );
        if (self._shareButton !== null) {
            Hammer ( self._shareButton ).on("tap", self.shareNote);
        }
        …
    }

Finally, we need to add the method that actually shares the note. Note that we save the note before we share it, since that's how the data in the DOM gets transmitted to the note model.
Consider the following code snippet:

self.shareNote = function () {
  self.saveNote();
  var message = {
    subject: self._note.name,
    text: self._note.textContents
  };
  window.socialmessage.send(message);
}

What did we do?

First, we added a toolbar to the view that looks like the following screenshot—note the new sharing icon:

Then, we added the code that shares the note and attached that code to the Share button. Here's an example of us sending a tweet from a note on iOS:

What else do I need to know?

Don't forget that social networks often have size limits. For example, Twitter only supports 140 characters, so if you send a note using Twitter, it needs to be a very short note. We could, on iOS, prevent Twitter from being permitted, but there's no way to prevent this on Android. Even then, there's no real reason to prevent Twitter from being an option. The user just needs to be familiar enough with the social network to know that they'll have to edit the content before posting it.

Also, don't forget that the subject of a message only applies to mail; most other social networks will ignore it. If something is critical, be sure to include it in the text of the message, not only in the subject.

Modifying the image note edit view

The image note edit view presents an additional difficulty: we can't put the Share button in a toolbar. This is because doing so will cause positioning difficulties with TEXTAREA and the toolbar when the soft keyboard is visible. Instead, we'll put it in the lower-right corner of the image. This is done by using the same technique we used to outline the camera button.
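Before moving on, the message-building pattern above can be reduced to a tiny framework-free sketch. The helper name is invented for illustration; the book's code builds the object inline:

```javascript
// Hypothetical helper, not part of the plugin: build a share message,
// attaching the image path only when one is supplied, and remembering
// that `subject` is only honored by mail.
function buildShareMessage(subject, text, imagePath) {
  var message = { subject: subject, text: text };
  if (imagePath) {
    message.image = imagePath;
  }
  return message;
}

// A text-only note produces a message with no image key at all.
var msg = buildShareMessage("My note", "Some thoughts on EURUSD", null);
console.log("image" in msg); // false
```

The resulting object has the same shape as the one handed to the plugin's send method.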
Getting on with it

Let's edit the template in www/html/imageNoteEditView.html; again, I've highlighted the changes:

<html>
  <body>
    <div class="ui-navigation-bar">
      <div class="ui-title" contenteditable="true">%NOTE_NAME%</div>
      <div class="ui-bar-button-group ui-align-left">
        <div class="ui-bar-button ui-tint-color ui-back-button">%BACK%</div>
      </div>
      <div class="ui-bar-button-group ui-align-right">
        <div class="ui-bar-button ui-destructive-color">%DELETE_NOTE%</div>
      </div>
    </div>
    <div class="ui-scroll-container ui-avoid-navigation-bar">
      <div class="image-container">
        <div class="ui-glyph ui-background-tint-color ui-glyph-camera outline"></div>
        <div class="ui-glyph ui-background-tint-color ui-glyph-camera non-outline"></div>
        <div class="ui-glyph ui-background-tint-color ui-glyph-share outline"></div>
        <div class="ui-glyph ui-background-tint-color ui-glyph-share non-outline share-button"></div>
      </div>
      <textarea class="ui-text-box"
        onblur="this.classList.remove('editing');"
        onfocus="this.classList.add('editing');">%NOTE_CONTENTS%</textarea>
    </div>
  </body>
</html>

Because sharing an image requires a little additional code, we need to override shareNote (which we inherit from the prior task) in www/js/app/views/imageNoteEditView.js:

self.shareNote = function () {
  var fm = noteStorageSingleton.fileManager;
  var nativePath = fm.getNativeURL(self._note.mediaContents);
  self.saveNote();
  var message = {
    subject: self._note.name,
    text: self._note.textContents
  };
  if (self._note.unitValue > 0) {
    message.image = nativePath;
  }
  window.socialmessage.send(message);
}

Finally, we need to add the following styles to www/css/style.css:
div.ui-glyph.ui-background-tint-color.ui-glyph-share.outline,
div.ui-glyph.ui-background-tint-color.ui-glyph-share.non-outline {
  left: inherit;
  width: 50px;
  top: inherit;
  height: 50px;
}

div.ui-glyph.ui-background-tint-color.ui-glyph-share.outline {
  -webkit-mask-position: 15px 16px;
  mask-position: 15px 16px;
}

div.ui-glyph.ui-background-tint-color.ui-glyph-share.non-outline {
  -webkit-mask-position: 15px 15px;
  mask-position: 15px 15px;
}

What did we do?

Like the previous task, we first modified the template to add the share icon. Then, we added the shareNote code to the view (note that we don't have to add anything to find the button, because we inherit that from the Text Note Edit View). Finally, we modified the style sheet to reposition the Share button appropriately so that it looks like the following screenshot:

What else do I need to know?

The image needs to be a valid image, or the plugin may crash. This is why we check for the value of unitValue in shareNote, to ensure that the image is large enough to attach to the message. If not, we only share the text.

Game Over... Wrapping it up

And that's it! You've learned how to respond to device events, and you've also added sharing to text and image notes by using a third-party plugin.

Can you take the HEAT? The Hotshot Challenge

There are several ways to improve the project. Why don't you try a few?

- Implement the ability to save the note when the app receives a pause event, and then restore the note when the app is resumed.
- Remember which note is visible when the app is paused, and restore it when the app is resumed. (Hint: localStorage may come in handy.)
- Add video or audio sharing. You'll probably have to alter the sharing plugin or find another (or an additional) plugin. You'll probably also need to upload the data to an external server so that it can be linked via the social network. For example, it's often customary to link to a video on Twitter by using a link shortener.
The File Transfer plugin might come in handy for this challenge (https://github.com/apache/cordova-plugin-file-transfer/blob/dev/doc/index.md).

Summary

This article introduced you to a third-party plugin that provides access to e-mail and various social networks.

Resources for Article:

Further resources on this subject:

- Geolocation – using PhoneGap features to improve an app's functionality, write once use everywhere [Article]
- Configuring the ChildBrowser plugin [Article]
- Using Location Data with PhoneGap [Article]
Tabula Rasa: Nurturing your Site for Tablets

Packt
09 Mar 2012
16 min read
The human touch

There's a reason touchscreen interfaces were rarely used before Apple re-invented them in the iPhone. It's because programming them is very difficult. With a mouse-driven interface you have a single point of contact: the mouse's pointer. With a touchscreen, you potentially have ten points of contact, each one with a separate motion. And you also have to deal with limiting spurious input when the user accidentally touches the tablet when they didn't mean to. Does the user's swipe downward mean they want to scroll the page or to drag a single page element? The questions go on to infinity.

With this article, we stand on the shoulders of those giants who have done the heavy lifting and given us a JavaScript interface that registers touch and gestures for use in our web pages. Many Bothans died to bring us this information.

To understand the tablet is to understand the touch interface, and in order to understand the touch interface, we need to learn how touch events differ from mouse events. But that begs the question: what is an event?

The event-driven model

Many developers use JavaScript-based events and have not even the slightest clue as to what they can do or their power. In addition, many developers get into situations where they don't know why their events are misfiring or, worse yet, bubbling to other event handlers and causing a cascade of event activity.

As you may or may not know, an HTML document is made up of a series of tags organized in a hierarchical structure called the HTML document. In JavaScript, this document is referred to through the reserved word document. Simple enough, right? Well, what if I want to interact with a tag inside of a document, and not the document as a whole? Well, for that we need a way of addressing nested items inside the main <html> tag. For that, we use the Document Object Model (DOM).
DOM is a cross-platform and language-independent convention for representing and interacting with objects in HTML, XHTML, and XML documents. Aspects of the DOM (such as its elements) may be addressed and manipulated within the syntax of the programming language in use. The public interface of a DOM is specified in its Application Programming Interface (API). For more details on the DOM, refer to the Wikipedia article at http://en.wikipedia.org/wiki/Document_Object_Model.

The body of that document then becomes document.body. The head of the document, likewise, becomes document.head. Now, what happens when your mouse interacts with this web page? This is said to be a DOM event. When you click, the elements that are the receivers of that action are said to propagate the event through the DOM. In the early days, Microsoft and Netscape/Firefox had competing ways of handling those events. But they finally gave way to the modern W3C standard, which unifies the two ways and, even more importantly, jQuery has done a lot to standardize the way we think about events and event handling. In most browsers today, mouse events are pretty standardized, as we are now more than 20 years into the mouse-enabled computing era.

For tablets and touchscreen phones, obviously, there is no mouse. There are only your fingers to serve the purpose of the mouse. And here's where things get simultaneously complicated as well as simple.

Touch and go

Much of what we talk about as touch interaction is made up of two distinct types of touches—single touches and gestures. A single touch is exactly that: one finger placed on the screen from the start till the end. A gesture is defined as one or more fingers touching the surface of the area and accompanied by a specific motion: Touch + Motion. To open most tablets, you swipe your finger across a specific area. To scroll inside a div element, you use two fingers pushing up and down.
In fact, scrolling itself is a gesture, and tablets only respond to the scroll event once it's over. We will cover more on that later.

Gestures have redefined user interaction. I wonder how long it took for someone to figure out that zooming in and out is best accomplished with a pinch of the fingers? It seems so obvious once you do it, and it immediately becomes second nature. My mom was pinching to zoom on her iPhone within the first 5 minutes of owning it.

Touch events are very similar to multiple mouse events without a hover state. There is no response from the device when a finger is over the device but has not pressed down. There is an effort on the part of many mobile OS makers to simulate the hover event by allowing the hover event to trigger with the first click, and the click event to trigger with the second click on the same object. I would advise against using it for any meaningful user interaction, as it is inconsistently implemented, and many times the single click triggers the link as well as the hover-reveal in drop-down menus.

Not using the hover event to guide users through navigation changes the way we interact with a web page. Much of the work we've done to guide users through our pages is based on the hover-response event model to clue users in on where links are. We have to get beyond that. Drop-down menus quickly become frustrating at the second and third levels, especially if the click and hover events were incorrectly implemented in the desktop browser. Forward and back buttons are rendered obsolete by forward and backward swipe gestures.

The main event

There are basically three touch events—touchstart, touchmove, and touchend. The gesture events are gesturestart, gesturechange, and gestureend. All gestures register a touch event, but not all touch events register gestures.
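The rule of thumb above (handle single-finger contacts as touches, hand multi-finger contacts off to gesture handling) can be sketched as a trivial classifier. This is illustrative only, not browser code:

```javascript
// Decide how a set of contact points should be routed: nothing, a single
// touch, or a gesture. Mirrors the practice of returning early from touch
// handlers when more than one finger is involved.
function classifyContact(touchCount) {
  if (touchCount === 0) return "none";
  if (touchCount === 1) return "touch";
  return "gesture"; // two or more fingers: pinch, two-finger scroll, etc.
}

console.log(classifyContact(1)); // "touch"
console.log(classifyContact(2)); // "gesture"
```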
Gestures are registered when multiple fingers make contact with the touch surface and register significant location change in a concerted effort, such as two or more fingers swiping, a pinch action, and so on. In general, I've found it a good practice to use touch events to register finger actions; but it is required to return null on a touch event when there are multiple fingers involved and to handle such events with gestures.

jQuery Mobile has a nice suite of touch events built into its core that we can hook into. But jQuery and jQuery Mobile sometimes fall short of the interaction we want to have for our users, so we'll outline best practices for adding customized user touch events to both the full and mobile version of the demo site. Let's get started…

Time for action – adding a swipe advance to the home page

The JavaScript to handle touch events is a little tricky, so pay attention:

1. Add the following lines to both sites/all/themes/dpk/js/global.js and sites/all/themes/dpk_mobile/js/global.js:

Drupal.settings.isTouchDevice = function() {
  return "ontouchstart" in window;
}

if (Drupal.settings.isTouchDevice()) {
  Drupal.behaviors.jQueryMobileSlideShowTouchAdvance = {
    attach: function(context, settings) {
      self = Drupal.behaviors.jQueryMobileSlideShowTouchAdvance;
      jQuery.each(jQuery(".views_slideshow_cycle_main.viewsSlideshowCycle-processed"), function(idx, value) {
        value.addEventListener("touchstart", self.handleTouchStart);
        jQuery(value).addClass("views-slideshow-mobile-processed");
      })
      jQuery(self).bind("swipe", self.handleSwipe);
    },
    detach: function() {
    },
    original: { x: 0, y: 0 },
    changed: { x: 0, y: 0 },
    direction: { x: "", y: "" },
    fired: false,
    handleTouchStart: function(evt) {
      self = Drupal.behaviors.jQueryMobileSlideShowTouchAdvance;
      if (evt.touches) {
        if (evt.targetTouches.length != 1) {
          return false;
        }
        if (evt.touches.length) {
          evt.preventDefault(); evt.stopPropagation()
        }
        self.original = { x: evt.touches[0].clientX, y: evt.touches[0].clientY }
        self.target = jQuery(this).attr("id").replace("views_slideshow_cycle_main_", "");
        Drupal.viewsSlideshow.action({ "action": "pause", "slideshowID": self.target });
        evt.target.addEventListener("touchmove", self.handleTouchMove);
        evt.target.addEventListener("touchend", self.handleTouchEnd);
      }
    },
    handleTouchMove: function(evt) {
      self = Drupal.behaviors.jQueryMobileSlideShowTouchAdvance;
      self.changed = {
        x: (evt.touches.length) ? evt.touches[0].clientX : evt.changedTouches[0].clientX,
        y: (evt.touches.length) ? evt.touches[0].clientY : evt.changedTouches[0].clientY
      };
      h = parseInt(self.original.x - self.changed.x),
      v = parseInt(self.original.y - self.changed.y);
      if (h !== 0) { self.direction.x = (h < 0) ? "right" : "left"; }
      if (v !== 0) { self.direction.y = (v < 0) ? "up" : "down"; }
      jQuery(self).trigger("swipe");
    },
    handleTouchEnd: function(evt) {
      self = Drupal.behaviors.jQueryMobileSlideShowTouchAdvance;
      evt.target.removeEventListener("touchmove", self.handleTouchMove);
      evt.target.removeEventListener("touchend", self.handleTouchEnd);
      self.fired = false;
    },
    handleSwipe: function(evt) {
      self = Drupal.behaviors.jQueryMobileSlideShowTouchAdvance;
      if (evt != undefined && self.fired == false) {
        Drupal.viewsSlideshow.action({
          "action": (self.direction.x == "left") ? "nextSlide" : "previousSlide",
          "slideshowID": self.target
        });
        self.fired = true; // only fire advance once per touch
      }
    }
  }
}

2. Clear Drupal's cache by either navigating to Configuration | Performance and clicking on the Clear cache button or entering these lines in a terminal:

cd ~/sites/dpk
drush cc all

3. Navigate to either home page with a touch-enabled device and you should be able to advance the home page slideshow with your fingers.

What just happened?

Let's take a look at how this code works. First, we have a function, isTouchDevice. This function returns true/false values if touch events are enabled on the browser.
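The detection itself is just a property check. Written against an injected object instead of the global window (an illustrative change from the article's code), it can even be exercised outside a browser:

```javascript
// Same test as Drupal.settings.isTouchDevice, but taking the window-like
// object as a parameter so it can run anywhere.
function isTouchDevice(win) {
  return "ontouchstart" in win;
}

console.log(isTouchDevice({ ontouchstart: null })); // true: the property exists
console.log(isTouchDevice({}));                     // false: no touch support
```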
We use an if statement to wall off the touchscreen code, so browsers that aren't capable don't register an error. The Drupal behavior jQueryMobileSlideShowTouchAdvance has the attach and detach functions to satisfy the Drupal behavior API. In each function, we locally assign the self variable with the value of the entire object. We'll use this in place of the this keyword. In the Drupal behavior object, this can sometimes ambiguously refer to the entire object, or to the current sub-object. In this case, we want the reference to be to just the sub-object, so we assign it to self.

The attach function grabs all slideshow_cycle div elements in a jQuery each loop. The iteration of the loop adds an event listener to the div tag. It's important to note that the event listener is not bound with jQuery event binding. jQuery event binding does not yet support touch events. There's an effort to add them, but they are not in the general release that is used with Drupal 7. We must then add them with the browser-native function, addEventListener.

We use the handleTouchStart method to respond to the touchstart event. We will add touchend and touchmove events after the touchstart is triggered. The other event that we're adding listens to this object for the swipe event. This is a custom event we will create that will be triggered when a swipe action happens. We will cover more on that shortly.

The detach function is used to add cleanup to items when they are removed from the DOM. Currently, we have no interaction that removes items from the DOM, and therefore no cleanup is necessary.

Next, we add some defaults—original, changed, direction, and fired. We'll use those properties in our event response methods.

handleTouchStart is fired when the finger first touches the surface. We make sure the evt.touches object has value and is only one touch. We want to disregard touches that are gestures.
Also, we use preventDefault and stopPropagation on the event to keep it from bubbling up to other items in the DOM. self.original is the variable that will hold the touch's original coordinates. We store the values for touch[0]. We also name the target by getting the DOM ID of the div element containing the cycle. We can use string transforms on that ID to obtain the ID of the jQuery cycle being touched and will use that value when we send messages to the slideshow, based on the touch actions, like we do in the next line. We tell the slideshow to pause normal activity while we figure out what the user wants. To figure that out, we add touchmove and touchend event listeners to the div element.

handleTouchMove figures out the changed touch value. It does so by looking at the clientX and clientY values in the touch event. Some browsers support the changedTouches value, which will do some calculations on how much the touch has changed since the last event was triggered. If it's available, we use it, or we use the value of the X and Y coordinates in the touch event's touches array. We do some subtraction against the original touch to find out how much the touch has changed and in what direction. We use self.direction to store the direction of the change. We store the direction and tell the world that a swipe has begun on our div element by triggering a custom event on our self object.

If you remember correctly, we used the handleSwipe method to respond to the swipe event. In handleSwipe we make sure the event has not already fired. If it hasn't, we use that swipe event to trigger a next or previous action on our jQuery cycle slideshow. Once we've fired the event, we change self.fired to true so it will only fire once per touch.
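Stripped of the Drupal and jQuery plumbing, the direction arithmetic in handleTouchMove comes down to a pure function. The name here is illustrative, but the comparisons match the code above:

```javascript
// Compare the original and changed touch coordinates and report the swipe
// direction on each axis, exactly as handleTouchMove does before
// triggering the custom swipe event.
function swipeDirection(original, changed) {
  var h = original.x - changed.x,
      v = original.y - changed.y,
      direction = { x: "", y: "" };
  if (h !== 0) { direction.x = (h < 0) ? "right" : "left"; }
  if (v !== 0) { direction.y = (v < 0) ? "up" : "down"; }
  return direction;
}

// A finger dragged from x=200 to x=120 reads as a leftward swipe, which
// handleSwipe maps to the slideshow's nextSlide action.
console.log(swipeDirection({ x: 200, y: 10 }, { x: 120, y: 10 }).x); // "left"
```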
In the touchend responder, handleTouchEnd, we remove both the touchmove and touchend responders and reset the fired state.

But adding the touch events to both the desktop and the mobile themes begs the question, "Into which category does the tablet fall?"

Have a go hero – adding a swipe gesture

Add a swipe gesture event to the Menu Item page that allows you to scroll through menu items.

The changing landscape (or portrait)

Responsive web design is a design discipline that believes that the same markup should be used for both desktop and mobile screens, with the browser managing the display of items, rather than the user choosing an experience. If the screen is smaller, the layout adjusts and content emphasis remains.

Conversely, the popularity of Internet-connected game consoles and DVI ports on large-screen televisions gives us yet another paradigm for web pages—the large screen. I sit in front of a 72" TV screen and connect it to either my laptop or iPad and I have a browsing experience that is more passive, but completely immersive.

Right now, I bet you're thinking, "So which is it, Mr Author, two sites or one?" Well, both, actually. In some cases, with some interactions, it will be necessary to do two site themes and maintain them both. In some cases, when you can start from scratch, you can do a design that can work on every browser screen size. Let's start over and put responsive design principles to work with what we already know about media queries and touch interfaces.

"Starting over" or "Everything you know about designing websites is wrong"

Responsive web design forces the designer to start over—to forget the artificial limitations of size that print imposes and to start with a blank canvas. Once that blank canvas is in place, though, how do you fill it? How do you create "The One True Design" (cue the theme music)?

This book is not a treatise on how to create the perfect design.
For that, I can recommend A Book Apart and anything published by smashingmagazine.com. Currently, they are at the forefront of this movement and regularly publish ideas and information that is helpful without too much technical jargon.

No, this book is more about giving you strategies to implement the designs you're given or that you create using Drupal. In point of fact, responsive design, at the time of writing, is in its infancy and will change significantly over the next 10 years, as new technology forces us to rethink our assumptions about what books, television, and movies are and what the Web is.

So suffice to say, it begins with content. Prioritizing content is the job of the designer. Items you want the user to perceive first, second, and third are the organizing structure of your responsive design makeover. In most instances, it's helpful to present the web developer with four views of the website.

Wire framing made easy

Start with wireframes. A great wire framing tool is called Balsamiq. It has a purposefully "rough" look to all of the elements you use. That way, it makes you focus on the elements and leave the design for a later stage. It's also helpful for focusing clients on the elements. Many times the stakeholders see a mockup and immediately begin the discussion of "I like blue but I don't like green/I like this font, but don't like that one." It can be difficult to move the stakeholders out of this mindset, but presenting them with black-and-white chalk-style drawings of website elements can, in many cases, be helpful. Balsamiq is a great tool for doing just that:

These were created with Balsamiq but could have been created in almost any primitive drawing program. There are many free ones as well as the more specialized pay ones. A simple layout like this is very easy to plan and implement. But very few of the websites you develop will ever be this simple.
Take, for instance, the menu item we have not yet implemented: online ordering. How does that work? What do those screens look like? At this point we have a Menu page but, as per this mockup, that menu page will become the online ordering section. How do we move the menu items we created to a place where they can be put in an order and paid for? And more importantly, how does each location know what was ordered from their location?

These are questions that come up in the mockup and requirements phase, and whether you are building the site yourself or being given requirements from a superior or a client, you now have a better idea of the challenges you will face implementing the single design for this site. With that, we've been given these mockups for the new online ordering system.

The following mockup diagram is for adding an order:

The following mockup diagram is for placing an order:

We'll implement these mockups using the Drupal 7 Commerce module. The Commerce module is just a series of customized data entities and views that we can use as the building blocks of our commerce portion. We'll theme the views in the standard Drupal way but with an eye to multi-width screens, lack of hover state, and keeping in mind "hit zones" with fingers on small mobile devices. We'll also add some location awareness to assist with the delivery process. Once an order is placed, an e-mail will need to be sent to the correct franchise notifying them of the pizza order and initiating the process of getting it out the door.

Cloud-enabling Your Apps

Packt
08 May 2013
7 min read
(For more resources related to this topic, see here.)

Which cloud services can you use with Titanium?

Here is a comparison of the services offered by three cloud-based providers who have been proven to work with Titanium:

                                       Appcelerator Cloud Services   Parse   StackMob
  Customizable storage                 Yes                           Yes     Yes
  Push notifications                   Yes                           Yes     Yes
  E-mail                               Yes                           No      No
  Photos                               Yes                           Yes     Yes
  Link with Facebook/Twitter account   Yes                           Yes     Yes
  User accounts                        Yes                           Yes     Yes

The services offered by these three leading contenders are very similar. The main difference is the cost. Which is the best one for you? It depends on your requirements; you will have to do the cost/benefit analysis to work out the best solution for you.

Do you need more functionality than this? No problem, look around for other PaaS providers. The PaaS service offered by RedHat has been proven to integrate with Titanium and offers far more flexibility. There is an example of a Titanium app developed with RedHat OpenShift at https://openshift.redhat.com/community/blogs/developing-mobile-apps-for-the-cloud-with-titanium-studio-and-the-openshift-paas.

It doesn't stop there; new providers are coming along almost every month with new and grand ideas for web and mobile integration. My advice would be to take the long view. Draw up a list of what you require initially for your app and what you realistically want in the next year. Check this list against the cloud providers. Can they satisfy all your needs at a workable cost? They should do; they should be flexible enough to cover your plans. You should not need to split your solution between providers.

Clouds are everywhere

Cloud-based services offer more than just storage.

Appcelerator Cloud Services

Appcelerator Cloud Services (ACS) is well integrated into Titanium. The API includes commands for controlling ACS cloud objects. In the first example in this article we are going to add commentary functionality to the simple forex app.
Forex commentary is an ideal example of the benefits of cloud-based storage, where your data is available across all devices. First, let's cover some foreground to the requirements.

The currency markets are open 24 hours a day, 5 days a week, and trading opportunities can present themselves at any point. You will not be in front of your computer all of the time, so you will need to be able to access and add commentary when you are on your phone or at home on your PC. This is where the power of the cloud really starts to hit home. We already know that you can create apps for a variety of devices using Appcelerator. This is good; we can access our app from most phones, but now, using the cloud, we can also access our commentary from anywhere. So, comments written on the train about the EURUSD rate can be seen later when at home looking at the PC.

When we are creating forex commentary, we will store the following:

- The currency pair (that is, EURUSD)
- The rate (the current exchange rate)
- The commentary (what we think about the exchange rate)

We will also store the date and time of the commentary. This is done automatically by ACS; all objects include the date they were created.

ACS allows you to store key value pairs (which is the same as Ti.App.Properties), that is, AllowUserToSendEmails: True, or custom objects. We have several attributes to our commentary post, so a key value pair will not suffice. Instead, we will be using a custom object. We are going to add a screen that will be called when a user selects a currency. From this screen a user can enter commentary on the currency.

Time for action – creating ACS custom objects

Perform the following steps to create ACS custom objects:

1. Enable ACS in your existing app. Go to tiapp.xml and click on the Enable... button on the Cloud Services section.
2. Your project will gain a new Ti.Cloud module and the ACS authentication keys will be shown.

3. Go to the cloud website, https://my.appcelerator.com/apps, find your app, and select Manage ACS. Select Development from the selection buttons at the top.

4. You need to define a user so your app can log in to ACS. From the App Management tab select Users from the list on the right. If you have not already created a suitable user, do it now.

5. We will split the functionality in this article over two files. The first file will be called forexCommentary.js and will contain the cloud functionality, and the second file, called forexCommentaryView.js, will contain the layout code. Create the two new files.

6. Before we can do anything with ACS, we need to log in. Create an init function in forexCommentary.js which will log in the forex user created previously:

function init(_args) {
  if (!Cloud.sessionId) {
    Cloud.Users.login({
      login: 'forex',
      password: 'forex'
    }, function (e) {
      if (e.success) {
        _args.success({user : e.users[0]});
      } else {
        _args.error({error: e.error});
      }
    });
  }
}

This is not a secure login; that's not important for this example. If you need greater security, use the Ti.Cloud.Users.secureLogin functionality.

7. Create another function to create a new commentary object on ACS. The function will accept a parameter containing the attributes pair, rate, and commentary and create a new custom object from these. The first highlighted section shows how easy it is to define a custom object. The second highlighted section shows the custom object being passed to the success callback when the storage request completes:

function addCommentary(_args) {
  // create a new currency commentary
  Cloud.Objects.create({
    classname: className,
    fields: {
      pair: _args.pair,
      rate: _args.rate,
      comment: _args.commentary
    }
  }, function (e) {
    if (e.success) {
      _args.success(e.forexCommentary[0]);
    } else {
      _args.error({error: e.error});
    }
  });
}

Now to the layout.
This will be a simple form with a text area where the commentary can be added. The exchange rate and currency pair will be provided from the app's front screen.

Create a TextArea object and add it to the window. Note keyboardType of Ti.UI.KEYBOARD_ASCII, which will force a full ASCII layout keyboard to be displayed, and returnKeyType of Ti.UI.RETURNKEY_DONE, which will add a done key used in the next step:

var commentary = Ti.UI.createTextArea({
  borderWidth: 2,
  borderColour: 'blue',
  borderRadius: 5,
  keyboardType: Ti.UI.KEYBOARD_ASCII,
  returnKeyType: Ti.UI.RETURNKEY_DONE,
  textAlign: 'left',
  hintText: 'Enter your thoughts on ' + thePair,
  width: '90%',
  height: 150
});
mainVw.add(commentary);

Now add an event listener which will listen for the done key being pressed and, when triggered, will call the function to store the commentary with ACS:

commentary.addEventListener('return', function(e) {
  forex.addCommentary({
    pair: thePair,
    rate: theRate,
    commentary: e.value
  });
});

Finally, add the call to log in the ACS user when the window is first opened:

var forex = require('forexCommentary');
forex.init();

Run the app and enter some commentary.

What just happened?

You created functions to send a custom defined object to the server. Commentary entered on the phone is almost immediately available for viewing on the Appcelerator console (https://my.appcelerator.com/apps) and therefore available to be viewed by all other devices and formats.

Uploading pictures

Suppose you want to upload a picture, or a screenshot? This next example will show how easy it is to upload a picture to ACS.
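Both init and addCommentary wrap an asynchronous cloud call behind the same _args.success/_args.error convention. Here is a framework-free sketch of that pattern, with fakeCloudCreate standing in for Cloud.Objects.create; the stand-in and its names are invented for illustration:

```javascript
// Stand-in for an asynchronous cloud call: it invokes its callback with a
// success flag and the stored record, the way the real API reports back.
function fakeCloudCreate(fields, callback) {
  callback({ success: true, record: fields });
}

// The wrapper exposes the same _args.success / _args.error contract used
// by the functions in forexCommentary.js.
function addRecord(_args) {
  fakeCloudCreate(
    { pair: _args.pair, rate: _args.rate, comment: _args.commentary },
    function (e) {
      if (e.success) {
        _args.success(e.record);
      } else {
        _args.error({ error: e.error });
      }
    }
  );
}

addRecord({
  pair: "EURUSD",
  rate: 1.31,
  commentary: "Watching for a breakout",
  success: function (rec) { console.log("saved " + rec.pair); }, // prints "saved EURUSD"
  error: function (err) { console.log("failed", err.error); }
});
```

Keeping the view code coupled only to this small contract makes it easier to swap the cloud provider later without touching the layout code.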