Sharing with MvvmCross

Packt
19 Dec 2014
36 min read
In this article by Mark Reynolds, author of the book Xamarin Essentials, we will take the next step and look at how the use of design patterns and frameworks can increase the amount of code that can be reused. We will cover the following topics:

- An introduction to MvvmCross
- The MVVM design pattern
- Core concepts: Views, ViewModels, and commands
- Data binding
- Navigation (ViewModel to ViewModel)
- The project organization
- The startup process
- Creating NationalParks.MvvmCross

It's more than a little ambitious to try to cover MvvmCross along with a working example in a single article. Our approach will be to introduce the core concepts at a high level and then dive in and create the national parks sample app using MvvmCross. This will give you a basic understanding of how to use the framework and the value associated with its use. With that in mind, let's get started.

Introducing MvvmCross

MvvmCross is an open source framework that was created by Stuart Lodge. It is based on the Model-View-ViewModel (MVVM) design pattern and is designed to enhance code reuse across numerous platforms, including Xamarin.Android, Xamarin.iOS, Windows Phone, Windows Store, WPF, and Mac OS X. The MvvmCross project is hosted on GitHub and can be accessed at https://github.com/MvvmCross/MvvmCross.

The MVVM pattern

MVVM is a variation of the Model-View-Controller pattern. It separates logic traditionally placed in a View object into two distinct objects, one called View and the other called ViewModel. The View is responsible for providing the user interface, and the ViewModel is responsible for the presentation logic. The presentation logic includes transforming data from the Model into a form that is suitable for the user interface to work with, and mapping user interaction with the View into requests sent back to the Model.
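To make this division of responsibilities concrete, here is a minimal, framework-free sketch in plain C#. All names here (ParkModel, ParkViewModel, DisplayText) are hypothetical illustrations; a real MvvmCross app would use the framework's base classes instead:

```csharp
using System;

// Model: raw data, with no presentation concerns.
class ParkModel
{
    public string Name;
    public int AreaSqKm;
}

// ViewModel: transforms Model data into a display-ready form and
// maps user interaction back into requests against the Model.
class ParkViewModel
{
    readonly ParkModel _model;
    public ParkViewModel(ParkModel model) { _model = model; }

    // Presentation logic: format the raw data for the user interface.
    public string DisplayText
    {
        get { return string.Format("{0} ({1} sq km)", _model.Name, _model.AreaSqKm); }
    }

    // A user action routed through the ViewModel to the Model.
    public void Rename(string newName) { _model.Name = newName; }
}

// The "View" here is just a console; it only renders what the ViewModel exposes.
class Program
{
    static void Main()
    {
        var vm = new ParkViewModel(new ParkModel { Name = "Yosemite", AreaSqKm = 3027 });
        Console.WriteLine(vm.DisplayText);   // prints "Yosemite (3027 sq km)"
        vm.Rename("Yosemite NP");
        Console.WriteLine(vm.DisplayText);   // prints "Yosemite NP (3027 sq km)"
    }
}
```

Because the ViewModel has no dependency on any UI toolkit, it can be unit tested and reused across platforms, which is exactly the benefit discussed next.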
The following diagram depicts how the various objects in MVVM communicate. While MVVM presents a more complex implementation model, there are significant benefits to it:

- ViewModels and their interactions with Models can generally be tested using frameworks (such as NUnit) much more easily than applications that combine the user interface and presentation layers
- ViewModels can generally be reused across different user interface technologies and platforms

These factors make the MVVM approach both flexible and powerful.

Views

Views in an MvvmCross app are implemented using platform-specific constructs. For iOS apps, Views are generally implemented as ViewControllers and XIB files. MvvmCross provides a set of base classes, such as MvxViewController, that iOS ViewControllers inherit from. Storyboards can also be used in conjunction with a custom presenter to create Views; we will briefly discuss this option in the section titled Implementing the iOS user interface later in this article. For Android apps, Views are generally implemented as MvxActivity or MvxFragment along with their associated layout files.

ViewModels

ViewModels are classes that provide data and presentation logic to Views in an app. Data is exposed to a View as properties on a ViewModel, and logic that can be invoked from a View is exposed as commands. ViewModels inherit from the MvxViewModel base class.

Commands

Commands are used in ViewModels to expose logic that can be invoked from the View in response to user interactions. The command architecture is based on the ICommand interface used in a number of Microsoft frameworks such as Windows Presentation Foundation (WPF) and Silverlight. MvvmCross provides IMvxCommand, which is an extension of ICommand, along with an implementation named MvxCommand. Commands are generally defined as properties on a ViewModel.
For example:

    public IMvxCommand ParkSelected { get; protected set; }

Each command has an action method defined, which implements the logic to be invoked:

    protected void ParkSelectedExec(NationalPark park)
    {
        // logic goes here
    }

The command must be initialized and the corresponding action method assigned:

    ParkSelected =
        new MvxCommand<NationalPark> (ParkSelectedExec);

Data binding

Data binding facilitates communication between the View and the ViewModel by establishing a two-way link that allows data to be exchanged. The data binding capabilities provided by MvvmCross are based on capabilities found in a number of Microsoft XAML-based UI frameworks such as WPF and Silverlight. The basic idea is that you would like to bind a property of a UI control, such as the Text property of an EditText control in an Android app, to a property of a data object, such as the Description property of a NationalPark. The following diagram depicts this scenario.

The binding modes

There are four different binding modes that can be used for data binding:

- OneWay binding: This mode tells the data binding framework to transfer values from the ViewModel to the View and to transfer any updates to ViewModel properties to their bound View properties.
- OneWayToSource binding: This mode tells the data binding framework to transfer values from the View to the ViewModel and to transfer any updates to View properties to their bound ViewModel properties.
- TwoWay binding: This mode tells the data binding framework to transfer values in both directions between the ViewModel and the View; updates to either object cause the other to be updated. This binding mode is useful when values are being edited.
- OneTime binding: This mode tells the data binding framework to transfer values from the ViewModel to the View only when the binding is established; in this mode, updates to ViewModel properties are not monitored by the View.
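Returning briefly to commands: the ParkSelected snippets above can be exercised outside the framework. The following self-contained sketch uses a hypothetical Command&lt;T&gt; class as a stand-in for MvxCommand&lt;T&gt; (IMvxCommand and MvxCommand themselves live in the MvvmCross assemblies):

```csharp
using System;

// Stand-in for MvvmCross's MvxCommand<T>: wraps an Action<T> and
// exposes Execute, mirroring the ICommand-style pattern described above.
class Command<T>
{
    readonly Action<T> _exec;
    public Command(Action<T> exec) { _exec = exec; }
    public void Execute(T parameter) { _exec(parameter); }
}

class NationalPark
{
    public string Name { get; set; }
}

class MasterViewModel
{
    // The command is exposed as a property, as in the article's example.
    public Command<NationalPark> ParkSelected { get; protected set; }
    public string LastSelected { get; private set; }

    public MasterViewModel()
    {
        // Initialize the command with its action method.
        ParkSelected = new Command<NationalPark>(ParkSelectedExec);
    }

    void ParkSelectedExec(NationalPark park)
    {
        LastSelected = park.Name;   // navigation logic would go here
    }
}

class Program
{
    static void Main()
    {
        var vm = new MasterViewModel();
        // A View would invoke the command in response to a user tap.
        vm.ParkSelected.Execute(new NationalPark { Name = "Yosemite" });
        Console.WriteLine(vm.LastSelected);   // prints "Yosemite"
    }
}
```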
The INotifyPropertyChanged interface

The INotifyPropertyChanged interface is an integral part of making data binding work effectively; it acts as a contract between the source object and the target object. As the name implies, it defines a contract that allows the source object to notify the target object when data has changed, thus allowing the target to take any necessary actions, such as refreshing its display. The interface consists of a single event, the PropertyChanged event, which the target object can subscribe to and which is triggered by the source if a property changes. The following sample demonstrates how to implement INotifyPropertyChanged:

    public class NationalPark : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler
            PropertyChanged;

        string _name;
        public string Name
        {
            get { return _name; }
            set
            {
                if (value.Equals (_name,
                    StringComparison.Ordinal))
                {
                    // Nothing to do - the value hasn't changed
                    return;
                }
                _name = value;
                OnPropertyChanged();
            }
        }
        . . .
        void OnPropertyChanged(
            [CallerMemberName] string propertyName = null)
        {
            var handler = PropertyChanged;
            if (handler != null)
            {
                handler(this,
                    new PropertyChangedEventArgs(propertyName));
            }
        }
    }

Binding specifications

Bindings can be specified in a couple of ways. For Android apps, bindings can be specified in layout files. The following example demonstrates how to bind the Text property of a TextView instance to the Description property of a NationalPark instance:

    <TextView
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:id="@+id/descrTextView"
        local:MvxBind="Text Park.Description" />

For iOS, binding must be accomplished using the binding API. CreateBinding() is a method that can be found on MvxViewController.
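Before moving on to the binding examples, the INotifyPropertyChanged sample above can be verified as a self-contained console program. Note that PropertyChanged fires only when the value actually changes; string.Equals is used here instead of value.Equals so the sketch tolerates a null incoming value, a small deviation from the listing above:

```csharp
using System;
using System.ComponentModel;
using System.Runtime.CompilerServices;

class NationalPark : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    string _name;
    public string Name
    {
        get { return _name; }
        set
        {
            // Skip the notification when nothing changed.
            if (string.Equals(value, _name, StringComparison.Ordinal))
                return;
            _name = value;
            OnPropertyChanged();
        }
    }

    // [CallerMemberName] fills in "Name" automatically when called from the setter.
    void OnPropertyChanged([CallerMemberName] string propertyName = null)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}

class Program
{
    static void Main()
    {
        var park = new NationalPark();
        int notifications = 0;
        park.PropertyChanged += (s, e) =>
        {
            notifications++;
            Console.WriteLine(e.PropertyName);   // prints "Name"
        };

        park.Name = "Yellowstone";   // fires PropertyChanged
        park.Name = "Yellowstone";   // same value: no event
        Console.WriteLine(notifications);   // prints 1
    }
}
```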
The following example demonstrates how to bind the Description property to a UILabel instance:

    this.CreateBinding (this.descriptionLabel).
        To ((DetailViewModel vm) => vm.Park.Description).
        Apply ();

Navigating between ViewModels

Navigating between various screens within an app is an important capability. Within an MvvmCross app, this is implemented at the ViewModel level so that the navigation logic can be reused. MvvmCross supports navigation between ViewModels through the ShowViewModel<T>() method inherited from MvxNavigatingObject, which is the base class for MvxViewModel. The following example demonstrates how to navigate to DetailViewModel:

    ShowViewModel<DetailViewModel>();

Passing parameters

In many situations, there is a need to pass information to the destination ViewModel. MvvmCross provides a number of ways to accomplish this. The primary method is to create a class that contains simple public properties and pass an instance of the class into ShowViewModel<T>(). The following example demonstrates how to define and use a parameters class during navigation:

    public class DetailParams
    {
        public int ParkId { get; set; }
    }

    // using the parameters class
    ShowViewModel<DetailViewModel>(
        new DetailParams() { ParkId = 0 });

To receive and use the parameters, the destination ViewModel implements an Init() method that accepts an instance of the parameters class:

    public class DetailViewModel : MvxViewModel
    {
        . . .
        public void Init(DetailParams parameters)
        {
            // use the parameters here . . .
        }
    }

Solution/project organization

Each MvvmCross solution will have a single core PCL project that houses the reusable code and a series of platform-specific projects that contain the various apps. The following diagram depicts the general structure.

The startup process

MvvmCross apps generally follow a standard startup sequence that is initiated by platform-specific code within each app.
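Before walking through the startup classes, it is worth seeing the parameters convention in isolation. The sketch below fakes what ShowViewModel&lt;T&gt;() does with a parameters object; the construction and Init() call are normally performed by MvvmCross, and the classes here simply mirror the DetailParams example above:

```csharp
using System;

// Parameters class: simple public properties only, as MvvmCross expects.
class DetailParams
{
    public int ParkId { get; set; }
}

class DetailViewModel
{
    public int ParkId { get; private set; }

    // MvvmCross locates and invokes Init() by convention after navigation.
    public void Init(DetailParams parameters)
    {
        ParkId = parameters.ParkId;
    }
}

class Program
{
    static void Main()
    {
        // Roughly what ShowViewModel<DetailViewModel>(new DetailParams { ParkId = 7 })
        // does behind the scenes: construct the destination ViewModel,
        // then hand it the (serialized and rehydrated) parameters.
        var vm = new DetailViewModel();
        vm.Init(new DetailParams { ParkId = 7 });
        Console.WriteLine(vm.ParkId);   // prints 7
    }
}
```

In the real framework the parameters object is serialized into the platform's navigation mechanism (an Android Intent, for example), which is why only simple property types are supported.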
There are several classes that collaborate to accomplish the startup; some of these classes reside in the core project and some reside in the platform-specific projects. The following sections describe the responsibilities of each of the classes involved.

App.cs

The core project has an App class that inherits from MvxApplication. The App class contains an override of the Initialize() method so that, at a minimum, it can register the first ViewModel that should be presented when the app starts:

    RegisterAppStart<ViewModels.MasterViewModel>();

Setup.cs

The Android and iOS projects each have a Setup class that is responsible for creating the App object from the core project during startup. This is accomplished by overriding the CreateApp() method:

    protected override IMvxApplication CreateApp()
    {
        return new Core.App();
    }

For Android apps, Setup inherits from MvxAndroidSetup. For iOS apps, Setup inherits from MvxTouchSetup.

The Android startup

Android apps are kicked off using a special splash screen Activity that calls the Setup class and initiates the MvvmCross startup process. This is all done automatically for you; all you need to do is include the splash screen definition and make sure it is marked as the launch activity.
The definition is as follows:

    [Activity(
        Label = "NationalParks.Droid",
        MainLauncher = true,
        Icon = "@drawable/icon",
        Theme = "@style/Theme.Splash",
        NoHistory = true,
        ScreenOrientation = ScreenOrientation.Portrait)]
    public class SplashScreen : MvxSplashScreenActivity
    {
        public SplashScreen()
            : base(Resource.Layout.SplashScreen)
        {
        }
    }

The iOS startup

The iOS app startup is slightly less automated and is initiated from within the FinishedLaunching() method of AppDelegate:

    public override bool FinishedLaunching (
        UIApplication app, NSDictionary options)
    {
        _window = new UIWindow (UIScreen.MainScreen.Bounds);

        var setup = new Setup(this, _window);
        setup.Initialize();

        var startup = Mvx.Resolve<IMvxAppStart>();
        startup.Start();

        _window.MakeKeyAndVisible ();
        return true;
    }

Creating NationalParks.MvvmCross

Now that we have basic knowledge of the MvvmCross framework, let's put that knowledge to work and convert the NationalParks app to leverage the capabilities we just learned.

Creating the MvvmCross core project

We will start by creating the core project. This project will contain all the code that will be shared between the iOS and Android apps, primarily in the form of ViewModels. The core project will be built as a Portable Class Library. To create NationalParks.Core, perform the following steps:

1. From the main menu, navigate to File | New Solution.
2. From the New Solution dialog box, navigate to C# | Portable Library, enter NationalParks.Core for the project Name field, enter NationalParks.MvvmCross for the Solution field, and click on OK.
3. Add the MvvmCross starter package to the project from NuGet. Select the NationalParks.Core project and navigate to Project | Add Packages from the main menu. Enter MvvmCross starter in the search field. Select the MvvmCross - Hot Tuna Starter Pack entry and click on Add Package.
A number of things were added to NationalParks.Core as a result of adding the package:

- A packages.config file, which contains a list of the libraries (DLLs) associated with the MvvmCross starter kit package. These entries are links to the actual libraries in the Packages folder of the overall solution.
- A ViewModels folder with a sample ViewModel named FirstViewModel.
- An App class in App.cs, which contains an Initialize() method that starts the MvvmCross app by calling RegisterAppStart() to start FirstViewModel. We will eventually change this to start the MasterViewModel class, which will be associated with a View that lists national parks.

Creating the MvvmCross Android app

The next step is to create an Android app project in the same solution. To create NationalParks.Droid, complete the following steps:

1. Select the NationalParks.MvvmCross solution, right-click on it, and navigate to Add | New Project.
2. From the New Project dialog box, navigate to C# | Android | Android Application, enter NationalParks.Droid for the Name field, and click on OK.
3. Add the MvvmCross starter kit package to the new project by selecting NationalParks.Droid and navigating to Project | Add Packages from the main menu.

A number of things were added to NationalParks.Droid as a result of adding the package:

- packages.config: This file contains a list of the libraries (DLLs) associated with the MvvmCross starter kit package. These entries are links to the actual downloaded libraries in the Packages folder of the overall solution.
- FirstView: This class is present in the Views folder and corresponds to the FirstViewModel created in NationalParks.Core.
- FirstView: This layout is present in Resources/layout and is used by the FirstView activity. It is a traditional Android layout file, with the exception that it contains binding declarations in the EditText and TextView elements.
- Setup: This class inherits from MvxAndroidSetup. It is responsible for creating an instance of the App class from the core project, which in turn displays the first ViewModel via a call to RegisterAppStart().
- SplashScreen: This class inherits from MvxSplashScreenActivity. The SplashScreen class is marked as the main launcher activity and thus initializes the MvvmCross app with a call to Setup.Initialize().

4. Add a reference to NationalParks.Core by selecting the References folder, right-clicking on it, selecting Edit References, selecting the Projects tab, checking NationalParks.Core, and clicking on OK.
5. Remove MainActivity.cs, as it is no longer needed and will create a build error: it is marked as the main launcher, as is the new SplashScreen class. Also remove the corresponding Resources/layout/main.axml layout file.
6. Run the app. The app will present FirstView, which is linked to the corresponding FirstViewModel; an EditText control and a TextView present the same Hello MvvmCross text. As you edit the text in the EditText control, the TextView is automatically updated by means of data binding. The following screenshot depicts what you should see.

Reusing NationalParks.PortableData and NationalParks.IO

Before we start creating the Views and ViewModels for our app, we first need to bring in some code from our previous efforts that can be used to maintain parks. For this, we will simply reuse the NationalParksData singleton and the FileHandler classes that were created previously. To reuse the NationalParksData singleton and FileHandler classes, complete the following steps:

1. Copy NationalParks.PortableData and NationalParks.IO to the NationalParks.MvvmCross solution folder.
2. Add a reference to NationalParks.PortableData in the NationalParks.Droid project.
3. Create a folder named NationalParks.IO in the NationalParks.Droid project and add a link to FileHandler.cs from the NationalParks.IO project.
Recall that the FileHandler class cannot be contained in the Portable Class Library because it uses file I/O APIs that cannot be referenced from a Portable Class Library.

4. Compile the project. The project should compile cleanly now.

Implementing the INotifyPropertyChanged interface

We will be using data binding to bind UI controls to the NationalPark object, and thus we need to implement the INotifyPropertyChanged interface. This ensures that changes made to the properties of a park are reported to the appropriate UI controls. To implement INotifyPropertyChanged, complete the following steps:

1. Open NationalPark.cs in the NationalParks.PortableData project.
2. Specify that the NationalPark class implements the INotifyPropertyChanged interface. Select the INotifyPropertyChanged interface, right-click on it, navigate to Refactor | Implement interface, and press Enter. Enter the following code snippet:

    public class NationalPark : INotifyPropertyChanged
    {
        public event PropertyChangedEventHandler
            PropertyChanged;
        . . .
    }

3. Add an OnPropertyChanged() method that can be called from each property setter:

    void OnPropertyChanged(
        [CallerMemberName] string propertyName = null)
    {
        var handler = PropertyChanged;
        if (handler != null)
        {
            handler(this,
                new PropertyChangedEventArgs(propertyName));
        }
    }

4. Update each property definition to call OnPropertyChanged() from its setter, in the same way as depicted here for the Name property:

    string _name;
    public string Name
    {
        get { return _name; }
        set
        {
            if (value.Equals (_name, StringComparison.Ordinal))
            {
                // Nothing to do - the value hasn't changed
                return;
            }
            _name = value;
            OnPropertyChanged();
        }
    }

5. Compile the project. The project should compile cleanly, and we are now ready to use the NationalParksData singleton, with data binding support, in our new project.

Implementing the Android user interface

Now, we are ready to create the Views and ViewModels required for our app.
The app we are creating will follow this flow:

- A master list view to view national parks
- A detail view to view the details of a specific park
- An edit view to edit a new or previously existing park

The process for creating Views and ViewModels in an Android app generally consists of three steps:

1. Create a ViewModel in the core project with the data and event handlers (commands) required to support the View.
2. Create an Android layout with the visual elements and data binding specifications.
3. Create an Android activity that corresponds to the ViewModel and displays the layout.

In our case, this process will be slightly different because we will reuse some of our previous work, specifically the layout files and the menu definitions. To reuse the layout files and menu definitions, perform the following steps:

1. Copy Master.axml, Detail.axml, and Edit.axml from the Resources/layout folder of the solution to the Resources/layout folder in the NationalParks.Droid project, and add them to the project by selecting the layout folder and navigating to Add | Add Files.
2. Copy MasterMenu.xml, DetailMenu.xml, and EditMenu.xml from the Resources/menu folder of the solution to the Resources/menu folder in the NationalParks.Droid project, and add them to the project by selecting the menu folder and navigating to Add | Add Files.

Implementing the master list view

We are now ready to implement the first of our View/ViewModel combinations, which is the master list view.

Creating MasterViewModel

The first step is to create a ViewModel and add a property that will provide data to the list view that displays national parks, along with some initialization code. To create MasterViewModel, complete the following steps:

1. Select the ViewModels folder in NationalParks.Core, right-click on it, and navigate to Add | New File.
2. In the New File dialog box, navigate to General | Empty Class, enter MasterViewModel for the Name field, and click on New.
3. Modify the class definition so that MasterViewModel inherits from MvxViewModel; you will also need to add a few using directives:

    . . .
    using Cirrious.CrossCore.Platform;
    using Cirrious.MvvmCross.ViewModels;
    . . .
    namespace NationalParks.Core.ViewModels
    {
        public class MasterViewModel : MvxViewModel
        {
            . . .
        }
    }

4. Add a property that is a list of NationalPark elements to MasterViewModel. This property will later be data-bound to a list view:

    private List<NationalPark> _parks;
    public List<NationalPark> Parks
    {
        get { return _parks; }
        set
        {
            _parks = value;
            RaisePropertyChanged(() => Parks);
        }
    }

5. Override the Start() method on MasterViewModel to load the _parks collection with data from the NationalParksData singleton. You will need to add a using directive for the NationalParks.PortableData namespace again:

    . . .
    using NationalParks.PortableData;
    . . .
    public async override void Start ()
    {
        base.Start ();
        await NationalParksData.Instance.Load ();
        Parks = new List<NationalPark> (
            NationalParksData.Instance.Parks);
    }

6. We now need to modify the app startup sequence so that MasterViewModel is the first ViewModel started. Open App.cs in NationalParks.Core and change the call to RegisterAppStart() to reference MasterViewModel:

    RegisterAppStart<ViewModels.MasterViewModel>();

Updating the Master.axml layout

Update Master.axml so that it can leverage the data binding capabilities provided by MvvmCross. To update Master.axml, complete the following steps:

1. Open Master.axml and add a namespace definition for the NationalParks.Droid namespace to the top of the XML. This namespace definition is required in order to allow Android to resolve the MvvmCross-specific elements that will be specified.
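The namespace snippet itself does not appear in the text. For a 2014-era MvvmCross project it would typically look like the following on the layout's root element; the exact form is an assumption based on MvvmCross conventions of the time, and newer projects use xmlns:local="http://schemas.android.com/apk/res-auto" instead:

```xml
<!-- Root element of Master.axml; the xmlns:local line is the namespace
     declaration the step above refers to (assumed form). -->
<LinearLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:local="http://schemas.android.com/apk/res/NationalParks.Droid"
    android:layout_width="match_parent"
    android:layout_height="match_parent">
    <!-- existing child views go here -->
</LinearLayout>
```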
2. Change the ListView element to an Mvx.MvxListView element:

    <Mvx.MvxListView
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:id="@+id/parksListView" />

3. Add a data binding specification to the MvxListView element, binding the ItemsSource property of the list view to the Parks property of MasterViewModel:

    . . .
    android:id="@+id/parksListView"
    local:MvxBind="ItemsSource Parks" />

4. Add a list item template attribute to the element definition. This layout controls the content of each item that will be displayed in the list view:

    local:MvxItemTemplate="@layout/nationalparkitem"

5. Create the NationalParkItem layout and provide TextView elements to display both the name and the description of a park:

    <LinearLayout
        android:orientation="vertical"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content">
        <TextView
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:textSize="40sp"/>
        <TextView
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:textSize="20sp"/>
    </LinearLayout>

6. Add data binding specifications to each of the TextView elements:

    . . .
        local:MvxBind="Text Name" />
    . . .
        local:MvxBind="Text Description" />
    . . .

Note that in this case, the context for data binding is an instance of an item in the collection that was bound to MvxListView; in our example, an instance of NationalPark.

Creating the MasterView activity

Next, create MasterView, which is an MvxActivity instance that corresponds to MasterViewModel. To create MasterView, complete the following steps:

1. Select the Views folder in NationalParks.Droid, right-click on it, and navigate to Add | New File.
2. In the New File dialog, navigate to Android | Activity, enter MasterView in the Name field, and select New.
3. Modify the class specification so that it inherits from MvxActivity; you will also need to add a few using directives:

    using Cirrious.MvvmCross.Droid.Views;
    using NationalParks.Core.ViewModels;
    . . .
    namespace NationalParks.Droid.Views
    {
        [Activity(Label = "Parks")]
        public class MasterView : MvxActivity
        {
            . . .
        }
    }

4. Open Setup.cs and add code to the CreateApp() method to initialize the file handler and path for the NationalParksData singleton:

    protected override IMvxApplication CreateApp()
    {
        NationalParksData.Instance.FileHandler =
            new FileHandler ();
        NationalParksData.Instance.DataDir =
            System.Environment.GetFolderPath(
                System.Environment.SpecialFolder.MyDocuments);
        return new Core.App();
    }

5. Compile and run the app; you will need to copy the NationalParks.json file to the device or emulator using the Android Device Monitor. All the parks in NationalParks.json should be displayed.

Implementing the detail view

Now that we have the master list view displaying national parks, we can focus on creating the detail view. We will follow the same steps for the detail view as the ones we just completed for the master view.

Creating DetailViewModel

We start creating DetailViewModel by using the following steps:

1. Following the same procedure as the one used to create MasterViewModel, create a new ViewModel named DetailViewModel in the ViewModels folder of NationalParks.Core.
2. Add a NationalPark property to support data binding for the view controls:

    protected NationalPark _park;
    public NationalPark Park
    {
        get { return _park; }
        set
        {
            _park = value;
            RaisePropertyChanged(() => Park);
        }
    }

3. Create a Parameters class that can be used to pass the ID of the park that should be displayed.
It's convenient to create this class within the class definition of the ViewModel that the parameters are for:

    public class DetailViewModel : MvxViewModel
    {
        public class Parameters
        {
            public string ParkId { get; set; }
        }
        . . .

4. Implement an Init() method that will accept an instance of the Parameters class and get the corresponding national park from NationalParksData:

    public void Init(Parameters parameters)
    {
        Park = NationalParksData.Instance.Parks.
            FirstOrDefault(x => x.Id == parameters.ParkId);
    }

Updating the Detail.axml layout

Next, we will update the layout file. The main change that needs to be made is adding data binding specifications to the layout file. To update the Detail.axml layout, perform the following steps:

1. Open Detail.axml and add the project namespace to the XML file.
2. Add data binding specifications to each of the TextView elements that correspond to a national park property, as demonstrated here for the park name:

    <TextView
        android:layout_width="match_parent"
        android:layout_height="wrap_content"
        android:id="@+id/nameTextView"
        local:MvxBind="Text Park.Name" />

Creating the DetailView activity

Now, create the MvxActivity instance that will work with DetailViewModel. To create DetailView, perform the following steps:

1. Following the same procedure as the one used to create MasterView, create a new View named DetailView in the Views folder of NationalParks.Droid.
2. Implement the OnCreateOptionsMenu() and OnOptionsItemSelected() methods so that our menus will be accessible. Copy the implementation of these methods from the solution. Comment out the section in OnOptionsItemSelected() related to the Edit action for now; we will fill that in once the edit view is completed.

Adding navigation

The last step is to add navigation so that when an item is clicked in the MvxListView on MasterView, the park is displayed in the detail view. We will accomplish this using a command property and data binding.
To add navigation, perform the following steps:

1. Open MasterViewModel and add an IMvxCommand property; this will be used to handle the selection of a park. The property must be public so that it can be reached by data binding:

    public IMvxCommand ParkClicked { get; protected set; }

2. Create an Action delegate that will be called when the ParkClicked command is executed:

    protected void ParkSelectedExec(NationalPark park)
    {
        ShowViewModel<DetailViewModel> (
            new DetailViewModel.Parameters ()
                { ParkId = park.Id });
    }

3. Initialize the command property in the constructor of MasterViewModel:

    ParkClicked =
        new MvxCommand<NationalPark> (ParkSelectedExec);

4. Now, for the last step, add a data binding specification to MvxListView in Master.axml to bind the ItemClick event to the ParkClicked command on MasterViewModel, which we just created:

    local:MvxBind="ItemsSource Parks; ItemClick ParkClicked"

5. Compile and run the app. Clicking on a park in the list view should now navigate to the detail view, displaying the selected park.

Implementing the edit view

We are now almost experts at implementing new Views and ViewModels. One last View to go: the edit view.

Creating EditViewModel

As we did previously, we start with the ViewModel. To create EditViewModel, complete the following steps:

1. Following the same process that was used previously in this article, create EditViewModel, add a data binding property, and create a Parameters class for navigation.
2. Implement an Init() method that will accept an instance of the Parameters class and either get the corresponding national park from NationalParksData (when editing an existing park) or create a new instance (when the user has chosen the New action). Inspect the parameters passed in to determine the intent:

    public void Init(Parameters parameters)
    {
        if (string.IsNullOrEmpty (parameters.ParkId))
            Park = new NationalPark ();
        else
            Park = NationalParksData.Instance.
                Parks.FirstOrDefault(
                    x => x.Id == parameters.ParkId);
    }

Updating the Edit.axml layout

Update Edit.axml to provide data binding specifications. To update the Edit.axml layout, first open Edit.axml and add the project namespace to the XML file. Then, add data binding specifications to each of the EditText elements that correspond to a national park property.

Creating the EditView activity

Create a new MvxActivity instance named EditView that will work with EditViewModel. To create EditView, perform the following steps:

1. Following the same procedure as the one used to create DetailView, create a new View named EditView in the Views folder of NationalParks.Droid.
2. Implement the OnCreateOptionsMenu() and OnOptionsItemSelected() methods so that the Done action will be accessible from the ActionBar. You can copy the implementation of these methods from the solution. Change the implementation of Done to call the Done command on EditViewModel.

Adding navigation

Add navigation in two places: when New (+) is clicked in MasterView and when Edit is clicked in DetailView. Let's start with MasterView. To add navigation from MasterViewModel, complete the following steps:

1. Open MasterViewModel.cs and add a NewParkClicked command property along with the handler for the command. Be sure to initialize the command in the constructor:

    public IMvxCommand NewParkClicked { get; protected set; }

    protected void NewParkClickedExec()
    {
        ShowViewModel<EditViewModel> ();
    }

Note that we do not pass a parameters class into ShowViewModel(). This will cause a default instance to be created and passed in, which means that ParkId will be null. We will use this as a way to determine whether a new park should be created.

2. Now, it's time to hook the NewParkClicked command up to the actionNew menu item. We do not have a way to accomplish this using data binding, so we will resort to a more traditional approach: the OnOptionsItemSelected() method.
Add logic to invoke the Execute() method on NewParkClicked, as follows: case Resource.Id.actionNew: ((MasterViewModel)ViewModel).NewParkClicked.Execute(); return true; To add navigation from DetailViewModel, complete the following steps: Open DetailViewModel.cs and add an EditPark command property along with the handler for the command, as shown in the following code snippet: public IMvxCommand EditPark { get; protected set; } protected void EditParkHandler() { ShowViewModel<EditViewModel>(new EditViewModel.Parameters() { ParkId = _park.Id }); } Note that an instance of the Parameters class is created, initialized, and passed into the ShowViewModel() method. This instance will in turn be passed into the Init() method on EditViewModel. Initialize the command property in the constructor of DetailViewModel, as follows: EditPark = new MvxCommand(EditParkHandler); Now, update the OnOptionsItemSelected() method in DetailView to invoke the DetailViewModel.EditPark command when the Edit action is selected: case Resource.Id.actionEdit: ((DetailViewModel)ViewModel).EditPark.Execute(); return true; Compile and run NationalParks.Droid. You should now have a fully functional app that has the ability to create new parks and edit the existing parks. Changes made in EditView should automatically be reflected in MasterView and DetailView. Creating the MvvmCross iOS app The process of creating the Android app with MvvmCross provides a solid understanding of how the overall architecture works. Creating the iOS solution should be much easier for two reasons: first, we understand how to interact with MvvmCross, and second, all the logic we have placed in NationalParks.Core is reusable, so we just need to create the View portion of the app and the startup code.
To create NationalParks.iOS, complete the following steps: Select the NationalParks.MvvmCross solution, right-click on it, and navigate to Add | New Project. From the New Project dialog, navigate to C# | iOS | iPhone | Single View Application, enter NationalParks.iOS in the Name field, and click on OK. Add the MvvmCross starter kit package to the new project by selecting NationalParks.iOS and navigating to Project | Add Packages from the main menu. A number of things were added to NationalParks.iOS as a result of adding the package. They are as follows: packages.config: This file contains a list of libraries associated with the MvvmCross starter kit package. These entries are links to the actual libraries in the Packages folder of the overall solution, which contains the downloaded libraries. FirstView: This class is placed in the Views folder and corresponds to the FirstViewModel instance created in NationalParks.Core. Setup: This class inherits from MvxTouchSetup. It is responsible for creating an instance of the App class from the core project, which in turn displays the first ViewModel via a call to RegisterAppStart(). AppDelegate.cs.txt: This file contains sample startup code, which should be placed in the actual AppDelegate.cs file. Implementing the iOS user interface We are now ready to create the user interface for the iOS app. The good news is that we already have all the ViewModels implemented, so we can simply reuse them. The bad news is that we cannot easily reuse the storyboards from our previous work; MvvmCross apps generally use XIB files. One of the reasons for this is that storyboards are intended to provide navigation capabilities, and an MvvmCross app delegates that responsibility to the ViewModels and the presenter. It is possible to use storyboards in combination with a custom presenter, but the remainder of this article will focus on using XIB files, as this is the more common approach.
The screen layouts are depicted in the following screenshot: We are now ready to get started. Implementing the master view The first view we will work on is the master view. To implement the master view, complete the following steps: Create a new ViewController class named MasterView by right-clicking on the Views folder of NationalParks.iOS and navigating to Add | New File | iOS | iPhone View Controller. Open MasterView.xib and arrange the controls as shown in the screen layouts. Add outlets for each of the controls. Open MasterView.cs and add the following boilerplate logic to deal with layout constraints on iOS 7, as follows: // ios7 layout if (RespondsToSelector(new Selector("edgesForExtendedLayout"))) EdgesForExtendedLayout = UIRectEdge.None; Within the ViewDidLoad() method, add logic to create MvxStandardTableViewSource for parksTableView: MvxStandardTableViewSource _source; . . . _source = new MvxStandardTableViewSource(parksTableView, UITableViewCellStyle.Subtitle, new NSString("cell"), "TitleText Name; DetailText Description", 0); parksTableView.Source = _source; Note that the example uses the Subtitle cell style and binds the national park name and description to the title and subtitle. Add the binding logic at the end of the ViewDidLoad() method. In the previous step, we provided specifications for properties of UITableViewCell to properties in the binding context. In this step, we need to set the binding context for the Parks property on MasterViewModel: var set = this.CreateBindingSet<MasterView, MasterViewModel>(); set.Bind(_source).To(vm => vm.Parks); set.Apply(); Compile and run the app. All the parks in NationalParks.json should be displayed. Implementing the detail view Now, implement the detail view using the following steps: Create a new ViewController instance named DetailView. Open DetailView.xib and arrange the controls as shown in the screen layouts. Add outlets for each of the controls.
Open DetailView.cs and add the binding logic to the ViewDidLoad() method: this.CreateBinding(this.nameLabel).To((DetailViewModel vm) => vm.Park.Name).Apply(); this.CreateBinding(this.descriptionLabel).To((DetailViewModel vm) => vm.Park.Description).Apply(); this.CreateBinding(this.stateLabel).To((DetailViewModel vm) => vm.Park.State).Apply(); this.CreateBinding(this.countryLabel).To((DetailViewModel vm) => vm.Park.Country).Apply(); this.CreateBinding(this.latLabel).To((DetailViewModel vm) => vm.Park.Latitude).Apply(); this.CreateBinding(this.lonLabel).To((DetailViewModel vm) => vm.Park.Longitude).Apply(); Adding navigation Add navigation from the master view so that when a park is selected, the detail view is displayed, showing the park. To add navigation, complete the following steps: Open MasterView.cs, create an event handler named ParkSelected, and assign it to the SelectedItemChanged event on the MvxStandardTableViewSource instance that was created in the ViewDidLoad() method: . . . _source.SelectedItemChanged += ParkSelected; . . . protected void ParkSelected(object sender, EventArgs e) { . . . } Within the event handler, invoke the ParkSelected command on MasterViewModel, passing in the selected park: ((MasterViewModel)ViewModel).ParkSelected.Execute((NationalPark)_source.SelectedItem); Compile and run NationalParks.iOS. Selecting a park in the list view should now navigate to the detail view, displaying the selected park. Implementing the edit view We now need to implement the last of the Views for the iOS app, which is the edit view. To implement the edit view, complete the following steps: Create a new ViewController instance named EditView. Open EditView.xib and arrange the controls as in the layout screenshots. Add outlets for each of the edit controls. Open EditView.cs and add the data binding logic to the ViewDidLoad() method.
You should use the same approach to data binding as the one used for the detail view. Add an event handler named DoneClicked, and within the event handler, invoke the Done command on EditViewModel: protected void DoneClicked(object sender, EventArgs e) { ((EditViewModel)ViewModel).Done.Execute(); } In ViewDidLoad(), add a UIBarButtonItem to NavigationItem for EditView, and assign the DoneClicked event handler to it, as follows: NavigationItem.SetRightBarButtonItem(new UIBarButtonItem(UIBarButtonSystemItem.Done, DoneClicked), true); Adding navigation Add navigation in two places: when New (+) is clicked in the master view and when Edit is clicked in the detail view. Let's start with the master view. To add navigation to the master view, perform the following steps: Open MasterView.cs and add an event handler named NewParkClicked. In the event handler, invoke the NewParkClicked command on MasterViewModel: protected void NewParkClicked(object sender, EventArgs e) { ((MasterViewModel)ViewModel).NewParkClicked.Execute(); } In ViewDidLoad(), add a UIBarButtonItem to NavigationItem for MasterView and assign the NewParkClicked event handler to it: NavigationItem.SetRightBarButtonItem(new UIBarButtonItem(UIBarButtonSystemItem.Add, NewParkClicked), true); To add navigation to the detail view, perform the following steps: Open DetailView.cs and add an event handler named EditParkClicked.
In the event handler, invoke the EditPark command on DetailViewModel: protected void EditParkClicked(object sender, EventArgs e) { ((DetailViewModel)ViewModel).EditPark.Execute(); } In ViewDidLoad(), add a UIBarButtonItem to NavigationItem for DetailView, and assign the EditParkClicked event handler to it: NavigationItem.SetRightBarButtonItem(new UIBarButtonItem(UIBarButtonSystemItem.Edit, EditParkClicked), true); Refreshing the master view list One last detail that needs to be taken care of is refreshing the UITableView control on MasterView when items have been changed in EditView. To refresh the master view list, perform the following steps: Open MasterView.cs and call ReloadData() on parksTableView within the ViewDidAppear() method of MasterView: public override void ViewDidAppear(bool animated) { base.ViewDidAppear(animated); parksTableView.ReloadData(); } Compile and run NationalParks.iOS. You should now have a fully functional app that has the ability to create new parks and edit existing parks. Changes made in EditView should automatically be reflected in MasterView and DetailView. Considering the pros and cons After completing our work, we now have the basis to make some fundamental observations. Let's start with the pros: MvvmCross definitely increases the amount of code that can be reused across platforms. The ViewModels house the data required by the View, the logic required to obtain and transform the data in preparation for viewing, and the logic triggered by user interactions in the form of commands. In our sample app, the ViewModels were somewhat simple; however, the more complex the app, the more reuse will likely be gained. As MvvmCross relies on the use of each platform's native UI frameworks, each app has a native look and feel, and we have a natural layer in which to implement platform-specific logic when required.
The data binding capabilities of MvvmCross also eliminate a great deal of tedious code that would otherwise have to be written. All of these positives are not necessarily free; let's look at some cons: The first con is complexity; you have to learn another framework on top of Xamarin, Android, and iOS. In some ways, MvvmCross forces you to align the way your apps work across platforms to achieve the most reuse. As the presentation logic is contained in the ViewModels, the Views are coerced into aligning with them. The more your UI deviates across platforms, the less likely it is that you can actually reuse ViewModels. With these things in mind, I would definitely consider using MvvmCross for a cross-platform mobile project. Yes, you need to learn an additional framework, and yes, you will likely have to align the way some of the apps are laid out, but I think MvvmCross provides enough value and flexibility to make these issues workable. I'm a big fan of reuse, and MvvmCross definitely pushes reuse to the next level. Summary In this article, we reviewed the high-level concepts of MvvmCross and worked through a practical exercise in order to convert the national parks apps to use the MvvmCross framework and increase code reuse. In the next article, we will follow a similar approach to exploring the Xamarin.Forms framework in order to evaluate how its use can affect code reuse. Resources for Article: Further resources on this subject: XamChat – a Cross-platform App [Article] Configuring Your Operating System [Article] Updating data in the background [Article]
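The data binding discussed above rests on one core mechanism: a ViewModel that raises property-changed notifications, and a binding that listens for them (INotifyPropertyChanged in .NET terms). The sketch below illustrates that mechanism in plain JavaScript (chosen for consistency with the Cordova examples later in this roundup); the `ViewModel` and `bind` names are illustrative, not MvvmCross APIs.

```javascript
// Minimal observable-ViewModel sketch: a property-changed event plus a
// one-way binding helper -- the core idea behind MVVM data binding.
function ViewModel() {
  this._handlers = [];
  this._values = {};
}
ViewModel.prototype.onPropertyChanged = function (handler) {
  this._handlers.push(handler);
};
ViewModel.prototype.set = function (name, value) {
  this._values[name] = value;
  // Notify every subscriber that a property changed.
  this._handlers.forEach(function (h) { h(name, value); });
};
ViewModel.prototype.get = function (name) {
  return this._values[name];
};

// bind() keeps a target object's field in sync with one ViewModel property,
// which is what declarations like "ItemsSource Parks" expand to.
function bind(viewModel, property, target, field) {
  viewModel.onPropertyChanged(function (name, value) {
    if (name === property) target[field] = value;
  });
}
```

A binding such as `bind(vm, "Name", nameLabel, "text")` then updates the label whenever the ViewModel's `Name` changes, which is exactly the hand-written glue code that a binding framework spares you from repeating per control.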

Packt
31 Oct 2014
7 min read

Getting Ready to Launch Your PhoneGap App in the Real World

In this article by Yuxian, Eugene Liang, author of PhoneGap and AngularJS for Cross-platform Development, we will run through some of the things that you should do before launching your app to the world, whether through the Apple App Store or the Google Play Store. (For more resources related to this topic, see here.) Using phonegap.com The services on https://build.phonegap.com/ are a straightforward way for you to get your app compiled for various devices. While this is a paid service, there is a free plan if you only have one app that you want to work on. This would be fine in our case. Choose a plan from PhoneGap You will need to have an Adobe ID in order to use PhoneGap services. If you don't have one, feel free to create one. Since the process for generating compiled apps from PhoneGap may change, it's best that you visit https://build.phonegap.com/, sign up for their services, and follow their instructions. Preparing your PhoneGap app for an Android release This section focuses on things that are specific to the Android platform. This is by no means a comprehensive checklist, but it covers some of the common tasks that you should go through before releasing your app to the Android world. Testing your app on real devices It is always good to run your app on an actual handset to see how the app is working. To run your PhoneGap app on a real device, issue the following command after you plug your handset into your computer: cordova run android You will see that your app now runs on your handset. Exporting your app to install on other devices In the previous section, we talked about installing your app on your device. What if you want to export the APK so that you can test the app on other devices? Here's what you can do: As usual, build your app using cordova build android Alternatively, you can run cordova build android --release The previous step will create an unsigned release APK at /path_to_your_project/platforms/android/ant-build.
This app is called YourAppName-release-unsigned.apk. Now, you can simply copy YourAppName-release-unsigned.apk and install it on any Android-based device you want. Preparing promotional artwork for release In general, you will need to include screenshots of your app for upload to Google Play. In case your device does not allow you to take screenshots, here's what you can do: The first technique is to simply run your app in the emulator and take screenshots of it. The screenshots may be substantially larger than needed, so you can crop them using GIMP or some other image resizer. Alternatively, use the web app version and open it in your Google Chrome browser. Resize your browser window so that it is narrow enough to resemble the width of mobile devices. Building your app for release To build your app for release, you will need the Eclipse IDE. Start your Eclipse IDE and navigate to File | New | Project. Next, navigate to Existing Code | Android | Android Project. Click on Browse and select the root directory of your app. The Project to Import window should show platforms/android. Now, select Copy projects into workspace if you want and then click on Finish. Signing the app We previously exported the app (unsigned) so that we could test it on devices other than those plugged into our computer. However, to release your app to the Play Store, you need to sign it with keys. The steps here are the general steps that you need to follow in order to generate a "signed" APK to upload your app to the Play Store. Right-click on the project that you have imported in the previous section, and then navigate to Android Tools | Export Signed Application Package. You will see the Project Checks dialog. In the Project Checks dialog, you will see whether your project has any errors. Next, you should see the Keystore selection dialog. You will now create the key using the app name (without spaces) and the extension .keystore.
Since this app is the first version, there is no existing keystore to reuse. Now, you can browse to the location in which to save the keystore and, in the same dialog, give the name of the keystore. In the Keystore selection dialog, add your desired password twice and click on Next. You will now see the Key Creation dialog. In the Key Creation dialog, use app_name as your alias (without any spaces) and give the password of your keystore. Feel free to enter 50 for validity (which means the key is valid for 50 years). The remaining fields, such as names, organization, and so on, are pretty straightforward, so you can just go ahead and fill them in. Finally, select the Destination APK file, which is the location to which you will export your .apk file. Bear in mind that the preceding steps are not a comprehensive list of instructions. For the official documentation, feel free to visit http://developer.android.com/tools/publishing/app-signing.html. Now that we are done with Android, it's time to prepare our app for iOS. iOS As you might already know, preparing your PhoneGap app for the Apple App Store requires a similar level of effort, if not more, compared to your usual Android deployment. In this section, I will not be covering things like making sure your app conforms to Apple's user interface guidelines, but rather how to improve your app before it reaches the App Store. Before we get started, there are some basic requirements: Apple Developer Membership (if you ultimately want to deploy to the App Store) Xcode Running your app on an iOS device If you already have an iOS device, all you need to do is plug your iOS device into your computer and issue the following command: cordova run ios You should see that your PhoneGap app will build and launch on your device. Note that before running the preceding command, you will need to install the ios-deploy package.
You can install it using the following command: sudo npm install -g ios-deploy Other techniques There are other ways to test and deploy your apps. These methods can be useful if you want to deploy your app to your own devices or even for external device testing. Using Xcode Now let's get started with Xcode: After starting your project using the command-line tool and adding iOS platform support, you can actually start developing using Xcode. You can start Xcode and click on Open Other, as shown in the following screenshot: Once you have clicked on Open Other, you will need to browse to your ToDo app folder. Drill down until you see ToDo.xcodeproj (navigate to platforms | ios). Select and open this file. You will see Xcode importing the files. After it's all done, you should see something like the following screenshot: Files imported into Xcode Notice that all the files are now imported into your Xcode project, and you can start working from here. You can also deploy your app either to devices or to simulators:
Resources for Article: Further resources on this subject: Using Location Data with PhoneGap [Article] Working with the sharing plugin [Article] Geolocation – using PhoneGap features to improve an app's functionality, write once use everywhere [Article]

Packt
17 Oct 2014
20 min read

Cordova Plugins

In this article by Hazem Saleh, author of JavaScript Mobile Application Development, we will continue to dive deep into Apache Cordova. You will learn how to create your own custom Cordova plugin on the three most popular mobile platforms: Android (using the Java programming language), iOS (using the Objective-C programming language), and Windows Phone 8 (using the C# programming language). (For more resources related to this topic, see here.) Developing a custom Cordova plugin Before going into the details of the plugin, it is important to note that developing custom Cordova plugins is not a common scenario if you are developing Apache Cordova apps. This is because the Apache Cordova core and community custom plugins already cover many of the use cases that need access to a device's native functions. So, make sure of two things: You are not developing a custom plugin that already exists in the Apache Cordova core plugins. You are not developing a custom plugin whose functionality already exists in other good Apache Cordova custom plugin(s) developed by the Apache Cordova community. Building plugins from scratch can consume precious time in your project; you can save that time by reusing one of the good custom plugins already available. Another thing to note is that developing custom Cordova plugins is an advanced topic. It requires you to be aware of the native programming languages of the mobile platforms, so make sure you have an overview of Java, Objective-C, and C# (or at least one of them) before reading this section. This will be helpful in understanding all the plugin development steps (plugin structuring, JavaScript interface definition, and native plugin implementation). Now, let's start developing our custom Cordova plugin. It can be used in order to send SMS messages from any of the three popular mobile platforms (Android, iOS, and Windows Phone 8). Before we start creating our plugin, we need to define its API.
The following code listing shows you how to call the sms.sendMessage method of our plugin, which will be used in order to send an SMS across platforms: var messageInfo = { phoneNumber: "xxxxxxxxxx", textMessage: "This is a test message" }; sms.sendMessage(messageInfo, function(message) { console.log("success: " + message); }, function(error) { console.log("code: " + error.code + ", message: " + error.message); }); The sms.sendMessage method has the following parameters: messageInfo: This is a JSON object that contains two main attributes: phoneNumber, which represents the phone number that will receive the SMS message, and textMessage, which represents the text message to be sent. successCallback: This is a callback that will be called if the message is sent successfully. errorCallback: This is a callback that will be called if the message is not sent successfully. This callback receives an error object as a parameter. The error object has code (the error code) and message (the error message) attributes. Using plugman In addition to the Apache Cordova CLI utility, you can use the plugman utility in order to add or remove plugin(s) to/from your Apache Cordova projects. However, it's worth mentioning that plugman is a lower-level tool that you can use if your Apache Cordova application follows platform-centered workflow and not cross-platform workflow. If your application follows cross-platform workflow, then the Apache Cordova CLI should be your choice. If you want your application to run on different mobile platforms (which is a common use case if you want to use Apache Cordova), it's recommended that you follow cross-platform workflow. Use platform-centered workflow if you want to develop your Apache Cordova application on a single platform and modify your application using the platform-specific SDK.
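To make the contract above concrete, here is a small client-side sketch (not part of the plugin itself) that validates and normalizes a messageInfo object before handing it to sms.sendMessage. The buildMessageInfo helper name is illustrative; the error code and the default message text mirror the ones used later in the plugin's sms.js interface.

```javascript
// Hypothetical helper (not part of the plugin): builds a messageInfo
// object matching the contract expected by sms.sendMessage, failing
// fast when the phone number is missing.
function buildMessageInfo(phoneNumber, textMessage) {
  if (typeof phoneNumber !== "string" || phoneNumber.trim() === "") {
    // Mirrors the plugin's MISSING_PHONE_NUMBER error code.
    throw new Error("MISSING_PHONE_NUMBER");
  }
  return {
    phoneNumber: phoneNumber.trim(),
    // Same fallback text the plugin's JavaScript interface uses.
    textMessage: textMessage || "Default Text from SMS plugin"
  };
}
```

With such a guard in place, a caller can reject invalid input before the call ever crosses the JavaScript-to-native bridge, for example: sms.sendMessage(buildMessageInfo("5551234", "Hello"), onSuccess, onError).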
Besides adding and removing plugins to/from platform-centered workflow Cordova projects, plugman can also be used: To create basic scaffolding for your custom Cordova plugin To add and remove a platform to/from your custom Cordova plugin To add user(s) to the Cordova plugin registry (a repository that hosts the different Apache Cordova core and custom plugins) To publish your custom Cordova plugin(s) to the Cordova plugin registry To unpublish your custom plugin(s) from the Cordova plugin registry To search for plugin(s) in the Cordova plugin registry In this section, we will use the plugman utility to create the basic scaffolding of our custom SMS plugin. In order to install plugman, you need to make sure that Node.js is installed on your operating system. Then, to install plugman, execute the following command: > npm install -g plugman After installing plugman, we can start generating our initial custom plugin artifacts using the plugman create command as follows: > plugman create --name sms --plugin_id com.jsmobile.plugins.sms --plugin_version 0.0.1 It is important to note the following parameters: --name: This specifies the plugin name (in our case, sms) --plugin_id: This specifies an ID for the plugin (in our case, com.jsmobile.plugins.sms) --plugin_version: This specifies the plugin version (in our case, 0.0.1) The following are two other parameters that the plugman create command can accept: --path: This specifies the directory path of the plugin --variable: This can specify extra variables such as author or description After executing the previous command, we will have the initial artifacts for our custom plugin. As we will be supporting multiple platforms, we can use the plugman platform add command.
The following two commands add the Android and iOS platforms to our custom plugin: > plugman platform add --platform_name android > plugman platform add --platform_name ios In order to run the plugman platform add command, we need to run it from the plugin directory. Unfortunately, plugman cannot add Windows Phone 8 platform support for us, so we will need to add it to our plugin manually later. Now, let's check the initial scaffolding of our custom plugin code. The following screenshot shows the hierarchy of our initial plugin code: Hierarchy of our initial plugin code As shown in the preceding screenshot, there is one file and two parent directories. They are as follows: plugin.xml file: This contains the plugin definition. src directory: This contains the plugin's native implementation code for each platform. For now, it contains two subdirectories: android and ios. The android subdirectory contains Sms.java, which represents the initial implementation of the plugin on Android, and the ios subdirectory contains Sms.m, which represents the initial implementation of the plugin on iOS. www directory: This mainly contains the JavaScript interface of the plugin. It contains sms.js, which represents the initial implementation of the plugin's JavaScript API. We will need to edit these generated files (and maybe refactor and add new implementation files) in order to implement our custom SMS plugin. Plugin definition First of all, we need to define our plugin structure. In order to do so, we need to define our plugin in the plugin.xml file.
The following code listing shows our plugin.xml code: <?xml version='1.0' encoding='utf-8'?> <plugin id="com.jsmobile.plugins.sms" version="0.0.1"> <name>sms</name> <description>A plugin for sending sms messages</description> <license>Apache 2.0</license> <keywords>cordova,plugins,sms</keywords> <js-module name="sms" src="www/sms.js"> <clobbers target="window.sms" /> </js-module> <platform name="android"> <config-file parent="/*" target="res/xml/config.xml"> <feature name="Sms"> <param name="android-package" value="com.jsmobile.plugins.sms.Sms" /> </feature> </config-file> <config-file target="AndroidManifest.xml" parent="/manifest"> <uses-permission android:name="android.permission.SEND_SMS" /> </config-file> <source-file src="src/android/Sms.java" target-dir="src/com/jsmobile/plugins/sms" /> </platform> <platform name="ios"> <config-file parent="/*" target="config.xml"> <feature name="Sms"> <param name="ios-package" value="Sms" /> </feature> </config-file> <source-file src="src/ios/Sms.h" /> <source-file src="src/ios/Sms.m" /> <framework src="MessageUI.framework" weak="true" /> </platform> <platform name="wp8"> <config-file target="config.xml" parent="/*"> <feature name="Sms"> <param name="wp-package" value="Sms" /> </feature> </config-file> <source-file src="src/wp8/Sms.cs" /> </platform> </plugin> The plugin.xml file defines the plugin structure and contains a top-level <plugin> element, whose id and version attributes hold the plugin ID and version that we specified when creating the plugin. The <js-module> element specifies the plugin's JavaScript interface file, and its child <clobbers target="window.sms" /> tag mainly inserts the smsExport JavaScript object that is defined in the www/sms.js file and exported using module.exports (the smsExport object will be illustrated in the Defining the plugin's JavaScript interface section) into the window object as window.sms. This means that our plugin users will be able to access our plugin's API using the window.sms object (this will be shown in detail in the Testing our Cordova plugin section). The <plugin> element can contain one or more <platform> element(s). The <platform> element specifies the platform-specific plugin configuration. It mainly has one attribute, name, that specifies the platform name (android, ios, wp8, bb10, wp7, and so on). The <platform> element can have the following child elements: <source-file>: This element represents the native platform source code that will be installed and executed in the plugin-client project. The <source-file> element has the following two main attributes: src: This attribute represents the location of the source file relative to plugin.xml. target-dir: This attribute represents the target directory (relative to the project root) in which the source file will be placed when the plugin is installed in the client project. This attribute is mainly needed on the Java platform (Android), because a file under the x.y.z package must be placed under the x/y/z directories. For the iOS and Windows platforms, this parameter should be ignored. <config-file>: This element represents the configuration file that will be modified. This is required in many cases; for example, in Android, in order to send an SMS from your Android application, you need to modify the Android configuration file to have the permission to send an SMS from the device. The <config-file> element has two main attributes: target: This attribute represents the file to be modified and its path relative to the project root. parent: This attribute represents an XPath selector that references the parent of the elements to be added to the configuration file. <framework>: This element specifies a platform-specific framework that the plugin depends on.
It mainly has the src attribute to specify the framework name and the weak attribute to indicate whether the specified framework should be weakly linked. With this explanation of the <platform> element in place, getting back to our plugin.xml file, you will notice that we have the following three <platform> elements: Android (<platform name="android">) performs the following operations: It creates a <feature> element for our SMS plugin under the root element of the res/xml/config.xml file to register our plugin in an Android project. In Android, the <feature> element's name attribute represents the service name, and its "android-package" parameter represents the fully qualified name of the Java plugin class: <feature name="Sms"> <param name="android-package" value="com.jsmobile.plugins.sms.Sms" /> </feature> It modifies the AndroidManifest.xml file to add the <uses-permission android:name="android.permission.SEND_SMS" /> element (to have the permission to send an SMS on the Android platform) under the <manifest> element. Finally, it specifies the plugin's implementation source file, "src/android/Sms.java", and its target directory, "src/com/jsmobile/plugins/sms" (we will explore the contents of this file in the Developing the Android code section). iOS (<platform name="ios">) performs the following operations: It creates a <feature> element for our SMS plugin under the root element of the config.xml file to register our plugin in the iOS project. In iOS, the <feature> element's name attribute represents the service name, and its "ios-package" parameter represents the Objective-C plugin class name: <feature name="Sms"> <param name="ios-package" value="Sms" /> </feature> It specifies the plugin implementation source files: Sms.h (the header file) and Sms.m (the methods file). We will explore the contents of these files in the Developing the iOS code section. It adds "MessageUI.framework" as a weakly linked dependency for our iOS plugin.
Windows Phone 8 (<platform name="wp8">) performs the following operations: It creates a <feature> element for our SMS plugin under the root element of the config.xml file to register our plugin in the Windows Phone 8 project. The <feature> element's name attribute represents the service name, and its "wp-package" parameter represents the C# service class name: <feature name="Sms">        <param name="wp-package" value="Sms" /> </feature> It specifies the plugin implementation source file, "src/wp8/Sms.cs" (we will explore the contents of this file in the Developing Windows Phone 8 code section). This is all we need to know in order to understand the structure of our custom plugin; however, there are many more attributes and elements that are not mentioned here, as we didn't use them in our example. In order to get the complete list of attributes and elements of plugin.xml, you can check out the plugin specification page in the Apache Cordova documentation at http://cordova.apache.org/docs/en/3.4.0/plugin_ref_spec.md.html#Plugin%20Specification. Defining the plugin's JavaScript interface As indicated in the plugin definition file (plugin.xml), our plugin's JavaScript interface is defined in sms.js, which is located under the www directory. The following code snippet shows the sms.js file content: var smsExport = {}; smsExport.sendMessage = function(messageInfo, successCallback, errorCallback) {    if (messageInfo == null || typeof messageInfo !== 'object') {        if (errorCallback) {            errorCallback({                code: "INVALID_INPUT",                message: "Invalid Input"            });        }        return;    }    var phoneNumber = messageInfo.phoneNumber;    var textMessage = messageInfo.textMessage || "Default Text from SMS plugin";    if (! 
phoneNumber) {        console.log("Missing Phone Number");        if (errorCallback) {            errorCallback({                code: "MISSING_PHONE_NUMBER",                message: "Missing Phone number"            });        }        return;    }    cordova.exec(successCallback, errorCallback, "Sms", "sendMessage", [phoneNumber, textMessage]); }; module.exports = smsExport; The smsExport object contains a single method, sendMessage(messageInfo, successCallback, errorCallback). In the sendMessage method, phoneNumber and textMessage are extracted from the messageInfo object. If a phone number is not specified by the user, then errorCallback will be called with a JSON error object, which has a code attribute set to "MISSING_PHONE_NUMBER" and a message attribute set to "Missing Phone number". After passing this validation, a call is performed to the cordova.exec() API in order to call the native code (whether it is Android, iOS, Windows Phone 8, or any other supported platform) from Apache Cordova JavaScript. 
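Because this validation happens entirely in JavaScript, before cordova.exec() is ever reached, it can be exercised outside Cordova. The following is a minimal standalone sketch of the same checks (the validateMessageInfo helper name is ours, for illustration only — it is not part of the plugin):

```javascript
// Standalone sketch of the input validation performed by sms.js.
// Returns the error object (as passed to errorCallback) or null on success.
function validateMessageInfo(messageInfo) {
    if (messageInfo == null || typeof messageInfo !== 'object') {
        return { code: 'INVALID_INPUT', message: 'Invalid Input' };
    }
    if (!messageInfo.phoneNumber) {
        return { code: 'MISSING_PHONE_NUMBER', message: 'Missing Phone number' };
    }
    return null; // valid: cordova.exec() would be invoked next
}
```

A caller passing {} gets the MISSING_PHONE_NUMBER error object, while a well-formed messageInfo passes and the plugin goes on to invoke the native side.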
It is important to note that the cordova.exec(successCallback, errorCallback, "service", "action", [args]) API has the following parameters: successCallback: This represents the success callback function that will be called (with any specified parameter(s)) if the Cordova exec call completes successfully errorCallback: This represents the error callback function that will be called (with any specified error parameter(s)) if the Cordova exec call does not complete successfully "service": This represents the native service name that is mapped to a native class using the <feature> element (in sms.js, the native service name is "Sms") "action": This represents the action name to be executed, and an action is mapped to a class method in some platforms (in sms.js, the action name is "sendMessage") [args]: This is an array that represents the action arguments (in sms.js, the action arguments are [phoneNumber, textMessage]) It is very important to note that in cordova.exec(successCallback, errorCallback, "service", "action", [args]), the "service" parameter must match the name of the <feature> element, which we set in our plugin.xml file in order to call the mapped native plugin class correctly. Finally, the smsExport object is exported using module.exports. Do not forget that our JavaScript module is mapped to window.sms using the <clobbers target="window.sms" /> element inside <js-module src="www/sms.js"> element, which we discussed in the plugin.xml file. This means that in order to call the sendMessage method of the smsExport object from our plugin-client application, we use the sms.sendMessage() method. Developing the Android code As specified in our plugin.xml file's platform section for Android, the implementation of our plugin in Android is located at src/android/Sms.java. 
The following code snippet shows the first part of the Sms.java file: package com.jsmobile.plugins.sms;   import org.apache.cordova.CordovaPlugin; import org.apache.cordova.CallbackContext; import org.apache.cordova.PluginResult; import org.apache.cordova.PluginResult.Status; import org.json.JSONArray; import org.json.JSONException; import org.json.JSONObject; import android.app.Activity; import android.app.PendingIntent; import android.content.BroadcastReceiver; import android.content.Context; import android.content.Intent; import android.content.IntentFilter; import android.content.pm.PackageManager; import android.telephony.SmsManager; public class Sms extends CordovaPlugin {    private static final String SMS_GENERAL_ERROR = "SMS_GENERAL_ERROR";    private static final String NO_SMS_SERVICE_AVAILABLE = "NO_SMS_SERVICE_AVAILABLE";    private static final String SMS_FEATURE_NOT_SUPPORTED = "SMS_FEATURE_NOT_SUPPORTED";    private static final String SENDING_SMS_ID = "SENDING_SMS";    @Override    public boolean execute(String action, JSONArray args, CallbackContext callbackContext) throws JSONException {        if (action.equals("sendMessage")) {            String phoneNumber = args.getString(0);            String message = args.getString(1);            boolean isSupported = getActivity().getPackageManager().hasSystemFeature(PackageManager. FEATURE_TELEPHONY);            if (! isSupported) {                JSONObject errorObject = new JSONObject();                errorObject.put("code", SMS_FEATURE_NOT_SUPPORTED);                errorObject.put("message", "SMS feature is not supported on this device");                callbackContext.sendPluginResult(new PluginResult(Status.ERROR, errorObject));                return false;            }            this.sendSMS(phoneNumber, message, callbackContext);            return true;        }        return false;    }    // Code is omitted here for simplicity ...    
private Activity getActivity() {        return this.cordova.getActivity();    } } In order to create our Cordova Android plugin, our plugin class must extend the CordovaPlugin class and override one of its execute() methods. In our Sms Java class, the execute(String action, JSONArray args, CallbackContext callbackContext) method, which has the following parameters, is overridden: String action: This represents the action to be performed, and it matches the specified action parameter in the cordova.exec() JavaScript API JSONArray args: This represents the action arguments, and it matches the [args] parameter in the cordova.exec() JavaScript API CallbackContext callbackContext: This represents the callback context used when calling back to JavaScript In the execute() method of our Sms class, the phoneNumber and message parameters are retrieved from the args parameter. Using getActivity().getPackageManager().hasSystemFeature(PackageManager.FEATURE_TELEPHONY), we can check whether the device has a telephony radio with data communication support. If the device does not have this feature, this API returns false, so we create errorObject of the JSONObject type that contains an error code attribute ("code") and an error message attribute ("message") to inform the plugin user that the SMS feature is not supported on this device. The plugin tells the JavaScript caller that the operation failed by calling callbackContext.sendPluginResult() and specifying a PluginResult object as a parameter (the PluginResult object's status is set to Status.ERROR, and its message is set to errorObject). As shown in our Android implementation, in order to send a plugin result to JavaScript from Android, we use the callbackContext.sendPluginResult() method, which specifies the PluginResult status and message. Other platforms (iOS and Windows Phone 8) work in much the same way.
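On the JavaScript side, an errorObject sent this way arrives as the argument to the errorCallback that was passed to sendMessage(). A hedged sketch of how a plugin client might translate the plugin's error codes (which mirror the string constants in Sms.java) into user-facing text; the describeSmsError name is ours, for illustration:

```javascript
// Sketch: map the SMS plugin's error codes to user-facing messages.
// The codes mirror the constants defined in Sms.java.
function describeSmsError(errorObject) {
    var messages = {
        SMS_FEATURE_NOT_SUPPORTED: 'This device cannot send SMS messages.',
        NO_SMS_SERVICE_AVAILABLE: 'No SMS service is available right now.',
        SMS_GENERAL_ERROR: 'Sending the SMS failed.'
    };
    return messages[errorObject.code] ||
        ('Unexpected error: ' + errorObject.message);
}
```

Such a helper would typically be called from the error callback, for example errorCallback = function (err) { alert(describeSmsError(err)); }.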
If an Android device supports sending SMS messages, then a call to the sendSMS() private method is performed. The following code snippet shows the sendSMS() code: private void sendSMS(String phoneNumber, String message, final CallbackContext callbackContext) throws JSONException {    PendingIntent sentPI = PendingIntent.getBroadcast(getActivity(), 0, new Intent(SENDING_SMS_ID), 0);    getActivity().registerReceiver(new BroadcastReceiver() {        @Override        public void onReceive(Context context, Intent intent) {            switch (getResultCode()) {            case Activity.RESULT_OK:                callbackContext.sendPluginResult(new PluginResult(Status.OK, "SMS message is sent successfully"));                break;            case SmsManager.RESULT_ERROR_NO_SERVICE:                try {                    JSONObject errorObject = new JSONObject();                    errorObject.put("code", NO_SMS_SERVICE_AVAILABLE);                    errorObject.put("message", "SMS is not sent because no service is available");                     callbackContext.sendPluginResult(new PluginResult(Status.ERROR, errorObject));                } catch (JSONException exception) {                    exception.printStackTrace();                }                break;            default:                try {                    JSONObject errorObject = new JSONObject();                    errorObject.put("code", SMS_GENERAL_ERROR);                    errorObject.put("message", "SMS general error");                    callbackContext.sendPluginResult(new PluginResult(Status.ERROR, errorObject));                } catch (JSONException exception) {                    exception.printStackTrace();                }                 break;            }        }    }, new IntentFilter(SENDING_SMS_ID));    SmsManager sms = SmsManager.getDefault();    sms.sendTextMessage(phoneNumber, null, message, sentPI, null); } In order to understand the sendSMS() method, let's look into the method's 
last two lines: SmsManager sms = SmsManager.getDefault(); sms.sendTextMessage(phoneNumber, null, message, sentPI, null); SmsManager is an Android class that provides an API to send text messages. Calling SmsManager.getDefault() returns an SmsManager object. In order to send a text-based message, a call to sms.sendTextMessage() should be performed. The sms.sendTextMessage(String destinationAddress, String scAddress, String text, PendingIntent sentIntent, PendingIntent deliveryIntent) method has the following parameters: destinationAddress: This represents the address (phone number) to send the message to. scAddress: This represents the service center address. It can be set to null to use the current default SMS center. text: This represents the text message to be sent. sentIntent: This represents the PendingIntent that is broadcast when the message is successfully sent or fails. It can be set to null. deliveryIntent: This represents the PendingIntent that is broadcast when the message is delivered to the recipient. It can be set to null. As shown in the preceding code snippet, we specified a destination address (phoneNumber), a text message (message), and finally, a pending intent (sentPI) in order to listen to the message-sending status. If you return to the sendSMS() code and look at it from the beginning, you will notice that sentPI is initialized by calling PendingIntent.getBroadcast(), and in order to receive the SMS-sending broadcast, a BroadcastReceiver is registered. When the SMS message is sent successfully or fails, the onReceive() method of the BroadcastReceiver will be called, and the result code can be retrieved using getResultCode(). The result code can indicate: Success when getResultCode() is equal to Activity.RESULT_OK. In this case, a PluginResult object is constructed with status = Status.OK and message = "SMS message is sent successfully", and it is sent to the client using callbackContext.sendPluginResult().
Failure when getResultCode() is not equal to Activity.RESULT_OK. In this case, a PluginResult object is constructed with status = Status.ERROR and message = errorObject (which contains the error code and error message), and it is sent to the client using callbackContext.sendPluginResult(). These are the details of our SMS plugin implementation in the Android platform. Now, let's move to the iOS implementation of our plugin. Summary This article showed you how to design and develop your own custom Apache Cordova plugin using JavaScript and Java for Android, Objective-C for iOS, and finally, C# for Windows Phone 8.
Packt
10 Oct 2014
25 min read

Using Sensors

In this article by Leon Anavi, author of the Tizen Cookbook, we will cover the following topics: Using location-based services to display current location Getting directions Geocoding Reverse geocoding Calculating distance Detecting device motion Detecting device orientation Using the Vibration API The data provided by the hardware sensors of Tizen devices can be useful for many mobile applications. In this article, you will learn how to retrieve the geographic location of Tizen devices using the assisted GPS, detect changes in the device orientation and motion, and integrate map services into Tizen web applications. Most of the examples related to maps and navigation use Google APIs. Other service providers such as Nokia HERE, OpenStreetMap, and Yandex also offer APIs with similar capabilities and can be used as an alternative to Google in Tizen web applications. At the time of writing this book, it was announced that Nokia HERE had joined the Tizen Association. Some Tizen devices will be shipped with built-in navigation applications powered by Nokia HERE. The smart watch Gear S is the first Tizen wearable device from Samsung that comes out of the box with an application called Navigator, which is developed with Nokia HERE. Explore the full capabilities of the Nokia HERE JavaScript APIs if you are interested in integrating them in your Tizen web application at https://developer.here.com/javascript-apis. OpenStreetMap also deserves special attention because it is a high-quality platform and a very successful community-driven project. The main advantage of OpenStreetMap is that its usage is completely free. The recipe about Reverse geocoding in this article demonstrates address lookup using two different approaches: through the Google API and through the OpenStreetMap API.
Using location-based services to display current location By following the provided example in this recipe, you will master the HTML5 Geolocation API and learn how to retrieve the coordinates of the current location of a device in a Tizen web application. Getting ready Ensure that the positioning capabilities are turned on. On a Tizen device or Emulator, open Settings, select Locations, and turn on both GPS (if it is available) and Network position as shown in the following screenshot: Enabling GPS and network position from Tizen Settings How to do it... Follow these steps to retrieve the location in a Tizen web application: Implement JavaScript for handling errors: function showError(err) { console.log('Error ' + err.code + ': ' + err.message); } Implement JavaScript for processing the retrieved location: function showLocation(location) { console.log('latitude: ' + location.coords.longitude + '    longitude: ' + location.coords.longitude); } Implement a JavaScript function that searches for the current position using the HTML5 Geolocation API: function retrieveLocation() { if (navigator.geolocation) {    navigator.geolocation.getCurrentPosition(showLocation,      showError); } } At an appropriate place in the source code of the application, invoke the function created in the previous step: retrieveLocation(); How it works The getCurrentPosition() method of the HTML5 Geolocation API is used in the retrieveLocation() function to retrieve the coordinates of the current position of the device. The functions showLocation() and showError() are provided as callbacks, which are invoked on success or failure. An instance of the Position interface is provided as an argument to showLocation(). 
This interface has two properties: coords: This specifies an object that defines the retrieved position timestamp: This specifies the date and time when the position was retrieved The getCurrentPosition() method accepts an instance of the PositionOptions interface as a third optional argument. This argument should be used for setting specific options such as enableHighAccuracy, timeout, and maximumAge. Explore the Geolocation API specification if you are interested in more details regarding the attributes of the discussed interface at http://www.w3.org/TR/geolocation-API/#position-options. There is no need to add any specific permissions explicitly in config.xml. When an application that implements the code from this recipe is launched for the first time, it will ask for permission to access the location, as shown in the following screenshot: A request to access location in Tizen web application If you are developing a location-based application and want to debug it using the Tizen Emulator, use the Event Injector to set the position. There's more... A map view provided by Google Maps JavaScript API v3 can be easily embedded into a Tizen web application. An internet connection is required to use the API, but there is no need to install additional SDKs or tools from Google. Follow these instructions to display a map and a marker: Make sure that the application can access the Google API. For example, you can enable access to any website by adding the following line to config.xml: <access origin="*" subdomains="true"></access> Visit https://code.google.com/apis/console to get the API keys. Click on Services and activate Google Maps API v3. After that, click on API and copy Key for browser apps. Its value will be used in the source code of the application.
Implement the following source code to show a map inside the div with the ID map-canvas: <style type="text/css"> #map-canvas { width: 320px; height: 425px; } </style> <script type="text/javascript" src="https://maps.googleapis.com/maps/api/js?key=<API Key>&sensor=false"></script> Replace <API Key> in the line above with the value of the key obtained in the previous step. <script type="text/javascript"> function initialize(nLatitude, nLongitude) { var mapOptions = {    center: new google.maps.LatLng(nLatitude, nLongitude),    zoom: 14 }; var map = new google.maps.Map(document.getElementById("map-canvas"), mapOptions); var marker = new google.maps.Marker({    position: new google.maps.LatLng(nLatitude, nLongitude),    map: map }); } </script> In the HTML of the application, create the following div element: <div id="map-canvas"></div> Provide latitude and longitude to the function and execute it at an appropriate location. For example, these are the coordinates of a location in Westminster, London: initialize(51.501725, -0.126109); The following screenshot demonstrates a Tizen web application that has been created by following the preceding guidelines: Google Map in Tizen web application Combine the tutorial from the How to do it section of the recipe with these instructions to display a map with the current location. See also The source code of a simple Tizen web application following the tutorial from this recipe is provided alongside the book. Feel free to use it as you wish. More details are available in the W3C specification of the HTML5 Geolocation API at http://www.w3.org/TR/geolocation-API/. To learn more details and to explore the full capabilities of the Google Maps JavaScript API v3, please visit https://developers.google.com/maps/documentation/javascript/tutorial. Getting directions Navigation is another common task for mobile applications.
The Google Directions API allows web and mobile developers to retrieve a route between locations by sending an HTTP request. It is mandatory to specify an origin and a destination, but it is also possible to set waypoints. All locations can be provided either by exact coordinates or by address. An example of getting directions to reach a destination on foot is demonstrated in this recipe. Getting ready Before you start with the development, register an application and obtain API keys: Log in to Google Developers Console at https://code.google.com/apis/console. Click on Services and turn on Directions API. Click on API Access and get the value of Key for server apps, which should be used in all requests from your Tizen web application to the API. For more information about the API keys for the Directions API, please visit https://developers.google.com/maps/documentation/directions/#api_key. How to do it... Use the following source code to retrieve and display step-by-step instructions on how to walk from one location to another using the Google Directions API: Allow the application to access websites by adding the following line to config.xml: <access origin="*" subdomains="true"></access> Create an HTML unordered list: <ul id="directions" data-role="listview"></ul> Create JavaScript that will load retrieved directions: function showDirections(data) { if (!data || !data.routes || (0 == data.routes.length)) {    console.log('Unable to provide directions.');    return; } var directions = data.routes[0].legs[0].steps; for (var nStep = 0; nStep < directions.length; nStep++) {    var listItem = $('<li>').append($('<p>').append(directions[nStep].html_instructions));    $('#directions').append(listItem); } $('#directions').listview('refresh'); } Create a JavaScript function that sends an asynchronous HTTP (AJAX) request to the Google Maps API to retrieve directions: function retrieveDirection(sLocationStart, sLocationEnd) { $.ajax({    type: 'GET',    url:
'https://maps.googleapis.com/maps/api/directions/json?',    data: { origin: sLocationStart,        destination: sLocationEnd,        mode: 'walking',        sensor: 'true',        key: '<API key>' }, Do not forget to replace <API key> with the Key for server apps value provided by Google for the Directions API. Please note that a similar key has to be set in the source code in the subsequent recipes that utilize Google APIs too:    success : showDirections,    error : function (request, status, message) {    console.log('Error');    } }); } Provide start and end locations as arguments and execute the retrieveDirection() function. For example: retrieveDirection('Times Square, New York, NY, USA', 'Empire State Building, 350 5th Avenue, New York, NY 10118, USA'); How it works The first mandatory step is to allow the Tizen web application to access Google servers. After that, an HTML unordered list with the ID directions is constructed. An origin and a destination are provided to the JavaScript function retrieveDirection(). On success, the showDirections() function is invoked as a callback, and it loads step-by-step instructions on how to move from the origin to the destination. The following screenshot displays a Tizen web application with guidance on how to walk from Times Square in New York to the Empire State Building: The Directions API is quite flexible. The mandatory parameters are origin, destination, and sensor. Numerous other options can be configured in the HTTP request using different parameters. To set the desired transport, use the parameter mode, which has the following options: driving walking bicycling transit (for getting directions using public transport) By default, if the mode is not specified, its value will be set to driving. The unit system can be configured through the units parameter. The options metric and imperial are available.
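These transport and unit options are ordinary query-string parameters, so the request URL can be assembled with plain JavaScript. A hedged sketch (the buildDirectionsUrl helper name is ours, and the key value is a placeholder, not a real key):

```javascript
// Sketch: build a Google Directions API request URL with optional
// mode and units parameters.
function buildDirectionsUrl(origin, destination, options) {
    options = options || {};
    var params = new URLSearchParams({
        origin: origin,
        destination: destination,
        mode: options.mode || 'driving',   // driving is the API default
        units: options.units || 'metric',  // pick one explicitly
        sensor: 'true',
        key: options.key || 'YOUR_API_KEY' // placeholder
    });
    return 'https://maps.googleapis.com/maps/api/directions/json?' +
        params.toString();
}
```

A URL built this way can be handed directly to $.ajax() or any other HTTP client.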
The developer can also define restrictions using the parameter avoid and the addresses of one or more directions points at the waypoints parameter. A pipe (|) is used as a symbol for separation if more than one address is provided. There's more... An application with similar features for getting directions can also be created using services from Nokia HERE. The REST API can be used in the same way as Google Maps API. Start by acquiring the credentials at http://developer.here.com/get-started. An asynchronous HTTP request should be sent to retrieve directions. Instructions on how to construct the request to the REST API are provided in its documentation at https://developer.here.com/rest-apis/documentation/routing/topics/request-constructing.html. The Nokia HERE JavaScript API is another excellent solution for routing. Make instances of classes Display and Manager provided by the API to create a map and a routing manager. After that, create a list of way points whose coordinates are defined by an instance of the Coordinate class. Refer to the following example provided by the user's guide of the API to learn details at https://developer.here.com/javascript-apis/documentation/maps/topics/routing.html. The full specifications about classes Display, Manager, and Coordinate are available at the following links: https://developer.here.com/javascript-apis/documentation/maps/topics_api_pub/nokia.maps.map.Display.html https://developer.here.com/javascript-apis/documentation/maps/topics_api_pub/nokia.maps.routing.Manager.html https://developer.here.com/javascript-apis/documentation/maps/topics_api_pub/nokia.maps.geo.Coordinate.html See also All details, options, and returned results from the Google Directions API are available at https://developers.google.com/maps/documentation/directions/. Geocoding Geocoding is the process of retrieving geographical coordinates associated with an address. It is often used in mobile applications that use maps and provide navigation. 
In this recipe, you will learn how to convert an address to longitude and latitude using JavaScript and AJAX requests to the Google Geocoding API. Getting ready You must obtain keys before you can use the Geocoding API in a Tizen web application: Visit Google Developers Console at https://code.google.com/apis/console. Click on Services and turn on Geocoding API. Click on API Access and get the value of Key for server apps. Use it in all requests from your Tizen web application to the API. For more details regarding the API keys for the Geocoding API, visit https://developers.google.com/maps/documentation/geocoding/#api_key. How to do it... Follow these instructions to retrieve geographic coordinates of an address in a Tizen web application using the Google Geocoding API: Allow the application to access websites by adding the following line to config.xml: <access origin="*" subdomains="true"></access> Create a JavaScript function to handle results provided by the API: function retrieveCoordinates(data) { if (!data || !data.results || (0 == data.results.length)) {    console.log('Unable to retrieve coordinates');    return; } var latitude = data.results[0].geometry.location.lat; var longitude = data.results[0].geometry.location.lng; console.log('latitude: ' + latitude + ' longitude: ' +    longitude); } Create a JavaScript function that sends a request to the API: function geocoding(address) { $.ajax({    type: 'GET',    url: 'https://maps.googleapis.com/maps/api/geocode/json?',    data: { address: address,      sensor: 'true',      key: '<API key>' }, As in the previous recipes, you should again replace <API key> with the Key for server apps value provided by Google for the Geocoding API.    success : retrieveCoordinates,    error : function (request, status, message) {    console.log('Error: ' + message);    } }); } Provide the address as an argument to the geocoding() function and invoke it. 
For example: geocoding('350 5th Avenue, New York, NY 10118, USA'); How it works The address is passed as an argument to the geocoding() function, which sends a request to the URL of Google Geocoding API. The URL specifies that the returned result should be serialized as JSON. The parameters of the URL contain information about the address and the API key. Additionally, there is a parameter that indicates whether the device has a sensor. In general, Tizen mobile devices are equipped with GPS so the parameter sensor is set to true. A successful response from the API is handled by the retrieveCoordinates() function, which is executed as a callback. After processing the data, the code snippet in this recipe prints the retrieved coordinates at the console. For example, if we provide the address of the Empire State Building to the geocoding() function on success, the following text will be printed: latitude: 40.7481829 longitude: -73.9850635. See also Explore the Google Geocoding API documentation to learn the details regarding the usage of the API and all of its parameters at https://developers.google.com/maps/documentation/geocoding/#GeocodingRequests. Nokia HERE provides similar features. Refer to the documentation of its Geocoder API to learn how to create the URL of a request to it at https://developer.here.com/rest-apis/documentation/geocoder/topics/request-constructing.html. Reverse geocoding Reverse geocoding, also known as address lookup, is the process of retrieving an address that corresponds to a location described with geographic coordinates. The Google Geocoding API provides methods for both geocoding as well as reverse geocoding. In this recipe, you will learn how to find the address of a location based on its coordinates using the Google API as well as an API provided by OpenStreetMap. Getting ready Same keys are required for geocoding and reverse geocoding. If you have already obtained a key for the previous recipe, you can directly use it here again. 
Otherwise, you can perform the following steps: Visit Google Developers Console at https://code.google.com/apis/console. Go to Services and turn on Geocoding API. Select API Access, locate the value of Key for server apps, and use it in all requests from the Tizen web application to the API. If you need more information about the Geocoding API keys, visit https://developers.google.com/maps/documentation/geocoding/#api_key. How to do it... Follow the described algorithm to retrieve an address based on geographic coordinates using the Google Maps Geocoding API: Allow the application to access websites by adding the following line to config.xml: <access origin="*" subdomains="true"></access> Create a JavaScript function to handle the data provided for a retrieved address: function retrieveAddress(data) { if (!data || !data.results || (0 == data.results.length)) {    console.log('Unable to retrieve address');    return; } var sAddress = data.results[0].formatted_address; console.log('Address: ' + sAddress); } Implement a function that performs a request to Google servers to retrieve an address based on latitude and longitude: function reverseGeocoding(latitude, longitude) { $.ajax({    type: 'GET',    url: 'https://maps.googleapis.com/maps/api/geocode/json?',    data: { latlng: latitude+','+longitude,        sensor: 'true',        key: '<API key>' }, Note that <API key> has to be replaced with the Key for server apps value provided by Google for the Geocoding API:    success : retrieveAddress,    error : function (request, status, message) {    console.log('Error: ' + message);    } }); } Provide coordinates as arguments to the function and execute it, for example: reverseGeocoding('40.748183', '-73.985064'); How it works If an application developed using the preceding source code invokes the reverseGeocoding() function with latitude 40.748183 and longitude -73.985064, the printed result at the console will be: 350 5th Avenue, New York, NY 10118, USA.
By the way, as in the previous recipe, the address corresponds to the location of the Empire State Building in New York. The reverseGeocoding() function sends an AJAX request to the API. The parameters at the URL specify that the response must be formatted as JSON. The longitude and latitude of the location are separated by a comma and set as the value of the latlng parameter in the URL. There's more... OpenStreetMap also provides a reverse geocoding service. For example, the following URL will return a JSON result of a location with the latitude 40.7481829 and longitude -73.9850635: http://nominatim.openstreetmap.org/reverse?format=json&lat=40.7481829&lon=-73.9850635 The main advantage of OpenStreetMap is that it is an open project with a great community. Its API for reverse geocoding does not require any keys and it can be used for free. Leaflet is a popular open source JavaScript library based on OpenStreetMap and optimized for mobile devices. It is well supported and easy to use, so you may consider integrating it into your Tizen web applications. Explore its features at http://leafletjs.com/features.html. See also All details regarding the Google Geocoding API are available at https://developers.google.com/maps/documentation/geocoding/#ReverseGeocoding If you prefer to use the API provided by OpenStreetMap, please have a look at http://wiki.openstreetmap.org/wiki/Nominatim#Reverse_Geocoding_.2F_Address_lookup Calculating distance This recipe is dedicated to a method for calculating the distance between two locations. The Google Directions API will be used again. Unlike the Getting directions recipe, this time only the information about the distance will be processed. Getting ready Just like the other recipes related to the Google APIs, in this case, the developer must obtain the API keys before the start of development. Please follow these instructions to register and get an appropriate API key: Visit Google Developers Console at https://code.google.com/apis/console. 
Click on Services and turn on Directions API. Click on API Access and save the value of Key for server apps. Use it in all requests from your Tizen web application to the API. If you need more information about the API keys for Directions API, visit https://developers.google.com/maps/documentation/directions/#api_key. How to do it... Follow these steps to calculate the distance between two locations: Allow the application to access websites by adding the following line to config.xml: <access origin="*" subdomains="true"></access> Implement a JavaScript function that will process the retrieved data: function retrieveDistance(data) { if (!data || !data.routes || (0 == data.routes.length)) { console.log('Unable to retrieve distance'); return; } var sLocationStart = data.routes[0].legs[0].start_address; var sLocationEnd = data.routes[0].legs[0].end_address; var sDistance = data.routes[0].legs[0].distance.text; console.log('The distance between ' + sLocationStart + ' and ' + sLocationEnd + ' is: ' + sDistance); } Create a JavaScript function that will request directions using the Google Maps API: function checkDistance(sStart, sEnd) { $.ajax({ type: 'GET', url: 'https://maps.googleapis.com/maps/api/directions/json?', data: { origin: sStart, destination: sEnd, sensor: 'true', units: 'metric', key: '<API key>' }, Remember to replace <API key> with the Key for server apps value provided by Google for the Directions API: success : retrieveDistance, error : function (request, status, message) { console.log('Error: ' + message); } }); } Execute the checkDistance() function and provide the origin and the destination as arguments, for example: checkDistance('Plovdiv', 'Burgas'); Geographical coordinates can also be provided as arguments to the function checkDistance(). 
For example, let's calculate the same distance but this time by providing the latitude and longitude of locations in the Bulgarian cities Plovdiv and Burgas: checkDistance('42.135408,24.74529', '42.504793,27.462636'); How it works The checkDistance() function sends data to the Google Directions API. It sets the origin, the destination, the sensor, the unit system, and the API key as parameters of the URL. The result returned by the API is provided as JSON, which is handled in the retrieveDistance() function. The output in the console of the preceding example, which retrieves the distance between the Bulgarian cities Plovdiv and Burgas, is The distance between Plovdiv, Bulgaria and Burgas, Bulgaria is: 253 km. See also For all details about the Directions API as well as a full description of the returned response, visit https://developers.google.com/maps/documentation/directions/. Detecting device motion This recipe offers a tutorial on how to detect and handle device motion in Tizen web applications. No specific Tizen APIs will be used. The source code in this recipe relies on the standard W3C DeviceMotionEvent, which is supported by Tizen web applications as well as any modern web browser. How to do it... Please follow these steps to detect device motion and display its acceleration in a Tizen web application: Create HTML components to show device acceleration, for example: <p>X: <span id="labelX"></span></p> <p>Y: <span id="labelY"></span></p> <p>Z: <span id="labelZ"></span></p> Create a JavaScript function to handle errors: function showError(err) { console.log('Error: ' + err.message); } Create a JavaScript function that handles motion events: function motionDetected(event) { var acc = event.accelerationIncludingGravity; var sDeviceX = (acc.x) ? acc.x.toFixed(2) : '?'; var sDeviceY = (acc.y) ? acc.y.toFixed(2) : '?'; var sDeviceZ = (acc.z) ?
acc.z.toFixed(2) : '?'; $('#labelX').text(sDeviceX); $('#labelY').text(sDeviceY); $('#labelZ').text(sDeviceZ); } Create a JavaScript function that starts a listener for motion events: function deviceMotion() { try { if (!window.DeviceMotionEvent) { throw new Error('device motion not supported.'); } window.addEventListener('devicemotion', motionDetected, false); } catch (err) { showError(err); } } Invoke the function at an appropriate location in the source code of the application: deviceMotion(); How it works The deviceMotion() function registers an event listener that invokes the motionDetected() function as a callback when a device motion event is detected. All errors, including an error if DeviceMotionEvent is not supported, are handled in the showError() function. As shown in the following screenshot, the motionDetected() function loads the data of the properties of DeviceMotionEvent into the HTML5 labels that were created in the first step. The results are displayed using the standard units for acceleration according to the International System of Units (SI): metres per second squared (m/s²). The JavaScript method toFixed() is invoked to convert the result to a string with two decimals: A Tizen web application that detects device motion See also Notice that the device motion event specification is part of the DeviceOrientationEvent specification. Both are still in draft. The latest published version is available at http://www.w3.org/TR/orientation-event/. The source code of a sample Tizen web application that detects device motion is provided along with the book. You can import the project of the application into the Tizen IDE and explore it. Detecting device orientation In this recipe, you will learn how to monitor changes of the device orientation using the HTML5 DeviceOrientation event as well as get the device orientation using the Tizen SystemInfo API. 
Both methods for retrieving device orientation have advantages and work in Tizen web applications. It is up to the developer to decide which approach is more suitable for their application. How to do it... Perform the following steps to register a listener and handle device orientation events in your Tizen web application: Create a JavaScript function to handle errors: function showError(err) { console.log('Error: ' + err.message); } Create a JavaScript function that handles changes of the orientation: function orientationDetected(event) { console.log('absolute: ' + event.absolute); console.log('alpha: ' + event.alpha); console.log('beta: ' + event.beta); console.log('gamma: ' + event.gamma); } Create a JavaScript function that adds a listener for the device orientation: function deviceOrientation() { try { if (!window.DeviceOrientationEvent) { throw new Error('device orientation not supported.'); } window.addEventListener('deviceorientation', orientationDetected, false); } catch (err) { showError(err); } } Execute the JavaScript function to start listening for device orientation events: deviceOrientation(); How it works If DeviceOrientationEvent is supported, the deviceOrientation() function binds the event to the orientationDetected() function, which is invoked as a callback each time an orientation event is delivered. The showError() function will be executed only if a problem occurs. An instance of the DeviceOrientationEvent interface is provided as an argument of the orientationDetected() function. In the preceding code snippet, the values of its four read-only properties absolute (Boolean value, true if the device provides orientation data absolutely), alpha (motion around the z-axis), beta (motion around the x-axis), and gamma (motion around the y-axis) are printed in the console. There's more... There is an easier way to determine whether a Tizen device is in landscape or portrait mode. 
In a Tizen web application, it is recommended to use the SystemInfo API for this purpose. The following code snippet retrieves the device orientation: function onSuccessCallback(orientation) { console.log("Device orientation: " + orientation.status); } function onErrorCallback(error) { console.log("Error: " + error.message); } tizen.systeminfo.getPropertyValue("DEVICE_ORIENTATION", onSuccessCallback, onErrorCallback); The status of the orientation can be one of the following values: PORTRAIT_PRIMARY PORTRAIT_SECONDARY LANDSCAPE_PRIMARY LANDSCAPE_SECONDARY See also The DeviceOrientationEvent specification is still a draft. The latest published version is available at http://www.w3.org/TR/orientation-event/. For more information on the Tizen SystemInfo API, visit https://developer.tizen.org/dev-guide/2.2.1/org.tizen.web.device.apireference/tizen/systeminfo.html. Using the Vibration API Tizen is famous for its excellent support of HTML5 and W3C APIs. The standard Vibration API is also supported and it can be used in Tizen web applications. This recipe offers code snippets on how to activate vibration on a Tizen device. How to do it... Use the following code snippet to activate the vibration of the device for three seconds: if (navigator.vibrate) { navigator.vibrate(3000); } To cancel an ongoing vibration, just call the vibrate() method again with zero as the value of its argument: if (navigator.vibrate) { navigator.vibrate(0); } Alternatively, the vibration can be canceled by passing an empty array to the same method: navigator.vibrate([]); How it works The W3C Vibration API is used through the JavaScript object navigator. Its vibrate() method expects either a single value or an array of values. All values must be specified in milliseconds. The value provided to the vibrate() method in the preceding example is 3000 because 3 seconds is equal to 3000 milliseconds. There's more... The W3C Vibration API allows advanced tuning of the device vibration. 
A list of time intervals (with values in milliseconds), during which the device will alternately vibrate and pause, can be specified as an argument of the vibrate() method. For example, the following code snippet will make the device vibrate for 100 ms, stand still for 3 seconds, and then again vibrate, but this time just for 50 ms: if (navigator.vibrate) { navigator.vibrate([100, 3000, 50]); } See also For more information on the vibration capabilities and the API usage, visit http://www.w3.org/TR/vibration/. Tizen native applications for the mobile profile have access to additional APIs written in C++ for light and proximity sensors. Explore the source code of the sample native application SensorApp, which is provided with the Tizen SDK, to learn how to use these sensors. More information about them is available at https://developer.tizen.org/dev-guide/2.2.1/org.tizen.native.appprogramming/html/guide/uix/light_sensor.htm and https://developer.tizen.org/dev-guide/2.2.1/org.tizen.native.appprogramming/html/guide/uix/proximity_sensor.htm. Summary In this article, we learned the details of various hardware sensors such as the GPS, accelerometer, and gyroscope sensor. The main focus of this article was on location-based services, maps, and navigation. Resources for Article: Further resources on this subject: Major SDK components [article] Getting started with Kinect for Windows SDK Programming [article] https://www.packtpub.com/books/content/cordova-plugins [article]
LiveCode: Loops and Timers
Packt
10 Sep 2014
In this article by Dr Edward Lavieri, author of LiveCode Mobile Development Cookbook, you will learn how to use timers and loops in your mobile apps. Timers can be used for many different functions, including a basketball shot clock, car racing time, the length of time logged into a system, and so much more. Loops are useful for counting and iterating through lists. All of this will be covered in this article. (For more resources related to this topic, see here.) Implementing a countdown timer To implement a countdown timer, we will create two objects: a field to display the current timer and a button to start the countdown. We will code two handlers: one for the button and one for the timer. How to do it... Perform the following steps to create a countdown timer: Create a new main stack. Place a field on the stack's card and name it timerDisplay. Place a button on the stack's card and name it Count Down. Add the following code to the Count Down button: on mouseUp local pTime put 19 into pTime put pTime into fld "timerDisplay" countDownTimer pTime end mouseUp Add the following code to the Count Down button: on countDownTimer currentTimerValue subtract 1 from currentTimerValue put currentTimerValue into fld "timerDisplay" if currentTimerValue > 0 then send "countDownTimer" && currentTimerValue to me in 1 sec end if end countDownTimer Test the code using a mobile simulator or an actual device. How it works... To implement our timer, we created a simple callback situation where the countDownTimer method will be called each second until the timer is zero. We avoided the temptation to use a repeat loop because that would have blocked all other messages and introduced unwanted app behavior. There's more... LiveCode provides us with the send command, which allows us to transfer messages to handlers and objects immediately or at a specific time, such as this recipe's example. 
Implementing a count-up timer To implement a count-up timer, we will create two objects: a field to display the current timer and a button to start the upwards counting. We will code two handlers: one for the button and one for the timer. How to do it... Perform the following steps to implement a count-up timer: Create a new main stack. Place a field on the stack's card and name it timerDisplay. Place a button on the stack's card and name it Count Up. Add the following code to the Count Up button: on mouseUp local pTime put 0 into pTime put pTime into fld "timerDisplay" countUpTimer pTime end mouseUp Add the following code to the Count Up button: on countUpTimer currentTimerValue add 1 to currentTimerValue put currentTimerValue into fld "timerDisplay" if currentTimerValue < 10 then send "countUpTimer" && currentTimerValue to me in 1 sec end if end countUpTimer Test the code using a mobile simulator or an actual device. How it works... To implement our timer, we created a simple callback situation where the countUpTimer method will be called each second until the timer is at 10. We avoided the temptation to use a repeat loop because that would have blocked all other messages and introduced unwanted app behavior. There's more... Timers can be tricky, especially on mobile devices. For example, using the repeat loop control when working with timers is not recommended because repeat blocks other messages. Pausing a timer It can be important to have the ability to stop or pause a timer once it is started. The difference between stopping and pausing a timer is in keeping track of where the timer was when it was interrupted. In this recipe, you'll learn how to pause a timer. Of course, if you never resume the timer, then the act of pausing it has the same effect as stopping it. How to do it... Use the following steps to create a count-up timer and pause function: Create a new main stack. Place a field on the stack's card and name it timerDisplay. 
Place a button on the stack's card and name it Count Up. Add the following code to the Count Up button: on mouseUp local pTime put 0 into pTime put pTime into fld "timerDisplay" countUpTimer pTime end mouseUp Add the following code to the Count Up button: on countUpTimer currentTimerValue add 1 to currentTimerValue put currentTimerValue into fld "timerDisplay" if currentTimerValue < 60 then send "countUpTimer" && currentTimerValue to me in 1 sec end if end countUpTimer Add a button to the card and name it Pause. Add the following code to the Pause button: on mouseUp repeat for each line i in the pendingMessages cancel (item 1 of i) end repeat end mouseUp In LiveCode, the pendingMessages option returns a list of currently scheduled messages. These are messages that have been scheduled for delivery but are yet to be delivered. To test this, first click on the Count Up button, and then click on the Pause button before the timer reaches 60. How it works... We first created a timer that counts up from 0 to 60. Next, we created a Pause button that, when clicked, cancels all pending system messages, including the call to the countUpTimer handler. Resuming a timer If you have a timer as part of your mobile app, you will most likely want the user to be able to pause and resume a timer, either directly or through in-app actions. See previous recipes in this article to create and pause a timer. This recipe covers how to resume a timer once it is paused. How to do it... Perform the following steps to resume a timer once it is paused: Create a new main stack. Place a field on the stack's card and name it timerDisplay. Place a button on the stack's card and name it Count Up. 
Add the following code to the Count Up button: on mouseUp local pTime put 0 into pTime put pTime into fld "timerDisplay" countUpTimer pTime end mouseUp on countUpTimer currentTimerValue add 1 to currentTimerValue put currentTimerValue into fld "timerDisplay" if currentTimerValue < 60 then send "countUpTimer" && currentTimerValue to me in 1 sec end if end countUpTimer Add a button to the card and name it Pause. Add the following code to the Pause button: on mouseUp repeat for each line i in the pendingMessages cancel (item 1 of i) end repeat end mouseUp Place a button on the card and name it Resume. Add the following code to the Resume button: on mouseUp local pTime put the text of fld "timerDisplay" into pTime countUpTimer pTime end mouseUp on countUpTimer currentTimerValue add 1 to currentTimerValue put currentTimerValue into fld "timerDisplay" if currentTimerValue < 60 then send "countUpTimer" && currentTimerValue to me in 1 sec end if end countUpTimer To test this, first, click on the Count Up button, then click on the Pause button before the timer reaches 60. Finally, click on the Resume button. How it works... We first created a timer that counts up from 0 to 60. Next, we created a Pause button that, when clicked, cancels all pending system messages, including the call to the countUpTimer handler. When the Resume button is clicked on, the current value of the timer, based on the timerDisplay field, is used to continue incrementing the timer. In LiveCode, pendingMessages returns a list of currently scheduled messages. These are messages that have been scheduled for delivery but are yet to be delivered. Using a loop to count There are numerous reasons why you might want to implement a counter in a mobile app. You might want to count the number of items on a screen (that is, gold pieces in a game), the number of players using your app simultaneously, and so on. One of the easiest methods of counting is to use a loop. 
This recipe shows you how to easily implement a loop. How to do it... Use the following steps to instantiate a loop that counts: Create a new main stack. Rename the stack's default card to MainScreen. Drag a label field to the card and name it counterDisplay. Drag five checkboxes to the card and place them anywhere. Change the names to 1, 2, 3, 4, and 5. Drag a button to the card and name it Loop to Count. Add the following code to the Loop to Count button: on mouseUp local tButtonNumber put the number of buttons on this card into tButtonNumber if tButtonNumber > 0 then repeat with tLoop = 1 to tButtonNumber set the label of btn value(tLoop) to "Changed " & tLoop end repeat put "Number of button's changed: " & tButtonNumber into fld "counterDisplay" end if end mouseUp Test the code by running it in a mobile simulator or on an actual device. How it works... In this recipe, we created several buttons on a card. Next, we created code to count the number of buttons and a repeat control structure to sequence through the buttons and change their labels. Using a loop to iterate through a list In this recipe, we will create a loop to iterate through a list of text items. Our list will be a to-do or action list. Our loop will process each line and number them on screen. This type of loop can be useful when you need to process lists of unknown lengths. How to do it... Perform the following steps to create an iterative loop: Create a new main stack. Drag a scrolling list field to the stack's card and name it myList. Change the contents of the myList field to the following, paying special attention to the upper- and lowercase values of each line: Wash Truck Write Paper Clean Garage Eat Dinner Study for Exam Drag a button to the card and name it iterate. 
Add the following code to the iterate button: on mouseUp local tLines put the number of lines of fld "myList" into tLines repeat with tLoop = 1 to tLines put tLoop & " - " & line tLoop of fld "myList" into line tLoop of fld "myList" end repeat end mouseUp Test the code by clicking on the iterate button. How it works... We used the repeat control structure to iterate through a list field one line at a time. This was accomplished by first determining the number of lines in that list field, and then setting the repeat control structure to sequence through the lines. Summary In this article, we examined the LiveCode scripting required to implement and control count-up and countdown timers. We also learned how to use loops to count and iterate through a list. Resources for Article: Further resources on this subject: Introduction to Mobile Forensics [article] Working with Pentaho Mobile BI [article] Building Mobile Apps [article]
Configuring Your Operating System
Packt
18 Aug 2014
In this article by William Smith, author of Learning Xamarin Studio, we will configure our operating system. (For more resources related to this topic, see here.) Configuring your Mac To configure your Mac, perform the following steps: From the Apple menu, open System Preferences. Open the Personal group. Select the Security and Privacy item. Open the Firewall tab, and ensure the Firewall is turned off. Configuring your Windows machine To configure your Windows machine, download and install the Xamarin Unified Installer. This installer includes a tool called Xamarin Bonjour Service, which runs Apple's network discovery protocol. Xamarin Bonjour Service requires administrator rights, so you may want to just run the installer as an administrator. Configuring a Windows VM within Mac There is really no difference between using the Visual Studio plugin from a Windows machine or from a VM using software such as Parallels or VMware. However, if you are running Xamarin Studio on a Retina MacBook Pro, it is advisable to adjust the hardware video settings. Otherwise, some of the elements within Xamarin Studio will render poorly, making them difficult to use. The following screenshot contains the recommended video settings: To adjust the settings in Parallels, follow these steps: If your Windows VM is running, shut it down. With your VM shut down, go to Virtual Machine | Configure…. Choose the Hardware tab. Select the Video group. Under Resolution, choose Scaled. Final installation steps Now that the necessary tools are installed and the settings have been enabled, you still need to link to your Xamarin account in Visual Studio, as well as connect Visual Studio to your Mac build machine. To connect to your Xamarin account, follow these steps: In Visual Studio, go to Tools | Xamarin Account…. Click Login to your Xamarin Account and enter your credentials. Once your credentials are verified, you will receive a confirmation message. 
To connect to your Mac build machine, follow these steps: On your Mac, open Spotlight and type Xamarin build host. Choose Xamarin.iOS Build Host under the Applications results group. After the Build Host utility dialog opens, click the Pair button to continue. You will be provided with a PIN. Write this down. On your PC, open Visual Studio. Go to Tools | Options | Xamarin | iOS Settings. After the Build Host utility opens, click the Continue button. If your Mac and network are correctly configured, you will see your Mac in the list of available build machines. Choose your build machine and click the Continue button. You will be prompted to enter the PIN. Do so, then click the Pair button. Once the machines are paired, you can build, test, and deploy applications using the networked Mac. If for whatever reason you want to unpair these two machines, open the Xamarin.iOS Build Host on your Mac again, and click the Invalidate PIN button. When prompted, complete the process by clicking the Unpair button. Summary In this article, we learned how to configure our operating system. We also learned how to connect to a Mac build machine. Resources for Article: Further resources on this subject: Updating data in the background [Article] Gesture [Article] Making POIApp Location Aware [Article]
Sprites
Packt
24 Jun 2014
The goal of this article is to learn how to work with sprites and get to know their main properties. After reading this article, you will be able to add sprites to your games. In this article, we will cover the following topics: Setting up the initial project Sprites and their main properties Adding sprites to the scene Adding sprites as a child node of another sprite Manipulating sprites (moving, flipping, and so on) Performance considerations when working with many sprites Creating spritesheets and using the sprite batch node to optimize performance Using basic animation Creating the game project We could create many separate mini projects, each demonstrating a single Cocos2D aspect, but this way we won't learn how to make a complete game. Instead, we're going to create a game that will demonstrate every aspect of Cocos2D that we learn. The game we're going to make will be about hunting. Not that I'm a fan of hunting, but taking into account the material we need to cover and practically use in the game's code, a hunting game looks like the perfect candidate. The following is a screenshot from the game we're going to develop. It will have several levels demonstrating several different aspects of Cocos2D in action: Time for action – creating the Cocohunt Xcode project Let's start creating this game by creating a new Xcode project using the Cocos2D template, just as we did with HelloWorld project, using the following steps: Start Xcode and navigate to File | New | Project… to open the project creation dialog. Navigate to the iOS | cocos2d v3.x category on the left of the screen and select the cocos2d iOS template on the right. Click on the Next button. In the next dialog, fill out the form as follows: Product Name: Cocohunt Organization Name: Packt Publishing Company Identifier: com.packtpub Device Family: iPhone Click on the Next button and pick a folder where you'd like to save this project. Then, click on the Create button. 
Build and run the project to make sure that everything works. After running the project, you should see the already familiar Hello World screen, so we won't show it here. Make sure that you select the correct simulator version to use. This project will support iPhone, iPhone Retina (3.5-inch), iPhone Retina (4-inch), and iPhone Retina (4-inch, 64-bit) simulators, or an actual iPhone 3GS or newer device running iOS 5.0 or higher. What just happened? Now, we have a project that we'll be working on. The project creation part should be very similar to the process of creating the HelloWorld project, so let's keep the tempo and move on. Time for action – creating GameScene As we're going to work on this project for some time, let's keep everything clean and tidy by performing the following steps: First of all, let's remove the following files as we won't need them: HelloWorldScene.h HelloWorldScene.m IntroScene.h IntroScene.m We'll use groups to separate our classes. This will allow us to keep things organized. To create a group in Xcode, you should right-click on the root project folder in Xcode, Cocohunt in our case, and select the New Group menu option (command + alt + N). Refer to the following screenshot: Go ahead and create a new group and name it Scenes. After the group is created, let's place our first scene in it. We're going to create a new Objective-C class called GameScene and make it a subclass of CCScene. Right-click on the Scenes group that we've just created and select the New File option. Right-clicking on the group and selecting New File instead of using File | New | File will place our new file in the selected group after creation. Select the Cocoa Touch category on the left of the screen and the Objective-C class on the right. Then click on the Next button. In the next dialog, name the class GameScene and make it a subclass of the CCScene class. Then click on the Next button. 
Make sure that you're in the Cocohunt project folder to save the file and click on the Create button. You can create the Scenes folder while in the save dialog using the New Folder button and save the GameScene class there. This way, the hierarchy of groups in Xcode will match the physical folder hierarchy on the disk. This is the way I'm going to do this so that you can easily find any file in the book's supporting files' projects. However, the groups and files organization within groups will be identical, so you can always just open the Cocohunt.xcodeproj project and review the code in Xcode. This should create the GameScene.h and GameScene.m files in the Scenes group, as you can see in the following screenshot: Now, switch to the AppDelegate.m file and remove the following header imports at the top: #import "IntroScene.h" #import "HelloWorldScene.h" It is important to remove these #import directives or we will get errors as we removed the files they are referencing. Import the GameScene.h header as follows: #import "GameScene.h" Then find the startScene: method and replace it with the following: -(CCScene *)startScene { return [[GameScene alloc] init]; } Build and run the game. After the splash screen, you should see the already familiar black screen as follows:
As a final step, we've created our GameScene scene and displayed it on the screen at the start of the game. This is very similar to what we did in our HelloWorld project, so you shouldn't have any difficulties with it.
Packt
23 May 2014
11 min read

Working with the sharing plugin

(For more resources related to this topic, see here.)

Now that we've dealt with the device events, let's get to the real meat of the project: let's add the sharing plugin and see how to use it.

Getting ready
Before continuing, be sure to add the plugin to your project:

cordova plugin add https://github.com/leecrossley/cordova-plugin-social-message.git

Getting on with it
This particular plugin is one of many social network plugins. Each one has its benefits and each one has its problems, and the available plugins are changing rapidly. This particular plugin is very easy to use, and supports a reasonable number of social networks. On iOS, Facebook, Twitter, Mail, and Flickr are supported. On Android, any installed app that registers for the share intent is supported. The full documentation is available at https://github.com/leecrossley/cordova-plugin-social-message at the time of writing this. It is easy to follow if you need to know more than what we cover here. To show a sharing sheet (the appearance varies based on platform and operating system), all we have to do is this:

window.socialmessage.send ( message );

message is an object that contains any of the following properties:
  text: This is the main content of the message.
  subject: This is the subject of the message. This is only applicable while sending e-mails; most other social networks will ignore this value.
  url: This is a link to attach to the message.
  image: This is an absolute path to the image in order to attach it to the message. It must begin with file:/// and the path should be properly escaped (that is, spaces should become %20, and so on).
  activityTypes (only for iOS): This supports activities on various social networks. Valid values are: PostToFacebook, PostToTwitter, PostToWeibo, Message, Mail, Print, CopyToPasteboard, AssignToContact, and SaveToCameraRoll.
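The property rules above can be captured in a small validation helper. This is purely illustrative: `buildShareMessage` is a hypothetical function of our own, not part of the plugin, and it only enforces the two documented image rules, the `file:///` prefix and escaped spaces.

```javascript
// Hypothetical helper that assembles a message object for the sharing
// plugin. The validation below follows the property descriptions above;
// the function itself is our own sketch, not part of the plugin's API.
function buildShareMessage(options) {
  var message = { text: options.text };

  // subject is only honored by mail; keep it optional
  if (options.subject) {
    message.subject = options.subject;
  }

  if (options.url) {
    message.url = options.url;
  }

  if (options.image) {
    // the plugin expects an absolute file:/// path with escaped spaces
    if (options.image.indexOf("file:///") !== 0) {
      throw new Error("image must be an absolute file:/// path");
    }
    message.image = options.image.replace(/ /g, "%20");
  }

  return message;
}
```

The resulting object would then be handed to `window.socialmessage.send()`.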
In order to create a simple message to share, we can use the following code:

var message = {
    text: "something to send"
};
window.socialmessage.send ( message );

To add an image, we can go a step further, shown as follows:

var message = {
    text: "the caption",
    image: "file:///var/mobile/…/image.png"
};
window.socialmessage.send ( message );

Once this method is called, the sharing sheet will appear. On iOS 7, you'll see something like the following screenshot: On Android, you will see something like the following screenshot:

What did we do?
In this section, we installed the sharing plugin and we learned how to use it. In the next sections, we'll cover the modifications required to use this plugin.

Modifying the text note edit view
We've dispatched most of the typical sections in this project; there's not really any user interface to design, nor are there any changes to the actual note models. All we need to do is modify the HTML template a little to include a share button and add the code to use the plugin.

Getting on with it
First, let's alter the template in www/html/textNoteEditView.html.
I've highlighted the changes:

<html>
  <body>
    <div class="ui-navigation-bar">
      <div class="ui-title" contenteditable="true">%NOTE_NAME%</div>
      <div class="ui-bar-button-group ui-align-left">
        <div class="ui-bar-button ui-tint-color ui-back-button">%BACK%</div>
      </div>
      <div class="ui-bar-button-group ui-align-right">
        <div class="ui-bar-button ui-destructive-color">%DELETE_NOTE%</div>
      </div>
    </div>
    <div class="ui-scroll-container ui-avoid-navigation-bar ui-avoid-tool-bar">
      <textarea class="ui-text-box">%NOTE_CONTENTS%</textarea>
    </div>
    <div class="ui-tool-bar">
      <div class="ui-bar-button-group ui-align-left"></div>
      <div class="ui-bar-button-group ui-align-center"></div>
      <div class="ui-bar-button-group ui-align-right">
        <div class="ui-bar-button ui-background-tint-color ui-glyph ui-glyph-share share-button"></div>
      </div>
    </div>
  </body>
</html>

Now, let's make the modifications to the view in www/js/app/views/textNoteEditView.js. First, we need to add an internal property that references the share button:

self._shareButton = null;

Next, we need to add code to renderToElement so that we can add an event handler to the share button. We'll do a little bit of checking here to see if we've found the icon, because we don't support sharing of videos and sounds and we don't include that asset in those views. If we didn't have the null check, those views would fail to work. Consider the following code snippet:

self.renderToElement = function () {
  …
  self._shareButton = self.element.querySelector ( ".share-button" );
  if (self._shareButton !== null) {
    Hammer ( self._shareButton ).on("tap", self.shareNote);
  }
  …
}

Finally, we need to add the method that actually shares the note. Note that we save the note before we share it, since that's how the data in the DOM gets transmitted to the note model.
Consider the following code snippet:

self.shareNote = function () {
  self.saveNote();
  var message = {
    subject: self._note.name,
    text: self._note.textContents
  };
  window.socialmessage.send ( message );
}

What did we do?
First, we added a toolbar to the view that looks like the following screenshot; note the new sharing icon:

Then, we added the code that shares the note and attached that code to the Share button. Here's an example of us sending a tweet from a note on iOS:

What else do I need to know?
Don't forget that social networks often have size limits. For example, Twitter only supports 140 characters, and so if you send a note using Twitter, it needs to be a very short note. We could, on iOS, prevent Twitter from being permitted, but there's no way to prevent this on Android. Even then, there's no real reason to prevent Twitter from being an option. The user just needs to be familiar enough with the social network to know that they'll have to edit the content before posting it. Also, don't forget that the subject of a message only applies to mail; most other social networks will ignore it. If something is critical, be sure to include it in the text of the message, not only in the subject.

Modifying the image note edit view
The image note edit view presents an additional difficulty: we can't put the Share button in a toolbar. This is because doing so will cause positioning difficulties with TEXTAREA and the toolbar when the soft keyboard is visible. Instead, we'll put it in the lower-right corner of the image. This is done by using the same technique we used to outline the camera button.
Getting on with it
Let's edit the template in www/html/imageNoteEditView.html; again, I've highlighted the changes:

<html>
  <body>
    <div class="ui-navigation-bar">
      <div class="ui-title" contenteditable="true">%NOTE_NAME%</div>
      <div class="ui-bar-button-group ui-align-left">
        <div class="ui-bar-button ui-tint-color ui-back-button">%BACK%</div>
      </div>
      <div class="ui-bar-button-group ui-align-right">
        <div class="ui-bar-button ui-destructive-color">%DELETE_NOTE%</div>
      </div>
    </div>
    <div class="ui-scroll-container ui-avoid-navigation-bar">
      <div class="image-container">
        <div class="ui-glyph ui-background-tint-color ui-glyph-camera outline"></div>
        <div class="ui-glyph ui-background-tint-color ui-glyph-camera non-outline"></div>
        <div class="ui-glyph ui-background-tint-color ui-glyph-share outline"></div>
        <div class="ui-glyph ui-background-tint-color ui-glyph-share non-outline share-button"></div>
      </div>
      <textarea class="ui-text-box"
        onblur="this.classList.remove('editing');"
        onfocus="this.classList.add('editing');">%NOTE_CONTENTS%</textarea>
    </div>
  </body>
</html>

Because sharing an image requires a little additional code, we need to override shareNote (which we inherit from the prior task) in www/js/app/views/imageNoteEditView.js:

self.shareNote = function () {
  var fm = noteStorageSingleton.fileManager;
  var nativePath = fm.getNativeURL ( self._note.mediaContents );
  self.saveNote();
  var message = {
    subject: self._note.name,
    text: self._note.textContents
  };
  if (self._note.unitValue > 0) {
    message.image = nativePath;
  }
  window.socialmessage.send ( message );
}

Finally, we need to add the following styles to www/css/style.css:
div.ui-glyph.ui-background-tint-color.ui-glyph-share.outline,
div.ui-glyph.ui-background-tint-color.ui-glyph-share.non-outline {
  left: inherit;
  width: 50px;
  top: inherit;
  height: 50px;
}

div.ui-glyph.ui-background-tint-color.ui-glyph-share.outline {
  -webkit-mask-position: 15px 16px;
  mask-position: 15px 16px;
}

div.ui-glyph.ui-background-tint-color.ui-glyph-share.non-outline {
  -webkit-mask-position: 15px 15px;
  mask-position: 15px 15px;
}

What did we do?
Like the previous task, we first modified the template to add the share icon. Then, we added the shareNote code to the view (note that we don't have to add anything to find the button, because we inherit it from the Text Note Edit View). Finally, we modify the style sheet to reposition the Share button appropriately so that it looks like the following screenshot:

What else do I need to know?
The image needs to be a valid image, or the plugin may crash. This is why we check the value of unitValue in shareNote, to ensure that the image is large enough to attach to the message. If not, we only share the text.

Game Over... Wrapping it up
And that's it! You've learned how to respond to device events, and you've also added sharing to text and image notes by using a third-party plugin.

Can you take the HEAT? The Hotshot Challenge
There are several ways to improve the project. Why don't you try a few?
  Implement the ability to save the note when the app receives a pause event, and then restore the note when the app is resumed.
  Remember which note is visible when the app is paused, and restore it when the app is resumed. (Hint: localStorage may come in handy.)
  Add video or audio sharing. You'll probably have to alter the sharing plugin or find another (or an additional) plugin. You'll probably also need to upload the data to an external server so that it can be linked via the social network. For example, it's often customary to link to a video on Twitter by using a link shortener.
The File Transfer plugin might come in handy for this challenge (https://github.com/apache/cordova-plugin-file-transfer/blob/dev/doc/index.md).

Summary
This article introduced you to a third-party plugin that provides access to e-mail and various social networks.

Resources for Article:
Further resources on this subject:
  Geolocation – using PhoneGap features to improve an app's functionality, write once use everywhere [Article]
  Configuring the ChildBrowser plugin [Article]
  Using Location Data with PhoneGap [Article]
Packt
18 Feb 2014
13 min read

Mobile application development with IBM Worklight

(For more resources related to this topic, see here.)

The mobile industry is evolving rapidly, with an increasing number of mobile devices such as smartphones and tablets. People are accessing more services from mobile devices than ever before. Mobile solutions are directly impacting businesses, organizations, and their growing number of customers and partners. Even employees now expect to access services on a mobile device. Several approaches currently exist for mobile application development, which include:

Web Development: Uses open web (HTML5, JavaScript) client programming modules.
Hybrid Development: The app source code consists of web code executed within a native container that is provided by Worklight and consists of native libraries.
Hybrid Mixed: The developer augments the web code with native language to create unique features and access native APIs that are not yet available via JavaScript, such as AR, NFC, and others.
Native Development: In this approach, the application is developed using native languages, or transcoded into a native language via a MAP tool, to obtain a native appearance, full access to device capabilities, and the best performance.

Achieving a similar application on different platforms requires different levels of expertise, which affects the cost, time, and complexity. The preceding list outlines the major aspects of the development approaches. Reviewing this list can help you choose which development approach is correct for your particular mobile application.

The IBM Worklight solution
In 2012, IBM acquired IBM Worklight, its very first set of mobile development and integration tools, which allows organizations to transform their business and deliver mobile solutions to their customers. IBM Worklight provides a truly open approach for developers to build an application and run it across multiple mobile platforms without having to port it for each environment, that is, Apple iOS, Google Android, BlackBerry, and Microsoft Windows Phone.
IBM Worklight also makes the developer's life easier by using standard technologies such as HTML5 and JavaScript with extensions for popular libraries such as jQuery Mobile, Dojo Toolkit, and Sencha Touch. IBM Worklight is a mobile application platform containing all of the tools needed to develop a mobile application. Taken together, the IBM Worklight components form the baseline for hybrid mobile application development, and each component provides its own bundle of functionality and support. Here is how they fit into the mobile application development lifecycle:

Worklight Studio: A robust Eclipse-based development environment that allows developers to quickly construct mobile applications for multiple operating platforms.
Worklight Server: A runtime server that enables secure data transmission through centralized back-end connectivity with adapters, offline encrypted storage, unified push notifications, and more.
Worklight Device Runtime: The device runtime provides a rich set of APIs that are cross-platform in nature and offer easy access to the services provided by the IBM Worklight Server.
Worklight Console: A web-based interface for real-time analytics, push notification management, and mobile version management, dedicated to the ongoing administration of the Worklight Server and its deployed apps, adapters, and push notification services.
Worklight Application Center: A cross-platform mobile application store that caters to the specific needs of a mobile application development team.

There is a big advantage to using Worklight to create user interfaces, and that is reflected in development on the client side as well as the server side.
In general, developers face problems when building and supporting hybrid apps with other products: defining use cases, debugging, and preview testing for enterprise applications are typically not straightforward. Using Worklight, a developer can keep the architecture simple while still building an enhanced mobile application.

Creating a simple IBM Worklight application
Let's start by creating a simple HelloWorld Worklight project. The steps described for creating an app are similar for IBM Worklight Studio and the Eclipse IDE. The following is what you'll need to do:

Start IBM Worklight Studio. Navigate to File | New and select Worklight Project, as shown in the following screenshot:
Create New Worklight Project

In the dialog that is displayed in the following screenshot, select Hybrid Application as the type of application defined in the project templates, enter HelloWorld as the name of the first mobile project, and click on Next:
Asked for the Worklight project name and project type

You will see another dialog for Hybrid Application. In Application name, provide HelloWorld as the name of the application. Leave the checkboxes unchecked for now; these are used to extend supported JavaScript libraries into the app. Click on Finish:
Define the Worklight app name in the window

After clicking on Finish, you will see that your project has been created in the Design perspective in Project Explorer, as shown in the following screenshot:
Project Explorer after completing the wizard

Adding an environment
We have covered IBM Worklight Studio features and what they offer developers. It's time to see how this tool will make your life even easier. The cross-platform development feature is a great help: it provides you with the means to set up a cross-platform development environment without any hurdles and with just a few clicks within its efficient interface.
To add an environment for Android, iPhone, or any other platform, right-click on the Apps folder next to the adapters and navigate to New | Worklight Environment. You will see that a dialog box appears with checkboxes for the currently supported environments for which you can create an application. The following screenshot illustrates this feature; we're adding an Android environment for this application:
Worklight Environment selection window

IBM Worklight Client Side API
In this article, you will learn how the IBM Worklight client-side API can improve mobile application development. You will also see how the IBM Worklight server-side API improves client/server integration and communication between mobile applications and back-end systems. The IBM Worklight client-side API allows mobile applications to access, at runtime, most of the features that are available in IBM Worklight, through libraries that are bundled into the mobile application. These libraries integrate with the Worklight Server through predefined communication interfaces. They also offer unified access to native device features, which streamlines application development. The IBM Worklight client-side API contains hybrid, native, mixed hybrid, and web-based APIs, and extends each of them to support every mobile development framework. The client-side API modules also improve security, offering both custom and built-in authentication mechanisms. They provide a semantic connection between web technologies such as HTML5, CSS3, and JavaScript and native functions that are available on different mobile platforms.

Exploring Dojo Mobile
As regards the Dojo UI framework, you'll learn about Dojo Mobile in detail.
Dojo Mobile, an extension of the Dojo Toolkit, provides a series of widgets, or components, optimized for use on a mobile device, such as a smartphone or tablet. The Dojo framework is an extension of JavaScript and provides a built-in library which contains custom components such as text fields, validation menus, and image galleries. The components are modelled on their native counterparts and will look and feel native to those familiar with smartphone applications. The components are completely customizable using themes that let you make various customizations, such as pushing different sets of styles to iOS and Android users.

Authentication and Security Modules
Worklight has a built-in authentication framework that allows a developer to configure and use it with very little effort. The Worklight project has an authentication configuration file, which is used to declare and enforce security on mobile applications, adapters, data, and web resources, and which consists of the security entities below. We will talk about the various predefined authentication realms and security tests that Worklight provides out of the box. To appreciate the importance of mobile security, consider that nowadays we keep our personal and business data on mobile devices. Both the data and the applications are important to us, and both should be protected against unauthorized access, particularly if they contain sensitive information or transmit it over the network. There are a number of ways in which a device can be compromised and leak data to malicious users.

Worklight security principles, concepts, and terminology
IBM Worklight provides various security roles to protect applications, adapter procedures, and static resources from unauthorized access. Each role can be defined by a security test that comprises one or more authentication realms. An authentication realm defines a process that will be used to authenticate the users.
The authentication realm has the following parts:
  Challenge handler: This is a component on the device side
  Authenticator and login module: This is a component on the server

One authentication realm can be used to protect multiple resources. We will look into each component in detail.

Device request flow: The following screenshot shows a device that makes a request to access a protected resource, for example, an adapter function, on the server. In response to the request, the server sends back an authentication challenge, asking the device to prove its authenticity:
Request/response flow between a Worklight application and an enterprise server

Push notification
Mobile OS vendors such as Apple, Google, Microsoft, and others provide a free-of-cost feature through which a message can be delivered to any device running the respective OS. The OS vendors send a message, commonly known as a push message, to a device for a particular app. An app does not have to be running in order to receive a push message. A push message can contain the following:
  Alerts: These appear in the form of text messages
  Badges: These are small, circular marks on the app icon
  Sounds: These are audio alerts

Messages will appear in the notification center (for iOS) and the notification bar (for Android). IBM Worklight provides a unified push notification architecture that simplifies sending push messages to multiple devices running on different platforms. It provides a central management console that manages the mobile vendor services, for example, APNS and GCM, in the background.
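As a rough sketch of the three payload parts listed above, the following plain JavaScript builds a message object carrying an alert, a badge, and a sound. The shape and the `buildPushPayload` name are our own illustration for discussion; this is not the actual wire format used by Worklight, APNS, or GCM.

```javascript
// Illustrative payload builder: mirrors the three push message parts
// described above. A generic sketch, not the real vendor format.
function buildPushPayload(alertText, badgeCount, soundFile) {
  if (typeof alertText !== "string" || alertText.length === 0) {
    throw new Error("a push message needs alert text");
  }
  return {
    alert: alertText,             // text shown in the notification center/bar
    badge: badgeCount || 0,       // small, circular mark on the app icon
    sound: soundFile || "default" // audio alert played on arrival
  };
}
```

A server-side component would hand an object like `buildPushPayload("New message", 1, "ding.mp3")` to the push service, and the vendor service then delivers it even when the app is not running.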
Worklight provides the following push notification benefits:
  Easy to use: Users can easily subscribe to and unsubscribe from a push service
  Quick message delivery: The push message gets delivered to a user's device even if the app is currently not running on the device
  Message feedback: It is possible to send feedback whenever a user receives and reads a push message

Cordova Plugins
Apache Cordova is an open source, cross-platform mobile development architecture that allows the creation of multiplatform-deployable mobile apps. These apps can access native device features through an API built on web technologies such as HTML5, JavaScript, and CSS3. Apache Cordova plugins are integrated into IBM Worklight Android and iOS projects. In this article, we will describe how Apache Cordova merges the JavaScript interface, acting as a wrapper on the web side in a native container, with the device's native interface on the mobile platform. The most critical aspect of Cordova plugins is dealing with native functionality such as the camera, barcode scanning, the contacts list, and many other native features, across multiple platforms. JavaScript on its own does not provide such extensibility with respect to native devices. In order to access a native feature, we provide a library corresponding to that feature so that JavaScript can communicate through it. When the need arises for a web page to execute native feature functionality, the following approaches are available:
  Implement the scenario in a platform-specific manner, for example, for Android, iOS, or any other device
  Communicate encrypted requests and responses to and from the web pages and native pages

By selecting the first option from the preceding list, we would find ourselves implementing and developing platform-dependent mobile applications.
Since we need to implement mobile applications that run across platforms, and because the first approach leads to cost-ineffective solutions, it is not a wise choice for enterprise mobile development. It is also poorly extensible for future enhancements and needs.

Encrypted Offline Cache
Encrypted Offline Cache (EOC) is the mechanism used for storing frequently used and sensitive data in the client application. It permits a flexible on-device data storage procedure for Android, iOS, BlackBerry, and Windows. It gives the user a convenient way to store manipulated data, or responses fetched through an adapter, while offline, and to synchronize that data with the server later, pushing modifications that were made entirely offline or without Internet connectivity. When creating a mobile application dedicated to multiple platforms such as iOS and Android, consider using JSONStore rather than EOC; it is much more practical to implement and is considered an IBM best practice. JSONStore eases the cryptographic procedures for encrypting data and implementing security: it uses PBKDF2, a key derivation function, to derive the key that protects the encrypted data from a password provided by the user. EOC relies on the HTML5 cache, which is not guaranteed to be persistent and is not a proper solution for future versions of iOS.

Storage: JSONStore
JSONStore keeps a local replica of your data. IBM Worklight delivers a JavaScript API for working with a JSONStore through the WL.JSONStore class. You can build an application that keeps manipulated data in local storage, as a working copy, and pushes the local updates to a back-end service.
Nearly every method delivered in the API works on this native copy of the data, which is kept on-device in the client application. By means of the JSONStore API, you can extend the functionality of the existing adapter connectivity model to store data locally and push modifications from the client to a server. You can query the local data store and update or delete data within it. The local data store can also be protected by using password-based encryption.

Summary
In this article, we have discussed modern mobile development techniques using IBM Worklight, which allows easy, integrated, and secure enterprise mobile applications to be built with modest time and development effort. Most of the key functional areas have been covered, including the IBM Worklight components, cross-platform environment handling, authentication, push notifications, the Dojo Mobile framework, and the Encrypted Offline Cache. IBM Worklight offers a diverse set of mechanisms for enhancing mobile application functionality in an optimal and efficient way. With the techniques and features covered here, enterprise mobile app development should no longer be a worry.

Resources for Article:
Further resources on this subject:
  Creating and configuring a basic mobile application [Article]
  Viewing on Mobile Devices [Article]
  Creating mobile friendly themes [Article]
Packt
13 Feb 2014
4 min read

Adding Graphics to the Map

(For more resources related to this topic, see here.)

Graphics are points, lines, or polygons that are drawn on top of your map in a layer that is independent of any other data layer associated with a map service. Most people associate a graphic object with the symbol that is displayed on a map to represent the graphic. However, each graphic in ArcGIS Server can be composed of up to four objects, including the geometry of the graphic, the symbology associated with the graphic, attributes that describe the graphic, and an info template that defines the format of the info window that appears when a graphic is clicked on. Although a graphic can be composed of up to four objects, it is not always necessary for this to happen. The objects you choose to associate with your graphic will be dependent on the needs of the application that you are building. For example, in an application that displays GPS coordinates on a map, you may not need to associate attributes or display an info window for the graphic. However, in most cases, you will be defining the geometry and symbology for a graphic.

Graphics are temporary objects stored in a separate layer on the map. They are displayed while an application is in use and are removed when the session is complete. The separate layer, called the graphics layer, stores all the graphics associated with your map. Just as with the other types of layers, GraphicsLayer also inherits from the Layer class. Therefore, all the properties, methods, and events found in the Layer class will also be present in GraphicsLayer. Graphics are displayed on top of any other layers that are present in your application. An example of point and polygon graphics is provided in the following screenshot. These graphics can be created by users or drawn by the application in response to the tasks that have been submitted.

For example, a business analysis application might provide a tool that allows the user to draw a freehand polygon to represent a potential trade area. The polygon graphic would be displayed on top of the map, and could then be used as an input to a geoprocessing task that pulls demographic information pertaining to the potential trade area. Many ArcGIS Server tasks return their results as graphics. The QueryTask object can perform both attribute and spatial queries. The results of a query are then returned to the application in the form of a FeatureSet object, which is simply an array of features. You can then access each of these features as graphics and plot them on the map using a looping structure. Perhaps you'd like to find and display all land parcels that intersect the 100 year flood plain. A QueryTask object could perform the spatial query and then return the results to your application, where they would then be displayed as polygon graphics on the map.

In this article, we will cover the following topics:
  The four parts of a graphic
  Creating geometry for graphics
  Symbolizing graphics
  Assigning attributes to graphics
  Displaying graphic attributes in an info window
  Creating graphics
  Adding graphics to the graphics layer

The four parts of a graphic
A graphic is composed of four items: Geometry, Symbol, Attributes, and InfoTemplate, as shown in the following diagram:
For example, a business analysis application might provide a tool that allows the user to draw a freehand polygon to represent a potential trade area. The polygon graphic would be displayed on top of the map, and could then be used as an input to a geoprocessing task that pulls demographic information pertaining to the potential trade area. Many ArcGIS Server tasks return their results as graphics. The QueryTask object can perform both attribute and spatial queries. The results of a query are then returned to the application in the form of a FeatureSet object, which is simply an array of features. You can then access each of these features as graphics and plot them on the map using a looping structure. Perhaps you'd like to find and display all land parcels that intersect the 100 year flood plain. A QueryTask object could perform the spatial query and then return the results to your application, where they would then be displayed as polygon graphics on the map. In this article, we will cover the following topics: The four parts of a graphic Creating geometry for graphics Symbolizing graphics Assigning attributes to graphics Displaying graphic attributes in an info window Creating graphics Adding graphics to the graphics layer The four parts of a graphic A graphic is composed of four items: Geometry, Symbol, Attributes, and InfoTemplate, as shown in the following diagram: A graphic has a geometric representation that describes where it is located. The geometry, along with a symbol, defines how the graphic is displayed. A graphic can also have attributes that provide descriptive information about the graphic. Attributes are defined as a set of name-value pairs. For example, a graphic depicting a wildfire location could have attributes that describe the name of the fire along with the number of acres burned. The info template defines what attributes should be displayed in the info window that appears when the graphic appears, along with how they should be displayed. 
After their creation, the graphic objects must be stored inside a GraphicsLayer object, before they can be displayed on the map. This GraphicsLayer object functions as a container for all the graphics that will be displayed. All the elements of a graphic are optional. However, the geometry and symbology of a graphic are almost always assigned. Without these two items, there would be nothing to display on the map, and there isn't much point in having a graphic unless you're going to display it. The following figure shows the typical process of creating a graphic and adding it to the graphics layer. In this case, we are applying the geometry of the graphic as well as a symbol to depict the graphic. However, we haven't specifically assigned attributes or an info template to this graphic.
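The composition described above can be sketched in plain JavaScript. This is an illustration only, not the ArcGIS API for JavaScript: the Graphic and GraphicsLayer names here are stand-ins for the real classes, and the geometry and symbol objects are deliberately simplified.

```javascript
// Illustrative sketch only -- NOT the ArcGIS API. It models the four
// optional parts of a graphic and a graphics layer as a container.
function Graphic(geometry, symbol, attributes, infoTemplate) {
  this.geometry = geometry || null;       // where the graphic is located
  this.symbol = symbol || null;           // how the graphic is drawn
  this.attributes = attributes || null;   // descriptive name-value pairs
  this.infoTemplate = infoTemplate || null; // info window format
}

function GraphicsLayer() {
  this.graphics = []; // container for all graphics on the map
}
GraphicsLayer.prototype.add = function (graphic) {
  this.graphics.push(graphic);
  return graphic;
};

// Typical flow: geometry and symbol assigned, attributes and
// info template left out, then the graphic is added to the layer.
var layer = new GraphicsLayer();
var fire = new Graphic(
  { type: "point", x: -118.15, y: 33.8 },
  { type: "simple-marker", color: "red" }
);
layer.add(fire);
```

In the real API you would construct the API's own Graphic class with actual geometry and symbol objects and add it to the map's graphics layer; the point of the sketch is only that the four parts are optional and the layer is just a container.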
Packt
10 Feb 2014
8 min read

XamChat – a Cross-platform App

(For more resources related to this topic, see here.) Describing our sample application concept The concept is simple: a chat application that uses a standard Internet connection as an alternative to sending text messages. There are several popular applications like this in the Apple App Store, probably due to the cost of text messaging and support for devices such as the iPod Touch or iPad. This should be a neat real-world example that could be useful for users, and will cover specific topics in developing applications for iOS and Android. Before starting with the development, let's list the set of screens that we'll need: Login / sign up: This screen will include a standard login and sign-up process for the user List of conversations: This screen will include a button to start a new conversation List of friends: This screen will provide a way to add new friends when we start a new conversation Conversation: This screen will have a list of messages between you and another user, and an option to reply A quick wireframe will help us get a better understanding of the app's layout. The following figure shows the set of screens to be included in your app: Developing our model layer Since we have a good idea of what the application is, the next step is to develop the business objects or model layer of this application. Let's start out by defining a few classes that would contain the data to be used throughout the app. It is recommended, for the sake of organization, to add these to a Models folder in your project. Let's begin with a class representing a user.
The class can be created as follows: public class User {   public int Id { get; set; }   public string Username { get; set; }   public string Password { get; set; } } Pretty straightforward so far; let's move on to create classes representing a conversation and a message as follows: public class Conversation {   public int Id { get; set; }   public int UserId { get; set; }   public string Username { get; set; } } public class Message {   public int Id { get; set; }   public int ConversationId { get; set; }   public int UserId { get; set; }   public string Username { get; set; }   public string Text { get; set; } } Notice that we are using integers as identifiers for the various objects. UserId links each object to the user it is associated with. Now let's go ahead and set up our solution by performing the following steps: Start by creating a new solution and a new C# Library project. Name the project as XamChat.Core and the solution as XamChat. Next, let's set the library to a Mono / .NET 4.5 project. This setting is found in the project option dialog under Build | General | Target Framework. You could also choose to use Portable Library for this project. Writing a mock web service Many times when developing a mobile application, you may need to begin the development of your application before the real backend web service is available. To prevent the development from halting entirely, a good approach would be to develop a mock version of the service. First, let's break down the operations our app will perform against a web server. The operations are as follows: Log in with a username and password. Register a new account. Get the user's list of friends. Add friends by their usernames. Get a list of the existing conversations for the user. Get a list of messages in a conversation. Send a message. Now let's define an interface that offers a method for each scenario.
The interface is as follows: public interface IWebService {   Task<User> Login(string username, string password);   Task<User> Register(User user);   Task<User[]> GetFriends(int userId);   Task<User> AddFriend(int userId, string username);   Task<Conversation[]> GetConversations(int userId);   Task<Message[]> GetMessages(int conversationId);   Task<Message> SendMessage(Message message); } As you see, we're using asynchronous communication with the TPL (Task Parallel Library) technology. Since communicating with a web service can be a lengthy process, it is always a good idea to use the Task<T> class for these operations. Otherwise, you could inadvertently run a lengthy task on the user interface thread, which would prevent user inputs during the operation. Task is definitely needed for web requests, since users could easily be using a cellular Internet connection on iOS and Android, and it will give us the ability to use the async and await keywords down the road. Now let's implement a fake service that implements this interface. Place classes such as FakeWebService in the Fakes folder of the project. Let's start with the class declaration and the first method of the interface: public class FakeWebService : IWebService {   public int SleepDuration { get; set; }   public FakeWebService()   {     SleepDuration = 1;   }   private Task Sleep()   {     return Task.Delay(SleepDuration);   }   public async Task<User> Login(     string username, string password)   {     await Sleep();     return new User { Id = 1, Username = username };   } } We started off with a SleepDuration property to store a number in milliseconds. This is used to simulate an interaction with a web server, which can take some time. It is also useful for changing the SleepDuration value in different situations. For example, you might want to set this to a small number when writing unit tests so that the tests execute quickly.
Next, we implemented a simple Sleep method that returns a task that introduces a delay of the configured number of milliseconds. This method will be used throughout the fake service to cause a delay on each operation. Finally, the Login method merely used an await call on the Sleep method and returned a new User object with the appropriate Username. For now, any username or password combination will work; however, you may wish to write some code here to check specific credentials. Now, let's implement a few more methods to continue our FakeWebService class as follows: public async Task<User> Register(User user) {   await Sleep();   return user; } public async Task<User[]> GetFriends(int userId) {   await Sleep();   return new[]   {     new User { Id = 2, Username = "bobama" },     new User { Id = 3, Username = "bobloblaw" },     new User { Id = 4, Username = "gmichael" },   }; } public async Task<User> AddFriend(   int userId, string username) {   await Sleep();   return new User { Id = 5, Username = username }; } For each of these methods, we follow exactly the same pattern as the Login method. Each method will delay and return some sample data. Feel free to mix in your own values. Now, let's implement the GetConversations method required by the interface as follows: public async Task<Conversation[]> GetConversations(int userId) {   await Sleep();   return new[]   {     new Conversation { Id = 1, UserId = 2 },     new Conversation { Id = 2, UserId = 3 },     new Conversation { Id = 3, UserId = 4 },   }; } Basically, we just create a new array of the Conversation objects with arbitrary IDs. We also make sure to match up the UserId values with the IDs we've used on the User objects so far.
Next, let's implement GetMessages to retrieve a list of messages as follows: public async Task<Message[]> GetMessages(int conversationId) {   await Sleep();   return new[]   {     new Message     {       Id = 1,       ConversationId = conversationId,       UserId = 2,       Text = "Hey",     },     new Message     {       Id = 2,       ConversationId = conversationId,       UserId = 1,       Text = "What's Up?",     },     new Message     {       Id = 3,       ConversationId = conversationId,       UserId = 2,       Text = "Have you seen that new movie?",     },     new Message     {       Id = 4,       ConversationId = conversationId,       UserId = 1,       Text = "It's great!",     },   }; } Once again, we are adding some arbitrary data here, and mainly making sure that UserId and ConversationId match our existing data so far. And finally, we will write one more method to send a message as follows: public async Task<Message> SendMessage(Message message) {   await Sleep();   return message; } Most of these methods are very straightforward. Note that the service doesn't have to work perfectly; it should merely complete each operation successfully with a delay. Each method should also return test data of some kind to be displayed in the UI. This will give us the ability to implement our iOS and Android applications while filling in the web service later. Next, we need to implement a simple interface for persisting application settings. Let's define an interface named ISettings as follows: public interface ISettings {   User User { get; set; }   void Save(); } Note that you might want to set up the Save method to be asynchronous and return Task if you plan on storing settings in the cloud. We don't really need this with our application since we will only be saving our settings locally. Later on, we'll implement this interface on each platform using Android and iOS APIs. For now, let's just implement a fake version that will be used later when we write unit tests. 
The fake implementation is created by the following lines of code: public class FakeSettings : ISettings {   public User User { get; set; }   public void Save() { } } Note that the fake version doesn't actually need to do anything; we just need to provide a class that will implement the interface and not throw any unexpected errors. This completes the Model layer of the application. Here is a final class diagram of what we have implemented so far:
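The delay-then-return pattern used throughout FakeWebService is not specific to C#. As a hedged sketch, here is the same idea in JavaScript using Promises; the FakeWebService, sleep, and login names mirror the C# code but are our own illustration, not part of the XamChat project:

```javascript
// Sketch of the mock-service pattern: every operation waits a
// configurable delay, then resolves canned test data.
function FakeWebService(sleepDuration) {
  this.sleepDuration = sleepDuration || 1; // milliseconds
}

FakeWebService.prototype.sleep = function () {
  var ms = this.sleepDuration;
  return new Promise(function (resolve) { setTimeout(resolve, ms); });
};

FakeWebService.prototype.login = function (username, password) {
  // Any credentials succeed, just like the C# version.
  return this.sleep().then(function () {
    return { id: 1, username: username };
  });
};
```

As in the C# version, keeping the delay small makes unit tests fast, while a larger value lets you exercise loading indicators in the UI.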

Packt
20 Jan 2014
7 min read

Intents for Mobile Components

(For more resources related to this topic, see here.) Common mobile components Due to the open source nature of the Android operating system, many different companies such as HTC and Samsung ported the Android OS to their devices with many different functionalities and styles. Each Android phone is unique in some way or the other and possesses many unique features and components different from other brands and phones. But there are some components that are found to be common in all the Android phones. We are using two key terms here: components and features. A component is the hardware part of an Android phone, such as the camera, Bluetooth, and so on. A feature is the software part of an Android phone, such as the SMS feature, e-mail feature, and so on. This article is all about hardware components, their access, and their use through intents. These common components can generally be used and implemented independently of any mobile phone or model. And there is no doubt that intents are the best asynchronous messages to activate these Android components. These intents are used to trigger the Android OS when some event occurs and some action should be taken. Android, on the basis of the data received, determines the receiver for the intent and triggers it. Here are a few common components found in each Android phone: The Wi-Fi component Each Android phone comes with complete support for the Wi-Fi connectivity component. The new Android phones having Android Version 4.1 and above support the Wi-Fi Direct feature as well. This allows the user to connect to nearby devices without the need to connect with a hotspot or network access point. The Bluetooth component An Android phone includes Bluetooth network support that allows the users of Android phones to exchange data wirelessly over short ranges with other devices. The Android application framework provides developers with access to Bluetooth functionality through the Android Bluetooth APIs.
The Cellular component No mobile phone is complete without a cellular component. Each Android phone has a cellular component for mobile communication through SMS, calls, and so on. The Android system provides highly flexible APIs to utilize telephony and cellular components to create very interesting and innovative apps. Global Positioning System (GPS) and geo-location GPS is a very useful but battery-consuming component in any Android phone. It is used for developing location-based apps for Android users. Google Maps is the best feature related to GPS and geo-location. Developers have created many innovative apps and games utilizing Google Maps and GPS components in Android. The Geomagnetic field component The geomagnetic field component is found in most Android phones. This component is used to estimate the magnetic field of an Android phone at a given point on the Earth and, in particular, to compute magnetic declination from the North. The geomagnetic field component uses the World Magnetic Model produced by the United States National Geospatial-Intelligence Agency. The current model that is being used for the geomagnetic field is valid until 2015. Newer Android phones will have the newer version of the geomagnetic field model. Sensor components Most Android devices have built-in sensors that measure motion, orientation, environment conditions, and so on. These sensors sometimes act as the brains of the app. For example, they take actions on the basis of the mobile's surroundings (for example, the weather) and allow users to have an automatic interaction with the app. These sensors provide raw data with high precision and accuracy for measuring the respective sensor values. For example, a gravity sensor can be used to track gestures and motions, such as tilt, shake, and so on, in any app or game.
Similarly, a temperature sensor can be used to detect the mobile's temperature, or a geomagnetic sensor (as introduced in the previous section) can be used in any travel application to track the compass bearing. Broadly, there are three categories of sensors in Android: motion, position, and environmental sensors. The following subsections discuss these types of sensors briefly. Motion sensors Motion sensors let the Android user monitor the motion of the device. There are both hardware-based sensors, such as the accelerometer and gyroscope, and software-based sensors, such as the gravity, linear acceleration, and rotation vector sensors. Motion sensors are used to detect a device's motion including tilt effect, shake effect, rotation, swing, and so on. If used properly, these effects can make any app or game very interesting and flexible, and can provide a great user experience. Position sensors The two position sensors, the geomagnetic sensor and the orientation sensor, are used to determine the position of the mobile device. Another sensor, the proximity sensor, lets the user determine how close the face of a device is to an object. For example, when we get any call on an Android phone, placing the phone on the ear shuts off the screen, and when we hold the phone back in our hands, the screen display appears automatically. This simple application uses the proximity sensor to detect the ear (object) with the face of the device (the screen). Environmental sensors These sensors are not used much in Android apps, but are used widely by the Android system to detect a lot of little things. For example, the temperature sensor is used to detect the temperature of the phone, and can be used to save battery and extend the device's life. At the time of writing this article, the Samsung Galaxy S4 Android phone has been launched.
The phone has shown a great use of environmental gestures by allowing users to perform actions such as making calls with no-touch gestures, for example, moving your hand or face in front of the phone. Components and intents Android phones contain a large number of components and features. This is beneficial to both Android developers and users. Android developers can use these mobile components and features to customize the user experience. For most components, developers get two options; either they extend the components and customize those according to their application requirements, or they use the built-in interfaces provided by the Android system. We won't cover the first option of extending components, as it is beyond the scope of this article. However, we will study the other option of using built-in interfaces for mobile components. Generally, to use any mobile component from our Android app, the developers send intents to the Android system, and Android then calls the respective component accordingly. Intents are asynchronous messages sent to the Android OS to perform any functionality. Most of the mobile components can be triggered by intents just by using a few lines of code and can be utilized fully by developers in their apps. In the following sections of this article, we will see a few components and how they are used and triggered by intents with practical examples. We have divided the components into three groups: communication components, media components, and motion components. Now, let's discuss these components in the following sections. Communication components Any mobile phone's core purpose is communication. Android phones provide a lot of features other than communication features. Android phones contain SMS/MMS, Wi-Fi, and Bluetooth for communication purposes. This article focuses on the hardware components; so, we will discuss only Wi-Fi and Bluetooth.
The Android system provides built-in APIs to manage and use Bluetooth devices, settings, discoverability, and much more. It offers full network APIs not only for Bluetooth but also for Wi-Fi, hotspots, configuring settings, Internet connectivity, and much more. More importantly, these APIs and components can be used very easily by writing a few lines of code through intents. We will start by discussing Bluetooth, and how we can use Bluetooth through intents in the next section.

Packt
20 Jan 2014
8 min read

What is NGUI?

(For more resources related to this topic, see here.) The Next-Gen User Interface kit is a plugin for Unity 3D. It has the great advantage of being easy to use, very powerful, and optimized compared to Unity's built-in GUI system, UnityGUI. Since it is written in C#, it is easily understandable and you may tweak it or add your own features, if necessary. The NGUI Standard License costs $95. With this, you will have useful example scenes included. I recommend this license to start comfortably—a free evaluation version is available, but it is limited, outdated, and not recommended. The NGUI Professional License, priced at $200, gives you access to NGUI's GIT repository to access the latest beta features and releases in advance. A $2000 Site License is available for an unlimited number of developers within the same studio. Let's have an overview of the main features of this plugin and see how they work. UnityGUI versus NGUI With Unity's GUI, you must create the entire UI in code by adding lines that display labels, textures, or any other UI element on the screen. These lines have to be written inside a special function, OnGUI(), that is called for every frame. This is no longer necessary; with NGUI, UI elements are simple GameObjects! You can create widgets—this is what NGUI calls labels, sprites, input fields, and so on—move them, rotate them, and change their dimensions using handles or the Inspector. Copying, pasting, creating prefabs, and every other useful feature of Unity's workflow is also available. These widgets are viewed by a camera and rendered on a layer that you can specify. Most of the parameters are accessible through Unity's Inspector, and you can see what your UI looks like directly in the Game window, without having to hit the Play button. Atlases Sprites and fonts are all contained in a large texture called atlas. With only a few clicks, you can easily create and edit your atlases. 
If you don't have any images to create your own UI assets, simple default atlases come with the plugin. That system means that for a complex UI window composed of different textures and fonts, the same material and texture will be used when rendering. This results in only one draw call for the entire window. This, along with other optimizations, makes NGUI the perfect tool to work on mobile platforms. Events NGUI also comes with an easy-to-use event framework that is written in C#. The plugin comes with a large number of additional components that you can attach to GameObjects. These components can perform advanced tasks depending on which events are triggered: hover, click, input, and so on. Therefore, you may enhance your UI experience while keeping it simple to configure. Code less, get more! Localization NGUI comes with its own localization system, enabling you to easily set up and change your UI's language with the push of a button. All your strings are located in the .txt files: one file per language. Shaders Lighting, normal mapping, and refraction shaders are supported in NGUI, which can give you beautiful results. Clipping is also a shader-controlled feature with NGUI, used for showing or hiding specific areas of your UI. We've now covered what NGUI's main features are, and how it can be useful to us as a plugin, and now it's time to import it inside Unity. Importing NGUI After buying the product from the Asset Store or getting the evaluation version, you have to download it. Perform the following steps to do so: Create a new Unity project. Navigate to Window | Asset Store. Select your downloads library. Click on the Download button next to NGUI: Next-Gen UI. When the download completes, click on the NGUI icon / product name in the library to access the product page. Click on the Import button and wait for a pop-up window to appear. Check the checkbox for NGUI v.3.0.2.unity package and click on Import. 
In the Project view, navigate to Assets | NGUI and double-click on NGUI v.3.0.2. A new imported pop-up window will appear. Click on Import again. Click any button on the toolbar to refresh it. The NGUI tray will appear! The NGUI tray will look like the following screenshot: You have now successfully imported NGUI to your project. Let's create your first 2D UI. Creating your UI We will now create our first 2D user interface with NGUI's UI Wizard. This wizard will add all the elements needed for NGUI to work. Before we continue, please save your scene as Menu.unity. UI Wizard Create your UI by opening the UI Wizard by navigating to NGUI | Open | UIWizard from the toolbar. Let's now take a look at the UI Wizard window and its parameters. Window You should now have the following pop-up window with two parameters: Parameters The two parameters are as follows: Layer: This is the layer on which your UI will be displayed Camera: This will decide if the UI will have a camera, and its drop-down options are as follows: None: No camera will be created Simple 2D: Uses a camera with orthographic projection Advanced 3D: Uses a camera with perspective projection Separate UI Layer I recommend that you separate your UI from other usual layers. We should do it as shown in the following steps: Click on the drop-down menu next to the Layer parameter. Select Add Layer. Create a new layer and name it GUI2D. Go back to the UI Wizard window and select this new GUI2D layer for your UI. You can now click on the Create Your UI button. Your first 2D UI has been created! Your UI structure The wizard has created four new GameObjects on the scene for us: UI Root (2D) Camera Anchor Panel Let's now review each in detail. UI Root (2D) The UIRoot component scales widgets down to keep them at a manageable size. It is also responsible for the Scaling Style—it will either scale UI elements to remain pixel perfect or to occupy the same percentage of the screen, depending on the parameters you specify.
Select the UI Root (2D) GameObject in the Hierarchy. It has the UIRoot.cs script attached to it. This script adjusts the scale of the GameObject it's attached to in order to let you specify widget coordinates in pixels, instead of Unity units as shown in the following screenshot: Parameters The UIRoot component has four parameters: Scaling Style: The following are the available scaling styles: PixelPerfect: This will ensure that your UI will always try to remain at the same size in pixels, no matter what resolution. In this scaling mode, a 300 x 200 window will be huge on a 320 x 240 screen and tiny on a 1920 x 1080 screen. That also means that if you have a smaller resolution than your UI, it will be cropped. FixedSize: This will ensure that your UI will be proportionally resized depending on the screen's height. The result is that your UI will not be pixel perfect but will scale to fit the current screen size. FixedSizeOnMobiles: This will ensure fixed size on mobiles and pixel perfect everywhere else. Manual Height: With the FixedSize scaling style, the scale will be based on this height. If your screen's height goes over or under this value, it will be resized to be displayed identically while maintaining the aspect ratio (width/height proportional relationship). Minimum Height: With the PixelPerfect scaling style, this parameter defines the minimum height for the screen. If your screen height goes below this value, your UI will resize. It will be as if the Scaling Style parameter was set to FixedSize with Manual Height set to this value. Maximum Height: With the PixelPerfect scaling style, this parameter defines the maximum height for the screen. If your screen height goes over this value, your UI will resize. It will be as if the Scaling Style parameter was set to FixedSize with Manual Height set to this value. Please set the Scaling Style parameter to FixedSize with a Manual Height value of 1080.
This will allow us to have the same UI on any screen size up to 1920 x 1080. Even though the UI will look the same on different resolutions, the aspect ratio is still a problem since the rescale is based on the screen's height only. If you want to cover both 4:3 and 16:9 screens, your UI should not be too large—try to keep it square. Otherwise, your UI might be cropped on certain screen resolutions. On the other hand, if you want a 16:9 UI, I recommend you force this aspect ratio only. Let's do it now for this project by performing the following steps: Navigate to Edit | Project Settings | Player. In the Inspector option, unfold the Resolution and Presentation group. Unfold the Supported Aspect Ratios group. Check only the 16:9 box. Summary In this article, we discussed NGUI's basic workflow—it works with GameObjects, uses atlases to combine multiple textures in one large texture, has an event system, can use shaders, and has a localization system. After importing the NGUI plugin, we created our first 2D UI with the UI Wizard, reviewed its parameters, and created our own GUI 2D layer for our UI to reside on. Resources for Article: Further resources on this subject: Unity 3D Game Development: Don't Be a Clock Blocker [Article] Component-based approach of Unity [Article] Unity 3: Building a Rocket Launcher [Article]
Packt
10 Jan 2014
4 min read

Making POIApp Location Aware

(For more resources related to this topic, see here.) Location services While working with location services on the Android platform, you will primarily work with an instance of LocationManager. The process is fairly straightforward as follows: Obtain a reference to an instance of LocationManager. Use the instance of LocationManager to request location change notifications, either ongoing or a single notification. Process OnLocationChange() callbacks. Android devices generally provide two different means for determining a location: GPS and Network. When requesting location change notifications, you must specify the provider you wish to receive updates from. The Android platform defines a set of string constants for the following providers:

GPS_PROVIDER (gps): This provider determines a location using satellites. Depending on conditions, this provider may take a while to return a location fix. This requires the ACCESS_FINE_LOCATION permission.

NETWORK_PROVIDER (network): This provider determines a location based on the availability of a cell tower and Wi-Fi access points. Its results are retrieved by means of a network lookup.

PASSIVE_PROVIDER (passive): This provider can be used to passively receive location updates when other applications or services request them without actually having to request for the locations yourself. It requires the ACCESS_FINE_LOCATION permission, although if the GPS is not enabled, this provider might only return coarse fixes.

You will notice specific permissions in the provider descriptions that must be set on an app to be used. Setting app permissions App permissions are specified in the AndroidManifest.xml file. To set the appropriate permissions, perform the following steps: Double-click on Properties/AndroidManifest.xml in the Solution pad. The file will be opened in the manifest editor.
There are two tabs at the bottom of the screen, Application and Source, which can be used to toggle between viewing a form for editing the file or the raw XML as follows: In the Required permissions list, check AccessCoarseLocation, AccessFineLocation, and Internet. Select File | Save. Switch to the Source View to view the XML as follows: Configuring the emulator To use an emulator for development, this article will require the emulator to be configured with Google APIs so that the address lookup and navigation to the map app work. To install and configure Google APIs, perform the following steps: From the main menu, select Tools | Open Android SDK Manager. Select the platform version you are using, check Google APIs, and click on Install 1 package…, as seen in the following screenshot: After the installation is complete, close the Android SDK Manager and from the main menu, select Tools | Open Android Emulator Manager. Select the emulator you want to configure and click on Edit. For Target, select the Google APIs entry for the API level you want to work with. Click on OK to save. Obtaining an instance of LocationManager The LocationManager class is a system service that provides access to the location and bearing of a device, if the device supports these services. You do not explicitly create an instance of LocationManager; instead, you request an instance from a Context object using the GetSystemService() method. In most cases, the Context object is a subtype of Activity. The following code depicts declaring a reference of a LocationManager class and requesting an instance: LocationManager _locMgr; . . . _locMgr = GetSystemService (Context.LocationService) as LocationManager; Requesting location change notifications The LocationManager class provides a series of overloaded methods that can be used to request location update notifications. If you simply need a single update, you can call RequestSingleUpdate(); to receive ongoing updates, call RequestLocationUpdates().
Prior to requesting location updates, you must identify the location provider that should be used. In our case, we simply want to use the most accurate provider available at the time. This can be accomplished by specifying the criteria for the desired provider using an instance of Android.Location.Criteria. The following code example shows how to specify the minimum criteria:

Criteria criteria = new Criteria();
criteria.Accuracy = Accuracy.NoRequirement;
criteria.PowerRequirement = Power.NoRequirement;

Now that we have the criteria, we are ready to request updates as follows:

_locMgr.RequestSingleUpdate (criteria, this, null);

Summary

In this article, we stepped through integrating POIApp with location services and the Google map app. We looked at the various options developers have to make their apps location aware, added logic to determine a device's location and the address of that location, and displayed a location within the map app.

Resources for Article:
Further resources on this subject:
Creating and configuring a basic mobile application [Article]
Creating Dynamic UI with Android Fragments [Article]
So, what is Spring for Android? [Article]
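For the RequestSingleUpdate(criteria, this, null) call to compile, the object passed as this must implement Android.Locations.ILocationListener. A minimal sketch of what that looks like in an activity is shown below; the activity name and the body of OnLocationChanged are illustrative assumptions, not code from the article:

```csharp
using Android.App;
using Android.Locations;
using Android.OS;

[Activity]
public class LocationAwareActivity : Activity, ILocationListener
{
    LocationManager _locMgr;

    protected override void OnResume()
    {
        base.OnResume();
        _locMgr = GetSystemService(LocationService) as LocationManager;
        Criteria criteria = new Criteria {
            Accuracy = Accuracy.NoRequirement,
            PowerRequirement = Power.NoRequirement
        };
        // "this" works because the activity implements ILocationListener
        _locMgr.RequestSingleUpdate(criteria, this, null);
    }

    // Called once, when the single update arrives
    public void OnLocationChanged(Location location)
    {
        // location.Latitude and location.Longitude hold the fix
    }

    // Remaining ILocationListener members; empty implementations suffice here
    public void OnProviderDisabled(string provider) { }
    public void OnProviderEnabled(string provider) { }
    public void OnStatusChanged(string provider, Availability status, Bundle extras) { }
}
```

Because the fix is delivered asynchronously, any UI update driven by the location belongs in OnLocationChanged rather than immediately after the RequestSingleUpdate call.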
Geolocation – using PhoneGap features to improve an app's functionality, write once use everywhere

Packt
31 Dec 2013
5 min read
(For more resources related to this topic, see here.)

Step 1 – define global variables

We include the following variables and functions in our file:

var map;
var latitud;
var longitud;
var xmlDoc = loadXml("puntos.xml");
var marker;
var markersArray = [];

function loadXml(xmlUrl) {
    var xmlhttp;
    if (window.XMLHttpRequest) {
        xmlhttp = new XMLHttpRequest();
    } else {
        xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
    }
    xmlhttp.open("GET", xmlUrl, false);
    xmlhttp.send();
    xmlDoc = xmlhttp.responseXML;
    return xmlDoc;
}

With this code, we build our first function, loadXml, which loads the information from the XML file into the smartphone's memory.

Step 2 – get current position

We will build the following functions to show our current position in Google Maps.

getCurrentPosition: The success callback of this function receives an object with the latitude and longitude; these values are saved to the two global variables of the same name. The function receives three parameters, which are as follows:

- A function to manage the current latitude and longitude
- A function to manage errors if they occur while the device is trying to get the position
- An options object to configure the request:
  - maximumAge: the maximum age, in milliseconds, of a cached position that is acceptable to return
  - timeout: the maximum time, in milliseconds, to wait for an answer from getCurrentPosition
  - enableHighAccuracy: can be set to either true or false; true requests a more accurate location
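The callback-based call above can also be wrapped in a Promise, which makes the three options easier to reuse. The following is a sketch, not code from the article; the geolocation object is passed in as a parameter (normally navigator.geolocation) so the helper can be exercised with a fake implementation outside a browser:

```javascript
// Wrap a geolocation object's getCurrentPosition in a Promise,
// using the same options the article passes.
function getPosition(geolocation, options) {
  return new Promise(function (resolve, reject) {
    geolocation.getCurrentPosition(resolve, reject, options);
  });
}

// A fake geolocation object standing in for navigator.geolocation,
// so the sketch can run without a device (coordinates are made up):
var fakeGeo = {
  getCurrentPosition: function (success, error, opts) {
    success({ coords: { latitude: 4.6, longitude: -74.08 } });
  }
};

getPosition(fakeGeo, { maximumAge: 3000, timeout: 15000, enableHighAccuracy: true })
  .then(function (position) {
    console.log(position.coords.latitude, position.coords.longitude);
  });
```

In the app itself, you would call getPosition(navigator.geolocation, options) and put the loadMap call inside the .then handler.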
The following is the call to the preceding function:

navigator.geolocation.getCurrentPosition(coordinates, errors, {
    maximumAge : 3000,
    timeout : 15000,
    enableHighAccuracy : true
});

The following is the code for the coordinates and errors functions:

function coordinates(position) {
    latitud = position.coords.latitude;   /* saving latitude */
    longitud = position.coords.longitude; /* saving longitude */
    loadMap();
} // end function

function errors(err) { /* Managing errors */
    if (err.code == 0) {
        alert("Oops! Something is wrong");
    }
    if (err.code == 1) {
        alert("Oops! Please accept share your position with us.");
    }
    if (err.code == 2) {
        alert("Oops! We can't get your current position.");
    }
    if (err.code == 3) {
        alert("Oops! Timeout!");
    }
} // end errors

locateMe: This function checks whether the device supports the PhoneGap geolocation API. If it does, the function calls getCurrentPosition, explained previously, to obtain the current position; if it doesn't, we alert the user. The complete code for the locateMe function is as follows:

function locateMe() {
    if (navigator.geolocation) { /* The browser supports geolocation */
        navigator.geolocation.getCurrentPosition(coordinates, errors, {
            maximumAge : 3000,
            timeout : 15000,
            enableHighAccuracy : true
        });
    } else {
        alert("Oops! Your browser doesn't support geolocalization");
    }
}

Step 3 – showing the current position using Google Maps

To do this, we use a function called loadMap. This function is responsible for loading a map, centered on the current position, using the Google Maps API. We use a JavaScript object called myOptions with three properties: zoom to define the zoom level, center to define where the map will be centered, and mapTypeId to indicate the map style (for example, satellite).
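The cascade of if statements in errors can also be expressed as a small lookup table keyed by the W3C Geolocation error codes. This is a refactoring sketch, not code from the article:

```javascript
// Map the error codes handled by errors() to the article's messages.
var GEO_ERROR_MESSAGES = {
  0: "Oops! Something is wrong",                           // unknown error
  1: "Oops! Please accept share your position with us.",   // permission denied
  2: "Oops! We can't get your current position.",          // position unavailable
  3: "Oops! Timeout!"                                      // timeout
};

function describeGeoError(err) {
  return GEO_ERROR_MESSAGES[err.code] ||
         "Oops! Unexpected error code: " + err.code;
}

console.log(describeGeoError({ code: 3 })); // prints: Oops! Timeout!
```

The errors function then reduces to a single line, alert(describeGeoError(err)), and new codes need only a new table entry.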
Following the code, we find two lines: the first initializes the actualHeight variable with the value returned by the getActualContentHeight function, which returns the height the map needs to have according to the screen size. The second changes the height of the "map_canvas" div through jQuery Mobile's css method.

Next, we set the map variable to a google.maps.Map object that receives two parameters: the HTML element and the myOptions object. We use an additional event to resize the map when the screen size changes.

Then, we create a new google.maps.Marker object with four properties: position for the place of the marker; map, which is the global variable; icon to define the image that we want to use as a marker; and title for the tooltip.

We have two more functions. The first, createMarkers, reads the XML file with the points loaded in memory and then puts the markers on the map using our second function, addMarker, which receives four parameters (the global map variable, the point's name, its address and phone, and a google.maps.LatLng object with the point's coordinates) to build each marker. We add a click event to show all the information about the place. Finally, we use the removePoints function, which clears the map by deleting the loaded points.
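A natural extension, once both the user's position and the points from puntos.xml are in memory, is to order the markers by distance from the user. This is a hypothetical addition, not part of the article; it uses the standard haversine (great-circle) formula:

```javascript
// Great-circle distance between two lat/lon pairs, in kilometres.
function haversineKm(lat1, lon1, lat2, lon2) {
  var toRad = function (deg) { return deg * Math.PI / 180; };
  var R = 6371; // mean Earth radius in km
  var dLat = toRad(lat2 - lat1);
  var dLon = toRad(lon2 - lon1);
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
          Math.sin(dLon / 2) * Math.sin(dLon / 2);
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Return a copy of the points array sorted by distance from (lat, lon).
// Each point is assumed to carry lat/lon properties parsed from the XML.
function sortByDistance(points, lat, lon) {
  return points.slice().sort(function (a, b) {
    return haversineKm(lat, lon, a.lat, a.lon) -
           haversineKm(lat, lon, b.lat, b.lon);
  });
}

// One degree of longitude at the equator is about 111 km:
console.log(haversineKm(0, 0, 0, 1).toFixed(1));
```

createMarkers could then add the markers in proximity order, or label each info window with its distance from the user.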
This is the code for the JS file:

function loadMap() {
    var latlon = new google.maps.LatLng(latitud, longitud);
    var myOptions = {
        zoom : 17,
        center : latlon,
        mapTypeId : google.maps.MapTypeId.ROADMAP
    };
    map = new google.maps.Map(document.getElementById('map_canvas'), myOptions);
    var mapDiv = document.getElementById('map_canvas');
    google.maps.event.addDomListener(mapDiv, 'resize', function() {
        google.maps.event.trigger(map, 'resize');
    });
    var coorMarker = new google.maps.LatLng(latitud, longitud);
    marker = new google.maps.Marker({ /* Create the marker */
        position : coorMarker,
        map : map,
        icon : 'images/green-dot.png',
        title : "Where am I?"
    });
}

Step 4 – edit the HTML file

To implement the preceding functions, we need to add some lines to our HTML file: a script tag to include the new JS file, and a short script in the HTML head tag that executes the locateMe function once the device is ready. Another PhoneGap function, watchPosition, can then be called to update the position as the user moves.

Summary

This article has shown us how to implement geolocation in our app using the API provided by PhoneGap. The code for pointing out our current location was also discussed.

Resources for Article:
Further resources on this subject:
Using Location Data with PhoneGap [Article]
Configuring the ChildBrowser plugin [Article]
Creating and configuring a basic mobile application [Article]