
How-To Tutorials - Mobile


Building a Gallery Application

Packt
19 Aug 2016
17 min read
In this article, Michael Williams, author of the book Xamarin Blueprints, walks you through native development with Xamarin by building an iOS and an Android application that read from your local gallery files and display them in a UITableView and a ListView.

Create an iOS project

Let's begin our Xamarin journey; first we will set up our iOS project in Xamarin Studio. Start by opening Xamarin Studio and creating a new iOS project. To do so, simply select File | New | Solution and select an iOS Single View App; we must also give it a name and add the bundle ID you want in order to run your application. It is recommended that a new bundle ID be created for each project, along with a developer provisioning profile for each project.

Now that we have created the iOS project, you will be taken to the AppDelegate file. Doesn't this look familiar? Notice the .cs extension: because we are using C#, all our code files will have this extension (no more .h or .m files). Before we go any further, spend a few minutes moving around the IDE, expand the folders, and explore the project structure; it is very similar to an iOS project created in Xcode.

Create a UIViewController and UITableView

Now that we have our new iOS project, we are going to start by creating a UIViewController. Right-click on the project file, select Add | New File, and select ViewController from the iOS menu selection in the left-hand box. You will notice three files generated: a .xib, a .cs, and a .designer.cs file. We don't need to worry about the third file; it is automatically generated based upon the other two. If you right-click on the project item and select Reveal in Finder, you can double-click a .xib file to bring up the user-interface designer in Xcode.

You should see automated text inserted into the document to help you get started. First, we must set our namespace accordingly and import our libraries with using statements. In order to use the iOS user interface elements, we must import the UIKit and CoreGraphics libraries. Our class will inherit the UIViewController class, in which we will override the ViewDidLoad function:

namespace Gallery.iOS
{
    using System;
    using System.Collections.Generic;

    using CoreGraphics;
    using UIKit;

    public partial class MainController : UIViewController
    {
        private UITableView _tableView;

        private TableSource _source;

        private ImageHandler _imageHandler;

        public MainController () : base ("MainController", null)
        {
            _source = new TableSource ();

            _imageHandler = new ImageHandler ();
            _imageHandler.AssetsLoaded += handleAssetsLoaded;
        }

        private void handleAssetsLoaded (object sender, EventArgs e)
        {
            _source.UpdateGalleryItems (_imageHandler.CreateGalleryItems());
            _tableView.ReloadData ();
        }

        public override void ViewDidLoad ()
        {
            base.ViewDidLoad ();

            var width = View.Bounds.Width;
            var height = View.Bounds.Height;

            _tableView = new UITableView(new CGRect(0, 0, width, height));
            _tableView.AutoresizingMask = UIViewAutoresizing.All;
            _tableView.Source = _source;

            Add (_tableView);
        }
    }
}
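The ImageHandler class wired up above, with its AssetsLoaded event and CreateGalleryItems method, is not defined anywhere in this excerpt. A minimal sketch of the shape the controller relies on might look like the following; the class body and the suggestion to use the photo-library APIs are assumptions for illustration, not the book's actual implementation:

using System;
using System.Collections.Generic;

// Hypothetical sketch - the book's actual ImageHandler is not shown in this excerpt.
public class ImageHandler
{
    // Raised once the gallery assets have been enumerated.
    public event EventHandler AssetsLoaded;

    private readonly List<GalleryItem> _items = new List<GalleryItem> ();

    public void LoadAssets ()
    {
        // Enumerate the device photo library here (for example with the
        // ALAssetsLibrary or Photos APIs), fill _items with one GalleryItem
        // per photo, and then notify subscribers that loading has finished.
        var handler = AssetsLoaded;
        if (handler != null)
        {
            handler (this, EventArgs.Empty);
        }
    }

    public List<GalleryItem> CreateGalleryItems ()
    {
        return _items;
    }
}

The event signature matches the handleAssetsLoaded (object sender, EventArgs e) handler that MainController subscribes in its constructor.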
Our first UI element created is a UITableView. It is inserted into the UIView of the UIViewController, and we retrieve the width and height values of the UIView to stretch the UITableView to fit the entire bounds of the UIViewController. We must also call Add to insert the UITableView into the UIView.

In order to have the list filled with data, we need to create a UITableViewSource to contain the list of items to be displayed. We will also need an object called GalleryItem; this will be the model of the data displayed in each cell. Follow the previous process to add two new .cs files: one will be used to create our UITableViewSource class and the other for the GalleryItem class. In TableSource.cs, first we must import the Foundation library with a using statement:

using Foundation;

Now for the rest of our class. Remember, we have to override specific functions of UITableViewSource to describe its behavior. The class must also include a list containing the item models used for the data displayed in each cell:

public class TableSource : UITableViewSource
{
    protected List<GalleryItem> galleryItems;
    protected string cellIdentifier = "GalleryCell";

    public TableSource ()
    {
        galleryItems = new List<GalleryItem> ();
    }
}

We must override the NumberOfSections function; in our case, it will always return one because we are not using list sections:

public override nint NumberOfSections (UITableView tableView)
{
    return 1;
}

To determine the number of list items, we return the count of the list:

public override nint RowsInSection (UITableView tableview, nint section)
{
    return galleryItems.Count;
}

Then we must add the GetCell function; this will be used to get the UITableViewCell to render for a particular row. But before we do this, we need to create a custom UITableViewCell.

Customizing a cell's appearance

We are now going to design the cell that will appear for every model found in the TableSource class. Add a new .cs file for our custom UITableViewCell; we are not going to use a .xib, and will instead build the user interface directly in code using a single .cs file. Now for the implementation:

public class GalleryCell : UITableViewCell
{
    private UIImageView _imageView;

    private UILabel _titleLabel;

    private UILabel _dateLabel;

    public GalleryCell (string cellId) : base (UITableViewCellStyle.Default, cellId)
    {
        SelectionStyle = UITableViewCellSelectionStyle.Gray;

        _imageView = new UIImageView ()
        {
            TranslatesAutoresizingMaskIntoConstraints = false,
        };

        _titleLabel = new UILabel ()
        {
            TranslatesAutoresizingMaskIntoConstraints = false,
        };

        _dateLabel = new UILabel ()
        {
            TranslatesAutoresizingMaskIntoConstraints = false,
        };

        ContentView.Add (_imageView);
        ContentView.Add (_titleLabel);
        ContentView.Add (_dateLabel);
    }
}

Our constructor must call the base constructor, as we need to initialize each cell with a cell style and a cell identifier. We then add a UIImageView and two UILabels to each cell: one label for the file name and one for the date. Finally, we add all three elements to the main content view of the cell.
Once we have our initializer, we add the following:

public void UpdateCell (GalleryItem gallery)
{
    _imageView.Image = UIImage.LoadFromData (NSData.FromArray (gallery.ImageData));
    _titleLabel.Text = gallery.Title;
    _dateLabel.Text = gallery.Date;
}

public override void LayoutSubviews ()
{
    base.LayoutSubviews ();

    ContentView.TranslatesAutoresizingMaskIntoConstraints = false;

    // set layout constraints for main view
    AddConstraints (NSLayoutConstraint.FromVisualFormat("V:|[imageView(100)]|", NSLayoutFormatOptions.DirectionLeftToRight, null, new NSDictionary("imageView", _imageView)));
    AddConstraints (NSLayoutConstraint.FromVisualFormat("V:|[titleLabel]|", NSLayoutFormatOptions.DirectionLeftToRight, null, new NSDictionary("titleLabel", _titleLabel)));
    AddConstraints (NSLayoutConstraint.FromVisualFormat("H:|-10-[imageView(100)]-10-[titleLabel]-10-|", NSLayoutFormatOptions.AlignAllTop, null, new NSDictionary ("imageView", _imageView, "titleLabel", _titleLabel)));
    AddConstraints (NSLayoutConstraint.FromVisualFormat("H:|-10-[imageView(100)]-10-[dateLabel]-10-|", NSLayoutFormatOptions.AlignAllTop, null, new NSDictionary ("imageView", _imageView, "dateLabel", _dateLabel)));
}

Our first function, UpdateCell, simply adds the model data to the view; our second function overrides the LayoutSubviews method of the UITableViewCell class (equivalent to the ViewDidLoad function of a UIViewController).

Now that we have our cell design, let's create the properties required for the model. We only want to store data in our GalleryItem model, meaning we store images as byte arrays. Let's create the properties for the item model:

namespace Gallery.iOS
{
    using System;

    public class GalleryItem
    {
        public byte[] ImageData;

        public string ImageUri;

        public string Title;

        public string Date;

        public GalleryItem ()
        {
        }
    }
}

Now back to our TableSource class. The next step is to implement the GetCell function:

public override UITableViewCell GetCell (UITableView tableView, NSIndexPath indexPath)
{
    var cell = (GalleryCell)tableView.DequeueReusableCell (cellIdentifier);
    var galleryItem = galleryItems[indexPath.Row];

    if (cell == null)
    {
        // we create a new cell if this row has not been created yet
        cell = new GalleryCell (cellIdentifier);
    }

    cell.UpdateCell (galleryItem);

    return cell;
}

Notice the cell reuse in the if statement; you should be familiar with this type of approach, as it is a common pattern for reusing cell views and is the same as the Objective-C implementation (this is a very basic cell reuse implementation). We also call the UpdateCell method to pass the required GalleryItem data into the cell. Let's also set a constant height for all cells. Add the following to your TableSource class:

public override nfloat GetHeightForRow (UITableView tableView, NSIndexPath indexPath)
{
    return 100;
}

So what is next? We make sure the table source is assigned in ViewDidLoad, as we did earlier:

public override void ViewDidLoad ()
{
    // ...
    _tableView.Source = _source;
    // ...
}

Let's stop development and have a look at what we have achieved so far. We have created our first UIViewController, UITableView, UITableViewSource, and UITableViewCell, and bound them all together. Fantastic!
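One method the controller calls but this excerpt never defines is UpdateGalleryItems on TableSource. A minimal sketch, assuming it simply swaps the backing list (the method body is an assumption, not the book's code), could be:

// Hypothetical sketch - add to TableSource; the book's actual implementation is not shown here.
public void UpdateGalleryItems (List<GalleryItem> items)
{
    // Replace the backing collection; the controller calls
    // _tableView.ReloadData () afterwards to refresh the UI.
    galleryItems = items ?? new List<GalleryItem> ();
}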
We now need to access the local storage of the phone to pull out the required gallery items. But before we do this, we are going to create an Android project and replicate what we have done on iOS.

Create an Android project

Let's continue our Xamarin journey with Android. Our first step is to create a new general Android app. The first screen you will land on is MainActivity. This is our starting activity, which will inflate the first user interface; take notice of the configuration attribute:

[Activity (Label = "Gallery.Droid", MainLauncher = true, Icon = "@mipmap/icon")]

The MainLauncher flag indicates the starting activity; one activity must have this flag set to true so the application knows which activity to load first. The Icon property is used to set the application icon, and the Label property is used to set the text of the application, which appears in the top left of the navigation bar:

namespace Gallery.Droid
{
    using Android.App;
    using Android.Widget;
    using Android.OS;

    [Activity (Label = "Gallery.Droid", MainLauncher = true, Icon = "@mipmap/icon")]
    public class MainActivity : Activity
    {
        protected override void OnCreate (Bundle savedInstanceState)
        {
            base.OnCreate (savedInstanceState);

            // Set our view from the "main" layout resource
            SetContentView (Resource.Layout.Main);
        }
    }
}

The formula for our activities is the same as in Java; we must override the OnCreate method for each activity, where we will inflate the first XML interface, Main.xml.

Creating an XML interface and ListView

Our starting point is the Main.xml sheet; this is where we will be creating the ListView:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">
    <ListView
        android:id="@+id/listView"
        android:layout_width="fill_parent"
        android:layout_height="fill_parent"
        android:layout_marginBottom="10dp"
        android:layout_marginTop="5dp"
        android:background="@android:color/transparent"
        android:cacheColorHint="@android:color/transparent"
        android:divider="#CCCCCC"
        android:dividerHeight="1dp"
        android:paddingLeft="2dp" />
</LinearLayout>

The Main.xml file should already be in the Resources | layout directory, so simply copy and paste the previous code into this file. Excellent! We now have our starting activity and interface, so now we have to create a ListAdapter for our ListView. An adapter works very much like a UITableViewSource: we must override functions to determine cell data, row design, and the number of items in the list. Xamarin Studio also has an Android GUI designer.

Right-click on the Android project and add a new empty class file for our adapter class. Our class must inherit the BaseAdapter class, and we are going to override the following functions:

public override long GetItemId(int position);
public override View GetView(int position, View convertView, ViewGroup parent);
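To make that concrete, here is a minimal sketch of what such an adapter could look like. The GalleryAdapter name, the use of a built-in two-line row layout, and every member body are assumptions for illustration, not the book's actual code:

using System.Collections.Generic;
using Android.App;
using Android.Views;
using Android.Widget;

// Hypothetical sketch of a BaseAdapter implementation for GalleryItem rows.
public class GalleryAdapter : BaseAdapter<GalleryItem>
{
    private readonly Activity _context;
    private List<GalleryItem> _items;

    public GalleryAdapter (Activity context)
    {
        _context = context;
        _items = new List<GalleryItem> ();
    }

    public void UpdateGalleryItems (List<GalleryItem> items)
    {
        _items = items;
        NotifyDataSetChanged ();
    }

    public override int Count
    {
        get { return _items.Count; }
    }

    public override GalleryItem this [int position]
    {
        get { return _items [position]; }
    }

    public override long GetItemId (int position)
    {
        return position;
    }

    public override View GetView (int position, View convertView, ViewGroup parent)
    {
        // Reuse the recycled row view when Android hands one back to us,
        // mirroring the cell-reuse pattern we used on iOS.
        var view = convertView ?? _context.LayoutInflater.Inflate (
            Android.Resource.Layout.SimpleListItem2, parent, false);

        var item = _items [position];
        view.FindViewById<TextView> (Android.Resource.Id.Text1).Text = item.Title;
        view.FindViewById<TextView> (Android.Resource.Id.Text2).Text = item.Date;

        return view;
    }
}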
Before we go any further, we need a model for the objects that contain the data presented in each row. In our iOS project, we created a GalleryItem to hold the byte array of image data used to create each UIImage. We have two approaches here: we could create another object to do the same as the GalleryItem, or, even better, why don't we reuse this object using a shared project?

Shared projects

We are going to delve into our first technique for sharing code between different platforms. This is what Xamarin tries to achieve with all of its development: we want to reuse as much code as possible. The biggest disadvantage of developing Android and iOS applications in two different languages is that nothing can be reused. Let's create our first shared project.

Our shared project will be used to contain the GalleryItem model, so whatever code we include in this shared project can be accessed by both the iOS and Android projects. Have a look at the Solution explorer and notice how the shared project doesn't contain anything more than .cs code sheets. Shared projects do not have any references or components, just code that is shared by all platform projects. When our native projects reference these shared projects, any libraries referenced via using statements come from the native projects.

Now we must have the iOS and Android projects reference the shared project: right-click on the References folder, select Edit References, and select the shared project you just created. We can now reference the GalleryItem object from both projects.

Summary

In this article, we have seen a walkthrough of building a gallery application on both iOS and Android using native libraries: on iOS with a UITableView and UITableViewSource, and on Android with a ListView and ListAdapter.

ALM – Developers and QA

Packt
30 Mar 2016
15 min read
This article by Can Bilgin, the author of Mastering Cross-Platform Development with Xamarin, provides an introduction to Application Lifecycle Management (ALM) and continuous integration methodologies for Xamarin cross-platform applications. As the part of the ALM process most relevant for developers, unit test strategies are discussed and demonstrated, as well as automated UI testing. This article is divided into the following sections: development pipeline, troubleshooting, unit testing, and UI testing.

Development pipeline

The development pipeline can be described as the virtual production line that steers a project from a mere bundle of business requirements to the consumers. Stakeholders in this pipeline include, but are not limited to, business proxies, developers, the QA team, the release and configuration team, and finally the consumers themselves. Each stakeholder in this production line assumes different responsibilities, and they should all function in harmony. Hence, having an efficient, healthy, and preferably automated pipeline that provides the communication and transfer of deliverables between units is vital for the success of a project.

In the Agile project management framework, the development pipeline is cyclical rather than a linear delivery queue. In the application life cycle, requirements are inserted continuously into a backlog. The backlog leads to a planning and development phase, which is followed by testing and QA. Once the production-ready application is released, consumers can be made part of this cycle using live application telemetry instrumentation.

Figure 1: Application life cycle management

In Xamarin cross-platform application projects, development teams are blessed with various tools and frameworks that can ease the execution of ALM strategies: from sketching and mock-up tools available for early prototyping and design, to the source control and project management tools that make up the backbone of ALM, Xamarin projects can utilize various tools to automate and systematically analyze the project timeline.

The following sections of this article concentrate mainly on the lines of defense that protect the health and stability of a Xamarin cross-platform project in the timeline between the assignment of a task to a developer and the point at which the task or bug is completed/resolved and checked into a source control repository.

Troubleshooting and diagnostics

SDKs associated with Xamarin target platforms and development IDEs are equipped with comprehensive analytic tools. Utilizing these tools, developers can identify issues causing app freezes, crashes, slow response times, and other resource-related problems (for example, excessive battery usage).

Xamarin.iOS applications are analyzed using the Xcode Instruments toolset. In this toolset, there are a number of profiling templates, each used to analyze a certain perspective of application execution. Instruments templates can be executed on an application running on the iOS simulator or on an actual device.

Figure 2: Xcode Instruments

Similarly, Android applications can be analyzed using the device monitor provided by the Android SDK. Using Android Monitor, memory profile, CPU/GPU utilization, and network usage can be analyzed, and application-provided diagnostic information can be gathered. Android Debug Bridge (ADB) is a command-line tool that allows various manual or automated device-related operations.
For Windows Phone applications, Visual Studio provides a number of analysis tools for profiling CPU usage, energy consumption, memory usage, and XAML UI responsiveness. XAML diagnostic sessions in particular can provide valuable information on problematic sections of view implementation and pinpoint possible visual and performance issues.

Figure 3: Visual Studio XAML analyses

Finally, Xamarin Profiler, a maturing application (currently in preview release), can help analyze memory allocations and execution time. Xamarin Profiler can be used with iOS and Android applications.

Unit testing

The test-driven development (TDD) pattern dictates that the business requirements, and the granular use-cases defined by these requirements, should initially be reflected in unit test fixtures. This allows a mobile application to grow and evolve within the defined borders of these assertive unit test models. Whether following a TDD strategy or implementing tests to ensure the stability of the development pipeline, unit tests are fundamental components of a development project.

Figure 4: Unit test project templates

Xamarin Studio and Visual Studio both provide a number of test project templates targeting different areas of a cross-platform project. In Xamarin cross-platform projects, unit tests can be categorized into two groups: platform-agnostic and platform-specific testing.

Platform-agnostic unit tests

Platform-agnostic components, such as portable class libraries containing shared logic for Xamarin applications, can be tested using common unit test projects targeting the .NET framework. Visual Studio Test Tools or the NUnit test framework can be used according to the development environment of choice. It is also important to note that shared projects used to create shared logic containers for Xamarin projects cannot be tested with .NET unit test fixtures; for shared projects and the referencing platform-specific projects, platform-specific unit test fixtures should be prepared.

When following an MVVM pattern, view models are the focus of unit test fixtures since, as previously explained, a view model can be perceived as a finite state machine where the bindable properties are used to create a certain state on which the commands are executed, simulating a specific use-case to be tested. This approach is the most convenient way to test the UI behavior of a Xamarin application without having to implement and configure automated UI tests.

While implementing unit tests for such projects, a mocking framework is generally used to replace the platform-dependent sections of the business logic. Loosely coupling these dependent components makes it easier for developers to inject mocked interface implementations and increases the testability of these modules. The most popular mocking frameworks for unit testing are Moq and RhinoMocks. Both Moq and RhinoMocks utilize reflection and, more specifically, the Reflection.Emit namespace, which is used to generate types, methods, events, and other artifacts at runtime. The aforementioned iOS restrictions on code generation make these libraries inapplicable for platform-specific testing, but they can still be included in unit test fixtures targeting the .NET framework. For platform-specific implementations, the True Fakes library provides compile-time code generation and mocking features.
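As an illustration of the view-model-with-mocks approach just described, here is a minimal sketch using NUnit and Moq. The IDataService interface, the OrdinalViewModel class, and all member names are invented for this example and are not types from the book:

// Hypothetical sketch: unit-testing a view model with a mocked dependency.
using System;
using Moq;
using NUnit.Framework;

public interface IDataService
{
    int CalculateFibonacci (int ordinal);
}

public class OrdinalViewModel
{
    private readonly IDataService _service;

    public OrdinalViewModel (IDataService service) { _service = service; }

    public int Ordinal { get; set; }
    public string Result { get; private set; }

    public void Calculate ()
    {
        // Guard against invalid state before calling the dependency.
        Result = Ordinal <= 0
            ? "Ordinal cannot be a negative number."
            : _service.CalculateFibonacci (Ordinal).ToString ();
    }
}

[TestFixture]
public class OrdinalViewModelTests
{
    [Test]
    public void Calculate_WithValidOrdinal_UsesServiceResult ()
    {
        // Arrange: mock the platform-dependent service.
        var mock = new Mock<IDataService> ();
        mock.Setup (s => s.CalculateFibonacci (10)).Returns (55);

        var vm = new OrdinalViewModel (mock.Object) { Ordinal = 10 };

        // Act: simulate the command execution.
        vm.Calculate ();

        // Assert: the state reflects the expected use-case.
        Assert.AreEqual ("55", vm.Result);
        mock.Verify (s => s.CalculateFibonacci (10), Times.Once ());
    }
}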
Depending on the implementation specifics (such as namespaces used, network communication, multithreading, and so on), in some scenarios it is imperative to test the common logic implementation on specific platforms as well. For instance, some multithreading and parallel task implementations give different results on Windows Runtime, Xamarin.Android, and Xamarin.iOS. These variations generally occur because of the underlying platform's mechanisms or slight differences between the .NET and Mono implementation logic. In order to ensure the integrity of these components, common unit test fixtures can be added as linked/referenced files to platform-specific test projects and executed on the test harness.

Platform-specific unit tests

In a Xamarin project, platform-dependent features cannot be unit tested using the conventional unit test runners available in the Visual Studio Test Suite and NUnit frameworks. Platform-dependent tests are executed on empty platform-specific projects that serve as a harness for unit tests for that specific platform. Windows Runtime application projects can be tested using the Visual Studio Test Suite. However, for Android and iOS, the NUnit testing framework should be used, since Visual Studio Test Tools are not available for the Xamarin.Android and Xamarin.iOS platforms.

Figure 5: Test harnesses

The unit test runner for Windows Phone (Silverlight) and Windows Phone 8.1 applications uses a test harness integrated with the Visual Studio test explorer; the unit tests can be executed and debugged from within Visual Studio. Xamarin.Android and Xamarin.iOS test project templates use the NUnitLite implementation for the respective platforms. In order to run these tests, the test application should be deployed on the simulator (or the testing device) and the application has to be executed manually.

It is possible to automate the unit tests on the Android and iOS platforms through instrumentation. On each Xamarin target platform, the initial application lifetime event is used to add the necessary unit tests:

[Activity(Label = "Xamarin.Master.Fibonacci.Android.Tests", MainLauncher = true, Icon = "@drawable/icon")]
public class MainActivity : TestSuiteActivity
{
    protected override void OnCreate(Bundle bundle)
    {
        // tests can be inside the main assembly
        //AddTest(Assembly.GetExecutingAssembly());
        // or in any reference assemblies
        AddTest(typeof(Fibonacci.Android.Tests.TestsSample).Assembly);

        // Once you called base.OnCreate(), you cannot add more assemblies.
        base.OnCreate(bundle);
    }
}

In the Xamarin.Android implementation, the MainActivity class derives from TestSuiteActivity, which implements the necessary infrastructure to run the unit tests and the UI elements to visualize the test results. On the Xamarin.iOS platform, the test application uses the default UIApplicationDelegate, and generally the FinishedLaunching event delegate is used to create the view controller for the unit test fixture:
public override bool FinishedLaunching(UIApplication application, NSDictionary launchOptions)
{
    // Override point for customization after application launch.
    // If not required for your application you can safely delete this method
    var window = new UIWindow(UIScreen.MainScreen.Bounds);
    var touchRunner = new TouchRunner(window);
    touchRunner.Add(System.Reflection.Assembly.GetExecutingAssembly());
    window.RootViewController = new UINavigationController(touchRunner.GetViewController());
    window.MakeKeyAndVisible();
    return true;
}

The main shortcoming of executing unit tests this way is that it is not easy to generate a code coverage report or archive the test results. Neither of these testing methods provides the ability to test the UI layer; they are simply used to test platform-dependent implementations. In order to test the interactive layer, platform-specific or cross-platform (Xamarin.Forms) coded UI tests need to be implemented.

UI testing

In general terms, the code coverage of the unit tests directly correlates with the amount of shared code, which amounts to at least 70-80 percent of the code base in a mundane Xamarin project. One of the main driving factors behind the architectural patterns was to decrease the amount of logic and code in the view layer, so that the testability of the project using conventional unit tests reaches a satisfactory level. Coded UI (or automated UI acceptance) tests are used to test the uppermost layer of the cross-platform solution: the views.

Xamarin.UITests and Xamarin Test Cloud

The main UI testing framework used for Xamarin projects is the Xamarin.UITests testing framework. This testing component can be used on various platform-specific projects, varying from native mobile applications to Xamarin.Forms implementations, except for the Windows Phone platform and applications. Xamarin.UITests is an implementation based on the Calabash framework, which is an automated UI acceptance testing framework targeting mobile applications.

Xamarin.UITests is introduced to Xamarin.iOS or Xamarin.Android applications via publicly available NuGet packages. The included framework components are used to provide an entry point into the native applications. The entry point is the Xamarin Test Cloud Agent, which is embedded into the native application during compilation. The cloud agent is similar to a local server that allows either Xamarin Test Cloud or the test runner to communicate with the app infrastructure and simulate user interaction with the application.

Xamarin Test Cloud is a subscription-based service that allows Xamarin applications to be tested on real mobile devices using UI tests implemented via Xamarin.UITests. Xamarin Test Cloud not only provides a powerful testing infrastructure for Xamarin.iOS and Xamarin.Android applications, with an abundant number of mobile devices, but can also be integrated into continuous integration workflows.

After installing the appropriate NuGet package, the UI tests can be initialized for a specific application on a specific device. In order to initialize the interaction adapter for the application, the app package and the device should be configured.
On Android, the APK package path and the device serial can be used for the initialization:

IApp app = ConfigureApp.Android
    .ApkFile("<APK Path>/MyApplication.apk")
    .DeviceSerial("<DeviceID>")
    .StartApp();

For an iOS application, the procedure is similar:

IApp app = ConfigureApp.iOS
    .AppBundle("<App Bundle Path>/MyApplication.app")
    .DeviceIdentifier("<DeviceID of Simulator>")
    .StartApp();

Once the app handle has been created, each test written using NUnit should first create the pre-conditions for the test, simulate the interaction, and finally test the outcome. The IApp interface provides a set of methods to select elements on the visual tree and simulate certain interactions, such as text entry and tapping. On top of the main testing functionality, screenshots can be taken to document test steps and possible bugs. Both Visual Studio and Xamarin Studio provide project templates for Xamarin.UITests.
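To make the IApp usage concrete, here is a minimal sketch of such a test. The marked identifiers ("Ordinal", "Calculate") and the expected text are assumptions carried over from the Fibonacci example used elsewhere in this article, not code from the book:

// Hypothetical sketch of a Xamarin.UITests fixture.
using NUnit.Framework;
using Xamarin.UITest;

[TestFixture]
public class FibonacciUiTests
{
    IApp app;

    [SetUp]
    public void BeforeEachTest ()
    {
        // Initialize the interaction adapter as shown above.
        app = ConfigureApp.Android
            .ApkFile ("<APK Path>/MyApplication.apk")
            .StartApp ();
    }

    [Test]
    public void CalculateTenthFibonacci_ShowsResult ()
    {
        // Pre-condition: enter the ordinal into the text field.
        app.EnterText (c => c.Marked ("Ordinal"), "10");

        // Interaction: tap the Calculate button.
        app.Tap (c => c.Marked ("Calculate"));

        // Outcome: wait for the result label and document the step.
        app.WaitForElement (c => c.Text ("55"));
        app.Screenshot ("Fibonacci result for the 10th ordinal");
    }
}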
Xamarin Test Recorder

Xamarin Test Recorder is an application that can ease the creation of automated UI tests. It is currently in its preview version and is only available for the Mac OS platform.

Figure 6: Xamarin Test Recorder

Using this application, developers can select the application that needs testing and the device/simulator that is going to run it. Once the recording session starts, each interaction on the screen is recorded as execution steps on a separate screen, and these steps can be used to generate the preparation or testing steps for the Xamarin.UITests implementation.

Coded UI tests (Windows Phone)

Coded UI tests are used for automated UI testing on the Windows Phone platform. Coded UI tests for Windows Phone and Windows Store applications are no different than their counterparts for other .NET platforms such as Windows Forms, WPF, or ASP.NET. It is also important to note that only XAML applications support Coded UI tests.

Coded UI tests are generated on a simulator and written on an Automation ID premise. The Automation ID property is an automatically generated or manually configured identifier for Windows Phone applications (only in XAML) and the UI controls used in the application. Coded UI tests depend on the UIMap created for each control on a specific screen using the Automation IDs. While creating the UIMap, a crosshair tool can be used to select the application and the controls on the simulator screen to define the interactive elements.

Figure 7: Generating coded UI accessors and tests

Once the UIMap has been created and the designer files have been generated, gestures and the generated XAML accessors can be used to create testing pre-conditions and assertions. For Coded UI tests, multiple scenario-specific input values can be used and tested on a single assertion: using the DataRow attribute, unit tests can be expanded to test multiple data-driven scenarios. The code snippet below uses multiple input values to test different incorrect input values:

[DataRow(0, "Zero Value")]
[DataRow(-2, "Negative Value")]
[TestMethod]
public void FibonnaciCalculateTest_IncorrectOrdinal(int ordinalInput, string description)
{
    // TODO: Check if bad values are handled correctly
}

Automated tests can run on available simulators and/or a real device. They can also be included in CI build workflows and made part of the automated development pipeline.

Calabash

Calabash is an automated UI acceptance testing framework used to execute Cucumber tests. Cucumber tests provide an assertion strategy similar to Coded UI tests, only broader and behavior-oriented. The Cucumber test framework supports tests written in the Gherkin language (a human-readable programming grammar description for behavior definitions). Calabash makes up the necessary infrastructure to execute these tests on various platforms and application runtimes.

A simple declaration of the feature and the scenario previously tested with Coded UI using the data-driven model would look similar to the excerpt below. Only two of the possible test scenarios are declared in this feature for demonstration; the feature can be extended:

Feature: Calculate Single Fibonacci number.
  Ordinal entry should be greater than 0.

Scenario: Ordinal is lower than 0.
  Given I use the native keyboard to enter "-2" into text field Ordinal
  And I touch the "Calculate" button
  Then I see the text "Ordinal cannot be a negative number."

Scenario: Ordinal is 0.
  Given I use the native keyboard to enter "0" into text field Ordinal
  And I touch the "Calculate" button
  Then I see the text "Cannot calculate the number for the 0th ordinal."

Calabash test execution is possible on Xamarin target platforms because the Ruby API exposed by the Calabash framework has a bidirectional communication line with the Xamarin Test Cloud Agent embedded in Xamarin applications via NuGet packages. Calabash/Cucumber tests can be executed on Xamarin Test Cloud on real devices, since the communication between the application runtime and the Calabash framework is maintained by the Xamarin Test Cloud Agent, the same as for Xamarin.UITests.

Summary

Xamarin projects can benefit from a properly established development pipeline and the use of ALM principles. This type of approach makes it easier for teams to share responsibilities and work out business requirements in an iterative manner. In the ALM timeline, the development phase is the main domain in which most of the concrete implementation takes place. In order for the development team to provide quality code that can survive the ALM cycle, it is highly advisable to analyze and test native applications using the available tooling in Xamarin development IDEs.

While the common codebase for a target platform in a Xamarin project can be treated and tested as a .NET implementation using conventional unit tests, platform-specific implementations require more particular handling. Platform-specific parts of the application need to be tested on empty shell applications, called test harnesses, on the respective platform simulators or devices. To test views, available frameworks such as Coded UI tests (for Windows Phone) and Xamarin.UITests (for Xamarin.Android and Xamarin.iOS) can be utilized to increase test code coverage and create a stable foundation for the delivery pipeline. Most of the tests and analysis tools discussed in this article can be integrated into automated continuous integration processes.

Cloud and Async Communication

Packt
03 Oct 2016
6 min read
In this article by Matteo Bortolu and Engin Polat, the authors of the book Xamarin 4 By Example, we are going to create a new project called Fast Food, with the help of a service layer and a presentation layer.

Example project – Xamarin fast food

First of all, we create a new Xamarin.Forms PCL project and prepare the empty subfolders of Core to define the business logic of our project. To use the base classes, we need to import into our projects the SQLite.Net PCL from the NuGet package manager. It is good practice to update all the packages before you start; as soon as a package update is available, we are notified on the Packages folder. To update a package, right-click on the Packages folder and select Update from the contextual menu.

We can create, under the Business subfolder of the Core, the class MenuItem that contains the properties of the available items to order. A MenuItem will have a name, a price, and the seconds required to prepare it. The class will be developed as:

public class MenuItem : BaseEntity<int>
{
    public string Name { get; set; }
    public int RequiredSeconds { get; set; }
    public float Price { get; set; }
}

We will also prepare the data layer element and the business layer element for this class. In the first instance, they will only use inheritance from the base classes. The data layer will be coded like this:

public class MenuItemData : BaseData<MenuItem, int>
{
    public MenuItemData ()
    {
    }
}

And the business layer will look like:

public class MenuItemBusiness : BaseBusiness<MenuItem, int>
{
    public MenuItemBusiness () : base (new MenuItemData ())
    {
    }
}

Now we can add a new base class under the Services subfolder of the base layer.

Service layer

In this example we will develop a simple service that makes the request wait for the required number of seconds. We will change the base service later in the article in order to make server requests. We will define our base service using a generic base entity type:

public class BaseService<TEntity, TKey> where TEntity : BaseEntity<TKey>
{
    // we will write here the code for the base service
}

Inside the base service we need to define an event to raise when the response is ready to be dispatched:

public event ResponseReceivedHandler ResponseReceived;
public delegate void ResponseReceivedHandler (TEntity item);

We will raise this event when our process has completed. Before we raise an event, we always need to check whether it has been subscribed to by someone. It is good practice to use a design pattern called observer. A design pattern is a model of a solution for a common problem, and patterns help us to reuse the design of the software. To be compliant with the observer pattern, we only need to add to the code we wrote the following snippet, which raises the event only when the event has been subscribed to:

protected void OnResponseReceived (TEntity item)
{
    if (ResponseReceived != null)
    {
        ResponseReceived (item);
    }
}

The only thing we need to do in order to raise the ResponseReceived event is to call the method OnResponseReceived. Now we will write a base method that gives us a response after a number of seconds passed as a parameter, as seen in the following code:

public virtual async Task<TEntity> GetDelayedResponse (TEntity item, int seconds)
{
    await Task.Delay (seconds * 1000);
    OnResponseReceived (item);
    return item;
}

We will use this base method to simulate a delayed response.
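The generic base classes used above (BaseEntity, BaseData, BaseBusiness) are defined earlier in the book and are not part of this excerpt. A minimal sketch of the shape this article appears to rely on follows; every member here is an assumption inferred only from how the classes are used later (item.Key, menuManager.Read(), menuManager.Create(item)):

using System.Collections.Generic;

// Hypothetical sketch of the base classes assumed by this excerpt.
public class BaseEntity<TKey>
{
    // The presentation logic later assigns item.Key before saving.
    public TKey Key { get; set; }
}

public class BaseData<TEntity, TKey> where TEntity : BaseEntity<TKey>
{
    // A real implementation would wrap a SQLite.Net connection here.
    public virtual List<TEntity> Read ()
    {
        return new List<TEntity> (); // query the table
    }

    public virtual void Create (TEntity item)
    {
        // insert the row
    }
}

public class BaseBusiness<TEntity, TKey> where TEntity : BaseEntity<TKey>
{
    private readonly BaseData<TEntity, TKey> _data;

    public BaseBusiness (BaseData<TEntity, TKey> data)
    {
        _data = data;
    }

    public List<TEntity> Read () { return _data.Read (); }
    public void Create (TEntity item) { _data.Create (item); }
}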
Let's create the core service layer object for MenuItem. We can name it MenuItemService, and it will inherit BaseService as follows:

public class MenuItemService : BaseService<MenuItem, int>
{
    public MenuItemService ()
    {
    }
}

We now have all the core ingredients to start writing our UI. Add a new empty class named OrderPage in the Presentation subfolder of Core. We will insert here a label to read the results and three buttons to make the requests:

public class OrderPage : ContentPage
{
    public OrderPage () : base ()
    {
        Label response = new Label ();
        Button buttonSandwich = new Button { Text = "Order Sandwich" };
        Button buttonSoftdrink = new Button { Text = "Order Drink" };
        Button buttonShowReceipt = new Button { Text = "Show Receipt" };
        // ... insert here the presentation logic
    }
}

Presentation layer

We can now define the presentation logic by creating instances of the business object and the service object. We will also define our items:

MenuItemBusiness menuManager = new MenuItemBusiness ();
MenuItemService service = new MenuItemService ();

MenuItem sandwich = new MenuItem
{
    Name = "Sandwich",
    RequiredSeconds = 10,
    Price = 5
};

MenuItem softdrink = new MenuItem
{
    Name = "Sprite",
    RequiredSeconds = 5,
    Price = 2
};

Now we need to subscribe to the buttons' Clicked events to send the order to our service. The GetDelayedResponse method of the service simulates a slow response; in a real scenario the delay would depend on network availability and the time the remote server needs to process the request and send back a response:

buttonSandwich.Clicked += (sender, e) => {
    service.GetDelayedResponse (sandwich, sandwich.RequiredSeconds);
};

buttonSoftdrink.Clicked += (sender, e) => {
    service.GetDelayedResponse (softdrink, softdrink.RequiredSeconds);
};

Our service raises an event when the response is ready. We can subscribe to this event to present the results on the label and to save the items in our local database:

service.ResponseReceived += (item) => {
    // Append the received item to the label
    response.Text += String.Format ("\nReceived: {0} ({1}$)", item.Name, item.Price);
    // Read the data from the local database
    List<MenuItem> itemlist = menuManager.Read ();
    // Calculate the new database key for the item
    item.Key = itemlist.Count == 0 ? 0 : itemlist.Max (x => x.Key) + 1;
    // Add the item to the local database
    menuManager.Create (item);
};

We can now subscribe to the click event of the receipt button in order to display an alert with the number of items saved in the local database and the total price to pay:

buttonShowReceipt.Clicked += (object sender, EventArgs e) => {
    List<MenuItem> itemlist = menuManager.Read ();
    float total = itemlist.Sum (x => x.Price);
    DisplayAlert (
        "Receipt",
        String.Format ("Total: {0}$ ({1} items)", total, itemlist.Count),
        "OK");
};

The last step is to add the components to the content page:

Content = new StackLayout {
    VerticalOptions = LayoutOptions.CenterAndExpand,
    HorizontalOptions = LayoutOptions.CenterAndExpand,
    Children = { response, buttonSandwich, buttonSoftdrink, buttonShowReceipt }
};

At this point we are ready to run the iOS version and try it out. In order to make the Android version work, we need to set permissions to read and write the database file. To do that, we can double-click the Droid project and, under the Android Application section, check the ReadExternalStorage and WriteExternalStorage permissions.

In the OnCreate method of the MainActivity of the Droid project we also need to create the database file when it hasn't been created yet, and set the database path in the configuration file:
var path = System.Environment.GetFolderPath (
    System.Environment.SpecialFolder.ApplicationData
);
if (!Directory.Exists (path)) {
    Directory.CreateDirectory (path);
}

var filename = Path.Combine (path, "fastfood.db");
if (!File.Exists (filename)) {
    File.Create (filename);
}

Configuration.DatabasePath = filename;

Summary

In this article, we have learned how to create a project in Xamarin with the help of a service layer and a presentation layer. We have also seen how to set read and write permissions to make the Android version work.

Top 5 Must-have Android Applications

Packt
28 Jun 2011
6 min read
1. ES File Explorer

Description: ES File Explorer has everything you would expect from a file explorer: you can copy, paste, rename, and delete files. You can select multiple files at a time, just as you would on your PC or Mac, and you can also compress files to zip or gz. One of the best features of ES File Explorer is the ability to connect to network shares; this means you can connect to a shared folder on your LAN and transfer files to and from your Android device. Its user interface is very simple, quick, and easy to use.

Features:
- Multiselect and operate on files (copy, paste, cut/move, create, delete, rename, share/send) on the phone and computers
- Application manager: manage apps (install, uninstall, backup, shortcuts, category)
- View different file formats, photos, docs, and videos anywhere; supports third-party applications such as Documents To Go to open document files
- Text viewers and editors
- Bluetooth file transfer tool
- Access your home PC via Wi-Fi with SMB
- Compress and decompress ZIP files, unpack RAR files, create encrypted (AES 256-bit) ZIP files
- Manage the files on an FTP server like the ones on the SD card

Link: This application is available for download at https://market.android.com/details?id=com.estrongs.android.pop

2. GO SMS Pro

Description: GO SMS Pro is the ultimate messaging application for Android devices. There is a nice setting that launches a pop-up for incoming messages; users can then respond or delete directly within the window. The app supports batch actions for deleting, marking all, or backing up. It is highly customizable: everything from the text color to the background color, SMS ringtones for specific contacts, and themes for the SMS application can be customized. Another interesting feature you can take advantage of is the multiple plug-ins available as free downloads in the market. The Facebook chat plug-in makes it possible to receive and send Facebook chat messages, and since these messages are sent through the Facebook network, it does not affect your SMS messages at all.

Features:
- GO-MMS service (free): send a picture or music file to your friends (even if they have no GO SMS) through one SMS over 2G/3G/4G or Wi-Fi
- Many cool themes; also supports DIY themes and the Wallpaper Maker plug-in; fully customizable look; supports chat style and list style; changeable font
- SMS backup and restore, by all or by conversations; supports XML format; send backup files by email
- Scheduled SMS; group texting
- Settings backup and restore
- Notification with privacy mode and reminder notification
- Security lock, with support for locking by thread; blacklist

Link: This application is available for download at https://market.android.com/details?id=com.jb.gosms

3. Dolphin Browser HD

Description: Dolphin Browser HD is a professional mobile browser presented by Mobotap Inc. It is an advanced and highly customizable web browser that lets you browse the Web with great speed and efficiency. The main browsing screen is clean and uncluttered: other than the Home and Refresh buttons that flank the address bar, Dolphin HD doesn't clutter the main interface with quick-access buttons. In addition to tabbed browsing, bookmarking (which syncs to Google bookmarks), and multitouch zooming, it can also flag sites to read later and ties in to Delicious.
You can search content within a page, subscribe to RSS feeds through Google Reader, and share links with social networks. Another great feature of this app is the capability to download YouTube videos.

Features:
- Manage bookmarks
- Multitouch pinch zoom
- Unlimited tabs
- Colorful theme pack
- Gestures as shortcuts for common commands
- Save web pages to read offline with all images preserved

Link: This application is available for download at https://market.android.com/details?id=mobi.mgeek.TunnyBrowser

4. Winamp

Description: The one big advantage Winamp has over other playback apps is that it can sync tracks wirelessly to the device over your home network, so you don't have to fuss with a USB cable, making it easier to manage your music. You can set Winamp to sync automatically every time you connect your Android phone to Winamp, which makes it incredibly easy to send new playlists, purchases, and downloads to your portable player, sans USB. The interface is probably the most notable upgrade over the stock player: the playback controls remain on-screen pretty much wherever you are in the app. A small touch, but one that vastly improves the functionality of Winamp, as being able to control playback from any point is more useful than you might expect.

Features:
- iTunes library and playlist import
- Wireless and wired sync with the desktop Winamp Media Player
- Over 45k+ SHOUTcast Internet radio stations
- Integrated Android Search and "Listen to" voice actions
- Play queue management
- Playlists and playlist shortcuts
- Extras menu: Now Playing data interacts with other installed apps

Link: This application is available for download at https://market.android.com/details?id=com.nullsoft.winamp

5. Advanced Task Killer

Description: One click to kill applications running in the background. Advanced Task Killer is pretty simple and easy to use. It allows you to see which applications are currently running and offers the ability to terminate them quickly and easily, thus freeing up valuable memory for other processes. It also remembers your selections, so the next time you launch it, the previously spared apps remain unchecked while the previously selected ones are checked and ready to be shut down. You can choose to have Advanced Task Killer start at launch, and there's even the option to have it appear in your notifications bar for swift access.

Features:
- Kill multiple apps with one tap
- Adjust the security levels
- Comes with a notification bar icon
- Kill apps automatically by selecting an auto-kill level: Safe, Aggressive, or Crazy

Link: This application is available for download at https://market.android.com/details?id=com.rechild.advancedtaskkiller

Summary

In this article we discussed the top 5 must-have applications for your Android phone.

Conference App

Packt
09 Aug 2016
4 min read
In this article, Indermohan Singh, the author of Ionic 2 Blueprints, will create a conference app: an app which provides a list of speakers, the schedule, directions to the venue, ticket booking, and lots of other features. We will learn about the following things: using the device's native features, leveraging localStorage, Ionic menus and tabs, and using RxJS to build a perfect search filter.

A conference app is a companion application for conference attendees. In this application, we are using a Lanyrd JSON export and a hardcoded JSON file as our backend. We will have a tabs-and-side-menu interface, just like our e-commerce application. When a user opens our app, the app shows a tab interface with SpeakersPage open. It will have SchedulePage for the conference schedule and AboutPage for information about the conference. We will also make this app work offline, without any Internet connection, so your users will still be able to view speakers, see the schedule, and do other stuff without using the Internet at all.

JSON data

In the application, we have used a hardcoded JSON file as our database; but in the truest sense, we are actually using a JSON export of a Lanyrd event. I was trying to make this article using Lanyrd as the backend, but unfortunately, Lanyrd is mostly in maintenance mode, so I was not able to use it. In this article, I am still using a JSON export from Lanyrd, from a previous event. So, if you are able to get a JSON export for your event, you can just swap the URL and you are good to go. Those who don't want to use Lanyrd and instead want to use their own backend should have a look at the next section, where I describe the structure of the JSON used to make this app; you can create your REST API accordingly.

Understanding JSON

Let's understand the structure of the JSON export. The whole JSON database is an object with two keys, timezone and sessions, like the following:

{
  timezone: "Australia/Brisbane",
  sessions: [..]
}

The timezone key is just a string, but the sessions key is an array listing all the sessions of our conference. Items in the sessions array are divided according to the days of the conference. Each item represents a day of the conference and has the following structure:

{
  day: "Saturday 21st November",
  sessions: [..]
}

The sessions array of each day has the actual sessions as items. Each item has the following structure:

{
  start_time: "2015-11-21 09:30:00",
  topics: [],
  web_url: "url of event",
  times: "9:30am - 10:00am",
  id: "sdtpgq",
  types: [],
  end_time_epoch: 1448064000,
  speakers: [],
  title: "Talk Title",
  event_id: "event_id",
  space: "Space",
  day: "Saturday 21st November",
  end_time: "2015-11-21 10:00:00",
  other_url: null,
  start_time_epoch: 1448062200,
  abstract: "<p>Abstract of Talk</p>"
}

Here, the speakers array has a list of all the speakers of a session. We will use these speakers arrays to build a list of all speakers. That's all we need to understand about the JSON.

Defining the app

In this section, we will define the various functionalities of our application. We will also show the architecture of our app using an app flow diagram.
Functionalities

We will be including the following functionalities in our application:

- List of speakers
- Schedule details
- Search functionality using the session title, abstract, and speaker names
- Hide/show any day of the schedule
- A favorites list for sessions
- Adding favorite sessions to the device calendar
- The ability to share sessions with other applications
- Directions to the venue
- Offline working

App flow

This is how control flows inside our application:

- RootComponent: RootComponent is the root Ionic component. It is defined inside the /app/app.ts file.
- TabsPage: TabsPage acts as a container for our SpeakersPage, SchedulePage, and AboutPage.
- SpeakersPage: SpeakersPage shows a list of all the speakers of our conference.
- SchedulePage: SchedulePage shows us the schedule of our conference and offers various filter features.
- AboutPage: AboutPage provides information about the conference.
- SpeakerDetail: The SpeakerDetail page shows the details of a speaker and a list of his/her presentations at this conference.
- SessionDetail: The SessionDetail page shows the details of a session, with the title and abstract of the session.
- FavoritePage: FavoritePage shows a list of the user's favorite sessions.

Summary

In this article, we discussed the JSON file that will be used as the database in our app. We also defined the functionalities of our app and walked through its flow.

Internationalization and localization

Packt
03 Mar 2018
16 min read
In this article, Dmitry Sheiko, the author of the book Cross-Platform Desktop Application Development: Electron, Node, NW.js and React, covers the concepts of internationalization and localization, as well as the context menu and the system clipboard.

Internationalization, often abbreviated as i18n, implies a particular software design capable of adapting to the requirements of target local markets. In other words, if we want to distribute our application to markets other than the USA, we need to take care of translations, formatting of datetimes, numbers, addresses, and such.

Date format by country

Internationalization is a cross-cutting concern. When you are changing the locale, it usually affects multiple modules. So I suggest going with the observer pattern that we already examined while working on DirService. The ./js/Service/I18n.js file contains the following code:

const EventEmitter = require( "events" );

class I18nService extends EventEmitter {
  constructor(){
    super();
    this.locale = "en-US";
  }
  notify(){
    this.emit( "update" );
  }
}

As you see, we can change the locale by setting a new value to the locale property. As soon as we call the notify method, all the subscribed modules immediately respond. But locale is a public property, and therefore we have no control over its access and mutation. We can fix this by using getter and setter accessors. The ./js/Service/I18n.js file contains the following code:

//...
constructor(){
  super();
  this._locale = "en-US";
}

get locale(){
  return this._locale;
}

set locale( locale ){
  // validate locale...
  this._locale = locale;
}
//...

Now if we access the locale property of an I18n instance, the request gets delivered by the getter (get locale). When we set it a value, it goes through the setter (set locale). Thus we can add extra functionality, such as validation and logging, on property access and mutation.

Remember we have a combobox for selecting the language in the HTML. Why not give it a view? The ./js/View/LangSelector.js file contains the following code:

class LangSelectorView {
  constructor( boundingEl, i18n ){
    boundingEl.addEventListener( "change", this.onChanged.bind( this ), false );
    this.i18n = i18n;
  }
  onChanged( e ){
    const selectEl = e.target;
    this.i18n.locale = selectEl.value;
    this.i18n.notify();
  }
}

exports.LangSelectorView = LangSelectorView;

In the preceding code, we listen for change events on the combobox. When the event occurs, we change the locale property of the passed-in I18n instance and call notify to inform the subscribers. The ./js/app.js file contains the following code:

const i18nService = new I18nService(),
  { LangSelectorView } = require( "./js/View/LangSelector" );

new LangSelectorView(
  document.querySelector( "[data-bind=langSelector]" ),
  i18nService
);

Well, we can change the locale and trigger the event. What about the consuming modules? In the FileList view we have a static method formatTime that formats the passed-in timeString for printing. We can make it format the time in accordance with the currently chosen locale. The ./js/View/FileList.js file contains the following code:

constructor( boundingEl, dirService, i18nService ){
  //...
  this.i18n = i18nService;
  // Subscribe on i18nService updates
  i18nService.on( "update", () => this.update( dirService.getFileList() ) );
}

static formatTime( timeString, locale ){
  const date = new Date( Date.parse( timeString ) ),
    options = {
      year: "numeric",
      month: "numeric",
      day: "numeric",
      hour: "numeric",
      minute: "numeric",
      second: "numeric",
      hour12: false
    };
  return date.toLocaleString( locale, options );
}

update( collection ) {
  //...
  this.el.insertAdjacentHTML( "beforeend",
    `<li class="file-list__li" data-file="${fInfo.fileName}">
      <span class="file-list__li__name">${fInfo.fileName}</span>
      <span class="file-list__li__size">${filesize(fInfo.stats.size)}</span>
      <span class="file-list__li__time">${FileListView.formatTime( fInfo.stats.mtime, this.i18n.locale )}</span>
    </li>` );
  //...
}
//...

In the constructor, we subscribe to the I18n service's update event and refresh the file list every time the locale changes. The static method formatTime converts the passed-in string into a Date object and uses the Date.prototype.toLocaleString() method to format the datetime according to a given locale. This method belongs to the so-called ECMAScript Internationalization API (http://norbertlindenberg.com/2012/12/ecmascript-internationalization-api/index.html). The API describes methods of the built-in objects String, Date, and Number designed to format and compare localized data. Formatting a Date instance with toLocaleString for the English (United States) locale ("en-US") returns the date as follows:

3/17/2017, 13:42:23

However, if we feed the method the German locale ("de-DE"), we get quite a different result:

17.3.2017, 13:42:23

(A standalone sketch of this locale switch follows at the end of this section.)

To put it into action, we set an identifier on the combobox. The ./index.html file contains the following code:

..
<select class="footer__select" data-bind="langSelector">
..

And of course, we have to create an instance of the I18n service and pass it to LangSelectorView and FileListView. The ./js/app.js file contains the following code:

// ...
const { I18nService } = require( "./js/Service/I18n" ),
  { LangSelectorView } = require( "./js/View/LangSelector" ),
  i18nService = new I18nService();

new LangSelectorView( document.querySelector( "[data-bind=langSelector]" ), i18nService );
// ...
new FileListView( document.querySelector( "[data-bind=fileList]" ), dirService, i18nService );

Now we start the application. As we change the language in the combobox, the file modification dates adjust accordingly.

Multilingual support

Localizing dates and numbers is a good thing, but it would be more exciting to provide translations into multiple languages. We have a number of terms across the application, namely the column titles of the file list and the tooltips (via the title attribute) on the windowing action buttons. What we need is a dictionary. Normally it implies sets of token-translation pairs mapped to language codes or locales. Thus, when you request a term from the translation service, it can correlate it to a matching translation according to the currently used language/locale. Here I suggest making the dictionary a static module that can be loaded with the require function.
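Before moving on to the dictionary, here is the standalone sketch of the locale switch promised above. It is a minimal, self-contained example that can be run in a plain Node.js console (provided full ICU data is available); the date value is an arbitrary sample, not taken from the application:

// Format one fixed date for two locales and compare the output.
const date = new Date( "2017-03-17T13:42:23" ),
  options = {
    year: "numeric", month: "numeric", day: "numeric",
    hour: "numeric", minute: "numeric", second: "numeric",
    hour12: false
  };

console.log( date.toLocaleString( "en-US", options ) ); // e.g. 3/17/2017, 13:42:23
console.log( date.toLocaleString( "de-DE", options ) ); // e.g. 17.3.2017, 13:42:23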
The ./js/Data/dictionary.js file contains the following code:

exports.dictionary = {
  "en-US": {
    NAME: "Name",
    SIZE: "Size",
    MODIFIED: "Modified",
    MINIMIZE_WIN: "Minimize window",
    RESTORE_WIN: "Restore window",
    MAXIMIZE_WIN: "Maximize window",
    CLOSE_WIN: "Close window"
  },
  "de-DE": {
    NAME: "Dateiname",
    SIZE: "Grösse",
    MODIFIED: "Geändert am",
    MINIMIZE_WIN: "Fenster minimieren",
    RESTORE_WIN: "Fenster wiederherstellen",
    MAXIMIZE_WIN: "Fenster maximieren",
    CLOSE_WIN: "Fenster schliessen"
  }
};

So we have two locales with translations per term. We are going to inject the dictionary as a dependency into our I18n service. The ./js/Service/I18n.js file contains the following code:

//...
constructor( dictionary ){
  super();
  this.dictionary = dictionary;
  this._locale = "en-US";
}
translate( token, defaultValue ) {
  const dictionary = this.dictionary[ this._locale ];
  return dictionary[ token ] || defaultValue;
}
//...

We also added a new method, translate, which accepts two parameters: a token and a default translation. The first parameter can be one of the keys from the dictionary, such as NAME. The second one is a guarding value for the case when the requested token does not yet exist in the dictionary. Thus we still get meaningful text, at least in English. Let's see how we can use this new method. The ./js/View/FileList.js file contains the following code:

//...
update( collection ) {
  this.el.innerHTML = `<li class="file-list__li file-list__head">
    <span class="file-list__li__name">${this.i18n.translate( "NAME", "Name" )}</span>
    <span class="file-list__li__size">${this.i18n.translate( "SIZE", "Size" )}</span>
    <span class="file-list__li__time">${this.i18n.translate( "MODIFIED", "Modified" )}</span>
  </li>`;
  //...

In the FileList view we replace the hardcoded column titles with calls to the translate method of the I18n instance, meaning that every time the view updates, it receives the actual translations. We shall not forget about the TitleBarActions view, where we have the windowing action buttons. The ./js/View/TitleBarActions.js file contains the following code:

constructor( boundingEl, i18nService ){
  this.i18n = i18nService;
  //...
  // Subscribe on i18nService updates
  i18nService.on( "update", () => this.translate() );
}
translate(){
  this.unmaximizeEl.title = this.i18n.translate( "RESTORE_WIN", "Restore window" );
  this.maximizeEl.title = this.i18n.translate( "MAXIMIZE_WIN", "Maximize window" );
  this.minimizeEl.title = this.i18n.translate( "MINIMIZE_WIN", "Minimize window" );
  this.closeEl.title = this.i18n.translate( "CLOSE_WIN", "Close window" );
}

Here we add the method translate, which updates the buttons' title attributes with the actual translations. We subscribe to the i18n update event to call the method every time the user changes the locale.

Context menu

Well, with our application we can already navigate through the file system and open files. Yet, one might expect more of a file explorer. We can add some file-related actions such as delete and copy/paste. Usually these tasks are available via the context menu, which gives us a good opportunity to examine how to build one with NW.js. With the environment integration API we can create an instance of the system menu (http://docs.nwjs.io/en/latest/References/Menu/). Then we compose objects representing menu items and attach them to the menu instance (http://docs.nwjs.io/en/latest/References/MenuItem/).
This menu can be shown in an arbitrary position:

const menu = new nw.Menu(),
  menuItem = new nw.MenuItem({
    label: "Say hello",
    click: () => console.log( "hello!" )
  });

menu.append( menuItem );
menu.popup( 10, 10 );

Yet our task is more specific. We have to display the menu on right mouse click, at the position of the cursor. We achieve that by subscribing a handler to the contextmenu DOM event:

document.addEventListener( "contextmenu", ( e ) => {
  console.log( `Show menu in position ${e.x}, ${e.y}` );
});

Now, whenever we right-click within the application window, the menu shows up. That's not exactly what we want, is it? We need it only when the cursor resides within a particular region, for instance, when it hovers over a file name. That means we have to test whether the target element matches our conditions:

document.addEventListener( "contextmenu", ( e ) => {
  const el = e.target;
  if ( el instanceof HTMLElement && el.parentNode.dataset.file ) {
    console.log( `Show menu in position ${e.x}, ${e.y}` );
  }
});

Here we ignore the event until the cursor hovers over any cell of a file table row, given that every row is a list item generated by the FileList view and provided with a value for the data-file attribute.

This passage explains pretty much how to build a system menu and how to attach it to the file list. But before starting on a module capable of creating the menu, we need a service to handle file operations. The ./js/Service/File.js file contains the following code:

const fs = require( "fs" ),
  path = require( "path" ),
  // Copy file helper
  cp = ( from, toDir, done ) => {
    const basename = path.basename( from ),
      to = path.join( toDir, basename ),
      write = fs.createWriteStream( to );

    fs.createReadStream( from )
      .pipe( write );
    write
      .on( "finish", done );
  };

class FileService {

  constructor( dirService ){
    this.dir = dirService;
    this.copiedFile = null;
  }

  remove( file ){
    fs.unlinkSync( this.dir.getFile( file ) );
    this.dir.notify();
  }

  paste(){
    const file = this.copiedFile;
    if ( fs.lstatSync( file ).isFile() ){
      cp( file, this.dir.getDir(), () => this.dir.notify() );
    }
  }

  copy( file ){
    this.copiedFile = this.dir.getFile( file );
  }

  open( file ){
    nw.Shell.openItem( this.dir.getFile( file ) );
  }

  showInFolder( file ){
    nw.Shell.showItemInFolder( this.dir.getFile( file ) );
  }
};

exports.FileService = FileService;

What's going on here? FileService receives an instance of DirService as a constructor argument. It uses the instance to obtain the full path to a file by name (this.dir.getFile( file )). It also exploits the notify method of the instance to request that all the views subscribed to DirService update. The method showInFolder calls the corresponding method of nw.Shell to show the file in its parent folder with the system file manager. As you can guess, the method remove deletes the file. As for copy/paste, we do the following trick: when the user clicks copy, we store the target file path in the copiedFile property, so when the user next clicks paste, we can use it to copy that file to the (possibly changed) current location. The method open evidently opens the file with the default associated program. That is what we did in the FileList view directly. Actually, this action belongs in FileService, so we refactor the view to use the service. The ./js/View/FileList.js file contains the following code:

constructor( boundingEl, dirService, i18nService, fileService ){
  this.file = fileService;
  //...
}

bindUi(){
  //...
  this.file.open( el.dataset.file );
  //...
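  // open() now delegates to FileService, which wraps nw.Shell.openItem,
  // so the view no longer calls the shell API directly.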
}

Now we have a module to handle the context menu for a selected file. The module will subscribe to the contextmenu DOM event and build a menu when the user right-clicks on a file. This menu will contain the items Show Item in the Folder, Copy, Paste, and Delete, with Copy and Paste separated from the other items by delimiters. Besides, Paste will be disabled until we store a file with Copy. The source code follows. The ./js/View/ContextMenu.js file contains the following code:

class ConextMenuView {
  constructor( fileService, i18nService ){
    this.file = fileService;
    this.i18n = i18nService;
    this.attach();
  }

  getItems( fileName ){
    const file = this.file,
      isCopied = Boolean( file.copiedFile );
    return [
      {
        label: this.i18n.translate( "SHOW_FILE_IN_FOLDER", "Show Item in the Folder" ),
        enabled: Boolean( fileName ),
        click: () => file.showInFolder( fileName )
      },
      {
        type: "separator"
      },
      {
        label: this.i18n.translate( "COPY", "Copy" ),
        enabled: Boolean( fileName ),
        click: () => file.copy( fileName )
      },
      {
        label: this.i18n.translate( "PASTE", "Paste" ),
        enabled: isCopied,
        click: () => file.paste()
      },
      {
        type: "separator"
      },
      {
        label: this.i18n.translate( "DELETE", "Delete" ),
        enabled: Boolean( fileName ),
        click: () => file.remove( fileName )
      }
    ];
  }

  render( fileName ){
    const menu = new nw.Menu();
    this.getItems( fileName ).forEach(( item ) => menu.append( new nw.MenuItem( item )));
    return menu;
  }

  attach(){
    document.addEventListener( "contextmenu", ( e ) => {
      const el = e.target;
      if ( !( el instanceof HTMLElement ) ) {
        return;
      }
      if ( el.classList.contains( "file-list" ) ) {
        e.preventDefault();
        this.render()
          .popup( e.x, e.y );
      }
      // If a child of an element matching [data-file]
      if ( el.parentNode.dataset.file ) {
        e.preventDefault();
        this.render( el.parentNode.dataset.file )
          .popup( e.x, e.y );
      }
    });
  }
}

exports.ConextMenuView = ConextMenuView;

So, in the ConextMenuView constructor, we receive instances of FileService and I18nService. During construction we also call the attach method, which subscribes to the contextmenu DOM event, creates the menu, and shows it at the position of the mouse cursor. The event gets ignored unless the cursor hovers over a file or resides in an empty area of the file list component. When the user right-clicks the file list, the menu still appears, but with all items disabled except Paste (in case a file was copied before). The render method creates an instance of the menu and populates it with nw.MenuItems created by the getItems method. That method creates an array representing the menu items, whose elements are object literals. The label property accepts the translation for the item caption. The enabled property defines the state of the item depending on our cases (whether we have a copied file, whether the cursor is on a file). Finally, the click property expects the handler for the click event. Now we need to enable our new components in the main module. The ./js/app.js file contains the following code:

const { FileService } = require( "./js/Service/File" ),
  { ConextMenuView } = require( "./js/View/ConextMenu" ),
  fileService = new FileService( dirService );

new FileListView( document.querySelector( "[data-bind=fileList]" ), dirService, i18nService, fileService );
new ConextMenuView( fileService, i18nService );

Let's now run the application, right-click on a file, and voilà! We have the context menu and the new file actions.

System clipboard

Usually copy/paste functionality involves the system clipboard. NW.js provides an API to control it (http://docs.nwjs.io/en/latest/References/Clipboard/). Unfortunately, it's quite limited; we cannot transfer an arbitrary file between applications, which you might expect of a file manager. Yet some things are still available to us.

Transferring text

In order to examine text transfer with the clipboard, we modify the copy method of FileService:

copy( file ){
  this.copiedFile = this.dir.getFile( file );
  const clipboard = nw.Clipboard.get();
  clipboard.set( this.copiedFile, "text" );
}

What does it do? As soon as we obtain the file's full path, we create an instance of nw.Clipboard and save the file path there as text. So now, after copying a file within the File Explorer, we can switch to an external program (for example, a text editor) and paste the copied path from the clipboard.

Transferring graphics

That doesn't look very handy, does it? It would be more interesting if we could copy/paste a file itself. Unfortunately, NW.js doesn't give us many options when it comes to file exchange. Yet we can transfer PNG and JPEG images between an NW.js application and external programs. The ./js/Service/File.js file contains the following code:

//...
copyImage( file, type ){
  const clip = nw.Clipboard.get(),
    // load file content as Base64
    data = fs.readFileSync( file ).toString( "base64" ),
    // image as HTML
    html = `<img src="file:///${encodeURI( data.replace( /^\//, "" ) )}">`;

  // write both options (raw image and HTML) to the clipboard
  clip.set([
    { type, data: data, raw: true },
    { type: "html", data: html }
  ]);
}

copy( file ){
  this.copiedFile = this.dir.getFile( file );
  const ext = path.parse( this.copiedFile ).ext.substr( 1 );
  switch ( ext ){
    case "jpg":
    case "jpeg":
      return this.copyImage( this.copiedFile, "jpeg" );
    case "png":
      return this.copyImage( this.copiedFile, "png" );
  }
}
//...

We extended our FileService with the private method copyImage. It reads a given file, converts its contents to Base64, and passes the resulting code to a clipboard instance. In addition, it creates HTML with an image tag holding the Base64-encoded image in a data Uniform Resource Identifier (URI). Now, after copying an image (PNG or JPEG) in the File Explorer, we can paste it into an external program, such as a graphical editor or a text processor.

Receiving text and graphics

We've learned how to pass text and graphics from our NW.js application to external programs. But how can we receive data from outside? As you can guess, it is accessible through the get method of nw.Clipboard. Text can be retrieved as simply as this:

const clip = nw.Clipboard.get();
console.log( clip.get( "text" ) );

When graphics are put in the clipboard, we can get them with NW.js only as Base64-encoded content or as HTML. To see it in practice, we add a few methods to FileService. The ./js/Service/File.js file contains the following code:

//...
hasImageInClipboard(){
  const clip = nw.Clipboard.get();
  return clip.readAvailableTypes().indexOf( "png" ) !== -1;
}

pasteFromClipboard(){
  const clip = nw.Clipboard.get();
  if ( this.hasImageInClipboard() ) {
    const base64 = clip.get( "png", true ),
      binary = Buffer.from( base64, "base64" ),
      filename = Date.now() + "--img.png";

    fs.writeFileSync( this.dir.getFile( filename ), binary );
    this.dir.notify();
  }
}
//...

The method hasImageInClipboard checks whether the clipboard holds any graphics. The method pasteFromClipboard takes graphical content from the clipboard as Base64-encoded PNG, converts the content into binary code, writes it into a file, and requests that DirService subscribers update. To make use of these methods, we need to edit the ContextMenu view. The ./js/View/ContextMenu.js file contains the following code:

getItems( fileName ){
  const file = this.file,
    isCopied = Boolean( file.copiedFile );
  return [
    //...
    {
      label: this.i18n.translate( "PASTE_FROM_CLIPBOARD", "Paste image from clipboard" ),
      enabled: file.hasImageInClipboard(),
      click: () => file.pasteFromClipboard()
    },
    //...
  ];
}

We add to the menu a new item, Paste image from clipboard, which is enabled only when there are graphics in the clipboard.

Summary

In this article, we covered the concepts of internationalization and localization, and also covered the context menu and the system clipboard in detail.

A decade of Android: Slayer of Blackberry, challenger of iPhone, mother of the modern mobile ecosystem

Sandesh Deshpande
06 Oct 2018
6 min read
If someone says Eclair, Honeycomb, Ice Cream Sandwich, or Jelly Bean, then apart from getting a sugar rush, you will probably think of the Android OS. From a newly launched OS filled with apprehensions to the biggest and most loved operating system in history, Android has seen it all. The OS which powers our phones and makes our everyday life simpler recently celebrated its 10th anniversary.

Android's rise from the ashes

The journey to becoming the most popular mobile OS since its launch in 2008 was not easy for Android. Back then, it competed with iOS and BlackBerry, which were considered the go-to smartphones of the time. Google's idea was to give users a BlackBerry-like experience, as the 'G1' had a full-sized physical QWERTY keypad just like a BlackBerry. But the G1 had some limitations: it could play videos only on YouTube, as it didn't have any inbuilt video player app, and the Android Market (now Google Play) had just a handful of apps. Though the idea of giving users a BlackBerry-like experience was spot on, it was not a hit with users, as by then Apple had made the touchscreen all the rage with its iPhone. But one thing Google did right with the Android OS, which its competitors didn't offer, was customization, and that's where Google scored a home run.

BlackBerry and the iPhone were great, and users loved them. But both tied users into their ecosystems. Motorola saw the potential for customization, and it adopted Android to launch the Motorola Droid in 2009. This is when the Android OS came of age and started competing with Apple's iOS. With the Android OS, people could customize their phones, and with its open source platform, developers could tweak the base OS and customize it to their liking. This resulted in users having options to choose themes, wallpapers, and launchers. This change pioneered the demand for customization, which was later adopted in the iPhone as well.

By virtue of it being an open platform, and thanks to regular updates from Google, there was a huge surge in Android adoption, and mobile manufacturers like Motorola, HTC, and Samsung launched their devices powered by the Android OS. Because of this rapid adoption of Android by a large number of manufacturers, Android became the most popular mobile platform, beating Nokia's Symbian OS by the end of 2010. This Android phenomenon saved manufacturers like HTC, Motorola, Samsung, and Sony from losing significant market share to the then mobile handset market leaders: Nokia, BlackBerry, and Apple. They sensed the change in user preferences and adopted the Android OS. Nokia, on the other hand, didn't adopt Android and stuck to its Symbian OS, which resulted in customer and market loss.

Android: sugar and spice and everything nice

In the subsequent years, Google launched Android versions like Cupcake, Donut, Eclair, Froyo, Gingerbread, Ice Cream Sandwich, Jelly Bean, KitKat, Lollipop, and Marshmallow. The Android team sure love their sugar, evident from all the Android operating systems being named after desserts. It's not new that tech companies pick unique names for their software versions; for instance, Apple names its OS versions after cats, like Tiger, Leopard, and Snow Leopard. But Google has never officially revealed why its OS versions are named after desserts. Just in case that wasn't nerdy enough, Google put these sugary names in alphabetical order. Each update came with some cool features. Here's a quick list of some popular features with their respective versions.
- Eclair (2009): Phones which came with Eclair on board had digital zoom and a flash for photos for the first time ever.
- Honeycomb (2011): Honeycomb was compatible with tablets without any major glitches.
- Ice Cream Sandwich (2011): Probably not as sophisticated as today's, but Ice Cream Sandwich had facial recognition and also a feature to take screenshots.
- Lollipop (2014): With Android Lollipop, rounded icons were introduced in Android for the first time.
- Nougat (2016): With the Nougat update, Google introduced more natural-looking emojis, including skin tone modifiers and Unicode 9 emojis, and removed previously gender-neutral characters.
- Pie (2018): The latest Android update, Android Pie, also comes with a bunch of cool features. However, the standout feature in this release is indoor navigation, which enables indoor GPS-style tracking by determining your location within a building and facilitating turn-by-turn directions to help you navigate indoors.

Android's greatest strength is probably its large open platform community, which helps developers build apps for Android. Though developers can write Android apps in any Java virtual machine (JVM) compatible programming language, Google's primary language for writing Android apps has been Java (besides C++). At Google I/O 2018, Google announced that it will officially support Kotlin on Android as a "first-class" language. Kotlin is a fairly new programming language built by JetBrains, the company which also, coincidentally, develops the IDE that Android Studio is built on. Apart from rich features and a strong open platform community, Google also enhanced security with the newer Android versions, which made it unbeatable. Manufacturers like Samsung leveraged the power of Android with their Galaxy S series, making them one of the leading mobile manufacturers. Today, Google has proven itself a strong player in the mobile market, not only with the Android OS but also with flagship phones like the Pixel series, which receive updates before any other smartphone running Android.

Android today: love it, hate it, but you can't escape it

Today, with a staggering 2 billion active devices, Android is by far the market leader among mobile OS platforms. A decade ago, no one anticipated that one mobile OS could have such dominance. Google has developed the OS for televisions, smartwatches, smart home devices, and VR headsets, and has even developed Android Auto for cars. As Google showcased at Google I/O 2018 with machine-learning features such as Smart Compose for Gmail and Google Duplex for Google Assistant — with Google Assistant now being introduced on almost all the latest Android phones — it is making Android more powerful than ever.

However, all is not sunshine and rainbows in the Android nation. In July this year, the EU slapped Google with a $5 billion fine as an outcome of its antitrust investigations around Android. Google was found guilty of imposing illegal restrictions on Android device manufacturers and network operators since 2011, in an attempt to funnel all the traffic from these devices to the Google search engine. It is ironic that the very restrictive locked-in ecosystems that Android rebelled against in its early days are something it is now increasingly endorsing. Furthermore, as interfaces become less text- and screen-based and more touch-, voice-, and gesture-based, Google does seem to realize Android's limitations to some extent.
Google has been investing a lot into Project Fuchsia lately, which many believe could be Android's replacement in the future. With the tech landscape changing more rapidly than ever, it will be interesting to see what the future holds for Android, but for now, Android is here to stay.


Upgrading, packaging, and publishing your React VR app

Sunith Shetty
08 Jun 2018
19 min read
It is fun to develop and experience virtual worlds at home. Eventually, though, you want the world to see your creation. To do that, we need to package and publish our app. In the course of development, upgrades to React may come along; before publishing, you will need to decide whether to "code freeze" and ship with a stable version, or upgrade to a new version. This is a design decision. In today's tutorial, we will learn to upgrade React VR and bundle the code in order to publish on the web. This article is an excerpt from a book written by John Gwinner titled Getting Started with React VR. The book will get you well-versed with Virtual Reality (VR) and React VR components so you can create your own VR apps.

One of the neat things, although it can be frustrating, is that web projects are frequently updated. There are a couple of different ways to do an upgrade:

- You can install/create a new app with the same name, then go to your old app and copy everything over. This is a facelift upgrade, or rip and replace.
- You can do an update. Mostly, this is an update to package.json; you then delete node_modules and rebuild it. This is an upgrade in place.

It is up to you which method you use, but the major difference is that an upgrade in place is somewhat easier — no source code to modify and copy — but it may or may not work. A facelift upgrade also relies on you using the correct react-vr-cli. There is a notice that runs whenever you run React VR from the Command Prompt that will tell you whether it's old. The error or warning that comes up about an upgrade may fly by quickly; startup takes a while to run, so you may go away for a cup of coffee. Pay attention to red lines, seriously.

To do an upgrade in place, you will typically get an update notification from Git if you have subscribed to the project. If you haven't, you should go to http://bit.ly/ReactVR, create an account (if you don't have one already), and click on the eyeball icon to join the watch list. Then, you will get an email every time there is an upgrade. We will cover the most straightforward way to do an upgrade — an upgrade in place — first.

Upgrading in place

How do you know what version of React you have installed? From a Node.js prompt, type this:

npm list react-vr

Also, check the version of react-vr-web:

npm list react-vr-web

Check the version of react-vr-cli (the command-line interface, really only for creating the hello world app):

npm list react-vr-cli

Check the version of ovrui (Open VR's user interface):

npm list ovrui

You can check these against the versions in the documentation. If you've subscribed to React VR on GitHub (and you should!), then you will get an email telling you that there is an upgrade. Note that the CLI will also tell you if it is out of date, although this only applies when you are creating a new application (folder/website). The release notes are at http://bit.ly/VRReleases. There, you will find instructions to upgrade. The upgrade instructions usually have you do the following:

1. Delete your node_modules directory.
2. Open your package.json file.
3. Update react-vr, react-vr-web, and ovrui to the new version number, for example, 2.0.0.
4. Update react to "a.b.c".
5. Update react-native to "~d.e.f".
6. Update three to "^g.h.k".
7. Run npm install or yarn.

Note the ~ and ^ symbols: ~version means "approximately equivalent to version" and ^version means "compatible with version"; a short sketch illustrating the difference follows below.
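To make the two range markers concrete, here is a small sketch using the npm semver package (an assumption on my part: it is installed separately, with npm install semver); the version numbers are made-up examples:

const semver = require( "semver" );

// "~0.48.0" — approximately equivalent: patch-level changes only
console.log( semver.satisfies( "0.48.5", "~0.48.0" ) ); // true
console.log( semver.satisfies( "0.49.0", "~0.48.0" ) ); // false

// "^0.87.0" — compatible with version: nothing past the left-most non-zero digit
console.log( semver.satisfies( "0.87.4", "^0.87.0" ) ); // true
console.log( semver.satisfies( "0.88.0", "^0.87.0" ) ); // false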
These range markers help, as you may have other packages that want other versions of react-native and three, specifically. To get the values of {a...k}, refer to the release notes.

I have also found that you may need to include these modules in the devDependencies section of package.json:

"react-devtools": "^2.5.2",
"react-test-renderer": "16.0.0",

You may see this error:

module.js:529
    throw err;
    ^
Error: Cannot find module './node_modules/react-native/packager/blacklist'

If you do, make the following change in the rn-cli.config.js file in your project's root folder. Replace the line:

var blacklist = require('./node_modules/react-native/packager/blacklist');

with:

var blacklist = require('./node_modules/metro-bundler/src/blacklist');

Third-party dependencies

If you have been experimenting and adding modules with npm install <something>, you may find, after an upgrade, that things do not work. The package.json file also needs to know about all the additional packages you installed during experimentation. This is the project way (the npm way) to ensure that Node.js knows we need a particular piece of software. If you have this issue, you'll need to either repeat the install with the --save parameter, or edit the dependencies section in your package.json file:

{
  "name": "WalkInAMaze",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "node -e \"console.log('open browser at http://localhost:8081/vr/\\n\\n');\" && node node_modules/react-native/local-cli/cli.js start",
    "bundle": "node node_modules/react-vr/scripts/bundle.js",
    "open": "node -e \"require('xopen')('http://localhost:8081/vr/')\"",
    "devtools": "react-devtools",
    "test": "jest"
  },
  "dependencies": {
    "ovrui": "~2.0.0",
    "react": "16.0.0",
    "react-native": "~0.48.0",
    "three": "^0.87.0",
    "react-vr": "~2.0.0",
    "react-vr-web": "~2.0.0",
    "mersenne-twister": "^1.1.0"
  },
  "devDependencies": {
    "babel-jest": "^19.0.0",
    "babel-preset-react-native": "^1.9.1",
    "jest": "^19.0.2",
    "react-devtools": "^2.5.2",
    "react-test-renderer": "16.0.0",
    "xopen": "1.0.0"
  },
  "jest": {
    "preset": "react-vr"
  }
}

Again, this is the manual way; a better way is to use npm install <package> --save. The --save qualifier records the newly installed package in package.json. The manual edits can be handy to ensure that you've got the right versions if you get a version mismatch. If you mess around with installing and removing enough packages, you will eventually mess up your modules. If you get errors even after removing node_modules, issue these commands:

npm cache clean --force
npm start -- --reset-cache

The cache clean won't do it by itself; you need the reset-cache, otherwise the problem packages will still be saved, even if they don't physically exist!

Really broken upgrades – rip and replace

If, however, after all that work, your upgrade still does not work, all is not lost. We can do a rip and replace upgrade. Note that this is sort of a last resort, but it does work fairly well. Follow these steps:

1. Ensure that your react-vr-cli package is up to date, globally:

[F:\ReactVR]npm install react-vr-cli -g
C:\Users\John\AppData\Roaming\npm\react-vr -> C:\Users\John\AppData\Roaming\npm\node_modules\react-vr-cli\index.js
+ react-vr-cli@0.3.6
updated 8 packages in 2.83s

This is important, as when there is a new version of React, you may not have the most up-to-date react-vr-cli. It will tell you when you use it that there is a newer version out, but that line frequently scrolls by; if you get bored and don't notice it, you can spend a lot of time trying to install an updated version, to no avail.
An npm install generates a lot of verbiage, but it is important to read what it says, especially the red formatted lines.

2. Ensure that all CLI (DOS) windows, editing sessions, running Node.js CLIs, and so on, are closed. (You shouldn't need to reboot, however; just close everything using the old directory.)
3. Rename the old code to MyAppName140 (add a version number to the end of the old react-vr directory).
4. Create the application, using react-vr init MyAppName — in other words, the original app name.
5. The next step is easiest using a diff program (refer to http://bit.ly/WinDiff). I use Beyond Compare, but there are other ones too. Choose one and install it, if needed.
6. Compare the two directories, .\MyAppName (new) and .\MyAppName140, and see what files have changed.
7. Move over any new files from your old app, including assets (you can probably copy over the entire static_assets folder).
8. Merge any files that have changed, except package.json. Generally, you will need to merge these files: index.vr.js and client.js (if you changed it).
9. For package.json, see what lines have been added, and install those packages in the new app via npm install <missed package> --save, or start the app and see what is missing.
10. Remove any files seeded by the hello world app, such as chess-world.jpg (unless you are using that background, of course).
11. Usually, you don't change the rn-cli.config.js file (unless you modified the seeded version).

Most code will move directly over. Ensure that you change the application name if you changed the directory name, but with the preceding directions, you won't have to. The preceding list of upgrade steps may be slightly easier to follow than an automatic upgrade if there are massive changes to React VR; it will require some picking through source files. The source is pretty straightforward, so this should be easy in practice. I found that these techniques work best if the automatic upgrade did not work. As mentioned earlier, the time to do a major upgrade is probably not right before publishing the app, unless there is some new feature you need. You want to test your app adequately to ensure that there aren't any bugs. I'm including the upgrade steps here, though, not because you should do them right before publishing.

Getting your code ready to publish

Honestly, you should never put off organizing your clothes until — oh, wait, we're talking about code. You should never put off organizing your code until the night you want to ship it. Even the code you think is throwaway may end up in production. Learn good coding habits and style from the beginning.

Good code organization

Good code, from the very start, is very important for many reasons:

- If your code uses sloppy indentation, it's more difficult to read. Many code editors, such as Visual Studio Code, Atom, and WebStorm, will format code for you, but don't rely on these tools.
- Poor naming conventions can hide problems. An improper case on variables can hide problems, such as using this.State instead of this.state.
- Most of the time spent coding, as much as 80%, is in maintenance. If you can't read the code, you can't maintain it.
- When you're a starting-out programmer, you frequently think you'll always be able to read your own code, but when you pick up a piece years later, say "Who wrote this junk?", and then realize it was you, you will quit doing things like a, b, c, d variable names and the like.
- Most software at some point is maintained, read, copied, or used by someone other than the author.
Most programmers think code standards are for "the other guy," yet complain when they have to read poorly written code. So who writes it well? Most programmers will immediately ask for the code documentation and roll their eyes when they don't find it. I usually ask to see the documentation they wrote for their last project; every programmer I've hired usually gives me a deer-in-the-headlights look. This is why I usually require good comments in the code.

A good comment is not something like this:

//count from 99 to 1
for (i=99; i>0; i--)
...

A good comment is this:

//we are counting bottles of beer
for (i=99; i>0; i--)
...

Cleaning the lint trap (checking code standards)

When you wash clothes, lint builds up and will eventually clog your washing machine or dryer, or cause a fire. In the PC world, old code, poorly typed names, and the like can also build up. Refactoring is one way to clean up the code. I highly recommend that you use some form of version control, such as Git or Bitbucket, to check in your code; while refactoring, it's quite possible to totally mess up your code, and if you don't use version control, you may lose a lot of work.

A great way to review your work before you publish is to use a linter. Linters go through your code and point out problems (crud) and improper syntax, flag things that may work differently than you intend, and generally try to pick up your room after you, like your mom does. While you might not like it when your mom does that, these tools are invaluable. Computers are, after all, very picky, so why not use the machines against each other? One of the most common ways to let software check your JavaScript is a program called ESLint. You can read about it at http://bit.ly/JSLinter. To install ESLint, you can do it via npm like most packages:

npm install eslint --save-dev

The --save-dev option puts the requirement in your project's development dependencies; once you've published your app, you won't need to pack the ESLint information with your project. There are a number of other things you need to do to get ESLint to work properly; read the configuration pages and go through the tutorials. A lot depends on which IDE you use. You can use ESLint with Visual Studio, for example.

Once you've installed ESLint, you need to configure a local configuration file. Do this with eslint --init. The --init command will display a prompt that will ask you how to configure the rules it will follow. It will ask a series of questions, including what style to use. AirBNB is fairly common, although you can use others; there's no wrong choice. If you are working for a company, they may already have standards, so check with management. One of the prompts will ask if you need React.

React VR coding style

Coding style can be nearly religious, but in the JavaScript and React world, some standards are very common. AirBNB has a good, fairly well-regarded style guide at http://bit.ly/JStyle. For React VR, some style options to consider are as follows:

- Use lowercase for the first letter of a variable name; in other words, this.props.currentX, not this.props.CurrentX, and don't use underscores (this is called camelCase).
- Use PascalCase only when naming constructors or classes. As you're using PascalCase for files, make the filename match the class, so import MyClass from './MyClass'.
- Be careful about 0 vs {0}. In general, learn JavaScript and React.
- Always use const or let to declare variables, to avoid polluting the global namespace.
- Avoid using ++ and --.
That last one was hard for me, being a C++ programmer. Hopefully, by the time you read this, I've fixed it in the source examples; if not, do as I say, not as I do! Also learn the difference between == and ===, and use them properly — another thing that is new for C++ and C# programmers. In general, I highly recommend that you pore over these coding styles and use a linter when you write your code.

Third-party dependencies

For your published website/application to really work reliably, we also need to update package.json; this is sort of the "project" way to ensure that Node.js knows we need a particular piece of software. We will edit the dependencies section to add the last line, mersenne-twister (bold emphasis mine, although bold won't show up in a text editor, obviously!):

{
  "name": "WalkInAMaze",
  "version": "0.0.1",
  "private": true,
  "scripts": {
    "start": "node -e \"console.log('open browser at http://localhost:8081/vr/\\n\\n');\" && node node_modules/react-native/local-cli/cli.js start",
    "bundle": "node node_modules/react-vr/scripts/bundle.js",
    "open": "node -e \"require('xopen')('http://localhost:8081/vr/')\"",
    "devtools": "react-devtools",
    "test": "jest"
  },
  "dependencies": {
    "ovrui": "~2.0.0",
    "react": "16.0.0",
    "react-native": "~0.48.0",
    "three": "^0.87.0",
    "react-vr": "~2.0.0",
    "react-vr-web": "~2.0.0",
    "mersenne-twister": "^1.1.0"
  },
  "devDependencies": {
    "babel-jest": "^19.0.0",
    "babel-preset-react-native": "^1.9.1",
    "jest": "^19.0.2",
    "react-devtools": "^2.5.2",
    "react-test-renderer": "16.0.0",
    "xopen": "1.0.0"
  },
  "jest": {
    "preset": "react-vr"
  }
}

This is the manual way; a better way is to use npm install <package> --save. The --save qualifier records the new package you've installed in package.json. The manual edits can be handy to ensure that you've got the right versions if you get a version mismatch. If you mess around with installing and removing enough packages, you will eventually mess up your modules. If you get errors, even after removing node_modules, issue these commands:

npm start -- --reset-cache
npm cache clean --force

The cache clean won't do it by itself; you need the reset-cache, otherwise the problem packages will still be saved, even if they don't physically exist!

Bundling for publishing on the web

Assuming that you have your project dependencies set up correctly so that your project runs from a web server, typically through an ISP or service provider, you need to "bundle" it. React VR has a script that will package everything into just a few files. Note, of course, that your desktop machine counts as a "web server," although I wouldn't recommend that you expose your development machine to the web. The better way to let other people experience your new virtual reality is to bundle it and put it on a commercial web service.

Packaging React VR for release on a website

The basic process is easy with the script React VR provides:

1. Go to the VR directory where you normally run npm start, and run the npm run bundle command.
2. Go to your website the same way you normally upload files, and create a directory called vr.
3. In your project directory — in our case, f:\ReactVR\WalkInAMaze — find the following files in the bundle output folder (.\vr\build): client.bundle.js and index.bundle.js.
4. Copy those to your website.
5. Make a directory called static_assets.
6. Copy all of the files that your app uses from \AppName\static_assets to the new static_assets folder.
7. Ensure that you have MIME mapping set up for all of your content; in particular, .obj, .mtl, and .gltf files may need new mappings.
Check with your web server documentation:

- For glTF files, use model/gltf-binary.
- Any .bin files used by glTF should be application/octet-stream.
- For .obj files, I've used application/octet-stream.
- The official list is at http://bit.ly/MimeTypes.
- Very generally, application/octet-stream will send the files "exactly" as they are on the server, so this is sort of a general-purpose catch-all.

8. Copy the index.html from the root of your application to the directory on your website where you are publishing the app; in our case, it'll be the vr directory, so the file sits alongside the two .js files.
9. Modify index.html as in the following lines (note the change to ./index.vr):

<html>
  <head>
    <title>WalkInAMaze</title>
    <style>body { margin: 0; }</style>
    <meta name="viewport" content="width=device-width, initial-scale=1, user-scalable=no">
  </head>
  <body>
    <!-- When you're ready to deploy your app, update this line to point to your compiled client.bundle.js -->
    <script src="./client.bundle?platform=vr"></script>
    <script>
      // Initialize the React VR application
      ReactVR.init(
        // When you're ready to deploy your app, update this line to point to
        // your compiled index.bundle.js
        './index.vr.bundle?platform=vr&dev=false',
        // Attach it to the body tag
        document.body
      );
    </script>
  </body>
</html>

Note that for a production release — which means you're pointing to a prebuilt bundle on a static web server and not the React Native bundler — the dev and platform flags actually won't do anything, so there's no difference between dev=true, dev=false, or even dev=foobar.

Obtaining releases and attribution

If you used any assets from anywhere on the web, ensure that you have the proper release. For example, many Daz3D or Poser models do not include the rights to publish the geometry information; including these on your website as an OBJ or glTF file may be a violation of that agreement. Someone could easily download the model, or nearly all of the geometry, and then use it for something else. I am not a lawyer; you should check with wherever you get your models to ensure that you have permission, and, if necessary, attribute properly. Attribution licenses are a little difficult with a VR world, unless you embed the attribution into a graphic somewhere; as we've seen, adding text can sometimes be distracting, and you will always have scale issues. If you embed a VR world in a page with <iframe>, you can always give proper attribution on the HTML side. However, this isn't really VR.

Checking image sizes and using content delivery sites

Some of the images you use, especially the ones in a <Pano> statement, can be quite large. You may need to optimize these for proper web speed and responsiveness. This is a fairly general topic, but one thing that can help is a content delivery network (CDN), especially if your world will be a high-volume one. Adding a CDN to your web server is easy. You host your asset files from a separate location, and you pass the root directory as the assetRoot in the ReactVR.init() call. For example, if your files were hosted at https://cdn.example.com/vr_assets/, you would change the method call in index.html to include the following third argument:

ReactVR.init(
  './index.bundle.js?platform=vr&dev=false',
  document.body,
  { assetRoot: 'https://cdn.example.com/vr_assets/' }
);

Optimizing your models

If you were watching the web console, you may have noticed a model being loaded over and over. That is not necessarily the most efficient way.
Consider other techniques, such as passing a model for the various child components as a prop. Polygon decimation is another technique that is very valuable in optimizing models for the web and VR. With the glTF file format, you can use normal maps and still make a low-polygon model look like a high-resolution one. Techniques to do this are well documented in the game development field, and they really do work well.

You should also optimize models so that they do not include unseen geometry. If you are showing a car model with blacked-out windows, for example, there is no need to load engine and interior details (unless the windows are transparent). This sounds obvious, but I found that the lamp I used to illustrate the lighting examples had almost triple the number of polygons needed; the glass lamp shade had inner and outer polygons that were inside the model.

We learned to do version upgrades and, if need be, how to do rip and replace upgrades. We further discussed when to do an upgrade and how to publish the app on the web. If you are interested in knowing how to include existing high-performance web code in a VR app, you may refer to the book Getting Started with React VR.


Getting started with Android Development

Packt
08 Aug 2016
14 min read
In this article by Raimon Ràfols Montané, author of the book Learning Android Application Development, we will go through all the steps required to start developing for Android devices. We have to be aware that Android is an evolving platform, and so are its development tools. We will show how to download and install Android Studio, and how to create a new project and run it on either an emulator or a real device.

(For more resources related to this topic, see here.)

Setting up Android Studio

Before being able to build an Android application, we have to download and install Android Studio on our computer. It is still possible to download and use Eclipse with the Android Development Tools (ADT) plugin, but Google no longer supports it, and they recommend that we migrate to Android Studio. To be aligned with this, we will only focus on Android Studio in this article. For more information on this, visit http://android-developers.blogspot.com.es/2015/06/an-update-on-eclipse-android-developer.html.

Getting the right version of Android Studio

The latest stable version of Android Studio can be found at http://developer.android.com/sdk/index.html. If you are among the bravest developers, and you are not afraid of bugs, you can always go to the Canary channel and download the latest version. The Canary channel is one of the preview channels available on the Android tools download page (available at http://tools.android.com/download/studio) and contains weekly builds. The following are the preview channels available at that URL:

- The Canary channel contains weekly builds. These builds are tested, but they might contain some issues. Only use a build from this channel if you need or want to see the latest features.
- The Dev channel contains selected Canary builds.
- The Beta channel contains the beta milestones for the next version of Android Studio.
- The Stable channel contains the most recent stable builds of Android Studio.

It is not recommended to use an unstable version for production. To be on the safe side, always use the latest stable version. In this article, we will use the version 2.2 preview. Although it is a beta version at this moment, we will have the final version quite soon.

Installing Android Studio

Android Studio requires JDK 6 or higher; at least JDK 7 is required if you aim to develop for Android 5.0 and higher. You can easily check which version you have installed by running this on your command line:

javac -version

If you don't have any version of the JDK, or you have an unsupported version, please install or update your JDK before proceeding to install Android Studio. Refer to the official documentation for a more comprehensive installation guide and details on all platforms (Windows, Linux, and Mac OS X): http://developer.android.com/sdk/installing/index.html?pkg=studio.

Once you have the JDK installed, unpack the package you have just downloaded from the Internet and proceed with the installation. For example, let's use Mac OS X. If you download the latest stable version, you will get a .dmg file that can be mounted on your filesystem. Once mounted, a new Finder window that appears will ask us to drag the Android Studio icon to the Applications folder. Just doing this simple step will complete the basic installation.
If you have downloaded a preview version, you will have a ZIP file that, once unpacked, will contain the Android Studio application directly (it can just be dragged to the Applications folder using Finder). For other platforms, refer to the official installation guide provided by Google at the web address mentioned earlier.

First run

Once you have finished installing Android Studio, it is time to run it for the first time. On the first execution (at least if you have downloaded version 2.2), it will let you configure some options and install some SDK components if you choose the custom installation type. Otherwise, both these settings and SDK components can be configured or installed later.

The first option you will be able to choose is the UI theme. We have the default UI theme or the Darcula theme, which basically is a choice of light or dark background, respectively. After this step, the next window will show the SDK Components Setup, where the installation process will let you choose some components to automatically download and install.

On Mac OS, there is a bug in some versions of Android Studio 2.0 that sometimes does not allow selecting any option if the target folder does not exist. If that happens, follow these steps for a quick fix:

1. Copy the contents of the Android SDK Location field — just the path, something like /Users/<username>/Library/Android/sdk — to the clipboard.
2. Open the terminal application.
3. Create the folder manually: mkdir /Users/<username>/Library/Android/sdk.
4. Go back to Android Studio, press the Previous button and then the Next button to come back to this screen. Now, you will be able to select the components that you would like to install.
5. If that still does not work, cancel the installation process, ensuring that you check the option to rerun the setup on the next installation. Quit Android Studio and rerun it.

Creating a sample project

We will introduce some of the most common elements of Android Studio by creating a sample project, building it, and running it on an Android emulator or on a real Android device. It is better to present those elements when you need them, rather than just enumerating a long list without a real use behind it.

Starting a new project

Just click on the Start a new Android Studio project button to start a project from scratch. Android Studio will ask you to make some project configuration settings, and you will be able to launch your project. If you have an already existing project and would like to import it into Android Studio, you can do that now as well. Any project based on Eclipse, Ant, or a Gradle build can be easily imported into Android Studio. Projects can also be checked out from version control software, such as Subversion or Git, directly from Android Studio.

When creating a new project, Android Studio will ask for the application name and the company domain name, which will be reversed into the application package name. Once this information is filled out, Android Studio will ask for the type of devices or form factors your application will target. This includes not only phone and tablet, but also Android Wear, Android TV, Android Auto, and Google Glass. In this example, we will target only phone and tablet and require a minimum SDK API level of 14 (Android 4.0, or Ice Cream Sandwich). By setting the minimum required level to 14, we make sure that the app will run on approximately 96.2% of devices accessing Google Play Store, which is good enough.
If we set 23 as the minimum API level (Android 6.0 Marshmallow), our application will only run on Android Marshmallow devices, which is less than 1% of the active devices on Google Play right now. Unless we require a very specific feature available at a specific API level, we should use common sense and try to reach as many devices as we can. Having said that, we should not waste time supporting very old devices (or very old versions of Android), as they might be, for example, only 5% of the active devices but imply lots and lots of work to support.

In addition to the minimum SDK version, there is also the target SDK version. The target SDK version should, ideally, be set to the latest stable version of Android available, to allow your application to take advantage of all the new features, styles, and behaviors from newer versions.

As a rule of thumb, Google gives you the percentage of active devices on Google Play, not the percentage of devices out there in the wild. So, unless we need to build an enterprise application for a closed set of devices installed ad hoc, we should not mind the people not even accessing Google Play: they will not be users of our application, because they do not usually download applications — unless we are targeting countries where Google Play is not available. In that case, we should analyze our requirements with real data from the application stores available in those countries.

To see the Android OS version distribution, always check Android's developer dashboard at http://developer.android.com/about/dashboards/index.html. Alternatively, when creating a new project from Android Studio, there is a link to help you choose the version that you would like to target, which will open a new screen with the cumulative percentage of coverage. If you click on each version, it will give you more details about that Android OS version and the features that were introduced.

After this step, and to simplify our application creation process, Android Studio will allow us to add an Activity class to the project from some templates. In this case, we can add an empty Activity class for the moment. Let's not worry about the name of the Activity class and the layout file at this moment; we can safely proceed with the prefilled values.

As defined by the Android developer documentation, an "Activity is a single, focused thing that the user can do" (http://developer.android.com/reference/android/app/Activity.html). To simplify further, we can consider an Activity class to be every single screen of our application with which the user can interact. If we take the MVC pattern into consideration, we can assume the activity to be the controller, as it will receive all the user inputs and events from the views, and the layout XMLs and UI widgets to be the views. To know more about the MVC pattern, visit https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller.

So, we have just added one activity to our application; let's see what else the Android Studio wizard created for us.
Running your project

The Android Studio project wizard not only created an empty Activity class for us, but it also created an AndroidManifest.xml file, a layout file (activity_main.xml) defining the view controlled by the Activity class, an application icon placed carefully into different mipmaps (https://en.wikipedia.org/wiki/Mipmap) so that the most appropriate one will be used depending on the screen resolution, some Gradle scripts, and some other .xml files containing colors, dimensions, strings, and style definitions.

We can have multiple resources, and even repeated resources, depending on screen resolution, screen orientation, night mode, layout direction, or even the mobile country code of the SIM card. Take a look at the next topic to understand how to add qualifiers and filters to resources. For the time being, let's just try to run this example by pressing the play button next to our build configuration named app at the top of the screen.

Android Studio will show us a small window where we can select the deployment target: a real device or an emulator where our application will be installed and launched. If we have not connected any device or created any emulator, we can do it from this screen. Let's press the Create New Emulator button.

From this new screen, we can easily select a device and create an emulator that looks like that device. A Nexus 5X will suit us. After choosing the device, we can choose the version of the Android OS and the architecture on which the platform will run. For instance, if we want to select Android Marshmallow (API level 23), we can choose from armeabi-v7a, x86 (Intel processors), and x86_64 (Intel 64-bit processors). As we previously installed HAXM during our first run (https://software.intel.com/en-us/android/articles/intel-hardware-accelerated-execution-manager), we should install an Intel image, so the emulator will be a lot faster than having to emulate an ARM processor.

If we do not have the Android OS image downloaded to our computer, we can do it from this screen as well. Note that you can have an image of the OS with Google APIs or without them. We will use one image or the other depending on whether the application uses any of the Google-specific libraries (Google Play Services) or only the Android core libraries.

Once the image is selected (and downloaded and installed, if needed), we can proceed to finish the Android Virtual Device (AVD) configuration. On the last configuration screen, we can fine-tune some elements of our emulator, such as the default orientation (portrait or landscape), the screen scale, the SD card (if we enable the advanced settings), the amount of physical RAM, and the network latency, and we can use the webcam in our computer as the emulator's camera.

You are now ready to run your application on the Android emulator that you just created. Just select it as the deployment target and wait for it to load and install the app.
If everything goes as it should, you should see this screen on the Android emulator:

If you want to use a real device instead of an emulator, make sure that your device has the developer options enabled and that it is connected to your computer using a USB cable. To enable development mode on your device, or to get information on how to develop and debug applications over the network instead of having the device connected through a USB cable, check out the following links:

http://developer.android.com/tools/help/adb.html
http://developer.android.com/tools/device.html

If these steps are performed correctly, your device will appear as a connected device on the deployment target selection window.

Resource configuration qualifiers

As we introduced in the previous section, we can have multiple resources depending on the screen resolution or any other device configuration, and Android will choose the most appropriate resource at runtime. In order to do that, we have to use what are called configuration qualifiers. These qualifiers are simply strings appended to the resource folder name. Consider the following example:

drawable
drawable-hdpi
drawable-mdpi
drawable-en-rUS-land
layout
layout-en
layout-sw600dp
layout-v7

Qualifiers can be combined, but they must always follow the order specified by Google in the Providing Resources documentation, available at http://developer.android.com/guide/topics/resources/providing-resources.html. This allows us, for instance, to target multiple resolutions and offer the best experience for each of them. It can also be used to provide different images based on the country in which the application is executed, or on the language.

We have to be aware that putting in too many resources (basically, images or any other media) will make our application grow in size. It is always good to apply common sense. In the case of having too many different resources or configurations, do not bloat the application; instead, produce different binaries that can be deployed selectively to different devices on Google Play. We will briefly explain, in the Gradle build system topic of this article, how to produce different binaries from one single source code. It will add some complexity to our development, but it will make our application smaller and more convenient for end users. For more information on multiple APK support, visit http://developer.android.com/google/play/publishing/multiple-apks.html.

Summary

In this article, we covered how to install Android Studio and get started with it. We also introduced some of the most common elements of Android Studio by creating a sample project, building it, and running it on an Android emulator or on a real Android device.

Resources for Article:

Further resources on this subject:

Hacking Android Apps Using the Xposed Framework [article]
Speeding up Gradle builds for Android [article]
The Art of Android Development Using Android Studio [article]

How to Build an Android To-Do App with PhoneGap, HTML and jQuery

Robi Sen
14 Mar 2016
12 min read
In this post, we are going to create a simple HTML5, JavaScript, and CSS application and then use PhoneGap to build it and turn it into an Android application, an approach which will also be useful for game development. We will learn how to structure a PhoneGap project, leverage Eclipse ADT for development, and use Eclipse as our build tool. To follow along with this post, it is useful to have a decent working knowledge of JavaScript and HTML; otherwise, you might find the examples challenging.

Understanding the typical workflow

Before we begin developing our application, let's look quickly at a workflow for creating a PhoneGap application. Generally, you want to design your web application UI, create your HTML, and then develop your JavaScript application code. Then you should test it in your web browser to make sure everything works the way you would like it to. Finally, you will want to build it with PhoneGap and try deploying it to an emulator or mobile phone to test. And, if you plan to sell your application on an app store, you of course need to deploy it to an app store.

The To-Do app

For the example in this post we are going to build a simple To-Do app. The code for the whole application can be found here, but for now we will be working with two main files: index.html and todo.js. Usually we would create a new application using the command line argument phonegap create myapp, but for this post we will just reuse the application we already made in Post 1. So, open your Eclipse ADT bundle and navigate to your project, which is most likely called HelloWorld since that's the default app name. Now expand the application in the left pane of Eclipse and expand the www folder. You should end up seeing something like this:

When PhoneGap creates an Android project, it automatically creates several directories. The www directory under the root directory is where you create all your HTML, CSS, and JavaScript, and where you store assets to be used in your project. When you build your project, using Eclipse or the command line, PhoneGap will turn your web application into your Android application. So, now that we know where to build our web application, let's get started. Our goal is to make something that looks like the application in the following figure, which is the HTML we want to use shown in the Chrome browser:

First, let's open the existing index.html file in Eclipse. We are going to totally rewrite the file, so you can just delete all the existing HTML. Now let's add the following code as shown here:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
    <meta name="format-detection" content="telephone=no" />
    <meta name="msapplication-tap-highlight" content="no" />
    <meta name="viewport" content="user-scalable=no, initial-scale=1, maximum-scale=1, minimum-scale=1, width=device-width, height=device-height, target-densitydpi=device-dpi" />
    <title>PhoneGap ToDo</title>
    <link rel="stylesheet" type="text/css" href="css/jquery.mobile-1.4.3.min.css">
    <link rel="stylesheet" type="text/css" href="css/index.css" />
    <link rel="stylesheet" type="text/css" href="css/jquery.mobile-1.0.1.custom.css" />
    <script type="text/javascript" src="js/jquery-1.11.1.min.js"></script>
    <script type="text/javascript" src="js/jquery.mobile-1.4.3.min.js"></script>
</head>

OK, there is a bunch of stuff going on in this code. If you are familiar with HTML, you can see this is where we import the majority of our style sheets and JavaScript. For this example we are going to make use of jQuery and jQuery Mobile.
You can get jQuery from http://jquery.com/download/ and jQuery Mobile from http://jquerymobile.com/download/, but it's easier if you just download the files from GitHub here. Those files need to go under mytestapp/www/js. Next, download the style sheets from here on GitHub and put them in mytestapp/www/css.

You will also notice the use of the meta tag. PhoneGap uses the meta tag to help set preferences for your application, such as the window sizing of the application, scaling, and the like. For now this topic is too big for discussion, but we will address it in further posts. OK, with that being said, let's work on the HTML for the GUI. Now add the code shown here:

<body>
<script type="text/javascript" src="js/todo.js"></script>
<div id="index" data-url="index" data-role="page">
    <div data-role="header">
        <h1>PhoneGap ToDo</h1>
    </div>
    <div data-role="content">
        <ul id="task_list" data-role="listview">
            <li data-role="list-divider">Add a task</li>
        </ul>
        <form id="form_336" method="GET">
            <div data-role="fieldcontain">
                <label for="inp_337"></label>
                <input type="text" name="inp_337" id="inp_337" />
            </div>
            <input id="add" type="button" data-icon="plus" value="Add"/>
        </form>
    </div>
</div>
<div id="confirm" data-url="confirm" data-role="page">
    <div data-role="header">
        <h1>Finish Task</h1>
    </div>
    <div data-role="content">
        Mark this task as<br>
        <a class="remove_task" href="#done" data-role="button" data-icon="delete" data-theme="f">Done</a>
        <a class="remove_task" href="#notdone" data-role="button" data-icon="check" data-theme="g">Not Done</a>
        <br><br>
        <a href="#index" data-role="button" data-icon="minus">Cancel</a>
    </div>
</div>
<div id="done" data-url="done" data-role="page">
    <div data-role="header">
        <h1>Right On</h1>
    </div>
    <div data-role="content">
        You did it<br><br>
        <a href="#index" data-role="button">Good Job</a>
    </div>
</div>
<div id="notdone" data-url="notdone" data-role="page">
    <div data-role="header">
        <h1>Get to work!</h1>
    </div>
    <div data-role="content">
        Keep at it<br><br>
        <a href="#index" data-role="button">Back</a>
    </div>
</div>
</body>
</html>

This HTML should produce the GUI you saw earlier in this post. Go ahead and save the HTML code. Now go to the js directory under www. Create a new file by right-clicking and selecting the option to create a new text file. Name the new file todo.js.
Now open the file in Eclipse and add the following code:

var todo = {};

/** Read the new task and add it to the list */
todo.add = function(event) {
    // Read the task from the input
    var task = $('input').val();
    if (task) {
        // Add the task to the array and refresh the list
        todo.list[todo.list.length] = task;
        todo.refresh_list();
        // Clear the input
        $('input').val('');
    }
    event.preventDefault();
};

/** Remove the task which was marked as selected */
todo.remove = function() {
    // Remove from the array and refresh the list
    todo.list.splice(todo.selected, 1);
    todo.refresh_list();
};

/** Recreate the entire list from the available list of tasks */
todo.refresh_list = function() {
    var $tasks = $('#task_list'), i;
    // Clear the existing task list
    $tasks.empty();
    if (todo.list.length) {
        // Add the header
        $tasks.append('<li data-role="list-divider">To Do&#39;s</li>');
        for (i = 0; i < todo.list.length; i++) {
            // Append each task
            var li = '<li><a data-rel="dialog" data-task="' + i + '" href="#confirm">' + todo.list[i] + '</a></li>';
            $tasks.append(li);
        }
    }
    // Add the header for addition of new tasks
    $tasks.append('<li data-role="list-divider">Add a task</li>');
    // Use jQuery Mobile's listview method to refresh
    $tasks.listview('refresh');
    // Store back the list
    localStorage.todo_list = JSON.stringify(todo.list || []);
};

// Initialize the index page
$(document).delegate('#index', 'pageinit', function() {
    // If no list is already present, initialize it
    if (!localStorage.todo_list) {
        localStorage.todo_list = "[]";
    }
    // Load the list by parsing the JSON from localStorage
    todo.list = JSON.parse(localStorage.todo_list);
    $('#add').bind('vclick', todo.add);
    $('#task_list').on('vclick', 'li a', function() {
        todo.selected = $(this).data('task');
    });
    // Refresh the list every time the page is reloaded
    $('#index').bind('pagebeforeshow', todo.refresh_list);
});

// Bind the 'Done' and 'Not Done' buttons to task removal
$(document).delegate('#confirm', 'pageinit', function() {
    $('.remove_task').bind('vclick', todo.remove);
});

// Make the transition in reverse for the buttons on the done and notdone pages
$(document).delegate('#done, #notdone', 'pageinit', function() {
    // We reverse the transition for any button linking to the index page
    $('[href="#index"]').attr('data-direction', 'reverse');
});

What todo.js does is store the task list as a JavaScript array. We then just create simple functions to add to or remove from the array, plus a function to update the list. To allow us to persist the task list, we use HTML5's localStorage (for information on localStorage, go here) to act like a simple database and store simple name/value pairs directly in the browser. Because of this, we don't need to use an actual database like SQLite or a custom file storage option.

Now save the file and try out the application in your browser. Try playing with the application a bit to test how it's working. Once you can confirm that it's working, build and deploy the application to the Android emulator via Eclipse. To do this, create a custom "builder" in Eclipse to allow you to easily build or rebuild your PhoneGap applications each time you want to make changes.

Making Eclipse auto-build your PhoneGap apps

One of the reasons we want to use the Eclipse ADT with PhoneGap is that we can simplify our workflow, assuming you're doing most of your work targeting Android devices, by being able to do all of our web development, potentially native Android development, testing, and building, all through Eclipse.
Doing this, though, is not covered in the PhoneGap documentation and can cause a lot of confusion, since most people assume you have to use the PhoneGap CLI (command line interface) to do all the application building. To make your application auto-build, first right-click on the application and select Properties. Then select Builders. Now select New, which will pop up a configuration type screen. On this screen select Program. You should now see the Edit Configuration screen:

Name the new builder "PhoneGap Builder" and, for the location field, select Browse File System and navigate to /android/cordova/build.bat under our mytestapp folder. Then, for the working directory, you will want to put in the path to your mytestapp root directory. Finally, you'll want to use the argument --local. Then select OK.

What this will do is that every time you build the application in Eclipse, it will run the build.bat file with the --local argument. This will build the .apk and update the project with your latest changes made in the application www directory; for this post, that is mytestapp/www. Also, if you made any changes to the Android source code (which we will not in this post), those changes will be updated and applied to the APK build.

Now that we have created the new builder, right-click on the project and build it. The application should take a few seconds to build. Once it has completed building, go ahead and select the project again and select Run As | Android Application. As was shown in Post 1, expect this to take a few minutes as Eclipse starts the Android emulator and deploys the new Android app (you can find your Android app in mytestapp/platforms/android/bin). You should now see something like the following:

Go ahead and play around with the application.

Summary

In this post, you learned how to use PhoneGap and the Eclipse ADT to build your first real web application with HTML5 and jQuery and then deploy it as a real Android application. You also used jQuery and HTML5's localStorage to simplify the creation of your GUI. Try playing around with your application and clean up the UI with CSS. In our next post we will dive deeper into working with PhoneGap to make our application more sophisticated and add additional capabilities using the phone's camera and other sensors.

About the author

Robi Sen, CSO at Department 13, is an experienced inventor, serial entrepreneur, and futurist whose dynamic twenty-plus year career in technology, engineering, and research has led him to work on cutting-edge projects for DARPA, TSWG, SOCOM, RRTO, NASA, DOE, and the DOD. Robi also has extensive experience in the commercial space, including the co-creation of several successful start-up companies. He has worked with companies such as UnderArmour, Sony, CISCO, IBM, and many others to help build out new products and services. Robi specializes in bringing his unique vision and thought process to difficult and complex problems, allowing companies and organizations to find innovative solutions that they can rapidly operationalize or go to market with.

How to Convert POJO to JSON Using Gson in Android Studio

Troy Miles
01 Jul 2014
6 min read
JSON has become the de facto standard of data exchange on the web. Compared to its cousin XML, it is smaller in size and faster to both create and parse. In fact, it seems so simple that many developers roll their own code to convert plain old Java objects (POJOs) to and from JSON. For simple objects, it is fairly easy to write the conversion code, but as your objects grow more complex, your code's complexity grows as well. Do you really want to maintain a bunch of code whose functionality is not truly intrinsic to your app?

Luckily, there is no reason for you to do so. There are quite a few alternatives to writing your own Java JSON serializer/deserializer; in fact, json.org lists 25 of them. One of them, Gson, was created by Google for use on internal projects and was later open sourced. Gson is hosted on Google Code and the source code is available in an SVN repo.

Create an Android app

The process of converting a POJO to JSON is called serialization. The reversed process is deserialization. A big reason that Gson is such a popular library is how simple it makes both processes. For both, the only thing you need is the Gson class. Let's create a simple Android app and see how simple Gson is to use:

1. Start Android Studio and select new project.
2. Change the Application name to GsonTest. Click Next.
3. Click Next again.
4. Click Finish.

At this point we have a complete Android hello world app. In past Android IDEs, we would add the Gson library at this point, but we don't do that anymore. Instead, we add a Gson dependency to our build.gradle script and that will take care of everything else for us. It is super important to edit the correct Gradle file. There is one in the root directory, but the one we want is in the app directory. Double-click it to open.

Locate the dependencies section near the bottom of the script. After the last entry, add the following line:

compile 'com.google.code.gson:gson:2.2.4'

After you add it, save the script and then click the Sync Project with Gradle Files icon. It is the fifth icon from the right-hand side in the toolbar. At this point, the Gson library is visible to your app. So let's build some test code.

Create test code with JSON

For our test we are going to use the JSON Test web service at https://www.jsontest.com/. It is a testing platform for JSON. Basically, it gives us a place to send data to in order to test whether we are properly serializing and deserializing data. JSON Test has a lot of services, but we will use the validate service. You pass it a JSON string URL-encoded as a query string, and it will reply with a JSON object that indicates whether or not the JSON was encoded correctly, as well as some statistical information.

The first thing we need to do is create two classes. The first class, TestPojo, is the Java class that we are going to serialize and send to JSON Test. TestPojo doesn't do anything important. It is just for our test; however, it contains several different types of objects: ints, strings, and arrays of ints. Classes that you create can easily be much more complicated, but don't worry, Gson can handle it. For example:

package com.tekadept.gsontest.app;

public class TestPojo {
    private int value1 = 1;
    private String value2 = "abc";
    private int values[] = {1, 2, 3, 4};
    private transient int value3 = 3;

    // no args ctor
    TestPojo() {
    }
}

Gson will also respect the Java transient modifier, which specifies that a field should not be serialized. Any field with it will not appear in the JSON.
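To make this concrete, here is roughly what serializing TestPojo produces; this is a sketch, but it follows from Gson's default behavior of serializing fields in declaration order and skipping transient ones:

Gson gson = new Gson();
String json = gson.toJson(new TestPojo());
// json now contains: {"value1":1,"value2":"abc","values":[1,2,3,4]}
// Note that value3 is absent because it is marked transient.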
The second class, JsonValidate, will hold the results of our call to JSON Test. In order to make it easy to parse, I've kept the field names exactly the same as those returned by the service, except for one. Gson has an annotation, @SerializedName; if you place it before a field name, the name of the class field can be different from the name in the JSON. For example, if we wanted to rename the validate field to isValid, all we would have to do is:

package com.tekadept.gsontest.app;

import com.google.gson.annotations.SerializedName;

public class JsonValidate {

    public String object_or_array;
    public boolean empty;
    public long parse_time_nanoseconds;
    @SerializedName("validate")
    public boolean isValid;
    public int size;
}

By using the @SerializedName annotation, our name for the JSON validate field becomes isValid. Just remember that you only need to use the annotation when you change the field's name.

In order to call JSON Test's validate service, we follow the best practice of not doing it on the UI thread by using an async task. An async task has four steps: onPreExecute, doInBackground, onProgressUpdate, and onPostExecute. The doInBackground method happens on another thread. It allows us to wait for the JSON Test service to respond without triggering the dreaded application not responding error. You can see this in action in the following code:

@Override
protected String doInBackground(String... notUsed) {
    TestPojo tp = new TestPojo();
    Gson gson = new Gson();
    String result = null;

    try {
        String json = URLEncoder.encode(gson.toJson(tp), "UTF-8");
        String url = String.format("%s%s", Constants.JsonTestUrl, json);
        result = getStream(url);
    } catch (Exception ex) {
        Log.v(Constants.LOG_TAG, "Error: " + ex.getMessage());
    }
    return result;
}

To encode our Java object, all we need to do is create an instance of the Gson class, then call its toJson method, passing an instance of the class we wish to serialize.

Deserialization is nearly as simple. In the onPostExecute method, we get the string of JSON from the web service. We then call the convertFromJson method that does the conversion. First it makes sure that it got a valid string, then it does the conversion by calling Gson's fromJson method, passing the string and the name of its class, as follows:

@Override
protected void onPostExecute(String result) {

    // convert JSON string to a POJO
    JsonValidate jv = convertFromJson(result);
    if (jv != null) {
        Log.v(Constants.LOG_TAG, "Conversion Succeeded: " + result);
    } else {
        Log.v(Constants.LOG_TAG, "Conversion Failed");
    }
}

private JsonValidate convertFromJson(String result) {
    JsonValidate jv = null;
    if (result != null && result.length() > 0) {
        try {
            Gson gson = new Gson();
            jv = gson.fromJson(result, JsonValidate.class);
        } catch (Exception ex) {
            Log.v(Constants.LOG_TAG, "Error: " + ex.getMessage());
        }
    }
    return jv;
}

Conclusion

For most developers this is all you need to know. There is a complete guide to Gson at https://sites.google.com/site/gson/gson-user-guide. The complete source code for the test app is at https://github.com/Rockncoder/GsonTest.

Discover more Android tutorials and extra content on our Android page - find it here.

Writing a Fully Native Application

Packt
05 May 2015
15 min read
In this article written by Sylvain Ratabouil, author of Android NDK Beginner's Guide - Second Edition, we have breached Android NDK's surface using JNI. But there is much more to find inside! The NDK includes its own set of specific features, one of them being Native Activities. Native activities allow creating applications based only on native code, without a single line of Java. No more JNI! No more references! No more Java!

(For more resources related to this topic, see here.)

In addition to native activities, the NDK brings some APIs for native access to Android resources, such as display windows, assets, and device configuration. These APIs help in getting rid of the tortuous JNI bridge often necessary to embed native code. Although there is a lot still missing, and not likely to be available (Java remains the main platform language for GUIs and most frameworks), multimedia applications are a perfect target to apply them.

Here we initiate a native C++ project developed progressively throughout this article: DroidBlaster. Based on a top-down viewpoint, this sample scrolling shooter will feature 2D graphics and, later on, 3D graphics, sound, input, and sensor management. We will be creating its base structure and main game components.

Let's now enter the heart of the Android NDK by:

Creating a fully native activity
Handling main activity events
Accessing the display window natively
Retrieving time and calculating delays

Creating a native Activity

The NativeActivity class provides a facility to minimize the work necessary to create a native application. It lets the developer get rid of all the boilerplate code to initialize and communicate with native code and concentrate on core functionalities. This glue Activity is the simplest way to write applications, such as games, without a line of Java code. The resulting project is provided with this book under the name DroidBlaster_Part1.

Time for action – creating a basic native Activity

We are now going to see how to create a minimal native activity that runs an event loop.

Create a new hybrid Java/C++ project:
     Name it DroidBlaster.
     Turn the project into a native project. Name the native module droidblaster.
     Remove the native source and header files that have been created by ADT.
     Remove the reference to the Java src directory in Project Properties | Java Build Path | Source. Then, remove the directory itself on disk.
     Get rid of all layouts in the res/layout directory.
     Get rid of jni/droidblaster.cpp if it has been created.

In AndroidManifest.xml, use Theme.NoTitleBar.Fullscreen as the application theme.
Declare a NativeActivity that refers to the native module named droidblaster (that is, the native library we will compile) using the meta-data property android.app.lib_name (the namespace declaration on the manifest element is restored here, as the scraped source had replaced the XML colons with underscores):

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.packtpub.droidblaster2d"
    android:versionCode="1"
    android:versionName="1.0">
    <uses-sdk
        android:minSdkVersion="14"
        android:targetSdkVersion="19"/>

    <application android:icon="@drawable/ic_launcher"
        android:label="@string/app_name"
        android:allowBackup="false"
        android:theme="@android:style/Theme.NoTitleBar.Fullscreen">
        <activity android:name="android.app.NativeActivity"
            android:label="@string/app_name"
            android:screenOrientation="portrait">
            <meta-data android:name="android.app.lib_name"
                android:value="droidblaster"/>
            <intent-filter>
                <action android:name="android.intent.action.MAIN"/>
                <category
                    android:name="android.intent.category.LAUNCHER"/>
            </intent-filter>
        </activity>
    </application>
</manifest>

Create the file jni/Types.hpp. This header will contain common types and the header cstdint:

#ifndef _PACKT_TYPES_HPP_
#define _PACKT_TYPES_HPP_

#include <cstdint>

#endif

Let's write a logging class to get some feedback in the Logcat.

Create jni/Log.hpp and declare a new class Log. Define the packt_Log_debug macro to allow activating or deactivating debug messages with a simple compile flag:

#ifndef _PACKT_LOG_HPP_
#define _PACKT_LOG_HPP_

class Log {
public:
    static void error(const char* pMessage, ...);
    static void warn(const char* pMessage, ...);
    static void info(const char* pMessage, ...);
    static void debug(const char* pMessage, ...);
};

#ifndef NDEBUG
    #define packt_Log_debug(...) Log::debug(__VA_ARGS__)
#else
    #define packt_Log_debug(...)
#endif

#endif

Implement the jni/Log.cpp file and implement the info() method. To write messages to Android logs, the NDK provides a dedicated logging API in the android/log.h header, which can be used similarly to printf() or vprintf() (with varArgs) in C:

#include "Log.hpp"

#include <stdarg.h>
#include <android/log.h>

void Log::info(const char* pMessage, ...) {
    va_list varArgs;
    va_start(varArgs, pMessage);
    __android_log_vprint(ANDROID_LOG_INFO, "PACKT", pMessage,
        varArgs);
    __android_log_print(ANDROID_LOG_INFO, "PACKT", "\n");
    va_end(varArgs);
}
...

Write the other log methods, error(), warn(), and debug(), which are almost identical, except that the level macro is respectively ANDROID_LOG_ERROR, ANDROID_LOG_WARN, and ANDROID_LOG_DEBUG instead.

Application events in NativeActivity can be processed with an event loop. So, create jni/EventLoop.hpp to define a class with a unique method run(). Include the android_native_app_glue.h header, which defines the android_app structure. It represents what could be called an applicative context, where all the information is related to the native activity: its state, its window, its event queue, and so on:

#ifndef _PACKT_EVENTLOOP_HPP_
#define _PACKT_EVENTLOOP_HPP_

#include <android_native_app_glue.h>

class EventLoop {
public:
    EventLoop(android_app* pApplication);

    void run();

private:
    android_app* mApplication;
};
#endif

Create jni/EventLoop.cpp and implement the activity event loop in the run() method. Include a few log events to get some feedback in Android logs.
During the whole activity lifetime, the run() method loops continuously over events until it is requested to terminate. When an activity is about to be destroyed, the destroyRequested value in the android_app structure is changed internally to indicate to the client code that it must exit. Also, call app_dummy() to ensure the glue code that ties native code to NativeActivity is not stripped by the linker:

#include "EventLoop.hpp"
#include "Log.hpp"

EventLoop::EventLoop(android_app* pApplication):
        mApplication(pApplication) {}

void EventLoop::run() {
    int32_t result;
    int32_t events;
    android_poll_source* source;

    // Makes sure native glue is not stripped by the linker.
    app_dummy();

    Log::info("Starting event loop");
    while (true) {
        // Event processing loop.
        while ((result = ALooper_pollAll(-1, NULL, &events,
                (void**) &source)) >= 0) {
            // An event has to be processed.
            if (source != NULL) {
                source->process(mApplication, source);
            }
            // Application is getting destroyed.
            if (mApplication->destroyRequested) {
                Log::info("Exiting event loop");
                return;
            }
        }
    }
}

Finally, create jni/Main.cpp to define the program entry point android_main(), which runs the event loop:

#include "EventLoop.hpp"
#include "Log.hpp"

void android_main(android_app* pApplication) {
    EventLoop(pApplication).run();
}

Edit the jni/Android.mk file to define the droidblaster module (the LOCAL_MODULE directive). Describe the C++ files to compile with the LOCAL_SRC_FILES directive, with the help of the LS_CPP macro. Link droidblaster with the native_app_glue module (the LOCAL_STATIC_LIBRARIES directive) and with android (required by the Native App Glue module), as well as the log library (the LOCAL_LDLIBS directive):

LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

LS_CPP=$(subst $(1)/,,$(wildcard $(1)/*.cpp))
LOCAL_MODULE := droidblaster
LOCAL_SRC_FILES := $(call LS_CPP,$(LOCAL_PATH))
LOCAL_LDLIBS := -landroid -llog
LOCAL_STATIC_LIBRARIES := android_native_app_glue

include $(BUILD_SHARED_LIBRARY)

$(call import-module,android/native_app_glue)

Create jni/Application.mk to compile the native module for multiple ABIs. We will use the most basic ones, as shown in the following code:

APP_ABI := armeabi armeabi-v7a x86

What just happened?

Build and run the application. Of course, you will not see anything tremendous when starting this application. Actually, you will just see a black screen! However, if you look carefully at the LogCat view in Eclipse (or the adb logcat command), you will discover a few interesting messages that have been emitted by your native application in reaction to activity events.

We initiated a Java Android project without a single line of Java code! Instead of referencing a child of Activity in AndroidManifest, we referenced the android.app.NativeActivity class provided by the Android framework. NativeActivity is a Java class, launched like any other Android activity and interpreted by the Dalvik Virtual Machine like any other Java class. However, we never face it directly. NativeActivity is in fact a helper class provided with the Android SDK, which contains all the necessary glue code to handle application events (lifecycle, input, sensors, and so on) and broadcasts them transparently to native code. Thus, a native activity does not eliminate the need for JNI. It just hides it under the cover!
However, the native C/C++ module run by NativeActivity is executed outside Dalvik boundaries in its own thread, entirely natively (using the POSIX Thread API)! NativeActivity and native code are connected together through the native_app_glue module. The Native App Glue has the responsibility of:

Launching the native thread, which runs our own native code
Receiving events from NativeActivity
Routing these events to the native thread event loop for further processing

The native glue module code is located in ${ANDROID_NDK}/sources/android/native_app_glue and can be analyzed, modified, or forked at will. The headers related to native APIs, such as looper.h, can be found in ${ANDROID_NDK}/platforms/<Target Platform>/<Target Architecture>/usr/include/android/. Let's see in more detail how it works.

More about the Native App Glue

Our own native code entry point is declared inside the android_main() method, which is similar to the main methods in desktop applications. It is called only once when NativeActivity is instantiated and launched. It loops over application events until NativeActivity is terminated by the user (for example, when pressing a device's back button) or until it exits by itself.

The android_main() method is not the real native application entry point. The real entry point is the ANativeActivity_onCreate() method hidden in the android_native_app_glue module. The event loop we implemented in android_main() is in fact a delegate event loop, launched in its own native thread by the glue module. This design decouples native code from the NativeActivity class, which is run on the UI thread on the Java side. Thus, even if your code takes a long time to handle an event, NativeActivity is not blocked and your Android device still remains responsive.

The delegate native event loop in android_main() is itself composed, in our example, of two nested while loops. The outer one is an infinite loop, terminated only when activity destruction is requested by the system (indicated by the destroyRequested flag). It executes an inner loop, which processes all pending application events:

...
int32_t result;
int32_t events;
android_poll_source* source;
while (true) {
    while ((result = ALooper_pollAll(-1, NULL, &events,
            (void**) &source)) >= 0) {
        if (source != NULL) {
            source->process(mApplication, source);
        }
        if (mApplication->destroyRequested) {
            return;
        }
    }
}
...

The inner while loop polls events by calling ALooper_pollAll(). This method is part of the Looper API, which can be described as a general-purpose event loop manager provided by Android. When the timeout is set to -1, as in the preceding example, ALooper_pollAll() remains blocked while waiting for events. When at least one is received, ALooper_pollAll() returns and the code flow continues. The android_poll_source structure describing the event is filled and is then used by client code for further processing. This structure looks as follows:

struct android_poll_source {
    int32_t id; // Source identifier
    struct android_app* app; // Global android application context
    void (*process)(struct android_app* app,
            struct android_poll_source* source); // Event processor
};

The process() function pointer can be customized to process application events manually. As we saw in this part, the event loop receives an android_app structure as a parameter.
This structure, described in android_native_app_glue.h, contains some contextual information, as shown in the following table:

void* userData: Pointer to any data you want. This is essential in giving some contextual information to the activity or input event callbacks.

void (*onAppCmd)(…) and int32_t (*onInputEvent)(…): These member variables represent the event callbacks triggered by the Native App Glue when an activity or an input event occurs.

ANativeActivity* activity: Describes the Java native activity (its class as a JNI object, its data directories, and so on) and gives the necessary information to retrieve a JNI context.

AConfiguration* config: Describes the current hardware and system state, such as the current language and country, the current screen orientation, density, size, and so on.

void* savedState and size_t savedStateSize: Used to save a buffer of data when an activity (and thus its native thread) is destroyed and later restored.

AInputQueue* inputQueue: Provides input events (used internally by the native glue).

ALooper* looper: Allows attaching and detaching event queues used internally by the native glue. Listeners poll and wait for events sent on a communication pipe.

ANativeWindow* window and ARect contentRect: Represent the "drawable" area on which graphics can be drawn. The ANativeWindow API, declared in native_window.h, allows retrieval of the window width, height, and pixel format, and the changing of these settings.

int activityState: The current activity state, that is, APP_CMD_START, APP_CMD_RESUME, APP_CMD_PAUSE, and so on.

int destroyRequested: When equal to 1, it indicates that the application is about to be destroyed and the native thread must be terminated immediately. This flag has to be checked in the event loop.

The android_app structure also contains some additional data for internal use only, which should not be changed. Knowing all these details is not essential to programming native applications, but it can help you understand what's going on behind your back. Let's now see how to handle these activity events.

Summary

The Android NDK allows us to write fully native applications without a line of Java code. NativeActivity provides a skeleton to implement an event loop that processes application events. Associated with the POSIX time management API, the NDK provides the required base to build complex multimedia applications or games.

In summary, we created a NativeActivity that polls activity events to start or stop native code accordingly. We accessed the display window natively, like a bitmap, to display raw graphics. Finally, we retrieved time to make the application adapt to device speed using a monotonic clock.

Resources for Article:

Further resources on this subject:

Android Native Application API [article]
Organizing a Virtual Filesystem [article]
Android Fragmentation Management [article]

AR experience using Vuforia and features definition

Packt
01 Oct 2013
4 min read
(For more resources related to this topic, see here.)

What decides trackable score?

Trackables are the foundation of the AR experience using Vuforia. It is paramount to understand and create a suitable trackable for the experience to be robust and useful. The score attributed to the trackable in the target manager is our indication of how robustly the target image is going to perform, but what decides that score?

The best way of understanding this is by understanding how Vuforia tracks images. The idea is simple: it looks for the positions of contrasting edges in clusters all around the image. Those edges are tracked, and based on the map of positions stored in the dataset, Vuforia can tell the relative position of the trackable in the real world and accordingly render the 3D content on top of it. This means that tracking the image is not so much a function of its color or what is really in it, as of how many contrasting edges there are in the image and how well they are distributed across it.

To better understand this, we can look at the edges that are recognizable in the image we have just uploaded. To do that, simply click on the Show Features link on the top left of the webpage. The following image shows features in the image target Stones:

Once the Show Features link has been clicked, the image target manager layers over the target image an overlay of where it detects a recognizable edge that it can track in a Vuforia image target. Notice that it is only tracking the dark edges between the Stones and nothing else in the image. It is even tracking only the high-contrast edges between the Stones, while ignoring some of the lighter ones. Also notice that the number of edges found in the image is large and evenly distributed around the image. This is a great factor in what made this image great for tracking.

To contrast this image's result, let's try an image that will yield a 1-star score when tried on the target manager. The following image shows a landscape image added as a target image:

Before adding this image, we might intuitively think that it is suitable for tracking. It certainly has a lot of detail of a wide-angle landscape. But this image yielded a shocking 1-star result when added to the Target Manager. The main reason for the low score of this image is the fact that the entire image is a shade of green. This greatly diminishes the contrasting edges in the image. If we click on the Show Features link at the top, we will be able to see what the target manager detected in the image. The following image shows features in the mountain landscape image:

Immediately, we notice the considerably lower number of features detected in the image compared to the Stones one. It only detected the edges created by the shadows of the objects in the image, which is clearly not enough to award it any score above 1 star.

Features definition

To help us get a higher score, we must understand what features the target manager is looking for. We now know that the main thing the target manager looks for in an image is edges, but what kind of edges specifically? To understand that, we need the definition of a feature.

A feature is a sharp and spiked detail in the image, like the corner of an edge. Features must be very contrasting to be found, and they have to be distributed evenly across the image, in a random manner.
The following image shows shapes and the features recognized in them:

In the shapes illustrated above, the yellow crosses represent the features recognizable in each shape. The representation is as follows:

Shape 1: It is a perfect circle without any corners at all, and as such, no features are recognizable in it.
Shape 2: It has an edge to the left with two recognizable corners. That yields two features recognizable in the shape.
Shape 3: It is a square with four edges and four corners. This yields four recognizable features in the shape.

This means that any curved object yields few features, or none at all. Notably, humans and animals make very poor trackables due to their curved nature.

Summary

Thus, in this article, we learned how an image is tracked and which features are recognizable in an image.

Resources for Article:

Further resources on this subject:

Interface Designing for Games in iOS [Article]
Unity Game Development: Welcome to the 3D world [Article]
Unity Game Development: Interactions (Part 1) [Article]

Delegate Pattern Limitations in Swift

Anthony Miller
18 Mar 2016
5 min read
If you've ever built anything using UIKit, then you are probably familiar with the delegate pattern. The delegate pattern is used frequently throughout Apple's frameworks and in many open source libraries you may come in contact with. But many times, it is treated as a one-size-fits-all solution for problems that it is just not suited for. This post will describe the major shortcomings of the delegate pattern.

Note: This article assumes that you have a working knowledge of the delegate pattern. If you would like to learn more about the delegate pattern, see The Swift Programming Language - Delegation.

1. Too Many Lines!

Implementation of the delegate pattern can be cumbersome. Most experienced developers will tell you that less code is better code, and the delegate pattern does not really allow for this. To demonstrate, let's try implementing a new view controller that has a delegate using the least number of lines possible.

First, we have to create a view controller and give it a property for its delegate:

class MyViewController: UIViewController {
    var delegate: MyViewControllerDelegate?
}

Then, we define the delegate protocol:

protocol MyViewControllerDelegate {
    func foo()
}

Now we have to implement the delegate. Let's make another view controller that presents a MyViewController:

class DelegateViewController: UIViewController {
    func presentMyViewController() {
        let myViewController = MyViewController()
        presentViewController(myViewController, animated: false, completion: nil)
    }
}

Next, our DelegateViewController needs to conform to the delegate protocol:

class DelegateViewController: UIViewController, MyViewControllerDelegate {
    func presentMyViewController() {
        let myViewController = MyViewController()
        presentViewController(myViewController, animated: false, completion: nil)
    }

    func foo() {
        /// Respond to the delegate method.
    }
}

Finally, we can make our DelegateViewController the delegate of MyViewController:

class DelegateViewController: UIViewController, MyViewControllerDelegate {
    func presentMyViewController() {
        let myViewController = MyViewController()
        myViewController.delegate = self
        presentViewController(myViewController, animated: false, completion: nil)
    }

    func foo() {
        /// Respond to the delegate method.
    }
}

That's a lot of boilerplate code that is repeated every time you want to create a new delegate. This opens you up to a lot of room for errors. In fact, the above code has a pretty big error already, which we are going to fix now.

2. No Non-Class Type Delegates

Whenever you create a delegate property on an object, you should use the weak keyword. Otherwise, you are likely to create a retain cycle. Retain cycles are one of the most common ways to create memory leaks and can be difficult to track down. Let's fix this by making our delegate weak:

class MyViewController: UIViewController {
    weak var delegate: MyViewControllerDelegate?
}

This causes another problem, though. Now we are getting a build error from Xcode!

'weak' cannot be applied to non-class type 'MyViewControllerDelegate'; consider adding a class bound.

This is because you can't make a weak reference to a value type, such as a struct or an enum, so in order to use the weak keyword here, we have to guarantee that our delegate is going to be a class. Let's take Xcode's advice and add a class bound to our protocol:

protocol MyViewControllerDelegate: class {
    func foo()
}

Well, now everything builds just fine, but we have another issue. Now your delegate must be an object (sorry, structs and enums!).
You are now creating more constraints on what can conform to your delegate. The whole point of the delegate pattern is to allow an unknown "something" to respond to the delegate events. We should be putting as few constraints as possible on our delegate object, which brings us to the next issue with the delegate pattern.

3. Optional Delegate Methods

In pure Swift, protocols don't have optional functions. This means your delegate must implement every method in the delegate protocol, even if one is irrelevant in your case. For example, you may not always need to be notified when a user taps a cell in a UITableView.

There are ways to get around this, though. In Swift 2.0+, you can write a protocol extension on your delegate protocol that contains a default implementation for the protocol methods you want to make optional. Let's make a new optional method on our delegate protocol using this approach:

protocol MyViewControllerDelegate: class {
    func foo()
    func optionalFunction()
}

extension MyViewControllerDelegate {
    func optionalFunction() { }
}

This adds even more unnecessary code. It isn't really clear what the intention of this extension is unless you already understand what's going on, and there is no way to explicitly show that this method is optional.

Alternatively, if you mark your protocol as @objc, you can use the optional keyword in your function declaration. The problem here is that now your delegate must be an Objective-C object. Just like in our last example, this creates additional constraints on your delegate, and this time they are even more restrictive.

4. There Can Be Only One

The delegate pattern only allows for one delegate to respond to events. This may be just fine for some situations, but if you need multiple objects to be notified of an event, the delegate pattern may not work for you. Another common scenario you may come across is when you need different objects to be notified of different delegate events. The delegate pattern can be a very useful tool, which is why it is so widely used, but recognizing the limitations it creates is important when you are deciding whether it is the right solution for any given problem.

About the author

Anthony Miller is the lead iOS developer at App-Order in Las Vegas, Nevada, USA. He has written and released numerous apps on the App Store and is an avid open source contributor. When he's not developing, Anthony loves board games, line dancing, and frequent trips to Disneyland.

Understanding UIKit Fundamentals

Packt
01 Jun 2016
9 min read
In this article by Jak Tiano, author of the book Learning Xcode, we're mostly going to be talking about concepts rather than concrete code examples. Since we've been using UIKit throughout the whole book (and we will continue to do so), I'm going to do my best to elaborate on some things we've already seen and give you new information that you can apply to what we do in the future.

(For more resources related to this topic, see here.)

We've heard a lot about UIKit. We've seen it at the top of our Swift files in the form of import UIKit. We've used many of the UI elements and classes it provides for us. Now, it's time to take an isolated look at the biggest and most important framework in iOS development.

Application management

Unlike most other frameworks in the iOS SDK, UIKit is deeply integrated into the way your app runs. That's because UIKit is responsible for some of the most essential functionalities of an app: it manages your application's window and view architecture, which we'll be talking about next, and it drives the main run loop, which basically means that it is executing your program.

The UIDevice class

In addition to these very important features, UIKit also gives you access to some other useful information about the device the app is currently running on through the UIDevice class.

Using online resources and documentation: Since this article is about exploring frameworks, it is a good time to remind you that you can (and should!) always be searching online for anything and everything. For example, if you search for UIDevice, you'll end up on Apple's developer page for the UIDevice class, where you can see even more bits of information that you can pull from it. As we progress, keep in mind that searching for the name of a class or framework will usually give you quick access to the full documentation.

Here are some code examples of the information you can access:

UIDevice.currentDevice().name
UIDevice.currentDevice().model
UIDevice.currentDevice().orientation
UIDevice.currentDevice().batteryLevel
UIDevice.currentDevice().systemVersion

Some developers have a little bit of fun with this information: for example, Snapchat gives you a special filter to use for photos when your battery is fully charged. Always keep an open mind about what you can do with the data you have access to!

Views

One of the most important responsibilities of UIKit is that it provides views and the view hierarchy architecture. We've talked before about what a view is within the MVC programming paradigm, but here we're referring to the UIView class that acts as the base for (almost) all of our visual content in iOS programming.

While it wasn't too important to know about when just getting our feet wet, now is a good time to really dig in a bit and understand what UIViews are and how they work, both on their own and together. Let's start from the beginning: a view (UIView) defines a rectangle on your screen that is responsible for output and input, meaning drawing to the screen and receiving touch events. It can also contain other views, known as subviews, which ultimately create a view hierarchy. As a result of this hierarchy, we have to be aware of the coordinate systems involved. Now, let's talk about each of these three functions: drawing, hierarchies, and coordinate systems.

Drawing

Each UIView is responsible for drawing itself to the screen. In order to optimize drawing performance, a view will usually try to render its content once and then reuse that image content when it doesn't change.
It can even move and scale content around inside of it without needing to redraw, which can be an expensive operation:

An overview of how UIView draws itself to the screen

With the system-provided views, all of this is handled automatically. However, if you ever need to create your own UIView subclass that uses custom drawing, it's important to know what goes on behind the scenes. To implement custom drawing in a view, you need to implement the drawRect() function in your subclass. When something changes in your view, you need to call the setNeedsDisplay() function, which acts as a marker to let the system know that your view needs to be redrawn. During the next drawing cycle, the code in your drawRect() function will be executed to refresh the content of your view, which will then be cached for performance.

A code example of this custom drawing functionality is a bit beyond the scope of this article, but discussing it will hopefully give you a better understanding of how drawing works, in addition to giving you a jumping-off point should you need to do this in the future.

Hierarchies

Now, let's discuss view hierarchies. When we use a view controller in a storyboard, we drag UI elements onto the view controller. However, what we are actually doing is adding a subview to the base view of the view controller. And in fact, that base view is a subview of the UIWindow, which is also a UIView. So, though we haven't really acknowledged it, we've already put view hierarchies to work many times.

The easiest way to think about what happens in a view hierarchy is that you set one view's parent coordinate system relative to another view. By default, you'd be setting a view's coordinate system to be relative to the base view, which is normally just the whole screen. But you can also set the parent coordinate system to some other view, so that when you move or transform the parent view, the child views are moved and transformed along with it.

Example of how parenting works with a view hierarchy.

It's also important to note that the view hierarchy impacts the draw order of your views. All of a view's subviews will be drawn on top of the parent view, and the subviews will be drawn in the order they were added (the last subview added will be on top). To add a subview through code, you can use the addSubview() function. Here's an example:

var view1 = UIView()
var view2 = UIView()
view1.addSubview(view2)

The top-most views will intercept a touch first, and if a view doesn't respond, the touch will be passed down the view hierarchy until a view does respond.

Coordinate systems

With all of this drawing and parenting, we need to take a minute to look at how the coordinate system works in UIKit for our views. The origin (the 0,0 point) in UIKit is the top left of the screen, and coordinates increase along X to the right and along Y downward. Each view is placed in this upper-left positioning system relative to its parent view's origin.

Be careful! Other frameworks in iOS use different coordinate systems. For example, SpriteKit uses the lower-left corner as the origin.

Each view also has its own set of positioning information. This is composed of the view's frame, bounds, and center. The frame rectangle describes the origin and size of the view relative to its parent view's coordinate system. The bounds rectangle describes the origin and size of the view in its local coordinate system. The center is just the center point of the view relative to the parent view.
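To make the frame, bounds, and center distinction concrete, here is a minimal sketch you could try in a playground; the specific view sizes are just assumptions for illustration:

import UIKit

let parent = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))
let child = UIView(frame: CGRect(x: 20, y: 20, width: 50, height: 50))
parent.addSubview(child)

child.frame   // (20, 20, 50, 50): origin and size in the parent's coordinate system
child.bounds  // (0, 0, 50, 50): origin and size in the child's own coordinate system
child.center  // (45, 45): the midpoint of the frame, in the parent's coordinates

Notice that moving the child only changes its frame and center; its bounds stay the same unless the view's own internal coordinate space changes.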
When dealing with so many different coordinate systems, it can seem like a nightmare to compare positions from different views. Luckily, the UIView class provides a simple convertPoint() function to convert points between systems. Try running this little experiment in a playground to see how the point gets converted from one view's coordinate system to the other:

import UIKit
let view1 = UIView(frame: CGRect(x: 0, y: 0, width: 50, height: 50))
let view2 = UIView(frame: CGRect(x: 10, y: 10, width: 30, height: 30))
view1.addSubview(view2)
let pointFrom1 = CGPoint(x: 20, y: 20)
let pointFromView2 = view1.convertPoint(pointFrom1, toView: view2)

Hopefully, you now have a much better understanding of some of the underlying workings of the view system in UIKit.

Documents, displays, printing, and more

In this section, I'm going to do my best to introduce you to the many additional features of the UIKit framework. The idea is to give you a better understanding of what is possible with UIKit; if anything sounds interesting to you, you can go off and explore these features on your own.

Documents

UIKit has built-in support for documents, much like you'd find on a desktop operating system. Using the UIDocument class, UIKit can help you save and load documents in the background, in addition to saving them to iCloud. This could be a powerful feature for any app that allows the user to create content that they expect to save and resume working on later.

Displays

On most new iOS devices, you can connect external screens via HDMI. You can take advantage of these external displays by creating a new instance of the UIWindow class and associating it with the external display's screen. You can then add subviews to that window to create a second-screen experience for devices like a big-screen TV. While most consumers don't ever use HDMI-connected external displays, this is a great feature to keep in mind when working on internal applications for corporate or personal use.

Printing

Using the UIPrintInteractionController, you can set up and send print jobs to AirPrint-enabled printers on the user's network. Before you print, you can also create PDFs by drawing content off screen to make printing easier.

And more!

There are many more features of UIKit that are just waiting to be explored! To be honest, UIKit seems to be pretty much a dumping ground for any general features that were just a bit too small to deserve their own framework. If you do some digging in Apple's documentation, you'll find all kinds of interesting things you can do with UIKit, such as creating custom keyboards, creating share sheets, and adding custom cut-copy-paste support.

Summary

In this article, we looked at the biggest and most important framework, UIKit, and learned about some of the most important system processes, like the view hierarchy.

Resources for Article:

Further resources on this subject:

Building Surveys using Xcode [article]
Run Xcode Run [article]
Tour of Xcode [article]