
How-To Tutorials - Cross-Platform Mobile Development

96 Articles

Creating the POI ListView layout

Packt
04 Sep 2015
27 min read
In this article by Nilanchala Panigrahy, author of the book Xamarin Mobile Application Development for Android, Second Edition, we will walk through the activities related to creating and populating a ListView, which includes the following topics:

- Creating the POIApp activity layout
- Creating a custom list row item layout
- The ListView and ListAdapter classes

(For more resources related to this topic, see here.)

It is technically possible to create and attach the user interface elements to your activity using C# code. However, it is a bit of a mess. We will go with the most common approach by declaring the XML-based layout. Rather than deleting the default layout files, let's give them more appropriate names and remove unnecessary content as follows:

1. Select the Main.axml file in Resources | Layout and rename it to POIList.axml.
2. Double-click on the POIList.axml file to open it in a layout designer window.

Currently, the POIList.axml file contains the layout that was created as part of the default Xamarin Studio template. As per our requirement, we need to add a ListView widget that takes the complete screen width and a ProgressBar in the middle of the screen. The indeterminate progress bar will be displayed to the user while the data is being downloaded from the server. Once the download is complete and the data is ready, the indeterminate progress bar will be hidden before the POI data is rendered on the list view.

1. Now, open the Document Outline tab in the designer window and delete both the button and the LinearLayout.
2. In the designer Toolbox, search for RelativeLayout and drag it onto the designer layout preview window.
3. Search for ListView in the Toolbox search field and drag it over the layout designer preview window. Alternatively, you can drag and drop it over RelativeLayout in the Document Outline tab.

We have just added a ListView widget to POIList.axml. Let's now open the Properties pad in the designer window and edit some of its attributes. There are five buttons at the top of the pad that switch the set of properties being edited. The @+id notation notifies the compiler that a new resource ID needs to be created to identify the widget in API calls, and listView1 identifies the name of the constant. Now, perform the following steps:

1. Change the ID name to poiListView and save the changes.
2. Switch back to the Document Outline pad and notice that the ListView ID is updated.
3. Again, switch back to the Properties pad and click on the Layout button. Under the View Group section of the layout properties, set both the Width and Height properties to match_parent.

The match_parent value for the Height and Width properties tells us that the ListView can use the entire content area provided by the parent, excluding any margins specified. In our case, the parent would be the top-level RelativeLayout. Prior to API level 8, fill_parent was used instead of match_parent to accomplish the same effect. In API level 8, fill_parent was deprecated and replaced with match_parent for clarity. Currently, both constants are defined with the same value, so they have exactly the same effect. However, fill_parent may be removed from future releases of the API; so, going forward, match_parent should be used.

So far, we have added a ListView to RelativeLayout; let's now add a ProgressBar to the center of the screen. Search for Progress Bar in the Toolbox search field. You will notice that several types of progress bars are listed, including horizontal, large, normal, and small.
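Before dragging the progress bar in, it may help to see where these two widgets are heading. The following is only a rough sketch and not code from the article: DownloadPoisAsync() is a hypothetical helper, and the POIListViewAdapter class is only built later in this article. It shows the intended behavior of keeping the progress bar visible while the POI data downloads and hiding it before the list is rendered:

// Illustrative sketch only; requires using Android.Views, Android.Widget,
// and System.Collections.Generic. DownloadPoisAsync() is hypothetical.
protected override async void OnCreate (Bundle savedInstanceState)
{
    base.OnCreate (savedInstanceState);
    SetContentView (Resource.Layout.POIList);

    var listView = FindViewById<ListView> (Resource.Id.poiListView);
    var progressBar = FindViewById<ProgressBar> (Resource.Id.progressBar);

    progressBar.Visibility = ViewStates.Visible;   // show while downloading
    List<PointOfInterest> pois = await DownloadPoisAsync ();
    progressBar.Visibility = ViewStates.Gone;      // hide before rendering the list

    listView.Adapter = new POIListViewAdapter (this, pois);
}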
Drag the normal progress bar onto RelativeLayout. By default, the ProgressBar widget is aligned to the top left of its parent layout. To align it to the center of the screen, select the progress bar in the Document Outline tab, switch to the Properties view, and click on the Layout tab. Now select the Center In Parent checkbox, and you will notice that the progress bar is aligned to the center of the screen and appears on top of the list view.

Currently, the progress bar is visible in the center of the screen. By default, it should be hidden in the layout and made visible only while the data is being downloaded. Change the ProgressBar ID to progressBar and save the changes. To hide the ProgressBar from the layout, click on the Behavior tab in the Properties view. From the Visibility box, select gone.

This behavior can also be controlled in code by setting the visibility on any view to any of the following values. The View.Visibility property allows you to control whether a view is visible or not. It is based on the ViewStates enum, which defines the following values:

- Gone: This value tells the parent ViewGroup to treat the View as though it does not exist, so no space will be allocated in the layout
- Invisible: This value tells the parent ViewGroup to hide the content of the View; however, it still occupies the layout space
- Visible: This value tells the parent ViewGroup to display the content of the View

Click on the Source tab to switch the IDE context from the visual designer to code, and see what we have built so far. Notice that the following code is generated for the POIList.axml layout:

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
    p1:layout_width="match_parent"
    p1:layout_height="match_parent"
    p1:id="@+id/relativeLayout1">
    <ListView
        p1:minWidth="25px"
        p1:minHeight="25px"
        p1:layout_width="match_parent"
        p1:layout_height="match_parent"
        p1:id="@+id/poiListView" />
    <ProgressBar
        p1:layout_width="wrap_content"
        p1:layout_height="wrap_content"
        p1:id="@+id/progressBar"
        p1:layout_centerInParent="true"
        p1:visibility="gone" />
</RelativeLayout>

Creating POIListActivity

When we created the POIApp solution, along with the default layout, a default activity (MainActivity.cs) was created. Let's rename the MainActivity.cs file to POIListActivity.cs:

1. Select the MainActivity.cs file from Solution Explorer and rename it to POIListActivity.cs.
2. Open the POIListActivity.cs file in the code editor and rename the class to POIListActivity.
3. The POIListActivity class currently contains the code that was created automatically while creating the solution using Xamarin Studio. We will write our own activity code, so let's remove all the code from the POIListActivity class.
4. Override the OnCreate() activity life cycle callback method. This method will be used to attach the activity layout, instantiate the views, and write other activity initialization logic. Add the following code blocks to the POIListActivity class:

namespace POIApp
{
    [Activity (Label = "POIApp", MainLauncher = true, Icon = "@drawable/icon")]
    public class POIListActivity : Activity
    {
        protected override void OnCreate (Bundle savedInstanceState)
        {
            base.OnCreate (savedInstanceState);
        }
    }
}

Now let's set the activity content layout by calling the SetContentView(layoutId) method. This method places the layout content directly into the activity's view hierarchy. Let's provide the reference to the POIList layout created in the previous steps.
At this point, the POIListActivity class looks as follows:

namespace POIApp
{
    [Activity (Label = "POIApp", MainLauncher = true, Icon = "@drawable/icon")]
    public class POIListActivity : Activity
    {
        protected override void OnCreate (Bundle savedInstanceState)
        {
            base.OnCreate (savedInstanceState);
            SetContentView (Resource.Layout.POIList);
        }
    }
}

Notice that in the preceding code snippet, the POIListActivity class uses some of the [Activity] attributes, such as Label, MainLauncher, and Icon. During the build process, Xamarin.Android uses these attributes to create an entry in the AndroidManifest.xml file. Xamarin makes this easier by allowing all of the manifest properties to be set using attributes so that you never have to modify them manually in AndroidManifest.xml.

So far, we have declared an activity and attached the layout to it. At this point, if you run the app on your Android device or emulator, you will notice that a blank screen is displayed.

Creating the POI list row layout

We now turn our attention to the layout for each row in the ListView widget. The Android platform provides a number of default layouts out of the box that can be used with a ListView widget:

- SimpleListItem1: A single line with a single caption field
- SimpleListItem2: A two-line layout with a larger font and a brighter text color for the first field
- TwoLineListItem: A two-line layout with an equal-sized font for both lines and a brighter text color for the first line
- ActivityListItem: A single line of text with an image view

All of the preceding layouts provide a pretty standard design, but for more control over content layout, a custom layout can also be created, which is what is needed for poiListView. To create a new layout, perform the following steps:

1. In the Solution pad, navigate to Resources | Layout, right-click on it, and navigate to Add | New File.
2. Select Android from the list on the left-hand side, Android Layout from the template list, enter POIListItem in the name column, and click on New.

Before we proceed to lay out the design for each of the row items in the list, we should sketch it on a piece of paper and analyze how the UI will look. In our example, the POI data will be organized as shown in the row diagram. There are a number of ways to achieve this layout, but we will use RelativeLayout to achieve the result. There is a lot going on in this diagram. Let's break it down as follows:

- A RelativeLayout view group is used as the top-level container; it provides a number of flexible options for positioning content relative to its edges or to other content.
- An ImageView widget is used to display a photo of the POI, and it is anchored to the left-hand side of the RelativeLayout utility.
- Two TextView widgets are used to display the POI name and address information. They need to be anchored to the right-hand side of the ImageView widget and centered within the parent RelativeLayout utility. The easiest way to accomplish this is to place both the TextView classes inside another layout; in this case, a LinearLayout widget with the orientation set to vertical.
- An additional TextView widget is used to display the distance, and it is anchored on the right-hand side of the RelativeLayout view group and centered vertically.

Now, our task is to get this definition into POIListItem.axml. The next few sections describe how to accomplish this using the Content view of the designer when feasible and the Source view when required.
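As a point of contrast, if one of the built-in layouts listed above had been sufficient, no designer work would be needed at all. The following minimal sketch (the string array is placeholder data, not from the article) shows SimpleListItem1 wired to our poiListView from inside an activity such as POIListActivity:

// Hedged sketch: a built-in row layout with an ArrayAdapter.
// The names array is placeholder data used purely for illustration.
var names = new [] { "Golden Gate Bridge", "Space Needle", "Hoover Dam" };
var listView = FindViewById<ListView> (Resource.Id.poiListView);
listView.Adapter = new ArrayAdapter<string> (
    this,                                    // the current Activity
    Android.Resource.Layout.SimpleListItem1, // built-in single-line row
    names);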
Adding a RelativeLayout view group

The RelativeLayout layout manager allows its child views to be positioned relative to each other or relative to their container. In our case, for building the row layout shown in the preceding diagram, we can use RelativeLayout as the top-level view group. When the POIListItem.axml layout file was created, a top-level LinearLayout was added by default. First, we need to change the top-level ViewGroup to RelativeLayout. The following section will take you through the steps to complete the layout design for the POI list row:

1. With POIListItem.axml opened in the Content mode, select the entire layout by clicking on the content area. You should see a blue outline going around the edge. Press Delete. The LinearLayout view group will be deleted, and you will see a message indicating that the layout is empty. Alternatively, you can also select the LinearLayout view group from the Document Outline tab and press Delete.
2. Locate the RelativeLayout view group in the toolbox and drag it onto the layout.
3. Select the RelativeLayout view group from Document Outline. Open the Properties pad and change the following properties:
   - The Padding option to 5dp
   - The Layout Height option to wrap_content
   - The Layout Width option to match_parent

The padding property controls how much space will be placed around each item as a margin, and the height determines the height of each list row. Setting the Layout Width option to match_parent will cause the POIListItem content to consume the entire width of the screen, while setting the Layout Height option to wrap_content will cause each row to be just tall enough for its tallest control.

Switch to the Code view to see what has been added to the layout. Notice that the following lines of code have been added to RelativeLayout:

<RelativeLayout
    p1:minWidth="25px"
    p1:minHeight="25px"
    p1:layout_width="match_parent"
    p1:layout_height="wrap_content"
    p1:id="@+id/relativeLayout1"
    p1:padding="5dp"/>

Android runs on a variety of devices that offer different screen sizes and densities. When specifying dimensions, you can use a number of different units, including pixels (px), inches (in), and density-independent pixels (dp). Density-independent pixels are abstract units based on 1 dp being 1 pixel on a 160 dpi screen. At runtime, Android will scale the actual size up or down based on the actual screen density. It is a best practice to specify dimensions using density-independent pixels.

Adding an ImageView widget

The ImageView widget in Android is used to display an arbitrary image from different sources. In our case, we will download the images from the server and display them in the list. Let's add an ImageView widget to the left-hand side of the layout and set the following configurations:

1. Locate the ImageView widget in the toolbox and drag it onto RelativeLayout.
2. With the ImageView widget selected, use the Properties pad to set the ID to poiImageView.
3. Now, click on the Layout tab in the Properties pad and set the Height and Width values to 65dp.
4. In the property grouping named RelativeLayout, set Center Vertical to true. Simply clicking on the checkbox does not seem to work, but you can click on the small icon that looks like an edit box, which is to the right-hand side, and just enter true. If everything else fails, just switch to the Source view and enter the following code:
   p1:layout_centerVertical="true"
5. In the property grouping named ViewGroup, set Margin Right to 5dp. This brings some space between the POI image and the POI name.
Switch to the Code view to see what has been added to the layout. Notice the following lines of code added to ImageView:

<ImageView
    p1:src="@android:drawable/ic_menu_gallery"
    p1:layout_width="65dp"
    p1:layout_height="65dp"
    p1:layout_marginRight="5dp"
    p1:id="@+id/poiImageView" />

Adding a LinearLayout widget

LinearLayout is one of the most basic layout managers; it organizes its child views either horizontally or vertically based on the value of its orientation property. Let's add a LinearLayout view group that will be used to lay out the POI name and address data as follows:

1. Locate the LinearLayout (vertical) view group in the toolbox. Adding this widget is a little trickier because we want it anchored to the right-hand side of the ImageView widget. Drag the LinearLayout view group to the right-hand side of the ImageView widget until the edge turns into a blue dashed line, and then drop the LinearLayout view group. It will be aligned with the right-hand side of the ImageView widget.
2. In the property grouping named RelativeLayout of the Layout section, set Center Vertical to true. As before, you will need to enter true in the edit box or manually add it in the Source view.

Switch to the Code view to see what has been added to the layout. Notice the following lines of code added to LinearLayout:

<LinearLayout
    p1:orientation="vertical"
    p1:minWidth="25px"
    p1:minHeight="25px"
    p1:layout_width="wrap_content"
    p1:layout_height="wrap_content"
    p1:layout_toRightOf="@id/poiImageView"
    p1:id="@+id/linearLayout1"
    p1:layout_centerVertical="true" />

Adding the name and address TextView classes

Add the TextView classes to display the POI name and address:

1. Locate TextView in the Toolbox and add a TextView to the layout. This TextView needs to be added within the LinearLayout view group we just added, so drag the TextView over the LinearLayout view group until it turns blue and then drop it.
2. Set the TextView ID to nameTextView and set the text size to 20sp. The text size can be set in the Style section of the Properties pad; you will need to expand the Text Appearance group by clicking on the ellipsis (...) button on the right-hand side.

Scale-independent pixels (sp) are like dp units, but they are also scaled by the user's font size preference. Android allows users to select a font size in the Accessibility section of Settings. When font sizes are specified using sp, Android will not only take into account the screen density when scaling text, but will also consider the user's accessibility settings. It is recommended that you specify font sizes using sp.

3. Add another TextView to the LinearLayout view group using the same technique, except drag the new widget to the bottom edge of nameTextView until it changes to a blue dashed line and then drop it. This will cause the second TextView to be added below nameTextView. Set the font size to 14sp.
4. Change the ID of the newly added TextView to addrTextView.
5. Now change the sample text for nameTextView and addrTextView to POI Name and City, State, Postal Code respectively. To edit the text shown in a TextView, just double-tap the widget in the content panel. This enables a small editor that allows you to enter the text directly. Alternatively, you can change the text by entering a value for the Text property in the Widget section of the Properties pad.

It is a good design practice to declare all your static strings in the Resources/values/strings.xml file.
By declaring the strings in the strings.xml file, you can easily translate your whole app to support other languages. Let's add the following strings to strings.xml:

<string name="poi_name_hint">POI Name</string>
<string name="address_hint">City, State, Postal Code.</string>

You can now change the Text property of both nameTextView and addrTextView by selecting the ellipsis (…) button, which is next to the Text property in the Widget section of the Properties pad. Notice that this will open a dialog window that lists all the strings declared in the strings.xml file. Select the appropriate strings for both TextView objects.

Now let's switch to the Code view to see what has been added to the layout. Notice the following lines of code added inside LinearLayout:

<TextView
    p1:layout_width="match_parent"
    p1:layout_height="wrap_content"
    p1:id="@+id/nameTextView"
    p1:textSize="20sp"
    p1:text="@string/app_name" />
<TextView
    p1:text="@string/address_hint"
    p1:layout_width="match_parent"
    p1:layout_height="wrap_content"
    p1:id="@+id/addrTextView"
    p1:textSize="14sp" />

Adding the distance TextView

Add a TextView to show the distance from the POI:

1. Locate the TextView in the toolbox and add a TextView to the layout. This TextView needs to be anchored to the right-hand side of the RelativeLayout view group, but there is no way to visually accomplish this; so, we will use a multistep process. Initially, align the TextView with the right-hand edge of the LinearLayout view group by dragging it to the left-hand side until the edge changes to a dashed blue line and drop it.
2. In the Widget section of the Properties pad, name the widget distanceTextView and set the font size to 14sp.
3. In the Layout section of the Properties pad, set Align Parent Right to true, Center Vertical to true, and clear out the linearLayout1 view group name in the To Right Of layout property.
4. Change the sample text to 204 miles. To do this, let's add a new string entry to strings.xml and set the Text property from the Properties pad.

This completes the design as seen from the Content view at this point. Switch back to the Source tab in the layout designer, and notice the following code generated for the POIListItem.axml layout:

<?xml version="1.0" encoding="utf-8"?>
<RelativeLayout
    p1:minWidth="25px"
    p1:minHeight="25px"
    p1:layout_width="match_parent"
    p1:layout_height="wrap_content"
    p1:id="@+id/relativeLayout1"
    p1:padding="5dp">
    <ImageView
        p1:src="@android:drawable/ic_menu_gallery"
        p1:layout_width="65dp"
        p1:layout_height="65dp"
        p1:layout_marginRight="5dp"
        p1:id="@+id/poiImageView" />
    <LinearLayout
        p1:orientation="vertical"
        p1:layout_width="wrap_content"
        p1:layout_height="wrap_content"
        p1:layout_toRightOf="@id/poiImageView"
        p1:id="@+id/linearLayout1"
        p1:layout_centerVertical="true">
        <TextView
            p1:layout_width="match_parent"
            p1:layout_height="wrap_content"
            p1:id="@+id/nameTextView"
            p1:textSize="20sp"
            p1:text="@string/app_name" />
        <TextView
            p1:text="@string/address_hint"
            p1:layout_width="match_parent"
            p1:layout_height="wrap_content"
            p1:id="@+id/addrTextView"
            p1:textSize="14sp" />
    </LinearLayout>
    <TextView
        p1:text="@string/distance_hint"
        p1:layout_width="wrap_content"
        p1:layout_height="wrap_content"
        p1:id="@+id/textView1"
        p1:layout_centerVertical="true"
        p1:layout_alignParentRight="true" />
</RelativeLayout>

Creating the PointOfInterest entity class

The first class that is needed is the one that represents the primary focus of the application, a PointOfInterest class.
POIApp will allow the following attributes to be captured for each point of interest:

- Id
- Name
- Description
- Address
- Latitude
- Longitude
- Image

The POI entity class can be nothing more than a simple .NET class, which houses these attributes. To create a POI entity class, perform the following steps:

1. Select the POIApp project from the Solution Explorer in Xamarin Studio. Select the POIApp project and not the solution, which is the top-level node in the Solution pad.
2. Right-click on it and select New File.
3. On the left-hand side of the New File dialog box, select General.
4. At the top of the template list, in the middle of the dialog box, select Empty Class (C#).
5. Enter the name PointOfInterest and click on OK. The class will be created in the POIApp project folder.
6. Change the visibility of the class to public and fill in the attributes based on the list previously identified. The following code snippet is from POIApp/POIApp/PointOfInterest.cs in the code bundle available for this article:

public class PointOfInterest
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; }
    public string Address { get; set; }
    public string Image { get; set; }
    public double? Latitude { get; set; }
    public double? Longitude { get; set; }
}

Note that the Latitude and Longitude attributes are marked as nullable. In the case of latitude and longitude, (0, 0) is actually a valid location, so a null value indicates that the attributes have never been set.

Populating the ListView item

All the adapter views, such as ListView and GridView, use an Adapter that acts as a bridge between the data and the views. The Adapter iterates through the content and generates Views for each data item in the list. The Android SDK provides three different adapter implementations: ArrayAdapter, CursorAdapter, and SimpleAdapter. An ArrayAdapter expects an array or a list as input, while CursorAdapter accepts an instance of a Cursor, and SimpleAdapter maps static data defined in the resources. The type of adapter that suits your app's needs is purely based on the input data type.

BaseAdapter is the generic implementation for all three adapter types, and it implements the IListAdapter, ISpinnerAdapter, and IDisposable interfaces. This means that BaseAdapter can be used for ListView, GridView, or Spinners. For POIApp, we will create a subtype of BaseAdapter<T> as it meets our specific needs, works well in many scenarios, and allows the use of our custom layout.

Creating POIListViewAdapter

In order to create POIListViewAdapter, we will start by creating a custom adapter as follows:

1. Create a new class named POIListViewAdapter.
2. Open the POIListViewAdapter class file, make the class public, and specify that it inherits from BaseAdapter<PointOfInterest>.

Now that the adapter class has been created, we need to provide a constructor and implement four abstract methods.

Implementing a constructor

Let's implement a constructor that accepts all the information we will need to work with to populate the list. Typically, you need to pass at least two parameters: an instance of an activity, because we need the activity context while accessing the standard common resources, and an input data list that can be enumerated to populate the ListView.
The following code shows the constructor from the code bundle:

private readonly Activity context;
private List<PointOfInterest> poiListData;

public POIListViewAdapter (Activity _context, List<PointOfInterest> _poiListData) : base()
{
    this.context = _context;
    this.poiListData = _poiListData;
}

Implementing Count { get }

The BaseAdapter<T> class provides an abstract definition for a read-only Count property. In our case, we simply need to provide the count of POIs as provided in poiListData. The following code example demonstrates the implementation from the code bundle:

public override int Count {
    get { return poiListData.Count; }
}

Implementing GetItemId()

The BaseAdapter<T> class provides an abstract definition for a method that returns a long ID for a row in the data source. We can use the position parameter to access a POI object in the list and return the corresponding ID. The following code example demonstrates the implementation from the code bundle:

public override long GetItemId (int position)
{
    return position;
}

Implementing the index getter method

The BaseAdapter<T> class provides an abstract definition for an index getter method that returns a typed object based on a position parameter passed in as an index. We can use the position parameter to access the POI object from poiListData and return an instance. The following code example demonstrates the implementation from the code bundle:

public override PointOfInterest this [int index] {
    get { return poiListData [index]; }
}

Implementing GetView()

The BaseAdapter<T> class provides an abstract definition for GetView(), which returns a view instance that represents a single row in the ListView item. As in other scenarios, you can choose to construct the view entirely in code or to inflate it from a layout file. We will use the layout file we previously created. The following code example demonstrates inflating a view from a layout file:

view = context.LayoutInflater.Inflate (Resource.Layout.POIListItem, null, false);

The first parameter of Inflate is a resource ID and the second is a root ViewGroup, which in this case can be left null since the view will be added to the ListView item when it is returned.

Reusing row Views

The GetView() method is called for each row in the source dataset. For datasets with a large number of rows (hundreds or even thousands), it would require a great deal of resources to create a separate view for each row, and it would seem wasteful since only a few rows are visible at any given time. The AdapterView architecture addresses this need by placing row Views into a queue so that they can be reused as they scroll out of the user's view. The GetView() method accepts a parameter named convertView, which is of type View. When a view is available for reuse, convertView will contain a reference to the view; otherwise, it will be null and a new view should be created. The following code example depicts the use of convertView to facilitate the reuse of row Views:

var view = convertView;
if (view == null) {
    view = context.LayoutInflater.Inflate (Resource.Layout.POIListItem, null);
}

Populating row Views

Now that we have an instance of the view, we need to populate the fields. The View class defines a generic FindViewById<T> method, which returns a typed instance of a widget contained in the view. You pass in the resource ID defined in the layout file to specify the control you wish to access.
The following code returns access to nameTextView and sets the Text property:

PointOfInterest poi = this [position];
view.FindViewById<TextView> (Resource.Id.nameTextView).Text = poi.Name;

Populating addrTextView is slightly more complicated because we only want to use the portions of the address we have, and we want to hide the TextView if none of the address components are present. The View.Visibility property allows you to control the visibility of a view. In our case, we want to use the ViewStates.Gone value if none of the components of the address are present. The following code shows the logic in GetView():

if (String.IsNullOrEmpty (poi.Address)) {
    view.FindViewById<TextView> (Resource.Id.addrTextView).Visibility = ViewStates.Gone;
} else {
    view.FindViewById<TextView> (Resource.Id.addrTextView).Text = poi.Address;
}

Populating the value for the distance text view requires an understanding of the location services. We need to do some calculation, considering the user's current location together with the POI latitude and longitude (a rough sketch of this appears at the end of this section).

Populating the list thumbnail image

Image downloading and processing is a complex task. You need to consider various aspects, such as the network logic to download images from the server, caching of downloaded images for performance, and image resizing to avoid out-of-memory conditions. Instead of writing our own logic for all the previously mentioned tasks, we can use UrlImageViewHelper, which is a free component available in the Xamarin Component Store. The Xamarin Component Store provides a set of reusable components, including both free and premium components, that can be easily plugged into any Xamarin-based application.

Using UrlImageViewHelper

The following steps will walk you through the process of adding a component from the Xamarin Component Store:

1. To include the UrlImageViewHelper component in POIApp, you can either double-click on the Components folder in the Solution pad, or right-click and select Edit Components. Notice that the component manager will be loaded with the already downloaded components and a Get More Components button that allows you to open the Component Store from the window. Note that to access the component manager, you need to log in to your Xamarin account.
2. Search for UrlImageViewHelper in the components search box available in the left-hand side pane.
3. Now click on the download button to add it to your Xamarin Studio solution.

Now that we have added the UrlImageViewHelper component, let's go back to the GetView() method in the POIListViewAdapter class. Let's take a look at the following section of the code:

var imageView = view.FindViewById<ImageView> (Resource.Id.poiImageView);
if (!String.IsNullOrEmpty (poi.Image)) {
    Koush.UrlImageViewHelper.SetUrlDrawable (imageView, poi.Image, Resource.Drawable.ic_placeholder);
}

Let us examine how the preceding code snippet works. The SetUrlDrawable() method defined in the UrlImageViewHelper component provides the logic to download an image using a single line of code. It accepts three parameters: an instance of imageView, where the image is to be displayed after the download; the image source URL; and the placeholder image. Add a new image, ic_placeholder.png, to the drawable Resources directory. While the image is being downloaded, the placeholder image will be displayed on imageView.

Downloading the image over the network requires Internet permissions. The following section will walk you through the steps involved in defining permissions in your AndroidManifest.xml file.
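Before turning to permissions, here is the rough sketch of the distance calculation promised above. It is only an illustration and not code from the article's bundle: the currentLocation field (an Android.Locations.Location supplied by the activity) is an assumption, and the conversion to miles is hard-coded for brevity.

// Illustrative sketch only: currentLocation is an assumed field on the adapter
// (Android.Locations.Location), not part of the article's code; requires
// using Android.Locations. The ID follows the distanceTextView name assigned
// in the layout steps.
var distanceView = view.FindViewById<TextView> (Resource.Id.distanceTextView);
if (currentLocation != null && poi.Latitude.HasValue && poi.Longitude.HasValue) {
    float[] results = new float[1];
    Location.DistanceBetween (
        currentLocation.Latitude, currentLocation.Longitude,
        poi.Latitude.Value, poi.Longitude.Value, results);
    double miles = results [0] / 1609.344;   // metres to miles
    distanceView.Text = string.Format ("{0:0.0} miles", miles);
} else {
    distanceView.Visibility = ViewStates.Gone;
}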
Adding Internet permissions

Android apps must be granted permissions to access certain features, such as downloading data from the Internet, saving an image to storage, and so on. You must specify the permissions that an app requires in the AndroidManifest.xml file. This allows the installer to show potential users the set of permissions an app requires at the time of installation. To set the appropriate permissions, perform the following steps:

1. Double-click on AndroidManifest.xml in the Properties directory in the Solution pad. The file will open in the manifest editor. There are two tabs, Application and Source, at the bottom of the screen that can be used to toggle between viewing a form for editing the file and the raw XML.
2. In the Required permissions list, check Internet and navigate to File | Save.
3. Switch to the Source view to see the XML; the permission is declared with the standard <uses-permission android:name="android.permission.INTERNET" /> element.

Summary

In this article, we covered a lot about how to create user interface elements using different layout managers and widgets such as TextView, ImageView, ProgressBar, and ListView.

Resources for Article:

Further resources on this subject:
- Code Sharing Between iOS and Android [article]
- XamChat – a Cross-platform App [article]
- Heads up to MvvmCross [article]


Releasing and Maintaining the Application

Packt
25 Aug 2015
11 min read
In this article by Andrey Kovalenko, author of the book PhoneGap by Example, we build on the testing work we have already done: we implemented several unit and integration tests with the Jasmine tool for our application, used the headless browser PhantomJS, and measured performance with Appium. All this is great and helps us automate the testing approach to find bugs in the early stages of application development. Once we finish creating our application and testing it, we can think of delivering it to other people. We can distribute the application in several different ways. Once we finish these tasks, we will be ready to do a full cycle of the application creation and distribution processes.

We already know how to set up development environments to develop for iOS and Android. We will reuse these skills in this article as well to prepare our builds for distribution. This article reads as a step-by-step tutorial for the setup of different tools.

(For more resources related to this topic, see here.)

We already know how to build our application using an IDE (Xcode or Android Studio). However, now we will explore how to build the application for different platforms using the PhoneGap Build service. PhoneGap Build helps us stay away from the different SDKs; it works for us by compiling in the cloud. First of all, we should register on https://build.phonegap.com. It is pretty straightforward. Once we register, we can log in, and under the apps menu section, we will see the list of our applications.

We enter a link to our Git repository with the source files or upload a ZIP archive with the same source code. However, there is a specific requirement for the structure of the folders for upload. We should take only the www directory of the Cordova/PhoneGap application, add config.xml inside it, and compress this folder. Let's look at this approach using the example of the Crazy Bubbles application.

PhoneGap config.xml

In the root folder of the game, we will place the following config.xml file:

<?xml version="1.0" encoding="UTF-8" ?>
<widget id      = "com.cybind.crazybubbles"
    versionCode = "10"
    version     = "1.0.0" >
    <name>Crazy Bubbles</name>
    <description>
        Nice PhoneGap game
    </description>
    <author href="https://build.phonegap.com" email="support@phonegap.com">
        Andrew Kovalenko
    </author>
    <gap:plugin name="com.phonegap.plugin.statusbar" />
</widget>

This configuration file specifies the main setup for the PhoneGap Build application. The setup is made up of these elements:

- widget is the root element of our XML file, based on the W3C specification, with the following attributes:
  - id: This is the application name in the reverse-domain style
  - version: This is the version of the application in numbers format
  - versionCode: This is optional and used only for Android
- name of the application
- description of the application
- author of the application, with website link and e-mail
- List of plugins, if required by the application

We can use this XML file or enter the same information using the web interface when we go to Settings | Configuration.

PhoneGap plugins

As you can see, we included one plugin in config.xml:

<gap:plugin name="com.phonegap.plugin.statusbar" />

There are several attributes that the gap:plugin tag can have. They are as follows:

- name: This is required; the plugin ID in the reverse-domain format
- version: This is optional; the plugin version
- source: This is optional; can be pgb, npm, or plugins.cordova.io.
  The default is pgb.
- params: This is optional; configuration for the plugin, if needed

We included the StatusBar plugin, which doesn't require JavaScript code. However, there are some other plugins that need JavaScript in the index.html file, so we should not forget to add that code.

Initial upload and build

Once we finish the configuration steps and create a ZIP archive of the www folder, we can upload it. Then, we will see the application overview screen. Here, we can see generic information about the application, and we can enable remote debugging with Weinre. Weinre is a remote web inspector; it allows access to the DOM and JavaScript. Now, we can click on the Ready to build button, and it will trigger the build for us.

Here, you can see that the iOS build has failed. Let's click on the application title and figure out what is going on. Once the application properties page loads, we can click on the Error button to see the reason why it failed: we need to provide a signing key. Basically, you need the provisioning profile and certificate needed to build the application. We already downloaded the provisioning profile from the Apple Developer portal, but we should export the certificate from Keychain Access. We are going to open it, find our certificate in the list, and export it.

When we export it, we will be asked for the destination to store the .p12 file, and we add a password to protect the file. Once we save the file, we can go back to the PhoneGap Build portal and create a signing key. Just click on the No key selected button in the dropdown and upload the exported certificate and provisioning profile for the application. Once the upload is finished, the build will be triggered.

Now, we will get a successful result and can see all the build platforms. We can download the application for both iOS and Android and install it on the device. Alternatively, we can install the application by scanning the QR code on the application's main page. We can do this with any mobile QR scanner application on our device. It will return a direct link to the build download for the specific platform. Once it is downloaded, we can install it and see it running on our device. Congratulations! We just successfully created the build with the PhoneGap Build service! Now, let's take a closer look at the versioning approach for the application.

Beta release of the iOS application

For the beta release of our application, we will use the TestFlight service from Apple. As a developer, you need to be a member of the iOS Developer Program. As a tester, you will need to install the application for beta testing and the TestFlight application from the App Store. After that, the tester can leave feedback about the application.

First of all, let's go to https://itunesconnect.apple.com and log in there. After that, we can go to the My Apps section and click on the plus sign in the top-left corner. We will get a popup with a request to enter some main information about the application. Let's add the information about our application. All the fields are well known and do not require additional explanation. Once we click on the Create button, the application is created, and we can see the Versions tab of the application. Now, we need to build and upload our application.
We can do this in two ways:

- Using Xcode
- Using Application Loader

However, before submitting to beta testing, we need to generate a provisioning profile for distribution. Let's do it on the Developer portal.

Generate a distribution provisioning profile

Go to Provisioning Profiles and perform the following steps:

1. Click on + to add a new provisioning profile and go to Distribution | App Store.
2. Then, select the application ID. In my case, it is Travelly.
3. After that, select the certificates to include in the provisioning profile. The certificate should be for distribution as well.
4. Finally, generate the provisioning profile, set a name for the file, and download it.

Now, we can build and upload our application to iTunes Connect.

Upload to iTunes Connect with Xcode

Let's open the Travelly application in Xcode. Go to cordova/platforms/ios and open Travelly.xcodeproj. After that, we have to select iOS Device to run our application. In this case, we will see the Archive option available; it would not be available if the emulator option were selected. Now, we can initiate archiving by going to Product | Archive.

Once the build is completed, we will see the list of archives. Now, click on the Submit to App Store… button. It will ask us to select a development team if we have several teams. At this stage, Xcode is looking for the provisioning profile we generated earlier; we would be notified if there were no distribution provisioning profile for our application. Once we click on Choose, we are redirected to the screen with the binary and provisioning information. When we click on the Submit button, Xcode starts to upload the application to iTunes Connect. Congratulations! We have successfully uploaded our build with Xcode.

Upload to iTunes Connect with Application Loader

Before reviewing the build upload process with Application Loader, we need to install the tool first. Let's go to iTunes Connect | Resources and Help | App Preparation and Delivery and click on the Application Loader link. It will offer the installation file for download. We will download and install it. After that, we can review the upload process.

Uploading with Application Loader is a little different than with Xcode. We will follow the initial steps as before; this time, however, we will click on the Export button, where we can save the .ipa file. Before that, we have to select the export method. We are interested in distribution to the App Store, so we select the first option. We need to save the generated file somewhere on the filesystem. Now, we will launch Application Loader and log in using our Apple Developer account. After that, we will select Deliver Your App and pick the generated file. We can then see the application's generic information: name, version, and so on. When we click on the Next button, we will trigger the upload to iTunes Connect, which is successfully executed; during the process, the package is uploaded to the iTunes Store. Now, if we go to iTunes Connect | My Apps | Travelly | Prerelease | Builds, we will see our two uploaded builds. As you can see, they are both inactive. We need to send our application to internal and external testers.

Invite internal and external testers

Let's work with version 0.0.2 of the application.
First of all, we need to turn on the checkbox to the right of the TestFlight Beta Testing label. There are two types of testers we can invite:

- Internal testers are iTunes Connect users. It is possible to invite up to 25 internal testers.
- External testers are independent users who can install the application using the TestFlight mobile tool.

To invite internal testers, let's go to the Internal Testers tab, add the e-mail of the desired tester, place the check mark, and click on the Invite button. The user will receive an invitation e-mail; they can click on the link and follow the instructions to install the application.

To allow testing for external users, we will go to the External Testers tab. Before becoming available for external testing, the application should be reviewed. For the review, some generic information is needed. We need to add:

- Instructions for the testers on what to test
- A description of the application
- Feedback information

Once this information is entered, we can click on the Next button and answer questions about cryptography usage in the application. We do not use cryptography, so we select No and click on Submit. Now, our application is waiting for review approval, and there is a button available to add external testers. We can invite up to 1,000 external testers. After a tester accepts the invite on their device, the invite will be linked to their current Apple ID. Once the application review is finished, it will become available for external testers.

Summary

In this article of the book, you learned how to release the PhoneGap application with the PhoneGap Build service. Also, we released the application through TestFlight for beta testing. Now, we will be able to develop different types of Cordova/PhoneGap applications and test them. I think it is pretty awesome, don't you?

Resources for Article:

Further resources on this subject:
- Geolocation – using PhoneGap features to improve an app's functionality, write once use everywhere [article]
- Getting Ready to Launch Your PhoneGap App in the Real World [article]
- Using Location Data with PhoneGap [article]


And now for something extra

Packt
25 Aug 2015
9 min read
In this article by Paul F. Johnson, author of the book Cross-platform UI Development with Xamarin.Forms, we'll look at how to add a custom renderer for Windows Phone in particular.

(For more resources related to this topic, see here.)

This article doesn't depend on anything else, because there is no requirement to have a Xamarin subscription; the Xamarin.Forms library is available for free via NuGet. All you require is Visual Studio 2013 (or higher) running on Windows 8 (or higher; this is needed for the Windows Phone 8 emulator).

Let's make a start

Before we can create a custom renderer, we have to create something to render. In this case, we need to create a Xamarin.Forms application. For this, create a new project in Visual Studio. Selecting the OK button creates the project. Once the project is created, you will see that four projects have been created:

- Portable (also known as the PCL, the portable class library)
- Droid (Android 4.0.3 or higher)
- iOS (iOS 7 or higher)
- Windows Phone (8 or higher). By default, it is 8.0, but it can be set to 8.1.

If we expand the WinPhone project and examine References, you can see that Xamarin.Forms is already installed. You can also see the reference to the PCL at the bottom.

Creating a button

Buttons are available natively in Xamarin.Forms. You can perform some very basic operations on a button (such as assigning text, a Clicked event, and so on). When built, each platform will render its own version of Button. This is how the code looks:

var button = new Button { Text = "Hello" };
button.Clicked += delegate { … };

For our purposes, we don't want a dull standard button; we want a button with a rounded, colored style. We may also want to do something really different by having a button with both text and an image, where the image can sit on either side of the text.

Creating the custom button

The first part of creating the button is to create an empty class that inherits Button, as shown in the following code:

using Xamarin.Forms;

namespace CustomRenderer
{
    public class NewButton : Button
    {
        public NewButton()
        {
        }
    }
}

As NewButton inherits Button, it will have all the properties and events that a standard Button has. Therefore, we can use the following code:

var btnLogin = new NewButton() {
    Text = "Login",
};
btnLogin.Clicked += delegate {
    if (!string.IsNullOrEmpty(txtUsername.Text) && !string.IsNullOrEmpty(txtPassword.Text))
        LoginUser(txtUsername.Text, txtPassword.Text);
};

However, the difference here is that, as we are using something that inherits the Button class, we can either rely on the default renderer or define our own.

The custom renderer

To start with, we need to tell the platform that we will use a custom renderer, as follows:

[assembly: ExportRenderer(typeof(NewButton), typeof(NewButtonRenderer))]
namespace WinPhone
{
    class NewButtonRenderer : ButtonRenderer

We start by saying that we will use a renderer on the NewButton object from the PCL with the NewButtonRenderer class. The class itself has to inherit ButtonRenderer, which contains the code we need to create the renderer. The next part is to override OnElementChanged. This method is triggered when an element within the object being worked on changes.
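Before looking at the Windows Phone specifics, here is a minimal sketch of how the custom button might be hosted on a page in the shared PCL. It is only an illustration: LoginPage is a hypothetical page, and txtUsername, txtPassword, and LoginUser are the undefined names carried over from the snippet above, filled in here purely so the example compiles.

// Minimal sketch (requires using Xamarin.Forms); LoginUser is a placeholder.
public class LoginPage : ContentPage
{
    public LoginPage ()
    {
        var txtUsername = new Entry { Placeholder = "Username" };
        var txtPassword = new Entry { Placeholder = "Password", IsPassword = true };

        var btnLogin = new NewButton { Text = "Login" };
        btnLogin.Clicked += (sender, e) => {
            if (!string.IsNullOrEmpty (txtUsername.Text) &&
                !string.IsNullOrEmpty (txtPassword.Text))
                LoginUser (txtUsername.Text, txtPassword.Text);
        };

        Content = new StackLayout {
            Padding = 20,
            Children = { txtUsername, txtPassword, btnLogin }
        };
    }

    void LoginUser (string username, string password)
    {
        // Placeholder: real authentication logic is outside the article's scope.
    }
}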
Considerations for Windows Phone

A prime consideration on Windows Phone is that the ViewRenderer base is actually a Canvas that has the control (in this case, a button) on it as a child. This is an advantage for us. If we clear the child from the canvas, the canvas can be manipulated, and the button can be added back. It is important to remember that we are dealing with two distinct entities, and each has its own properties. For example, the white rectangle that surrounds a Windows Phone button is part of the control, whereas the color and styling are part of the canvas, as shown in the following code:

protected override void OnElementChanged(ElementChangedEventArgs<Xamarin.Forms.Button> e)
{
    base.OnElementChanged(e);
    if (Control != null)
    {
        // clear the children of the canvas. We are not deleting the button.
        Children.Clear();

        // create the new background
        var border = new Border
        {
            CornerRadius = new System.Windows.CornerRadius(10),
            Background = new SolidColorBrush(System.Windows.Media.Color.FromArgb(255, 130, 186, 132)),
            BorderBrush = new SolidColorBrush(System.Windows.Media.Color.FromArgb(255, 45, 176, 51)),
            BorderThickness = new System.Windows.Thickness(0.8),
            Child = Control // this adds the control back to the border
        };

        Control.Foreground = new SolidColorBrush(Colors.White); // make the text white
        Control.BorderThickness = new System.Windows.Thickness(0); // remove the button border that is always there

        Children.Add(border); // add the border to the canvas. Remember, this also contains the Control
    }
}

When compiled, the UI will give you a styled button. I'm sure you'll agree; it's much nicer than the standard Windows Phone button.

The sound of music

An image button is also fairly simple to create. Again, create a new Xamarin.Forms project in Visual Studio. Once created, as we did before, create a new empty class that inherits Button. Why is it empty? Unfortunately, it's not that simple to pass additional properties through a custom renderer, so to ensure an easier life, the class just inherits the base class, and anything else the renderer needs is accessed through the App class.

Setting up the PCL code

In the PCL, we will have the following code:

App.text = "This is a cow";
App.filename = "cow.png";
App.onTheLeft = true;
var btnComposite = new NewCompositeButton(){ };

text, filename, and onTheLeft are defined in the App class of the PCL and are accessed using CompositeUI.App.filename (CompositeUI is the namespace I've used). The PCL is now set up, so the renderer is needed.

The Windows Phone renderer

As before, we need to tell the platform that we will use our own renderer and override the default OnElementChanged event, as shown in the following code:

[assembly: ExportRenderer(typeof(NewCompositeButton), typeof(NewCompositeButtonRenderer))]
namespace WinPhone
{
    class NewCompositeButtonRenderer : ButtonRenderer
    {
        protected override void OnElementChanged(ElementChangedEventArgs<Xamarin.Forms.Button> e)
        {
            base.OnElementChanged(e);

As with the first example, we are dealing with a base class that is a Canvas with a single child. This child needs to be removed from the canvas before it can be manipulated, as follows:

Children.Clear();

Our next problem is that we have both an image and text.

Accessing the image

It is recommended that images are kept either in the Assets directory of the project or in a dedicated Images directory. For my example, my image is in Assets.
To create the image, we need to create a bitmap image, set the source, and finally assign it to an image (for good measure, a small amount of padding is also added), as follows:

var bitmap = new BitmapImage();
bitmap.SetSource(App.GetResourceStream(new Uri(@"Assets/" + CompositeUI.App.filename, UriKind.Relative)).Stream);
var image = new System.Windows.Controls.Image
{
    Source = bitmap,
    Margin = new System.Windows.Thickness(8, 0, 8, 0)
};

Adding the image to the button

We now have a problem. If we add the image directly to the canvas, we can't specify whether it is on the left-hand side or on the right-hand side of the text. Moreover, how do you add the image to the canvas? Yes, you can use the Child property, but this still leads to the issue of position.

Thankfully, Windows Phone provides a StackPanel class. If you think of a stack panel as a set of ladders, you will quickly understand how it works. A ladder can be vertical or horizontal. If it's vertical, each object sits directly before or after the other. If it is horizontal, each object sits to the left-hand side or the right-hand side of the other. With the Orientation property of the StackPanel class, we can create a horizontal or vertical ladder for whatever we need. In the case of the button, we want the panel to be horizontal, as shown in the following code:

var panel = new StackPanel
{
    Orientation = Orientation.Horizontal,
};

Then, we can set the text for the button and any other attributes:

Control.Foreground = new SolidColorBrush(Colors.White);
Control.BorderThickness = new System.Windows.Thickness(0);
Control.Content = CompositeUI.App.text;

Note that there isn't a Text property for the button on Windows Phone; its equivalent is Content. Our next step is to decide which side the image goes on and add it to the panel, as shown in the following code:

if (CompositeUI.App.onTheLeft)
{
    panel.Children.Add(image);
    panel.Children.Add(Control);
}
else
{
    panel.Children.Add(Control);
    panel.Children.Add(image);
}

We can now create the border and add the panel as the child:

var border = new Border
{
    CornerRadius = new System.Windows.CornerRadius(10),
    Background = new SolidColorBrush(System.Windows.Media.Color.FromArgb(255, 130, 186, 132)),
    BorderBrush = new SolidColorBrush(System.Windows.Media.Color.FromArgb(255, 45, 176, 51)),
    BorderThickness = new System.Windows.Thickness(0.8),
    Child = panel
};

Lastly, add the border to the canvas:

Children.Add(border);

We now have a button with both an image and text on it. This rendering technique can also be applied to Lists and anywhere else required. It's not difficult; it's just not as obvious as it really should be.

Summary

Creating styled buttons is certainly work for the platform renderer, but the basics are there in the PCL. The code is not difficult to understand, and once you've used it a few times, you'll find that styling buttons to create attractive user interfaces is not such a big effort. Xamarin.Forms will always help you create your UI, but at the end of the day, it's only you who can make it stand out.

Resources for Article:

Further resources on this subject:
- Configuring Your Operating System [article]
- Heads up to MvvmCross [article]
- Code Sharing Between iOS and Android [article]


The API in Detail

Packt
20 Aug 2015
25 min read
In this article by Hugo Solis, author of the book Kivy Cookbook, we will learn to create a simple app with the help of the App class. We will also learn about asynchronous loading of image data, parsing of data, exception handling, the utils package, and the use of factory objects. The working of audio, video, and the camera in Kivy will be explained in this article. Also, we will learn about text manipulation and the usage of the spellcheck option, and we will see how to add different effects to the cursor.

(For more resources related to this topic, see here.)

Kivy is actually an API for Python, which lets us create cross-platform apps. An application programming interface (API) is a set of routines, protocols, and tools to build software applications. Generally, we call Kivy a framework because it also has procedures and instructions, such as the Kv language, which are not present in Python. Frameworks are environments that come with support programs, compilers, code libraries, tool sets, and APIs. In this article, we want to review the Kivy API reference. We will go through some useful classes of the API. Every time we import a Kivy package, we will be dealing with an API. Even though the usual imports are from kivy.uix, there are more options and classes in the Kivy API. The Kivy developers have created the API reference, which you can refer to online at http://kivy.org/docs/api-kivy.html for exhaustive information.

Getting to know the API

Our starting point is going to be the App class, which is the base used to create Kivy applications. In this recipe, we are going to create a simple app that uses some resources from this class.

Getting ready

It is important to see the role of the App class in the code.

How to do it…

To complete this recipe, we will create a Python file to use the resources present in the App class. Let's follow these steps:

1. Import the kivy package.
2. Import the App package.
3. Import the Widget package.
4. Define the MyW() class.
5. Define the e1App() class instanced as App.
6. Define the build() method and give an icon and a title to the app.
7. Define the on_start() method.
8. Define the on_pause() method.
9. Define the on_resume() method.
10. Define the on_stop() method.
11. End the app with the usual lines.

import kivy
from kivy.app import App
from kivy.uix.widget import Widget

class MyW(Widget):
    pass

class e1App(App):
    def build(self):
        self.title = 'My Title'
        self.icon = 'f0.png'
        return MyW()

    def on_start(self):
        print("Hi")
        return True

    def on_pause(self):
        print("paused")
        return True

    def on_resume(self):
        print("active")
        pass

    def on_stop(self):
        print("Bye!")
        pass

if __name__ == '__main__':
    e1App().run()

How it works…

In the second line, we import the most common kivy package. This is the most used element of the API because it permits us to create applications. The third line is an import from kivy.uix, which could be the second most used element, because the majority of the widgets are there. In the e1App class, we have the usual build() method, where we have the line:

self.title = 'My Title'

We are providing a title to the app. As you may remember, the default title would be e1 because of the class's name, but now we are using the title that we want. We have the next line:

self.icon = 'f0.png'

We are giving the app an icon. The default is the Kivy logo, but with this instruction, we are using the image in the file f0.png. In addition, we have the following method:

def on_start(self):
    print("Hi")
    return True

It is in charge of all the actions performed when the app starts.
In this case, it will print the word Hi in the console. The method is as follows: def on_pause(self): print("paused") return True This is the method that is performed when the app is paused when it is taken off from RAM. This event is very common when the app is running in a mobile device. You should return True if your app can go into pause mode, otherwise return False and your application will be stopped. In this case, we will print the word paused in the console, but it is very important that you save important information in the long-term memory, because there can be errors in the resume of the app and most mobiles don't allow real multitasking and pause apps when switching between them. This method is used with: def on_resume(self): print("active") pass The on_resume method is where we verify and correct any error in the sensible data of the app. In this case, we are only printing the word active in the console. The last method is: def on_stop(self): print("Bye!") pass It is where all the actions are performed before the app closes. Normally, we save data and take statistics in this method, but in this recipe, we just say Bye! in the console. There's more… There is another method, the load_kv method, that you can invoke in the build method, which permits to make our own selection of the KV file to use and not the default one. You only have to add follow line in the build() method: self.load_kv(filename='e2.kv') See also The natural way to go deeper in this recipe is to take a look at the special characteristics that the App has for the multiplatform support that Kivy provides. Using the asynchronous data loader An asynchronous data loader permits to load images even if its data is not available. It has diverse applications, but the most common is to load images from the Internet, because this makes our app always useful even in the absence of Web connectivity. In this recipe, we will generate an app that loads an image from the Internet. Getting ready We did image loading from the Internet. We need an image from the Web, so find it and grab its URL. How to do it… We need only a Python file and the URL in this recipe. To complete the recipe: Import the usual kivy package. Import the Image and Loader packages. Import the Widget package. Define the e2App class. Define the _image_Loaded() method, which loads the image in the app. Define the build() method. In this method, load the image in a proxy image. Define the image variable instanced as Image(). Return the image variable to display the load image: import kivy kivy.require('1.9.0') from kivy.app import App from kivy.uix.image import Image from kivy.loader import Loader class e2App(App): def _image_loaded(self, proxyImage): if proxyImage.image.texture: self.image.texture = proxyImage.image.texture def build(self): proxyImage = Loader.image( 'http://iftucr.org/IFT/ANL_files/artistica.jpg') proxyImage.bind(on_load=self._image_loaded) self.image = Image() return self.image if __name__ == '__main__': e2App().run() How it works… The line that loads the image is: proxyImage = Loader.image( 'http://iftucr.org/IFT/ANL_files/artistica.jpg') We assign the image to the proxyImage variable because we are not sure if the image exists or could be retrieved from the Web. We have the following line: proxyImage.bind(on_load=self._image_loaded) We bind the event on_load to the variable proxyImage. 
The method used is:

def _image_loaded(self, proxyImage):
    if proxyImage.image.texture:
        self.image.texture = proxyImage.image.texture

It verifies whether the image is loaded or not; if not, then it does not change the image. This is why we said that the image will load in an asynchronous way.

There's more…
You can also load an image from a file in the traditional way. We have the following line:

proxyImage = Loader.image('http://iftucr.org/IFT/ANL_files/artistica.jpg')

Replace the preceding line with:

proxyImage = Loader.image('f0.png')

Here, f0.png is the name of the file to load.

Logging objects
The log in any software is useful for many aspects, one of them being exception handling. Kivy is always logging information about its performance. It creates a log file for every run of our app. Every programmer knows how helpful logging is for software engineering. In this recipe, we want to show information about our app in that log.

How to do it…
We will use a Python file with the usual MyW() class, where we will raise an error and display it in the Kivy log. To complete the recipe, follow these steps: Import the usual kivy package. Import the Logger package. Define the MyW() class. Trigger an info log. Trigger a debug log. Perform an exception. Trigger an exception log:

import kivy
kivy.require('1.9.0')

from kivy.app import App
from kivy.uix.widget import Widget
from kivy.logger import Logger

class MyW(Widget):
    Logger.info('MyW: This is an info message.')
    Logger.debug('MyW: This is a debug message.')
    try:
        raise Exception('exception')
    except Exception:
        Logger.exception('Something happened!')

class e3App(App):
    def build(self):
        return MyW()

if __name__ == '__main__':
    e3App().run()

How it works…
In this recipe, we are creating three logs. The first one is in the line:

Logger.info('MyW: This is an info message.')

This is an info log, which is associated with supplementary information. The label MyW is just a convention, but you could use whatever you like. Using the convention, we can track where the log was performed in the code. We will see a log made by that line as:

[INFO ] [MyW ] This is an info message

The next line also performs a log notation:

Logger.debug('MyW: This is a debug message.')

This line will produce a debug log, commonly used to debug the code. Consider the following line:

Logger.exception('Something happened!')

This will perform an error log, which would look like:

[ERROR ] Something happened!

In addition to the three present in this recipe, you can use trace, warning, and critical logging objects.

There's more…
We also have the trace, warning, error, and critical methods in the Logger class that work similarly to the methods described in this recipe. The log file by default is located in the .kivy/logs/ folder of the user running the app, but you can always change it in the Kivy configuration file. Additionally, you can access the last 100 messages for debugging purposes even if the logger is not enabled. This is done with the help of LoggerHistory as follows:

from kivy.logger import LoggerHistory
print(LoggerHistory.history)

So, the console will display the last 100 logs.

See also
More information about logging can be found at http://kivy.org/docs/api-kivy.logger.html.

Parsing
Kivy actually has a parser package that helps with CSS parsing. Even though it is not a complete parser, it helps to parse instructions related to the framework. This recipe will show some parsing functions that you could find useful in your context.
How to do it… The parser package has eight classes, so we will work in Python to review all of them. Let's follow the next steps: Import the parser package. from kivy.parser import * Parse a color from a string. parse_color('#090909') Parse a string to a string. parse_string("(a,1,2)")a Parse a string to a boolean value. parse_bool("0") Parse a string to a list of two integers. parse_int2("12 54") Parse a string to a list of four floats. parse_float4('54 87.13 35 0.9') Parse a file name. parse_filename('e7.py') Finally, we have parse_int and parse_float, which are aliases of int and float, respectively. How it works… In the second step, we parse any of the common ways to define a color (that is, RGB(r, g, b), RGBA(r, g, b, a), aaa, rrggbb, #aaa or #rrggbb) to a Kivy color definition. The third step takes off the single or double quotes of the string. The fourth step takes a string True for 1 and False for 0 and parses it to its respective boolean value. The last step is probably very useful because it permits verification if that file name is a file available to be used. If the file is found, the resource path is returned. See also To use a more general parser, you can use ply package for Python. Visit https://pypi.python.org/pypi/ply for further information. Applying utils There are some methods in Kivy that cannot be arranged in any other class. They are miscellaneous and could be helpful in some contexts. In this recipe, we will see how to use them. How to do it… In the spirit to show all the methods available, let's work directly in Python. To do the package tour, follow these steps: Import the kivy package. from kivy.utils import * Find the intersection between two lists. intersection(('a',1,2), (1,2)) Find the difference between two lists. difference(('a',1,2), (1,2)) Convert a tuple in a string. strtotuple("1,2") Transform a hex string color to a Kivy color. get_color_from_hex('#000000') Transform a Kivy color to a hex value. get_hex_from_color((0, 1, 0)) Get a random color. get_random_color(alpha='random') Evaluate if a color is transparent. is_color_transparent((0,0,0,0)) Limit the value between a minimum value and maximum value. boundary(a,1,2) Interpolate between two values. interpolate(10, 50, step=10) Mark a function as deprecated. deprecated(MyW) Get the platform where the app is running. <p>platform()</p> How it works… Almost every method presented in this recipe has a transparent syntax. Let's get some detail on two of the steps. The ninth step is the boundary method. It evaluates the value of a, and if this is between 1 and 2, it conserves its value; if it is lower than 1, the method returns 1; if it is greater than 2, the method returns 2. The eleventh step is associated with the warning by using the function MyW; when this function is called the first time, the warning will be triggered. See also If you want to explore this package in detail, you can visit http://kivy.org/docs/api-kivy.utils.html. Leveraging the factory object The factory object represents the last step to create our own widgets because the factory can be used to automatically register any class or module and instantiate classes from any place in the app. This is a Kivy implementation of the factory pattern where a factory is an object to create other objects. This also opens a lot of possibilities to create dynamic codes in Kivy. In this recipe, we will register one of our widgets. Getting ready We will use an adaptation of the code to register the widget as a factory object. 
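The actual e7.py file comes from an earlier recipe and is not reproduced here; as a hypothetical minimal sketch, the only assumption the registration code below relies on is that e7.py defines a MyWidget class:

import kivy
kivy.require('1.9.0')

from kivy.uix.widget import Widget

class MyWidget(Widget):
    # A bare widget is enough for Factory.register('MyWidget', module='e7')
    # to find and instantiate this class by name
    pass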
Copy the file in the same location of this recipe with the name e7.py.

How to do it…
In this recipe, we will use one of our simple Python files, where we will register our widget using the factory package. Follow these steps: Import the usual kivy packages. In addition, import the Factory package. Register the MyWidget class. In the build() method of the usual e8App, return Factory.MyWidget():

import kivy
kivy.require('1.9.0')

from kivy.app import App
from kivy.uix.widget import Widget
from kivy.factory import Factory

Factory.register('MyWidget', module='e7')

class MyW(Widget):
    pass

class e8App(App):
    def build(self):
        return Factory.MyWidget()

if __name__ == '__main__':
    e8App().run()

How it works…
Let us note how the magic is done in the following line:

Factory.register('MyWidget', module='e7')

This line creates the factory object named MyWidget and lets us use it as we want. Note that e7 is the name of the file. Actually, this statement will also create a file named e7.pyc, which we can use as a replacement for the file e7.py if we want to distribute our code; it is then not necessary to provide e7.py, since the e7.pyc file alone is enough.

There's more…
This registration is actually permanent, so if you wish to change the registration in the same code, you need to unregister the object. For example, see the following:

Factory.unregister('MyWidget')
Factory.register('MyWidget', cls=CustomWidget)
New_widget = Factory.MyWidget()

See also
If you want to know more about this amazing package, you can visit http://kivy.org/docs/api-kivy.factory.html.

Working with audio
Nowadays, audio integration in our app is vital. You could not imagine a video game without audio, or an app that does not use multimedia. We will create a sample with just one button which, when pressed, plays an audio file.

Getting ready
We need an audio file in this recipe in one of the traditional audio formats (mp3, mp4, wav, wma, b-mtp, ogg, spx, midi). If you do not have any, you can always get one from sites such as https://www.freesound.org.

How to do it…
We will use a simple Python file with just one widget to play the audio file. To complete the recipe, let's follow these steps: Import the usual kivy package. Import the SoundLoader package. Define the MyW() class. Define the __init__() method. Create a button with the label Play. Bind the press action with the press() method. Add the widget to the app. Define the press() method. Call the SoundLoader.load() method for your audio file. Play it with the play() method:

import kivy
kivy.require('1.9.0')

from kivy.app import App
from kivy.uix.widget import Widget
from kivy.uix.button import Button
from kivy.core.audio import SoundLoader

class MyW(Widget):
    def __init__(self, **kwargs):
        super(MyW, self).__init__(**kwargs)
        b1 = Button(text='Play')
        b1.bind(on_press=self.press)
        self.add_widget(b1)

    def press(self, instance):
        sound = SoundLoader.load('owl.wav')
        if sound:
            print("Sound found at %s" % sound.source)
            print("Sound is %.3f seconds" % sound.length)
            sound.play()
            print('playing')

class e4App(App):
    def build(self):
        return MyW()

if __name__ == '__main__':
    e4App().run()

How it works…
In this recipe, the audio file is loaded in the line:

sound = SoundLoader.load('owl.wav')

Here, we use a .wav format. Look at the following line:

if sound:

Here, we check that the file has been loaded correctly. We have the next line:

print("Sound found at %s" % sound.source)

This prints the name of the file that has been loaded in the console. The next line prints the duration of the file in seconds.
See the other line: sound.play() It is where the file is played in the app. There's more… Also, you can use the seek() and stop() methods to navigate to the audio file. Let's say that you want to play the audio after the first minute, you will use: Sound.seek(60) The parameter received by the seek() method must be in seconds. See also If you need more control of the audio, you should visit http://kivy.org/docs/api-kivy.core.audio.html. Working with video The video reproduction is a useful tool for any app. In this app, we will load a widget to reproduce a video file in our app. Getting ready It is necessary to have a video file in the usual format to be reproduced in our app (.avi, .mov, .mpg, .mp4, .flv, .wmv, .ogg). If you do not have one, you can visit https://commons.wikimedia.org/wiki/Main_Page to get free media. How to do it… In this recipe, we are going to use a simple Python file to create our app within a player widget. To complete the task, follow these: Import the usual kivy packages. Import the VideoPlayer package. Define the MyW() class. Define the __init__() method. Define videoplayer with your video. Add the video player to the app. import kivy kivy.require('1.9.0') from kivy.app import App from kivy.uix.widget import Widget from kivy.uix.videoplayer import VideoPlayer class MyW(Widget): def __init__(self, **kwargs): super(MyW, self).__init__(**kwargs) player= VideoPlayer( source='GOR.MOV',state='play', options={'allow_stretch': True}, size=(600,600)) self.add_widget(player) class e5App(App): def build(self): returnMyW() if __name__ == '__main__': e5App().run() How it works… In this recipe, the most important line is: player= VideoPlayer( source='GOR.MOV',state='play', options={'allow_stretch': True}, size=(600,600)) This line loads the file, sets some options, and gives the size to the widget. The option 'allow stretch' let's you modify the image of the video or not. In our recipe, 'allow stretch' is permitted, so the images will be maximized to fit in the widget. There's more… You can also integrate subtitles or annotations to the video in an easy way. You only need a JSON-based file with the same name as the video, in the same location with .jsa extension. For example, let's use this content in the .jsa file: [ {"start": 0, "duration": 2,
 "text": "Here your text"}, {"start": 2, "duration": 2,
"bgcolor": [0.5, 0.2, 0.4, 0.5],
 "text": "You can change the background color"} ] The "start" sentence locates in which second the annotation will show up in the video and the "duration" sentence gives the time in seconds that the annotation will be in the video. See also There are some apps that need more control of the video, so you can visit http://kivy.org/docs/api-kivy.core.video.html for better understanding. Working with a camera It is very common that almost all our personal devices have a camera. So you could find thousands of ways to use a camera signal in your app. In this recipe, we want to create an app that takes control of the camera present in a device. Getting ready Actually, you need to have the correct installation of the packages that permits you to interact with a camera. You can review http://kivy.org/docs/faq.html#gstreamer-compatibility to check if your installation is suitable. How to do it… We are going to use the Python and KV files in this recipe. The KV file will deal with the camera and button to interact with it. The Python code is one of our usual Python files with the definition of the root widget. Let's follow these steps: In the KV file, define the <MyW> rule. In the rule, define BoxLayout with a vertical orientation. Inside the Layout, define the camera widget with play property as false. Also, define the ToggleButton with the press property swifts between play and not play: <MyW>: BoxLayout: orientation: 'vertical' Camera: id: camera play: False ToggleButton: text: 'Play' on_press: camera.play = not camera.play size_hint_y: None height: '48dp' In the Python file, import the usual packages. Define the MyW() class instanced as BoxLayout: import kivy kivy.require('1.9.0') from kivy.app import App from kivy.uix.widget import Widget from kivy.uix.boxlayout import BoxLayout class MyW(BoxLayout): pass class e6App(App): def build(self): return MyW() if __name__ == '__main__': e6App().run() There's more… If we have a device with more than one camera, for example, the handheld device front and rear camera, you can use the index property to switch between them. We have the following line: id: camera Add this line in the KV file: index: 0 The preceding line is to select the first camera, index:1 for the second, and so on. Using spelling Depending on the kind of app that we will develop, we will need to spellcheck text provided by the user. In the Kivy API, there is a package to deal with it. In this recipe, we will give an example of how to do it. Getting ready If you are not using Mac OS X (or OS X as Apple called now), we will need to install the Python package: PyEnchant. For the installation, let's use the pip tool as follows: pip install PyEnchant How to do it… Because this recipe could use it in different contexts, let's work directly in Python. We want to make some suggestions to the word misspelled. To complete the task, follow these steps: Import the Spelling package. from kivy.core.spelling import Spelling Instance the object s as Spelling(). s = Spelling() List the available language. s.list_languages() In this case, select U.S. English. s.select_language('en_US') Ask for a suggestion to the object s. s.suggest('mispell') How it works… The first four steps actually set the kind of suggestion that we want. The fifth step makes the suggestion in line: s.suggest('mispell') The output of the expression is: [u'misspell', u'ispell'] The output is in the order of the used frequency, so misspell is the most probable word that the user wanted to use. 
Adding effects Effects are one of the most important advances in the computer graphics field. The physics engines help create better effects, and they are under continuous improvement. Effects are pleasing to the end user. They change the whole experience. The kinetic effect is the mechanism that Kivy uses to approach this technology. This effect can be used in diverse applications from the movement of a button to the simulation of real graphical environments. In this recipe, we will review how to set the effect to use it in our apps. Getting ready We are going to use some concepts from physics in this recipe, so it's necessary to have the clear basics. You should start reading about this on Wikipedia at http://en.wikipedia.org/wiki/Acceleration. How to do it… As the applications of this effect are as creative as you want, we are going to work directly in Python to set up the effect. Let's follow these steps: Import the KineticEffect package. from kivy.effects.kinetic import KineticEffect Instance the object effect as KineticEffect(). effect = KineticEffect() Start the effect at second 10. effect.start(10) Update the effect at second 15. effect.update(15) Update the effect again at second 30. effect.update(30) You can always add friction to the movement. effect.friction You can also update the velocity. effect.update_velocity(30) Stop the effect at second 48. effect.stop(48) Get the final velocity. effect.velocity() Get the value in seconds. effect.value() How it works… What we are looking for in this recipe is step 9: effect.velocity() The final velocity is how we can use to describe the movement of any object in a realistic way. As the distances are relatively fixed in the app, you need the velocity to describe any motion. We could incrementally repeat the steps to vary the velocity. There's more… There are other three effects based on the Kinetic effect, which are: ScrollEffect: This is the base class used to implement an effect. It only calculates scrolling and overscroll. DampedScrollEffect: This uses the overscroll information to allow the user to drag more than is expected. Once the user stops the drag, the position is returned to one of the bounds. OpacityScrollEffect: This uses the overscroll information to reduce the opacity of the ScrollView widget. When the user stops the drag, the opacity is set back to 1. See also If you want to go deeper in this topic, you should visit: http://kivy.org/docs/api-kivy.effects.html. Advanced text manipulation Text is one of the most commonly used contents used in the apps. The recipe will create an app with a label widget where we will use text rendering to make our Hello World. How to do it… We are going to use one simple Python files that will just show our Hello World text. To complete the recipe: Import the usual kivy packages. Also, import the label package. Define the e9app class instanced as app. Define the method build() to the class. Return the label widget with our Hello World text. import kivy kivy.require('1.9.0') # Code tested in this version! from kivy.app import App from kivy.uix.label import Label class e9App(App): def build(self): return Label(text='Hello [ref=world][color=0000ff]World[/color][/ref]', markup=True, font_size=80, font_name='DroidSans') if __name__ == '__main__': e9App().run() How it works… Here is the line: return Label(text='Hello [ref=world][color=0000ff]World[/color][/ref]', markup=True, font_size=80, font_name='DroidSans') This is the place where the rendering is done. 
Look at the text parameter where the token [ref] permits us to reference that specific part of the text (for example, to detect a click in the word World) the token [color] gives a particular color to that part of the text. The parameter markup=True allows the use of tokens. The parameters font_size and font_name will let you select the size and font to use for the text. There's more… There are others parameter with evident functions that the label widget can receive like: bold=False italic=False halign=left valign=bottom shorten=False text_size=None color=None line_height=1.0 Here, they have been evaluated with their default values. See also If you are interested in creating even more varieties of texts, you can visit http://kivy.org/docs/api-kivy.uix.label.html#kivy.uix.label.Labelor http://kivy.org/docs/api-kivy.core.text.html. Summary In this article we learned many things to change the API of our app. We learned to manage images of asynchronous data, to add different effects and to deal with the text visible on the screen. We used audio, video data and camera to create our app. We understood some concept such as exception handling, use of factory objects and parsing of data. Resources for Article: Further resources on this subject: Subtitles – tracking the video progression[article] Images, colors, and backgrounds[article] Sprites, Camera, Actions![article]

Controls and Widgets

Packt
10 Aug 2015
25 min read
In this article by Chip Lambert and Shreerang Patwardhan, author of the book, Mastering jQuery Mobile, we will take our Civic Center application to the next level and in the process of doing so, we will explore different widgets. We will explore the touch events provided by the jQuery Mobile framework further and then take a look at how this framework interacts with third-party plugins. We will be covering the following different widgets and topics in this article: Collapsible widget Listview widget Range slider widget Radio button widget Touch events Third-party plugins HammerJs FastClick Accessibility (For more resources related to this topic, see here.) Widgets We already made use of widgets as part of the Civic Center application. "Which? Where? When did that happen? What did I miss?" Don't panic as you have missed nothing at all. All the components that we use as part of the jQuery Mobile framework are widgets. The page, buttons, and toolbars are all widgets. So what do we understand about widgets from their usage so far? One thing is pretty evident, widgets are feature-rich and they have a lot of things that are customizable and that can be tweaked as per the requirements of the design. These customizable things are pretty much the methods and events that these small plugins offer to the developers. So all in all: Widgets are feature rich, stateful plugins that have a complete lifecycle, along with methods and events. We will now explore a few widgets as discussed before and we will start off with the collapsible widget. A collapsible widget, more popularly known as the accordion control, is used to display and style a cluster of related content together to be easily accessible to the user. Let's see this collapsible widget in action. Pull up the index.html file. We will be adding the collapsible widget to the facilities page. You can jump directly to the content div of the facilities page. We will replace the simple-looking, unordered list and add the collapsible widget in its place. Add the following code in place of the <ul>...<li></li>...</ul> portion: <div data-role="collapsibleset"> <div data-role="collapsible"> <h3>Banquet Halls</h3> <p>List of banquet halls will go here</p> </div> <div data-role="collapsible"> <h3>Sports Arena</h3> <p>List of sports arenas will go here</p> </div> <div data-role="collapsible">    <h3>Conference Rooms</h3> <p>List of conference rooms will come here</p> </div> <div data-role="collapsible"> <h3>Ballrooms</h3> <p>List of ballrooms will come here</p> </div> </div> That was pretty simple. As you must have noticed, we are creating a group of collapsibles defined by div with data-role="collapsibleset". Inside this div, we have multiple div elements each with data-role of "collapsible". These data roles instruct the framework to style div as a collapsible. Let's break individual collapsibles further. Each collapsible div has to have a heading tag (h1-h6), which acts as the title for that collapsible. This heading can be followed by any HTML structure that is required as per your application's design. In our application, we added a paragraph tag with some dummy text for now. We will soon be replacing this text with another widget—listview. Before we proceed to look at how we will be doing this, let's see what the facilities page is looking like right now: Now let's take a look at another widget that we will include in our project—the listview widget. The listview widget is a very important widget from the mobile website stand point. 
The listview widget is highly customizable and can play an important role in the navigation system of your web application as well. In our application, we will include listview within the collapsible div elements that we have just created. Each collapsible will hold the relevant list items which can be linked to a detailed page for each item. Without further discussion, let's take a look at the following code. We have replaced the contents of the first collapsible list item within the paragraph tag with the code to include the listview widget. We will break up the code and discuss the minute details later: <div data-role="collapsible"> <h3>Banquet Halls</h3> <p> <span>We have 3 huge banquet halls named after 3 most celebrated Chef's from across the world.</span> <ul data-role="listview" data-inset="true"> <li> <a href="#">Gordon Ramsay</a> </li> <li> <a href="#">Anthony Bourdain</a> </li> <li> <a href="#">Sanjeev Kapoor</a> </li> </ul> </p> </div> That was pretty simple, right? We replaced the dummy text from the paragraph tag with a span that has some details concerning what that collapsible list is about, and then we have an unordered list with data-role="listview" and some property called data-inset="true". We have seen several data-roles before, and this one is no different. This data-role attribute informs the framework to style the unordered list, such as a tappable button, while a data-inset property informs the framework to apply the inset appearance to the list items. Without this property, the list items would stretch from edge to edge on the mobile device. Try setting the data-inset property to false or removing the property altogether. You will see the results for yourself. Another thing worth noticing in the preceding code is that we have included an anchor tag within the li tags. This anchor tag informs the framework to add a right arrow icon on the extreme right of that list item. Again, this icon is customizable, along with its position and other styling attributes. Right now, our facilities page should appear as seen in the following image: We will now add similar listview widgets within the remaining three collapsible items. The content for the next collapsible item titled Sports Arena should be as follows. Once added, this collapsible item, when expanded, should look as seen in the screenshot that follows the code: <div data-role="collapsible">    <h3>Sports Arena</h3>    <p>        <span>We have 3 huge sport arenas named after 3 most celebrated sport personalities from across the world.       </span>        <ul data-role="listview" data-inset="true">            <li>                <a href="#">Sachin Tendulkar</a>            </li>            <li>                <a href="#">Roger Federer</a>            </li>            <li>                <a href="#">Usain Bolt</a>            </li>        </ul>    </p> </div> The code for the listview widgets that should be included in the next collapsible item titled Conference Rooms. Once added, this collapsible, item when expanded, should look as seen in the image that follows the code: <div data-role="collapsible">    <h3>Conference Rooms</h3>    <p>        <span>            We have 3 huge conference rooms named after 3 largest technology companies.        
</span>        <ul data-role="listview" data-inset="true">            <li>                <a href="#">Google</a>            </li>            <li>                <a href="#">Twitter</a>            </li>            <li>                <a href="#">Facebook</a>            </li>        </ul>    </p> </div> The final collapsible list item – Ballrooms – should hold the following code, to include its share of the listview items: <div data-role="collapsible">    <h3>Ballrooms</h3>    <p>        <span>            We have 3 huge ball rooms named after 3 different dance styles from across the world.        </span>        <ul data-role="listview" data-inset="true">            <li>                <a href="#">Ballet</a>            </li>            <li>                <a href="#">Kathak</a>            </li>            <li>                <a href="#">Paso Doble</a>            </li>        </ul>    </p> </div> After adding these listview items, our facilities page should look as seen in the following image: The facilities page now looks much better than it did earlier, and we now understand a couple more very important widgets available in jQuery Mobile—the collapsible widget and the listview Widget. We will now explore two form widgets – slider widget and the radio buttons widget. For this, we will be enhancing our catering page. Let's build a simple tool that will help the visitors of this site estimate the food expense based on the number of guests and the type of cuisine that they choose. Let's get started then. First, we will add the required HTML, to include the slider widget and the radio buttons widget. Scroll down to the content div of the catering page, where we have the paragraph tag containing some text about the Civic Center's catering services. Add the following code after the paragraph tag: <form>    <label style="font-weight: bold; padding: 15px 0px;" for="slider">Number of guests</label>    <input type="range" name="slider" id="slider" data-highlight="true" min="50" max="1000" value="50">    <fieldset data-role="controlgroup" id="cuisine-choices">        <legend style="font-weight: bold; padding: 15px 0px;">Choose your cuisine</legend>        <input type="radio" name="cuisine-choice" id="cuisine-choice-cont" value="15" checked="checked" />        <label for="cuisine-choice-cont">Continental</label>        <input type="radio" name="cuisine-choice" id="cuisine-choice-mex" value="12" />        <label for="cuisine-choice-mex">Mexican</label>        <input type="radio" name="cuisine-choice" id="cuisine-choice-ind" value="14" />        <label for="cuisine-choice-ind">Indian</label>    </fieldset>    <p>        The approximate cost will be: <span style="font-weight: bold;" id="totalCost"></span>    </p> </form> That is not much code, but we are adding and initializing two new form widgets here. Let's take a look at the code in detail: <label style="font-weight: bold; padding: 15px 0px;" for="slider">Number of guests</label> <input type="range" name="slider" id="slider" data-highlight="true" min="50" max="1000" value="50"> We are initializing our first form widget here—the slider widget. The slider widget is an input element of the type range, which accepts a minimum value and maximum value and a default value. We will be using this slider to accept the number of guests. Since the Civic Center can cater to a maximum of 1,000 people, we will set the maximum limit to 1,000 and we expect that we have at least 50 guests, so we set a minimum value of 50. 
Since the minimum number of guests that we cater for is 50, we set the input's default value to 50. We also set the data-highlight attribute value to true, which informs the framework that the selected area on the slider should be highlighted. Next comes the group of radio buttons. The most important attribute to be considered here is the data-role="controlgroup" set on the fieldset element. Adding this data-role combines the radio buttons into one single group, which helps inform the user that one of the radio buttons is to be selected. This gives a visual indication to the user that one radio button out of the whole lot needs to be selected. The values assigned to each of the radio inputs here indicate the cost per person for that particular cuisine. This value will help us calculate the final dollar value for the number of selected guests and the type of cuisine. Whenever you are using the form widgets, make sure you have the form elements in the hierarchy as required by the jQuery Mobile framework. When the elements are in the required hierarchy, the framework can apply the required styles. At the end of the previous code snippet, we have a paragraph tag where we will populate the approximate cost of catering for the selected number of guests and the type of cuisine selected. The catering page should now look as seen in the following image. Right now, we only have the HTML widgets in place. When you drag the slider or select different radio buttons, you will only see the UI interactions of these widgets and the UI treatments that the framework applies to these widgets. However, the total cost will not be populated yet. We will need to write some JavaScript logic to determine this value, and we will take a look at this in a minute. Before moving to the JavaScript part, make sure you have all the code that is needed: Now let's take a look at the magic part of the code (read JavaScript) that is going to make our widgets usable for the visitors of this Civic Center web application. Add the following JavaScript code in the script tag at the very end of our index.html file: $(document).on('pagecontainershow', function(){    var guests = 50;    var cost = 35;    var totalCost;    $("#slider").on("slidestop", function(event, ui){        guests = $('#slider').val();        totalCost = costCal();        $("#totalCost").text("$" + totalCost);    });    $("input:radio[name=cuisine-choice]").on("click", function() {        cost = $(this).val();        var totalCost = costCal();        $("#totalCost").text("$" + totalCost);    });    function costCal(){        return guests * cost;    } }); That is a pretty small chunk of code and pretty simple too. We will be looking at a few very important events that are part of the framework and that come in very handy when developing web applications with jQuery Mobile. One of the most important things that you must have already noticed is that we are not making use of the customary $(document).on('ready', function(){ in Jquery, but something that looks as the following code: $(document).on('pagecontainershow', function(){ The million dollar question here is "why doesn't DOM already work in jQuery Mobile?" As part of jQuery, the first thing that we often learn to do is execute our jQuery code as soon as the DOM is ready, and this is identified using the $(document).ready function. 
In jQuery Mobile, pages are requested and injected into the same DOM as the user navigates from one page to another and so the DOM ready event is as useful as it executes only for the first page. Now we need an event that should execute when every page loads, and $(document).pagecontainershow is the one. The pagecontainershow element is triggered on the toPage after the transition animation has completed. The pagecontainershow element is triggered on the pagecontainer element and not on the actual page. In the function, we initialize the guests and the cost variables to 50 and 35 respectively, as the minimum number of guests we can have is 50 and the "Continental" cuisine is selected by default, which has a value of 35. We will be calculating the estimated cost when the user changes the number of guests or selects a different radio button. This brings us to the next part of our code. We need to get the value of the number of guests as soon as the user stops sliding the slider. jQuery Mobile provides us with the slidestop event for this very purpose. As soon as the user stops sliding, we get the value of the slider and then call the costCal function, which returns a value that is the number of guests multiplied by the cost of the selected cuisine per person. We then display this value in the paragraph at the bottom for the user to get an estimated cost. We will discuss some more about the touch events that are available as part of the jQuery Mobile framework in the next section. When the user selects a different radio button, we retrieve the value of the selected radio button, call the costCal function again, and update the value displayed in the paragraph at the bottom of our page. If you have the code correct and your functions are all working fine, you should see something similar to the following image: Input with touch We will take a look at a couple of touch events, which are tap and taphold. The tap event is triggered after a quick touch; whereas the taphold event is triggered after a sustained, long press touch. The jQuery Mobile tap event is the gesture equivalent of the standard click event that is triggered on the release of the touch gesture. The following snippet of code should help you incorporate the tap event when you need to use it in your application: $(".selector").on("tap", function(){    console.log("tap event is triggered"); }); The jQuery Mobile taphold event triggers after a sustained, complete touch event, which is more commonly known as the long press event. The taphold event fires when the user taps and holds for a minimum of 750 milliseconds. You can also change the default value, but we will come to that in a minute. First, let's see how the taphold event is used: $(".selector").on("taphold", function(){    console.log("taphold event is triggered"); }); Now to change the default value for the long press event, we need to set the value for the following piece of code: $.event.special.tap.tapholdThreshold Working with plugins A number of times, we will come across scenarios where the capabilities of the framework are just not sufficient for all the requirements of your project. In such scenarios, we have to make use of third-party plugins in our project. We will be looking at two very interesting plugins in the course of this article, but before that, you need to understand what jQuery plugins exactly are. A jQuery plugin is simply a new method that has been used to extend jQuery's prototype object. 
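As a rough, generic sketch (this is not a plugin from the article), a minimal plugin that extends jQuery's prototype object could look like this:

// Define a tiny, hypothetical "highlight" plugin on jQuery's prototype ($.fn)
(function ($) {
    $.fn.highlight = function (color) {
        // "this" is the jQuery collection the plugin was called on;
        // returning it keeps the plugin chainable
        return this.css('background-color', color || 'yellow');
    };
}(jQuery));

// Once the plugin file is included, the new method is available anywhere:
$('.selector').highlight('#ffd965');

Returning this keeps the plugin chainable, which is the usual convention, so it can be combined with other jQuery calls.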
When we include the jQuery plugin as part of our code, this new method becomes available for use within your application. When selecting jQuery plugins for your jQuery Mobile web application, make sure that the plugin is optimized for mobile devices and incorporates touch events as well, based on your requirements. The first plugin that we are going to look at today is called FastClick and is developed by FT Labs. This is an open source plugin and so can be used as part of your application. FastClick is a simple, easy-to-use library designed to eliminate the 300 ms delay between a physical tap and the firing on the click event on mobile browsers. Wait! What are we talking about? What is this 300 ms delay between tap and click? What exactly are we discussing? Sure. We understand the confusion. Let's explain this 300 ms delay issue. The click events have a 300 ms delay on touch devices, which makes web applications feel laggy on a mobile device and doesn't give users a native-like feel. If you go to a site that isn't mobile-optimized, it starts zoomed out. You have to then either pinch and zoom or double tap some content so that it becomes readable. The double-tap is a performance killer, because with every tap we have to wait to see whether it might be a double tap—and this wait is 300 ms. Here is how it plays out: touchstart touchend Wait 300ms in case of another tap click This pause of 300 ms applies to click events in JavaScript, but also other click-based interactions such as links and form controls. Most mobile web browsers out there have this 300 ms delay on the click events, but now a few modern browsers such as Chrome and FireFox for Android and iOS are removing this 300 ms delay. However, if you are supporting the older Android and iOS versions, with older mobile browsers, you might want to consider including the FastClick plugin in your application, which helps resolve this problem. Let's take a look at how we can use this plugin in any web application. First, you need to download the plugin files, or clone their GitHub repository here: https://github.com/ftlabs/fastclick. Once you have done that, include a reference to the plugin's JavaScript file in your application: <script type="application/javascript" src="path/fastclick.js"></script> Make sure that the script is loaded prior to instantiating FastClick on any element of the page. FastClick recommends you to instantiate the plugin on the body element itself. We can do this using the following piece of code: $(function){    FastClick.attach(document.body); } That is it! Your application is now free of the 300 ms click delay issue and will work as smooth as a native application. We have just provided you with an introduction to the FastClick plugin. There are several more features that this plugin provides. Make sure you visit their website—https://github.com/ftlabs/fastclick—for more details on what the plugin has to offer. Another important plugin that we will look at is HammerJs. HammerJs, again is an open source library that helps recognize gestures made by touch, mouse, and pointerEvents. Now, you would say that the jQuery Mobile framework already takes care of this, so why do we need a third-party plugin again? True, jQuery Mobile supports a variety of touch events such as tap, tap and hold, and swipe, as well as the regular mouse events, but what if in our application we want to make use of some touch gestures such as pan, pinch, rotate, and so on, which are not supported by jQuery Mobile by default? 
This is where HammerJs comes into the picture and plays nicely along with jQuery Mobile. Including HammerJS in your web application code is extremely simple and straightforward, like the FastClick plugin. You need to download the plugin files and then add a reference to the plugin JavaScript file: <script type="application/javascript" src="path/hammer.js"></script> Once you have included the plugin, you need to create a new instance on the Hammer object and then start using the plugin for all the touch gestures you need to support: var hammerPan = new Hammer(element_name, options); hammerPan.on('pan', function(){    console.log("Inside Pan event"); }); By default, Hammer adds a set of events—tap, double tap, swipe, pan, press, pinch, and rotate. The pinch and rotate recognizers are disabled by default, but can be turned on as and when required. HammerJS offers a lot of features that you might want to explore. Make sure you visit their website—http://hammerjs.github.io/ to understand the different features the library has to offer and how you can integrate this plugin within your existing or new jQuery Mobile projects. Accessibility Most of us today cannot imagine our lives without the Internet and our smartphones. Some will even argue that the Internet is the single largest revolutionary invention of all time that has touched numerous lives across the globe. Now, at the click of a mouse or the touch of your fingertip, the world is now at your disposal, provided you can use the mouse, see the screen, and hear the audio—impairments might make it difficult for people to access the Internet. This makes us wonder about how people with disabilities would use the Internet, their frustration in doing so, and the efforts that must be taken to make websites accessible to all. Though estimates vary on this, most studies have revealed that about 15% of the world's population have some kind of disability. Not all of these people would have an issue with accessing the web, but let's assume 5% of these people would face a problem in accessing the web. This 5% is also a considerable amount of users, which cannot be ignored by businesses on the web, and efforts must be taken in the right direction to make the web accessible to these users with disabilities. jQuery Mobile framework comes with built-in support for accessibility. jQuery Mobile is built with accessibility and universal access in mind. Any application that is built using jQuery Mobile is accessible via the screen reader as well. When you make use of the different jQuery Mobile widgets in your application, unknowingly you are also adding support for web accessibility into your application. jQuery Mobile framework adds all the necessary aria attributes to the elements in the DOM. Let's take a look at how the DOM looks for our facilities page: Look at the highlighted Events button in the top right corner and its corresponding HTML (also highlighted) in the developer tools. You will notice that there are a few attributes added to the anchor tag that start with aria-. We did not add any of these aria- attributes when we wrote the code for the Events button. jQuery Mobile library takes care of these things for you. The accessibility implementation is an ongoing process and the awesome developers at jQuery Mobile are working towards improving the support every new release. We spoke about aria- attributes, but what do they really represent? WAI - ARIA stands for Web Accessibility Initiative – Accessible Rich Internet Applications. 
This was a technical specification published by the World Wide Web Consortium (W3C) and basically specifies how to increase the accessibility of web pages. ARIA specifies the roles, properties, and states of a web page that make it accessible to all users. Accessibility is extremely vast, hence covering every detail of it is not possible. However, there is excellent material available on the Internet on this topic and we encourage you to read and understand this. Try to implement accessibility into your current or next project even if it is not based on jQuery Mobile. Web accessibility is an extremely important thing that should be considered, especially when you are building web applications that will be consumed by a huge consumer base—on e-commerce websites for example. Summary In this article, we made use of some of the available widgets from the jQuery Mobile framework and we built some interactivity into our existing Civic Center application. The widgets that we used included the range slider, the collapsible widget, the listview widget, and the radio button widget. We evaluated and looked at how to use two different third-party plugins—FastClick and HammerJs. We concluded the article by taking a look at the concept of Web Accessibility. Resources for Article: Further resources on this subject: Creating Mobile Dashboards [article] Speeding up Gradle builds for Android [article] Saying Hello to Unity and Android [article]

The Camera API

Packt
07 Aug 2015
4 min read
In this article by Purusothaman Ramanujam, the author of PhoneGap Beginner's Guide Third Edition, we will look at the Camera API. The Camera API provides access to the device's camera application using the Camera plugin identified by the cordova-plugin-camera key. With this plugin installed, an app can take a picture or gain access to a media file stored in the photo library and albums that the user created on the device. The Camera API exposes the following two methods defined in the navigator.camera object: getPicture: This opens the default camera application or allows the user to browse the media library, depending on the options specified in the configuration object that the method accepts as an argument cleanup: This cleans up any intermediate photo file available in the temporary storage location (supported only on iOS) (For more resources related to this topic, see here.) As arguments, the getPicture method accepts a success handler, failure handler, and optionally an object used to specify several camera options through its properties as follows: quality: This is a number between 0 and 100 used to specify the quality of the saved image. destinationType: This is a number used to define the format of the value returned in the success handler. The possible values are stored in the following Camera.DestinationType pseudo constants: DATA_URL(0): This indicates that the getPicture method will return the image as a Base64-encoded string FILE_URI(1): This indicates that the method will return the file URI NATIVE_URI(2): This indicates that the method will return a platform-dependent file URI (for example, assets-library:// on iOS or content:// on Android) sourceType: This is a number used to specify where the getPicture method can access an image. The following possible values are stored in the Camera.PictureSourceType pseudo constants: PHOTOLIBRARY (0), CAMERA (1), and SAVEDPHOTOALBUM (2): PHOTOLIBRARY: This indicates that the method will get an image from the device's library CAMERA: This indicates that the method will grab a picture from the camera SAVEDPHOTOALBUM: This indicates that the user will be prompted to select an album before picking an image allowEdit: This is a Boolean value (the value is true by default) used to indicate that the user can make small edits to the image before confirming the selection; it works only in iOS. encodingType: This is a number used to specify the encoding of the returned file. The possible values are stored in the Camera.EncodingType pseudo constants: JPEG (0) and PNG (1). targetWidth and targetHeight: These are the width and height in pixels, to which you want the captured image to be scaled; it's possible to specify only one of the two options. When both are specified, the image will be scaled to the value that results in the smallest aspect ratio (the aspect ratio of an image describes the proportional relationship between its width and height). mediaType: This is a number used to specify what kind of media files have to be returned when the getPicture method is called using the Camera.PictureSourceType.PHOTOLIBRARY or Camera.PictureSourceType.SAVEDPHOTOALBUM pseudo constants as sourceType; the possible values are stored in the Camera.MediaType object as pseudo constants and are PICTURE (0), VIDEO (1), and ALLMEDIA (2). correctOrientation: This is a Boolean value that forces the device camera to correct the device orientation during the capture. 
cameraDirection: This is a number used to specify which device camera has to be used during the capture. The values are stored in the Camera.Direction object as pseudo constants and are BACK (0) and FRONT (1). popoverOptions: This is an object supported on iOS to specify the anchor element location and arrow direction of the popover used on iPad when selecting images from the library or album. saveToPhotoAlbum: This is a Boolean value (the value is false by default) used to save the captured image in the device's default photo album. The success handler receives an argument that contains either the URI to the file or the image data as a Base64-encoded string, depending on the value stored in the destinationType property of the options object. The failure handler receives a string containing the device's native code error message as an argument. Similarly, the cleanup method accepts a success handler and a failure handler. The only difference between the two is that the success handler doesn't receive any argument. The cleanup method is supported only on iOS and can be used when the sourceType property value is Camera.PictureSourceType.CAMERA and the destinationType property value is Camera.DestinationType.FILE_URI. Summary In this article, we looked at the various properties available with the Camera API. Resources for Article: Further resources on this subject: Geolocation – using PhoneGap features to improve an app's functionality, write once use everywhere [article] Using Location Data with PhoneGap [article] iPhone JavaScript: Installing Frameworks [article]
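To tie the options described in this article together, here is a minimal usage sketch; the option values and handler names are illustrative assumptions rather than code from the article:

// Wait for the deviceready event so the Camera plugin is initialized
document.addEventListener('deviceready', function () {
    navigator.camera.getPicture(onSuccess, onFailure, {
        quality: 75,
        destinationType: Camera.DestinationType.FILE_URI,
        sourceType: Camera.PictureSourceType.CAMERA,
        encodingType: Camera.EncodingType.JPEG,
        targetWidth: 800,
        targetHeight: 600,
        correctOrientation: true,
        saveToPhotoAlbum: false
    });

    function onSuccess(fileURI) {
        // With FILE_URI as the destinationType, the handler receives a file URI
        console.log('Picture stored at: ' + fileURI);
    }

    function onFailure(message) {
        // The failure handler receives the native error message as a string
        console.log('Camera failed: ' + message);
    }
}, false);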

Directives and Services of Ionic

Packt
28 Jul 2015
18 min read
In this article by Arvind Ravulavaru, author of Learning Ionic, we are going to take a look at Ionic directives and services, which provides reusable components and functionality that can help us in developing applications even faster. (For more resources related to this topic, see here.) Ionic Platform service The first service we are going to deal with is the Ionic Platform service ($ionicPlatform). This service provides device-level hooks that you can tap into to better control your application behavior. We will start off with the very basic ready method. This method is fired once the device is ready or immediately, if the device is already ready. All the Cordova-related code needs to be written inside the $ionicPlatform.ready method, as this is the point in the app life cycle where all the plugins are initialized and ready to be used. To try out Ionic Platform services, we will be scaffolding a blank app and then working with the services. Before we scaffold the blank app, we will create a folder named chapter5. Inside that folder, we will run the following command: ionic start -a "Example 16" -i app.example.sixteen example16 blank Once the app is scaffolded, if you open www/js/app.js, you should find a section such as: .run(function($ionicPlatform) {   $ionicPlatform.ready(function() {     // Hide the accessory bar by default (remove this to show the accessory bar above the keyboard     // for form inputs)     if(window.cordova && window.cordova.plugins.Keyboard) {       cordova.plugins.Keyboard.hideKeyboardAccessoryBar(true);     }     if(window.StatusBar) {       StatusBar.styleDefault();     }   }); }) You can see that the $ionicPlatform service is injected as a dependency to the run method. It is highly recommended to use $ionicPlatform.ready method inside other AngularJS components such as controllers and directives, where you are planning to interact with Cordova plugins. In the preceding run method, note that we are hiding the keyboard accessory bar by setting: cordova.plugins.Keyboard.hideKeyboardAccessoryBar(true); You can override this by setting the value to false. Also, do notice the if condition before the statement. It is always better to check for variables related to Cordova before using them. The $ionicPlatform service comes with a handy method to detect the hardware back button event. A few (Android) devices have a hardware back button and, if you want to listen to the back button pressed event, you will need to hook into the onHardwareBackButton method on the $ionicPlatform service: var hardwareBackButtonHandler = function() {   console.log('Hardware back button pressed');   // do more interesting things here }         $ionicPlatform.onHardwareBackButton(hardwareBackButtonHandler); This event needs to be registered inside the $ionicPlatform.ready method preferably inside AngularJS's run method. The hardwareBackButtonHandler callback will be called whenever the user presses the device back button. A simple functionally that you can do with this handler is to ask the user if they want to really quit your app, making sure that they have not accidently hit the back button. Sometimes this may be annoying. Thus, you can provide a setting in your app whereby the user selects if he/she wants to be alerted when they try to quit. Based on that, you can either defer registering the event or you can unsubscribe to it. 
The code for the preceding logic will look something like this: .run(function($ionicPlatform) {     $ionicPlatform.ready(function() {         var alertOnBackPress = localStorage.getItem('alertOnBackPress');           var hardwareBackButtonHandler = function() {             console.log('Hardware back button pressed');             // do more interesting things here         }           function manageBackPressEvent(alertOnBackPress) {             if (alertOnBackPress) {                 $ionicPlatform.onHardwareBackButton(hardwareBackButtonHandler);             } else {                 $ionicPlatform.offHardwareBackButton(hardwareBackButtonHandler);             }         }           // when the app boots up         manageBackPressEvent(alertOnBackPress);           // later in the code/controller when you let         // the user update the setting         function updateSettings(alertOnBackPressModified) {             localStorage.setItem('alertOnBackPress', alertOnBackPressModified);             manageBackPressEvent(alertOnBackPressModified)         }       }); }) In the preceding code snippet, we are looking in localStorage for the value of alertOnBackPress. Next, we create a handler named hardwareBackButtonHandler, which will be triggered when the back button is pressed. Finally, a utility method named manageBackPressEvent() takes in a Boolean value that decides whether to register or de-register the callback for HardwareBackButton. With this set up, when the app starts we call the manageBackPressEvent method with the value from localStorage. If the value is present and is equal to true, we register the event; otherwise, we do not. Later on, we can have a settings controller that lets users change this setting. When the user changes the state of alertOnBackPress, we call the updateSettings method passing in if the user wants to be alerted or not. The updateSettings method updates localStorage with this setting and calls the manageBackPressEvent method, which takes care of registering or de-registering the callback for the hardware back pressed event. This is one powerful example that showcases the power of AngularJS when combined with Cordova to provide APIs to manage your application easily. This example may seem a bit complex at first, but most of the services that you are going to consume will be quite similar. There will be events that you need to register and de-register conditionally, based on preferences. So, I thought this would be a good place to share an example such as this, assuming that this concept will grow on you. registerBackButtonAction The $ionicPlatform also provides a method named registerBackButtonAction. This is another API that lets you control the way your application behaves when the back button is pressed. By default, pressing the back button executes one task. For example, if you have a multi-page application and you are navigating from page one to page two and then you press the back button, you will be taken back to page one. In another scenario, when a user navigates from page one to page two and page two displays a pop-up dialog when it loads, pressing the back button here will only hide the pop-up dialog but will not navigate to page one. The registerBackButtonAction method provides a hook to override this behavior. 
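As a quick illustration, a minimal sketch of such an override might look like the following (the handler body and the priority value of 101 are placeholder assumptions; the arguments and the default priority values are explained below):

$ionicPlatform.ready(function() {
    var deregisterBackAction = $ionicPlatform.registerBackButtonAction(function() {
        // custom behaviour instead of the default back navigation
        console.log('Back button intercepted');
    }, 101);

    // call deregisterBackAction() later to restore the default behaviour
});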
The registerBackButtonAction method takes the following three arguments: callback: This is the method to be called when the event is fired priority: This is the number that indicates the priority of the listener actionId (optional): This is the ID assigned to the action By default the priority is as follows: Previous view = 100 Close side menu = 150 Dismiss modal = 200 Close action sheet = 300 Dismiss popup = 400 Dismiss loading overlay = 500 So, if you want a certain functionality/custom code to override the default behavior of the back button, you will be writing something like this: var cancelRegisterBackButtonAction = $ionicPlatform.registerBackButtonAction(backButtonCustomHandler, 201); This listener will override (take precedence over) all the default listeners below the priority value of 201—that is dismiss modal, close side menu, and previous view but not above the priority value. When the $ionicPlatform.registerBackButtonAction method executes, it returns a function. We have assigned that function to the cancelRegisterBackButtonAction variable. Executing cancelRegisterBackButtonAction de-registers the registerBackButtonAction listener. The on method Apart from the preceding handy methods, $ionicPlatform has a generic on method that can be used to listen to all of Cordova's events (https://cordova.apache.org/docs/en/edge/cordova_events_events.md.html). You can set up hooks for application pause, application resume, volumedownbutton, volumeupbutton, and so on, and execute a custom functionality accordingly. You can set up these listeners inside the $ionicPlatform.ready method as follows: var cancelPause = $ionicPlatform.on('pause', function() {             console.log('App is sent to background');             // do stuff to save power         });   var cancelResume = $ionicPlatform.on('resume', function() {             console.log('App is retrieved from background');             // re-init the app         });           // Supported only in BlackBerry 10 & Android var cancelVolumeUpButton = $ionicPlatform.on('volumeupbutton', function() {             console.log('Volume up button pressed');             // moving a slider up         });   var cancelVolumeDownButton = $ionicPlatform.on('volumedownbutton', function() {             console.log('Volume down button pressed');             // moving a slider down         }); The on method returns a function that, when executed, de-registers the event. Now you know how to control your app better when dealing with mobile OS events and hardware keys. Content Next, we will take a look at content-related directives. The first is the ion-content directive. Navigation The next component we are going to take a look at is the navigation component. The navigation component has a bunch of directives as well as a couple of services. The first directive we are going to take a look at is ion-nav-view. When the app boots up, $stateProvider will look for the default state and then will try to load the corresponding template inside the ion-nav-view. Tabs and side menu To understand navigation a bit better, we will explore the tabs directive and the side menu directive. We will scaffold the tabs template and go through the directives related to tabs; run this: ionic start -a "Example 19" -i app.example.nineteen example19 tabs Using the cd command, go to the example19 folder and run this:   ionic serve This will launch the tabs app. 
If you open www/index.html file, you will notice that this template uses ion-nav-bar to manage the header with ion-nav-back-button inside it. Next open www/js/app.js and you will find the application states configured: .state('tab.dash', {     url: '/dash',     views: {       'tab-dash': {         templateUrl: 'templates/tab-dash.html',         controller: 'DashCtrl'       }     }   }) Do notice that the views object has a named object: tab-dash. This will be used when we work with the tabs directive. This name will be used to load a given view when a tab is selected into the ion-nav-view directive with the name tab-dash. If you open www/templates/tabs.html, you will find a markup for the tabs component: <ion-tabs class="tabs-icon-top tabs-color-active-positive">     <!-- Dashboard Tab -->   <ion-tab title="Status" icon-off="ion-ios-pulse" icon-on="ion- ios-pulse-strong" href="#/tab/dash">     <ion-nav-view name="tab-dash"></ion-nav-view>   </ion-tab>     <!-- Chats Tab -->   <ion-tab title="Chats" icon-off="ion-ios-chatboxes-outline" icon-on="ion-ios-chatboxes" href="#/tab/chats">     <ion-nav-view name="tab-chats"></ion-nav-view>   </ion-tab>     <!-- Account Tab -->   <ion-tab title="Account" icon-off="ion-ios-gear-outline" icon- on="ion-ios-gear" href="#/tab/account">     <ion-nav-view name="tab-account"></ion-nav-view>   </ion-tab>   </ion-tabs> The tabs.html will be loaded before any of the child tabs load, since tab state is defined as an abstract route. The ion-tab directive is nested inside ion-tabs and every ion-tab directive has an ion-nav-view directive nested inside it. When a tab is selected, the route with the same name as the name attribute on the ion-nav-view will be loaded inside the corresponding tab. Very neatly structured! You can read more about tabs directive and its services at http://ionicframework.com/docs/nightly/api/directive/ionTabs/. Next, we are going to scaffold an app using the side menu template and go through the navigation inside it; run this: ionic start -a "Example 20" -i app.example.twenty example20 sidemenu Using the cd command, go to the example20 folder and run this:   ionic serve This will launch the side menu app. We start off exploring with www/index.html. This file has only the ion-nav-view directive inside the body. Next, we open www/js/app/js. Here, the routes are defined as expected. But one thing to notice is the name of the views for search, browse, and playlists. It is the same—menuContent—for all: .state('app.search', {     url: "/search",     views: {       'menuContent': {         templateUrl: "templates/search.html"       }     } }) If we open www/templates/menu.html, you will notice ion-side-menus directive. It has two children ion-side-menu-content and ion-side-menu. The ion-side-menu-content displays the content for each menu item inside the ion-nav-view named menuContent. This is why all the menu items in the state router have the same view name. The ion-side-menu is displayed on the left-hand side of the page. You can set the location on the ion-side-menu to the right to show the side menu on the right or you can have two side menus. Do notice the menu-toggle directive on the button inside ion-nav-buttons. This directive is used to toggle the side menu. 
If you want to have the menu on both sides, your menu.html will look as follows: <ion-side-menus enable-menu-with-back-views="false">   <ion-side-menu-content>     <ion-nav-bar class="bar-stable">       <ion-nav-back-button>       </ion-nav-back-button>         <ion-nav-buttons side="left">         <button class="button button-icon button-clear ion- navicon" menu-toggle="left">         </button>       </ion-nav-buttons>       <ion-nav-buttons side="right">         <button class="button button-icon button-clear ion- navicon" menu-toggle="right">         </button>       </ion-nav-buttons>     </ion-nav-bar>     <ion-nav-view name="menuContent"></ion-nav-view>   </ion-side-menu-content>     <ion-side-menu side="left">     <ion-header-bar class="bar-stable">       <h1 class="title">Left</h1>     </ion-header-bar>     <ion-content>       <ion-list>         <ion-item menu-close ng-click="login()">           Login         </ion-item>         <ion-item menu-close href="#/app/search">           Search         </ion-item>         <ion-item menu-close href="#/app/browse">           Browse         </ion-item>         <ion-item menu-close href="#/app/playlists">           Playlists         </ion-item>       </ion-list>     </ion-content>   </ion-side-menu>   <ion-side-menu side="right">     <ion-header-bar class="bar-stable">       <h1 class="title">Right</h1>     </ion-header-bar>     <ion-content>       <ion-list>         <ion-item menu-close ng-click="login()">           Login         </ion-item>         <ion-item menu-close href="#/app/search">           Search         </ion-item>         <ion-item menu-close href="#/app/browse">           Browse         </ion-item>         <ion-item menu-close href="#/app/playlists">           Playlists         </ion-item>       </ion-list>     </ion-content>   </ion-side-menu> </ion-side-menus> You can read more about side menu directive and its services at http://ionicframework.com/docs/nightly/api/directive/ionSideMenus/. This concludes our journey through the navigation directives and services. Next, we will move to Ionic loading. Ionic loading The first service we are going to take a look at is $ionicLoading. This service is highly useful when you want to block a user's interaction from the main page and indicate to the user that there is some activity going on in the background. To test this, we will scaffold a new blank template and implement $ionicLoading; run this: ionic start -a "Example 21" -i app.example.twentyone example21 blank Using the cd command, go to the example21 folder and run this:   ionic serve This will launch the blank template in the browser. We will create an app controller and define the show and hide methods inside it. Open www/js/app.js and add the following code: .controller('AppCtrl', function($scope, $ionicLoading, $timeout) {       $scope.showLoadingOverlay = function() {         $ionicLoading.show({             template: 'Loading...'         });     };     $scope.hideLoadingOverlay = function() {         $ionicLoading.hide();     };       $scope.toggleOverlay = function() {         $scope.showLoadingOverlay();           // wait for 3 seconds and hide the overlay         $timeout(function() {             $scope.hideLoadingOverlay();         }, 3000);     };   }) We have a function named showLoadingOverlay, which will call $ionicLoading.show(), and a function named hideLoadingOverlay(), which will call $ionicLoading.hide(). 
We have also created a utility function named toggleOverlay(), which will call showLoadingOverlay() and after 3 seconds will call hideLoadingOverlay(). We will update our www/index.html body section as follows: <body ng-app="starter" ng-controller="AppCtrl">     <ion-header-bar class="bar-stable">         <h1 class="title">$ionicLoading service</h1>     </ion-header-bar>     <ion-content class="padding">         <button class="button button-dark" ng-click="toggleOverlay()">             Toggle Overlay         </button>     </ion-content> </body> We have a button that calls toggleOverlay(). If you save all the files, head back to the browser, and click on the Toggle Overlay button, you will see the following screenshot: As you can see, the overlay is shown till the hide method is called on $ionicLoading. You can also move the preceding logic inside a service and reuse it across the app. The service will look like this: .service('Loading', function($ionicLoading, $timeout) {     this.show = function() {         $ionicLoading.show({             template: 'Loading...'         });     };     this.hide = function() {         $ionicLoading.hide();     };       this.toggle= function() {         var self  = this;         self.show();           // wait for 3 seconds and hide the overlay         $timeout(function() {             self.hide();         }, 3000);     };   }) Now, once you inject the Loading service into your controller or directive, you can use Loading.show(), Loading.hide(), or Loading.toggle(). If you would like to show only a spinner icon instead of text, you can call the $ionicLoading.show method without any options: $scope.showLoadingOverlay = function() {         $ionicLoading.show();     }; Then, you will see this: You can configure the show method further. More information is available at http://ionicframework.com/docs/nightly/api/service/$ionicLoading/. You can also use the $ionicBackdrop service to show just a backdrop. Read more about $ionicBackdrop at http://ionicframework.com/docs/nightly/api/service/$ionicBackdrop/. You can also checkout the $ionicModal service at http://ionicframework.com/docs/api/service/$ionicModal/; it is quite similar to the loading service. Popover and Popup services Popover is a contextual view that generally appears next to the selected item. This component is used to show contextual information or to show more information about a component. To test this service, we will be scaffolding a new blank app: ionic start -a "Example 23" -i app.example.twentythree example23 blank Using the cd command, go to the example23 folder and run this:   ionic serve This will launch the blank template in the browser. We will add a new controller to the blank project named AppCtrl. We will be adding our controller code in www/js/app.js. .controller('AppCtrl', function($scope, $ionicPopover) {       // init the popover     $ionicPopover.fromTemplateUrl('button-options.html', {         scope: $scope     }).then(function(popover) {         $scope.popover = popover;     });       $scope.openPopover = function($event, type) {         $scope.type = type;         $scope.popover.show($event);     };       $scope.closePopover = function() {         $scope.popover.hide();         // if you are navigating away from the page once         // an option is selected, make sure to call         // $scope.popover.remove();     };   }); We are using the $ionicPopover service and setting up a popover from a template named button-options.html. 
We are assigning the current controller scope as the scope to the popover. We have two methods on the controller scope that will show and hide the popover. The openPopover method receives two options. One is the event and second is the type of the button we are clicking (more on this in a moment). Next, we update our www/index.html body section as follows: <body ng-app="starter" ng-controller="AppCtrl">     <ion-header-bar class="bar-positive">         <h1 class="title">Popover Service</h1>     </ion-header-bar>     <ion-content class="padding">         <button class="button button-block button-dark" ng- click="openPopover($event, 'dark')">             Dark Button         </button>         <button class="button button-block button-assertive" ng- click="openPopover($event, 'assertive')">             Assertive Button         </button>         <button class="button button-block button-calm" ng- click="openPopover($event, 'calm')">             Calm Button         </button>     </ion-content>     <script id="button-options.html" type="text/ng-template">         <ion-popover-view>             <ion-header-bar>                 <h1 class="title">{{type}} options</h1>             </ion-header-bar>             <ion-content>                 <div class="list">                     <a href="#" class="item item-icon-left">                         <i class="icon ion-ionic"></i> Option One                     </a>                     <a href="#" class="item item-icon-left">                         <i class="icon ion-help-buoy"></i> Option Two                     </a>                     <a href="#" class="item item-icon-left">                         <i class="icon ion-hammer"></i> Option Three                     </a>                     <a href="#" class="item item-icon-left" ng- click="closePopover()">                         <i class="icon ion-close"></i> Close                     </a>                 </div>             </ion-content>         </ion-popover-view>     </script> </body> Inside ion-content, we have created three buttons, each themed with a different mood (dark, assertive, and calm). When a user clicks on the button, we show a popover that is specific for that button. For this example, all we are doing is passing in the name of the mood and showing the mood name as the heading in the popover. But you can definitely do more. Do notice that we have wrapped our template content inside ion-popover-view. This takes care of positioning the modal appropriately. The template must be wrapped inside the ion-popover-view for the popover to work correctly. When we save all the files and head back to the browser, we will see the three buttons. Depending on the button you click, the heading of the popover changes, but the options remain the same for all of them: Then, when we click anywhere on the page or the close option, the popover closes. If you are navigating away from the page when an option is selected, make sure to call: $scope.popover.remove(); You can read more about Popover at http://ionicframework.com/docs/api/controller/ionicPopover/. Our GitHub organization With the ever-changing frontend world, keeping up with latest in the business is quite essential. During the course of the book, Cordova, Ionic, and Ionic CLI has evolved a lot and we are predicting that they will keep evolving till they become stable. So, we have created a GitHub organization named Learning Ionic (https://github.com/learning-ionic), which consists of code for all the chapters. 
You can raise issues and submit pull requests there, and we will try to keep the organization updated with the latest changes, so you can always refer back to it for any updates.

Summary

In this article, we looked at various Ionic directives and services that help us develop applications more easily.

Resources for Article:

Further resources on this subject: Mailing with Spring Mail [article] Implementing Membership Roles, Permissions, and Features [article] Time Travelling with Spring [article]

article-image-getting-started-livecode-mobile-0
Packt
03 Jun 2015
34 min read
Save for later

Getting Started with LiveCode Mobile

In this article written by Joel Gerdeen, author of the book LiveCode Mobile Development: Beginner's Guide - Second Edition we will learn the following topics: Sign up for Google Play Sign up for Amazon Appstore Download and install the Android SDK Configure LiveCode so that it knows where to look for the Android SDK Become an iOS developer with Apple Download and install Xcode Configure LiveCode so that it knows where to look for iOS SDKs Set up simulators and physical devices Test a stack in a simulator and physical device (For more resources related to this topic, see here.) Disclaimer This article references many Internet pages that are not under our control. Here, we do show screenshots or URLs, so remember that the content may have changed since we wrote this. The suppliers may also have changed some of the details, but in general, our description of procedures should still work the way we have described them. Here we go... iOS, Android, or both? It could be that you only have interest in iOS or Android. You should be able to easily skip to the sections you're interested in unless you're intrigued about how the other half works! If, like me, you're a capitalist, then you should be interested in both the operating systems. Far fewer steps are needed to get the Android SDK than the iOS developer tools because for iOS, we have to sign up as a developer with Apple. However, the configuration for Android is more involved. We'll go through all the steps for Android and then the ones for iOS. If you're an iOS-only kind of person, skip the next few pages and start up again at the Becoming an iOS Developer section. Becoming an Android developer It is possible to develop Android OS apps without signing up for anything. We'll try to be optimistic and assume that within the next 12 months, you will find time to make an awesome app that will make you rich! To that end, we'll go over everything that is involved in the process of signing up to publish your apps in both Google Play (formally known as Android Market) and Amazon Appstore. Google Play The starting location to open Google Play is http://developer.android.com/: We will come back to this page again, shortly to download the Android SDK, but for now, click on the Distribute link in the menu bar and then on the Developer Console button on the following screen. Since Google changes these pages occasionally, you can use the URL https://play.google.com/apps/publish/ or search for "Google Play Developer Console". The screens you will progress through are not shown here since they tend to change with time. There will be a sign-in page; sign in using your usual Google details. Which e-mail address to use? Some Google services are easier to sign up for if you have a Gmail account. Creating a Google+ account, or signing up for some of their cloud services, requires a Gmail address (or so it seemed to me at the time!). If you have previously set up Google Wallet as part of your account, some of the steps in signing up become simpler. So, use your Gmail address and if you don't have one, create one! Google charges you a $25 fee to sign up for Google Play. At least now, you know about this! Enter the developer name, e-mail address, website URL (if you have one), and your phone number. The payment of $25 will be done through Google Wallet, which will save you from entering the billing details yet again. Now, you're all signed up and ready to make your fortune! 
Amazon Appstore Although the rules and costs for Google Play are fairly relaxed, Amazon has a more Apple-like approach, both in the amount they charge you to register and in the review process to accept app submissions. The URL to open Amazon Appstore is http://developer.amazon.com/public: Follow these steps to start with Amazon Appstore: When you select Get Started, you need to sign in to your Amazon account. Which email address to use? This feels like déjà vu! There is no real advantage of using your Google e-mail address when signing up for the Amazon Appstore Developer Program, but if you happen to have an account with Amazon, sign in with that one. It will simplify the payment stage, and your developer account and the general Amazon account will be associated with each other. You are then asked to agree to the Appstore Distribution Agreement terms before learning about the costs. These costs are $99 per year, but the first year is free. So that's good! Unlike the Google Android Market, Amazon asks for your bank details up front, ready to send you lots of money later, we hope! That's it, you're ready to make another fortune to go along with the one that Google sent you! Pop quiz – when is something too much? You're at the end of developing your mega app, it's 49.5 MB in size, and you just need to add title screen music. Why would you not add the two-minute epic tune you have lined up? It would take too long to load. People tend to skip the title screen soon anyway. The file size is going to be over 50 MB. Heavy metal might not be appropriate for a children's storybook app! Answer: 3 The other answers are valid too, though you could play the music as an external sound to reduce loading time, but if your file size goes over 50 MB, you would then cut out potential sales from people who are connected by cellular and not wireless networks. At the time of writing this aticle, all the stores require that you be connected to the site via a wireless network if you intend to download apps that are over 50 MB. Downloading the Android SDK Head back to http://developer.android.com/ and click on the Get the SDK link or go straight to http://developer.android.com/sdk/index.html. This link defaults to the OS that you are running on. Click on the Other Download Options link to see the full set of options for other systems, as shown here: In this article, we're only going to cover Windows and Mac OS X (Intel) and only as much as is needed to make LiveCode work with the Android and iOS SDKs. If you intend to make native Java-based applications, you may be interested in reading through all the steps that are described in the web page http://developer.android.com/sdk/installing.html. Click on the SDK download link for your platform. Note that you don't need the ADT Bundle unless you plan to develop outside the LiveCode IDE. The steps you'll have to go through are different for Mac and Windows. Let's start with Mac. Installing the Android SDK on Mac OS X (Intel) LiveCode itself doesn't require Intel Mac; you can develop stacks using a PowerPC-based Mac, but both the Android SDK and some of the iOS tools require an Intel-based Mac, which sadly means that if you're reading this as you sit next to your Mac G4 or G5, you're not going to get too far! The Android SDK requires the Java Runtime Environment (JRE). Since Apple stopped including the JRE in more recent OS X systems, you should check whether you have it in your system by typing java –version in a Terminal window. 
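If a JRE is already installed, the check prints version details along the lines of the following example (the exact version and build numbers are only an illustration and will depend on your installation):

$ java -version
java version "1.6.0_65"
Java(TM) SE Runtime Environment (build 1.6.0_65-b14-468)
Java HotSpot(TM) 64-Bit Server VM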
The terminal will display the version of Java installed. If not, you may get a message like the following: Click on the More Info button and follow the instructions to install the JRE and verify its installation. At the time of writing this article, JRE 8 doesn't work with OS X 10.10 and I had to use the JRE 6 obtained from http://support.apple.com/kb/DL1572. The file that you just downloaded will automatically expand to show a folder named android-sdk-macosx. It may be in your downloads folder right now, but a more natural place for it would be in your Documents folder, so move it there before performing the next steps. There is an SDK readme text file that lists the steps you need to follow during the installation. If these steps are different to what we have here, then follow the steps in the readme file in case they have been updated since the procedure here was written. Open the Terminal application, which is in Applications/Utilities. You need to change the default directories present in the android-sdk-macosx folder. One handy trick, using Terminal, is that you can drag items into the Terminal window to get the file path to that item. Using this trick, you can type cd and a space in the Terminal window and then drag the android-sdk-macosx folder after the space character. You'll end up with this line if your username is Fred: new-host-3:~ fred$ cd /Users/fred/Documents/android-sdk-macosx Of course, the first part of the line and the user folder will match yours, not Fred's! Whatever your name is, press the Return or Enter key after entering the preceding line. The location line now changes to look like this: new-host-3:android-sdk-macosx colin$ Either carefully type or copy and paste the following line from the readme file: tools/android update sdk --no-ui Press Return or Enter again. How long the file takes to get downloaded depends on your Internet connection. Even with a very fast Internet connection, it could still take over an hour. If you care to follow the update progress, you can just run the android file in the tools directory. This will open the Android SDK Manager, which is similar to the Windows version shown a couple of pages further on in this article. Installing the Android SDK on Windows The downloads page recommends that you use the .exe download link, as it gives extra services to you, such as checking whether you have the Java Development Kit (JDK) installed. When you click on the link, either use the Run or Save options, as you would with any download of a Windows installer. Here, we've opted to use Run; if you use Save, then you need to open the file after it has been saved to your hard drive. In the following case, as the JDK wasn't installed, a dialog box appears saying go to Oracle's site to get the JDK: If you see this screen too, you can leave the dialog box open and click on the Visit java.oracle.com button. On the Oracle page, click on a checkbox to agree to their terms and then on the download link that corresponds with your platform. Choose the 64-bit option if you are running a 64-bit version of Windows or the x86 option if you are running a 32-bit version of Windows. Either way, you're greeted with another installer that you can Run or Save as you prefer. Naturally, it takes a while for the installer to do its thing too! When the installation is complete, you will see a JDK registration page and it's up to you, to register or not. 
Back at the Android SDK installer dialog box, you can click on the Back button and then the Next button to get back to the JDK checking stage; only now, it sees that you have the JDK installed. Complete the remaining steps of the SDK installer as you would with any Windows installer. One important thing to note is that the last screen of the installer offers to open the SDK Manager. You should do that, so resist the temptation to uncheck that box! Click on Finish and you'll be greeted with a command-line window for a few moments, as shown in the following screenshot, and then, the Android SDK Manager will appear and do its thing: As with the Mac version, it takes a very long time for all these add-ons to download. Pointing LiveCode to the Android SDK After all the installation and command-line work, it's a refreshing change to get back to LiveCode! Open the LiveCode Preferences and choose Mobile Support: We will set the two iOS entries after we get iOS going (but these options will be grayed out in Windows). For now, click on the … button next to the Android development SDK root field and navigate to where the SDK is installed. If you've followed the earlier steps correctly, then the SDK will be in the Documents folder on Mac or you can navigate to C:Program Files (x86)Android to find it on Windows (or somewhere else, if you choose to use a custom location). Depending on the APIs that were loaded in the SDK Manager, you may get a message that the path does not include support for Android 2.2 (API 8). If so, use the Android SDK Manager to install it. LiveCode seems to want API 8 even though at this time Android 5.0 uses API 21. Phew! Now, let's do the same for iOS… Pop quiz – tasty code names An Android OS uses some curious code names for each version. At the time of writing this article, we were on Android OS 5, which had a code name of Lollipop. Version 4.1 was Jelly Bean and version 4.4 was KitKat. Which of these is most likely to be the code name for the next Android OS? Lemon Cheesecake Munchies Noodle Marshmallow Answer: 4 The pattern, if it isn't obvious, is that the code name takes on the next letter of the alphabet, is a kind of food, but more specifically, it's a dessert. "Munchies" almost works for Android OS 6, but "Marshmallow" or "Meringue Pie" would be a better choices! Becoming an iOS developer Creating iOS LiveCode applications requires that LiveCode must have access to the iOS SDK. This is installed as part of the Xcode developer tools and is a Mac-only program. Also, when you upload an app to the iOS App Store, the application used is Mac only and is part of the Xcode installation. If you are a Windows-based developer and wish to develop and publish for iOS, you need either an actual Mac based system or a virtual machine that can run the Mac OS. We can even use VirtualBox for running a Mac based virtual machine, but performance will be an issue. Refer to http://apple.stackexchange.com/questions/63147/is-mac-os-x-in-a-virtualbox-vm-suitable-for-ios-development for more information. The biggest difference between becoming an Android developer and becoming an iOS developer is that you have to sign up with Apple for their developer program even if you never produce an app for the iOS App Store, but no such signing up is required when becoming an Android developer. If things go well and you end up making an app for various stores, then this isn't such a big deal. 
It will cost you $25 to submit an app to the Android Market, $99 a year (with the first year free) to submit an app to the Amazon Appstore, and $99 a year (including the first year) to be an iOS developer with Apple. Just try to sell more than 300 copies of your amazing $0.99 app and you'll find that it has paid for itself! Note that there is a free iOS App Store and app licensing included, with LiveCode Membership, which also costs $99 per year. As a LiveCode member, you can submit your free non-commercial app to RunRev who will provide a license that will allow you to submit your app as "closed source" to iOS App Store. This service is exclusively available for LiveCode members. The first submission each year is free; after that, there is a $25 administration fee per submission. Refer to http://livecode.com/membership/ for more information. You can enroll yourself in the iOS Developer Program for iOS at http://developer.apple.com/programs/ios/: While signing up to be an iOS developer, there are a number of possibilities when it comes to your current status. If you already have an Apple ID, which you use with your iTunes or Apple online store purchases, you could choose the I already have an Apple ID… option. In order to illustrate all the steps to sign up, we will start as a brand new user, as shown in the following screenshot: You can choose whether you want to sign up as an individual or as a company. We will choose Individual, as shown in the following screenshot: With any such sign up process, you need to enter your personal details, set a security question, and enter your postal address: Most Apple software and services have their own legal agreement for you to sign. The one shown in the following screenshot is the general Registered Apple Developer Agreement: In order to verify the e-mail address you have used, a verification code is sent to you with a link in the e-mail, you can click this, or enter the code manually. Once you have completed the verification code step, you can then enter your billing details. It could be that you might go on to make LiveCode applications for the Mac App Store, in which case, you will need to add the Mac Developer Program product. For our purpose, we only need to sign up for the iOS Developer Program, as shown in the following screenshot: Each product that you sign up for has its own agreement. Lots of small print to read! The actual purchasing of the iOS developer account is handled through the Apple Store of your own region, shown as follows: As you can see in the next screenshot, it is going to cost you $99 per year or $198 per year if you also sign up for the Mac Developer account. Most LiveCode users won't need to sign up for the Mac Developer account unless their plan is to submit desktop apps to the Mac App Store. After submitting the order, you are rewarded with a message that tells you that you are now registered as an Apple developer! Sadly, you won't get an instant approval, as was the case with Android Market or Amazon Appstore. You have to wait for the approval for five days. In the early iPhone Developer days, the approval could take a month or more, so 24 hours is an improvement! Pop quiz – iOS code names You had it easy with the pop quiz about Android OS code names! Not so with iOS. Which of these names is more likely to be a code name for a future version of iOS? Las Vegas Laguna Beach Hunter Mountain Death Valley Answer: 3 Although not publicized, Apple does use code names for each version of iOS. 
Previous examples included Big Bear, Apex, Kirkwood, and Telluride. These, and all the others are apparently ski resorts. Hunter Mountain is a relatively small mountain (3,200 feet), so if it does get used, perhaps it would be a minor update! Installing Xcode Once you receive confirmation of becoming an iOS developer, you will be able to log in to the iOS Dev Center at https://developer.apple.com/devcenter/ios/index.action. This same page is used by iOS developers who are not using LiveCode and is full of support documents that can help you create native applications using Xcode and Objective-C. We don't need all the support documents, but we do need to download Xcode's support documents. In the downloads area of the iOS Dev Center page, you will see a link to the current version of Xcode and a link to get to the older versions as well. The current version is delivered via Mac App Store; when you try the given link, you will see a button that takes you to the App Store application. Installing Xcode from Mac App Store is very straightforward. It's just like buying any other app from the store, except that it's free! It does require you to use the latest version of Mac OS X. Xcode will show up in your Applications folder. If you are using an older system, then you need to download one of the older versions from the developer page. The older Xcode installation process is much like the installation process of any other Mac application: The older version of Xcode takes a long time to get installed, but in the end, you should have the Developer folder or a new Xcode application ready for LiveCode. Coping with newer and older devices In early 2012, Apple brought to the market a new version of iPad. The main selling point of this one compared to iPad 2 is that it has a Retina display. The original iPads have a resolution of 1024 x 768 and the Retina version has a resolution of 2048 x 1536. If you wish to build applications to take advantage of this, you must get the current version of Xcode from Mac App Store and not one of the older versions from the developer page. The new version of Xcode demands that you work on Mac OS 10.10 or its later versions. So, to fully support the latest devices, you may have to update your system software more than you were expecting! But wait, there's more… By taking a later version of Xcode, you are missing the iOS SDK versions needed to support older iOS devices, such as the original iPhone and iPhone 3G. Fortunately, you can go to Preferences in Xcode where there is a Downloads tab where you can get these older SDKs downloaded in the new version of Xcode. Typically, Apple only allows you to download one version older than the one that is currently provided in Xcode. There are older versions available, but are not accepted by Apple for App Store submission. Pointing LiveCode to the iOS SDKs Open the LiveCode Preferences and choose Mobile Support: Click on the Add Entry button in the upper-right section of the window to see a dialog box that asks whether you are using Xcode 4.2 or 4.3 or a later version. If you choose 4.2, then go on to select the folder named Developer at the root of your hard drive. For 4.3 or later versions, choose the Xcode application itself in your Applications folder. LiveCode knows where to find the SDKs for iOS. Before we make our first mobile app… Now that the required SDKs are installed and LiveCode knows where they are, we can make a stack and test it in a simulator or on a physical device. 
We do, however, have to get the simulators and physical devices warmed up…

Getting ready for test development on an Android device

Simulating on iOS is easier than it is on Android, and testing on a physical device is easier on Android than on iOS, but setting up physical Android devices can be horrendous!

Time for action – starting an Android Virtual Device

You will have to dig a little deep in the Android SDK folders to find the Android Virtual Device setup program. You might as well create a shortcut or an alias to it for quicker access. The following steps will help you set up and start an Android virtual device:

Navigate to the Android SDK tools folder, located at C:\Program Files (x86)\Android\android-sdk on Windows, or navigate to your Documents/android-sdk-macosx/tools folder on Mac.
Open AVD Manager on Windows or android on Mac (the Mac version looks like a Unix executable file; just double-click on it and the application will open via a command-line window).
If you're on Mac, select Manage AVDs… from the Tools menu.
Select Tablet from the list of devices if there is one. If not, you can add your own custom devices as described in the following section.
Click on the Start button.
Sit patiently while the virtual device starts up!
Open LiveCode, create a new Mainstack, and click on Save to save the stack to your hard drive.
Navigate to File | Standalone Application Settings….
Click on the Android icon and click on the Build for Android checkbox to select it.
Close the settings dialog box and take a look at the Development menu. If the virtual machine is up and running, you should see it listed in the Test Target submenu.

Creating an Android Virtual Device

If there are no devices listed when you open the Android Virtual Device (AVD) Manager, you may wish to create one; to do so, click on the Create button. The following screenshot will appear when you do so. Further explanation of the various fields can be found at https://developer.android.com/tools/devices/index.html. After you have created a device, you can click on Start to start the virtual device and change some of the Launch Options. You should typically select Scale display to real size unless it is too big for your development screen. Then, click on Launch to fire up the emulator. Further information on how to run the emulator can be found at http://developer.android.com/tools/help/emulator.html.

What just happened?

Now that you've opened an Android virtual device, LiveCode will be able to test stacks using this device. Once it has finished loading, that is!

Connecting a physical Android device

Connecting a physical Android device can be extremely straightforward:

Connect your device to the system by USB.
Select your device from the Development | Test Target submenu.
Select Test from the Development menu or click on the Test button in the Tool Bar.

There can be problem cases though, and Google Search will become your best friend before you are done solving these problems! We should look at an example problem case, so that you get an idea of how to solve similar situations that you may encounter.

Using Kindle Fire

When it comes to finding Android devices, the Android SDK recognizes a lot of them automatically. Some devices are not recognized, and you have to do something to help Android Debug Bridge (ADB) find these devices. ADB is part of the Android SDK and acts as an intermediary between your device and any software that needs to access the device.
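A quick way to check whether ADB can already see your device is to list the attached devices from the platform-tools folder of the SDK (the paths below assume the default install locations used earlier in this article):

cd /Users/yourusername/Documents/android-sdk-macosx/platform-tools
./adb devices

On Windows, run adb devices from C:\Program Files (x86)\Android\android-sdk\platform-tools instead. If your device shows up in the list, ADB already recognizes it; if the list is empty, the device needs the kind of extra configuration described next.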
In some cases, you will need to go to the Android system on the device to tell it to allow access for development purposes. For example, on an Android 3 (Honeycomb) device, you need to go to the Settings | Applications | Development menu and you need to activate the USB debugging mode. Before ADB connects to a Kindle Fire device, that device must first be configured, so that it allows connection. This is enabled by default on the first generation Kindle Fire device. On all other Kindle Fire models, go to the device settings screen, select Security, and set Enable ADB to On. The original Kindle Fire model comes with USB debugging already enabled, but the ADB system doesn't know about the device at all. You can fix this! Time for action – adding Kindle Fire to ADB It only takes one line of text to add Kindle Fire to the list of devices that ADB knows about. The hard part is tracking down the text file to edit and getting ADB to restart after making the required changes. Things are more involved when using Windows than with Mac because you also have to configure the USB driver, so the two systems are shown here as separate steps. The steps to be followed for adding a Kindle Fire to ADB for a Windows OS are as follows: In Windows Explorer, navigate to C:Usersyourusername.android where the adv_usb.ini file is located. Open the adv_usb.ini text file in a text editor. The file has no visible line breaks, so it is better to use WordPad than NotePad. On the line after the three instruction lines, type 0x1949. Make sure that there are no blank lines; the last character in the text file would be 9 at the end of 0x1949. Now, save the file. Navigate to C:Program Files (x86)Androidandroid-sdkextrasgoogleusb_driver where android_winusb.inf is located. Right-click on the file and in Properties, Security, select Users from the list and click on Edit to set the permissions, so that you are allowed to write the file. Open the android_winusb.inf file in NotePad. Add the following three lines to the [Google.NTx86] and [Google.NTamd64] sections and save the file: ;Kindle Fire %SingleAdbInterface% = USB_Install, USBVID_1949&PID_0006 %CompositeAdbInterface% = USB_Install, USBVID_1949&PID_0006&MI_01 You need to set the Kindle so that it uses the Google USB driver that you just edited. In the Windows control panel, navigate to Device Manager and find the Kindle entry in the list that is under USB. Right-click on the Kindle entry and choose Update Driver Software…. Choose the option that lets you find the driver on your local drive, navigate to the googleusb_driver folder, and then select it to be the new driver. When the driver is updated, open a command window (a handy trick to open a command window is to use Shift-right-click on the desktop and to choose "Open command window here"). Change the directories to where the ADB tool is located by typing: cd C:Program Files (x86)Androidandroid-sdkplatform-tools Type the following three line of code and press Enter after each line: adb kill-server adb start-server adb devices You should see the Kindle Fire listed (as an obscure looking number) as well as the virtual device if you still have that running. The steps to be followed for a Mac (MUCH simpler!) system are as follows: Navigate to where the adv_usb.ini file is located. On Mac, in Finder, select the menu by navigating to Go | Go to Folder… and type ~/.android/in. Open the adv_usb.ini file in a text editor. On the line after the three instruction lines, type 0x1949. 
Make sure that there are no blank lines; the last character in the text file would be 9 at the end of 0x1949. Save the adv_usb.ini file. Navigate to Utilities | Terminal. You can let OS X know how to find ADB from anywhere by typing the following line (replace yourusername with your actual username and also change the path if you've installed the Android SDK to some other location): export PATH=$PATH:/Users/yourusername/Documents/android-sdk-macosx/platform-tools Now, try the same three lines as we did with Windows: adb kill-server adb start-server adb devices Again, you should see the Kindle Fire listed here. What just happened? I suspect that you're going to have nightmares about all these steps! It took a lot of research on the Web to find out some of these obscure hacks. The general case with Android devices on Windows is that you have to modify the USB driver for the device to be handled using the Google USB driver, and you may have to modify the adb_usb.ini file (on Mac too) for the device to be considered as an ADB compatible device. Getting ready for test development on an iOS device If you carefully went through all these Android steps, especially on Windows, you will hopefully be amused by the brevity of this section! There is a catch though; you can't really test on an iOS device from LiveCode. We'll look at what you have to do instead in a moment, but first, we'll look at the steps required to test an app in the iOS simulator. Time for action – using the iOS simulator The initial steps are much like what we did for Android apps, but the process becomes a lot quicker in later steps. Remember, this only applies to a Mac OS; you can only do these things on Windows if you are using a Mac OS in a virtual machine, which may have performance issues. This is most likely not covered by the Mac OS's user agreement! In other words, get a Mac OS if you intend to develop for iOS. The following steps will help you achieve that: Open LiveCode and create a new Mainstack and save the stack to your hard drive. Select File and then Standalone Application Settings…. Click on the iOS icon to select the Build for iOS checkbox. Close the settings dialog box and take a look at the Test Target menu under Development. You will see a list of simulator options for iPhone and iPad and different versions of iOS. To start the iOS simulator, select an option and click on the Test button. What just happened? This was all it took for us to get the testing done using the iOS simulators! To test on a physical iOS device, we need to create an application file first. Let's do that. Appiness at last! At this point, you should be able to create a new Mainstack, save it, select either iOS or Android in the Standalone Settings dialog box, and be able to see simulators or virtual devices in the Development/Test menu item. In the case of an Android app, you will also see your device listed if it is connected via USB at the time. Time for action – testing a simple stack in the simulators Feel free to make things that are more elaborate than the ones we have made through these steps! The following instructions make an assumption that you know how to find things by yourself in the object inspector palette: Open LiveCode, create a new Mainstack, and save it someplace where it is easy to find in a moment from now. Set the card window to the size 480 x 320 and uncheck the Resizable checkbox. Drag a label field to the top-left corner of the card window and set its contents to something appropriate. Hello World might do. 
If you're developing on Windows, skip to step 11. Open the Standalone Application Settings dialog box, click on the iOS icon, and click on the Build for iOS checkbox. Under Orientation Options, set the iPhone Initial Orientation to Landscape Left. Close the dialog box. Navigate to the Development | Test Target submenu and choose an iPhone Simulator. Select Test from the Development menu. You should now be able to see your test stack running in the iOS simulator! As discussed earlier, launch the Android virtual device. Open the Standalone Application Settings dialog box, click on the Android icon, and click on the Build for Android checkbox. Under User Interface Options, set the Initial Orientation to Landscape. Close the dialog box. If the virtual device is running by now, do whatever it takes to get past the locked home screen, if that's what it is showing. From the Development/Test Target submenu, choose the Android emulator. Select Test from the Development menu. You should now see your test stack running in the Android emulator! What just happened? All being well, you just made and ran your first mobile app on both Android and iOS! For an encore, we should try this on physical devices only to give Android a chance to show how easy it can be done. There is a whole can of worms we didn't open yet that has to do with getting an iOS device configured, so that it can be used for testing. You could visit the iOS Provisioning Portal at https://developer.apple.com/ios/manage/overview/index.action and look at the How To tab in each of the different sections. Time for action – testing a simple stack on devices Now, let's try running our tests on physical devices. Get your USB cables ready and connect the devices to your computer. Lets go through the steps for an Android device first: You should still have Android selected in Standalone Application Settings. Get your device to its home screen past the initial Lock screen if there is one. Choose Development/Test Target and select your Android device. It may well say "Android" and a very long number. Choose Development/Test. The stack should now be running on your Android device. Now, we'll go through the steps to test a simple stack on an iOS device: Change the Standalone Application Settings back to iOS. Under Basic Application Settings of the iOS settings is a Profile drop-down menu of the provisioning files that you have installed. Choose one that is configured for the device you are going to test. Close the dialog box and choose Save as Standalone Application… from the File menu. In Finder, locate the folder that was just created and open it to reveal the app file itself. As we didn't give the stack a sensible name, it will be named Untitled 1. Open Xcode, which is in the Developer folder you installed earlier, in the Applications subfolder. In the Xcode folder, choose Devices from the Window menu if it isn't already selected. You should see your device listed. Select it and if you see a button labeled Use for Development, click on that button. Drag the app file straight from the Finder menu to your device in the Devices window. You should see a green circle with a + sign. You can also click on the + sign below Installed Apps and locate your app file in the Finder window. You can also replace or delete an installed app from this window. You can now open the app on your iOS device! What just happened? 
In addition to getting a test stack to work on real devices, we also saw how easy it is, once it's all configured, to test a stack, straight on an Android device. If you are developing an app that is to be deployed on both Android and iOS, you may find that the fastest way to work is to test with the iOS Simulator for iOS tests, but for this, you need to test directly on an Android device instead of using the Android SDK virtual devices. Have a go hero – Nook Until recently, the Android support for the Nook Color from Barnes & Noble wasn't good enough to install LiveCode apps. It seems to have improved though and could well be another worthwhile app store for you to target. Investigate about the sign up process, download their SDK, and so on. With any luck, some of the processes that you've learned while signing up for the other stores will also apply to the Nook store. You can start the signing up process at https://nookdeveloper.barnesandnoble.com. Further reading The SDK providers, Google and Apple, have extensive pages of information on how to set up development environments, create certificates and provisioning files, and so on. The information covers a lot of topics that don't apply to LiveCode, so try not to get lost! These URLs would be good starting points if you want to read further: http://developer.android.com/ http://developer.apple.com/ios/ Summary Signing up for programs, downloading files, using command lines all over the place, and patiently waiting for the Android emulator to launch. Fortunately, you only have to go through it once. In this article, we worked through a number of tasks that you have to do before you create a mobile app in LiveCode. We had to sign up as an iOS developer before we could download and install Xcode and iOS SDKs. We then downloaded and installed the Android SDK and configured LiveCode for devices and simulators. We also covered some topics that will be useful once you are ready to upload a finished app. We showed you how to sign up for the Android Market and Amazon Appstore. There will be a few more mundane things that we have to cover at the end of the article, but not for a while! Next up, we will start to play with some of the special abilities of mobile devices. Resources for Article: Further resources on this subject: LiveCode: Loops and Timers [article] Creating Quizzes [article] Getting Started with LiveCode for Mobile [article]

A Command-line Companion Called Artisan

Packt
06 May 2015
17 min read
In this article by Martin Bean, author of the book Laravel 5 Essentials, we will see how Laravel's command-line utility has far more capabilities and can be used to run and automate all sorts of tasks. In the next pages, you will learn how Artisan can help you: Inspect and interact with your application Enhance the overall performance of your application Write your own commands By the end of this tour of Artisan's capabilities, you will understand how it can become an indispensable companion in your projects. (For more resources related to this topic, see here.) Keeping up with the latest changes New features are constantly being added to Laravel. If a few days have passed since you first installed it, try running a composer update command from your terminal. You should see the latest versions of Laravel and its dependencies being downloaded. Since you are already in the terminal, finding out about the latest features is just one command away: $ php artisan changes This saves you from going online to find a change log or reading through a long history of commits on GitHub. It can also help you learn about features that you were not aware of. You can also find out which version of Laravel you are running by entering the following command: $ php artisan --version Laravel Framework version 5.0.16 All Artisan commands have to be run from your project's root directory. With the help of a short script such as Artisan Anywhere, available at https://github.com/antonioribeiro/artisan-anywhere, it is also possible to run Artisan from any subfolder in your project. Inspecting and interacting with your application With the route:list command, you can see at a glance which URLs your application will respond to, what their names are, and if any middleware has been registered to handle requests. This is probably the quickest way to get acquainted with a Laravel application that someone else has built. To display a table with all the routes, all you have to do is enter the following command: $ php artisan route:list In some applications, you might see /{v1}/{v2}/{v3}/{v4}/{v5} appended to particular routes. This is because the developer has registered a controller with implicit routing, and Laravel will try to match and pass up to five parameters to the controller. Fiddling with the internals When developing your application, you will sometimes need to run short, one-off commands to inspect the contents of your database, insert some data into it, or check the syntax and results of an Eloquent query. One way you could do this is by creating a temporary route with a closure that is going to trigger these actions. However, this is less than practical since it requires you to switch back and forth between your code editor and your web browser. To make these small changes easier, Artisan provides a command called tinker, which boots up the application and lets you interact with it. Just enter the following command: $ php artisan tinker This will start a Read-Eval-Print Loop (REPL) similar to what you get when running the php -a command, which starts an interactive shell. 
In this REPL, you can enter PHP commands in the context of the application and immediately see their output: > $cat = 'Garfield'; > App\Cat::create(['name' => $cat, 'date_of_birth' => new DateTime]); > echo App\Cat::whereName($cat)->get(); [{"id":"4","name":"Garfield","date_of_birth":…}] > dd(Config::get('database.default')); Version 5 of Laravel leverages PsySH, a PHP-specific REPL that provides a more robust shell with support for keyboard shortcuts and history. Turning the engine off Whether it is because you are upgrading a database or waiting to push a fix for a critical bug to production, you may want to manually put your application on hold to avoid serving a broken page to your visitors. You can do this by entering the following command: $ php artisan down This will put your application into maintenance mode. You can determine what to display to users when they visit your application in this mode by editing the template file at resources/views/errors/503.blade.php (since maintenance mode sends an HTTP status code of 503 Service Unavailable to the client). To exit maintenance mode, simply run the following command: $ php artisan up Fine-tuning your application For every incoming request, Laravel has to load many different classes and this can slow down your application, particularly if you are not using a PHP accelerator such as APC, eAccelerator, or XCache. In order to reduce disk I/O and shave off precious milliseconds from each request, you can run the following command: $ php artisan optimize This will trim and merge many common classes into one file located inside storage/framework/compiled.php. The optimize command is something you could, for example, include in a deployment script. By default, Laravel will not compile your classes if app.debug is set to true. You can override this by adding the --force flag to the command, but bear in mind that this will make your error messages less readable. Caching routes Apart from caching class maps to improve the response time of your application, you can also cache the routes of your application. This is something else you can include in your deployment process. The command? Simply enter the following: $ php artisan route:cache The advantage of caching routes is that your application will get a little faster, as its routes will have been pre-compiled instead of being evaluated against the URL on each request. However, as the routing process now refers to a cache file, any new routes added will not be parsed. You will need to re-cache them by running the route:cache command again. Therefore, this is not suitable during development, where routes might change frequently. Generators Laravel 5 ships with various commands to generate new files of different types. If you run $ php artisan list, under the make namespace you will find the following entries: make:command make:console make:controller make:event make:middleware make:migration make:model make:provider make:request These commands create a stub file in the appropriate location in your Laravel application containing boilerplate code ready for you to get started with. This saves keystrokes compared to creating these files from scratch. All of these commands require a name to be specified, as shown in the following command: $ php artisan make:model Cat This will create an Eloquent model class called Cat at app/Cat.php, as well as a corresponding migration to create a cats table.
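For reference, the migration generated alongside the model will look something like the following; the exact stub varies slightly between Laravel 5.x releases, so treat this as an approximation rather than the file you will see verbatim:

<?php

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Database\Migrations\Migration;

class CreateCatsTable extends Migration
{
    // Run the migration: create the cats table with an auto-incrementing
    // primary key and created_at/updated_at timestamp columns
    public function up()
    {
        Schema::create('cats', function (Blueprint $table) {
            $table->increments('id');
            $table->timestamps();
        });
    }

    // Reverse the migration
    public function down()
    {
        Schema::drop('cats');
    }
}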
If you do not need to create a migration when making a model (for example, if the table already exists), then you can pass the --no-migration option as follows: $ php artisan make:model Cat --no-migration A new model class will look like this: <?php namespace App; use Illuminate\Database\Eloquent\Model; class Cat extends Model { // } From here, you can define your own properties and methods. The other commands may have options. The best way to check is to append --help after the command name, as shown in the following command: $ php artisan make:command --help You will see that this command has --handler and --queued options to modify the class stub that is created. Rolling out your own Artisan commands At this stage, you might be thinking about writing your own bespoke commands. As you will see, this is surprisingly easy to do with Artisan. If you have used Symfony's Console component, you will be pleased to know that an Artisan command is simply an extension of it with a slightly more expressive syntax. This means the various helpers that prompt for input, show a progress bar, or format a table are all available from within Artisan. The command that we are going to write depends on the application we built. It will allow you to export all cat records present in the database as a CSV file, with or without a header line. If no output file is specified, the command will simply dump all records onto the screen in a formatted table. Creating the command There are only two required steps to create a command. Firstly, you need to create the command itself, and then you need to register it manually. We can use the make:console command we saw previously to create our console command: $ php artisan make:console ExportCatsCommand This will generate a class inside app/Console/Commands. We will then need to register this command with the console kernel, located at app/Console/Kernel.php: protected $commands = [ 'App\Console\Commands\ExportCatsCommand', ]; If you now run php artisan, you should see a new command called command:name. This command does not do anything yet. However, before we start writing the functionality, let's briefly look at how it works internally. The anatomy of a command Inside the newly created command class, you will find some code that has been generated for you. We will walk through the different properties and methods and see what their purpose is. The first two properties are the name and description of the command. Nothing exciting here; this is only the information that will be shown in the command line when you run Artisan. The colon is used to namespace the commands, as shown here: protected $name = 'export:cats'; protected $description = 'Export all cats'; Then you will find the fire method. This is the method that gets called when you run a particular command. From there, you can retrieve the arguments and options passed to the command, or run other methods. public function fire() Lastly, there are two methods that are responsible for defining the list of arguments or options that are passed to the command: protected function getArguments() { /* Array of arguments */ } protected function getOptions() { /* Array of options */ } Each argument or option can have a name, a description, and a default value, and can be either mandatory or optional. Additionally, options can have a shortcut.
To understand the difference between arguments and options, consider the following command, where options are prefixed with two dashes: $ command --option_one=value --option_two -v=1 argument_one argument_two In this example, option_two does not have a value; it is only used as a flag. The -v flag only has one dash since it is a shortcut. In your console commands, you'll need to verify any option and argument values the user provides (for example, if you're expecting a number, ensure the value passed is actually numeric). Arguments can be retrieved with $this->argument($arg), and options, you guessed it, with $this->option($opt). If these methods do not receive any parameters, they simply return the full list of parameters. You refer to arguments and options via their names, that is, $this->argument('argument_name');. Writing the command We are going to start by writing a method that retrieves all cats from the database and returns them as an array: protected function getCatsData() { $output = []; $cats = \App\Cat::with('breed')->get(); foreach ($cats as $cat) { $output[] = [ $cat->name, $cat->date_of_birth, $cat->breed->name, ]; } return $output; } There should not be anything new here. We could have used the toArray() method, which turns an Eloquent collection into an array, but we would have had to flatten the array and exclude certain fields. Then we need to define what arguments and options our command expects: protected function getArguments() { return [ ['file', InputArgument::OPTIONAL, 'The output file', null], ]; } To specify additional arguments, just add an additional element to the array with the same parameters: return [ ['arg_one', InputArgument::OPTIONAL, 'Argument 1', null], ['arg_two', InputArgument::OPTIONAL, 'Argument 2', null], ]; The options are defined in a similar way: protected function getOptions() { return [ ['headers', 'h', InputOption::VALUE_NONE, 'Display headers?', null], ]; } The last parameter is the default value that the argument or option should have if it is not specified. In both cases, we want it to be null. Lastly, we write the logic for the fire method: public function fire() { $output_path = $this->argument('file'); $headers = ['Name', 'Date of Birth', 'Breed']; $rows = $this->getCatsData(); if ($output_path) { $handle = fopen($output_path, 'w'); if ($this->option('headers')) { fputcsv($handle, $headers); } foreach ($rows as $row) { fputcsv($handle, $row); } fclose($handle); $this->info('The export was saved to ' . $output_path); } else { $table = $this->getHelperSet()->get('table'); $table->setHeaders($headers)->setRows($rows); $table->render($this->getOutput()); } } While the bulk of this method is relatively straightforward, there are a few novelties. The first one is the use of the $this->info() method, which writes an informative message to the output. If you need to show an error message in a different color, you can use the $this->error() method. Further down in the code, you will see some functions that are used to generate a table. As we mentioned previously, an Artisan command extends the Symfony console component and, therefore, inherits all of its helpers. These can be accessed with $this->getHelperSet(). Then it is only a matter of passing arrays for the header and rows of the table, and calling the render method.
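One refinement you might consider, and this is our own addition rather than part of the book's example, is to use the interactive console helpers to ask before overwriting an existing file. A minimal sketch, placed just after $output_path is assigned in the fire method, could look like this:

if ($output_path && file_exists($output_path)) {
    // confirm() is one of the interactive helpers inherited from the base
    // Command class; it returns false when the user answers no
    if (! $this->confirm('The file ' . $output_path . ' already exists. Overwrite it?')) {
        $this->info('Export cancelled.');
        return;
    }
}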
To see the output of our command, we will run the following command: $ php artisan export:cats $ php artisan export:cats --headers file.csv Scheduling commands Traditionally, if you wanted a command to run periodically (hourly, daily, weekly, and so on), then you would have to set up a Cron job in Linux-based environments, or a scheduled task in Windows environments. However, this comes with drawbacks. It requires the user to have server access and familiarity with creating such schedules. Also, in cloud-based environments, the application may not be hosted on a single machine, or the user might not have the privileges to create Cron jobs. The creators of Laravel saw this as something that could be improved, and have come up with an expressive way of scheduling Artisan tasks. Your schedule is defined in app/Console/Kernel.php, and with your schedule being defined in this file, it has the added advantage of being present in source control. If you open the Kernel class file, you will see a method named schedule. Laravel ships with one by default that serves as an example: $schedule->command('inspire')->hourly(); If you've set up a Cron job in the past, you will see that this is instantly more readable than the crontab equivalent: 0 * * * * /path/to/artisan inspire Specifying the task in code also means we can easily change the console command to be run without having to update the crontab entry. By default, scheduled commands will not run. To do so, you need a single Cron job that runs the scheduler each and every minute: * * * * * php /path/to/artisan schedule:run 1>> /dev/null 2>&1 When the scheduler is run, it will check for any jobs whose schedules match and then runs them. If no schedules match, then no commands are run in that pass. You are free to schedule as many commands as you wish, and there are various methods to schedule them that are expressive and descriptive: $schedule->command('foo')->everyFiveMinutes(); $schedule->command('bar')->everyTenMinutes(); $schedule->command('baz')->everyThirtyMinutes(); $schedule->command('qux')->daily(); You can also specify a time for a scheduled command to run: $schedule->command('foo')->dailyAt('21:00'); Alternatively, you can create less frequent scheduled commands: $schedule->command('foo')->weekly(); $schedule->command('bar')->weeklyOn(1, '21:00'); The first parameter in the second example is the day, with 0 representing Sunday, and 1 through 6 representing Monday through Saturday, and the second parameter is the time, again specified in 24-hour format. You can also explicitly specify the day on which to run a scheduled command: $schedule->command('foo')->mondays(); $schedule->command('foo')->tuesdays(); $schedule->command('foo')->wednesdays(); // And so on $schedule->command('foo')->weekdays(); If you have a potentially long-running command, then you can prevent it from overlapping: $schedule->command('foo')->everyFiveMinutes()          ->withoutOverlapping(); Along with the schedule, you can also specify the environment under which a scheduled command should run, as shown in the following command: $schedule->command('foo')->weekly()->environments('production'); You could use this to run commands in a production environment, for example, archiving data or running a report periodically. By default, scheduled commands won't execute if the maintenance mode is enabled. 
This behavior can be easily overridden: $schedule->command('foo')->weekly()->evenInMaintenanceMode(); Viewing the output of scheduled commands For some scheduled commands, you probably want to view the output somehow, whether that is via e-mail, logged to a file on disk, or sending a callback to a pre-defined URL. All of these scenarios are possible in Laravel. To send the output of a job via e-mail by using the following command: $schedule->command('foo')->weekly()          ->emailOutputTo('someone@example.com'); If you wish to write the output of a job to a file on disk, that is easy enough too: $schedule->command('foo')->weekly()->sendOutputTo($filepath); You can also ping a URL after a job is run: $schedule->command('foo')->weekly()->thenPing($url); This will execute a GET request to the specified URL, at which point you could send a message to your favorite chat client to notify you that the command has run. Finally, you can chain the preceding command to send multiple notifications: $schedule->command('foo')->weekly()          ->sendOutputTo($filepath)          ->emailOutputTo('someone@example.com'); However, note that you have to send the output to a file before it can be e-mailed if you wish to do both. Summary In this article, you have learned the different ways in which Artisan can assist you in the development, debugging, and deployment process. We have also seen how easy it is to build a custom Artisan command and adapt it to your own needs. If you are relatively new to the command line, you will have had a glimpse into the power of command-line utilities. If, on the other hand, you are a seasoned user of the command line and you have written scripts with other programming languages, you can surely appreciate the simplicity and expressiveness of Artisan. Resources for Article: Further resources on this subject: Your First Application [article] Creating and Using Composer Packages [article] Eloquent relationships [article]

Code Sharing Between iOS and Android

Packt
17 Mar 2015
24 min read
In this article by Jonathan Peppers, author of the book Xamarin Cross-platform Application Development, we will see how Xamarin's tools promise to share a good portion of your code between iOS and Android while taking advantage of the native APIs on each platform where possible. Doing so is an exercise in software engineering more than a programming skill or having the knowledge of each platform. To architect a Xamarin application to enable code sharing, it is a must to separate your application into distinct layers. We'll cover the basics of this in this article as well as specific options to consider in certain situations. In this article, we will cover: The MVVM design pattern for code sharing Project and solution organization strategies Portable Class Libraries (PCLs) Preprocessor statements for platform-specific code Dependency injection (DI) simplified Inversion of Control (IoC) (For more resources related to this topic, see here.) Learning the MVVM design pattern The Model-View-ViewModel (MVVM) design pattern was originally invented for Windows Presentation Foundation (WPF) applications using XAML for separating the UI from business logic and taking full advantage of data binding. Applications architected in this way have a distinct ViewModel layer that has no dependencies on its user interface. This architecture in itself is optimized for unit testing as well as cross-platform development. Since an application's ViewModel classes have no dependencies on the UI layer, you can easily swap an iOS user interface for an Android one and write tests against the ViewModellayer. The MVVM design pattern is also very similar to the MVC design pattern. The MVVM design pattern includes the following: Model: The Model layer is the backend business logic that drives the application and any business objects to go along with it. This can be anything from making web requests to a server to using a backend database. View: This layer is the actual user interface seen on the screen. In the case of cross-platform development, it includes any platform-specific code for driving the user interface of the application. On iOS, this includes controllers used throughout an application, and on Android, an application's activities. ViewModel: This layer acts as the glue in MVVM applications. The ViewModel layerscoordinate operations between the View and Model layers. A ViewModel layer will contain properties that the View will get or set, and functions for each operation that can be made by the user on each View. The ViewModel layer will also invoke operations on the Model layer if needed. The following figure shows you the MVVM design pattern: It is important to note that the interaction between the View and ViewModel layers is traditionally created by data binding with WPF. However, iOS and Android do not have built-in data binding mechanisms, so our general approach throughout the article will be to manually call the ViewModel layer from the View layer. There are a few frameworks out there that provide data binding functionality such as MVVMCross and Xamarin.Forms. Implementing MVVM in an example To understand this pattern better, let's implement a common scenario. Let's say we have a search box on the screen and a search button. When the user enters some text and clicks on the button, a list of products and prices will be displayed to the user. In our example, we use the async and await keywords that are available in C# 5 to simplify asynchronous programming. 
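If async and await are new to you, the idea is that a method marked async can await a Task and resume where it left off once the task completes, without blocking the calling thread. The following stand-alone snippet is only an illustration of the shape of the pattern, not part of the sample application, and it assumes the HttpClient libraries are available in your project:

using System.Net.Http;
using System.Threading.Tasks;

public class DownloadExample
{
    public async Task<string> GetPageAsync(string url)
    {
        using (var client = new HttpClient())
        {
            // Control returns to the caller here; the method resumes
            // on this line once the response has arrived
            string body = await client.GetStringAsync(url);
            return body;
        }
    }
}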
To implement this feature, we will start with a simple model class (also called a business object) as follows: public class Product{   public int Id { get; set; } //Just a numeric identifier   public string Name { get; set; } //Name of the product   public float Price { get; set; } //Price of the product} Next, we will implement our Model layer to retrieve products based on the searched term. This is where the business logic is performed, expressing how the search needs to actually work. This is seen in the following lines of code: // An example class, in the real world would talk to a web// server or database.public class ProductRepository{// a sample list of products to simulate a databaseprivate Product[] products = new[]{   new Product { Id = 1, Name = “Shoes”, Price = 19.99f },   new Product { Id = 2, Name = “Shirt”, Price = 15.99f },   new Product { Id = 3, Name = “Hat”, Price = 9.99f },};public async Task<Product[]> SearchProducts(   string searchTerm){   // Wait 2 seconds to simulate web request   await Task.Delay(2000);    // Use Linq-to-objects to search, ignoring case   searchTerm = searchTerm.ToLower();   return products.Where(p =>      p.Name.ToLower().Contains(searchTerm))   .ToArray();}} It is important to note here that the Product and ProductRepository classes are both considered as a part of the Model layer of a cross-platform application. Some might consider ProductRepository as a service that is generally a self-contained class to retrieve data. It is a good idea to separate this functionality into two classes. The Product class's job is to hold information about a product, while the ProductRepository class is in charge of retrieving products. This is the basis for the single responsibility principle, which states that each class should only have one job or concern. Next, we will implement a ViewModel class as follows: public class ProductViewModel{private readonly ProductRepository repository =    new ProductRepository(); public string SearchTerm{   get;   set;}public Product[] Products{   get;   private set;}public async Task Search(){   if (string.IsNullOrEmpty(SearchTerm))     Products = null;   else     Products = await repository.SearchProducts(SearchTerm);}} From here, your platform-specific code starts. Each platform will handle managing an instance of a ViewModel class, setting the SearchTerm property, and calling Search when the button is clicked. When the task completes, the user interface layer will update a list displayed on the screen. If you are familiar with the MVVM design pattern used with WPF, you might notice that we are not implementing INotifyPropertyChanged for data binding. Since iOS and Android don't have the concept of data binding, we omitted this functionality. If you plan on having a WPF or Windows 8 version of your mobile application or are using a framework that provides data binding, you should implement support for it where needed. Comparing project organization strategies You might be asking yourself at this point, how do I set up my solution in Xamarin Studio to handle shared code and also have platform-specific projects? Xamarin.iOS applications can only reference Xamarin.iOS class libraries, so setting up a solution can be problematic. There are several strategies for setting up a cross-platform solution, each with its own advantages and disadvantages. 
Options for cross-platform solutions are as follows: File Linking: For this option, you will start with either a plain .NET 4.0 or .NET 4.5 class library that contains all the shared code. You would then have a new project for each platform you want your app to run on. Each platform-specific project will have a subdirectory with all of the files linked in from the first class library. To set this up, add the existing files to the project and select the Add a link to the file option. Any unit tests can run against the original class library. The advantages and disadvantages of file linking are as follows: Advantages: This approach is very flexible. You can choose to link or not link certain files and can also use preprocessor directives such as #if IPHONE. You can also reference different libraries on Android versus iOS. Disadvantages: You have to manage a file's existence in three projects: core library, iOS, and Android. This can be a hassle if it is a large application or if many people are working on it. This option is also a bit outdated since the arrival of shared projects. Cloned Project Files: This is very similar to file linking. The main difference being that you have a class library for each platform in addition to the main project. By placing the iOS and Android projects in the same directory as the main project, the files can be added without linking. You can easily add files by right-clicking on the solution and navigating to Display Options | Show All Files. Unit tests can run against the original class library or the platform-specific versions: Advantages: This approach is just as flexible as file linking, but you don't have to manually link any files. You can still use preprocessor directives and reference different libraries on each platform. Disadvantages: You still have to manage a file's existence in three projects. There is additionally some manual file arranging required to set this up. You also end up with an extra project to manage on each platform. This option is also a bit outdated since the arrival of shared projects. Shared Projects: Starting with Visual Studio 2013 Update 2, Microsoft created the concept of shared projects to enable code sharing between Windows 8 and Windows Phone apps. Xamarin has also implemented shared projects in Xamarin Studio as another option to enable code sharing. Shared projects are virtually the same as file linking, since adding a reference to a shared project effectively adds its files to your project: Advantages: This approach is the same as file linking, but a lot cleaner since your shared code is in a single project. Xamarin Studio also provides a dropdown to toggle between each referencing project, so that you can see the effect of preprocessor statements in your code. Disadvantages: Since all the files in a shared project get added to each platform's main project, it can get ugly to include platform-specific code in a shared project. Preprocessor statements can quickly get out of hand if you have a large team or have team members that do not have a lot of experience. A shared project also doesn't compile to a DLL, so there is no way to share this kind of project without the source code. Portable Class Libraries: This is the most optimal option; you begin the solution by making a Portable Class Library (PCL) project for all your shared code. This is a special project type that allows multiple platforms to reference the same project, allowing you to use the smallest subset of C# and the .NET framework available in each platform. 
Each platform-specific project will reference this library directly as well as any unit test projects: Advantages: All your shared code is in one project, and all platforms use the same library. Since preprocessor statements aren't possible, PCL libraries generally have cleaner code. Platform-specific code is generally abstracted away by interfaces or abstract classes. Disadvantages: You are limited to a subset of .NET depending on how many platforms you are targeting. Platform-specific code requires use of dependency injection, which can be a more advanced topic for developers not familiar with it. Setting up a cross-platform solution To understand each option completely and what different situations call for, let's define a solution structure for each cross-platform solution. Let's use the product search example and set up a solution for each approach. To set up file linking, perform the following steps: Open Xamarin Studio and start a new solution. Select a new Library project under the general C# section. Name the project ProductSearch.Core, and name the solution ProductSearch. Right-click on the newly created project and select Options. Navigate to Build | General, and set the Target Framework option to .NET Framework 4.5. Add the Product, ProductRepository, and ProductViewModel classes to the project. You will need to add using System.Threading.Tasks; and using System.Linq; where needed. Navigate to Build | Build All from the menu at the top to be sure that everything builds properly. Now, let's create a new iOS project by right-clicking on the solution and navigating to Add | Add New Project. Then, navigate to iOS | iPhone | Single View Application and name the project ProductSearch.iOS. Create a new Android project by right-clicking on the solution and navigating to Add | Add New Project. Create a new project by navigating to Android | Android Application and name it ProductSearch.Droid. Add a new folder named Core to both the iOS and Android projects. Right-click on the new folder for the iOS project and navigate to Add | Add Files from Folder. Select the root directory for the ProductSearch.Core project. Check the three C# files in the root of the project. An Add File to Folder dialog will appear. Select Add a link to the file and make sure that the Use the same action for all selected files checkbox is selected. Repeat this process for the Android project. Navigate to Build | Build All from the menu at the top to double-check everything. You have successfully set up a cross-platform solution with file linking. When all is done, you will have a solution tree that looks something like what you can see in the following screenshot: You should consider using this technique when you have to reference different libraries on each platform. You might consider using this option if you are using MonoGame, or other frameworks that require you to reference a different library on iOS versus Android. Setting up a solution with the cloned project files approach is similar to file linking, except that you will have to create an additional class library for each platform. To do this, create an Android library project and an iOS library project in the same ProductSearch.Core directory. You will have to create the projects and move them to the proper folder manually, then re-add them to the solution. Right-click on the solution and navigate to Display Options | Show All Files to add the required C# files to these two projects. 
Your main iOS and Android projects can reference these projects directly. Your project will look like what is shown in the following screenshot, with ProductSearch.iOS referencing ProductSearch.Core.iOS and ProductSearch.Droid referencing ProductSearch.Core.Droid: Working with Portable Class Libraries A Portable Class Library (PCL) is a C# library project that can be supported on multiple platforms, including iOS, Android, Windows, Windows Store apps, Windows Phone, Silverlight, and Xbox 360. PCLs have been an effort by Microsoft to simplify development across different versions of the .NET framework. Xamarin has also added support for iOS and Android for PCLs. Many popular cross-platform frameworks and open source libraries are starting to develop PCL versions such as Json.NET and MVVMCross. Using PCLs in Xamarin Let's create our first portable class library: Open Xamarin Studio and start a new solution. Select a new Portable Library project under the general C# section. Name the project ProductSearch.Core and name the solution ProductSearch. Add the Product, ProductRepository, and ProductViewModel classes to the project. You will need to add using System.Threading.Tasks; and using System.Linq; where needed. Navigate to Build | Build All from the menu at the top to be sure that everything builds properly. Now, let's create a new iOS project by right-clicking on the solution and navigating to Add | Add New Project. Create a new project by navigating to iOS | iPhone | Single View Application and name it ProductSearch.iOS. Create a new Android project by right-clicking on the solution and navigating to Add | Add New Project. Then, navigate to Android | Android Application and name the project ProductSearch.Droid. Simply add a reference to the portable class library from the iOS and Android projects. Navigate to Build | Build All from the top menu and you have successfully set up a simple solution with a portable library. Each solution type has its distinct advantages and disadvantages. PCLs are generally better, but there are certain cases where they can't be used. For example, if you were using a library such as MonoGame, which is a different library for each platform, you would be much better off using a shared project or file linking. Similar issues would arise if you needed to use a preprocessor statement such as #if IPHONE or a native library such as the Facebook SDK on iOS or Android. Setting up a shared project is almost the same as setting up a portable class library. In step 2, just select Shared Project under the general C# section and complete the remaining steps as stated. Using preprocessor statements When using shared projects, file linking, or cloned project files, one of your most powerful tools is the use of preprocessor statements. If you are unfamiliar with them, C# has the ability to define preprocessor variables such as #define IPHONE , allowing you to use #if IPHONE or #if !IPHONE. The following is a simple example of using this technique: #if IPHONEConsole.WriteLine(“I am running on iOS”);#elif ANDROIDConsole.WriteLine(“I am running on Android”);#elseConsole.WriteLine(“I am running on ???”);#endif In Xamarin Studio, you can define preprocessor variables in your project's options by navigating to Build | Compiler | Define Symbols, delimited with semicolons. These will be applied to the entire project. Be warned that you must set up these variables for each configuration setting in your solution (Debug and Release); this can be an easy step to miss. 
You can also define these variables at the top of any C# file by declaring #define IPHONE, but they will only be applied within the C# file. Let's go over another example, assuming that we want to implement a class to open URLs on each platform: public static class Utility{public static void OpenUrl(string url){   //Open the url in the native browser}} The preceding example is a perfect candidate for using preprocessor statements, since it is very specific to each platform and is a fairly simple function. To implement the method on iOS and Android, we will need to take advantage of some native APIs. Refactor the class to look as follows: #if IPHONE//iOS using statementsusing MonoTouch.Foundation;using MonoTouch.UIKit;#elif ANDROID//Android using statementsusing Android.App;using Android.Content;using Android.Net;#else//Standard .Net using statementusing System.Diagnostics;#endif public static class Utility{#if ANDROID   public static void OpenUrl(Activity activity, string url)#else   public static void OpenUrl(string url)#endif{   //Open the url in the native browser   #if IPHONE     UIApplication.SharedApplication.OpenUrl(       NSUrl.FromString(url));   #elif ANDROID     var intent = new Intent(Intent.ActionView,       Uri.Parse(url));     activity.StartActivity(intent);   #else     Process.Start(url);   #endif}} The preceding class supports three different types of projects: Android, iOS, and a standard Mono or .NET framework class library. In the case of iOS, we can perform the functionality with static classes available in Apple's APIs. Android is a little more problematic and requires an Activity object to launch a browser natively. We get around this by modifying the input parameters on Android. Lastly, we have a plain .NET version that uses Process.Start() to launch a URL. It is important to note that using the third option would not work on iOS or Android natively, which necessitates our use of preprocessor statements. Using preprocessor statements is not normally the cleanest or the best solution for cross-platform development. They are generally best used in a tight spot or for very simple functions. Code can easily get out of hand and can become very difficult to read with many #if statements, so it is always better to use it in moderation. Using inheritance or interfaces is generally a better solution when a class is mostly platform specific. Simplifying dependency injection Dependency injection at first seems like a complex topic, but for the most part it is a simple concept. It is a design pattern aimed at making your code within your applications more flexible so that you can swap out certain functionality when needed. The idea builds around setting up dependencies between classes in an application so that each class only interacts with an interface or base/abstract class. This gives you the freedom to override different methods on each platform when you need to fill in native functionality. The concept originated from the SOLID object-oriented design principles, which is a set of rules you might want to research if you are interested in software architecture. There is a good article about SOLID on Wikipedia, (http://en.wikipedia.org/wiki/SOLID_%28object-oriented_design%29) if you would like to learn more. The D in SOLID, which we are interested in, stands for dependencies. Specifically, the principle declares that a program should depend on abstractions, not concretions (concrete types). 
To build upon this concept, let's walk you through the following example: Let's assume that we need to store a setting in an application that determines whether the sound is on or off. Now let's declare a simple interface for the setting: interface ISettings { bool IsSoundOn { get; set; } }. On iOS, we'd want to implement this interface using the NSUserDefaults class. Likewise, on Android, we will implement this using SharedPreferences. Finally, any class that needs to interact with this setting will only reference ISettings so that the implementation can be replaced on each platform. For reference, the full implementation of this example will look like the following snippet:

public interface ISettings
{
    bool IsSoundOn { get; set; }
}

//On iOS
using MonoTouch.UIKit;
using MonoTouch.Foundation;

public class AppleSettings : ISettings
{
    public bool IsSoundOn
    {
        get
        {
            return NSUserDefaults.StandardUserDefaults.BoolForKey("IsSoundOn");
        }
        set
        {
            var defaults = NSUserDefaults.StandardUserDefaults;
            defaults.SetBool(value, "IsSoundOn");
            defaults.Synchronize();
        }
    }
}

//On Android
using Android.Content;

public class DroidSettings : ISettings
{
    private readonly ISharedPreferences preferences;

    public DroidSettings(Context context)
    {
        preferences = context.GetSharedPreferences(context.PackageName, FileCreationMode.Private);
    }

    public bool IsSoundOn
    {
        get
        {
            return preferences.GetBoolean("IsSoundOn", true);
        }
        set
        {
            using (var editor = preferences.Edit())
            {
                editor.PutBoolean("IsSoundOn", value);
                editor.Commit();
            }
        }
    }
}

Now you will potentially have a ViewModel class that will only reference ISettings when following the MVVM pattern. It can be seen in the following snippet:

public class SettingsViewModel
{
    private readonly ISettings settings;

    public SettingsViewModel(ISettings settings)
    {
        this.settings = settings;
    }

    public bool IsSoundOn
    {
        get;
        set;
    }

    public void Save()
    {
        settings.IsSoundOn = IsSoundOn;
    }
}

Using a ViewModel layer for such a simple example is not strictly necessary, but you can see it would be useful if you needed to perform other tasks such as input validation. A complete application might have a lot more settings and might need to present the user with a loading indicator. Abstracting out your setting's implementation has other benefits that add flexibility to your application. Let's say you suddenly need to replace NSUserDefaults on iOS with iCloud instead; you can easily do so by implementing a new ISettings class, and the remainder of your code will remain unchanged. This will also help you target new platforms such as Windows Phone, where you might choose to implement ISettings in a platform-specific way. Implementing Inversion of Control You might be asking yourself at this point in time, how do I switch out different classes such as the ISettings example? Inversion of Control (IoC) is a design pattern meant to complement dependency injection and solve this problem. The basic principle is that many of the objects created throughout your application are managed and created by a single class. Instead of using the standard C# constructors for your ViewModel or Model classes, a service locator or factory class will manage them throughout the application.
There are many different implementations and styles of IoC, so let's implement a simple service locator class as follows:

public static class ServiceContainer
{
    static readonly Dictionary<Type, Lazy<object>> services =
        new Dictionary<Type, Lazy<object>>();

    public static void Register<T>(Func<T> function)
    {
        services[typeof(T)] = new Lazy<object>(() => function());
    }

    public static T Resolve<T>()
    {
        return (T)Resolve(typeof(T));
    }

    public static object Resolve(Type type)
    {
        Lazy<object> service;
        if (services.TryGetValue(type, out service))
        {
            return service.Value;
        }
        throw new Exception("Service not found!");
    }
}

This class is inspired by the simplicity of XNA/MonoGame's GameServiceContainer class and follows the service locator pattern. The main differences are the heavy use of generics and the fact that it is a static class. To use our ServiceContainer class, we will declare the version of ISettings or any other interfaces that we want to use throughout our application by calling Register, as seen in the following lines of code:

//iOS version of ISettings
ServiceContainer.Register<ISettings>(() => new AppleSettings());

//Android version of ISettings
ServiceContainer.Register<ISettings>(() => new DroidSettings());

//You can even register ViewModels
ServiceContainer.Register<SettingsViewModel>(() => new SettingsViewModel());

On iOS, you can place this registration code either in your static void Main() method or in the FinishedLaunching method of your AppDelegate class. These methods are always called before the application is started. On Android, it is a little more complicated. You cannot put this code in the OnCreate method of the activity that acts as the main launcher. In some situations, the Android OS can close your application but restart it later in another activity. This situation is likely to cause an exception somewhere. The guaranteed safe place to put this is in a custom Android Application class, which has an OnCreate method that is called prior to any activities being created in your application. The following lines of code show you the use of the Application class:

[Application]
public class Application : Android.App.Application
{
    //This constructor is required
    public Application(IntPtr javaReference, JniHandleOwnership transfer)
        : base(javaReference, transfer)
    {
    }

    public override void OnCreate()
    {
        base.OnCreate();
        //IoC Registration here
    }
}

To pull a service out of the ServiceContainer class, we can rewrite the constructor of the SettingsViewModel class so that it is similar to the following lines of code:

public SettingsViewModel()
{
    this.settings = ServiceContainer.Resolve<ISettings>();
}

Likewise, you will use the generic Resolve method to pull out any ViewModel classes you need to call from within controllers on iOS or activities on Android. This is a great, simple way to manage dependencies within your application. There are, of course, some great open source libraries out there that implement IoC for C# applications. You might consider switching to one of them if you need more advanced features for service location or just want to graduate to a more complicated IoC container.
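It is worth pausing to see what this pattern buys you in practice. Because SettingsViewModel only ever asks ServiceContainer for an ISettings, a unit test can register a fake implementation and exercise the ViewModel without touching any iOS or Android APIs. The MockSettings class below is our own illustration rather than part of the book's sample, and we are assuming NUnit as the test framework:

using NUnit.Framework;

// A fake ISettings used only by the tests; nothing is actually persisted
public class MockSettings : ISettings
{
    public bool IsSoundOn { get; set; }
}

[TestFixture]
public class SettingsViewModelTests
{
    [Test]
    public void Save_WritesSoundSettingToSettings()
    {
        var settings = new MockSettings();

        // Register the fake so the ViewModel resolves it instead of
        // AppleSettings or DroidSettings
        ServiceContainer.Register<ISettings>(() => settings);

        var viewModel = new SettingsViewModel { IsSoundOn = true };
        viewModel.Save();

        Assert.IsTrue(settings.IsSoundOn);
    }
}

Any of the dedicated IoC containers mentioned next would let you do the same thing with more features.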
Here are a few libraries that have been used with Xamarin projects: TinyIoC: https://github.com/grumpydev/TinyIoC Ninject: http://www.ninject.org/ MvvmCross: https://github.com/slodge/MvvmCross includes a full MVVM framework as well as IoC Simple Injector: http://simpleinjector.codeplex.com OpenNETCF.IoC: http://ioc.codeplex.com Summary In this article, we learned about the MVVM design pattern and how it can be used to better architect cross-platform applications. We compared several project organization strategies for managing a Xamarin Studio solution that contains both iOS and Android projects. We went over portable class libraries as the preferred option for sharing code and how to use preprocessor statements as a quick and dirty way to implement platform-specific code. After completing this article, you should be able to speed up with several techniques for sharing code between iOS and Android applications using Xamarin Studio. Using the MVVM design pattern will help you divide your shared code and code that is platform specific. We also covered several options for setting up cross-platform Xamarin solutions. You should also have a firm understanding of using dependency injection and Inversion of Control to give your shared code access to the native APIs on each platform. Resources for Article:   Further resources on this subject: XamChat – a Cross-platform App [article] Configuring Your Operating System [article] Updating data in the background [article]

Putting It All Together – Community Radio

Packt
27 Feb 2015
25 min read
In this article by Andy Matthews, author of the book Creating Mobile Apps with jQuery Mobile, Second Edition, we will see a website where listeners will be greeted with music from local, independent bands across several genres and geographic regions. Building this will take many of the skills, and we'll pepper in some new techniques that can be used in this new service. Let's see what technology and techniques we could bring to bear on this venture. In this article, we will cover: A taste of Balsamiq Organizing your code An introduction to the Web Audio API Prompting the user to install your app New device-level hardware access To app or not to app Three good reasons for compiling an app (For more resources related to this topic, see here.) A taste of Balsamiq Balsamiq (http://www.balsamiq.com/) is a very popular User Experience (UX) tool for rapid prototyping. It is perfect for creating and sharing interactive mockups: When I say very popular, I mean lots of major names that you're used to seeing. Over 80,000 companies create their software with the help of Balsamiq Mockups. So, let's take a look at what the creators of a community radio station might have in mind. They might start with a screen which looks like this; a pretty standard implementation. It features an icon toolbar at the bottom and a listview element in the content: Ideally, we'd like to keep this particular implementation as pure HTML/JavaScript/CSS. That way, we could compile it into a native app at some point, using PhoneGap. However, we'd like to stay true to the Don't Repeat Yourself (DRY) principle. That means, that we're going to want to inject this footer onto every page without using a server-side process. To that end, let's set up a hidden part of our app to contain all the global elements that we may want: <div id="globalComponents">   <div data-role="navbar" class="bottomNavBar">       <ul>           <li><a data-icon="music" href="#stations_by_region" data-transition="slideup">stations</a></li>         <li><a data-icon="search" href="#search_by_artist" data-transition="slideup">discover</a></li>           <li><a data-icon="calendar" href="#events_by_location" data-transition="slideup">events</a></li>           <li><a data-icon="gear" href="#settings" data-transition="slideup">settings</a></li>       </ul>   </div></div> We'll keep this code at the bottom of the page and hide it with a simple CSS rule in the stylesheet, #globalComponents{display:none;}. Now, we'll insert this global footer into each page, just before they are created. Using the clone() method (shown in the next code snippet) ensures that not only are we pulling over a copy of the footer, but also any data attached with it. In this way, each page is built with the exact same footer, just like it is in a server-side include. When the page goes through its normal initialization process, the footer will receive the same markup treatment as the rest of the page: /************************* The App************************/var radioApp = {universalPageBeforeCreate:function(){   var $page = $(this);   if($page.find(".bottomNavBar").length == 0){     $page.append($("#globalComponents .bottomNavBar").clone());   }}}/************************* The Events************************///Interface Events$(document).on("pagebeforecreate", "[data- "role="page"]",radioApp.universalPageBeforeCreate); Look at what we've done here in this piece of JavaScript code. We're actually organizing our code a little more effectively. 
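As a small aside, one refinement you might make to the injected footer, and this is our own tweak rather than part of the original example, is to highlight the tab that corresponds to the page being built. jQuery Mobile uses the ui-btn-active and ui-state-persist classes for persistent navbar highlighting, so the universalPageBeforeCreate function could be extended along these lines:

universalPageBeforeCreate: function(){
    var $page = $(this);
    if($page.find(".bottomNavBar").length === 0){
        var $navbar = $("#globalComponents .bottomNavBar").clone();
        // Mark the link that points at this page as the active,
        // persistent tab so the footer reflects where the user is
        $navbar.find("a[href='#" + $page.attr("id") + "']")
               .addClass("ui-btn-active ui-state-persist");
        $page.append($navbar);
    }
}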
Organizing your code I believe in a very pragmatic approach to coding, which leads me to use more simple structures and a bare minimum of libraries. However, there are values and lessons to be learned out there. MVC, MVVM, MV* For the last couple of years, serious JavaScript developers have been bringing backend development structures to the web, as the size and scope of their project demanded a more regimented approach. For highly ambitious, long-lasting, in-browser apps, this kind of structured approach can help. This is even truer if you're on a larger team. MVC stands for Model-View-Controller ( see http://en.wikipedia.org/wiki/Model–view–controller), MVVM is for Model View ViewModel (see http://en.wikipedia.org/wiki/Model_View_ViewModel), and MV* is shorthand for Model View Whatever and is the general term used to sum up this entire movement of bringing these kinds of structures to the frontend. Some of the more popular libraries include: Backbone.JS (http://backbonejs.org/): An adapter and sample of how to make Backbone play nicely with jQuery Mobile can be found at http://demos.jquerymobile.com/1.4.5/backbone-requirejs. Ember (http://emberjs.com/): An example for Ember can be found at https://github.com/LuisSala/emberjs-jqm. AngularJS (https://angularjs.org/): Angular also has adapters for jQM in progress. There are several examples at https://github.com/tigbro/jquery-mobile-angular-adapter. Knockout: (http://knockoutjs.com/). A very nice comparison of these, and more, is at http://readwrite.com/2014/02/06/angular-backbone-ember-best-javascript-framework-for-you. MV* and jQuery Mobile Yes, you can do it!! You can add any one of these MV* frameworks to jQuery Mobile and make as complex an app as you like. Of them all, I lean toward the Ember platform for desktop and Angular for jQuery Mobile. However, I'd like to propose another alternative. I'm not going to go in-depth into the concepts behind MVC frameworks. Ember, Angular, and Backbone, all help you to separate the concerns of your application into manageable pieces, offering small building blocks with which to create your application. But, we don't need yet another library/framework to do this. It is simple enough to write code in a more organized fashion. Let's create a structure similar to what I've started before: //JavaScript Document/******************** The Application*******************//******************** The Events*******************//******************** The Model*******************/ The application Under the application section, let's fill in some of our app code and give it a namespace. Essentially, namespacing is taking your application-specific code and putting it into its own named object, so that the functions and variables won't collide with other potential global variables and functions. It keeps you from polluting the global space and helps preserve your code from those who are ignorant regarding your work. Granted, this is JavaScript and people can override anything they wish. However, this also makes it a whole lot more intentional to override something like the radioApp.getStarted function, than simply creating your own function called getStarted. Nobody is going to accidentally override a namespaced function. 
/******************** The application*******************/var radioApp = {settings:{   initialized:false,   geolocation:{     latitude:null,     longitude:null,   },   regionalChoice:null,   lastStation:null},getStarted:function(){   location.replace("#initialize");},fireCustomEvent:function(){   var $clicked = $(this);   var eventValue = $clicked.attr("data-appEventValue");   var event = new jQuery.Event($(this).attr("data-appEvent"));   if(eventValue){ event.val = eventValue; }   $(window).trigger(event);},otherMethodsBlahBlahBlah:function(){}} Pay attention, in particular, to the fireCustomEvent. function With that, we can now set up an event management system. At its core, the idea is pretty simple. We'd like to be able to simply put tag attributes on our clickable objects and have them fire events, such as all the MV* systems. This fits the bill perfectly. It would be quite common to set up a click event handler on a link, or something, to catch the activity. This is far simpler. Just an attribute here and there and you're wired in. The HTML code becomes more readable too. It's easy to see how declarative this makes your code: <a href="javascript://" data-appEvent="playStation" data- appEventValue="country">Country</a> The events Now, instead of watching for clicks, we're listening for events. You can have as many parts of your app as you like registering themselves to listen for the event, and then execute appropriately. As we fill out more of our application, we'll start collecting a lot of events. Instead of letting them get scattered throughout multiple nested callbacks and such, we'll be keeping them all in one handy spot. In most JavaScript MV* frameworks, this part of the code is referred to as the Router. Hooked to each event, you will see nothing but namespaced application calls: /******************** The events*******************///Interface events$(document).on("click", "[data-appEvent]",radioApp.fireCustomEvent);"$(document).on("pagecontainerbeforeshow","[data-role="page"]",radioApp.universalPageBeforeShow);"$(document).on("pagebeforecreate","[data-role="page"]",radioApp.universalPageBeforeCreate);"$(document).on("pagecontainershow", "#initialize",radioApp.getLocation);"$(document).on("pagecontainerbeforeshow", "#welcome",radioApp.initialize);//Application events$(window).on("getStarted",radioApp.getStarted);$(window).on("setHomeLocation",radioApp.setHomeLocation);$(window).on("setNotHomeLocation",radioApp.setNotHomeLocation);$(window).on("playStation",radioApp.playStation); Notice the separation of concerns into interface events and application events. We're using this as a point of distinction between events that are fired as a result of natural jQuery Mobile events (interface events), and events that we have thrown (application events). This may be an arbitrary distinction, but for someone who comes along later to maintain your code, this could come in handy. The model The model section contains the data for your application. This is typically the kind of data that is pulled in from your backend APIs. It's probably not as important here, but it never hurts to namespace what's yours. Here, we have labeled our data as the modelData label. 
Any information we pull in from the APIs can be dumped right into this object, like we've done here with the station data:

/********************
 The Model
*******************/
var modelData = {
  station:{
    genres:[
      {
        display:"Seattle Grunge",
        genreId:12,
        genreParentId:1
      }
    ],
    metroIds:[14,33,22,31],
    audioIds:[55,43,26,23,11]
  }
}

Pair this style of programming with client-side templating, and you'll be looking at some highly maintainable, well-structured code. However, there are some features that are still missing. Typically, these frameworks will also provide bindings for your templates. This means that you only have to render the templates once. After that, simply updating your model object will be enough to cause the UI to update itself. The problem with these bound templates is that they update the HTML in a way that would be perfect for a desktop application. But remember, jQuery Mobile does a lot of DOM manipulation to make things happen. In jQuery Mobile, a listview element starts like this:

<ul data-role="listview" data-inset="true">
  <li><a href="#stations">Local Stations</a></li>
</ul>

After the normal DOM manipulation, you get this:

<ul data-role="listview" data-inset="true" data-theme="b" style="margin-top:0" class="ui-listview ui-listview-inset ui-corner-all ui-shadow">
  <li data-corners="false" data-shadow="false" data-iconshadow="true" data-wrapperels="div" data-icon="arrow-r" data-iconpos="right" data-theme="b" class="ui-btn ui-btn-icon-right ui-li-has-arrow ui-li ui-corner-top ui-btn-up-b">
    <div class="ui-btn-inner ui-li ui-corner-top">
      <div class="ui-btn-text">
        <a href="#stations" class="ui-link-inherit">Local Stations</a>
      </div>
      <span class="ui-icon ui-icon-arrow-r ui-icon-shadow">&nbsp;</span>
    </div>
  </li>
</ul>

And that's just a single list item. You really don't want to include all that junk in your templates; so what you need to do is just add your usual items to the listview element and then call the .listview("refresh") function. Even if you're using one of the MV* systems, you'll still have to either find, or write, an adapter that will refresh the listviews when something is added or deleted. With any luck, these kinds of things will be solved at the platform level soon. Until then, using a real MV* system with jQM will be a pain in the posterior.

Introduction to the Web Audio API

The Web Audio API is a fairly new development and, at the time of writing this, only existed within the mobile space on Mobile Safari and Chrome for Android (http://caniuse.com/#feat=audio-api). The Web Audio API is available on the latest versions of desktop Chrome, Safari, and Firefox, so you can still do your initial test coding there. It's only a matter of time before this is built into other major platforms. Most of the code for this part of the project, and the full explanation of the API, can be found at http://tinyurl.com/webaudioapi2014. Let's use feature detection to branch our capabilities:

function init() {
  if("webkitAudioContext" in window) {
    context = new webkitAudioContext();
    // an analyser node is used for the spectrum
    analyzer = context.createAnalyser();
    analyzer.smoothingTimeConstant = 0.85;
    analyzer.connect(context.destination);
    fetchNextSong();
  } else {
    //do the old stuff
  }
}

The original code for this page was designed to kick off simultaneous downloads for every song in the queue. With a fat connection, this would probably be OK. Not so much on mobile.
Because of the limited connectivity and bandwidth, it would be better to just chain downloads to ensure a better experience and a more respectful use of bandwidth:

function fetchNextSong() {
  var nextSong = songs.pop();
  if(nextSong){
    var request = new XMLHttpRequest();
    // the underscore prefix is a common naming convention
    // to remind us that the variable is developer-supplied
    request._soundName = nextSong;
    request.open("GET", PATH + request._soundName + ".mp3", true);
    request.responseType = "arraybuffer";
    request.addEventListener("load", bufferSound, false);
    request.send();
  }
}

Now, the bufferSound function just needs to call the fetchNextSong function after buffering, as shown in the following code snippet:

function bufferSound(event) {
  var request = event.target;
  context.decodeAudioData(request.response,
    function onSuccess(decodedBuffer) {
      myBuffers.push(decodedBuffer);
      fetchNextSong();
    },
    function onFailure() {
      alert("Decoding the audio buffer failed");
    });
}

One last thing we need to change from the original is telling the buffer to pull the songs in the order that they were inserted:

function playSound() {
  // create a new AudioBufferSourceNode
  var source = context.createBufferSource();
  source.buffer = myBuffers.shift();
  source.loop = false;
  source = routeSound(source);
  // play right now (0 seconds from now)
  // can also pass context.currentTime
  source.start(0);
  mySpectrum = setInterval(drawSpectrum, 30);
  mySource = source;
}

For anyone on iOS, this solution is pretty nice. There is a lot more to this API for those who want to dig in. With this out-of-the-box example, you get a nice canvas-based audio analyzer that gives you a very professional look, as the audio levels bounce to the music. Slider controls are used to change the volume, the left-right balance, and the high-pass filter. If you don't know what a high-pass filter is, don't worry; I think that filter's usefulness went the way of the cassette deck. Regardless, it's fun to play with.

The Web Audio API is a very serious piece of business. This example was adapted from the example on Apple's site. It only plays one sound. However, the Web Audio API was designed with the idea of making it possible to play multiple sounds, alter them in multiple ways, and even dynamically generate sounds using JavaScript. In the meantime, if you want to see this proof of concept in jQuery Mobile, you will find it in the example source in the webaudioapi.html file. For an even deeper look at what is coming, you can check the docs at https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html.

Prompting the user to install your app

Now, let's take a look at how we can prompt our users to download the Community Radio app to their home screens. It is very likely that you've seen it before; it's the little bubble that pops up and instructs the user with the steps to install the app. There are many different projects out there, but the best one that I have seen is a derivative of the one started by Google. Much thanks and respect to Okamototk on GitHub (https://github.com/okamototk) for taking and improving it. Okamototk evolved the bubble to include several versions of Android, legacy iOS, and even BlackBerry. You can find his original work at https://github.com/okamototk/jqm-mobile-bookmark-bubble; however, unless you can read Japanese or enjoy translating it, you'll probably want to work from my edition of the code instead. Don't worry about annoying your customers too much.
With this version, if they dismiss the bookmarking bubble three times, they won't see it again. The count is stored in HTML5 LocalStorage; so, if they clear out the storage, they'll see the bubble again. Thankfully, most people out there don't even know that can be done, so it won't happen very often. Usually, it's geeks like us that clear things like LocalStorage and cookies, and we know what we're getting into when we do it. In my edition of the code, I've combined all the JavaScript into a single file meant to be placed between your import of jQuery and jQuery Mobile. At the top, the first non-commented line is: page_popup_bubble="#welcome"; This is what you would change to be your own first page, or where you want the bubble to pop up. In my version, I have hardcoded the font color and text shadow properties into the bubble. This was needed, because in jQM the font color and text shadow color vary, based on the theme you're using. Consequently, in jQuery Mobile's original default A theme (white text on a black background), the font was showing up as white with a dark shadow on top of a white bubble. With my modified version, for older jQuery Mobile versions, it will always look right. We just need to be sure we've set up our page with the proper links in the head, and that our images are in place: <link rel="apple-touch-icon-precomposed" sizes="144x144" href="images/album144.png"><link rel="apple-touch-icon-precomposed" sizes="114x114" href="images/album114.png"><link rel="apple-touch-icon-precomposed" sizes="72x72" href="images/album72.png"><link rel="apple-touch-icon-precomposed" href="images/album57.png"><link rel="shortcut icon" href="img/images/album144.png"> Note the Community Radio logo here. The logo is pulled from our link tags marked with rel="apple-touch-icon-precomposed" and injected into the bubble. So, really, the only thing in the jqm_bookmark_bubble.js file that you would need to alter is the page_popup_bubble function. New device-level hardware access New kinds of hardware-level access are coming to our mobile browsers every year. Here is a look at some of what you can start doing now, and what's on the horizon. Not all of these are applicable to every project, but if you think creatively, you can probably find innovative ways to use them. Accelerometers Accelerometers are the little doo-dads inside your phone that measure the phone's orientation in space. To geek out on this, read http://en.wikipedia.org/wiki/Accelerometer. This goes beyond the simple orientation we've been using. This is true access to the accelerometers, in detail. Think about the user being able to shake their device, or tilting it as a method of interaction with your app. Maybe, Community Radio is playing something they don't like and we can give them a fun way to rage against the song. Something such as, shake a song to never hear it again. Here is a simple marble rolling game somebody made as a proof of concept. See http://menscher.com/teaching/woaa/examples/html5_accelerometer.html. Camera Apple's iOS 8 and Android's Lollipop can both access photos on their filesystems as well as the cameras. Granted, these are the latest and greatest versions of these two platforms. 
If you intend to support the many woefully out of date Android devices (2.3, 2.4) that are still being sold off the shelves as if brand new, then you're going to want to go with a native compilation such as PhoneGap or Apache Cordova to get that capability: <input type="file" accept="image/*"><input type="file" accept="video/*"> The following screenshot has iOS to the left and Android to the right: APIs on the horizon Mozilla is doing a lot to push the mobile web API envelope. You can check out what's on the horizon here: https://wiki.mozilla.org/WebAPI. To app or not to app, that is the question Should you or should you not compile your project into a native app? Here are some things to consider. Raining on the parade (take this seriously) When you compile your first project into an app, there is a certain thrill that you get. You did it! You made a real app! It is at this point that we need to remember the words of Dr. Ian Malcolm from the movie Jurassic Park (Go watch it again. I'll wait): "You stood on the shoulders of geniuses to accomplish something as fast as you could, and before you even knew what you had, you patented it, and packaged it, and slapped it on a plastic lunchbox, and now [bangs on the table] you're selling it, you wanna sell it. Well... your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should."                                                                                                  – Dr. Ian Malcolm These words are very close to prophetic for us. In the end, their own creation ate most of the guests for lunch. According to this report from August 2012 http://www.webpronews.com/over-two-thirds-of-the-app-store-has-never-been-downloaded-2012-08 (and several others like it that I've seen before), over two-thirds of all apps on the app stores have never been downloaded. Not even once! So, realistically, app stores are where most projects go to die. Even if your app is discovered, the likelihood that anyone will use it for any significant period of time is astonishingly small. According to this article in Forbes (http://tech.fortune.cnn.com/2009/02/20/the-half-life-of-an-iphone-app/), most apps are abandoned in the space of minutes and never opened again. Paid apps last about twice as long, before either being forgotten or removed. Games have some staying power, but let's be honest, jQuery Mobile isn't exactly a compelling gaming platform, is it?? The Android world is in terrible shape. Devices can still be purchased running ancient versions of the OS, and carriers and hardware partners are not providing updates to them in anything even resembling a timely fashion. If you want to monitor the trouble you could be bringing upon yourself by embracing a native strategy, look here: http://developer.android.com/about/dashboards/index.html: You can see how fractured the Android landscape is, as well as how many older versions you'll probably have to support. On the flip side, if you're publishing strictly to the web, then every time your users visit your site, they'll be on the latest edition using the latest APIs, and you'll never have to worry about somebody using some out-of-date version. Do you have a security patch you need to apply? You can do it in seconds. If you're on the Apple app store, this patch could take days or even weeks. Three good reasons for compiling an app Yes, I know I just finished telling you about your slim chances of success and the fire and brimstone you will face for supporting apps. 
However, here are a few good reasons to make a real app. In fact, in my opinion, they're the only acceptable reasons. The project itself is the product This is the first and only sure sign that you need to package your project as an app. I'm not talking about selling things through your project. I'm talking about the project itself. It should be made into an app. May the force be with you. Access to native only hardware capabilities GPS and camera are reliably available for the two major platforms in their latest editions. iOS even supports accelerometers. However, if you're looking for more than this, you'll need to compile down to an app to get access to these APIs. Push notifications Do you like them? I don't know about you, but I get way too many push notifications; any app that gets too pushy either gets uninstalled or its notifications are completely turned off. I'm not alone in this. However, if you simply must have push notifications and can't wait for the web-based implementation, you'll have to compile an app. Supporting current customers Ok, this one is a stretch, but if you work in corporate America, you're going to hear it. The idea is that you're an established business and you want to give mobile support to your clients. You or someone above you has read a few whitepapers and/or case studies that show that almost 50 percent of people search in the app stores first. Even if that were true (which I'm still not sold on), you're talking to a businessperson. They understand money, expenses, and escalated maintenance. Once you explain to them the cost, complexity, and potential ongoing headaches of building and testing for all the platforms and their OS versions in the wild, it becomes a very appealing alternative to simply put out a marketing push to your current customers that you're now supporting mobile, and all they have to do is go to your site on their mobile device. Marketing folks are always looking for reasons to toot their horns at customers anyway. Marketing might still prefer to have the company icon on the customer's device to reinforce brand loyalty, but this is simply a matter of educating them that it can be done without an app. You still may not be able to convince all the right people that apps are the wrong way to go when it comes to customer support. If you can't do it on your own, slap them on their heads with a little Jakob Nielson. If they won't listen to you, maybe they'll listen to him. I would defy anyone who says that the Nielsen Norman Group doesn't know what they're saying. See http://www.nngroup.com/articles/mobile-sites-vs-apps-strategy-shift/ for the following quote: "Summary: Mobile apps currently have better usability than mobile sites, but forthcoming changes will eventually make a mobile site the superior strategy." So the $64,000 question becomes: are we making something for right now or for the future? If we're making it for right now, what are the criteria that should mark the retirement of the native strategy? Or do we intend to stay locked on it forever? Don't go into that war without an exit strategy. Summary I don't know about you, but I'm exhausted. I really don't think there's any more that can be said about jQuery Mobile, or its supporting technologies at this time. You've got examples on how to build things for a whole host of industries, and ways to deploy it through the web. At this point, you should be quoting Bob the Builder. Can we build it? Yes, we can! 
I hope this article has assisted and/or inspired you to go and make something great. I hope you change the world and get filthy stinking rich doing it. I'd love to hear your success stories as you move forward. To let me know how you're doing, to report any errata, or even if you just have some questions, please don't hesitate to email me directly at andy@commadelimited.com. Now, go be awesome!

Resources for Article:

Further resources on this subject:
- Tips and Tricks for Working with jQuery and WordPress [article]
- Building a Custom Version of jQuery [article]
- jQuery UI 1.8: The Accordion Widget [article]
Sound Recorder for Android

In this article by Mark Vasilkov, author of the book, Kivy Blueprints, we will emulate the Modern UI by using the grid structure and scalable vector icons and develop a sound recorder for the Android platform using Android Java classes. (For more resources related to this topic, see here.) Kivy apps usually end up being cross-platform, mainly because the Kivy framework itself supports a wide range of target platforms. In this write-up, however, we're building an app that will be single-platform. This gives us an opportunity to rely on platform-specific bindings that provide extended functionality. The need for such bindings arises from the fact that the input/output capabilities of a pure Kivy program are limited to those that are present on all platforms. This amounts to a tiny fraction of what a common computer system, such as a smartphone or a laptop, can actually do. Comparison of features Let's take a look at the API surface of a modern mobile device (let's assume it's running Android). We'll split everything in two parts: things that are supported directly by Python and/or Kivy and things that aren't. The following are features that are directly available in Python or Kivy: Hardware-accelerated graphics Touchscreen input with optional multitouch Sound playback (at the time of writing, this feature is available only from the file on the disk) Networking, given the Internet connectivity is present The following are the features that aren't supported or require an external library: Modem, support for voice calls, and SMS Use of built-in cameras for filming videos and taking pictures Use of a built-in microphone to record sound Cloud storage for application data associated with a user account Bluetooth and other near-field networking features Location services and GPS Fingerprinting and other biometric security Motion sensors, that is, accelerometer and gyroscope Screen brightness control Vibration and other forms of haptic feedback Battery charge level For most entries in the "not supported" list, different Python libraries are already present to fill the gap, such as audiostream for a low-level sound recording, and Plyer that handles many platform-specific tasks. So, it's not like these features are completely unavailable to your application; realistically, the challenge is that these bits of functionality are insanely fragmented across different platforms (or even consecutive versions of the same platform, for example, Android); thus, you end up writing platform-specific, not portable code anyway. As you can see from the preceding comparison, a lot of functionality is available on Android and only partially covered by an existing Python or Kivy API. There is a huge untamed potential in using platform-specific features in your applications. This is not a limitation, but an opportunity. Shortly, you will learn how to utilize any Android API from Python code, allowing your Kivy application to do practically anything. Another advantage of narrowing the scope of your app to only a small selection of systems is that there are whole new classes of programs that can function (or even make sense) only on a mobile device with fitting hardware specifications. These include augmented reality apps, gyroscope-controlled games, panoramic cameras, and so on. Introducing Pyjnius To harness the full power of our chosen platform, we're going to use a platform-specific API, which happens to be in Java and is thus primarily Java oriented. 
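To get a feel for what this Python-to-Java interop looks like before applying it to audio recording, here is a minimal, self-contained sketch (not part of the original recorder code). It assumes Pyjnius is installed and a Java runtime is available; java.util.Stack is simply a convenient built-in Java class to poke at.

from jnius import autoclass

# Load the java.util.Stack class and use it like a normal Python object
Stack = autoclass('java.util.Stack')

stack = Stack()
stack.push('hello')
stack.push('world')

# Java methods are called directly; return values are converted for us
print(stack.pop())  # world
print(stack.pop())  # hello

The same autoclass() call is all we will need later to reach Android's MediaRecorder and MediaPlayer.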
We are going to build a sound recorder app, similar to the apps commonly found in Android and iOS, albeit more simplistic. Unlike pure Kivy apps, the underlying Android API certainly provides us with ways of recording sound programmatically. The rest of the article will cover this little recorder program throughout its development to illustrate the Python-Java interoperability using the excellent Pyjnius library, another great project made by Kivy developers. The concept we chose—sound recording and playback—is deliberately simple so as to outline the features of such interoperation without too much distraction caused by the sheer complexity of a subject and abundant implementation details. The source code of Pyjnius, together with the reference manual and some examples, can be found in the official repository at https://github.com/kivy/pyjnius. Modern UI While we're at it, let's build a user interface that resembles the Windows Phone home screen. This concept, basically a grid of colored rectangles (tiles) of various sizes, was known as Metro UI at some point in time but was later renamed to Modern UI due to trademark issues. Irrespective of the name, this is how it looks. This will give you an idea of what we'll be aiming at during the course of this app's development: Design inspiration – a Windows Phone home screen with tiles Obviously, we aren't going to replicate it as is; we will make something that resembles the depicted user interface. The following list pretty much summarizes the distinctive features we're after: Everything is aligned to a rectangular grid UI elements are styled using the streamlined, flat design—tiles use bright, solid colors and there are no shadows or rounded corners Tiles that are considered more useful (for an arbitrary definition of "useful") are larger and thus easier to hit If this sounds easy to you, then you're absolutely right. As you will see shortly, the Kivy implementation of such a UI is rather straightforward. The buttons To start off, we are going to tweak the Button class in Kivy language (let's name the file recorder.kv): #:import C kivy.utils.get_color_from_hex <Button>:background_normal: 'button_normal.png'background_down: 'button_down.png'background_color: C('#95A5A6')font_size: 40 The texture we set as the background is solid white, exploiting the same trick that was used while creating the color palette. The background_color property acts as tint color, and assigning a plain white texture equals to painting the button in background_color. We don't want borders this time. The second (pressed background_down) texture is 25 percent transparent white. Combined with the pitch-black background color of the app, we're getting a slightly darker shade of the same background color the button was assigned: Normal (left) and pressed (right) states of a button – the background color is set to #0080FF The grid structure The layout is a bit more complex to build. In the absence of readily available Modern UI-like tiled layout, we are going to emulate it with the built-in GridLayout widget. One such widget could have fulfilled all our needs, if not for the last requirement: we want to have bigger and smaller buttons. Presently, GridLayout doesn't allow the merging of cells to create bigger ones (a functionality similar to the rowspan and colspan attributes in HTML would be nice to have). So, we will go in the opposite direction: start with the root GridLayout with big cells and add another GridLayout inside a cell to subdivide it. 
Thanks to nested layouts working great in Kivy, we arrive at the following Kivy language structure (in recorder.kv): #:import C kivy.utils.get_color_from_hex GridLayout:    padding: 15    Button:        background_color: C('#3498DB')        text: 'aaa'    GridLayout:        Button:            background_color: C('#2ECC71')            text: 'bbb1 '        Button:            background_color: C('#1ABC9C')            text: 'bbb2'        Button:            background_color: C('#27AE60')            text: 'bbb3'        Button:            background_color: C('#16A085')            text: 'bbb4'    Button:        background_color: C('#E74C3C')        text: 'ccc'    Button:        background_color: C('#95A5A6')        text: 'ddd' Note how the nested GridLayout sits on the same level as that of outer, large buttons. This should make perfect sense if you look at the previous screenshot of the Windows Phone home screen: a pack of four smaller buttons takes up the same space (one outer grid cell) as a large button. The nested GridLayout is a container for those smaller buttons. Visual attributes On the outer grid, padding is provided to create some distance from the edges of the screen. Other visual attributes are shared between GridLayout instances and moved to a class. The following code is present inside recorder.kv: <GridLayout>:    cols: 2    spacing: 10    row_default_height:        (0.5 * (self.width - self.spacing[0]) -        self.padding[0])    row_force_default: True It's worth mentioning that both padding and spacing are effectively lists, not scalars. spacing[0] refers to a horizontal spacing, followed by a vertical one. However, we can initialize spacing with a single value, as shown in the preceding code; this value will then be used for everything. Each grid consists of two columns with some spacing in between. The row_default_height property is trickier: we can't just say, "Let the row height be equal to the row width." Instead, we compute the desired height manually, where the value 0.5 is used because we have two columns: If we don't apply this tweak, the buttons inside the grid will fill all the available vertical space, which is undesirable, especially when there aren't that many buttons (every one of them ends up being too large). Instead, we want all the buttons nice and square, with empty space at the bottom left, well, empty. The following is the screenshot of our app's "Modern UI" tiles, which we obtained as result from the preceding code: The UI so far – clickable tiles of variable size not too dissimilar from our design inspiration Scalable vector icons One of the nice finishing touches we can apply to the application UI is the use of icons, and not just text, on buttons. We could, of course, just throw in a bunch of images, but let's borrow another useful technique from modern web development and use an icon font instead—as you will see shortly, these provide great flexibility at no cost. Icon fonts Icon fonts are essentially just like regular ones, except their glyphs are unrelated to the letters of a language. For example, you type P and the Python logo is rendered instead of the letter; every font invents its own mnemonic on how to assign letters to icons. There are also fonts that don't use English letters, instead they map icons to Unicode's "private use area" character code. This is a technically correct way to build such a font, but application support for this Unicode feature varies—not every platform behaves the same in this regard, especially the mobile platform. 
The font that we will use for our app does not assign private use characters and uses ASCII (plain English letters) instead.

Rationale to use icon fonts

On the Web, icon fonts solve a number of problems that are commonly associated with (raster) images:

- First and foremost, raster images don't scale well and may become blurry when resized—there are certain algorithms that produce better results than others, but as of today, the "state of the art" is still not perfect. In contrast, a vector picture is infinitely scalable by definition.
- Raster image files containing schematic graphics (such as icons and UI elements) tend to be larger than vector formats. This does not apply to photos encoded as JPEG, obviously.
- With an icon font, color changes literally take seconds—you can do just that by adding color: red (for example) to your CSS file. The same is true for size, rotation, and other properties that don't involve changing the geometry of an image. Effectively, this means that making trivial adjustments to an icon does not require an image editor, like it normally would when dealing with bitmaps.

Some of these points do not apply to Kivy apps that much, but overall, the use of icon fonts is considered a good practice in contemporary web development, especially since there are many free high-quality fonts to choose from—that's hundreds of icons readily available for inclusion in your project.

Using the icon font in Kivy

In our application, we are going to use the Modern Pictograms (Version 1) free font, designed by John Caserta. To load the font into our Kivy program, we'll use the following code (in main.py):

from kivy.app import App
from kivy.core.text import LabelBase

class RecorderApp(App):
    pass

if __name__ == '__main__':
    LabelBase.register(name='Modern Pictograms',
                       fn_regular='modernpics.ttf')
    RecorderApp().run()

The actual use of the font happens inside recorder.kv. First, we want to update the Button class once again to allow us to change the font in the middle of a text using markup tags. This is shown in the following snippet:

<Button>:
    background_normal: 'button_normal.png'
    background_down: 'button_down.png'
    font_size: 24
    halign: 'center'
    markup: True

The halign: 'center' attribute means that we want every line of text centered inside the button. The markup: True attribute is self-evident and required because the next step in the customization of buttons will rely heavily on markup. Now we can update button definitions. Here's an example of this:

Button:
    background_color: C('#3498DB')
    text:
        ('[font=Modern Pictograms][size=120]'
        'e[/size][/font]\nNew recording')

Notice the character 'e' inside the [font][size] tags. That's the icon code. Every button in our app will use a different icon, and changing an icon amounts to replacing a single letter in the recorder.kv file. Complete mapping of these codes for the Modern Pictograms font can be found on its official website at http://modernpictograms.com/. Long story short, this is how the UI of our application looks after the addition of icons to buttons: The sound recorder app interface – a modern UI with vector icons from the Modern Pictograms font
Thankfully, the task at hand is relatively simple. To record a sound using the Android API, we only need the following five Java classes:

- The class android.os.Environment provides access to many useful environment variables. We are going to use it to determine the path where the SD card is mounted so we can save the recorded audio file. It's tempting to just hardcode '/sdcard/' or a similar constant, but in practice, every other Android device has a different filesystem layout. So let's not do this even for the purposes of the tutorial.
- The class android.media.MediaRecorder is our main workhorse. It facilitates capturing audio and video and saving it to the filesystem.
- The classes android.media.MediaRecorder$AudioSource, android.media.MediaRecorder$AudioEncoder, and android.media.MediaRecorder$OutputFormat are enumerations that hold the values we need to pass as arguments to the various methods of MediaRecorder.

Loading Java classes

The code to load the aforementioned Java classes into your Python application is as follows:

from jnius import autoclass

Environment = autoclass('android.os.Environment')
MediaRecorder = autoclass('android.media.MediaRecorder')
AudioSource = autoclass('android.media.MediaRecorder$AudioSource')
OutputFormat = autoclass('android.media.MediaRecorder$OutputFormat')
AudioEncoder = autoclass('android.media.MediaRecorder$AudioEncoder')

If you try to run the program at this point, you'll receive an error, something along the lines of:

- ImportError: No module named jnius: You'll encounter this error if you don't have Pyjnius installed on your machine
- jnius.JavaException: Class not found 'android/os/Environment': You'll encounter this error if Pyjnius is installed, but the Android classes we're trying to load are missing (for example, when running on a desktop)

This is one of the rare cases when receiving an error means we did everything right. From now on, we should do all of the testing on an Android device or inside an emulator because the code isn't cross-platform anymore. It relies unequivocally on Android-specific Java features (a small import guard that keeps the module loadable on the desktop is sketched at the end of this section). Now we can use Java classes seamlessly in our Python code.

Looking up the storage path

Let's illustrate the practical cross-language API use with a simple example. In Java, we will do something like this in order to find out where an SD card is mounted:

import android.os.Environment;

String path = Environment.getExternalStorageDirectory().getAbsolutePath();

When translated to Python, the code is as follows:

Environment = autoclass('android.os.Environment')
path = Environment.getExternalStorageDirectory().getAbsolutePath()

This is the exact same thing as shown in the previous code, only written in Python instead of Java. While we're at it, let's also log this value so that we can see in the Kivy log which exact path the getAbsolutePath() method returned to our code:

from kivy.logger import Logger

Logger.info('App: storage path == "%s"' % path)

On my testing device, this produces the following line in the Kivy log:

[INFO] App: storage path == "/storage/sdcard0"

Recording sound

Now, let's dive deeper into the rabbit hole of the Android API and actually record a sound from the microphone. The following code is again basically a translation of Android API documents into Python. If you're interested in the original Java version of this code, you may find it at http://developer.android.com/guide/topics/media/audio-capture.html—it's way too lengthy to include here.
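Before wiring up the recorder itself, here is the import guard mentioned above. This is a minimal sketch of my own, not from the original text; it assumes you are content to replace the recorder with a do-nothing stub whenever the Android classes can't be loaded, so the rest of the UI can still be exercised on a desktop:

try:
    from jnius import autoclass
    MediaRecorder = autoclass('android.media.MediaRecorder')
    ON_ANDROID = True
except Exception:
    # Either jnius is missing, or the Android classes are not on the classpath.
    # The other autoclass() lookups would need the same treatment.
    ON_ANDROID = False

    class MediaRecorder(object):
        """Do-nothing stand-in so the app still starts on the desktop."""
        def __getattr__(self, name):
            # Any method call (setAudioSource, prepare, start, ...) becomes a no-op
            return lambda *args, **kwargs: None

With or without such a guard, the real Android classes are what the rest of this section uses.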
The following preparation code initializes a MediaRecorder object:

storage_path = (Environment.getExternalStorageDirectory()
                .getAbsolutePath() + '/kivy_recording.3gp')

recorder = MediaRecorder()

def init_recorder():
    recorder.setAudioSource(AudioSource.MIC)
    recorder.setOutputFormat(OutputFormat.THREE_GPP)
    recorder.setAudioEncoder(AudioEncoder.AMR_NB)
    recorder.setOutputFile(storage_path)
    recorder.prepare()

This is the typical, straightforward, verbose, Java way of initializing things, rewritten in Python word for word. Now for the fun part, the Begin recording/End recording button:

class RecorderApp(App):
    is_recording = False

    def begin_end_recording(self):
        if (self.is_recording):
            recorder.stop()
            recorder.reset()
            self.is_recording = False
            self.root.ids.begin_end_recording.text = (
                '[font=Modern Pictograms][size=120]'
                'e[/size][/font]\nBegin recording')
            return

        init_recorder()
        recorder.start()
        self.is_recording = True
        self.root.ids.begin_end_recording.text = (
            '[font=Modern Pictograms][size=120]'
            '%[/size][/font]\nEnd recording')

As you can see, no rocket science was applied here either. We just stored the current state, is_recording, and then took the action depending on it, namely:

- Start or stop the MediaRecorder object (the recorder.start() and recorder.stop() calls).
- Flip the is_recording flag.
- Update the button text so that it reflects the current state (see the next screenshot).

The last part of the application that needs updating is the recorder.kv file. We need to tweak the Begin recording/End recording button so that it calls our begin_end_recording() function:

Button:
    id: begin_end_recording
    background_color: C('#3498DB')
    text:
        ('[font=Modern Pictograms][size=120]'
        'e[/size][/font]\nBegin recording')
    on_press: app.begin_end_recording()

That's it! If you run the application now, chances are that you'll be able to actually record a sound file that is going to be stored on the SD card. However, please see the next section before you do this. The button that you created will look something like this: Begin recording and End recording – this one button summarizes our app's functionality so far.

Major caveat – permissions

The default Kivy Launcher app at the time of writing this doesn't have the necessary permission to record sound, android.permission.RECORD_AUDIO. This results in a crash as soon as the MediaRecorder instance is initialized. There are many ways to mitigate this problem. For the sake of this tutorial, we provide a modified Kivy Launcher that has the necessary permission enabled. The latest version of the package is also available for download at https://github.com/mvasilkov/kivy_launcher_hack. Before you install the provided .apk file, please delete the existing version of the app, if any, from your device. Alternatively, if you're willing to fiddle with the gory details of bundling Kivy apps for Google Play, you can build Kivy Launcher yourself from the source code. Everything you need to do this can be found in the official Kivy GitHub account, https://github.com/kivy.

Playing sound

Getting sound playback to work is easier; there is no permission for this and the API is somewhat more concise too.
We need to load just one more class, MediaPlayer:

MediaPlayer = autoclass('android.media.MediaPlayer')
player = MediaPlayer()

The following code will run when the user presses the Play button. We'll also use the reset_player() function in the Deleting files section discussed later in this article; otherwise, there could have been one slightly longer function:

def reset_player():
    if (player.isPlaying()):
        player.stop()
    player.reset()

def restart_player():
    reset_player()
    try:
        player.setDataSource(storage_path)
        player.prepare()
        player.start()
    except:
        player.reset()

The intricate details of each API call can be found in the official documents, but overall, this listing is pretty self-evident: reset the player to its initial state, load the sound file, and press the Play button. The file format is determined automatically, making our task at hand a wee bit easier.

Deleting files

This last feature will use the java.io.File class, which is not strictly related to Android. One great thing about the official Android documentation is that it contains reference to these core Java classes too, despite the fact they predate the Android operating system by more than a decade. The actual code needed to implement file removal is exactly one line; it's the last line in the following listing:

File = autoclass('java.io.File')

class RecorderApp(App):
    def delete_file(self):
        reset_player()
        File(storage_path).delete()

First, we stop the playback (if any) by calling the reset_player() function and then remove the file—short and sweet. Interestingly, the File.delete() method in Java won't throw an exception in the event of a catastrophic failure, so there is no need to perform try ... catch in this case. Consistency, consistency everywhere. An attentive reader will notice that we could also delete the file using Python's own os.remove() function. Doing this using Java achieves nothing special compared to a pure Python implementation; it's also slower. On the other hand, as a demonstration of Pyjnius, java.io.File works as good as any other Java class. At this point, with the UI and all three major functions done, our application is complete for the purposes of this tutorial.
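The text above only wires the Begin/End recording tile to the app explicitly. As a closing illustration — my own sketch, not from the original — here is one way the Play and Delete tiles could be hooked up. It assumes a thin play_file() wrapper is added to RecorderApp so that recorder.kv can reach the module-level restart_player() function through app; the icon letters used below are placeholders, not the real Modern Pictograms codes:

class RecorderApp(App):
    # ... existing methods ...

    def play_file(self):
        # Thin wrapper so the kv file can trigger playback via `app`
        restart_player()

And in recorder.kv:

Button:
    background_color: C('#E74C3C')
    text:
        ('[font=Modern Pictograms][size=120]'
        '$[/size][/font]\nPlay')
    on_press: app.play_file()

Button:
    background_color: C('#95A5A6')
    text:
        ('[font=Modern Pictograms][size=120]'
        'w[/size][/font]\nDelete')
    on_press: app.delete_file()

The delete tile can call app.delete_file() directly, since that method already lives on RecorderApp.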
Now that you're well aware of the options, it's up to you to make an educated choice regarding every app you develop. Resources for Article: Further resources on this subject: Reversing Android Applications [Article] Creating a Direct2D game window class [Article] Images, colors, and backgrounds [Article]
Heads up to MvvmCross

In this article, by Mark Reynolds, author of the book Xamarin Essentials, we will take the next step and look at how the use of design patterns and frameworks can increase the amount of code that can be reused. We will cover the following topics: An introduction to MvvmCross The MVVM design pattern Core concepts Views, ViewModels, and commands Data binding Navigation (ViewModel to ViewModel) The project organization The startup process Creating NationalParks.MvvmCross Our approach will be to introduce the core concepts at a high level and then dive in and create the national parks sample app using MvvmCross. This will give you a basic understanding of how to use the framework and the value associated with its use. With that in mind, let's get started. (For more resources related to this topic, see here.) Introducing MvvmCross MvvmCross is an open source framework that was created by Stuart Lodge. It is based on the Model-View-ViewModel (MVVM) design pattern and is designed to enhance code reuse across numerous platforms, including Xamarin.Android, Xamarin.iOS, Windows Phone, Windows Store, WPF, and Mac OS X. The MvvmCross project is hosted on GitHub and can be accessed at https://github.com/MvvmCross/MvvmCross. The MVVM pattern MVVM is a variation of the Model-View-Controller pattern. It separates logic traditionally placed in a View object into two distinct objects, one called View and the other called ViewModel. The View is responsible for providing the user interface and the ViewModel is responsible for the presentation logic. The presentation logic includes transforming data from the Model into a form that is suitable for the user interface to work with and mapping user interaction with the View into requests sent back to the Model. The following diagram depicts how the various objects in MVVM communicate: While MVVM presents a more complex implementation model, there are significant benefits of it, which are as follows: ViewModels and their interactions with Models can generally be tested using frameworks (such as NUnit) that are much easier than applications that combine the user interface and presentation layers ViewModels can generally be reused across different user interface technologies and platforms These factors make the MVVM approach both flexible and powerful. Views Views in an MvvmCross app are implemented using platform-specific constructs. For iOS apps, Views are generally implemented as ViewControllers and XIB files. MvvmCross provides a set of base classes, such as MvxViewContoller, that iOS ViewControllers inherit from. Storyboards can also be used in conjunction with a custom presenter to create Views; we will briefly discuss this option in the section titled Implementing the iOS user interface later in this article. For Android apps, Views are generally implemented as MvxActivity or MvxFragment along with their associated layout files. ViewModels ViewModels are classes that provide data and presentation logic to views in an app. Data is exposed to a View as properties on a ViewModel, and logic that can be invoked from a View is exposed as commands. ViewModels inherit from the MvxViewModel base class. Commands Commands are used in ViewModels to expose logic that can be invoked from the View in response to user interactions. The command architecture is based on the ICommand interface used in a number of Microsoft frameworks such as Windows Presentation Foundation (WPF) and Silverlight. 
MvvmCross provides IMvxCommand, which is an extension of ICommand, along with an implementation named MvxCommand. The commands are generally defined as properties on a ViewModel. For example:

public IMvxCommand ParkSelected { get; protected set; }

Each command has an action method defined, which implements the logic to be invoked:

protected void ParkSelectedExec(NationalPark park)
{
    . . . // logic goes here
}

The commands must be initialized and the corresponding action method should be assigned:

ParkSelected =
    new MvxCommand<NationalPark> (ParkSelectedExec);

Data binding

Data binding facilitates communication between the View and the ViewModel by establishing a two-way link that allows data to be exchanged. The data binding capabilities provided by MvvmCross are based on capabilities found in a number of Microsoft XAML-based UI frameworks such as WPF and Silverlight. The basic idea is that you would like to bind a property in a UI control, such as the Text property of an EditText control in an Android app, to a property of a data object such as the Description property of NationalPark. The following diagram depicts this scenario:

The binding modes

There are four different binding modes that can be used for data binding:

- OneWay binding: This mode tells the data binding framework to transfer values from the ViewModel to the View and transfer any updates to properties on the ViewModel to their bound View property.
- OneWayToSource binding: This mode tells the data binding framework to transfer values from the View to the ViewModel and transfer updates to View properties to their bound ViewModel property.
- TwoWay binding: This mode tells the data binding framework to transfer values in both directions between the ViewModel and View, and updates on either object will cause the other to be updated. This binding mode is useful when values are being edited.
- OneTime binding: This mode tells the data binding framework to transfer values from ViewModel to View when the binding is established; in this mode, updates to ViewModel properties are not monitored by the View.

The INotifyPropertyChanged interface

The INotifyPropertyChanged interface is an integral part of making data binding work effectively; it acts as a contract between the source object and the target object. As the name implies, it defines a contract that allows the source object to notify the target object when data has changed, thus allowing the target to take any necessary actions such as refreshing its display. The interface consists of a single event—the PropertyChanged event—that the target object can subscribe to and that is triggered by the source if a property changes. The following sample demonstrates how to implement INotifyPropertyChanged:

public class NationalPark : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    string _name;
    public string Name
    {
        get { return _name; }
        set
        {
            if (value.Equals (_name,
                StringComparison.Ordinal))
            {
                // Nothing to do - the value hasn't changed
                return;
            }
            _name = value;
            OnPropertyChanged();
        }
    }

    . . .

    void OnPropertyChanged(
        [CallerMemberName] string propertyName = null)
    {
        var handler = PropertyChanged;
        if (handler != null)
        {
            handler(this,
                new PropertyChangedEventArgs(propertyName));
        }
    }
}

Binding specifications

Bindings can be specified in a couple of ways.
For Android apps, bindings can be specified in layout files. The following example demonstrates how to bind the Text property of a TextView instance to the Description property in a NationalPark instance:

<TextView
    android:layout_width="match_parent"
    android:layout_height="wrap_content"
    android:id="@+id/descrTextView"
    local:MvxBind="Text Park.Description" />

For iOS, binding must be accomplished using the binding API. CreateBinding() is a method that can be found on MvxViewController. The following example demonstrates how to bind the Description property to a UILabel instance:

this.CreateBinding (this.descriptionLabel).
    To ((DetailViewModel vm) => vm.Park.Description).
    Apply ();

Navigating between ViewModels

Navigating between various screens within an app is an important capability. Within an MvvmCross app, this is implemented at the ViewModel level so that navigation logic can be reused. MvvmCross supports navigation between ViewModels through use of the ShowViewModel<T>() method inherited from MvxNavigatingObject, which is the base class for MvxViewModel. The following example demonstrates how to navigate to DetailViewModel:

ShowViewModel<DetailViewModel>();

Passing parameters

In many situations, there is a need to pass information to the destination ViewModel. MvvmCross provides a number of ways to accomplish this. The primary method is to create a class that contains simple public properties and passes an instance of the class into ShowViewModel<T>(). The following example demonstrates how to define and use a parameters class during navigation:

public class DetailParams
{
    public int ParkId { get; set; }
}

// using the parameters class
ShowViewModel<DetailViewModel>(
    new DetailParams() { ParkId = 0 });

To receive and use parameters, the destination ViewModel implements an Init() method that accepts an instance of the parameters class:

public class DetailViewModel : MvxViewModel
{
    . . .
    public void Init(DetailParams parameters)
    {
        // use the parameters here . . .
    }
}

Solution/project organization

Each MvvmCross solution will have a single core PCL project that houses the reusable code and a series of platform-specific projects that contain the various apps. The following diagram depicts the general structure:

The startup process

MvvmCross apps generally follow a standard startup sequence that is initiated by platform-specific code within each app. There are several classes that collaborate to accomplish the startup; some of these classes reside in the core project and some of them reside in the platform-specific projects. The following sections describe the responsibilities of each of the classes involved.

App.cs

The core project has an App class that inherits from MvxApplication. The App class contains an override to the Initialize() method so that at a minimum, it can register the first ViewModel that should be presented when the app starts:

RegisterAppStart<ViewModels.MasterViewModel>();

Setup.cs

Android and iOS projects have a Setup class that is responsible for creating the App object from the core project during the startup. This is accomplished by overriding the CreateApp() method:

protected override IMvxApplication CreateApp()
{
    return new Core.App();
}

For Android apps, Setup inherits from MvxAndroidSetup. For iOS apps, Setup inherits from MvxTouchSetup.
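Beyond creating the App object, the platform Setup classes are also a natural home for registering platform-specific services with MvvmCross's IoC container, so that core ViewModels can consume them through an interface. The following is an illustrative sketch only, not part of the NationalParks sample; the IFileHandler and DroidFileHandler names are hypothetical, while Mvx.RegisterType and the InitializeFirstChance override are standard MvvmCross extension points:

// Illustrative only: registering a platform-specific service in Setup.cs
public class Setup : MvxAndroidSetup
{
    public Setup(Context applicationContext) : base(applicationContext)
    {
    }

    protected override IMvxApplication CreateApp()
    {
        return new Core.App();
    }

    protected override void InitializeFirstChance()
    {
        base.InitializeFirstChance();
        // Core code can now call Mvx.Resolve<IFileHandler>() without
        // knowing which platform implementation it receives
        Mvx.RegisterType<IFileHandler, DroidFileHandler>();
    }
}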
This is all done automatically for you; all you need to do is include the splash screen definition and make sure it is marked as the launch activity. The definition is as follows: [Activity( Label="NationalParks.Droid", MainLauncher = true, Icon="@drawable/icon", Theme="@style/Theme.Splash", NoHistory=true, ScreenOrientation = ScreenOrientation.Portrait)] public class SplashScreen : MvxSplashScreenActivity {    public SplashScreen():base(Resource.Layout.SplashScreen)    {    } } The iOS startup The iOS app startup is slightly less automated and is initiated from within the FinishedLaunching() method of AppDelegate: public override bool FinishedLaunching (    UIApplication app, NSDictionary options) {    _window = new UIWindow (UIScreen.MainScreen.Bounds);      var setup = new Setup(this, _window);    setup.Initialize();    var startup = Mvx.Resolve<IMvxAppStart>();    startup.Start();      _window.MakeKeyAndVisible ();      return true; } Creating NationalParks.MvvmCross Now that we have basic knowledge of the MvvmCross framework, let's put that knowledge to work and convert the NationalParks app to leverage the capabilities we just learned. Creating the MvvmCross core project We will start by creating the core project. This project will contain all the code that will be shared between the iOS and Android app primarily in the form of ViewModels. The core project will be built as a Portable Class Library. To create NationalParks.Core, perform the following steps: From the main menu, navigate to File | New Solution. From the New Solution dialog box, navigate to C# | Portable Library, enter NationalParks.Core for the project Name field, enter NationalParks.MvvmCross for the Solution field, and click on OK. Add the MvvmCross starter package to the project from NuGet. Select the NationalParks.Core project and navigate to Project | Add Packages from the main menu. Enter MvvmCross starter in the search field. Select the MvvmCross – Hot Tuna Starter Pack entry and click on Add Package. A number of things were added to NationalParks.Core as a result of adding the package, and they are as follows: A packages.config file, which contains a list of libraries (dlls) associated with the MvvmCross starter kit package. These entries are links to actual libraries in the Packages folder of the overall solution. A ViewModels folder with a sample ViewModel named FirstViewModel. An App class in App.cs, which contains an Initialize() method that starts the MvvmCross app by calling RegisterAppStart() to start FirstViewModel. We will eventually be changing this to start the MasterViewModel class, which will be associated with a View that lists national parks. Creating the MvvmCross Android app The next step is to create an Android app project in the same solution. To create NationalParks.Droid, complete the following steps: Select the NationalParks.MvvmCross solution, right-click on it, and navigate to Add | New Project. From the New Project dialog box, navigate to C# | Android | Android Application, enter NationalParks.Droid for the Name field, and click on OK. Add the MvvmCross starter kit package to the new project by selecting NationalParks.Droid and navigating to Project | Add Packages from the main menu. A number of things were added to NationalParks.Droid as a result of adding the package, which are as follows: packages.config: This file contains a list of libraries (dlls) associated with the MvvmCross starter kit package. 
These entries are links to an actual library in the Packages folder of the overall solution, which contains the actual downloaded libraries. FirstView : This class is present in the Views folder, which corresponds to FirstViewModel, which was created in NationalParks.Core. FirstView: This layout is present in Resourceslayout, which is used by the FirstView activity. This is a traditional Android layout file with the exception that it contains binding declarations in the EditView and TextView elements. Setup: This file inherits from MvxAndroidSetup. This class is responsible for creating an instance of the App class from the core project, which in turn displays the first ViewModel via a call to RegisterAppStart(). SplashScreen: This class inherits from MvxSplashScreenActivity. The SplashScreen class is marked as the main launcher activity and thus initializes the MvvmCross app with a call to Setup.Initialize(). Add a reference to NationalParks.Core by selecting the References folder, right-click on it, select Edit References, select the Projects tab, check NationalParks.Core, and click on OK. Remove MainActivity.cs as it is no longer needed and will create a build error. This is because it is marked as the main launch and so is the new SplashScreen class. Also, remove the corresponding Resourceslayoutmain.axml layout file. Run the app. The app will present FirstViewModel, which is linked to the corresponding FirstView instance with an EditView class, and TextView presents the same Hello MvvmCross text. As you edit the text in the EditView class, the TextView class is automatically updated by means of data binding. The following screenshot depicts what you should see: Reusing NationalParks.PortableData and NationalParks.IO Before we start creating the Views and ViewModels for our app, we first need to bring in some code from our previous efforts that can be used to maintain parks. For this, we will simply reuse the NationalParksData singleton and the FileHandler classes that were created previously. To reuse the NationalParksData singleton and FileHandler classes, complete the following steps: Copy NationalParks.PortableData and NationalParks.IO from the solution created in Chapter 6, The Sharing Game in the book Xamarin Essentials (available at https://www.packtpub.com/application-development/xamarin-essentials), to the NationalParks.MvvmCross solution folder. Add a reference to NationalParks.PortableData in the NationalParks.Droid project. Create a folder named NationalParks.IO in the NationalParks.Droid project and add a link to FileHandler.cs from the NationalParks.IO project. Recall that the FileHandler class cannot be contained in the Portable Class Library because it uses file IO APIs that cannot be references from a Portable Class Library. Compile the project. The project should compile cleanly now. Implementing the INotifyPropertyChanged interface We will be using data binding to bind UI controls to the NationalPark object and thus, we need to implement the INotifyPropertyChanged interface. This ensures that changes made to properties of a park are reported to the appropriate UI controls. To implement INotifyPropertyChanged, complete the following steps: Open NationalPark.cs in the NationalParks.PortableData project. Specify that the NationalPark class implements INotifyPropertyChanged interface. Select the INotifyPropertyChanged interface, right-click on it, navigate to Refactor | Implement interface, and press Enter. 
Enter the following code snippet: public class NationalPark : INotifyPropertyChanged {    public event PropertyChangedEventHandler        PropertyChanged;    . . . } Add an OnPropertyChanged() method that can be called from each property setter method: void OnPropertyChanged(    [CallerMemberName] string propertyName = null) {    var handler = PropertyChanged;    if (handler != null)    {        handler(this,            new PropertyChangedEventArgs(propertyName));    } } Update each property definition to call the setter in the same way as it is depicted for the Name property: string _name; public string Name { get { return _name; } set {    if (value.Equals (_name, StringComparison.Ordinal))    {      // Nothing to do - the value hasn't changed; return;    }    _name = value;    OnPropertyChanged(); } } Compile the project. The project should compile cleanly. We are now ready to use the NationalParksData singleton in our new project, and it supports data binding. Implementing the Android user interface Now, we are ready to create the Views and ViewModels required for our app. The app we are creating will follow the following flow: A master list view to view national parks A detail view to view details of a specific park An edit view to edit a new or previously existing park The process for creating views and ViewModels in an Android app generally consists of three different steps: Create a ViewModel in the core project with the data and event handlers (commands) required to support the View. Create an Android layout with visual elements and data binding specifications. Create an Android activity, which corresponds to the ViewModel and displays the layout. In our case, this process will be slightly different because we will reuse some of our previous work, specifically, the layout files and the menu definitions. To reuse layout files and menu definitions, perform the following steps: Copy Master.axml, Detail.axml, and Edit.axml from the Resourceslayout folder of the solution created in Chapter 5, Developing Your First Android App with Xamarin.Android in the book Xamarin Essentials (available at https://www.packtpub.com/application-development/xamarin-essentials), to the Resourceslayout folder in the NationalParks.Droid project, and add them to the project by selecting the layout folder and navigating to Add | Add Files. Copy MasterMenu.xml, DetailMenu.xml, and EditMenu.xml from the Resourcesmenu folder of the solution created in Chapter 5, Developing Your First Android App with Xamarin.Android in the book Xamarin Essentials (available at https://www.packtpub.com/application-development/xamarin-essentials), to the Resourcesmenu folder in the NationalParks.Droid project, and add them to the project by selecting the menu folder and navigating to Add | Add Files. Implementing the master list view We are now ready to implement the first of our View/ViewModel combinations, which is the master list view. Creating MasterViewModel The first step is to create a ViewModel and add a property that will provide data to the list view that displays national parks along with some initialization code. To create MasterViewModel, complete the following steps: Select the ViewModels folder in NationalParks.Core, right-click on it, and navigate to Add | New File. In the New File dialog box, navigate to General | Empty Class, enter MasterViewModel for the Name field, and click on New. 
Modify the class definition so that MasterViewModel inherits from MvxViewModel; you will also need to add a few using directives: . . . using Cirrious.CrossCore.Platform; using Cirrious.MvvmCross.ViewModels; . . . namespace NationalParks.Core.ViewModels { public class MasterViewModel : MvxViewModel {          . . .    } } Add a property that is a list of NationalPark elements to MasterViewModel. This property will later be data-bound to a list view: private List<NationalPark> _parks; public List<NationalPark> Parks {    get { return _parks; }    set { _parks = value;          RaisePropertyChanged(() => Parks);    } } Override the Start() method on MasterViewModel to load the _parks collection with data from the NationalParksData singleton. You will need to add a using directive for the NationalParks.PortableData namespace again: . . . using NationalParks.PortableData; . . . public async override void Start () {    base.Start ();    await NationalParksData.Instance.Load ();    Parks = new List<NationalPark> (        NationalParksData.Instance.Parks); } We now need to modify the app startup sequence so that MasterViewModel is the first ViewModel that's started. Open App.cs in NationalParks.Core and change the call to RegisterAppStart() to reference MasterViewModel:RegisterAppStart<ViewModels.MasterViewModel>(); Updating the Master.axml layout Update Master.axml so that it can leverage the data binding capabilities provided by MvvmCross. To update Master.axml, complete the following steps: Open Master.axml and add a namespace definition to the top of the XML to include the NationalParks.Droid namespace: This namespace definition is required in order to allow Android to resolve the MvvmCross-specific elements that will be specified. Change the ListView element to a Mvx.MvxListView element: <Mvx.MvxListView    android_layout_width="match_parent"    android_layout_height="match_parent"    android_id="@+id/parksListView" /> Add a data binding specification to the MvxListView element, binding the ItemsSource property of the list view to the Parks property of MasterViewModel, as follows:    . . .    android_id="@+id/parksListView"    local_MvxBind="ItemsSource Parks" /> Add a list item template attribute to the element definition. This layout controls the content of each item that will be displayed in the list view: local:MvxItemTemplate="@layout/nationalparkitem" Create the NationalParkItem layout and provide TextView elements to display both the name and description of a park, as follows: <LinearLayout    android_orientation="vertical"    android_layout_width="fill_parent"    android_layout_height="wrap_content">    <TextView        android_layout_width="match_parent"        android_layout_height="wrap_content"         android:textSize="40sp"/>    <TextView        android_layout_width="match_parent"        android_layout_height="wrap_content"        android_textSize="20sp"/> </LinearLayout> Add data binding specifications to each of the TextView elements: . . .        local_MvxBind="Text Name" /> . . .        local_MvxBind="Text Description" /> . . . Note that in this case, the context for data binding is an instance of an item in the collection that was bound to MvxListView, for this example, an instance of NationalPark. Creating the MasterView activity Next, create MasterView, which is an MvxActivity instance that corresponds with MasterViewModel. 
To create MasterView, complete the following steps: Select the ViewModels folder in NationalParks.Core, right-click on it, navigate to Add | New File. In the New File dialog, navigate to Android | Activity, enter MasterView in the Name field, and select New. Modify the class specification so that it inherits from MvxActivity; you will also need to add a few using directives as follows: using Cirrious.MvvmCross.Droid.Views; using NationalParks.Core.ViewModels; . . . namespace NationalParks.Droid.Views {    [Activity(Label = "Parks")]    public class MasterView : MvxActivity    {        . . .    } } Open Setup.cs and add code to initialize the file handler and path for the NationalParksData singleton to the CreateApp() method, as follows: protected override IMvxApplication CreateApp() {    NationalParksData.Instance.FileHandler =        new FileHandler ();    NationalParksData.Instance.DataDir =        System.Environment.GetFolderPath(          System.Environment.SpecialFolder.MyDocuments);    return new Core.App(); } Compile and run the app; you will need to copy the NationalParks.json file to the device or emulator using the Android Device Monitor. All the parks in NationalParks.json should be displayed. Implementing the detail view Now that we have the master list view displaying national parks, we can focus on creating the detail view. We will follow the same steps for the detail view as the ones we just completed for the master view. Creating DetailViewModel We start creating DetailViewModel by using the following steps: Following the same procedure as the one that was used to create MasterViewModel, create a new ViewModel named DetailViewModel in the ViewModel folder of NationalParks.Core. Add a NationalPark property to support data binding for the view controls, as follows: protected NationalPark _park; public NationalPark Park {    get { return _park; }    set { _park = value;          RaisePropertyChanged(() => Park);      } } Create a Parameters class that can be used to pass a park ID for the park that should be displayed. It's convenient to create this class within the class definition of the ViewModel that the parameters are for: public class DetailViewModel : MvxViewModel {    public class Parameters    {        public string ParkId { get; set; }    }    . . . Implement an Init() method that will accept an instance of the Parameters class and get the corresponding national park from NationalParkData: public void Init(Parameters parameters) {    Park = NationalParksData.Instance.Parks.        FirstOrDefault(x => x.Id == parameters.ParkId); } Updating the Detail.axml layout Next, we will update the layout file. The main changes that need to be made are to add data binding specifications to the layout file. To update the Detail.axml layout, perform the following steps: Open Detail.axml and add the project namespace to the XML file: Add data binding specifications to each of the TextView elements that correspond to a national park property, as demonstrated for the park name: <TextView    android_layout_width="match_parent"    android_layout_height="wrap_content"    android_id="@+id/nameTextView"    local_MvxBind="Text Park.Name" /> Creating the DetailView activity Now, create the MvxActivity instance that will work with DetailViewModel. To create DetailView, perform the following steps: Following the same procedure as the one that was used to create MasterView, create a new view named DetailView in the Views folder of NationalParks.Droid. 
Implement the OnCreateOptionsMenu() and OnOptionsItemSelected() methods so that our menus will be accessible. Copy the implementation of these methods from the solution created in Chapter 6, The Sharing Game in the book Xamarin Essentials (available at https://www.packtpub.com/application-development/xamarin-essentials)[AR4] . Comment out the section in OnOptionsItemSelect() related to the Edit action for now; we will fill that in once the edit view is completed. Adding navigation The last step is to add navigation so that when an item is clicked on in MvxListView on MasterView, the park is displayed in the detail view. We will accomplish this using a command property and data binding. To add navigation, perform the following steps: Open MasterViewModel and add an IMvxCommand property; this will be used to handle a park that is being selected: protected IMvxCommand ParkSelected { get; protected set; } Create an Action delegate that will be called when the ParkSelected command is executed, as follows: protected void ParkSelectedExec(NationalPark park) {    ShowViewModel<DetailViewModel> (        new DetailViewModel.Parameters ()            { ParkId = park.Id }); } Initialize the command property in the constructor of MasterViewModel: ParkClicked =    new MvxCommand<NationalPark> (ParkSelectedExec); Now, for the last step, add a data binding specification to MvvListView in Master.axml to bind the ItemClick event to the ParkClicked command on MasterViewModel, which we just created: local:MvxBind="ItemsSource Parks; ItemClick ParkClicked" Compile and run the app. Clicking on a park in the list view should now navigate to the detail view, displaying the selected park. Implementing the edit view We are now almost experts at implementing new Views and ViewModels. One last View to go is the edit view. Creating EditViewModel Like we did previously, we start with the ViewModel. To create EditViewModel, complete the following steps: Following the same process that was previously used in this article to create EditViewModel, add a data binding property and create a Parameters class for navigation. Implement an Init() method that will accept an instance of the Parameters class and get the corresponding national park from NationalParkData in the case of editing an existing park or create a new instance if the user has chosen the New action. Inspect the parameters passed in to determine what the intent is: public void Init(Parameters parameters) {    if (string.IsNullOrEmpty (parameters.ParkId))        Park = new NationalPark ();    else        Park =            NationalParksData.Instance.            Parks.FirstOrDefault(            x => x.Id == parameters.ParkId); } Updating the Edit.axml layout Update Edit.axml to provide data binding specifications. To update the Edit.axml layout, you first need to open Edit.axml and add the project namespace to the XML file. Then, add the data binding specifications to each of the EditView elements that correspond to a national park property. Creating the EditView activity Create a new MvxActivity instance named EditView to will work with EditViewModel. To create EditView, perform the following steps: Following the same procedure as the one that was used to create DetailView, create a new View named EditView in the Views folder of NationalParks.Droid. Implement the OnCreateOptionsMenu() and OnOptionsItemSelected() methods so that the Done action will accessible from the ActionBar. 
You can copy the implementation of these methods from the solution created in Chapter 6, The Sharing Game in the book Xamarin Essentials (available at https://www.packtpub.com/application-development/xamarin-essentials). Change the implementation of Done to call the Done command on EditViewModel. Adding navigation Add navigation to two places: when New (+) is clicked from MasterView and when Edit is clicked in DetailView. Let's start with MasterView. To add navigation from MasterViewModel, complete the following steps: Open MasterViewModel.cs and add a NewParkClicked command property along with the handler for the command. Be sure to initialize the command in the constructor, as follows: protected IMvxCommand NewParkClicked { get; set; } protected void NewParkClickedExec() { ShowViewModel<EditViewModel> (); } Note that we do not pass in a parameter class into ShowViewModel(). This will cause a default instance to be created and passed in, which means that ParkId will be null. We will use this as a way to determine whether a new park should be created. Now, it's time to hook the NewParkClicked command up to the actionNew menu item. We do not have a way to accomplish this using data binding, so we will resort to a more traditional approach—we will use the OnOptionsItemSelected() method. Add logic to invoke the Execute() method on NewParkClicked, as follows: case Resource.Id.actionNew:    ((MasterViewModel)ViewModel).        NewParkClicked.Execute ();    return true; To add navigation from DetailViewModel, complete the following steps: Open DetailViewModel.cs and add a EditParkClicked command property along with the handler for the command. Be sure to initialize the command in the constructor, as shown in the following code snippet: protected IMvxCommand EditPark { get; protected set;} protected void EditParkHandler() {    ShowViewModel<EditViewModel> (        new EditViewModel.Parameters ()            { ParkId = _park.Id }); } Note that an instance of the Parameters class is created, initialized, and passed into the ShowViewModel() method. This instance will in turn be passed into the Init() method on EditViewModel. Initialize the command property in the constructor for MasterViewModel, as follows: EditPark =    new MvxCommand<NationalPark> (EditParkHandler); Now, update the OnOptionsItemSelect() method in DetailView to invoke the DetailView.EditPark command when the Edit action is selected: case Resource.Id.actionEdit:    ((DetailViewModel)ViewModel).EditPark.Execute ();    return true; Compile and run NationalParks.Droid. You should now have a fully functional app that has the ability to create new parks and edit the existing parks. Changes made to EditView should automatically be reflected in MasterView and DetailView. Creating the MvvmCross iOS app The process of creating the Android app with MvvmCross provides a solid understanding of how the overall architecture works. Creating the iOS solution should be much easier for two reasons: first, we understand how to interact with MvvmCross and second, all the logic we have placed in NationalParks.Core is reusable, so that we just need to create the View portion of the app and the startup code. To create NationalParks.iOS, complete the following steps: Select the NationalParks.MvvmCross solution, right-click on it, and navigate to Add | New Project. From the New Project dialog, navigate to C# | iOS | iPhone | Single View Application, enter NationalParks.iOS in the Name field, and click on OK. 
Add the MvvmCross starter kit package to the new project by selecting NationalParks.iOS and navigating to Project | Add Packages from the main menu. A number of things were added to NationalParks.iOS as a result of adding the package. They are as follows: packages.config: This file contains a list of libraries associated with the MvvmCross starter kit package. These entries are links to an actual library in the Packages folder of the overall solution, which contains the actual downloaded libraries. FirstView: This class is placed in the Views folder, which corresponds to the FirstViewModel instance created in NationalParks.Core. Setup: This class inherits from MvxTouchSetup. This class is responsible for creating an instance of the App class from the core project, which in turn displays the first ViewModel via a call to RegisterAppStart(). AppDelegate.cs.txt: This class contains the sample startup code, which should be placed in the actual AppDelete.cs file. Implementing the iOS user interface We are now ready to create the user interface for the iOS app. The good news is that we already have all the ViewModels implemented, so we can simply reuse them. The bad news is that we cannot easily reuse the storyboards from our previous work; MvvmCross apps generally use XIB files. One of the reasons for this is that storyboards are intended to provide navigation capabilities and an MvvmCross app delegates that responsibility to ViewModel and presenter. It is possible to use storyboards in combination with a custom presenter, but the remainder of this article will focus on using XIB files, as this is the more common use. The screen layouts can be used as depicted in the following screenshot: We are now ready to get started. Implementing the master view The first view we will work on is the master view. To implement the master view, complete the following steps: Create a new ViewController class named MasterView by right-clicking on the Views folder of NationalParks.iOS and navigating to Add | New File | iOS | iPhone View Controller. Open MasterView.xib and arrange controls as seen in the screen layouts. Add outlets for each of the edit controls. Open MasterView.cs and add the following boilerplate logic to deal with constraints on iOS 7, as follows: // ios7 layout if (RespondsToSelector(new    Selector("edgesForExtendedLayout")))    EdgesForExtendedLayout = UIRectEdge.None; Within the ViewDidLoad() method, add logic to create MvxStandardTableViewSource for parksTableView: MvxStandardTableViewSource _source; . . . _source = new MvxStandardTableViewSource(    parksTableView,    UITableViewCellStyle.Subtitle,    new NSString("cell"),    "TitleText Name; DetailText Description",      0); parksTableView.Source = _source; Note that the example uses the Subtitle cell style and binds the national park name and description to the title and subtitle. Add the binding logic to the ViewDidShow() method. In the previous step, we provided specifications for properties of UITableViewCell to properties in the binding context. In this step, we need to set the binding context for the Parks property on MasterModelView: var set = this.CreateBindingSet<MasterView,    MasterViewModel>(); set.Bind (_source).To (vm => vm.Parks); set.Apply(); Compile and run the app. All the parks in NationalParks.json should be displayed. Implementing the detail view Now, implement the detail view using the following steps: Create a new ViewController instance named DetailView. 
Open DetailView.xib and arrange controls as shown in the following code. Add outlets for each of the edit controls. Open DetailView.cs and add the binding logic to the ViewDidShow() method: this.CreateBinding (this.nameLabel).    To ((DetailViewModel vm) => vm.Park.Name).Apply (); this.CreateBinding (this.descriptionLabel).    To ((DetailViewModel vm) => vm.Park.Description).        Apply (); this.CreateBinding (this.stateLabel).    To ((DetailViewModel vm) => vm.Park.State).Apply (); this.CreateBinding (this.countryLabel).    To ((DetailViewModel vm) => vm.Park.Country).        Apply (); this.CreateBinding (this.latLabel).    To ((DetailViewModel vm) => vm.Park.Latitude).        Apply (); this.CreateBinding (this.lonLabel).    To ((DetailViewModel vm) => vm.Park.Longitude).        Apply (); Adding navigation Add navigation from the master view so that when a park is selected, the detail view is displayed, showing the park. To add navigation, complete the following steps: Open MasterView.cs, create an event handler named ParkSelected, and assign it to the SelectedItemChanged event on MvxStandardTableViewSource, which was created in the ViewDidLoad() method: . . .    _source.SelectedItemChanged += ParkSelected; . . . protected void ParkSelected(object sender, EventArgs e) {    . . . } Within the event handler, invoke the ParkSelected command on MasterViewModel, passing in the selected park: ((MasterViewModel)ViewModel).ParkSelected.Execute (        (NationalPark)_source.SelectedItem); Compile and run NationalParks.iOS. Selecting a park in the list view should now navigate you to the detail view, displaying the selected park. Implementing the edit view We now need to implement the last of the Views for the iOS app, which is the edit view. To implement the edit view, complete the following steps: Create a new ViewController instance named EditView. Open EditView.xib and arrange controls as in the layout screenshots. Add outlets for each of the edit controls. Open EditView.cs and add the data binding logic to the ViewDidShow() method. You should use the same approach to data binding as the approach used for the details view. Add an event handler named DoneClicked, and within the event handler, invoke the Done command on EditViewModel:protected void DoneClicked (object sender, EventArgs e) {    ((EditViewModel)ViewModel).Done.Execute(); } In ViewDidLoad(), add UIBarButtonItem to NavigationItem for EditView, and assign the DoneClicked event handler to it, as follows: NavigationItem.SetRightBarButtonItem(    new UIBarButtonItem(UIBarButtonSystemItem.Done,        DoneClicked), true); Adding navigation Add navigation to two places: when New (+) is clicked from the master view and when Edit is clicked on in the detail view. Let's start with the master view. To add navigation to the master view, perform the following steps: Open MasterView.cs and add an event handler named NewParkClicked. In the event handler, invoke the NewParkClicked command on MasterViewModel: protected void NewParkClicked(object sender,        EventArgs e) {    ((MasterViewModel)ViewModel).            NewParkClicked.Execute (); } In ViewDidLoad(), add UIBarButtonItem to NavigationItem for MasterView and assign the NewParkClicked event handler to it: NavigationItem.SetRightBarButtonItem(    new UIBarButtonItem(UIBarButtonSystemItem.Add,        NewParkClicked), true); To add navigation to the details view, perform the following steps: Open DetailView.cs and add an event handler named EditParkClicked. 
In the event handler, invoke the EditParkClicked command on DetailViewModel: protected void EditParkClicked (object sender,    EventArgs e) {    ((DetailViewModel)ViewModel).EditPark.Execute (); } In ViewDidLoad(), add UIBarButtonItem to NavigationItem for MasterView, and assign the EditParkClicked event handler to it: NavigationItem.SetRightBarButtonItem(    new UIBarButtonItem(UIBarButtonSystemItem.Edit,        EditParkClicked), true); Refreshing the master view list One last detail that needs to be taken care of is to refresh the UITableView control on MasterView when items have been changed on EditView. To refresh the master view list, perform the following steps: Open MasterView.cs and call ReloadData() on parksTableView within the ViewDidAppear() method of MasterView: public override void ViewDidAppear (bool animated) {    base.ViewDidAppear (animated);    parksTableView.ReloadData(); } Compile and run NationalParks.iOS. You should now have a fully functional app that has the ability to create new parks and edit existing parks. Changes made to EditView should automatically be reflected in MasterView and DetailVIew. Considering the pros and cons After completing our work, we now have the basis to make some fundamental observations. Let's start with the pros: MvvmCross definitely increases the amount of code that can be reused across platforms. The ViewModels house the data required by the View, the logic required to obtain and transform the data in preparation for viewing, and the logic triggered by user interactions in the form of commands. In our sample app, the ViewModels were somewhat simple; however, the more complex the app, the more reuse will likely be gained. As MvvmCross relies on the use of each platform's native UI frameworks, each app has a native look and feel and we have a natural layer that implements platform-specific logic when required. The data binding capabilities of MvvmCross also eliminate a great deal of tedious code that would otherwise have to be written. All of these positives are not necessarily free; let's look at some cons: The first con is complexity; you have to learn another framework on top of Xamarin, Android, and iOS. In some ways, MvvmCross forces you to align the way your apps work across platforms to achieve the most reuse. As the presentation logic is contained in the ViewModels, the views are coerced into aligning with them. The more your UI deviates across platforms; the less likely it will be that you can actually reuse ViewModels. With these things in mind, I would definitely consider using MvvmCross for a cross-platform mobile project. Yes, you need to learn an addition framework and yes, you will likely have to align the way some of the apps are laid out, but I think MvvmCross provides enough value and flexibility to make these issues workable. I'm a big fan of reuse and MvvmCross definitely pushes reuse to the next level. Summary In this article, we reviewed the high-level concepts of MvvmCross and worked through a practical exercise in order to convert the national parks apps to use the MvvmCross framework and the increase code reuse. Resources for Article: Further resources on this subject: Kendo UI DataViz – Advance Charting [article] The Kendo MVVM Framework [article] Sharing with MvvmCross [article]

Application Connectivity and Network Events

Packt
26 Dec 2014
 In this article by Kerri Shotts, author of PhoneGap for Enterprise, we will see how an app reacts to the network changes and activities. In an increasingly connected world, mobile devices aren't always connected to the network. As such, the app needs to be sensitive to changes in the device's network connectivity. It also needs to be sensitive to the type of network (for example, cellular versus wired), not to mention being sensitive to the device the app itself is running on. Given all this, we will cover the following topics: Determining network connectivity Getting the current network type Detecting changes in connectivity Handling connectivity issues (For more resources related to this topic, see here.) Determining network connectivity In a perfect world, we'd never have to worry if the device was connected to the Internet or not, and if our backend was reachable. Of course, we don't live in that world, so we need to respond appropriately when the device's network connectivity changes. What's critical to remember is that having a network connection in no way determines the reachability of a host. That is to say, it's entirely possible for a device to be connected to a Wi-Fi network or a mobile hotspot and yet is unable to contact your servers. This can happen for several reasons (any of which can prevent proper communication with your backend). In short, determining the network status and being sensitive to changes in the status really tells you only one thing: whether or not it is futile to attempt communication. After all, if the device isn't connected to any network, there's no reason to attempt communication over a nonexistent network. On the other hand, if a network is available, the only way to determine if your hosts are reachable or not is to try and contact them. The ability to determine the device's network connectivity and respond to changes in the status is not available in Cordova/PhoneGap by default. You'll need to add a plugin before you can use this particular feature. You can install the plugin as follows: cordova plugin add org.apache.cordova.network-information The plugin's complete documentation is available at: https://github.com/apache/cordova-plugin-network-information/blob/master/doc/index.md. Getting the current network type Anytime after the deviceready event fires, you can query the plugin for the status of the current network connection by querying navigator.connection.type: var networkType = navigator.connection.type; switch (networkType) { case Connection.UNKNOWN: console.log ("Unknown connection."); break; case Connection.ETHERNET: console.log ("Ethernet connection."); break; case Connection.WIFI: console.log ("Wi-Fi connection."); break; case Connection.CELL_2G: console.log ( "Cellular (2G) connection."); break; case Connection.CELL_3G: console.log ( "Cellular (3G) connection."); break; case Connection.CELL_4G: console.log ( "Cellular (4G) connection."); break; case Connection.CELL: console.log ( "Cellular connection."); break; case Connection.NONE: console.log ( "No network connection."); break; } If you executed the preceding code on a typical mobile device, you'd probably either see some variation of the Cellular connection or the Wi-Fi connection message. If your device was on Wi-Fi and you proceeded to disable it and rerun the app, the Wi-Fi notice will be replaced with the Cellular connection notice. Now, if you put the device into airplane mode and rerun the app, you should see No network connection. 
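Rather than repeating a switch statement like this throughout an app, the check can be wrapped in a small helper. The following sketch is our own illustration—it is not part of the plugin or of the original sample code—and simply reuses the Connection constants shown above:

// Hypothetical helper (not part of the plugin): answers the two questions an
// app typically asks about the current connection.
var NetworkInfo = {
    // true if any network (Ethernet, Wi-Fi, or cellular) is reported
    hasConnection: function () {
        return navigator.connection.type !== Connection.NONE;
    },
    // true only for the cellular connection types
    isCellular: function () {
        switch (navigator.connection.type) {
        case Connection.CELL:
        case Connection.CELL_2G:
        case Connection.CELL_3G:
        case Connection.CELL_4G:
            return true;
        default:
            return false;
        }
    }
};

With a helper like this, the rest of the app can ask NetworkInfo.hasConnection() before attempting a request, or NetworkInfo.isCellular() before choosing a bandwidth-heavy option.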
Based on the available network type constants, it's clear that we can use this information in various ways: We can tell if it makes sense to attempt a network request: if the type is Connection.NONE, there's no point in trying as there's no network to service the request. We can tell if we are on a wired network, a Wi-Fi network, or a cellular network. Consider a streaming video app; this app can not only permit full quality video on a wired/Wi-Fi network, but can also use a lower quality video stream if it was running on a cellular connection. Although tempting, there's one thing the earlier code does not tell us: the speed of the network. That is, we can't use the type of the network as a proxy for the available bandwidth, even though it feels like we can. After all, aren't Ethernet connections typically faster than Wi-Fi connections? Also, isn't a 4G cellular connection faster than a 2G connection? In ideal circumstances, you'd be right. Unfortunately, it's possible for a fast 4G cellular network to be very congested, thus resulting in poor throughput. Likewise, it is possible for an Ethernet connection to communicate over a noisy wire and interact with a heavily congested network. This can also slow throughput. Also, while it's important to recognize that although you can learn something about the network the device is connected to, you can't use this to learn anything about the network conditions beyond that network. The device might indicate that it is attached to a Wi-Fi network, but this Wi-Fi network might actually be a mobile hotspot. It could be connected to a satellite with high latency, or to a blazing fast fiber network. As such, the only two things we can know for sure is whether or not it makes sense to attempt a request, and whether or not we need to limit the bandwidth if the device knows it is on a cellular connection. That's it. Any other use of this information is an abuse of the plugin, and is likely to cause undesirable behavior. Detecting changes in connectivity Determining the type of network connection once does little good as the device can lose the connection or join a new network at any time. This means that we need to properly respond to these events in order to provide a good user experience. Do not rely on the following events being fired when your app starts up for the first time. On some devices, it might take several seconds for the first event to fire; however, in some cases, the events might never fire (specifically, if testing in a simulator). There are two events our app needs to listen to: the online event and the offline event. Their names are indicative of their function, so chances are good you already know what they do. The online event is fired when the device connects to a network, assuming it wasn't connected to a network before. The offline event does the opposite: it is fired when the device loses a connection to a network, but only if the device was previously connected to a network. This means that you can't depend on these events to detect changes in the type of the network: a move from a Wi-Fi network to a cellular network might not elicit any events at all. In order to listen to these events, you can use the following code: document.addEventListener ("online", handleOnlineEvent, false); document.addEventListener ("offline", handleOfflineEvent, false); The event listener doesn't receive any information, so you'll almost certainly want to check the network type when handling an online event. 
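The handler bodies themselves are not shown in the preceding snippet; a minimal sketch of the two handlers might look like the following. Note that the online handler immediately re-queries navigator.connection.type, because the event object itself carries no details:

function handleOnlineEvent() {
    // The event carries no data, so ask the plugin what we are connected to now.
    var networkType = navigator.connection.type;
    console.log("Connection restored: " + networkType);
    // A real app might also retry any queued requests here.
}

function handleOfflineEvent() {
    // Nothing to query here; simply note that we are offline.
    console.log("Connection lost; defer any outgoing requests.");
}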
The offline event will always correspond to a Connection.NONE network type. Having the ability to detect changes in the connectivity status means that our app can be more intelligent about how it handles network requests, but it doesn't tell us if a request is guaranteed to succeed. Handling connectivity issues As the only way to know if a network request might succeed is to actually attempt the request; we need to know how to properly handle the errors that might rise out of such an attempt. Between the Mobile and the Middle tier, the following are the possible errors that you might encounter while connecting to a network: TimeoutError: This error is thrown when the XHR times out. (Default is 30 seconds for our wrapper, but if the XHR's timeout isn't otherwise set, it will attempt to wait forever.) HTTPError: This error is thrown when the XHR completes and receives a response other than 200 OK. This can indicate any number of problems, but it does not indicate a network connectivity issue. JSONError: This error is thrown when the XHR completes, but the JSON response from the server cannot be parsed. Something is clearly wrong on the server, of course, but this does not indicate a connectivity issue. XHRError: This error is thrown when an error occurs when executing the XHR. This is definitely indicative of something going very wrong (not necessarily a connectivity issue, but there's a good chance). MaxRetryAttemptsReached: This error is thrown when the XHR wrapper has given up retrying the request. The wrapper automatically retries in the case of TimeoutError and XHRError. In all the earlier cases, the catch method in the promise chain is called. At this point, you can attempt to determine the type of error in order to determine what to do next: function sendFailRequest() { XHR.send( "GET", "http://www.really-bad-host-name.com /this/will/fail" ) .then(function( response ) {    console.log( response ); }) .catch( function( err ) {    if ( err instanceof XHR.XHRError ||     err instanceof XHR.TimeoutError ||     err instanceof XHR.MaxRetryAttemptsReached ) {      if ( navigator.connection.type === Connection.NONE ) {        // we could try again once we have a network connection        var retryRequest = function() {          sendFailRequest();          APP.removeGlobalEventListener( "networkOnline",         retryRequest );        };        // wait for the network to come online – we'll cover       this method in a moment        APP.addGlobalEventListener( "networkOnline",       retryRequest );      } else {        // we have a connection, but can't get through       something's going on that we can't fix.        alert( "Notice: can't connect to the server." );      }    }    if ( err instanceof XHR.HTTPError ) {      switch ( err.HTTPStatus ) {      case 401: // unauthorized, log the user back in        break;        case 403: // forbidden, user doesn't have access        break;        case 404: // not found        break;        case 500: // internal server error        break;        default:       console.log( "unhandled error: ", err.HTTPStatus );      }    }    if ( err instanceof XHR.JSONParseError ) {      console.log( "Issue parsing XHR response from server." );    } }).done(); } sendFailRequest(); Once a connection error is encountered, it's largely up to you and the type of app you are building to determine what to do next, but there are several options to consider as your next course of action: Fail loudly and let the user know that their last action failed. 
It might not be terribly great for user experience, but it might be the only sensible thing to do. Check whether there is a network connection present, and if not, hold on to the request until an online event is received and then send the request again. This makes sense only if the request you are sending is a request for data, not a request for changing data, as the data might have changed in the interim. Summary In this article you learnt how an app built using PhoneGap/Cordova reacts to the changing network conditions, also how to handle the connectivity issues that you might encounter. Resources for Article: Further resources on this subject: Configuring the ChildBrowser plugin [article] Using Location Data with PhoneGap [article] Working with the sharing plugin [article]

Building the Middle-Tier

Packt
23 Dec 2014
In this article by Kerri Shotts , the author of the book PhoneGap for Enterprise covered how to build a web server that bridges the gap between our database backend and our mobile application. If you browse any Cordova/PhoneGap forum, you'll often come across posts asking how to connect to and query a backend database. In this article, we will look at the reasons why it is necessary to interact with your backend database using an intermediary service. If the business logic resides within the database, the middle-tier might be a very simple layer wrapping the data store, but it can also implement a significant portion of business logic as well. The middle-tier also usually handles session authentication logic. Although many enterprise projects will already have a middle-tier in place, it's useful to understand how a middle-tier works, and how to implement one if you ever need to build a solution from the ground up. In this article, we'll focus heavily on these topics: Typical middle-tier architecture Designing a RESTful-like API Implementing a RESTful-like hypermedia API using Node.js Connecting to the backend database Executing queries Handling authentication using Passport Building API handlers You are welcome to implement your middle-tier using any technology with which you are comfortable. The topics that we will cover in this article can be applied to any middle-tier platform. Middle-tier architecture It's tempting, especially for simple applications, to have the desire to connect your mobile app directly to your data store. This is an incredibly bad idea, which means your data store is vulnerable and exposed to attacks from the outside world (unless you require the user to log in to a VPN). It also means that your mobile app has a lot of code dedicated solely to querying your data store, which makes for a tightly coupled environment. If you ever want to change your database platform or modify the table structures, you will need to update the app, and any app that wasn't updated will stop working. Furthermore, if you want another system to access the data, for example, a reporting solution, you will need to repeat the same queries and logic already implemented in your app in order to ensure consistency. For these reasons alone, it's a bad idea to directly connect your mobile app to your backend database. However, there's one more good reason: Cordova has no nonlocal database drivers whatsoever. Although it's not unusual for a desktop application to make a direct connection to your database on an internal network, Cordova has no facility to load a database driver to interface directly with an Oracle or MySQL database. This means that you must build an intermediary service to bridge the gap from your database backend to your mobile app. No middle-tier is exactly the same, but for web and mobile apps, this intermediary service—also called an application server—is typically a relatively simple web server. This server accepts incoming requests from a client (our mobile app or a website), processes them, and returns the appropriate results. In order to do so, the web server parses these requests using a variety of middleware (security, session handling, cookie handling, request parsing, and so on) and then executes the appropriate request handler for the request. This handler then needs to pass this request on to the business logic handler, which, in our case, lives on the database server. 
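To make this hand-off concrete, a single request handler in an Express-style middle-tier might look like the following sketch. Everything here is illustrative—the route, the ensureAuthenticated middleware, the queryDatabase() helper, and the stored function name are placeholders rather than Tasker's actual code:

// Illustrative only: authenticate the caller, delegate to business logic that
// lives in the database, and shape the result for the client.
app.get("/task/:taskId/comments", ensureAuthenticated, function (req, res, next) {
    queryDatabase(
        "SELECT * FROM table(tasker.task_mgmt.get_task_comments(:1, :2))",
        [ req.params.taskId, req.user.userId ],
        function (err, rows) {
            if (err) { return next(err); }   // hand errors to the error middleware
            res.json({ comments: rows });    // transform the rows into JSON
        });
});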
The business logic will determine how to react to the request and returns the appropriate data to the request handler. The request handler transforms this data into something usable by the client, for example, JSON or XML, and returns it to the client. The middle-tier provides an Application Programming Interface (API). Beyond authentication and session handling, the middle-tier provides a set of reusable components that perform specific tasks by delegating these tasks to lower tiers. As an example, one of the components of our Tasker app is named get-task-comments. Provided the user is properly authenticated, the component will request a specific task from the business logic and return the attached comments. Our mobile app (or any other consumer) only needs to know how to call get-task-comments. This decouples the client from the database and ensures that we aren't unnecessarily repeating code. The flow of request and response looks a lot like the following figure: Designing a RESTful-like API A mobile app interfaces with your business logic and data store via an API provided by the application server middle-tier. Exactly how this API is implemented and how the client uses it is up to the developers of the system. In the past, this has often meant using web services (over HTTP) with information interchange via Simple Object Access Protocol (SOAP). Recently, RESTful APIs have become the norm when working with web and mobile applications. These APIs conform to the following constraints: Client/Server: Clients are not concerned with how data is stored, (that's the server's job), and servers are not concerned with state (that's the client's job). They should be able to be developed and/or replaced completely independently of each other (low coupling) as long as the API remains the same. Stateless: Each request should have the necessary information contained within it so that the server can properly handle the request. The server isn't concerned about session states; this is the sole domain of the client. Cacheable: Responses must specify if they can be cached or not. Proper management of this can greatly improve performance and scalability. Layered: The client shouldn't be able to tell if there are any intermediary servers between it and the server. This ensures that additional servers can be inserted into the chain to provide caching, security, load balancing, and so on. Code-on-demand: This is an optional constraint. The server can send the necessary code to handle the response to the client. For a mobile PhoneGap app, this might involve sending a small snippet of JavaScript, for example, to handle how to display and interact with a Facebook post. Uniform Interface: Resources are identified by a Uniform Resource Identifier (URI), for example, https://pge-as.example.com/task/21 refers to the task with an identifier of 21. These resources can be expressed in any number of formats to facilitate data interchange. Furthermore, when the client has the resource (in whatever representation it is provided), the client should also have enough information to manipulate the resource. Finally, the representation should indicate valid state transitions by providing links that the client can use to navigate the state tree of the system. There are many good web APIs in production, but often they fail to address the last constraint very well. 
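To see what this last constraint asks for in practice, consider what a hypermedia-style response for a single task might look like. The field names and link format below are purely illustrative and are not Tasker's actual response format:

{
    "id": 21,
    "title": "Review quarterly report",
    "status": "in-progress",
    "_links": {
        "self":     { "href": "/task/21",          "method": "GET" },
        "update":   { "href": "/task/21",          "method": "PUT" },
        "comments": { "href": "/task/21/comments", "method": "GET" },
        "assign":   { "href": "/task/21/assignee", "method": "PUT" }
    }
}

A client that understands this format finds the update or comments transition by name instead of hardcoding those URIs, so the server remains free to reorganize them. Many production APIs, however, stop short of including such links.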
They might represent resources using URIs, but typically the client is expected to know all the endpoints of the API and how to transition between them without the server telling the client how to do so. This means that the client is tightly coupled to the API. If the URIs or the API change, then the client breaks. RESTful APIs should instead provide all the valid state transitions with each response. This lets the client reduce its coupling by looking for specific actions rather than assuming that a specific URI request will work. Properly implemented, the underlying URIs could change and the client app would be unaffected. The only thing that needs to be constant is the entry URI to the API. There are many good examples of these kinds of APIs, PayPal's is quite good as are many others. The responses from these APIs always contain enough information for the client to advance to the next state in the chain. So in the case of PayPal, a response will always contain enough information to advance to the next step of the monetary transaction. Because the response contains this information, the client only needs to look at the response rather than having the URI of the next step hardcoded. RESTful APIs aren't standardized; one API might provide links to the next state in one format, while another API might use a different format. That said, there are several attempts to create a standard response format, Collection+JSON is just one example. The lack of standardization in the response format isn't as bad as it sounds; the more important issue is that as long as your app understands the response format, it can be decoupled from the URI structure of your API and its resources. The API becomes a list of methods with explicit transitions rather than a list of URIs alone. As long as the action names remain the same, the underlying URIs can be changed without affecting the client. This works well when it comes to most APIs where authorization is provided using an API key or an encoded token. For example, an API will often require authorization via OAuth 2.0. Your code asks for the proper authorization first, and upon each subsequent request, it passes an appropriate token that enables access to the requested resource. Where things become problematic, and why we're calling our API RESTful-like, is when it comes to the end user authentication. Whether the user of our mobile app recognizes it or not, they are an immediate consumer of our API. Because the data itself is protected based upon the roles and access of each particular user, users must authenticate themselves prior to accessing any data. When an end user is involved with authentication, the idea of sessions is inevitably required largely for the end user's convenience. Some sessions can be incredibly short-lived, for example, many banks will terminate a session if no activity is seen for 10 minutes, while others can be long-lived, and others might even be effectively eternal until explicitly revoked by the user. Regardless of the session length, the fact that a session is present indicates that the server must often store some information about state. Even if this information applies only to the user's authentication and session validity, it still violates the second rule of RESTful APIs. Tasker's web API, then, is a RESTful-like API. In everything except session handling and authentication, our API is like any other RESTful API. 
However, when it comes to authentication, the server maintains some state in order to ensure that users are properly authenticated. In the case of Tasker, the maintained state is limited. Once a user authenticates, a unique single-use token and an Hash Message Authentication Code (HMAC) secret are generated and returned to the client. This token is expected to be sent with the next API request and this request is expected to be signed with the HMAC secret. Upon completion of this API request, a new token is generated. Each token expires after a specified amount of time, or can be expired immediately by an explicit logout. Each token is stored in the backend, which means we violate the stateless rule. Our tokens are just a cryptographically random series of bytes, and because of this, there's nothing in the token that can be used to identify the user. This means we need to maintain the valid tokens and their user associations in the database. If the token contained user-identifiable information, we could technically avoid maintaining state, but this also means that the token could be forged if the attacker knew how tokens were constructed. A random token, on the other hand, means that there's no method of construction that can fool the server; the attacker will have to be very lucky to guess it right. Since Tasker's tokens are continually expiring after a short period of time and are continually regenerated upon each request, guessing a token is that much more difficult. Of course, it's not impossible for an attacker to get lucky and guess the right token on the first try, but considering the amount of entropy in most usernames and passwords, it's more likely that the attacker could guess the user's password than they could guess the correct token. Because these tokens are managed by the backend, our Tasker's API isn't truly stateless, and so it's not truly RESTful, hence the term RESTful-like. If you want to implement your API as a pure RESTful API, feel free. If your API is like that of many other APIs (such as Twitter, PayPal, Facebook, and so on), you'll probably want to do so. All this sounds well and good, but how should we go about designing and defining our API? Here's how I suggest going about it: Identify the resources. In Tasker, the resources are people, tasks, and task comments. Essentially, these are the data models. (If you take security into account, Tasker also has user and role resources in addition to sessions.) Define how the URI should represent the resource. For example, Bob Smith might be represented by /person/bob-smith or /person/29481. Query parameters are also acceptable: /person?administeredBy=john-doe will refer to the set of all individuals who have John Doe as their administrator. If this helps, think of each instance of a resource and each collection of these resources as web pages each having their own URL. Identify the actions that can be performed for each resource. For example, a task can be created and modified by the owner of the task. This task can be assigned to another user. A task's status and progress can be updated by both the owner and the assignee. With RESTful APIs, these actions are typically handled by using the HTTP verbs (also known as methods) GET, POST, PUT, and DELETE. Others can also be used, such as OPTIONS, PATCH, and so on. We'll cover in a moment how these usually line up against typical Create, Read, Update, Delete (CRUD) operations. Identify the state transitions that are valid for resources. 
As an example, a client's first steps might be to request a list of all tasks assigned to a particular user. As part of the response, it should be given URIs that indicate how the app should retrieve information about a particular task. Furthermore, within this single task's response, there should be information that tells the client how to modify the task. Most APIs generally mirror the typical CRUD operations. The following is how the HTTP verbs line up against the familiar CRUD counterparts for a collection of items: HTTP verb CRUD operation Description GET READ This returns the collection of items in the desired format. Often can be filtered and sorted via query parameters. POST CREATE This creates an item within the collection. The return result includes the URI for the new resource. DELETE N/A This is not typically used at the collection level, unless one wants to remove the entire collection. PUT N/A This is not typically used at the collection level, though it can be used to update/replace each item in the collection.  The same verbs are used for items within a collection: HTTP verb CRUD operation Description GET READ This returns a specific item, given the ID. POST N/A This is not typically used at the item level. DELETE DELETE This deletes a specific item, given the ID. PUT UPDATE This updates an existing item. Sometimes PATCH is used to update only specific properties of the item.  Here's an example of a state transition diagram for a portion of the Tasker API along with the corresponding HTTP verbs: Now that we've determined the states and the valid transitions, we're ready to start modeling the API and the responses it should generate. This is particularly useful before you start coding, as one will often notice issues with the API during this phase, and it's far easier to fix them now rather than after a lot of code has been written (or worse, after the API is in production). How you model your API is up to you. If you want to create a simple text document that describes the various requests and expected responses, that's fine. You can also use any number of tools that aid in modeling your API. Some even allow you to provide mock responses for testing. Some of these are identified as follows: RAML (http://raml.org): This is a markup language to model RESTful-like APIs. You can build API models using any text editor, but there is also an API designer online. Apiary (http://apiary.io): Apiary uses a markdown-like language (API blueprint) to model APIs. If you're familiar with markdown, you shouldn't have much trouble using this service. API mocking and automated testing are also provided. Swagger (http://swagger.io): This is similar to RAML, where it uses YAML as the modeling language. Documentation and client code can be generated directly from the API model. Building our API using Node.js In this section, we'll cover connecting our web service to our Oracle database, handling user authentication and session management using Passport, and defining handlers for state transitions. You'll definitely want to take a look at the /tasker-srv directory in the code package for this book, which contains the full web server for Tasker. In the following sections, we've only highlighted some snippets of the code. Connecting to the backend database Node.js's community has provided a large number of database drivers, so chances are good that whatever your backend, Node.js has a driver available for it. 
In our example app, we're using an Oracle database as the backend, which means we'll be using the oracle driver (https://www.npmjs.org/package/oracle). Connecting to the database is actually pretty easy; the following code shows how:

var oracle = require("oracle");

oracle.connect(
  { hostname: "localhost", port: 1521, database: "xe",
    user: "tasker", password: "password" },
  function (err, client) {
    if (err) { /* error; return or next(err) */ }
    /* query the database; when done call client.close() */
  });

In the real world, a development version of our server will use a test database, and a production version of our server will use the production database. To facilitate this, we made the connection information configurable. The /config/development.json and /config/production.json files contain connection information, and the main code simply requests the configuration information when making a connection. The following line of code is used to get the configuration information:

oracle.connect( config.get( "oracle" ), … );

Since we're talking about the real world, we also need to recognize that database connections are slow and they need to be pooled in order to improve performance as well as permit parallel execution. To do this, we added the generic-pool NPM module (https://www.npmjs.org/package/generic-pool) and added the following code to app.js:

var pool = require("generic-pool");

var clientPool = pool.Pool({
  name: "oracle",
  create: function (cb) {
    return new oracle.connect(config.get("oracle"),
      function (err, client) {
        cb(err, client);
      });
  },
  destroy: function (client) {
    try {
      client.close();
    } catch (err) {
      // do nothing, but if we don't catch the error,
      // the server crashes
    }
  },
  max: 5,
  min: 1,
  idleTimeoutMillis: 30000
});

Because our pool will always contain at least one connection, we need to ensure that when the process exits, the pool is properly drained, as follows:

process.on("exit", function () {
  clientPool.drain(function () {
    clientPool.destroyAllNow();
  });
});

On its own, this doesn't do much yet. We need to ensure that the pool is available to the entire app:

app.set("client-pool", clientPool);

Executing queries

We've built our business logic in the Oracle database using PL/SQL stored procedures and functions. In PL/SQL, functions can return table-like structures. While this is similar in concept to a view, writing a function using PL/SQL gives us more flexibility. As such, our queries won't actually be talking to the base tables; they'll be talking to functions that return results based on the user's authorization. This means that we don't need additional conditions in a WHERE clause to filter based on the user's authorization, which helps eliminate code duplication.

Either way, executing queries and stored procedures is done using the same method, that is, execute. Before we can execute anything, we first need to acquire a client connection from the pool. To this end, we added a small set of database utility methods; you can see the code in the /db-utils directory.
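The raw pattern those utilities wrap is simply acquire, execute, release. A minimal sketch of using the pool directly looks like the following; the query here is just a placeholder, and real code needs more careful error handling than this:

clientPool.acquire(function (err, client) {
  if (err) { return console.error("Failed to acquire a connection", err); }
  // run a statement, then always hand the client back to the pool
  client.execute("SELECT 1 FROM dual", [], function (err, results) {
    clientPool.release(client);
    if (err) { return console.error(err); }
    console.log(results);
  });
});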
The query utility method is shown in the following code snippet:

DBUtils.prototype.query = function (sql, bindParameters, cb) {
  var self = this,
      clientPool = self._clientPool,
      deferred = Q.defer();

  clientPool.acquire(function (err, client) {
    if (err) {
      winston.error("Failed to acquire connection.");
      if (cb) {
        cb(new Error(err));
      } else {
        deferred.reject(err);
      }
      return;
    }
    try {
      client.execute(sql, bindParameters,
        function (err, results) {
          clientPool.release(client);
          if (err) {
            if (cb) {
              cb(new Error(err));
            } else {
              deferred.reject(err);
            }
            return;
          }
          if (cb) {
            cb(err, results);
          } else {
            deferred.resolve(results);
          }
        });
    } catch (err2) {
      try {
        clientPool.release(client);
      } catch (err3) {
        // can't do anything...
      }
      if (cb) {
        cb(err2);
      } else {
        deferred.reject(err2);
      }
    }
  });

  if (!cb) {
    return deferred.promise;
  }
};

It's then possible to retrieve the results of an arbitrary query using the preceding method, as shown in the following code snippet:

dbUtil.query("SELECT * FROM " +
             "table(tasker.task_mgmt.get_task(:1,:2))",
    [ taskId, req.user.userId ])
  .then(function (results) {
    // if no results, return 404 not found
    if (results.length === 0) {
      return next(Errors.HTTP_NotFound());
    }
    // create a new task with the database results
    // (will be in first row)
    req.task = new Task(results[0]);
    return next();
  })
  .catch(function (err) {
    return next(new Error(err));
  })
  .done();

The query used in the preceding code is an example of calling a stored function that returns a table structure. The results of the SELECT statement will depend on the parameters (the task ID and the username), and get_task will decide what data can be returned based on the user's authorization.

Using Passport to handle authentication and sessions

Although we've implemented our own authentication protocol, it's usually better to use one that has already been well vetted and is well understood, as well as one that suits your particular needs. In our case, we needed the demo to stand on its own without a lot of additional services, so we built our own protocol. Even so, we chose a well-known cryptographic method (PBKDF2), and are using a large number of iterations and large key lengths.

In order to implement authentication easily in Node.js, you'll probably want to use Passport (https://www.npmjs.org/package/passport). It has a large community and supports a large number of authentication schemes. If at all possible, try to use third-party authentication systems (for example, LDAP, AD, Kerberos, and so on). In our case, because our authentication method is custom, we chose to use the passport-req strategy (https://www.npmjs.org/package/passport-req). Since Tasker's authentication is token-based, we use this strategy to inspect a custom header that the client uses to pass us the authentication token. The following is a simplified diagram of how Tasker's authentication process works:

Please don't use our authentication strategy for anything that requires high levels of security. It's just an example, and isn't guaranteed to be secure in any way.

Before we can actually use Passport, we need to define how our authentication strategy actually works.
We do this by calling passport.use in our app.js file:

var passport = require("passport");
var ReqStrategy = require("passport-req").Strategy;
var Session = require("./models/session");

passport.use(new ReqStrategy(
  function (req, done) {
    var clientAuthToken = req.headers["x-auth-token"];
    var session = new Session(new DBUtils(clientPool));
    session.findSession(clientAuthToken)
      .then(function (results) {
        if (!results) { return done(null, false); }
        done(null, results);
      })
      .catch(function (err) {
        return done(err);
      })
      .done();
  }
));

In the preceding code, we've given Passport a new authentication strategy. Now, whenever Passport needs to authenticate a request, it will call this small section of code. You might be wondering what's going on in findSession. Here's the code:

Session.prototype.findSession = function (clientAuthToken, cb) {
  var self = this,
      deferred = Q.defer();

  // if no token, no sense in continuing
  if (typeof clientAuthToken === "undefined") {
    if (cb) { return cb(null, false); }
    deferred.reject();
    return deferred.promise;
  }

  // an auth token is of the form 1234.ABCDEF10284128401ABC13...
  var clientAuthTokenParts = clientAuthToken.split(".");
  if (!clientAuthTokenParts) {
    // no auth token, no session.
    if (cb) { return cb(null, false); }
    deferred.reject();
    return deferred.promise;
  }

  // get the parts
  var sessionId = clientAuthTokenParts[0],
      authToken = clientAuthTokenParts[1];

  // ask the database via dbutils if the token is recognized
  self._dbUtils.execute(
    "CALL tasker.security.verify_token (:1, :2, :3, :4, :5 ) INTO :6",
    [ sessionId,
      authToken, // authorization token
      self._dbUtils.outVarchar2({ size: 32 }),
      self._dbUtils.outVarchar2({ size: 4000 }),
      self._dbUtils.outVarchar2({ size: 4000 }),
      self._dbUtils.outVarchar2({ size: 1 }) ])
    .then(function (results) {
      // returnParam3 has a Y or N; Y is good auth
      if (results.returnParam3 === "Y") {
        // notify callback of successful auth
        var user = {
          userId:     results.returnParam,
          sessionId:  sessionId,
          nextToken:  results.returnParam1,
          hmacSecret: results.returnParam2
        };
        if (cb) { cb(null, user); }
        else { deferred.resolve(user); }
      } else {
        // auth failed
        if (cb) { cb(null, false); } else { deferred.reject(); }
      }
    })
    .catch(function (err) {
      if (cb) { return cb(err, false); }
      deferred.reject();
    })
    .done();

  if (!cb) {
    return deferred.promise;
  }
};

The dbUtils.execute() method is a wrapper around the Oracle query method we covered in the Executing queries section. Once a session has been retrieved from the database, Passport will want to serialize the user. This is usually just the user's ID, but we serialize a little more (which, from the preceding code, is the user's ID, session ID, and the HMAC secret):

passport.serializeUser(function (user, done) {
  done(null, user);
});

The serializeUser method is called after a successful authentication and it must be present, or an error will occur. There's also a deserializeUser method if you're using typical Passport sessions: this method is designed to restore the user information from the Passport session.

Before any of this will work, we also need to tell Express to use the Passport middleware:

app.use(passport.initialize());

Passport makes handling authentication simple, and it provides session support as well.
While we don't use it for Tasker, you can use it to support a typical session-based username/password authentication system quite easily with a single line of code:

app.use(passport.session());

If you're intending to use sessions with Passport, make sure you also provide a deserializeUser method.

Next, we need to implement the code to authenticate a user with their username and password. Remember, we initially require the user to log in using their username and password, and once authenticated, we handle all further requests using tokens. To do this, we need to write a portion of our API code.

Building API handlers

We won't cover the entire API in this section, but we will cover a couple of small pieces, especially as they pertain to authentication and retrieving data. First, we've codified our API in /tasker-srv/api-def in the code package for this book. You'll also want to take a look at /tasker-srv/api-utils to see how we parse this data structure into usable routes for the Express router. Basically, we codify our API by building a simple structure:

[
  { "route": "/auth", "actions": [ … ] },
  { "route": "/task", "actions": [ … ] },
  { "route": "/task/{:taskId}",
    "params": [ … ],
    "actions": [ … ] },
  …
]

Each route can have any number of actions and parameters. Parameters are equivalent to the Express Router's parameters. In the preceding example, {:taskId} is a parameter that will take on the value of whatever is in that particular location in the URI. For example, /task/21 will result in taskId having the value 21. This is useful for our actions because each action can then assume that the parameters have already been parsed, so any actions on the /task/{:taskId} route will already have task information at hand. The parameters are defined as follows:

{
  "name": "taskId",
  "type": "number",
  "description": "…",
  "returns": [ … ],
  "securedBy": "tasker-auth",
  "handler": function (req, res, next, taskId) { … }
}

Actions are defined as follows:

{
  "title": "Task",
  "action": "get-task",
  "verb": "get",
  "description": { … },   // hypermedia description
  "returns": [ … ],       // http status codes that are returned
  "example": { … },       // example response
  "href": "/task/{taskId}",
  "template": true,
  "accepts": [ "application/json", … ],
  "sends": [ "application/json", … ],
  "securedBy": "tasker-auth",
  "hmac": "tasker-256",
  "store": { … },
  "query-parameters": { … },
  "handler": function ( req, res, next ) { … }
}

Each handler is called whenever that particular route is accessed by a client using the correct HTTP verb (identified by verb in the prior code). This allows us to write a handler for each specific state transition in our API, which is nicer than having to write a large method that's responsible for the entire route. It also makes describing the API using hypermedia that much simpler, since we can require a portion of the API and call a simple utility method (/tasker-srv/api-utils/index.js) to generate the description for the client.
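To give a sense of what that generated description looks like from the client's point of view, the following is a rough, hand-written sketch of the kind of _links block a task response might carry. The field names and URIs here are our own illustration of the idea, not output copied from the Tasker source:

{
  "_links": {
    "self":      { "title": "Task",        "verb": "get", "href": "/task/42" },
    "update":    { "title": "Update Task", "verb": "put", "href": "/task/42" },
    "task-list": { "title": "Task List",   "verb": "get", "href": "/task" }
  },
  "_embedded": {}
}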
Since we're still working on how to handle authentication, here's how the API definition for the POST /auth route looks (the complete version is located at /tasker-srv/api-def/auth/login.js):

action = {
  "title": "Authenticate User",
  "action": "login",
  "description": [ … ],
  "example": { … },
  "returns": {
    200: "User authenticated; see information in body.",
    401: "Incorrect username or password.",
    …
  },
  "verb": "post",
  "href": "/auth",
  "accepts": [ "application/json", … ],
  "sends": [ "application/json", … ],
  "csrf": "tasker-csrf",
  "store": {
    "body": [ { name: "session-id", key: "sessionId" },
              { name: "hmac-secret", key: "hmacSecret" },
              { name: "user-id", key: "userId" },
              { name: "next-token", key: "nextToken" } ]
  },
  "template": {
    "user-id": {
      "title": "User Name", "key": "userId",
      "type": "string", "required": true,
      "maxLength": 32, "minLength": 1
    },
    "candidate-password": {
      "title": "Password", "key": "candidatePassword",
      "type": "string", "required": true,
      "maxLength": 255, "minLength": 1
    }
  },

The preceding code is largely documentation (but it is returned to the client when the client requests this resource). The following handler is what actually performs the authentication:

  "handler": function (req, res, next) {
    var session = new Session(new DBUtils(req.app.get("client-pool"))),
        username,
        password;
    // does our input validate?
    var validationResults = objUtils.validate(req.body, action.template);
    if (!validationResults.validates) {
      return next(Errors.HTTP_Bad_Request(validationResults.message));
    }
    // got here -- good; copy the values out
    username = req.body.userId;
    password = req.body.candidatePassword;
    // create a session with the username and password
    session.createSession(username, password)
      .then(function (results) {
        // no session? bad username or password
        if (!results) {
          return next(Errors.HTTP_Unauthorized());
        }
        // return the session information to the client
        var o = {
          sessionId:  results.sessionId,
          hmacSecret: results.hmacSecret,
          userId:     results.userId,
          nextToken:  results.nextToken,
          _links:     {},
          _embedded:  {}
        };
        // generate hypermedia
        apiUtils.generateHypermediaForAction(action, o._links, security, "self");
        [ require("../task/getTaskList"),
          require("../task/getTask"), …
          require("../auth/logout")
        ].forEach(function (apiAction) {
          apiUtils.generateHypermediaForAction(apiAction, o._links, security);
        });
        resUtils.json(res, 200, o);
      })
      .catch(function (err) {
        return next(err);
      })
      .done();
  }
};

The session.createSession method looks very similar to session.findSession, as shown in the following code:

Session.prototype.createSession = function (userName, candidatePassword, cb) {
  var self = this,
      deferred = Q.defer();

  if (typeof userName === "undefined" ||
      typeof candidatePassword === "undefined") {
    if (cb) { return cb(null, false); }
    deferred.reject();
    return deferred.promise;
  }

  // attempt to authenticate
  self._dbUtils.execute(
    "CALL tasker.security.authenticate_user( :1, :2, :3," +
    " :4, :5 ) INTO :6",
    [ userName, candidatePassword,
      self._dbUtils.outVarchar2({ size: 4000 }),
      self._dbUtils.outVarchar2({ size: 4000 }),
      self._dbUtils.outVarchar2({ size: 4000 }),
      self._dbUtils.outVarchar2({ size: 1 }) ])
    .then(function (results) {
      // returnParam3 has Y or N; Y is good auth
      if (results.returnParam3 === "Y") {
        // notify callback of auth info
        var user = {
          userId:     userName,
          sessionId:  results.returnParam,
          nextToken:  results.returnParam1,
          hmacSecret: results.returnParam2
        };
        if (cb) { cb(null, user); }
        else { deferred.resolve(user); }
      } else {
        // auth failed
        if (cb) { cb(null, false); }
        else { deferred.reject(); }
      }
    })
    .catch(function (err) {
      if (cb) { return cb(err, false); }
      deferred.reject();
    })
    .done();

  if (!cb) {
    return deferred.promise;
  }
};

Once the API is fully codified, we need to go back to app.js and tell Express that it should use the API's routes:

app.use("/", apiUtils.createRouterForApi(apiDef, checkAuth));

We also add an app-level variable so that whenever an API section needs to return the entire API as a hypermedia structure, it can do so without traversing the entire API again:

app.set("x-api-root",
  apiUtils.generateHypermediaForApi(apiDef, securityDef));

The checkAuth method shown previously is pretty simple; all it does is ensure that we don't authenticate more than once in a single request:

function checkAuth(req, res, next) {
  if (req.isAuthenticated()) {
    return next();
  }
  passport.authenticate("req")(req, res, next);
}

You might be wondering where we're actually forcing our handlers to use authentication. There's actually a bit of magic in /tasker-srv/api-utils; the relevant portions are the securedBy checks in createRouterForApi:

createRouterForApi: function (api, checkAuthFn) {
  var router = express.Router();
  // process each route in the api; a route consists of the
  // uri (route) and a series of verbs (get, post, etc.)
  api.forEach(function (apiRoute) {
    // add params
    if (typeof apiRoute.params !== "undefined") {
      apiRoute.params.forEach(function (param) {
        if (typeof param.securedBy !== "undefined") {
          router.param(param.name, function (req, res, next, v) {
            return checkAuthFn(req, res,
              param.handler.bind(this, req, res, next, v));
          });
        } else {
          router.param(param.name, param.handler);
        }
      });
    }
    var uri = apiRoute.route;
    // create a new route with the uri
    var route = router.route(uri);
    // process through each action
    apiRoute.actions.forEach(function (action) {
      // just in case we have more than one verb, split them out
      var verbs = action.verb.split(",");
      // and add the handler specified to the route
      // (if it's a valid verb)
      verbs.forEach(function (verb) {
        if (typeof route[verb] === "function") {
          if (typeof action.securedBy !== "undefined") {
            route[verb](checkAuthFn, action.handler);
          } else {
            route[verb](action.handler);
          }
        }
      });
    });
  });
  return router;
};

Once you've finished writing even a few handlers, you should be able to verify that the system works by posting requests to your API. First, make sure your server has started; we use the following command to start the server:

export NODE_ENV=development; npm start

For some of the routes, you can just load up a browser and point it at your server. If you type https://localhost:4443/ in your browser, you should see a response that looks a lot like this:

If you're thinking this looks styled, you're right. The Tasker API generates responses based on the client's requested format. The browser requests HTML, and so our API generates a styled HTML page as a response. For an app, the response is JSON because the app requests that the response be in JSON. If you want to see how this works, see /tasker-srv/res-utils/index.js.

If you want to actually send and receive data, though, you'll want to get a REST client rather than using the browser. There are many good free clients: Firefox has a couple of good ones, as does Chrome, or you can find a native client for your operating system. Although you can do everything with curl on the command line, RESTful clients are much easier to use and often offer useful features such as dynamic variables and built-in authentication methods, and many can act as simple automated testers.

Summary

In this article, we've covered how to build a web server that bridges the gap between our database backend and our mobile application. We've provided an overview of RESTful-like APIs, and we've also quickly shown how to implement such a web API using Node.js. We've also covered authentication and session handling using Passport.