
How-To Tutorials - Cross-Platform Mobile Development

96 Articles
Packt
07 Jun 2013
9 min read

QR Codes, Geolocation, Google Maps API, and HTML5 Video

(For more resources related to this topic, see here.)

QR codes

We love our smartphones. We love showing off what our smartphones can do. So, when those cryptic squares, as shown in the following figure, started showing up all over the place and befuddling the masses, smartphone users quickly stepped up and started showing people what it's all about in the same overly-enthusiastic manner that we whip them out to answer even the most trivial question heard in passing. And, since it looks like NFC isn't taking off anytime soon, we'd better be familiar with QR codes and how to leverage them.

Surveys show that knowledge and usage of QR codes is very high (http://researchaccess.com/2012/01/new-data-on-qrcode-adoption/):

- More than two-thirds of smartphone users have scanned a code
- More than 70 percent of the users say they'd do it again (especially for a discount)

Wait, what does this have to do with jQuery Mobile? Traffic. Big-time successful traffic. A banner ad is considered successful if only two percent of people click through (http://en.wikipedia.org/wiki/Clickthrough_rate). QR codes get more than 66 percent! I'd say it's a pretty good way to get people to our creations and, thus, should be of concern. But QR codes are for more than just URLs. Here we have a URL, a block of text, a phone number, and an SMS in the following QR codes:

There are many ways to generate QR codes (http://www.the-qrcode-generator.com/, http://www.qrstuff.com/). Really, just search for QR Code Generator on Google and you'll have numerous options.

Let us consider a local movie theater chain. Dickinson Theatres (dtmovies.com) has been around since the 1920s and is considering throwing its hat into the mobile ring. Perhaps they will invest in a mobile website, and go all-out in placing posters and ads in bus stops and other outdoor locations.
Naturally, people are going to start scanning, and this is valuable to us because they're going to tell us exactly which locations are paying off. This is really a first in the advertising industry. We have a medium that seems to spur people to interact on devices that will tell us exactly where they were when they scanned it. Geolocation matters, and this can help us find the right locations.

Geolocation

When GPS first came out on phones, it was pretty useless for anything other than police tracking in case of emergencies. Today, it is making the devices that we hold in our hands even more personal than our personal computers. For now, we can get a latitude, longitude, and timestamp very dependably. The geolocation API specification from the W3C can be found at http://dev.w3.org/geo/api/spec-source.html.

For now, we'll pretend that we have a poster prompting the user to scan a QR code to find the nearest theater and show the timings. It would bring the user to a page like this:

Since there's no better first date than dinner and a movie, the moviegoing crowd tends to skew a bit to the younger side. Unfortunately, that group does not tend to have a lot of money. They may have more feature phones than smartphones. Some might only have very basic browsers. Maybe they have JavaScript, but we can't count on it. If they do, they might have geolocation. Regardless, given the audience, progressive enhancement is going to be the key.

The first thing we'll do is create a base level page with a simple form that will submit a zip code to a server. Since we're using our template from before, we'll add validation to the form for anyone who has JavaScript using the validateMe class. If they have JavaScript and geolocation, we'll replace the form with a message saying that we're trying to find their location. For now, don't worry about creating this file. The source code is incomplete at this stage.
This page will evolve, and the final version will be in the source package for the article in the file called qrresponse.php, as shown in the following code:

<?php
  $documentTitle = "Dickinson Theatres";
  $headerLeftHref = "/";
  $headerLeftLinkText = "Home";
  $headerLeftIcon = "home";
  $headerTitle = "";
  $headerRightHref = "tel:8165555555";
  $headerRightLinkText = "Call";
  $headerRightIcon = "grid";
  $fullSiteLinkHref = "/";
?>
<!DOCTYPE html>
<html>
<head>
  <?php include("includes/meta.php"); ?>
</head>
<body>
  <div id="qrfindclosest" data-role="page">
    <div class="logoContainer ui-shadow"></div>
    <div data-role="content">
      <div id="latLong">
        <form id="findTheaterForm" action="fullshowtimes.php" method="get" class="validateMe">
          <p>
            <label for="zip">Enter Zip Code</label>
            <input type="tel" name="zip" id="zip" class="required number"/>
          </p>
          <p><input type="submit" value="Go"></p>
        </form>
      </div>
      <p>
        <ul id="showing" data-role="listview" class="movieListings" data-dividertheme="g">
        </ul>
      </p>
    </div>
    <?php include("includes/footer.php"); ?>
  </div>
  <script type="text/javascript">
    //We'll put our page specific code here soon
  </script>
</body>
</html>

For anyone who does not have JavaScript, this is what they will see, nothing special. We could spruce it up with a little CSS, but what would be the point? If they're on a browser that doesn't have JavaScript, there's a pretty good chance their browser is also miserable at rendering CSS. That's fine really. After all, progressive enhancement doesn't necessarily mean making it wonderful for everyone, it just means being sure it works for everyone. Most will never see this, but if they do, it will work just fine.

For everyone else, we'll need to start working with JavaScript to get our theater data in a format we can digest programmatically. JSON is perfectly suited for this task. If you are already familiar with the concept of JSON, skip to the next paragraph now.
If you're not familiar with it, basically, it's another way of shipping data across the Interwebs. It's like XML but more useful. It's less verbose and can be directly interacted with and manipulated using JavaScript because it's actually written in JavaScript. JSON is an acronym for JavaScript Object Notation. A special thank you goes out to Douglas Crockford (the father of JSON). XML still has its place on the server. It has no business in the browser as a data format if you can get JSON. This is such a widespread view that at the last developer conference I went to, one of the speakers chuckled as he asked, "Is anyone still actually using XML?"

{
  "theaters":[
    {
      "id":161,
      "name":"Chenal 9 IMAX Theatre",
      "address":"17825 Chenal Parkway",
      "city":"Little Rock",
      "state":"AR",
      "zip":"72223",
      "distance":9999,
      "geo":{"lat":34.7684775,"long":-92.4599322},
      "phone":"501-821-2616"
    },
    {
      "id":158,
      "name":"Gateway 12 IMAX Theatre",
      "address":"1935 S. Signal Butte",
      "city":"Mesa",
      "state":"AZ",
      "zip":"85209",
      "distance":9999,
      "geo":{"lat":33.3788674,"long":-111.6016081},
      "phone":"480-354-8030"
    },
    {
      "id":135,
      "name":"Northglen 14 Theatre",
      "address":"4900 N.E. 80th Street",
      "city":"Kansas City",
      "state":"MO",
      "zip":"64119",
      "distance":9999,
      "geo":{"lat":39.240027,"long":-94.5226432},
      "phone":"816-468-1100"
    }
  ]
}

Now that we have data to work with, we can prepare the on-page scripts. Let's put the following chunks of JavaScript in a script tag at the bottom of the HTML where we had the comment "We'll put our page specific code here soon":

//declare our global variables
var theaterData = null;
var timestamp = null;
var latitude = null;
var longitude = null;
var closestTheater = null;

//Once the page is initialized, hide the manual zip code form
//and place a message saying that we're attempting to find
//their location.
$(document).on("pageinit", "#qrfindclosest", function(){
  if(navigator.geolocation){
    $("#findTheaterForm").hide();
    $("#latLong").append("<p id='finding'>Finding your location...</p>");
  }
});

//Once the page is showing, go grab the theater data and find out
//which one is closest.
$(document).on("pageshow", "#qrfindclosest", function(){
  theaterData = $.getJSON("js/theaters.js", function(data){
    theaterData = data;
    selectClosestTheater();
  });
});

function selectClosestTheater(){
  navigator.geolocation.getCurrentPosition(
    function(position) { //success
      latitude = position.coords.latitude;
      longitude = position.coords.longitude;
      timestamp = position.timestamp;
      for(var x = 0; x < theaterData.theaters.length; x++) {
        var theater = theaterData.theaters[x];
        var distance = getDistance(latitude, longitude,
          theater.geo.lat, theater.geo.long);
        theaterData.theaters[x].distance = distance;
      }
      theaterData.theaters.sort(compareDistances);
      closestTheater = theaterData.theaters[0];
      _gaq.push(['_trackEvent', "qr", "ad_scan",
        (""+latitude+","+longitude) ]);
      var dt = new Date();
      dt.setTime(timestamp);
      $("#latLong").html("<div class='theaterName'>"
        +closestTheater.name+"</div><strong>"
        +closestTheater.distance.toFixed(2)
        +" miles</strong><br/>"
        +closestTheater.address+"<br/>"
        +closestTheater.city+", "+closestTheater.state+" "
        +closestTheater.zip+"<br/><a href='tel:"
        +closestTheater.phone+"'>"
        +closestTheater.phone+"</a>");
      $("#showing").load("showtimes.php", function(){
        $("#showing").listview('refresh');
      });
    },
    function(error){ //error
      switch(error.code) {
        case error.TIMEOUT:
          $("#latLong").prepend("<div class='ui-bar-e'>Unable to get your position: Timeout</div>");
          break;
        case error.POSITION_UNAVAILABLE:
          $("#latLong").prepend("<div class='ui-bar-e'>Unable to get your position: Position unavailable</div>");
          break;
        case error.PERMISSION_DENIED:
          $("#latLong").prepend("<div class='ui-bar-e'>Unable to get your position: Permission denied. You may want to check your settings.</div>");
          break;
        case error.UNKNOWN_ERROR:
          $("#latLong").prepend("<div class='ui-bar-e'>Unknown error while trying to access your position.</div>");
          break;
      }
      $("#finding").hide();
      $("#findTheaterForm").show();
    },
    {maximumAge:600000}); //nothing too stale
}

The key here is the function geolocation.getCurrentPosition, which will prompt the user to allow us access to their location data, as shown here on the iPhone.

If somebody is a privacy advocate, they may have turned off all location services. In this case, we'll need to inform the user that their choice has impacted our ability to help them. That's what the error function is all about. In such a case, we'll display an error message and show the standard form again.
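Note that the listing above calls two helpers, getDistance and compareDistances, that are not shown at this stage. A plausible sketch (assuming distances in miles via the haversine formula; the article's actual implementations may differ) could look like this:

```javascript
// Rough haversine distance in miles between two lat/long pairs.
// Assumed helper -- the article's own implementation may differ.
function getDistance(lat1, long1, lat2, long2) {
  var R = 3959; // mean Earth radius in miles
  var toRad = function (deg) { return deg * Math.PI / 180; };
  var dLat = toRad(lat2 - lat1);
  var dLong = toRad(long2 - long1);
  var a = Math.sin(dLat / 2) * Math.sin(dLat / 2) +
          Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) *
          Math.sin(dLong / 2) * Math.sin(dLong / 2);
  return R * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
}

// Comparator for Array.prototype.sort -- closest theater first.
function compareDistances(a, b) {
  return a.distance - b.distance;
}
```

With these in place, theaterData.theaters.sort(compareDistances) puts the nearest theater at index 0, which is exactly what the success callback relies on.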

Packt
06 Oct 2015
7 min read

Creating Instagram Clone Layout using Ionic framework

In this article by Zainul Setyo Pamungkas, author of the book PhoneGap 4 Mobile Application Development Cookbook, we will see how Ionic framework is one of the most popular HTML5 frameworks for hybrid application development. Ionic provides native-like UI components that users can use and customize. (For more resources related to this topic, see here.)

In this article, we will create a clone of the Instagram mobile app layout:

First, we need to create a new Ionic tabs application project named ionSnap:

ionic start ionSnap tabs

Change directory to ionSnap:

cd ionSnap

Then add device platforms to the project:

ionic platform add ios
ionic platform add android

Let's change the tab names. Open www/templates/tabs.html and edit each title attribute of ion-tab:

<ion-tabs class="tabs-icon-top tabs-color-active-positive">
  <ion-tab title="Timeline" icon-off="ion-ios-pulse" icon-on="ion-ios-pulse-strong" href="#/tab/dash">
    <ion-nav-view name="tab-dash"></ion-nav-view>
  </ion-tab>
  <ion-tab title="Explore" icon-off="ion-ios-search" icon-on="ion-ios-search" href="#/tab/chats">
    <ion-nav-view name="tab-chats"></ion-nav-view>
  </ion-tab>
  <ion-tab title="Profile" icon-off="ion-ios-person-outline" icon-on="ion-person" href="#/tab/account">
    <ion-nav-view name="tab-account"></ion-nav-view>
  </ion-tab>
</ion-tabs>

We have to clean our application to start a new tab-based application.
Open www/templates/tab-dash.html and clean the content so we have the following code:

<ion-view view-title="Timeline">
  <ion-content class="padding">
  </ion-content>
</ion-view>

Open www/templates/tab-chats.html and clean it up:

<ion-view view-title="Explore">
  <ion-content>
  </ion-content>
</ion-view>

Open www/templates/tab-account.html and clean it up:

<ion-view view-title="Profile">
  <ion-content>
  </ion-content>
</ion-view>

Open www/js/controllers.js and delete the methods inside the controllers so we have the following code:

angular.module('starter.controllers', [])
.controller('DashCtrl', function($scope) {})
.controller('ChatsCtrl', function($scope, Chats) {})
.controller('ChatDetailCtrl', function($scope, $stateParams, Chats) {})
.controller('AccountCtrl', function($scope) {});

We have cleaned up our tabs application. If we run the application, we will have a view like this:

Next, we will create the layout for the timeline view. Each post in the timeline will display a username, image, Like button, and Comment button. Open www/templates/tab-dash.html and add the following card list:

<ion-view view-title="Timelines">
  <ion-content class="has-header">
    <div class="list card">
      <div class="item item-avatar">
        <img src="http://placehold.it/50x50">
        <h2>Some title</h2>
        <p>November 05, 1955</p>
      </div>
      <div class="item item-body">
        <img class="full-image" src="http://placehold.it/500x500">
        <p>
          <a href="#" class="subdued">1 Like</a>
          <a href="#" class="subdued">5 Comments</a>
        </p>
      </div>
      <div class="item tabs tabs-secondary tabs-icon-left">
        <a class="tab-item" href="#">
          <i class="icon ion-heart"></i>
          Like
        </a>
        <a class="tab-item" href="#">
          <i class="icon ion-chatbox"></i>
          Comment
        </a>
        <a class="tab-item" href="#">
          <i class="icon ion-share"></i>
          Share
        </a>
      </div>
    </div>
  </ion-content>
</ion-view>

Our timeline view will look like this:

Then, we will create the explore page to display photos in a grid view.
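The timeline card above is hard-coded for illustration. A hypothetical next step (my own sketch, not part of the article) is to move the post data into the controller and render the cards with ng-repeat; the DashCtrl we emptied earlier could then expose a model built by a helper like this:

```javascript
// Illustrative timeline model. In www/js/controllers.js this could be
// wired up as:
//   .controller('DashCtrl', function($scope) {
//     $scope.posts = buildDemoPosts();
//   });
// and the card div in tab-dash.html would get ng-repeat="post in posts".
function buildDemoPosts() {
  return [
    { user: 'Some title', date: 'November 05, 1955',
      image: 'http://placehold.it/500x500', likes: 1, comments: 5 },
    { user: 'Another post', date: 'October 06, 2015',
      image: 'http://placehold.it/500x500', likes: 3, comments: 2 }
  ];
}
```

In a real app the array would come from an Angular service backed by an HTTP API rather than being hard-coded.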
First, we need to add some styles in our www/css/styles.css:

.profile ul {
  list-style-type: none;
}
.imageholder {
  width: 100%;
  height: auto;
  display: block;
  margin-left: auto;
  margin-right: auto;
}
.profile li img {
  float: left;
  border: 5px solid #fff;
  width: 30%;
  height: 10%;
  -webkit-transition: box-shadow 0.5s ease;
  -moz-transition: box-shadow 0.5s ease;
  -o-transition: box-shadow 0.5s ease;
  -ms-transition: box-shadow 0.5s ease;
  transition: box-shadow 0.5s ease;
}
.profile li img:hover {
  -webkit-box-shadow: 0px 0px 7px rgba(255, 255, 255, 0.9);
  box-shadow: 0px 0px 7px rgba(255, 255, 255, 0.9);
}

Then we just add a list of image items, like so:

<ion-view view-title="Explore">
  <ion-content>
    <ul class="profile" style="margin-left:5%;">
      <li class="profile">
        <a href="#"><img src="http://placehold.it/50x50"></a>
      </li>
      <li class="profile" style="list-style-type: none;">
        <a href="#"><img src="http://placehold.it/50x50"></a>
      </li>
      <li class="profile" style="list-style-type: none;">
        <a href="#"><img src="http://placehold.it/50x50"></a>
      </li>
      <li class="profile" style="list-style-type: none;">
        <a href="#"><img src="http://placehold.it/50x50"></a>
      </li>
      <li class="profile" style="list-style-type: none;">
        <a href="#"><img src="http://placehold.it/50x50"></a>
      </li>
      <li class="profile" style="list-style-type: none;">
        <a href="#"><img src="http://placehold.it/50x50"></a>
      </li>
    </ul>
  </ion-content>
</ion-view>

Now our explore page will look like this:

Finally, we will create our profile page. The profile page consists of two parts. The first is the profile header, which shows user information such as the username, profile picture, and number of posts. The second part is a grid list of pictures uploaded by the user, similar to the grid view on the explore page.
To add the profile header, open www/css/style.css and add the following styles below the existing styles:

.text-white{
  color:#fff;
}
.profile-pic {
  width: 30%;
  height: auto;
  display: block;
  margin-top: -50%;
  margin-left: auto;
  margin-right: auto;
  margin-bottom: 20%;
  border-radius: 4em 4em 4em / 4em 4em;
}

Open www/templates/tab-account.html and then add the following code inside ion-content:

<ion-content>
  <div class="user-profile" style="width:100%;height:auto;background-color:#fff;float:left;">
    <img src="img/cover.jpg">
    <div class="avatar">
      <img src="img/ionic.png" class="profile-pic">
      <ul>
        <li>
          <p class="text-white text-center" style="margin-top:-15%;margin-bottom:10%;display:block;">@ionsnap, 6 Pictures</p>
        </li>
      </ul>
    </div>
  </div>
  …

The second part of the profile page is the grid list of user images. Let's add some pictures under the profile header and before the closing ion-content tag:

<ul class="profile" style="margin-left:5%;">
  <li class="profile">
    <a href="#"><img src="http://placehold.it/100x100"></a>
  </li>
  <li class="profile" style="list-style-type: none;">
    <a href="#"><img src="http://placehold.it/100x100"></a>
  </li>
  <li class="profile" style="list-style-type: none;">
    <a href="#"><img src="http://placehold.it/100x100"></a>
  </li>
  <li class="profile" style="list-style-type: none;">
    <a href="#"><img src="http://placehold.it/100x100"></a>
  </li>
  <li class="profile" style="list-style-type: none;">
    <a href="#"><img src="http://placehold.it/100x100"></a>
  </li>
  <li class="profile" style="list-style-type: none;">
    <a href="#"><img src="http://placehold.it/100x100"></a>
  </li>
</ul>
</ion-content>

Our profile page will now look like this:

Summary

In this article we have seen the steps to create an Instagram clone layout with the Ionic framework, with the help of an example. If you are a developer who wants to get started with mobile application development using PhoneGap, then this article is for you. A basic understanding of web technologies such as HTML, CSS, and JavaScript is a must.
Resources for Article: Further resources on this subject: The Camera API [article] Working with the sharing plugin [article] Building the Middle-Tier [article]

Packt
08 Sep 2015
15 min read

Application Development Workflow

In this article by Ivan Turkovic, author of the book PhoneGap Essentials, you will learn some of the basics of PhoneGap application development and how to start building an application. We will go over some useful steps and tips to get the most out of your PhoneGap application. In this article, you will learn the following topics:

- An introduction to a development workflow
- Best practices
- Testing

(For more resources related to this topic, see here.)

An introduction to a development workflow

PhoneGap solves a great problem of developing mobile applications for multiple platforms at the same time, but it is still pretty open about how you want to approach the creation of an application. You do not have any predefined frameworks that come out of the box by default. It just allows you to use standard web technologies such as the HTML5, CSS3, and JavaScript languages for hybrid mobile application development. The applications are executed in wrappers that are custom-built to work on every platform, and the underlying web view behaves in the same way on all the platforms. For accessing device APIs, it relies on standard API bindings to access every device's sensors or other features.

The developers who start using PhoneGap usually come from different backgrounds, as shown in the following list:

- Mobile developers who want to expand the functionality of their application on other platforms but do not want to learn a new language for each platform
- Web developers who want to port their existing desktop web application to a mobile application; if they are using a responsive design, it is quite simple to do this
- Experienced mobile developers who want to use both native and web components in their application, so that the web components can communicate with the internal native application code as well

The PhoneGap project itself is pretty simple.
By default, it can open an index.html page and load the initial CSS file, JavaScript, and other resources needed to run it. Besides the user's resources, it needs to refer to the cordova.js file, which provides the API bindings for all the plugins. From here onwards, you can take different steps, but usually the process falls into two main workflows: web project development and native platform development.

Web project development

A web project development workflow can be used when you want to create a PhoneGap application that runs on many mobile operating systems with as few changes as possible for any specific one. So there is a single codebase that works across all the different devices. This has become possible in the latest versions since the introduction of the command-line interface (CLI). It automates the tedious work involved in a lot of the functionality while taking care of each platform, such as building the app, copying the web assets into the correct location for every supported platform, adding platform-specific changes, and finally running build scripts to generate binaries. This process can be automated even further with build-automation tools such as Gulp or Grunt. You can run these tasks before running PhoneGap commands; this way you can optimize the assets before they are used. You can also run JSLint automatically for any change, or do automatic builds for every platform that is available.

Native platform development

A native platform development workflow can be imagined as a focus on building an application for a single platform and the need to change the lower-level platform details. The benefit of using this approach is that it gives you more flexibility: you can mix native code with WebView code and have the two communicate with each other.
This is appropriate for applications where a section of the features is hard to reproduce with web views only; for example, a video app where you do the video editing in native code while all the social features and interaction are done with web views. Even if you want to start with this approach, it is better to begin the new project with the web project development workflow and then separate out the code for your specific needs. One thing to keep in mind is that, to develop with this approach, it is better to work in the more advanced IDE environments that you would usually use for building native applications.

Best practices

Running hybrid mobile applications requires some sacrifices in terms of performance and functionality, so it is good to go over some useful tips for new PhoneGap developers.

Use local assets for the UI

As mobile devices are limited by connection speeds and mobile data plans are not generous with bandwidth, you need to bundle all the UI components into the application before deploying to the app store. Nobody will want to use an application that takes a few seconds to load a server-rendered UI when the same thing could be done on the client. For example, Google Fonts or other non-UI assets that are usually loaded from the server for web applications are good enough for the development process, but for production you need to store all the assets in the application's container and not download them while it runs. You do not want the application to wait while an important part is being loaded.

The best advice on the UI that I can give you is to adopt the Single Page Application (SPA) design; it is a client-side application that is run from one request to a web page.
Initial loading means taking care of loading all the assets that the application requires in order to function; any further updates are done via AJAX (such as loading data). When you use SPA, not only do you minimize the amount of interaction with the server, you also organize your application in a more efficient manner. One of the benefits is that the application doesn't need to wait for a deviceready event for each additional page that it loads from the start.

Network access for data

As you have seen in the previous section, there are many limitations that mobile applications face with the network connection, from mobile data plans to network latency. So you do not want to rely on it for crucial elements, unless real-time communication is required for the application. Try to keep network access only for crucial data; everything else that is used frequently can be packed into the assets. If the received data does not change often, it is advisable to cache it for offline use. There are many ways to achieve this, such as localStorage, sessionStorage, WebSQL, or a file. When loading data, try to load only the data you need at that moment. If you have a comment section, it doesn't make sense to load all thousand comments; the first twenty should be enough to start with.

Non-blocking UI

When you are loading additional data to show in the application, don't pause the application until you receive all the data that you need. You can add some animation or a spinner to show the progress. Do not let the user stare at the same screen after pressing a button. Try to disable actions once they are in motion in order to prevent sending the same action multiple times.

CSS animations

As most modern mobile platforms now support CSS3 with a more or less consistent feature set, it is better to create animations and transitions with CSS rather than with plain JavaScript DOM manipulation, which was the approach before CSS3.
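Stepping back to the data-caching advice for a moment: one hedged way to cache infrequently changing data is a small timestamped wrapper. The storage backend is passed in, so the same helper works with window.localStorage in the app and with a stub elsewhere; all names here are illustrative.

```javascript
// Cache a JSON-serializable value with a timestamp so stale entries
// can be ignored. `storage` is any object with getItem/setItem,
// e.g. window.localStorage in the browser.
function cachePut(storage, key, value) {
  storage.setItem(key, JSON.stringify({ savedAt: Date.now(), value: value }));
}

function cacheGet(storage, key, maxAgeMs) {
  var raw = storage.getItem(key);
  if (!raw) { return null; }           // nothing cached yet
  var entry = JSON.parse(raw);
  if (Date.now() - entry.savedAt > maxAgeMs) { return null; } // too stale
  return entry.value;
}
```

A typical call site would try cacheGet first and fall back to an AJAX request (followed by cachePut) only on a miss.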
CSS3 is much faster, as the browser engine supports hardware acceleration of CSS animations, and it is more fluid than JavaScript animations. CSS3 supports translations and full keyframe animations as well, so you can be really creative in making your application more interactive.

Click events

You should avoid click events at all costs and use only touch events. Click events work in the same way as they do in a desktop browser, but they take longer to process, as the mobile browser engine needs to process the touch or touchhold events before firing a click event. This usually takes 300 ms, which is more than enough to give an impression of slow response. So try to use touchstart or touchend events instead. There is also a solution for this called FastClick.js. It is a simple, easy-to-use library for eliminating the 300 ms delay between a physical tap and the firing of a click event on mobile browsers.

Performance

The performance that we get on desktops isn't reflected in mobile devices. Most developers assume that the performance doesn't change a lot, especially as most of them test their applications on the latest mobile devices, while the vast majority of users use mobile devices that are 2-3 years old. You have to keep in mind that even the latest mobile devices have a slower CPU, less RAM, and a weaker GPU. Recently, mobile devices have been catching up in the sheer numbers of these components but, in reality, they are slower, and their maximum performance is limited by a battery that prevents them from sustaining maximum performance for a prolonged time.

Optimize the image assets

We are no longer limited by the app size that we need to deploy. However, you need to optimize the assets, especially images, as they make up a large part of the assets, and make them appropriate for the device. You should prepare images in the right size; do not ship the biggest image you have and force the mobile device to scale it in HTML.
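Returning to the click-delay advice for a moment, the core idea behind a fast-tap binding can be sketched as follows. This is illustrative only; FastClick.js handles many more edge cases (scrolling, form inputs, and so on).

```javascript
// Sketch of the fast-tap idea: act on touchend and swallow the
// browser's delayed synthetic click so the handler fires only once.
function fastTap(el, handler) {
  var tappedRecently = false;
  el.addEventListener('touchend', function (e) {
    tappedRecently = true;
    e.preventDefault(); // keep the delayed click from firing the handler again
    handler(e);
  }, false);
  el.addEventListener('click', function (e) {
    if (tappedRecently) {   // this is the late synthetic click
      tappedRecently = false;
      e.preventDefault();
      return;
    }
    handler(e);             // mouse or keyboard activation on non-touch devices
  }, false);
}
```

In the app you would call fastTap(document.getElementById('someButton'), onTap) instead of binding a plain click handler.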
Choosing the right image size is not an easy task if you are developing an application that should support a wide array of screens, especially for Android, which has a very fragmented market with different screen sizes. Scaled images might show additional artifacts on the screen and might not look so crisp, and you will be hogging additional memory for an image that could have left a smaller footprint. You should remember that mobile devices still have limited resources and the battery doesn't last forever. If you are going to use PhoneGap Build, you will also need to make sure you do not exceed the app size limit that the service imposes.

Offline status

As we all know, network access is slow and limited, and network coverage is not perfect, so it is quite possible that your application will end up working in offline mode even in the usual locations. Bad reception can be caused by being inside a building with thick walls or in a basement; some weather conditions can affect reception too. The application should be able to handle this situation and respond to it properly, such as by limiting the parts of the application that require a network connection, or by caching data and syncing it when you are online once again. This is one of the aspects that developers usually forget to test: running the app in offline mode to see how it behaves under such conditions. There is a plugin available for detecting the current state and the events fired when the app passes between these two modes.

Load only what you need

A lot of developers do this, including myself. We need some part of a library, or a widget from a framework that we don't need for anything else, and yet we are a bit lazy about loading just that specific element, so we load the full framework. This can pull in an immense amount of resources that we will never need, but they will still run in the background.
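Back in the offline-status discussion: detecting transitions can be as simple as listening for the online and offline events (the Cordova network-information plugin fires the same events and additionally exposes navigator.connection.type). A hedged sketch, with the event target passed in so it can be exercised outside a browser:

```javascript
// Wire up connectivity handlers. `target` is usually `window` in the
// app (or `document` for Cordova's plugin events); it is a parameter
// here only so the helper can also be driven by a stub.
function watchConnectivity(target, onOnline, onOffline) {
  target.addEventListener('online', onOnline, false);
  target.addEventListener('offline', onOffline, false);
}
```

The onOffline handler would typically disable network-dependent screens and switch to cached data; onOnline would re-enable them and kick off a sync.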
It might also be the root cause of some problems, as some libraries do not mix well, and we can spend hours trying to solve the resulting issues.

Transparency

You should use as few elements with transparent parts as possible, as they are quite processor-intensive: the screen needs to be updated on every change behind them. The same applies to other processor-intensive visual effects such as shadows or gradients. The great thing is that all the major platforms have moved away from flashy graphical elements and started using flat UI design.

JSHint

If you use JSHint throughout development, it will save you a lot of time when writing JavaScript. It is a static code analysis tool for checking whether JavaScript source code complies with coding rules. It will detect the common mistakes made with JavaScript; as JavaScript is not a compiled language, you otherwise can't see an error until you run the code. At the same time, JSHint can be a very restrictive and demanding tool. Many beginners in JavaScript, PhoneGap, or mobile programming could be overwhelmed by the number of errors or bad practices that JSHint points out.

Testing

The testing of applications is an important aspect of building applications, and mobile applications are no exception. For most development that doesn't require native device APIs, you can use the platform simulators and see the results. However, if you are using native device APIs that are not supported by simulators, then you need a real device in order to run tests. It is not unusual to use desktop browsers resized to a mobile device's screen resolution to emulate its screen while you are developing the application, just to test the UI screens, since it is much faster and easier than building and running the application on a simulator or real device for every small change.
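As a brief aside on the JSHint recommendation above, the rules live in a .jshintrc file at the project root. A hypothetical starting configuration (option names are real JSHint options; the particular choices are mine) might look like this:

```javascript
// .jshintrc -- JSHint strips comments from this JSON file
{
  "browser": true,   // assume browser globals (window, document)
  "jquery": true,    // assume $ / jQuery are defined
  "undef": true,     // warn on use of undeclared variables
  "unused": true,    // warn on variables that are never used
  "eqeqeq": true,    // require === / !== instead of == / !=
  "maxdepth": 3      // keep nesting shallow
}
```

Running jshint www/js from a pre-build task then flags problems before the app ever reaches a device.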
There is a great plugin for the Google Chrome browser called Apache Ripple. It can be run without any additional tools; the Apache Ripple simulator runs as a web app in the Google Chrome browser. With Cordova, it can be used to simulate your app on a number of iOS and Android devices, and it provides basic support for the core Cordova plugins such as Geolocation and Device Orientation. You can also run the application in a real device's browser or use the PhoneGap developer app. This simplifies the workflow, as you can test the application on your mobile device without the need to re-sign, recompile, or reinstall it to test the code. The only disadvantage is that with simulators you cannot access the device APIs that aren't available in regular web browsers; the PhoneGap developer app, by contrast, allows you to access device APIs as long as you are using one of the supplied APIs. Remember to always test the application on real devices, at least before deploying to the app store. Computers have almost unlimited resources compared to mobile devices, so an application that runs flawlessly on the computer might fail on mobile devices due to low memory. As simulators are faster than real devices, you might get the impression that the app will work equally fast on every device, but it won't, especially on older devices. So, if you have an older device, it is better to test the responsiveness on it. Another reason to use a mobile device instead of the simulator is that it is hard to get a good usability impression from clicking on the interface on a computer screen, without your fingers interfering and blocking the view as they would on the device. Even though it is rare to hit bugs in plain PhoneGap when a new version is introduced, it can still happen. If you use a UI framework, it is good to try it on different versions of the operating systems, as it might not work flawlessly on each of them.
Even though hybrid mobile application development has been available for some time, it is still evolving, and as yet there is no default UI framework to use; even PhoneGap itself is still evolving. The same applies to the different plugins: some features might get deprecated or lose support, so it is good to implement alternatives or at least tell your users why something will not work. From experience, the average PhoneGap application will use at least ten plugins or different libraries in the final deployment, and every additional plugin or library installed can cause conflicts with another one.

Summary

In this article, we covered more advanced topics that any PhoneGap developer should get into in more detail once he/she has mastered the essential topics.

Resources for Article:

Further resources on this subject:

Building the Middle-Tier [article]

Working with the sharing plugin [article]

Getting Ready to Launch Your PhoneGap App in the Real World [article]
Building the Middle-Tier

Packt
23 Dec 2014
34 min read
In this article by Kerri Shotts, the author of the book PhoneGap for Enterprise, we cover how to build a web server that bridges the gap between our database backend and our mobile application. If you browse any Cordova/PhoneGap forum, you'll often come across posts asking how to connect to and query a backend database. In this article, we will look at the reasons why it is necessary to interact with your backend database using an intermediary service. If the business logic resides within the database, the middle-tier might be a very simple layer wrapping the data store, but it can also implement a significant portion of the business logic itself. The middle-tier also usually handles session authentication logic. Although many enterprise projects will already have a middle-tier in place, it's useful to understand how a middle-tier works, and how to implement one if you ever need to build a solution from the ground up. In this article, we'll focus heavily on these topics:

Typical middle-tier architecture

Designing a RESTful-like API

Implementing a RESTful-like hypermedia API using Node.js

Connecting to the backend database

Executing queries

Handling authentication using Passport

Building API handlers

You are welcome to implement your middle-tier using any technology with which you are comfortable. The topics that we will cover in this article can be applied to any middle-tier platform.

Middle-tier architecture

It's tempting, especially for simple applications, to want to connect your mobile app directly to your data store. This is an incredibly bad idea: it leaves your data store vulnerable and exposed to attacks from the outside world (unless you require the user to log in over a VPN). It also means that your mobile app has a lot of code dedicated solely to querying your data store, which makes for a tightly coupled environment.
If you ever want to change your database platform or modify the table structures, you will need to update the app, and any app that wasn't updated will stop working. Furthermore, if you want another system to access the data, for example, a reporting solution, you will need to repeat the same queries and logic already implemented in your app in order to ensure consistency. For these reasons alone, it's a bad idea to directly connect your mobile app to your backend database. However, there's one more good reason: Cordova has no nonlocal database drivers whatsoever. Although it's not unusual for a desktop application to make a direct connection to your database on an internal network, Cordova has no facility to load a database driver to interface directly with an Oracle or MySQL database. This means that you must build an intermediary service to bridge the gap from your database backend to your mobile app. No middle-tier is exactly the same, but for web and mobile apps, this intermediary service—also called an application server—is typically a relatively simple web server. This server accepts incoming requests from a client (our mobile app or a website), processes them, and returns the appropriate results. In order to do so, the web server parses these requests using a variety of middleware (security, session handling, cookie handling, request parsing, and so on) and then executes the appropriate request handler for the request. This handler then needs to pass this request on to the business logic handler, which, in our case, lives on the database server. The business logic will determine how to react to the request and returns the appropriate data to the request handler. The request handler transforms this data into something usable by the client, for example, JSON or XML, and returns it to the client. The middle-tier provides an Application Programming Interface (API). 
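To make that flow concrete, here is a minimal sketch of a request handler that delegates to a business-logic function and transforms the result into JSON for the client. All names here are hypothetical, not the actual Tasker code; in Tasker, the business-logic tier lives in the database as PL/SQL.

```javascript
// Stand-in for the business-logic tier: it decides what the
// user is allowed to see; the handler never touches SQL.
function getTaskComments( taskId, user ) {
  if ( !user.authenticated ) {
    return null; // business logic denies access
  }
  return [ { taskId: taskId, comment: "Looks good" } ];
}

// The request handler: parse the request, delegate to the
// business logic, and serialize the result for the client.
function handleGetTaskComments( req ) {
  var comments = getTaskComments( req.params.taskId, req.user );
  if ( comments === null ) {
    return { status: 401, body: JSON.stringify( { error: "unauthorized" } ) };
  }
  return { status: 200, body: JSON.stringify( comments ) };
}

var res = handleGetTaskComments( {
  params: { taskId: 21 },
  user: { authenticated: true }
} );
console.log( res.status, res.body ); // 200 and the JSON comment list
```

Because the client only ever sees the JSON the handler emits, the database platform or table structure can change without the mobile app noticing, which is exactly the decoupling argued for above.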
Beyond authentication and session handling, the middle-tier provides a set of reusable components that perform specific tasks by delegating these tasks to lower tiers. As an example, one of the components of our Tasker app is named get-task-comments. Provided the user is properly authenticated, the component will request a specific task from the business logic and return the attached comments. Our mobile app (or any other consumer) only needs to know how to call get-task-comments. This decouples the client from the database and ensures that we aren't unnecessarily repeating code. The flow of request and response looks a lot like the following figure:

Designing a RESTful-like API

A mobile app interfaces with your business logic and data store via an API provided by the application server middle-tier. Exactly how this API is implemented and how the client uses it is up to the developers of the system. In the past, this has often meant using web services (over HTTP) with information interchange via Simple Object Access Protocol (SOAP). Recently, RESTful APIs have become the norm when working with web and mobile applications. These APIs conform to the following constraints:

Client/Server: Clients are not concerned with how data is stored (that's the server's job), and servers are not concerned with state (that's the client's job). They should be able to be developed and/or replaced completely independently of each other (low coupling) as long as the API remains the same.

Stateless: Each request should have the necessary information contained within it so that the server can properly handle the request. The server isn't concerned about session states; this is the sole domain of the client.

Cacheable: Responses must specify if they can be cached or not. Proper management of this can greatly improve performance and scalability.

Layered: The client shouldn't be able to tell if there are any intermediary servers between it and the server.
This ensures that additional servers can be inserted into the chain to provide caching, security, load balancing, and so on.

Code-on-demand: This is an optional constraint. The server can send the necessary code to handle the response to the client. For a mobile PhoneGap app, this might involve sending a small snippet of JavaScript, for example, to handle how to display and interact with a Facebook post.

Uniform Interface: Resources are identified by a Uniform Resource Identifier (URI); for example, https://pge-as.example.com/task/21 refers to the task with an identifier of 21. These resources can be expressed in any number of formats to facilitate data interchange. Furthermore, when the client has the resource (in whatever representation it is provided), the client should also have enough information to manipulate the resource. Finally, the representation should indicate valid state transitions by providing links that the client can use to navigate the state tree of the system.

There are many good web APIs in production, but often they fail to address the last constraint very well. They might represent resources using URIs, but typically the client is expected to know all the endpoints of the API and how to transition between them without the server telling the client how to do so. This means that the client is tightly coupled to the API: if the URIs or the API change, then the client breaks. RESTful APIs should instead provide all the valid state transitions with each response. This lets the client reduce its coupling by looking for specific actions rather than assuming that a specific URI request will work. Properly implemented, the underlying URIs could change and the client app would be unaffected; the only thing that needs to be constant is the entry URI to the API. There are many good examples of these kinds of APIs; PayPal's is quite good, as are many others.
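Hypermedia state transitions like these can be sketched as follows. The link format here is illustrative (loosely HAL-style), not the exact shape of any particular production API; the point is that the client looks actions up by name instead of hardcoding URIs.

```javascript
// Hypothetical hypermedia response: the server advertises the
// valid state transitions as named links.
var response = {
  taskId: 21,
  status: "in-progress",
  _links: {
    "self":              { href: "/task/21",         verb: "GET" },
    "update-task":       { href: "/task/21",         verb: "PUT" },
    "get-task-comments": { href: "/task/21/comment", verb: "GET" }
  }
};

// The client follows an action by name; if the server later moves
// the resource (say, to /v2/task/21), this code keeps working.
function linkFor( resource, action ) {
  var link = resource._links[ action ];
  if ( !link ) {
    throw new Error( "Action not available: " + action );
  }
  return link;
}

console.log( linkFor( response, "get-task-comments" ).href );
// → /task/21/comment
```

An action that the current state doesn't permit simply isn't in `_links`, so attempting it fails fast on the client rather than producing a dead request to the server.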
The responses from these APIs always contain enough information for the client to advance to the next state in the chain. So in the case of PayPal, a response will always contain enough information to advance to the next step of the monetary transaction. Because the response contains this information, the client only needs to look at the response rather than having the URI of the next step hardcoded. RESTful APIs aren't standardized; one API might provide links to the next state in one format, while another API might use a different format. That said, there are several attempts to create a standard response format, Collection+JSON is just one example. The lack of standardization in the response format isn't as bad as it sounds; the more important issue is that as long as your app understands the response format, it can be decoupled from the URI structure of your API and its resources. The API becomes a list of methods with explicit transitions rather than a list of URIs alone. As long as the action names remain the same, the underlying URIs can be changed without affecting the client. This works well when it comes to most APIs where authorization is provided using an API key or an encoded token. For example, an API will often require authorization via OAuth 2.0. Your code asks for the proper authorization first, and upon each subsequent request, it passes an appropriate token that enables access to the requested resource. Where things become problematic, and why we're calling our API RESTful-like, is when it comes to the end user authentication. Whether the user of our mobile app recognizes it or not, they are an immediate consumer of our API. Because the data itself is protected based upon the roles and access of each particular user, users must authenticate themselves prior to accessing any data. When an end user is involved with authentication, the idea of sessions is inevitably required largely for the end user's convenience. 
Some sessions can be incredibly short-lived, for example, many banks will terminate a session if no activity is seen for 10 minutes, while others can be long-lived, and others might even be effectively eternal until explicitly revoked by the user. Regardless of the session length, the fact that a session is present indicates that the server must often store some information about state. Even if this information applies only to the user's authentication and session validity, it still violates the second rule of RESTful APIs. Tasker's web API, then, is a RESTful-like API. In everything except session handling and authentication, our API is like any other RESTful API. However, when it comes to authentication, the server maintains some state in order to ensure that users are properly authenticated. In the case of Tasker, the maintained state is limited. Once a user authenticates, a unique single-use token and an Hash Message Authentication Code (HMAC) secret are generated and returned to the client. This token is expected to be sent with the next API request and this request is expected to be signed with the HMAC secret. Upon completion of this API request, a new token is generated. Each token expires after a specified amount of time, or can be expired immediately by an explicit logout. Each token is stored in the backend, which means we violate the stateless rule. Our tokens are just a cryptographically random series of bytes, and because of this, there's nothing in the token that can be used to identify the user. This means we need to maintain the valid tokens and their user associations in the database. If the token contained user-identifiable information, we could technically avoid maintaining state, but this also means that the token could be forged if the attacker knew how tokens were constructed. A random token, on the other hand, means that there's no method of construction that can fool the server; the attacker will have to be very lucky to guess it right. 
Since Tasker's tokens continually expire after a short period of time and are regenerated upon each request, guessing a token is that much more difficult. Of course, it's not impossible for an attacker to get lucky and guess the right token on the first try, but considering the amount of entropy in most usernames and passwords, it's more likely that the attacker could guess the user's password than the correct token. Because these tokens are managed by the backend, the Tasker API isn't truly stateless, and so it's not truly RESTful, hence the term RESTful-like. If you want to implement your API as a pure RESTful API, feel free; if your API is like that of many other APIs (such as Twitter, PayPal, Facebook, and so on), you'll probably want to do so.

All this sounds well and good, but how should we go about designing and defining our API? Here's how I suggest going about it:

Identify the resources. In Tasker, the resources are people, tasks, and task comments. Essentially, these are the data models. (If you take security into account, Tasker also has user and role resources in addition to sessions.)

Define how the URI should represent the resource. For example, Bob Smith might be represented by /person/bob-smith or /person/29481. Query parameters are also acceptable: /person?administeredBy=john-doe will refer to the set of all individuals who have John Doe as their administrator. If it helps, think of each instance of a resource and each collection of these resources as web pages, each having their own URL.

Identify the actions that can be performed for each resource. For example, a task can be created and modified by the owner of the task. This task can be assigned to another user. A task's status and progress can be updated by both the owner and the assignee. With RESTful APIs, these actions are typically handled by using the HTTP verbs (also known as methods) GET, POST, PUT, and DELETE.
Others can also be used, such as OPTIONS, PATCH, and so on. We'll cover in a moment how these usually line up against typical Create, Read, Update, Delete (CRUD) operations.

Identify the state transitions that are valid for resources. As an example, a client's first step might be to request a list of all tasks assigned to a particular user. As part of the response, it should be given URIs that indicate how the app should retrieve information about a particular task. Furthermore, within this single task's response, there should be information that tells the client how to modify the task.

Most APIs generally mirror the typical CRUD operations. The following is how the HTTP verbs line up against the familiar CRUD counterparts for a collection of items:

GET (READ): This returns the collection of items in the desired format. It can often be filtered and sorted via query parameters.

POST (CREATE): This creates an item within the collection. The return result includes the URI for the new resource.

DELETE (N/A): This is not typically used at the collection level, unless one wants to remove the entire collection.

PUT (N/A): This is not typically used at the collection level, though it can be used to update/replace each item in the collection.

The same verbs are used for items within a collection:

GET (READ): This returns a specific item, given the ID.

POST (N/A): This is not typically used at the item level.

DELETE (DELETE): This deletes a specific item, given the ID.

PUT (UPDATE): This updates an existing item. Sometimes PATCH is used to update only specific properties of the item.

Here's an example of a state transition diagram for a portion of the Tasker API along with the corresponding HTTP verbs:

Now that we've determined the states and the valid transitions, we're ready to start modeling the API and the responses it should generate.
This is particularly useful before you start coding, as one will often notice issues with the API during this phase, and it's far easier to fix them now rather than after a lot of code has been written (or worse, after the API is in production). How you model your API is up to you. If you want to create a simple text document that describes the various requests and expected responses, that's fine. You can also use any number of tools that aid in modeling your API; some even allow you to provide mock responses for testing. Some of these are identified as follows:

RAML (http://raml.org): This is a markup language to model RESTful-like APIs. You can build API models using any text editor, but there is also an API designer online.

Apiary (http://apiary.io): Apiary uses a markdown-like language (API Blueprint) to model APIs. If you're familiar with markdown, you shouldn't have much trouble using this service. API mocking and automated testing are also provided.

Swagger (http://swagger.io): This is similar to RAML, but it uses YAML as the modeling language. Documentation and client code can be generated directly from the API model.

Building our API using Node.js

In this section, we'll cover connecting our web service to our Oracle database, handling user authentication and session management using Passport, and defining handlers for state transitions. You'll definitely want to take a look at the /tasker-srv directory in the code package for this book, which contains the full web server for Tasker. In the following sections, we've only highlighted some snippets of the code.

Connecting to the backend database

Node.js's community has provided a large number of database drivers, so chances are good that whatever your backend, Node.js has a driver available for it. In our example app, we're using an Oracle database as the backend, which means we'll be using the oracle driver (https://www.npmjs.org/package/oracle).
Connecting to the database is actually pretty easy, as the following code shows:

var oracle = require( "oracle" );
oracle.connect(
  { hostname: "localhost", port: 1521, database: "xe",
    user: "tasker", password: "password" },
  function ( err, client ) {
    if ( err ) { /* error; return or next(err) */ }
    /* query the database; when done call client.close() */
  } );

In the real world, a development version of our server will use a test database, and a production version of our server will use the production database. To facilitate this, we made the connection information configurable. The /config/development.json and /config/production.json files contain connection information, and the main code simply requests the configuration information when making a connection:

oracle.connect( config.get( "oracle" ), … );

Since we're talking about the real world, we also need to recognize that database connections are slow, and they need to be pooled in order to improve performance as well as permit parallel execution. To do this, we added the generic-pool NPM module (https://www.npmjs.org/package/generic-pool) and the following code to app.js:

var clientPool = pool.Pool( {
  name: "oracle",
  create: function ( cb ) {
    return new oracle.connect( config.get( "oracle" ),
      function ( err, client ) {
        cb( err, client );
      } );
  },
  destroy: function ( client ) {
    try {
      client.close();
    } catch ( err ) {
      // do nothing, but if we don't catch the error,
      // the server crashes
    }
  },
  max: 5,
  min: 1,
  idleTimeoutMillis: 30000
} );

Because our pool will always contain at least one connection, we need to ensure that when the process exits, the pool is properly drained, as follows:

process.on( "exit", function () {
  clientPool.drain( function () {
    clientPool.destroyAllNow();
  } );
} );

On its own, this doesn't do much yet.
We need to ensure that the pool is available to the entire app:

app.set( "client-pool", clientPool );

Executing queries

We've built our business logic in the Oracle database using PL/SQL stored procedures and functions. In PL/SQL, functions can return table-like structures. While this is similar in concept to a view, writing a function in PL/SQL gives us more flexibility. As such, our queries won't actually talk to the base tables; they'll talk to functions that return results based on the user's authorization. This means that we don't need additional conditions in a WHERE clause to filter based on the user's authorization, which helps eliminate code duplication. Regardless, executing queries and stored procedures is done using the same method, that is, execute. Before we can execute anything, we first need to acquire a client connection from the pool. To this end, we added a small set of database utility methods; you can see the code in the /db-utils directory.
The query utility method is shown in the following code snippet:

DBUtils.prototype.query = function ( sql, bindParameters, cb ) {
  var self = this,
      clientPool = self._clientPool,
      deferred = Q.defer();
  clientPool.acquire( function ( err, client ) {
    if ( err ) {
      winston.error( "Failed to acquire connection." );
      if ( cb ) {
        cb( new Error( err ) );
      } else {
        deferred.reject( err );
      }
      return; // no client to work with
    }
    try {
      client.execute( sql, bindParameters,
        function ( err, results ) {
          clientPool.release( client );
          if ( err ) {
            if ( cb ) {
              cb( new Error( err ) );
            } else {
              deferred.reject( err );
            }
            return; // don't signal completion twice
          }
          if ( cb ) {
            cb( err, results );
          } else {
            deferred.resolve( results );
          }
        } );
    }
    catch ( err2 ) {
      try {
        clientPool.release( client );
      }
      catch ( err3 ) {
        // can't do anything...
      }
      if ( cb ) {
        cb( err2 );
      } else {
        deferred.reject( err2 );
      }
    }
  } );
  if ( !cb ) {
    return deferred.promise;
  }
};

It's then possible to retrieve the results of an arbitrary query using the preceding method, as shown in the following code snippet:

dbUtil.query( "SELECT * FROM " +
              "table(tasker.task_mgmt.get_task(:1,:2))",
  [ taskId, req.user.userId ] )
.then( function ( results ) {
  // if no results, return 404 not found
  if ( results.length === 0 ) {
    return next( Errors.HTTP_NotFound() );
  }
  // create a new task with the database results
  // (will be in first row)
  req.task = new Task( results[ 0 ] );
  return next();
} )
.catch( function ( err ) {
  return next( new Error( err ) );
} )
.done();

The query used in the preceding code is an example of calling a stored function that returns a table structure.
The results of the SELECT statement will depend on the parameters (taskId and username), and get_task will decide what data can be returned based on the user's authorization.

Using Passport to handle authentication and sessions

Although we've implemented our own authentication protocol here, it's normally better to use one that has already been well vetted and is well understood, as well as one that suits your particular needs. In our case, we needed the demo to stand on its own without a lot of additional services, and as such, we built our own protocol. Even so, we chose a well-known cryptographic method (PBKDF2), and are using a large number of iterations and large key lengths. In order to implement authentication easily in Node.js, you'll probably want to use Passport (https://www.npmjs.org/package/passport). It has a large community and supports a large number of authentication schemes. If at all possible, try to use third-party authentication systems (for example, LDAP, AD, Kerberos, and so on). In our case, because our authentication method is custom, we chose to use the passport-req strategy (https://www.npmjs.org/package/passport-req). Since Tasker's authentication is token-based, we will use this strategy to inspect a custom header that the client uses to pass us the authentication token. The following is a simplified diagram of how Tasker's authentication process works:

Please don't use our authentication strategy for anything that requires high levels of security. It's just an example, and isn't guaranteed to be secure in any way.

Before we can actually use Passport, we need to define how our authentication strategy actually works.
We do this by calling passport.use in our app.js file: var passport = require("passport"); var ReqStrategy = require("passport-req").Strategy; var Session = require("./models/session"); passport.use ( new ReqStrategy ( function ( req, done ) {    var clientAuthToken = req.headers["x-auth-token"];    var session = new Session ( new DBUtils ( clientPool ) );    session.findSession( clientAuthToken )    .then( function ( results ) {    if ( !results ) { return done( null, false ); }    done( null, results );    } )    .catch( function ( err ) {    return done( err );    } )    .done(); } )); In the preceding code, we've given Passport a new authentication strategy. Now, whenever Passport needs to authenticate a request, it will call this small section of code. You might be wondering what's going on in findSession. Here's the code: Session.prototype.findSession = function ( clientAuthToken, cb ) { var self = this, deferred = Q.defer(); // if no token, no sense in continuing if ( typeof clientAuthToken === "undefined" ) {    if ( cb ) { return cb( null, false ); }    else { deferred.reject(); } } // an auth token is of the form 1234.ABCDEF10284128401ABC13... var clientAuthTokenParts = clientAuthToken.split( "." ); if ( !clientAuthTokenParts ) {    if ( cb ) { return cb( null, false ); }    else { deferred.reject(); } } // no auth token, no session. 
// get the parts var sessionId = clientAuthTokenParts[ 0 ], authToken = clientAuthTokenParts[ 1 ]; // ask the database via dbutils if the token is recognized self._dbUtils.execute( "CALL tasker.security.verify_token (:1, :2, :3, :4, :5 ) INTO :6", [ sessionId, authToken, // authorization token self._dbUtils.outVarchar2( { size: 32 } ), self._dbUtils.outVarchar2( { size: 4000 } ), self._dbUtils.outVarchar2( { size: 4000 } ), self._dbUtils.outVarchar2( { size: 1 } ) ] ) .then( function ( results ) {    // returnParam3 has a Y or N; Y is good auth    if ( results.returnParam3 === "Y" ) {      // notify callback of successful auth      var user = {        userId:   results.returnParam, sessionId: sessionId,        nextToken: results.returnParam1,        hmacSecret: results.returnParam2      };      if ( cb ) { cb( null, user ) }      else { deferred.resolve( user ); }    } else {      // auth failed      if ( cb ) { cb( null, false ); } else { deferred.reject(); }    } } ) .catch( function ( err ) {    if ( cb ) { return cb( err, false ); }    else { deferred.reject(); } } ) .done(); if ( !cb ) { return deferred.promise; } }; The dbUtils.execute() method is a wrapper method around the Oracle query method we covered in the Executing queries section. Once a session has been retrieved from the database, Passport will want to serialize the user. This is usually just the user's ID, but we serialize a little more (which, from the preceding code, is the user's ID, session ID, and the HMAC secret): passport.serializeUser(function( user, done ) { done (null, user); }); The serializeUser method is called after a successful authentication and it must be present, or an error will occur. There's also a deserializeUser method if you're using typical Passport sessions: this method is designed to restore the user information from the Passport session. 
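For completeness, here is a sketch of a serializeUser/deserializeUser pair as Passport expects them. The in-memory lookup store is a hypothetical stand-in for a real database; as noted above, Tasker serializes the whole user object rather than just an ID.

```javascript
// Hypothetical in-memory user store standing in for the database.
var users = { 29481: { userId: 29481, name: "Bob Smith" } };

// Passport calls serializeUser after authentication; whatever we
// pass to done() is what gets stored in the session.
function serializeUser( user, done ) {
  done( null, user.userId );
}

// On later requests, deserializeUser restores the full user object
// from the stored identifier.
function deserializeUser( id, done ) {
  var user = users[ id ];
  if ( !user ) { return done( new Error( "Unknown user: " + id ) ); }
  done( null, user );
}

serializeUser( users[ 29481 ], function ( err, id ) {
  deserializeUser( id, function ( err2, user ) {
    console.log( user.name ); // Bob Smith
  } );
} );
```

Keeping the serialized form small (usually just an ID) keeps the session store compact; the trade-off is an extra lookup on every request, which Tasker sidesteps by serializing the full user object.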
Before any of this will work, we also need to tell Express to use the Passport middleware:

app.use( passport.initialize() );

Passport makes handling authentication simple, but it provides session support as well. While we don't use it for Tasker, you can use it to support a typical session-based username/password authentication system quite easily with a single line of code:

app.use( passport.session() );

If you're intending to use sessions with Passport, make sure you also provide a deserializeUser method. Next, we need to implement the code to authenticate a user with their username and password. Remember, we initially require the user to log in using their username and password; once authenticated, we handle all further requests using tokens. To do this, we need to write a portion of our API code.

Building API handlers

We won't cover the entire API in this section, but we will cover a couple of small pieces, especially as they pertain to authentication and retrieving data. First, we've codified our API in /tasker-srv/api-def in the code package for this book. You'll also want to take a look at /tasker-srv/api-utils to see how we parse this data structure into usable routes for the Express router. Basically, we codify our API by building a simple structure:

[
  { "route": "/auth", "actions": [ … ] },
  { "route": "/task", "actions": [ … ] },
  { "route": "/task/{:taskId}", "params": [ … ], "actions": [ … ] },
  …
]

Each route can have any number of actions and parameters. Parameters are equivalent to the Express Router's parameters. In the preceding example, {:taskId} is a parameter that will take on the value of whatever is in that particular location in the URI. For example, /task/21 will result in taskId with the value of 21. This is useful for our actions because each action can then assume that the parameters have already been parsed, so any actions on the /task/{:taskId} route will already have task information at hand.
The parameters are defined as follows:

{
  "name": "taskId",
  "type": "number",
  "description": "…",
  "returns": [ … ],
  "securedBy": "tasker-auth",
  "handler": function ( req, res, next, taskId ) { … }
}

Actions are defined as follows:

{
  "title": "Task",
  "action": "get-task",
  "verb": "get",
  "description": { … },   // hypermedia description
  "returns": [ … ],       // http status codes that are returned
  "example": { … },       // example response
  "href": "/task/{taskId}",
  "template": true,
  "accepts": [ "application/json", … ],
  "sends": [ "application/json", … ],
  "securedBy": "tasker-auth",
  "hmac": "tasker-256",
  "store": { … },
  "query-parameters": { … },
  "handler": function ( req, res, next ) { … }
}

Each handler is called whenever that particular route is accessed by a client using the correct HTTP verb (identified by verb in the prior code). This allows us to write a handler for each specific state transition in our API, which is nicer than having to write a large method that's responsible for the entire route. It also makes describing the API using hypermedia that much simpler, since we can require a portion of the API and call a simple utility method (/tasker-srv/api-utils/index.js) to generate the description for the client.
Since we're still working on how to handle authentication, here's how the API definition for the POST /auth route looks (the complete version is located at /tasker-srv/api-def/auth/login.js): action = {    "title": "Authenticate User",    "action": "login",    "description": [ … ], "example":     { … },    "returns":     {      200: "User authenticated; see information in body.",      401: "Incorrect username or password.", …    },    "verb": "post", "href": "/auth",    "accepts": [ "application/json", … ],    "sends": [ "application/json", … ],    "csrf": "tasker-csrf",    "store": {      "body": [ { name: "session-id", key: "sessionId" },      { name: "hmac-secret", key: "hmacSecret" },      { name: "user-id", key: "userId" },      { name: "next-token", key: "nextToken" } ]    },    "template": {      "user-id": {        "title": "User Name", "key": "userId",        "type": "string", "required": true,        "maxLength": 32, "minLength": 1      },      "candidate-password": {        "title": "Password", "key": "candidatePassword",        "type": "string", "required": true,        "maxLength": 255, "minLength": 1      }    }, The earlier code is largely documentation (but it is returned to the client when they request this resource). The following code handler is what actually performs the authentication:    "handler": function ( req, res, next ) {      var session = new Session( new DBUtils(      req.app.get( "client-pool" ) ) ),        username,        password;      // does our input validate?      
var validationResults =       objUtils.validate( req.body, action.template );      if ( !validationResults.validates ) {        return next(         Errors.HTTP_Bad_Request( validationResults.message ) );      }      // got here -- good; copy the values out      username = req.body.userId;      password = req.body.candidatePassword;      // create a session with the username and password      session.createSession( username, password )        .then( function ( results ) {          // no session? bad username or password          if ( !results ) {            return next( Errors.HTTP_Unauthorized() );          }        // return the session information to the client        var o = {          sessionId: results.sessionId,          hmacSecret: results.hmacSecret,          userId:   results.userId,          nextToken: results.nextToken,          _links:   {}, _embedded: {}       };        // generate hypermedia        apiUtils.generateHypermediaForAction(         action, o._links, security, "self" );          [ require( "../task/getTaskList" ),          require( "../task/getTask" ), …          require( "../auth/logout" )          ].forEach( function ( apiAction ) {            apiUtils.generateHypermediaForAction(            apiAction, o._links, security );          } );          resUtils.json( res, 200, o );        } )        .catch( function ( err ) {          return next( err );          } )        .done();      }    }; The session.createSession method looks very similar to session.findSession, as shown in the following code: Session.prototype.createSession = function ( userName, candidatePassword, cb ) { var self = this, deferred = Q.defer(); if ( typeof userName === "undefined" || typeof candidatePassword === "undefined" ) {    if ( cb ) { return cb( null, false ); }    else { deferred.reject(); } } // attempt to authenticate self._dbUtils.execute( "CALL tasker.security.authenticate_user( :1, :2, :3," + " :4, :5 ) INTO :6", [ userName, candidatePassword, 
self._dbUtils.outVarchar2( { size: 4000 } ), self._dbUtils.outVarchar2( { size: 4000 } ), self._dbUtils.outVarchar2( { size: 4000 } ), self._dbUtils.outVarchar2( { size: 1 } ) ] ) .then( function ( results ) {    // ReturnParam3 has Y or N; Y is good auth    if ( results.returnParam3 === "Y" ) {      // notify callback of auth info      var user = {        userId:   userName,        sessionId: results.returnParam,        nextToken: results.returnParam1,        hmacSecret: results.returnParam2      };      if ( cb ) { cb( null, user ); }      else { deferred.resolve( user ); }    } else {      // auth failed      if ( cb ) { cb( null, false ); }      else { deferred.reject(); }    } } ) .catch( function ( err ) {    if ( cb ) { return cb( err, false ); }    else { deferred.reject(); } } ) .done(); if ( !cb ) { return deferred.promise; } }; Once the API is fully codified, we need to go back to app.js and tell Express that it should use the API's routes: app.use( "/", apiUtils.createRouterForApi( apiDef, checkAuth ) ); We also add a global variable so that whenever an API section needs to return the entire API as a hypermedia structure, it can do so without traversing the entire API again: app.set( "x-api-root", apiUtils.generateHypermediaForApi( apiDef, securityDef ) ); The checkAuth method shown previously is pretty simple; all it does is ensure that we don't authenticate more than once in a single request: function checkAuth ( req, res, next ) { if (req.isAuthenticated()) {    return next(); } passport.authenticate( "req" )(req, res, next); } You might be wondering where we're actually forcing our handlers to use authentication. There's actually a bit of magic in /tasker-srv/api-utils. I've highlighted the relevant portions: createRouterForApi: function (api, checkAuthFn) { var router = express.Router(); // process each route in the api; a route consists of the // uri (route) and a series of verbs (get, post, etc.)
api.forEach ( function ( apiRoute ) {    // add params    if ( typeof apiRoute.params !== "undefined" ) {      apiRoute.params.forEach ( function ( param ) {        if (typeof param.securedBy !== "undefined" ) {          router.param( param.name, function ( req, res,          next, v) {            return checkAuthFn( req, res,            param.handler.bind(this, req, res, next, v) );          });        } else {          router.param(param.name, param.handler);        }      });    }    var uri = apiRoute.route;    // create a new route with the uri    var route = router.route ( uri );    // process through each action    apiRoute.actions.forEach ( function (action) {      // just in case we have more than one verb, split them out      var verbs = action.verb.split(",");      // and add the handler specified to the route      // (if it's a valid verb)      verbs.forEach ( function (verb) {        if (typeof route[verb] === "function") {          if (typeof action.securedBy !== "undefined") {            route[verb]( checkAuthFn, action.handler );          } else {            route[verb]( action.handler );          }        }      });    }); }); return router; }; Once you've finished writing even a few handlers, you should be able to verify that the system works by posting requests to your API. First, make sure your server has started ; we use the following code line to start the server: export NODE_ENV=development; npm start For some of the routes, you could just load up a browser and point it at your server. If you type https://localhost:4443/ in your browser, you should see a response that looks a lot like this: If you're thinking this looks styled, you're right. The Tasker API generates responses based on the client's requested format. The browser requests data in HTML, and so our API generates a styled HTML page as a response. For an app, the response is JSON because the app requests that the response be in JSON. 
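The format-sensitive response just described can be sketched as follows. This is a hedged illustration of the idea (inspect the Accept header, render HTML for browsers and JSON for apps), not the actual code in /tasker-srv/res-utils/index.js, and the helper name `renderBody` is an assumption:

```javascript
// Simplified illustration of Accept-header negotiation; the real
// res-utils helper is more elaborate (status codes, styling, and so on).
function renderBody(acceptHeader, data) {
  var wantsHtml = typeof acceptHeader === "string" &&
                  acceptHeader.indexOf("text/html") !== -1;
  if (wantsHtml) {
    // browsers get a styled HTML page; here just a bare <pre> dump
    return "<pre>" + JSON.stringify(data, null, 2) + "</pre>";
  }
  // apps asking for JSON get the raw serialized object
  return JSON.stringify(data);
}

console.log(renderBody("application/json", { ok: true }));
```

The same negotiation is why the browser shows a styled page while an app receives plain JSON from the identical route.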
If you want to see how this works, see /tasker-srv/res-utils/index.js. If you want to actually send and receive data, though, you'll want to get a REST client rather than using the browser. There are many good free clients: Firefox has a couple of good ones, as does Chrome, or you can find a native client for your operating system. Although you can do everything with curl on the command prompt, RESTful clients are much easier to use and often offer useful features, such as dynamic variables and built-in authentication methods, and many can act as simple automated testers. Summary In this article, we've covered how to build a web server that bridges the gap between our database backend and our mobile application. We've provided an overview of RESTful-like APIs, and we've also quickly shown how to implement such a web API using Node.js. We've also covered authentication and session handling using Passport. Resources for Article: Further resources on this subject: Building Mobile Apps [article] Adding a Geolocation Trigger to the Salesforce Account Object [article] Introducing SproutCore [article]
Packt
23 Sep 2015
5 min read

Learning Node.js for Mobile Application Development

In Learning Node.js for Mobile Application Development by Christopher Svanefalk and Stefan Buttigieg, the overarching goal of this article is to give you the tools and know-how to install Node.js on multiple OS platforms and to verify the installation. After reading this article, you will know how to install, configure, and use the fundamental software components. You will also have a good understanding of why these tools are appropriate for developing modern applications. (For more resources related to this topic, see here.) Why Node.js? Modern apps have several requirements that cannot be provided by the app itself, such as central data storage, communication routing, and user management. In order to provide such services, apps rely on an external software component known as a backend. The backend we will use for this is Node.js, a powerful but strange beast in its category. Node.js is known for being both reliable and high-performing. Node.js comes with its own package management system, NPM (Node Package Manager), through which you can easily install, remove, and manage packages for your project. What this article covers This article covers the installation of Node.js on multiple OS platforms and how to verify the installation. The installation Node.js is delivered as a set of JavaScript libraries, executing on a C/C++ runtime that is built around the Google V8 JavaScript Engine. The two come bundled together for most major operating systems, and we will look at the specifics of installing it. The Google V8 JavaScript Engine is the same JavaScript engine that is used in the Chrome browser, built for speed and efficiency. Windows For Windows, there is a dedicated MSI wizard that can be used to install Node.js, which can be downloaded from the project's official website. To do so, go to the main page, navigate to Downloads, and then select Windows Installer.
After it is downloaded, run the MSI, follow the steps given to select the installation options, and conclude the install. Keep in mind that you will need to restart your system in order to make the changes effective. Linux Most major Linux distributions provide convenient installs of Node.js through their own package management systems. However, it is important to keep in mind that for many of them, NPM will not come bundled with the main Node.js package. Rather, it will be provided as a separate package. We will show how to install both in the following section. Ubuntu/Debian Open a terminal and issue sudo apt-get update to make sure that you have the latest package listings. After this, issue apt-get install nodejs npm in order to install both Node.js and NPM in one swoop. Fedora/RHEL/CentOS On Fedora 18 or later, open a terminal and issue sudo yum install nodejs npm. The system will perform the full setup for you. If you are running RHEL or CentOS, you will need to enable the optional EPEL repository. This can be done in conjunction with the install process, so that you do not need to do it again while upgrading, by issuing the sudo yum install nodejs npm --enablerepo=epel command. Verifying your installation Now that we have finished the install, let's do a sanity check and make sure that everything works as expected. To do so, we can use the Node.js shell, which is an interactive runtime environment that is used to execute JavaScript code. To open it, first open a terminal, and then issue the following on it: node This will start the interpreter, which will appear as a shell, with the input line starting with the > sign. Once you are in it, type the following: console.log("Hello world!"); Then, press Enter. The Hello world! phrase will appear on the next line. Congratulations, your system is now set up to run Node.js!
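If you prefer a non-interactive check, you can also save a short script (for example, as check.js, a hypothetical filename) and run it with node check.js; process.version and process.platform are built into the Node.js runtime:

```javascript
// Prints the installed Node.js version and the current platform,
// confirming that the runtime starts and executes scripts correctly.
console.log("Node.js " + process.version + " on " + process.platform);
```

If this prints a version string such as v0.12.x and your platform name, the installation is working.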
Mac OS X For Mac OS X, you can find a ready-to-install PKG file by going to www.nodejs.org, navigating to Downloads, and selecting the Mac OS X Installer option. Otherwise, you can click on Install, and your package file will automatically be downloaded as shown in the following screenshot: Once you have downloaded the file, run it and follow the instructions on the screen. It is recommended that you keep all the offered default settings, unless there are compelling reasons for you to change something with regard to your specific machine. Verifying your installation for Mac OS X After the install finishes, open a terminal and start the Node.js shell by issuing the following command: node This will start the interactive node shell where you can execute JavaScript code. To make sure that everything works, try issuing the following command to the interpreter: console.log("Hello world!"); After pressing Enter, the Hello world! phrase will appear on your screen. Congratulations, Node.js is all set up and good to go!
The following are some other related titles: Node.js Design Patterns Web Development with MongoDB and Node.js Deploying Node.js Node Security Resources for Article: Further resources on this subject: Welcome to JavaScript in the full stack[article] Introduction and Composition[article] Deployment and Maintenance [article]
Packt
01 Oct 2015
9 min read

Apps for Different Platforms

In this article by Hoc Phan, the author of the book Ionic Cookbook, we will cover tasks related to building and publishing apps, such as: Building and publishing an app for iOS Building and publishing an app for Android Using PhoneGap Build for cross–platform (For more resources related to this topic, see here.) Introduction In the past, it used to be very cumbersome to build and successfully publish an app. However, there are many documentations and unofficial instructions on the Internet today that can pretty much address any problem you may run into. In addition, Ionic also comes with its own CLI to assist in this process. This article will guide you through the app building and publishing steps at a high level. You will learn how to: Build iOS and Android app via Ionic CLI Publish iOS app using Xcode via iTunes Connect Build Windows Phone app using PhoneGap Build The purpose of this article is to provide ideas on what to look for and some "gotchas". Apple, Google, and Microsoft are constantly updating their platforms and processes so the steps may not look exactly the same over time. Building and publishing an app for iOS Publishing on App Store could be a frustrating process if you are not well prepared upfront. In this section, you will walk through the steps to properly configure everything in Apple Developer Center, iTunes Connect and local Xcode Project. Getting ready You must register for Apple Developer Program in order to access https://developer.apple.com and https://itunesconnect.apple.com because those websites will require an approved account. In addition, the instructions given next use the latest version of these components: Mac OS X Yosemite 10.10.4 Xcode 6.4 Ionic CLI 1.6.4 Cordova 5.1.1 How to do it Here are the instructions: Make sure you are in the app folder and build for the iOS platform. $ ionic build ios Go to the ios folder under platforms/ to open the .xcodeproj file in Xcode. 
Go through the General tab to make sure you have the correct information for everything, especially Bundle Identifier and Version. Change and save as needed. Visit the Apple Developer website and click on Certificates, Identifiers & Profiles. For iOS apps, you just have to go through the steps on the website to fill out the necessary information. The important part to get right here is Identifiers | App IDs, because the App ID must match your Bundle Identifier in Xcode. Visit iTunes Connect and click on the My Apps button. Select the Plus (+) icon to click on New iOS App. Fill out the form and make sure to select the right Bundle Identifier for your app. There are several additional steps to provide information about the app, such as screenshots, icons, addresses, and so on. If you just want to test the app, you could provide some placeholder information initially and come back to edit it later. That's it for preparing your Developer and iTunes Connect accounts. Now open Xcode and select iOS Device as the archive target. Otherwise, the archive feature will not turn on. You will need to archive your app before you can submit it to the App Store. Navigate to Product | Archive in the top menu. After the archive process completes, click on Submit to App Store to finish the publishing process. At first, the app could take an hour to appear in iTunes Connect. However, subsequent submissions will go faster. You should look for the app in the Prerelease tab in iTunes Connect. iTunes Connect has very nice integration with TestFlight for testing your app. You can switch this feature on and off. Note that each time you publish, you have to change the version number in Xcode so that it won't conflict with the existing version in iTunes Connect. For publishing, select Submit for Beta App Review. You may want to go through other tabs, such as Pricing and In-App Purchases, to configure your own requirements.
How it works Obviously this section does not cover every bit of details in the publishing process. In general, you just need to make sure your app is tested thoroughly, locally in a physical device (either via USB or TestFlight) before submitting to the App Store. If for some reason the Archive feature doesn't build, you could manually go to your local Xcode folder to delete that specific temporary archived app to clear cache: ~/Library/Developer/Xcode/Archives See also TestFlight is a separate subject by itself. The benefit of TestFlight is that you don't need your app to be approved by Apple in order to install the app on a physical device for testing and development. You can find out more information about TestFlight here: https://developer.apple.com/library/prerelease/ios/documentation/LanguagesUtilities/Conceptual/iTunesConnect_Guide/Chapters/BetaTestingTheApp.html Building and publishing an app for Android Building and publishing an Android app is a little more straightforward than iOS because you just interface with the command line to build the .apk file and upload to Google Play's Developer Console. Ionic Framework documentation also has a great instruction page for this: http://ionicframework.com/docs/guide/publishing.html. Getting ready The requirement is to have your Google Developer account ready and login to https://play.google.com/apps/publish. Your local environment should also have the right SDK as well as keytool, jarsigner, and zipalign command line for that specific version. How to do it Here are the instructions: Go to your app folder and build for Android using this command: $ ionic build --release android You will see the android-release-unsigned.apk file in the apk folder under /platforms/android/build/outputs. Go to that folder in the Terminal. If this is the first time you create this app, you must have a keystore file. This file is used to identify your app for publishing. If you lose it, you cannot update your app later on. 
To create a keystore, type the following command in the command line, making sure the keytool version matches that of your SDK: $ keytool -genkey -v -keystore my-release-key.keystore -alias alias_name -keyalg RSA -keysize 2048 -validity 10000 Once you fill out the information in the command line, make a copy of this file somewhere safe because you will need it later. The next step is to use that file to sign your app, which creates a new .apk file that Google Play allows users to install: $ jarsigner -verbose -sigalg SHA1withRSA -digestalg SHA1 -keystore my-release-key.keystore HelloWorld-release-unsigned.apk alias_name To prepare the final .apk for upload, you must package it using zipalign: $ zipalign -v 4 HelloWorld-release-unsigned.apk HelloWorld.apk Log in to the Google Developer Console and click on Add new application. Fill out as much information as possible for your app using the left menu. Now you are ready to upload your .apk file. The first step is to perform beta testing. Once you have completed beta testing, you can follow the Developer Console instructions to push the app to Production. How it works This section does not cover other Android marketplaces such as the Amazon Appstore because each of them has a different process. However, the common idea is that you need to completely build the unsigned version of the .apk, sign it using an existing or new keystore file, and finally run zipalign to prepare it for upload. Using PhoneGap Build for cross-platform Adobe PhoneGap Build is a very useful product that provides build-as-a-service in the cloud. If you have trouble building the app locally on your computer, you can upload the entire Ionic project to PhoneGap Build and it will build the app for Apple, Android, and Windows Phone automatically. Getting ready Go to https://build.phonegap.com and register for a free account. You will be able to build one private app for free. For additional private apps, there is a monthly fee associated with the account.
How to do it Here are the instructions: Zip your entire /www folder and replace cordova.js with phonegap.js in index.html as described in http://docs.build.phonegap.com/en_US/introduction_getting_started.md.html#Getting%20Started%20with%20Build. You may have to edit config.xml to ensure all plugins are included. Detailed changes are in the PhoneGap documentation: http://docs.build.phonegap.com/en_US/configuring_plugins.md.html#Plugins. Select Upload a .zip file under the private tab. Upload the .zip file of the www folder. Make sure to upload the appropriate key for each platform. For Windows Phone, upload your publisher ID file. After that, you just build the app and download the completed build file for each platform. How it works In a nutshell, PhoneGap Build is a convenient option when you are only familiar with one platform during the development process but want your app to be built quickly for other platforms. Under the hood, PhoneGap Build has its own environment to automate the process for each user. However, the user still owns the responsibility of providing the key file for signing the app. PhoneGap Build just helps attach the key to your app. See also The most common issue people face when using PhoneGap Build is a failed build. You may want to refer to the documentation for troubleshooting: http://docs.build.phonegap.com/en_US/support_failed-builds.md.html#Failed%20Builds Summary This article provided you with general information about tasks related to building and publishing apps for Android, for iOS, and for cross-platform using PhoneGap Build, wherein you came to know how to publish an app in various places such as the App Store and Google Play. Resources for Article: Further resources on this subject: Our App and Tool Stack[article] Directives and Services of Ionic[article] AngularJS Project [article]
Packt
24 Oct 2013
9 min read

Creating Quizzes

(For more resources related to this topic, see here.) Creating a short-answer question For this task, we will create a card to host an interface for a short-answer question. This type of question allows the user to input their answers via the keyboard. Evaluating this type of answer can be especially challenging, since there could be several correct answers and users are prone to make spelling mistakes. Engage Thrusters Create a new card and name it SA. Copy the Title label from the Main card and paste it onto the new SA card. This will ensure the title label field has a consistent format and location. Copy the Question label from the TF card and paste it onto the new SA card. Copy the Progress label from the TF card and paste it onto the new SA card. Copy the Submit button from the Sequencing card and paste it onto the new SA card. Drag a text entry field onto the card and make the following modifications: Change the name to answer. Set the size to 362 by 46. Set the location to 237, 185. Change the text size to 14. We are now ready to program our interface. Enter the following code at the card level: on preOpenCard global qNbr, qArray # Section 1 put 1 into qNbr # Section 2 put "" into fld "question" put "" into fld "answer" put "" into fld "progress" # Section 3 put "What farm animal eats shrubs, can be eaten, and are smaller than cows?" into qArray["1"]["question"] put "goat" into qArray["1"]["answer"] -- put "What is used in pencils for writing?" into qArray["2"]["question"] put "lead" into qArray["2"]["answer"] -- put "What programming language are you learning" into qArray["3"] ["question"] put "livecode" into qArray["3"]["answer"] end preOpenCard In section 1 of this code, we reset the question counter (qNbr) variable to 1. Section 2 contains the code to clear the question, answer, and progress fields. Section 3 populates the question/answer array (qArray). As you can see, this is the simplest array we have used. 
It only contains a question and answer pairing for each row. Our last step for the short answer question interface is to program the Submit button. Here is the code for that button: on mouseUp global qNbr, qArray local tResult # Section 1 if the text of fld "answer" contains qArray[qNbr]["answer"] then put "correct" into tResult else put "incorrect" into tResult end if #Section 2 switch tResult case "correct" if qNbr < 3 then answer "Very Good." with "Next" titled "Correct" else answer "Very Good." with "Okay" titled "Correct" end if break case "incorrect" if qNbr < 3 then answer "The correct answer is: " & qArray[qNbr]["answer"] & "." with "Next" titled "Wrong Answer" else answer "The correct answer is: " & qArray[qNbr]["answer"] & "." with "Okay" titled "Wrong Answer" end if break end switch # Section 3 if qNbr < 3 then add 1 to qNbr nextQuestion else go to card "Main" end if end mouseUp Our Submit button script is divided into three sections. The first section (section 1) checks to see if the answer contained in the array (qArray) is part of the answer the user entered. This is a simple string comparison and is not case sensitive. Section 2 of this button's code contains a switch statement based on the local variable tResult. Here, we provide the user with the actual answer if they do not get it right on their own. The final section (section 3) navigates to the next question or to the main card, depending upon which question set the user is on. Objective Complete - Mini Debriefing We have successfully coded our short answer quiz card. Our approach was to use a simple question and data input design with a Submit button. Your user interface should resemble the following screenshot: Creating a picture question card Using pictures as part of a quiz, poll, or other interface can be fun for the user. It might also be more appropriate than simply using text. Let's create a card that uses pictures as part of a quiz. 
Engage Thrusters Create a new card and name it Pictures. Copy the Title label from the Main card and paste it onto the new Pictures card. This will ensure the title label field has a consistent format and location. Copy the Question label from the TF card and paste it onto the new Pictures card. Copy the Progress label from the TF card and paste it onto the new Pictures card. Drag a Rectangle Button onto the card and make the following customizations: Change the name to picture1. Set the size to 120 x 120. Set the location to 128, 196. Drag a second Rectangle Button onto the card and make the following customizations: Change the name to picture2. Set the size to 120 x 120. Set the location to 336, 196. Upload the following listed files into your mobile application's Image Library. This LiveCode function is available by selecting the Development pull-down menu, then selecting Image Library. Near the bottom of the Image Library dialog is an Import File button. Once your files are uploaded, take note of the ID numbers assigned by LiveCode: q1a1.png q1a2.png q2a1.png q2a2.png q3a1.png q3a2.png With our interface fully constructed, we are now ready to add LiveCode script to the card. Here is the code you will enter at the card level: on preOpenCard global qNbr, qArray # Section 1 put 1 into qNbr set the icon of btn "picture1" to empty set the icon of btn "picture2" to empty # Section 2 put "" into fld "question" put "" into fld "progress" # Section 3 put "Which puppy is real?" into qArray["1"]["question"] put "2175" into qArray["1"]["pic1"] put "2176" into qArray["1"]["pic2"] put "q1a1" into qArray["1"]["answer"] -- put "Which puppy looks bigger?" into qArray["2"]["question"] put "2177" into qArray["2"]["pic1"] put "2178" into qArray["2"]["pic2"] put "q2a2" into qArray["2"]["answer"] -- put "Which scene is likely to make her owner more upset?" 
into qArray["3"]["question"] put "2179" into qArray["3"]["pic1"] put "2180" into qArray["3"]["pic2"] put "q3a1" into qArray["3"]["answer"] end preOpenCard In section 1 of this code, we set the qNbr to 1. This is our question counter. We also ensure that there is no image visible in the two buttons. We do this by setting the icon of the buttons to empty. In section 2, we empty the contents of the two onscreen fields (Question and Progress). In the third section, we populate the question set array (qArray). Each question has an answer that corresponds with the filename of the images you added to your stack in the previous step. The ID numbers of the six images you uploaded are also added to the array, so you will need to refer to your notes from step 7. Our next step is to program the picture1 and picture2 buttons. Here is the code for the picture1 button: on mouseUp global qNbr, qArray # Section 1 if qArray[qNbr]["answer"] contains "a1" then if qNbr < 3 then answer "Very Good." with "Next" titled "Correct" else answer "Very Good." with "Okay" titled "Correct" end if else if qNbr < 3 then answer "That is not correct." with "Next" titled "Wrong Answer" else answer "That is not correct." with "Okay" titled "Wrong Answer" end if end if # Section 2 if qNbr < 3 then add 1 to qNbr nextQuestion else go to card "Main" end if end mouseUp In section 1 of our code, we check to see if the answer from the array contains a1, which indicates that the picture on the left is the correct answer. Based on the answer evaluation, one of two text feedbacks is provided to the user. The name of the button on the feedback dialog is either Next or Okay, depending upon which question set the user is currently on. The second section of this code routes the user to either the main card (if they finished all three questions) or to the next question. Copy the code you entered in the picture1 button and paste it onto the picture2 button. Only one piece of code needs to change. 
On the first line of the section 1 code, change the string from a1 to a2. That line of code should be as follows: if qArray[qNbr]["answer"] contains "a2" then Objective Complete - Mini Debriefing In just 9 easy steps, we created a picture-based question type that uses images we uploaded to our stack's image library and a question set array. Your final interface should look similar to the following screenshot: Adding navigational scripting In this task, we will add scripts to the interface buttons on the Main card. Engage Thrusters Navigate to the Main card. Add the following script to the true-false button: on mouseUp set the disabled of me to true go to card "TF" end mouseUp Add the following script to the m-choice button: on mouseUp set the disabled of me to true go to card "MC" end mouseUp Add the following script to the sequence button: on mouseUp set the disabled of me to true go to card "Sequencing" end mouseUp Add the following script to the short-answer button: on mouseUp set the disabled of me to true go to card "SA" end mouseUp Add the following script to the pictures button: on mouseUp set the disabled of me to true go to card "Pictures" end mouseUp The last step in this task is to program the Reset button. 
Here is the code for that button: on mouseUp global theScore, totalQuestions, totalCorrect # Section 1 set the disabled of btn "true-false" to false set the disabled of btn "m-choice" to false set the disabled of btn "sequence" to false set the disabled of btn "short-answer" to false set the disabled of btn "pictures" to false # Section 2 set the backgroundColor of grc "progress1" to empty set the backgroundColor of grc "progress2" to empty set the backgroundColor of grc "progress3" to empty set the backgroundColor of grc "progress4" to empty set the backgroundColor of grc "progress5" to empty # Section3 put 0 into theScore put 0 into totalQuestions put 0 into totalCorrect put theScore & "%" into fld "Score" end mouseUp There are three sections to this code. In section 1, we are enabling each of the buttons. In the second section, we are clearing out the background color of each of the five progress circles in the bottom-center of the screen. In the final section, section 3, we reset the score and the score display. Objective Complete - Mini Debriefing That is all there was to this task, seven easy steps. There are no visible changes to the mobile application's interface. Summary In this article, we saw how to create a couple of quiz apps for mobile such as short-answer questions and picture card questions. Resources for Article: Further resources on this subject: Creating and configuring a basic mobile application [Article] Creating mobile friendly themes [Article] So, what is XenMobile? [Article]
Read more
  • 0
  • 0
  • 8094

article-image-sound-recorder-android
Packt
05 Feb 2015
23 min read
Save for later

Sound Recorder for Android

Packt
05 Feb 2015
23 min read
In this article by Mark Vasilkov, author of the book, Kivy Blueprints, we will emulate the Modern UI by using the grid structure and scalable vector icons and develop a sound recorder for the Android platform using Android Java classes. (For more resources related to this topic, see here.) Kivy apps usually end up being cross-platform, mainly because the Kivy framework itself supports a wide range of target platforms. In this write-up, however, we're building an app that will be single-platform. This gives us an opportunity to rely on platform-specific bindings that provide extended functionality. The need for such bindings arises from the fact that the input/output capabilities of a pure Kivy program are limited to those that are present on all platforms. This amounts to a tiny fraction of what a common computer system, such as a smartphone or a laptop, can actually do. Comparison of features Let's take a look at the API surface of a modern mobile device (let's assume it's running Android). We'll split everything in two parts: things that are supported directly by Python and/or Kivy and things that aren't. 
The following are features that are directly available in Python or Kivy: Hardware-accelerated graphics Touchscreen input with optional multitouch Sound playback (at the time of writing, this feature is available only from the file on the disk) Networking, given the Internet connectivity is present The following are the features that aren't supported or require an external library: Modem, support for voice calls, and SMS Use of built-in cameras for filming videos and taking pictures Use of a built-in microphone to record sound Cloud storage for application data associated with a user account Bluetooth and other near-field networking features Location services and GPS Fingerprinting and other biometric security Motion sensors, that is, accelerometer and gyroscope Screen brightness control Vibration and other forms of haptic feedback Battery charge level For most entries in the "not supported" list, different Python libraries are already present to fill the gap, such as audiostream for a low-level sound recording, and Plyer that handles many platform-specific tasks. So, it's not like these features are completely unavailable to your application; realistically, the challenge is that these bits of functionality are insanely fragmented across different platforms (or even consecutive versions of the same platform, for example, Android); thus, you end up writing platform-specific, not portable code anyway. As you can see from the preceding comparison, a lot of functionality is available on Android and only partially covered by an existing Python or Kivy API. There is a huge untamed potential in using platform-specific features in your applications. This is not a limitation, but an opportunity. Shortly, you will learn how to utilize any Android API from Python code, allowing your Kivy application to do practically anything. 
Another advantage of narrowing the scope of your app to only a small selection of systems is that there are whole new classes of programs that can function (or even make sense) only on a mobile device with fitting hardware specifications. These include augmented reality apps, gyroscope-controlled games, panoramic cameras, and so on. Introducing Pyjnius To harness the full power of our chosen platform, we're going to use a platform-specific API, which happens to be in Java and is thus primarily Java oriented. We are going to build a sound recorder app, similar to the apps commonly found in Android and iOS, albeit more simplistic. Unlike pure Kivy apps, the underlying Android API certainly provides us with ways of recording sound programmatically. The rest of the article will cover this little recorder program throughout its development to illustrate the Python-Java interoperability using the excellent Pyjnius library, another great project made by Kivy developers. The concept we chose—sound recording and playback—is deliberately simple so as to outline the features of such interoperation without too much distraction caused by the sheer complexity of a subject and abundant implementation details. The source code of Pyjnius, together with the reference manual and some examples, can be found in the official repository at https://github.com/kivy/pyjnius. Modern UI While we're at it, let's build a user interface that resembles the Windows Phone home screen. This concept, basically a grid of colored rectangles (tiles) of various sizes, was known as Metro UI at some point in time but was later renamed to Modern UI due to trademark issues. Irrespective of the name, this is how it looks. This will give you an idea of what we'll be aiming at during the course of this app's development: Design inspiration – a Windows Phone home screen with tiles Obviously, we aren't going to replicate it as is; we will make something that resembles the depicted user interface. 
The following list pretty much summarizes the distinctive features we're after: Everything is aligned to a rectangular grid UI elements are styled using the streamlined, flat design—tiles use bright, solid colors and there are no shadows or rounded corners Tiles that are considered more useful (for an arbitrary definition of "useful") are larger and thus easier to hit If this sounds easy to you, then you're absolutely right. As you will see shortly, the Kivy implementation of such a UI is rather straightforward. The buttons To start off, we are going to tweak the Button class in Kivy language (let's name the file recorder.kv): #:import C kivy.utils.get_color_from_hex <Button>:background_normal: 'button_normal.png'background_down: 'button_down.png'background_color: C('#95A5A6')font_size: 40 The texture we set as the background is solid white, exploiting the same trick that was used while creating the color palette. The background_color property acts as tint color, and assigning a plain white texture equals to painting the button in background_color. We don't want borders this time. The second (pressed background_down) texture is 25 percent transparent white. Combined with the pitch-black background color of the app, we're getting a slightly darker shade of the same background color the button was assigned: Normal (left) and pressed (right) states of a button – the background color is set to #0080FF The grid structure The layout is a bit more complex to build. In the absence of readily available Modern UI-like tiled layout, we are going to emulate it with the built-in GridLayout widget. One such widget could have fulfilled all our needs, if not for the last requirement: we want to have bigger and smaller buttons. Presently, GridLayout doesn't allow the merging of cells to create bigger ones (a functionality similar to the rowspan and colspan attributes in HTML would be nice to have). 
So, we will go in the opposite direction: start with the root GridLayout with big cells and add another GridLayout inside a cell to subdivide it. Thanks to nested layouts working great in Kivy, we arrive at the following Kivy language structure (in recorder.kv): #:import C kivy.utils.get_color_from_hex GridLayout:    padding: 15    Button:        background_color: C('#3498DB')        text: 'aaa'    GridLayout:        Button:            background_color: C('#2ECC71')            text: 'bbb1 '        Button:            background_color: C('#1ABC9C')            text: 'bbb2'        Button:            background_color: C('#27AE60')            text: 'bbb3'        Button:            background_color: C('#16A085')            text: 'bbb4'    Button:        background_color: C('#E74C3C')        text: 'ccc'    Button:        background_color: C('#95A5A6')        text: 'ddd' Note how the nested GridLayout sits on the same level as that of outer, large buttons. This should make perfect sense if you look at the previous screenshot of the Windows Phone home screen: a pack of four smaller buttons takes up the same space (one outer grid cell) as a large button. The nested GridLayout is a container for those smaller buttons. Visual attributes On the outer grid, padding is provided to create some distance from the edges of the screen. Other visual attributes are shared between GridLayout instances and moved to a class. The following code is present inside recorder.kv: <GridLayout>:    cols: 2    spacing: 10    row_default_height:        (0.5 * (self.width - self.spacing[0]) -        self.padding[0])    row_force_default: True It's worth mentioning that both padding and spacing are effectively lists, not scalars. spacing[0] refers to a horizontal spacing, followed by a vertical one. However, we can initialize spacing with a single value, as shown in the preceding code; this value will then be used for everything. Each grid consists of two columns with some spacing in between. 
The row_default_height property is trickier: we can't just say, "Let the row height be equal to the row width." Instead, we compute the desired height manually, where the value 0.5 is used because we have two columns: If we don't apply this tweak, the buttons inside the grid will fill all the available vertical space, which is undesirable, especially when there aren't that many buttons (every one of them ends up being too large). Instead, we want all the buttons nice and square, with empty space at the bottom left, well, empty. The following is the screenshot of our app's "Modern UI" tiles, which we obtained as result from the preceding code: The UI so far – clickable tiles of variable size not too dissimilar from our design inspiration Scalable vector icons One of the nice finishing touches we can apply to the application UI is the use of icons, and not just text, on buttons. We could, of course, just throw in a bunch of images, but let's borrow another useful technique from modern web development and use an icon font instead—as you will see shortly, these provide great flexibility at no cost. Icon fonts Icon fonts are essentially just like regular ones, except their glyphs are unrelated to the letters of a language. For example, you type P and the Python logo is rendered instead of the letter; every font invents its own mnemonic on how to assign letters to icons. There are also fonts that don't use English letters, instead they map icons to Unicode's "private use area" character code. This is a technically correct way to build such a font, but application support for this Unicode feature varies—not every platform behaves the same in this regard, especially the mobile platform. The font that we will use for our app does not assign private use characters and uses ASCII (plain English letters) instead. 
Rationale to use icon fonts On the Web, icon fonts solve a number of problems that are commonly associated with (raster) images: First and foremost, raster images don't scale well and may become blurry when resized—there are certain algorithms that produce better results than others, but as of today, the "state of the art" is still not perfect. In contrast, a vector picture is infinitely scalable by definition. Raster image files containing schematic graphics (such as icons and UI elements) tend to be larger than vector formats. This does not apply to photos encoded as JPEG obviously. With an icon font, color changes literally take seconds—you can do just that by adding color: red (for example) to your CSS file. The same is true for size, rotation, and other properties that don't involve changing the geometry of an image. Effectively, this means that making trivial adjustments to an icon does not require an image editor, like it normally would when dealing with bitmaps. Some of these points do not apply to Kivy apps that much, but overall, the use of icon fonts is considered a good practice in contemporary web development, especially since there are many free high-quality fonts to choose from—that's hundreds of icons readily available for inclusion in your project. Using the icon font in Kivy In our application, we are going to use the Modern Pictograms (Version 1) free font, designed by John Caserta. To load the font into our Kivy program, we'll use the following code (in main.py): from kivy.app import Appfrom kivy.core.text import LabelBaseclass RecorderApp(App):    passif __name__ == '__main__':    LabelBase.register(name='Modern Pictograms',                       fn_regular='modernpics.ttf')    RecorderApp().run() The actual use of the font happens inside recorder.kv. First, we want to update the Button class once again to allow us to change the font in the middle of a text using markup tags. 
This is shown in the following snippet: <Button>:    background_normal: 'button_normal.png'    background_down: 'button_down.png'    font_size: 24    halign: 'center'    markup: True The halign: 'center' attribute means that we want every line of text centered inside the button. The markup: True attribute is self-evident and required because the next step in customization of buttons will rely heavily on markup. Now we can update button definitions. Here's an example of this: Button:    background_color: C('#3498DB')    text:        ('[font=Modern Pictograms][size=120]'        'e[/size][/font]nNew recording') Notice the character 'e' inside the [font][size] tags. That's the icon code. Every button in our app will use a different icon, and changing an icon amounts to replacing a single letter in the recorder.kv file. Complete mapping of these code for the Modern Pictograms font can be found on its official website at http://modernpictograms.com/. Long story short, this is how the UI of our application looks after the addition of icons to buttons: The sound recorder app interface – a modern UI with vector icons from the Modern Pictograms font This is already pretty close to the original Modern UI look. Using the native API Having completed the user interface part of the app, we will now turn to a native API and implement the sound recording and playback logic using the suitable Android Java classes, MediaRecorder and MediaPlayer. Thankfully, the task at hand is relatively simple. To record a sound using the Android API, we only need the following five Java classes: The class android.os.Environment provides access to many useful environment variables. We are going to use it to determine the path where the SD card is mounted so we can save the recorded audio file. It's tempting to just hardcode '/sdcard/' or a similar constant, but in practice, every other Android device has a different filesystem layout. So let's not do this even for the purposes of the tutorial. 
The class android.media.MediaRecorder is our main workhorse. It facilitates capturing audio and video and saving it to the filesystem. The classes android.media.MediaRecorder$AudioSource, android.media.MediaRecorder$AudioEncoder, and android.media.MediaRecorder$OutputFormat are enumerations that hold the values we need to pass as arguments to the various methods of MediaRecorder. Loading Java classes The code to load the aforementioned Java classes into your Python application is as follows: from jnius import autoclassEnvironment = autoclass('android.os.Environment')MediaRecorder = autoclass('android.media.MediaRecorder')AudioSource = autoclass('android.media.MediaRecorder$AudioSource')OutputFormat = autoclass('android.media.MediaRecorder$OutputFormat')AudioEncoder = autoclass('android.media.MediaRecorder$AudioEncoder') If you try to run the program at this point, you'll receive an error, something along the lines of: ImportError: No module named jnius: You'll encounter this error if you don't have Pyjnius installed on your machine jnius.JavaException: Class not found 'android/os/Environment': You'll encounter this error if Pyjnius is installed, but the Android classes we're trying to load are missing (for example, when running on a desktop) This is one of the rare cases when receiving an error means we did everything right. From now on, we should do all of the testing on Android device or inside an emulator because the code isn't cross-platform anymore. It relies unequivocally on Android-specific Java features. Now we can use Java classes seamlessly in our Python code. Looking up the storage path Let's illustrate the practical cross-language API use with a simple example. 
In Java, we will do something like this in order to find out where an SD card is mounted: import android.os.Environment;String path = Environment.getExternalStorageDirectory().getAbsolutePath(); When translated to Python, the code is as follows: Environment = autoclass('android.os.Environment')path = Environment.getExternalStorageDirectory().getAbsolutePath() This is the exact same thing as shown in the previous code, only written in Python instead of Java. While we're at it, let's also log this value so that we can see which exact path in the Kivy log the getAbsolutePath method returned to our code: from kivy.logger import LoggerLogger.info('App: storage path == "%s"' % path) On my testing device, this produces the following line in the Kivy log: [INFO] App: storage path == "/storage/sdcard0" Recording sound Now, let's dive deeper into the rabbit hole of the Android API and actually record a sound from the microphone. The following code is again basically a translation of Android API documents into Python. If you're interested in the original Java version of this code, you may find it at http://developer.android.com/guide/topics/media/audio-capture.html —it's way too lengthy to include here. The following preparation code initializes a MediaRecorder object: storage_path = (Environment.getExternalStorageDirectory()                .getAbsolutePath() + '/kivy_recording.3gp')recorder = MediaRecorder()def init_recorder():    recorder.setAudioSource(AudioSource.MIC)    recorder.setOutputFormat(OutputFormat.THREE_GPP)    recorder.setAudioEncoder(AudioEncoder.AMR_NB)    recorder.setOutputFile(storage_path)    recorder.prepare() This is the typical, straightforward, verbose, Java way of initializing things, which is rewritten in Python word for word. 
Now for the fun part, the Begin recording/End recording button: class RecorderApp(App):    is_recording = False    def begin_end_recording(self):        if (self.is_recording):            recorder.stop()            recorder.reset()            self.is_recording = False            self.root.ids.begin_end_recording.text =                 ('[font=Modern Pictograms][size=120]'                 'e[/size][/font]nBegin recording')            return        init_recorder()        recorder.start()        self.is_recording = True        self.root.ids.begin_end_recording.text =             ('[font=Modern Pictograms][size=120]'             '%[/size][/font]nEnd recording') As you can see, no rocket science was applied here either. We just stored the current state, is_recording, and then took the action depending on it, namely: Start or stop the MediaRecorder object (the highlighted part). Flip the is_recording flag. Update the button text so that it reflects the current state (see the next screenshot). The last part of the application that needs updating is the recorder.kv file. We need to tweak the Begin recording/End recording button so that it calls our begin_end_recording() function: Button:        id: begin_end_recording        background_color: C('#3498DB')        text:            ('[font=Modern Pictograms][size=120]'            'e[/size][/font]nBegin recording')        on_press: app.begin_end_recording() That's it! If you run the application now, chances are that you'll be able to actually record a sound file that is going to be stored on the SD card. However, please see the next section before you do this. The button that you created will look something like this: Begin recording and End recording – this one button summarizes our app's functionality so far. Major caveat – permissions The default Kivy Launcher app at the time of writing this doesn't have the necessary permission to record sound, android.permission.RECORD_AUDIO. 
This results in a crash as soon as the MediaRecorder instance is initialized. There are many ways to mitigate this problem. For the sake of this tutorial, we provide a modified Kivy Launcher that has the necessary permission enabled. The latest version of the package is also available for download at https://github.com/mvasilkov/kivy_launcher_hack. Before you install the provided .apk file, please delete the existing version of the app, if any, from your device. Alternatively, if you're willing to fiddle with the gory details of bundling Kivy apps for Google Play, you can build Kivy Launcher yourself from the source code. Everything you need to do this can be found in the official Kivy GitHub account, https://github.com/kivy. Playing sound Getting sound playback to work is easier; there is no permission for this and the API is somewhat more concise too. We need to load just one more class, MediaPlayer: MediaPlayer = autoclass('android.media.MediaPlayer')player = MediaPlayer() The following code will run when the user presses the Play button. We'll also use the reset_player() function in the Deleting files section discussed later in this article; otherwise, there could have been one slightly longer function: def reset_player():    if (player.isPlaying()):        player.stop()    player.reset()def restart_player():    reset_player()    try:        player.setDataSource(storage_path)        player.prepare()        player.start()    except:        player.reset() The intricate details of each API call can be found in the official documents, but overall, this listing is pretty self-evident: reset the player to its initial state, load the sound file, and press the Play button. The file format is determined automatically, making our task at hand a wee bit easier. Deleting files This last feature will use the java.io.File class, which is not strictly related to Android. 
One great thing about the official Android documentation is that it contains reference to these core Java classes too, despite the fact they predate the Android operating system by more than a decade. The actual code needed to implement file removal is exactly one line; it's highlighted in the following listing: File = autoclass('java.io.File')class RecorderApp(App):    def delete_file(self):        reset_player()        File(storage_path).delete() First, we stop the playback (if any) by calling the reset_player() function and then remove the file—short and sweet. Interestingly, the File.delete() method in Java won't throw an exception in the event of a catastrophic failure, so there is no need to perform try ... catch in this case. Consistency, consistency everywhere. An attentive reader will notice that we could also delete the file using Python's own os.remove() function. Doing this using Java achieves nothing special compared to a pure Python implementation; it's also slower. On the other hand, as a demonstration of Pyjnius, java.io.File works as good as any other Java class. At this point, with the UI and all three major functions done, our application is complete for the purposes of this tutorial. Summary Writing nonportable code has its strengths and weaknesses, just like any other global architectural decision. This particular choice, however, is especially hard because the switch to native API typically happens early in the project and may be completely impractical to undo at a later stage. The major advantage of the approach was discussed at the beginning of this article: with platform-specific code, you can do virtually anything that your platform is capable of. There are no artificial limits; your Python code has unrestricted access to the same underlying API as the native code. 
On the downside, depending on a single-platform is risky for a number of reasons: The market of Android alone is provably smaller than that of Android plus iOS (this holds true for about every combination of operating systems). Porting the program over to a new system becomes harder with every platform-specific feature you use. If the project runs on just one platform, exactly one political decision may be sufficient to kill it. The chances of getting banned by Google is higher than that of getting the boot from both App Store and Google Play simultaneously. (Again, this holds true for practically every set of application marketplaces.) Now that you're well aware of the options, it's up to you to make an educated choice regarding every app you develop. Resources for Article: Further resources on this subject: Reversing Android Applications [Article] Creating a Direct2D game window class [Article] Images, colors, and backgrounds [Article]
Read more
  • 0
  • 0
  • 7996

article-image-using-resources
Packt
06 Oct 2015
6 min read
Save for later

Using Resources

Packt
06 Oct 2015
6 min read
In this article written by Mathieu Nayrolles, author of the book Xamarin Studio for Android Programming: A C# Cookbook, wants us to learn of how to play sound and how to play a movie by using an user-interactive—press on button in a programmative way. (For more resources related to this topic, see here.) Playing sound There are an infinite number of occasions in which you want your applications to play sound. In this section, we will learn how to play a song from our application in a user-interactive—press on a button—or programmative way. Getting ready For using this you need to create a project on your own: How to do it… Add the following using parameter to your MainActivity.cs file. using Android.Media; Create a class variable named _myPlayer of the MediaPlayer class: MediaPlayer _myPlayer; Create a subfolder named raw under Resources. Place the sound you wish to play inside the newly created folder. We will use the Mario theme, free of rights for non-commercial use, downloaded from http://mp3skull.com. Add the following lines at the end of the OnCreate() method of your MainActivity. _myPlayer = MediaPlayer.Create (this, Resource.Raw.mario); Button button = FindViewById<Button> (Resource.Id.myButton); button.Click += delegate { _myPlayer.Start(); }; In the preceding code sample, the first line creates an instance of the MediaPlayer using the this statement as Context and Resource.Raw.mario as the file to play with this MediaPlayer. The rest of the code is trivial, we just acquired a reference to the button and create a behavior for the OnClick() event of the button. In this event, we call the Start() method of the _myPlayer() variable. Run your application and press on the button as shown on the following screenshot: You should hear the Mario theme playing right after you pressed the button, even if you are running the application on the emulator. How it works... Playing sound (and video) is an activity handled by the Media Player class of the Android platform. 
This class involves some serious implementations and a multitude of states in the same way as activities. However, as an Android Applications Developer and not as an Android Platform Developer we only require a little background on this. The Android multimedia framework includes—thought the Media Player class—a support for playing a very large variety of media such MP3 from the filesystem or from the Internet. Also, you can only play a song on the current sound device which could be the phone speakers, headset or even a Bluetooth enabled speaker. In other words, even if there are many sound outputs available on the phone, the current default settled by the user is the one where your sound will be played. Finally, you cannot play a sound during a call. There's more... Playing sound which is not stored locally Obviously, you may want to play sounds that are not stored locally—in the raw folder—but anywhere else on the phone, like SDcard or so. To do it, you have to use the following code sample: Uri myUri = new Uri ("uriString"); _myPlayer = new MediaPlayer(); _myPlayer.SetAudioStreamType (AudioManager.UseDefaultStreamType); _myPlayer.SetDataSource(myUri); _myPlayer.Prepare(); The first line defines a Uri for the targeting file to play. The three following lines set the StreamType, the Uri prepares the MediaPlayer. The Prepare() is a method which prepares the player for playback in a synchrone manner. Meaning that this instruction blocks the program until the player is ready to play, until the player has loaded the file. You could also call the PrepareAsync() which returns immediately and performs the loading in an asynchronous way. Playing online sound Using a code very similar to the one required to play sounds stored somewhere on the phone, we could play sound from the Internet. 
Basically, we just have to replace the Uri parameter by some HTTP address just like the following: String url = "http://myWebsite/mario.mp3"; _myPlayer = new MediaPlayer(); _myPlayer.SetAudioStreamType (AudioManager.UseDefaultStreamType); _myPlayer.SetDataSource(url); _myPlayer.Prepare(); Also, you must request the permission to access the Internet with your application. This is done in the manifest by adding an <uses-permission> tag for your application as shown by the following code sample: <application android_icon="@drawable/Icon" android_label="Splash"> <uses-permission android_name="android.permission.INTERNET" /> </application> See Also See also the next recipe for playing video. Playing a movie As a final recipe in this article, we will see how to play a movie with your Android application. Playing video, unlike playing audio, involves some special views for displaying it to the users. Getting ready For the last time, we will reuse the same project, and more specifically, we will play a Mario video under the button for playing the Mario theme seen in the previous recipe. How to do it... Add the following code to your Main.axml under Layout file: <VideoView android_id="@+id/myVideoView" android_layout_width="fill_parent" android_layout_height="fill_parent"> </VideoView> As a result, the content of your Main.axml file should look like the following screenshot: Add the following code to the MainActivity.cs file in the OnCreate() method: var videoView = FindViewById<VideoView> (Resource.Id.SampleVideoView); var uri = Android.Net.Uri.Parse ("url of your video"); videoView.SetVideoURI (uri); Finally, invoke the videoView.Start() method. Note that playing Internet based video, even short one, will take a very long time as the video need to be fully loaded while using this technique. How it works... Playing video should involve some special view to display it to users. This special view is named the VideoView tag and should be used as same as simple TextView tag. 
<VideoView
    android_id="@+id/myVideoView"
    android_layout_width="fill_parent"
    android_layout_height="fill_parent">
</VideoView>

As you can see in the preceding code sample, you can apply the same parameters to a VideoView element as to a TextView, such as layout-based options. The VideoView element, like the MediaPlayer for audio, has a method to set the video URI, named SetVideoURI(), and another to start the video, named Start().

Summary

In this article, we learned how to play a sound clip as well as a video in the Android application we developed.

Resources for Article:

Further resources on this subject: Heads up to MvvmCross [article] XamChat – a Cross-platform App [article] Gesture [article]
Heads up to MvvmCross

Packt
29 Dec 2014
33 min read
In this article, by Mark Reynolds, author of the book Xamarin Essentials, we will take the next step and look at how the use of design patterns and frameworks can increase the amount of code that can be reused. We will cover the following topics: An introduction to MvvmCross The MVVM design pattern Core concepts Views, ViewModels, and commands Data binding Navigation (ViewModel to ViewModel) The project organization The startup process Creating NationalParks.MvvmCross Our approach will be to introduce the core concepts at a high level and then dive in and create the national parks sample app using MvvmCross. This will give you a basic understanding of how to use the framework and the value associated with its use. With that in mind, let's get started. (For more resources related to this topic, see here.) Introducing MvvmCross MvvmCross is an open source framework that was created by Stuart Lodge. It is based on the Model-View-ViewModel (MVVM) design pattern and is designed to enhance code reuse across numerous platforms, including Xamarin.Android, Xamarin.iOS, Windows Phone, Windows Store, WPF, and Mac OS X. The MvvmCross project is hosted on GitHub and can be accessed at https://github.com/MvvmCross/MvvmCross. The MVVM pattern MVVM is a variation of the Model-View-Controller pattern. It separates logic traditionally placed in a View object into two distinct objects, one called View and the other called ViewModel. The View is responsible for providing the user interface and the ViewModel is responsible for the presentation logic. The presentation logic includes transforming data from the Model into a form that is suitable for the user interface to work with and mapping user interaction with the View into requests sent back to the Model. 
The following diagram depicts how the various objects in MVVM communicate: While MVVM presents a more complex implementation model, it offers significant benefits: ViewModels and their interactions with Models can generally be tested using frameworks such as NUnit much more easily than applications that combine the user interface and presentation layers, and ViewModels can generally be reused across different user interface technologies and platforms. These factors make the MVVM approach both flexible and powerful.

Views

Views in an MvvmCross app are implemented using platform-specific constructs. For iOS apps, Views are generally implemented as ViewControllers and XIB files. MvvmCross provides a set of base classes, such as MvxViewController, that iOS ViewControllers inherit from. Storyboards can also be used in conjunction with a custom presenter to create Views; we will briefly discuss this option in the section titled Implementing the iOS user interface later in this article. For Android apps, Views are generally implemented as MvxActivity or MvxFragment along with their associated layout files.

ViewModels

ViewModels are classes that provide data and presentation logic to views in an app. Data is exposed to a View as properties on a ViewModel, and logic that can be invoked from a View is exposed as commands. ViewModels inherit from the MvxViewModel base class.

Commands

Commands are used in ViewModels to expose logic that can be invoked from the View in response to user interactions. The command architecture is based on the ICommand interface used in a number of Microsoft frameworks, such as Windows Presentation Foundation (WPF) and Silverlight. MvvmCross provides IMvxCommand, which is an extension of ICommand, along with an implementation named MvxCommand. The commands are generally defined as properties on a ViewModel.
For example: public IMvxCommand ParkSelected { get; protected set; } Each command has an action method defined, which implements the logic to be invoked: protected void ParkSelectedExec(NationalPark park) {    . . .// logic goes here } The commands must be initialized and the corresponding action method should be assigned: ParkSelected =    new MvxCommand<NationalPark> (ParkSelectedExec); Data binding Data binding facilitates communication between the View and the ViewModel by establishing a two-way link that allows data to be exchanged. The data binding capabilities provided by MvvmCross are based on capabilities found in a number of Microsoft XAML-based UI frameworks such as WPF and Silverlight. The basic idea is that you would like to bind a property in a UI control, such as the Text property of an EditText control in an Android app to a property of a data object such as the Description property of NationalPark. The following diagram depicts this scenario: The binding modes There are four different binding modes that can be used for data binding: OneWay binding: This mode tells the data binding framework to transfer values from the ViewModel to the View and transfer any updates to properties on the ViewModel to their bound View property. OneWayToSource binding: This mode tells the data binding framework to transfer values from the View to the ViewModel and transfer updates to View properties to their bound ViewModel property. TwoWay binding: This mode tells the data binding framework to transfer values in both directions between the ViewModel and View, and updates on either object will cause the other to be updated. This binding mode is useful when values are being edited. OneTime binding: This mode tells the data binding framework to transfer values from ViewModel to View when the binding is established; in this mode, updates to ViewModel properties are not monitored by the View. 
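Taken together, the command fragments shown earlier (declaration, action method, and constructor initialization) live in a single ViewModel. The following is an illustrative sketch; the class name is ours, not from the book:

```csharp
// Illustrative: how the three command fragments fit together in one ViewModel.
public class ParkListViewModel : MvxViewModel
{
    public IMvxCommand ParkSelected { get; protected set; }

    public ParkListViewModel()
    {
        // Wire the command to its action method in the constructor.
        ParkSelected = new MvxCommand<NationalPark> (ParkSelectedExec);
    }

    protected void ParkSelectedExec(NationalPark park)
    {
        // Presentation logic for the selected park goes here.
    }
}
```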
The INotifyPropertyChanged interface

The INotifyPropertyChanged interface is an integral part of making data binding work effectively; it acts as a contract between the source object and the target object. As the name implies, it defines a contract that allows the source object to notify the target object when data has changed, thus allowing the target to take any necessary actions, such as refreshing its display. The interface consists of a single event, the PropertyChanged event, which the target object can subscribe to and which is triggered by the source if a property changes. The following sample demonstrates how to implement INotifyPropertyChanged:

public class NationalPark : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler
        PropertyChanged;

    string _name;
    public string Name
    {
        get { return _name; }
        set
        {
            if (value.Equals (_name,
                StringComparison.Ordinal))
            {
                // Nothing to do - the value hasn't changed
                return;
            }
            _name = value;
            OnPropertyChanged();
        }
    }
    . . .
    void OnPropertyChanged(
        [CallerMemberName] string propertyName = null)
    {
        var handler = PropertyChanged;
        if (handler != null)
        {
            handler(this,
                new PropertyChangedEventArgs(propertyName));
        }
    }
}

Binding specifications

Bindings can be specified in a couple of ways. For Android apps, bindings can be specified in layout files. The following example demonstrates how to bind the Text property of a TextView instance to the Description property of a NationalPark instance:

<TextView
    android_layout_width="match_parent"
    android_layout_height="wrap_content"
    android_id="@+id/descrTextView"
    local_MvxBind="Text Park.Description" />

For iOS, binding must be accomplished using the binding API. CreateBinding() is a method that can be found on MvxViewController.
The following example demonstrates how to bind the Description property to a UILabel instance:

this.CreateBinding (this.descriptionLabel).
    To ((DetailViewModel vm) => vm.Park.Description).
    Apply ();

Navigating between ViewModels

Navigating between various screens within an app is an important capability. Within an MvvmCross app, this is implemented at the ViewModel level so that navigation logic can be reused. MvvmCross supports navigation between ViewModels through use of the ShowViewModel<T>() method inherited from MvxNavigatingObject, which is the base class for MvxViewModel. The following example demonstrates how to navigate to DetailViewModel:

ShowViewModel<DetailViewModel>();

Passing parameters

In many situations, there is a need to pass information to the destination ViewModel. MvvmCross provides a number of ways to accomplish this. The primary method is to create a class that contains simple public properties and to pass an instance of the class into ShowViewModel<T>(). The following example demonstrates how to define and use a parameters class during navigation:

public class DetailParams
{
    public int ParkId { get; set; }
}

// using the parameters class
ShowViewModel<DetailViewModel>(
    new DetailParams() { ParkId = 0 });

To receive and use parameters, the destination ViewModel implements an Init() method that accepts an instance of the parameters class:

public class DetailViewModel : MvxViewModel
{
    . . .
    public void Init(DetailParams parameters)
    {
        // use the parameters here . . .
    }
}

Solution/project organization

Each MvvmCross solution will have a single core PCL project that houses the reusable code and a series of platform-specific projects that contain the various apps. The following diagram depicts the general structure: The startup process MvvmCross apps generally follow a standard startup sequence that is initiated by platform-specific code within each app.
There are several classes that collaborate to accomplish the startup; some of these classes reside in the core project and some of them reside in the platform-specific projects. The following sections describe the responsibilities of each of the classes involved. App.cs The core project has an App class that inherits from MvxApplication. The App class contains an override to the Initialize() method so that at a minimum, it can register the first ViewModel that should be presented when the app starts: RegisterAppStart<ViewModels.MasterViewModel>(); Setup.cs Android and iOS projects have a Setup class that is responsible for creating the App object from the core project during the startup. This is accomplished by overriding the CreateApp() method: protected override IMvxApplication CreateApp() {    return new Core.App(); } For Android apps, Setup inherits from MvxAndroidSetup. For iOS apps, Setup inherits from MvxTouchSetup. The Android startup Android apps are kicked off using a special Activity splash screen that calls the Setup class and initiates the MvvmCross startup process. This is all done automatically for you; all you need to do is include the splash screen definition and make sure it is marked as the launch activity. 
The definition is as follows: [Activity( Label="NationalParks.Droid", MainLauncher = true, Icon="@drawable/icon", Theme="@style/Theme.Splash", NoHistory=true, ScreenOrientation = ScreenOrientation.Portrait)] public class SplashScreen : MvxSplashScreenActivity {    public SplashScreen():base(Resource.Layout.SplashScreen)    {    } } The iOS startup The iOS app startup is slightly less automated and is initiated from within the FinishedLaunching() method of AppDelegate: public override bool FinishedLaunching (    UIApplication app, NSDictionary options) {    _window = new UIWindow (UIScreen.MainScreen.Bounds);      var setup = new Setup(this, _window);    setup.Initialize();    var startup = Mvx.Resolve<IMvxAppStart>();    startup.Start();      _window.MakeKeyAndVisible ();      return true; } Creating NationalParks.MvvmCross Now that we have basic knowledge of the MvvmCross framework, let's put that knowledge to work and convert the NationalParks app to leverage the capabilities we just learned. Creating the MvvmCross core project We will start by creating the core project. This project will contain all the code that will be shared between the iOS and Android app primarily in the form of ViewModels. The core project will be built as a Portable Class Library. To create NationalParks.Core, perform the following steps: From the main menu, navigate to File | New Solution. From the New Solution dialog box, navigate to C# | Portable Library, enter NationalParks.Core for the project Name field, enter NationalParks.MvvmCross for the Solution field, and click on OK. Add the MvvmCross starter package to the project from NuGet. Select the NationalParks.Core project and navigate to Project | Add Packages from the main menu. Enter MvvmCross starter in the search field. Select the MvvmCross – Hot Tuna Starter Pack entry and click on Add Package. 
A number of things were added to NationalParks.Core as a result of adding the package, and they are as follows: A packages.config file, which contains a list of libraries (dlls) associated with the MvvmCross starter kit package. These entries are links to actual libraries in the Packages folder of the overall solution. A ViewModels folder with a sample ViewModel named FirstViewModel. An App class in App.cs, which contains an Initialize() method that starts the MvvmCross app by calling RegisterAppStart() to start FirstViewModel. We will eventually change this to start the MasterViewModel class, which will be associated with a View that lists national parks.

Creating the MvvmCross Android app

The next step is to create an Android app project in the same solution. To create NationalParks.Droid, complete the following steps: Select the NationalParks.MvvmCross solution, right-click on it, and navigate to Add | New Project. From the New Project dialog box, navigate to C# | Android | Android Application, enter NationalParks.Droid for the Name field, and click on OK. Add the MvvmCross starter kit package to the new project by selecting NationalParks.Droid and navigating to Project | Add Packages from the main menu. A number of things were added to NationalParks.Droid as a result of adding the package, which are as follows: packages.config: This file contains a list of libraries (dlls) associated with the MvvmCross starter kit package. These entries are links to an actual library in the Packages folder of the overall solution, which contains the actual downloaded libraries. FirstView: This class is present in the Views folder and corresponds to the FirstViewModel created in NationalParks.Core. FirstView: This layout is present in Resources/layout and is used by the FirstView activity. This is a traditional Android layout file, with the exception that it contains binding declarations in the EditText and TextView elements.
Setup: This file inherits from MvxAndroidSetup. This class is responsible for creating an instance of the App class from the core project, which in turn displays the first ViewModel via a call to RegisterAppStart(). SplashScreen: This class inherits from MvxSplashScreenActivity. The SplashScreen class is marked as the main launcher activity and thus initializes the MvvmCross app with a call to Setup.Initialize(). Add a reference to NationalParks.Core by selecting the References folder, right-clicking on it, selecting Edit References, selecting the Projects tab, checking NationalParks.Core, and clicking on OK. Remove MainActivity.cs, as it is no longer needed and will cause a build error, because both it and the new SplashScreen class are marked as the main launcher. Also, remove the corresponding Resources/layout/Main.axml layout file. Run the app. The app will present FirstViewModel, which is linked to the corresponding FirstView instance; the EditText and TextView controls present the same Hello MvvmCross text. As you edit the text in the EditText control, the TextView is automatically updated by means of data binding. The following screenshot depicts what you should see:

Reusing NationalParks.PortableData and NationalParks.IO

Before we start creating the Views and ViewModels for our app, we first need to bring in some code from our previous efforts that can be used to maintain parks. For this, we will simply reuse the NationalParksData singleton and the FileHandler classes that were created previously. To reuse the NationalParksData singleton and FileHandler classes, complete the following steps: Copy NationalParks.PortableData and NationalParks.IO from the solution created in Chapter 6, The Sharing Game in the book Xamarin Essentials (available at https://www.packtpub.com/application-development/xamarin-essentials), to the NationalParks.MvvmCross solution folder. Add a reference to NationalParks.PortableData in the NationalParks.Droid project.
Create a folder named NationalParks.IO in the NationalParks.Droid project and add a link to FileHandler.cs from the NationalParks.IO project. Recall that the FileHandler class cannot be contained in the Portable Class Library because it uses file IO APIs that cannot be referenced from a Portable Class Library. Compile the project. The project should compile cleanly now.

Implementing the INotifyPropertyChanged interface

We will be using data binding to bind UI controls to the NationalPark object, and thus we need to implement the INotifyPropertyChanged interface. This ensures that changes made to properties of a park are reported to the appropriate UI controls. To implement INotifyPropertyChanged, complete the following steps: Open NationalPark.cs in the NationalParks.PortableData project. Specify that the NationalPark class implements the INotifyPropertyChanged interface. Select the INotifyPropertyChanged interface, right-click on it, navigate to Refactor | Implement interface, and press Enter. Enter the following code snippet:

public class NationalPark : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler
        PropertyChanged;
    . . .
}

Add an OnPropertyChanged() method that can be called from each property setter:

void OnPropertyChanged(
    [CallerMemberName] string propertyName = null)
{
    var handler = PropertyChanged;
    if (handler != null)
    {
        handler(this,
            new PropertyChangedEventArgs(propertyName));
    }
}

Update each property definition to call OnPropertyChanged() from its setter, in the same way as depicted for the Name property:

string _name;
public string Name
{
    get { return _name; }
    set
    {
        if (value.Equals (_name, StringComparison.Ordinal))
        {
            // Nothing to do - the value hasn't changed
            return;
        }
        _name = value;
        OnPropertyChanged();
    }
}

Compile the project. The project should compile cleanly. We are now ready to use the NationalParksData singleton in our new project, and it supports data binding.
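The effect of the interface can be checked with a small hypothetical snippet (not one of the book's steps): subscribing to PropertyChanged directly shows the notification that the binding engine relies on.

```csharp
// Hypothetical check: assigning to Name raises PropertyChanged("Name"),
// which is exactly what MvvmCross data binding listens for.
var park = new NationalPark();
park.PropertyChanged += (sender, e) =>
    Console.WriteLine ("Changed: " + e.PropertyName);
park.Name = "Yosemite";   // prints "Changed: Name"
```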
Implementing the Android user interface

Now, we are ready to create the Views and ViewModels required for our app. The app we are creating will have the following flow: A master list view to view national parks A detail view to view details of a specific park An edit view to edit a new or previously existing park The process for creating Views and ViewModels in an Android app generally consists of three steps: Create a ViewModel in the core project with the data and event handlers (commands) required to support the View. Create an Android layout with visual elements and data binding specifications. Create an Android activity, which corresponds to the ViewModel and displays the layout. In our case, this process will be slightly different because we will reuse some of our previous work, specifically the layout files and the menu definitions. To reuse the layout files and menu definitions, perform the following steps: Copy Master.axml, Detail.axml, and Edit.axml from the Resources/layout folder of the solution created in Chapter 5, Developing Your First Android App with Xamarin.Android in the book Xamarin Essentials (available at https://www.packtpub.com/application-development/xamarin-essentials), to the Resources/layout folder in the NationalParks.Droid project, and add them to the project by selecting the layout folder and navigating to Add | Add Files. Copy MasterMenu.xml, DetailMenu.xml, and EditMenu.xml from the Resources/menu folder of the solution created in Chapter 5, Developing Your First Android App with Xamarin.Android in the book Xamarin Essentials (available at https://www.packtpub.com/application-development/xamarin-essentials), to the Resources/menu folder in the NationalParks.Droid project, and add them to the project by selecting the menu folder and navigating to Add | Add Files.

Implementing the master list view

We are now ready to implement the first of our View/ViewModel combinations, which is the master list view.
Creating MasterViewModel

The first step is to create a ViewModel and add a property that will provide data to the list view that displays national parks, along with some initialization code. To create MasterViewModel, complete the following steps: Select the ViewModels folder in NationalParks.Core, right-click on it, and navigate to Add | New File. In the New File dialog box, navigate to General | Empty Class, enter MasterViewModel for the Name field, and click on New. Modify the class definition so that MasterViewModel inherits from MvxViewModel; you will also need to add a few using directives:

. . .
using Cirrious.CrossCore.Platform;
using Cirrious.MvvmCross.ViewModels;
. . .
namespace NationalParks.Core.ViewModels
{
    public class MasterViewModel : MvxViewModel
    {
        . . .
    }
}

Add a property that is a list of NationalPark elements to MasterViewModel. This property will later be data-bound to a list view:

private List<NationalPark> _parks;
public List<NationalPark> Parks
{
    get { return _parks; }
    set
    {
        _parks = value;
        RaisePropertyChanged(() => Parks);
    }
}

Override the Start() method on MasterViewModel to load the _parks collection with data from the NationalParksData singleton. You will need to add a using directive for the NationalParks.PortableData namespace again:

. . .
using NationalParks.PortableData;
. . .
public async override void Start ()
{
    base.Start ();
    await NationalParksData.Instance.Load ();
    Parks = new List<NationalPark> (
        NationalParksData.Instance.Parks);
}

We now need to modify the app startup sequence so that MasterViewModel is the first ViewModel that's started. Open App.cs in NationalParks.Core and change the call to RegisterAppStart() to reference MasterViewModel:

RegisterAppStart<ViewModels.MasterViewModel>();

Updating the Master.axml layout

Update Master.axml so that it can leverage the data binding capabilities provided by MvvmCross.
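The first step below refers to a namespace declaration that is not reproduced in the text. On the root element of Master.axml it typically takes one of the two forms sketched here; which one applies depends on the MvvmCross and Android tooling version, so treat this as an assumption rather than the book's exact code:

```xml
<!-- Root element of Master.axml (a LinearLayout is assumed here). The "local"
     namespace lets Android resolve MvvmCross attributes such as MvxBind. -->
<LinearLayout
    xmlns_android="http://schemas.android.com/apk/res/android"
    xmlns_local="http://schemas.android.com/apk/res-auto">
    <!-- Older samples instead use:
         xmlns_local="http://schemas.android.com/apk/res/NationalParks.Droid" -->
    . . .
</LinearLayout>
```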
To update Master.axml, complete the following steps: Open Master.axml and add a namespace definition at the top of the XML to include the NationalParks.Droid namespace. This namespace definition is required in order to allow Android to resolve the MvvmCross-specific elements that will be specified. Change the ListView element to an Mvx.MvxListView element:

<Mvx.MvxListView
    android_layout_width="match_parent"
    android_layout_height="match_parent"
    android_id="@+id/parksListView" />

Add a data binding specification to the MvxListView element, binding the ItemsSource property of the list view to the Parks property of MasterViewModel, as follows:

    . . .
    android_id="@+id/parksListView"
    local_MvxBind="ItemsSource Parks" />

Add a list item template attribute to the element definition. This layout controls the content of each item that will be displayed in the list view:

local_MvxItemTemplate="@layout/nationalparkitem"

Create the NationalParkItem layout and provide TextView elements to display both the name and description of a park, as follows:

<LinearLayout
    android_orientation="vertical"
    android_layout_width="fill_parent"
    android_layout_height="wrap_content">
    <TextView
        android_layout_width="match_parent"
        android_layout_height="wrap_content"
        android_textSize="40sp"/>
    <TextView
        android_layout_width="match_parent"
        android_layout_height="wrap_content"
        android_textSize="20sp"/>
</LinearLayout>

Add data binding specifications to each of the TextView elements:

. . .
        local_MvxBind="Text Name" />
. . .
        local_MvxBind="Text Description" />
. . .

Note that in this case, the context for data binding is an instance of an item in the collection that was bound to MvxListView; in this example, an instance of NationalPark.

Creating the MasterView activity

Next, create MasterView, an MvxActivity instance that corresponds to MasterViewModel.
To create MasterView, complete the following steps: Select the Views folder in NationalParks.Droid, right-click on it, and navigate to Add | New File. In the New File dialog, navigate to Android | Activity, enter MasterView in the Name field, and select New. Modify the class specification so that it inherits from MvxActivity; you will also need to add a few using directives, as follows:

using Cirrious.MvvmCross.Droid.Views;
using NationalParks.Core.ViewModels;
. . .
namespace NationalParks.Droid.Views
{
    [Activity(Label = "Parks")]
    public class MasterView : MvxActivity
    {
        . . .
    }
}

Open Setup.cs and add code to the CreateApp() method to initialize the file handler and data path for the NationalParksData singleton, as follows:

protected override IMvxApplication CreateApp()
{
    NationalParksData.Instance.FileHandler =
        new FileHandler ();
    NationalParksData.Instance.DataDir =
        System.Environment.GetFolderPath(
            System.Environment.SpecialFolder.MyDocuments);
    return new Core.App();
}

Compile and run the app; you will need to copy the NationalParks.json file to the device or emulator using the Android Device Monitor. All the parks in NationalParks.json should be displayed.

Implementing the detail view

Now that we have the master list view displaying national parks, we can focus on creating the detail view. We will follow the same steps for the detail view as the ones we just completed for the master view.

Creating DetailViewModel

We start creating DetailViewModel by using the following steps: Following the same procedure that was used to create MasterViewModel, create a new ViewModel named DetailViewModel in the ViewModels folder of NationalParks.Core.
Add a NationalPark property to support data binding for the view controls, as follows:

protected NationalPark _park;
public NationalPark Park
{
    get { return _park; }
    set
    {
        _park = value;
        RaisePropertyChanged(() => Park);
    }
}

Create a Parameters class that can be used to pass in the ID of the park that should be displayed. It's convenient to create this class within the class definition of the ViewModel that the parameters are for:

public class DetailViewModel : MvxViewModel
{
    public class Parameters
    {
        public string ParkId { get; set; }
    }
    . . .

Implement an Init() method that will accept an instance of the Parameters class and get the corresponding national park from NationalParksData:

public void Init(Parameters parameters)
{
    Park = NationalParksData.Instance.Parks.
        FirstOrDefault(x => x.Id == parameters.ParkId);
}

Updating the Detail.axml layout

Next, we will update the layout file. The main changes that need to be made are to add data binding specifications to the layout file. To update the Detail.axml layout, perform the following steps: Open Detail.axml and add the project namespace to the XML file. Add data binding specifications to each of the TextView elements that correspond to a national park property, as demonstrated for the park name:

<TextView
    android_layout_width="match_parent"
    android_layout_height="wrap_content"
    android_id="@+id/nameTextView"
    local_MvxBind="Text Park.Name" />

Creating the DetailView activity

Now, create the MvxActivity instance that will work with DetailViewModel. To create DetailView, perform the following steps: Following the same procedure that was used to create MasterView, create a new view named DetailView in the Views folder of NationalParks.Droid. Implement the OnCreateOptionsMenu() and OnOptionsItemSelected() methods so that our menus will be accessible.
Copy the implementation of these methods from the solution created in Chapter 6, The Sharing Game in the book Xamarin Essentials (available at https://www.packtpub.com/application-development/xamarin-essentials). Comment out the section in OnOptionsItemSelected() related to the Edit action for now; we will fill that in once the edit view is completed.

Adding navigation

The last step is to add navigation so that when an item is clicked on in MvxListView on MasterView, the park is displayed in the detail view. We will accomplish this using a command property and data binding. To add navigation, perform the following steps: Open MasterViewModel and add an IMvxCommand property; this will be used to handle the selection of a park:

public IMvxCommand ParkClicked { get; protected set; }

Create an Action delegate that will be called when the ParkClicked command is executed, as follows:

protected void ParkClickedExec(NationalPark park)
{
    ShowViewModel<DetailViewModel> (
        new DetailViewModel.Parameters ()
            { ParkId = park.Id });
}

Initialize the command property in the constructor of MasterViewModel:

ParkClicked =
    new MvxCommand<NationalPark> (ParkClickedExec);

Now, for the last step, add a data binding specification to MvxListView in Master.axml to bind the ItemClick event to the ParkClicked command on MasterViewModel, which we just created:

local_MvxBind="ItemsSource Parks; ItemClick ParkClicked"

Compile and run the app. Clicking on a park in the list view should now navigate to the detail view, displaying the selected park.

Implementing the edit view

We are now almost experts at implementing new Views and ViewModels. One last View to go is the edit view.

Creating EditViewModel

Like we did previously, we start with the ViewModel.
To create EditViewModel, complete the following steps: Following the same process that was previously used to create DetailViewModel, create EditViewModel, add a data binding property, and create a Parameters class for navigation. Implement an Init() method that will accept an instance of the Parameters class and either get the corresponding national park from NationalParksData, in the case of editing an existing park, or create a new instance if the user has chosen the New action. Inspect the parameters passed in to determine the intent:

public void Init(Parameters parameters)
{
    if (string.IsNullOrEmpty (parameters.ParkId))
        Park = new NationalPark ();
    else
        Park =
            NationalParksData.Instance.
            Parks.FirstOrDefault(
            x => x.Id == parameters.ParkId);
}

Updating the Edit.axml layout

Update Edit.axml to provide data binding specifications. To update the Edit.axml layout, you first need to open Edit.axml and add the project namespace to the XML file. Then, add data binding specifications to each of the EditText elements that correspond to a national park property.

Creating the EditView activity

Create a new MvxActivity instance named EditView that will work with EditViewModel. To create EditView, perform the following steps: Following the same procedure that was used to create DetailView, create a new View named EditView in the Views folder of NationalParks.Droid. Implement the OnCreateOptionsMenu() and OnOptionsItemSelected() methods so that the Done action will be accessible from the ActionBar. You can copy the implementation of these methods from the solution created in Chapter 6, The Sharing Game in the book Xamarin Essentials (available at https://www.packtpub.com/application-development/xamarin-essentials). Change the implementation of Done to call the Done command on EditViewModel.

Adding navigation

Add navigation in two places: when New (+) is clicked in MasterView and when Edit is clicked in DetailView.
Let's start with MasterView. To add navigation from MasterViewModel, complete the following steps: Open MasterViewModel.cs and add a NewParkClicked command property along with the handler for the command: protected IMvxCommand NewParkClicked { get; set; } protected void NewParkClickedExec() { ShowViewModel<EditViewModel> (); } Be sure to initialize the command in the constructor, as follows: NewParkClicked = new MvxCommand (NewParkClickedExec); Note that we do not pass a parameter class into ShowViewModel(). This will cause a default instance to be created and passed in, which means that ParkId will be null. We will use this as a way to determine whether a new park should be created. Now, it's time to hook the NewParkClicked command up to the actionNew menu item. We do not have a way to accomplish this using data binding, so we will resort to a more traditional approach: we will use the OnOptionsItemSelected() method. Add logic to invoke the Execute() method on NewParkClicked, as follows: case Resource.Id.actionNew:    ((MasterViewModel)ViewModel).        NewParkClicked.Execute ();    return true; To add navigation from DetailViewModel, complete the following steps: Open DetailViewModel.cs and add an EditPark command property along with the handler for the command. Be sure to initialize the command in the constructor, as shown in the following code snippet: protected IMvxCommand EditPark { get; protected set;} protected void EditParkHandler() {    ShowViewModel<EditViewModel> (        new EditViewModel.Parameters ()            { ParkId = _park.Id }); } Note that an instance of the Parameters class is created, initialized, and passed into the ShowViewModel() method. This instance will in turn be passed into the Init() method on EditViewModel.
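The convention used here, where an absent ParkId signals "create a new park" and a populated one signals "edit this park", reduces to a small dispatch that is easy to see outside C#. The following Python sketch (dictionary-based, illustrative only) mirrors the Init() branching:

```python
def init_edit_view_model(parks, park_id=None):
    """Return the park to edit: a fresh record when park_id is None or empty
    (the New action), otherwise the matching existing record (the Edit action)."""
    if not park_id:
        return {"Id": None, "Name": ""}   # counterpart of `new NationalPark()`
    # counterpart of Parks.FirstOrDefault(x => x.Id == parkId)
    return next((p for p in parks if p["Id"] == park_id), None)


parks = [{"Id": "p1", "Name": "Yellowstone"}]
print(init_edit_view_model(parks))          # a new, empty park
print(init_edit_view_model(parks, "p1"))    # the existing record
```

Keeping this decision inside the ViewModel, rather than in the views, is what lets both the Android and iOS front ends reuse it unchanged.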
Initialize the command property in the constructor of DetailViewModel, as follows: EditPark =    new MvxCommand (EditParkHandler); Now, update the OnOptionsItemSelected() method in DetailView to invoke the DetailViewModel.EditPark command when the Edit action is selected: case Resource.Id.actionEdit:    ((DetailViewModel)ViewModel).EditPark.Execute ();    return true; Compile and run NationalParks.Droid. You should now have a fully functional app that has the ability to create new parks and edit the existing parks. Changes made to EditView should automatically be reflected in MasterView and DetailView.

Creating the MvvmCross iOS app

The process of creating the Android app with MvvmCross provides a solid understanding of how the overall architecture works. Creating the iOS solution should be much easier for two reasons: first, we understand how to interact with MvvmCross and second, all the logic we have placed in NationalParks.Core is reusable, so that we just need to create the View portion of the app and the startup code. To create NationalParks.iOS, complete the following steps: Select the NationalParks.MvvmCross solution, right-click on it, and navigate to Add | New Project. From the New Project dialog, navigate to C# | iOS | iPhone | Single View Application, enter NationalParks.iOS in the Name field, and click on OK. Add the MvvmCross starter kit package to the new project by selecting NationalParks.iOS and navigating to Project | Add Packages from the main menu. A number of things were added to NationalParks.iOS as a result of adding the package. They are as follows: packages.config: This file contains a list of libraries associated with the MvvmCross starter kit package. These entries are links to an actual library in the Packages folder of the overall solution, which contains the actual downloaded libraries. FirstView: This class is placed in the Views folder, which corresponds to the FirstViewModel instance created in NationalParks.Core.
Setup: This class inherits from MvxTouchSetup. This class is responsible for creating an instance of the App class from the core project, which in turn displays the first ViewModel via a call to RegisterAppStart(). AppDelegate.cs.txt: This class contains the sample startup code, which should be placed in the actual AppDelegate.cs file.

Implementing the iOS user interface

We are now ready to create the user interface for the iOS app. The good news is that we already have all the ViewModels implemented, so we can simply reuse them. The bad news is that we cannot easily reuse the storyboards from our previous work; MvvmCross apps generally use XIB files. One of the reasons for this is that storyboards are intended to provide navigation capabilities, and an MvvmCross app delegates that responsibility to the ViewModels and the presenter. It is possible to use storyboards in combination with a custom presenter, but the remainder of this article will focus on using XIB files, as this is the more common approach. The screen layouts are depicted in the following screenshot: We are now ready to get started.

Implementing the master view

The first view we will work on is the master view. To implement the master view, complete the following steps: Create a new ViewController class named MasterView by right-clicking on the Views folder of NationalParks.iOS and navigating to Add | New File | iOS | iPhone View Controller. Open MasterView.xib and arrange controls as seen in the screen layouts. Add outlets for each of the edit controls. Open MasterView.cs and add the following boilerplate logic to deal with constraints on iOS 7: // ios7 layout if (RespondsToSelector(new    Selector("edgesForExtendedLayout")))    EdgesForExtendedLayout = UIRectEdge.None; Within the ViewDidLoad() method, add logic to create MvxStandardTableViewSource for parksTableView: MvxStandardTableViewSource _source; . . .
_source = new MvxStandardTableViewSource(    parksTableView,    UITableViewCellStyle.Subtitle,    new NSString("cell"),    "TitleText Name; DetailText Description",      0); parksTableView.Source = _source; Note that the example uses the Subtitle cell style and binds the national park name and description to the title and subtitle. Add the binding logic to the ViewDidLoad() method. In the previous step, we provided specifications that map properties of UITableViewCell to properties in the binding context. In this step, we need to set the binding context for the Parks property on MasterViewModel: var set = this.CreateBindingSet<MasterView,    MasterViewModel>(); set.Bind (_source).To (vm => vm.Parks); set.Apply(); Compile and run the app. All the parks in NationalParks.json should be displayed.

Implementing the detail view

Now, implement the detail view using the following steps: Create a new ViewController instance named DetailView. Open DetailView.xib and arrange controls as seen in the screen layouts. Add outlets for each of the edit controls. Open DetailView.cs and add the binding logic to the ViewDidLoad() method: this.CreateBinding (this.nameLabel).    To ((DetailViewModel vm) => vm.Park.Name).Apply (); this.CreateBinding (this.descriptionLabel).    To ((DetailViewModel vm) => vm.Park.Description).        Apply (); this.CreateBinding (this.stateLabel).    To ((DetailViewModel vm) => vm.Park.State).Apply (); this.CreateBinding (this.countryLabel).    To ((DetailViewModel vm) => vm.Park.Country).        Apply (); this.CreateBinding (this.latLabel).    To ((DetailViewModel vm) => vm.Park.Latitude).        Apply (); this.CreateBinding (this.lonLabel).    To ((DetailViewModel vm) => vm.Park.Longitude).        Apply ();

Adding navigation

Add navigation from the master view so that when a park is selected, the detail view is displayed, showing the park.
To add navigation, complete the following steps: Open MasterView.cs, create an event handler named ParkSelected, and assign it to the SelectedItemChanged event on MvxStandardTableViewSource, which was created in the ViewDidLoad() method: . . .    _source.SelectedItemChanged += ParkSelected; . . . protected void ParkSelected(object sender, EventArgs e) {    . . . } Within the event handler, invoke the ParkSelected command on MasterViewModel, passing in the selected park: ((MasterViewModel)ViewModel).ParkSelected.Execute (        (NationalPark)_source.SelectedItem); Compile and run NationalParks.iOS. Selecting a park in the list view should now navigate you to the detail view, displaying the selected park.

Implementing the edit view

We now need to implement the last of the Views for the iOS app, which is the edit view. To implement the edit view, complete the following steps: Create a new ViewController instance named EditView. Open EditView.xib and arrange controls as in the layout screenshots. Add outlets for each of the edit controls. Open EditView.cs and add the data binding logic to the ViewDidLoad() method. You should use the same approach to data binding as the approach used for the detail view. Add an event handler named DoneClicked, and within the event handler, invoke the Done command on EditViewModel: protected void DoneClicked (object sender, EventArgs e) {    ((EditViewModel)ViewModel).Done.Execute(); } In ViewDidLoad(), add UIBarButtonItem to NavigationItem for EditView, and assign the DoneClicked event handler to it, as follows: NavigationItem.SetRightBarButtonItem(    new UIBarButtonItem(UIBarButtonSystemItem.Done,        DoneClicked), true);

Adding navigation

Add navigation to two places: when New (+) is clicked from the master view and when Edit is clicked in the detail view. Let's start with the master view. To add navigation to the master view, perform the following steps: Open MasterView.cs and add an event handler named NewParkClicked.
In the event handler, invoke the NewParkClicked command on MasterViewModel: protected void NewParkClicked(object sender,        EventArgs e) {    ((MasterViewModel)ViewModel).            NewParkClicked.Execute (); } In ViewDidLoad(), add UIBarButtonItem to NavigationItem for MasterView and assign the NewParkClicked event handler to it: NavigationItem.SetRightBarButtonItem(    new UIBarButtonItem(UIBarButtonSystemItem.Add,        NewParkClicked), true); To add navigation to the detail view, perform the following steps: Open DetailView.cs and add an event handler named EditParkClicked. In the event handler, invoke the EditPark command on DetailViewModel: protected void EditParkClicked (object sender,    EventArgs e) {    ((DetailViewModel)ViewModel).EditPark.Execute (); } In ViewDidLoad(), add UIBarButtonItem to NavigationItem for DetailView, and assign the EditParkClicked event handler to it: NavigationItem.SetRightBarButtonItem(    new UIBarButtonItem(UIBarButtonSystemItem.Edit,        EditParkClicked), true);

Refreshing the master view list

One last detail that needs to be taken care of is to refresh the UITableView control on MasterView when items have been changed on EditView. To refresh the master view list, perform the following steps: Open MasterView.cs and call ReloadData() on parksTableView within the ViewDidAppear() method of MasterView: public override void ViewDidAppear (bool animated) {    base.ViewDidAppear (animated);    parksTableView.ReloadData(); } Compile and run NationalParks.iOS. You should now have a fully functional app that has the ability to create new parks and edit existing parks. Changes made to EditView should automatically be reflected in MasterView and DetailView.

Considering the pros and cons

After completing our work, we now have the basis to make some fundamental observations. Let's start with the pros: MvvmCross definitely increases the amount of code that can be reused across platforms.
The ViewModels house the data required by the View, the logic required to obtain and transform the data in preparation for viewing, and the logic triggered by user interactions in the form of commands. In our sample app, the ViewModels were somewhat simple; however, the more complex the app, the more reuse will likely be gained. As MvvmCross relies on the use of each platform's native UI frameworks, each app has a native look and feel, and we have a natural layer for implementing platform-specific logic when required. The data binding capabilities of MvvmCross also eliminate a great deal of tedious code that would otherwise have to be written. All of these positives are not necessarily free; let's look at some cons: The first con is complexity; you have to learn another framework on top of Xamarin, Android, and iOS. In some ways, MvvmCross forces you to align the way your apps work across platforms to achieve the most reuse. As the presentation logic is contained in the ViewModels, the views are coerced into aligning with them. The more your UI deviates across platforms, the less likely it is that you can actually reuse ViewModels. With these things in mind, I would definitely consider using MvvmCross for a cross-platform mobile project. Yes, you need to learn an additional framework and yes, you will likely have to align the way some of the apps are laid out, but I think MvvmCross provides enough value and flexibility to make these issues workable. I'm a big fan of reuse, and MvvmCross definitely pushes reuse to the next level.

Summary

In this article, we reviewed the high-level concepts of MvvmCross and worked through a practical exercise in order to convert the national parks apps to use the MvvmCross framework and increase code reuse. Resources for Article: Further resources on this subject: Kendo UI DataViz – Advance Charting [article] The Kendo MVVM Framework [article] Sharing with MvvmCross [article]
A Command-line Companion Called Artisan

Packt
06 May 2015
17 min read
In this article by Martin Bean, author of the book Laravel 5 Essentials, we will see how Laravel's command-line utility has far more capabilities and can be used to run and automate all sorts of tasks. In the next pages, you will learn how Artisan can help you: Inspect and interact with your application Enhance the overall performance of your application Write your own commands By the end of this tour of Artisan's capabilities, you will understand how it can become an indispensable companion in your projects. (For more resources related to this topic, see here.) Keeping up with the latest changes New features are constantly being added to Laravel. If a few days have passed since you first installed it, try running a composer update command from your terminal. You should see the latest versions of Laravel and its dependencies being downloaded. Since you are already in the terminal, finding out about the latest features is just one command away: $ php artisan changes This saves you from going online to find a change log or reading through a long history of commits on GitHub. It can also help you learn about features that you were not aware of. You can also find out which version of Laravel you are running by entering the following command: $ php artisan --version Laravel Framework version 5.0.16 All Artisan commands have to be run from your project's root directory. With the help of a short script such as Artisan Anywhere, available at https://github.com/antonioribeiro/artisan-anywhere, it is also possible to run Artisan from any subfolder in your project. Inspecting and interacting with your application With the route:list command, you can see at a glance which URLs your application will respond to, what their names are, and if any middleware has been registered to handle requests. This is probably the quickest way to get acquainted with a Laravel application that someone else has built. 
To display a table with all the routes, all you have to do is enter the following command: $ php artisan route:list In some applications, you might see /{v1}/{v2}/{v3}/{v4}/{v5} appended to particular routes. This is because the developer has registered a controller with implicit routing, and Laravel will try to match and pass up to five parameters to the controller.

Fiddling with the internals

When developing your application, you will sometimes need to run short, one-off commands to inspect the contents of your database, insert some data into it, or check the syntax and results of an Eloquent query. One way you could do this is by creating a temporary route with a closure that is going to trigger these actions. However, this is less than practical since it requires you to switch back and forth between your code editor and your web browser. To make these small changes easier, Artisan provides a command called tinker, which boots up the application and lets you interact with it. Just enter the following command: $ php artisan tinker This will start a Read-Eval-Print Loop (REPL) similar to what you get when running the php -a command, which starts an interactive shell. In this REPL, you can enter PHP commands in the context of the application and immediately see their output: > $cat = 'Garfield'; > App\Cat::create(['name' => $cat, 'date_of_birth' => new DateTime]); > echo App\Cat::whereName($cat)->get(); [{"id":"4","name":"Garfield","date_of_birth":…}] > dd(Config::get('database.default')); Version 5 of Laravel leverages PsySH, a PHP-specific REPL that provides a more robust shell with support for keyboard shortcuts and history.

Turning the engine off

Whether it is because you are upgrading a database or waiting to push a fix for a critical bug to production, you may want to manually put your application on hold to avoid serving a broken page to your visitors.
You can do this by entering the following command: $ php artisan down This will put your application into maintenance mode. You can determine what to display to users when they visit your application in this mode by editing the template file at resources/views/errors/503.blade.php (since maintenance mode sends an HTTP status code of 503 Service Unavailable to the client). To exit maintenance mode, simply run the following command: $ php artisan up Fine-tuning your application For every incoming request, Laravel has to load many different classes and this can slow down your application, particularly if you are not using a PHP accelerator such as APC, eAccelerator, or XCache. In order to reduce disk I/O and shave off precious milliseconds from each request, you can run the following command: $ php artisan optimize This will trim and merge many common classes into one file located inside storage/framework/compiled.php. The optimize command is something you could, for example, include in a deployment script. By default, Laravel will not compile your classes if app.debug is set to true. You can override this by adding the --force flag to the command but bear in mind that this will make your error messages less readable. Caching routes Apart from caching class maps to improve the response time of your application, you can also cache the routes of your application. This is something else you can include in your deployment process. The command? Simply enter the following: $ php artisan route:cache The advantage of caching routes is that your application will get a little faster as its routes will have been pre-compiled, instead of evaluating the URL and any matches routes on each request. However, as the routing process now refers to a cache file, any new routes added will not be parsed. You will need to re-cache them by running the route:cache command again. Therefore, this is not suitable during development, where routes might be changing frequently. 
Generators

Laravel 5 ships with various commands to generate new files of different types. If you run $ php artisan list, under the make namespace you will find the following entries: make:command make:console make:controller make:event make:middleware make:migration make:model make:provider make:request These commands create a stub file in the appropriate location in your Laravel application containing boilerplate code ready for you to get started with. This saves keystrokes compared to creating these files from scratch. All of these commands require a name to be specified, as shown in the following command: $ php artisan make:model Cat This will create an Eloquent model class called Cat at app/Cat.php, as well as a corresponding migration to create a cats table. If you do not need to create a migration when making a model (for example, if the table already exists), then you can pass the --no-migration option as follows: $ php artisan make:model Cat --no-migration A new model class will look like this: <?php namespace App; use Illuminate\Database\Eloquent\Model; class Cat extends Model { // } From here, you can define your own properties and methods. The other commands may have options. The best way to check is to append --help after the command name, as shown in the following command: $ php artisan make:command --help You will see that this command has --handler and --queued options to modify the class stub that is created.

Rolling out your own Artisan commands

At this stage you might be thinking about writing your own bespoke commands. As you will see, this is surprisingly easy to do with Artisan. If you have used Symfony's Console component, you will be pleased to know that an Artisan command is simply an extension of it with a slightly more expressive syntax. This means the various helpers that will prompt for input, show a progress bar, or format a table are all available from within Artisan. The command that we are going to write depends on the application we built.
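Before writing a bespoke command, it is worth seeing how little magic the generators above involve: make:model essentially substitutes a class name into a template and writes the result to disk. Here is a minimal Python sketch of that idea (not Laravel's actual implementation; the stub text simply mirrors the model class shown above):

```python
import pathlib
import tempfile

MODEL_STUB = """<?php

namespace App;

use Illuminate\\Database\\Eloquent\\Model;

class {name} extends Model
{{
    //
}}
"""

def make_model(name, app_dir):
    """Write app/<Name>.php from the stub, mimicking `php artisan make:model`."""
    path = pathlib.Path(app_dir) / "{0}.php".format(name)
    path.write_text(MODEL_STUB.format(name=name))
    return path

with tempfile.TemporaryDirectory() as app_dir:
    stub_path = make_model("Cat", app_dir)
    contents = stub_path.read_text()

print("class Cat extends Model" in contents)
```

The real generators add niceties such as namespace detection and refusing to overwrite existing files, but the template-plus-placeholder core is the same.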
It will allow you to export all cat records present in the database as a CSV with or without a header line. If no output file is specified, the command will simply dump all records onto the screen in a formatted table.

Creating the command

There are only two required steps to create a command. Firstly, you need to create the command itself, and then you need to register it manually. We can make use of the make:console command we saw previously to create a console command: $ php artisan make:console ExportCatsCommand This will generate a class inside app/Console/Commands. We will then need to register this command with the console kernel, located at app/Console/Kernel.php: protected $commands = [ 'App\Console\Commands\ExportCatsCommand', ]; If you now run php artisan, you should see a new command called command:name. This command does not do anything yet. However, before we start writing the functionality, let's briefly look at how it works internally.

The anatomy of a command

Inside the newly created command class, you will find some code that has been generated for you. We will walk through the different properties and methods and see what their purpose is. The first two properties are the name and description of the command. Nothing exciting here, this is only the information that will be shown in the command line when you run Artisan. The colon is used to namespace the commands, as shown here: protected $name = 'export:cats';   protected $description = 'Export all cats'; Then you will find the fire method. This is the method that gets called when you run a particular command. From there, you can retrieve the arguments and options passed to the command, or run other methods.
public function fire() Lastly, there are two methods that are responsible for defining the list of arguments or options that are passed to the command: protected function getArguments() { /* Array of arguments */ } protected function getOptions() { /* Array of options */ } Each argument or option can have a name, a description, and a default value, and can be mandatory or optional. Additionally, options can have a shortcut. To understand the difference between arguments and options, consider the following command, where options are prefixed with two dashes: $ command --option_one=value --option_two -v=1 argument_one argument_two In this example, option_two does not have a value; it is only used as a flag. The -v flag only has one dash since it is a shortcut. In your console commands, you'll need to verify any option and argument values the user provides (for example, if you're expecting a number, to ensure the value passed is actually a numerical value). Arguments can be retrieved with $this->argument($arg), and options (you guessed it) with $this->option($opt). If these methods do not receive any parameters, they simply return the full list of parameters. You refer to arguments and options via their names, that is, $this->argument('argument_name');.

Writing the command

We are going to start by writing a method that retrieves all cats from the database and returns them as an array: protected function getCatsData() { $cats = App\Cat::with('breed')->get(); foreach ($cats as $cat) {    $output[] = [      $cat->name,      $cat->date_of_birth,      $cat->breed->name,    ]; } return $output; } There should not be anything new here. We could have used the toArray() method, which turns an Eloquent collection into an array, but we would have had to flatten the array and exclude certain fields.
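The argument-versus-option semantics described above are not unique to Artisan; most CLI parsers draw the same line between positional arguments and prefixed options. As a point of comparison, the sketch below expresses the export command's interface with Python's argparse (purely illustrative; note the shortcut is -H because argparse reserves -h for help):

```python
import argparse

parser = argparse.ArgumentParser(prog="export:cats")
# Positional argument; nargs="?" makes it optional, like InputArgument::OPTIONAL.
parser.add_argument("file", nargs="?", default=None, help="The output file")
# Value-less flag, like InputOption::VALUE_NONE, with a -H shortcut.
parser.add_argument("--headers", "-H", action="store_true", help="Display headers?")

with_file = parser.parse_args(["file.csv", "--headers"])
print(with_file.file, with_file.headers)   # file.csv True
bare = parser.parse_args([])
print(bare.file, bare.headers)             # None False
```

Whatever the framework, the pattern is the same: positionals carry the command's main input, and flags modify its behavior.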
Then we need to define what arguments and options our command expects: protected function getArguments() { return [    ['file', InputArgument::OPTIONAL, 'The output file', null], ]; } To specify additional arguments, just add an additional element to the array with the same parameters: return [ ['arg_one', InputArgument::OPTIONAL, 'Argument 1', null], ['arg_two', InputArgument::OPTIONAL, 'Argument 2', null], ]; The options are defined in a similar way: protected function getOptions() { return [    ['headers', 'h', InputOption::VALUE_NONE, 'Display headers?', null], ]; } The last parameter is the default value that the argument and option should have if it is not specified. In both cases, we want it to be null. Lastly, we write the logic for the fire method: public function fire() { $output_path = $this->argument('file');   $headers = ['Name', 'Date of Birth', 'Breed']; $rows = $this->getCatsData();   if ($output_path) {    $handle = fopen($output_path, 'w');    if ($this->option('headers')) {        fputcsv($handle, $headers);    }    foreach ($rows as $row) {        fputcsv($handle, $row);    }    fclose($handle);    $this->info('All cats exported!'); } else {    $table = $this->getHelperSet()->get('table');    $table->setHeaders($headers)->setRows($rows);    $table->render($this->getOutput()); } } While the bulk of this method is relatively straightforward, there are a few novelties. The first one is the use of the $this->info() method, which writes an informative message to the output. If you need to show an error message in a different color, you can use the $this->error() method. Further down in the code, you will see some functions that are used to generate a table. As we mentioned previously, an Artisan command extends the Symfony console component and, therefore, inherits all of its helpers. These can be accessed with $this->getHelperSet(). Then it is only a matter of passing arrays for the header and rows of the table, and calling the render method.
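The branch inside fire(), CSV to a file when a path was given and a rendered table otherwise, is a handy pattern for any export command. Here is a self-contained Python sketch of the same control flow, using an in-memory buffer in place of fopen:

```python
import csv
import io

def export(rows, headers, handle=None, write_headers=False):
    """Mirror of the fire() logic: write CSV to `handle` when one is given,
    otherwise return the rows rendered as an aligned plain-text table."""
    if handle is not None:
        writer = csv.writer(handle)
        if write_headers:
            writer.writerow(headers)     # like fputcsv($handle, $headers)
        writer.writerows(rows)
        return None
    # No output target: render a table, like Symfony's table helper.
    widths = [max(len(str(v)) for v in col) for col in zip(headers, *rows)]

    def render(row):
        return " | ".join(str(v).ljust(w) for v, w in zip(row, widths))

    return "\n".join(render(r) for r in [headers] + rows)

rows = [["Garfield", "1978-06-19", "Persian"]]
headers = ["Name", "Date of Birth", "Breed"]

buffer = io.StringIO()
export(rows, headers, handle=buffer, write_headers=True)
print(buffer.getvalue(), end="")
print(export(rows, headers))
```

Separating "what to export" from "where it goes" keeps the command easy to test, which is exactly what the $output_path branch achieves in the Artisan version.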
To see the output of our command, we will run the following command: $ php artisan export:cats $ php artisan export:cats --headers file.csv Scheduling commands Traditionally, if you wanted a command to run periodically (hourly, daily, weekly, and so on), then you would have to set up a Cron job in Linux-based environments, or a scheduled task in Windows environments. However, this comes with drawbacks. It requires the user to have server access and familiarity with creating such schedules. Also, in cloud-based environments, the application may not be hosted on a single machine, or the user might not have the privileges to create Cron jobs. The creators of Laravel saw this as something that could be improved, and have come up with an expressive way of scheduling Artisan tasks. Your schedule is defined in app/Console/Kernel.php, and with your schedule being defined in this file, it has the added advantage of being present in source control. If you open the Kernel class file, you will see a method named schedule. Laravel ships with one by default that serves as an example: $schedule->command('inspire')->hourly(); If you've set up a Cron job in the past, you will see that this is instantly more readable than the crontab equivalent: 0 * * * * /path/to/artisan inspire Specifying the task in code also means we can easily change the console command to be run without having to update the crontab entry. By default, scheduled commands will not run. To do so, you need a single Cron job that runs the scheduler each and every minute: * * * * * php /path/to/artisan schedule:run 1>> /dev/null 2>&1 When the scheduler is run, it will check for any jobs whose schedules match and then runs them. If no schedules match, then no commands are run in that pass. 
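The single Cron entry works because schedule:run is itself the scheduler: every minute it asks each registered command whether its schedule matches the current time, and executes the ones that do. A stripped-down sketch of that dispatch loop (Python, illustrative only; real schedule expressions are richer than these lambda predicates):

```python
import datetime

class Schedule:
    """Toy scheduler: schedule:run calls run() once a minute and executes
    whichever registered commands report that they are due."""
    def __init__(self):
        self.jobs = []   # (command_name, due_predicate) pairs

    def command(self, name, due):
        self.jobs.append((name, due))

    def run(self, now):
        return [name for name, due in self.jobs if due(now)]

schedule = Schedule()
schedule.command("inspire", lambda t: t.minute == 0)               # hourly()
schedule.command("foo", lambda t: t.minute % 5 == 0)               # everyFiveMinutes()
schedule.command("bar", lambda t: t.hour == 21 and t.minute == 0)  # dailyAt('21:00')

print(schedule.run(datetime.datetime(2015, 5, 6, 21, 0)))   # all three match
print(schedule.run(datetime.datetime(2015, 5, 6, 21, 7)))   # nothing matches
```

Because the match is evaluated at run time, changing a command's schedule only requires editing code, not touching the crontab, which is the whole appeal of the approach.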
You are free to schedule as many commands as you wish, and there are various methods to schedule them that are expressive and descriptive: $schedule->command('foo')->everyFiveMinutes(); $schedule->command('bar')->everyTenMinutes(); $schedule->command('baz')->everyThirtyMinutes(); $schedule->command('qux')->daily(); You can also specify a time for a scheduled command to run: $schedule->command('foo')->dailyAt('21:00'); Alternatively, you can create less frequent scheduled commands: $schedule->command('foo')->weekly(); $schedule->command('bar')->weeklyOn(1, '21:00'); The first parameter in the second example is the day, with 0 representing Sunday, and 1 through 6 representing Monday through Saturday, and the second parameter is the time, again specified in 24-hour format. You can also explicitly specify the day on which to run a scheduled command: $schedule->command('foo')->mondays(); $schedule->command('foo')->tuesdays(); $schedule->command('foo')->wednesdays(); // And so on $schedule->command('foo')->weekdays(); If you have a potentially long-running command, then you can prevent it from overlapping: $schedule->command('foo')->everyFiveMinutes()          ->withoutOverlapping(); Along with the schedule, you can also specify the environment under which a scheduled command should run, as shown in the following command: $schedule->command('foo')->weekly()->environments('production'); You could use this to run commands in a production environment, for example, archiving data or running a report periodically. By default, scheduled commands won't execute if the maintenance mode is enabled. This behavior can be easily overridden: $schedule->command('foo')->weekly()->evenInMaintenanceMode(); Viewing the output of scheduled commands For some scheduled commands, you probably want to view the output somehow, whether that is via e-mail, logged to a file on disk, or sending a callback to a pre-defined URL. All of these scenarios are possible in Laravel. 
To send the output of a job via e-mail, use the following command: $schedule->command('foo')->weekly()          ->emailOutputTo('someone@example.com'); If you wish to write the output of a job to a file on disk, that is easy enough too: $schedule->command('foo')->weekly()->sendOutputTo($filepath); You can also ping a URL after a job is run: $schedule->command('foo')->weekly()->thenPing($url); This will execute a GET request to the specified URL, at which point you could send a message to your favorite chat client to notify you that the command has run. Finally, you can chain the preceding methods to send multiple notifications: $schedule->command('foo')->weekly()          ->sendOutputTo($filepath)          ->emailOutputTo('someone@example.com'); However, note that you have to send the output to a file before it can be e-mailed if you wish to do both.

Summary

In this article, you have learned the different ways in which Artisan can assist you in the development, debugging, and deployment process. We have also seen how easy it is to build a custom Artisan command and adapt it to your own needs. If you are relatively new to the command line, you will have had a glimpse into the power of command-line utilities. If, on the other hand, you are a seasoned user of the command line and you have written scripts with other programming languages, you can surely appreciate the simplicity and expressiveness of Artisan. Resources for Article: Further resources on this subject: Your First Application [article] Creating and Using Composer Packages [article] Eloquent relationships [article]
Gesture

Packt
25 Oct 2013
5 min read
(For more resources related to this topic, see here.)

Gestures

While there are ongoing arguments in the courts of America at the time of writing over who invented the likes of dragging images, it is without a doubt that a key feature of iOS is the ability to use gestures. To put it simply, when you tap the screen to start an app, or select a part of an image to enlarge it, or anything like that, you are using gestures. A gesture (in terms of iOS) is any touch interaction between the UI and the device. With iOS 6, there are six gestures the user has the ability to use. These gestures, along with brief explanations, are listed in the following table:

UIPanGestureRecognizer (PanGesture; Continuous type): Pan images or over-sized views by dragging across the screen.
UISwipeGestureRecognizer (SwipeGesture; Continuous type): Similar to panning, except it is a swipe.
UITapGestureRecognizer (TapGesture; Discrete type): Tap the screen a number of times (configurable).
UILongPressGestureRecognizer (LongPressGesture; Discrete type): Hold the finger down on the screen.
UIPinchGestureRecognizer (PinchGesture; Continuous type): Zoom by pinching an area and moving your fingers in or out.
UIRotationGestureRecognizer (RotationGesture; Continuous type): Rotate by moving your fingers in opposite directions.

Gestures can be added programmatically or via Xcode. The available gestures are listed in the following screenshot with the rest of the widgets on the right-hand side of the designer. To add a gesture, drag the gesture you want to use under the view on the View bar (shown in the following screenshot). Design the UI as you want and, while pressing the Ctrl key, drag the gesture to what you want it to recognize. In my example, the object you want to recognize is anywhere on the screen. Once you have connected the gesture to what you want to recognize, you will see the configurable options of the gesture.
The Taps field is the number of taps required before the recognizer is triggered, and the Touches field is the number of points onscreen that must be touched for the recognizer to be triggered. When you come to connect up the UI, the gesture must also be added.

Gesture code

When using Xcode, it is simple to code gestures. The class defined in the Xcode design for the tapping gesture is called tapGesture and is used in the following code:

private int tapped = 0;

public override void ViewDidLoad()
{
    base.ViewDidLoad();
    tapGesture.AddTarget(this, new Selector("screenTapped"));
    View.AddGestureRecognizer(tapGesture);
}

[Export("screenTapped")]
public void SingleTap(UIGestureRecognizer s)
{
    tapped++;
    lblCounter.Text = tapped.ToString();
}

There is nothing really amazing about the code; it just displays how many times the screen has been tapped. The Selector method is called by the code when the tap has been seen. The method name doesn't make any difference as long as the Selector and Export names are the same.

Types

When the gesture types were originally described, they were given a type. The type reflects the number of messages sent to the Selector method. A discrete gesture generates a single message. A continuous gesture generates multiple messages, which requires the Selector method to be more complex. The complexity is added by the Selector method having to check the State of the gesture to decide what to do with each message and whether the gesture has completed.

Adding a gesture in code

It is not a requirement that Xcode be used to add a gesture. Performing the same task in code as my preceding example did in Xcode is easy:

UITapGestureRecognizer tapGesture = new UITapGestureRecognizer()
{
    NumberOfTapsRequired = 1
};

The rest of the code from AddTarget onwards can then be used.

Continuous types

The following code, a pinch recognizer, shows a simple rescaling. There are a couple of other states that I'll explain after the code.
The only difference in the designer code is that I have a UIImageView instead of a label and a UIPinchGestureRecognizer class instead of a UITapGestureRecognizer class.

public override void ViewDidLoad()
{
    base.ViewDidLoad();
    uiImageView.Image = UIImage.FromFile("graphics/image.jpg")
        .Scale(new SizeF(160f, 160f));
    pinchGesture.AddTarget(this, new Selector("screenTapped"));
    uiImageView.AddGestureRecognizer(pinchGesture);
}

[Export("screenTapped")]
public void SingleTap(UIGestureRecognizer s)
{
    UIPinchGestureRecognizer pinch = (UIPinchGestureRecognizer)s;
    float scale = 0f;
    PointF location;
    switch (s.State)
    {
        case UIGestureRecognizerState.Began:
            Console.WriteLine("Pinch begun");
            location = s.LocationInView(s.View);
            break;
        case UIGestureRecognizerState.Changed:
            Console.WriteLine("Pinch value changed");
            scale = pinch.Scale;
            uiImageView.Image = UIImage.FromFile("graphics/image.jpg")
                .Scale(new SizeF(160f, 160f), scale);
            break;
        case UIGestureRecognizerState.Cancelled:
            Console.WriteLine("Pinch cancelled");
            uiImageView.Image = UIImage.FromFile("graphics/image.jpg")
                .Scale(new SizeF(160f, 160f));
            scale = 0f;
            break;
        case UIGestureRecognizerState.Recognized:
            Console.WriteLine("Pinch recognized");
            break;
    }
}

Other UIGestureRecognizerState values

The following is a list of other recognizer states:

Possible: The default state; the gesture hasn't been recognized. Used by all gestures.
Failed: The gesture failed. No messages are sent for this state.
Translation: The direction of the pan. Used in the pan gesture.
Velocity: The speed of the pan. Used in the pan gesture.

In addition to these, it should be noted that discrete types only use the Possible and Recognized states.
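The Scale value reported by the pinch recognizer is essentially the ratio between the current two-finger distance and the distance when the pinch began. A language-neutral sketch of that math (illustrative only, not the UIKit implementation; the function name is invented):

```python
import math

def pinch_scale(start_a, start_b, now_a, now_b):
    """Scale factor: current finger distance over starting finger distance."""
    return math.dist(now_a, now_b) / math.dist(start_a, start_b)
```

Fingers that end up twice as far apart as when the pinch began yield a scale of 2.0, which is the value the Changed case above feeds into the image rescaling.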
Their flexibility underpins why iOS is recognized as an extremely versatile platform for users to manipulate images, video, and anything else on-screen.

Resources for Article:
Further resources on this subject:
Creating and configuring a basic mobile application [Article]
So, what is XenMobile? [Article]
Building HTML5 Pages from Scratch [Article]
The API in Detail

Packt
20 Aug 2015
25 min read
In this article by Hugo Solis, author of the book Kivy Cookbook, we will learn to create a simple API with the help of the App class. We will also learn how to load images asynchronously, parse data, handle exceptions, apply utility functions, and use factory objects. Working with audio, video, and the camera in Kivy will be explained in this article. We will also cover text manipulation and the use of the spellcheck option, and we will see how to add different effects in this article.

(For more resources related to this topic, see here.)

Kivy is actually an API for Python, which lets us create cross-platform apps. An application programming interface (API) is a set of routines, protocols, and tools to build software applications. Generally, we call Kivy a framework because it also has procedures and instructions, such as the Kv language, which are not present in Python. Frameworks are environments that come with support programs, compilers, code libraries, tool sets, and APIs. In this article, we want to review the Kivy API reference. We will go through some useful classes of the API. Every time we import a Kivy package, we will be dealing with an API. Even though the usual imports are from kivy.uix, there are more options and classes in the Kivy API. The Kivy developers have created the API reference, which you can refer to online at http://kivy.org/docs/api-kivy.html for exhaustive information.

Getting to know the API

Our starting point is going to be the App class, which is the base used to create Kivy applications. In this recipe, we are going to create a simple app that uses some resources from this class.

Getting ready

It is important to see the role of the App class in the code.

How to do it…

To complete this recipe, we will create a Python file that uses the resources present in the App class. Let's follow these steps:

Import the kivy package.
Import the App package.
Import the Widget package.
Define the MyW() class.
Define the e1App() class, instanced as App.
Define the build() method and give an icon and a title to the app.
Define the on_start() method.
Define the on_pause() method.
Define the on_resume() method.
Define the on_stop() method.
End the app with the usual lines:

import kivy
from kivy.app import App
from kivy.uix.widget import Widget

class MyW(Widget):
    pass

class e1App(App):
    def build(self):
        self.title = 'My Title'
        self.icon = 'f0.png'
        return MyW()

    def on_start(self):
        print("Hi")
        return True

    def on_pause(self):
        print("paused")
        return True

    def on_resume(self):
        print("active")
        pass

    def on_stop(self):
        print("Bye!")
        pass

if __name__ == '__main__':
    e1App().run()

How it works…

In the second line, we import the most commonly used kivy package. This is the most used element of the API because it permits us to create applications. The third line is an import from kivy.uix, which could be the second most used element, because the majority of the widgets are there. In the e1App class, we have the usual build() method, where we have the line:

self.title = 'My Title'

We are providing a title to the app. As you may remember, the default title would be e1 because of the class's name, but now we are using the title that we want. We have the next line:

self.icon = 'f0.png'

We are giving the app an icon. The default is the Kivy logo, but with this instruction, we are using the image in the file f0.png. In addition, we have the following method:

def on_start(self):
    print("Hi")
    return True

It is in charge of all actions performed when the app starts. In this case, it will print the word Hi in the console. The next method is:

def on_pause(self):
    print("paused")
    return True

This is the method that is performed when the app is paused, that is, when it is taken out of RAM. This event is very common when the app is running on a mobile device. You should return True if your app can go into pause mode; otherwise, return False and your application will be stopped.
In this case, we will print the word paused in the console, but it is very important that you save important information to long-term storage, because there can be errors when the app resumes, and most mobiles don't allow real multitasking; they pause apps when switching between them. This method is used with:

def on_resume(self):
    print("active")
    pass

The on_resume() method is where we verify and correct any errors in the sensitive data of the app. In this case, we are only printing the word active in the console. The last method is:

def on_stop(self):
    print("Bye!")
    pass

It is where all the actions are performed before the app closes. Normally, we save data and take statistics in this method, but in this recipe, we just say Bye! in the console.

There's more…

There is another method, load_kv(), that you can invoke in the build() method, which permits us to select the KV file to use instead of the default one. You only have to add the following line in the build() method:

self.load_kv(filename='e2.kv')

See also

The natural way to go deeper into this recipe is to take a look at the special characteristics that the App class has for the multiplatform support that Kivy provides.

Using the asynchronous data loader

An asynchronous data loader permits us to load images even if their data is not yet available. It has diverse applications, but the most common is to load images from the Internet, because this keeps our app usable even in the absence of web connectivity. In this recipe, we will create an app that loads an image from the Internet.

Getting ready

We will be loading an image from the Internet, so find an image on the Web and grab its URL.

How to do it…

We need only a Python file and the URL in this recipe. To complete the recipe:

Import the usual kivy package.
Import the Image and Loader packages.
Import the Widget package.
Define the e2App class.
Define the _image_loaded() method, which loads the image in the app.
Define the build() method.
In this method, load the image into a proxy image.
Define the image variable, instanced as Image().
Return the image variable to display the loaded image:

import kivy
kivy.require('1.9.0')
from kivy.app import App
from kivy.uix.image import Image
from kivy.loader import Loader

class e2App(App):
    def _image_loaded(self, proxyImage):
        if proxyImage.image.texture:
            self.image.texture = proxyImage.image.texture

    def build(self):
        proxyImage = Loader.image(
            'http://iftucr.org/IFT/ANL_files/artistica.jpg')
        proxyImage.bind(on_load=self._image_loaded)
        self.image = Image()
        return self.image

if __name__ == '__main__':
    e2App().run()

How it works…

The line that loads the image is:

proxyImage = Loader.image(
    'http://iftucr.org/IFT/ANL_files/artistica.jpg')

We assign the image to the proxyImage variable because we are not sure whether the image exists or can be retrieved from the Web. We have the following line:

proxyImage.bind(on_load=self._image_loaded)

We bind the on_load event to the variable proxyImage. The method used is:

def _image_loaded(self, proxyImage):
    if proxyImage.image.texture:
        self.image.texture = proxyImage.image.texture

It verifies whether the image has loaded; if not, the displayed image is not changed. This is why we say that it loads in an asynchronous way.

There's more…

You can also load an image from a file in the traditional way. We have the following line:

proxyImage = Loader.image(
    'http://iftucr.org/IFT/ANL_files/artistica.jpg')

Replace the preceding line with:

proxyImage = Loader.image('f0.png')

Here, f0.png is the name of the file to load.

Logging objects

The log in any software is useful for many purposes, one of them being exception handling. Kivy is always logging information about its performance; it creates a log file for every run of our app. Every programmer knows how helpful logging is in software engineering. In this recipe, we want to show information from our app in that log.
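The proxy-plus-callback pattern that Loader uses can be sketched in a few lines of plain Python. This is an illustration of the pattern only, not Kivy's internals, and all class names here are invented:

```python
class ProxyImage:
    """Stand-in returned immediately, before the real data has arrived."""
    def __init__(self):
        self.texture = None          # filled in only if loading succeeds
        self._callbacks = []

    def bind(self, on_load):
        self._callbacks.append(on_load)

    def _finish(self, texture):
        # Called by the loader when the download attempt completes.
        self.texture = texture
        for callback in self._callbacks:
            callback(self)

class Display:
    """Consumer that keeps a placeholder until a real texture arrives."""
    def __init__(self):
        self.texture = "placeholder"

    def _image_loaded(self, proxy):
        if proxy.texture:            # only swap if the load succeeded
            self.texture = proxy.texture
```

A successful load swaps in the real texture; a failed load leaves the placeholder untouched, which is exactly the guard the recipe's _image_loaded() method performs.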
How to do it…

We will use a Python file with the usual MyW() class, where we will raise an error and display it in the Kivy log. To complete the recipe, follow these steps:

Import the usual kivy package.
Import the Logger package.
Define the MyW() class.
Trigger an info log.
Trigger a debug log.
Perform an exception.
Trigger an exception log:

import kivy
kivy.require('1.9.0')
from kivy.app import App
from kivy.uix.widget import Widget
from kivy.logger import Logger

class MyW(Widget):
    Logger.info('MyW: This is an info message.')
    Logger.debug('MyW: This is a debug message.')
    try:
        raise Exception('exception')
    except Exception:
        Logger.exception('Something happened!')

class e3App(App):
    def build(self):
        return MyW()

if __name__ == '__main__':
    e3App().run()

How it works…

In this recipe, we are creating three logs. The first is in the line:

Logger.info('MyW: This is an info message.')

This is an info log, which is associated with supplementary information. The label MyW is just a convention, but you could use whatever you like. Using the convention, we can track where in the code the log was performed. We will see a log made by that line as:

[INFO ] [MyW ] This is an info message

The next line also performs a log notation:

Logger.debug('MyW: This is a debug message.')

This line will produce a debug log, commonly used to debug the code. Consider the following line:

Logger.exception('Something happened!')

This will produce an error log, which would look like:

[ERROR ] Something happened!

In addition to the three present in this recipe, you can use the trace, warning, and critical logging objects.

There's more…

We also have the trace, warning, error, and critical methods in the Logger class, which work similarly to the methods described in this recipe. By default, the log file is located in the .kivy/logs/ folder of the user running the app, but you can always change this in the Kivy configuration file.
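The label convention can be made concrete: the text before the first colon becomes the bracketed section in the log line. A rough sketch of that formatting in plain Python (illustrative only, not Kivy's logger code; the function name is invented):

```python
def format_log(level, message):
    """'MyW: This is an info message.' -> '[INFO] [MyW] This is an info message.'"""
    if ':' in message:
        section, _, rest = message.partition(':')
        return '[%s] [%s]%s' % (level.upper(), section.strip(), rest)
    return '[%s] %s' % (level.upper(), message)
```

This is why prefixing messages with the class name, as in the recipe, makes the origin of each log entry easy to spot in the console.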
Additionally, you can access the last 100 messages for debugging purposes, even if the logger is not enabled. This is done with the help of LoggerHistory, as follows:

from kivy.logger import LoggerHistory
print(LoggerHistory.history)

The console will display the last 100 logs.

See also

More information about logging can be found at http://kivy.org/docs/api-kivy.logger.html.

Parsing

Kivy has a parser package that helps with CSS parsing. Even though it is not a complete parser, it helps to parse instructions related to the framework. This recipe will show some functions that you could find useful in your context.

How to do it…

The parser package has eight classes, so we will work in Python to review all of them. Let's follow these steps:

Import the parser package:
from kivy.parser import *
Parse a color from a string:
parse_color('#090909')
Parse a string to a string:
parse_string("(a,1,2)")
Parse a string to a boolean value:
parse_bool("0")
Parse a string to a list of two integers:
parse_int2("12 54")
Parse a string to a list of four floats:
parse_float4('54 87.13 35 0.9')
Parse a filename:
parse_filename('e7.py')

Finally, we have parse_int and parse_float, which are aliases of int and float, respectively.

How it works…

In the second step, we parse any of the common ways to define a color (that is, RGB(r, g, b), RGBA(r, g, b, a), aaa, rrggbb, #aaa, or #rrggbb) to a Kivy color definition. The third step takes off the single or double quotes of the string. The fourth step takes a string, True or 1, or False or 0, and parses it to its respective boolean value. The last step is probably very useful because it permits verification that the filename refers to a file available to be used. If the file is found, the resource path is returned.

See also

To use a more general parser, you can use the ply package for Python. Visit https://pypi.python.org/pypi/ply for further information.

Applying utils

There are some methods in Kivy that cannot be arranged in any other class.
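To make the behavior of a few of these helpers concrete, here are rough pure-Python equivalents of some of them. These are sketches of the documented behavior, not Kivy's actual source:

```python
def parse_int2(text):
    # "12 54" -> [12, 54]
    return [int(part) for part in text.split()]

def parse_float4(text):
    # '54 87.13 35 0.9' -> [54.0, 87.13, 35.0, 0.9]
    return [float(part) for part in text.split()]

def parse_bool(text):
    # "0"/"False" -> False, "1"/"True" -> True
    if text in ('0', 'False'):
        return False
    if text in ('1', 'True'):
        return True
    raise ValueError('%r is not a boolean string' % text)
```

Seen this way, the parser package is mostly a collection of small string-to-value conversions tailored to the values that appear in Kivy style rules.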
They are miscellaneous and could be helpful in some contexts. In this recipe, we will see how to use them.

How to do it…

In the spirit of showing all the available methods, let's work directly in Python. To take the package tour, follow these steps:

Import the kivy package:
from kivy.utils import *
Find the intersection between two lists:
intersection(('a',1,2), (1,2))
Find the difference between two lists:
difference(('a',1,2), (1,2))
Convert a string to a tuple:
strtotuple("1,2")
Transform a hex string color to a Kivy color:
get_color_from_hex('#000000')
Transform a Kivy color to a hex value:
get_hex_from_color((0, 1, 0))
Get a random color:
get_random_color(alpha='random')
Evaluate whether a color is transparent:
is_color_transparent((0,0,0,0))
Limit a value between a minimum and a maximum value:
boundary(a,1,2)
Interpolate between two values:
interpolate(10, 50, step=10)
Mark a function as deprecated:
deprecated(MyW)
Get the platform where the app is running:
platform()

How it works…

Almost every method presented in this recipe has a transparent syntax. Let's look at two of the steps in detail. The ninth step is the boundary method. It evaluates the value of a; if it is between 1 and 2, it keeps its value; if it is lower than 1, the method returns 1; if it is greater than 2, the method returns 2. The eleventh step is associated with a deprecation warning for the function MyW; when this function is called for the first time, the warning will be triggered.

See also

If you want to explore this package in detail, you can visit http://kivy.org/docs/api-kivy.utils.html.

Leveraging the factory object

The factory object represents the last step in creating our own widgets, because the factory can be used to automatically register any class or module and instantiate classes from any place in the app. This is a Kivy implementation of the factory pattern, where a factory is an object that creates other objects.
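The two methods singled out above are simple enough to restate in plain Python. These are sketches of the described behavior; the step-interpolation formula is one plausible reading of the API, not necessarily Kivy's exact code:

```python
def boundary(value, minvalue, maxvalue):
    # Clamp value into the [minvalue, maxvalue] range
    return max(minvalue, min(maxvalue, value))

def interpolate(value_from, value_to, step=10):
    # Move one step of the remaining distance towards the target
    return value_from + (value_to - value_from) / float(step)
```

With these definitions, boundary(1.5, 1, 2) keeps the value, boundary(5, 1, 2) clamps to 2, and interpolate(10, 50, step=10) advances a tenth of the way towards 50.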
This also opens a lot of possibilities to create dynamic code in Kivy. In this recipe, we will register one of our widgets.

Getting ready

We will use an adaptation of the code to register the widget as a factory object. Copy the file to the same location as this recipe, with the name e7.py.

How to do it…

In this recipe, we will use one of our simple Python files, where we will register our widget using the factory package. Follow these steps:

Import the usual kivy packages. In addition, import the Factory package.
Register MyWidget.
In the build() method of the usual e8App, return Factory.MyWidget:

import kivy
kivy.require('1.9.0')
from kivy.app import App
from kivy.uix.widget import Widget
from kivy.factory import Factory

Factory.register('MyWidget', module='e7')

class MyW(Widget):
    pass

class e8App(App):
    def build(self):
        return Factory.MyWidget()

if __name__ == '__main__':
    e8App().run()

How it works…

Let us note how the magic is done in the following line:

Factory.register('MyWidget', module='e7')

This line creates the factory object named MyWidget and lets us use it as we want. Note that e7 is the name of the file. Actually, this sentence will also create a file named e7.pyc, which we can use as a replacement for the file e7.py if we want to distribute our code; it is no longer necessary to provide e7.py, since the e7.pyc file is enough.

There's more…

This registration is actually permanent, so if you wish to change the registration in the same code, you need to unregister the object. For example, see the following:

Factory.unregister('MyWidget')
Factory.register('MyWidget', cls=CustomWidget)
new_widget = Factory.MyWidget()

See also

If you want to know more about this amazing package, you can visit http://kivy.org/docs/api-kivy.factory.html.

Working with audio

Nowadays, audio integration in an app is vital. You could not imagine a video game without audio, or an app that does not use multimedia.
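The essence of what Factory.register() does can be sketched with a small registry in plain Python. This is the factory pattern in miniature, with invented names, not Kivy's implementation:

```python
class _Factory(object):
    def __init__(self):
        self._classes = {}

    def register(self, name, cls):
        self._classes[name] = cls

    def unregister(self, name):
        self._classes.pop(name, None)

    def __getattr__(self, name):
        # Attribute access resolves registered names, so Factory.MyWidget
        # returns whatever class was registered under 'MyWidget'.
        try:
            return self._classes[name]
        except KeyError:
            raise AttributeError(name)

Factory = _Factory()

class MyWidget(object):
    pass

Factory.register('MyWidget', MyWidget)
widget = Factory.MyWidget()   # instantiated by name, as in the recipe
```

Because lookup happens by name at call time, code anywhere in the app can instantiate the widget without importing the module that defines it, which is the dynamic behavior the recipe relies on.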
We will create a sample with just one button, which, when pressed, plays an audio file.

Getting ready

We need an audio file in this recipe, in one of the traditional audio formats (mp3, mp4, wav, wma, b-mtp, ogg, spx, midi). If you do not have one, you can always get one from sites such as https://www.freesound.org.

How to do it…

We will use a simple Python file with just one widget to play the audio file. To complete the recipe, let's follow these steps:

Import the usual kivy package.
Import the SoundLoader package.
Define the MyW() class.
Define the __init__() method.
Create a button with the label Play.
Bind the press action to the press() method.
Add the widget to the app.
Define the press() method.
Call the SoundLoader.load() method for your audio file.
Play it with the play() method:

import kivy
kivy.require('1.9.0')
from kivy.app import App
from kivy.uix.widget import Widget
from kivy.uix.button import Button
from kivy.core.audio import SoundLoader

class MyW(Widget):
    def __init__(self, **kwargs):
        super(MyW, self).__init__(**kwargs)
        b1 = Button(text='Play')
        b1.bind(on_press=self.press)
        self.add_widget(b1)

    def press(self, instance):
        sound = SoundLoader.load('owl.wav')
        if sound:
            print("Sound found at %s" % sound.source)
            print("Sound is %.3f seconds" % sound.length)
            sound.play()
            print('playing')

class e4App(App):
    def build(self):
        return MyW()

if __name__ == '__main__':
    e4App().run()

How it works…

In this recipe, the audio file is loaded in the line:

sound = SoundLoader.load('owl.wav')

Here, we use a .wav file. Look at the following line:

if sound:

This checks that the file has been loaded correctly. We have the next line:

print("Sound found at %s" % sound.source)

This prints the name of the loaded file in the console. The next line prints the duration of the file in seconds. See the other line:

sound.play()

This is where the file is played in the app.

There's more…

You can also use the seek() and stop() methods to navigate the audio file.
Let's say that you want to play the audio starting after the first minute; you would use:

sound.seek(60)

The parameter received by the seek() method must be in seconds.

See also

If you need more control of the audio, you should visit http://kivy.org/docs/api-kivy.core.audio.html.

Working with video

Video playback is a useful tool for any app. In this recipe, we will load a widget to play a video file in our app.

Getting ready

It is necessary to have a video file in one of the usual formats (.avi, .mov, .mpg, .mp4, .flv, .wmv, .ogg). If you do not have one, you can visit https://commons.wikimedia.org/wiki/Main_Page to get free media.

How to do it…

In this recipe, we are going to use a simple Python file to create our app with a player widget. To complete the task, follow these steps:

Import the usual kivy packages.
Import the VideoPlayer package.
Define the MyW() class.
Define the __init__() method.
Define the video player with your video.
Add the video player to the app:

import kivy
kivy.require('1.9.0')
from kivy.app import App
from kivy.uix.widget import Widget
from kivy.uix.videoplayer import VideoPlayer

class MyW(Widget):
    def __init__(self, **kwargs):
        super(MyW, self).__init__(**kwargs)
        player = VideoPlayer(
            source='GOR.MOV', state='play',
            options={'allow_stretch': True},
            size=(600, 600))
        self.add_widget(player)

class e5App(App):
    def build(self):
        return MyW()

if __name__ == '__main__':
    e5App().run()

How it works…

In this recipe, the most important line is:

player = VideoPlayer(
    source='GOR.MOV', state='play',
    options={'allow_stretch': True},
    size=(600, 600))

This line loads the file, sets some options, and gives the size of the widget. The allow_stretch option determines whether the video image may be stretched; in our recipe it is permitted, so the image will be maximized to fit in the widget.

There's more…

You can also integrate subtitles or annotations into the video in an easy way.
You only need a JSON-based file with the same name as the video, in the same location, with the .jsa extension. For example, let's use this content in the .jsa file:

[
  {"start": 0, "duration": 2,
   "text": "Here your text"},
  {"start": 2, "duration": 2,
   "bgcolor": [0.5, 0.2, 0.4, 0.5],
   "text": "You can change the background color"}
]

The "start" value gives the second at which the annotation shows up in the video, and the "duration" value gives the time in seconds that the annotation stays in the video.

See also

There are some apps that need more control of the video, so you can visit http://kivy.org/docs/api-kivy.core.video.html for a better understanding.

Working with a camera

It is very common for almost all our personal devices to have a camera, so you could find thousands of ways to use a camera signal in your app. In this recipe, we want to create an app that takes control of the camera present in a device.

Getting ready

You need to have the correct installation of the packages that permit you to interact with a camera. You can review http://kivy.org/docs/faq.html#gstreamer-compatibility to check whether your installation is suitable.

How to do it…

We are going to use Python and KV files in this recipe. The KV file will deal with the camera and a button to interact with it. The Python code is one of our usual Python files with the definition of the root widget. Let's follow these steps:

In the KV file, define the <MyW> rule.
In the rule, define a BoxLayout with a vertical orientation.
Inside the layout, define the Camera widget with the play property set to False.
Also, define a ToggleButton whose press action switches between playing and not playing:

<MyW>:
    BoxLayout:
        orientation: 'vertical'
        Camera:
            id: camera
            play: False
        ToggleButton:
            text: 'Play'
            on_press: camera.play = not camera.play
            size_hint_y: None
            height: '48dp'

In the Python file, import the usual packages.
Define the MyW() class, instanced as BoxLayout:

import kivy
kivy.require('1.9.0')
from kivy.app import App
from kivy.uix.widget import Widget
from kivy.uix.boxlayout import BoxLayout

class MyW(BoxLayout):
    pass

class e6App(App):
    def build(self):
        return MyW()

if __name__ == '__main__':
    e6App().run()

There's more…

If you have a device with more than one camera, for example, a handheld device with front and rear cameras, you can use the index property to switch between them. Below the line:

id: camera

add this line in the KV file:

index: 0

The preceding line selects the first camera; use index: 1 for the second, and so on.

Using spelling

Depending on the kind of app we develop, we may need to spellcheck text provided by the user. In the Kivy API, there is a package to deal with this. In this recipe, we will give an example of how to use it.

Getting ready

If you are not using Mac OS X (or OS X, as Apple calls it now), you will need to install the Python package PyEnchant. For the installation, let's use the pip tool as follows:

pip install PyEnchant

How to do it…

Because this recipe could be used in different contexts, let's work directly in Python. We want to get some suggestions for a misspelled word. To complete the task, follow these steps:

Import the Spelling package:
from kivy.core.spelling import Spelling
Instance the object s as Spelling():
s = Spelling()
List the available languages:
s.list_languages()
In this case, select U.S. English:
s.select_language('en_US')
Ask the object s for a suggestion:
s.suggest('mispell')

How it works…

The first four steps actually set the kind of suggestion that we want. The fifth step makes the suggestion in the line:

s.suggest('mispell')

The output of the expression is:

[u'misspell', u'ispell']

The output is ordered by frequency of use, so misspell is the most probable word that the user wanted to use.

Adding effects

Effects are one of the most important advances in the computer graphics field.
The physics engines help create better effects, and they are under continuous improvement. Effects are pleasing to the end user; they change the whole experience. The kinetic effect is the mechanism that Kivy uses to approach this technology. This effect can be used in diverse applications, from the movement of a button to the simulation of realistic graphical environments. In this recipe, we will review how to set up the effect to use it in our apps.

Getting ready

We are going to use some concepts from physics in this recipe, so it is necessary to have the basics clear. You could start by reading about this on Wikipedia at http://en.wikipedia.org/wiki/Acceleration.

How to do it…

As the applications of this effect are as creative as you want, we are going to work directly in Python to set up the effect. Let's follow these steps:

Import the KineticEffect package:
from kivy.effects.kinetic import KineticEffect
Instance the object effect as KineticEffect():
effect = KineticEffect()
Start the effect at second 10:
effect.start(10)
Update the effect at second 15:
effect.update(15)
Update the effect again at second 30:
effect.update(30)
You can always add friction to the movement:
effect.friction
You can also update the velocity:
effect.update_velocity(30)
Stop the effect at second 48:
effect.stop(48)
Get the final velocity:
effect.velocity()
Get the value in seconds:
effect.value()

How it works…

What we are looking for in this recipe is step 9:

effect.velocity()

The final velocity is what we can use to describe the movement of any object in a realistic way. As the distances are relatively fixed in the app, you need the velocity to describe any motion. We could incrementally repeat the steps to vary the velocity.

There's more…

There are three other effects based on the kinetic effect, which are:

ScrollEffect: This is the base class used to implement an effect. It only calculates scrolling and overscroll.
DampedScrollEffect: This uses the overscroll information to allow the user to drag more than is expected. Once the user stops the drag, the position is returned to one of the bounds.
OpacityScrollEffect: This uses the overscroll information to reduce the opacity of the ScrollView widget. When the user stops the drag, the opacity is set back to 1.

See also

If you want to go deeper into this topic, you should visit http://kivy.org/docs/api-kivy.effects.html.

Advanced text manipulation

Text is one of the most common kinds of content used in apps. This recipe will create an app with a label widget, where we will use text rendering to make our Hello World.

How to do it…

We are going to use one simple Python file that will just show our Hello World text. To complete the recipe:

Import the usual kivy packages.
Also, import the label package.
Define the e9App class, instanced as App.
Define the build() method for the class.
Return the label widget with our Hello World text:

import kivy
kivy.require('1.9.0')  # Code tested in this version!
from kivy.app import App
from kivy.uix.label import Label

class e9App(App):
    def build(self):
        return Label(
            text='Hello [ref=world][color=0000ff]World[/color][/ref]',
            markup=True, font_size=80, font_name='DroidSans')

if __name__ == '__main__':
    e9App().run()

How it works…

Here is the line:

return Label(
    text='Hello [ref=world][color=0000ff]World[/color][/ref]',
    markup=True, font_size=80, font_name='DroidSans')

This is where the rendering is done. Look at the text parameter, where the token [ref] permits us to reference that specific part of the text (for example, to detect a click on the word World), and the token [color] gives a particular color to that part of the text. The parameter markup=True allows the use of tokens. The parameters font_size and font_name let you select the size and font to use for the text.
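The bracketed tokens follow a BBCode-like shape, so the text the user actually sees is the string with the tokens removed. A small sketch of that stripping (illustrative only; Kivy's markup parser does much more, and the function name is invented):

```python
import re

def strip_markup(text):
    """Remove BBCode-style tokens such as [ref=...], [color=...], [/color]."""
    return re.sub(r'\[/?[a-z_]+(=[^\]]*)?\]', '', text)
```

Applied to the recipe's string, the tokens disappear and only the visible words remain, which is a handy trick when you need the plain text, for instance to measure or log it.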
There's more… There are other parameters, with self-explanatory functions, that the label widget can receive, such as: bold=False italic=False halign=left valign=bottom shorten=False text_size=None color=None line_height=1.0 Here, they are listed with their default values. See also If you are interested in creating even more varieties of text, you can visit http://kivy.org/docs/api-kivy.uix.label.html#kivy.uix.label.Label or http://kivy.org/docs/api-kivy.core.text.html. Summary In this article we learned many things to change the UI of our app. We learned to manage asynchronously loaded images, to add different effects, and to deal with the text visible on the screen. We used audio, video, and camera data to create our app. We understood some concepts such as exception handling, the use of factory objects, and the parsing of data. Resources for Article: Further resources on this subject: Subtitles – tracking the video progression[article] Images, colors, and backgrounds[article] Sprites, Camera, Actions![article]
Using Sensors

Packt
10 Oct 2014
25 min read
In this article by Leon Anavi, author of the Tizen Cookbook, we will cover the following topics: Using location-based services to display current location Getting directions Geocoding Reverse geocoding Calculating distance Detecting device motion Detecting device orientation Using the Vibration API (For more resources related to this topic, see here.) The data provided by the hardware sensors of Tizen devices can be useful for many mobile applications. In this article, you will learn how to retrieve the geographic location of Tizen devices using assisted GPS, how to detect changes in device orientation and motion, and how to integrate map services into Tizen web applications. Most of the examples related to maps and navigation use Google APIs. Other service providers such as Nokia HERE, OpenStreetMap, and Yandex also offer APIs with similar capabilities and can be used as an alternative to Google in Tizen web applications. At the time of writing this book, it was announced that Nokia HERE had joined the Tizen Association. Some Tizen devices will be shipped with built-in navigation applications powered by Nokia HERE. The smart watch Gear S is the first Tizen wearable device from Samsung that comes out of the box with an application called Navigator, which is developed with Nokia HERE. Explore the full capabilities of the Nokia HERE JavaScript APIs if you are interested in their integration into your Tizen web applications at https://developer.here.com/javascript-apis. OpenStreetMap also deserves special attention because it is a high-quality platform and a very successful community-driven project. The main advantage of OpenStreetMap is that its usage is completely free. The Reverse geocoding recipe in this article demonstrates address lookup using two different approaches: through the Google API and through the OpenStreetMap API. 
Using location-based services to display current location By following the provided example in this recipe, you will master the HTML5 Geolocation API and learn how to retrieve the coordinates of the current location of a device in a Tizen web application. Getting ready Ensure that the positioning capabilities are turned on. On a Tizen device or Emulator, open Settings, select Locations, and turn on both GPS (if it is available) and Network position as shown in the following screenshot: Enabling GPS and network position from Tizen Settings How to do it... Follow these steps to retrieve the location in a Tizen web application: Implement JavaScript for handling errors: function showError(err) { console.log('Error ' + err.code + ': ' + err.message); } Implement JavaScript for processing the retrieved location: function showLocation(location) { console.log('latitude: ' + location.coords.latitude + '    longitude: ' + location.coords.longitude); } Implement a JavaScript function that searches for the current position using the HTML5 Geolocation API: function retrieveLocation() { if (navigator.geolocation) {    navigator.geolocation.getCurrentPosition(showLocation,      showError); } } At an appropriate place in the source code of the application, invoke the function created in the previous step: retrieveLocation(); How it works The getCurrentPosition() method of the HTML5 Geolocation API is used in the retrieveLocation() function to retrieve the coordinates of the current position of the device. The functions showLocation() and showError() are provided as callbacks, which are invoked on success or failure. An instance of the Position interface is provided as an argument to showLocation(). 
This interface has two properties: coords: This specifies an object that defines the retrieved position timestamp: This specifies the date and time when the position has been retrieved The getCurrentPosition() method accepts an instance of the PositionOptions interface as a third optional argument. This argument should be used for setting specific options such as enableHighAccuracy, timeout, and maximumAge. Explore the Geolocation API specification if you are interested in more details regarding the attributes of the discussed interface at http://www.w3.org/TR/geolocation-API/#position-options. There is no need to add any specific permissions explicitly in config.xml. When an application that implements the code from this recipe, is launched for the first time, it will ask for permission to access the location, as shown in the following screenshot: A request to access location in Tizen web application If you are developing a location-based application and want to debug it using the Tizen Emulator, use the Event Injector to set the position. There's more... A map view provided by Google Maps JavaScript API v3 can be easily embedded into a Tizen web application. An internet connection is required to use the API, but there is no need to install an additional SDK or tools from Google. Follow these instructions to display a map and a marker: Make sure that the application can access the Google API. For example, you can enable access to any website by adding the following line to config.xml: <access origin="*" subdomains="true"></access> Visit https://code.google.com/apis/console to get the API keys. Click on Services and activate Google Maps API v3. After that, click on API and copy Key for browser apps. Its value will be used in the source code of the application. 
Implement the following source code to show a map inside div with the ID map-canvas: <style type="text/css"> #map-canvas { width: 320px; height: 425px; } </style> <script type="text/javascript" src="https://maps.googleapis.com/maps/api/js?key=<API Key>&sensor=false"></script> Replace <API Key> in the line above with the value of the key obtained on the previous step. <script type="text/javascript"> function initialize(nLatitude, nLongitude) { var mapOptions = {    center: new google.maps.LatLng(nLatitude, nLongitude),    zoom: 14 }; var map = new google.maps.Map(document.getElementById("map-canvas"), mapOptions); var marker = new google.maps.Marker({    position: new google.maps.LatLng(nLatitude,      nLongitude),    map: map }); } </script> In the HTML of the application, create the following div element: <div id="map-canvas"></div> Provide latitude and longitude to the function and execute it at an appropriate location. For example, these are the coordinates of a location in Westminster, London: initialize(51.501725, -0.126109); The following screenshot demonstrates a Tizen web application that has been created by following the preceding guidelines: Google Map in Tizen web application Combine the tutorial from the How to do it section of the recipe with these instructions to display a map with the current location. See also A source code of a simple Tizen web application is provided alongside the book following the tutorial from this recipe. Feel free to use it as you wish. More details are available in the W3C specification of the HTML5 Geolocation API at http://www.w3.org/TR/geolocation-API/. To learn more details and to explore the full capabilities of the Google Maps JavaScript API v3, please visit https://developers.google.com/maps/documentation/javascript/tutorial. Getting directions Navigation is another common task for mobile applications. 
The Google Directions API allows web and mobile developers to retrieve a route between locations by sending an HTTP request. It is mandatory to specify an origin and a destination, but it is also possible to set waypoints. All locations can be provided either by exact coordinates or by address. An example of getting directions to reach a destination on foot is demonstrated in this recipe. Getting ready Before you start with the development, register an application and obtain API keys: Log in to Google Developers Console at https://code.google.com/apis/console. Click on Services and turn on Directions API. Click on API Access and get the value of Key for server apps, which should be used in all requests from your Tizen web application to the API. For more information about the API keys for the Directions API, please visit https://developers.google.com/maps/documentation/directions/#api_key. How to do it... Use the following source code to retrieve and display step-by-step instructions on how to walk from one location to another using the Google Directions API: Allow the application to access websites by adding the following line to config.xml: <access origin="*" subdomains="true"></access> Create an HTML unordered list: <ul id="directions" data-role="listview"></ul> Create JavaScript that will load retrieved directions: function showDirections(data) { if (!data || !data.routes || (0 == data.routes.length)) {    console.log('Unable to provide directions.');    return; } var directions = data.routes[0].legs[0].steps; for (nStep = 0; nStep < directions.length; nStep++) {    var listItem = $('<li>').append($('<p>').append(directions[nStep].html_instructions));    $('#directions').append(listItem); } $('#directions').listview('refresh'); } Create a JavaScript function that sends an asynchronous HTTP (AJAX) request to the Google Maps API to retrieve directions: function retrieveDirection(sLocationStart, sLocationEnd){ $.ajax({    type: 'GET',    url: 
'https://maps.googleapis.com/maps/api/directions/json?',    data: { origin: sLocationStart,        destination: sLocationEnd,        mode: 'walking',        sensor: 'true',        key: '<API key>' }, Do not forget to replace <API key> with the Key for server apps value provided by Google for the Directions API. Please note that a similar key has to be set in the source code in the subsequent recipes that utilize Google APIs too:    success : showDirections,    error : function (request, status, message) {    console.log('Error');    } }); } Provide start and end locations as arguments and execute the retrieveDirection() function. For example: retrieveDirection('Times Square, New York, NY, USA', 'Empire State Building, 350 5th Avenue, New York, NY 10118, USA'); How it works The first mandatory step is to allow the Tizen web application access to Google servers. After that, an HTML unordered list with the ID directions is constructed. An origin and destination are provided to the JavaScript function retrieveDirection(). On success, the showDirections() function is invoked as a callback and it loads step-by-step instructions on how to move from the origin to the destination. The following screenshot displays a Tizen web application with guidance on how to walk from Times Square in New York to the Empire State Building: The Directions API is quite flexible. The mandatory parameters are origin, destination, and sensor. Numerous other options can be configured in the HTTP request using different parameters. To set the desired transport, use the parameter mode, which has the following options: driving walking bicycling transit (for getting directions using public transport) By default, if the mode is not specified, its value will be set to driving. The unit system can be configured through the parameter units. The options metric and imperial are available. 
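Putting these parameters together, the query string that retrieveDirection() sends can be assembled in a few lines. This is a sketch in Python for brevity; the origin and destination are the example values used above, and '<API key>' stands in for a real Key for server apps:

```python
from urllib.parse import urlencode

# Hypothetical example values taken from the recipe above.
params = {
    'origin': 'Times Square, New York, NY, USA',
    'destination': 'Empire State Building, 350 5th Avenue, New York, NY 10118, USA',
    'mode': 'walking',
    'units': 'metric',
    'sensor': 'true',
    'key': '<API key>',  # placeholder, not a real key
}

# urlencode() percent-encodes the values and joins them with '&',
# which is exactly what $.ajax does with its data option.
url = 'https://maps.googleapis.com/maps/api/directions/json?' + urlencode(params)
print(url)
```

Requesting this URL returns the same JSON document that the showDirections() callback consumes.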
The developer can also define restrictions using the parameter avoid, as well as the addresses of one or more intermediate points through the waypoints parameter. A pipe (|) is used as a separator if more than one address is provided. There's more... An application with similar features for getting directions can also be created using services from Nokia HERE. The REST API can be used in the same way as the Google Maps API. Start by acquiring the credentials at http://developer.here.com/get-started. An asynchronous HTTP request should be sent to retrieve directions. Instructions on how to construct the request to the REST API are provided in its documentation at https://developer.here.com/rest-apis/documentation/routing/topics/request-constructing.html. The Nokia HERE JavaScript API is another excellent solution for routing. Create instances of the Display and Manager classes provided by the API to make a map and a routing manager. After that, create a list of waypoints whose coordinates are defined by instances of the Coordinate class. Refer to the example provided in the user's guide of the API to learn the details at https://developer.here.com/javascript-apis/documentation/maps/topics/routing.html. The full specifications of the Display, Manager, and Coordinate classes are available at the following links: https://developer.here.com/javascript-apis/documentation/maps/topics_api_pub/nokia.maps.map.Display.html https://developer.here.com/javascript-apis/documentation/maps/topics_api_pub/nokia.maps.routing.Manager.html https://developer.here.com/javascript-apis/documentation/maps/topics_api_pub/nokia.maps.geo.Coordinate.html See also All details, options, and returned results from the Google Directions API are available at https://developers.google.com/maps/documentation/directions/. Geocoding Geocoding is the process of retrieving geographical coordinates associated with an address. It is often used in mobile applications that use maps and provide navigation. 
In this recipe, you will learn how to convert an address to longitude and latitude using JavaScript and AJAX requests to the Google Geocoding API. Getting ready You must obtain keys before you can use the Geocoding API in a Tizen web application: Visit Google Developers Console at https://code.google.com/apis/console. Click on Services and turn on Geocoding API. Click on API Access and get the value of Key for server apps. Use it in all requests from your Tizen web application to the API. For more details regarding the API keys for the Geocoding API, visit https://developers.google.com/maps/documentation/geocoding/#api_key. How to do it... Follow these instructions to retrieve geographic coordinates of an address in a Tizen web application using the Google Geocoding API: Allow the application to access websites by adding the following line to config.xml: <access origin="*" subdomains="true"></access> Create a JavaScript function to handle results provided by the API: function retrieveCoordinates(data) { if (!data || !data.results || (0 == data.results.length)) {    console.log('Unable to retrieve coordinates');    return; } var latitude = data.results[0].geometry.location.lat; var longitude = data.results[0].geometry.location.lng; console.log('latitude: ' + latitude + ' longitude: ' +    longitude); } Create a JavaScript function that sends a request to the API: function geocoding(address) { $.ajax({    type: 'GET',    url: 'https://maps.googleapis.com/maps/api/geocode/json?',    data: { address: address,      sensor: 'true',      key: '<API key>' }, As in the previous recipes, you should again replace <API key> with the Key for server apps value provided by Google for the Geocoding API.    success : retrieveCoordinates,    error : function (request, status, message) {    console.log('Error: ' + message);    } }); } Provide the address as an argument to the geocoding() function and invoke it. 
For example: geocoding('350 5th Avenue, New York, NY 10118, USA'); How it works The address is passed as an argument to the geocoding() function, which sends a request to the URL of the Google Geocoding API. The URL specifies that the returned result should be serialized as JSON. The parameters of the URL contain information about the address and the API key. Additionally, there is a parameter that indicates whether the device has a sensor. In general, Tizen mobile devices are equipped with GPS, so the parameter sensor is set to true. A successful response from the API is handled by the retrieveCoordinates() function, which is executed as a callback. After processing the data, the code snippet in this recipe prints the retrieved coordinates at the console. For example, if we provide the address of the Empire State Building to the geocoding() function, on success the following text will be printed: latitude: 40.7481829 longitude: -73.9850635. See also Explore the Google Geocoding API documentation to learn the details regarding the usage of the API and all of its parameters at https://developers.google.com/maps/documentation/geocoding/#GeocodingRequests. Nokia HERE provides similar features. Refer to the documentation of its Geocoder API to learn how to create the URL of a request to it at https://developer.here.com/rest-apis/documentation/geocoder/topics/request-constructing.html. Reverse geocoding Reverse geocoding, also known as address lookup, is the process of retrieving an address that corresponds to a location described with geographic coordinates. The Google Geocoding API provides methods for both geocoding and reverse geocoding. In this recipe, you will learn how to find the address of a location based on its coordinates using the Google API as well as an API provided by OpenStreetMap. Getting ready The same keys are required for geocoding and reverse geocoding. If you have already obtained a key for the previous recipe, you can use it directly here again. 
Otherwise, you can perform the following steps: Visit Google Developers Console at https://code.google.com/apis/console. Go to Services and turn on Geocoding API. Select API Access, locate the value of Key for server apps, and use it in all requests from the Tizen web application to the API. If you need more information about the Geocoding API keys, visit https://developers.google.com/maps/documentation/geocoding/#api_key. How to do it... Follow the described algorithm to retrieve an address based on geographic coordinates using the Google Maps Geocoding API: Allow the application to access websites by adding the following line to config.xml: <access origin="*" subdomains="true"></access> Create a JavaScript function to handle the data provided for a retrieved address: function retrieveAddress(data) { if (!data || !data.results || (0 == data.results.length)) {    console.log('Unable to retrieve address');    return; } var sAddress = data.results[0].formatted_address; console.log('Address: ' + sAddress); } Implement a function that performs a request to Google servers to retrieve an address based on latitude and longitude: function reverseGeocoding(latitude, longitude) { $.ajax({    type: 'GET',    url: 'https://maps.googleapis.com/maps/api/geocode/json?',    data: { latlng: latitude+','+longitude,        sensor: 'true',        key: '<API key>' }, Pay attention that <API key> has to be replaced with the Key for server apps value provided by Google for the Geocoding API:    success : retrieveAddress,    error : function (request, status, message) {    console.log('Error: ' + message);    } }); } Provide coordinates as arguments of function and execute it, for example: reverseGeocoding('40.748183', '-73.985064'); How it works If an application developed using the preceding source code invokes the reverseGeocoding() function with latitude 40.748183 and longitude -73.985064, the printed result at the console will be: 350 5th Avenue, New York, NY 10118, USA. 
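The JSON document that the retrieveAddress() callback receives can be illustrated with a trimmed, hypothetical response. This is a Python sketch for brevity; a real Geocoding API response carries many more fields (geometry, address components, and so on):

```python
import json

# A trimmed, hypothetical reverse-geocoding response -- only the
# fields the callback actually reads are shown here.
response_text = '''
{
  "status": "OK",
  "results": [
    {"formatted_address": "350 5th Avenue, New York, NY 10118, USA"}
  ]
}
'''
data = json.loads(response_text)

# Mirrors the guard and lookup performed in retrieveAddress():
# check that results exist, then read the first formatted_address.
if data.get('results'):
    print('Address: ' + data['results'][0]['formatted_address'])
```

The JavaScript callback performs the same two steps on the object that jQuery hands it after parsing the response.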
By the way, as in the previous recipe, the address corresponds to the location of the Empire State Building in New York. The reverseGeocoding() function sends an AJAX request to the API. The parameters in the URL specify that the response must be formatted as JSON. The latitude and longitude of the location are separated by a comma and set as the value of the latlng parameter in the URL. There's more... OpenStreetMap also provides a reverse geocoding service. For example, the following URL will return a JSON result for a location with the latitude 40.7481829 and longitude -73.9850635: http://nominatim.openstreetmap.org/reverse?format=json&lat=40.7481829&lon=-73.9850635 The main advantage of OpenStreetMap is that it is an open project with a great community. Its API for reverse geocoding does not require any keys and it can be used for free. Leaflet is a popular open source JavaScript library based on OpenStreetMap and optimized for mobile devices. It is well supported and easy to use, so you may consider integrating it in your Tizen web applications. Explore its features at http://leafletjs.com/features.html. See also All details regarding the Google Geocoding API are available at https://developers.google.com/maps/documentation/geocoding/#ReverseGeocoding If you prefer to use the API provided by OpenStreetMap, please have a look at http://wiki.openstreetmap.org/wiki/Nominatim#Reverse_Geocoding_.2F_Address_lookup Calculating distance This recipe is dedicated to a method for calculating the distance between two locations. The Google Directions API will be used again. Unlike the Getting directions recipe, this time only the information about the distance will be processed. Getting ready Just like the other recipes related to the Google APIs, the developer must obtain API keys before starting development. Please follow these instructions to register and get an appropriate API key: Visit Google Developers Console at https://code.google.com/apis/console. 
Click on Services and turn on Directions API. Click on API Access and save the value of Key for server apps. Use it in all requests from your Tizen web application to the API. If you need more information about the API keys for the Directions API, visit https://developers.google.com/maps/documentation/directions/#api_key. How to do it... Follow these steps to calculate the distance between two locations: Allow the application to access websites by adding the following line to config.xml: <access origin="*" subdomains="true"></access> Implement a JavaScript function that will process the retrieved data: function retrieveDistance(data) { if (!data || !data.routes || (0 == data.routes.length)) {    console.log('Unable to retrieve distance');    return; } var sLocationStart =    data.routes[0].legs[0].start_address; var sLocationEnd = data.routes[0].legs[0].end_address; var sDistance = data.routes[0].legs[0].distance.text; console.log('The distance between ' + sLocationStart + '    and ' + sLocationEnd + ' is: ' + sDistance); } Create a JavaScript function that will request directions using the Google Maps API: function checkDistance(sStart, sEnd) { $.ajax({    type: 'GET',    url: 'https://maps.googleapis.com/maps/api/directions/json?',    data: { origin: sStart,        destination: sEnd,        sensor: 'true',        units: 'metric',        key: '<API key>' }, Remember to replace <API key> with the Key for server apps value provided by Google for the Directions API:        success : retrieveDistance,        error : function (request, status, message) {        console.log('Error: ' + message);        }    }); } Execute the checkDistance() function and provide the origin and the destination as arguments, for example: checkDistance('Plovdiv', 'Burgas'); Geographical coordinates can also be provided as arguments to the function checkDistance(). 
For example, let's calculate the same distance, but this time by providing the latitude and longitude of locations in the Bulgarian cities Plovdiv and Burgas: checkDistance('42.135408,24.74529', '42.504793,27.462636'); How it works The checkDistance() function sends data to the Google Directions API. It sets the origin, the destination, the sensor, the unit system, and the API key as parameters of the URL. The result returned by the API is provided as JSON, which is handled in the retrieveDistance() function. The output in the console of the preceding example, which retrieves the distance between the Bulgarian cities Plovdiv and Burgas, is The distance between Plovdiv, Bulgaria and Burgas, Bulgaria is: 253 km. See also For all details about the Directions API as well as a full description of the returned response, visit https://developers.google.com/maps/documentation/directions/. Detecting device motion This recipe offers a tutorial on how to detect and handle device motion in Tizen web applications. No specific Tizen APIs will be used. The source code in this recipe relies on the standard W3C DeviceMotionEvent, which is supported by Tizen web applications as well as any modern web browser. How to do it... Please follow these steps to detect device motion and display its acceleration in a Tizen web application: Create HTML components to show device acceleration, for example: <p>X: <span id="labelX"></span></p> <p>Y: <span id="labelY"></span></p> <p>Z: <span id="labelZ"></span></p> Create a JavaScript function to handle errors: function showError(err) { console.log('Error: ' + err.message); } Create a JavaScript function that handles motion events: function motionDetected(event) { var acc = event.accelerationIncludingGravity; var sDeviceX = (acc.x) ? acc.x.toFixed(2) : '?'; var sDeviceY = (acc.y) ? acc.y.toFixed(2) : '?'; var sDeviceZ = (acc.z) ? 
acc.z.toFixed(2) : '?'; $('#labelX').text(sDeviceX); $('#labelY').text(sDeviceY); $('#labelZ').text(sDeviceZ); } Create a JavaScript function that starts a listener for motion events: function deviceMotion() { try {    if (!window.DeviceMotionEvent) {      throw new Error('device motion not supported.');    }    window.addEventListener('devicemotion', motionDetected,      false); } catch (err) {    showError(err); } } Invoke the function at an appropriate location in the source code of the application: deviceMotion(); How it works The deviceMotion() function registers an event listener that invokes the motionDetected() function as a callback when a device motion event is detected. All errors, including an error if DeviceMotionEvent is not supported, are handled in the showError() function. As shown in the following screenshot, the motionDetected() function loads the data of the properties of DeviceMotionEvent into the HTML5 labels that were created in the first step. The results are displayed using the standard unit for acceleration according to the International System of Units (SI), metres per second squared (m/s2). The JavaScript method toFixed() is invoked to convert the result to a string with two decimals: A Tizen web application that detects device motion See also Notice that the device motion event specification is part of the DeviceOrientationEvent specification. Both are still in draft. The latest published version is available at http://www.w3.org/TR/orientation-event/. The source code of a sample Tizen web application that detects device motion is provided along with the book. You can import the project of the application into the Tizen IDE and explore it. Detecting device orientation In this recipe, you will learn how to monitor changes of the device orientation using the HTML5 DeviceOrientation event as well as how to get the device orientation using the Tizen SystemInfo API. 
Both methods for retrieving device orientation have advantages and work in Tizen web applications. It is up to the developer to decide which approach is more suitable for their application. How to do it... Perform the following steps to register a listener and handle device orientation events in your Tizen web application: Create a JavaScript function to handle errors: function showError(err) { console.log('Error: ' + err.message); } Create a JavaScript function that handles change of the orientation: function orientationDetected(event) { console.log('absolute: ' + event.absolute); console.log('alpha: ' + event.alpha); console.log('beta: ' + event.beta); console.log('gamma: ' + event.gamma); } Create a JavaScript function that adds a listener for the device orientation: function deviceOrientation() { try {    if (!window.DeviceOrientationEvent) {      throw new Error('device orientation not supported.');    }    window.addEventListener('deviceorientation',      orientationDetected, false); } catch (err) {    showError(err); } } Execute the JavaScript function to start listening for device orientation events: deviceOrientation(); How it works If DeviceOrientationEvent is supported, the deviceOrientation() function binds the event to the orientationDetected() function, which is invoked as a callback only on success. The showError() function will be executed only if a problem occurs. An instance of the DeviceOrientationEvent interface is provided as an argument of the orientationDetected() function. In the preceding code snippet, the values of its four read-only properties absolute (a Boolean value, true if the device provides orientation data absolutely), alpha (rotation around the z-axis), beta (rotation around the x-axis), and gamma (rotation around the y-axis) are printed in the console. There's more... There is an easier way to determine whether a Tizen device is in landscape or portrait mode. 
For this case, it is recommended to use the SystemInfo API in a Tizen web application. The following code snippet retrieves the device orientation: function onSuccessCallback(orientation) { console.log("Device orientation: " + orientation.status); } function onErrorCallback(error) { console.log("Error: " + error.message); } tizen.systeminfo.getPropertyValue("DEVICE_ORIENTATION", onSuccessCallback, onErrorCallback); The status of the orientation can be one of the following values: PORTRAIT_PRIMARY PORTRAIT_SECONDARY LANDSCAPE_PRIMARY LANDSCAPE_SECONDARY See also The DeviceOrientationEvent specification is still a draft. The latest published version is available at http://www.w3.org/TR/orientation-event/. For more information on the Tizen SystemInfo API, visit https://developer.tizen.org/dev-guide/2.2.1/org.tizen.web.device.apireference/tizen/systeminfo.html. Using the Vibration API Tizen is famous for its excellent support of HTML5 and W3C APIs. The standard Vibration API is also supported, and it can be used in Tizen web applications. This recipe offers code snippets on how to activate vibration on a Tizen device. How to do it... Use the following code snippet to activate the vibration of the device for three seconds: if (navigator.vibrate) { navigator.vibrate(3000); } To cancel an ongoing vibration, just call the vibrate() method again with zero as its argument: if (navigator.vibrate) { navigator.vibrate(0); } Alternatively, the vibration can be canceled by passing an empty array to the same method: navigator.vibrate([]); How it works The W3C Vibration API is used through the JavaScript object navigator. Its vibrate() method expects either a single value or an array of values. All values must be specified in milliseconds. The value provided to the vibrate() method in the preceding example is 3000 because 3 seconds is equal to 3000 milliseconds. There's more... The W3C Vibration API allows advanced tuning of the device vibration. 
A list of time intervals (with values in milliseconds), alternating between vibration and pause durations, can be specified as an argument of the vibrate() method. For example, the following code snippet will make the device vibrate for 100 ms, stand still for 3 seconds, and then vibrate again, but this time just for 50 ms: if (navigator.vibrate) { navigator.vibrate([100, 3000, 50]); } See also For more information on the vibration capabilities and the API usage, visit http://www.w3.org/TR/vibration/. Tizen native applications for the mobile profile have access to additional APIs written in C++ for light and proximity sensors. Explore the source code of the sample native application SensorApp, which is provided with the Tizen SDK, to learn how to use these sensors. More information about them is available at https://developer.tizen.org/dev-guide/2.2.1/org.tizen.native.appprogramming/html/guide/uix/light_sensor.htm and https://developer.tizen.org/dev-guide/2.2.1/org.tizen.native.appprogramming/html/guide/uix/proximity_sensor.htm. Summary In this article, we learned the details of various hardware sensors such as the GPS, accelerometer, and gyroscope sensors. The main focus of this article was on location-based services, maps, and navigation. Resources for Article: Further resources on this subject: Major SDK components [article] Getting started with Kinect for Windows SDK Programming [article] https://www.packtpub.com/books/content/cordova-plugins [article]
Appcelerator Titanium: Creating Animations, Transformations, and Understanding Drag-and-drop

Packt
22 Dec 2011
10 min read
(For more resources related to this subject, see here.)

Animating a View using the "animate" method

Any Window, View, or Component in Titanium can be animated using the animate method. This allows you to quickly and confidently create animated objects that can give your applications the "wow" factor. Additionally, you can use animations as a way of holding information or elements off screen until they are actually required. A good example would be if you had three different TableViews but only wanted one of those views visible at any one time. Using animations, you could slide those tables in and out of the screen space whenever it suited you, without the complication of creating additional Windows.

In the following recipe, we will create the basic structure of our application by laying out a number of different components, and then get down to animating four different ImageViews. These will each contain a different image to use as our "Funny Face" character. Complete source code for this recipe can be found in the /Chapter 7/Recipe 1 folder.

Getting ready

To prepare for this recipe, open up Titanium Studio and log in if you have not already done so. If you need to register a new account, you can do so for free directly from within the application. Once you are logged in, click on New Project, and the details window for creating a new project will appear. Enter FunnyFaces as the name of the app, and fill in the rest of the details with your own information. Pay attention to the app identifier, which is normally written in reverse domain notation (that is, com.packtpub.funnyfaces). This identifier cannot be easily changed after the project is created, and you will need to match it exactly when creating provisioning profiles for distributing your apps later on.

The first thing to do is copy all of the required images into an images folder under your project's Resources folder.
Then, open the app.js file in your IDE and replace its contents with the following code. This code will form the basis of our FunnyFaces application layout:

// this sets the background color of the master UIView
Titanium.UI.setBackgroundColor('#fff');

//
// create root window
//
var win1 = Titanium.UI.createWindow({
    title: 'Funny Faces',
    backgroundColor: '#fff'
});

//this will determine whether we load the 4 funny face
//images or whether one is selected already
var imageSelected = false;

//the 4 image face objects, yet to be instantiated
var image1;
var image2;
var image3;
var image4;

var imageViewMe = Titanium.UI.createImageView({
    image: 'images/me.png',
    width: 320,
    height: 480,
    left: 0,
    top: 0,
    zIndex: 0,
    visible: false
});
win1.add(imageViewMe);

var imageViewFace = Titanium.UI.createImageView({
    image: 'images/choose.png',
    width: 320,
    height: 480,
    zIndex: 1
});
imageViewFace.addEventListener('click', function(e){
    if(imageSelected == false){
        //transform our 4 image views onto screen so
        //the user can choose one!
    }
});
win1.add(imageViewFace);

//this footer will hold our save button and zoom slider objects
var footer = Titanium.UI.createView({
    height: 40,
    backgroundColor: '#000',
    bottom: 0,
    left: 0,
    zIndex: 2
});
var btnSave = Titanium.UI.createButton({
    title: 'Save Photo',
    width: 100,
    left: 10,
    height: 34,
    top: 3
});
footer.add(btnSave);

var zoomSlider = Titanium.UI.createSlider({
    left: 125,
    top: 8,
    height: 30,
    width: 180
});
footer.add(zoomSlider);
win1.add(footer);

//open root window
win1.open();

Build and run your application in the emulator for the first time, and you should end up with a screen that looks similar to the following example:

How to do it…

Now, back in the app.js file, we are going to animate the four ImageViews, which will each provide an option for our funny face image.
Inside the declaration of the imageViewFace object's event handler, type in the following code:

imageViewFace.addEventListener('click', function(e){
    if(imageSelected == false){
        //transform our 4 image views onto screen so
        //the user can choose one!
        image1 = Titanium.UI.createImageView({
            backgroundImage: 'images/clown.png',
            left: -160,
            top: -140,
            width: 160,
            height: 220,
            zIndex: 2
        });
        image1.addEventListener('click', setChosenImage);
        win1.add(image1);

        image2 = Titanium.UI.createImageView({
            backgroundImage: 'images/policewoman.png',
            left: 321,
            top: -140,
            width: 160,
            height: 220,
            zIndex: 2
        });
        image2.addEventListener('click', setChosenImage);
        win1.add(image2);

        image3 = Titanium.UI.createImageView({
            backgroundImage: 'images/vampire.png',
            left: -160,
            bottom: -220,
            width: 160,
            height: 220,
            zIndex: 2
        });
        image3.addEventListener('click', setChosenImage);
        win1.add(image3);

        image4 = Titanium.UI.createImageView({
            backgroundImage: 'images/monk.png',
            left: 321,
            bottom: -220,
            width: 160,
            height: 220,
            zIndex: 2
        });
        image4.addEventListener('click', setChosenImage);
        win1.add(image4);

        image1.animate({
            left: 0,
            top: 0,
            duration: 500,
            curve: Titanium.UI.ANIMATION_CURVE_EASE_IN
        });
        image2.animate({
            left: 160,
            top: 0,
            duration: 500,
            curve: Titanium.UI.ANIMATION_CURVE_EASE_OUT
        });
        image3.animate({
            left: 0,
            bottom: 20,
            duration: 500,
            curve: Titanium.UI.ANIMATION_CURVE_EASE_IN_OUT
        });
        image4.animate({
            left: 160,
            bottom: 20,
            duration: 500,
            curve: Titanium.UI.ANIMATION_CURVE_LINEAR
        });
    }
});

Now launch the emulator from Titanium Studio, and you should see the initial layout with our "Tap To Choose An Image" view visible.
Tapping the choose ImageView should now animate our four funny face options onto the screen, as seen in the following screenshot:

How it works…

The first block of code creates the basic layout for our application, which consists of a couple of ImageViews, a footer view holding our "save" button, and the Slider control, which we'll use later on to increase the zoom scale of our own photograph.

Our second block of code is where it gets interesting. Here, we're doing a simple check that the user hasn't already selected an image, using the imageSelected Boolean, before getting into our animated ImageViews, named image1, image2, image3, and image4. The concept behind the animation of these four ImageViews is pretty simple. All we're essentially doing is changing the properties of our control over a period of time, defined by us in milliseconds. Here, we are changing the top and left properties of all of our images over a period of half a second, so that we get the effect of them sliding into place on our screen.

You can further enhance these animations by adding more properties to animate. For example, if we wanted to change the opacity of image1 from 50 percent to 100 percent as it slides into place, we could change the code to look similar to the following:

image1 = Titanium.UI.createImageView({
    backgroundImage: 'images/clown.png',
    left: -160,
    top: -140,
    width: 160,
    height: 220,
    zIndex: 2,
    opacity: 0.5
});
image1.addEventListener('click', setChosenImage);
win1.add(image1);

image1.animate({
    left: 0,
    top: 0,
    duration: 500,
    curve: Titanium.UI.ANIMATION_CURVE_EASE_IN,
    opacity: 1.0
});

Finally, the curve property of animate() allows you to adjust the easing of your animated component. Here, we used all four animation-curve constants, one on each of our ImageViews.
They are:

Titanium.UI.ANIMATION_CURVE_EASE_IN: Accelerate the animation slowly
Titanium.UI.ANIMATION_CURVE_EASE_OUT: Decelerate the animation slowly
Titanium.UI.ANIMATION_CURVE_EASE_IN_OUT: Accelerate and decelerate the animation slowly
Titanium.UI.ANIMATION_CURVE_LINEAR: Make the animation speed constant throughout the animation cycle

Animating a View using 2D matrix and 3D matrix transforms

You may have noticed that each of our ImageViews in the previous recipe had a click event listener attached to it, calling an event handler named setChosenImage. This event handler is going to handle setting our chosen "funny face" image to the imageViewFace control. It will then animate all four "funny face" ImageView objects off our screen area using a number of different 2D and 3D matrix transforms. Complete source code for this recipe can be found in the /Chapter 7/Recipe 2 folder.

How to do it…

Replace the existing setChosenImage function, which currently stands empty, with the following source code:

//this function sets the chosen image and removes the 4
//funny faces from the screen
function setChosenImage(e){
    imageViewFace.image = e.source.backgroundImage;
    imageViewMe.visible = true;

    //create the first transform
    var transform1 = Titanium.UI.create2DMatrix();
    transform1 = transform1.rotate(-180);
    var animation1 = Titanium.UI.createAnimation({
        transform: transform1,
        duration: 500,
        curve: Titanium.UI.ANIMATION_CURVE_EASE_IN_OUT
    });
    image1.animate(animation1);
    animation1.addEventListener('complete', function(e){
        //remove our image selection from win1
        win1.remove(image1);
    });

    //create the second transform
    var transform2 = Titanium.UI.create2DMatrix();
    transform2 = transform2.scale(0);
    var animation2 = Titanium.UI.createAnimation({
        transform: transform2,
        duration: 500,
        curve: Titanium.UI.ANIMATION_CURVE_EASE_IN_OUT
    });
    image2.animate(animation2);
    animation2.addEventListener('complete', function(e){
        //remove our image selection from win1
        win1.remove(image2);
    });

    //create the third transform
    var transform3 = Titanium.UI.create2DMatrix();
    transform3 = transform3.rotate(180);
    transform3 = transform3.scale(0);
    var animation3 = Titanium.UI.createAnimation({
        transform: transform3,
        duration: 1000,
        curve: Titanium.UI.ANIMATION_CURVE_EASE_IN_OUT
    });
    image3.animate(animation3);
    animation3.addEventListener('complete', function(e){
        //remove our image selection from win1
        win1.remove(image3);
    });

    //create the fourth and final transform
    var transform4 = Titanium.UI.create3DMatrix();
    transform4 = transform4.rotate(200, 0, 1, 1);
    transform4 = transform4.scale(2);
    transform4 = transform4.translate(20, 50, 170);
    //the m34 property controls the perspective of the 3D view
    transform4.m34 = 1.0 / -3000; //m34 is the position at [3,4] in the matrix
    var animation4 = Titanium.UI.createAnimation({
        transform: transform4,
        duration: 1500,
        curve: Titanium.UI.ANIMATION_CURVE_EASE_IN_OUT
    });
    image4.animate(animation4);
    animation4.addEventListener('complete', function(e){
        //remove our image selection from win1
        win1.remove(image4);
    });

    //change the status of the imageSelected variable
    imageSelected = true;
}

How it works…

Again, we are creating animations for each of the four ImageViews, but this time in a slightly different way. Instead of using the built-in animate method alone, we are creating a separate animation object for each ImageView before calling the ImageView's animate method and passing this animation object to it. This method of creating animations allows you to have finer control over them, including the use of transforms.

Transforms have a couple of shortcuts to help you perform some of the most common animation types quickly and easily. The image1 and image2 transforms, as shown in the previous code, use the rotate and scale methods respectively. Scale and rotate in this case are 2D matrix transforms, meaning they only transform the object in two-dimensional space, along its X-axis and Y-axis.
Each of these transformation types takes a single numeric parameter: for scale, a percentage value from 0 to 100 percent, and for rotate, an angle from 0 to 360 degrees. Another advantage of using transforms for your animations is that you can easily chain them together to perform a more complex animation style. In the previous code, you can see that both a scale and a rotate transform are applied to the image3 component. When you run the application in the emulator or on your device, you should notice that both of these transform animations are applied to the image3 control!

Finally, the image4 control also has a transform animation applied to it, but this time we are using a 3D matrix transform instead of the 2D matrix transforms used for the other three ImageViews. These work the same way as regular 2D matrix transforms, except that you can also animate your control in 3D space, along the Z-axis.

It's important to note that animations have two event listeners: start and complete. These event handlers allow you to perform actions based on the beginning or ending of your animation's life cycle. As an example, you could chain animations together by using the complete event to add a new animation or transform to an object after the previous animation has finished. In our previous example, we are using this complete event to remove our ImageView from the Window once its animation has finished.
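The chaining technique described above (starting follow-up work from an animation's complete event) can be sketched independently of the Titanium runtime. In this sketch, FakeAnimation is a deliberately simplified stand-in that fires its complete listeners synchronously from run(); it is not part of the Titanium API and exists purely to illustrate the sequencing pattern:

```javascript
// FakeAnimation: a minimal stand-in for a Titanium animation object.
// Real Titanium animations fire 'complete' when the animation ends;
// here run() fires it synchronously so the sketch is easy to follow.
function FakeAnimation(name) {
    this.name = name;
    this.listeners = { complete: [] };
}
FakeAnimation.prototype.addEventListener = function (event, fn) {
    this.listeners[event].push(fn);
};
FakeAnimation.prototype.run = function () {
    this.listeners.complete.forEach(function (fn) { fn(); });
};

var order = [];

var first = new FakeAnimation('slideIn');
var second = new FakeAnimation('fadeOut');

// Chain: when the first animation completes, start the second one.
first.addEventListener('complete', function () {
    order.push('first complete');
    second.run();
});
second.addEventListener('complete', function () {
    order.push('second complete');
});

first.run();
console.log(order); // ['first complete', 'second complete']
```

The same pattern applied to real Titanium animation objects is what lets you remove a view, swap a transform, or kick off the next animation exactly when the previous one finishes.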