Tech Guides - Mobile

49 Articles

Unity plugins for augmented reality application development

Sugandha Lahoti
10 Apr 2018
4 min read
Augmented Reality is the powerhouse behind the next set of magic tricks headed to our mobile devices. It combines real-world objects with digital information. Heard of Pokemon Go? An ARKit-powered version of it was showcased at WWDC 2017, built on Apple's augmented reality framework, and following the widespread success of Pokemon Go, a large number of companies are eager to invest in AR technology. Unity is one of the dominant players in the industry when it comes to creating desktop, console and mobile games. Augmented Reality has excited game developers for quite some time now, and following this excitement Unity has released prominent tools for developers to experiment with AR apps. Bear in mind that Unity is not designed exclusively for Augmented Reality, so developers can access additional functionality by importing extensions. These extensions also provide pre-designed game components such as characters or game props. Let us briefly look at three prominent tools or extensions for Augmented Reality development provided by Unity.

Unity ARKit plugin

The Unity ARKit plugin exposes the functionality of the ARKit SDK within Unity projects. As of September 2017, this plugin is also extended for iOS apps as the iOS ARKit plugin. The ARKit plugin gives Unity developers access to features such as motion tracking, vertical and horizontal plane finding, live video rendering, hit-testing, raw point cloud data, ambient light estimation, and more for their AR projects. The plugin also makes it easy to integrate AR features into existing Unity projects. A new tool, the Unity ARKit Remote, speeds up iteration by allowing developers to make real-time changes to the scene and debug scripts in the Unity Editor. The latest update to ARKit, version 1.5, gives developers more tools to power more immersive AR experiences.

Google ARCore

Google ARCore for Unity provides mobile AR experiences on Android without the need for additional hardware. The latest major version, ARCore 1.0, enables AR applications to track a phone's motion in the real world, detect planes in the environment, and understand lighting in the camera scene. ARCore 1.0 introduces oriented feature points, which help in the placement of anchors on angled or textured surfaces. These feature points enhance the environmental understanding of the scene, so ARCore is not limited to horizontal and vertical planes like ARKit, but can place AR content on almost any surface. ARCore 1.0 is supported by the Android Emulator in Android Studio 3.1 Beta and is available on multiple supported Android devices.

Vuforia integration with Unity

Vuforia allows developers to build cross-platform AR apps directly from the Unity editor. It provides Augmented Reality support for Android, iOS, and UWP devices through a single API. It attaches digital content to different types of objects and environments using Model Targets and Ground Plane, across a broad range of devices and operating systems. Ground Plane attaches digital content to horizontal surfaces, while Model Targets provide object recognition capabilities. Other targets include Image Targets (to put AR content on flat objects) and Cloud Targets (to manage large collections of Image Targets from your own CMS). Vuforia also includes a Device Tracking capability, which provides an inside-out device tracker for rotational head and hand tracking, along with APIs to create immersive experiences that transition between AR and VR.
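For readers who want a feel for the kind of session setup the Unity ARKit plugin wraps, here is a minimal native ARKit sketch in Swift. It is not taken from the article and the class and method names (ARDemoViewController, placeAnchor) are purely illustrative; it simply starts a world-tracking session with horizontal and vertical plane detection (an ARKit 1.5 feature mentioned above) and hit-tests a screen point against detected planes.

    import ARKit
    import UIKit

    class ARDemoViewController: UIViewController, ARSCNViewDelegate {
        let sceneView = ARSCNView()

        override func viewDidLoad() {
            super.viewDidLoad()
            sceneView.frame = view.bounds
            sceneView.delegate = self
            view.addSubview(sceneView)
        }

        override func viewWillAppear(_ animated: Bool) {
            super.viewWillAppear(animated)
            // World tracking with horizontal and vertical plane detection (vertical requires ARKit 1.5 / iOS 11.3)
            let configuration = ARWorldTrackingConfiguration()
            configuration.planeDetection = [.horizontal, .vertical]
            sceneView.session.run(configuration)
        }

        // Hit-test a screen point against detected planes, e.g. to place an anchor for virtual content
        func placeAnchor(at point: CGPoint) {
            guard let result = sceneView.hitTest(point, types: .existingPlaneUsingExtent).first else { return }
            let anchor = ARAnchor(transform: result.worldTransform)
            sceneView.session.add(anchor: anchor)
        }
    }

The Unity plugin exposes equivalent session configuration and hit-testing calls from C# scripts, so the same concepts carry straight across.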
You can browse through various AR projects from the Unity community to help you get started with your next big AR idea, as well as to choose the toolkit best suited for you.

Related reading:
  • Leap Motion open sources its $100 augmented reality headset, North Star
  • Unity and Unreal comparison
  • Types of Augmented Reality targets
  • Create Your First Augmented Reality Experience: The Tools and Terms You Need to Understand

An Introduction to PhoneGap

Robi Sen
27 Feb 2015
9 min read
This is the first in a series of posts focusing on PhoneGap, the free and open source framework for creating mobile applications using web technologies such as HTML, CSS, and JavaScript, which will come in handy for game development. In this first article, we will introduce PhoneGap and build a very simple Android application using PhoneGap, the Android SDK, and Eclipse. In a follow-on article, we will look at how you can use PhoneGap and PhoneGap Build to create iOS apps, Android apps, BlackBerry apps, and others from the same web source code. In future articles, we will dive deeper into the various tools and features of PhoneGap that will help you build great mobile applications that perform and function just like native applications. Before we get into setting up and working with PhoneGap, let's talk a little bit about what PhoneGap is. PhoneGap was originally developed by a company called Nitobi, which Adobe acquired in 2011. When Adobe acquired PhoneGap, it donated the code of the project to the Apache Software Foundation, which renamed the project Apache Cordova. While both tools are similar and open source, and PhoneGap is built upon Cordova, PhoneGap has additional capabilities to integrate tightly with Adobe's enterprise products, and users can opt for full support and training. Furthermore, Adobe offers PhoneGap Build, a web-based service that greatly simplifies building Cordova/PhoneGap projects. We will look at PhoneGap Build in a future post.

Apache Cordova is the core code base that Adobe PhoneGap draws from. While both are open source and free, PhoneGap has a paid-for Enterprise version with greater Adobe product integration, management tools, and support. Finally, Adobe offers a free service called PhoneGap Build that eases the process of building applications, especially for those needing to build for many devices.

Getting Started

For this post, to save space, we are going to jump right into getting started with PhoneGap and Android and spend a minimal amount of time on other configurations. To follow along, you need to install Node.js, PhoneGap, Apache Ant, Eclipse, the Android Developer Tools (ADT) for Eclipse, and the Android SDK. We'll be using Windows 8.1 for development in this post, but the instructions are similar regardless of the operating system. Installation guides for any major OS can be found at the links provided for each of the tools you need to install.

Eclipse and the Android SDK

The easiest way to install the Android SDK and the Android ADT for Eclipse is to download the Eclipse ADT bundle. The bundle contains everything you need to get moving; just download it and unpack it to a directory of your choice. If you already have Eclipse installed on your development machine, you can instead download the SDK and the Android Development Tools separately, along with instructions on how to integrate the ADT into Eclipse. Even so, I would recommend downloading the Eclipse ADT bundle and installing it into its own environment, because the ADT plugin can sometimes conflict with other Eclipse plugins.

Making sure Android tooling is set up

One thing you will need to do, whether you use the Eclipse ADT bundle or not, is make sure that the Android tools are added to your path. This is because PhoneGap uses the Android Development Tools and Android SDK to build and compile the Android application.
The easiest way to make sure everything is added to your path is to edit your environment variables. To do that, search for "Edit Environment" and select Edit the system environment variables. This opens the System Properties window. From there, select Advanced and then Environment Variables. Under System Variables, select Path and click Edit. Now you need to add sdk\platform-tools and sdk\tools to your path. If you used the Eclipse ADT bundle, your SDK directory will be of the form C:\adt-bundle-windows-x86_64-20131030\sdk. If you cannot find your Android SDK, search for your ADT. In our case, the two directory paths we add to the Path variable are C:\adt-bundle-windows-x86_64-20131030\sdk\platform-tools and C:\adt-bundle-windows-x86_64-20131030\sdk\tools. Once you're done, select OK, but don't exit the Environment Variables screen just yet, since we will need to come back to it when installing Ant.

Installing Ant

PhoneGap makes use of Apache Ant to help build projects. Download Ant and make sure to add its bin directory to your path. It is also a good idea to set the ANT_HOME environment variable. To do that, create a new variable in the Environment Variables screen under System Variables called ANT_HOME and point it to the directory where you installed Ant. For more detailed instructions, you can read the official install guide for Apache Ant.

Installing Node.js

Node.js is a development platform built on Chrome's JavaScript runtime engine that can be used for building large-scale, real-time, server-based applications. Node.js provides a lot of the command-line tools for PhoneGap, so to install PhoneGap we first need Node.js. Unix, OS X, and Windows users can find installers as well as source code on the Node.js download site. For this post, we will be using the Windows 64-bit installer, which you should be able to double-click and install. Once you're done installing, you should be able to open a command prompt, type npm --version, and see the installed version printed.

Installing PhoneGap

Once you have Node.js installed, open a command line and type npm install -g phonegap. Node will download and install PhoneGap and its dependencies.

Creating an initial project in PhoneGap

Now that you have PhoneGap installed, let's use the command-line tools to create an initial PhoneGap project. First, create a folder where you want to store your project. Then, to create a basic project, all you need to do is type phonegap create mytestapp. PhoneGap will now build a basic project with a deployable app. Go to the directory you are using as your project's root directory and you should see a directory called mytestapp. Look under platforms > android and you will find the directory structure that PhoneGap generated for your Android project. Make sure to note the assets directory, which contains the HTML and JavaScript of the application, and the Cordova directories, which contain the code that ties Android's APIs to PhoneGap/Cordova's API calls. Now let's import the project into Eclipse. Open Eclipse, select Create a New Project, and select Android Project from Existing Code.
Browse to your project directory, select the platforms/android folder, and select Finish. You should now see the mytestapp project, but you may see a lot of little red X's and warnings about the project not building correctly. To fix this, clean and build the project again: right-click on the project directory and, in the resulting Properties dialog, select Android from the navigation pane. For the project build target, select the highest Android API level you have installed and click OK. Then select Clean from the Project menu. This should correct all the errors in the project; if it does not, select Build as well, in case the project does not build automatically. Now you can finally launch your project. To do this, right-click on the HelloWorld project and select Run as and then Android application. You may be warned that you do not have an Android Virtual Device, in which case Eclipse will launch the AVD manager for you. Follow the wizard and set up an AVD image for your API by selecting Create in the AVD manager. Once you have built the image, you should be able to launch the emulator. You may have to right-click on the HelloWorld directory again and select Run as and then Android application. Select your AVD image and Eclipse will launch the Android emulator and push the HelloWorld application to the virtual image. Note that this can take up to five minutes! In a later post, we will look at deploying to an actual Android phone, but for now the emulator will be sufficient. Once the Android emulator has started, you should see the Android home screen. Swipe to unlock it, and you should see the launcher with your PhoneGap HelloWorld app; tap it to run the application.

Summary

That probably seemed like a lot of work, but now that you are set up to work with PhoneGap and Eclipse, you will find that the workflow is much faster when we start to build a simple application. In this post, you learned how to set up PhoneGap, how to build a simple application structure, how to install and set up Android tooling, and how to integrate PhoneGap with the Eclipse ADT. In the next post, we will get into making a real application, look at how to update and deploy code, and see how to push your applications to a real phone.

About the author

Robi Sen, CSO at Department 13, is an experienced inventor, serial entrepreneur, and futurist whose dynamic twenty-plus year career in technology, engineering, and research has led him to work on cutting-edge projects for DARPA, TSWG, SOCOM, RRTO, NASA, DOE, and the DOD. Robi also has extensive experience in the commercial space, including the co-creation of several successful start-up companies. He has worked with companies such as Under Armour, Sony, Cisco, IBM, and many others to help build new products and services. Robi specializes in bringing his unique vision and thought process to difficult and complex problems, allowing companies and organizations to find innovative solutions that they can rapidly operationalize or go to market with.

Why mobile VR sucks

Amarabha Banerjee
09 Jul 2018
4 min read
If you follow the news, chances are you've heard about Virtual Reality (VR) headsets like the Oculus Rift, Samsung Gear VR, and HTC Vive. Trending terms and buzzwords are all good for a business or technology that is novel and yet to be adopted by the majority of consumers, but the proof of the pudding comes when people actually start using the tech, and the first reactions to mobile VR are not good at all. This has even led Oculus CTO John Carmack to say, "We are coasting on novelty, and the initial wonder of being something people have never seen before." The jury is out on present-day mobile VR technologies and headsets: in their present form, they suck. If you want to know why, and what could make them better, read ahead.

Hardware is expensive

Mobile VR headsets are costly, mostly in the $399-$799 range. The most successful VR headset to date is Google Cardboard. The reason: it's dirt cheap and it doesn't need much setup or customization. Such a high price at the initial launch phase of any tech is going to make users wary. Not many people want to buy an expensive new toy without knowing exactly how good it's going to be.

VR games don't match up to video game quality

The initial VR games for mobile were very poor. There are more than two billion mobile gamers across the world, undeniably a huge market to tap into. But we have to keep in mind that these gamers already have access to high-quality games which they can play just by tapping their mobile screen. For them to strap on a headset and get immersed in VR games, the incentive needs to be too alluring to resist. The current crop of VR games lacks complexity, and their UI design is not intuitive enough to hold a user's attention for long, especially when playing a VR game means strapping on that headgear. These games also take too much time to load, which is a huge negative for VR.

The hype vs reality gap is improving, but it's painfully slow

VR is at the initial breakthrough stage, where expectations are high. But the games and apps are not up to the mark, and hence those who have used them are giving them a thumbs down. The word-of-mouth publicity is mostly negative, and this is creating a negative impact on mobile VR as a whole. Unity's CEO John Riccitiello has charted the gap between initial expectations and the reality of VR, and how it might close in the near future.

AR vs VR vs MR: a concoction for confusion

The popularity of Augmented Reality (AR) and the emergence of Mixed Reality (MR), an amalgamation of AR and VR, have left developers unsure of which platform and methodology to adopt. The UX and UI design are quite different for AR, VR, and MR, and hence all three disciplines need dedicated development resources. For this to happen, these disciplines have to be formalized first, and until then the quality of the apps will not improve drastically.

No unified VR development platform

Mobile VR is dependent on SDKs, primarily the two game engines, Unity and Unreal Engine, that have added support for VR game development. While Unity is one of the biggest names in the game development industry, a dedicated and unified VR development platform is still missing in action, and VR is unlikely to become the top priority of either Unity or Unreal Engine any time soon. Things can change if and when a tech giant like Google, Microsoft, or Facebook
dedicates its resources to creating VR apps and games for mobile. Google has Cardboard, Facebook has unveiled React VR and support for AR development, and Microsoft has its own game going with HoloLens AR and MR development, yet the trend that started it all still seems lost among its newer cousins. I think VR will be big, but it will have to wait for implementation by some major business or company. Till then, we will have to wear our ghastly headsets and imagine that we are living in the future.

Related reading:
  • Game developers say Virtual Reality is here to stay
  • Microsoft introduces SharePoint Spaces, adds virtual reality support to SharePoint
  • Build a Virtual Reality Solar System in Unity for Google Cardboard

Android O: What's new and why it's been introduced

Raka Mahesa
07 May 2017
5 min read
Eclair, Froyo, Gingerbread, Honeycomb, Ice Cream Sandwich, Jelly Bean, KitKat, Lollipop, Marshmallow, and Nougat. If you thought that was just a list of sweet treats, well, you're not wrong, but it's also a list of Android version names. And if you guessed that the next version of Android starts with O, you're exactly right, because Google has announced Android O, the latest version of Android. So, what's new in the O version of Android? Let's find out.

Notifications have always been one of Android's biggest strengths. Notifications on Android are informative, versatile, and customizable so they fit their users' needs. Google clearly understands this and has kept improving the notification system. They have overhauled how notifications look, made notifications more interactive, and given users a way to manage the importance of each notification. So, of course, for this version of Android, Google added even more features to the notification system.

The biggest feature added to the notification system in Android O is the Notification Channel. Basically, Notification Channels are an API that allows developers to define categories for notifications from their apps. App users can then control the settings for each category of notifications. This way, users can fine-tune applications so they only show the notifications that they think are important. For example, let's say you have a chat application and it has two notification channels. The first channel is for notifying users when a new chat message arrives, and the second one is for when the user is added to someone else's friend list. Some users may only care about new chat messages, so they can turn off that second type of notification instead of turning off all notifications from the app.

Other features added to the Android O notification system are Notification Snoozing and Notification Timeout. Just like an alarm, Notification Snoozing allows the user to snooze a notification and let it reappear later when the user has time. Meanwhile, Notification Timeout allows developers to set a timeout duration for a notification. Imagine that you want to notify a user about a flash sale that only runs for two hours; by adding a timeout, the notification can remove itself when the event is over. Okay, enough about notifications. What else is new in Android O?

Autofill Framework

One of the newest things introduced with Android O is the Autofill Framework. You know how browsers can remember your full name, email address, home address, and other details and automatically fill in a registration form with that data? Well, the same capability is coming to Android apps via the Autofill Framework. An app can also register itself as an Autofill Service. For example, if you made a social media app, you can let other apps use the user's account data from your app to help users fill in their forms.

Account data

Speaking of account data, with Android O, Google has removed the ability for developers to get the user's account data using the GET_ACCOUNTS permission, forcing developers to use the account chooser dialog instead. So with Android O, developers can no longer automatically fill in a text field with the user's email address and name, and have to let users pick accounts on their own. And it's not just form filling that gets reworked. In an effort to improve battery life and phone performance, Android O has added a number of limitations to background processes.
For example, on Android O, apps running in the background (that is, apps that don't have any of their interface visible to users) will not be able to get the user's location data as frequently as before. Also, apps in the background can no longer create and use background processes. Do keep in mind that some of those limitations will impact any application running on Android O, not just apps that were built against the O version of the SDK. So if you have an app that relies on background processing, you may want to check that it still works fine on Android O.

App icons

Let's talk about something more visual: app icons. You know how manufacturers add custom skins to their phones to differentiate their products from competitors? Well, some time ago they also began changing the shape of all app icons to fit the overall UI of their phones, and this broke some carefully designed icons. Fortunately, with the Adaptive Icon feature introduced in Android O, developers will be able to design an icon that can adjust to a variety of shapes.

We've covered a lot, but there are still tons of other features added to Android O that we haven't discussed, including multi-display support, a new native audio API, keyboard navigation, new APIs to manage WebView, new Java 8 APIs, and more. Do check out the official documentation for those. That being said, we're still missing the most important thing: what is going to be the full name for Android O? I can only think of Oreo at the moment. What about you?

About the author

Raka Mahesa is a game developer at Chocoarts (chocoarts.com), who is interested in digital technology in general. Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

Swift in 2016

Owen Roberts
16 Mar 2016
4 min read
It's only been two years since Swift was first released to the public, and it's amazing how quickly it has been adopted by iOS developers. It is seen as a great jumping-on point for many people and a perfect alternative to Objective-C, with some of the best modern language features, like tuples and generics, built in; being open source is the icing on the cake for tinker-happy devs looking to make the language their own. Swift is in an interesting position, though. Despite being one of the fastest-growing languages right now, do you know how many apps made by Apple actually use it in iOS 9.2? Only one: Calculator. It's not a huge surprise when you think about it. The language is new and constantly evolving, and we can safely assume that Calculator's use of Swift is a way to test the water while the features and workings of the language settle down. Maybe in the next two to three years Apple will have finally moved to a pure Swift world, but other developers? They're really jumping into the language. IBM, for example, uses Swift for all its iOS apps. What does this mean for you? It means that, as a developer, you have the ability to help shape a young language, an opportunity that rarely comes around. So here are a few reasons you should take the plunge and get deeper into Swift in 2016; and if you haven't started yet, then there's no better time!

Swift 3 is coming

What better time to get deeper into the language than when it's about to add a host of great new features? Swift 3.0 is currently scheduled to launch around the tail end of 2016, and Apple aren't keeping what they want to include close to their chest. The biggest additions look to be stabilizing the ABI, refining the language with added resilience to change, and further increasing portability. All these changes have been on the wish lists of Swift devs for ages, and now that we're finally going to get them, there are sure to be more professional projects made purely in Swift. 3.0 looks to be the edition of Swift that you can use for your customers without worry, so if you haven't gotten into the language yet, this is the version you should be prepping for!

It's no longer an iOS-only language

Probably the biggest change to happen to Swift since it became open source is that the language is now officially available on Ubuntu, while dedicated fans are also working on an Android port of all things. What does this mean for you as a developer? Well, the number of platforms your apps can potentially be deployed on has grown, and one of the main complaints about Swift, that it's an iOS-only language, is rendered moot.

It's getting easier to learn and use

In the last two years we've seen a variety of tools and package managers appear for those looking to get more out of Swift. If you're already using Swift, you're most likely using Xcode to write apps. However, if you're looking to try something new or just don't like Xcode, there's now a host of options for you. Testing frameworks like Quick are starting to appear on the market, and alternatives such as AppCode look to build on the feedback the community gives to Xcode and fill in the gaps. Suggestions as you type and decent project monitoring are becoming more commonplace with these new environments, so why not try them out and see which one suits your style of development?
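As a quick illustration of the tuples and generics mentioned at the start of this piece (this snippet is not from the original post, and the minMax function name is just an example), here is a small generic Swift function that returns its result as a labelled tuple:

    func minMax<T: Comparable>(_ values: [T]) -> (min: T, max: T)? {
        // Return nil for an empty array, otherwise the smallest and largest element
        guard var currentMin = values.first else { return nil }
        var currentMax = currentMin
        for value in values.dropFirst() {
            if value < currentMin { currentMin = value }
            if value > currentMax { currentMax = value }
        }
        return (currentMin, currentMax)
    }

    if let bounds = minMax([3, 7, 1, 9]) {
        print("min is \(bounds.min), max is \(bounds.max)")   // min is 1, max is 9
    }

The same function works unchanged for strings, dates, or any other Comparable type, which is exactly the kind of expressiveness that has drawn developers to the language.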
The Swift job market is expanding

Last year the Swift job market expanded by an incredible 600%, and that was in its first year alone. With Apple giving Swift its full support and the community having grown so quickly, companies are beginning to take notice. Many companies who produce iOS apps are looking for the benefits that Swift offers over Objective-C, and having the language as part of your skillset is something that is beginning to set iOS developers apart from one another. With everything happening with Swift this year, it looks to be one of the best times to jump on board or dig deeper into the language. If you're looking to start building your Swift skills, be sure to check out our iOS tech page; it has all our most popular iOS books for you to explore, along with the list of upcoming titles for you to preorder, Swift included.

Progressive Web AMPs: Combining Progressive Web Apps and AMP

Sugandha Lahoti
14 Jun 2018
8 min read
Modern web development is getting harder. Users are looking for responsive, reliable browsing; they want faster results and richer experiences. In addition, modern apps need to be designed to support a large number of ecosystems, from the mobile web and desktop web to native iOS, native Android, Instant Articles, and more. Every new technology that launches has its own USP, and the need of the day is to combine the features of the various popular mobile technologies on the market and reap their benefits as a combination.

Acknowledging the standalones

A study by Google found that "53% of mobile site visits are abandoned if pages take longer than 3 seconds to load." This calls for making page loads faster and effortless. A cure for this illness comes in the form of AMP, or Accelerated Mobile Pages, the brainchild of Google and Twitter. AMP pages are blazingly fast web pages meant purely for readability and speed. Essentially they are HTML, most of CSS, but no JavaScript, so heavy-duty things such as images are not loaded until they are scrolled into view. With AMP, links are pre-rendered before you click on them. This is made possible by the AMP caching infrastructure, which automatically caches content and serves it from the cache, which is why it feels instant. Because developers almost never write JavaScript, it leads to a cheap, yet fairly interactive, deployment model. However, AMP pages are useful only for a narrow range of content and have limited functionality.

Users, on the other hand, are also looking for reliability and engagement. This called for the development of what is known as Progressive Web Apps. Proposed by Google in 2015, PWAs combine the best of mobile and web applications to offer users an enriching experience. Think of a Progressive Web App as a website that acts and feels like a complete app. Once the user starts exploring the app within the browser, it progressively becomes smarter and faster, and the user experience becomes richer. Application shell architecture and service workers are the two core drivers that enable a PWA to offer speed and functionality. Key benefits that PWAs offer over traditional mobile sites include push notifications, a highly responsive UI, access to hardware such as the camera and microphone, and low data usage, to name a few.

The concoction: PWA + AMP

AMP pages are fast and easy to deploy. PWAs are engaging and reliable. AMP is effortless, more retentive, and instant. PWAs support dynamic content, push notifications, and web manifests. AMP works on user acquisition; PWAs enhance the user experience. They work well on different levels, but users want to start quick and stay quick: they want the first hop to be blazingly fast, and the pages after that to be rich, reliable, and engaging. This called for combining the features of both into one, and this is how Progressive Web AMPs were born. PWAMP, as developers call it, combines the capabilities of the native app ecosystem with the reach of the mobile web. Let us look at how exactly it works.

The best of both worlds: reaping the benefits of both

AMP falls short when you have dynamic content: the lack of JavaScript means dynamic functionality such as payments or push notifications is unavailable. A PWA, on the other hand, can never be as fast as an AMP page on the first click.
Progressive Web AMPs combine the best features of both by making the first click super fast and then rendering subsequent PWA pages and content. AMP opens a web page in the blink of an eye, with zero lag, and the subsequent swift transition to the PWA delivers dynamic functionality: the experience starts fast and builds up as users browse further. This merger can be achieved in three different ways.

AMP as PWA: AMP pages in combination with PWA features

This involves enabling PWA features in AMP pages. The user clicks on a link, it boots up fast, and an AMP page loads from the AMP cache. On clicking subsequent links, the user moves away from the AMP cache to the site's own domain (origin). The website continues using the AMP library, but because it is now served from the origin, service workers become active, making it possible to prompt users (via a web manifest) to install a PWA version of the website for a progressive experience.

AMP to PWA: AMP pages used for a smooth transition to PWA features

In a PWA, the service worker and app shell kick in only after the second step, so AMP can be a perfect entry point for your app. While the user discovers content quickly through AMP pages, the PWA's service worker installs in the background, and on subsequent clicks the user is instantly upgraded to the PWA, which can add push notifications, reminders, web manifests, and so on. So the next click is also going to be instant.

AMP in PWA: AMP as a data source for PWA

AMP pages are easy and safe to embed. As they are self-contained units, they are easily embeddable in websites, so they can be used as a data source for PWAs. This uses Shadow AMP, an AMP library that can be introduced in your PWA and loads in the top-level page. It can amplify the portions of the content the developer chooses and connect to a whole set of documents to render them. Because the AMP library is compiled and loaded only once for the entire PWA, it reduces backend implementation work and client complexity.

How they are used in real-world scenarios

Shopping

PWAMP offers a highly engaging experience to shoppers. Because AMP pages are given prominent placement in Google search results, AMP attracts customers to your site through faster discovery, and the PWA keeps them there by providing a rich, immersive, app-like shopping experience that keeps shoppers engaged. Lancôme, the L'Oréal Paris cosmetics brand, is soon combining AMP with its existing PWA. The PWA had already led to a 17% year-over-year increase in mobile sales; with the addition of AMP, the brand aims to build lightweight mobile pages that load as fast as possible on smartphones, making the site faster and more engaging.

Travel

PWAMP features allow users to browse through a list of hotels that loads instantly on the first click. The customer can then book a hotel of their choice on a subsequent click, which upgrades them to the PWA experience. Wego is a Singapore-based travel service. Its PWAMP achieves a load time of 1.6 seconds for new users and 1 second for returning customers, which has helped increase site visits by 26%, reduce bounce rates by 20%, and increase conversions by 95% since its launch.

News and media

Progressive Web AMPs are also highly useful in news apps. As the user engages with content using AMP, the PWA downloads in the background, creating frictionless, uninterrupted reading.
The Washington Post has built one such app, where users experience the Progressive Web App when reading an AMP article and clicking through to the PWA link when it appears in the menu. In addition, the PWA icon can be added to a user's home screen through the phone's browser. All the above examples show how the combination proves to always be fast, no matter what. Progressive Web AMPs are progressively enhanced with just one backend, the AMP, to rule them all, which means that deploy targets are reduced considerably: all ecosystems, namely web, Android, and iOS, are supported with just thin layers of extra code. This makes them highly beneficial when engineering resources are constrained or infrastructure complexity has to be kept down. Progressive Web AMPs are also particularly useful when a site has a lot of static content on individual pages, as with travel, media, and news sites. All of this shows that PWAMP has the power to provide a full mobile web experience through an artful and strategic combination of the AMP and PWA technologies. To learn how to build your own Progressive Web AMPs, visit the official developer's website.

Related reading:
  • Top frameworks for building your Progressive Web Apps (PWA)
  • 5 reasons why your next app should be a PWA (progressive web app)
  • Build powerful progressive web apps with Firebase

Apple USB Restricted Mode: Here's Everything You Need to Know

Amarabha Banerjee
15 Jun 2018
4 min read
You must have heard about the incident in which the FBI wanted to unlock the iPhone of a mass shooting suspect (one of the attackers in the San Bernardino shooting in 2015). The feds could not unlock the phone, as Apple didn't budge from its stand of protecting user data; a few days later, police said they had found a private agency to open the phone. The seed of that feud between the feds and Apple has now grown into a fully grown tree. This month, Apple announced a new security feature called USB Restricted Mode, which disables the device's Lightning port data connection after the phone has been locked for one hour. Quite expectedly, law enforcement agencies are not at ease with this development. The feature was first introduced in an iOS 11.3 release and then retracted in the next release, but Apple now plans to include it in the upcoming iOS 12 beta. The reason, as stated by Apple, is to protect user data from third-party hackers and malware that have the potential to access iPhone data remotely.

You must be wondering to what extent these threats are genuine, and whether this will mean locking yourself out of your own phone with no way back in. The answer is multilayered. Firstly, if you are not an avid supporter of data privacy and feel you have nothing to hide, then this move might just annoy you for a while. You might worry about the times when your phone is locked and you suddenly forget your passkey; pretty simple, write it down somewhere safe and remember where you have kept it. But if, like me, you keep seeing news of user data being hacked, and that worries you, if you see users being profiled by different companies for everything from selling products to shaping your opinion about politics and other aspects of your life, then this news might make you a bit more comfortable about your next iOS update.

Private agencies coming up with ways to open locked iPhones worried Apple. Companies like Cellebrite and Grayshift are selling devices that can hack into a locked Apple device (iPhone or iPad) through the Lightning port, and the apparent price of one such device is around 15,000 USD. What prompted Apple to introduce this security feature was that government agencies were buying these devices on a regular basis to hack into phones. The threat was real, and the only way to address the fears of over 700 million iPhone users seemed to be introducing USB Restricted Mode.

The war, however, is just beginning. Third-party companies are already claiming, as yet unconfirmed, that they have devised a way to overcome this new security feature. Apple is sure to take notice and press its developers to stay ahead in this cat-and-mouse game. The move has not gone down well with law enforcement agencies either; they see it as an attempt by Apple to create more hurdles to preventing serious and heinous crimes such as paedophilia. Their side of the argument is that with the one-hour timer after the user locks their phone, it becomes much harder to indict the guilty, who now have more room to escape. What do you think this means? Does this give you more faith in your Apple product, and will it really compel you to buy that $1200 iPhone with the confidence that your banking data, personal messages, pictures, and other sensitive data are safe in Apple's hands?
Or will it give the perpetrators of crime more confidence that their activities are now protected not just by a passkey, but by an hour of time after they lock the phone, after which it becomes a black box? Whatever your thoughts, the war between hackers and Apple is on. If you belong to either of these communities, these are exciting times. If you are one of the 700 million Apple users, you can feel a bit more secure after the iOS 12 update rolls out.

Related reading:
  • Apple changes app store guidelines on cryptocurrency mining
  • Apple introduces macOS Mojave with UX enhancements like voice memos, redesigned App Store
  • Apple releases iOS 11.4 update with features including AirPlay 2, and HomePod among others

Shift to Swift in 2017

Shawn Major
27 Jan 2017
3 min read
It's a great time to be a Swift developer, because this modern programming language has a lot of momentum and community support behind it and a big future ahead of it. Swift became a real contender when it went open source in December 2015, giving developers the power to build their own tools and port it into the environments in which they work. The release of Swift 3 in September 2016 really shook things up by enabling broad-scale adoption across multiple platforms, including portability to Linux/x86, Raspberry Pi, and Android. Swift 3 is the "spring cleaning" release that, while not backwards compatible, has resulted in a massively cleaner language and ensured sound and consistent language fundamentals that will carry across to future releases. If you're a developer using Swift, the best thing you can do is get on board with Swift 3, as the next release promises to deliver stability from 3.0 onwards. Swift 4 is expected to be released in late 2017, with the goals of providing source stability for Swift 3 code and ABI stability for the Swift standard library. Despite the shake-up that came with the new release, developers are still enthusiastic about Swift: it was one of the "most loved" programming languages in Stack Overflow's 2015 and 2016 Developer Surveys. Swift was also one of the top three trending technologies of 2016, as it has been stealing market share from Objective-C. The keen interest developers have in Swift is reflected by the 35,000+ stars it has amassed on GitHub and the impressive amount of ongoing collaboration between its core team and the wider community. Rumour has it that Google is considering making Swift a "first-class" language, and that Facebook and Uber are looking to make Swift more central to their operations. Lyft's migration of its iOS app to Swift in 2015 shows that the lightness, leanness, and maintainability of the code are worth it, and services like the web server and toolkit Perfect are proof that server-side Swift is ready.

People are starting to do some cool and surprising things with Swift, including:
  • Shaping the language itself. Apple has made a repository on GitHub called swift-evolution that houses proposals for enhancements and changes to the Swift language.
  • Bringing Swift 3 to as many ARM-based systems as possible. For example, you can get Swift 3 for all the Raspberry Pi boards, or you can program a robot in Swift on a BeagleBone.
  • Adopting Swift on the server. IBM has adopted Swift as the core language for its cloud platform, which opens the door to radically simpler app development: developers will be able to build the next generation of apps in native Swift from end to end, deploy applications with both server and client components, and build microservice APIs on the cloud.
  • The Swift Sandbox, which lets developers of any level of experience actively build server-based code. Since launching, it has had over 2 million code runs from over 100 countries.

We think there are going to be a lot of exciting opportunities for developers to work with Swift in the near future. The iOS Developer Skill Plan on Mapt is perfect for diving into Swift, and we have plenty of Swift 3 books and videos if you have more specific projects in mind. The large community of developers using iOS/OS X and making libraries, combined with the growing popularity of Swift as a general-purpose language, makes jumping into Swift a worthwhile venture. Interested in what other developers have been up to across the tech landscape?
Find out in our free Skill Up: Developer Talk report on the state of software in 2017.
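As a small, hedged illustration of the Swift 3 "spring cleaning" described above (not from the original article), here is how a couple of common standard-library calls read after the rename, compared with their Swift 2 spellings:

    let names = ["Chris", "Anna", "Brian"]

    // Swift 3 renames many APIs so call sites read like natural phrases
    let sortedNames = names.sorted()                    // Swift 2: names.sort()
    let brianIndex = sortedNames.index(of: "Brian")     // Swift 2: sortedNames.indexOf("Brian")
    print(sortedNames, brianIndex as Any)

Changes like these are why Swift 3 broke source compatibility, and also why code written against it reads noticeably more consistently than earlier releases.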

An ethical mobile operating system, /e/ - Trick or Treat?

Prasad Ramesh
01 Nov 2018
2 min read
Previously known as eelo, /e/ is an 'ethical' operating system for mobile phones. Leading the project is Gaël Duval, who is also the creator of Mandrake Linux. Is it a new OS? Well, not exactly; it is a forked version of LineageOS stripped of Google apps, with a focus on privacy, and pitched as an ethical OS.

What's so good about /e/?

The good thing here is that this is a unique effort at an ethical OS, something different from the data collection of Android or the expensive devices from Apple. With a functional ROM that includes all the essentials, Duval seems to be pretty serious about this. An OS that respects user privacy does sound like a very nice thing. However, as people on Reddit have pointed out, this is what Cyanogen was in the beginning. The ethical OS /e/ is not actually a new OS built from scratch; who has the time or funding for that today? You get /e/ services instead of Google services, but, ummm, can you trust them?

Is /e/ a trick… or a treat?

We have mixed feelings about this one. It is a commendable effort and the idea is right, but with the recent privacy debates everywhere, trusting a new OS is tricky. We'll reserve judgement till it is out of beta and has a name that you can Google search for.

Solving Day 7 of Advent of Code using Swift

Nicky Gerritsen
07 Mar 2016
7 min read
Eric Wastl created Advent of Code, a website that published a new programming exercise each day from the first of December until Christmas. I came across the site somewhere in the first few days of December, and as I have participated in the ACM ICPC in the past, I expected I should be able to solve these problems. I decided it would be a good idea to use Swift to write the solutions. While solving the problems, I came across one problem that worked out really well in Swift, and I'd like to explain it in this post.

Introduction

After reading the problem, I immediately noticed some interesting points:

  • We can model the input as a graph, where each wire is a vertex and each connection in the circuit connects some vertices to another vertex. For example, x AND y -> z connects both x and y to vertex z.
  • The example input is ordered in such a way that you can just iterate over the lines from top to bottom and apply the changes. However, the real input does not have this ordering. To get the real input in the correct order, note that the input is basically a DAG (or at least it should be, otherwise it cannot be solved). This means we can use topological sorting to sort the vertices of the graph in the order we should walk them.
  • Although in the example input it seems that AND, OR, NOT, LSHIFT and RSHIFT always operate on a wire, this is not the case. They can also operate on a constant value.

Implementation

Note that I replaced some guard lines with forced unwrapping here. The source code linked at the end contains the original code. First off, we define a Source, which is an element of an operation, i.e. in x AND y both x and y are a Source:

    enum Source {
        case Vertex(String)
        case Constant(UInt16)

        func valueForGraph(graph: Graph) -> UInt16 {
            switch self {
            case let .Vertex(vertex): return graph.vertices[vertex]!.value!
            case let .Constant(val): return val
            }
        }

        var vertex: String? {
            switch self {
            case let .Vertex(v): return v
            case .Constant(_): return nil
            }
        }

        static func parse(s: String) -> Source {
            if let i = UInt16(s) {
                return .Constant(i)
            } else {
                return .Vertex(s)
            }
        }
    }

A Source is either a Vertex (i.e. a wire), in which case it has a corresponding string identifier, or a Constant, in which case it contains some value. We define a function that returns the value for this Source given a Graph (more on this later). For a constant source the graph does not matter, but for a wire we look up the value in the graph. The second member is used to extract the identifier of the source's vertex, if any. Finally, we also have a function that helps us parse a string or integer into a Source. Next up we have an Operation enumeration, which holds all information about one line of input:

    enum Operation {
        case Assign(Source)
        case And(Source, Source)
        case Or(Source, Source)
        case Not(Source)
        case LeftShift(Source, UInt16)
        case RightShift(Source, UInt16)

        func applytoGraph(graph: Graph, vertex: String) {
            let v = graph.vertices[vertex]!
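            // Dispatch on the operation type and combine the already-computed source values;
            // the topological order used later guarantees the sources have values at this point.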
            switch self {
            case let .Assign(source1):
                v.value = source1.valueForGraph(graph)
            case let .And(source1, source2):
                v.value = source1.valueForGraph(graph) & source2.valueForGraph(graph)
            /* etc for the other cases */
            case let .RightShift(source1, bits):
                v.value = source1.valueForGraph(graph) >> bits
            }
        }

        static func parseOperation(input: String) -> Operation {
            if let and = input.rangeOfString(" AND ") {
                let before = input.substringToIndex(and.startIndex)
                let after = input.substringFromIndex(and.endIndex)
                return .And(Source.parse(before), Source.parse(after))
            }
            /* etc for the other options */
        }

        var sourceVertices: [String] {
            /* code that switches on self and extracts the vertex from each source */
        }
    }

The Operation enum has a static function that allows us to parse a line from the input into an Operation, and a function that allows us to apply it to a graph. Furthermore, it has a computed variable that returns all source vertices for the operation. Now, a Vertex is an easy class:

    class Vertex {
        var idx: String
        var outgoing: Set<String>
        var incoming: Set<String>
        var operations: [Operation]
        var value: UInt16?

        init(idx: String) {
            self.idx = idx
            self.outgoing = []
            self.incoming = []
            self.operations = []
        }
    }

It has an ID and keeps track of a set of incoming and outgoing edges (we need both for topological sorting). Furthermore, it has a value (which is initially not set) and a list of operations that have this vertex as their target. Because we want to store vertices in a set, we need to make it conform to Equatable and Hashable. Since we have a unique string identifier for each vertex, this is easy:

    extension Vertex: Equatable {}

    func ==(lhs: Vertex, rhs: Vertex) -> Bool {
        return lhs.idx == rhs.idx
    }

    extension Vertex: Hashable {
        var hashValue: Int {
            return self.idx.hashValue
        }
    }

The last structure we need is a graph, which basically holds a list of all vertices:

    class Graph {
        var vertices: [String: Vertex]

        init() {
            self.vertices = [:]
        }

        func addVertexIfNotExists(idx: String) {
            if let _ = self.vertices[idx] { return }
            self.vertices[idx] = Vertex(idx: idx)
        }

        func addOperation(operation: Operation, target: String) {
            // Add an operation for a given target to this graph
            self.addVertexIfNotExists(target)
            self.vertices[target]?.operations.append(operation)
            let sourceVertices = operation.sourceVertices
            for v in sourceVertices {
                self.addVertexIfNotExists(v)
                self.vertices[target]?.incoming.insert(v)
                self.vertices[v]?.outgoing.insert(target)
            }
        }
    }

We define a helper function that adds a vertex if it has not already been added. We then use this function to define a function that adds an operation to the graph, together with all required vertices and edges. Now we need to be able to topologically sort the vertices of the graph, which can be done using Kahn's algorithm. This can be written in Swift almost exactly following the pseudo-code:

    extension Graph {
        func topologicalOrder() -> [Vertex] {
            var L: [Vertex] = []
            var S: Set<Vertex> = Set(vertices.values.filter { $0.incoming.count == 0 })
            while S.count > 0 {
                guard let n = S.first else { fatalError("No more nodes in S") }
                S.remove(n)
                L.append(n)
                for midx in n.outgoing {
                    guard let m = self.vertices[midx] else { fatalError("Can not find vertex") }
                    n.outgoing.remove(m.idx)
                    m.incoming.remove(n.idx)
                    if m.incoming.count == 0 {
                        S.insert(m)
                    }
                }
            }
            return L
        }
    }

Now we are basically done, as we can write a function that calculates the value of a given wire in a graph:

    func getFinalValueInGraph(graph: Graph, vertex: String) -> UInt16?
    {
        let topo = graph.topologicalOrder()
        for v in topo {
            for op in v.operations {
                op.applytoGraph(graph, vertex: v.idx)
            }
        }
        return graph.vertices[vertex]?.value
    }

Conclusions

This post (hopefully) gave you some insight into how I solved one of the bigger Advent of Code problems. As you can see, Swift has some really nice features that help in this case, like enums with associated types and functional methods like filter. If you like these kinds of problems, I suggest you go to the Advent of Code website and start solving them; there are quite a few that are really easy to get started with. The complete code for this blog post can be found on my GitHub account.

About the author

Nicky Gerritsen is currently a Software Architect at StreamOne, a small Dutch company specialized in video streaming and storage. In his spare time he loves to code on Swift projects and learn new things about Swift. He can be found on Twitter @nickygerritsen and on GitHub: https://github.com/nickygerritsen/.

What’s the difference between cross platform and native mobile development?

Amarabha Banerjee
27 Jun 2018
4 min read
Mobile has become an increasingly important part of many modern businesses' tech strategies. In everything from eCommerce to financial services, mobile applications aren't simply a 'nice to have'; they're essential, and customers expect them. The most difficult question today isn't 'do we need a mobile app?' Instead, it's 'which type of mobile app should we build: native or cross-platform?' There are arguments to be made for both cross-platform mobile development and native app development, and developers who have worked on either kind of project will probably have an opinion on the right way to go. Like many things in tech, however, the cross-platform vs native debate is really a question of which one is right for you. From both a business and a capability perspective, you need to understand what you want to achieve and when. Let's take a look at the difference between a cross-platform framework and a native development platform. You should then feel comfortable enough to make the right decision about which mobile approach is right for you.

Cross-platform development

A cross-platform application runs across all mobile operating systems without any extra coding. By all mobile operating systems, I mean iOS and Android (Windows phones are probably on their way out). A cross-platform framework provides all the tools to help you create cross-platform apps easily. Some of the most popular cross-platform frameworks include:
  • Xamarin
  • Corona SDK
  • Appcelerator Titanium
  • PhoneGap

Hybrid mobile apps

One specific form of cross-platform mobile application is the hybrid app. With hybrid mobile apps, the graphical user interface (GUI) is developed using HTML5. These are then wrapped in native WebView containers and deployed on iOS and Android devices.

Native mobile development

A native app is specifically designed for one particular operating system. This means it will work better in that specific environment than one created for multiple platforms. One of the latest native Android development frameworks is Google Flutter; for iOS, it's Xcode.

Native mobile development vs cross-platform development

If you're a mobile developer, which is better? Let's compare cross-platform development with native development:
  • Cross-platform development is more cost effective. This is simply because you can reuse around 80% of your code, as you're essentially building one application. The cost of native development is roughly double that of cross-platform development, and the cost of Android development is roughly 30% more than iOS development.
  • Cross-platform development takes less time. Although some coding has to be done natively, the time taken to develop one app is, obviously, less than the time taken to develop two.
  • Native apps can use all system resources in a way no other type of app can. They are able to use the maximum computing power provided by the GPU and CPU, which means that load times are often fast.
  • Cross-platform apps have restricted access to system resources. Their access is dependent on framework plugins and permissions.
  • Hybrid apps usually take more time to load, because smartphone GPUs are generally less powerful than those of other machines, so unpacking an HTML5 UI takes more time on a mobile device. This is the same reason Facebook shifted its mobile apps from hybrid to native, which, according to Facebook, improved app load time and the loading of the newsfeed and images.

The most common challenge with cross-platform mobile development is balancing the requirements of iOS and Android UX design.
iOS is quite strict about their UX and UI design formats. That increases the chances of rejection from the app store and causes more recurring cost. A critical aspect of Native mobile apps is that if they are designed properly and properly synchronized with the OS, they get regular software updates. That can be quite a difficult task for cross-platform apps. Finally, the most important consideration that should determine your choice are your (or the customer’s) requirements. If you want to build a brand around your app, like a business or an institution, or your app is going to need a lot of GPU support like a game, then native is the way to go. But if your requirement is simply to create awareness and spread information about an existing brand or business on a limited budget then cross-platform is probably the best route to go down. How to integrate Firebase with NativeScript for cross-platform app development Xamarin Forms 3, the popular cross-platform UI Toolkit, is here! A cross-platform solution with Xamarin.Forms and MVVM architecture  

There’s another player in the advertising game: augmented reality

Guest Contributor
10 Jul 2018
7 min read
Customer purchases do not necessarily depend on the need for a product; they often depend on how well the product has been advertised. Most advertising companies target customer emotions and experiences to sell their product. However, with increasing online awareness, intrusive ads and an oversaturated advertising space, customers rely more on online reviews before purchasing any product. Companies have to think out of the box to get customers engaged with their product. Augmented Reality can help companies win their audience back by creating an interactive buying experience on their device, one that converts casual browsing into a successful purchase.

It is estimated that around 4 billion people in the world are actively engaged on the internet. That means over half of the world's population is active online, so having an online platform is clearly beneficial, but it is also a large audience that needs to be engaged in the right way, because being online is now the norm. For now, AR is still fairly new in the advertising world, but it's expected that by 2020, AR revenue will outweigh VR (Virtual Reality) revenue by about $120 billion, and it's no surprise this is the case.

Ways AR can benefit businesses

There are many reasons why AR could be beneficial to a business:

Creates an emotional connection. AR provides a platform for advertising companies to engage with their audiences in a unique way, using an immersive advertisement to create a connection that brings the consumer's emotions into play. A memorable experience encourages purchases because, psychologically, it is an experience like no other and one they're unlikely to get elsewhere. It can also help create exposure: because of the excitement users had, they'll encourage others to try it too.

Saves money. It's remarkable that such advanced technology can be cheaper than traditional methods of advertising. Print advertising can still be extremely expensive in many cases, given that it is a high-volume game and given the cost of placing an ad on the front page of a publication. AR ads vary in cost depending on quality, but even some of the simplest forms of AR advertising can be affordable.

Increases sales. Not only is AR a useful tool for promoting goods and services, it also provides an opportunity to increase conversions. One issue many customers have is whether the goods they are purchasing are right for them. AR removes this barrier and enables them to 'try out' the product before they purchase, making it more likely that the customer will buy.

Examples of AR advertising

Early adopters have already taken up the technology for showcasing their services and products. It's not mainstream yet, but as the figures above suggest, it won't be long before AR becomes widespread. Here are a few examples of companies using AR technology in their marketing strategy.

IKEA's virtual furniture. IKEA is the famous Swedish home retailer that adopted the technology back in 2013 for its iOS app. The idea allowed potential purchasers to scan the catalogue with their mobile phone and then browse products through the app. When they selected something they thought might be suitable for their home, they could place the virtual furniture in their living space and view it through their phone or tablet. This way, customers could judge whether it was the right product or not.

Pepsi Max's Unbelievable campaign. Pepsi didn't use the technology to promote its product directly; instead, it used AR to create a buzz for the brand. Pepsi installed a screen in an everyday bus shelter in London and used it to layer virtual images over a live camera feed, so people waiting at the shelter could interact with the scene. The video of the stunt currently has over 8 million views on YouTube and has been widely shared across social networks.

Lacoste's virtual trial room. Lacoste launched an app that used marker-based AR technology: users stood on a marker in the store and could try on different LCST-branded trainers. As mentioned before, this is a great way for users to try on apparel before deciding whether to purchase it.

Challenges businesses face when integrating AR into their advertising plan

Although AR is an exciting prospect for businesses and many positives can be taken from implementing it into advertising plans, it has its fair share of challenges. Let's take a brief look at what these could be.

A mobile application is required. AR needs a specific type of application in order to work. For consumers to immerse themselves in an AR world, they first need to download the relevant app to their mobile, which means customers end up downloading a different application for each company's AR experience. This is potentially one of the reasons why some companies have chosen not to invest in AR yet. Solutions like augmented reality digital placement (ARDP) are in the process of resolving this problem: ARDP uses media-rich banners to bring AR to life on a consumer's handheld device without the need to download multiple apps. ARDP requires both AR and app developers to come together to make AR more accessible to users.

Poor hardware specifications. As with video and console games, the quality of graphics in an AR app greatly affects the user experience. Think of the power that console systems can output: if a user comes across a game with poor graphics despite knowing the console's capabilities, they will be less likely to play it. For AR to work well, the handheld device needs enough hardware power to produce convincing graphics. Phone makers such as Apple and Samsung have steadily improved this with each new release, so in the near future we should expect modern smartphones to produce top-of-the-range AR.

Complexity in the development phase. Creating an AR advertisement requires a high level of expertise. Unless you already have AR developers on your in-house team, the development stage of the process may prove difficult for your business. There are AR software development toolkits available that have made the process easier, but it still requires a good level of coding knowledge. If the resources aren't available in-house, you can either seek help from app development companies with AR software engineering experience or outsource the work through websites such as Elance, Upwork, and Guru. In short, ad creation with AR requires solid development skills, and the growing awareness of the benefits of AR advertising should be seen as a rising opportunity for developers everywhere.

We can expect demand for AR developers to increase, as those with expertise in the technology will be high on the agenda for many advertising companies and agencies looking to take advantage of the market and engage with their customers differently. For projects that involve AR development, augmented reality developers should be at the forefront of business creative teams, ensuring that the ideas that are created can be implemented correctly.

About Jamie Costello

Jamie Costello is a student and an aspiring freelance writer based in Manchester. His interests are to write about a variety of topics, but his biggest passion is technology. He says, "When I'm not writing or studying, I enjoy swimming and playing games consoles."

Read next:
- Adobe glides into Augmented Reality with Adobe Aero
- Understanding the hype behind Magic Leap's New Augmented Reality Headsets
- Apple's new ARKit 2.0 brings persistent AR, shared augmented reality experiences and more


iOS 9: Up to Speed

Samrat Shaw
20 May 2016
5 min read
iOS 9 is the biggest iOS release to date. The new OS introduces intricate new features and refines existing ones. The biggest focus is on intelligence and proactivity, allowing iOS devices to learn user habits and act on that information. While it isn't a groundbreaking change like iOS 7, there is a lot of new functionality for developers to learn. Along with iOS 9 and Xcode 7, Apple also announced major changes to the Swift language (Swift 2.0) and plans to open source it. In this post, I will discuss some of my favorite changes and additions in iOS 9.

1 List of new features

Let's examine the new features.

1.1 Search Extensibility

Spotlight search in iOS now includes searching within third-party apps, which allows you to deep link into your app from Search in iOS 9. Your app can supply relevant information that users can then navigate to directly: when a user taps on one of the search results, the app is opened and the user is taken to the location where the search keyword is present. The new enhancements to the Search API include the NSUserActivity APIs, the Core Spotlight APIs, and web markup.

1.2 App Thinning

App thinning optimizes the install size of an app to use the least amount of storage space while retaining critical functionality, so users only download those parts of the binary that are relevant to them. The app's resources are now split, so that a user who installs an app on an iPhone 6 does not download iPad code or other assets that are only there to make the app universal. App thinning has three main aspects, namely app slicing, on-demand resources, and bitcode. Faster downloads and more space for other apps and content provide a better user experience.

1.3 3D Touch

The iPhone 6s and 6s Plus added a whole new dimension to UI interactions. A user can now press an app's Home screen icon to immediately access functionality provided by that app. Within the app, a user can press views to see previews of additional content and gain accelerated access to features. 3D Touch works by detecting the amount of pressure that you are applying to the screen in order to perform different actions. In addition to the UITouch APIs, Apple has provided two new sets of classes that add 3D Touch functionality to apps: UIPreviewAction and UIApplicationShortcutItem. This unlocks a whole new paradigm of iOS device interaction and will enable a new generation of innovation in upcoming iOS apps.

1.4 App Transport Security (ATS)

With the introduction of App Transport Security, Apple is leading by example to improve the security of its operating system, and it expects developers to adopt App Transport Security in their applications. With App Transport Security enabled, insecure HTTP connections are blocked, so network requests must be made over HTTPS, and App Transport Security requires TLS 1.2 or higher. Developers do have the option to disable ATS, either selectively or as a whole, by adding exceptions to the Info.plist of their applications.

1.5 UIStackView

The newly introduced UIStackView is similar to Android's LinearLayout. Developers embed views in a UIStackView (arranged either horizontally or vertically) without needing to specify Auto Layout constraints; UIKit inserts the constraints at runtime, making life easier for developers, and there is the option to specify the spacing between the subviews. It is important to note that UIStackViews don't scroll; they just act as containers that automatically fit their content.

1.6 SFSafariViewController

With SFSafariViewController, developers can offer nearly all of the benefits of viewing web content inside Safari without forcing users to leave the app. It saves developers a lot of time, since they no longer need to create their own custom browsing experiences. It is also more convenient for users, since they have their passwords pre-filled and their browsing history available without leaving the app, and more. The controller also comes with a built-in Reader mode.

1.7 Multitasking for iPad

Apple has introduced Slide Over, Split View, and Picture in Picture for iPad, allowing certain models to use the much larger screen space for more tasks. From the developer's point of view, this can be supported by using iOS Auto Layout and Size Classes; if the code base already uses these, the app will automatically respond to the new multitasking setup. Starting from Xcode 7, each iOS app template is preconfigured to support Slide Over and Split View.

1.8 The Contacts Framework

Apple has introduced a brand new framework, Contacts, which replaces the function-based AddressBook framework. The Contacts framework provides an object-oriented approach to working with the user's contact information, and it provides an Objective-C API that works well with Swift too. This is a big improvement over the previous method of accessing a user's contacts with the AddressBook framework.

As you can see from this post, there are a lot of exciting new features and capabilities in iOS 9 that developers can tap into, providing new and exciting apps for the millions of Apple users around the world.

About the author

Samrat Shaw is a graduate student (software engineering) at the National University of Singapore and an iOS intern at Massive Infinity.

5 New Features That Will Make Developers Love Android 7

Sam Wood
09 Sep 2016
3 min read
Android Nougat is here, and it's looking pretty tasty. We've been told about the benefits to end users, but what are some of the most exciting features for developers to dive into? We've got five that we think you'll love.

1. Data Saver

If your app is a hungry, hungry data devourer, then you could be losing users as you burn through their allowance of cellular data. Android 7's new Data Saver feature can help with that. It throttles background data usage and signals to foreground apps to use less data. Worried that will make your app less useful? Don't worry: users can 'whitelist' applications so they can keep using data as normal. (A short sketch of how an app can check the Data Saver status follows at the end of this article.)

2. Multi-tasking

It's the big flagship feature of Android 7: the ability to run two apps on the screen at once. As phones keep getting bigger (and more and more people opt for Android tablets over an iPad), having the option to run two apps alongside each other makes a lot more sense. What does this mean for developers? Well, first, you'll want to tweak your app to make sure it's multi-window ready. But what's even more exciting is the potential for drag-and-drop functionality between apps, dragging text and images from one pane to another. Ever miss being able to just drag files to attach them to an email, like on a desktop? With Android N, that's coming to mobile, and devs should consider updating accordingly.

3. Vulkan API

Nougat brings a new option to Android game developers in the form of the Vulkan graphics API. No longer restricted to just OpenGL ES, developers will find that Vulkan provides more direct control over the hardware, which should lead to improved game performance. Vulkan can also be used across OSes, including Windows and SteamOS (Valve is a big backer). By adopting Vulkan, Google has really opened up the possibility for high-performance games to make it onto Android.

4. Just In Time Compiler

Android 7 has added a JIT (Just In Time) compiler, which works to constantly improve the performance of Android apps as they run. The performance of your app will improve, but the device won't consume too much memory. Say goodbye to freezes and non-responsive devices, and hello to faster installation and updates! That means users installing more and more apps, which means more downloads for you.

5. Notification Enhancements

Android 7 changes the way notifications work on the device. Rather than just popping up at the top of the screen, notifications in Nougat offer a direct reply without opening the app, are bundled together with related notifications, and can even be viewed as a 'heads-up' notification displayed while the device is active. These heads-up notifications are also customizable by app developers, so better start getting creative! How will this option affect your app's UX and UI?

There's plenty more...

These are just some of the features of Android 7 we're most excited about; there's plenty more to explore! So dive right into Android development, and start building for Nougat today!
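As promised above, here is a minimal Kotlin sketch of the Data Saver check. It is illustrative only: the helper name shouldDeferLargeSync is my own, and it assumes the app is running on API level 24 (Android 7.0) or higher.

```kotlin
import android.content.Context
import android.net.ConnectivityManager

// Hypothetical helper: decide whether a large transfer should be postponed,
// based on the Data Saver status introduced in Android 7 (API 24).
fun shouldDeferLargeSync(context: Context): Boolean {
    val cm = context.getSystemService(Context.CONNECTIVITY_SERVICE) as ConnectivityManager

    // Data Saver only applies when the active network is metered (e.g. cellular).
    if (!cm.isActiveNetworkMetered) return false

    return when (cm.restrictBackgroundStatus) {
        // Data Saver is off, or the user has whitelisted this app: go ahead.
        ConnectivityManager.RESTRICT_BACKGROUND_STATUS_DISABLED,
        ConnectivityManager.RESTRICT_BACKGROUND_STATUS_WHITELISTED -> false
        // Data Saver is on and the app is not whitelisted: defer the heavy work.
        else -> true
    }
}
```

If this returns true, the polite options are to postpone the transfer or to prompt the user to whitelist the app from the data usage settings.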


Google ARCore is pushing immersive computing forward

Sugandha Lahoti
26 Apr 2018
7 min read
Immersive computing has been touted as a crucial innovation that is going to transform the way we interact with software in the future. But like every trend, there is a set of core technologies at the center helping to drive it forward, and in the context of immersive computing, Google ARCore is one of those technologies. Of course, it's no surprise to see Google somewhere at the heart of one of the most exciting developments in tech. But what is Google ARCore, exactly? And how is it going to help drive immersive computing into the mainstream? First, let's take a look at exactly what immersive computing is. After that, we'll explore how Google ARCore is helping to drive it forward, with some examples of how to put it into practice in motion tracking and light estimation projects.

What is immersive computing?

Immersive computing is a term used to describe applications that provide an immersive experience for the user. This may come in the form of an augmented or virtual reality experience. In order to better understand the spectrum of immersive computing, let's take a look at this diagram:

The Immersive Computing Spectrum

The preceding diagram illustrates how the level of immersion affects the user experience, with the left-hand side of the diagram representing more traditional applications with little or no immersion, and the right representing fully immersive virtual reality applications. For us, we will stay in the middle sweet spot and work on developing augmented reality applications.

Why use Google ARCore for augmented reality?

Augmented reality applications are unique in that they annotate or augment the reality of the user. This is typically done visually, by having the AR app overlay a view of the real world with computer graphics. Google ARCore is designed primarily for providing this type of visual annotation for the user. An example of a demo ARCore application is shown here:

The screenshot is even more impressive when you realize that it was rendered in real time on a mobile device. It isn't the result of painstaking hours in Photoshop or other media effects libraries. What you see in that image is the entire superposition of a virtual object, the lion, into the user's reality. More impressive still is the quality of immersion. Note the details, such as the lighting and shadows on the lion, the shadows on the ground, and the way the object maintains its position in reality even though it isn't really there. Without those visual enhancements, all you would see is a floating lion superimposed on the screen. It is those visual details that provide the immersion.

Google developed ARCore as a way to help developers incorporate those visual enhancements when building AR applications, and it developed ARCore for Android as a way to compete against Apple's ARKit for iOS. The fact that two of the biggest tech giants are vying for position in AR indicates the push to build new and innovative immersive applications. Google ARCore has its origins in Tango, a more advanced AR toolkit that used special sensors built into the device. In order to make AR more accessible and mainstream, Google developed ARCore as an AR toolkit designed for Android devices that are not equipped with any special sensors. Where Tango depended on special sensors, ARCore uses software to try to accomplish the same core enhancements.
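Before looking at what the toolkit actually tracks, it helps to see how little code is needed to stand up an ARCore session. The following Kotlin sketch is illustrative only: it assumes the com.google.ar.core dependency is already on the classpath and omits the ArCoreApk availability check and camera-permission handling that a real app needs.

```kotlin
import android.app.Activity
import com.google.ar.core.Config
import com.google.ar.core.Session

// Minimal sketch: create an ARCore session and switch on the features
// discussed below - horizontal plane finding and ambient light estimation.
fun createArSession(activity: Activity): Session {
    val session = Session(activity)
    val config = Config(session).apply {
        planeFindingMode = Config.PlaneFindingMode.HORIZONTAL
        lightEstimationMode = Config.LightEstimationMode.AMBIENT_INTENSITY
    }
    session.configure(config)
    return session
}
```

With a configured session in hand, the interesting work happens in the per-frame update, which is where the core capabilities described next come into play.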
For ARCore, Google has identified three core areas to address with this toolkit:

- Motion tracking
- Environmental understanding
- Light estimation

In the next three sections, we will go through each of those core areas in more detail and understand how they enhance the user experience.

Motion tracking

Tracking a user's motion, and ultimately their position in 2D and 3D space, is fundamental to any AR application. Google ARCore allows you to track position changes by identifying and tracking visual feature points from the device's camera image. An example of how this works is shown in this figure:

In the figure, we can see how the user's position is tracked in relation to the feature points identified on the real couch. Previously, in order to successfully track motion (position), we needed to pre-register or pre-train our feature points; if you have ever used the Vuforia AR tools, you will be very familiar with having to train images or target markers. Now, ARCore does all this automatically for us, in real time, without any training. However, this tracking technology is very new and still has several limitations.

Environmental understanding

The better an AR application understands the user's reality, or the environment around them, the more successful the immersion. We already saw how Google ARCore uses feature identification to track a user's motion, but tracking motion is only the first part. What we also need is a way to identify physical objects or surfaces in the user's reality. ARCore does this using a technique called meshing. This is what meshing looks like in action:

What we see happening in the preceding image is an AR application that has identified a real-world surface through meshing. The plane is identified by the white dots, and in the background we can see how the user has already placed various virtual objects on the surface. Environmental understanding and meshing are essential for creating the illusion of blended realities: where motion tracking uses identified features to track the user's position, environmental understanding uses meshing to track the virtual objects in the user's reality.

Light estimation

Magicians work to be masters of trickery and visual illusion. They understand that perspective and good lighting are everything in a great illusion, and developing great AR apps is no exception. Take a second and flip back to the scene with the virtual lion. Note the lighting and the detail in the shadows on the lion and the ground. Did you notice that the lion is casting a shadow on the ground, even though it's not really there? That extra level of lighting detail is only made possible by combining the tracking of the user's position with the environmental understanding of the virtual object's position and a way to read light levels. Fortunately, Google ARCore provides us with a way to read or estimate the light in a scene, and we can then use this lighting information to light and shadow virtual AR objects. Here's an image of an ARCore demo app showing subdued lighting on an AR object:

The effects of lighting, or the lack thereof, will become more obvious as we start developing our startup applications.

To summarize, we took a very quick look at what immersive computing and AR are all about. We learned that augmented reality covers the middle ground of the immersive computing spectrum, and that AR is a careful blend of illusions used to trick the user into believing that their reality has been combined with a virtual one.
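To tie those three ideas together, here is a rough Kotlin sketch of a per-frame update. It is illustrative rather than production code: it assumes an already-configured ARCore Session (as in the earlier sketch) and a tap MotionEvent forwarded from the UI, it leaves rendering to whatever engine you use, and the function name onArFrame is my own.

```kotlin
import android.view.MotionEvent
import com.google.ar.core.Anchor
import com.google.ar.core.Frame
import com.google.ar.core.Plane
import com.google.ar.core.Session
import com.google.ar.core.TrackingState

// Sketch of a per-frame update combining ARCore's three core capabilities:
// motion tracking, environmental understanding (planes) and light estimation.
fun onArFrame(session: Session, tap: MotionEvent?): Anchor? {
    // Motion tracking: update() returns the latest camera pose and world state.
    val frame: Frame = session.update()
    if (frame.camera.trackingState != TrackingState.TRACKING) return null

    // Light estimation: a single ambient intensity value you can pass to your
    // shaders so virtual objects match the real-world lighting.
    val ambientIntensity = frame.lightEstimate.pixelIntensity
    // (feed ambientIntensity into your renderer's lighting uniforms)

    // Environmental understanding: hit-test the tap against detected planes
    // and anchor a virtual object where it lands.
    if (tap != null) {
        for (hit in frame.hitTest(tap)) {
            val trackable = hit.trackable
            if (trackable is Plane && trackable.isPoseInPolygon(hit.hitPose)) {
                return hit.createAnchor()   // attach your renderable to this anchor
            }
        }
    }
    return null
}
```

The returned anchor is what keeps a virtual object, like the lion above, locked to the real-world surface as the user moves, while the intensity value lets you dim or brighten the object's shading to match the room.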
After all, Google developed ARCore as a way to provide a better set of tools for constructing those illusions and to keep Android competitive in the AR market. We then went through the core concepts ARCore was designed to address and looked at each of them, motion tracking, environmental understanding, and light estimation, in a little more detail.

This has been taken from Learn ARCore - Fundamentals of Google ARCore. Find it here.

Read more:
- Getting started with building an ARCore application for Android
- Types of Augmented Reality targets