
Tech Guides - Mobile


Top 7 modern Virtual Reality hardware systems

Sugandha Lahoti
20 Apr 2018
7 min read
Since its early inception, virtual reality has offered an escape. Donning a headset can transport you to a brand new world, full of wonderment and excitement. It can let you explore a location too dangerous for human presence, or even just present the real world to you in a new way. And now that we have moved past the era of bulky goggles and clumsy helmets, the hardware is making the aim of unfettered escapism a reality.

In this article, we present a roundup of modern VR hardware systems. Each product is presented with an overview of the device and its price as of February 2018. Use this information to compare systems and find the device that best suits your personal needs.

There has been an explosion of VR hardware in the last three years, ranging from cheaply made housings around a pair of lenses to full headsets with embedded screens creating a 110-degree field of view. Each device offers distinct advantages and use cases, and many have dropped significantly in price over the past 12 months, making them accessible to a wider audience. Following is a brief overview of each device, ranked in terms of price and complexity.

Google Cardboard

Cardboard VR is compatible with a wide range of contemporary smartphones. Google Cardboard's biggest advantages are its low cost, broad hardware support, and portability. As a bonus, it is wireless. Using the phone's gyroscopes, VR applications can track the user in 360 degrees of rotation. While modern phones are very powerful, they are not as powerful as desktop PCs; in exchange, the user is untethered and the system is lightweight.

Cost: $5-20 (plus an iOS or Android smartphone)

Check out this post to Build Virtual Reality Solar System in Unity for Google Cardboard.
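Cardboard-style rotational tracking boils down to reading the phone's motion sensors. As an illustrative aside (our sketch, not from the book), here is a minimal Kotlin example using Android's rotation-vector sensor, which fuses the gyroscope with other sensors; the HeadRotationTracker class name is a hypothetical choice.

import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

// Listens to the rotation-vector sensor and converts each reading into
// yaw/pitch/roll angles, which is all a viewer needs for rotational tracking.
class HeadRotationTracker(context: Context) : SensorEventListener {
    private val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val rotationMatrix = FloatArray(9)
    private val orientation = FloatArray(3)

    fun start() {
        sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR)?.let {
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_GAME)
        }
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        SensorManager.getRotationMatrixFromVector(rotationMatrix, event.values)
        SensorManager.getOrientation(rotationMatrix, orientation)
        val (azimuth, pitch, roll) = orientation
        // Feed azimuth/pitch/roll (in radians) into your VR scene's camera.
    }

    override fun onAccuracyChanged(sensor: Sensor, accuracy: Int) = Unit
}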
Google Daydream

Rather than plastic, the Daydream is built from a fabric-like material and is bundled with a Wii-like motion controller with a trackpad and buttons. It has superior optics compared to the Cardboard, but not as nice as those of the higher-end VR systems. Just as with the Gear VR, it works only with a very specific list of phones.

Cost: $79 (plus a Google or Android smartphone)

Gear VR

The Gear VR is part of the Oculus ecosystem. While it still uses a smartphone (Samsung only), the Gear VR head-mounted display (HMD) includes some of the same circuitry as the Oculus Rift PC solution. This results in far more responsive and superior tracking compared to Google Cardboard, although it still only tracks rotation.

Cost: $99 (plus a Samsung Android smartphone)

Oculus Rift

The Oculus Rift is the platform that reignited the VR renaissance through its successful Kickstarter campaign. The Rift uses a PC and external cameras that allow not only rotational but also positional tracking, giving the user a full VR experience. The Samsung relationship allows Oculus to use Samsung screens in its HMDs. While the Oculus no longer demands that the user remain seated, it does expect the user to move within a smaller 3 m x 3 m area. The Rift HMD is wired to the PC. The user can interact with the VR world with the included Xbox gamepad, a mouse and keyboard, a one-button clicker, or proprietary wireless controllers.

Cost: $399, plus $800 for a VR-ready PC

Vive

The HTC Vive from Valve uses smartphone panels from HTC. The Vive has its own proprietary wireless controllers, of a different design than Oculus's (though it can also work with gamepads, joysticks, and mouse/keyboard). Its most distinguishing characteristic is that the Vive encourages users to explore and walk within a 4 m x 4 m, or larger, cube.

Cost: $599, plus an $800 VR-ready PC

Sony PSVR

While there are persistent rumors of an Xbox VR HMD, Sony currently makes the only video game console with a VR HMD. It is easier to install and set up than a PC-based VR system, and while the library of titles is much smaller, the quality of the titles is higher on average. It is also the most affordable of the positional-tracking VR options. But it is also the only one that cannot be developed on by the average hobbyist developer.

Cost: $400, plus a Sony PlayStation 4 console

Microsoft's HoloLens

Microsoft's HoloLens provides a unique AR experience in several ways. The user is not blocked off from the real world; they can still see the world around them (other people, desks, chairs, and so on) through the HMD's semi-transparent optics. The HoloLens scans the user's environment and creates a 3D representation of that space. This allows the holograms from the HoloLens to interact with objects in the room: holographic characters can sit on couches, fish can avoid table legs, screens can be placed on walls, and so on.

The system is completely wireless, making it the only commercially available positional-tracking device without a tether. The computer is built into the HMD, with processing power that sits between a smartphone and a VR-ready PC. The user can walk, untethered, in areas as large as 30 m x 30 m. While an Xbox controller and a proprietary single-button controller can be used, the main interaction with the HoloLens is through voice commands and two hand gestures (Select and Go back). The final difference is that the holograms appear only in a relatively narrow field of view. Because the user can still see other people, whether sharing the same holographic projections or not, users can interact with each other in a more natural manner.

Cost: Development Edition: $3,000; Commercial Suite: $5,000

Headset costs and comparison across various features

The following chart is a sampling of VR headset prices, accurate as of February 1, 2018. VR/AR hardware is rapidly advancing, and prices and specs will change annually, sometimes quarterly.
As of now, the price of the Oculus has dropped by $200.

(Columns, in order: Google Cardboard / Gear VR / Google Daydream / Oculus Rift / HTC Vive / Sony PSVR / HoloLens)

Complete cost for HMD, trackers, default controllers: $5 / $99 / $79 / $399 / $599 / $299 / $3,000
Total cost with CPU (phone, PC, PS4): $200 / $650 / $650 / $1,400 / $1,500 / $600 / $3,000
Built-in headphones: No / No / No / Yes / No / No / Yes
Platform: Apple, Android / Samsung Galaxy / Google Pixel / PC / PC / Sony PS4 / Proprietary PC
Enhanced rotational tracking: No / Yes / No / Yes / Yes / Yes / Yes
Positional tracking: No / No / No / Yes / Yes / Yes / Yes
Built-in touch panel: No* / Yes / No / No / No / No / No
Motion controls: No / No / No / Yes / Yes / Yes / Yes
Tracking system: No / No / No / Optical / Lighthouse / Optical / Laser
True 360 tracking: No / No / No / Yes / Yes / No / Yes
Room scale and size: No / No / No / Yes, 2 m x 2 m / Yes, 4 m x 4 m / Yes, 3 m x 3 m / Yes, 10 m x 10 m
Remote: No / No / Yes / Yes / No / No / Yes
Gamepad support: No / Yes / No / Yes / Yes / Yes / Yes
Resolution per eye: Varies / 1440 x 1280 / 1440 x 1280 / 1200 x 1080 / 1200 x 1080 / 1080 x 960 / 1268 x 720
Field of view: Varies / 100 / 90 / 110 / 110 / 100 / 30
Refresh rate: 60 Hz / 60 Hz / 60 Hz / 90 Hz / 90 Hz / 90-120 Hz / 60 Hz
Wireless: Yes / Yes / Yes / No / No / No / Yes
Optics adjustment: No / Focus / No / IPD / IPD / IPD / IPD
Operating system: iOS, Android / Android Oculus / Android Daydream / Win 10 Oculus / Win 10 Steam / Sony PS4 / Win 10
Built-in camera: Yes / Yes / Yes* / No / Yes* / No / Yes
AR/VR: VR* / VR* / VR / VR / VR* / VR / AR
Natural user interface: No / No / No / No / No / No / Yes

Choosing which HMD to support comes down to a wide range of issues: cost, access to hardware, use cases, image fidelity and processing power, and more. The preceding chart is provided to help you understand the strengths and weaknesses of each platform. There are many HMDs not included in this overview: some are not commercially available at the time of this writing (Magic Leap, the Win 10 HMDs licensed from Microsoft, the Starbreeze/IMAX HMD, and others), and some, such as Razer's open source HMD, are not yet widely available or differentiated enough.

You enjoyed an excerpt from the book Virtual Reality Blueprints, written by Charles Palmer and John Williamson. In this book, you will learn how to create immersive 3D games and applications with Cardboard VR, Gear VR, OculusVR, and HTC Vive.

The hype behind Magic Leap's New Augmented Reality Headsets
Create Your First Augmented Reality Experience Using the Programming Language You Already Know


Top 7 tools for virtual reality game developers

Natasha Mathur
31 Oct 2018
12 min read
According to Statista, the virtual reality software market is booming: it is projected to reach a value of around 24.5 billion U.S. dollars by 2020. The estimated revenue of the virtual reality market in 2021 is 3.56 billion U.S. dollars, a huge increase from a very respectable 3.06 billion U.S. dollars back in 2016.

This makes virtual reality a potentially lucrative opportunity if you're a game developer. It's also one that's a lot of fun, with plenty of creative opportunities, and one that doesn't require a load of money up front. Thanks to technological advancements in the VR space, it's now easier than ever to build a VR game from scratch. But with so many virtual reality tools out there, it can be hard to know where to start; you're left with plenty of options but no sense of direction. To help you out, we've consolidated a list of what we think are the top 7 tools to help you get started.

1. Unity 3D: the leading game engine at the cutting edge of the industry

Developer: Unity Technologies
Release date: 2005

Why choose Unity for virtual reality game development? In a nutshell: it is the easiest way to get started with virtual reality development and doesn't compromise on the quality of the developed game.

Unity offers a huge 3D asset store, an online marketplace run by Unity. There you can easily find 2D and 3D models, SDKs, templates, and different virtual reality tools that you can download and import directly into your game. One of the most popular tools in the Unity asset store is the VR toolkit. For times when you don't want to spend time building a character model from scratch, you can simply pick one from the asset store, which helps jump-start the game development process. Some of these assets are free, and for others you pay a one-time fee.

Moreover, the documentation in Unity consists of vivid examples (e.g., Introduction to VR best practices), video tutorials, and live training sessions (e.g., VR essentials pack demo). This is great news not only for experienced game developers but for newbies too, as Unity makes it easy to quickly learn to build games, including AAA-quality virtual reality games. It also has an ever-growing community, so when you get stuck somewhere during the game development process, a solid community will be there to offer advice on resolving a wide range of issues.

Languages supported: Unity supports three development languages, namely C#, Boo, and UnityScript.

Platforms supported: Unity supports all the major platforms: mobile, PC, web, and console. The free version supports Mac OS X, Android, iOS, Windows, and other mobile platforms; the paid version further supports Nintendo Wii, Xbox 360, and PlayStation. Unity also supports all the major HMDs, such as Oculus Rift, Steam VR/Vive, PlayStation VR, Gear VR, Microsoft HoloLens, and Google's Daydream View.

Price: Unity comes in three versions: Personal, Plus, and Pro. The Personal version is completely free, Plus is $35 per seat per month, and Pro is $125 per seat per month. The Personal version, however, is more than enough to dive right into the development process.

Learning curve: Unity 3D has a gentle learning curve. It can be used with ease by both beginners and professionals alike.
Learning resources: Unity Virtual Reality Projects - Second Edition; Unity Virtual Reality - Volume 1 [Video]; Unity Virtual Reality - Volume 2 [Video]

2. Unreal Engine 4: a free game engine with exceptional graphics and capabilities for virtual reality

Developer: Epic Games
Release date: 1998

Why choose Unreal Engine for virtual reality gaming? Unreal Engine has powered games with some of the most exceptional graphics and features in the industry, so it naturally comes with features catered toward advanced game development. For virtual reality, Unreal Engine offers an advanced cinematics system, advanced lighting capabilities, a rendering pipeline delivering a 90 Hz stereo framerate or faster at high resolutions, and tools scaling from simple to detailed scenes, environments, and characters.

Similar to Unity, Unreal Engine 4 also comes with an asset store, an online marketplace by Unreal offering animations, blueprints, code plugins, props, environments, and architectural visualization. Again, just like Unity's asset store, some of the assets are paid and some are free. The documentation provided by Unreal Engine is not as rich as Unity's, offering basic guides and live training streams on virtual reality development. Unreal Engine 4 also has a strong community to guide you through your game development journey.

Languages supported: Unreal Engine 4 offers only the C++ development language.

Platforms supported: UE4 supports all the latest HMDs, such as Oculus Rift, HTC Vive, Samsung Gear VR, Google VR, and Leap Motion, among others. Unreal Engine 4 lets you deploy your VR game projects to Windows PC, PlayStation 4, Xbox One, Mac OS X, iOS, Android, AR, VR, Linux, SteamOS, and HTML5. You can run the Unreal Editor on Windows, Mac OS X, and Linux. Moreover, Xbox One, PlayStation 4, and Nintendo Switch console tools and code are available at no additional cost to registered developers for their respective platform(s).

Price: The great thing about UE4 is that it is very cost-effective for all the game nerds out there: it's free to use, with a 5% royalty on gross product revenue after the first $3,000 per game per calendar quarter from commercial products.

Learning curve: Unreal Engine 4 has a steep learning curve and is suited mostly for professionals.

Learning resources: Exploring Unreal Engine 4 VR Editor and Essentials of VR [Video]; Unreal Engine 4: The Complete Beginner's Course [Video]

3. CryEngine: a game engine with a powerful range of assets for virtual reality games

Developer: Crytek
Release date: 2002

Why choose CryEngine for virtual reality game development? Similar to Unity and Unreal Engine, CryEngine also offers an asset store, with tools and assets across different domains such as 3D modeling, scripts, sounds, and animations. The documentation offered by CryEngine is not as rich as Unity's, which makes it harder for beginners to approach. However, it does have an online forum which can guide experienced developers during their virtual reality game development journey. CryEngine also includes the CE# Framework, a new Sandbox Editor, improved profiling, a reworked low-overhead renderer, DirectX 12 support, an advanced volumetric cloud system, a new particle system, FMOD Studio support, and Visual Studio 2015 support, all of which collectively can amp up the virtual reality game development process.
Languages supported: It supports languages such as C++, Flash, ActionScript, and Lua.

Platforms supported: CryEngine supports Windows, Linux, PlayStation 4, Xbox One, Oculus Rift, OSVR, PSVR, and HTC Vive. Mobile support is currently under development.

Price: CryEngine is free but takes five percent of the revenues generated by each game built with it, after the revenues have passed $5,000.

Learning curve: CryEngine has a steep learning curve, as for anything other than basic games you need a strong command of languages such as C++, Flash, ActionScript, and Lua.

Learning resources: CryENGINE Game Programming with C++, C#, and Lua; CryENGINE SDK Game Programming Essentials [Video]

4. Blender: an accessible tool for building exceptional graphics and animations

Developer: Blender Foundation
Release date: 1998

Why choose Blender for virtual reality? Blender, a modern 3D graphics package, is not only great for 3D modeling but supports the entirety of the 3D pipeline: rigging, animation, simulation, rendering, motion tracking, video editing, and game creation. It also comes with a built-in, powerful path-tracing engine called Cycles that offers stunning ultra-realistic rendering, real-time viewport preview, PBR shaders and HDR lighting support, as well as VR rendering support. It also has a solid community of developers and offers tutorials, workshops, and courses on character modeling, character animation, and Blender fundamentals.

Blender comes with add-ons for VR such as BlenderVR, which supports CAVE/VideoWall, head-mounted displays (HMDs), and external rendering modality engines. It helps with cross-platform development of virtual reality applications as well as porting scenes from one VR platform configuration to another without any need to edit the actual scene.

Platforms supported: Blender supports Windows, Mac OS, and Linux.

Price: Blender is free to use.

Learning curve: Blender has a gentle learning curve and can be used with ease by both beginners and professionals alike.

Learning resources: Building a Character using Blender 3D [Video]; Blender 3D Basics

5. Amazon Lumberyard: an accessible and fast tool for building virtual reality games

Developer: Amazon
Release date: 2015

Why choose Amazon Lumberyard for virtual reality game development? Based on CryEngine's architecture, Amazon Lumberyard is a powerful cross-platform game engine comprising tools that help you create the highest-quality games, connect your games to the vast storage of the AWS Cloud, and engage fans on Twitch.

Lumberyard's professional tools, such as its virtual reality system, use Lumberyard's Gems: self-contained packages of assets and features that can be added within your game. These gems act as templates for you to build your own gems and support all the VR devices without requiring any engine code editing. Lumberyard is also integrated with Amazon GameLift, an AWS service for deploying, operating, and scaling dedicated game servers for session-based multiplayer games.

Lumberyard also speeds up virtual reality development with the new VR Preview function, available right in the editor, which you can click to see your scene in VR right away. This lets game developers make VR-specific adjustments and level designs right in the editor, which is quite convenient and saves a lot of time.
Platforms supported: Lumberyard supports HMDs such as Oculus Rift, HTC Vive, and Open Source Virtual Reality (OSVR). It offers support for PC, Xbox One, PlayStation 4, iOS (iPhone 5S+ and iOS 7.0+), and Android (Nexus 5 and equivalents with support for OpenGL 3.0+). Lumberyard also offers support for dedicated servers on Windows and Linux.

Price: Amazon Lumberyard is free, with no seat licenses, royalties, or subscriptions required. You only pay the standard AWS fees for the AWS services you choose to use.

Learning curve: Lumberyard has a gentle learning curve and is easy to use for novices and professionals alike.

Learning resources: Learning AWS Lumberyard Game Development

6. AppGameKit-VR (AGK): an easy way to build games for beginners

Developer: The Game Creators
Release date: 2017

Why choose AppGameKit-VR for virtual reality game development? AppGameKit-VR lets anyone quickly code and build apps for multiple platforms with the help of AGK's BASIC scripting system. It adds easy-to-use VR commands to the core AppGameKit script language, which delivers immersive VR experiences. It also allows full development control for SteamVR-supported head-mounted displays, touch devices, and Leap Motion hand tracking. AGK does the majority of the work for you, making it super easy to code, compile, and export apps to each platform, so you can focus mainly on your game or app idea.

AGK-VR offers 60 VR commands, ranging from diagnostic checks on the hardware and SteamVR, to initializing the HMD, creating standing or seated VR experiences, and rendering a 3D scene to the HMD. AGK also offers demos on how to get started with using these commands in your games, and has an online forum where you can ask questions, learn, and interact with other users. The AGK script is also fully documented.

Platforms supported: AGK-VR offers support for Windows, Mac, Linux, iOS, Android (including Google, Amazon, and Ouya), HTML5, and Raspberry Pi (free from the TGC website).

Price: AGK is available for $29.99.

Learning curve: AppGameKit-VR has a gentle learning curve, which is ideal for beginners and makes VR game development quick for the experienced.

7. Oculus Medium 2.0: software designed with virtual reality in mind

Developer: Oculus VR
Release date: 2016

Why choose Oculus Medium for building virtual reality games? Oculus Medium is a great tool that brings sculpting, modeling, painting, and creating objects for the virtual reality world together in a single package. It's a very handy tool to have during the character design process: it lets you sculpt and create a variety of 3D objects to include within your VR game with the help of the Oculus Touch controllers alongside the Oculus Rift. It comes with features such as grid snapping, an increased layer limit, multiple lights, and 300 prefabricated stamps. It is quite simple to use, and anyone, be it a newbie or an experienced game developer, can pick it up.

The rendering engine in Oculus Medium uses Vulkan, which results in smoother frame rates and better memory management when building higher-resolution sculpts. Oculus Medium also offers tutorials to help you quickly get the hang of its different features, and it has an online forum where VR artisans and developers share tips, information, and videos.

Price: Oculus Medium 2.0 is available for $30, which is quite affordable for novices and professionals alike.
Learning curve: Oculus Medium has a gentle learning curve, as it's pretty approachable for novices as well as professionals.

Each of the tools mentioned above brings something unique in terms of abilities and features. However, keep in mind that selecting a tool solely based on its technical features is not the best idea. Rather, figure out what works best for you, depending on your experience and requirements. So which tool (or tools) are you planning to use for VR game development? Is there any tool we missed? Let us know!

Game developers say Virtual Reality is here to stay
What's new in VR Haptics?
Top 7 modern Virtual Reality hardware systems


Forget C and Java. Learn Kotlin: the next universal programming language

Sugandha Lahoti
11 May 2018
14 min read
Kotlin is fast moving towards becoming the universal programming language. What is a universal programming language? From a simplistic view, the expectation could be that one language is used for all types of programming. While that may be far-fetched in today's complex world, the expectation could be adjusted to one language becoming the dominant programming language. Most certainly, it is the single most important language to master.

This article is an excerpt from the book Kotlin Blueprints, written by Ashish Belagali, Hardik Trivedi, and Akshay Chordiya. With this book, you will learn how to design and prototype professional-grade applications using various features of Kotlin.

Historically, different languages have used strategies appropriate for their times to become the universal programming language:

In the 1970s, C became the universal programming language. Prior to C, the programming languages of the world were divided between low-level and high-level languages, the former being the languages that were close to machine code and the latter being ones that were more concise and worked better for human understanding. The C programming language was developed as a single language that could work as both a low-level and a high-level language. The Unix operating system was showcased as one that was built ground-up entirely on C, without needing another low-level language.

In the 1990s, Java became the universal programming language with the Write Once Run Anywhere strategy. Prior to Java, developers needed to create different programs to run on different platforms (different operating systems running on different hardware needed different programs to run). However, with Java, programs could be written targeting a single platform, namely the Java Virtual Machine (JVM). The JVM is available on all the popular platforms and takes care of all platform-specific nuances. The Java language became the universal language by being the language in which to write programs for the JVM.

Another two decades have passed, and the stage is all set to welcome the next universal language. Let's examine Kotlin's strategy to become that. Why can Kotlin be described as a better Java than any other language? How does Kotlin address areas beyond the Java world? What is Kotlin's winning strategy? What does this all mean for a smart developer?

Why Kotlin vs Java?

Why is being a better Java important for a language? For over a decade, Java has consistently been the world's most widely used programming language. Therefore, a language that gets crowned as being a better Java should automatically attract the attention of the world's single largest community of programmers: the Java programmers.

The TIOBE index is widely referred to as a gauge of the popularity of programming languages. As of August 2017, the interesting point is that while Java has been the #1 programming language in the world for the last 15 years or so, it has been in a steady state of decline for many years now. Many new languages have kept coming, and existing ones have kept improving, chipping steadily into Java's developer base; however, none of them has managed to take the #1 position from Java so far. Today, Kotlin is poised to become the most serious challenger for the better Java crown, and subsequently, to take the first place, for reasons that we will see shortly.
Presently in 41st place, Kotlin is marching ahead at a fast pace. In May 2017, Google announced Kotlin as an officially supported language for Android development, in league with Java. This has turned out to be a major boost for Kotlin, and the rate of its adoption has accelerated ever since.

Why not other languages?

Many languages prior to Kotlin have tried to become a better Java. Let's see why they could never become one. Every language attracts the programmer community by giving them the ability to do something that was cumbersome before. Adoption is directly driven by how much value the promise has and how much faith the community can put into that promise. All languages or frameworks that claimed to be a better Java and offered something worthwhile beyond what Java offers also took something back in turn. Here are a few examples:

The .NET framework has been the longtime rival of Java and has supported multiple languages from day one. Based on the lessons learned from Java, the .NET designers came up with better language constructs. However, the biggest hurdle for .NET was that it was a proprietary technology, which created an impediment to its adoption. Also, .NET was more aggressive in adding newer language constructs; while the framework evolved quickly as a result, it broke its backward compatibility many times.

Ruby (and Python) offered shortened code, enticing programming constructs, and greater expressiveness as opposed to the boring Java; however, they took away static typing support (which helps make robust programs) and made programs slower.

Scala offered shortened code and advanced programming constructs without sacrificing typing safety. However, Scala is complex and has a substantially high learning curve. It supports multiple coding styles, so there is a danger that Scala code written by one developer may not be easily understood by another. These are risk factors for any project that involves a team of developers and an application that is expected to be supported over a long period, which is true of most applications anyway.

Why Kotlin?

Unlike other languages, Kotlin offers a lot of power over Java while not taking anything away. Kotlin is interoperable with Java: it is possible to write applications containing both Java and Kotlin code, calling one from the other. Calling Java code from Kotlin is simpler than the other way around, but the former will be the case most of the time anyway, where new Kotlin code is added on top of legacy Java code. Kotlin can use all the Java libraries and legacy code without any code conversion, so it is possible to inject Kotlin into a Java project without boiling the ocean.

Concise yet expressive code

While being interoperable, Kotlin code is far superior to Java code. Like Scala, Kotlin uses type inference to cut down on a lot of boilerplate code and make it concise. (Type inference is a better feature than dynamic typing, as it reduces the code without sacrificing the robustness of the end product.) However, unlike Scala, Kotlin code is easy to read and understand, even for someone who may not know Kotlin.
Kotlin's data class construct is the most prominent example of conciseness:

data class Employee(val id: Long, var name: String)

Compared to its Java counterpart, the preceding line packs in the class definition, member variables, constructor, getter-setter methods, and also utility methods such as equals() and hashCode(). This would easily take 15-20 lines of Java code.

The data class construct is not an isolated example. There are many others where the syntax is concise and expressive. Consider the following as additional examples (a short sketch of these appears at the end of this section):

Kotlin's default values for function parameters save the need to overload functions.
Kotlin's extension functions can be used to add domain-specific functionality to existing classes, making the code easy for someone from the domain to understand.

Enhanced robustness

Statically typed languages have a built-in safety net because of the assurance that the compiler will catch any incorrect type cast. Both Java and Kotlin support static typing, and with Java Generics introduced in Java 1.5, they both fare better than the Java releases prior to 1.5. However, Kotlin takes a big step further in addressing the null pointer error. This null pointer error causes a lot of checks in Java programs:

String s = someOperation();
if (s != null) {
    ...
}

One can see that the null check is not needed if someOperation() never returns null. On the other hand, it is possible for a programmer to omit the null check while someOperation() returning null is a valid case. With Kotlin, the definition of someOperation() itself declares a return type of either String or String?, and this has implications on the subsequent code, so the developer just cannot go wrong. Refer to the following examples:

fun someOperation(): String   // not nullable
fun someOperation(): String?  // nullable

val s = someOperation()
if (s != null) {  // null check not needed – editor warning
    ...
}

val s = someOperation()
val n = s.length      // error if s is nullable: null check imposed
val n = s?.length ?: 0  // handling the null condition

One may point out that Java developers can use the @Nullable and @NotNull annotations or the Optional class; however, these were added quite late, most developers are not aware of them, and they can always get away with not using them, as the code does not break. Finally, they are not as elegant as putting a question mark. There is also a subtle point here: if a Kotlin developer is careless, he would write just the type name, which automatically becomes a non-nullable declaration. If he wanted to make it nullable, he would have to key in that extra question mark deliberately. Thus, you err on the side of caution, as far as keeping the code robust is concerned.

Another example of this robustness is found in the val/var declarations. Seasoned programmers know that most variables get a value assigned to them only once. In Kotlin, while declaring the variable, you choose val for such a variable. At the time of variable declaration, the programmer has to select between val and var, and so he puts some thought into it. On the other hand, in Java, you can get away with just declaring the type with its name, and you will rarely find Java code that defines a variable with the final keyword, which is Java's way of declaring that the variable can be assigned a value only once. Basically, with the same maturity level of programmers, you can expect relatively more robust code in Kotlin than in Java, and that's a big win from the business perspective.
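Picking up the conciseness examples above (default parameter values and extension functions), here is a short sketch of our own, not from the book; the connect and toSlug names are hypothetical:

// One function with default parameter values replaces a pile of
// Java-style overloads.
fun connect(host: String, port: Int = 80, secure: Boolean = false) {
    println("Connecting to $host:$port (secure=$secure)")
}

// An extension function adds domain-specific behavior to an existing
// class without inheriting from it or touching its source.
fun String.toSlug(): String = trim().lowercase().replace(Regex("\\s+"), "-")

fun main() {
    connect("example.com")                            // both defaults apply
    connect("example.com", port = 443, secure = true) // named arguments
    println("Kotlin Blueprints".toSlug())             // prints kotlin-blueprints
}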
Excellent IDE support from day one

Kotlin comes from JetBrains, who also develop a well-known Java integrated development environment (IDE): IntelliJ IDEA. JetBrains developers made sure that Kotlin has first-class support in IDEA. Not only that, they also developed a Kotlin plugin for Eclipse, the most widely used Java IDE.

Contrast this with the situation when Java appeared on the scene roughly two decades ago. There was no good IDE support; programmers were asked to use simple text editors, and coding Java was hard, with no safety net provided by an IDE, until the Eclipse editor was open-sourced. In the case of Kotlin, editor suggestions have been available from day one, which means developers can learn the language faster, make fewer mistakes, and write good-quality compilable code with relative ease. Clearly, Kotlin does not want to waste any time in climbing the ladder of popularity.

Beyond being a better Java

We saw that on the JVM platform, Kotlin is neat and quite superior. However, Kotlin has set its eyes beyond the JVM. Its strategy is to win based on its superior and modern feature set. Before we go ahead, let's list the top five appeals of Kotlin:

Static typing (as in C or Java) means that there is built-in type safety. The compiler catches any incorrect type assignments. This makes programs robust.

Kotlin is concise and expressive. Being concise implies that there is less to read and maintain; being expressive implies better maintainability.

Being a JVM language, Kotlin programs can take advantage of the features built into the JVM, such as its cross-platform nature, memory management, high performance, and sandbox security.

Kotlin has built-in null safety. Null references are famous as the billion-dollar mistake, as admitted by their inventor Tony Hoare, and cost a great deal of unnecessary null checks in programs. Kotlin eliminates those and makes programs more robust.

Kotlin is easy to learn, especially for Java developers. Its syntax is clean and therefore easy to understand. Because of this, Kotlin programs are fun for developers to code, and easy for their peers to understand and enhance. From a business angle, they are more robust and easier to maintain.

Kotlin is in the winning camp

The features of Kotlin get good validation when one considers that other languages with similar features are also growing in popularity:

The Crystal language attracts Ruby programmers by adding static typing support. Similarly, TypeScript adds static typing support to JavaScript and has become the preferred language for some JavaScript frameworks.

Scala and F# add functional programming support to traditionally non-functional paradigms without sacrificing type safety and, hence, are more attractive. Kotlin uses functional programming just enough to ease programming in a lot of cases, but not so much as to make it complex.

Like Kotlin, Swift and Rust also have built-in null safety. Kotlin and Swift are often compared, as their syntaxes resemble one another a lot.

For server-side languages designed after the emergence of parallel computing, providing built-in constructs to ease the programmer's work became one of the chief requirements. One can find such constructs in both Kotlin (coroutines) and Rust.

Go native strategy

The Kotlin developers figured that the same strategy used on the JVM platform could be used on other platforms too.
Kotlin does not disrupt any platform's existing technology:

The JVM works with Java bytecode, and Kotlin gives an alternative to Java for generating the same bytecode. (By no means is Kotlin the first alternative, as there are already 200+ languages that work with the JVM, but it is the most elegant one for all the reasons we have seen previously.)

On modern browsers, where JavaScript is the de facto standard, Kotlin can work by transpiling to JavaScript. Again, this means that Kotlin is friendly with existing browsers without making any special effort.

On the Node.js platform, where JavaScript is used on the server side, your Kotlin code transpiles into JavaScript, and hence no changes are needed in the Node.js framework for Kotlin to run.

In a similar way, Kotlin/Native plans to work with other technologies in a native way. Since the platform's technology is not disrupted, zero changes are needed at the platform level to adopt Kotlin. Kotlin's compatibility with a given platform can be taken for granted from day one. This eliminates a big business risk.

Kotlin's winning strategy

Kotlin's winning strategy is the sum of the various factors we have seen previously. It has a two-pronged strategy: win over developers with the coolness of the language and the ease of working with it, and win over business users with its business benefits. The other benefits include:

The growing popularity of the language
Endorsement from Google, making Kotlin an officially supported language in May 2017
Kotlin-specific development frameworks emerging
Leading Java frameworks, such as Spring, offering Kotlin-specific improvements
The growing number of applications being tried out with Kotlin
The user groups spread across Kotlin developer hubs
The growing number of technology companies using Kotlin

With this in mind, the winning strategy for smart programmers is to master Kotlin and learn to work with Kotlin on various platforms. Being ahead of the curve, as opposed to following the world after Kotlin is already big, will be a quick path to being recognized as a leader. Further chapters of this book will help you in exactly this mission. Apart from going through this book, we strongly suggest you join the community:

Join the Kotlin weekly mailing list at http://kotlinweekly.net
Join the nearest Kotlin user group at http://kotlinlang.org/community/user-groups.html
Kotlin's community on Slack is at https://kotlinlang.slack.com/

We saw how Kotlin is well positioned to take off as the universal programming language. It offers an opportunity for smart programmers to establish themselves at the forefront of this rising tide. This article was taken from the book Kotlin Blueprints. If you liked reading this piece, check out the book to build comprehensive applications using Kotlin features.

Getting started with Kotlin programming
Build your first Android app with Kotlin
How to convert Java code into Kotlin


What’s new in VR Haptics?

Natasha Mathur
16 Jul 2018
8 min read
Virtual reality is evolving at a staggering rate, and some of humankind's most exciting tools and technologies are coming to the virtual reality space. One technology taking over the VR world and making it more powerful is VR haptics.

VR haptics technology adds an extra dimension to the VR world by letting users feel the virtual environment via the sense of touch, in addition to visual and aural perception, making you feel truly immersed in the artificial world. Imagine yourself in a desert, seeing the sand and feeling it glide under your feet as you walk. Haptics uses external devices like gloves, shoes, and joysticks, via which users can receive feedback in the form of vibrations from computer applications. This feedback provides physical sensations in the hand or other parts of the body, along with a realistic simulation of movements and behaviors similar to those of the real world.

VR Haptics: a growing domain

VR haptics technology is growing beyond creating vibrations in game controllers. In the near future, you might be able to cuddle a dog and feel it licking your face in the VR world, which speaks volumes about the pace at which haptic technology is growing.

One famous example which discusses modern VR is the popular sci-fi novel Ready Player One. It illustrates the possibilities of haptic technology in the future, exploring the journey of a guy as he sets foot into a virtual reality simulator (OASIS). He uses a headset and a pair of gloves to maneuver around the virtual world. Apart from the gloves, a lot of future concept products are also covered in the novel which make the illusion of immersion easier to picture, such as towers emitting smells in the VR world and wind/temperature generators that mimic real life.

Haptics came about just as head-mounted displays (HMDs) came to light in the 2010s. HMDs allowed people to see virtual reality, while haptic feedback gave people the opportunity to experience the virtual world and to act within it. Texture, temperature, pressure, taste, smell, and other non-visual sensory inputs became real in VR. Apart from virtual reality games and apps, haptic feedback is widely used in personal computers, mobile devices, robots, and more. In this article, though, we'll stick to the use of haptic technology in the VR space.

Usually, most VR users rely on touch controllers for haptic feedback. Recently, however, a lot of third-party companies have been coming out with products such as gloves for systems like the Oculus Rift and HTC Vive. Here is a list of recent developments in haptic technology for the VR world (a minimal code sketch of the simplest form of haptic feedback appears at the end of this article).

Super affordable VR haptic gloves by Plexus

Most of the currently available options in the VR haptics field are somewhat pricey, but earlier this month Plexus announced their new product, a VR haptic and sensor glove.

https://vimeo.com/276517370 (Source: Plexus)

Key features

Plexus VR haptic gloves offer a fully modular tracking solution capable of tracking up to 0.01 degrees of precision. The gloves are capable of individual finger tracking as well as tracking each joint on the finger, thereby offering higher precision in the VR world. They are compatible with the HTC Vive, Oculus Rift, and Windows Mixed Reality devices, and also come with additional adapter plates. The development kit version of the Plexus haptic gloves, priced at $249 per glove pair, can be pre-ordered on the official Plexus website.
The company will begin shipping in August 2018, but at the moment shipping is only available to the USA, Europe, Canada, and Australia.

Kaaya Tech's full body tracking HoloSuit

Kaaya came out with a motion capture (MoCap) suit called HoloSuit last month, which offers motion capture as well as haptic feedback. HoloSuit is billed as the world's first affordable, wireless, easy-to-use, bi-directional, full-body motion capture suit. The user's entire body movement data is captured by the HoloSuit, and it uses haptic feedback to send information back to the user.

https://www.youtube.com/watch?v=SEQsDR32gII&t=122s (Source: HoloSuit)

It can be used in various areas such as sports, healthcare, education, entertainment, or industrial operations.

Key features

The HoloSuit consists of 36 embedded sensors in the Pro version and 26 embedded sensors in the less complex version. The embedded sensors carry out all the work of capturing body motion, which is necessary for world-scale tracking. It also includes 9 haptic feedback devices and 6 embedded firing buttons (buttons that govern specific tasks such as saving the game, pausing, etc.) dispersed across both arms, the legs, and all ten fingers. It delivers data wirelessly, either through Wi-Fi or Bluetooth LE, to a VR setup using Unity or a Wi-Fi SDK. The HoloSuit doesn't come with an external camera tracking option. It supports all the major platforms: Windows, macOS, iOS, and Android devices.

A complete HoloSuit is quite expensive, starting at a regular price of $999. The jacket and jersey are priced at $499, a jersey or track pants at $399, and a pair of gloves at $799. The HoloSuit Pro is priced at $1,599. Shipping for the full-body VR haptic HoloSuit will start this November.

Disney's VR haptic "Force Jacket"

Disney came out with its VR haptic jacket, the "Force Jacket", back in April. It provides users with precisely directed force along with high-frequency vibrations felt against the upper body, in sync with the visuals. The prototype is made out of a converted life jacket fitted with 26 airbags.

https://www.youtube.com/watch?v=5BOFHEow608 (Source: DisneyResearchHub)

The Force Jacket was created by engineers at Disney Research, MIT, and Carnegie Mellon University.

Key features

The haptic jacket uses an air compressor and a vacuum pump. The air compartments in the jacket can be inflated to exert a force on the user's body, measured by force-sensitive resistors. The 26 air compartments are activated by microcontrollers for pressure feedback, vibrotactile feedback, or both, and controllers activate the solenoid valves connected to the vacuum. Jacket inflation parameters like speed, force, and duration are specified using a haptic effects editor, and the jacket uses a motion interface to sequentially inflate the compartments to simulate motion across the body.

Each airbag within the haptic jacket can be driven to mimic sensations such as being hit in the chest by a snowball, getting tapped on the shoulder, slime dripping down the back, getting punched in the side, or a snake coiling its body around the user. The jacket is mainly intended for the entertainment and gaming industry and is not available on the consumer market, but it seems to have great potential for other applications in the future.

VR gloves by HaptX

HaptX announced a pair of VR gloves back in November of last year.
The gloves use micro-pneumatic technology for detailed haptics and force feedback (the ability to restrict your fingers' movement to simulate holding objects) in the fingers.

https://www.youtube.com/watch?v=2C2_kbjtjRU (Source: HaptX)

Key features

The gloves feature technology that provides 100 points of tactile displacement feedback, offer up to five pounds of resistance per finger, and come with sub-millimeter precision motion tracking. The gloves use an SDK of HaptX's own design, created using Unreal Engine's physics system, which tells the glove when and where to apply haptic effects as well as when and how to engage the force feedback. No information on pricing or worldwide availability has been released by the company yet, but it is rumored that the VR gloves will launch for the consumer market sometime later this year.

Apart from these products, there are other, smaller advancements happening in the VR haptics space. For example, Heather Culbertson, an Assistant Professor in USC's computer science department, recently created a haptic armband capable of mimicking the sensation of a human touch.

VR aims to provide you with an environment where you feel truly immersed and can feel objects as in the real world. These products are bringing the VR world a step closer to richer levels of immersive experience. Gone are the days when haptic feedback was limited to vibrating controllers and joysticks. As the technology advances, a whole new world of VR haptic devices is here to make your VR experience as seamlessly immersive as possible. In fact, some people even believe that without haptics, VR is nothing but a picture and a sound.

Game developers say Virtual Reality is here to stay
CTA announces its first AR/VR Standard terminology
Top 7 modern Virtual Reality hardware systems
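As promised earlier in this article, here is what the simplest form of haptic feedback looks like in code: a minimal Kotlin sketch (our illustration, not tied to any product above) that fires a single vibration pulse on an Android device, the primitive that vibrating controllers build on. The buzz function name and its defaults are arbitrary choices; VibrationEffect requires Android 8.0 (API 26) or newer, and the app needs the VIBRATE permission.

import android.content.Context
import android.os.Build
import android.os.VibrationEffect
import android.os.Vibrator

// Fire a single timed vibration pulse; amplitude (1-255) shapes how
// "hard" the tap feels on devices that support amplitude control.
fun buzz(context: Context, millis: Long = 50, amplitude: Int = 128) {
    val vibrator = context.getSystemService(Context.VIBRATOR_SERVICE) as Vibrator
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
        vibrator.vibrate(VibrationEffect.createOneShot(millis, amplitude))
    } else {
        @Suppress("DEPRECATION")
        vibrator.vibrate(millis)
    }
}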


MVP for Android

HariVigneshJayapalan
04 Apr 2017
6 min read
The Android framework does not encourage any specific way to design an application. In a way, that makes the framework more powerful and more vulnerable at the same time. You may be asking yourself things like, "Why should I know about this? I'm provided with Activity, and I can write my entire implementation using a few Activities and Fragments, right?"

Based on my experience, I have realized that solving a problem or implementing a feature at a given point in time is not enough. Over time, our apps go through many change cycles and much feature management. Maintaining these over a period of time will create havoc in our application if it is not designed properly with separation of concerns. That's why developers have come up with architectural design patterns for better code crafting.

How has it evolved?

Most developers started creating Android apps with Activity at the center, capable of deciding what to do and how to fetch data. Activity code grew over time and became a collection of non-reusable components. Developers then started packaging those components so the Activity could use them through their exposed APIs, and began breaking code into bits and pieces as much as possible. After that, they found themselves in an ocean of components with hard-to-trace dependencies and usage.

Later, we were introduced to the concept of testability and found that regression is much safer if code is written with tests. Developers realized that the jumbled code developed in the above process was very tightly coupled with the Android APIs, preventing JVM tests and hindering easy design of test cases. This is the classic MVC, with Activity or Fragment acting as a Controller.

SOLID principles

SOLID principles are object-oriented design principles, thanks to Robert C. Martin. According to the SOLID article on Wikipedia, the letters stand for:

S (SRP): Single responsibility principle. A class must have only one responsibility and do only the task for which it has been designed. If a class assumes more than one responsibility, we get high coupling, causing our code to be fragile with any change.

O (OCP): Open/closed principle. A software entity must be easily extensible with new features without having to modify its existing code in use. Open for extension: new behavior can be added to satisfy new requirements. Closed for modification: extending with new behavior should not require modifying the existing code. Applying this principle gives us extensible systems that are less prone to errors whenever the requirements change. We can use abstraction and polymorphism to help us apply it.

L (LSP): Liskov substitution principle. Defined by Barbara Liskov, this principle says that objects must be replaceable by instances of their subtypes without altering the correct functioning of our system. Applying this principle lets us validate that our abstractions are correct.

I (ISP): Interface segregation principle. A class should never implement an interface that it does not use. Failing to comply means our implementations depend on methods we do not need but are obliged to define. Implementing a specific interface is therefore better than implementing a general-purpose one. An interface is defined by the client that will use it, so it should not have methods that the client will not implement.

D (DIP): Dependency inversion principle. A particular class should not depend directly on another class, but on an abstraction (interface) of that class. Applying this principle reduces dependency on specific implementations and makes our code more reusable.

MVP tries to follow (though not 100% completely) all five of these principles. You can look up clean architecture for a purer SOLID implementation.

What is the MVP design pattern?

The MVP design pattern is a set of guidelines that, if followed, decouples the code for reusability and testability. It divides the application components based on their roles, called separation of concerns. MVP divides the application into three basic components:

Model: The Model represents a set of classes that describe the business logic and data. It also defines business rules for data, meaning how the data can be changed and manipulated. In other words, it is responsible for handling the data part of the application.

View: The View represents the UI components. It is only responsible for displaying the data that is received from the Presenter. It transforms the model(s) into the UI; in other words, it is responsible for laying out the views with specific data on the screen.

Presenter: The Presenter is responsible for handling all UI events on behalf of the View. It receives input from users via the View, processes the data with the help of the Model, and passes the results back to the View. Unlike the view and controller in MVC, the View and Presenter are completely decoupled from each other and communicate through an interface. Also, the Presenter does not manage incoming request traffic the way a Controller does. In other words, it is a bridge that connects a Model and a View, and it also acts as an instructor to the View.

MVP lays down a few ground rules for the above components, as listed below:

A View's sole responsibility is to draw the UI as instructed by the Presenter. It is the dumb part of the application.
The View delegates all user interactions to its Presenter.
The View never communicates with the Model directly.
The Presenter is responsible for delegating the View's requirements to the Model and instructing the View with actions for specific events.
The Model is responsible for fetching data from the server, database, and file system.
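To make these ground rules concrete, here is a minimal Kotlin sketch of an MVP contract. All names (GreetingView, GreetingModel, GreetingPresenter) are hypothetical, and the Model is stubbed in memory rather than hitting a server or database:

// The View is an interface, so the Presenter can be unit-tested on the
// JVM with a fake View, with no Android dependency required.
interface GreetingView {
    fun showGreeting(text: String)
    fun showError(message: String)
}

// The Model owns the data; stubbed here, but in a real app it would
// fetch from the server, database, or file system.
class GreetingModel {
    fun loadUserName(): String? = "Ada"
}

// The Presenter receives UI events from the View, consults the Model,
// and instructs the View what to display.
class GreetingPresenter(
    private val view: GreetingView,
    private val model: GreetingModel
) {
    fun onScreenOpened() {
        val name = model.loadUserName()
        if (name != null) view.showGreeting("Hello, $name!")
        else view.showError("No user found")
    }
}

// An Activity or Fragment would implement GreetingView and forward
// user interactions to the Presenter.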
MVP projects for getting started

Every developer will have his or her own way of implementing MVP. I'm listing a few projects below. Migrating to MVP will not be quick and will take some time; please take your time and get your hands dirty with MVP:

https://github.com/mmirhoseini/marvel
https://github.com/saulmm/Material-Movies
https://fernandocejas.com/2014/09/03/architecting-android-the-clean-way/

About the author

Hari Vignesh Jayapalan is a Google-certified Android app developer, IDF-certified UI & UX professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur.


5 ways to reduce App deployment time

Guest Contributor
27 Oct 2018
6 min read
Over 6,000 mobile apps are released on the Google Play Store every day. This breeds major competition among apps that are constantly trying to reach more consumers. Spoilt for choice, the average app user is no longer willing to put up with lags, errors, and other things that might go wrong with their app experience. Because consumers have such high expectations, developers need to find a way to release new updates, or deployments, faster. This means app developers need to keep deployment time low without compromising quality.

The world of app development is always evolving, and every new deployment comes with a risk. You need the right strategy to keep things from going wrong at every stage of the deployment process. Luckily, it's not as complicated as you might think to create a workflow that won't breed errors. Here are some tips to get you started.

1. Logging to catch issues before they happen

An application log is a file that keeps track of events logged by a piece of software, including vital information such as errors and warnings. Logging helps in catching potential problems before they happen, and even when a problem does arise, you'll have a log to show you why it might have occurred. Logging also provides a history of earlier version updates which you can restore from.

You have two options for application logging: creating your own framework or utilizing one that's already available. While it's completely possible to create your own, based on your own decisions about what's important for your application, there are already tools that work effectively which you can implement in your project. You can learn more about creating a system for finding problems before they happen here: Python Logging Basics - The Ultimate Guide to Logging.
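To make this tip concrete, here is a minimal Kotlin sketch of application logging using java.util.logging; the file name, logger name, and log messages are arbitrary choices for this example, not a prescribed setup:

import java.util.logging.FileHandler
import java.util.logging.Logger
import java.util.logging.SimpleFormatter

// A logger that appends records to app.log, preserving history from
// earlier runs and versions.
val logger: Logger = Logger.getLogger("app-deployment").apply {
    addHandler(FileHandler("app.log", true).apply {
        formatter = SimpleFormatter()
    })
}

fun main() {
    logger.info("Deployment 1.4.2 started")
    try {
        error("Simulated failure during migration")
    } catch (e: IllegalStateException) {
        // WARNING and SEVERE records are what you search for after a bad deploy.
        logger.severe("Migration failed: ${e.message}")
    }
}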
4. Testing

Finally, you'll want to set up a system for testing your app deployments effectively. Testing is important to make sure everything is working properly, so you can quickly launch your newest deployment without worrying about things going wrong. You can create sample tests for all aspects of the user experience, like logins, key pages, and APIs. However, you'll need to choose a method (or several) of testing that makes sense based on your deployment size.

Common application testing types:

Functionality Testing: Ensures the app is working on all devices.
Performance Testing: Introduces mobile challenges, such as poor network coverage and low memory, that stress the application's server.
Memory Leakage Testing: Checks for optimized memory processing.
Security Testing: As security becomes a greater concern for users, apps need to be tested to ensure data is protected.

The good news is much of this testing can be done through automated tools. With just a few clicks, you can test for all of the things above. The most common automated testing tools include Selenium, TestingWhiz, and Test IO.

5. Deployment tracking software

When you're continuously deploying new updates for your app, you need a way to track these changes in real time. This helps your team see when deployments happened, how they relate to prior deployments, and how they've affected your predetermined KPIs. While you should still have a system for testing, automating code, and tracking errors, some errors will still happen, since there is no way to prevent problems 100% of the time. Using deployment tracking software such as Loggly (full disclosure, I work at Loggly), Raygun, or Airbrake will help cut down on time spent searching for an error. Because these tools immediately identify whether an error is related to newly released code, you can spend less time locating a problem and more time solving it.

When it comes to your app's success, you need to make sure your deployments are as pain-free as possible. You don't have time to waste, since competition is fierce today, but that is no excuse to compromise on quality. The above tips will streamline your deployment process so you can focus on building something your users love.

About the Author

Ashley is an award-winning writer who discovered her passion in providing creative solutions for building brands online. Since her first high school award in Creative Writing, she continues to deliver awesome content through various niches.

Mastodon 2.5 released with UI, administration, and deployment changes
Google App Engine standard environment (beta) now includes PHP 7.2
Multi-Factor Authentication System – Is it a Good Idea for an App?

OpenCV and Android: Making Your Apps See

Raka Mahesa
07 Jul 2016
6 min read
Computer vision might sound like an exotic term, but it's actually a piece of technology that you can easily find in your daily life. You know how Facebook can automatically tag your friends in a photo? That's computer vision. Have you ever tried Google Image Search? That's computer vision too. Even the QR code reader app on your phone employs some sort of computer vision technology.

Fortunately, you don't have to conduct your own research to implement computer vision, since the technology is easily accessible in the form of SDKs and libraries. OpenCV is one of those libraries, and it's open source too. OpenCV focuses on real-time computer vision, so it feels very natural when the library is extended to Android, a device that usually has a camera built in. However, if you're looking to implement OpenCV in your app, you will find the official documentation for the Android version lagging a bit behind the ever-evolving Android development environment. But don't worry; this post will help you with that. Together we're going to add the OpenCV Android library and use some of its basic functions in your app.

Requirements

Before you get started, let's make sure you have all of the following requirements:

Android Studio v1.2 or above
Android 4.4 (API 19) SDK or above
OpenCV for Android library v3.1 or above
An Android device with a camera

Importing the OpenCV Library

All right, let's get started. Once you have downloaded the OpenCV library, extract it and you will find a folder named "sdk" in it. This "sdk" folder should contain folders called "java" and "native". Remember the location of these two folders, since we will get back to them soon enough.

Now create a new project with a blank activity in Android Studio. Make sure to set the minimum required SDK to API 19, which is the lowest version that's compatible with the library.

Next, import the OpenCV library. Open the File > New > Import Module... menu and point it to the "java" folder mentioned earlier, which will automatically copy the Java library to your project folder. Now that you have added the library as a module, you need to link the Android project to the module. Open the File > Project Structure... menu and select app. On the Dependencies tab, press the + button, choose Module Dependency, and select the OpenCV module from the list that pops up.

Next, you need to make sure that the module will be built with the same settings as your app. Open the build.gradle scripts for both the app and the OpenCV module. Copy the SDK version and tools version values from the app gradle script to the OpenCV gradle script. Once that's done, sync the gradle scripts and rebuild the project. Here are the values of my gradle script, but yours may differ based on the SDK version you used:

    compileSdkVersion 23
    buildToolsVersion "23.0.0 rc2"

    defaultConfig {
        minSdkVersion 19
        targetSdkVersion 23
    }

To finish importing OpenCV, you need to add the C++ libraries to the project. Remember the "native" folder mentioned earlier? There should be a folder named "libs" inside it. Copy the "libs" folder to the <project-name>/OpenCVLibrary/src/main folder and rename it to "jniLibs" so that Android Studio will know that the files inside that folder are C++ libraries. Sync the project again, and now OpenCV should be properly imported into your project.

Accessing the Camera

Now that you're done importing the library, it's time for the next step: accessing the device's camera.
The OpenCV library has its own camera UI that you can use to easily access the camera data, so let's use that. To do that, simply replace the layout XML file for your main activity with the one linked in the original article. Then you'll need to ask permission from the user to access the camera. Add the following line to the app manifest:

    <uses-permission android:name="android.permission.CAMERA"/>

And if you're building for Android 6.0 (API 23), you will need to ask for permission inside the app. Add the following line to the onCreate() function of your main activity:

    requestPermissions(new String[] { Manifest.permission.CAMERA }, 1);

There are two things to note about the camera UI from the library. First, by default, it will not show anything unless it's activated in the app by calling the enableView() function. And second, in portrait orientation, the camera will display a rotated view. Fixing this last issue is quite a hassle, so let's just lock the app to landscape orientation.

Using the OpenCV Library

With the preparation out of the way, let's start actually using the library. Here's the code for the app's main activity if you want to see how the final version works. To use the library, initialize it by calling the OpenCVLoader.initAsync() method in the activity's onResume() method. This way, the activity will always check that the OpenCV library has been initialized every time the app is going to use it.

    //Create callback
    protected LoaderCallbackInterface mCallback = new BaseLoaderCallback(this) {
        @Override
        public void onManagerConnected(int status) {
            //If not success, call base method
            if (status != LoaderCallbackInterface.SUCCESS)
                super.onManagerConnected(status);
            else {
                //Enable camera if connected to library
                if (mCamera != null)
                    mCamera.enableView();
            }
        }
    };

    @Override
    protected void onResume() {
        //Super
        super.onResume();

        //Try to init
        OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_2_4_10, this, mCallback);
    }

The initialization process checks whether your phone already has the full OpenCV library. If it doesn't, it will automatically open the Google Play page for the OpenCV Manager app and ask the user to install it. And if OpenCV has been initialized, it simply activates the camera for further use.

If you noticed, the activity implements the CvCameraViewListener2 interface. This interface gives you access to the onCameraFrame() method, a function that allows you to read the image the camera is capturing and to return the image the interface should be showing. Let's try some simple image processing and show the result on the screen.

    @Override
    public Mat onCameraFrame(CameraBridgeViewBase.CvCameraViewFrame inputFrame) {
        //Get edges from the image
        Mat result = new Mat();
        Imgproc.Canny(inputFrame.rgba(), result, 70, 100);

        //Return result
        return result;
    }

Imgproc.Canny() is an OpenCV function that performs Canny edge detection, a process that detects all the edges in a picture. As you can see, it's pretty simple: you put the image from the camera (inputFrame.rgba()) into the function, and it returns another image that shows only the edges.

And that's it! You've implemented a pretty basic feature from the OpenCV library in an Android app. There are still many image processing features that the library has, so check out the exhaustive list of features for more. Good luck!

About the author

Raka Mahesa is a game developer at Chocoarts who is interested in digital technology in general.
Outside of work hours, he likes to work on his own projects, with Corridoom VR being his latest released game. Raka also regularly tweets as @legacy99.

7 Android Predictions for 2019

Guest Contributor
13 Jan 2019
8 min read
Emerging technologies not only change the way users interact with their devices, they also improve the development process. One such technology where new features keep emerging is Google's Android. The Android app development platform comes up with new features every year at a breakneck pace. These are some of the safest predictions that can be made for Android development in the year 2019.

#1 Voice Command and Virtual Assistants

Voice command transcribes the user's speech into an electronic, word-processed document, allowing users to operate the system by talking to it, which also frees up cognitive working space. It has some potential drawbacks: it requires a large amount of memory to store voice data files, and it is difficult to operate in crowded places due to noise interference.

What does it have in store for 2019? Voice search is going to create a new user interface that must be taken into consideration when designing and developing mobile applications. Voice assistants are gaining popularity, and every big player has one: Siri, Google Assistant, Bixby, Alexa, Cortana. This will continue to grow in 2019.

Use Case App: Ping Pong Board. One use case for voice assistants is an application similar to a ping pong board. Inside the application, there are two screens: the first shows available players with the leaderboard and scores, and the second displays two players who are playing the game, along with their game points.

#2 Chatbots

Chatbots are trending, as they support faster customer service at low labor costs while increasing customer satisfaction. However, simple chatbots are often limited in the responses they can give customers, which can frustrate them, whereas complex chatbots cost more, inhibiting their widespread adoption.

What next? Technology experts predict that companies across the board will introduce themselves through chatbots. Customer support will be provided more efficiently, and customer feedback will be responded to more quickly, for better results. Chatbots are a staple of the digital world, as every application or website wants to provide this facility for improved customer support. Chatbots can be seen as small assistants integrated into our applications. You can create your own with the help of Dialogflow, which makes development easy without much coding. Nowadays, Facebook Messenger is used as more than a messaging app, as many chatbots are integrated into it.

Use Case: Allstate chatbot. The largest P&C insurer in America developed its own 'ABle' chatbot to help its agents learn to sell commercial insurance products. The bot guides agents through the commercial selling process, can retrieve documents, and understands which product an agent is working on and where they are in the process.

#3 Virtual and Augmented Reality

Augmented Reality systems are highly interactive in nature and operate simultaneously with the real-time environment, blurring the line between the real and virtual worlds and enhancing perceptions of, and interactions with, the real world. On the downside, AR-compliant devices are expensive to develop for the desired projects, lack privacy, and can suffer from low performance.

What next? The hardware for VR was initially driven by hardcore gamers and gadget freaks, and mobile hardware has caught up in some instances, leaving aside the traditional computing platforms.
With real-world uses for Augmented Reality and sensors in mobile devices like never before, AR and VR are being combined to give applications much better visibility; it seems the virtual reality revolution is finally arriving.

Use Case App: MarXent + AR. AR is helping professionals visualize their final products during the creative process, from interior design to architecture and construction. Using headsets, architects, engineers, and design professionals can step directly into their buildings and spaces to look at how their designs might turn out, and can even make virtual spot changes.

#4 Android App Architecture

After many years, Google has finally introduced guidelines for developing the best Android apps. You are not forced to use the Android Architecture Components, but they are considered a good starting point for building stable applications. The argument about the best pattern for Android - MVC, MVP, MVVM, or anything else - has died down, and we can trust the solutions from Google, which are good enough for the majority of apps.

What next? Developers have always faced confusion implementing multithreading on Android, and tools like AsyncTask and EventBus have tried to solve these problems. We can also choose RxJava, Kotlin Coroutines, or Android LiveData for multithreading management. This brings more stability and less confusion to the developer community. Loads of applications are installed on our mobile devices, yet we hardly use some of them; partly for this reason, Progressive Web Apps are becoming popular in e-commerce.

#5 Hybrid Solutions

Big companies like Facebook are leading in utilizing cross-platform development for most of their products. It is a pragmatic approach: the larger the audience, the bigger the market share for advertising and subscription revenues.

What next? Hybrid mobile applications come with unified development that can save a substantial amount of money and provide fast deployment, with offline support, bridging the gap between the two native approaches and providing extra functionality with very little overhead. Hybrid applications can suffer a loss of performance and make the developer rely on the framework itself to play nicely with the targeted operating system. So, moving beyond traditional hardware and software solutions, developers have approached the market aiming to offer total, cross-platform solutions.

#6 Machine Learning

Google switched from a mobile-first to an AI-first strategy some time ago. This is clearly seen in TensorFlow and ML Kit in the Firebase ecosystem, which is gaining popularity for creating simple, basic models that do not require data science expertise to make your application intelligent. People are becoming more aware of the capabilities of machine learning, along with its implementation in MATLAB or R, for mobile development.

What next? Machine learning is used in a variety of applications in the banking and financial sector, healthcare, retail, publishing, social media, and more. It is also used by Google and Facebook to serve relevant advertisements based on past user searches. The major challenge is implementing the different machine learning techniques and interpreting the results, which is complex but important, not only for image and speech recognition but also for user behavior prediction and analysis. In the future, machine learning will be combined with quantum computing to manipulate and classify large numbers of vectors in high-dimensional spaces.
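To give a feel for how accessible this has become, here is a minimal, hedged sketch of on-device image labeling with ML Kit (class and method names follow the 2019-era Firebase ML Kit Android API; the bitmap is assumed to come from elsewhere in your app):

    import android.graphics.Bitmap;
    import android.util.Log;

    import com.google.firebase.ml.vision.FirebaseVision;
    import com.google.firebase.ml.vision.common.FirebaseVisionImage;
    import com.google.firebase.ml.vision.label.FirebaseVisionImageLabel;
    import com.google.firebase.ml.vision.label.FirebaseVisionImageLabeler;

    public class ImageLabelerDemo {

        // Labels whatever is in the bitmap, e.g. a frame from the camera.
        public static void labelImage(Bitmap bitmap) {
            FirebaseVisionImage image = FirebaseVisionImage.fromBitmap(bitmap);
            FirebaseVisionImageLabeler labeler =
                    FirebaseVision.getInstance().getOnDeviceImageLabeler();

            labeler.processImage(image)
                    .addOnSuccessListener(labels -> {
                        for (FirebaseVisionImageLabel label : labels) {
                            Log.d("MLKit", label.getText() + " " + label.getConfidence());
                        }
                    })
                    .addOnFailureListener(e -> Log.e("MLKit", "Labeling failed", e));
        }
    }

The labeler runs entirely on the device, which matches the point above: you can add a first ML feature without any data science expertise or custom model training.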
We also expect better unsupervised algorithms for building smarter applications, leading to faster and more accurate outcomes.

#7 Rooting Android

Rooting Android means getting root access, or administrative rights, to your device. No matter how much you pay for your device, its internals are still locked away. Rooting offers several advantages, such as removing pre-installed OEM applications and ad-blocking across all apps, which is a great benefit to the user.

What next? Because rooting allows incompatible applications to be installed, it can brick your device, so it is advised to always get your apps from reliable sources. A rooted device does not come with a warranty, and a wrong setting can cause huge problems. The risk with rooted devices is that the system might not be updated properly later on, which can create errors. Still, rooting also provides more display options and internal storage, along with greater battery life and speed, full device backups, and access to root files.

Conclusion

The year 2019 is going to be very interesting for Android app development. We will observe a lot of new technologies emerging that will change the face of mobile development. Developers need to stay up to date with the emerging trends and learn quickly to implement them in designing new products. We can expect a bright future, with more good-quality apps and even more engaging user interactions, and more stable solutions for developing applications, resulting in better products. It is important to observe the new trends closely and become a quick learner, mastering the skills that will matter most in the future.

Author Bio

Rooney Reeves is a content strategist and a technical blogger associated with eTatvaSoft. An old hand writer by day and an avid reader by night, she has vast experience in writing about new products, software design, and test-driven methodology.

Read Next

8 programming languages to learn in 2019
18 people in tech every programmer and software engineer need to follow in 2019
Cloud computing trends in 2019

Why you should analyze user-behavior data before developing a mobile app?

Guest Contributor
16 Jan 2019
6 min read
What is the first thing that comes to your mind when we say "mobile app"? If you are a user, you probably think of something convenient that eases your life. In a business context, however, it is an idea that can be converted into an app model and help boost your profitability. When successful entrepreneurs launch their original idea, they do not just design and develop it for the market; they research, understand the market minutely, and, more importantly, study the users in depth. One thing that leads you to success is a complete understanding of the user. Here, we will try to understand why user behavior analysis is important and how you can best deliver it.

Why analyze user behavior?

Instead of asking this question, let's ask the more important one: who are you designing the app for? The users of the app will be members of the target audience, and technically it is for them that you are planning the app layout and coding the all-new idea. You need to ensure the app is usable for them and that they find it convenient. You need to understand every aspect of user behavior, ranging from how they use the app to what engages them. Analysis of user behavior will help you design the UX accordingly and allow you to deliver effective app solutions. For this, we need to identify the different ways in which you can identify user behavior and what you need to consider in order to deliver a perfect app solution.

4 effective ways to analyze user behavior data

Here are four effective ways to analyze user behavior data so you can design and develop a mobile app accordingly.

The app goal: Whenever users use an app, they do it with a specific goal in mind. For instance, when you use Uber, you are choosing travel convenience and avoiding haggling with the driver over the fare. The Uber app allows its users to book their ride with ease and know the fare beforehand. When you are designing for users, you need to understand the goal they are trying to achieve with the app, and how best you can help them achieve it in the simplest way possible.

The mobile usage: While designing an app, you need to understand how users use their mobile phones. What is most convenient for them? For instance, 79% of people use their left hand instead of their right to cradle their phone or use apps. Have you considered them while designing the app? Most people prefer portrait mode for certain apps; however, when viewing videos, they prefer to hold the phone in landscape mode. If your app does not change the view according to these preferences, you are likely to lose customers. Do users use their thumb or their finger to access the buttons on the screen? How do they navigate through the screens? Do they hold the phone in one hand or cradle it? When you are able to answer these questions, you have nailed the design strategy: you will know just where to place the buttons and how to design the interaction. There are areas of the mobile screen that are effectively inaccessible; if you place buttons or other clickable elements there, you are blocking access to the mobile app.

Acknowledge feedback: What do users like most about the mobile app in general, and what aspects frustrate them? For instance, there are mobile app designs that don't connect well with the user.
An app that takes more than 3 seconds to load can be frustrating, and if images don't load quickly, the app can be discarded immediately. This is especially true for e-commerce apps, as they contain lots of images and people expect an immediate response from them. When users give you their feedback, make sure you incorporate it into the app.

The motivations: Finally, you need to take into account the users' motivations for using the app and completing an action. What makes them want to click on the Buy Now button, or any action button, in your app? Study your users. For some users, safety is the predominant motivator, while for others, the motivating factor is the value for money the app delivers. Along with the motivators, there are barriers too, which you need to consider in order to design the best user-centric app for the business idea.

Having identified different ways to identify user behavior, let's now talk about two simple methods that can be used to analyze user behavior data.

Questionnaire: Prepare a questionnaire including questions like: What do you like best about our app? Which other apps would you use as an alternative to ours? What do you want us to improve? The questions are endless, but make sure they give you insights into your users. Spread this questionnaire among a group of people, and based on their answers you can derive user behavior data and develop a mobile app accordingly.

Mobile app analytics platforms: Another method is mobile app analytics platforms. Prepare a navigation flow - a flowchart of all the app screens - and submit it to a mobile app analytics platform to identify how users move from one screen to another. Through the navigation flow of your app, you can identify how users interact with screens and how they move through your app. This data will help you understand user behavior and make data-driven changes.

Conclusion

Analyzing user behavior must always be a high priority for businesses that want to make a successful app and grow over time. When it comes to analyzing user behavior, top companies and brands like Uber, Airbnb, Pinterest, and Starbucks are using AI (Artificial Intelligence) to provide a personalized experience to their users. Through AI and machine learning, businesses will learn about customer or user behavior on a deeper level and get help in delivering a better application. The possibilities are endless. The point is: are you utilizing already existing data to optimize the overall process?

Author Bio

Yuvrajsinh is a Marketing Manager at Space-O Technologies, a firm having expertise in developing Uber-like apps. He spends most of his time researching mobile app and startup trends. He is a regular contributor to popular publications like Entrepreneur, YourStory, and Upwork. If you have any questions, or need any consultation regarding the mobile app development process, feel free to contact him.

5 UX design tips for building a great e-commerce mobile app
4 key benefits of using Firebase for mobile app development
9 reasons to choose Agile Methodology for Mobile App Development

How are Mobile apps transforming the healthcare industry?

Guest Contributor
15 Jan 2019
5 min read
Mobile app development has taken over and completely rewritten the healthcare industry. According to Healthcare Mobility Solutions reports, the mobile healthcare application market is expected to be worth more than $84 million by the year 2020. These mobile applications are not just limited to use by patients but are also massively used by doctors and nurses. As technology evolves, it simultaneously opens up the possibility of being used in multiple ways. Such has been the journey of healthcare mobile app development, which originated from the latest trends in technology and has made its way to being an industry in itself.

The technological trends that have helped build mobile apps for the healthcare industry are:

Blockchain

You probably know blockchain technology thanks to all the cryptocurrency rage of recent years. A blockchain is basically a peer-to-peer database that keeps a verified record of all transactions, or any other information that one needs to track and have accessible to a large community. The healthcare industry can use this technology to record the medical history of patients and store it electronically, in an encrypted form that cannot be altered or hacked into. Blockchain succeeds where a lot of health applications fail: in the secure retention of patient data.

The Internet of Things

The Internet of Things (IoT) is all about connectivity. It is a way of interconnecting electronic devices, software, applications, and so on, to ensure easy access and management across platforms. The IoT will assist medical professionals in gaining access to valuable patient information so that doctors can monitor the progress of their patients. This makes treatment of the patient easier and more closely monitored, as doctors can access the patient's current profile anywhere and suggest treatment, medicines, and dosages.

Augmented Reality

From the video gaming industry, Augmented Reality has made its way to the medical sector. AR refers to the creation of an interactive experience of a real-world environment through the superimposition of computer-generated perceptual information. AR is increasingly used to develop mobile applications that can be used by doctors and surgeons as a training experience. It simulates a real-world experience of diagnosis and surgery, and by doing so enhances the knowledge, and its practical application, that all doctors must possess. This form of training is not limited in nature and can therefore train a large number of medical practitioners simultaneously.

Big Data Analytics

Big Data has the potential to provide comprehensive statistical information, accessed and processed through sophisticated software. Big Data Analytics becomes extremely useful when it comes to managing a hospital's resources and records in an efficient manner. Aside from this, it is used in the development of mobile applications that store all patient data, again eliminating the need for excessive paperwork. This allows medical professionals to focus more on attending to and treating patients rather than managing databases.

These technological trends have led to the development of a diverse variety of mobile applications used for multiple purposes in the healthcare industry. Listed below are the benefits of the mobile apps deploying these technological trends, for professionals and patients alike.

Telemedicine

Mobile applications can potentially play a crucial role in making medical services available to the masses.
An example is an on-call physician on telemedicine duty. A mobile application will allow the physician to be available for a patient consult without having to operate via a PC. This will make doctors more accessible and bring quality treatment to patients quickly.

Enhanced Patient Engagement

There are mobile applications that place all patient data - from past medical history to performance metrics, patient feedback, and changes in treatment patterns and schedules - at the push of a button, for the medical professional to consider and act on while on the go. Since all data is recorded in real time, it is easy for doctors to change shifts without having to explain the patient's condition to the next doctor in person; the mobile application has all the data the supervisors or nurses need.

Easy Access to Medical Facilities

A number of mobile applications allow patients to search for medical professionals in their area, read reviews and feedback from other patients, and then make an online appointment if they are satisfied with the information they find. Apart from this, patients can also download and store their medical lab reports and order medicines online at affordable prices.

Easy Payment of Bills

As in every other sector, mobile applications in healthcare have made monetary transactions extremely easy. Patients or their family members no longer need to spend hours waiting in line to pay their bills. They can instantly pick a payment plan and pay bills immediately, or add reminders to be notified when a bill is due.

Therefore, it can safely be said that the revolution the healthcare industry is undergoing has worked in favor of all the parties involved: medical professionals, patients, hospital management, and mobile app developers.

Author's Bio

Ritesh Patil is the co-founder of Mobisoft Infotech, which helps startups and enterprises with mobile technology. He's an avid blogger and writes on mobile application development. He has developed innovative mobile applications across various fields such as finance, insurance, health, entertainment, productivity, social causes, and education, and has bagged numerous awards for the same. Social media: Twitter, LinkedIn.

Healthcare Analytics: Logistic Regression to Reduce Patient Readmissions
How IBM Watson is paving the road for Healthcare 3.0
7 Popular Applications of Artificial Intelligence in Healthcare

What is Android Studio and how does it differ from other IDEs?

Natasha Mathur
30 May 2018
5 min read
Android Studio is a powerful and sophisticated development environment, designed with the specific purpose of developing, testing, and packaging Android applications. It can be downloaded, along with the Android SDK, as a single package. It is a collection of tools and components, many of which are installed and updated independently of each other. Android Studio is not the only way to develop Android apps; there are other IDEs, such as Eclipse and NetBeans, and it is even possible to develop a complete app using nothing more than Notepad and the command line. This article is an excerpt from the book 'Mastering Android Studio 3', written by Kyle Mew.

Built for a purpose, Android Studio has attracted a growing number of third-party plugins that provide a large array of valuable functions not available directly via the IDE. These include plugins to speed up build times, debug a project over Wi-Fi, and many more. Despite Android Studio being arguably the superior tool, there are some very good reasons for having stuck with another IDE, such as Eclipse. Many developers build for multiple platforms, which makes Eclipse a good choice of tool. Every developer has deadlines to meet, and getting to grips with unfamiliar software can slow them down considerably at first. But Android Studio is the official IDE for Android, and every Android app developer should be aware of the differences between the IDEs, so that they can weigh the similarities and the differences and see what works for them.

How Android Studio differs

There are many ways in which Android Studio differs from other IDEs and development tools. Some of these differences are quite subtle, such as the way support libraries are installed, and others, for instance the build process and UI design, are profoundly different. Before taking a closer look at the IDE itself, it is a good idea to first understand what some of these important differences are. The major ones are listed here:

UI development: The most significant difference between Studio and other IDEs is its layout editor, which is far superior to any of its rivals, offering text, design, and blueprint views; constraint layout tools for every activity or fragment; easy-to-use theme and style editors; and a drag-and-drop design function. The layout editor also provides many tools unavailable elsewhere, such as a comprehensive preview function for viewing layouts on a multitude of devices, and simple-to-use theme and translation editors.

Project structure: Although the underlying directory structure remains the same, the way Android Studio organizes each project differs considerably from its predecessors. Rather than using workspaces as Eclipse does, Studio employs modules that can be worked on together more easily, without having to switch workspaces. This difference in structure may seem unusual at first, but any Eclipse user will soon see how much time it can save once it becomes familiar.

Code completion and refactoring: The way that Android Studio intelligently completes code as you type makes it a delight to use. It regularly anticipates what you are about to type, and often a whole line of code can be entered with no more than two or three keystrokes. Refactoring, too, is easier and more far-reaching than in alternative IDEs, such as Eclipse and NetBeans. Almost anything can be renamed, from local variables to entire packages.

Emulation: Studio comes equipped with a flexible virtual device editor, allowing developers to create device emulators that model any number of real-world devices. These emulators are highly customizable, both in terms of form factor and hardware configuration, and virtual devices can be downloaded from many manufacturers. Users of other IDEs will be familiar with Android AVDs already, although they will certainly appreciate the preview features found in the Design tab.

Build tools: Android Studio employs the Gradle build system, which performs the same functions as the Apache Ant system that many Java developers will be familiar with. It does, however, offer a lot more flexibility and allows for customized builds, enabling developers to create APKs that can be uploaded to TestFlight, or to produce demo versions of an app with ease. It is also the Gradle system that allows for Studio's modular nature: rather than each library or third-party SDK being compiled as a JAR file, Studio builds each of these using Gradle.
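As a minimal sketch of what such a customized build might look like, here is a module-level build.gradle fragment using the Android Gradle plugin's product flavors feature (the flavor names are illustrative, not taken from the book this article excerpts):

    android {
        flavorDimensions "version"

        productFlavors {
            // A restricted build for demos, installable alongside the full app.
            demo {
                dimension "version"
                applicationIdSuffix ".demo"
                versionNameSuffix "-demo"
            }

            // The regular release configuration.
            full {
                dimension "version"
            }
        }
    }

Gradle then exposes build variants such as demoDebug and fullRelease, each producing its own APK from the same codebase.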
These are the most far-reaching differences between Android Studio and other IDEs, but there are many other features which are unique. Studio provides the powerful JUnit test facility and allows for cloud platform support and even Wi-Fi debugging. It is also considerably faster than Eclipse - which, to be fair, has to cater to a wider range of development needs rather than just one - and it can run on less powerful machines.

Android Studio also provides an amazing time-saving device in the form of Instant Run. This feature cleverly builds only the part of a project that has been edited, meaning that developers can test small changes to code without having to wait for a complete build to be performed for each test. This feature can bring waiting time down from minutes to almost zero.

To know more about Android Studio and how to build faster, smoother, and error-free Android applications, be sure to check out the book 'Mastering Android Studio 3'.

The art of Android Development using Android Studio
Getting started with Android Things
Unit Testing apps with Android Studio

AI on mobile: How AI is taking over the mobile devices marketspace

Sugandha Lahoti
19 Apr 2018
4 min read
If you look at the current trends in the mobile market space, a lot of mobile phone manufacturers portray artificial intelligence as the chief feature of their phones. The total number of developers who build for mobile is expected to hit the 14 million mark by 2020, according to an Evans Data survey. With this level of competition, developers have resorted to artificial intelligence to distinguish their apps, or to make their mobile devices stand out. AI on mobile is the next big thing.

AI on mobile can be incorporated in multiple forms. It may be hardware-based, such as the AI chip in Apple's iPhone X, or software-based, such as Google's TensorFlow for Mobile. Let's look in detail at how smartphone manufacturers and mobile developers are leveraging the power of AI on mobile, in both hardware and software.

Embedded chips and in-device AI

Mobile handsets nowadays are equipped with specialized AI chips. These chips are embedded alongside CPUs to handle heavy-lifting tasks in smartphones, bringing AI to mobile. These built-in AI engines can not only respond to your commands but also lead the way and make decisions about what they believe is best for you. So when you take a picture, the smartphone software, leveraging the AI hardware, correctly identifies the person, object, or location being photographed, and also compensates for low-resolution images by predicting the pixels that are missing. When it comes to battery life, AI allocates power to relevant functions, eliminating unnecessary use of power. In-device AI also reduces dependency on cloud-based AI for data processing, saving energy, time, and associated costs.

The past few months have seen a large amount of AI-based silicon popping up everywhere. The trend began with Apple's neural engine, part of the new A11 processor Apple developed to power the iPhone X. This neural engine powers the machine learning algorithms that recognize faces and transfer facial expressions onto animated emoji. Competing head-on with Apple, Samsung revealed the Exynos 9 Series 9810, a chip featuring an upgraded processor with neural network capacity for AI-powered apps. Huawei joined the party with the Kirin 970 processor and its dedicated Neural Network Processing Unit (NPU), which was able to process 2,000 images per minute in a benchmark image recognition test. Google announced the open beta of its 2nd-generation Tensor Processing Unit. ARM announced its own AI hardware, Project Trillium, a mobile machine learning processor. Amazon is also working on a dedicated AI chip for its Echo smart speaker. Google's Pixel 2 features a Visual Core co-processor for AI; it offers an AI song recognizer, superior imaging capabilities, and even helps the Google Assistant understand user commands and questions better.

The arrival of AI APIs for mobile

Apart from in-device hardware, smartphones have also witnessed the arrival of artificially intelligent APIs. These APIs add more power to a smartphone's capabilities by offering personalization, efficient searching, accurate video and image recognition, and advanced data mining. Let's look at a few powerful machine learning APIs and libraries targeted solely at mobile devices.

It all began with Facebook announcing Caffe2Go in 2016. This version of Caffe was designed for running deep learning models on mobile devices. It condensed the size of image and video processing AI models by 100x, to run neural networks with high efficiency on both iOS and Android. Caffe2Go became the core of Style Transfer, Facebook's real-time photo stylization tool.

Then came Google's TensorFlow Lite, announced at the 2017 Google I/O conference. TensorFlow Lite is a feather-light version of TensorFlow for mobile and embedded devices, designed to be lightweight, speedy, and cross-platform (the runtime is tailor-made to run on various platforms, starting with Android and iOS). TensorFlow Lite also supports the Android Neural Networks API, which can run computationally intensive machine learning operations on mobile devices.
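To give a feel for how lightweight the TensorFlow Lite API is, here is a minimal, hedged sketch of loading a bundled model and running a single inference (the model file name and the input and output shapes are placeholders you would replace with your model's actual ones):

    import android.content.res.AssetFileDescriptor;
    import android.content.res.AssetManager;

    import org.tensorflow.lite.Interpreter;

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;

    public class TfLiteDemo {

        // Memory-maps a .tflite model that ships in the app's assets folder.
        static MappedByteBuffer loadModel(AssetManager assets, String path) throws IOException {
            AssetFileDescriptor fd = assets.openFd(path);
            FileChannel channel = new FileInputStream(fd.getFileDescriptor()).getChannel();
            return channel.map(FileChannel.MapMode.READ_ONLY,
                    fd.getStartOffset(), fd.getDeclaredLength());
        }

        // Runs one inference; the shapes stand in for your model's real ones.
        static float[][] classify(AssetManager assets) throws IOException {
            try (Interpreter tflite = new Interpreter(loadModel(assets, "model.tflite"))) {
                float[][] input = new float[1][224 * 224 * 3]; // e.g. a flattened RGB image
                float[][] output = new float[1][1001];         // e.g. per-class scores
                tflite.run(input, output);
                return output;
            }
        }
    }

Everything here runs on-device, with no network round trip, which is exactly the trade-off TensorFlow Lite is designed around.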
Following TensorFlow Lite came Apple's CoreML, a programming framework designed to make it easier to run machine learning models on iOS. Core ML supports Vision for image analysis, Foundation for natural language processing, and GameplayKit for evaluating learned decision trees. CoreML makes it easier for apps to process data locally using machine learning, without sending user information to the cloud. It also optimizes models for Apple mobile devices, reducing RAM and power consumption.

Artificial intelligence is finding its way into every aspect of the mobile device, whether through hardware with dedicated AI chips or through APIs for running AI-enabled services on handheld devices. And this is just the beginning. In the near future, AI on mobile will play a decisive role in driving smartphone innovation, possibly becoming the only distinguishing factor consumers think of when buying a mobile device.

Why are Android developers switching from Java to Kotlin?

Hari Vignesh
23 Jan 2018
4 min read
When we talk about Android app development, the first programming language that comes to mind is Java. However, Java isn't the only language you can use for Android programming – you can use any language that compiles to the JVM. Recently, a new language has caught the attention of the Android community: Kotlin. Kotlin has actually been around since 2011, but it was only in May 2017 that Google announced that the language would become an officially supported language for the Android operating system. This is one of the many reasons why Kotlin's adoption has been so dramatic. The Realm report, published at the end of 2017, suggests that Kotlin is likely to overtake Java in terms of usage in the next couple of years.

When you want to work on custom Android applications, the right technology will help you achieve your goals. Java and Kotlin are the languages commonly used for writing Android apps. Great importance is given to the choice of programming language because it might cut down on time and money. Want to learn Kotlin? Find Kotlin eBooks and videos in our library.

There are many reasons why mobile developers are choosing to switch from Java to Kotlin. Below are some of the most significant.

Kotlin is easy for anyone who knows Java to learn

Similarities in typing and syntax make Kotlin very easy to master for anyone who's already working with Java. If you're worried about a steep learning curve, you'll be pleasantly surprised by how easy it is for developers to dive into coding in Kotlin. Kotlin is also evolving with a lot of support from the developer community. Many of the developers who contribute to Kotlin's evolution are freelancers who find work on different platforms and experience a wide range of smaller projects with varied needs. Other contributors include larger companies and industry giants like Google.

Kotlin needs 20 percent less code than Java

Java is a bit outdated, which means every new release has to support features included in the previous version; this eventually increases the amount of code to write and works against a clean layer-to-layer architecture. If you compare a class written in Java with one written in Kotlin, you will find the Kotlin version much more compact.

Kotlin has Android Studio support

Because Kotlin is built by JetBrains, it's unsurprising that Android Studio (also a JetBrains product) has excellent support for Kotlin. Android Studio makes it incredibly easy to configure Kotlin in your project; it's as straightforward as opening a few menus. Your IDE will have no problem understanding, compiling, and running Kotlin code once you have set up Kotlin for Android Studio. After configuring Kotlin for Android Studio, you can convert an entire Java source file into a Kotlin file. The fact that Kotlin is Java-compatible makes it a uniquely useful language that can leverage JVMs while at the same time being used to update and improve enterprise-level solutions with enormous codebases written in Java.

Kotlin is great for procedural programming

Every programming paradigm has its own set of strengths and weaknesses, and there will always be certain scenarios where one is more effective than another. One thing that's so appealing about Kotlin is that it combines the strengths of two different approaches: procedural and functional. True, the largely procedural approach can sometimes be the most challenging aspect of the language when you first start to get to grips with it. However, the level of control such an approach gives you is well worth the investment of your time.
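To make the "less code" claim from earlier in this article concrete, here is a rough, illustrative comparison (the User class is invented for this sketch, not taken from the article): everything in the Java class below is generated automatically by the single Kotlin declaration shown in the leading comment.

    // In Kotlin, the whole class below is one line:
    //     data class User(val name: String, val email: String)
    import java.util.Objects;

    public final class User {
        private final String name;
        private final String email;

        public User(String name, String email) {
            this.name = name;
            this.email = email;
        }

        public String getName() { return name; }
        public String getEmail() { return email; }

        @Override
        public boolean equals(Object o) {
            if (this == o) return true;
            if (!(o instanceof User)) return false;
            User other = (User) o;
            return Objects.equals(name, other.name) && Objects.equals(email, other.email);
        }

        @Override
        public int hashCode() { return Objects.hash(name, email); }

        @Override
        public String toString() { return "User(name=" + name + ", email=" + email + ")"; }
    }

Multiply that boilerplate across the dozens of model classes in a typical app and the savings compound quickly.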
Kotlin makes development more efficient and your life easier

This follows on nicely from the point above. While certain aspects of Kotlin require patience and concentration to master, in the long run, with less code, errors and bugs will be greatly reduced. That saves you time, making coding much more enjoyable rather than an administrative nightmare of spaghetti code. There are plenty of features in Kotlin that make it a practical solution to today's programming challenges.

Where JetBrains takes the language next remains to be seen. We could, perhaps, see Kotlin make a move towards iOS development, and if it compiled to JavaScript we might also begin to see it used more and more within web development. Of course, this will largely come down to JetBrains' goals and just how much they want Kotlin to dominate the developer landscape.

Hari Vignesh Jayapalan is a Google Certified Android app developer, IDF Certified UI & UX Professional, street magician, fitness freak, technology enthusiast, and wannabe entrepreneur. He can be found on Twitter @HariofSpades.

Google Fuchsia: What's all the fuss about?

Amarabha Banerjee
02 Jul 2018
4 min read
It was back in 2016 when we first heard about the Google Fuchsia platform, which was supposed to be an alternative to the Android operating system. Google revealed a work-in-progress version in 2016, and since then a lot of dust had gathered on this news, until the latest developments resurfaced in January 2018. The questions on everyone's mind are: do you really need to be concerned about Fuchsia OS, and does it have what it takes to even challenge the market positioning of Android? Before we come to these questions, let's look at what Fuchsia has to offer.

The Fuchsia UI - inspired by Material Design

Fuchsia brings a complete Material Design approach to UI design. The first look shared by Google seemed a lot different from the Android UI.

(Images: the basic Android UI, source: The Droid Guy; Google Fuchsia on a smartphone device, source: TechRadar.)

There is more depth; the text, images, and wallpapers all look sleeker, and the UI feels like a peek through a window rather than underlays to text and icons. Fuchsia currently offers two layouts: a mobile-centric design codenamed Armadillo, and a more traditional desktop experience codenamed Capybara. While the mobile-centric version is the main focus, the desktop version is far from ready. Google is pushing Material Design heavily with Fuchsia; how far they will succeed depends on their roadmap and future investment plans.

The concept of one OS across all devices

It has been a long-standing dream of Google to make all of its different devices work under one OS platform, and Google seems to be betting on Fuchsia to be that OS on desktops, tablets, and mobiles. The Google Ledger facility gives you a cloud account to seamlessly access and manage different Google services, and the seamless transition of data from one device to another is sure to help users move between devices effortlessly.

Using the custom kernel feature

What makes Android version updates a pain to implement is that different devices run different kernel versions of Linux, the spine of Android. As such, update rollouts are never in unison. This can create security flaws and be a real worry for Android users. This is where Fuchsia trumps Android: Fuchsia has its own kernel, Zircon, which is designed to be consistently upgradeable. Apps are isolated from the kernel, which adds an extra security layer and keeps apps from being rendered useless after an OS update.

Language interoperability

The most important aspect from the developer's perspective is multi-language support. Fuchsia is written in Dart, using Google's latest cross-platform framework, Flutter. It also provides support for development in Go and Rust, and is extending support to Swift developers. This, along with the FIDL protocol, helps developers easily write different parts of a system in different languages - such as a Go-based backend with a Dart-based front end. This gives developers immense power and flexibility.

Although these features seem useful and interesting, Fuchsia will need a steady development pipeline and regular updates to reach a stable version that devices can use as their default OS. Keeping current development trends in mind, we can safely conclude that until the next stable release, you can continue to browse on your Android phone and not worry about it being replaced by Fuchsia or any other competitor.
Google updates biometric authentication for Android P, introduces BiometricPrompt API
Google's Android Things, developer preview 8: First look
Google Flutter moves out of beta with release preview 1

Why does Oculus CTO John Carmack prefer 2D VR interfaces over 3D Virtual Reality interfaces?

Sugandha Lahoti
23 May 2019
6 min read
Creating immersive 3D experiences in a virtual reality setup is the new norm. Tech companies around the world are attempting to perfect these 3D experiences, to make them as natural, immersive, and realistic as possible. However, a certain portion of virtual reality creators still believe that creating a new interaction paradigm in 3D is actually worse than 2D. One of them is John Carmack, CTO of Oculus VR, maker of the popular virtual reality headgear. He has penned a Facebook post highlighting why he thinks 3D interfaces are usually worse than 2D interfaces. Carmack details a number of points to justify his assertion and says that the majority of browsing, configuring, and selecting interactions benefit from being designed in 2D.

He wrote an internal post in 2017 clarifying his views. Recently, while reviewing a VR development job description before an interview, he saw that one of the responsibilities for the open Product Management Leader position was: "Create a new interaction paradigm that is 3D instead of 2D based", which prompted him to publish the post.

Splitting information across multiple depths is harmful

Carmack says splitting information across multiple depths makes our eyes re-verge and re-focus. He explains this point with an analogy: "If you have a convenient poster across the room in your visual field above your monitor – switch back and forth between reading your monitor and the poster, then contrast with just switching back and forth with the icon bar at the bottom of your monitor."

Static HMD optics should have their focus point at the UI distance. If we want to be able to scan information as quickly and comfortably as possible, says Carmack, it should all be the same distance from the viewer, and it should not be too close.

As Carmack observes, you don't see in 3D. You see two 2D planes that your brain extracts a certain amount of depth information from. A Hacker News user points out: "As a UI goes, you can't actually freely use that third dimension, because as soon as one element obscures another, either the front element is too opaque to see through, in which case the second might as well not be there, or the opacity is not 100% in which case it just gets confusing fast. So you're not removing a dimension, you're acknowledging it doesn't exist. To truly 'see in 3D' would require a fourth-dimension perspective. A 4D person could use a 3D display arbitrarily, because they can freely see the entire 3D space, including seeing things inside opaque spheres, etc, just like we can look at a 2D display and see the inside of circles and boxes freely."

However, another user critiqued Carmack's statement that splitting information across multiple depths is harmful: "Frequently jumping between dissimilar depths is harmful. Less frequent, sliding, and similar depths, can be wonderful, allowing the much denser and easily accessible presentation of information." A general takeaway is that "most of the current commentary about 'VR' is coming from a community focused on a particular niche, current VR gaming. One with particular and severe constraints and priorities that don't characterize the entirety of a much larger design space."

Visualize the 3D environment as a pair of 2D projections

Carmack says that unless we move significantly relative to the environment, what we see stays essentially the same pair of 2D projections.
He further adds that even when designing a truly 3D UI, developers would have to consider this to keep the 3D elements from overlapping each other when projected onto the view. It can also be difficult for 2D UX and product designers to transfer their thinking over to designing immersive products.

https://twitter.com/SuzanneBorders/status/1130231236243337216

However, building in 3D is important for things which are naturally intuitive in 3D. This, as Carmack mentions, is "true 3D" content, for which you get a 3D interface whether you like it or not. A user on Hacker News points out: "Sometimes things which we struggle to decode in 2D are just intuitive in 3D, like knots or the run of wires or pipes."

Use 3D elements for efficient UI design

Carmack says that 3D may have a small place in efficient UI design as a "treatment" for UI elements. He gives examples such as using slightly protruding 3D buttons sticking out of the UI surface, in places where we would otherwise use color changes or faux-3D effects like bevels or drop shadows. He says, "the visual scanning and interaction is still fundamentally 2D, but it is another channel of information that your eye will naturally pick up on."

This doesn't mean that VR interfaces should just be "floating screens". The core advantage of VR from a UI standpoint is the ability to use the entire field of view, and to allow it to be extended by "glancing" to the sides. Content selection, Carmack says, should go off the sides of the screens and have a size and count that leaves half a tile visible at each edge when looking straight ahead. Actually interacting with UI elements at angles well away from the center is not good for the user: if they haven't rotated their entire body, it is a strain on their neck to focus there for long, so the idea is to glance, then scroll. He also advises putting less frequently used UI elements off to the sides or back.

A Twitter user agreed with Carmack's floating-screens comment:
https://twitter.com/SuzanneBorders/status/1130233108073144320

Most users agreed with Carmack's assertion, sharing their own experiences. A comment on Reddit reads: "He makes a lot of good points. There are plenty examples of 'real life' instances where the existence and perception of depth isn't needed to make useful choices or to interact with something, and that in fact, as he points out, it's actually a nuisance to have to focus on multiple planes, back and forth, to get something done."

https://twitter.com/feiss/status/1130524764261552128
https://twitter.com/SculptrVR/status/1130542662681939968
https://twitter.com/jeffchangart/status/1130568914247856128

However, some users point out that this may also be because the tools for doing full 3D designs are nowhere near as mature as the tools for doing 2D designs.

https://twitter.com/haltor/status/1130600718287683584

A Twitter user aptly observes: "3D is not inherently superior to 2D."

https://twitter.com/Clarice07825084/status/1130726318763462656

Read the full text of John's article on Facebook. More insights on this Twitter thread.

Google announces Glass Enterprise Edition 2: an enterprise-based augmented reality headset
Oculus Rift S: A new VR with inside-out tracking, improved resolution and more!
What's new in VR Haptics?