
Tech News - Mobile

204 Articles

Firefox Reality 1.0, a browser for mixed reality, is now available on Viveport, Oculus, and Daydream

Bhagyashree R
19 Sep 2018
2 min read
After announcing its plans to introduce a web browser for virtual and augmented reality headsets earlier this year, Mozilla launched Firefox Reality 1.0 yesterday. The browser is available in the Viveport, Oculus, and Daydream app stores.

What features will you find in Firefox Reality?

The browser is built from the ground up to work on mixed reality headsets, with the following features:

Voice search: Typing a search query while wearing a VR headset is not a pleasant experience. With Firefox Reality, you can search the web using your voice.

Smooth and fast performance: Firefox Reality uses the Quantum engine, which Mozilla calls "the next-generation web engine for Firefox users". With the help of this engine, it provides a smooth, fast, and secure user experience.

Switch between 2D and 3D mode: Firefox Reality supports both 2D and 3D modes for browsing the web, and you can seamlessly switch between them according to your preference.

The home screen becomes your entertainment hub: You can find games, videos, stories, and other entertainment content directly on the home screen. The mixed reality team is working with more creators around the world to bring more engaging content to the browser.

What's in the future?

In the announcement, Mozilla mentioned that it will soon be launching Firefox Reality 1.1. We can expect more features in the coming release, such as support for bookmarks, 360 videos, accounts, and more. Mozilla has been actively contributing to the mixed reality world by introducing WebVR, WebAR, and A-Frame. With the release of Firefox Reality, it has taken the next step toward making it the best browser for mixed reality: "We are in this for the long haul. This is version 1.0 of Firefox Reality and version 1.1 is right around the corner. We have an always-growing list of ideas and features that we are working to add to make this the best browser for mixed reality.
We will also be listening and react quickly when we need to provide bug fixes and other minor updates."

To read more about Firefox Reality, check out Mozilla's official announcement.

Also read:
  • Mozilla releases Firefox 62.0 with better scrolling on Android, a dark theme on macOS, and more
  • Firefox Nightly browser: Debugging your app is now fun with Mozilla's new 'time travel' feature
  • Mozilla is building a bridge between Rust and JavaScript


Google open sources Seurat to bring high precision graphics to Mobile VR

Sugandha Lahoti
08 May 2018
2 min read
Google has open sourced Seurat, its VR scene-simplification tool. Google Seurat was first announced at the 2017 I/O conference to help developers bring high-precision graphics to standalone virtual reality headsets. Now that it is open sourced to the developer community, developers can bring visually stunning scenes to their own VR applications while having the flexibility to customize the tool for their own workflows. Seurat can process complex 3D scenes that couldn't be run in real time even on the highest-performance desktop hardware into a representation that renders efficiently on mobile hardware.

How Google Seurat works

Polygons are generally used to compose 3D images in computer graphics, and the polygon count refers to the number of polygons rendered per frame. Google Seurat reduces the overall polygon count displayed at any given time, and therefore lowers the required processing power and resources. It takes advantage of the limited viewing region available in mobile VR to optimize the geometry and textures in a scene. This means that Seurat takes all of the possible viewpoints a VR user may see and removes the parts of the 3D environment they would never be able to see. By using this limited range of movement to its advantage, Seurat removes object permanence from the equation: if users can't see something in virtual reality, chances are it doesn't actually exist.

On a more technical level, Google Seurat takes RGBD images (color and depth) as input and generates a textured mesh that simplifies the scene. It targets a configurable number of triangles, texture size, and fill rate to achieve this simplification, delivering immersive VR experiences on standalone headsets. This scene-simplification technology was used to bring a 'Rogue One: A Star Wars Story' scene to a standalone VR experience.
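The viewpoint-culling idea can be illustrated with a deliberately simplified sketch. This is not Seurat's actual algorithm: the distance-only visibility test and all names here are illustrative assumptions, standing in for the real geometry and occlusion analysis.

```python
import math

def cull_invisible(objects, viewpoints, max_distance=50.0):
    """Keep only objects visible from at least one viewpoint.

    Toy stand-in for Seurat's idea: anything the user can never
    see from inside the limited viewing region is removed before
    rendering. Here "visible" is approximated by distance alone.
    """
    visible = []
    for name, position in objects:
        for viewpoint in viewpoints:
            if math.dist(position, viewpoint) <= max_distance:
                visible.append((name, position))
                break
    return visible

# The "headbox": the limited range of head positions in mobile VR.
headbox = [(0, 0, 0), (1, 0, 0), (0, 1, 0)]
scene = [
    ("tree", (10, 0, 0)),       # close enough to be seen
    ("mountain", (500, 0, 0)),  # never visible at this detail level
]
print(cull_invisible(scene, headbox))  # only "tree" survives
```

The real tool works on captured RGBD images rather than object lists, but the principle is the same: exploit the bounded viewing region to discard work the user can never perceive.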
Developers can start working with Seurat right away via the GitHub page, which contains the documentation and source code required to implement it in their projects. Alongside Seurat, Google also released the Mirage Solo, the first standalone headset on the Daydream VR platform.

Also read:
  • Top 7 modern Virtual Reality hardware systems
  • Oculus Go, the first stand-alone VR headset arrives!
  • Leap Motion open sources its $100 augmented reality headset, North Star


Join Hacktoberfest at the Xamarin Community Toolkit

Matthew Emerick
02 Oct 2020
3 min read
You may have heard about the Xamarin Community Toolkit by now. This toolkit will be an important part of the Xamarin.Forms ecosystem during the evolution to .NET 6. Now is the time to contribute, since it is "Hacktoberfest" after all!

What is the Xamarin Community Toolkit?

Since Xamarin.Forms 5 will be the last major version of Forms before .NET 6, we wanted to have an intermediate library that can still add value for Forms in the meanwhile. However, why stop there? There are also a lot of converters, behaviors, effects, etc. that everyone is continually rewriting. To help avoid this, we consolidated all of those into the Xamarin Community Toolkit. This toolkit already has a lot of traction and support from our wonderful community, but there is always room for more! Lucky for us, that time of year which makes contributing extra special is upon us again.

Hacktoberfest 2020

For Hacktoberfest we welcome you all to join us and plant that tree (a new reward from Hacktoberfest!) or earn that t-shirt while giving some of your valuable time to our library. On top of that, we will offer some swag whenever you decide to contribute. When you do, we will reach out to you near the end of October, if your PRs are eligible, to make sure we get your details. That means: no need to do anything special, just crush that code!

How to get involved

  • Head over to the Xamarin Community Toolkit repository.
  • Find an issue you want to work on and comment that you will be taking responsibility for it. If your issue is not on there yet, please feel free to add it.
  • Please await confirmation of your issue, which typically happens within 24 hours.
  • Socialize your new issue on Twitter with the hashtag #XamarinCommunityToolkit

A couple of things to note:

  • We appreciate any and all contributions. However, fixing typos, "README"s, or similar documents does NOT count towards a rewardable contribution.
  • All pull requests should be opened between October 2nd and November 1st, 2020.
Have questions? Just ask! Feel free to contact us through the Discord server. You can also reach out directly on Twitter: @jfversluis. Additionally, you can open an issue.

Quality pull requests: anything that substantially improves the quality of the product; it should be more than fixing typos.

Approved items of work: any open "bug" issue that has been verified, or an enhancement spec with some indication it is approved. If you have any questions, please contact us. Since the Toolkit was launched recently, we apologize in advance if some of the issues are mislabeled. If you are unsure about anything, just comment and a member of our team will reach out with guidance.

Thank you so much for your interest; we look forward to all of your great contributions this year!

The post Join Hacktoberfest at the Xamarin Community Toolkit appeared first on Xamarin Blog.


Android P new features: artificial intelligence, digital wellbeing, and simplicity

Kunal Chaudhari
14 May 2018
9 min read
Google announced the beta version of Android P at the I/O 2018 conference last week. This is one of the biggest updates to the mobile operating system since the release of Android 5.0 Lollipop, with a myriad of features: design changes, new animations, a better notification system, and plenty of helpful shortcuts that improve the overall user experience. A decade has gone by since Google showcased the first version of Android in 2008, so it was obvious that this 10th version of the OS called for an update that would grab the attention of users and developers alike. The previous version, Android Oreo, failed to delight users by going beyond their expectations; it holds the least market share compared with its three predecessors. So the stakes are higher than usual this time around for the world's favorite mobile OS.

In his opening keynote, Sundar Pichai, CEO of Google, came out all guns blazing with the focus, as usual, on new developments in AI, a somewhat controversial demo from Google's voice assistant, and Google's very own AI-specific processing units (TPUs). But amid all the cool AI-related news, he gave the world a peek into the new features of the much-awaited Android P. He spoke of how Google has introduced some key capabilities in Android to help people find the right balance between digital and real life. After further keynotes and sessions, it became clear that these new Android features fall under three broad themes: intelligence, digital wellbeing, and simplicity.

Machine intelligence on mobile

Machine learning has been a key area of development for Google over the last few years. With each Android release, more and more features have started using these machine learning capabilities, and Android P is a step in this direction, bringing AI to the core of the operating system and making smartphones smarter.
Here's a quick rundown of the enhancements in this category:

Adaptive Battery: Battery life is a top priority in pretty much every user survey. With Android P, Google has partnered with its AI subsidiary DeepMind to provide a more consistent battery experience. It uses a 'deep convolutional neural network', in simple words, on-device machine learning, to figure out which apps the user is likely to use in the next few hours and which apps are not going to be used at all throughout the day. Android P takes this usage pattern into consideration and spends battery power on the apps you are actually going to use, saving much of the power that was previously spent updating apps in the background.

Image source: Google Blogs

Adaptive Brightness: Another AI-powered feature learns how users set their brightness according to the surrounding ambient lighting. Based on these preferences, Android P automatically sets the brightness in a power-efficient way. Although most smartphones already have auto-brightness built in, the difference is that they do not take user preference and environmental conditions into account. Google claims that more than 50% of users testing Android P have stopped adjusting the brightness manually.

App Actions: Last year in Android Oreo, Google launched a feature called 'predicted apps', which predicts the next app the user is most likely to launch. If this wasn't spooky enough, Google released App Actions this year, which predicts the next action or task the user is going to perform and pins it on top of the Google launcher.

Image source: Google Blogs

Slices: This is one interesting feature where Google tries to bring a slice of the app UI to users while they are searching for the app on the phone.
Suppose you were searching for the ride-sharing app Lyft on Google: it would provide a slice of Lyft's UI in the search dropdown with your preferred options. In this case, it might show your usual rides home or to work, which you can select right then and there from the Google search menu. This feature depends entirely on developers deciding to provide a snapshot of their app's UI on Google, as it risks users not visiting the actual app.

While all these AI features sound cool and claim to provide a rich user experience, they also raise the 'big question' about user data. From the looks of it, these features leverage a lot of user data and app usage patterns, which many users will find quite alarming; take the case of the recent breach of user data on Facebook. Google claims that these features are the result of on-device machine learning, where the data is kept private and restricted to the user's phone.

Image source: Google Blogs

Digital wellbeing takes center stage

The next set of features and tools is what Google is calling 'Digital Wellbeing'. The goal here is to enable users to understand their habits, control the demands technology places on their attention, and focus on what actually matters. Digital wellbeing was started by Tristan Harris, a former product manager at Google and co-founder of the Center for Humane Technology. While working on the Inbox app, Tristan found himself becoming increasingly disillusioned by the overwhelming demands of the tech industry and wrote a memo on digital wellbeing that went viral in the company. Sameer Samat, vice president of Product Management at Google, gave an interesting talk at I/O this year which extended Tristan's philosophy, discussing the digital wellbeing of users and how Android P claims to help users achieve it with its brand-new set of tools.
Image source: Google Blogs

Dashboard: Just as a Fitbit tracker gauges your activity to motivate you, Google's Android P update includes a dashboard that monitors how long you've been using your phone and specific apps. It's supposed to help you understand what you're spending too much time on, so that you can adjust your behavior.

App Timer: While Dashboard gives a summary of the time spent on the phone, it also allows users to tap into the apps they are using and set a daily time limit. Once an app crosses its limit, its icon fades to gray on the home screen and it won't launch, signaling that the user has hit the limit.

Do Not Disturb: Do Not Disturb is already available on Android devices to silence notifications from texts or emails; it comes in handy when you are in a meeting, away, or not paying attention to your phone. The new Do Not Disturb in Android P goes one step further and removes all visual indicators and notifications even if you have the device in your hand, letting you do better things with your phone, like reading. Google is also adding a feature where you can turn the phone on its face to activate Do Not Disturb automatically. No more dinner interruptions.

Wind Down: People tend to spend a lot of time on their phones in bed just before sleeping. Previously, smartphones offered night modes at bedtime, but Google is going one step further with the 'Wind Down' feature: as your bedtime approaches, it turns your screen to grayscale, making the apps less tempting. Google hopes this will let users "remember to get to sleep at the time [they] want".

Overall, these features sound like a real step forward by Google in making phones less addictive, but there is no proven research behind them.
Much of what we know about these features is based not on peer-reviewed research but on anecdotal data, and if users don't enable any of the Digital Wellbeing features, the new version of Android isn't going to do anything better.

UI simplicity once again in vogue

One of the key takeaways from previous Android releases has been simplicity in the UI. Google has been trying to make the UI more accessible and approachable to current as well as new users. Android P is not only banking on the suggestions and patterns from its machine learning capabilities but also making the user experience simpler.

Gesture-based navigation: Navigation gestures aren't new; mobile operating systems such as webOS, MeeGo, and BlackBerry 10 all supported them previously. But the iPhone X popularized gestures with the removal of the home button, which made them the only way to navigate the device. This change has generally been appreciated by users as simple and easy to learn. Google has introduced gestures in Android P to substitute for the buttons: swipe up to open the recent-apps menu, called Overview, while double-tapping it opens the app drawer; swipe down to return to the home screen, and swipe left and right to switch between recently opened apps.

Image source: Google Blogs

Other features in this segment include manual rotation, smart text selection, and quick settings, among others; you can read about them on the official Android web page. Beyond intelligence, simplicity, and digital wellbeing, there are hundreds of additional improvements coming in Android P, including security and privacy improvements such as DNS over TLS, encrypted backups, Protected Confirmation, and more.
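The Dashboard and App Timer mechanics described above can be sketched in a few lines. This is a toy model with hypothetical class and method names; the real implementation lives inside the Android OS, not in app code.

```python
class AppTimer:
    """Toy model of the Dashboard/App Timer idea: track daily
    per-app usage and flag apps that exceed a user-set limit."""

    def __init__(self):
        self.usage = {}   # app name -> seconds used today
        self.limits = {}  # app name -> daily limit in seconds

    def set_limit(self, app, seconds):
        self.limits[app] = seconds

    def record_usage(self, app, seconds):
        self.usage[app] = self.usage.get(app, 0) + seconds

    def is_blocked(self, app):
        # An app with no limit set is never blocked.
        limit = self.limits.get(app)
        return limit is not None and self.usage.get(app, 0) >= limit

timer = AppTimer()
timer.set_limit("video_app", 30 * 60)      # 30-minute daily limit
timer.record_usage("video_app", 25 * 60)
print(timer.is_blocked("video_app"))       # False: still under the limit
timer.record_usage("video_app", 10 * 60)
print(timer.is_blocked("video_app"))       # True: icon would gray out
```

In the real feature, crossing the limit grays out the app's icon and prevents it from launching until the next day.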
The initial reaction to all these features was decidedly mixed; while some praised the evolution of Google's operating system, others slammed it for adopting features that look similar to Apple's iOS. Overall the features look great, but we would like to see some rigorous investigation showing that people actually feel empowered while using the latest version of Android. Google still hasn't told us what dessert-themed name the Android update will take, saving the naming announcement for later in the summer, closer to the actual release date; Pancake, Peanut Butter, Pumpkin Pie, and Popsicle are some of our top predictions. The Android P beta is available now on the Google Pixel and Essential Phone, as well as handsets from OnePlus, Xiaomi, Sony, and Oppo.

Also read:
  • Top 5 Google I/O 2018 conference Day 1 highlights
  • Google News' AI revolution strikes balance between personalization and the bigger picture
  • Google's Android Things, developer preview 8: First look


Amazon open sources Amazon Sumerian, its popular AR/VR app toolkit

Sugandha Lahoti
17 May 2018
2 min read
Last year at re:Invent 2017, Amazon unveiled Amazon Sumerian, a toolkit for easily creating AR, VR, and 3D apps. Now Amazon has open-sourced it to allow all developers to create compelling virtual environments and scenes for their AR, VR, and 3D apps without having to acquire or master specialized tools. The open sourcing of Amazon Sumerian comes as part of Amazon's strategy to expand its reach and revenues by offering its cloud services to the largest possible number of developers, startups, and organizations. As Kyle Roche, the GM of Amazon Sumerian, puts it: "We are targeting enterprises who don't have the talent in-house. Tackling new tech can sometimes be too overwhelming, and this is one way of getting inspiration or prototypes going. Sumerian is a stable way to bootstrap ideas and start conversations. There is a huge business opportunity here." Most importantly, with Amazon Sumerian you don't necessarily need 3D graphics or programming experience to build rich, interactive VR and AR scenes, so open sourcing Sumerian will only bring it more traction, from non-developers and trained professionals alike.

Amazon Sumerian is equipped with multiple user-friendly features:

  • Editor: a web-based editor for constructing 3D scenes, importing assets, and scripting interactions and special effects, with cross-platform publishing.
  • Object Library: a library of pre-built objects and templates.
  • Asset Import: upload 3D assets to use in your scene; Sumerian supports importing FBX, OBJ, and Unity projects.
  • Scripting Library: a JavaScript scripting library exposed via its 3D engine for advanced scripting capabilities.
  • Hosts: animated, lifelike 3D characters that can be customized for gender, voice, and language.

Amazon Sumerian also has baked-in integration with Amazon Polly and Amazon Lex to add speech and natural language understanding to Sumerian hosts. Additionally, the scripting library can be used with AWS Lambda, allowing the use of the full range of AWS services.
The VR and AR apps created using Sumerian can run in browsers that support WebGL or WebVR, and on popular devices such as the Oculus Rift, HTC Vive, and those powered by iOS or Android. You can learn more by visiting the Amazon Sumerian homepage and browsing the Sumerian tutorials.

Also read:
  • Google open sources Seurat to bring high precision graphics to Mobile VR [news]
  • Verizon launches AR Designer, a new tool for developers [news]
  • Getting started with building an ARCore application for Android [tutorial]


Google announces the stable release of Android Jetpack Navigation

Bhagyashree R
15 Mar 2019
2 min read
Yesterday, Google announced the stable release of the Android Jetpack Navigation component. This component is a suite of libraries and tooling that helps developers implement navigation in their apps, whether incorporating simple button clicks or more complex navigation patterns such as app bars and navigation drawers.

Some features of Android Jetpack Navigation:

Handle basic user actions: You can make basic user actions like the Up and Back buttons work consistently across devices and screens, for a better user experience.

Deep linking: Deep linking gets complicated as your app grows more complex. With deep linking, you can let users land directly on any part of your app. In the Navigation component, deep linking is a first-class citizen, making your app navigation more consistent and predictable.

Reducing the chances of runtime crashes: The component ensures the type safety of arguments passed from one screen to another. This, as a result, decreases the chances of runtime crashes as users navigate your app.

Adhering to Material Design guidelines: You can add navigation experiences like navigation drawers and navigation bottom bars, making your app navigation more aligned with the Material Design guidelines.

Navigation Editor: You can use the Navigation Editor to easily visualize and manipulate the navigation graph, a resource file that contains all of your app's destinations and actions. The Navigation Editor is available in Android Studio 3.3 and above.

To know more, check out the official announcement.

Also read:
  • Android Q Beta is now available for developers on all Google Pixel devices
  • Android Studio 3.5 Canary 7 releases!
  • Android Things is now more inclined towards smart displays and speakers than general purpose IoT devices
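The type-safety point deserves a concrete illustration. The Navigation component's Safe Args feature generates Kotlin/Java classes at build time; as a language-neutral sketch of why typed arguments prevent runtime crashes (all names here are illustrative assumptions, not the real API), the idea is that a mismatch fails loudly at the navigation call instead of crashing deep inside the destination screen:

```python
from dataclasses import dataclass

@dataclass
class ProfileArgs:
    """Arguments a hypothetical 'profile' destination expects;
    the type annotations play the role of generated Safe Args."""
    user_id: int
    show_header: bool = True

def navigate(destination, args):
    # Reject mismatched arguments at the call site, rather than
    # letting the destination crash at runtime on a missing field.
    expected = destination["args_type"]
    if not isinstance(args, expected):
        raise TypeError(f"{destination['name']} expects {expected.__name__}")
    return f"navigated to {destination['name']} with {args}"

profile = {"name": "profile", "args_type": ProfileArgs}
print(navigate(profile, ProfileArgs(user_id=42)))
```

Passing a raw dict or the wrong type raises a `TypeError` immediately, which is the failure mode Safe Args moves all the way back to compile time.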

Facebook launched new multiplayer AR games in Messenger

Natasha Mathur
09 Aug 2018
3 min read
Facebook announced a new multiplayer AR games feature for its popular messaging platform, Messenger, today. The feature makes chatting even more fun by letting you challenge your friends to games while video chatting.

Facebook Messenger AR games

Facebook seems inspired by Snapchat, as the new feature is quite similar to Snapchat's multiplayer AR video-chat games, called Snappables, launched in April 2018. The only difference is that Snapchat transforms your whole screen using AR, taking you into space or a disco dance hall, whereas Facebook's AR games only overlay a few graphics on the world around you.

The new feature is powered by AR Studio, a platform released last year at Facebook F8 that allows developers to build AR experiences for the Facebook Camera. It lets you challenge up to six people at a time. There are currently two games rolled out: "Don't Smile" and "Asteroids Attack". "Don't Smile" is a game where the person keeping a straight face the longest wins; in "Asteroids Attack", the person who navigates their spaceship better wins. Facebook is planning to release more games in the future, such as Beach Bump and Kitten Craze.

You need the latest version of Messenger on your phone to play these games. Either open an existing conversation or find the person you want to chat with, then tap the video icon in the upper-right corner of your screen. After this, tap the star button and select one of the AR games; this notifies the person you are chatting with to join you in the game.

Video chats in Messenger have been on the rise, with over 17 billion video chats last year, twice as many as the year before. Facebook also seems to be quite invested in incorporating augmented reality into different aspects of its business.
Last month, Facebook announced that it's planning to launch AR ads in its news feed to let you try on products virtually. Messenger is doing a good job of connecting people in real time, and AR games are the cherry on top that will help people create memories and have fun.

Also read:
  • Facebook launches a 6-part Machine Learning video series
  • Facebook plans to use Bloomsbury AI to fight fake news
  • Is Facebook planning to spy on you through your mobile's microphones?


HTC Vive Focus 2.0 update promises long battery life, among other things for the VR headset

Natasha Mathur
29 May 2018
4 min read
HTC Vive Focus, the standalone VR headset, gets a major System Update 2.0, making the headset even more versatile before its global release. HTC announced at the annual Vive Ecosystem Conference in Shenzhen that its standalone 6DoF Vive headset can install System Update 2.0, which promises longer battery life, the ability to link with HTC smartphones, and passenger and surroundings modes, along with other exciting features.

Here's a quick rundown of the updates made to the HTC Vive Focus:

Key feature updates

  • Smartphone integration: A newly added ability to link an HTC smartphone with the Vive Focus. This lets users take calls, receive messages, and view social media notifications from a paired HTC smartphone without taking the headset off. The feature will be made available first on the HTC U12+ and will later be distributed to all other HTC smartphone users through HTC's and Tencent's app stores.
  • Surroundings Mode: This see-through mode is handy when using the Vive Focus in a moving vehicle. Enabled by double-clicking the power button on the headset, it activates the headset's camera so the user can see the world outside without taking the headset off.
  • Passenger Mode: This mode lets users experience the virtual world seamlessly, making sure they do not drift in the virtual world due to the turbulence of a moving vehicle.
  • Stream content from Viveport or SteamVR: The update also lets users stream Viveport or SteamVR content from a PC to a Vive Focus using the Riftcat VRidge app over 5 GHz WiFi.
  • VR app installation: You can now install apps directly on the microSD card, and purchase apps using credit cards from within the Viveport store.
Other upcoming features

More features are lined up for the HTC Vive Focus in the third quarter of 2018:

  • A software update that will make Vive's 3DoF headset controller behave like a 6DoF controller, using computer vision on the Focus' camera.
  • Hand-movement tracking via the front cameras, using gesture recognition technology.
  • The ability to stream media such as apps, videos, and games directly from the six-inch U12+ phone screen to the bigger VR display.
  • Support for the Seagate VR Power Drive, a combined media storage device and external battery pack. The drive is also compatible with the U12+ and will be optimized for the Focus, promising to improve battery life considerably.

System Update 2.0 is available on all HTC Vive Focus devices, which are sold only in China for now. No announcement has been made regarding availability in the West, but with the company shipping dev kits to developers over the past few weeks, an announcement should come soon. Details regarding cost and storage capacity are expected later this year. With competitors such as the Google Daydream-powered Lenovo Mirage Solo, the latest updates to the HTC Vive Focus have built major anticipation among users about what to expect next in the VR world.

Also read:
  • Qualcomm may announce a new chipset for standalone AR/VR headsets this week at Augmented World Expo
  • Top 7 modern Virtual Reality hardware systems
  • Understanding the hype behind Magic Leap's New Augmented Reality Headsets


Why your app needs real time mobile analytics

Amarabha Banerjee
21 May 2018
4 min read
What's every mobile developer's worst nightmare? The idea that their app has fallen into obscurity, without a single install or user engagement! If you are a mobile developer reading this, you are probably well aware of this thought, in imagination or in reality, at some time or another. Traditional analytics methods, adopted and made popular by Google, don't really have a great impact on mobile apps: they are not helpful in finding out the exact reasons why your app might have failed to register a high number of installs or user engagements. So the real question to alleviate your fear is: what data pointers do you need to filter out the noise and make your app stand out among the clutter? The answer is not merely a name change, but a change in approach, and it's called mobile analytics.

For starters, some reasons users typically don't interact with your app are:

  • The UX is not tempting enough
  • The app is slow
  • The app doesn't live up to what it promised
  • The target audience segment is wrong
  • Your app is really bad

Barring the last pointer, the other four have real-life solutions that can salvage your app, if applied in time. The emphasis here is on the phrase "in time"; that's where real-time mobile analytics comes in, because with mobile apps, every minute counts, literally.

Mobile analytics works on the kinds of data collected. To understand why your app is not an instant hit, you will have to keep track of:

  • Geographical data of app installs: This helps you identify your geographical strongholds, i.e., where you have had the most response. You can then analyze other, similar locations to target, making your ad campaigns more effective.
  • Demographics of the users who engage with your app: This data is particularly helpful in identifying the age group and type of users engaging in in-app purchases.
Thus, helping you to reach your overall goal. Which Sources provide loyal users and generate more revenue: Knowing the right media outlet to promote your ad is imperative to its success. This will enable you to target the correct media sources for maximum revenue and in creating more loyal fanbase. What are the reasons for the users to quit: This will identify the reason behind the app not getting popular. Analyzing this data will enable you to learn about potential flaws in the UX or in the app performance or any security issues which might be prompting the users to quit your app suddenly. So how do you enable real time mobile analytics? There are a few platforms which provide ready-to-deploy real time mobile analytics. Fair warning, you might end up feeling like you used a black box where you feed data and the result comes out without knowing why you came up with those results. However there are other solutions being provided by IBM cloud, AWS Pinpoint, among others which will enable the developers to be a part of the overall analytics process and also play with the parameters to see predictions of app usage and conversion. The real challenge however lies in bringing all these analytics into your mobile device. For example, if you have seeing sudden uninstalls of your app and what you have right now is your mobile device, then you should be able to access the cloud and upload that data and analyze that on your mobile to get insights on what should be done. Like whether there is an urgent UX issue that needs fixing or there is a sudden bug in the application, or there might be a sudden security threat that potentially can compromise user data. To perform these mobile analytics natively and real time, we would most definitely need better computation capabilities and battery power. 
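The four data pointers above can be sketched as a simple aggregation. The following is a minimal, hypothetical illustration in Python — the event records and field names (`type`, `country`, `age_group`, `source`, `reason`) are invented for this sketch and don’t correspond to any specific analytics SDK:

```python
from collections import Counter

# Hypothetical event records an analytics pipeline might collect.
events = [
    {"type": "install", "country": "US", "age_group": "18-24", "source": "search_ad"},
    {"type": "install", "country": "IN", "age_group": "25-34", "source": "social"},
    {"type": "purchase", "country": "US", "age_group": "18-24", "source": "search_ad"},
    {"type": "uninstall", "country": "IN", "source": "social", "reason": "slow_startup"},
]

def summarize(events):
    """Aggregate the four data pointers: geography, demographics, sources, churn reasons."""
    installs_by_geo = Counter(e["country"] for e in events if e["type"] == "install")
    purchases_by_age = Counter(e["age_group"] for e in events if e["type"] == "purchase")
    revenue_sources = Counter(e["source"] for e in events if e["type"] == "purchase")
    quit_reasons = Counter(e["reason"] for e in events if e["type"] == "uninstall")
    return installs_by_geo, purchases_by_age, revenue_sources, quit_reasons

geo, ages, sources, churn = summarize(events)
print(geo)      # install counts per geography
print(sources)  # which sources drive revenue
print(churn)    # why users quit
```

In a real pipeline the `events` list would of course be streamed from the app, but the shape of the analysis — counting installs, purchases, and uninstall reasons along each dimension — is the same.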
Whether tech giants like Google, AMD, and Microsoft will come up with a solution to this mobile computation problem, with much longer battery life, is something only time can tell.

Packt teams up with Humble Bundle to bring developers a selection of mobile development bundles
Five reasons why Xamarin will change mobile development
Hybrid Mobile apps: What you need to know


Google Daydream powered Lenovo Mirage solo hits the market

Natasha Mathur
09 May 2018
3 min read
Just when people couldn’t keep up with the excitement of the Oculus Go, launched at Facebook’s F8 conference, Google added fuel to the fire by making the Lenovo Mirage Solo, the first standalone Daydream VR headset, available for purchase at $399.99.

Let’s have a look at the features that are making this headset all the rage:

Self-contained VR Headset

What makes this VR headset the talk of the town is that it’s the first standalone Daydream VR headset. It doesn’t carry the excess baggage of connecting a phone before putting the headset on: you just put it on and explore the intriguing VR world, sans wires and added complexity. The hardware inside resembles that of a mobile device, with a Snapdragon 835 processor, 4GB of RAM, and 64GB of storage, and a battery life of about 2.5 hours that keeps the VR experience seamless. It has embedded sensors including a gyroscope, accelerometer, and magnetometer, along with a microSD slot, a USB Type-C port, a power button, volume buttons, and a 3.5mm headphone jack.

Position-tracking Technology

The Mirage Solo comes with WorldSense, an outstanding 6-degrees-of-freedom motion-tracking feature that lets you move around freely with the headset on, making the whole experience more immersive and removing the need to set up external sensors. It offers:

- Two inside-out tracking cameras
- Built-in proximity sensors that detect the position of nearby objects

Display

The Mirage Solo has a 5.5-inch LCD display, an effort to get rid of the blurring that occurs as you move from side to side in the VR world. The screen has a 2,560 x 1,440-pixel resolution and a 110-degree field of view, similar to the Rift and Vive, making VR exploration even more immersive.

Design

The headset body is primarily matte white plastic with black and gray accents, plus a solid plastic strap that wraps around the head. As a self-contained headset, the Mirage Solo is solidly built, yet some find it bulky, since most of the weight rests on the top of the wearer’s forehead. It is adjustable, however, as the strap can be brought all the way around your skull, and the display housing keeps light from leaking in without disturbing the image, so the headset can be repositioned easily.

Sound

The Mirage Solo comes with two microphones, but users need to plug their own headphones into the 3.5mm jack, as it has no built-in speakers.

Apart from the features above, the Mirage Solo depends on the Daydream library for content. The catalog has more than 350 games and apps, with over 70 titles optimized for WorldSense. As you can see, the Mirage Solo is not flawless: it suffers from a bulky design, no built-in speakers, and a limited app library. But the pros outweigh the cons here, and it goes without saying that the Lenovo Mirage Solo is here to revolutionize the VR experience. To know more, visit the official Google Daydream blog.

Oculus Go, the first stand-alone VR headset arrives!
Understanding the hype behind Magic Leap’s New Augmented Reality Headsets
Build a Virtual Reality Solar System in Unity for Google Cardboard

Apple announces expanded security bug bounty program up to $1 million; plans to release iOS Security Research Device program in 2020

Vincy Davis
09 Aug 2019
4 min read
Apple made some major announcements at the Black Hat cybersecurity conference 2019, which concluded yesterday in Las Vegas. Apple’s head of security engineering, Ivan Krstić, announced that anybody who can hack an iPhone will get up to a $1 million reward. Apple has also released a new payout system for security researchers, with amounts depending on the type of vulnerability found. Krstić also unveiled Apple’s new iOS Security Research Device program, which will be out next year: as part of the program, qualified security researchers will be provided with special iPhones to hunt for flaws.

Apple expands its security bug bounty program

Apple first launched its bug bounty program in 2016. That program topped out at $200,000 and was open only to participants in Apple’s invite-only scheme. Yesterday, Apple announced that under the new security bug bounty program, anyone who can hack an iPhone can receive up to $1 million, and the program is now open to all security researchers. It covers all of Apple’s platforms, including iCloud, iOS, tvOS, iPadOS, watchOS, and macOS.

https://twitter.com/mikebdotorg/status/1159557138580004864

Apple has also released a new payout schedule, starting at $100,000 for a bug that allows lock screen bypass or unauthorized access to iCloud. Researchers can earn up to a 50% bonus for bugs found in pre-release software. The top payout is reserved for whoever can discover a zero-click kernel code execution with persistence.

https://twitter.com/Manzipatty/status/1159680310348537861
https://twitter.com/sdotknight/status/1159807563036340224
https://twitter.com/kennwhite/status/1159705960061030400

Apple’s new iOS Security Research Device program

Apple gave out details about its new iOS Security Research Device program, which will be out next year.
Under this program, Apple will supply special iPhones to security researchers to help them find security flaws in iOS. The program is, however, available only to researchers with a proven track record of security research, on any platform.

https://twitter.com/0x30n/status/1159553364159414272

The special devices will differ from regular iPhones: they will come with ssh, a root shell, and advanced debug capabilities to make bugs easier to identify. “This is an unprecedented fully Apple supported iOS security research platform,” said Krstić at the conference.

https://twitter.com/skbakken/status/1159556808198852608
https://twitter.com/marconielsen/status/1159584902339276801

Though many users have praised Apple for the prize money and for initiating the security research device program, a few consider the move less impressive than it sounds. Given the knowledge and expertise required to find these bugs, some suggest Apple should pay these hackers more, since they save Apple from a lot of negative P.R. by finding bugs that even Apple’s own employees sometimes cannot. A user on Hacker News comments, “1M is a lot of money to me, a regular person, but when you consider that top security engineering talent could be making north of 500k in total compensation, 1M suddenly doesn’t seem all that impressive. It’s a good bet to make on their risk. Imagine paying a mere 1M to avoid a public fiasco where all of your users get owned. This just seems like good business. They could make it 5M, and it would still be worth it to them in the medium to long term.” Another user says, “I'm surprised by how cheap the vulnerabilities market is. A good exploit, against a popular product like Chrome, selling for 100k or even $1M may sound like a lot, but it's really pennies for any top software firm. And $1M is still a lot for a vulnerability by market prices.” Another comment on Hacker News reads, “When I read the article, my first reaction was ‘Only a million?’ Considering the importance of a bug like this to Apple's business and the size of their cash hoard, this sounds like they don't actually care that much.”

To know about other highlights from the Black Hat cybersecurity conference 2019, head over to our full coverage.

Apple Card, iPhone’s new payment system, is now available for select users
Apple plans to suspend Siri response grading process due to privacy issues
Data Transfer Project: Now Apple joins Google, Facebook, Microsoft and Twitter to make data sharing seamless


Qualcomm announces a new chipset for standalone AR/VR headsets at Augmented World Expo

Natasha Mathur
28 May 2018
4 min read
Qualcomm Inc announced a new chipset, the Snapdragon XR1, its first dedicated Extended Reality (XR) platform, to power standalone virtual reality (VR) and augmented reality (AR) headsets. This is Qualcomm’s attempt to expand its business beyond the realm of smartphones. The new chipset was introduced at the Augmented World Expo in Santa Clara, California today.

The AR/VR industry has recently developed quite an interest in building standalone headsets. With devices such as Facebook’s Oculus Go and the Google Daydream-powered Lenovo Mirage Solo ruling the market, it is quite evident that the need for powerful chipsets to drive them is only going to rise.

Snapdragon XR1 key features

Let’s look at the features that make the new chipset a powerful option for standalone headsets:

- The XR1 is a system-on-a-chip (SoC): it puts all the required electronic circuitry and smartphone-class parts on a single integrated circuit (IC). The chip includes an ARM-based CPU, a GPU, a vector processor, and an AI engine.
- The AI engine can optimize AI functions such as object recognition and pose prediction on the device.
- Head-tracking interaction with headsets will be possible, and the chip is also capable of handling voice control.
- It enables a better user experience with high-quality visual and audio playback, as well as 3DoF and 6DoF interactive controls.
- The XR1 supports 4K video at up to 60 frames per second, dual displays, 3D overlays, and popular graphics APIs such as OpenGL, OpenCL, and Vulkan.
- The chipset includes a Spectra image signal processor, which helps reduce noise for clearer image quality.
- The XR1 uses Qualcomm’s audio technologies — Aqstic, the 3D Audio Suite, and aptX — to enable hi-fi sound, along with Aqstic’s always-on, always-listening assistance.
- Head-Related Transfer Functions give the impression of sounds coming from a specific point in space, creating a more realistic experience.
- Using heterogeneous computing, the chip can delegate tasks to different cores for more efficient performance.

Qualcomm’s goal with this chip design is to make it easy for AR/VR hardware manufacturers to design and build headsets that are cheap yet powerful and energy-efficient. The XR1 ships with an SDK that helps manufacturers implement some of these features, as well as Bluetooth and WiFi capabilities.

The famous Oculus Go uses a Qualcomm smartphone chip, and the Lenovo Mirage Solo also uses a Qualcomm phone processor, but the battery life of these standalone headsets is not comparable to that of a smartphone. With chipsets built specifically for headsets, battery life should improve considerably.

Qualcomm is not the only one working on chips dedicated to headsets. Apple is reportedly developing its own chip for AR glasses that could go on sale in early 2020, and Nvidia and Intel are among the others aiming at similar technologies. It is also worth noting that Qualcomm is on the lookout for new sources of revenue as the smartphone market dries up and competition keeps increasing. Qualcomm will team up with existing headset makers that plan to include the chip, such as HTC (Vive), Vuzix, Meta, and Pico.

With Qualcomm unveiling the Snapdragon XR1 at the Augmented World Expo today, AR/VR manufacturers across the globe have received an extra boost in shipping hardware for the AR/VR space. For more details on the Snapdragon XR1, check out the official Qualcomm press release.

Top 7 modern Virtual Reality hardware systems
Types of Augmented Reality targets
Google open sources Seurat to bring high precision graphics to Mobile VR


9 Most Important features in Android Studio 3.2

Amarabha Banerjee
07 Jun 2018
3 min read
Android Studio has been the IDE of choice for Android developers since its release in 2014. Version 3.2 of Android Studio was released at the end of April, bringing a few very interesting changes to the Android Studio ecosystem. Here are the most important changes you need to be aware of:

1. Android Jetpack support. Jetpack has been updated and improved, and it should make the overall development process much smoother and easier, minimizing repetitive work and streamlining the development workflow.
2. New Navigation Editor. The navigation editor helps developers gain a better view of their app’s design and layout, making it much easier to plan navigational patterns between different parts of an app.
3. AndroidX refactoring. Android Studio introduces a new refactoring mechanism that migrates code from the Android Support Libraries to the new Android extension libraries under the androidx namespace.
4. The new app bundle. The new dynamic app bundle is smarter and more intuitive than before. Once you have created your app and uploaded its resources, you no longer need to generate customized APKs for the different kinds of devices they will run on: the dynamic APK builder automatically creates versions of the APK best suited to each device. You can also offer additional bundles for your app that users can download on demand.
5. Layout preview with sample data. During development, the absence of runtime data can hamper visualization and affect the app design process. With the new layout preview, you can preview your design using sample data in the layout editor, then change the data as you require to see a complete preview of your app design.
6. Slice functionality. Android Studio 3.2 can now surface a preview of your app in Google Search results — this is what’s being called “slice functionality”. This will be particularly useful for mobile developers who want to think carefully and thoroughly about how they market their app.
7. More new lint checks. Beyond Kotlin interoperability lint checks, Android Studio 3.2 implements 20 new lint checks to help developers find and identify common code problems. These range from warnings about potential usability issues to high-priority errors regarding security vulnerabilities.
8. New Gradle target. You can use the new lintFix Gradle task to apply all of the safe fixes suggested by the lint check directly to the source code. An example of a lint check that suggests a safe fix is SyntheticAccessor.
9. Metadata updates. Various lint-check metadata, such as the service cast check, have been updated to work with the Android P Developer Preview.

Android Studio has long been the default development environment for Android developers, and with these changes it incorporates smart new features that should help developers build better apps faster and save a lot of development time.

What is Android Studio and how does it differ from other IDEs?
Unit Testing Apps with Android Studio
The Art of Android Development Using Android Studio

Apple’s new ARKit 2.0 brings persistent AR, shared augmented reality experiences and more

Sugandha Lahoti
06 Jun 2018
3 min read
In the keynote at the ongoing WWDC 2018, Apple shared the latest version of its augmented reality toolkit, ARKit 2.0. Apple’s primary focus this year is on improving the user experience and making Apple devices perform better with improved functionality. ARKit 2 features realistic rendering, multiplayer experiences, a new file format, and more.

Shared augmented reality

With ARKit 2.0, you can now collaborate with multiple other users in a virtual environment. Apple says, “Shared experiences with ARKit 2 make AR even more engaging on iPhone and iPad, allowing multiple users to play a game or collaborate on projects like home renovations.” There is also a new spectator mode if you are keen on watching a game instead of playing it: you see and experience what the players see.

AR that stays the same

Persistent AR, as Apple likes to call it, is another fabulous feature in ARKit 2.0. You can now leave virtual objects in the real world and return to them later, which makes interacting with AR more lifelike: start a puzzle on a table and come back to it later in the same state. Image detection and tracking also get an update in ARKit 2.0, which can now detect 3D objects like toys or sculptures and apply reflections of the real world onto AR objects.

A new file format

Apple has introduced a new open file format, usdz, in collaboration with Pixar. The format is optimized for sharing in apps like Messages, Safari, Mail, Files, and News while retaining powerful graphics and animation features. It enables the new AR Quick Look feature, which allows users to place 3D objects in the real world. usdz is part of the developer preview of iOS 12 and will be available this fall as part of a free software update for iPhone and iPad.

The Measure app

Apple also unveiled its very own AR measuring app. The new iOS 12 app automatically provides the dimensions of objects like picture frames, posters, and signs; it can also show diagonal measurements and compute area. Users can take a photo or share these dimensions from their iPhone or iPad.

You can tune in to Apple’s WWDC event website to watch the keynote and read about other exciting releases.

WWDC 2018 Preview: 5 Things to expect from Apple’s Developer Conference
Apple releases iOS 11.4 update with features including AirPlay 2, and HomePod among others
Apple steals AI chief from Google


Apple releases iOS 12.2 beta 1 for developers with custom screen time scheduling, PWA improvements among other features

Sugandha Lahoti
28 Jan 2019
3 min read
Apple released the next major iOS update, iOS 12.2 beta 1, to developers on January 24, 2019. The update boasts features such as custom downtime scheduling, as well as major updates to PWAs.

Custom screen time scheduling

According to a report by 9to5Mac, the latest iOS update offers users a custom downtime scheduler: you can now adjust the Screen Time feature per day of the week. Although previous iOS versions had a similar downtime scheduler, the same schedule applied to every day. With iOS 12.2 beta 1, you can either keep one schedule for every day or customize it depending on which day of the week it is. You can find it by navigating to Settings > Screen Time > Downtime.

https://twitter.com/Mr_SamSpencer/status/1089161676983844865

PWA improvements

Apple has made major improvements to progressive web apps by introducing new features for developers. Mike Hartington, developer advocate for the Ionic framework, gives us a glimpse of the improvements in a tweet:

- New experimental features include Web Auth, Web Animations, WebMeta, pointer events, intersection observer, etc.
- Service workers have been removed from the experiments list and are enabled by default.
- External sites are loaded via SFSafariViewController, which means authentication flows can still work without leaving the PWA.
- The current state of an app is maintained even if the app goes into the background.
- Both the native app and the PWA of the same app now appear in search.

Users are generally excited about Apple’s PWA improvements. A comment on Hacker News reads, “This is great for user rights and moves the needle more towards a decentralized and open ecosystem, while maintaining strong security guarantees to the end-user.” However, users also want Apple to consider supporting push notifications for PWAs.

Other UI features

9to5Mac notes the following new UI updates in iOS 12.2 beta 1:

- New Screen Mirroring icon in Control Center
- New full-screen Apple TV Remote interface in Control Center
- New “Speakers & TVs” section in Home app settings
- More detailed Apple Wallet UI for recent transactions: an updated details button in the Wallet card UI, tap a transaction for more detail, and card details shown in inset rectangle rows
- New Motion & Orientation data toggle for Safari in iOS Settings
- Air Quality Index reading in Maps
- Safari warns about websites not supporting HTTPS
- Fill in a search suggestion without submitting the search
- Keyboard color picker
- Inline Safari music playback
- Full album names in song search results in the Music app
- iOS 12.2 will bring Apple News to Canada

Developers with a previous iOS 12 beta installed can head to Settings > General > Software Update to start downloading iOS 12.2 beta 1. Non-developers can enroll a device in the public beta program by visiting beta.apple.com on that device; at the time of writing, however, there is no public beta release of 12.2.

Apple has introduced Shortcuts for iOS 12 to automate your everyday tasks
Microsoft Office 365 now available on the Mac App Store
Tim Cook cites supply constraints and economic deceleration as the major reason for Apple missing its earnings target