Tech News - Mobile

204 Articles

IEEE Computer Society predicts top ten tech trends for 2019: assisted transportation, chatbots, and deep learning accelerators among others

Natasha Mathur
21 Dec 2018
5 min read
IEEE Computer Society (IEEE-CS) released its annual technology predictions earlier this week, unveiling the ten technology trends most likely to be adopted in 2019. "The Computer Society's predictions are based on an in-depth analysis by a team of leading technology experts, identify top technologies that have substantial potential to disrupt the market in the year 2019," says Hironori Kasahara, IEEE Computer Society President. Let's have a look at the top ten technology trends predicted to reach wide adoption in 2019.

Top ten trends for 2019

Deep learning accelerators
According to the IEEE Computer Society, 2019 will see wide-scale adoption of companies designing their own deep learning accelerators, such as GPUs, FPGAs, and TPUs, for use in data centers. The development of these accelerators would further allow machine learning to be used in different IoT devices and appliances.

Assisted transportation
Another trend predicted for 2019 is the adoption of assisted transportation, which is already paving the way for fully autonomous vehicles. Although fully autonomous vehicles are not here yet, self-driving tech saw a booming year in 2018: AWS introduced DeepRacer, a self-driving race car; Tesla is building its own AI hardware for self-driving cars; and Alphabet's Waymo will be launching the world's first commercial self-driving cars in the upcoming months. Beyond self-driving itself, assisted transportation is also highly dependent on deep learning accelerators for video recognition.

The Internet of Bodies (IoB)
As per the IEEE Computer Society, consumers have become very comfortable with self-monitoring using external devices like fitness trackers and smart glasses. With digital pills now entering mainstream medicine, body-attached, implantable, and embedded IoB devices provide richer data that enable the development of unique applications. However, IEEE notes that this technology also brings concerns related to security, privacy, physical harm, and abuse.

Social credit algorithms
Facial recognition tech was in the spotlight in 2018. For instance, Microsoft President Brad Smith requested governments to regulate the evolution of facial recognition technology this month, and Google patented a new facial recognition system that uses your social network to identify you. According to the IEEE, social credit algorithms will now see a rise in adoption in 2019. Social credit algorithms use facial recognition and other advanced biometrics to identify a person and retrieve data about them from digital platforms. That data is then used to approve or deny access to consumer products and services.

Advanced (smart) materials and devices
The IEEE Computer Society predicts that in 2019, advanced materials and devices for sensors, actuators, and wireless communications will see widespread adoption. These materials, which include tunable glass, smart paper, and ingestible transmitters, will lead to the development of applications in healthcare, packaging, and other appliances. "These technologies will also advance pervasive, ubiquitous, and immersive computing, such as the recent announcement of a cellular phone with a foldable screen. The use of such technologies will have a large impact on the way we perceive IoT devices and will lead to new usage models," mentions the IEEE Computer Society.

Active security protection
From data breaches (Facebook, Google, Quora, Cathay Pacific, etc.) to cyber attacks, 2018 saw many security-related incidents. 2019 will now see a new generation of security mechanisms that take an active approach to fighting these incidents. These would involve hooks that can be activated when new types of attacks are exposed, and machine-learning mechanisms that can help identify sophisticated attacks.

Virtual reality (VR) and augmented reality (AR)
Packt's 2018 Skill Up report highlighted what game developers feel about the VR world: a whopping 86% of respondents replied with "Yes, VR is here to stay". The IEEE Computer Society echoes that thought, believing that VR and AR technologies will see even greater wide-scale adoption and will prove to be very useful for education, engineering, and other fields in 2019. IEEE points out that now that advertisements for VR headsets appear during prime-time television programs, VR/AR will see wide-scale adoption in 2019.

Chatbots
2019 will also see an expansion in the development of chatbot applications. Chatbots are used quite frequently for basic customer service on social networking hubs, and as intelligent virtual assistants in operating systems. Chatbots will also find applications in interacting with cognitively impaired children for therapeutic support. "We have recently witnessed the use of chatbots as personal assistants capable of machine-to-machine communications as well. In fact, chatbots mimic humans so well that some countries are considering requiring chatbots to disclose that they are not human," mentions IEEE.

Automated voice spam (robocall) prevention
IEEE predicts that automated voice spam prevention technology will see widespread adoption in 2019. It will be able to block spoofed caller IDs and, for "questionable calls", have the computer ask the caller questions to determine whether the caller is legitimate.

Technology for humanity (specifically machine learning)
IEEE predicts an increase in the adoption rate of tech for humanity. Advances in IoT and edge computing are the leading factors driving the adoption of this technology. Events such as fires and bridge collapses are further creating the urgency to adopt these monitoring technologies in forests and on smart roads.

"The technical community depends on the Computer Society as the source of technology IP, trends, and information. IEEE-CS predictions represent our commitment to keeping our community prepared for the technological landscape of the future," says the IEEE Computer Society. For more information, check out the official IEEE Computer Society announcement.

Key trends in software development in 2019: cloud native and the shrinking stack
Key trends in software infrastructure in 2019: observability, chaos, and cloud complexity
Quantum computing, edge analytics, and meta learning: key trends in data science and big data in 2019

React Native 0.57 released with major improvements in accessibility APIs, WKWebView-backed implementation, and more!

Bhagyashree R
13 Sep 2018
2 min read
With 600 commits and 992 files changed, React Native 0.57 was released yesterday. This release brings major improvements to the accessibility APIs, adds the WKWebView-backed implementation announced in August, and ships several tooling updates.

What is new in React Native 0.57?

New features
- Accessibility APIs, used for making apps accessible to people with disabilities, now support accessibility hints, inverted colors, and easier ways to define an element's role and states.
- On iOS, WebView can now use WKWebView internally by passing useWebKit={true} (a short sketch appears at the end of this article).
- The platform check has been loosened to improve support for out-of-tree platforms.
- An implementation of YogaNodeProperties has been added, which accesses style and layout properties using a ByteBuffer rather than JNI calls.
- FlatList and SectionList are now added to the Animated exports.

Changes
- Android tooling has been updated to match newer configuration requirements (SDK 27, Gradle 4.4, and support library 27).
- unbundle is renamed to ram-bundle (a breaking change for OSS).
- The minimum Node version is raised from 8 to 8.3.
- Flow is upgraded to v0.76.0, ESLint to 5.1.0, and Babel to v7.0.0.
- The "loading from pre-bundled file" notification no longer shows up when not in dev mode.
- StyleSheet.compose is refined so that subtypes of DangerouslyImpreciseStyleProp can flow through the function call without losing their type.
- The new Metro configuration is supported in the public react-native CLI.
- react-native-dom is whitelisted in the haste/cli config defaults.

Bug fixes
- The debugger-ui path of the react-native CLI, which was wrong earlier, is now fixed.
- Extreme slowness of <TextInput> is fixed.
- The TextInput placeholder not being completely visible on Android is fixed.
- A horizontal <ScrollView> overflow issue is fixed.
- Support is added for connecting to the Packager when running the iOS app on a device with a custom Debug configuration.
- A crash in RCTImagePicker on iOS is fixed.

Removed features
- ScrollView.propTypes is removed; it is recommended to use Flow or TypeScript to verify correct prop usage instead. (Breaking change)
- ReactInstancePackage is now deprecated; it is recommended to use @link ReactPackage or @link LazyReactPackage instead.

To know more about the improvements in the React Native 0.57 release, head over to their GitHub repository.

React Native 0.57 coming soon with new iOS WebViews
React Native announces re-architecture of the framework for better performance
Introducing Watermelon DB: A new relational database to make your React and React Native apps highly scalable
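The useWebKit opt-in from the feature list above can be sketched as follows. This example is not taken from the release notes; the component name and URL are hypothetical, and it assumes React Native 0.57, where WebView still ships in core:

// Hypothetical screen showing the WKWebView opt-in added in React Native 0.57.
// 'useWebKit' only affects iOS; Android keeps its existing WebView implementation.
import React from 'react';
import { WebView } from 'react-native';

const DocsScreen = () => (
  <WebView
    source={{ uri: 'https://example.com/docs' }}
    useWebKit={true} // switch the iOS backing view from UIWebView to WKWebView
  />
);

export default DocsScreen;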

Swift 4.2 releases with language, library and package manager updates!

Natasha Mathur
18 Sep 2018
2 min read
The Swift team released Swift version 4.2 yesterday. Swift 4.2 comes with updates to the Swift language, including generics improvements and standard library updates, as well as changes to the package manager. Swift is a general-purpose, multi-paradigm programming language developed by Apple Inc. for iOS, macOS, watchOS, tvOS, and Linux. Swift is built for uses ranging from systems programming to mobile and desktop apps, and on up to cloud services. Let's have a look at the key features in Swift 4.2.

Language updates
Swift 4.2 is a major language release and comprises language changes such as generics improvements and standard library updates.

Generics improvements
Better support has been added for generics, which ultimately makes more of your code reusable.

Standard library updates
The standard library in this latest release gains a number of new features, such as improvements to the Hashable protocol and a new unified set of randomization functions and protocols.

Other updates
- Swift 4.2 enables binary compatibility for future releases of Swift.
- Support has been added for batch mode compilation, resulting in faster build times.
- There is a change in the calling convention for retain/release, which helps reduce code size and improve runtime performance.

Package manager updates
Swift 4.2 introduces three new features: batch mode support, improved scheme generation logic, and automatic Xcode project generation.
- With Swift 4.2, Swift targets will now be compiled using the Swift compiler's batch mode.
- The scheme generation logic has been improved and now generates the following schemes: one scheme comprising all regular and test targets of the root package, and one scheme per executable target consisting of the test targets whose dependencies intersect with the dependencies of that executable target.
- Swift 4.2 offers automatic Xcode project generation. generate-xcodeproj has a new --watch option which enables it to watch the file system and regenerate the Xcode project automatically when needed.

For more information on this release, check out the official release notes.

Your First Swift Program
What's new in Vapor 3, the popular Swift based web framework
Swift's Core Libraries

Apple iPadOS now available for download with Slide Over and Split View, Home Screen updates, new capabilities to Apple Pencil and more

Sugandha Lahoti
25 Sep 2019
4 min read
iPadOS was first announced at Apple's WWDC 2019 conference as a new operating system for Apple's iPads, which previously ran iOS. Basically, iPadOS builds on the same foundation as iOS, adding intuitive features specific to the large display of the iPad. Now, Apple iPadOS is available for the iPad Air 2 and later and the iPad mini 4 and later. iPadOS has a new Home screen layout with icons arranged in a tighter grid to give you more room for apps and information.

What's new in iPadOS

Split View and Slide Over
Split View allows you to work on multiple files and documents simultaneously while using the same app for multiple purposes. With Slide Over, you can quickly move between apps by swiping along the bottom. You can also swipe up to see all the apps in Slide Over and make one full screen by dragging it to the top. You can also open windows from the same app in multiple spaces so you can work on different projects across your iPad. The updated App Switcher shows all spaces and windows for all apps along with window titles, and App Exposé lets you see all the open windows for an app by tapping its icon in the Dock.

Updates to Apple Pencil
Apple Pencil integration now brings more natural customizations to iPadOS. Latency has been reduced to 9 milliseconds, and the tool palette has been redesigned. You can drag it to either side of the screen, or minimize it in a corner so you have more room for your content. Apple Pencil also has a pixel eraser and a ruler. You can also quickly take a screenshot using Apple Pencil by dragging it up from either bottom corner.

Improvements in the Files app
The Files app gets a major improvement with iCloud Drive support for folder sharing. You can now access files on a USB drive, SD card, or hard drive, and share folders with friends, family, and colleagues in iCloud Drive. You can also easily browse files deep in nested folders in the new Column View. Quick Actions make it easy to rotate, mark up, or create a PDF in the Files app.

Improved text editing
Text editing on the iPad receives a major update with iPadOS. With scroll bar scrubbing, you can instantly navigate long documents, web pages, and conversations. You can select text just by tapping and swiping, and double-tap to quickly select addresses, phone numbers, email addresses, and more. With a simple three-finger swipe to the left you can undo gestures, or redo by swiping three fingers to the right. You can also quickly select email messages, files, and folders by tapping with two fingers and dragging.

Other updates in iPadOS
- You can use your iPad as a second display for additional screen space.
- New Dark Mode option for low-light environments.
- The new Photos tab lets you browse your photo library with different levels of curation.
- App launch is up to 2x faster, and unlocking the iPad Pro is up to 30 percent faster.
- Support for Apple Arcade, a game subscription service with over 100 new games, all with no ads or additional purchases.

Bug in iPadOS grants third-party keyboards full access
Apple has warned its users that a bug has been found in iOS and iPadOS that can result in keyboard extensions being granted full access even if you haven't approved this access. This issue does not impact Apple's built-in keyboards, nor does it impact third-party keyboards that don't make use of full access. Apple says that the issue will be fixed soon in an upcoming software update.

These are a select few updates. For more information, read the detailed coverage on Apple iPadOS.
Apple's September 2019 Event: iPhone 11 Pro and Pro Max, Watch Series 5, Apple TV+, new iPad and more
Apple releases Safari 13 with opt-in dark mode support, FIDO2-compliant USB security keys support and more!
Apple accidentally unpatches a fixed bug in iOS 12.4 that enables its jailbreaking

Apple Pay will soon support NFC tags to trigger payments

Vincy Davis
14 May 2019
3 min read
At the beginning of this month, Apple's Vice President of Apple Pay, Jennifer Bailey, announced a new NFC feature for Apple Pay. Apple Pay will soon support NFC stickers/tags that trigger a payment without needing an app installed. The announcement was made during the keynote address at the TRANSACT Conference in Las Vegas, which focuses on global payment technology.

The new iPhones will have special NFC tags that will trigger Apple Pay purchases when tapped. This means all you need to do is tap on the NFC tag and confirm the purchase through Apple Pay (via Face ID or Touch ID), and the payment is done. This requires no separate app and will be handled by Apple Pay along with the Wallet app. As per 9to5Mac, Apple is partnering with Bird scooters, the Bonobos clothing store, and PayByPhone parking meters in the initial round.

Also, users will soon be able to sign up for loyalty cards within the Wallet app with a single tap, with no third party or setup required. According to NFC World, Dairy Queen, Panera Bread, Yogurtland, Jimmy John's, Dave & Busters, and Caribou Coffee are all planning to launch services later this year that will use NFC tags to let customers sign up for loyalty cards.

https://twitter.com/SteveMoser/status/1127949077432426496

This could be another step towards Apple's goal of replacing the wallet. The feature will make instant, on-the-go purchases a lot faster and easier. A user on Reddit commented, "From a user's point of view, this seems great. No need to wait for congested LTE to download an app in order to pay for a scooter or parking." Another user compared Apple Pay with QR codes, stating, "QR code requires at least one more step which is using the camera. Hopefully, Apple Pay will be just a single tap and confirm, which would be invoked automatically whenever the phone is near a point of sale. And since the NFC tags will have a predetermined, set payment amount associated with them, even biometrics shouldn't be necessary."

https://twitter.com/lgeffen/status/1128083948410744832

More details on this feature can be expected at the Apple Worldwide Developers Conference (WWDC19) coming up in June.

Apple's March Event: Apple changes gears to services, is now your bank, news source, gaming zone, and TV
Spotify files an EU antitrust complaint against Apple; Apple says Spotify's aim is to make more money off others' work
Elizabeth Warren wants to break up tech giants like Amazon, Google, Facebook, and Apple and build strong antitrust laws

You can now use fingerprint or screen lock instead of passwords when visiting certain Google services thanks to FIDO2 based authentication

Sugandha Lahoti
13 Aug 2019
2 min read
Google has announced FIDO2-based local user verification for Google Accounts, offering a simpler authentication experience when viewing saved passwords for a website. Basically, you can now use your fingerprint or screen lock instead of a password when visiting certain Google services.

This password-free authentication service leverages the FIDO2 standards, FIDO CTAP, and WebAuthn, which are designed to "provide simpler and more secure authentication experiences. They are a result of years of collaboration between Google and many other organizations in the FIDO Alliance and the W3C," according to a blog post from the company.

This new authentication process is designed to speed up logging into Google accounts, and to be more secure, by replacing password typing with direct biometric authentication. How this works is that if you tap on any one of your saved passwords on passwords.google.com, Google will prompt you to "Verify that it's you", at which point you can authenticate using your fingerprint or any other method you usually use to unlock your phone (such as a PIN or a touch pattern). Google has not yet made it clear which Google services can be used with the biometric method; the blog post cited Google's online Password Manager as the example.

Google is also being cautious about data privacy, noting, "Your fingerprint is never sent to Google's servers - it is securely stored on your device, and only a cryptographic proof that you've correctly scanned it is sent to Google's servers. This is a fundamental part of the FIDO2 design."

This sign-in feature is currently available on all Pixel devices. It will be made available to all Android phones running 7.0 Nougat or later "over the next few days."

Google Titan Security key with secure FIDO two factor authentication is now available for purchase
Google to provide a free replacement key for its compromised Bluetooth Low Energy (BLE) Titan Security Keys
Cloud Next 2019 Tokyo: Google announces new security capabilities for enterprise users
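For readers curious about the underlying mechanism described above, here is a rough, hedged sketch of the standard FIDO2/WebAuthn call a web page can make to request local user verification. This is illustrative only, not Google's implementation; the relying-party ID is a placeholder, and the challenge must come from your server:

// Minimal WebAuthn sketch: ask the platform authenticator to verify the user
// locally (fingerprint, screen lock PIN, or pattern) before revealing a secret.
// Only a signed assertion is returned; the biometric itself never leaves the device.
async function verifyItsYou(challenge: Uint8Array): Promise<boolean> {
  try {
    const assertion = await navigator.credentials.get({
      publicKey: {
        challenge,                    // single-use random bytes issued by the server
        rpId: 'example.com',          // placeholder relying-party ID
        userVerification: 'required', // force an on-device biometric/PIN check
        timeout: 60_000,
      },
    });
    return assertion !== null;
  } catch {
    return false; // user cancelled, timed out, or no authenticator available
  }
}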

Google is planning to bring Node.js support to Fuchsia

Natasha Mathur
20 Mar 2019
2 min read
Google is reportedly planning to bring Node.js to Fuchsia. Yang Guo, a staff software engineer at Google, posted a tweet yesterday saying he is looking for a full-time software engineer at Google Munich, Germany, who can port Node.js to Fuchsia.

https://twitter.com/hashseed/status/1108016705920364544

"We are interested in bringing JS as a programming language to that platform to add to the list of supported languages," states Guo. Currently, Fuchsia supports languages such as C/C++, Dart, FIDL, Go, Rust, Python, and Flutter modules.

Fuchsia is a new operating system that Google is currently working on. Google has been working on Fuchsia for over two years in the hope that it will replace the dominant Android. Fuchsia is a capability-based operating system built on a new microkernel called "Zircon". Zircon is the core platform that powers the Fuchsia OS. It is made up of a microkernel (source in kernel/...) and a small set of userspace services, drivers, and libraries (source in system/...) that are necessary for the system to boot, talk to hardware, and load and run userspace processes. The Zircon kernel provides syscalls to Fuchsia, helping it manage processes, threads, virtual memory, inter-process communication, state changes, and locking.

Fuchsia can run on a variety of platforms ranging from embedded systems to smartphones and tablets. Earlier this year in January, 9to5Google published evidence that Fuchsia can also run Android applications. Apparently, a new change was spotted by the 9to5 team in the Android Open Source Project that makes use of a special version of ART (Android Runtime) to run Android apps. This feature would allow devices such as computers, wearables, etc. to leverage Android apps in the Google Play Store.

Public reaction to the news is positive, with people supporting the announcement:

https://twitter.com/aksharpatel47/status/1108136513575882752
https://twitter.com/damarnez/status/1108090522508410885
https://twitter.com/risyasin/status/1108029764957294593

Google's Smart Display – A push towards the new OS, Fuchsia
Fuchsia's Xi editor is no longer a Google project
Google AI releases Cirq and Open Fermion-Cirq to boost Quantum computation

Apple Music is now available on your web browser

Sugandha Lahoti
06 Sep 2019
2 min read
Yesterday, Apple brought its popular Apple Music streaming service to the web. Apple Music for the web has launched in public beta, and you can access it from anywhere in the world with your Apple ID if you are an Apple Music subscriber. This is the first time that Apple Music has been officially offered on the web. You can visit beta.music.apple.com to get started. New users will be able to sign up for Apple Music through the website later on.

Twitterati were taken by surprise as Apple Music came to the web, and appreciation tweets flooded the social media platform.

https://twitter.com/viticci/status/1169715776279973889
https://twitter.com/bzamayo/status/1169705640215945218
https://twitter.com/kylewagaman/status/1169878550523940865

Apple Music for the web allows you to search for and play any song in the Apple Music catalog. If you have set up the Sync Library option on other devices, you can play tunes from your own library as well. All the main sections from the Apple Music app are also available, including Library, Search, For You, Browse, and Radio. The player has some of the same features as the macOS Catalina Music app, for instance adapting to a Dark Mode setting.

At WWDC, Apple announced that with macOS Catalina it is replacing iTunes with Apple Music. Once the new Music app launches on Mac this fall, it will help Apple move away from supporting iTunes on Windows. A web app is also accessible for people unable to install the iTunes or Apple Music apps.

This is another of Apple's steps in putting more focus on services. Building a web app is a sensible business move that will benefit Apple's current and future subscribers. Apple Music on the web also brings the company on par with Spotify, Apple's biggest competitor in the music sphere. In March this year, Spotify filed an EU antitrust complaint against Apple; Apple responded that Spotify's aim is to make more money off others' work.

More interesting news for Apple
Is Apple's 'Independent Repair Provider Program' a bid to avoid the 'Right To Repair' bill?
Apple accidentally unpatches a fixed bug in iOS 12.4 that enables its jailbreaking
Apple announces 'WebKit Tracking Prevention Policy' that considers web tracking as a security vulnerability

Top 5 Google I/O 2018 conference Day 1 Highlights: Android P, Android Things, ARCore, ML kit and Lighthouse

Sugandha Lahoti
10 May 2018
7 min read
Google I/O 2018, Google's most anticipated conference, kicked off yesterday at the Shoreline Amphitheatre in Mountain View, California. It seems like it was just yesterday that Google I/O 2017 ended and we were still in awe of the AI capabilities announced then, but here we are, with the next annual I/O event in front of us. On day 1, CEO Sundar Pichai delivered the keynote, promising a three-day gala event for over 7,200 attendees with a plethora of announcements and updates to Google products. I/O '18 will also run 400+ extended events in 85 countries.

Artificial intelligence was a big theme throughout. Google showcased ML Kit, an SDK for adding Google's machine learning smarts to Android and iOS apps. New features were added to Android P, Google's most ambitious Android update. Not to mention the release of Lighthouse 3.0, new anchor tools for multiplayer AR, and updates to Google Assistant, Gmail, Google Maps, and more. Here are our top picks from day 1 of Google I/O 2018.

Machine learning for mobile developers
Google's newly launched ML Kit SDK allows mobile developers to make use of Google's machine learning expertise in the development of Android and iOS apps. The kit allows integration of mobile apps with a number of pre-built, Google-provided machine learning models. These models support text recognition, face detection, barcode scanning, image labeling, and landmark recognition, among other things. What stands out here is that ML Kit is available both online and offline, depending on network availability and the developer's preference. In the coming months, Google plans to add a smart reply API and a high-density face contour feature for the face detection API to the list of currently available APIs.

New augmented reality experiences come to Android
At the Google I/O conference, Google also announced several updates to its ARCore platform, focused on overcoming the limitations of existing AR-enabled smartphones.

Multi-user and shared AR
New Cloud Anchor tools will enable developers to create new types of collaborative experiences, which can be shared with multiple users across both Android and iOS devices.

More surfaces to play around with
Vertical Plane Detection, a new feature of ARCore, allows users to place AR objects on more surfaces, like textured walls. Another capability, Augmented Images, brings images to life just by pointing a phone at them.

https://www.youtube.com/watch?v=uDs9rd7yD0I

Simpler AR development
New ARCore updates also simplify the process of AR development for Java developers with the introduction of Sceneform. Developers can now build immersive 3D apps optimized for mobile without having to learn complicated APIs like OpenGL. They can use Sceneform to build AR apps from scratch as well as to add AR features to existing ones.

Android P: the most ambitious Android OS yet
The name for the new version is yet to be decided, but judging by the trend of naming the OS after a dessert it may be Pumpkin Pie, Peppermint Patty, or Popsicle. I'm voting for Popsicle! Apart from the name, here are the other major features of the new OS:
- Jetpack: Jetpack is the next generation of the Android Support Library, redefining how developers write applications for Android. Jetpack manages tedious activities like background tasks, navigation, and lifecycle management, so developers can focus on core app development.
- Android KTX: At the last I/O conference, Google made the Kotlin language a first-class citizen for developing Android apps. Continuing that trend, Google announced Android KTX at I/O '18. It is a part of Jetpack that further optimizes the Kotlin developer experience across libraries, tooling, runtime, documentation, and training.
- Android Studio 3.2: There are 20 major features in this release of Android Studio, spanning from ultra-fast Android Emulator Snapshots and Sample Data in the Layout Editor to a brand new Energy Profiler to measure the battery impact of an app.
- Material Design 2: While other Google apps like Gmail and Tasks have already gotten a recent visual update, in Android P Google is overhauling the OS's overall look with what people are calling Material Design 2. Google calls it Material Themes, a powerful plugin to help designers implement Material Design in their apps. The new interface is designed to be "responsive and efficient" while feeling "cohesive" with the rest of the G Suite family of apps.
- Adaptive Battery: Apart from refreshing the looks, Google has been busy improving performance. Google has partnered with its AI subsidiary DeepMind on a smart battery management system for Android.

Scaling IoT with Android Things 1.0
After over 100,000 SDK downloads of the Developer Preview of Android Things, Google announced the release of Android Things 1.0 to developers, with long-term support for production devices.
- App Library allows developers to manage APKs more easily without the need to package them together in a separate zipped bundle.
- Visual storage layout helps in configuring the device storage allocated to apps and data for each build, and gives an overview of how much storage your apps require.
- Group sharing: product sharing has been extended to include support for Google Groups.
- Updated permissions give developers more control over the permissions used by apps on their devices.
Developers can manage their Android Things devices via a cloud-based Android Things Console. Devices themselves can manage OS and app updates, view analytics for device health and performance, and issue test builds of the software package.

Lighthouse 3.0 for better web optimization
A new update to Lighthouse, Google's web optimization tool, was also announced at Google I/O. Lighthouse 3.0 offers smaller waiting periods and more updates to developers to efficiently optimize their websites and audit their performance. It uses simulated throttling, with a new Lighthouse internal auditing engine that runs audits under normal network and CPU settings and then estimates how long the page would take to load under mobile conditions (a rough Node sketch of selecting this behavior follows the highlights below). Lighthouse 3.0 also features a new report UI along with invocation, scoring, audit, and output changes.

Other highlights
- Google announced the rebranding of its Google Research division to Google AI.
- Google made a massive "continued conversation" update to Google Assistant with Google Duplex, a new technology that enables Google's machine intelligence-powered virtual assistant to conduct a natural conversation with a human over the phone.
- Google announced the release of the third beta of Flutter, Google's mobile app SDK used for creating high-quality, native user experiences on mobile.
- Google Photos gets more AI-powered fixes such as B&W photo colorization, brightness correction, and suggested rotations.
- Google's first Smart Displays, the screen-enriched smart speakers, will launch in July, powered by Google Assistant and YouTube.
- Google Assistant is coming to Google Maps, available on iOS and Android.
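As referenced in the Lighthouse section above, here is a rough sketch of selecting simulated throttling when driving Lighthouse from Node. This is an assumption-laden illustration, not official sample code; the option names follow the documented Node APIs of the lighthouse and chrome-launcher npm packages and should be verified against the versions you install:

// Sketch: run a Lighthouse audit with simulated throttling from a Node script.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function audit(url: string): Promise<void> {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  try {
    const result = await lighthouse(url, {
      port: chrome.port,            // talk to the Chrome instance launched above
      output: 'json',
      throttlingMethod: 'simulate', // run unthrottled, then estimate mobile load time
    });
    console.log('Performance score:', result?.lhr.categories.performance.score);
  } finally {
    await chrome.kill();
  }
}

audit('https://example.com');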
There are still two more days left of Google I/O, and going by the day 1 announcements, I can't wait to see what's next. I am especially looking forward to knowing more about Android Auto, Google's Tour Creator, and Google Lens. You can view the livestream and other sessions on the Google I/O conference page. Keep visiting Packt Hub for more updates on Google I/O, Microsoft Build, and other key tech conferences happening this month.

Google's Android Things, developer preview 8: First look
Google open sources Seurat to bring high precision graphics to Mobile VR
Microsoft Build 2018 Day 1: Azure meets Artificial Intelligence

Magic Leap unveils Mica, a human-like AI in augmented reality

Sugandha Lahoti
12 Oct 2018
3 min read
In the keynote of its developer conference L.E.A.P., which took place Wednesday, Magic Leap showed a demo of its new human-like AI. Dubbed Mica, she can communicate with a viewer through the company's augmented reality glasses, the Magic Leap One Creator Edition. Basically, Mica is a short-haired woman who can produce facial expressions closely resembling a normal human's. She does not speak but can still communicate in warm ways with the viewer. The project was presented at the Magic Leap L.E.A.P. event by Andrew Rabinovich, head of AI at Magic Leap, and John Monos, head of human-centered AI.

According to the keynote, Mica is their prototype for developing systems that create digital human representations. The first prototype came with a realistic eye gaze and eye movement. Artificial intelligence components were then added to track users and look them in the eye, followed by additional AI elements for body language and posture. According to Nick Whiting from Epic Games, Mica is powered by Unreal Engine 4.

Magic Leap focused on creating natural facial expressions that can emote in believable ways. Their main goal was to create facial elements that connect users to her. Mica came out as an ideal interface to human-centered AI that evokes natural reactions from users. Mica handles interactions with the intelligence people expect; her personality traits and mannerisms are aligned to how the users behave with her.

VentureBeat's correspondent was invited for a demo of Mica. Per his experience: "I walked into a physical room and sat in a chair. Mica was sitting at the table in the same room. She smiled at me and look at me. I was struck that she wasn't just looking at me. She was looking in my eyes. She tilted her head from side to side. When I noticed how attentive she was, I moved my head forward and looked in her eyes. She did the same and looked at me. I moved my head back and she moved her head back too. She was mimicking some of the movements that she saw me make. She didn't talk, but that is coming in the future."

Magic Leap's Mica is a clear indication of what the virtual assistant future will look like for most people in the very near future. Read more about Magic Leap's L.E.A.P. conference to know what else was announced. You may also watch the keynote.

Magic Leap teams with Andy Serkis' Imaginarium Studios to enhance Augmented Reality
Understanding the hype behind Magic Leap's New Augmented Reality Headsets
Magic Leap One, the first mixed reality headsets by Magic Leap, is now available at $2295

Xamarin.Essentials 1.6 preview: macOS, media, and more! from Xamarin Blog

Matthew Emerick
07 Oct 2020
4 min read
Xamarin.Essentials has been a staple for developers building iOS, Android, and Windows apps with Xamarin and .NET since it was first released last year. Now, we are introducing Xamarin.Essentials 1.6, which adds new APIs including MediaPicker, AppActions, Contacts, and more. Not to mention that this release also features official support for macOS! This means that Xamarin.Essentials now offers over 50 native integrations with support for 7 different operating systems, all from a single library that is optimized for performance, linker safe, and production ready. Here is a highlight reel of all the new features:

https://devblogs.microsoft.com/xamarin/wp-content/uploads/sites/44/2020/09/Xamarin.Essentials-1.6.mp4

Welcome macOS
Since the first release of Xamarin.Essentials, the team and community have been continuously working to add more platforms to fit developers' needs. After adding tvOS, watchOS, and Tizen support, the next natural step was first-class support for macOS to complement the UWP desktop support. I am pleased to announce most APIs are now supported for macOS 10.12.6 (Sierra) and higher! Take a look at the updated platform support page to see all of the APIs that you can leverage in your macOS apps.

MediaPicker and FilePicker
The time has finally come for brand new media capabilities in Xamarin.Essentials. These new APIs enable you to easily access device features such as picking a file from the system, selecting photos or videos, or having your user take a photo or video with the camera.

async Task TakePhotoAsync()
{
    try
    {
        var photo = await MediaPicker.CapturePhotoAsync();
        await LoadPhotoAsync(photo);
        Console.WriteLine($"CapturePhotoAsync COMPLETED: {PhotoPath}");
    }
    catch (Exception ex)
    {
        Console.WriteLine($"CapturePhotoAsync THREW: {ex.Message}");
    }
}

App Actions
App actions, shortcuts, and jump lists have all been simplified across iOS, Android, and UWP with this new API. You can now manually create and react to actions when the user selects them from the app icon.

try
{
    await AppActions.SetAsync(
        new AppAction("app_info", "App Info", icon: "app_info_action_icon"),
        new AppAction("battery_info", "Battery Info"));
}
catch (FeatureNotSupportedException ex)
{
    Debug.WriteLine("App Actions not supported");
}

Contacts
Does your app need the ability to get contact information? The brand-new Contacts API has you covered with a single line of code to launch a contact picker and gather information:

try
{
    var contact = await Contacts.PickContactAsync();

    if (contact == null)
        return;

    var name = contact.Name;
    var contactType = contact.ContactType; // Unknown, Personal, Work
    var numbers = contact.Numbers;         // List of phone numbers
    var emails = contact.Emails;           // List of email addresses
}
catch (Exception ex)
{
    // Handle exception here.
}

So Much More
That is just the start of the brand-new features in Xamarin.Essentials 1.6. When you install the latest update, you will also find new APIs including Screenshot, Haptic Feedback, and an expanded Permissions API. Additionally, there have been tweaks and optimizations to existing features and, of course, some bug fixes.

Built with the Community
One of the most exciting parts of working on Xamarin.Essentials is seeing the amazing community contributions. The additions this month included exciting large new APIs, small tweaks, and plenty of bug fixes. Thank you to everyone that has filed an issue, filed a feature request, reviewed code, or sent a full pull request down.

- sung-su.kim – Tizen FilePicker
- Andrea Galvani – UWP Authenticator Fixes
- Pedro Jesus – Contacts, Color.ToHsv/FromHsva
- Dimov Dima – HapticFeedback API
- Dogukan Demir – Android O Fixes in Permissions
- Sreeraj P R – Audio fixes on Text-to-Speech
- Martin Kuckert – iOS Web Authenticator Fixes
- solomonfried – WebAuthenticator Email
- vividos – FilePicker API
- Janus Weil – Location class fixes, AltitudeReferenceSystem addition
- Ed Snider – App Actions

Learn More
Be sure to read the full release notes and the updated documentation to learn more about each of the new features.

The post Xamarin.Essentials 1.6 preview: macOS, media, and more! appeared first on Xamarin Blog.

You can now permanently delete your location history, and web and app activity data on Google

Sugandha Lahoti
03 May 2019
4 min read
Google keeps track of everything that you do online, including the websites you visit, the ads you see, the videos you watch, and the things you search for. Soon, this is (partially) going to change. On Wednesday, Google launched a new feature allowing users to delete all or part of their location history and web and app activity data. This has been a long-requested feature among internet users, and Google says it "has heard user feedback that they need to provide simpler ways for users to manage or delete their data."

In the Q1 earnings shared by Google's parent company Alphabet, the company said that the EU's USD 1.49 billion fine on Google is one of the reasons its profit sagged in the first three months of this year. This was Google's third antitrust fine from the EU since 2017. In the Monday report, Alphabet said that profit in the first quarter fell 29 percent to USD 6.7 billion on revenue that climbed 17 percent to USD 36.3 billion.

"Without identifying you personally to advertisers or other third parties, we might use data that includes your searches and location, websites and apps you've used, videos and ads you've seen, and basic information you've given us, such as your age range and gender," the company explains on its Safety Center web page.

Google already allows you to turn off location history and web and app activity, and you can manually delete data generated from searches and other Google services. The new feature, however, lets you remove such information automatically. It offers a choice of how long you want your activity data to be saved:
- Keep until I delete manually
- Keep for 18 months, then delete automatically
- Keep for 3 months, then delete automatically
Based on the option chosen, any data older than that will be automatically deleted from your account on an ongoing basis. Surprisingly, Google still does not have an option that says "don't track me" or "automatically delete after I close the website", which would ensure 100 percent data privacy and security for users.

Enabling privacy has not been one of Google's strongholds in recent times. Last year, Google was caught in a scandal in which it tracked a person's location history in incognito mode, even when they had turned it off. In November last year, Google came under scrutiny by the European Consumer Organisation (BEUC), which published a report stating that Google uses various methods to encourage users to enable the 'location history' and 'web and app activity' settings that are integrated into all Google user accounts. They allege that Google is using these features to facilitate targeted advertising. "These practices are not compliant with GDPR, as Google lacks a valid legal ground for processing the data in question. In particular, the report shows that users' consent provided under these circumstances is not freely given," said BEUC, speaking on behalf of the countries' consumer groups. Google was also found helping the police use its location database to catch potential crime suspects, sometimes capturing innocent people in the process, per a recent New York Times investigation.

The new feature will be rolled out in the coming weeks for location history and for web and app activity data. It is likely to be incorporated for other data history as well, but this has not been officially confirmed. To enable this privacy feature, visit your Google account activity controls.

European Consumer groups accuse Google of tracking its users' location, calls it a breach of GDPR
Google's incognito location tracking scandal could be the first real test of GDPR
Google's Sidewalk Lab smart city project threatens privacy and human rights: Amnesty Intl, CA says

Google takes steps towards better security, introduces new API policies for 3rd parties and a Titan Security system for mobile devices

Bhagyashree R
10 Oct 2018
4 min read
With Google+ shutting down because of a data vulnerability, Google has been working towards providing users with better security for their data. On Monday, it introduced new policies aimed at better protecting that data. These new policies are focused on the Gmail APIs and will go into effect on January 9, 2019. Furthermore, at its hardware event yesterday, Google announced that it has integrated the Titan security chip into the newly launched Pixel 3, Pixel 3 XL, and Pixel Slate.

What are the newly introduced security policies?
The following policies will apply to apps accessing user data from consumer Google accounts.

Application types allowed to access the covered APIs
Only the permitted application types will be allowed to access these APIs. (The list of permitted types was shown as an image; source: Google.) Users will now get additional warnings if they allow applications to access their email without regular direct interaction, and applications will need to re-request consent to access user email at regular intervals.

The right use of user data
According to this policy, third-party apps must access these APIs only to use the data to provide user-facing features. They should not transfer or sell the data for other purposes such as ads, market research, email campaign tracking, or other unrelated purposes. Applications are permitted to use data from a user's email only if it is for the direct benefit of the user and not for market research. Also, human review of email data must be strictly limited.

Apps will have to pass assessments to ensure data security
To reduce the risk of data breaches, third-party apps handling Gmail data will have to meet minimum security standards. Apps will need to demonstrate secure data handling through a series of assessments, including:
- Application penetration testing
- External network penetration testing
- Account deletion verification
- Reviews of incident response plans
- Vulnerability disclosure programs
- Information security policies

Accessing only the information you need
Applications will be given limited API access, covering only the information necessary to implement the application. For instance, if an app does not need full or read access and only requires send capability, it will be allowed to request narrower scopes so that it only accesses the data needed for its features.

Applications accessing the covered Gmail APIs can submit an application beginning January 9, 2019, and must submit for review by February 15, 2019. These applications will be reviewed for compliance with the policies described above. After that, developers need to complete a security assessment by a third-party assessor, for which they will be charged a fee ranging between $15,000 and $75,000. This fee is due whether or not the app passes the assessment.

Titan Security chip comes to Pixel 3, Pixel 3 XL, and Pixel Slate
Google announced at yesterday's hardware event that it has integrated its in-house Titan Security chip into the newly launched Pixel 3, Pixel 3 XL, and Pixel Slate, making for a more secure experience for users. Google said in a blog post: "We're committed to the security of our users. We need to offer simple, powerful ways to safeguard your devices. We've integrated Titan Security, the system we built for Google, into our new mobile devices. Titan Security protects your most sensitive on-device data by securing your lock screen and strengthening disk encryption."

The Titan Security system was first introduced last year for Google Cloud Platform. It is a low-power, phishing-resistant two-factor authentication (2FA) microchip. The chip is used to secure the lock screen, strengthen disk encryption, and protect the integrity of the operating system. Rick Osterloh, senior vice president of hardware, said during the event: "By combining Titan Security both in the data center and on device, we've created a closed loop for your data across the Google ecosystem."

To read the full list of policies, check out Google's official announcement. Also read the announcement about their newly launched mobile devices: Pixel 3, Pixel 3 XL, and Pixel Slate.

Google opts out of Pentagon's $10 billion JEDI cloud computing contract, as it doesn't align with its ethical use of AI principles
Google reveals an undisclosed bug that left 500K Google+ accounts vulnerable in early 2018; plans to sunset Google+ consumer version
Ex-googler who quit Google on moral grounds writes to Senate about company's "Unethical" China censorship plan

‘Ethical mobile operating system’ /e/, an alternative for Android and iOS, is in beta

Prasad Ramesh
11 Oct 2018
5 min read
Right now, Android and iOS are the most widely used OSes on mobile phones. Both are owned by giant corporations, and there are no other offerings in line with public interest, privacy, or affordability. Android is owned by Google, which can hardly be called pro user privacy given all the tracking it does. iOS, from Apple, is a very closed OS, and not exactly affordable to the masses. Apart from some OSes in the works, there is an OS called /e/, or eelo, from the creator of Mandrake Linux, focused on user privacy.

Some OSes in the works
Some of the mobile OSes in the works include Tizen from Samsung, which has been released only with entry-level smartphones, and an OS in the making by Huawei. Google has also been working on a new OS called Fuchsia. It uses a new microkernel called Zircon, created by Google, instead of Linux. It is also in the early stages, and there is no clear indication of the purpose behind building Fuchsia when Android is ubiquitous in the market. Google was fined $5B over Android antitrust issues earlier this year; maybe Fuchsia can come into the picture here. In response to the EU's decision to fine Google, Sundar Pichai said that preventing Google from bundling its apps would "upset the balance of the Android ecosystem" and that the Android business model guaranteed zero charges for phone makers. This seems like a warning from Google that it may consider licensing Android to phone makers.

Will curtains be closed on Android over legal disputes? That does not seem very likely, considering Android smartphones and Google's services on those smartphones are a big source of income for Google. They would not let it go that easily, and I'm not sure the world is ready to let go of the Android OS either. It has given the large masses access to apps, information, and connectivity. However, there is growing discontent among Android users, developers, and handset partners. Whether that frustration will pivot enough to create a viable market for an alternative mobile OS is something only time can tell. Either way, there is one OS called /e/, or eelo, intent on displacing Android. It has made some progress, but it is not an OS made entirely from scratch.

What is eelo?
The above-mentioned OSes are far from complete and owned by large corporations. Here comes eelo: it is free and open source. It is a forked LineageOS with all the Google apps and services removed. But that's not all; it also has a select few default applications, a new user interface, and several integrated online services. The /e/ ROM is in the beta stage and can be installed on several devices. More devices will be supported as more contributors port and maintain the ROM for different devices. The ROM uses microG instead of Google's core apps. It uses Mozilla NLP, which makes geolocation available even when a GPS signal is not.

eelo project leader Gaël Duval states: "At /e/, we want to build an alternative mobile operating system that everybody can enjoy using, one that is a lot more respectful of user's data privacy while offering them real freedom of choice. We want to be known as the ethical mobile operating system, built in the public interest."

BlissLauncher is included, with original icons and support for widgets and automatic icon sizing based on screen pixel density. There are new default applications: a mail app, an SMS app (Signal), a chat application (Telegram), along with a weather app, a notes app, a tasks app, and a maps app. There is an /e/ account manager in which users can choose to use a single /e/ identity (user@e.email) for all services. It will also have OTA updates. The default search engine is searX, with Qwant and DuckDuckGo as alternatives. They also plan to open a project in the personal assistant area.

How has the market reacted to eelo?
Early testers seem happy with /e/, alternatively called eelo.

https://twitter.com/lowpunk/status/1050032760373633025
https://twitter.com/rvuong_geek/status/1048541382120525824

There are also some negative reactions, where people don't really welcome this new "mobile OS". A comment on Reddit by user JaredTheWolfy says: "This sounds like what Cyanogen tried to do, but at least Cyanogen was original and created a legacy for the community." Another comment, by user MyNDSETER, reads: "Don't trust Google with your data. Trust us instead. Oh gee ok and I'll take some stickers as well." Yet another Reddit user, zdakat, says: "I guess that's the android version of I made my own cryptocurrency! (by changing a few strings in Bitcoin source, or the new thing: by deploying a token on Ethereum)"

You can check out a detailed article about eelo on Hackernoon, and the /e/ website.

A decade of Android: Slayer of Blackberry, challenger of iPhone, mother of the modern mobile ecosystem
Microsoft Your Phone: Mirror your Android phone apps on Windows
Android Studio 3.2 releases with Android App Bundle, Energy Profiler, and more!

Google's Smart Display - A push towards the new OS, Fuchsia

Amarabha Banerjee
03 Aug 2018
5 min read
There is no denying that Amazon Echo, HomePod, and Alexa have brought about a revolution in the digital assistant space. A careful study, however, points to a few common observations about these products: they are merely a representation of what the assistant is thinking, and they can only perform scheduled tasks like playing a song, answering predefined questions, or making a to-do list. This is where the latest innovation by Lenovo on the Google-powered smart assistant stands out. It's not just an assistant that can perform pre-planned tasks; it's a Smart Display.

How does Google's Lenovo Smart Display look?
(Image source: Techhive)
The Google Smart Display comes in two versions: an 8-inch model with a 1280 x 800 resolution touchscreen for US$200, and a larger 10-inch version with a 1920 x 1200 display. Although Lenovo is the first company in the Smart Display domain, others like JBL and LG are also planning to come up with their own versions. The screen is not merely an add-on to speaker-based systems like Alexa; it is a display built on the Google smart assistant, which really increases its functionality. The rear design is sleek, and it's easy to use on a desktop.

What does it do?
The Smart Display runs on Qualcomm's 624 Home Hub platform, the faster of its two architectures for Android Things devices. While the Qualcomm 212 platform works well for things like smart speakers and low-power smart home accessories, the 624 Home Hub platform is better suited for the Smart Display. It helps process Google Assistant requests both audibly and visually.

How is the Smart Display different?
The main question here is how it is different from, or better than, existing solutions like Amazon Alexa or Echo. Simple performance tests have yielded results in favor of the Smart Display. A search on Alexa makes it search the internet and read out the first few answers, which can be interpreted as a question-and-answer-based smart system. The Smart Display, by contrast, doesn't just bring up a relevant graphical display; it shows relevant links and the most important aspects of your search on the screen. This is significantly faster, the reason being the powerful Google AI behind the system.

From Android to Fuchsia
The Smart Display UI and the already successful launch of Chrome OS have triggered discussions around the possible replacement of Android with the futuristic Fuchsia. The discussion is centered around Google's intention of creating a mobile OS with a formalized update cycle: an OS that won't have different versions running across different devices. Chrome OS seems to be the first step; it runs on the same Linux kernel as Android. While the recent developments related to Fuchsia are still under wraps, Google might want to eventually use Android as a runtime on Fuchsia. This can be a difficult task to perform, but it is a better option than running two kernels as overhead, one for Android and the other for Fuchsia.

The signs that this Smart Display UI is a way to test the waters for Fuchsia are plenty. The Smart Display UI is completely based on the Google smart assistant, which will be the core of Fuchsia. It doesn't have a home button or an app menu button; rather than swiping on screen, you navigate and search using voice commands, and voice command is also at the heart of Fuchsia. Android P, next in line for release, is also moving towards a similar UI. Android P will display your apps as cards and will promote them to the system level, avoiding a complete launch. This will help reduce system overhead and stop apps from running perpetually.

From all these indicators, it seems a natural progression for this Smart Display UI to become the face of the Fuchsia operating system. The challenge will be in migrating present Android phones to Fuchsia. Since Android is currently written in Java and bundled in a JVM bubble to cater to the Android system, developers believe that it wouldn't be a difficult task to create a similar intermediate layer for Fuchsia, which is written in Dart. The shift, however, seems a bit far-fetched for now; some discussions on Reddit suggest it might be as late as 2022. But the drive for this change is pretty clear:
- The need to have a uniform OS across all devices
- The independence of OS performance from system resources
- Taking backend operations to the cloud
- Making the UI voice controlled and reducing the touchscreen to a mere visual tool
We can only hope that Fuchsia can solve the problems for users that Android couldn't. A uniform mobile computing platform can be a good start, and Fuchsia seems to be the perfect successor to carry forward the legacy of Android and its huge fan base.

Android P Beta 4 is here, stable Android P expected in the coming weeks!
Is Google planning to replace Android with Project Fuchsia?
Boston Dynamics' 'Android of robots' vision starts with launching 1000 robot dogs in 2019