
Tech News - Mobile

204 Articles

React Native community announce March updates, post sharing the roadmap for Q4

Sugandha Lahoti
04 Mar 2019
3 min read
In November last year, the React Native team shared a roadmap for React Native to provide better support to its users and collaborators outside of Facebook. The team is planning to open source some of its internal tools and improve the most widely used tools in the open source community. Yesterday, they shared updates on the progress made in the two months since the roadmap's release. Per the team, the goals were to "reduce outstanding pull requests, reduce the project's surface area, identify leading user problems, and establish guidelines for community management."

Updates to pull requests

The number of open pull requests was reduced to 65. The average number of pull requests opened per day increased from 3.5 to 7. Almost two-thirds of pull requests were merged and one-third were closed. Of all the merged pull requests, only six caused issues; four affected only internal development and two were caught in the release candidate state.

Cleaning up for a leaner core

The developers are planning to reduce the surface area of React Native by removing non-core and unused components. The community response to the Lean Core project was massive: maintainers jumped in to fix long-standing issues, add tests, and support long-requested features. Examples of such projects are WebView, which has received many pull requests since its extraction, and the CLI, which is now maintained by members of the community and has received much-needed improvements and fixes.

Helping people upgrade to newer versions of React Native

One of the highest-voted problems was the developer experience of upgrading to newer versions of React Native. The team plans to recommend CocoaPods by default for iOS projects, which will reduce churn in project files when upgrading React Native and make it easier for people to install and link third-party modules. The team also acknowledged contributions from members of the community. One maintainer, Michał Pierzchała from Callstack, helped improve react-native upgrade by using rn-diff-purge under the hood.

Releasing React Native 0.59

For future releases, the team plans to:
- work with community members to create a blog post for each major release
- show breaking changes directly in the CLI when people upgrade to new versions
- reduce the time it takes to make a release by increasing automated testing and creating an improved manual test plan

These plans will also be incorporated in the upcoming React Native 0.59 release, which is currently published as a release candidate and is expected to be stable within the next two weeks.

What's next

The team will now focus on managing pull requests while also starting to reduce the number of outstanding GitHub issues. They will continue to reduce the surface area of React Native through the Lean Core project, address five of the top community problems, and work on the website and documentation.

Read next:
- React Native 0.59 RC0 is now out with React Hooks, and more
- Changes made to React Native Community's GitHub organization in 2018 for driving better collaboration
- The React Native team shares their open source roadmap, React Suite hits 3.4.0


Understanding the hype behind Magic Leap’s New Augmented Reality Headsets

Kunal Chaudhari
20 Apr 2018
4 min read
After six years of anticipation, Magic Leap, the secretive billion-dollar startup, has finally unveiled its first augmented reality headset. This mysterious new device is supposedly priced at $1,000 and hosts a variety of interesting new features. Let's take a look at why this company, known for working in "stealth mode", has been gaining so much popularity.

Magic Leap origins

Magic Leap was founded in 2010 by Rony Abovitz, a tech-savvy American entrepreneur who previously founded a company that manufactured surgical robotic arm assistance platforms. It was not until October 2014 that the company started to make the rounds in the news by receiving $540 million of venture funding from Google, Qualcomm, Andreessen Horowitz, and Kleiner Perkins, among other leading tech investors. Some saw this funding as a desperate attempt by Google to match Facebook's acquisition of Oculus, a virtual reality startup. This exaggerated valuation was based on little more than an ambitious vision of layering digital images on top of real-world objects with spatial glasses, with no actual revenue or products to show.

The anticipation

A year after the initial round of funding, Magic Leap released a couple of cool demos:

https://www.youtube.com/watch?v=kPMHcanq0xM (Just another day in the office at Magic Leap)
https://www.youtube.com/watch?v=kw0-JRa9n94 (Everyday Magic with Mixed Reality)

Both videos showcased augmented reality gaming and productivity applications. While the description of the first mentioned that it was just a concept video highlighting the potential of augmented reality, the second video claimed it was shot from the actual device without the use of any special effects. These demos skyrocketed the popularity of Magic Leap, creating huge anticipation among users, developers, and investors alike. The hype attracted the likes of Alibaba and Disney to join hands with the company in its quest for the next-generation augmented reality device.

Product announcement and pricing

After four years of hype videos and almost $2 billion in funding, Magic Leap finally unveiled its first product, called Magic Leap One Creator Edition. These headsets are specifically catered to developers and will start shipping later this year. The Creator Edition consists of three pieces of hardware (source: Magic Leap official website):

- Lightwear: the actual headset, which uses "Digital Lightfield" display technology with multiple integrated sensors to gather spatial information.
- Lightpack: the core computing power of the headset lies in the Lightpack, a circular belt-worn hip pack connected to the headset.
- Controller: a remote that contains buttons, six-degrees-of-freedom motion sensing, a touchpad, and haptic feedback. The remote-shaped controller appears very similar to the Samsung Gear VR and Google Daydream headset controllers.

Along with the headsets, Magic Leap also launched the Lumin SDK, the toolkit that allows developers to build AR experiences for Lumin OS, the operating system that powers the Magic Leap One. There's more: Magic Leap has made its SDK available for both the Unity and Unreal game engines, allowing a wide range of developers to start creating augmented reality experiences on their respective platforms.

Although Magic Leap hasn't shared details on the exact pricing of the headsets, if you go by what Rony Abovitz said in an interview, the price would be similar to that of a "premium computer". He also mentioned that the company is planning to develop high-end devices for enterprises as well as lower-end versions for the mass market.

Product trial shrouded in secrets

Magic Leap has, since its inception, been claiming to revolutionize the AR/VR space with its mysterious technology. The company boasts that proprietary features like "Digital Lightfield" and "Visual Perception" would solve the long-standing problem of dizziness caused by continuous use of these headsets. Still, a lot of specifications are missing, like the field of view or the processing power of the Lightpack processor. To add to the ambiguity, Magic Leap released a long list of security clauses for developers who want to try out its products, some almost asking developers to "lock away the hardware". But this isn't stopping investors from pouring in more funds: Magic Leap just received another $461 million from a Saudi Arabian sovereign investment fund. The uncertainty will only be cleared when the headsets become production ready and reach consumers. Until then, the hype remains...

To know more about the other features of Magic Leap One, check out the official webpage.


Meet Sapper, a military grade PWA framework inspired by Next.js

Sugandha Lahoti
10 Jul 2018
3 min read
There is a new web application framework in town. Categorized as "military grade" by its creator, Rich Harris, Sapper is a Next.js-style framework that comes close to being the ideal web application framework.

Fun fact: the name Sapper comes from the term for combat engineers, hence "military grade". It is also short for Svelte app maker.

Sapper offers a high-grade development experience, with declarative routing, hot-module replacement, and scoped styles. It also includes modern development practices on par with current web application frameworks, such as code-splitting, server-side rendering, and offline support. It is powered by Svelte, the UI framework that is essentially a compiler turning app components into standalone JavaScript modules.

What makes Sapper unique is that it dramatically reduces the amount of code that gets sent to the browser. In the RealWorld project challenge, the Sapper implementation took 39.6kb (11.8kb zipped) to render an interactive homepage. The entire app cost 132.7kb (39.9kb zipped), significantly smaller than the reference React/Redux implementation at 327kb (85.7kb zipped). In fact, the implementation totals 1,201 lines of source code, compared to 2,377 for the reference implementation.

Another crucial feature of Sapper is code splitting. If an app uses React or Vue, there is a hard lower bound on the size of the initial code-split chunk: the framework itself, which is likely to be a significant portion of the total app size. Sapper has no such lower bound, which makes the app even faster.

The framework is also extremely performant, memory-efficient, and easy to learn thanks to Svelte's template syntax. It has scoped CSS, with unused style removal and minification built in. It also ships svelte/store, a tiny global store that synchronises state across the component hierarchy with zero boilerplate.

Sapper has not yet reached version 1.0.0. Currently, Svelte's compiler operates at the component level; for the stable release, the team is looking at "whole-app optimisation", where the compiler understands the boundaries between components to generate even more efficient code. And because Sapper is written in TypeScript, official TypeScript support may follow.

Sapper may not be ready yet to take over from an established framework such as React. Developers may have an aversion to any form of "template language", and React is extremely flexible and appealing to new developers thanks to its highly active community and learning resources: the devtools, editor integrations, tutorials, StackOverflow answers, and even job opportunities. Compared to such a giant, Sapper still has a long way to go.

You can view the framework's progress and contribute your own ideas on Sapper's GitHub and Gitter.

Read next:
- Top frameworks for building your Progressive Web Apps (PWA)
- 5 reasons why your next app should be a PWA (progressive web app)
- Progressive Web AMPs: Combining Progressive Web Apps and AMP


Android Studio 3.3 released with support for Navigation Editor, C++ code lint inspections, and more

Sugandha Lahoti
16 Jan 2019
2 min read
Android Studio 3.3 was released earlier this week with official support for the Navigation Editor, improved incremental Java compilation when using annotation processors, C++ code lint inspections, and more. It is also the first release under Project Marble, Google's initiative to improve the quality and polish of the IDE. Other features include an updated New Project wizard and usability fixes for each of the performance profilers. Overall, this release addresses over 200 user-reported bugs.

The headline feature is the Navigation Editor, a visual editor for constructing XML resources using the Jetpack Navigation component. Developers can build predictable interactions between the screens and content areas of an app with the Navigation Editor and the Navigation component.

Other highlights:

- The Network Profiler now formats common text types found in network payloads by default, including HTML, XML, and JSON.
- The New Project wizard has been updated to support the range of device types, programming languages, and new frameworks in a streamlined manner.
- Android Studio 3.3 includes IntelliJ 2018.2.2 and bundles Kotlin 1.3.11. It also supports Clang-Tidy for C++ static code analysis.
- Build times decrease thanks to improved support for incremental Java compilation when using annotation processors.
- A new feature helps clean up unused settings and cache directories, and in-product sentiment buttons allow quick user feedback.
- The Android Gradle plugin uses Gradle's new task creation API to avoid initializing and configuring tasks that are not required to complete the current build.
- Android App Bundle support lets you build and deploy Google Play Instant experiences from a single Android Studio project.
- Android Emulator 28.0 supports launching multiple instances of the same Android Virtual Device (AVD).
- The default Memory Profiler capture mode on Android 8.0 Oreo (API level 26) and higher devices has changed to sampling allocations periodically.

You may check out the Android Studio release notes, Android Gradle plugin release notes, and the Android Emulator release notes for more details.
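As a quick illustration of what the Navigation Editor's graphs drive at runtime, here is a minimal, hypothetical Kotlin sketch; the action ID and click handler are illustrative, not taken from the release notes:

```kotlin
import android.view.View
import androidx.navigation.findNavController

// Hedged sketch: R.id.action_list_to_detail is a hypothetical action
// defined in a navigation graph XML file, which the Navigation Editor
// lets you author visually instead of by hand.
fun onArticleClicked(view: View) {
    // Follows the graph's action from the list screen to the detail screen.
    view.findNavController().navigate(R.id.action_list_to_detail)
}
```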
Read next:
- Android Studio 3.2 releases with Android App Bundle, Energy Profiler, and more!
- Android Studio 3.2 Beta 5 out, with updated Protobuf Gradle plugin
- What is Android Studio and how does it differ from other IDEs?


Introducing Android 9 Pie, filled with machine learning and baked-in UI features

Sugandha Lahoti
07 Aug 2018
4 min read
Google has launched Android 9, the next version of the Android operating system. Named Android Pie, following Android's convention of naming OS releases after sweet treats, Android 9 comes with a machine-learning-based interactive UI, security and privacy features, updates to connectivity and location, and more.

With the filling of machine learning

With machine learning at its core, Android 9 helps a phone learn by picking up on user preferences and adjusting automatically. (Image: Android Developers Blog)

Google has partnered with DeepMind on Adaptive Battery, which uses machine learning to prioritize system resources for the apps the user frequents most. Android 9 Pie uses Slices, which are basically UI templates that display interactive content from an app within other surfaces, such as the Google Search app or Google Assistant; they help users perform tasks faster by engaging them outside of the full-screen app experience. App Actions takes advantage of machine learning to bring an app to the user at just the right time, based on the app's semantic intents and the user's context. Another feature, Smart Linkify, lets users take advantage of TextClassifier models through the Linkify API, providing options for quick follow-on user actions. Android 9 also adds an updated version of the Neural Networks API to extend Android's support for accelerated on-device machine learning: Neural Networks API 1.1 adds support for nine new ops, and you can take advantage of it through TensorFlow Lite.

Baked-in UI features

Android 9 uses a simpler and more approachable UI to help users find, use, and manage their apps. (Image: Android Developers Blog)

There is a brand-new system navigation that makes apps more easily discoverable and Android's multitasking more approachable. Android 9 also adds display cutout support to take full advantage of the latest edge-to-edge screens: for immersive content, apps can use the display cutout APIs to check the position and shape of the cutout and request a full-screen layout around it. Messaging apps can take advantage of the new MessagingStyle APIs to show conversations, attach photos and stickers, and suggest smart replies; Android 9 will soon gain ML Kit support for generating smart reply suggestions. Finally, Android 9 adds a Magnifier widget to improve the experience of selecting text; the widget can also provide a zoomed-in version of any view or surface.

With the sprinkling of security and privacy

Major updates in Android 9 focus on preserving the privacy and security of users' data. (Image: Android Developers Blog)

Android 9 introduces the BiometricPrompt API so apps can show the standard system dialog instead of building their own. In addition to fingerprint (including in-display sensors), the API supports face and iris authentication. Android Protected Confirmation uses the Trusted Execution Environment (TEE) to guarantee that a given prompt string is shown to and confirmed by the user; only after successful user confirmation will the TEE sign the prompt string, which the app can verify. StrongBox is added as a new KeyStore type, providing API support for devices that offer key storage in tamper-resistant hardware with an isolated CPU, RAM, and secure flash. Android 9 also adds built-in support for DNS over TLS, automatically upgrading DNS queries to TLS if a network's DNS server supports it, and restricts access to the mic, camera, and all SensorManager sensors from apps that are idle.

These are just a select few updates in the Android 9 operating system; the full list of features is available on the Android Developers Blog. Starting today, Android 9 Pie is rolling out to all Pixel users worldwide, and it will reach many other devices in the coming months.
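To make the BiometricPrompt change concrete, here is a minimal Kotlin sketch assuming API level 28 and framework classes only; the strings and callback handling are illustrative:

```kotlin
import android.content.Context
import android.hardware.biometrics.BiometricPrompt
import android.os.CancellationSignal

// Hedged sketch: the app supplies only the text; the system draws the
// standard dialog and handles fingerprint, face, or iris input.
fun showBiometricPrompt(context: Context) {
    val prompt = BiometricPrompt.Builder(context)
        .setTitle("Sign in")
        .setDescription("Confirm your identity to continue")
        .setNegativeButton("Cancel", context.mainExecutor) { _, _ ->
            // User dismissed the system dialog.
        }
        .build()

    prompt.authenticate(
        CancellationSignal(),
        context.mainExecutor,
        object : BiometricPrompt.AuthenticationCallback() {
            override fun onAuthenticationSucceeded(
                result: BiometricPrompt.AuthenticationResult
            ) {
                // Proceed with the sensitive action.
            }
        }
    )
}
```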
Read next:
- Android P Beta 4 is here, stable Android P expected in the coming weeks!
- Google updates biometric authentication for Android P, introduces BiometricPrompt API
- Android P new features: artificial intelligence, digital wellbeing, and simplicity


All new Android apps on Google Play must target API Level 26 (Android Oreo) or higher to be published

Savia Lobo
03 Aug 2018
3 min read
At the Google I/O event held in May 2018, Google advised all developers to update to the latest Android APIs by August 1, 2018. Google has now instructed that all new Android apps on Google Play must target API level 26 (Android Oreo) in order to be published. The main reason for this strict decision is Google's focus on security updates. Google introduced Project Treble in Android 8.0 to make it easier for mobile phone manufacturers to release Android OS updates to their devices. Google works hard on backward compatibility and support APIs; however, it is very important to target the latest APIs to fully utilize new features and the backward compatibility support.

Google's roadmap for the new Android updates

Google wants developers to update all new and existing Android apps promptly. Here is Google's plan for which apps must update and when.

August 2018: All new apps are required to target API level 26 (Android Oreo 8.0) or higher. This covers all new apps not yet uploaded to Google Play, including alpha and beta apps.

November 2018: Updates to existing apps on Google Play are required to target API level 26 (Android Oreo 8.0) or higher. Note that existing apps that are not receiving updates are unaffected and will continue to work normally; only updates to those apps must target the latest APIs.

2019 onwards: Google says the targetSdkVersion requirement will advance to a new level each year, and all new apps and app updates will need to target the corresponding API level or higher within one year. Developers remain free to use a minSdkVersion of their choice, so there is no change to one's ability to build apps for older Android versions. Google encourages developers to provide backward compatibility as far as reasonably possible.

The updating process

Per Google, it is a very easy procedure. Some APIs have been updated or removed at the latest API level; if an app uses those, its code will have to be updated accordingly. Some changes introduced by recent platform versions:

- Implicit intents for bindService() no longer supported (Android 5.0)
- Runtime permissions (Android 6.0)
- User-added CAs not trusted by default for secure connections (Android 7.0)
- Apps can't access user accounts without explicit user approval (Android 8.0)

To know more about the update process, watch the Google I/O 2018 session 'Migrate your existing app to target Android Oreo and above':
https://www.youtube.com/watch?v=YyDnYaFtRS0&feature=youtu.be
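In Gradle terms, the new policy constrains targetSdkVersion, not minSdkVersion. Here is a minimal sketch of the relevant module-level build file, written in the Gradle Kotlin DSL with illustrative values:

```kotlin
// build.gradle.kts (module level) -- a sketch; values are illustrative.
plugins {
    id("com.android.application")
}

android {
    compileSdkVersion(28)

    defaultConfig {
        applicationId = "com.example.app"  // hypothetical package name
        // Still your choice: older devices remain supported at runtime.
        minSdkVersion(16)
        // Constrained by the Play policy: 26 or higher for new apps
        // from August 2018, and for app updates from November 2018.
        targetSdkVersion(26)
        versionCode = 1
        versionName = "1.0"
    }
}
```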
Read next:
- Unity partners with Google, launches ML-Agents Toolkit 0.4, Project MARS and more
- Working with shaders in C++ to create 3D games
- GitHub for Unity 1.0 is here with Git LFS and file locking support

Introducing Coil, an open-source Android image loading library backed by Kotlin Coroutines

Bhagyashree R
13 Aug 2019
3 min read
Yesterday, Colin White, a senior Android engineer at Instacart, introduced Coroutine Image Loader (Coil), a fast, lightweight, and modern image loading library for Android backed by Kotlin.

https://twitter.com/colinwhi/status/1160943333033648128

There are already a number of image loading libraries for Android, such as Glide, Fresco, Picasso, and Mirage. With Coil, however, the Instacart team aims to offer a library that is "more modern and simpler".

Key features in Coil

Backed by Kotlin: Coil offers a "simple, elegant API" by leveraging Kotlin language features like extension functions, inlining, lambda params, and sealed classes. It provides strong support for non-blocking asynchronous computation and work cancellation while ensuring maximum thread reuse with the help of Kotlin coroutines.

Leverages modern dependencies: Coil relies on standard, recommended dependencies such as OkHttp, Okio, and AndroidX Lifecycles. Square's OkHttp and Okio are efficient by default and let Coil avoid reimplementing things like disk caching and stream buffering. Likewise, AndroidX Lifecycles is the recommended way of tracking lifecycle state.

Lightweight: Coil's codebase has roughly 8x fewer lines of code than Glide. It adds approximately 1,500 methods to your APK, which is comparable to Picasso and significantly less than Glide and Fresco.

Supports extension: Coil's image pipeline consists of three main classes: Mappers, Fetchers, and Decoders. You can use these interfaces to augment or override the base behavior and add support for new file types.

Supports dynamic image sampling: Coil ships with a new feature, dynamic image sampling. Say you want to load a 500x500 image into a 100x100 ImageView. The library will load the image into memory at 100x100; but what if you want the quality of the 500x500 image? In that case, the 100x100 image is used as a placeholder while the 500x500 image is read, and Coil handles this automatically for all BitmapDrawables. The placeholder is set synchronously on the main thread, preventing white flashes where the ImageView is empty for a frame, and a crossfade animation creates a visual effect where the image detail appears to fade in.

To know more about Coil, check out its official documentation and GitHub repository.
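As a quick taste of the API, here is a minimal Kotlin sketch using Coil's ImageView extension function as of the launch release; the URL and drawable ID are placeholders:

```kotlin
import android.widget.ImageView
import coil.api.load

// Hedged sketch: Coil sizes the request to the view, runs it on a
// coroutine, and cancels it automatically when the view goes away.
fun bindCover(imageView: ImageView) {
    imageView.load("https://www.example.com/image.jpg") {
        crossfade(true)                     // fade in once loaded
        placeholder(R.drawable.placeholder) // hypothetical drawable
    }
}
```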
Read next:
- 25 million Android devices infected with 'Agent Smith', a new mobile malware
- Mozilla launches Firefox Preview, an early version of a GeckoView-based Firefox for Android
- Facebook released Hermes, an open-source JavaScript engine to run React Native apps on Android


Facebook releases DeepFocus, an AI-powered rendering system to make virtual reality more real

Natasha Mathur
20 Dec 2018
3 min read
Facebook yesterday released DeepFocus, an "AI-powered rendering system" that works with Half Dome, a special prototype headset that Facebook's Reality Lab (FRL) team has been working on for the past three years. Half Dome is an example of a "varifocal" head-mounted display (HMD) comprising eye-tracking camera systems, wide-field-of-view optics, and adjustable display lenses that move forward and backward to match your eye movements. This makes the VR experience a lot more comfortable, natural, and immersive. However, Half Dome needs software to reach its full potential; that is where DeepFocus comes into the picture.

"Our eyes are like tiny cameras: When they focus on a given object, the parts of the scene that are at a different depth look blurry. Those blurry regions help our visual system make sense of the three-dimensional structure of the world and help us decide where to focus our eyes next. While varifocal VR headsets can deliver a crisp image anywhere the viewer looks, DeepFocus allows us to render the rest of the scene just the way it looks in the real world: naturally blurry," says Marina Zannoli, a vision scientist at FRL.

Facebook is also open-sourcing DeepFocus, making the system's code and the data set used to train it available to help other VR researchers incorporate it into their work. "By making our DeepFocus source and training data available, we've provided a framework not just for engineers developing new VR systems, but also for vision scientists and other researchers studying long-standing perceptual questions," say the researchers.

https://www.youtube.com/watch?v=Xp6OlfJEmAo

DeepFocus

A research paper presented at SIGGRAPH Asia 2018 explains that DeepFocus is a unified rendering and optimization framework based on convolutional neural networks that solves a full range of computational tasks and enables real-time operation of accommodation-supporting HMDs. The CNN comprises "volume-preserving" interleaving layers that help it quickly figure out the high-level features within an image. For instance, the paper notes that it accurately synthesizes defocus blur, focal stacks, multilayer decompositions, and multiview imagery, using only commonly available RGB-D images, enabling real-time, near-correct depictions of retinal blur. The researchers explain that DeepFocus is "tailored to support real-time image synthesis... and... includes volume-preserving interleaving layers... to reduce the spatial dimensions of the input, while fully preserving image details, allowing for significantly improved runtimes".

Unlike traditional deep-learning systems used for image analysis, DeepFocus can process the visuals while preserving the ultra-sharp image resolutions necessary for delivering a high-quality VR experience. The researchers add that DeepFocus can also grasp complex image effects and relations, including foreground and background defocusing. And DeepFocus isn't limited to Oculus HMDs: since it supports high-quality image synthesis for multifocal and light-field displays, it is applicable to the complete range of next-generation head-mounted display technologies. "DeepFocus may have provided the last piece of the puzzle for rendering real-time blur, but the cutting-edge research that our system will power is only just beginning," say the researchers.

For more information, check out the official Oculus Blog.

Read next:
- Magic Leap unveils Mica, a human-like AI in augmented reality
- Magic Leap acquires Computes Inc to enhance spatial computing
- Oculus Connect 5 2018: Day 1 highlights include Oculus Quest, Vader Immortal and more!


The Ionic team announces the release of Ionic React Beta

Bhagyashree R
22 Feb 2019
2 min read
Yesterday, the team behind Ionic announced the beta release of Ionic React. Developers can now use all the Ionic components in their React applications. Ionic React ships with almost 70 components, including buttons, cards, menus, tabs, alerts, and modals, and it comes with TypeScript type definitions.

Ionic is an open source framework consisting of UI components for building cross-platform applications. These components are written in HTML, CSS, and JavaScript and can easily be deployed natively to iOS and Android devices, to the desktop with Electron, or to the web as a progressive web app.

Historically, Ionic has been associated with Angular, but this changed with its recent Ionic 4 release: developers can now use the Ionic app development framework alongside any frontend framework. The Ionic team has been working towards making Ionic work with React and Vue for a long time. React developers already have React Native to make native apps for iOS and Android, but with Ionic React they will also be able to create hybrid mobile, desktop, and progressive web apps. In the future, the team also plans to make React Native and Ionic work together.

You can easily get started with Ionic React using the create-react-app tool, and the Ionic team recommends using TypeScript in your apps for a better developer experience. As Ionic React is still in its early days, it is advised not to use it in production.

To read the full announcement, visit Ionic's official website.

Read next:
- Ionic Framework 4.0 has just been released, now backed by Web Components, not Angular
- Ionic v4 RC released with improved performance, UI Library distribution and more
- Ionic framework announces Ionic 4 Beta


Leap Motion open sources its $100 augmented reality headset, North Star

Kunal Chaudhari
13 Apr 2018
3 min read
Leap Motion, famous for its hand tracking hardware and software, has announced its move into the augmented reality space with Project North Star, an augmented reality platform. The company plans to open source the project, which includes a design for a headset that Leap Motion claims would cost less than $100 at large-scale production. (Image credits: Leap Motion official blog)

Founded in 2010, Leap Motion first ventured into hand tracking technology by announcing its own motion controllers, which allowed users to interact with a computer by waving their hands, fingers, or other digits around to control games, maps, or other apps. While the technology was cool at the time, it left some users unimpressed because of the controllers' sensitivity and the lack of apps available to play with. But the company is still around, and now it is unveiling something that could be revolutionary, or could just be another cool idea that fails to catch on. Here's a closer look.

Design

Project North Star isn't a new consumer headset; the company is releasing the necessary hardware specifications, designs, and software under an open source license. The headset design uses two fast-refreshing 3.5-inch LCD displays with a resolution of 1600x1440 per eye at 120fps and a 100-degree field of view, and it features Leap Motion's 180-degree hand tracking sensor. The company claims the headset offers a wider field of view than most AR systems that exist today, specifically comparing it with the Microsoft HoloLens, which offers a 70-degree field of view. Most existing virtual and augmented reality headsets require handheld controllers for input, but with the Leap Motion sensor, users don't need to hold anything in their hands at all.

Pricing

Leap Motion doesn't plan to sell the headset; instead, it will make the hardware and software open source in the hope that someone else will build and sell the headsets, which the company says could cost less than $100 to produce. David Holz, the chief technology officer at Leap Motion, mentioned in a blog post, "Although this is an experimental platform right now, we expect that the design itself will spawn further endeavors that will become available for the rest of the world." This suggests that with relatively low-cost, open source hardware, third parties can experiment with the technology.

While all these features sound promising, plenty of details are yet to be revealed. A thorough comparison with other prominent AR devices like Magic Leap and HoloLens will be necessary to identify Leap Motion's true potential. Until then, you can visit the official webpage to see some cool demos.

Check out other latest news:
- Windows launches progressive web apps… that don't yet work on mobile
- Verizon launches AR Designer, a new tool for developers

Google I/O 2019 D1 highlights: smarter display, search feature with AR capabilities, Android Q, linguistically advanced Google lens and more

Fatema Patrawala
09 May 2019
11 min read
This year's Google I/O was meant to be big, and it didn't disappoint. There is a lot of news to talk about, as Google introduced and showcased exciting new products, updates, features, and functionality for a much better user experience. Google I/O kicked off yesterday and runs through Thursday, May 9 at the Shoreline Amphitheater in Mountain View, California, with approximately 7,000 attendees from all around the world.

"To organize the world's information and make it universally accessible and useful. We are moving from a company that helps you find answers to a company that helps you get things done. Our goal is to build a more helpful Google for everyone." Sundar Pichai, Google CEO, commenced his keynote with these strong statements. He listed a few recent tech advances and said, "We continue to believe that the biggest breakthroughs happen at the intersection of AI." He then discussed how Google is confident it can do more AI without private data leaving users' devices, with federated learning at the heart of the solution. Federated learning is a distributed machine learning approach that enables model training on a large corpus of decentralized data: mobile phones in different geographical locations collaboratively train a machine learning model without transferring any data that may contain personal information off the devices. The keynote lasted nearly two hours; the breakthrough announcements are detailed below.

Google Search at Google I/O 2019

Google remains a search giant, and that's something it has not forgotten at Google I/O 2019. Search is about to become far more visually rich, thanks to AR camera tricks introduced directly into search results. An on-stage demonstration showed how a medical student could search for a muscle group and be presented, within mobile search results, with a 3D representation of the body part. Not only could it be manipulated within the search results, it could be placed on the user's desk to be seen at real scale through the smartphone's screen. Even larger things, like an AR shark, can be put on screen straight from the app; the Google team showed off this feature by having the shark virtually appear live in front of the audience. (Image: Google)

Google Lens bill splitting and food recommendations

Google Lens caught the audience's interest among Google's app arsenal. Lens uses image recognition to deliver information based on what your camera is looking at. A demo showed how a combination of mapping data and image recognition lets Google Lens make recommendations from a restaurant's menu, just by pointing your camera at it. And when the bill arrives, point your camera at the receipt and Lens will show tipping information and help split the bill. Google also announced a partnership with recipe providers that allows Lens to produce video tutorials when your phone is pointed at a written recipe. (Image: Google)

Debut of the Android Q beta 3

Android Q beta 3 was introduced at Google I/O. Android Q is the 10th generation of the Android operating system, and it comes with new features for phone and tablet users. Google announced that there are over 2.5 billion active Android devices, as the software extends to televisions, in-car systems, and smart screens like the Google Home Hub. The team also discussed how Android will work with foldable devices, seamlessly tweaking its UI depending on the format and ratio of the folding device.

A new live caption feature in Android Q instantly turns audio into text to be read. It is a system function triggered from the volume rocker menu; captions can be tweaked for legibility, don't require an internet connection, and appear on videos that have never been manually close-captioned. Because it works at the OS level, it works across all your apps. (Image: Google)

The smart reply feature now works across all messaging apps in Android Q, with the OS smartly predicting your text. A dark theme, activated by battery saver or a quick tile, was also introduced: lighting up fewer of a phone's pixels saves battery life. Android Q also doubles down on security and privacy features, such as a Maps incognito mode, reminders for location usage and sharing (for example, only while a delivery app is in use), and TLS 1.3 support. Security updates will roll out faster too, updating over the air without the device needing a reboot. Along with Android Q beta 3, which launches today on 21 new devices, Google announced that Kotlin, a statically typed programming language, is now its preferred language for writing Android apps.

Chrome to be more transparent in terms of cookie control

Google announced that it will update Chrome to provide users with more transparency about how sites use cookies, as well as simpler controls for cross-site cookies. Chrome will change how cookies work so that developers must explicitly specify which cookies are allowed to work across websites, and so could be used to track users. The mechanism is built on the web's SameSite cookie attribute; the technical details are on web.dev. In the coming months, Chrome will require developers to use this mechanism to access their cookies across sites.

This change will let users clear all such cookies while leaving single-domain cookies unaffected, preserving user logins and settings. It will also enable browsers to provide clear information about which sites are setting these cookies, so users can make informed choices about how their data is used. The change also has a significant security benefit, protecting cookies from cross-site injection and data disclosure attacks like Spectre and CSRF by default. Google further announced that it will eventually limit cross-site cookies to HTTPS connections, providing additional important privacy protections for users. Developers can start to test their sites and see how these changes will affect behavior in the latest developer build of Chrome.

"We believe these changes will help improve user privacy and security on the web — but we know that it will take time. We're committed to working with the web ecosystem to understand how Chrome can continue to support these positive use cases and to build a better web," say Ben Galbraith, Director, Chrome Product Management, and Justin Schuh, Director, Chrome Engineering.
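To ground the cookie change, here is a rough, hypothetical server-side sketch (servlet-style Kotlin, not from Google's announcement) of what explicitly marking cookies looks like:

```kotlin
import javax.servlet.http.HttpServletResponse

// Hedged sketch: names and values are illustrative. Under the new
// Chrome behavior, only cookies explicitly marked SameSite=None work
// across sites, and those will eventually require HTTPS (Secure).
fun writeCookies(resp: HttpServletResponse) {
    // Cross-site cookie: must opt in explicitly.
    resp.addHeader("Set-Cookie", "widget_id=abc123; SameSite=None; Secure")

    // Single-domain cookie: unaffected when users clear cross-site
    // cookies, so logins and settings are preserved.
    resp.addHeader("Set-Cookie", "session=xyz789; SameSite=Lax; HttpOnly; Secure")
}
```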
Google also announced Flutter for web, mobile, and desktop: web-based applications can now be built using the Flutter framework, the core framework for mobile devices is upgraded to Flutter 1.5, and desktop support ships as an experimental project.

Next-generation Google Assistant

Google has been working hard to compress and streamline the AI that Google Assistant taps into from the cloud when processing voice commands. Currently, every voice request has to be run through three separate processing models to land on the correctly understood voice command, data that until now has taken up 100GB of storage on many Google servers. That's about to change: Google has figured out how to shrink it down to just 500MB of storage space and put it on the device. This lowers the latency between a voice request and the triggered task being carried out; it's 10x faster, 'real time', according to Google.

In a demo, a Google rep fired off a string of voice commands that required Google Assistant to access multiple apps, execute specific actions, and understand not only what the rep was saying, but what she actually meant. For example, she said, "Hey Google, what's the weather today? What about tomorrow? Show me John Legend on Twitter; Get a Lyft ride to my hotel; turn the flashlight on; turn it off; take a selfie." Assistant executed the whole sequence flawlessly, in a span of about 15 seconds. (Image: Google) Further demos showed off its ability to compose texts and emails drawing on information about the user's travel plans, traffic conditions, and photos. And last but not least, it can silence your alarms and timers when you just say 'Stop', to help you go back to your slumber.

Google Duplex gets smarter

Google Duplex is a Google Assistant service that makes calls and bookings on your behalf on request. It is getting smarter with the new 'Duplex on the web' feature: you can now ask Google Duplex to plan a trip, and it will begin filling in website forms, such as reservation details and hire-car bookings, on your behalf, only awaiting your confirmation of the details it has entered.

Google Home Hub is dead, long live the Nest Hub Max

Google announced it is dropping the Google Home moniker, rebranding its devices under the Nest name and bringing them in line with its security systems. The Nest Hub Max was introduced, with a camera and a larger 10-inch display. With a built-in Nest Cam wide-angle (127-degree) security camera, which the original Home Hub omitted due to privacy concerns, it is now a far more security-focused device, and it lets you make video calls using a wide range of video calling apps. For the privacy-conscious, the cameras and mics can be physically switched off with a slider that cuts off the electronics. (Image: Google)

Voice Match and Face Match features, which let families create voice and face models, allow the Hub Max to show only an individual's information and recommendations. It also doubles as a kitchen TV if you have access to YouTube TV, and lowering the volume is as simple as raising your hand in front of the display. It launches this summer at $229 in the US and AU$349 in Australia, while the original Hub gets a price cut to $129 / AU$199.

Other honorable mentions

Google Stadia: Google introduced its new game-streaming service, Stadia, in March. The service uses Google's own servers to store and run games, which you can then connect to and play whenever you'd like on literally any screen in your house, including your desktop, laptop, TV, phone, and tablet. Basically, if it's internet-connected and has access to Chrome, it can run Stadia. At I/O, Google announced that Stadia will stream games from the cloud not only to the Chrome browser but also to the Chromecast and to Pixel and other Android devices. The launch is planned for later this year in the US, Canada, UK, and Europe.

A cheaper Pixel phone: While other smartphones are getting more competitive in pricing, Google introduced the new Pixel 3a, which is less powerful than the existing Pixel 3 and, at a base price of $399, half as expensive. In 2017, Forbes published an analysis of why the Google Pixel failed in the market, and one reason was its exorbitant price: the tech giant needs to come to the realization that its brand in the phone hardware business is just not worth as much as Samsung's or Apple's, so it cannot command the same price premium.

Focus mode: A new feature coming to Android P and Q devices this summer will let you turn off your most distracting apps to focus on a task, while still allowing text messages, calls, and other important notifications through.

Augmented reality in Google Maps: AR is one of those technologies that always seems to impress the tech companies that make it more than it impresses their actual users. But Google may finally be finding some practical uses for it, like overlaying walking directions when you hold up your phone's camera to the street in front of you.

Incognito mode for Google Maps: Google also announced a new incognito mode for Google Maps, which will stop keeping records of your whereabouts while it's enabled; the feature will further roll out to Google Search and YouTube.

Read next:
- Google I/O 2019: Flutter UI framework now extended for Web, Embedded, and Desktop
- You can now permanently delete your location history, and web and app activity data on Google
- Google's Sidewalk Lab smart city project threatens privacy and human rights: Amnesty Intl, CA says


React Native 0.56 is now available

Sugandha Lahoti
10 Jul 2018
2 min read
React Native, Facebook's framework for building native apps using React, is now available in a new version. Version 0.56 is a fundamental building block towards a more stable framework, leading to a better July 2018 (0.57.0) release. This was a long-awaited release, with a lot of discussion between "waiting for more stability" and "testing led to successful results, so push forward". The ride to release was not smooth, but with dedicated community communication the React Native 0.56.0 release was eventually stabilized. The major changes include:

Support for Babel 7

Version 0.56 adds support for the latest version of Babel, the transpiler that allows React Native to use the latest features of JavaScript. Babel 7 brings a variety of important changes, and the React team will now allow Metro, the JavaScript bundler for React Native, to leverage its improvements.

Modernized Android support

React Native has updated its Android support for faster builds. The update will also help developers comply with the new Play Store requirements coming into effect next month. Version 0.56 supports Gradle 3.5, Android SDK 26, Fresco 1.9.0, and OkHttp 3.10.0, and moves the NDK API target to API 16. Interested developers can follow the discussion on Android developments in the dedicated issue list.

New Node, Xcode, React, and Flow

Node 8 is now the standard for React Native, and React is updated to v16.4. Version 0.56 drops support for iOS 8, making iOS 9 the oldest iOS version that can be targeted. The continuous integration toolchain has been updated to use Xcode 9.4, ensuring that all iOS tests run on the latest developer tools provided by Apple. The team has also upgraded to Flow 0.75, which uses the new error format, and created types for many more components. Finally, YellowBox is replaced with a new implementation that makes debugging easier.

For the complete release notes, you can reference the full changelog. Also keep an eye on the upgrading guide to avoid issues when moving to this new version.

Read next:
- React Native announces re-architecture of the framework for better performance
- Is React Native really a Native framework?
- React Native Performance


Is Apple's ‘Independent Repair Provider Program’ a bid to avoid the ‘Right To Repair’ bill?

Vincy Davis
30 Aug 2019
4 min read
Yesterday, Apple announced a new Independent Repair Provider Program, which will offer customers additional options for common out-of-warranty iPhone repairs. It will provide independent repair businesses with genuine Apple parts, tools, training, repair manuals, and diagnostics. Customers can now approach these independent repair shops to fix their devices instead of being restricted to Apple Authorized Service Providers (AASPs). The program is only available in the U.S. for now but will soon be expanded to other countries.

To qualify for the Independent Repair Provider Program, an independent repair business needs at least one Apple-certified technician to perform the iPhone repairs. In the press release, Apple states that only "qualifying repair businesses will receive Apple-genuine parts, tools, training, repair manuals and diagnostics at the same cost as AASPs." Apple's certification program is simple, and an indie business can enroll in it free of cost.

Apple's Chief Operating Officer, Jeff Williams, says, "When a repair is needed, a customer should have confidence that the repair is done right. We believe the safest and most reliable repair is one handled by a trained technician using genuine parts that have been properly engineered and rigorously tested."

Over the past year, Apple has run a "successful pilot" with 20 independent repair businesses that supplies genuine parts to customers in North America, Europe, and Asia.

Read also: Apple announces expanded security bug bounty program up to $1 million; plans to release iOS Security Research Device program in 2020

Is this Apple's way to avoid the 'Right To Repair' bill?

Apple's sudden shift in trajectory comes as a surprise after reports that Apple was trying hard to kill the 'Right to Repair' bill in California. If passed, the bill would give customers the right to fix or modify their devices without voiding their warranty. Apple representatives protested the bill by stoking fears of battery explosions for consumers who attempt to repair their own iPhones. The bill has currently been pushed to 2020, allegedly due to Apple's successful lobbying of Californian lawmakers.

Many people believe Apple is going to use the Independent Repair Provider Program to support its side of the right-to-repair debate. A user on Hacker News says, "Pretty straightforward attempt to stave off right-to-repair laws... and coming after years of attempts to destroy independent repair businesses. Very hard to see this as a good faith effort by Apple." Another user comments, "I feel like this is an end-run attempt to avoid right-to-repair legislation. While I hope for the best from this program, it seems to little to late and in direct opposition of prior arguments they've made concerning third-party repairs and parts distribution."

Apple's iPhone sales have declined in the past two fiscal quarters. Kyle Wiens, chief executive of repair guide company iFixit and a longtime advocate for right-to-repair laws, said, "This is Apple realizing that the market for repair is larger than Apple could ever handle themselves, and providing independent technicians with genuine parts is a great step." But, he said, "what this clearly shows is that if right-to-repair legislation passed tomorrow, Apple could instantly comply." Another critical point not highlighted by Apple in the press release, he says, is that the program does not give customers the opportunity to repair their own phones. Apple also reserves the right to reject any application, without comment.

https://twitter.com/kwiens/status/1167076331953090561

Others believe this is a step in the right direction.

https://twitter.com/LaurenGoode/status/1167156636789592064

A Redditor comments, "Sounds like Apple is choosing a great halfway point between letting anyone access parts and only letting established shops/businesses from offering repairs. Hopefully we'll see more independent repair stores offering repairs officially if as it sounds it's free to apply and get certified!"

Another reason Apple probably felt the need to change its longstanding policy is the appearance of viable, easy-to-repair alternatives in the smartphone market. This week, for instance, Fairphone, a Dutch company, launched a sustainable smartphone called Fairphone 3, which aims to minimize electronic waste. The Fairphone 3 contains 7 modules designed to support easy repairs, while still boasting the tech specifications of a modern 2019 smartphone.

For more details on the Independent Repair Provider Program, head over to Apple's official press release. Interested businesses can also check out the program's official website.

Other interesting news in tech:
- The Accelerate State of DevOps 2019 Report: Key findings, scaling strategies and proposed performance & productivity models
- "Rust is the future of systems programming, C is the new Assembly": Intel principal engineer, Josh Triplett
- #Reactgate forces React leaders to confront community's toxic culture head on

Oculus Go, the first stand alone VR headset arrives!

Sugandha Lahoti
03 May 2018
3 min read
At the two-day F8 conference hosted by Facebook, Oculus unveiled a new virtual reality headset. The Oculus Go is priced at an astonishing $199, far less than its predecessors (the Oculus Rift headset costs around $399). Here's a quick rundown of the key features.

Self-contained headset

The Oculus Go is completely self-contained and standalone: the hardware, screen, and processor are all contained within the headset, and the functional pistol-grip Oculus controller is included in the box. Users don't need a special computer, graphics card, game console, or even a phone to operate the VR device, as it is completely autonomous.

Rich display

The Oculus Go is equipped with a 5.5-inch, 2,560x1,440-pixel LCD display that looks particularly crisp when reading text or watching videos. It uses optimized 3D graphics, which reduce the screen-door effect typically encountered on most VR headsets, and fixed foveated rendering, which renders the area at the center of the display more sharply than the edges, to make many apps look even better.

Powerful sound

Spatial audio drivers are built into the headset, providing direct, immersive surround sound without the need for earphones. Alternatively, it also has a 3.5mm audio jack.

Lightweight and comfortable

The Oculus Go is comfortable and well designed. The goggles have breathable fabrics, injection foam molding, and other advances in wearable materials for better comfort, and the headset is lightweight and portable.

For all its great features, the product is not without limitations. The screen has a narrower field of view (FOV) than the Oculus Rift and HTC Vive. It does not include a slider or similar adjustment for "interpupillary distance", i.e. how images line up with your own face. And the Oculus Go only recognizes three degrees of freedom (3DOF): the realistic VR effect holds when you rotate or tilt your head, but breaks as you lean in any direction, and the Go does not offer positional tracking while seated or walking.

Nevertheless, Oculus currently supports over 1,000 existing apps, and the Go pairs with both iPhones and Android phones, making it one of the best iPhone-compatible VR headsets around right now. Pricing is set at $199 for the 32GB model and $249 for the 64GB version. Consumers can now purchase the headset via the Oculus website in 23 countries.

Read next:
- Facebook's F8 Conference – 5 key announcements
- Understanding the hype behind Magic Leap's New Augmented Reality Headsets
- Leap Motion open sources its $100 augmented reality headset, North Star


Apple releases iOS 12 beta 2 with screen time and battery usage updates among others

Natasha Mathur
20 Jun 2018
3 min read
Apple released the second beta of iOS 12 yesterday to registered developers for testing, two weeks after the first beta was rolled out following the much-awaited Worldwide Developers Conference. Beta 2 includes modifications to many of the new features introduced in iOS 12, such as changes to Screen Time, battery usage, and other smaller tweaks. Let's look at the key updates that will change your iPhone or iPad for the better.

Key updates

Battery usage: The usage charts representing activity and battery level for the past 24 hours have been redesigned in iOS 12 beta 2. Fonts and wordings in this section have also been updated. (Image: macrumors)

Screen Time: The toggle for clearing Screen Time data has been removed, and the interface for adding time limits to apps via the Screen Time screen has been modified. In the first beta, tapping an app went straight into the limits interface; now, tapping an app displays more information about it, including daily average use, developer, category, and more. There is a new splash screen for the Screen Time feature, along with new options to view your activity on either one or all of your devices.

Notifications: iOS 12 includes a feature where Siri suggests limiting notifications from sparingly used apps. With beta 2, the Notifications section of the Settings app gains a toggle to turn off Siri's suggestions for individual apps.

Photos search: Photos now supports more advanced searches. If you search for a photo taken on a specific date, say May 15, photos from all years taken on May 15 will pop up, quite different from the iOS 12 beta 1 behavior. The font size of listings such as "Media Types" and "Albums" in the Photos app is now much bigger, making them easier for users to read.

Voice Memos: A new introductory splash screen has been added for Voice Memos in iOS 12 beta 2.

Apart from these updates, there are some minor changes:

- On unlocking content using Face ID, the iPhone X now says "Scanning with Face ID."
- iPhone apps opened on the iPad, such as Instagram, are now displayed at a modern device size (iPhone 6) in both 1x and 2x modes.
- There is a new interface for auto-filling a password saved in iCloud Keychain.
- The Podcasts app now shows a 'Now Playing' indicator for the currently playing chapter.
- Time Travel references have been removed from the Watch app.

The iOS 12 public beta will launch after iOS 12 developer beta 3, around June 26. The final version of iOS 12 is expected sometime in September 2018. There are also some known issues in the iOS 12 beta 2 update that still need resolving; registered developers can check out the beta 2 release notes on the official Apple developer website.

Read next:
- WWDC 2018 Preview: 5 Things to expect from Apple's Developer Conference
- Apple releases iOS 11.4 update with features including AirPlay 2, and HomePod among others
- Apple introduces macOS Mojave with UX enhancements like voice memos, redesigned App Store, Apple News, & more security controls