
Tech News - Data

1209 Articles

NVIDIA announces pre-orders for the Jetson Xavier Developer Kit, an AI chip for autonomous machines, at $2,499

Prasad Ramesh
28 Aug 2018
3 min read
NVIDIA Jetson Xavier is an AI computer designed for use in autonomous machines. It delivers the performance of a GPU workstation in an embedded module while consuming under 30W of power; it can also operate at 10W and 15W. The Jetson Xavier is supported by NVIDIA's SDKs such as JetPack and DeepStream, and it supports popular libraries like CUDA, cuDNN, and TensorRT. Per NVIDIA, Xavier has 20 times the performance and 10 times the energy efficiency of its predecessor, the NVIDIA Jetson TX2. Everything needed to get started with the Jetson Xavier is in the box, including the power supply and cables.

The Jetson Xavier is designed for robots, drones and other autonomous machines, and is also suitable for smart city applications. An important use case NVIDIA considered while designing the chip was robot prototyping, which meant making it as small as possible while delivering maximum performance and options for I/O. The module itself, without the thermal solution, is about the size of a small notebook. You can run a total of three monitors at once using the two USB 3.1 Type-C ports and the HDMI port.

The chip consists of six processing units, including a 512-core NVIDIA Volta GPU with Tensor Cores and an eight-core Carmel ARM64 CPU, and it is capable of 30 trillion operations per second.

The specifications of the NVIDIA Jetson Xavier are:

- GPU: 512-core Volta GPU with Tensor Cores
- DL Accelerator: (2x) NVDLA engines
- CPU: 8-core ARMv8.2 64-bit CPU, 8MB L2 + 4MB L3
- Memory: 16GB 256-bit LPDDR4x | 137 GB/s
- Storage: 32GB eMMC 5.1
- Vision Accelerator: 7-way VLIW processor
- Video Encode: (2x) 4Kp60 | HEVC
- Video Decode: (2x) 4Kp60 | 12-bit support
- Camera: 16x CSI-2 lanes (40 Gbps in D-PHY v1.2 or 109 Gbps in C-PHY v1.1), 8x SLVS-EC lanes (up to 18.4 Gbps), up to 16 simultaneous cameras
- PCIe: 5x PCIe Gen4 (16 GT/s) controllers | 1x8, 1x4, 1x2, 2x1; root port and endpoint
- Mechanical: 100mm x 87mm with 16mm Z-height (699-pin board-to-board connector)

The Xavier is available for pre-order at $2,499, but members of the NVIDIA Developer Program can get their first kit at a special price of $1,299. For more details, visit the NVIDIA website.

Read next:
- NVIDIA open sources its material definition language, MDL SDK
- NVIDIA shows off GeForce RTX, real-time ray tracing GPUs, as the holy grail of computer graphics to gamers
- Video-to-video synthesis method: A GAN by NVIDIA & MIT CSAIL is now open source


SUSE is now an independent company after being acquired by EQT for $2.5 billion

Amrata Joshi
18 Mar 2019
3 min read
Last week, SUSE, an open-source software company that develops and sells Linux products to business customers, announced that it is now an independent company, having finalized its $2.5 billion acquisition by growth investor EQT from Micro Focus. According to the official post, SUSE also claims to be "the largest independent open source company."

Novell, a software and services company, first acquired SUSE in 2004. Novell was then acquired by Attachmate in 2010, which was in turn acquired by Micro Focus in 2014. Micro Focus turned SUSE into an independent division and later sold SUSE to EQT in 2018.

The newly independent SUSE has expanded its team with new leadership roles. Enrica Angelone has joined as SUSE's Chief Financial Officer, and Sander Huyts, director of sales at SUSE, is the new Chief Operations Officer. Thomas Di Giacomo, former CTO of SUSE, is now the president of Engineering, Product and Innovation. According to SUSE's blog post, the expanded team will actively participate in communities and projects to bring open source innovation to the enterprise.

Nils Brauckmann, CEO at SUSE, said, "Our genuinely open, open source solutions, flexible business practices, lack of enforced vendor lock-in and exceptional service are more critical to customer and partner organizations, and our independence coincides with our single-minded focus on delivering what is best for them." He further added, "Our ability to consistently meet these market demands creates a cycle of success, momentum and growth that allows SUSE to continue to deliver the innovation customers need to achieve their digital transformation goals and realize the hybrid and multi-cloud workload management they require to power their own continuous innovation, competitiveness, and growth."

SUSE's move aims to capitalize on market dynamics and create value for customers and partners. Its independent status and EQT's backing should enable continued expansion, driving growth in SUSE's core business and in emerging technologies, both organically and through add-on acquisitions.

Since the company is now owned by EQT, some users argue it is still not truly independent. One user commented on Hacker News, "Being owned by a Private Equity fund can really not be described as being "independent". Such funds have a typical investment horizon of 5 - 7 years, with potential exits being an IPO, a strategic sale (to a bigger company) or a sale to another PE fund, with the strategic sale probably more typical. In the meantime the fund will impose strict growth targets and strong cost cuts." Another comment reads, "Yeah, I'm not sure how anyone can call private equity "independent". Our whole last year had selling the company as our top priority. Not something I'd choose in a truly independent position."

To know more about this news, check out SUSE's official announcement.

Read next:
- Google introduces Season of Docs that will connect technical writers and mentors with open source projects
- Microsoft open sources 'Accessibility Insights for Web', a Chrome extension to help web developers fix their accessibility issues
- MongoDB withdraws controversial Server Side Public License from the Open Source Initiative's approval process


GNU Bison 3.2 got rolled out

Amrata Joshi
01 Nov 2018
2 min read
On Monday, the team behind Bison announced the release of GNU Bison 3.2, a general-purpose parser generator. It converts an annotated context-free grammar into a deterministic LR or generalized LR (GLR) parser, employing LALR(1) parser tables. This release is bootstrapped with the following tools: Gettext 0.19.8.1, Autoconf 2.69, Automake 1.16.1, Flex 2.6.4, and Gnulib v0.1-2176-ga79f2a287.

GNU Bison, commonly known as Bison, is a parser generator that is part of the GNU Project. It is used to develop a wide range of language parsers, from those used in simple desk calculators to complex programming languages. One has to be fluent in C or C++ programming in order to use Bison. (For a flavor of what a parser generator does, see the sketch at the end of this article.)

Bison 3.2 comes with massive improvements to the deterministic C++ skeleton, lalr1.cc, while maintaining compatibility with C++98. Move-only types can now be used for semantic values when working with Bison's variants. In modern C++ (C++11 and later), one should always use 'std::move' with the values of the right-hand side symbols ($1, $2, etc.), as they will be popped from the stack anyway. Using 'std::move' is mandatory for move-only types such as unique_ptr, and it provides a significant speedup for large types such as std::string or std::vector. A warning is issued when automove is enabled and a value is used several times.

Major changes in Bison 3.2:
- Support for DJGPP (DJ's GNU Programming Platform), which has been unmaintained and untested for years, is now deemed obsolete. Unless there is activity to revive it, it will be removed.
- To denote the output stream, printers should now use 'yyo' instead of 'yyoutput'.
- Variant-based symbols in C++ should now use emplace() instead of build().
- In C++ parsers, parser::operator() is now a synonym for parser::parse.
- A comment in the generated code now emphasizes that users should not depend on non-documented implementation details, such as macros starting with YY_.
- A new section named "A Simple C++ Example" serves as a tutorial for parsers in C++.

Bug fixes in Bison 3.2:
Major bug fixes in this release address portability issues with MinGW and VS2015, fixes to the test suite, and issues with Flex.

To know more about this release, check out the official mailing list.

Read next:
- Mio, a header-only C++11 memory mapping library, released!
- Google releases Oboe, a C++ library to build high-performance Android audio apps
- The 5 most popular programming languages in 2018
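Bison itself generates C or C++ parsers, but to get a concrete feel for the idea (grammar rules annotated with actions, compiled into an LALR(1) parser), here is a minimal sketch using ply, a Bison-inspired parser generator for Python. This is an analogous tool used purely for illustration, not Bison's own API, and it assumes ply is installed (pip install ply).

```python
# A tiny calculator grammar: ply builds LALR(1) tables from these rules,
# much as Bison does from a .y grammar file.
import ply.lex as lex
import ply.yacc as yacc

tokens = ('NUMBER', 'PLUS', 'TIMES')

t_PLUS = r'\+'
t_TIMES = r'\*'
t_ignore = ' \t'

def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

def t_error(t):
    print(f"Illegal character {t.value[0]!r}")
    t.lexer.skip(1)

# Lower-precedence operators first, as in a Bison %left declaration.
precedence = (
    ('left', 'PLUS'),
    ('left', 'TIMES'),
)

def p_expr_plus(p):
    'expr : expr PLUS expr'
    p[0] = p[1] + p[3]   # semantic action over $1 and $3

def p_expr_times(p):
    'expr : expr TIMES expr'
    p[0] = p[1] * p[3]

def p_expr_number(p):
    'expr : NUMBER'
    p[0] = p[1]

def p_error(p):
    print("Syntax error")

lexer = lex.lex()
parser = yacc.yacc()        # generates the LALR(1) parse tables
print(parser.parse("2+3*4"))  # -> 14
```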


Tesla is building its own AI hardware for self-driving cars

Richard Gall
02 Aug 2018
3 min read
Elon Musk revealed yesterday that Tesla is developing its own hardware in its bid to bring self-driving cars to the public. Up to now, Tesla has used Nvidia's Drive platform, but this will be replaced by 'Hardware 3', which will be, according to Tesla at least, the 'world's most advanced computer for autonomous driving'.

The Hardware 3 chip has been in the works for a few years now, with Jim Keller joining Tesla from chip manufacturer AMD back in 2016, and Musk confirming the project in December 2017. Keller has since left Tesla, and the Autopilot project - Tesla's self-driving car initiative - is now being led by Pete Bannon. "The chips are up and working, and we have drop-in replacements for S, X and 3, all have been driven in the field," Bannon said. "They support the current networks running today in the car at full frame rates with a lot of idle cycles to spare."

Why has Tesla developed its own AI hardware?
By developing its own AI hardware, Tesla is able to build solutions tailored to its needs, rather than relying on others - like Nvidia, say - to build what it needs. Bannon explained that "nobody was doing a bottoms-up design from scratch." By bringing hardware in-house, Tesla will not only be able to develop chips according to its needs, it will also find it easier to plan and move at its own pace. Essentially, it allows Tesla to take control of its own destiny. In the context of safety concerns around self-driving cars, taking on responsibility for developing the hardware on which your machine intelligence will sit makes a lot of sense: it means you can assume responsibility for solving your own problems.

How does Tesla's Hardware 3 compare with other chips?
The Hardware 3 chips are, according to Musk, 10x better than the current Nvidia GPUs. The current GPUs in Tesla's Autopilot system can analyze 200 frames per second; Tesla's new hardware can process 2,000 frames per second. This significant performance boost should, in theory, bring significant gains in terms of safety. What's particularly remarkable is that the new chip isn't actually costing Tesla any more than its current solution.

Musk explained how the team was able to find such significant performance gains: "The key is to be able to run the neural network at a fundamental, bare metal level. You have to do these calculations in the circuit itself, not in some sort of emulation mode, which is how a GPU or CPU would operate. You want to do a massive amount of [calculations] with the memory right there."

The hardware is expected to roll out in 2019 and will be offered as an upgrade to all owners of Autopilot 2.0 cars and up.

Read next:
- Nvidia Tesla V100 GPUs publicly available in beta on Google Compute Engine and Kubernetes Engine
- DeepMind, Elon Musk, and others pledge not to build lethal AI
- Elon Musk's tiny submarine is a lesson in how not to solve problems in tech


Debian maintainer points out difficulties in Deep Learning Framework Packaging

Natasha Mathur
17 Apr 2019
3 min read
A Debian maintainer named Mo Zhou wrote a mail to the debian-devel team (responsible for discussion of technical development topics) on the Debian mailing list, outlining difficulties in deep learning framework packaging. Zhou re-evaluated the status of TensorFlow's latest build systems and shared points related to deep learning framework packaging. "My thoughts are concluded from failures instead of success. That said, they should be helpful to future maintainers who'd like to maintain similar packages", writes Zhou.

Zhou elaborates on three obstacles faced by maintainers in Debian's context: licensing, the ISA baseline, and build systems.

License
Zhou states that although the de facto dominant performance library is cuDNN, no user would prefer a deep learning framework without cuDNN or TPU acceleration. He notes that packaging for cuDNN is available under Salsa:nvidia-team; however, the plan to upload it was aborted because its license looks "too scary".

ISA Baseline
Zhou writes that the absence of SIMD code hurts critical computational performance. Other volunteers have made helpful suggestions, including ld.so tricks and certain gcc features that enable run-time code selection based on CPU capability. The ld.so tricks bloat the resulting .deb packages, but they are the most applicable solution. On the other hand, patching a million lines of TensorFlow code to enable the "function attributes" feature is very difficult and "impossible" for a volunteer.

Build System
Zhou states that the build systems of TensorFlow and PyTorch are volatile due to the fast pace of development; in particular, TensorFlow's build system, Bazel, is very hard to package for Debian. A good amount of patching work is also required to prevent Bazel from downloading ~3.0GiB of dependencies before building TensorFlow. Additionally, PyTorch's setup.py+cmake+shell build system requires some patching work as well.

Zhou advises any future contributor who is about to deal with deep learning packages to carefully assess the three factors listed above. Apart from that, Zhou has filed orphan bugs against tensorflow and several of its dependencies, except src:nsync, which contains cmake files. Zhou also mentions that DUPR is the best choice for him for such .deb packages.

For more information, check out the official Debian mailing list.

Read next:
- Debian project leader elections go without nominations. What now?
- It is supposedly possible to increase reproducibility from 54% to 90% in Debian Buster!
- Debian 9.7 released with fix for RCE flaw


Facebook family of apps hits 14 hours outage, longest in its history

Fatema Patrawala
14 Mar 2019
3 min read
The biggest interruption ever suffered by Facebook stretched beyond 14 hours. Twitter was flooded with tweets about Facebook, Instagram and WhatsApp being down intermittently in some parts of the world throughout Wednesday. Facebook itself had to turn to its rival Twitter to explain that its group of hugely popular apps was having difficulties.

Some users of Facebook and other platforms owned by the tech giant - including Instagram, Messenger and WhatsApp - reported problems accessing the services and posting content. According to DownDetector, the outages were mainly in New England; Texas; Seattle, Washington; parts of Latin America, including Peru; the UK; India; Japan; Malaysia and the Philippines. Users have written in from Canada, Las Vegas, and Turkey to confirm outages there as well.

The outage was a big hit to the revenue of advertisers who spend large amounts of money to reach potential customers on Facebook platforms. A Facebook spokesperson said the company is investigating the possibility of refunds to the advertisers.

The cause of the interruption has not yet been made public. "We're aware that some people are currently having trouble accessing the Facebook family of apps," the company tweeted from the official Facebook account. "We're working to resolve the issue as soon as possible." In response to rumours posted on other social networks, the company said the outages were not the result of a Distributed Denial of Service attack, known as DDoS - a type of cyber-attack that involves flooding a target service with extremely high volumes of traffic.

The last time Facebook had a disruption of this magnitude was in 2008, when the site had 150m users - compared to around 2.3bn monthly users today.

Users turn to Twitter in the absence of Facebook and Instagram
While Facebook and Instagram have been down, many have turned to Twitter to make jokes about the outage. The hashtags #FacebookDown and #InstagramDown have been used more than 150,000 times so far. Some Twitter users who work in "Facebook-centric" jobs expressed their panic and distress at being unable to use the platform. Many shared jokes about the social media outage leading to the collapse of society, as "nobody remembers how to reach loved ones or eat food without posting updates". Many tweeted about Facebook users tweeting for the first time.

https://twitter.com/slaylegend_13/status/1106049260288499712

Others have shared a version of the "distracted boyfriend" meme, referencing people turning to Twitter in their hour of need.

https://twitter.com/Ahmadridhopp_/status/1106018690217107456

Some joked that the lack of access to Facebook would deprive them of validation.

https://twitter.com/Jayanliyanage2/status/1106040536866148353

Apart from Twitter, there have been interesting reactions from users on Hacker News, ranging from a sudden increase in worker productivity, to a brief glimpse into your neighbor's vacation story, to Facebook launching tools to manage time spent on social media.

It has been a good couple of hours since Facebook tweeted about resolving the issue; the company has provided no further updates in the fourteen hours since then.

Read next:
- Facebook open-sources homomorphic hashing for secure update propagation
- Facebook announces 'Habitat', a platform for embodied Artificial Intelligence research
- UK lawmakers publish a report after 18 month long investigation condemning Facebook's disinformation and fake news practices

Scala 2.12.5 is here!

Sugandha Lahoti
27 Mar 2018
2 min read
Scala 2.12.5 has been released. Scala is a popular programming language among data scientists, favored by aspiring and seasoned data scientists alike who plan to work with Apache Spark for Big Data analysis. Version 2.12.5 brings in four major highlights. Most importantly, Scala 2.12.5 is binary compatible with the whole Scala 2.12 series.

Major highlights:
- When compiling on Java 9 or higher, the new -release N flag changes the compilation classpath to match JDK version N. This works for the JDK itself and for multi-release JARs on the classpath.
- With the new -Ybackend-parallelism N compiler flag, the backend can now run bytecode serialization, classfile writing, and method-local optimizations (-opt:l:method) in parallel on N threads.
- The raw"" and s"" string interpolators are now intercepted by the compiler to produce more efficient bytecode.
- The -Ycache-plugin-class-loader and -Ycache-macro-class-loader flags enable caching of classloaders for compiler plugins and macro definitions. This can lead to significant performance improvements.

Other features include:
- The apply method on the PartialFunction companion object is now deprecated.
- Scala JARs (library, reflect, compiler) now have an Automatic-Module-Name attribute in their manifests.
- Enabling unused warnings now leads to fewer false positives.
- Explicit eta-expansion (foo _) of a nullary method no longer gives a deprecation warning.

Scala releases are available through a variety of channels, including:
- Bumping the scalaVersion setting in your sbt-based project
- Downloading a distribution from scala-lang.org
- Obtaining JARs via Maven Central

However, there is a regression since 2.12.4 when compiling code on Java 9 or 10 that uses macros: users must either compile on Java 8 or wait for 2.12.6. You can check out all closed bugs and merged PRs for further details.


Richard DeVaul, Alphabet executive, resigns after being accused of sexual harassment

Natasha Mathur
31 Oct 2018
2 min read
It was only last week that the New York Times reported shocking allegations of sexual misconduct at Google against Andy Rubin, the creator of Android. Now Richard DeVaul, a director at X, a unit of Alphabet (Google's parent company), has resigned from the company after being accused of sexually harassing Star Simpson, a hardware engineer. DeVaul did not receive any exit package on his resignation.

As per the NY Times report, Richard DeVaul interviewed Star Simpson for a job reporting to him. He then invited her to Burning Man, an annual festival in the Nevada desert, the following week, where he sexually harassed her at his encampment by making inappropriate requests. When Simpson reported DeVaul's sexual misconduct to Google two years later, a company official shrugged her off, saying the story was "more likely than not" true and that appropriate corrective actions had been taken.

DeVaul apologized in a statement to the New York Times, saying that the incident was "an error in judgment". Sundar Pichai, Google's CEO, further apologized yesterday, saying that the "apology at TGIF didn't come through, and it wasn't enough" in an e-mail obtained by Axios.

Pichai will also be supporting the women engineers at Google who are organizing a "women's walk" walkout tomorrow in protest. "I am taking in all your feedback so we can turn these ideas into action. We will have more to share soon. In the meantime, Eileen will make sure managers are aware of the activities planned for Thursday and that you have the support you need", wrote Pichai.

Read next:
- Ex-googler who quit Google on moral grounds writes to Senate about company's "Unethical" China censorship plan
- OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?
- Google takes steps towards better security, introduces new API policies for 3rd parties and a Titan Security system for mobile devices


Uber’s Marmaray, an Open Source Data Ingestion and Dispersal Framework for Apache Hadoop

Natasha Mathur
14 Sep 2018
3 min read
Uber yesterday released Marmaray, an open source data ingestion and dispersal framework for Apache Hadoop. Marmaray is a plug-in based framework built and designed on top of the Hadoop ecosystem by the Hadoop Platform team. It connects a collection of systems and services in a cohesive manner to perform certain functions. Let's have a look at these functions.

Major functions
- Marmaray produces quality, schematized data via Uber's schema management library and services.
- It ingests data from multiple data stores into Uber's Hadoop data lake.
- It can build pipelines using Uber's internal workflow orchestration service, allowing it to crunch and process the ingested data, and to store and calculate business metrics based on this data in Hive.
- Marmaray serves the processed results from Hive to an online data store, allowing internal customers to query the data and get close to instant results.

Beyond this, a majority of the fundamental building blocks and abstractions for Marmaray's design were inspired by Gobblin, a similar project developed at LinkedIn.

Marmaray architecture
Generic components such as DataConverters, the WorkUnitCalculator, the Metadata Manager, and ISource and ISink facilitate Marmaray's overall job flow. Let's discuss these components; an illustrative sketch of how the roles fit together appears at the end of this article.

DataConverters
DataConverters transform raw records, producing error records with every transformation. All raw data must conform to a schema before it is ingested into Uber's Hadoop data lake, and this is where DataConverters come into the picture: they filter out any data that is malformed, missing required fields, or has other issues.

WorkUnitCalculator
Uber introduced the WorkUnitCalculator to measure the amount of data to process. At a high level, the WorkUnitCalculator analyzes the type of input source and the previously stored checkpoint, and then calculates the next work unit, or batch of work. It also considers throttling information when measuring the next batch of data that needs processing.

Metadata Manager
The Metadata Manager caches job-level metadata information. The metadata store can hold any relevant metrics that are useful for tracking, describing, or collecting status on jobs.

ISource and ISink
The ISource contains the necessary information from the source data for the appropriate work units, and the ISink contains all the necessary information on writing to the sink.

Marmaray's support for any-source to any-sink data pipelines can be applied to a wide range of use cases, both in the Hadoop ecosystem and for data migration. "We hope that Marmaray will serve the data needs of other organizations, and that open source developers will broaden its functionalities," reads the Uber blog.

For more information, check out the official Uber blog.

Read next:
- Uber open sources its large scale metrics platform, M3 for Prometheus
- Uber introduces Fusion.js, a plugin-based web development framework for high performance apps
- Uber's kepler.gl, an open source toolbox for GeoSpatial Analysis
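To make the component roles above concrete, here is a purely illustrative Python sketch of the flow. Marmaray itself is a JVM framework, so every class and name below is hypothetical, not Uber's API.

```python
# Illustrative only: toy stand-ins for the ISource, DataConverter, ISink,
# WorkUnitCalculator, and Metadata Manager roles described above.

class ListSource:
    """ISource role: reads a batch (work unit) from the source store."""
    def __init__(self, records):
        self.records = records
    def read(self, work_unit):
        start, end = work_unit
        return self.records[start:end]

class SchemaConverter:
    """DataConverter role: split records into schema-conforming and error records."""
    REQUIRED_FIELDS = {"id", "payload"}
    def convert(self, records):
        valid = [r for r in records if self.REQUIRED_FIELDS <= r.keys()]
        errors = [r for r in records if not (self.REQUIRED_FIELDS <= r.keys())]
        return valid, errors

class PrintSink:
    """ISink role: writes converted records to the destination."""
    def write(self, records):
        for r in records:
            print("wrote", r)

def run_job(source, converter, sink, checkpoint, batch_size=2):
    # WorkUnitCalculator role: derive the next work unit from the checkpoint.
    work_unit = (checkpoint, checkpoint + batch_size)
    valid, errors = converter.convert(source.read(work_unit))
    sink.write(valid)
    # Metadata Manager role: the new checkpoint would be persisted for the next run.
    return work_unit[1], len(errors)

records = [{"id": 1, "payload": "a"}, {"payload": "b"}, {"id": 3, "payload": "c"}]
checkpoint, error_count = run_job(ListSource(records), SchemaConverter(), PrintSink(), checkpoint=0)
print("next checkpoint:", checkpoint, "errors:", error_count)
```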


Apple steals AI chief from Google

Richard Gall
04 Apr 2018
2 min read
Google are the leaders when it comes to artificial intelligence. Apple have somewhat fallen behind - where Google are perhaps known more for technical innovation and experimentation, Apple's success is built on its focus on design and customer experience. But that might be changing thanks to a high-profile coup: Apple have hired Google's chief of search and artificial intelligence, the New York Times reports. John Giannandrea, after 8 years working at Google, will be joining Apple to help drive the organization's machine learning and artificial intelligence projects forward.

Anyone who has used Siri will know that Apple have some catching up to do in terms of conversational UI - Amazon's Alexa and Google Assistant have captured the marketplace and seem to be defining the future.

One of the reasons Apple has struggled to keep pace with the likes of Google and Facebook, as noted by a number of news sites, is that they have a completely different approach to user data. As we've seen in recent weeks, Facebook have a huge wealth of data on users that extends beyond the limits of the platform - Google, in defining the foundations of many people's experiences of search, also has a huge amount of data on users. As the New York Times explains:

Apple has taken a strong stance on protecting the privacy of people who use its devices and online services, which could put it at a disadvantage when building services using neural networks. Researchers train these systems by pooling enormous amounts of digital data, sometimes from customer services. Apple, however, has said it is developing methods that would allow it to train these algorithms without compromising privacy.

Giannandrea's perspective on AI would seem to be well aligned with Apple's philosophy. In a number of interviews and conference talks, he has played down talk of automation and humans becoming obsolete, instead urging people to consider the biases and ethical considerations of artificial intelligence.

Read more:
- Apple Recruits Google's Search and AI Chief John Giannandrea to Help Improve Siri [Gizmodo]
- Apple hires Google's former AI boss to help improve Siri [The Verge]

Microsoft amplifies focus on conversational AI: Acquires XOXCO; shares guide to developing responsible bots

Bhagyashree R
16 Nov 2018
5 min read
On Wednesday, Microsoft shared that it has signed an agreement to acquire XOXCO, an Austin-based software developer with a focus on bot design. In another announcement, it shared a set of guidelines formulated to help developers build responsible bots, or conversational AI.

Microsoft acquires conversational AI startup XOXCO
Microsoft has shared its intent to acquire XOXCO. The software product design and development company has been working on conversational AI since 2013. It has developed products like Botkit, which provides development tools, and the Howdy bot for Slack, which enables users to schedule meetings. With this acquisition, Microsoft aims to democratize AI development. "The Microsoft Bot Framework, available as a service in Azure and on GitHub, supports over 360,000 developers today. With this acquisition, we are continuing to realize our approach of democratizing AI development, conversation, and dialog, and integrating conversational experiences where people communicate," reads the post.

Throughout this year, the tech giant has acquired many companies that contribute to AI development: for example, Semantic Machines in May, Bonsai in July, and Lobe in September. XOXCO is another addition to this list, bringing Microsoft closer to its goal of "making AI accessible and valuable to every individual and organization, amplifying human ingenuity with intelligent technology." Read more about the acquisition on Microsoft's official website.

Building responsible bots with Microsoft's guidelines
Nowadays, conversational AI is being used to automate communication, solve queries, and create personalized customer experiences at scale. With this increasing adoption, it is important to build conversational AI that is responsible and trustworthy. The 10 guidelines formulated by Microsoft aim to help developers do exactly that:

1. Articulate the purpose of your bot and take special care if your bot will support consequential use cases.
2. Be transparent about the fact that you use bots as part of your product or service.
3. Ensure a seamless hand-off to a human where the human-bot exchange leads to interactions that exceed the bot's competence.
4. Design your bot so that it respects relevant cultural norms and guards against misuse.
5. Ensure your bot is reliable.
6. Ensure your bot treats people fairly.
7. Ensure your bot respects user privacy.
8. Ensure your bot handles data securely.
9. Ensure your bot is accessible.
10. Accept responsibility.

Some of these are described below.

"Articulate the purpose of your bot and take special care if your bot will support consequential use cases."
Before starting any design work, carefully analyze the benefits your bot will provide to the users or the entity deploying it. Ensuring that your bot's design is ethical is very important, especially when it is likely to affect the well-being of the user, as in consequential use cases. These use cases include access to services such as healthcare, education, employment, and financing.

"Be transparent about the fact that you use bots as part of your product or service."
Users should be aware that they are interacting with a bot. Nowadays, designers can equip their bots with a "personality" and natural language capabilities, which is why it is important to convey to users that they are not interacting with another person and that some aspects of the interaction are being performed by a bot. Users should also be able to easily find information about the limitations of the bot, including the possibility of errors and their consequences.

"Ensure a seamless hand-off to a human where the human-bot exchange leads to interactions that exceed the bot's competence."
In cases where human judgment is required, provide a means of, or ready access to, a human moderator, particularly if your bot deals with consequential matters. Bots should be able to transfer a conversation to a human moderator as soon as the user asks. Users will quickly lose trust in the technology, and in the company that has deployed it, if they feel trapped or alienated by a bot. (A toy sketch of such a hand-off policy appears at the end of this article.)

"Design your bot so that it respects relevant cultural norms and guards against misuse."
Bots should have built-in safeguards and protocols to handle misuse and abuse. Since bots can now have a human-like persona, it is crucial that they interact respectfully and safely with users. Developers can use machine learning techniques and keyword filtering mechanisms to enable the bot to detect and respond appropriately to sensitive or offensive input from users.

"Ensure your bot is reliable."
Bots need to be reliable for the function they aim to perform. As a developer, you should take into account that AI systems are probabilistic and will not always give the correct answer; that is why you should establish reliability metrics and review them periodically. The performance of AI-based systems may vary over time as the bot is rolled out to new users and in new contexts, so developers must continually monitor its reliability.

Read the full document: Responsible bots: 10 guidelines for developers of conversational AI

Read next:
- Microsoft announces container support for Azure Cognitive Services to build intelligent applications that span the cloud and the edge
- Satya Nadella reflects on Microsoft's progress in areas of data, AI, business applications, trust, privacy and more
- Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report
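As a purely hypothetical illustration of the hand-off guideline above, here is a minimal Python sketch of an escalation policy. The phrases, threshold, and function names are all assumptions for the example, not part of Microsoft's guidance or the Bot Framework API.

```python
# Hypothetical sketch: escalate to a human moderator when the user asks for
# one, or when the bot's intent confidence suggests it is out of its depth.

HANDOFF_PHRASES = {"human", "agent", "representative", "talk to a person"}
CONFIDENCE_THRESHOLD = 0.6  # assumed cutoff; tune per application

def handle_turn(user_message: str, intent: str, confidence: float) -> str:
    text = user_message.lower()
    # 1. Hand off immediately if the user explicitly asks for a person.
    if any(phrase in text for phrase in HANDOFF_PHRASES):
        return transfer_to_human(user_message)
    # 2. Hand off when the exchange exceeds the bot's competence.
    if confidence < CONFIDENCE_THRESHOLD:
        return transfer_to_human(user_message)
    return answer_with_bot(intent)

def transfer_to_human(context: str) -> str:
    # In a real system this would enqueue the conversation for a moderator.
    return "Connecting you to a human agent now."

def answer_with_bot(intent: str) -> str:
    return f"Bot handling intent: {intent}"

print(handle_turn("I want to talk to a person", "billing", 0.95))  # hand-off
print(handle_turn("What's my balance?", "billing", 0.30))          # hand-off (low confidence)
print(handle_turn("What's my balance?", "billing", 0.90))          # bot answers
```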


Google researchers present Zanzibar, a global authorization system that scales to trillions of access control lists and millions of authorization requests per second

Amrata Joshi
11 Jun 2019
6 min read
Google researchers have presented a paper on Google's consistent, global authorization system, known as Zanzibar. The paper focuses on the design, implementation, and deployment of Zanzibar for storing and evaluating access control lists (ACLs). Zanzibar offers a uniform data model and configuration language for expressing a wide range of access control policies from hundreds of client services at Google, including Cloud, Drive, Calendar, Maps, YouTube and Photos.

Zanzibar's authorization decisions respect the causal ordering of user actions and thus provide external consistency amid changes to access control lists and object contents. It scales to trillions of access control lists and millions of authorization requests per second to support services used by billions of people. It has maintained a 95th-percentile latency of less than 10 milliseconds and availability greater than 99.999% over 3 years of production use.

Here's the list of authors who contributed to the paper: Ruoming Pang, Ramon Caceres, Mike Burrows, Zhifeng Chen, Pratik Dave, Nathan Germer, Alexander Golynski, Kevin Graney, Nina Kang, Lea Kissner, Jeffrey L. Korn, Abhishek Parmar, Christopher D. Richards and Mengzhi Wang.

What are the goals of the Zanzibar system?
The researchers set out the following goals for Zanzibar:

- Correctness: The system must ensure consistency of access control decisions.
- Flexibility: The system should support access control policies for both consumer and enterprise applications.
- Low latency: The system should respond quickly, because authorization checks are usually in the critical path of user interactions. Low latency is especially important for serving search results, which often require tens to hundreds of checks.
- High availability: The system should reliably respond to requests, because in the absence of an explicit authorization decision, client services would be forced to deny their users access.
- Large scale: The system should protect billions of objects shared by billions of users, and it should be deployed around the globe, close to its clients and end users.

To achieve these goals, Zanzibar combines a number of features. For flexibility, the system pairs a simple data model with a powerful configuration language that allows clients to define arbitrary relations between users and objects. It employs an array of techniques for achieving low latency and high availability, and for consistency, it stores its data in normalized forms.

Zanzibar replicates ACL data across multiple data centers
Zanzibar operates at a global scale, storing more than two trillion ACLs and performing millions of authorization checks per second. The ACL data does not lend itself to geographic partitioning, because authorization checks for an object can come from anywhere in the world. For this reason, Zanzibar replicates all of its ACL data in multiple geographically distributed data centers and distributes the load across thousands of servers around the world.

Zanzibar's architecture includes a main server type organized in clusters

Image source: Zanzibar: Google's Consistent, Global Authorization System

The acl servers are the main server type in this system; they are organized in clusters and respond to Check, Read, Expand, and Write requests.
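To make the data model concrete before covering the rest of the architecture, here is a toy Python sketch of relation tuples and a Check request, assuming the paper's object#relation@user tuple shape. It ignores namespace configuration, userset rewrites, consistency tokens, and caching, and does no cycle detection; the identifiers are illustrative, not the paper's API.

```python
# Toy Zanzibar-style relation tuples. Group membership is expressed by a
# tuple whose subject is another object#relation (a "userset").
TUPLES = {
    ("doc:readme", "owner", "user:alice"),
    ("doc:readme", "viewer", "group:eng#member"),  # userset reference
    ("group:eng", "member", "user:bob"),
}

def check(obj: str, relation: str, user: str) -> bool:
    """Does `user` have `relation` on `obj`? Follows userset indirection."""
    for (o, r, subject) in TUPLES:
        if (o, r) != (obj, relation):
            continue
        if subject == user:
            return True
        if "#" in subject:  # userset: recurse into the referenced relation set
            ref_obj, ref_rel = subject.split("#", 1)
            if check(ref_obj, ref_rel, user):
                return True
    return False

print(check("doc:readme", "viewer", "user:bob"))    # True, via group:eng#member
print(check("doc:readme", "viewer", "user:carol"))  # False
```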
When requests arrive at any server in a cluster, that server fans the work out to other servers in the cluster, and those servers may in turn contact further servers to compute intermediate results. The initial server gathers the final result and returns it to the client.

Zanzibar stores the ACLs and their metadata in Spanner databases: one database storing relation tuples for each client namespace, one database holding all namespace configurations, and one changelog database shared across all namespaces. The acl servers read and write those databases while responding to client requests.

A specialized server type, the watchservers, responds to Watch requests; these servers tail the changelog and serve namespace changes to clients in real time.

Zanzibar also runs a data processing pipeline that performs a variety of offline functions across all Zanzibar data in Spanner, for example producing dumps of the relation tuples in each namespace at a known snapshot time. And it uses an indexing system known as Leopard to optimize operations on large and deeply nested sets. Leopard reads periodic snapshots of ACL data, watches for changes between snapshots, performs transformations on the data such as denormalization, and responds to requests coming from acl servers.

The researchers concluded that Zanzibar offers a simple, flexible data model and configuration language support, and that its external consistency model allows authorization checks to be evaluated at distributed locations without the need for global synchronization, while providing low latency, scalability, and high availability.

Readers found the paper very interesting, and some of its numbers surprised them. One user commented on Hacker News, "Excellent paper. As someone who has worked with filesystems and ACLs, but never touched Spanner before." Another user commented, "What's interesting to me here is not the ACL thing, it's how in a way 'straight forward' this all seems to be." Another comment reads, "I'm surprised by all the numbers they give out: latency, regions, operation counts, even servers. The typical Google paper omits numbers on the Y axis of its most interesting graphs. Or it says "more than a billion", which makes people think "2B", when the actual number might be closer to 10B or even higher."

https://twitter.com/kissgyorgy/status/1137370866453536769
https://twitter.com/markcartertm/status/1137644862277210113

A few others note that the project wasn't initially called Zanzibar; it was called 'Spice'.

https://twitter.com/LeaKissner/status/1136691523104280576

To know more about this system, check out the paper Zanzibar: Google's Consistent, Global Authorization System.

Read next:
- Google researchers propose building service robots with reinforcement learning to help people with mobility impairment
- Researchers propose a reinforcement learning method that can hack Google reCAPTCHA v3
- Researchers input rabbit-duck illusion to Google Cloud Vision API and conclude it shows orientation-bias


China’s Huawei technologies accused of stealing Apple’s trade secrets, reports The Information

Amrata Joshi
19 Feb 2019
4 min read
China's Huawei Technologies, which was recently accused by the U.S. government of stealing trade secrets, has again come to light for using tactics to steal Apple's trade secrets, The Information reports. The tactics include Huawei engineers approaching Apple's third-party manufacturers and suppliers with promises of big orders, then using the opportunity to pry into processes related to Apple's component production. Through such suspicious tactics, Huawei has tried to reverse engineer technology from Apple and other competitors in the electronics market, particularly by targeting Apple's suppliers in China.

Huawei has also previously copied a popular feature of Apple's smartwatch. Last November, a Huawei engineer got in touch with a supplier that helps make Apple's heart rate sensor. The engineer arranged a meeting on the pretext of offering the supplier a manufacturing contract, and even emailed the executive a photo of material Huawei was considering for a heart rate sensor, saying, "Feel free to suggest a design you already have experience with." But the supplier didn't leak any details about the Apple Watch. In a statement to The Information, an Apple executive said, "They were trying their luck, but we wouldn't tell them anything." The Apple Watch has been approved by the U.S. Food and Drug Administration, while Huawei's smartwatch didn't receive good feedback, with users complaining about the performance of its heart rate monitor.

According to a spokesperson interviewed by The Information, "In conducting research and development, Huawei employees must search and use publicly available information and respect third-party intellectual property per our business-conduct guidelines."

Reportedly, Huawei has been adding fuel to the fight between the U.S. and China. U.S. companies such as Motorola and Cisco Systems have made similar claims against Huawei in civil lawsuits. According to The Information's report, Akhan Semiconductor, a Chicago-based company that makes durable smartphone glass, said it cooperated with a federal investigation into a theft of its intellectual property by Huawei. Huawei has been accused of using the prospect of a business relationship with Akhan to acquire samples of its glass, which Huawei took and studied.

According to The Information, Huawei encouraged its employees to steal information and post it on an internal company website; employees were also given an email address where they could send the information. Huawei had a formal program for rewarding employees who steal information, with bonuses that increase based on the confidential value of the information, and the company assured employees they wouldn't be punished for taking such actions.

Huawei is also suspected of copying a connector Apple developed in 2016 that made the MacBook Pro hinge thinner. Last year, Huawei's MateBook Pro featured a similar component made of 13 similar parts assembled in the same manner.

A former Apple employee who interviewed at Huawei was constantly asked about Apple's upcoming products and technological features. The former employee didn't give any details and stopped interviewing at Huawei, saying, "It was clear they were more interested in trying to learn about Apple than they were in hiring me."

People are shocked by the tactics in use these days. A comment on Hacker News reads, "the bar for trade secrets theft is pretty low these days." A few others think that China is being targeted by the media; another comment reads, "Is it just me or does there seem to be a mainstream media narrative trying to stoke the fires of nationalism against China with Huawei being the current lightning rod?"

Read next:
- Apple announces the iOS 12.1.4 with a fix for its Group FaceTime video bug
- Apple and Google slammed by Human Rights groups for hosting Absher, a Saudi app that tracks women
- Apple reinstates Facebook and Google Developer Certificates, restores the ability to run internal iOS apps

TensorFlow 1.13.0-rc2 releases!

Natasha Mathur
25 Feb 2019
2 min read
After the TensorFlow 1.13.0-rc0 release last month, the TensorFlow team is out with another update, 1.13.0-rc2, unveiling major features and updates. The new release includes minor bug fixes, improvements, and other changes. Let's have a look at the noteworthy features in TensorFlow 1.13.0-rc2.

Major improvements
- TensorFlow Lite has moved from contrib to core.
- TensorFlow GPU binaries are now built against CUDA 10 and TensorRT 5.0.
- Support has been added for Python 3.7 on all operating systems.
- NCCL has been moved to core.

Behavioral and other changes
- Conversion of Python floating types to uint32/64 in tf.constant is no longer allowed.
- The gain argument of convolutional orthogonal initializers now behaves consistently with the tf.initializers.orthogonal initializer.
- Subclassed Keras models can now be saved via tf.contrib.saved_model.save_keras_model.
- LinearOperator.matmul now returns a new LinearOperator.
- Performance of GPU cumsum/cumprod has improved by up to 300x.
- Support has been added for weight decay in most TPU embedding optimizers, including AdamW and MomentumW.
- tensorflow/contrib/lite has been moved to tensorflow/lite.
- An experimental Java API has been added to inject TensorFlow Lite delegates.
- Support has been added for strings in the TensorFlow Lite Java API.
- All occurrences of tf.contrib.estimator.DNNLinearCombinedEstimator have been replaced with tf.estimator.DNNLinearCombinedEstimator.
- Regression_head has been updated to the new Head API for Canned Estimator V2.
- XLA HLO graphs can now be rendered as SVG/HTML.

Bug fixes
- Documentation has been updated with details regarding the rounding mode used in quantize_and_dequantize_v2.
- OpenSSL compatibility has been fixed by avoiding EVP_MD_CTX_destroy.
- The CUDA dependency has been upgraded to 10.0.
- All occurrences of tf.contrib.estimator.InMemoryEvaluatorHook and tf.contrib.estimator.make_stop_at_checkpoint_step_hook have been replaced with tf.estimator.experimental.InMemoryEvaluatorHook and tf.estimator.experimental.make_stop_at_checkpoint_step_hook.
- tf.data.Dataset.make_one_shot_iterator() has been deprecated in V1, removed from V2, and tf.compat.v1.data.make_one_shot_iterator() has been added instead.
- keep_prob is deprecated, and Dropout now takes a rate argument.
- A NUMA-aware MapAndBatch dataset has been added.
- An Apache Ignite Filesystem plugin has been added to support accessing Apache IGFS.

For more information, check out the official TensorFlow 1.13.0-rc2 release notes.

Read next:
- TensorFlow 2.0 to be released soon with eager execution, removal of redundant APIs, tf function and more
- Building your own Snapchat-like AR filter on Android using TensorFlow Lite [Tutorial]
- TensorFlow 1.11.0 releases
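As a quick, hedged illustration of two of the migration notes above (the one-shot iterator moving under tf.compat.v1, and Dropout taking a rate argument in place of keep_prob), the following sketch should run against a TensorFlow 1.13 install:

```python
# Assumes a 1.13 release, e.g. pip install tensorflow==1.13.*
import tensorflow as tf

dataset = tf.data.Dataset.from_tensor_slices([1, 2, 3])

# Before: iterator = dataset.make_one_shot_iterator()  (deprecated in V1)
iterator = tf.compat.v1.data.make_one_shot_iterator(dataset)
next_element = iterator.get_next()

# Before: tf.nn.dropout(x, keep_prob=0.9)
x = tf.ones([4, 4])
dropped = tf.nn.dropout(x, rate=0.1)  # rate = 1 - keep_prob

with tf.compat.v1.Session() as sess:
    print(sess.run(next_element))       # 1
    print(sess.run(dropped).shape)      # (4, 4)
```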


AWS makes Amazon Rekognition, its image recognition AI, available for Asia-Pacific developers

Savia Lobo
14 Mar 2018
1 min read
Amazon Rekognition, one of AWS' Artificial Intelligence (AI) services, is now available in the AWS Asia Pacific (Sydney) Region. With this provision, Australian developers can add visual analysis and recognition to their applications.

Amazon Rekognition is a deep learning-based service that makes it easy to analyze images and video in your applications. The Rekognition Image API allows you to detect objects, scenes, faces and inappropriate content, extract text, search and compare faces within images, and so on. One can also use Rekognition Video to detect objects, scenes, activities and inappropriate content, and to search faces in video stored in Amazon S3, in the AWS Asia Pacific (Sydney) region.

With the Rekognition API, developers can easily:
- Build an application that measures the likelihood that faces in two images are of the same person, thereby being able to verify a user against a reference photo in near real-time.
- Create collections of millions of faces (detected in images) and search for a face similar to a reference image in the collection.

Amazon Rekognition has no minimum fees or upfront commitment and works on a pay-per-usage model. To know more, including the other regions where these APIs are available, read the Amazon documentation.
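As a minimal sketch of calling Rekognition in the new Sydney region with boto3 (the AWS SDK for Python): the bucket and object names below are placeholders, and AWS credentials are assumed to be configured.

```python
import boto3

# Rekognition client pointed at the newly supported Sydney region.
client = boto3.client("rekognition", region_name="ap-southeast-2")

# Detect objects and scenes in an image stored in S3.
labels = client.detect_labels(
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "photo.jpg"}},
    MaxLabels=5,
)
for label in labels["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))

# Compare a reference face against a target photo (near real-time verification).
match = client.compare_faces(
    SourceImage={"S3Object": {"Bucket": "my-example-bucket", "Name": "reference.jpg"}},
    TargetImage={"S3Object": {"Bucket": "my-example-bucket", "Name": "target.jpg"}},
    SimilarityThreshold=90,
)
for face in match["FaceMatches"]:
    print("Similarity:", round(face["Similarity"], 1))
```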