
Tech News


Google researchers propose building service robots with reinforcement learning to help people with mobility impairment

Amrata Joshi
01 Mar 2019
5 min read
Yesterday, Google researchers released three research papers describing their investigations into easy-to-adapt robotic autonomy that combines deep reinforcement learning with long-range planning. The research is aimed at people whose mobility impairments leave them home-bound: the researchers propose building service robots, trained with reinforcement learning, to improve the independence of people with limited mobility.

The researchers trained local planner agents to perform basic navigation behaviors and traverse short distances safely, without collisions with moving obstacles. These local planners take noisy sensor observations, such as a 1D lidar that provides distances to obstacles, and output linear and angular velocities for robot control. The local planners were trained in simulation with AutoRL (Automated Reinforcement Learning), a method that automates the search for RL rewards and neural network architectures. The trained local planners transfer both to real robots and to new, previously unseen environments, and serve as building blocks for navigation in large spaces. The researchers then built a roadmap: a graph whose nodes are locations and whose edges connect two nodes only if local planners can traverse between them reliably.

Automating Reinforcement Learning (AutoRL)

In the first paper, Learning Navigation Behaviors End-to-End with AutoRL, the researchers trained the local planners in small, static environments. Doing this with standard deep RL algorithms, such as Deep Deterministic Policy Gradient (DDPG), is difficult, so the researchers automated the deep reinforcement learning training. AutoRL is an evolutionary automation layer around deep RL that searches for a reward function and a neural network architecture using large-scale hyperparameter optimization. It works in two phases: reward search and neural network architecture search. During the reward search, AutoRL concurrently trains a population of DDPG agents, each with a slightly different reward function. At the end of the reward search phase, the reward that most often leads agents to their destination is selected. In the neural network architecture search phase, the process is repeated: the researchers fix the selected reward and tune the network layers. Because this is an iterative process, AutoRL is not sample efficient: training one agent takes 5 million samples, while AutoRL training over roughly 10 generations of 100 agents requires 5 billion samples, equivalent to 32 years of training. The advantage is that after AutoRL the manual training process is automated, and DDPG does not suffer catastrophic forgetting. Another advantage is that AutoRL policies are robust to sensor, actuator, and localization noise, and generalize to new environments. (A toy sketch of this population-based search loop follows below.)
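As a rough illustration of the reward-search phase described above, here is a minimal, self-contained Rust sketch under toy assumptions: each reward function is reduced to a single scalar weight, and the "success rate" of an agent is a made-up fitness function standing in for millions of simulation samples. None of the numbers or names below come from the papers.

```rust
// Toy sketch of AutoRL-style reward search: evolve a population of agents,
// each with a slightly different (here: scalar) reward parameter, and keep
// the parameter whose agents reach the goal most often. The fitness
// function, mutation scheme, and population size are all illustrative.

// Tiny deterministic pseudo-random generator so the sketch needs no crates.
struct Lcg(u64);
impl Lcg {
    fn next_f64(&mut self) -> f64 {
        self.0 = self.0.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

/// Stand-in for "train a DDPG agent with this reward and measure how often
/// it reaches the goal". Peaks at reward_weight = 0.7 in this toy setup.
fn success_rate(reward_weight: f64, rng: &mut Lcg) -> f64 {
    let noise = (rng.next_f64() - 0.5) * 0.05; // training stochasticity
    (1.0 - (reward_weight - 0.7).abs()).max(0.0) + noise
}

fn main() {
    let mut rng = Lcg(42);
    let pop_size: usize = 100;
    let generations = 10;

    // Initial population of candidate reward parameters in [0, 1).
    let mut population: Vec<f64> = (0..pop_size).map(|_| rng.next_f64()).collect();

    for generation in 0..generations {
        // Evaluate every candidate ("train" one agent per reward).
        let mut scored: Vec<(f64, f64)> = population
            .iter()
            .map(|&w| (w, success_rate(w, &mut rng)))
            .collect();
        scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());

        println!("generation {}: best reward weight {:.3} (fitness {:.3})",
                 generation, scored[0].0, scored[0].1);

        // Keep the top 10% and refill the population with mutated copies.
        let elites: Vec<f64> = scored.iter().take(pop_size / 10).map(|s| s.0).collect();
        population = (0..pop_size)
            .map(|i| {
                let parent = elites[i % elites.len()];
                (parent + (rng.next_f64() - 0.5) * 0.1).clamp(0.0, 1.0)
            })
            .collect();
    }
}
```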
PRM-RL

In the second paper, PRM-RL: Long-Range Robotic Navigation Tasks by Combining Reinforcement Learning and Sampling-based Planning, the researchers turn to sampling-based planners, which tackle long-range navigation by approximating robot motions. Here, the researchers combined probabilistic roadmaps (PRMs) with hand-tuned RL-based local planners (without AutoRL), training robots locally and then adapting them to different environments. For each robot, they trained a local planner policy in a generic simulated training environment, then built a PRM with respect to that policy, called a PRM-RL, over a floor plan for the deployment environment.

To build a PRM-RL, the researchers connected the sampled nodes via Monte Carlo simulation (a toy sketch of this connection step appears at the end of this article). The resulting roadmap is tuned to both the abilities and the geometry of the particular robot: roadmaps for robots with the same geometry but different sensors and actuators will have different connectivity. At execution time, the RL agent navigates from roadmap waypoint to waypoint.

Long-Range Indoor Navigation with PRM-RL

In the third paper, the researchers made several improvements to the original PRM-RL. They replaced the hand-tuned DDPG with AutoRL-trained local planners, which improves long-range navigation. They also added Simultaneous Localization and Mapping (SLAM) maps, which robots use at execution time, as a source for building the roadmaps. Because SLAM maps are noisy, this change closes the "sim2real gap", a phenomenon in which simulation-trained agents significantly underperform when transferred to real robots. Lastly, they added distributed roadmap building to generate very large-scale roadmaps containing up to 700,000 nodes.

The team compared PRM-RL to a variety of other methods over distances of up to 100m, well beyond the local planner's range. PRM-RL had 2 to 3 times the success rate of the baselines because its nodes were connected appropriately for the robot's capabilities.

To conclude, autonomous robot navigation can improve the independence of people with limited mobility. This is made possible by automating the learning of basic, short-range navigation behaviors with AutoRL and using the learned policies together with SLAM maps to build roadmaps. To know more about this news, check out the Google AI blog post.
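To make the roadmap-connection idea concrete, here is a small, self-contained Rust sketch. The local planner is faked by a stub whose chance of reaching a nearby waypoint decays with distance; real PRM-RL would instead roll out the trained policy in simulation. The node positions, trial count, and success threshold are all invented for illustration, not taken from the paper.

```rust
// Toy PRM-RL roadmap construction: connect two sampled nodes only if a
// Monte Carlo estimate says the (stubbed) local planner can travel between
// them reliably. All numbers here are illustrative.

struct Lcg(u64);
impl Lcg {
    fn next_f64(&mut self) -> f64 {
        self.0 = self.0.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

type Node = (f64, f64); // (x, y) position on the floor plan

/// Stub for "run the local planner from a to b once in simulation".
/// Succeeds with a probability that drops off with distance.
fn rollout_succeeds(a: Node, b: Node, rng: &mut Lcg) -> bool {
    let dist = ((a.0 - b.0).powi(2) + (a.1 - b.1).powi(2)).sqrt();
    rng.next_f64() < (1.0 - dist / 10.0).max(0.0)
}

fn main() {
    let mut rng = Lcg(7);
    let nodes: Vec<Node> = vec![(0.0, 0.0), (1.0, 0.0), (2.0, 0.5), (9.0, 9.0)];
    let trials = 20;
    let required_success_rate = 0.7; // illustrative reliability threshold

    let mut edges: Vec<(usize, usize)> = Vec::new();
    for i in 0..nodes.len() {
        for j in (i + 1)..nodes.len() {
            let successes = (0..trials)
                .filter(|_| rollout_succeeds(nodes[i], nodes[j], &mut rng))
                .count();
            if successes as f64 / trials as f64 >= required_success_rate {
                edges.push((i, j)); // reliable pair: add a roadmap edge
            }
        }
    }
    println!("roadmap edges: {:?}", edges); // the far-away node stays isolated
}
```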
Google launches Flutter 1.2, its first feature update, at Mobile World Congress 2019
Google released a paper showing how it’s fighting disinformation on its platforms
Google introduces and open-sources Lingvo, a scalable TensorFlow framework for Sequence-to-Sequence Modeling

Rust 1.33.0 released with improvements to Const fn, pinning, and more!

Amrata Joshi
01 Mar 2019
2 min read
Yesterday, the Rust team announced the stable release of Rust 1.33.0, a programming language that helps in building reliable and efficient software. This release comes with significant improvements to const fns and the stabilization of a new concept: "pinning."

What's new in Rust 1.33.0?

https://twitter.com/rustlang/status/1101200862679056385

Const fn

It is now possible to work with irrefutable destructuring patterns (e.g. const fn foo((x, y): (u8, u8)) { ... }). This release also allows let bindings (e.g. let x = 1;) and mutable let bindings (e.g. let mut x = 1;) inside const fns.

Pinning

This release introduces a new concept for Rust programs called pinning. Pinning ensures that the pointee of a pointer type P has a stable location in memory: it cannot be moved elsewhere, and its memory cannot be deallocated until it gets dropped. The pointee is then said to be "pinned". (A short program combining these features appears at the end of this article.)

Compiler

It is now possible to set a linker flavor for rustc with the -Clinker-flavor command line argument. The minimum required LLVM version is now 6.0. This release adds support for the PowerPC64 architecture on FreeBSD and for the x86_64-unknown-uefi target.

Libraries

In this release, the methods overflowing_{add, sub, mul, shl, shr} are const functions for all numeric types. The is_positive and is_negative methods are now const functions for all signed numeric types, and the get method for all NonZero types is now const.

Language

It is now possible to use the cfg(target_vendor) attribute, e.g. #[cfg(target_vendor="apple")] fn main() { println!("Hello Apple!"); }. It is now possible to have irrefutable if let and while let patterns, and to specify multiple attributes in a cfg_attr attribute.

One user commented on Hacker News, “This release also enables Windows binaries to run in Windows nanoserver containers.” Another comment reads, “It is nice to see the const fn improvements!”

https://twitter.com/AndreaPessino/status/1101217753682206720

To know more about this news, check out Rust’s official post.
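Putting a few of these features together, here is a small program that should compile on Rust 1.33: a const fn using an irrefutable destructuring pattern and a mutable let binding, plus a pinned box. The example itself is ours, not from the release notes.

```rust
use std::pin::Pin;

// New in 1.33: const fns may use irrefutable destructuring patterns
// and (mutable) let bindings with assignment operators.
const fn sum_pair((x, y): (u8, u8)) -> u8 {
    let mut total = x; // mutable let binding inside a const fn
    total += y;
    total
}

const TOTAL: u8 = sum_pair((3, 4));

fn main() {
    println!("sum_pair((3, 4)) = {}", TOTAL);

    // Pinning, stabilized in 1.33: the pointee behind `pinned` has a
    // stable address and cannot be moved out from behind the Pin.
    let pinned: Pin<Box<u32>> = Box::pin(42);
    println!("pinned value = {}", *pinned);
}
```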
Introducing RustPython, a Python 3 interpreter written in Rust
How Deliveroo migrated from Ruby to Rust without breaking production
Rust 1.32 released with a print debugger and other changes

YouTube disables all comments on videos featuring children in an attempt to curb predatory behavior and appease advertisers

Sugandha Lahoti
01 Mar 2019
3 min read
YouTube has disabled comments on videos featuring young children in order to curb predators who were using YouTube to trade clips of young girls in states of undress. The issue first surfaced when Matt Watson, a video blogger, posted a 20-minute clip detailing how comments on YouTube were used to identify videos in which young girls were engaged in activities that could be construed as sexually suggestive, such as posing in front of a mirror or doing gymnastics.

YouTube’s content regulation practices have been in the spotlight in recent years. Last week, YouTube received major criticism for recommending videos of minors and allowing pedophiles to comment on these posts, sometimes with a specific timestamp marking where an exposed private part of the child was visible. YouTube was also condemned for monetizing these videos, allowing advertisements for major brands like Nestle, Disney, Fiat, Fortnite, L’Oreal, and Maybelline to be displayed on them. Following this news, a large number of companies suspended their advertising on YouTube and refused to resume it until YouTube took strong action. In the same week, YouTube told BuzzFeed News that it is demonetizing channels that promote anti-vaccination content, saying this type of content does not align with its policy and calling it “dangerous and harmful”.

Actions taken by YouTube

YouTube said that it will now disable comments worldwide on almost all videos of minors by default, with the change taking effect over several months. This will include videos featuring young and older minors that could be at risk of attracting predatory behavior. YouTube is also introducing a new comments classifier, powered by machine learning, that will identify and remove twice as many predatory comments as the old one, and it has banned videos that encourage harmful and dangerous challenges. “We will continue to take actions on creators who cause egregious harm to the community”, the company wrote in a blog post.

"Nothing is more important to us than ensuring the safety of young people on the platform," said YouTube chief executive Susan Wojcicki on Twitter.

https://twitter.com/SusanWojcicki/status/1101182716593135621

Despite her apologetic comments, she was on the receiving end of a brutal backlash, with people asking her to resign from the organization.

https://twitter.com/g8terbyte/status/1101221757233573899
https://twitter.com/KamenGamerRetro/status/1101186868052398080
https://twitter.com/SpencerKarter/status/1101305878014242822

The internet is slowly becoming a harmful place for young tweens, and not only on YouTube. Recently TikTok, the popular video-sharing app that is a rage among tweens, was accused of illegally collecting personal information from children under 13 and was fined $5.7m by the US Federal Trade Commission. TikTok has since introduced features that place younger US users into a limited, separate app experience with additional safety and privacy protections. Similar steps have, however, not been implemented across its global operations.

Nestle, Disney, Fortnite pull out their YouTube ads from paedophilic videos as YouTube’s content regulation woes continue
YouTube promises to reduce recommendations of ‘conspiracy theory’ videos. Ex-googler explains why this is a ‘historic victory’
Is the YouTube algorithm’s promoting of #AlternativeFacts like Flat Earth having a real-world impact?

Announcing Wireshark 3.0.0

Melisha Dsouza
01 Mar 2019
2 min read
Yesterday, Wireshark released version 3.0.0 with user interface improvements, bug fixes, the new Npcap Windows packet capturing driver, and more. Wireshark, the open source and cross-platform network protocol analyzer, is used by security analysts, experts, and developers for analysis, troubleshooting, development, and other security-related tasks, capturing and browsing the packet traffic on computer networks.

Features of Wireshark 3.0.0

The Windows .exe installers replace WinPcap with Npcap. Npcap supports loopback capture and 802.11 WiFi monitor mode capture, if supported by the NIC driver.
The "Map" button of the Endpoints dialog, removed in Wireshark 2.6.0, has been brought back in a modernized form.
The macOS package ships with Qt 5.12.1 and requires macOS 10.12 or later.
Initial support has been added for using PKCS #11 tokens for RSA decryption in TLS. Configure this at Preferences, RSA Keys.
The new WireGuard dissector has decryption support, which requires Libgcrypt 1.8.
You can now copy coloring rules, IO graphs, filter buttons, and protocol preference tables from other profiles using a button in the corresponding configuration dialogs.
Wireshark now supports the Swedish, Ukrainian, and Russian languages.
A new dfilter function string() has been added, which converts non-string fields to strings so that string functions can be used on them.
The legacy (GTK+) user interface and the portaudio library have been removed and are no longer supported.
Wireshark requires Qt 5.2 or later and GLib 2.32 or later, with GnuTLS 3.2 or later as an optional dependency. Building Wireshark requires Python 3.4 or newer.
Data following a TCP ZeroWindowProbe is not passed to subdissectors and is marked as a retransmission.

Head over to Wireshark’s official blog for the entire list of upgraded features in this release.

Using statistical tools in Wireshark for packet analysis [Tutorial]
Wireshark for analyzing issues and malicious emails in POP, IMAP, and SMTP [Tutorial]
Analyzing enterprise application behavior with Wireshark 2

Common Voice: Mozilla’s largest voice dataset with approx 1400 hours of voice clips in 18 different languages

Natasha Mathur
01 Mar 2019
3 min read
Mozilla, the organization behind the popular free and open-source Firefox web browser, yesterday released the largest public dataset of human voices available for use, called Common Voice. The dataset covers 18 different languages (including English, French, German, Mandarin Chinese, Welsh, and Kabyle) and adds about 1,400 hours of recorded voice clips from more than 42,000 contributors. “With this release, the continuously growing Common Voice dataset is now the largest ever of its kind, with tens of thousands of people contributing their voices and originally written sentences to the public domain (CC0)”, states the Mozilla team.

The Common Voice dataset is unique and rich in diversity, as it represents a global community of voice contributors. Contributors can opt in to provide other information, such as age, sex, and accent, so that their voice clips are attached to data that is useful in training speech engines.

Mozilla enabled multi-language support back in June 2018, making Common Voice more global and inclusive. The communities contributing to the project have helped launch data collection efforts in 22 different languages, with 70 more in progress on the Common Voice site. With their help, Mozilla has made the latest additions to the Common Voice dataset, including languages such as Dutch, Hakha-Chin, Esperanto, Farsi, Basque, and Spanish. It plans to continue working with these communities to retain the diversity of the voices represented.

As per the Mozilla team, these public contributors can not only track per-language progress in recording and validation, but have also improved the prompts that vary from clip to clip. Mozilla has also added a new option to create a saved profile, which helps contributors keep track of their progress and metrics across different languages. Contributors can also provide optional demographic profile information, which further helps improve the audio data used in training speech recognition accuracy.

Beyond the dataset, Mozilla aims to contribute to a more diverse and innovative voice technology ecosystem, releasing voice-enabled products while supporting researchers and smaller players. “For Common Voice, our focus in 2018 was to build out the concept, make it a tool for any language community to use, optimize the website, and build a robust backend. Our overall aim remains: Providing more and better data to everyone in the world who seeks to build and use voice technology”, states the Mozilla team.

For more information on this announcement, check out the official Mozilla blog post.

Mozilla partners with Scroll to understand consumer attitudes for an ad-free experience on the web
Mozilla shares key takeaways from the Design Tools survey
Mozilla partners with Ubisoft to Clever-Commit its code, an artificial intelligence assisted assistant

AI Village shares its perspective on OpenAI’s decision to release a limited version of GPT-2

Bhagyashree R
28 Feb 2019
4 min read
Earlier this month, OpenAI released a limited version of GPT-2, its unsupervised language model, with a warning that it could be used to automate the production of fake content. While many machine learning researchers supported the decision to put AI safety first, some felt that OpenAI was spreading fear and hindering reproducibility, while others felt it was a PR stunt. AI Village, a community of hackers and data scientists working together to spread awareness about the use and misuse of AI, has also shared its views on GPT-2 and its threat models.

AI Village said in its blog post, “...people need to know what these algorithms that control their lives are capable of. This model seems to have capabilities that could be dangerous and it should be held back for a proper review.”

According to AI Village, these are the potential threat models in which GPT-2 could be used:

The bot-based misinformation threat model

Back in 2017, when the FCC launched a public comments website, it faced a massive coordinated botnet attack. The botnet posted millions of anti-net-neutrality comments alongside those of humans. Researchers were able to detect this disinformation with near certainty by using regexes to filter for the templated comments. AI Village argues that if those comments had been generated by GPT-2, we would not have been able to tell that they were written by a botnet.

Amplifying human-generated disinformation

We have seen a significant amount of bot activity on different online platforms. Very often, these bots simply amplify fake content created by humans with upvotes and likes: a bot logs in to a platform, likes the target post, and logs off to make way for the next bot. This behavior is quite different from that of a human, who actually scrolls through posts and stays on the platform for some time, and it is what gives a bot away. This metadata of login times, locations, and site activity can prove really helpful in detecting bots.

Automated spear phishing

In a paper published in 2016, two data scientists, John Seymour and Philip Tully, introduced SNAP_R, a recurrent neural network that can learn to tweet phishing posts targeting specific end users. The GPT-2 language model could likewise be used for automated spear phishing campaigns.

How can we prevent the misuse of AI?

With this decision, OpenAI wanted to start a discussion about the responsible release of machine learning models, and AI Village hopes that having more such discussions could prevent AI threats to our society. “We need to have an honest discussion about misinformation & disinformation online. This discussion needs to include detecting and fighting botnets, and users who are clearly not who they say they are from generating disinformation.”

In recent years we have seen many breakthroughs in AI, but comparatively little effort has been put into preventing its malicious use. Generative Adversarial Networks are now capable of producing headshots that are indistinguishable from photos. Deepfakes for video and audio have advanced so much that they almost seem real. Yet we currently have no mechanism in place for researchers to responsibly release work that could potentially be used for evil. “With truly dangerous AI, it should be locked up. But after we verify that it's a threat and scope out that threat”, states the blog post.
AI Village believes we need more thorough discussions about such AI systems and the damage they can do. Last year, in a paper, AI Village listed some of the ways AI researchers, companies, legislators, security researchers, and educators can come together to prevent and mitigate AI threats:

Policymakers and technical researchers should come together to investigate, prevent, and mitigate potential malicious uses of AI.
Researchers should consider the dual use of their work before opening it to the general public, and proactively reach out to relevant actors when harmful applications are foreseeable.
We should have best practices and more sophisticated methods in place to address dual-use concerns.
We should try to expand the range of stakeholders and domain experts involved in discussions of these challenges.

You can read the AI Village post on its official website.

OpenAI’s new versatile AI model, GPT-2 can efficiently write convincing fake news from just a few words
Artificial General Intelligence, did it gain traction in research in 2018?
OpenAI team publishes a paper arguing that long term AI safety research needs social scientists

MarioNet: A browser-based attack that allows hackers to run malicious code even if users exit a web page

Savia Lobo
28 Feb 2019
3 min read
If you think closing a website closes down the possibility of your device being tracked, you are wrong. Greek researchers have revealed a new browser-based attack named MarioNet, with which attackers can run malicious code inside users' browsers even after users have closed, or navigated away from, the web page on which they got infected. The researchers' paper, “Master of Web Puppets: Abusing Web Browsers for Persistent and Stealthy Computation”, also examines different anti-malware browser extensions and anti-mining countermeasures, and puts forward several mitigations that browser makers could adopt. The MarioNet attack was presented on February 25 at the NDSS 2019 conference in San Diego, USA.

MarioNet allows hackers to assemble giant botnets from users’ browsers. The researchers state that these bots can be used for in-browser crypto-mining (cryptojacking), DDoS attacks, malicious file hosting/sharing, distributed password cracking, creating proxy networks, advertising click fraud, and traffic-stats boosting.

MarioNet can easily survive even after a user exits the browser or web page, because modern web browsers support an API called Service Workers. “This mechanism allows a website to isolate operations that render a page's user interface from operations that handle intense computational tasks so that the web page UI doesn't freeze when processing large quantities of data”, ZDNet reports. In the research paper, the authors explain that service workers are an update to an older API called Web Workers. Unlike a web worker, a service worker, once registered and activated, can live and run in the page's background without requiring the user to continue browsing through the site that loaded it.

The attack routine consists of registering a service worker when the user lands on an attacker-controlled website and then abusing the Service Worker SyncManager interface to keep the service worker alive after the user navigates away. The attack doesn't require any user interaction, as browsers don't alert users or ask for permission before registering a service worker; everything happens under the browser's hood while the user waits for the website to load.

MarioNet allows attackers to place malicious code on high-traffic websites for a short period of time. The attackers gain a huge user base, remove the malicious code, but continue to control the infected browsers from another central server. The attack can also persist across browser reboots by abusing the Web Push API, although this requires the attacker to obtain the user's permission on the infected hosts to access that API.

The researchers also highlight that because service workers were introduced a few years back, the MarioNet attack works in almost all desktop and mobile browsers. The places where a MarioNet attack won't work are IE (desktop), Opera Mini (mobile), and Blackberry (mobile).

To know more about the MarioNet attack in detail, read the complete research paper.

New research from Eclypsium discloses a vulnerability in Bare Metal Cloud Servers that allows attackers to steal data
Security researchers disclose vulnerabilities in TLS libraries and the downgrade attack on TLS 1.3
Remote Code Execution flaw in APT Linux package manager allows man-in-the-middle attack

CUDA 10.1 released with new tools, libraries, improved performance and more

Amrata Joshi
28 Feb 2019
2 min read
Yesterday, the team at NVIDIA released CUDA 10.1 with a new lightweight GEMM library, new functionality and performance updates to existing libraries, and improvements to the CUDA Graphs APIs.

What’s new in CUDA 10.1?

There are new encoding and batched decoding functionalities in nvJPEG, and faster performance for a broad set of random number generators in cuRAND. This release also brings improved performance and support for fork/join kernels in the CUDA Graphs APIs.

Compiler

The CUDA-C and CUDA-C++ compiler, nvcc, is found in the bin/ directory. It is built on top of the NVVM optimizer, which itself is built on top of the LLVM compiler infrastructure. Developers who want to target NVVM directly can do so using the Compiler SDK, which is available in the nvvm/ directory.

Tools

New development tools are available in the bin/ directory, including IDEs like Nsight (Linux, Mac) and Nsight VSE (Windows), and debuggers like cuda-memcheck, cuda-gdb (Linux), and Nsight VSE (Windows). The tools also include profilers and utilities.

Libraries

This release ships cuBLASLt, a new lightweight GEMM library with a flexible API and tensor core support for INT8 inputs and FP16 CGEMM split-complex matrix multiplication. CUDA 10.1 also features the selective eigensolvers SYEVDX and SYGVDX in cuSOLVER. Among the utility libraries in the lib/ directory (DLLs on Windows are in bin/) are cublas (BLAS), cublas_device (BLAS kernel interface), and cuda_occupancy (kernel occupancy calculation [header file implementation]).

To know more about this news in detail, check out the post by NVIDIA.

Implementing color and shape-based object detection and tracking with OpenCV and CUDA [Tutorial]
ClojureCUDA 0.6.0 now supports CUDA 10
Stable release of CUDA 10.0 out, with Turing support, tools and library changes

Stanford researchers introduce two datasets, CoQA and HotpotQA, to bring “reading” and “reasoning” to question answering beyond simple pattern matching

Amrata Joshi
28 Feb 2019
4 min read
On Tuesday, Stanford University researchers introduced two recent datasets collected by the Stanford NLP Group to further advance the field of machine reading. The two new datasets, CoQA (Conversational Question Answering) and HotpotQA, work towards incorporating more “reading” and “reasoning” into the task of question answering, moving beyond questions that can be answered by simple pattern matching. CoQA introduces a context-rich interface of a natural dialog about a paragraph of text. HotpotQA goes beyond the scope of one paragraph and presents the challenge of reasoning over multiple documents to arrive at the answer.

Solving the task of machine reading, or question answering, is becoming an important step towards a powerful and knowledgeable AI system. Large-scale question answering datasets like the Stanford Question Answering Dataset (SQuAD) and TriviaQA have driven a lot of progress in this direction, enabling researchers to train effective deep learning models.

What is CoQA?

Most question answering systems are limited to answering questions independently, but in a conversation, questions are usually interconnected: it is more natural to seek information by engaging in conversations involving a series of interconnected questions and answers. CoQA is a Conversational Question Answering dataset developed by researchers at Stanford University to address this limitation and work in the direction of conversational AI systems.

Features of the CoQA dataset

The researchers didn’t restrict answers to be a contiguous span in the passage, since many questions can’t be answered by a single span without limiting the naturalness of the conversation. For example, for a question like “How many times has a word been repeated?”, the answer can simply be three, even if the passage does not spell this out directly.

Most QA datasets focus on a single domain, which makes it difficult to test the generalization ability of existing models. The CoQA dataset is collected from seven different domains: children’s stories, literature, middle and high school English exams, news, Wikipedia, Reddit, and science.

The CoQA challenge, launched in August 2018, has received a great deal of attention and has become one of the most competitive benchmarks. Since the release of Google’s BERT models last November, a lot of progress has been made, lifting the performance of all current systems. Microsoft Research Asia’s state-of-the-art ensemble system “BERT+MMFT+ADA” achieved 87.5% in-domain F1 accuracy and 85.3% out-of-domain F1 accuracy, numbers that are now approaching human performance.

HotpotQA: Machine Reading over Multiple Documents

We often need to read multiple documents to find facts about the world. For instance, one might wonder: in which state was Yahoo! founded? Does Stanford have more computer science researchers than Carnegie Mellon University? Or simply, how long do I need to run to burn off the calories of a Big Mac? The web contains the answers to many of these questions, but the content is not always in a readily available form, or even in one place. To answer such questions, a QA system must find the relevant supporting facts and compare them in a meaningful way to yield the final answer.
HotpotQA is a large-scale question answering (QA) dataset that contains about 113,000 question-answer pairs. These questions require QA systems to sift through large quantities of text documents to generate an answer. While collecting the data for HotpotQA, the researchers had annotators specify the supporting sentences they used to arrive at the final answer (a small sketch of this example shape follows below).

To conclude, CoQA considers questions that would arise in a natural dialog given a shared context, with challenging questions that require reasoning beyond one dialog turn, while HotpotQA focuses on multi-document reasoning and challenges the research community to develop new methods for acquiring supporting information.

To know more about this news, check out the post by Stanford.
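As a rough illustration of what a multi-hop QA example looks like, here is a Rust sketch of the shape described above: a question, a final answer, and the annotated supporting sentences drawn from several documents. The field names and sample data are our own guesses at a minimal schema, not the dataset's exact JSON format.

```rust
// A minimal, hypothetical schema for a HotpotQA-style example: the answer
// depends on supporting sentences spread across multiple documents, which
// annotators mark explicitly. Field names and data are illustrative only.
use std::collections::HashSet;

struct SupportingFact {
    doc_title: String,
    sentence_index: usize, // which sentence in that document supports the answer
}

struct MultiHopExample {
    question: String,
    answer: String,
    supporting_facts: Vec<SupportingFact>,
}

fn main() {
    let example = MultiHopExample {
        question: "In which state was Yahoo! founded?".to_string(),
        answer: "California".to_string(),
        supporting_facts: vec![
            SupportingFact { doc_title: "Yahoo!".to_string(), sentence_index: 0 },
            SupportingFact { doc_title: "Sunnyvale, California".to_string(), sentence_index: 1 },
        ],
    };

    // A multi-hop example is one whose supporting facts span several docs.
    let docs: HashSet<&str> = example
        .supporting_facts
        .iter()
        .map(|f| f.doc_title.as_str())
        .collect();

    println!("Q: {}", example.question);
    println!("A: {} (supported by {} sentences across {} documents)",
             example.answer, example.supporting_facts.len(), docs.len());
}
```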
Stanford experiment results on how deactivating Facebook affects social welfare measures
Thank Stanford researchers for Puffer, a free and open source live TV streaming service that uses AI to improve video-streaming algorithms
Stanford researchers introduce DeepSolar, a deep learning framework that mapped every solar panel in the US

Redis Labs announces annual growth of more than 60% in the 2019 fiscal year

Natasha Mathur
28 Feb 2019
2 min read
Redis Labs, the provider of Redis Enterprise, announced details of its 14th consecutive quarter of double-digit growth and annual growth of more than 60% in the company’s 2019 fiscal year. Just last week, Redis Labs, a California-based computer software startup, announced that it had raised $60 million in a Series E financing round led by a new investor, the private equity firm Francisco Partners.

Redis Labs finished the 2019 fiscal year with more than 250 full-time employees, its global headcount having increased 50 percent in the past year. As the company scales its global go-to-market team, new offices have been opened in Austin, Texas, and Bangalore, India, to drive adoption of Redis Enterprise.

On the back of these record growth results and the recently secured funding, Redis Labs aims to accelerate its plans in the new fiscal year across sales, marketing, and product development. This should help it meet the demand for a multi-model database capable of delivering the performance, deployment flexibility, and seamless scaling needed to power instant experiences. Redis Labs continues to expand its business with Global 1000 enterprises such as Alliance Data, ANZ Bank, Applied Materials, Carrefour, Dick's Sporting Goods, Thomas Cook, Mercedes-Benz, Nordea, UiPath, and WestPac.

In addition, Alvin Richards has been promoted to Chief Product Officer from Chief Education Officer, a role he was appointed to back in 2017. This will help him continue the company’s market leadership and deliver innovation for the multi-model database market.

Redis was named 2019 technology of the year for the second time by IDG's InfoWorld, and ranks as the seventh most popular database in the DB-Engines ranking of more than 300 databases. Apart from having the highest rating among the top seven database providers, it was also the first database to reach 1 billion launches on Docker Hub, in 2018.

“Redis Enterprise delivers the requirements for a multi-model cloud-native database that operates at record-breaking performance with unmatched cost efficiency”, said Ofer Bengal, co-founder and CEO at Redis Labs, in an email sent to us.

RedisGraph v1.0 released, benchmarking proves it’s 6-600 times faster than existing graph databases
Redis Cluster Features Overview
Redis Labs moves from Apache2 modified with Commons Clause to Redis Source Available License (RSAL)

MariaDB CEO says big proprietary cloud vendors "strip-mining open-source technologies and companies”

Melisha Dsouza
28 Feb 2019
4 min read
At MariaDB OpenWorks, held earlier this week, MariaDB CEO Michael Howard took a stab at big proprietary cloud vendors, accusing them of "strip-mining open-source technologies and companies" and "abusing the license and privilege, not giving back to the community." His keynote described his plans for MariaDB, the company's future, and how he intends MariaDB to become an 'heir to Oracle and much more'. Throughout the keynote, Howard targeted his rivals, namely Amazon and Oracle, and contrasted MariaDB's mottos with theirs:

"We believe proprietary and closed licenses are dead. We believe you have to be a general-purpose database and not a relegated niche one, like--and nothing against it--time series. That's not going to be a general purpose database that will drive applications worldwide." MariaDB, in his telling, is exactly such a general-purpose database.

Accusations against Oracle and Amazon AWS

Targeting Oracle, Howard said, "Now, you can migrate complex operational Oracle systems to MariaDB. Last year, we had one of the largest banks in the world--Development Bank of Singapore--forklift from Oracle to MariaDB. Since then, MariaDB has seen five times the number of Oracle migrations happening over the last year."

Howard also accused Amazon's AWS of promoting its own brand and making MariaDB instances on AWS look incompetent in the process. When Austin Rutherford, MariaDB's VP of Customer Success, showed the audience the result of a HammerDB benchmark on AWS EC2, AWS's default MariaDB instances did poorly; AWS's homebrew Aurora, built on top of MySQL, consistently beat them, while the top-performing DBMS was MariaDB Managed Services on AWS. These results were not initially a major cause of concern, but Howard noted that one of the biggest retail drug companies in the world, a MariaDB customer, had told MariaDB that "Amazon offers the most vanilla MariaDB around. There's nothing enterprise about it. We could just install MariaDB from source on EC2 and do as well." It was then that he "began to wonder, is there something that they're deliberately crippling?", adding, "There is something not kosher happening."

Comparing MariaDB to Aurora, Howard said, "The best Aurora can do in a failover is 12 seconds. MariaDB can do it in less than a second."

'Heir to Oracle'

In his keynote, Howard spoke about making MariaDB the 'heir apparent' to Oracle, even including a checklist of what needs to be achieved to be that 'drop-in' replacement for the market-leading database.

[Checklist slide omitted. Source: Computerworld UK]

According to The Register, just last year MariaDB released an Oracle compatibility layer, which allows customers to migrate their applications from Oracle to MariaDB while reusing their internal skills. "All these Oracle application developers and people familiar with Oracle – you can't just say 'jump off a cliff onto new ground'; you have to give them a bridge. Sometimes that's emotional, sometimes it's technical."

"It was so jarring to the proprietary vendors who pride themselves on secrecy, on taking advantage – at least monetarily, in the margins sense – from customers," he said. "Open-source destroys these artificial definitions and boundaries that have been so, so much a part of the software industry."

Speaking to Computerworld UK, Howard further explained his views on the big cloud vendors: "Oracle as the example of on-premise lock-in and Amazon being the example of cloud lock-in.
You could interchange the names; you can honestly say now that Amazon should just be called Oracle Prime, they have gone so aggressive. Fortunately or unfortunately, depending on whose position you want to take, it's all good for MariaDB because we can act as consumer protection. We are going to protect the brand quality and technical quality of our product no matter where it sits."

Red Hat Satellite to drop MongoDB and will support only PostgreSQL backend
Red Hat drops MongoDB over concerns related to its Server Side Public License (SSPL)
GNU Health Federation message and authentication server drops MongoDB and adopts PostgreSQL

Facebook announces ‘Habitat’, a platform for embodied Artificial Intelligence research

Melisha Dsouza
28 Feb 2019
2 min read
Today, the Facebook research team announced ‘Habitat’, a new platform for embodied AI research. According to the team, this is a “modular high-level library to train embodied AI agents across a variety of tasks, environments, and simulators”. It will power a shift from ‘internet AI’, based on static datasets, to an embodied AI model in which agents act in realistic environments. The project was launched by Facebook Reality Labs, Georgia Tech, SFU, Intel, and Berkeley to bridge the disconnect between ‘internet AI’ and ‘embodied AI’. It will standardize the entire ‘software stack’ for training embodied agents and release modular high-level libraries to train and deploy them. An important objective of Habitat-API is to make it easy for users to take a 3D environment and set up a variety of embodied agent tasks in it.

Habitat consists of Habitat-Sim, Habitat-API, and the Habitat Challenge.

#1 Habitat-Sim

This is “a flexible, high-performance 3D simulator with configurable agents, multiple sensors, and generic 3D dataset handling”. It has built-in support for SUNCG, Matterport3D, Gibson, and other datasets. Rendering a scene from the Matterport3D dataset, Habitat-Sim achieves several thousand frames per second (FPS) running single-threaded and reaches over 10,000 FPS multi-process on a single GPU.

#2 Habitat-API

Habitat-API covers defining embodied AI tasks, configuring embodied agents, training these agents, and benchmarking their performance on the defined tasks using standard metrics.

#3 Habitat Challenge

The Habitat Challenge is an autonomous navigation challenge that benchmarks and accelerates progress in embodied AI. Unlike classical ‘internet AI’ image-dataset challenges, participants upload code, not predictions; the uploaded agents are then evaluated to test for generalization.

You can head over to Facebook’s official announcement for more information on this news.

Facebook and Google pressurized to work against ‘Anti-Vaccine’ trends after Pinterest blocks anti-vaccination content from its pinboards
Facebook’s AI Chief at ISSCC talks about the future of deep learning hardware
Regulate Google, Facebook, and other online platforms to protect journalism, says a UK report

The Ember project announces version 3.8 of Ember.js, Ember Data, and Ember CLI

Bhagyashree R
28 Feb 2019
2 min read
Yesterday, the community behind the Ember project released version 3.8 of its three sub-projects: Ember.js, Ember Data, and Ember CLI. Along with a few bug fixes in Ember Data and Ember CLI, this release introduces two new features: the element modifier manager and the array helper.

Updates in the Ember.js web framework

Ember.js 3.8 is a long-term support candidate. The release is incremental and backward compatible.

Element modifier manager

The element modifier manager is a very low-level API responsible for coordinating the lifecycle events that are triggered when an element modifier is invoked, installed, and updated.

Array helper

You can now create an array in a template with the new {{array}} helper introduced in Ember.js 3.8. This helper works very much like the existing {{hash}} helper.

Deprecations

Computed property overridability: Computed properties in Ember.js are overridable by default when no setter is defined. As this behavior is bug-prone, it has been deprecated. The ‘readOnly()’ modifier that prevents this behavior will be deprecated once overridability has been removed.

@ember/object#aliasMethod: This method, which allows you to add aliases to objects defined with EmberObject, is now deprecated, as it is little known and rarely used by developers.

Component manager factory function: setComponentManager no longer requires a string to associate the custom component class with the component manager. Instead, developers can pass a factory function that produces an instance of the component manager.

Updates in Ember Data

Not many changes have been made to Ember Data in this release. Along with updating the documentation, the team has updated ‘_scheduleFetch’ to use ‘_fetchRecord’ for belongsTo relationships.

Updates in Ember CLI

The {{content-for}} hook has been updated so developers can use it in the same way when different types are specified, for instance {{content-for 'head'}} {{content-for 'head-footer'}}. With this release, gitignore will ignore Yarn .pnp files.

To read the entire list of updates, visit Ember’s official website.

The Ember project announces version 3.7 of Ember.js, Ember Data, and Ember CLI
The Ember project releases version 3.5 of Ember.js, Ember Data, and Ember CLI
The Ember project announces version 3.4 of Ember.js, Ember Data, and Ember CLI

Magic Leap announces selections for the Magic Leap Independent Creator Program

Sugandha Lahoti
28 Feb 2019
1 min read
Last November, Magic Leap introduced its Independent Creator Program; yesterday, it named the selections for the program. The Magic Leap team reviewed over 6,500 entries and selected projects in a wide range of categories, including education, entertainment, gaming, enterprise, and more.

The Magic Leap Independent Creator Program is a development fund to help individual developers and teams kick-start their Magic Leap One projects. It offers grants of between $20,000 and $500,000 per project, along with developer, hardware, and marketing support.

[Table of selected teams omitted. Source: Magic Leap]

The selected teams will now be paired with Magic Leap’s Developer Relations team for guidance and support. Once the teams have built, submitted, and launched their projects, the best experiences will be showcased at the L.E.A.P. Conference in 2019. Teams will receive dedicated marketing support, including planning, promotion, and social media amplification, and the Developer Relations team, consisting of Magic Leap’s subject matter experts and QA testers, will give developers one-on-one guidance.

Magic Leap acquires Computes Inc to enhance spatial computing
Magic Leap unveils Mica, a human-like AI in augmented reality
Magic Leap teams with Andy Serkis’ Imaginarium Studios to enhance Augmented Reality

2018 prediction: Was reinforcement learning applied to many real-world situations?

Prasad Ramesh
27 Feb 2019
4 min read
Back in 2017, we predicted that reinforcement learning would be an important subplot in the growth of artificial intelligence. After all, a machine learning agent that adapts and ‘learns’ according to environmental changes has all the makings of an incredibly powerful strain of artificial intelligence. Surely, then, the world was going to see new, more real-world uses for reinforcement learning. But did that really happen? You can bet it did. However, with all things intelligent subsumed into the sexy, catch-all term artificial intelligence, you might have missed where reinforcement learning was used.

Let’s go back to 2017 to begin. That year marked a genesis in reinforcement learning. The biggest and most memorable event was perhaps Google’s AlphaGo defeating the world’s best Go player. That victory could ultimately be attributed to reinforcement learning: AlphaGo ‘played’ against itself multiple times, each time becoming ‘better’ at the game, developing an algorithmic understanding of how it could best defeat an opponent. However, reinforcement learning went well beyond board games in 2018.

Reinforcement learning in cancer treatment

MIT researchers used reinforcement learning to improve brain cancer treatment. Essentially, the reinforcement learning system is trained on a set of data on established treatment regimes for patients, and then ‘learns’ the most effective strategy for administering cancer treatment drugs. The important point is that artificial intelligence here can help find the right balance between administering and withholding the drugs.

Reinforcement learning in self-driving cars

In 2018, UK self-driving car startup Wayve trained a car to drive using its ‘imagination’. Real-world data was collected offline to train the model, which was then used to observe and predict the ‘motion’ of items in a scene and drive on the road. Even though the data was collected in sunny conditions, the system can also drive in rainy situations, adjusting itself to reflections from puddles and the like. As the data is collected from the real world, there aren’t any major differences between simulation and real application.

UC Berkeley researchers also developed a deep reinforcement learning method to optimize SQL joins. The join-ordering problem is formulated as a Markov Decision Process (MDP), and a method called Q-learning is applied to solve it. The deep reinforcement learning optimizer, called DQ, produces solutions that are close to optimal across all cost models, and does so without any previous information about the index structures.
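For readers unfamiliar with Q-learning, here is a tiny, generic tabular sketch of the update rule it rests on, Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)), applied to a toy five-cell corridor rather than to join ordering. DQ itself replaces the table with a neural network; everything below is our illustration, not Berkeley's code.

```rust
// Generic tabular Q-learning on a toy problem: an agent in a 1-D corridor
// of 5 cells learns to walk right towards a reward in the last cell.
// This illustrates only the Q-learning update rule.

struct Lcg(u64);
impl Lcg {
    fn next_f64(&mut self) -> f64 {
        self.0 = self.0.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

const N_STATES: usize = 5;
const ALPHA: f64 = 0.5;   // learning rate
const GAMMA: f64 = 0.9;   // discount factor
const EPSILON: f64 = 0.2; // exploration rate

/// Environment dynamics: action 0 = step left, action 1 = step right.
fn step(state: usize, action: usize) -> (usize, f64, bool) {
    let next = if action == 1 {
        (state + 1).min(N_STATES - 1)
    } else {
        state.saturating_sub(1)
    };
    let done = next == N_STATES - 1;
    (next, if done { 1.0 } else { 0.0 }, done)
}

fn main() {
    let mut rng = Lcg(1);
    let mut q = [[0.0f64; 2]; N_STATES]; // Q[state][action]

    for _episode in 0..200 {
        let mut s = 0usize;
        loop {
            // Epsilon-greedy action selection.
            let a = if rng.next_f64() < EPSILON {
                (rng.next_f64() * 2.0) as usize
            } else if q[s][1] >= q[s][0] {
                1
            } else {
                0
            };

            let (s2, r, done) = step(s, a);
            let best_next = q[s2][0].max(q[s2][1]);
            // The Q-learning update.
            q[s][a] += ALPHA * (r + GAMMA * best_next - q[s][a]);
            s = s2;
            if done { break; }
        }
    }

    for (s, row) in q.iter().enumerate() {
        println!("state {}: Q(left) = {:.3}, Q(right) = {:.3}", s, row[0], row[1]);
    }
}
```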
Robot prosthetics

OpenAI researchers created a robot hand called Dactyl in 2018. Dactyl has human-like dexterity for performing complex in-hand manipulations, achieved through the use of reinforcement learning.

Finally, it’s back to Go. Well, not just Go: chess, and a game called Shogi too. This time, DeepMind’s AlphaZero was the star. Whereas AlphaGo mastered Go, AlphaZero mastered all three. This is significant because it indicates that reinforcement learning could help develop a more generalized intelligence than can currently be built: an intelligence able to adapt to new contexts and situations, to almost literally understand the rules of very different games. There was something else impressive about AlphaZero: it was only introduced to a set of basic rules for each game. Without any domain knowledge or examples, the newer program outperformed the current state-of-the-art programs in all three games with only a few hours of self-training.

Reinforcement learning: making an impact IRL

These were just some of the applications of reinforcement learning to real-world situations to come out of 2018. We’re sure we’ll see more as 2019 develops; the only real question is just how extensive its impact will be.
This AI generated animation can dress like humans using deep reinforcement learning
Deep reinforcement learning – trick or treat?
DeepMind open sources TRFL, a new library of reinforcement learning building blocks