
Tech News


Introducing remove.bg, a deep learning based tool that automatically removes the background of any person based image within 5 seconds

Amrata Joshi
18 Dec 2018
3 min read
Yesterday, Benjamin Groessing, a web consultant and developer at byteq, released remove.bg, a tool built with Python, Ruby, and deep learning. The tool automatically removes the background of any image within 5 seconds, using a number of custom algorithms to process the image.

https://twitter.com/hammer_flo_/status/1074914463726350336

It is a free service, and users don't have to manually select the background and foreground layers to separate them. One can simply select an image and instantly download the result with the background removed.

Features of remove.bg

Personal and professional use: Graphic designers, photographers, and selfie lovers alike can use remove.bg to strip backgrounds.

Saves time and money: Because the process is automated, it saves time, and the service is free of cost.

100% automatic: Apart from the image file itself, this release doesn't require any input such as selecting pixels or marking persons.

How does remove.bg work?

https://twitter.com/begroe/status/1074645152487129088

Remove.bg uses AI to detect foreground layers and separate them from the background, with additional algorithms to improve fine details and prevent color contamination. The AI detects persons as foreground and everything else as background, so it only works if there is at least one person in the image. Users can upload images of any resolution, but for performance reasons the output image is limited to 500 × 500 pixels.

Privacy in remove.bg

User images are uploaded through a secure SSL/TLS-encrypted connection. The images are processed, and the result is stored temporarily until the user downloads it; roughly an hour later, the image files are deleted. The privacy notice on the official remove.bg website states, "We do not share your images or use them for any other purpose than removing the background and letting you download the result."

What can be expected from the next release?

The next set of releases might support other kinds of images, such as product photos, and the team might also release an easy-to-use API.

Users are very excited about this release and the technology behind it. Many are comparing it with the portrait mode on the iPhone X; though it is not as fast, users still like it.

https://twitter.com/Baconbrix/status/1074805036264316928

But how strong remove.bg is on privacy is a bigger question. Though the website carries a privacy note, it will take more than that to win users' trust. The images uploaded to remove.bg's cloud might be at risk: how strong is the security, and what preventive measures has the team taken? These are a few of the questions that might bother many. To follow the ongoing discussion on remove.bg, check out Benjamin Groessing's AMA Twitter thread.

Related reading:
Facebook open-sources PyText, a PyTorch based NLP modeling framework
Deep Learning Indaba presents the state of Natural Language Processing in 2018
NYU and AWS introduce Deep Graph Library (DGL), a python package to build neural network graphs
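The article notes that an easy-to-use API might follow remove.bg's initial release. Purely as an illustration, here is a minimal Python sketch of what calling such an HTTP background-removal service could look like; the endpoint URL, parameter name, and auth header are all assumptions, not a documented API.

```python
import requests  # third-party: pip install requests

# Hypothetical endpoint and auth scheme -- the article only says an
# easy-to-use API *might* be released, so treat this purely as a sketch.
API_URL = "https://api.remove.bg/v1.0/removebg"  # assumed URL

with open("portrait.jpg", "rb") as f:
    resp = requests.post(
        API_URL,
        files={"image_file": f},                 # assumed parameter name
        headers={"X-Api-Key": "YOUR_API_KEY"},   # assumed auth header
        timeout=30,
    )
resp.raise_for_status()

# Save the returned image with the background removed.
with open("portrait-no-bg.png", "wb") as out:
    out.write(resp.content)
```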


Microsoft open sources Trill, a streaming engine that employs algorithms to process “a trillion events per day”

Prasad Ramesh
18 Dec 2018
3 min read
Yesterday, Microsoft open sourced Trill, previously an internal project used for processing "a trillion events per day." It was the first streaming engine to incorporate algorithms that process events in small batches of data based on latency on the user side. It powers services like Financial Fabric, Bing Ads, Azure Stream Analytics, and Halo. With the ever-increasing flow of data, the ability to process huge amounts of it each millisecond is a necessity; Microsoft has open sourced Trill to "address this growing trend."

Microsoft Trill features

Trill is a single-node engine library, and any .NET application, service, or platform can readily use it to start processing queries. It has a temporal query language which allows users to run complex queries over real-time and offline data sets, and its high performance lets users get results with great speed and low latency.

How did Trill start?

Trill began as a research project at Microsoft Research in 2012 and has been described in research venues such as VLDB and the IEEE Data Engineering Bulletin. Trill is based on a former Microsoft service called StreamInsight, a platform that allowed developers to develop and deploy event-processing applications. Both systems are built on an extended query and data model that adds a time component to the relational model.

Earlier systems could achieve only some of these benefits; Trill brings them together in one package. It was the very first streaming engine to incorporate algorithms that process events in data batches sized by the latency users can tolerate, and the first to organize those batches in a columnar format, which lets queries execute with much higher efficiency. Using Trill is similar to working with any .NET library, and it delivers the same performance on real-time and offline datasets. Trill lets users perform advanced time-oriented analytics and look for complex patterns over streaming datasets.

Open-sourcing Trill

Microsoft believes Trill is the best available tool in this domain for the developer community. By open sourcing it, the company wants to offer its IStreamable abstraction to all customers. There are opportunities for community involvement in Trill's future development; for example, it allows users to write custom aggregates. There are also research projects built on Trill where the code is present but not yet ready for use.

For more details on Trill, visit the Microsoft website.

Related reading:
Microsoft announces Windows DNS Server Heap Overflow Vulnerability, users dissatisfied with patch details
Microsoft confirms replacing EdgeHTML with Chromium in Edge
Microsoft Connect(); 2018: .NET foundation open membership, .NET Core 2.2, .NET Core 3 Preview 1 released, WPF, WinUI, Windows forms open sourced
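Trill itself is a .NET library, but its signature idea — grouping events into batches sized by the latency users will tolerate — is easy to sketch generically. The following Python sketch shows only that batching policy; it is not Trill's API, and all names are illustrative.

```python
import time
from collections import deque
from typing import Any, Callable, Deque, List

class LatencyBoundedBatcher:
    """Buffer incoming events, flushing when either the batch is full or
    the oldest buffered event has waited longer than the latency budget.
    (A real engine would also flush on a timer when input goes quiet.)"""

    def __init__(self, process: Callable[[List[Any]], None],
                 max_batch: int = 1024, max_latency_s: float = 0.010):
        self.process = process               # downstream query operator
        self.max_batch = max_batch           # throughput knob
        self.max_latency_s = max_latency_s   # user-tolerated latency
        self.buffer: Deque[Any] = deque()
        self.oldest_ts = 0.0

    def push(self, event: Any) -> None:
        if not self.buffer:
            self.oldest_ts = time.monotonic()
        self.buffer.append(event)
        full = len(self.buffer) >= self.max_batch
        late = time.monotonic() - self.oldest_ts >= self.max_latency_s
        if full or late:
            self.flush()

    def flush(self) -> None:
        if self.buffer:
            # Handing off whole batches is what makes columnar layout and
            # amortized per-batch costs possible.
            self.process(list(self.buffer))
            self.buffer.clear()
```

A tiny max_latency_s behaves like classic event-at-a-time streaming, while a larger one trades latency for throughput — the dial Trill exposes to users.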


Clojure 1.10 released with Prepl, improved error reporting and Java compatibility

Amrata Joshi
18 Dec 2018
5 min read
Yesterday, the team behind Clojure released Clojure 1.10, a dynamic, general-purpose programming language. Clojure treats code as data and has a Lisp macro system.

What's new in Clojure 1.10?

Java compatibility and dependencies

Java 8 is the minimum requirement for Clojure 1.10. The release comes with ASM 6.2 and updated javadoc links; now that Java 8 is the floor, conditional logic for older Java versions has been removed, and a type hint was added to address reflection ambiguity in JDK 11. The spec.alpha dependency has been updated to 0.2.176 and core.specs.alpha to 0.2.44.

Error printing

In Clojure 1.10, errors are categorized into phases:

:read-source: an error thrown while reading characters at the REPL or from a source file.
:macro-syntax-check: a syntax error found in the syntax of a macro call, either from the spec or from a macro throwing IllegalArgumentException, IllegalStateException, or ExceptionInfo.
:macroexpansion: any error thrown during macro evaluation.
:compile-syntax-check: a syntax error caught during compilation.
:compilation: a non-syntax error caught during compilation.
:execution: any error thrown at execution time.
:read-eval-result: an error thrown while reading the result of execution.
:print-eval-result: an error thrown while printing the result of execution.

Protocol extension by metadata

This release comes with a new option, :extend-via-metadata. When :extend-via-metadata is true, values can extend protocols by adding metadata. Protocol implementations are checked first for direct definitions (defrecord, deftype, reify), then for metadata definitions, and then for external extensions (extend, extend-type, extend-protocol).

Tap

Clojure 1.10 comes with tap, a shared, globally accessible system for distributing a series of informational or diagnostic values to a set of handler functions. It can be used as a better debug prn and for facilities like logging. The function tap> sends a value to the set of taps. A tap function may block (e.g. for streams) but should never impede calls to tap>; indefinite blocking may cause tap values to be dropped.

Read string capture mode

This release comes with a read+string function that not only mimics read but also captures the string that is read, returning both the read value and the whitespace-trimmed read string. This function requires a LineNumberingPushbackReader.

Prepl (alpha)

Prepl, a new stream-based REPL, produces structured output suitable for programmatic use. In a prepl, forms are read from the reader, and data maps are returned for the return value (if successful), output to *out* (possibly many), output to *err* (possibly many), and tap> values (possibly many). Other related functions in Clojure 1.10 include io-prepl, a prepl bound to *in* and *out* that works with the Clojure socket server, and remote-prepl, a prepl that can be connected to a remote prepl over a socket.

Datafy and nav

The clojure.datafy namespace provides data transformation for objects. Its datafy and nav functions can be used to transform and navigate through object graphs. datafy is still in the alpha stage.

Major bug fixes

An ASM regression has been fixed, along with issues with deprecated JDK APIs that existed in the previous release. Invalid bytecode generation for static interface method calls has been fixed, redundant key comparisons have been removed from HashCollisionNode, and clojure.test now reports the correct line number for uncaught ExceptionInfo.

Many users have appreciated the Clojure team's efforts on this release. According to most of them, it might prove to be a better foundation for developer tooling, and they are happy with the updated debug messages and bug fixes. One user commented on Hacker News, "From the perspective of a (fairly large-scale at this point) app developer: I find it great that Clojure places such emphasis on backwards compatibility. The language has been designed by experienced and mature people and doesn't go through "let's throw everything out and start again" phases like so many other languages do." A few users, though, would prefer occasional breaking updates in exchange for better APIs.

Rich Hickey, the creator of the Clojure language, has received a great deal of appreciation, even from non-Clojurists. One user commented on Hacker News, "Rich's writing, presentations, and example of overall conceptual discipline and maturity have helped me focus on the essentials in ways that I could not overstate. I'm glad (but not surprised) to see so much appreciation for him around here, even among non-Clojurists (like myself)."

Though REPLs built on the ubiquitous paradigm of stdio streams are efficient, their major downside is mingling evaluation output with printed output (the "Print" step). A few users are comparing prepl with unrepl and wondering whether the design will work for their projects. Some also note that while Clojure is stable, it doesn't have a static type system, and they are unhappy with the changelog. The release has raised a few questions in the developer community; one of the big ones is whether prepls will someday replace remote APIs. It would be interesting to see if that happens with the next set of Clojure releases.

Get more information about Clojure 1.10 on Clojure's website.

Related reading:
ClojureCUDA 0.6.0 now supports CUDA 10
Clojure 1.10.0-beta1 is out!
Clojure for Domain-specific Languages – Design Concepts with Clojure


France to levy digital services tax on big tech companies like Google, Apple, Facebook, Amazon in the new year

Savia Lobo
18 Dec 2018
3 min read
At a press conference yesterday, French Economy Minister Bruno Le Maire announced that France will levy a new tax on big tech companies including Google, Apple, Facebook, and Amazon, collectively known as GAFA, effective January 1, 2019. The tax is estimated to bring in €500m, or about $567 million, in the coming year.

Le Maire told France24 television, "I am giving myself until March to reach a deal on a European tax on the digital giants. If the European states do not take their responsibilities on taxing the GAFA, we will do it at a national level in 2019." In an interview with Reuters and a small group of European newspapers, he said, "We want a fair taxation of digital giants that creates value in Europe in 2019."

France, with Germany's help, had proposed a comprehensive digital services tax (DST) covering all 28 EU member states. However, Ireland dismissed the move, stating that it would aggravate US-EU trade tensions. Dublin also said the bloc should act only after the Organisation for Economic Co-operation and Development (OECD) presents its tax proposals in 2019. Le Maire, however, said that France would press ahead alone with the tax.

In March 2018, the European Commission published a proposal for a 3% tax on tech giants with global revenues above €750m (about $850 million) per year and EU revenue above €50m (about $57 million). But with some member states, including Ireland and the Netherlands, disagreeing on how to move forward, the process has stalled.

Per Le Maire, "The digital giants are the ones who have the money." The companies "make considerable profits thanks to French consumers, thanks to the French market, and they pay 14 percentage points of tax less than other businesses."

In October, British Chancellor Philip Hammond announced in the Budget that he plans to introduce a digital services tax from April 2020, following a consultation. The Chancellor's office has suggested the tax would generate at least 400 million pounds ($505 million) per year. According to Reuters, "President Emmanuel Macron's government has proposed taxing the tech giants on revenues rather than profits, to get around the problem that the companies shift the profits from where they are earned to low tax jurisdictions."

In their alternative plan, presented at a meeting of EU finance ministers, France and Germany proposed levying a 3 percent tax on digital advertising from Google and Facebook, which together account for about 75 percent of digital advertising, starting in 2021. Ministers asked the European Commission to work on the new proposal and present its findings in January or February. After the meeting, Le Maire said, "It's a first step in the right direction, which in the coming months should make the taxation of digital giants a possibility."

To know more about this in detail, visit France24's complete coverage.

Related reading:
Australia's Assistance and Access (A&A) bill, popularly known as the anti-encryption law, opposed by many including the tech community
Amazon addresses employees dissent regarding the company's law enforcement policies at an all-staff meeting, in a first
Senator Ron Wyden's data privacy law draft can punish tech companies that misuse user data
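To make the Commission's thresholds concrete, here is a small Python sketch of how the proposed 3% tax might apply to a hypothetical company. The figures are illustrative only, and taxing the whole in-scope EU revenue is a simplification of the actual proposal.

```python
# Hypothetical company, illustrative numbers only.
global_revenue_eur = 900e6   # above the proposed €750m global threshold
eu_revenue_eur = 80e6        # above the proposed €50m EU threshold
rate = 0.03                  # the European Commission's proposed 3% rate

# A company is in scope only if it clears both revenue thresholds.
liable = global_revenue_eur > 750e6 and eu_revenue_eur > 50e6
tax = rate * eu_revenue_eur if liable else 0.0
print(f"DST owed: EUR {tax:,.0f}")  # EUR 2,400,000 on €80m of EU revenue
```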


Chromium-based Brave browser shows 22% faster page load time than its Muon-based counterpart

Bhagyashree R
18 Dec 2018
2 min read
Back in March this year, the Brave team shared their plans to replace the desktop browser's Muon runtime with a more complete Chromium stack. Yesterday, the team published a report on performance improvements in Brave Core, the newly redesigned browser for desktop operating systems. Brave Core uses Chromium's native interface and supports nearly all Chrome features and extension APIs.

Brave is a free and open source web browser, founded by Brendan Eich, the inventor of JavaScript and co-founder of Mozilla, with a focus on privacy and performance. By switching to the Chromium code base, the browser becomes the latest addition to the Chromium bandwagon, which now includes Google Chrome, Vivaldi, Opera, and most recently, Edge.

The evaluation of Brave Core's performance was based on two critical metrics: how quickly it loads pages and how many resources it consumes. Brave 0.24.0 was compared against the Brave Core 0.55.12 beta release. For the comparison, the team used the Alexa News Top 10 sites, as they are frequently visited by a lot of people and are run by reputable companies that pay attention to their readers.

Results of the performance comparison between Brave Core and Muon-based Brave

The team arrived at the following results after comparing the upcoming Brave Core browser with the current version of Muon-based Brave on a desktop computer:

Load time savings on common desktops: Brave Core showed load time savings of 10%-34% on the tested popular media websites with the same page content and blocking, with a 22% average and an 18% median saving.
Performance on slower processors: In slower environments, similar to today's average Android device on a fast 3G connection, the savings ranged up to 44%.
Better CPU utilization: Brave Core showed better CPU utilization, with all computationally intensive tasks running faster across all tested websites and configurations.

These time savings come from improvements across HTML parsing, JavaScript execution, page rendering, and more. To read more about the performance analysis of Brave, check out the original post.

Related reading:
Introducing Basilisk, an open source XUL based browser and "close twin" to pre-Servo Firefox
Google's V8 7.2 and Chrome 72 gets public class fields syntax; private class fields to come soon
Google Chrome announces an update on its Autoplay policy and its existing YouTube video annotations
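The headline metric here, page load time, can be approximated in your own tests with the browser's Navigation Timing API. A minimal sketch using Python and Selenium follows; the Brave binary path is an assumption, and this is a rough proxy for the Brave team's methodology, not a reproduction of it.

```python
from selenium import webdriver  # third-party: pip install selenium

# Point Selenium's Chrome driver at a Chromium-based browser binary.
options = webdriver.ChromeOptions()
options.binary_location = "/usr/bin/brave-browser"  # assumed install path
driver = webdriver.Chrome(options=options)

driver.get("https://example.com")
# Navigation Timing: milliseconds from navigation start to the load event.
load_ms = driver.execute_script(
    "return window.performance.timing.loadEventEnd"
    " - window.performance.timing.navigationStart;"
)
print(f"page load: {load_ms} ms")
driver.quit()
```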


AI chipmaking startup ‘Graphcore’ raises $200m from BMW, Microsoft, Bosch, Dell

Melisha Dsouza
18 Dec 2018
2 min read
Today, Graphcore, a UK-based chipmaking startup, raised $200m in a series D funding round from investors including Microsoft and BMW, valuing the company at $1.7bn. The new funding brings the total capital raised by Graphcore to date to more than $300m.

The round was led by U.K. venture capital firm Atomico and Sofina, with participation from some of the biggest names in the AI and machine learning industry, including Merian Global Investors, BMW i Ventures, Microsoft, Amadeus Capital Partners, Robert Bosch Venture Capital, and Dell Technologies Capital, among many others. The company intends to use the funds to execute on its product roadmap, accelerate scaling, and expand its global presence.

Graphcore, which designs chips purpose-built for artificial intelligence, is attempting to create a new class of chips better able to handle the huge amounts of data that AI computers need. The company is ramping up production to meet customer demand for its Intelligence Processing Unit (IPU) PCIe processor cards, the first designed specifically for machine intelligence training and inference.

Nigel Toon, CEO and co-founder of Graphcore, said that Graphcore's processing units can be used for both the training and deployment of machine learning systems and are "much more efficient." Tobias Jahn, principal at BMW i Ventures, stated that Graphcore's technology "is well-suited for a wide variety of applications from intelligent voice assistants to self-driving vehicles."

Last year the company raised $50 million from investors including Demis Hassabis, co-founder of DeepMind; Zoubin Ghahramani of Cambridge University, chief scientist at Uber; Pieter Abbeel from UC Berkeley; and Greg Brockman, Scott Gray, and Ilya Sutskever from OpenAI.

Head over to Graphcore's official blog for more insights on this news.

Related reading:
Microsoft Azure reportedly chooses Xilinx chips over Intel Altera for AI co-processors, says Bloomberg report
NVIDIA makes its new "brain for autonomous AI machines", Jetson AGX Xavier Module, available for purchase
NVIDIA demos a style-based generative adversarial network that can generate extremely realistic images; has ML community enthralled

How Facebook uses HyperLogLog in Presto to speed up cardinality estimation

Bhagyashree R
18 Dec 2018
3 min read
Yesterday, Facebook shared how it uses HyperLogLog (HLL) in Presto for computationally intensive operations like estimating the number of distinct values in a huge dataset. With this implementation, Facebook achieved up to 1,000x speed improvements on count-distinct problems.

What is HyperLogLog?

HyperLogLog is an algorithm for estimating the number of unique values in a huge dataset, also known as its cardinality. It produces the estimate using an auxiliary memory of m units and a single pass over the data. The algorithm is an improved version of an earlier cardinality estimator, LogLog.

Facebook uses HyperLogLog in scenarios like determining the number of distinct people visiting Facebook in the past week using a single machine. To speed up these queries even further, Facebook implemented HLL in Presto, its open source distributed SQL query engine designed for running interactive analytic queries against data sources of all sizes. Using HLL, the same calculation can be performed in 12 hours with less than 1 MB of memory. Facebook reports significant improvements, with some queries, including those used to analyze thousands of A/B tests, running within minutes.

Presto's HLL implementation

The implementation of HLL data structures in Presto has two layout formats: sparse and dense. To save memory, storage starts off with the sparse layout, and when the input data goes over the prespecified memory limit for the sparse format, Presto switches to the dense layout automatically. The sparse layout is used to get an almost exact count on low-cardinality datasets, for instance the number of distinct countries; the dense layout is used where the cardinality is high, such as the number of distinct users.

Presto has a HYPERLOGLOG data type. For users who prefer a single format so they can process the output structure on other platforms, such as Python, there is another data type called P4HYPERLOGLOG, which starts and stays strictly as a dense HLL.

To read more in detail about how Facebook uses HLL, check out their article.

Related reading:
Facebook open-sources PyText, a PyTorch based NLP modeling framework
Facebook contributes to MLPerf and open sources Mask R-CNN2Go, its CV framework for embedded and mobile devices
Australia's ACCC publishes a preliminary report recommending Google Facebook be regulated and monitored for discriminatory and anti-competitive behavior
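The core of the algorithm fits in a few lines. Below is a minimal pure-Python sketch of a dense-layout HLL (not Presto's implementation): each value is hashed, the first p bits of the hash pick a register, and each register remembers the longest run of leading zero bits it has seen. The small-cardinality correction that motivates Presto's sparse layout is omitted for brevity.

```python
import hashlib
import math

def _hash64(value: str) -> int:
    # Stable 64-bit hash of the input value.
    return int.from_bytes(hashlib.sha1(value.encode()).digest()[:8], "big")

class HyperLogLog:
    """Minimal dense-layout HLL with m = 2**p registers."""

    def __init__(self, p: int = 12):
        self.p = p
        self.m = 1 << p                  # number of registers
        self.registers = [0] * self.m

    def add(self, value: str) -> None:
        h = _hash64(value)
        idx = h >> (64 - self.p)         # first p bits pick a register
        rest = h & ((1 << (64 - self.p)) - 1)
        # rank = position of the leftmost 1-bit in the remaining bits
        rank = (64 - self.p) - rest.bit_length() + 1
        self.registers[idx] = max(self.registers[idx], rank)

    def cardinality(self) -> float:
        alpha = 0.7213 / (1 + 1.079 / self.m)     # bias-correction constant
        harmonic = sum(2.0 ** -r for r in self.registers)
        # Raw HLL estimate; real implementations add a low-range correction.
        return alpha * self.m * self.m / harmonic

hll = HyperLogLog()
for i in range(100_000):
    hll.add(f"user-{i}")
print(round(hll.cardinality()))  # ~100,000, typically within a few percent
```

With p = 12 the sketch uses 4,096 registers, and the expected relative error is roughly 1.04 / sqrt(m), i.e. about 1.6% — which is why a kilobyte-scale structure can stand in for a multi-terabyte distinct count.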


Introducing Basilisk, an open source XUL based browser and “close twin” to pre-Servo Firefox

Bhagyashree R
18 Dec 2018
2 min read
Yesterday, the team behind Pale Moon, an open source web browser, introduced Basilisk, a "close twin to Mozilla's Firefox". Basilisk is an open source web browser based on Mozilla's XML User Interface Language (XUL). It is introduced primarily as a reference application for development of the XUL platform it builds upon, and it features Firefox-style interface and operation.

What are the features of Basilisk?

It uses Goanna as its layout and rendering engine, a fork of Firefox's engine, Gecko.
It builds on UXP, a XUL platform in development.
As it uses neither Rust nor the Photon user interface, users can expect the interface to resemble Firefox between v29 and v56.
It does not use Electrolysis (e10s), the architecture that splits Firefox into a single process for the UI and several processes for web content, media playback, plugins, and so on.
It does not require walled-garden extension signing.
To provide a modern web browsing experience, it supports the ECMAScript 6 standard of JavaScript.
It supports all NPAPI plugins, such as Unity, Silverlight, Flash, Java, and authentication plugins.
It supports XUL/Overlay Mozilla-style extensions, with experimental support for WebExtensions.
It supports ALSA on Linux, WebAssembly, advanced Graphite font shaping features, and modern web cryptography such as TLS 1.3, modern ciphers, and HSTS.

Basilisk is still in development (beta), which means it may have bugs and is provided as-is, with potential defects.

Many developers are confused about how Basilisk differs from the Pale Moon browser the team already offers, and why anyone would want to use pre-Servo Firefox. As one user put it, "My interpretation is that this is mostly a project for the die-hard users who lost support for niche extensions they really liked when XUL left mainstream Firefox... it reads mostly like they intend to maintain it as a time capsule. "No different from last time" is exactly the main selling feature."

To read more in detail, visit Basilisk's official website.

Related reading:
Anti-paywall add-on is no longer available on the Mozilla website
The State of Mozilla 2017 report focuses on internet health and user privacy
Mozilla introduces new Firefox Test Pilot experiments: Price Wise and Email tabs


Facebook open-sources PyText, a PyTorch based NLP modeling framework

Amrata Joshi
17 Dec 2018
4 min read
Last week, the team at Facebook AI Research announced that they are open sourcing the PyText NLP framework. PyText, a deep-learning-based NLP modeling framework, is built on PyTorch. Facebook is open sourcing some of the conversational AI tech that powers the Portal video chat display and M suggestions on Facebook Messenger.

https://twitter.com/fb_engineering/status/1073629026072256512

How is PyText useful for Facebook?

The PyText framework is used for tasks like document classification, semantic parsing, sequence tagging, and multitask modeling. It fits easily into research and production workflows and emphasizes robustness and low latency to meet Facebook's real-time NLP needs. PyText is also responsible for models powering more than a billion daily predictions at Facebook. The framework addresses the conflicting requirements of enabling rapid experimentation and serving models at scale by providing simple interfaces and abstractions for model components, and it uses PyTorch's ability to export models for inference through the optimized Caffe2 execution engine.

Features of PyText

Production-ready models for various NLP/NLU tasks, such as text classifiers and sequence taggers.
Distributed-training support, built on the new C10d backend in PyTorch 1.0.
Extensible components that help in creating new models and tasks.
Modularity that allows building new pipelines from scratch and modifying existing workflows.
A simplified workflow for faster experimentation.
Access to a rich set of prebuilt model architectures for text processing and vocabulary management.
An end-to-end platform for developers, whose modular structure lets engineers incorporate individual components into existing systems.
Support for string tensors, to work efficiently with text in both training and inference.

PyText for NLP development

PyText improves the workflow for NLP and supports distributed training for speeding up NLP experiments that require multiple runs.

Easily portable: PyText models can be easily shared across different organizations in the AI community.

Prebuilt models: With models focused on NLP tasks such as text classification, word tagging, semantic parsing, and language modeling, the framework makes it easy to apply prebuilt models to new data.

Contextual models: To improve conversational understanding in various NLP tasks, PyText uses contextual information, such as an earlier part of a conversation thread. There are two contextual models in PyText: a SeqNN model for intent labeling tasks and a Contextual Intent Slot model for joint training on both tasks.

PyText exports models to Caffe2

PyText uses PyTorch 1.0's capability to export models for inference through the optimized Caffe2 execution engine. Native PyTorch models require a Python runtime, which is not scalable because of the multithreading limitations of Python's Global Interpreter Lock. Exporting to Caffe2 provides an efficient multithreaded C++ backend that can serve huge volumes of traffic.

PyText's ability to test new state-of-the-art models will be improved further in the next release. Since putting sophisticated NLP models on mobile devices is a big challenge, the Facebook AI Research team will work towards building an end-to-end workflow for on-device models. The team plans to add support for multilingual modeling and other modeling capabilities, make models easier to debug, and possibly add further optimizations for distributed training. "PyText has been a collaborative effort across Facebook AI, including researchers and engineers focused on NLP and conversational AI, and we look forward to working together to enhance its capabilities," said the Facebook AI Research team.

Users are excited about this news and want to explore more.

https://twitter.com/ezylryb_/status/1073893067705409538
https://twitter.com/deliprao/status/1073671060585799680

To know about this in detail, check out the release notes on GitHub.

Related reading:
Facebook contributes to MLPerf and open sources Mask R-CNN2Go, its CV framework for embedded and mobile devices
Facebook retires its open source contribution to Nuclide, Atom IDE, and other associated repos
Australia's ACCC publishes a preliminary report recommending Google Facebook be regulated and monitored for discriminatory and anti-competitive behavior
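The Caffe2 export path described above rests on PyTorch's ONNX support. Here is a minimal sketch with a toy stand-in model; PyText's real models and export helpers differ, so treat this only as an illustration of the underlying mechanism.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for a PyText-produced model (hypothetical).
model = nn.Sequential(
    nn.Linear(128, 64),   # pretend 128-dim text features
    nn.ReLU(),
    nn.Linear(64, 4),     # four output classes
).eval()

dummy_input = torch.randn(1, 128)  # one pre-featurized example

# PyTorch traces the model and writes an ONNX graph; the Caffe2 runtime
# can then load "classifier.onnx" and serve it from multithreaded C++,
# sidestepping the Python GIL bottleneck mentioned above.
torch.onnx.export(model, dummy_input, "classifier.onnx")
```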


An SQLite “Magellan” RCE vulnerability exposes billions of apps, including all Chromium-based browsers

Natasha Mathur
17 Dec 2018
2 min read
The Tencent Blade security team has found a vulnerability in the SQLite database that exposes billions of desktop and web applications to hackers. The vulnerability, classified as remote code execution (RCE), hasn't received a CVE identification number yet and has been nicknamed "Magellan" by the Tencent Blade team.

Since SQLite is one of the most popular databases in modern operating systems and applications, the vulnerability can affect a wide variety of apps (e.g. Android/iOS), devices (e.g. IoT), and software. Magellan allows hackers to run malicious code inside affected machines, leak program memory, or cause program crashes. Moreover, it can be exploited remotely simply by visiting a web page in a browser that supports SQLite.

Beyond SQLite itself, all web browsers using the Chromium engine are also affected. Tencent Blade has already reported the vulnerability to Google's developers, who promptly fixed it on their end. Security experts at Tencent Blade also successfully exploited Google Home with this vulnerability, but haven't disclosed the exploit code yet. The team also mentions that it has yet to see Magellan abused in the wild.

Tencent Blade recommends updating to the official stable version 71.0.3578.80 of Chromium and to 3.26.0 of SQLite, as both are safe from the vulnerability. Google Chrome, Vivaldi, and Brave are all reported to be affected, as they support SQLite through the Web SQL database API. The Safari web browser isn't affected, and Firefox may be prone to this vulnerability only if a hacker gains access to its local SQLite database.

"We will not disclose any details of the vulnerability at this time, and we are pushing other vendors to fix this vulnerability as soon as possible," says the Tencent Blade team.

Related reading:
Zimperium zLabs discloses a new critical vulnerability in multiple high-privileged Android services to Google
A kernel vulnerability in Apple devices gives access to remote code execution
Microsoft announces Windows DNS Server Heap Overflow Vulnerability, users dissatisfied with patch details
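Because SQLite is usually compiled into each application, every program carries its own copy. As a quick sanity check of one common case, this sketch reads the version of the SQLite library linked into your Python build and compares it against the patched release:

```python
import sqlite3

# Version of the SQLite library compiled into this Python build.
print(sqlite3.sqlite_version)  # e.g. "3.26.0"

# Tencent Blade says SQLite 3.26.0 is safe, so flag anything older.
patched = tuple(map(int, sqlite3.sqlite_version.split("."))) >= (3, 26, 0)
print("patched" if patched else "potentially vulnerable to Magellan")
```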

NVIDIA makes its new “brain for autonomous AI machines”, Jetson AGX Xavier Module, available for purchase

Natasha Mathur
17 Dec 2018
3 min read
NVIDIA made the Jetson AGX Xavier module, its new "powerful brain" for autonomous AI machines, available for purchase worldwide last week, with volume pricing starting at $1,099 for batches of 1,000 units or more.

The Jetson AGX Xavier module is the newest addition to the family that includes the Jetson TX2 and TX1 developer kits. It is aimed at high-end performance and will let companies go into volume production with applications developed on the Jetson AGX Xavier developer kit, which was released back in September.

The module consumes as little as 10 watts of power and delivers 32 trillion operations per second (TOPS). It is powered by a 512-core Volta GPU with Tensor Cores and an 8-core ARM v8.2 64-bit CPU, and it also carries two NVDLA deep learning chips and dedicated image, video, and vision processors. On the software side, it is supported by NVIDIA's JetPack and DeepStream software development kits. JetPack is NVIDIA's SDK for autonomous machines, with support for AI, computer vision, multimedia, and more. The DeepStream SDK enables streaming analytics: developers can build multi-camera and multi-sensor applications to detect and identify objects such as vehicles, pedestrians, and cyclists.

"These SDKs save developers and companies time and money while making it easy to add new features and functionality to machines to improve performance. With this combination of new hardware and software, it's now possible to deploy AI-powered robots, drones, intelligent video analytics applications and other intelligent devices at scale," says the NVIDIA team.

The Jetson AGX Xavier module has already been put to use by Oxford Nanopore, a U.K. medical technology startup, where it handles real-time DNA sequencing with the MinION, a powerful handheld DNA sequencer. Japan's DENSO, a global auto parts maker, believes Jetson AGX Xavier will be a key platform for introducing AI to its auto parts manufacturing factories, where it will help boost productivity and efficiency.

"Developers can use Jetson AGX Xavier to build the autonomous machines that will solve some of the world's toughest problems, and help transform a broad range of industries. Millions are expected to come onto the market in the years ahead," says the NVIDIA team.

Related reading:
NVIDIA open sources its game physics simulation engine, PhysX, and unveils PhysX SDK 4.0
NVIDIA leads the AI hardware race. But which of its GPUs should you use for deep learning
NVIDIA shows off GeForce RTX, real-time raytracing GPUs, as the holy grail of computer graphics to gamers


Discord to adopt 90/10 revenue split for game developers starting from 2019

Bhagyashree R
17 Dec 2018
2 min read
Last week, Discord announced that developers will be allowed to self-publish games and keep 90% of the revenue once its game store opens to all creators in 2019.

https://twitter.com/discordapp/status/1073606080188637189

Major game distribution platforms generally take a 30 percent cut of revenue from games sold through their online stores. But with increasing competition and more developers opting to self-publish, this trend has started to change. Earlier this month, Epic launched its own store, which takes only a 12 percent share of total revenue, and Valve recently updated its Steam Distribution Agreement with new revenue-share tiers for games that hit certain revenue levels.

As per Discord's announcement, game distribution shouldn't cost 30%: "Turns out, it does not cost 30% to distribute games in 2018. After doing some research, we discovered that we can build amazing developer tools, run them, and give developers the majority of the revenue share." Discord further added that it will try to reduce even the 10% share: "...and we'll explore lowering it by optimizing our tech and making things more efficient."

The beta version of the Discord game store first launched in August and now includes up to 100 titles. The new self-serve publishing platform will give developers, regardless of how big the game or team is, access to the Discord game store and the new 90 percent revenue share. In addition to the 90/10 revenue share, Discord will also focus on empowering developers to communicate with their players by improving Verified Servers.

Read Discord's official announcement on Medium.

Related reading:
Unity 2018.3 is here with improved Prefab workflows, Visual Effect graph and more
Uses of Machine Learning in Gaming
Key Takeaways from the Unity Game Studio Report 2018
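The difference these splits make is easy to quantify. A quick back-of-the-envelope comparison in Python, using an illustrative $100,000 of gross sales:

```python
gross = 100_000  # illustrative gross revenue in USD

# Store cut -> developer take-home under each revenue split.
splits = {"traditional 30%": 0.30, "Epic 12%": 0.12, "Discord 10%": 0.10}
for name, cut in splits.items():
    print(f"{name}: developer keeps ${gross * (1 - cut):,.0f}")
# traditional 30%: developer keeps $70,000
# Epic 12%: developer keeps $88,000
# Discord 10%: developer keeps $90,000
```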


NumPy drops Python 2 support. Now you need Python 3.5 or later.

Prasad Ramesh
17 Dec 2018
2 min read
In a GitHub pull request last week, the NumPy community decided to remove support for Python 2.7; Python 3.4 support will also be dropped with the same change. So, to use NumPy 1.17 and newer versions, you will need Python 3.5 or later. NumPy had supported both Python versions since 2010.

The move doesn't come as a surprise, with the Python core team itself dropping support for Python 2 in 2020. The NumPy team noted that the move comes because "Python 2 is an increasing burden on our limited resources". The discussion about dropping Python 2 support in NumPy started almost a year ago.

Running pip install numpy on Python 2 will still install the last working version, but from here on it will not contain the latest features released for Python 3.5 or higher. NumPy on Python 2 will still be supported until December 31, 2019; after January 1, 2020, it may not receive even the newest bug fixes.

The Twitter audience sees this as a welcome move:

https://twitter.com/TarasNovak/status/1073262599750459392
https://twitter.com/esc___/status/1073193736178462720

A comment on Hacker News reads: "Let's hope this move helps with the transitioning to Python 3. I'm not a Python programmer myself, but I'm tired of things getting hairy on Linux dependencies written in Python. It almost seems like I always got to have a Python 2 and a Python 3 version of some packages so my system doesn't break."

Another one reads: "I've said it before, I'll say it again. I don't care for everything-is-unicode-by-default. You can take my Python 2 when you pry it from my cold dead hands."

Some researchers who use NumPy and SciPy still stick to Python 2; this move from the NumPy team will help get everyone working on a single version, and a single supported version will certainly help with fragmentation. Python developers often find themselves with one version installed while a specific module is available or works properly only on another. Some also argue for Python 2 on grounds of stability or particular features, but the general sentiment supports adopting Python 3.

Related reading:
Introducing numpywren, a system for linear algebra built on a serverless architecture
NumPy 1.15.0 release is out!
Implementing matrix operations using SciPy and NumPy
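For teams still maintaining Python 2 environments, the practical consequence is a pin. A minimal sketch of guarding against a mismatched interpreter:

```python
import sys

# NumPy 1.17+ requires Python 3.5+. On Python 2.7 or 3.4, pip will
# resolve to the last compatible 1.16.x release; pin it explicitly:
#     pip install "numpy<1.17"
if sys.version_info < (3, 5):
    raise RuntimeError("NumPy >= 1.17 needs Python 3.5+; pin numpy<1.17")

import numpy as np
print(np.__version__)
```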

Drupal 9 will be released in 2020, shares Dries Buytaert, Drupal’s founder

Bhagyashree R
14 Dec 2018
2 min read
At Drupal Europe 2018, Dries Buytaert, the founder and lead developer of the Drupal content management system, announced that Drupal 9 will be released in 2020. Yesterday, he shared a more detailed timeline, according to which Drupal 9 is planned for release on June 3, 2020.

One of the biggest dependencies of Drupal 8 is Symfony 3, which is scheduled to reach end-of-life in November 2021. After that, no security bugs in Symfony 3 will be fixed, so sites will have to move to Drupal 9 for continued support and security. Going by this plan, site owners will have at least one year to upgrade from Drupal 8 to Drupal 9.

Drupal 9 will not have a separate code base; rather, the team is adding new functionality to Drupal 8 as backward-compatible code and experimental features. Once they are sure these features are stable, the old functionality will be deprecated.

One of the most notable updates will be support for Symfony 4 or 5 in Drupal 9. Since Symfony 5 is not yet released, the scope of its changes is not yet clear to the Drupal team, so they are focusing on running Drupal 8 with Symfony 4. The end goal is to make Drupal 8 work with Symfony 3, 4, or 5, so that any issues can be fixed before Drupal 9 starts requiring Symfony 4 or 5.

Because Drupal 9 is being built in Drupal 8, things become much easier for every stakeholder. Drupal core contributors will just have to remove the deprecated functionality and upgrade the dependencies, and for site owners the upgrade to Drupal 9 will be much easier than the upgrade to Drupal 8 was.

Dries Buytaert said in his post, "Drupal 9 will simply be the last version of Drupal 8, with its deprecations removed. This means we will not introduce new, backwards-compatibility breaking APIs or features in Drupal 9 except for our dependency updates. As long as modules and themes stay up-to-date with the latest Drupal 8 APIs, the upgrade to Drupal 9 should be easy. Therefore, we believe that a 12- to 18-month upgrade period should suffice."

You can read the full announcement on Drupal's website.

Related reading:
WordPress 5.0 (Bebo) released with improvements in design, theme and more
5 things to consider when developing an eCommerce website
Introduction to WordPress Plugin


Google won’t sell its facial recognition technology until questions around tech and policy are sorted

Savia Lobo
14 Dec 2018
4 min read
Yesterday, Google released a blog post titled "AI for Social Good in Asia Pacific," in which it said it has "chosen not to offer general-purpose facial recognition APIs before working through important technology and policy questions."

In the words of Kent Walker, Google's senior vice president of Global Affairs, "Like many technologies with multiple uses, facial recognition merits careful consideration to ensure its use is aligned with our principles and values, and avoids abuse and harmful outcomes. We continue to work with many organizations to identify and address these challenges, and unlike some other companies, Google Cloud has chosen not to offer general-purpose facial recognition APIs before working through important technology and policy questions."

Google previously backed away from its military drone work, Project Maven with the U.S. Department of Defense, and published ethical AI principles that prohibit weapons and surveillance uses, a category facial recognition can fall under.

Facial recognition technology has risen in popularity on the back of use cases ranging from the entertainment industry to law enforcement agencies. Many companies have also faced pushback over how they have handled their own technologies and to whom they have sold them. According to Engadget, "Amazon, for instance, has come under fire for selling its Rekognition software to law enforcement groups, and civil rights groups, as well as its own investors and employees, have urged the company to stop providing its facial recognition technology to police. In a letter to CEO Jeff Bezos, employees warned about Rekognition's potential to become a surveillance tool for the government, one that would 'ultimately serve to harm the most marginalized.'"

Amazon had also pitched Rekognition to ICE in October. During a hearing with the New York City Council yesterday, an Amazon executive didn't deny having a contract with the agency, saying in response to a question about its involvement with ICE that the company provides Rekognition "to a variety of government agencies." US lawmakers have asked Amazon for more information about Rekognition multiple times. Microsoft, for its part, has shared six principles it has committed to regarding its own facial recognition technology, among them a pledge to treat people fairly and to communicate clearly about the technology's capabilities and limitations.

The American Civil Liberties Union issued a statement in support of Google's move. The ACLU's Nicole Ozer said, "This is a strong first step. Google today demonstrated that, unlike other companies doubling down on efforts to put dangerous face surveillance technology into the hands of law enforcement and ICE, it has a moral compass and is willing to take action to protect its customers and communities. Google also made clear that all companies must stop ignoring the grave harms these surveillance technologies pose to immigrants and people of color, and to our freedom to live our lives, visit a church, or participate in a protest without being tracked by the government."

To know more about this in detail, visit Google's official blog post.

Related reading:
Google AI releases Cirq and Open Fermion-Cirq to boost Quantum computation
Google expands its machine learning hardware portfolio with Cloud TPU Pods (alpha) to effectively train and deploy TensorFlow machine learning models on GCP
'Istio' available in beta for Google Kubernetes Engine, will accelerate app delivery and improve microservice management