
Tech News - Data


Google won’t sell its facial recognition technology until questions around tech and policy are sorted

Savia Lobo
14 Dec 2018
4 min read
Google released a blog post yesterday titled 'AI for Social Good in Asia Pacific', in which the company said it has "chosen not to offer general-purpose facial recognition APIs before working through important technology and policy questions."

"Like many technologies with multiple uses, facial recognition merits careful consideration to ensure its use is aligned with our principles and values, and avoids abuse and harmful outcomes," wrote Kent Walker, Google's senior vice president of Global Affairs. "We continue to work with many organizations to identify and address these challenges, and unlike some other companies, Google Cloud has chosen not to offer general-purpose facial recognition APIs before working through important technology and policy questions."

In light of Project Maven with the U.S. Department of Defense, Google backed away from the military drone project and published ethical AI principles that prohibit weapons and surveillance usage, categories that facial recognition falls under.

Facial recognition technology has risen in popularity after finding use cases ranging from the entertainment industry to law enforcement agencies. Many companies have also faced pushback over how they have handled their own technologies and to whom they have sold them. According to Engadget, "Amazon, for instance, has come under fire for selling its Rekognition software to law enforcement groups, and civil rights groups, as well as its own investors and employees, have urged the company to stop providing its facial recognition technology to police. In a letter to CEO Jeff Bezos, employees warned about Rekognition's potential to become a surveillance tool for the government, one that would 'ultimately serve to harm the most marginalized.'"

The American Civil Liberties Union issued a statement in support of Google's move. ACLU's Nicole Ozer said, "This is a strong first step. Google today demonstrated that, unlike other companies doubling down on efforts to put dangerous face surveillance technology into the hands of law enforcement and ICE, it has a moral compass and is willing to take action to protect its customers and communities. Google also made clear that all companies must stop ignoring the grave harms these surveillance technologies pose to immigrants and people of color, and to our freedom to live our lives, visit a church, or participate in a protest without being tracked by the government."

Amazon had also pitched its Rekognition software to ICE in October. As Engadget reports, "Yesterday during a hearing with the New York City Council, an Amazon executive didn't deny having a contract with the agency, saying in response to a question about its involvement with ICE that the company provides Rekognition 'to a variety of government agencies.' Lawmakers in the US have now asked Amazon for more information about Rekognition multiple times."

"Microsoft also shared six principles it has committed to regarding its own facial recognition technology. Among those guidelines is a pledge to treat people fairly and to provide clear communication about the technology's capabilities and limitations," says Engadget.

To know more about this in detail, visit Google's official blog post.

Read next:
- Google AI releases Cirq and OpenFermion-Cirq to boost Quantum computation
- Google expands its machine learning hardware portfolio with Cloud TPU Pods (alpha) to effectively train and deploy TensorFlow machine learning models on GCP
- 'Istio' available in beta for Google Kubernetes Engine, will accelerate app delivery and improve microservice management


DeepMind open sources TRFL, a new library of reinforcement learning building blocks

Natasha Mathur
18 Oct 2018
3 min read
The DeepMind team announced yesterday that it is open sourcing a new library, named TRFL, that comprises useful building blocks for writing reinforcement learning (RL) agents in TensorFlow. The TRFL library was created by the research engineering team at DeepMind and collects key algorithmic components used in a large number of DeepMind's agents, such as DQN, DDPG, and the Importance Weighted Actor Learner Architecture.

A typical deep reinforcement learning agent comprises a large number of interacting components, including the environment and some deep network representing values or policies. RL agents often also include components such as a learned model of the environment, pseudo-reward functions, or a replay system. These components interact in subtle ways, which makes it difficult to identify bugs in large computational graphs. One response is to open-source complete agent implementations; however, while large agent codebases are useful for reproducing research, they are hard to modify and extend. A different, complementary approach is to provide reliable, well-tested implementations of common building blocks that can be reused across many different RL agents.

TRFL takes the latter approach: it includes functions that help implement both classical RL algorithms and more cutting-edge techniques. The loss functions and other operations that come with TRFL are implemented in pure TensorFlow. They are not complete algorithms, but implementations of RL-specific mathematical operations required when building fully functional RL agents. The library provides TensorFlow ops for value-based reinforcement learning in discrete action spaces, such as TD-learning, Sarsa, Q-learning, and their variants, as well as ops for implementing continuous control algorithms such as DPG and ops for learning distributional value functions. Finally, TRFL also comes with an implementation of the auxiliary pseudo-reward functions used by UNREAL, which improve data efficiency in a wide range of domains. A short usage sketch follows at the end of this article.

"This is not a one-time release. Since this library is used extensively within DeepMind, we will continue to maintain it as well as add new functionalities over time. We are also eager to receive contributions to the library by the wider RL community," mentioned the DeepMind team. For more information, check out the official DeepMind blog.

Read next:
- Google open sources Active Question Answering (ActiveQA), a Reinforcement Learning based Q&A system
- Microsoft open sources Infer.NET, its popular model-based machine learning framework
- Salesforce Einstein team open sources TransmogrifAI, their automated machine learning library
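To make the building-block idea concrete, here is a minimal, hedged sketch of wiring TRFL's Q-learning loss into a training op, written in the TensorFlow 1.x graph style the library targets. The placeholder shapes, batch size, and optimizer choice are illustrative assumptions, not prescribed by the library:

```python
import tensorflow as tf
import trfl

batch_size, num_actions = 32, 4  # illustrative sizes, not required by TRFL

# Q-values for the preceding and current time steps, e.g. network outputs.
q_tm1 = tf.placeholder(tf.float32, [batch_size, num_actions])
q_t = tf.placeholder(tf.float32, [batch_size, num_actions])
a_tm1 = tf.placeholder(tf.int32, [batch_size])      # actions taken
r_t = tf.placeholder(tf.float32, [batch_size])      # rewards received
pcont_t = tf.placeholder(tf.float32, [batch_size])  # discount; 0 marks episode end

# trfl.qlearning returns a namedtuple with a per-element loss and extras
# (the TD errors and targets), which can be logged or inspected.
loss, extra = trfl.qlearning(q_tm1, a_tm1, r_t, pcont_t, q_t)
train_op = tf.train.AdamOptimizer(1e-3).minimize(tf.reduce_mean(loss))
```

Because the loss op is pure TensorFlow, it composes with any network and optimizer, which is exactly the "building block rather than full agent" design the announcement describes.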


DeepMind researchers provide theoretical analysis on recommender system, 'echo chamber' and 'filter bubble effect'

Natasha Mathur
07 Mar 2019
3 min read
DeepMind researchers published a paper last week, titled 'Degenerate Feedback Loops in Recommender Systems'. In the paper, the researchers provide a new theoretical analysis examining the role of user dynamics and the behavior of recommender systems, which can help disentangle the echo chamber from the filter bubble effect.

https://twitter.com/DeepMindAI/status/1101514121563041792

Recommender systems aim to provide users with personalized product and information offerings. These systems take into consideration the user's personal characteristics and past behaviors to generate a list of items personalized to the user's tastes. Although very successful, such systems raise the concern that they might lead to a self-reinforcing pattern of narrowing exposure and a shift in user interest; these problems are often called the "echo chamber" and the "filter bubble". In the paper, the researchers define an echo chamber as a user's interest being positively or negatively reinforced by repeated exposure to a certain category of items. For "filter bubble", they use the definition introduced by Pariser (2011), which states that recommender systems select limited content to serve users online.

The researchers consider a recommender system that interacts with a user over time. At every time step, the recommender system serves a number of items (or categories of items, such as news articles, videos, or consumer products) to a user from a finite or countably infinite set of items. The goal of the recommender system is to serve items the user might be interested in.

[Figure: the interaction between the recommender system and the user]

The paper also takes into account that the user's interaction with the recommender system can change her interest in different items for the next interaction. Additionally, to analyze the echo chamber or filter bubble effect in recommender systems, the researchers track when the user's interest changes extremely. Furthermore, they use a dynamical-system framework to model the user's interest, treating the interest extremes of the user as the degeneracy points of the system. On the recommender system side, they discuss the influence on the degeneracy speed of three independent factors in system design: model accuracy, amount of exploration, and the growth rate of the candidate pool. According to the researchers, continuous random exploration, combined with linearly growing the candidate pool, is the best method against system degeneracy. A toy sketch of this feedback loop appears after this article's links.

Although the analysis is quite effective, it has two main limitations. The first is that user interests are hidden variables that are not observed directly, so a good measure of user interest is needed in practice to reliably study the degeneration process. The second is that the researchers assumed items and users to be independent of each other; extending the theoretical analysis to possibly mutually dependent items and users is left to future work.

For more information, check out the official research paper.

Read next:
- Google DeepMind's AI AlphaStar beats StarCraft II pros TLO and MaNa; wins 10-1 against the gamers
- Blizzard set to demo Google's DeepMind AI in StarCraft 2
- Deepmind's AlphaZero shows unprecedented growth in AI, masters 3 different games
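The paper's degeneracy argument is easiest to see in toy form. The following sketch is purely illustrative and is not the paper's model: a two-category user "interest" vector is reinforced by whatever the system serves, and a greedy recommender drives the interest gap to an extreme much faster than one that explores at random:

```python
import numpy as np

rng = np.random.default_rng(0)

def interest_gap(explore_prob, steps=500):
    """Toy feedback loop: serving category i reinforces interest in i."""
    interest = np.zeros(2)  # latent interest in two item categories
    for _ in range(steps):
        if rng.random() < explore_prob:
            served = rng.integers(2)            # random exploration
        else:
            served = int(np.argmax(interest))   # greedy: serve preferred category
        interest[served] += 0.1                 # exposure reinforces interest
    return abs(interest[0] - interest[1])       # large gap ~ degenerate interest

print("greedy gap:   ", interest_gap(explore_prob=0.0))
print("exploring gap:", interest_gap(explore_prob=0.5))
```

In this caricature the greedy system degenerates completely, while random exploration keeps the two interests closer together, mirroring the paper's conclusion that exploration slows degeneracy.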


Facebook launches a 6-part Machine Learning video series

Sugandha Lahoti
08 Aug 2018
2 min read
Facebook has launched a six-part video series dedicated to providing practical tips on applying machine learning capabilities to real-world problems. The Facebook Field Guide to Machine Learning was developed by the Facebook ads machine learning team.

[Figure: the development process of an ML model. Source: Facebook Research]

The video series covers how the entire development process works, including what happens during the training of machine learning models and what happens before and after the training process at each step. Each video also includes examples and stories of non-obvious things that can be important in an applied setting. The series breaks the machine learning process down into six steps:

Problem definition: It is necessary to have the right setup before you go about choosing an algorithm. The first video talks about how to best define your machine learning problem before going into the actual process; you can save almost a week's worth of time by spending just a few hours at the definition stage.

Data: This tutorial teaches developers how to prepare the training data, a powerful variable in creating high-quality machine learning systems.

Evaluation: The third lesson walks through the steps to evaluate the performance of your machine learning model.

Features: The fourth tutorial explains various kinds of features, such as categorical, continuous, and derived features, and describes how to choose the right features for the right model. The video also covers changing features, feature breakage, leakage, and coverage.

Model: The next lesson describes how to choose the right machine learning model for your data and find the algorithm to implement and train that model. It also offers tips and tricks for picking, tuning, and comparing models.

Experimentation: The final tutorial covers experimentation and how to make your experiments actionable. A large part of the tutorial is dedicated to the difference between offline and online experimentation.

The entire video series is available on the Facebook blog for you to watch.

Read next:
- Microsoft starts AI School to teach Machine Learning and Artificial Intelligence
- Soft skills every data scientist should teach their child
- Google introduces Machine Learning courses for AI beginners


OpenAI Five loses against humans in Dota 2 at The International 2018

Amey Varangaonkar
27 Aug 2018
3 min read
Looks like OpenAI's intelligent game-playing bots need to get a little more street smart before they can beat the world's best. Played as a promotional side event at The International, the annual Dota 2 tournament, OpenAI Five was beaten by teams of top human professional players in the first two games of the best-of-three contest. Both games were intense and lasted approximately an hour, but the human teams emerged victorious quite comfortably.

OpenAI Five, as we know, is a team of five artificially intelligent bots developed by OpenAI, a research institute co-founded by Tesla CEO Elon Musk to develop and research human-level artificial intelligence. These bots are trained specifically to play Dota 2 against top human professionals. While OpenAI Five racked up more kills in the games than the human teams paiN Gaming and Big God, it lacked a cohesive strategy and wasted many opportunities to gather and use in-game resources efficiently, which is often the difference between a win and a loss.

This loss highlights the fact that while the bots are on the right track, more improvement is needed in how they adjust to their surroundings and make tactical decisions on the go. Mike Cook, a researcher at the University of Falmouth, UK, agrees; his criticism is that the bots lacked decision-making at the macro level while still having their own moments of magic in the game.

https://twitter.com/mtrc/status/1032430538039148544

Greg Brockman, CTO and co-founder of OpenAI, meanwhile, was not worried about this loss, citing that it is defeats that will make OpenAI Five better and more efficient. He was of the opinion that the AI was designed to learn and adapt from experience first, before being able to beat the human players. According to Greg, OpenAI Five is very much still a work in progress.

https://twitter.com/gdb/status/1032830230103244800

The researchers at OpenAI are hopeful that OpenAI Five will improve from this valuable learning experience and put up a much tougher fight in the next edition of the tournament, since there won't be a third game this year. As things stand, though, it's pretty clear that the human players aren't going to be replaced by AI bots anytime soon.

See also:
- AI beats human again – this time in a team-based strategy game
- Build your first Reinforcement learning agent in Keras
- A new Stanford artificial intelligence camera uses a hybrid optical-electronic CNN for rapid decision making


Matplotlib 3.0 is here with new cyclic colormaps and convenience methods

Natasha Mathur
20 Sep 2018
3 min read
The Matplotlib team announced Matplotlib 3.0 on Tuesday. Matplotlib 3.0 comes with new features such as two new cyclic colormaps, an AnchoredDirectionArrows class, and other updates and improvements. Matplotlib is a plotting library for the Python programming language and its numerical mathematics extension, NumPy. It offers an object-oriented API for embedding plots into applications using general-purpose GUI toolkits such as Tkinter, wxPython, Qt, or GTK+. Let's have a look at what's new in this latest release; a short usage sketch follows at the end of this article.

Cyclic colormaps

Two new colormaps, 'twilight' and 'twilight_shifted', have been added in this release. These colormaps start and end on the same color and have two symmetric halves with equal lightness but diverging color.

AnchoredDirectionArrows added to mpl_toolkits

A new mpl_toolkits class, AnchoredDirectionArrows, has been added in this release. AnchoredDirectionArrows draws a pair of orthogonal arrows to indicate directions on a 2D plot. Several optional parameters can alter the layout of the arrows: for instance, the arrow pairs can be rotated and their color changed. The labels and the arrows have the same color by default, but the class can also accept arguments for customizing arrow and text layout. The location, length, and width of both arrows can also be adjusted.

Improved default backend selection

The default backend no longer needs to be set as part of the build process. Instead, built-in backends are tried in sequence at run time until one of them imports successfully. Headless Linux servers will not select a GUI backend.

Scale axis by a fixed order of magnitude

With Matplotlib 3.0, you can scale an axis by a fixed order of magnitude by setting the scilimits argument of Axes.ticklabel_format to the same (non-zero) lower and upper limits. With this setting, the order of magnitude stays fixed rather than being adjusted depending on the axis values.

minorticks_on()/off() methods added for colorbar

A new method, colorbar.Colorbar.minorticks_on(), has been added in this release to correctly display minor ticks on a colorbar. This method doesn't allow the minor ticks to extend into the regions beyond vmin and vmax. A complementary method, colorbar.Colorbar.minorticks_off(), has also been added for removing minor ticks from the colorbar.

New convenience methods for GridSpec

New convenience methods for creating gridspec.GridSpec and gridspec.GridSpecFromSubplotSpec instances have been added in Matplotlib 3.0.

Other changes

- Colorbar ticks are now automatic.
- Legend now has a title_fontsize keyword argument (and rcParam).
- Multipage PDF support has been added for the pgf backend.
- Pie charts are now circular by default.
- The :math: directive has been renamed to :mathmpl:.

For more information, be sure to check out the official Matplotlib release notes.

Read next:
- Creating 2D and 3D plots using Matplotlib
- How to Customize lines and markers in Matplotlib 2.0
- Tinkering with ticks in Matplotlib 2.0
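Here is a brief, hedged sketch showing two of these additions in use; the phase field plotted here is arbitrary demo data, chosen because cyclic quantities are where the 'twilight' colormap shines:

```python
import matplotlib.pyplot as plt
import numpy as np

# A phase field wraps around at +/- pi, so a cyclic colormap avoids a false seam.
x, y = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
phase = np.arctan2(y, x)

fig, ax = plt.subplots()
image = ax.imshow(phase, cmap='twilight')   # new cyclic colormap in 3.0
cbar = fig.colorbar(image, ax=ax)
cbar.minorticks_on()                        # new Colorbar method in 3.0
plt.show()
```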

Microsoft is seeking membership to Linux-distros mailing list for early access to security vulnerabilities

Vincy Davis
01 Jul 2019
4 min read
Microsoft is now aiming to add its own contributions and strengthen Linux by getting early access to its security vulnerabilities. Last week, Microsoft applied for membership in the official closed group of Linux distributors, called the Linux-distros mailing list. The Linux-distros mailing list is used by Linux distributors to privately report, coordinate, and discuss security issues; the issues revealed in this group are not made public for 14 days. Members of the group include Amazon Linux AMI, Openwall, Oracle, Red Hat, SUSE, and Ubuntu.

Sasha Levin, a Microsoft Linux kernel developer, submitted the membership application on behalf of Microsoft. If approved, it would allow Microsoft to be part of the private behind-the-scenes chatter about vulnerabilities, patches, and ongoing security issues with the open-source kernel and related code. These discussions are crucial for getting early information and coordinating the deployment of fixes before they are made public.

One of the main criteria for membership in the Linux-distros mailing list is having a Unix-like distro that makes use of open source components. To show that Microsoft qualifies, Levin cited Microsoft's Azure Sphere and the Windows Subsystem for Linux (WSL) 2 as examples of distro-like builds.

Last month, Microsoft announced that Windows Subsystem for Linux 2 (WSL 2) is available to Windows Insiders. With availability in build 18917, Windows will now ship with a full Linux kernel. This allows WSL 2 to run inside a VM and provide full access to Linux system calls. The kernel will be specifically tuned for WSL 2 and fully open sourced, with the full configuration available on GitHub, enabling a faster turnaround on kernel updates when new versions become available. The new architecture aims to increase file system performance and provide full system call compatibility in a Linux environment.

Levin also highlighted that Microsoft's Linux builds are open sourced and that the company contributes to the community, and he revealed that Linux is used more on Azure than Windows Server. This does not come as a surprise, as it is not the first time Microsoft has aligned itself with Linux: there are at least eight Linux distros available on Azure, and Microsoft's former CEO Steve Ballmer, who once called Linux a "cancer", now says that he loves Linux.

This move to embrace Linux is being seen as Microsoft's way of staying relevant in the industry. In a statement to The Register, the open-source pioneer Bruce Perens said, "What we are seeing here is that Microsoft wants access to early security alerts on Linux. They're joining it as a Linux distributor because that's how it's structured. Microsoft obviously has a lot of Linux plays, and it's their responsibility to fix known security bugs as quickly as other Linux distributors."

Most users are of the opinion that Microsoft embracing Linux was bound to happen; with its immense advantages, Linux is the default option for many. A user on Hacker News says, "The biggest practical advantage I have found is that Linux has dramatically better file system I/O performance. Like, a C++ project that builds in 20 seconds on Linux, takes several minutes to build on the same hardware in Windows." Another user comments, "I'm surprised it took this long. With Linux support for .NET and SQL Server, there is zero reason to host anything new on Windows now (of course legacy enterprise software is another story). I wouldn't be surprised if Windows Server is fully EOL'd in a few years." Another user wrote, "On Azure, a Windows VM instance tends to cost about 50% more than the equivalent instance running Linux, so it is a no brainer to use Linux if your application is operating system independent." Another comment reads, "Linux is the default choice when you set up a VM."

Read next:
- Debian GNU/Linux port for RISC-V 64-bits: Why it matters and roadmap
- Netflix security engineers report several TCP networking vulnerabilities in FreeBSD and Linux kernels
- Unity Editor will now officially support Linux


California replaces cash bail with algorithms

Richard Gall
05 Sep 2018
2 min read
Last week (August 28), California Governor Jerry Brown signed a bill that will see cash bail replaced by an algorithm. Set to take effect in October 2019, it means that if you're accused of a crime you won't simply be able to pay money as a form of collateral before your trial. Instead, you'll be 'graded' by an algorithm according to how likely you are to abscond or commit another crime. However, the algorithm won't make the final decision; the grade is rather a guide for a county official, who will then decide whether to grant bail. In a statement, Brown said, "Today, California reforms its bail system so that rich and poor alike are treated fairly." However, there are plenty who disagree that this will be the case.

Criticism of the legislation

The move has been met with criticism from civil liberties groups and AI watchdogs. Although cash bail has long drawn criticism for making wealth the arbiter of someone's freedom, placing judicial decision making in the hands of algorithms could, these groups argue, similarly discriminate and entrench established injustice and social divisions. Rashida Richardson, policy director at AI think tank AI Now, speaking to Quartz, said that "a lot of these criminal justice algorithmic-based systems are relying on data collected through the criminal justice system." This means, Richardson explains, "you have data collection that's flawed with a lot of the same biases as the criminal justice system." Raj Jayadev, from Silicon Valley Debug, also speaking to Quartz, said that the legislation will "lead to an increase in pretrial detention."

The details have yet to be finalized, but it's believed that the bill's impact will be reviewed in 2023. The most crucial element for this project to work is transparency; whether lawmakers and law enforcement provide transparency on the algorithm and how it's used remains to be seen.

Read next:
- Amazon is selling facial recognition technology to police
- Alarming ways governments are using surveillance tech to watch you
- Lerna development team quickly reverses decision to block ICE Contractors from using its software


CMU and Google researchers present XLNet: a new pre-training method for language modeling that outperforms BERT on 20 tasks

Amrata Joshi
25 Jun 2019
7 min read
Last week, Carnegie Mellon University (CMU) and Google researchers presented a paper, XLNet: Generalized Autoregressive Pretraining for Language Understanding, which focuses on the XLNet model.

https://twitter.com/quocleix/status/1141511813709717504

In the paper, the researchers explain XLNet and how it uses a permutation language modeling objective to combine the advantages of AR and AE methods. The researchers compared XLNet with BERT and showed with examples that XLNet was able to surpass BERT on 20 tasks using the RACE, SQuAD, and GLUE datasets.

What is the need for XLNet?

Among the different unsupervised pre-training objectives, autoregressive (AR) language modeling and autoencoding (AE) have been the two most successful. AR language modeling estimates the probability distribution of a text corpus with an autoregressive model. Such a language model is only trained to encode a unidirectional context and is not effective at modeling deep bidirectional contexts. But downstream language understanding tasks usually need bidirectional context information, which results in a gap between AR language modeling and effective pretraining.

In contrast, AE-based pretraining does not perform density estimation but instead works towards reconstructing the original data from corrupted input. As density estimation is not part of the objective, BERT can utilize bidirectional contexts for reconstruction, which closes the bidirectional information gap in AR language modeling and improves performance. BERT (Bidirectional Encoder Representations from Transformers) achieves better performance than pretraining approaches based on autoregressive language modeling. But it relies on corrupting the input with masks, neglects dependency between the masked positions, and suffers from a pretrain-finetune discrepancy.

Considering these pros and cons, the researchers from CMU and Google proposed XLNet, a generalized autoregressive pretraining method that (1) enables learning bidirectional contexts by maximizing the expected likelihood over all permutations of the factorization order and (2) overcomes the limitations of BERT thanks to its autoregressive formulation. XLNet also integrates ideas from Transformer-XL, the state-of-the-art autoregressive model, into pretraining. It outperforms BERT on 20 tasks, usually by a large margin, and achieves state-of-the-art results on 18 tasks, including question answering, sentiment analysis, natural language inference, and document ranking.

The researchers observed that naively applying a Transformer(-XL) architecture to permutation-based language modeling does not work, because the factorization order is random and the target is unclear. To solve this, they proposed to reparameterize the Transformer(-XL) network to remove the ambiguity.

https://twitter.com/rsalakhu/status/1141539269565132800?s=19

XLNet comparison with BERT

Comparing XLNet with BERT, the researchers observed that both models perform partial prediction, that is, predicting only a subset of tokens in the sequence. This is necessary for BERT because, if all tokens were masked, it would be impossible to make any meaningful predictions. Partial prediction also reduces optimization difficulty for both BERT and XLNet by only predicting tokens with sufficient context. XLNet improves architectural designs for pretraining and improves performance on tasks involving longer text sequences.

XLNet does not rely on data corruption, so it does not suffer from the pretrain-finetune discrepancy that arises with BERT. The autoregressive objective also provides a natural way to use the product rule for factorizing the joint probability of the predicted tokens, which eliminates the independence assumption made in BERT. XLNet maximizes the expected log likelihood of a sequence with respect to all possible permutations of the factorization order, instead of using a fixed forward or backward factorization order.

According to the researchers, given a text sequence x = [x1, ..., xT], BERT factorizes the joint conditional probability p(x̄ | x̂) based on an independence assumption that all masked tokens x̄ are separately reconstructed. The researchers call this the independence assumption, and according to them it prevents BERT from modeling dependencies between targets.

The researchers explain the difference between XLNet and BERT with an example: "Let's consider a concrete example [New, York, is, a, city]. Suppose both BERT and XLNet select the two tokens [New, York] as the prediction targets and maximize log p(New, York | is, a, city). Also suppose that XLNet samples the factorization order [is, a, city, New, York]. In this case, BERT and XLNet respectively reduce to the following objectives: J_BERT = log p(New | is a city) + log p(York | is a city), J_XLNet = log p(New | is a city) + log p(York | New, is a city). Notice that XLNet is able to capture the dependency between the pair (New, York), which is omitted by BERT." In this example, BERT only learns dependency pairs such as (New, city) and (York, city), so the researchers conclude that XLNet always learns more dependency pairs given the same targets and contains "denser" effective training signals, which offer better performance. A toy sketch of this permutation idea follows at the end of this article.

XLNet comparison with language models

According to the researchers, a standard AR language model like GPT is only able to cover the dependency (x = York, U = {New}) but not (x = New, U = {York}). XLNet, on the other hand, is able to cover both in expectation over all factorization orders. This limitation of AR language modeling can be a critical issue in real-world applications. The researchers concluded that AR language modeling cannot cover some dependencies that XLNet covers in expectation. There has always been a gap between language modeling and pretraining because of the lack of bidirectional context modeling capability; XLNet generalizes language modeling and bridges this gap.

Implementation and conclusion

The researchers used the BooksCorpus and English Wikipedia as part of their pre-training data, which contain 13 GB of plain text combined. They experimented on four datasets: the RACE, SQuAD, ClueWeb09-B, and GLUE datasets. They further studied three major aspects:

- The effectiveness of the permutation language modeling objective, especially compared to the denoising auto-encoding objective used by BERT.
- The importance of using Transformer-XL as the backbone neural architecture and employing segment-level recurrence (i.e. using memory).
- The necessity of some implementation details, including span-based prediction, the bidirectional input pipeline, and next-sentence prediction.

The researchers concluded that XLNet is a generalized AR pre-training method that uses a permutation language modeling objective to combine the advantages of AR and AE methods. According to them, the neural architecture of XLNet is developed to work seamlessly with the AR objective and integrates Transformer-XL. It achieves state-of-the-art results on various tasks with considerable improvements. The paper reads, "In the future, we envision applications of XLNet to a wider set of tasks such as vision and reinforcement learning."

A lot of users seem to be excited about this news and think it can get even better. One user commented on Reddit, "The authors are currently trying to see the text generation capability of XLNet. If they confirm that it's on par with left-to-right model (hence better than BERT), then their work would be even more impressive." A few others think it would be better if the researchers used more diverse datasets for experimentation. Another user commented, "The result seems to me as if the substantial improvement in this setting is coming mostly from the use of Transformer-XL (i.e. larger context size). Probably using more data and greater context size (and more diverse dataset) is far more important than doing anything else proposed in the paper." Many others are excited about this research and think that XLNet is better than BERT.

https://twitter.com/eturner303/status/1143174828804857856
https://twitter.com/ST4Good/status/1143182779460608001
https://twitter.com/alex_conneau/status/1141489936022953984

To know more about this, check out the paper XLNet: Generalized Autoregressive Pretraining for Language Understanding.

Read next:
- Curl's lead developer announces Google's "plan to reimplement curl in Libcrurl"
- Google rejects all 13 shareholder proposals at its annual meeting, despite protesting workers
- Google Calendar was down for nearly three hours after a major outage
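The permutation idea from the New York example can be sketched in a few lines of plain Python. This is a toy illustration of which context each target sees under a sampled factorization order, not the paper's implementation (the real model reparameterizes a Transformer-XL and samples orders during training):

```python
tokens = ["New", "York", "is", "a", "city"]

def contexts_for_order(order):
    """For a factorization order (a permutation of token indices), return the
    context each token is predicted from: everything earlier in the order."""
    seen, contexts = [], {}
    for idx in order:
        contexts[tokens[idx]] = list(seen)
        seen.append(tokens[idx])
    return contexts

# The paper's sampled order: [is, a, city, New, York]
for target, context in contexts_for_order([2, 3, 4, 0, 1]).items():
    print(f"p({target} | {', '.join(context) if context else '<empty>'})")
# "York" is predicted given [is, a, city, New]: the (New, York) dependency
# that BERT's independence assumption drops is captured by this order.
```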


NIPS Foundation decides against name change as poll finds it an unpopular superficial move; instead increases ‘focus on diversity and inclusivity initiatives’

Melisha Dsouza
24 Oct 2018
5 min read
'Neural Information Processing Systems', also known as 'NIPS', has been one of the most influential AI conferences over the past 32 years, held all around the globe. The conference is organized by the NIPS Foundation and brings together researchers from biological, psychological, technological, mathematical, and theoretical areas of science and engineering, including big names of the tech industry like Google, Nvidia, Facebook, and Microsoft.

The acronym of the conference has been receiving a lot of attention from members worldwide over the past few years. Some members of the community have pointed out that the acronym 'NIPS' has unintended connotations which make the name sound "sexist". On the other hand, the prospect of a name change only added further confusion and frustration. In August 2018, the organizers of the conference conducted a poll on the NIPS website asking people whether they agree or disagree with a potential name change, taking a cue from several well-publicized incidents of insensitivity at past conferences. The poll requested alternative names for the conference and ratings of the existing and alternative names, and encouraged additional comments from members.

"Arguments in favor of keeping the existing name include a desire to respect the intellectual tradition that brought the meeting to where it is today and the strong brand that comes with this tradition. Arguments in favor of changing the name include a desire to better reflect the modern scope of the conference and to avoid distasteful connotations of the name." - Organizers of NIPS

Of the 2,270 participants who took the survey, over 86% were male, around 13% were female, and the remainder were of another gender or non-responsive. A key question in the poll was: "Do you think we should change the name of the NIPS conference?" Around 30% of the respondents said they support the name change (28% of males and about 44% of females), while 31% 'strongly disagreed' with the name change proposal (31% of males and 25% of females).

[Figure: summary of the response distribution. Source: nips.cc]

Some respondents also questioned whether the name was deliberately selected as a double entendre, but the foundation denies the claim: the name was selected in 1987, and sources such as the Oxford English Dictionary show that the slang reference to a body part did not come into usage until years later. To the foundation, the results of the poll did not provide any useful insight into the situation. The first poll resulted in a long list of alternative names, most of them unsuitable for reasons such as clashing with an existing brand, being too close to the names of other conferences, or having offensive connotations in some language. After shortlisting six names, a second poll was conducted; none of these names was strongly preferred by the community. Since the polls did not return a consensus result, the foundation has decided not to change the name of the conference, at least for now.

Here are some of the comments posted on the NIPS website (with permission):

"Thanks for considering the name change. I am not personally bothered by the current name, which is semi-accurate and has no ill intent -- but I think the gesture of making a name change will send a much-needed inclusive vibe in the right direction"

"If it were up to me, I'd call off this nice but symbolic gesture and use whatever time, money, and energy it requires to make actual changes that boost inclusivity, like providing subsidized child care so that parents can attend, or offering more travel awards to scholars from lesser-developed countries"

"Please, please please change the name. It is sexist and a racist slur!!! I'm embarrassed every time I have to say the name of the conference"

"As a woman, I find it offensive that the board is seriously considering changing the name of the meeting because of an adolescent reference to a woman's body. From my point of view, it shows that the board does not see me as an equal member of the community, but as a woman first and a scientist second"

"I am a woman, I have experienced being harassed by male academics, and I would like this problem to be discussed and addressed. But not in this frankly almost offensive way"

Much of the feedback received from members pointed towards taking a more substantive approach to diversity and inclusivity. Taking this into account, the NIPS code of conduct was implemented, two Inclusion and Diversity chairs were appointed to the organizing committee, and childcare support for the NIPS 2018 conference in Montreal was introduced. In addition, NIPS has welcomed the formation of several co-located workshops focused on diversity in the field, and is extending support to additional groups, including Black in AI (BAI), Queer in AI@NIPS, Latinx in AI (LXAI), and Jews in ML (JIML).

Twitter saw some pretty strong opinions on this decision:

https://twitter.com/StephenLJames/status/1054996053177589760

The foundation hopes that the community's support will help improve the inclusiveness of the conference for its diverse set of members. Head over to the Neural Information Processing Systems blog post for more insights on this news.

Read next:
- NIPS 2017 Special: 6 Key Challenges in Deep Learning for Robotics by Pieter Abbeel
- NIPS 2017 Special: How machine learning for genomics is bridging the gap between research and clinical trial success by Brendan Frey

TensorFlow Lite developer preview is here

Savia Lobo
15 Nov 2017
3 min read
Team TensorFlow has announced the developer preview of TensorFlow Lite, a lightweight solution for mobile and embedded devices first unveiled at the I/O developer conference. TensorFlow has been a popular framework grabbing everyone's attention since its inception, with adoption ranging from enormous server racks to tiny IoT (Internet of Things) devices; now it's the turn of mobile and embedded devices. Since TensorFlow Lite made its debut in May, several competitors have come up with their own versions of AI on mobile: Apple's Core ML and the cloud service from Clarifai are some popular examples. TensorFlow Lite is available for both Android and iOS devices.

TensorFlow Lite is designed to be:

- Lightweight: it allows inference of on-device machine learning models with a small binary size, allowing faster initialization and startup.
- Fast: model loading time is dramatically improved, with hardware acceleration support.
- Cross-platform: it includes a runtime tailor-made to run on various platforms, starting with Android and iOS.

Recently, there has been an increase in the number of mobile devices that use custom-built hardware to carry out ML workloads efficiently. Keeping this in mind, TensorFlow Lite supports the Android Neural Networks API to take advantage of the new accelerators. When accelerator hardware is not available, TensorFlow Lite falls back to optimized CPU execution, which ensures that your models run fast on a large set of devices. It also allows low-latency inference for on-device ML models.

Let's now have a look at the lightweight architecture:

[Figure: TensorFlow Lite architecture. Source: https://www.tensorflow.org/mobile/tflite/]

Starting from the top and moving down, the architecture consists of:

- A trained TensorFlow model, saved on disk.
- A TensorFlow Lite converter program, which converts the TensorFlow model into the TensorFlow Lite format.
- A TensorFlow Lite model file format based on FlatBuffers, optimized for maximum speed and minimum size.

Further down the architecture, the TensorFlow Lite model file is deployed into Android and iOS applications. Within each mobile application there is a Java API, a C++ API, and an interpreter. Developers can also implement custom kernels with the C++ API, which can be used by the interpreter. A hedged sketch of the conversion workflow follows at the end of this article.

TensorFlow Lite also has support for several models, trained and optimized for mobile devices:

- MobileNet: able to identify 1,000 different object classes, designed specifically for efficient execution on mobile and embedded devices.
- Inception v3: an image recognition model similar to MobileNet in functionality; though larger in size, it offers higher accuracy.
- Smart Reply: an on-device conversational model that provides one-touch replies to incoming chat messages. Many Android Wear devices have this feature in their messaging apps.

Both Inception v3 and MobileNet are trained on the ImageNet dataset, so one can easily retrain the two models on their own image datasets via transfer learning.

TensorFlow already has a TensorFlow Mobile API that supports mobile and embedded deployment of models. The obvious question then is, why TensorFlow Lite? Team TensorFlow's answer on their official blog post is, "Going forward, TensorFlow Lite should be seen as the evolution of TensorFlow Mobile, and as it matures it will become the recommended solution for deploying models on mobile and embedded devices. With this announcement, TensorFlow Lite is made available as a developer preview, and TensorFlow Mobile is still there to support production apps."

For more information on TensorFlow Lite, you can visit the official documentation page here.
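As a rough sketch of the conversion workflow described above, the snippet below uses today's tf.lite Python API, which postdates this developer preview (the preview shipped a standalone converter tool); the file paths are placeholders:

```python
import tensorflow as tf

# Convert a trained SavedModel into the FlatBuffers-based TensorFlow Lite format.
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# The interpreter (normally running on-device) executes the converted model.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
print(interpreter.get_input_details(), interpreter.get_output_details())
```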


NeurIPS 2018 paper: DeepMind researchers explore autoregressive discrete autoencoders (ADAs) to model music in raw audio at scale

Melisha Dsouza
03 Dec 2018
5 min read
In the paper 'The Challenge of Realistic Music Generation: Modelling Raw Audio at Scale', researchers from DeepMind have embarked on modelling music in the raw audio domain. They explore autoregressive discrete autoencoders (ADAs) as a means to enable autoregressive models to capture long-range correlations in waveforms.

Autoregressive models are the state of the art at generating raw audio waveforms of speech, but when applied to music they are biased towards capturing local signal structure at the expense of modelling long-range correlations. Since music exhibits structure at many different timescales, this is problematic, and it makes realistic music generation a challenging task. The paper will be presented at the 32nd Conference on Neural Information Processing Systems (NeurIPS 2018), held in Montréal, Canada this week.

Challenges when music is symbolically represented

Music has a complex structure by nature and is made up of waveforms that span different time periods and magnitudes, so modelling all of the temporal correlations in the sequence that arise from this structure is challenging. Most work on music generation has focused on symbolic representations, but this approach has multiple limitations. Symbolic representations abstract away the idiosyncrasies of a particular performance, and these nuances are often musically quite important, affecting a listener's enjoyment of the music: the paper gives the example that the precise timing, timbre, and volume of the notes played by a musician do not correspond exactly to those written in a score. Symbolic representations are also often tailored to particular instruments, which reduces their generality and means a lot of work is needed to apply existing modelling techniques to new instruments.

Digital representations of audio waveforms, by contrast, retain all the musically relevant information, and models of them can be applied to recordings of any set of instruments. The task, however, is more challenging than modelling symbolic representations: generative models of waveforms that capture musical structure at many timescales require high representational capacity, distributed effectively over the various musically relevant timescales.

Steps performed to address music generation in the raw audio domain

The researchers use autoregressive models to model structure across roughly 400,000 timesteps, or about 25 seconds of audio sampled at 16 kHz. They demonstrate a computationally efficient method to enlarge the receptive fields of these models using autoregressive discrete autoencoders (ADAs), and they explore the argmax autoencoder (AMAE) as an alternative to vector quantisation variational autoencoders (VQ-VAE); the AMAE converges more reliably when trained on a challenging dataset.

To model long-range structure in musical audio signals, the receptive fields (RFs) of AR models have to be enlarged. One way to do this is to provide a rich conditioning signal. The paper concentrates on this notion, turning an AR model into an autoencoder by attaching an encoder that learns a high-level conditioning signal directly from the data. Temporal downsampling operations can be inserted into the encoder to make this signal more coarse-grained than the original waveform. The resulting autoencoder uses its AR decoder to model any local structure that this compressed signal cannot capture.

The researchers compare two techniques for the discrete bottleneck. Vector quantisation variational autoencoders use vector quantisation (VQ): the queries are vectors in a d-dimensional space, and a codebook of k such vectors is learnt on the fly, together with the rest of the model parameters. The loss function is as follows, where square brackets indicate that the gradient is not backpropagated through the bracketed term:

L_VQ-VAE = −log p(x | q_j) + (q_j − [q])² + β · ([q_j] − q)²

The issue with VQ-VAEs trained on challenging (i.e. high-entropy) datasets is that they often suffer from codebook collapse: at some point during training, some portion of the codebook may fall out of use, and the model will no longer use the full capacity of the discrete bottleneck, leading to worse results and poor reconstructions.

As an alternative to the VQ-VAE method, the researchers propose a model called the argmax autoencoder (AMAE). It produces k-dimensional queries and features a nonlinearity that ensures all outputs lie on the (k-1)-simplex. The quantisation operation is then simply an argmax operation, which is equivalent to taking the nearest k-dimensional one-hot vector in the Euclidean sense. This projection onto the simplex limits the maximal quantisation error, which makes the gradients that pass through it more accurate. To make sure the full capacity is used, an additional diversity loss term encourages the model to use all outputs in equal measure. This loss can be computed from batch statistics, by averaging all queries q (before quantisation) across the batch and time axes and encouraging the resulting vector q̄ to resemble a uniform distribution. A small sketch of this quantisation step appears at the end of this article.

Results of the experiment

The researchers:

- Addressed the challenge of music generation in the raw audio domain with autoregressive models, extending their receptive fields in a computationally efficient manner.
- Introduced the argmax autoencoder (AMAE), an alternative to VQ-VAE that shows improved stability on this task.
- Showed that using separately trained autoregressive models at different levels of abstraction captures long-range correlations in audio signals across tens of seconds, corresponding to hundreds of thousands of timesteps, at the cost of some signal fidelity.

You can refer to the paper for a comparison of results obtained across the various autoencoders and for more insights on this topic.

Read next:
- Exploring Deep Learning Architectures [Tutorial]
- Implementing Autoencoders using H2O
- What are generative adversarial networks (GANs) and how do they work? [Video]
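To make the quantisation step concrete, here is a hedged NumPy sketch of AMAE-style quantisation, not the paper's implementation: a softmax projects each query onto the simplex, an argmax snaps it to the nearest one-hot code, and a diversity penalty pushes the batch-averaged query towards uniform (in a real model, gradients would flow through the argmax via a straight-through estimator):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def argmax_quantise(logits):
    q = softmax(logits)                   # queries on the (k-1)-simplex
    codes = np.argmax(q, axis=-1)         # nearest one-hot in the Euclidean sense
    one_hot = np.eye(q.shape[-1])[codes]  # discrete bottleneck output
    q_bar = q.mean(axis=0)                # batch-averaged query
    # Diversity loss: penalize deviation from the uniform distribution.
    diversity = np.sum((q_bar - 1.0 / q.shape[-1]) ** 2)
    return one_hot, diversity

queries = np.random.randn(8, 16)          # batch of 8 queries, codebook size 16
one_hot, diversity = argmax_quantise(queries)
print(one_hot.shape, float(diversity))
```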


Amazon announces general availability of Amazon Personalize, an AI-based recommendation service

Vincy Davis
12 Jun 2019
3 min read
Two days ago, Amazon Web Services (AWS) announced in a press release that Amazon Personalize is now generally available to all customers. Until now, this machine learning technology, which is based on the recommendation technology used by Amazon.com, was available to AWS customers only in limited preview.

https://twitter.com/jeffbarr/status/1138430113589022721

Amazon Personalize helps developers easily add custom machine learning models to their applications, such as personalized product and content recommendations, tailored search results, and targeted marketing promotions, even if they don't have machine learning experience. It is a fully managed service that trains, tunes, and deploys custom, private machine learning models. Customers pay only for what they use, with no minimum fees or upfront commitments.

Amazon Personalize processes and examines a customer's data, identifies what is meaningful, selects from multiple advanced algorithms built for Amazon's retail business, and trains and optimizes a personalization model customized to that data. All this is done while keeping the customer's data completely private. Customers receive results via an Application Programming Interface (API); a hedged sketch of such a call appears at the end of this article.

Now, with the general availability of Amazon Personalize, application developers and data scientists at businesses of all sizes, across all industries, will be able to put Amazon's expertise in machine learning to work. Swami Sivasubramanian, Vice President of Machine Learning, Amazon Web Services, said, "Customers have been asking for Amazon Personalize, and we are eager to see how they implement these services to delight their own end users. And the best part is that these artificial intelligence services, like Amazon Personalize, do not require any machine learning experience to immediately train, tune, and deploy models to meet their business demands."

Amazon Personalize is now available in US East (Ohio), US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Singapore), and EU (Ireland). Amazon charges five cents per GB of data uploaded to Personalize and 24 cents per training hour used to train a custom model. Real-time recommendation requests are priced based on how many requests are uploaded, with discounts for larger orders.

Customers who have already added Amazon Personalize to their apps include Yamaha Corporation of America, Subway, Zola, and Segment. In the press release, Ishwar Bharbhari, Director of Information Technology, Yamaha Corporation of America, said, "Amazon Personalize saves us up to 60% of the time needed to set up and tune the infrastructure and algorithms for our machine learning models when compared to building and configuring the environment on our own. It is ideal for both small developer teams who are trying to build the case for ML and large teams who are trying to iterate rapidly at reasonable cost. Even better, we expect Amazon Personalize to be more accurate than other recommender systems, allowing us to delight our customers with highly personalized product suggestions during their shopping experience, which we believe will increase our average order value and the total number of orders."

Developers are, of course, excited that they can finally use Amazon Personalize in their applications.

https://twitter.com/TheNickWalsh/status/1138243004127334400
https://twitter.com/SubkrishnaRao/status/1138742140996112384
https://twitter.com/PatrickMoorhead/status/1138228634924212229

To get started with Amazon Personalize, head over to this blog post by Julien Simon.

Read next:
- Amazon re:MARS Day 1 kicks off showcasing Amazon's next-gen AI robots; Spot, the robo-dog and a guest appearance from 'Iron Man'
- US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny
- World's first touch-transmitting telerobotic hand debuts at Amazon re:MARS tech showcase
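For a flavor of that API, here is a hedged sketch using the AWS SDK for Python (boto3); the campaign ARN, user ID, and region below are placeholders for illustration:

```python
import boto3

# The runtime client serves recommendations from an already-deployed campaign.
personalize_runtime = boto3.client("personalize-runtime", region_name="us-east-1")

response = personalize_runtime.get_recommendations(
    campaignArn="arn:aws:personalize:us-east-1:123456789012:campaign/demo",  # placeholder
    userId="user-42",   # placeholder user identifier
    numResults=10,      # number of items to return
)

for item in response["itemList"]:
    print(item["itemId"])
```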

Samsung AI lab researchers present a system that can animate heads with one-shot learning

Amrata Joshi
23 May 2019
5 min read
Recent works have shown how to obtain highly realistic human head images by training convolutional neural networks to generate them. To create such a personalized talking head model, these works require training on a large dataset of images of a single person. Researchers from the Samsung AI Center have now presented a system with few-shot capability in their paper, Few-Shot Adversarial Learning of Realistic Neural Talking Head Models. The system performs lengthy meta-learning on a large dataset of videos and can then frame few-shot and one-shot learning of neural talking head models of previously unseen people, with the help of high-capacity generators and discriminators.

https://twitter.com/DmitryUlyanovML/status/1131155659305705472

The system initializes the parameters of both the generator and the discriminator in a person-specific way, so that training can be based on just a few images and completed quickly. The researchers show in the paper that such an approach is capable of learning highly realistic and personalized talking head models of new people, and even of portrait paintings. They consider the task of creating personalized photorealistic talking head models: systems that can synthesize video sequences of the speech expressions and mimics of a particular individual.

https://youtu.be/p1b5aiTrGzY

More specifically, the researchers consider the problem of synthesizing photorealistic personalized head images given a set of face landmarks, which drive the animation of the model. Such a system has practical applications in telepresence, including videoconferencing, multi-player games, and the special effects industry.

Why is synthesizing realistic talking head sequences difficult?

Synthesizing realistic talking head sequences is difficult for two major reasons. The first is that human heads have high photometric, geometric, and kinematic complexity, which makes faces hard to model. The second complicating factor is the acuteness of the human visual system: even minor mistakes in the modelled appearance are immediately noticeable.

What have the researchers done to overcome the problem?

The researchers have presented a system for creating talking head models from a handful of photographs, also called few-shot learning. The system can even generate a result from a single photograph, a process known as one-shot learning, though adding a few more photographs increases the fidelity of personalization. The talking heads created by the system can handle a large variety of poses, going beyond the abilities of warping-based systems. The few-shot learning ability is obtained through extensive pre-training (meta-learning) on a large corpus of talking head videos corresponding to different speakers with diverse appearance. In the course of meta-learning, the system simulates few-shot learning tasks and learns to transform landmark positions into realistic-looking personalized photographs. A handful of photographs of a new person then sets up a new adversarial learning problem, with a high-capacity generator and discriminator pre-trained via meta-learning. The new problem converges to a state that generates realistic and personalized images after only a few training steps.
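To make that few-shot fine-tuning stage more concrete, the snippet below is a minimal, runnable PyTorch sketch of the idea. It is not the authors' code: the tiny networks stand in for the paper's high-capacity, meta-learned embedder, generator, and discriminator; the images and rasterized landmarks are faked with random tensors; and the losses are reduced to a hinge adversarial term plus an L1 content term.

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Stand-in for a meta-learned network; the real models are far larger."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, padding=1),
        )
    def forward(self, x):
        return self.body(x)

# Pretend these arrive meta-learned on a large video corpus (e.g. VoxCeleb2).
embedder = TinyNet(6, 3)        # (image ++ landmark sketch) -> embedding map
generator = TinyNet(6, 3)       # (landmark sketch ++ embedding) -> synthesized frame
discriminator = nn.Sequential(TinyNet(6, 1), nn.AdaptiveAvgPool2d(1))  # realism score

# Few-shot data: K frames of a previously unseen person, with landmarks
# rasterized as 3-channel sketches (random tensors here, for illustration).
K = 8
images = torch.rand(K, 3, 64, 64)
landmarks = torch.rand(K, 3, 64, 64)

# Person embedding: average the embedder's output over the K frames.
with torch.no_grad():
    embedding = embedder(torch.cat([images, landmarks], dim=1)).mean(0, keepdim=True)

g_opt = torch.optim.Adam(generator.parameters(), lr=5e-5)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(40):          # "a few training steps" on the new person
    cond = torch.cat([landmarks, embedding.expand(K, -1, -1, -1)], dim=1)
    fake = generator(cond)

    # Discriminator update: hinge loss on (frame, landmarks) pairs.
    d_real = discriminator(torch.cat([images, landmarks], dim=1))
    d_fake = discriminator(torch.cat([fake.detach(), landmarks], dim=1))
    d_loss = torch.relu(1 - d_real).mean() + torch.relu(1 + d_fake).mean()
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator update: fool the discriminator while staying close to the frames.
    g_adv = -discriminator(torch.cat([fake, landmarks], dim=1)).mean()
    g_loss = g_adv + 10.0 * (fake - images).abs().mean()
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

In the paper itself, the person embedding modulates the generator through adaptive instance normalization and the content loss is perceptual rather than plain L1; the sketch keeps only the overall few-shot adversarial structure.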
In their experiments, the researchers compare talking heads created by their system against alternative neural talking head models, using quantitative measurements and a user study. They also demonstrate several use cases of their talking head models, including video synthesis from landmark tracks extracted from video sequences, and puppeteering (video synthesis of a certain person based on the face landmark tracks of a different person).

The researchers use two datasets of talking head videos for quantitative and qualitative evaluation: VoxCeleb1 [26] (256p videos at 1 fps) and VoxCeleb2 [8] (224p videos at 25 fps), the second of which contains approximately 10 times more videos than the first. VoxCeleb1 is used for comparison with baselines and for ablation studies, while VoxCeleb2 is used to show the full potential of the approach.

To conclude, the researchers have presented a framework for meta-learning of adversarial generative models that can train highly realistic virtual talking heads in the form of deep generator networks. A handful of photographs (as few as one) is enough to create a new model, and a model trained on 32 images achieves perfect realism and personalization scores in their user study (for 224p static images). The key limitations of the method are the mimics representation and the lack of landmark adaptation: landmarks from a different person can lead to a noticeable personality mismatch, so creating “fake” puppeteering videos without such a mismatch would require some form of landmark adaptation. The paper further reads, “We note, however, that many applications do not require puppeteering a different person and instead only need the ability to drive one’s own talking head. For such scenario, our approach already provides a high-realism solution.”

To know more about this news, check out the paper, Few-Shot Adversarial Learning of Realistic Neural Talking Head Models.

Samsung opens its AI based Bixby voice assistant to third-party developers
Researchers from China introduced two novel modules to address challenges in multi-person pose estimation
AI can now help speak your mind: UC researchers introduce a neural decoder that translates brain signals to natural sounding speech

Lawmakers introduce new Consumer privacy bill and Malicious Deep Fake Prohibition Act to support consumer privacy and battle deepfakes

Sugandha Lahoti
05 Feb 2019
4 min read
Yesterday, a Massachusetts senator filed a consumer privacy bill enabling consumers to sue for privacy invasions. The bill is touted to be similar to the California Consumer Privacy Act (CCPA). It allows for a private right of action and statutory damages for any violation of the law (not just breaches), and does not require a demonstration of a loss of money or property.

Here’s what the bill proposes:

Notice at/before collection: Businesses must provide consumers with a notice at or before the point of collection.
Right to Delete: A consumer shall have the right to request that a business delete any personal information about the consumer which the business has collected from the consumer.
Right to Opt Out of Third-Party Disclosure: A consumer shall have the right, at any time, to demand that a business not disclose the consumer’s personal information to third parties.
No Penalty for Exercise of Rights: A business shall not discriminate against a consumer because the consumer exercised any of the consumer’s rights under the bill.
Private Right of Action: A consumer who has suffered a violation of the bill may bring a lawsuit against the business or service provider that violated it. The bill says that “Consumers need not suffer a loss of money or property as a result of the violation in order to bring an action for a violation.”

People on Twitter generally expressed positive sentiments.

https://twitter.com/natashanyt/status/1090328524865576961
https://twitter.com/ashk4n/status/1092452492175175680
https://twitter.com/gabrielazanfir/status/1092524077854670851

Last month, Sen. Ben Sasse introduced the “Malicious Deep Fake Prohibition Act,” a bill to criminalize the malicious creation and distribution of deepfakes, which are increasingly being used for harassment and illegal activities. Under the bill, it would be illegal for individuals to:

(1) Create, with the intent to distribute, a deep fake with the intent that the distribution of the deep fake would facilitate criminal or tortious conduct under Federal, State, local, or Tribal law; or
(2) Distribute an audiovisual record with (A) actual knowledge that the audiovisual record is a deep fake, and (B) the intent that the distribution of the audiovisual record would facilitate criminal or tortious conduct under Federal, State, local, or Tribal law.

However, the bill was widely criticized for its loopholes. A statement in the bill reads, “Deep fake means an audiovisual record created or altered in a manner that the record would falsely appear to a reasonable observer to be an authentic record of the actual speech or conduct of an individual.” A Hacker News user said that this limits the scope of the act to prohibiting deep fakes that are not explicitly labeled as such. Another user said, “If that's the case, any sort of creative editing, even just quick cuts, could fall under this (see: any primetime or cable news, any TV campaign ad, the quick cuts of Obama where it looks like he's singing Never Gonna Give You Up, etc). A law like this could also be weaponized against political foes—basically, label everything you don't like as "fake news" and prosecute it under this law.”

Orin Kerr comments in a blog post, “The Sasse bill also has a potential problem of not distinguishing between devices and files. Reading the bill, it prohibits the distribution of an audiovisual record with the intent that the distribution would facilitate tortious conduct.”

It is promising to see lawmakers sincerely taking measures to build strict privacy standards.
Only time will tell whether this new legislation will do more to protect consumer data than the businesses that profit from it.

Machine generated videos like Deepfakes – Trick or Treat?
Privacy experts urge the Senate Commerce Committee for a strong federal privacy bill “that sets a floor, not a ceiling”
Biometric Information Privacy Act: It is now illegal for Amazon, Facebook or Apple to collect your biometric data without consent in Illinois