How-To Tutorials - News


Troll Patrol Report: Amnesty International and Element AI use machine learning to understand online abuse against women

Sugandha Lahoti
18 Dec 2018
5 min read
Amnesty International has partnered with Element AI to release a Troll Patrol report on the online abuse against women on Twitter. The findings are part of their Troll Patrol project, which invites human rights researchers, technical experts, and online volunteers to build a crowd-sourced dataset of online abuse against women.

https://twitter.com/amnesty/status/1074946094633836544

Abuse of women on social media websites has been rising at an unprecedented rate. Social media websites have a responsibility to respect human rights and to ensure that women using the platform are able to express themselves freely and without fear. However, this has not been the case with Twitter, and Amnesty has unearthed some troubling findings.

Amnesty's methodology was powered by machine learning

Amnesty and Element AI surveyed 778 journalists and politicians from the UK and US throughout 2017 and then used machine learning techniques to qualitatively analyze abuse against women. The first step was to design a large, unbiased dataset of tweets mentioning the 778 women politicians and journalists. Next, over 6,500 volunteers (aged 18 to 70 and from over 150 countries) analyzed 288,000 unique tweets to create a labeled dataset of abusive or problematic content. Labeling was based on simple questions, such as whether a tweet was abusive or problematic, and if so, whether it revealed misogynistic, homophobic, or racist abuse or other types of violence. Three experts also categorized a sample of 1,000 tweets to assess the quality of the labels produced by the digital volunteers. Element AI then used data science techniques, drawing on a subset of the Decoders' and experts' categorizations, to extrapolate the abuse analysis to the full dataset.

Key findings from the report

Per the Troll Patrol report, 7.1% of tweets sent to the women in the study were "problematic" or "abusive". This amounts to 1.1 million tweets mentioning the 778 women across the year, or one every 30 seconds. Women of color (black, Asian, Latinx, and mixed-race women) were 34% more likely to be mentioned in abusive or problematic tweets than white women. Black women were disproportionately targeted, being 84% more likely than white women to be mentioned in abusive or problematic tweets.

Source: Amnesty

Online abuse targeted women from across the political spectrum: liberals and conservatives alike, as well as left- and right-leaning media organizations, faced similar levels of abuse.

Source: Amnesty

What does this mean for people in tech?

Social media organizations are repeatedly failing in their responsibility to protect women's rights online. They fall short of adequately investigating and responding to reports of violence and abuse in a transparent manner, which leads many women to silence or censor themselves on the platform. Such abuse also hinders freedom of expression online and undermines women's mobilization for equality and justice, particularly for groups who already face discrimination and marginalization.

What can tech platforms do?

One of the report's recommendations is that social media platforms should publicly share comprehensive and meaningful information about reports of violence and abuse against women, as well as other groups, on their platforms. They should also explain in detail how they are responding to it.
Although Twitter and other platforms are using machine learning for content moderation and flagging, they should be transparent about the algorithms they use. They should publish information about training data, methodologies, moderation policies, and technical trade-offs (such as between greater precision or recall) for public scrutiny (a toy illustration of such a classifier appears at the end of this article). Machine learning automation should ideally be part of a larger content moderation system characterized by human judgment, greater transparency, rights of appeal, and other safeguards.

Amnesty, in collaboration with Element AI, also developed a machine learning model to better understand the potential and risks of using machine learning in content moderation systems. This model achieved results comparable to those of the digital volunteers at predicting abuse, although Amnesty notes it is 'far from perfect still'. It achieves about a 50% accuracy level when compared to the judgment of experts: it identified 2 in every 14 tweets as abusive or problematic, whereas experts identified 1 in every 14 tweets as abusive or problematic.

"Troll Patrol isn't about policing Twitter or forcing it to remove content. We are asking it to be more transparent, and we hope that the findings from Troll Patrol will compel it to make that change. Crucially, Twitter must start being transparent about how exactly they are using machine learning to detect abuse, and publish technical information about the algorithms they rely on," said Milena Marin, Senior Advisor for Tactical Research at Amnesty International.

Read more: The full list of Amnesty's recommendations to Twitter.

People on Twitter (the irony) are shocked at the release of Amnesty's report, and #ToxicTwitter is trending.

https://twitter.com/gregorystorer/status/1074959864458178561
https://twitter.com/blimundaseyes/status/1074954027287396354
https://twitter.com/MikeWLink/status/1074500992266354688
https://twitter.com/BethRigby/status/1074949593438265344

Check out the full Troll Patrol report on Amnesty. Also, check out their machine learning based methodology in detail.

Amnesty International takes on Google over Chinese censored search engine, Project Dragonfly
Twitter CEO, Jack Dorsey slammed by users after a photo of him holding 'smash Brahminical patriarchy' poster went viral
Twitter plans to disable the 'like' button to promote healthy conversations; should retweet be removed instead?
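
To make the precision/recall trade-off mentioned above concrete, here is a minimal sketch of the general kind of text classifier such studies train. This is not Amnesty and Element AI's actual model: the tweets, labels, and threshold are invented placeholders, and scikit-learn is assumed to be available.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical crowd-labeled tweets: 1 = abusive/problematic, 0 = neither.
tweets = [
    "great reporting as always",
    "nobody wants to hear from you, be quiet",
    "thanks for covering this story",
    "you do not deserve your job, disappear",
]
labels = [0, 1, 0, 1]

# TF-IDF features over word unigrams/bigrams feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)

# Probabilities let a platform trade precision against recall: raising the
# threshold flags fewer tweets, but with higher confidence.
probs = model.predict_proba(["be quiet and disappear"])[:, 1]
print(probs, probs > 0.5)  # the 0.5 threshold is a policy choice
```

Raising the threshold above 0.5 yields higher precision (fewer false flags) at the cost of recall, which is exactly the trade-off the report asks platforms to disclose.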

Facebook's CEO, Mark Zuckerberg summoned for hearing by UK and Canadian Houses of Commons

Bhagyashree R
01 Nov 2018
2 min read
Yesterday, the chairs of the UK and Canadian Houses of Commons issued a letter calling for Mark Zuckerberg, Facebook's CEO, to appear before them. The primary aim of this hearing is to get a clear idea of what measures Facebook is taking to avoid the spread of disinformation on the social media platform and to protect user data. It is scheduled to happen at the Westminster Parliament on Tuesday, 27th November.

The committee has already gathered evidence regarding several data breaches and process failures, including the Cambridge Analytica scandal, and is now seeking answers from Mark Zuckerberg on what led to all of these incidents.

Zuckerberg last attended a hearing in April this year, with the Senate's Commerce and Judiciary committees, in which he was asked about the company's failure to protect its user data, its perceived bias against conservative speech, and its use for selling illegal material like drugs. Since then he has not attended any hearings, instead sending other senior representatives such as Sheryl Sandberg, COO at Facebook. The letter pointed out: "You have chosen instead to send less senior representatives, and have not yourself appeared, despite having taken up invitations from the US Congress and Senate, and the European Parliament."

Throughout this year we saw major security and data breaches involving Facebook. The social media platform faced a security issue last month which impacted almost 50 million user accounts. Its engineering team discovered that hackers had found a way to exploit a series of bugs related to Facebook's View As feature. Earlier this year, Facebook faced a backlash over the Facebook-Cambridge Analytica data scandal, a major political scandal in which Cambridge Analytica used the personal data of millions of Facebook users for political purposes without their permission.

The report of this hearing will be shared in December, if Zuckerberg agrees to attend at all. The committee has requested his response by 7th November.

Read the full letter issued by the committee.

Facebook is at it again. This time with Candidate Info where politicians can pitch on camera
Facebook finds 'no evidence that hackers accessed third party Apps via user logins', from last week's security breach
How far will Facebook go to fix what it broke: Democracy, Trust, Reality

BrainNet, an interface to communicate between human brains, could soon make Telepathy real

Sunith Shetty
28 Sep 2018
3 min read
BrainNet provides the first multi-person brain-to-brain interface, allowing noninvasive, direct collaboration between human brains. It can help small teams collaborate to solve a range of tasks using direct brain-to-brain communication.

How does BrainNet operate?

The noninvasive interface combines electroencephalography (EEG) to record brain signals and transcranial magnetic stimulation (TMS) to deliver the required information to the brain. For now, the interface allows three human subjects to collaborate on and solve a task using direct brain-to-brain communication.

Two of the three subjects are "Senders". The senders' brain signals are decoded using real-time EEG data analysis, extracting the decisions needed to solve the task at hand. Take the example of a Tetris-like game, where you must quickly decide whether to rotate a block or drop it as-is in order to fill a line. The senders' decisions are transmitted over the Internet to the brain of the third subject, the "Receiver", via magnetic stimulation of the occipital cortex. The receiver cannot see the game screen and so cannot judge directly whether a rotation is required. The receiver integrates the decisions received and, using an EEG interface, makes an informed call on whether to rotate the block or keep it in the same position. A second round of the game allows the senders to assess the receiver's action and provide the necessary feedback.

How did the results look?

The researchers evaluated the performance of BrainNet on the Tetris task, considering the following factors:

Group-level performance during the game
True/false positive rates of subjects' decisions
Mutual information between subjects

Five groups of three human subjects performed the Tetris task using the BrainNet interface. The average accuracy for the task was 0.813. The researchers also varied the reliability of the information by injecting artificially generated noise into one of the senders' signals; the receiver was nevertheless able to work out which sender was more reliable based on the information transmitted to their brain (a toy sketch of such reliability-weighted integration follows at the end of this article). These positive results open up the possibility of future brain-to-brain interfaces that enable cooperative problem solving by humans using a "social network" of connected brains.

To know more, you can refer to the research paper.

Read more:
Diffractive Deep Neural Network (D2NN): UCLA-developed AI device can identify objects at the speed of light
Baidu announces ClariNet, a neural network for text-to-speech synthesis
Optical training of Neural networks is making AI more efficient
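
As a rough illustration of the integration step described above, here is a toy sketch of a receiver combining two senders' binary decisions, weighting each sender by a reliability estimate learned from feedback rounds. It is illustrative only and is not the BrainNet decoding pipeline; the function name and reliability values are invented.

```python
# Toy sketch: integrate two senders' binary decisions
# ("rotate" = 1, "keep" = 0) via a reliability-weighted vote.

def integrate(decisions, reliabilities):
    """Reliability-weighted vote over binary decisions."""
    score = sum(w if d == 1 else -w for d, w in zip(decisions, reliabilities))
    return 1 if score > 0 else 0

# The noisier sender (the study injected artificial noise into one sender's
# signal) earns a lower weight, so the reliable sender dominates conflicts.
reliabilities = [0.9, 0.6]
print(integrate([1, 0], reliabilities))  # -> 1: the reliable sender prevails
```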

How bad is the gender diversity crisis in AI research? Study analysing 1.5million arxiv papers says it’s “serious”

Fatema Patrawala
18 Jul 2019
9 min read
Yesterday the team at Nesta, an innovation foundation based in the UK, published research on gender diversity in the AI research workforce. The authors are Juan Mateos-Garcia, Director; Konstantinos Stathoulopoulos, Principal Researcher; and Hannah Owen, Programme Coordinator at Nesta.

https://twitter.com/JMateosGarcia/status/1151517641103872006

Their analysis is based on 1.5 million arXiv papers. The team claims it is the first study of gender diversity in AI not based on convenience sampling or a proprietary database. The team posted on its official blog, "We conducted a large-scale analysis of gender diversity in AI research using publications from arXiv, a repository with more than 1.5 million preprints widely used by the AI community. We aim to expand the evidence base on gender diversity in AI research and create a baseline with which to interrogate the impact of current and future policies and interventions. To achieve this, we enriched the ArXiv data with geographical, discipline and gender information in order to study the evolution of gender diversity in various disciplines, countries and institutions as well as examine the semantic differences between AI papers with and without female co-authors."

With this research the team also aims to bring the prominent female figures it identified into the spotlight.

Key findings from the research

A serious gender diversity crisis in AI research. The team found a severe gender diversity gap in AI research, with only 13.83% of authors being women. Moreover, in relative terms, the proportion of AI papers co-authored by at least one woman has not improved since the 1990s. Juan Mateos-Garcia argues that this kind of crisis wastes talent and increases the risk of discriminatory AI systems.

https://twitter.com/JMateosGarcia/status/1151517642236276736

Location and research domain are significant drivers of gender diversity. Women in the Netherlands, Norway and Denmark are more likely to publish AI papers, while those in Japan and Singapore are less likely. In the UK, 26.62% of AI papers have at least one female co-author, placing the country at the 22nd spot worldwide. The US follows the UK at 25% for papers with at least one female co-author, and for unique female authors the US ranks one position above the UK.

Source: Nesta research report

Regarding research domains, women working in Physics and Education, Computer Ethics and other societal issues, and Biology are more likely to publish their AI work than those working in Computer Science or Mathematics.

Source: Nesta research report

A significant gender diversity gap in universities, big tech companies and other research institutions. Apart from the University of Washington, every other academic institution and organisation in the dataset has less than 25% female AI researchers. Among the big tech companies, only 11.3% of Google's employees who have published their AI research on arXiv are women; the proportion is similar for Microsoft (11.95%) and slightly better for IBM (15.66%).

Important semantic differences between AI papers with and without a female co-author. Examining publications on Machine Learning and Societal topics in the UK in 2012 and 2015, papers involving at least one female co-author tend to be more semantically similar to each other than to those without any female authors.
Moreover, papers with at least one female co-author tend to be more applied and socially aware, with terms such as fairness, human mobility, mental health, gender and personality among the most salient. Juan Mateos-Garcia noted that this is an area which deserves further research.

https://twitter.com/JMateosGarcia/status/1151517647361781760

The top 15 women with the most AI publications on arXiv identified:

Aarti Singh, Associate Professor at the Machine Learning Department of Carnegie Mellon University
Cordelia Schmid, part of the Google AI team, who also holds a permanent research position at Inria Grenoble Rhone-Alpes
Cynthia Rudin, Associate Professor of computer science, electrical and computer engineering, statistical science and mathematics at Duke University
Devi Parikh, Assistant Professor in the School of Interactive Computing at Georgia Tech
Karen Livescu, Associate Professor at the Toyota Technological Institute at Chicago
Kate Saenko, Associate Professor in the Department of Computer Science at Boston University
Kristina Lerman, Project Leader at the Information Sciences Institute at the University of Southern California
Marilyn A. Walker, Professor in the Department of Computer Science at the University of California
Mihaela van der Schaar, John Humphrey Plummer Professor of Machine Learning, Artificial Intelligence and Medicine at the University of Cambridge and a Turing Fellow at The Alan Turing Institute in London
Petia Radeva, Professor in the Department of Mathematics and Computer Science, Faculty of Mathematics and Computer Science, at the Universitat de Barcelona
Regina Barzilay, Professor at the Massachusetts Institute of Technology and a member of the MIT Computer Science and Artificial Intelligence Laboratory
Svetha Venkatesh, ARC Australian Laureate Fellow, Alfred Deakin Professor and Director of the Centre for Pattern Recognition and Data Analytics (PRaDA) at Deakin University
Xiaodan Liang, Associate Professor at the School of Intelligent Systems Engineering, Sun Yat-sen University
Yonina C. Eldar, Professor of Electrical Engineering in the Weizmann Faculty of Mathematics and Computer Science, Israel
Zeynep Akata, Assistant Professor at the University of Amsterdam in the Netherlands

Five other women researchers could not be identified in the study.

Interview excerpts from women contributors and institutions

The research team also interviewed a few of the researchers and institutions identified in their work, who believe a system-wide reform is needed. When the team discussed the findings with the most cited female researcher, Mihaela van der Schaar, she said she felt her presence in the field has only recently started to be recognised, having begun her career in 2003: 'I think that part of the reason for this is because I am a woman, and the experience of (the few) other women in AI in the same period has been similar.' Professor van der Schaar also described herself and many of her female colleagues as 'faceless', and suggested that celebrating leading women in the field could have a positive impact on the representation of women, as well as on the disparity in the recognition these women receive. This suggests that work is needed across the pipeline: not just early-stage intervention in education, but support for those women already in the field.
She also highlighted the importance of open discussion about the challenges women face in the AI sector, and said workplace changes such as flexible hours are needed to enable researchers to participate in a fast-paced sector without sacrificing their family life.

The team further discussed the findings with the University of Washington's Eve Riskin, Associate Dean of Diversity and Access in the College of Engineering. Riskin described much of her female faculty as experiencing a 'toxic environment' and pervasive imposter syndrome. She also emphasized that more research is needed on the career trajectories of male and female researchers, including recruitment and retention.

Some recent examples of exceptional women in AI research and their contributions

While these women speak to the diversity gaps in the field, we have recently seen work from female researchers like Katie Bouman gain significant attention. Katie is a post-doctoral fellow at MIT whose algorithm led to an image of a supermassive black hole. But all the attention became a catalyst for a sexist backlash on social media and YouTube. It set off "what can only be described as a sexist scavenger hunt," as The Verge described it, in which an apparently small group of vociferous men questioned Bouman's role in the project. "People began going over her work to see how much she'd really contributed to the project that skyrocketed her to unasked-for fame."

Another incredible example in the field of AI research and ethics is Meredith Whittaker, an ex-Googler, now a program manager, activist, and co-founder of the AI Now Institute at New York University. Meredith is committed to the AI Now Institute, her AI ethics work, and organizing an accountable tech industry. On Tuesday, Meredith left Google after facing retaliation from the company for organizing last year's Google Walkout for Real Change protest, which demanded structural changes to ensure a safe and conducive work environment for everyone.

Other observations from the research and next steps

The research also highlights that women are as capable as men of contributing to technical topics, while they tend to contribute more than men to publications with a societal or ethical focus. Some of the leading AI researchers in the field shared their opinions on this. Petia Radeva, Professor at the Department of Mathematics and Computer Science at the University of Barcelona, was positive that the increasingly broad domains of application for AI, and the technology's potential impact, will attract more women into the sector. Similarly, van der Schaar suggests that "publicising the interdisciplinary scope of possibilities and career paths that studying AI can lead to will help to inspire a more diverse group of people to pursue it. In parallel, the industry will benefit from a pipeline of people who are motivated by combining a variety of ideas and applying them across domains."

In future work, the research team will explore the temporal co-authorship network of AI papers to examine how the career trajectories of male and female researchers might differ. They will survey AI researchers on arXiv and investigate the drivers of the diversity gap in more detail through their innovation mapping methods. They also plan to extend the analysis to identify the representation of other underrepresented groups.
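
As a rough illustration of the kind of headline metric the study reports, here is a minimal pandas sketch computing the share of papers with at least one female co-author per country. The column names and rows are invented toy data; Nesta's actual pipeline additionally had to infer gender and geography from the arXiv metadata itself.

```python
import pandas as pd

# Invented toy rows; the real study enriched 1.5 million arXiv records.
papers = pd.DataFrame({
    "paper_id": [1, 2, 3, 4, 5],
    "country": ["UK", "UK", "UK", "US", "US"],
    "has_female_coauthor": [True, False, False, True, False],
})

# Percentage of AI papers with at least one female co-author, by country.
share = (papers.groupby("country")["has_female_coauthor"]
               .mean()
               .mul(100)
               .round(2))
print(share)
```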
Meredith Whittaker, Google Walkout organizer, and AI ethics researcher is leaving the company, adding to its brain-drain woes over ethical concerns “I’m concerned about Libra’s model for decentralization”, says co-founder of Chainspace, Facebook’s blockchain acquisition DeepMind’s Alphastar AI agent will soon anonymously play with European StarCraft II players

The EU commission introduces guidelines for achieving a ‘Trustworthy AI’

Savia Lobo
09 Apr 2019
4 min read
On the third day of Digital Day 2019, held in Brussels, the European Commission introduced a set of essential guidelines for building trustworthy AI, which will guide companies and governments in building ethical AI applications. With these new guidelines, the Commission is working towards a three-step approach:

Setting out the key requirements for trustworthy AI
Launching a large-scale pilot phase for feedback from stakeholders
Working on international consensus building for human-centric AI

The EU's high-level expert group on AI, which consists of 52 independent experts representing academia, industry, and civil society, came up with seven requirements that, in their view, future AI systems should meet.

Seven guidelines for achieving an ethical AI

Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life-cycle phases of AI systems.
Privacy and data governance: Citizens should have full control over their own data, and data concerning them should not be used to harm or discriminate against them.
Transparency: The traceability of AI systems should be ensured.
Diversity, non-discrimination, and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
Societal and environmental well-being: AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.
Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

According to the EU's official press release, "Following the pilot phase, in early 2020, the AI expert group will review the assessment lists for the key requirements, building on the feedback received. Building on this review, the Commission will evaluate the outcome and propose any next steps."

The plans fall under the Commission's AI strategy of April 2018, which "aims at increasing public and private investments to at least €20 billion annually over the next decade, making more data available, fostering talent and ensuring trust", the press release states.

Andrus Ansip, Vice-President for the Digital Single Market, said, "The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies. Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust."

Mariya Gabriel, Commissioner for Digital Economy and Society, said, "We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society. We will now put these requirements to practice and at the same time foster an international discussion on human-centric AI."

Thomas Metzinger, Professor of Theoretical Philosophy at the University of Mainz and a member of the Commission's expert group that worked on the guidelines, has published an article titled 'Ethics washing made in Europe'. Metzinger says he worked on the Ethics Guidelines for nine months: "The result is a compromise of which I am not proud, but which is nevertheless the best in the world on the subject.
The United States and China have nothing comparable. How does it fit together?", he writes.

Eline Chivot, a senior policy analyst at the Center for Data Innovation think tank, told The Verge, "We are skeptical of the approach being taken, the idea that by creating a golden standard for ethical AI it will confirm the EU's place in global AI development. To be a leader in ethical AI you first have to lead in AI itself."

To know more about this news in detail, read the EU press release.

Is Google trying to ethics-wash its decisions with its new Advanced Tech External Advisory Council?
IEEE Standards Association releases ethics guidelines for automation and intelligent systems
Sir Tim Berners-Lee on digital ethics and socio-technical systems at ICDPPC 2018

The U.S. DoD wants to dominate Russia and China in Artificial Intelligence. Last week gave us a glimpse into that vision.

Savia Lobo
18 Mar 2019
9 min read
In a hearing on March 12, the subcommittee on emerging threats and capabilities received testimony on Artificial Intelligence initiatives within the Department of Defense (DoD). The panel included Peter Highnam, Deputy Director of the Defense Advanced Research Projects Agency (DARPA); Michael Brown, Director of the DoD's Defense Innovation Unit (DIU); and Lieutenant General John Shanahan, Director of the Joint Artificial Intelligence Center (JAIC). The panel broadly testified to senators that AI will significantly transform the DoD's capabilities and that it is critical the U.S. remain competitive with China and Russia in developing AI applications.

Dr. Peter T. Highnam on DARPA's achievements and future goals

Dr. Peter T. Highnam, Deputy Director of DARPA, talked about DARPA's significant role in the development of AI technologies that have produced game-changing capabilities for the Department of Defense and beyond. In his testimony, he says, "DARPA's AI Next effort is simply a continuing part of its historic investment in the exploration and advancement of AI technologies."

Dr. Highnam highlighted the different waves of AI technologies. The first wave, which began nearly 70 years ago, emphasized handcrafted knowledge: computer scientists constructed so-called expert systems that captured rules the system could then apply to situations of interest. However, handcrafting rules was costly and time-consuming. The second wave brought in machine learning, which applies statistical and probabilistic methods to large data sets to create generalized representations that can be applied to future samples. This, however, requires training deep learning (artificial) neural networks on a variety of classification and prediction tasks, which works only when adequate historical data is available. Therein lies the rub: collecting, labelling, and vetting the data on which to train is itself prohibitively costly and time-consuming.

He says, "DARPA envisions a future in which machines are more than just tools that execute human programmed rules or generalize from human-curated data sets. Rather, the machines DARPA envisions will function more as colleagues than as tools." Towards this end, DARPA is focusing its investments on a "third wave" of AI technologies that brings forth machines that can reason in context. Incorporating these technologies in military systems that collaborate with warfighters will facilitate better decisions in complex, time-critical, battlefield environments; enable a shared understanding of massive, incomplete, and contradictory information; and empower unmanned systems to perform critical missions safely and with high degrees of autonomy.

DARPA's more-than-$2-billion "AI Next" campaign, announced in September 2018, includes providing robust foundations for second-wave technologies, aggressively applying second-wave AI technologies to appropriate systems, and exploring and creating third-wave AI science and technologies. DARPA's third-wave research efforts will forge new theories and methods that make it possible for machines to adapt contextually to changing situations, advancing computers from tools to true collaborative partners. Furthermore, the agency will be fearless about exploring these new technologies and their capabilities (DARPA's core function), pushing critical frontiers ahead of the nation's adversaries.

To know more about this in detail, read Dr. Peter T. Highnam's complete statement.
Michael Brown on the Defense Innovation Unit's efforts in Artificial Intelligence

Michael Brown, Director of the Defense Innovation Unit, began by highlighting how heavily China and Russia are investing to become dominant in AI. "By 2025, China will aim to achieve major breakthroughs in AI and increase its domestic market to reach $59.6 billion (RMB 400 billion). To achieve these targets, China's National Development and Reform Commission (China's industrial policy-making agency) funded the creation of a national AI laboratory, and Chinese local governments have pledged more than $7 billion in AI funding," Brown said in his statement. He said Chinese firms are, in effect, leveraging U.S. talent by setting up research institutes in the United States, investing in U.S. AI-related startups and firms, recruiting U.S.-based talent, and forming commercial and academic partnerships.

Brown said that DIU will engage with DARPA and the JAIC (Joint Artificial Intelligence Center) and will make its commercial knowledge and relationships with potential vendors available to any of the Services and Service Labs. DIU also anticipates that, through its close partnership with the JAIC, it will be at the leading edge of the Department's National Mission Initiatives (NMIs), proving that commercial technology can be applied to critical national security challenges via accelerated prototypes that lay the groundwork for future scaling through the JAIC.

"DIU looks to bring in key elements of AI development pursued by the commercial sector, which relies heavily on continuous feedback loops, vigorous experimentation using data, and iterative development, all to achieve the measurable outcome, mission impact," Brown mentions. DIU's AI portfolio team combines deep commercial AI, machine learning, and data science experience with military operators. The team has specifically prioritized projects that address three major impact areas, or use cases, employing AI technology:

Computer vision. DIU is prototyping computer vision algorithms for humanitarian assistance and disaster recovery scenarios. "This use of AI holds the potential to automate post-disaster assessments and accelerate search and rescue efforts on a global scale," Brown said in his statement.

Large dataset analytics and predictions. DIU is prototyping predictive maintenance applications for Air Force and Army platforms. DIU plans to partner with the JAIC to scale this solution across multiple aircraft platforms, as well as ground vehicles, beginning with DIU's complementary predictive maintenance project focused on the Army's Bradley Fighting Vehicle. Brown says this is one of DIU's highest-priority projects for FY19, given its enormous potential impact on readiness and cost reduction.

Strategic reasoning. DIU is prototyping an application from Project VOLTRON that leverages AI to reason about high-level strategic questions, map probabilistic chains of events, and develop alternative strategies. This will make DoD-owned systems more resilient to cyber attacks and inform program offices of configuration errors faster and with fewer errors than humans.

To know more about what DIU plans in partnership with DARPA and the JAIC, read Michael Brown's complete testimony in detail.
Lieutenant General Jack Shanahan on making the JAIC "AI-Ready"

Lieutenant General Jack Shanahan, Director of the Joint Artificial Intelligence Center, touched upon how the JAIC is partnering with the Under Secretary of Defense (USD) for Research & Engineering (R&E), the role of the Military Services, the Department's initial focus areas for AI delivery, and how the JAIC is supporting whole-of-government efforts in AI.

"To derive maximum value from AI application throughout the Department, JAIC will operate across an end-to-end lifecycle of problem identification, prototyping, integration, scaling, transition, and sustainment. Emphasizing commerciality to the maximum extent practicable, JAIC will partner with the Services and other components across the Joint Force to systematically identify, prioritize, and select new AI mission initiatives," Shanahan mentions in his testimony.

The AI capability delivery efforts that go through this lifecycle fall into two categories: National Mission Initiatives (NMIs) and Component Mission Initiatives (CMIs). An NMI is an operational or business reform joint challenge, typically identified from the National Defense Strategy's key operational problems, requiring multi-service innovation, coordination, and the parallel introduction of new technology and new operating concepts. A CMI, on the other hand, is a component-level challenge that can be solved through AI. The JAIC will work closely with individual components on CMIs to help identify, shape, and accelerate their component-specific AI deployments through funding support; usage of common foundational tools, libraries, and cloud infrastructure; application of best practices; partnerships with industry and academia; and so on. The component will be responsible for identifying and implementing the organizational structure required to accomplish its project, in coordination and partnership with the JAIC.

The following are examples of early NMIs through which the JAIC aims to deliver mission impact at speed, demonstrate proof of concept for the JAIC operational model, enable rapid learning and iterative process refinement, and build a library of reusable tools while validating the JAIC's enterprise cloud architecture:

Perception: Improve the speed, completeness, and accuracy of Intelligence, Surveillance, Reconnaissance (ISR) Processing, Exploitation, and Dissemination (PED). Shanahan says Project Maven's efforts are included here.

Predictive Maintenance (PMx): Provide computational tools to decision-makers to help them better forecast, diagnose, and manage maintenance issues, to increase availability, improve operational effectiveness, and ensure safety at a reduced cost.

Humanitarian Assistance/Disaster Relief (HA/DR): Reduce the time associated with search and discovery, resource allocation decisions, and executing rescue and relief operations to save lives and livelihoods during disaster operations. Here, the JAIC plans to apply lessons learned and reusable tools from Project Maven to field AI capabilities in support of federal responses to events such as wildfires and hurricanes, where the DoD plays a supporting role.

Cyber Sensemaking: Detect and deter advanced adversarial cyber actors who infiltrate and operate within the DoD Information Network (DoDIN), to increase DoDIN security, safeguard sensitive information, and allow warfighters and engineers to focus on strategic analysis and response.
Shanahan states, "Under the DoD CIO's authorities and as delineated in the JAIC establishment memo, JAIC will coordinate all DoD AI-related projects above $15 million annually." He further added, "It does mean that we will start to ensure, for example, that they begin to leverage common tools and libraries, manage data using best practices, reflect a common governance framework, adhere to rigorous testing and evaluation methodologies, share lessons learned, and comply with architectural principles and standards that enable scale."

To know more about this in detail, read Lieutenant General Jack Shanahan's complete testimony, or watch the entire hearing on 'Artificial Intelligence Initiatives within the Department of Defense'.

So, you want to learn artificial intelligence. Here's how you do it.
What can happen when artificial intelligence decides on your loan request
Mozilla partners with Ubisoft to Clever-Commit its code, an artificial intelligence assisted assistant

UN Global Working Group on Big Data publishes a handbook on privacy-preserving computation techniques

Bhagyashree R
03 Apr 2019
4 min read
On Monday, the UN Global Working Group (GWG) on Big Data published the UN Handbook on Privacy-Preserving Computation Techniques. The handbook covers emerging privacy-preserving computation techniques and outlines the key challenges in making these techniques more mainstream.

https://twitter.com/UNBigData/status/1112739047066255360

Motivation behind writing this handbook

In recent years we have seen numerous data breaches. Companies collect users' personal data without their consent to show them targeted content. Aggregated personal data can be misused to identify individuals and localize their whereabouts; individuals can be singled out with the help of just a small set of attributes. These large collections of data are also, very often, an easy target for cybercriminals.

Previously, when cyber threats were less advanced, the focus was mostly on protecting the privacy of data at rest, which led to the development of technologies like symmetric-key encryption. Later, when sharing data over unprotected networks became common, technologies like Transport Layer Security (TLS) came into the picture. Today, when attackers are capable of penetrating servers worldwide, it is important to be aware of technologies that ensure data privacy during computation. This handbook focuses on technologies that protect the privacy of data during and after computation, called privacy-preserving computation techniques.

Privacy Enhancing Technologies (PET) for statistics

The handbook lists five Privacy Enhancing Technologies for statistics that help reduce the risk of data leakage. "Reduce", because there is in fact no known technique that gives a complete solution to the privacy question.

#1 Secure multi-party computation

Secure multi-party computation, also known as secure computation, multi-party computation (MPC), or privacy-preserving computation, is a subfield of cryptography dealing with scenarios where multiple parties jointly compute a function. It aims to prevent any participant from learning anything about the inputs provided by the other parties. MPC is based on secret sharing, in which data is divided into shares that are random in themselves but, when combined, yield the original data. Each data input is split into two or more shares and distributed among the parties involved; when combined, these produce the correct output of the computed function. (A toy sketch of secret sharing follows at the end of this article.)

#2 Homomorphic encryption

Homomorphic encryption is an encryption technique that allows computations on encrypted data without the need for a decryption key. Its advantage is that it enables computation on encrypted data without revealing the input data or the result to the computing party. The result can only be decrypted by a specific party with access to the secret key, typically the owner of the input data.

#3 Differential Privacy (DP)

DP is a statistical technique that makes it possible to collect and share aggregate information about users while ensuring that the privacy of individual users is maintained. It was designed to address the pitfalls of previous attempts to define privacy, especially in the context of multiple releases and adversaries with access to side knowledge.

#4 Zero-knowledge proofs

Zero-knowledge proofs involve two parties: a prover and a verifier. The prover has to prove statements to the verifier based on secret information known only to the prover.
ZKP allows you to prove that you know a secret, or secrets, to the other party without actually revealing it. This is why the technology is called "zero knowledge": "zero" information about the secret is revealed, yet the verifier is convinced that the prover knows the secret in question.

#5 Trusted Execution Environments (TEEs)

This last technique differs from the other four in that it uses both hardware and software to protect data and code. It provides secure computation capability by combining special-purpose hardware with software built to use that hardware's features. In this technique, a process runs on the processor without its memory or execution state being exposed to any other process on the processor.

This free 50-page handbook is targeted at statisticians and data scientists, data curators and architects, IT specialists, and security and information assurance specialists. So go ahead and have a read: the UN Handbook on Privacy-Preserving Computation Techniques!

Google employees filed petition to remove anti-trans, anti-LGBTQ and anti-immigrant Kay Coles James from the AI council
Ahead of Indian elections, Facebook removes hundreds of assets spreading fake news and hate speech, but are they too late?
Researchers successfully trick Tesla autopilot into driving into opposing traffic via "small stickers as interference patches on the ground"
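
To make the secret-sharing idea behind technique #1 concrete, here is a toy additive scheme over a prime field. It is a sketch, not a production MPC protocol; the modulus and the three-party split are arbitrary choices for illustration.

```python
import secrets

P = 2**61 - 1  # an arbitrary prime modulus for this toy example

def share(secret, n=3):
    """Split a secret into n shares that sum to it modulo P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recombine all shares to recover the secret."""
    return sum(shares) % P

s = share(42)
assert reconstruct(s) == 42
print(s)  # each share on its own is uniformly random and reveals nothing
```

Technique #3 is similarly easy to sketch: one common construction, the Laplace mechanism, adds noise scaled by the query's sensitivity divided by the privacy budget epsilon. Again an illustration under those assumptions, not a vetted DP library.

```python
import random

def laplace_count(true_count, sensitivity=1.0, epsilon=0.5):
    # The difference of two i.i.d. exponential draws is Laplace-distributed.
    scale = sensitivity / epsilon
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

print(laplace_count(1000))  # a noisy, differentially private count
```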

Tensorflow 1.9 is now generally available

Savia Lobo
11 Jul 2018
3 min read
After the back-to-back releases of the TensorFlow 1.9 release candidates rc-0, rc-1, and rc-2, the final version of TensorFlow 1.9 is out and generally available. Key highlights of this version include support for gradient boosted trees estimators, new Keras layers to speed up GRU and LSTM implementations, and tfe.Network deprecation. It also includes improved functions for data loading, text processing, and pre-made estimators.

TensorFlow 1.9 major features and improvements

As in TensorFlow 1.9 rc-2, the new Keras-based Get Started page and programmer's guide have been updated, and tf.keras has been updated to the Keras 2.1.6 API. Worth trying are the newly added tf.keras.layers.CuDNNGRU and tf.keras.layers.CuDNNLSTM layers, which provide faster GRU and LSTM implementations; both are backed by cuDNN (the NVIDIA CUDA Deep Neural Network library). A brief usage sketch follows at the end of this article.

Gradient boosted trees estimators, a non-parametric statistical learning technique for classification and regression, are now supported by core feature columns and losses. Also, the Python interface for the TFLite Optimizing Converter has been expanded, and the command-line interface (AKA toco, tflite_convert) is once again included in the standard pip installation. The distributions.Bijector API in TF 1.9 also supports broadcasting for Bijectors with the new API changes.

TensorFlow 1.9 also includes improved data loading and text processing with tf.decode_compressed, tf.string_strip, and tf.strings.regex_full_match, and adds experimental support for new pre-made estimators: tf.contrib.estimator.BaselineEstimator, tf.contrib.estimator.RNNClassifier, and tf.contrib.estimator.RNNEstimator.

This version includes two breaking changes. First, to open an empty variable scope, replace variable_scope('', ...) with variable_scope(tf.get_variable_scope(), ...), which gets the current scope of the variable. Second, headers used for building custom ops have moved from site-packages/external to site-packages/tensorflow/include/external.

Some bug fixes and other changes include:

The tfe.Network class has been deprecated.
Layered variable names have changed when using tf.keras.layers with custom variable scopes, or when using tf.layers in a subclassed tf.keras.Model class.
Added the ability to pause recording operations for gradient computation via tf.GradientTape.stop_recording in eager execution, and updated its documentation and introductory notebooks.
Fixed an issue in which the TensorBoard Debugger Plugin could not handle a total source file size exceeding the gRPC message size limit (4 MB).
Added GCS Configuration Ops.
Added complex128 support to FFT, FFT2D, FFT3D, IFFT, IFFT2D, and IFFT3D.
Conv3D, Conv3DBackpropInput, and Conv3DBackpropFilter now support arbitrary.
tf.gradients() is now prevented from backpropagating through integer tensors.
LinearOperator[1D,2D,3D]Circulant added to tensorflow.linalg.

To know more about the other changes, visit the TensorFlow 1.9 release notes on GitHub.

Create a TensorFlow LSTM that writes stories [Tutorial]
Build and train an RNN chatbot using TensorFlow [Tutorial]
Use TensorFlow and NLP to detect duplicate Quora questions [Tutorial]
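
As a quick illustration of the cuDNN-backed layers and the variable-scope change described above, here is a minimal sketch. It assumes TensorFlow 1.9 with a CUDA-enabled GPU (CuDNNLSTM runs only on GPU; tf.keras.layers.LSTM is the CPU alternative); the model shape and hyperparameters are arbitrary.

```python
import tensorflow as tf

model = tf.keras.Sequential([
    # cuDNN-backed LSTM: a GPU-only, faster counterpart to tf.keras.layers.LSTM.
    tf.keras.layers.CuDNNLSTM(64, input_shape=(100, 32)),  # (timesteps, features)
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()

# First breaking change: an "empty" variable scope must now be opened by
# passing the current scope instead of an empty string.
with tf.variable_scope(tf.get_variable_scope()):
    pass
```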

Did unfettered growth kill Maker Media? Financial crisis leads company to shutdown Maker Faire and lay off all staff

Savia Lobo
10 Jun 2019
5 min read
Updated: On July 10, 2019, Dougherty announced the relaunch of Maker Faire and Maker Media under the new name "Make Community".

Maker Media Inc., the company behind Maker Faire, the popular event that hosts arts, science, and engineering DIY projects for children and their parents, has laid off all 22 of its employees and decided to shut down due to financial troubles.

The company started in January 2005 with MAKE, an American bimonthly magazine focused on do-it-yourself and DIWO (do-it-with-others) projects involving computers, electronics, robotics, metalworking, woodworking, and more, for both adults and children. In 2006, the company held its first Maker Faire event, which lets attendees wander amidst giant, inspiring art and engineering installations. Maker Faire now includes 200 owned and licensed events per year in over 40 countries.

The Maker movement gained momentum and popularity when MAKE magazine first started publishing 15 years ago. The movement emerged as a source of livelihood as individuals found ways to build small businesses around their creative activity. In 2014, the White House blog posted an article stating, "Maker Faires and similar events can inspire more people to become entrepreneurs and to pursue careers in design, advanced manufacturing, and the related fields of science, technology, engineering and mathematics (STEM)." With funding from the Department of Labor, "the AFL-CIO and Carnegie Mellon University are partnering with TechShop Pittsburgh to create an apprenticeship program for 21st-century manufacturing and encourage startups to manufacture domestically." Recently, researchers from Baylor University and the University of North Carolina highlighted, in a research paper, opportunities for studying the conditions under which the Maker movement might foster entrepreneurship outcomes.

Dale Dougherty, Maker Media Inc.'s founder and CEO, told TechCrunch, "I started this 15 years ago and it's always been a struggle as a business to make this work. Print publishing is not a great business for anybody, but it works…barely. Events are hard . . . there was a drop off in corporate sponsorship." "Microsoft and Autodesk failed to sponsor this year's flagship Bay Area Maker Faire," TechCrunch reports.

Dougherty added that the company is trying to keep the servers running: "I hope to be able to get control of the assets of the company and restart it. We're not necessarily going to do everything we did in the past but I'm committed to keeping the print magazine going and the Maker Faire licensing program."

In 2016, the company laid off 17 of its employees, followed by 8 more this March. "They've been paid their owed wages and PTO, but did not receive any severance or two-week notice," TechCrunch reports. These earlier layoffs may have hinted to staff at the financial crisis affecting the company.

Maker Media Inc. had raised $10 million from Obvious Ventures, Raine Ventures, and Floodgate. Dougherty says, "It started as a venture-backed company but we realized it wasn't a venture-backed opportunity. The company wasn't that interesting to its investors anymore. It was failing as a business but not as a mission. Should it be a non-profit or something like that? Some of our best successes, for instance, are in education."

The company has a huge public following for its products. Dougherty told TechCrunch that despite the rain, Maker Faire's big Bay Area event last week met its ticket sales target.
Also, about 1.45 million people attended its events in 2016. "MAKE: magazine had 125,000 paid subscribers and the company had racked up over one million YouTube subscribers. But high production costs in expensive cities and a proliferation of free DIY project content online had strained Maker Media," writes TechCrunch.

Dougherty told TechCrunch he has been overwhelmed by the support shown by the Maker community. As of now, licensed Maker Faire events around the world will proceed as planned. "Dougherty also says he's aware of Oculus co-founder Palmer Luckey's interest in funding the company, and a GoFundMe page started for it," TechCrunch reports.

Mike Senese, Executive Editor of MAKE magazine, tweeted, "Nothing but love and admiration for the team that I got to spend the last six years with, and the incredible community that made this amazing part of my life a reality."

https://twitter.com/donttrythis/status/1137374732733493248
https://twitter.com/xeni/status/1137395288262373376
https://twitter.com/chr1sa/status/1137518221232238592

Former Mythbusters co-host Adam Savage, a regular presence at the Maker Faire, told The Verge, "Make Media has created so many important new connections between people across the world. It showed the power from the act of creation. We are the better for its existence and I am sad. I also believe that something new will grow from what they built. The ground they laid is too fertile to lie fallow for long."

On July 10, 2019, Dougherty announced he'll relaunch Maker Faire and Maker Media under the new name "Make Community". The official launch of Make Community is expected next week. The company is also working on a new issue of Make Magazine, planned to be published quarterly, and the online archives of its do-it-yourself project guides will remain available. Dougherty told TechCrunch the relaunch comes "with the goal that we can get back up to speed as a business, and start generating revenue and a magazine again. This is where the community support needs to come in because I can't fund it for very long."

GitHub introduces 'Template repository' for easy boilerplate code management and distribution
12 Visual Studio Code extensions that Node.js developers will love [Sponsored by Microsoft]
Shoshana Zuboff on 21st century solutions for tackling the unique complexities of surveillance capitalism

The US Department of Commerce wants to regulate export of AI and related products

Prasad Ramesh
21 Nov 2018
4 min read
This Monday, the Department of Commerce's Bureau of Industry and Security (BIS) published a proposal to control the export of AI from the USA. The move leans towards restricting AI technology from leaving the country in order to protect the national security of the USA.

The areas that come under the licensing proposal

Artificial intelligence, as we've seen in recent years, has great potential for both good and harm. The Department of Commerce is not taking any chances with it. The proposal lists many areas of AI that could require a license to be exported to certain countries. Beyond computer vision and natural language processing, military-specific products like adaptive camouflage and faceprint surveillance are also listed. The major areas in the proposal are:

Biotechnology, including genomic and genetic engineering
Artificial intelligence (AI) and machine learning, including neural networks, computer vision, and natural language processing
Position, Navigation, and Timing (PNT) technology
Microprocessor technology, like stacked memory on chip
Advanced computing technology, like memory-centric logic
Data analytics technology, like visualization and analysis algorithms
Quantum information and sensing technology, like quantum computing, encryption, and sensing
Logistics technology, like mobile electric power
Additive manufacturing, like 3D printing
Robotics, like micro drones and molecular robotics
Brain-computer interfaces, like mind-machine interfaces
Hypersonics, like flight control algorithms
Advanced materials, like adaptive camouflage
Advanced surveillance technologies, like faceprint and voiceprint technologies

David Edelman, a former adviser to ex-US president Barack Obama, said: "This is intended to be a shot across the bow, directed specifically at Beijing, in an attempt to flex their muscles on just how broad these restrictions could be."

Countries that could be affected by regulation on the export of AI

To determine the level of export controls, the department will consider the potential end uses and end users of the technology. The list of countries is not yet clear, but countries to which exports are already restricted, like embargoed countries, will be considered. China could also be one of them.

What does this mean for companies?

If your organization creates products in 'emerging technologies', there will be restrictions on the countries you can export to, and on disclosure of the technology to foreign nationals in the United States. Depending on the criteria, non-US citizens might even need licenses to participate in research and development of such technology. This would restrict non-US citizens from participating in, and taking anything away from, say, an advanced AI research project.

If the new regulations go into effect, they will affect the security review of foreign investments across these areas. Once the list of technologies is finalized, many types of foreign investments will be subject to review, and deals could be halted or undone.

Public views on academic research

In addition to commercial applications and products, this regulation could also be bad news for academic research.

https://twitter.com/jordanbharrod/status/1065047269282627584
https://twitter.com/BryanAlexander/status/1064941028795400193

Even Google Home, Amazon Alexa, and the iRobot Roomba could be affected.

https://twitter.com/R_D/status/1064511113956655105

But it does not look like research papers will really be affected.
The document states that the Department does not intend to expand jurisdiction over 'fundamental research' for 'emerging technologies' that is intended to be published and is not currently subject to the EAR, as per § 734.8. But will this affect open-source technologies? We really hope not.

Deadline for comments is less than 30 days away

BIS has invited comments on the proposal for defining and categorizing emerging technologies, the impact of the controls on US technology leadership, and other topics. However, the short deadline of December 19, 2018 indicates a haste to implement licensing for the export of AI quickly. For more details, and to find out where you can submit your comments, read the proposal.

The US Air Force lays groundwork towards artificial general intelligence based on hierarchical model of intelligence

Google open sources BERT, an NLP pre-training technique

Teaching AI ethics – Trick or Treat?

Crypto-cash is missing from the wallet of dead cryptocurrency entrepreneur Gerald Cotten - find it, and you could get $100,000

Richard Gall
05 Mar 2019
3 min read
In theory, stealing cryptocurrency should be impossible. But a mystery has emerged that seems to throw all that into question, and even suggests a bigger, much stranger conspiracy. Gerald Cotten, the founder of cryptocurrency exchange QuadrigaCX, died in December in India. He was believed to have left $136 million USD worth of crypto-cash in 'cold wallets' on his own laptop, to which only he had access. However, investigators from EY, who have been working on closing QuadrigaCX following Cotten's death, were surprised to find that the wallets were empty. In fact, it's believed the crypto-cash had disappeared from them months before Cotten died.

A cryptocurrency mystery now involving the FBI

The only lead in this mystery is that the EY investigators have found other user accounts that appear to be linked to Gerald Cotten. There's a chance that Cotten used these to trade on his own exchange, but the nature of these accounts remains a little unclear. To add to the intrigue, Fortune reported yesterday that the FBI is working with Canada's Mounted Police to investigate the missing money. This information came from Jesse Powell, CEO of another cryptocurrency company, Kraken. Powell told Fortune that both the FBI and the Mounted Police have been in touch with him about the mystery surrounding QuadrigaCX. Powell has offered a reward of $100,000 to anyone who can locate the missing cryptocurrency funds.

So what actually happened to Gerald Cotten and his crypto-cash?

The story has many layers of complexity. There are rumors that Cotten faked his own death. For example, Cotten filed a will just 12 days before his death, leaving a significant amount of wealth and assets to his wife. And while sources from the hospital in India where Cotten is believed to have died say he died of cardiac arrest, as Fortune explains, "Cotten's body was handled by hotel staff after an embalmer refused to receive it" - something which is, at the very least, strange. It should be noted that there is no clear evidence that Cotten faked his own death - only missing pieces that encourage such rumors. A further subplot - which might or might not be useful in cracking this case - emerged late last week when Canada's Globe and Mail reported that QuadrigaCX's co-founder has a history of identity theft and using digital currencies to launder money.

Where could the money be?

There is, as you might expect, no shortage of theories about where the cash could be. A few days ago, it was suggested that it might be possible to locate Cotten's Ethereum funds: a blog post by James Edwards, the editor of cryptocurrency blog zerononcense, claimed that Ethereum linked to QuadrigaCX can be found in Bitfinex, Poloniex, and Jesse Powell's Kraken. "It appears that a significant amount of Ethereum (600,000+ ETH) was transferred to these exchanges as a means of 'storage' during the years that QuadrigaCX was in operation and offering Ethereum on their exchange," Edwards writes. Edwards is keen for his findings to be the starting point for a clearer line of inquiry, free from speculation and conspiracy. He wrote that he hoped it would be "a helpful addition to the QuadrigaCX narrative, rather than a conspiratorial piece that speculates on whether the exchange or its owners have been honest."

Top announcements from the TensorFlow Dev Summit 2019

Sugandha Lahoti
08 Mar 2019
5 min read
The two-day TensorFlow Dev Summit 2019 has just wrapped up, leaving in its wake major updates to the TensorFlow ecosystem. The headline announcement was the release of the first alpha version of the much-anticipated TensorFlow 2.0. Also announced were TensorFlow Lite 1.0, TensorFlow Federated, TensorFlow Privacy, and more.

TensorFlow Federated

In a Medium blog post, Alex Ingerman (Product Manager) and Krzys Ostrowski (Research Scientist) introduced the TensorFlow Federated (TFF) framework on the first day. This open source framework is useful for experimenting with machine learning and other computations on decentralized data. As the name suggests, it uses federated learning, a learning approach introduced by Google in 2017. The technique enables devices to collaboratively learn a shared prediction model while keeping all the training data on the device, eliminating the need to store the data in the cloud. The authors note that TFF is based on their experience developing federated learning technology at Google. TFF uses the Federated Learning API to express an ML model architecture and then train it across data provided by multiple developers, while keeping each developer's data separate and local. It also offers the Federated Core (FC) API, a set of lower-level primitives that enables the expression of a broad range of computations over a decentralized dataset. The authors conclude, "With TFF, we are excited to put a flexible, open framework for locally simulating decentralized computations into the hands of all TensorFlow users. You can try out TFF in your browser, with just a few clicks, by walking through the tutorials."
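To give a rough idea of what the Federated Learning API looks like, here is a minimal sketch in the style of the early TFF tutorials. Treat the details as assumptions: `sample_batch` (an example batch used to infer input/output types) and `federated_train_data` (a list of per-client datasets) are placeholders, and the exact function names may differ between TFF versions.

```python
import tensorflow as tf
import tensorflow_federated as tff

def model_fn():
    # Wrap a compiled Keras model for TFF. `sample_batch` is a placeholder
    # for an example batch used to infer the model's input/output types.
    keras_model = tf.keras.Sequential([
        tf.keras.layers.Dense(10, activation="softmax", input_shape=(784,)),
    ])
    keras_model.compile(
        loss="sparse_categorical_crossentropy",
        optimizer=tf.keras.optimizers.SGD(0.02),
        metrics=["accuracy"],
    )
    return tff.learning.from_compiled_keras_model(keras_model, sample_batch)

# Federated Averaging: each round, clients train locally and only model
# updates (never raw data) are aggregated on the server.
iterative_process = tff.learning.build_federated_averaging_process(model_fn)
state = iterative_process.initialize()
for round_num in range(10):
    # `federated_train_data` is a placeholder list of per-client datasets.
    state, metrics = iterative_process.next(state, federated_train_data)
    print(round_num, metrics)
```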
TensorFlow 2.0.0-alpha0

The event also saw the release of the first alpha version of the TensorFlow 2.0 framework, which ships with fewer APIs. First introduced last August by Martin Wicke, an engineer at Google, TensorFlow 2.0 is expected to bring:

- Easy model building with Keras and eager execution.
- Robust model deployment in production on any platform.
- Powerful experimentation for research.
- API simplification by reducing duplication and removing deprecated endpoints.

The first teaser, the 2.0.0-alpha0 release, comes with the following changes:

- API clean-up, including the removal of tf.app, tf.flags, and tf.logging in favor of absl-py.
- No more global variables and helper methods like tf.global_variables_initializer and tf.get_global_step.
- Functions, not sessions (tf.Session and session.run -> tf.function).
- Added support for TensorFlow Lite in TensorFlow 2.0.
- tf.contrib has been deprecated, and its functionality has been migrated to the core TensorFlow API, to tensorflow/addons, or removed entirely.
- Checkpoint breakage for RNNs and for optimizers.

Minor bug fixes have also been made to the Keras and Python APIs and tf.estimator. Read the full list of bug fixes in the changelog.
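The "functions, not sessions" change is the easiest to see in code. A minimal sketch of the 2.0 style, where eager execution is the default and tf.function traces a Python function into a graph-backed callable:

```python
import tensorflow as tf

# TF 1.x style (removed in 2.0): build a graph, open a tf.Session,
# then call session.run(...) with a feed_dict to get results.

# TF 2.0 style: ops run eagerly; tf.function compiles the function
# into a graph for performance, with no session required.
@tf.function
def add_and_scale(x, y):
    return 2.0 * (x + y)

result = add_and_scale(tf.constant(1.0), tf.constant(2.0))
print(result.numpy())  # 6.0
```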
TensorFlow Lite 1.0

The TF-Lite framework is designed to help developers deploy machine learning and artificial intelligence models on mobile and IoT devices. Lite was first introduced at the I/O developer conference in May 2017 and entered developer preview later that year. At the TensorFlow Dev Summit, the team announced a new version of the framework, TensorFlow Lite 1.0. According to a post by VentureBeat, improvements include selective registration, and quantization during and after training for faster, smaller models. The team behind TF-Lite 1.0 says that quantization has helped them achieve up to 4x compression of some models.
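Post-training quantization is exposed through the TFLite converter. A minimal sketch, assuming the TF 2.x converter API (the 1.x converter loaded models from files instead) and a trivial stand-in for a real trained model:

```python
import tensorflow as tf

# Stand-in for a real trained Keras model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation="softmax", input_shape=(784,)),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Post-training quantization: weights are stored as 8-bit integers,
# shrinking the model for mobile and IoT deployment.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```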
TensorFlow Privacy

Another interesting library released at the summit was TensorFlow Privacy. This Python-based open source library helps developers train machine-learning models with strong privacy guarantees. To achieve this, it draws on the principles of differential privacy, a technique that offers strong mathematical guarantees that models do not learn or remember details about any specific user when training on user data. TensorFlow Privacy includes implementations of TensorFlow optimizers for training machine learning models with differential privacy. For more information, you can go through the technical whitepaper describing its privacy mechanisms in more detail. The creators also note that "no expertise in privacy or its underlying mathematics should be required for using TensorFlow Privacy. Those using standard TensorFlow mechanisms should not have to change their model architectures, training procedures, or processes."
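In practice, adopting the library mostly means swapping in one of its differentially private optimizers. A sketch following the library's early tutorials; the module path, class name, and parameter values here are assumptions that may differ between versions:

```python
from tensorflow_privacy.privacy.optimizers.dp_optimizer import (
    DPGradientDescentGaussianOptimizer,
)

# DP-SGD: clip each example's gradient, then add calibrated Gaussian
# noise, so no single training example can dominate the model update.
optimizer = DPGradientDescentGaussianOptimizer(
    l2_norm_clip=1.0,       # max L2 norm of per-example gradients
    noise_multiplier=1.1,   # noise scale relative to the clipping norm
    num_microbatches=32,    # granularity at which clipping is applied
    learning_rate=0.15,
)
# Note: the training loss must be computed per example (no reduction
# across the batch) so the optimizer can clip per-microbatch gradients.
```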
TensorFlow Replicator

TF-Replicator, also released at the summit, is a software library that helps researchers deploy their TensorFlow models on GPUs and Cloud TPUs. The creators say this requires minimal effort and no previous experience with distributed systems. For multi-GPU computation, TF-Replicator relies on an "in-graph replication" pattern, where the computation for each device is replicated in the same TensorFlow graph. When TF-Replicator builds an in-graph replicated computation, it first builds the computation for each device independently, leaving placeholders where cross-device computation has been specified by the user. Once the sub-graphs for all devices have been built, TF-Replicator connects them by replacing the placeholders with the actual cross-device computation. For a more comprehensive description, you can go through the research paper.

These were the top announcements made at the TensorFlow Dev Summit 2019. You can go through the keynote and other videos of the announcements and tutorials on this YouTube playlist.

TensorFlow 2.0 to be released soon with eager execution, removal of redundant APIs, tf.function and more.

TensorFlow 2.0 is coming. Here's what we can expect.

Google introduces and open-sources Lingvo, a scalable TensorFlow framework for Sequence-to-Sequence Modeling

Top Research papers showcased at NIPS 2017 - Part 2

Sugandha Lahoti
07 Dec 2017
8 min read
Continuing from where we left off in our previous post, we are back with a quick roundup of top research papers on machine translation, predictive modelling, image-to-image translation, and recommendation systems from NIPS 2017.

Machine Translation

In layman's terms, machine translation (MT) is the process by which computer software translates a text from one natural language to another. This year at NIPS, a large number of presentations focused on innovative ways of improving translations. Here are our top picks.

Value Networks: Improving beam search for better translation

Microsoft has ventured into translation tasks with the introduction of value networks in their paper "Decoding with Value Networks for Neural Machine Translation". Their prediction network improves beam search, a weak point of neural machine translation (NMT). Inspired by the success of AlphaGo, the new method takes the source sentence x, the currently available decoding output y1, ..., yt-1, and a candidate word w at step t as inputs, and predicts the long-term value (e.g., the BLEU score) of the partial target sentence if it is completed by the NMT model. Experiments show that this approach significantly improves translation accuracy on several translation tasks.

CoVe: Contextualizing Word Vectors for Machine Translation

Salesforce researchers have used a new approach to contextualize word vectors in their paper "Learned in Translation: Contextualized Word Vectors". A wide variety of common NLP tasks, namely sentiment analysis, question classification, entailment, and question answering, use only supervised word and character vectors to contextualize word vectors. The paper instead uses a deep LSTM encoder from an attentional sequence-to-sequence model trained for machine translation. Their research shows that adding these context vectors (CoVe) improves performance over using only unsupervised word and character vectors. For fine-grained sentiment analysis and entailment too, CoVe lifts the baseline models to state-of-the-art performance.
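To make the value-network idea concrete, here is a toy sketch of how beam candidates could be rescored by blending the decoder's log-probability with a learned value estimate. `value_net` is a hypothetical stand-in for the trained network, and this illustrates only the scoring idea, not the paper's full algorithm:

```python
def rescore_beam(candidates, value_net, alpha=0.85):
    """candidates: list of (partial_translation, log_prob) pairs."""
    rescored = []
    for tokens, log_prob in candidates:
        # Predicted long-term quality (e.g., expected BLEU) of this prefix
        # if it were completed by the NMT model.
        value = value_net(tokens)
        rescored.append((tokens, alpha * log_prob + (1 - alpha) * value))
    # Keep the highest-scoring hypotheses, as in standard beam search.
    return sorted(rescored, key=lambda c: c[1], reverse=True)
```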
Predictive Modelling

A lot of the research showcased at NIPS focused on improving the predictive capabilities of neural networks. Here is a quick look at the top presentations.

Deep Ensembles for Predictive Uncertainty Estimation

Bayesian solutions are the most frequently used approach to quantifying predictive uncertainty in neural networks. However, they can be computationally intensive and require significant modifications to the training pipeline. DeepMind researchers have proposed an alternative to Bayesian NNs in their paper "Simple and scalable predictive uncertainty estimation using deep ensembles". Their method is easy to implement, readily parallelizable, requires very little hyperparameter tuning, and yields high-quality predictive uncertainty estimates.

VAIN: Scaling Multi-agent Predictive Modelling

Multi-agent predictive modelling predicts the behavior of large physical or social systems through the interactions between various agents. However, most approaches come at a prohibitive cost: Interaction Networks (INs), for instance, could not scale with the number of interactions in the system (typically quadratic or higher order in the number of agents). Facebook researchers have introduced VAIN, a simple attentional mechanism for multi-agent predictive modelling that scales linearly with the number of agents, achieving similar accuracy at a much lower cost. You can read more about the mechanism in their paper "VAIN: Attentional Multi-agent Predictive Modeling".

PredRNN: RNNs for Predictive Learning with ST-LSTM

Another paper, titled "PredRNN: Recurrent Neural Networks for Predictive Learning using Spatiotemporal LSTMs", showcased a new predictive recurrent neural network. The architecture is based on the idea that spatiotemporal predictive learning should memorize both spatial appearances and temporal variations in a unified memory pool. Its core is a new Spatiotemporal LSTM (ST-LSTM) unit that extracts and memorizes spatial and temporal representations simultaneously. Memory states are allowed to zigzag in two directions: across stacked RNN layers vertically and through all RNN states horizontally. PredRNN is a general framework that can be extended to other predictive learning tasks by integrating other architectures, and it achieved state-of-the-art prediction performance on three video prediction datasets.
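The deep-ensembles recipe described above is simple enough to sketch in a few lines: train several identical networks from different random initializations and read the spread of their predictions as uncertainty. A minimal toy version follows; note the paper additionally has each network predict a variance, trains with a proper scoring rule, and optionally uses adversarial training, all omitted here:

```python
import numpy as np
import tensorflow as tf

def make_model():
    # Small regression net; each ensemble member gets its own random init.
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(1,)),
        tf.keras.layers.Dense(1),
    ])

# Toy 1-D regression data.
x = np.random.uniform(-3, 3, size=(200, 1)).astype("float32")
y = (np.sin(x) + 0.1 * np.random.randn(200, 1)).astype("float32")

# Train M = 5 members independently; random init provides the diversity.
ensemble = []
for _ in range(5):
    model = make_model()
    model.compile(optimizer="adam", loss="mse")
    model.fit(x, y, epochs=50, verbose=0)
    ensemble.append(model)

x_test = np.linspace(-5, 5, 100, dtype="float32").reshape(-1, 1)
preds = np.stack([m.predict(x_test) for m in ensemble])
mean = preds.mean(axis=0)  # ensemble prediction
std = preds.std(axis=0)    # disagreement, used as an uncertainty proxy
```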
Recommendation Systems

New research was presented by Google and Microsoft to address the cold-start problem and to build robust and powerful recommendation systems.

Off-Policy Evaluation For Slate Recommendation

Microsoft researchers have studied and evaluated policies that recommend an ordered set of items in their paper "Off-Policy Evaluation For Slate Recommendation". General recommendation approaches require large amounts of logged data to evaluate whole-page metrics that depend on multiple recommended items, which happens when showing ranked lists; the possible ordered sets are called slates. The researchers have developed a technique for evaluating page-level metrics of such policies offline using logged past data, reducing the need for online A/B tests. Their method models the observed quality of the recommended set as an additive decomposition across items. It fits many realistic measures of quality and shows exponential savings in the amount of required data compared with other off-policy evaluation approaches.

Meta-Learning on Cold-Start Recommendations

Matrix factorization techniques for product recommendations, although efficient, suffer from serious cold-start problems. The cold-start problem concerns recommendations for users with little or no past history, i.e. new users. Providing recommendations to such users is difficult for recommendation models because their learning and predictive ability is limited. Google researchers have come up with a meta-learning strategy to address item cold-start when new items arrive continuously. Their paper "A Meta-Learning Perspective on Cold-Start Recommendations for Items" presents two deep neural network architectures that implement this strategy. The first architecture learns a linear classifier whose weights are determined by the item history, while the second learns a neural network whose biases are adjusted instead. On the real-world problem of Tweet recommendation, the proposed techniques significantly beat the MF baseline.
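As a toy illustration of the additive-decomposition assumption (not a reimplementation of the paper's estimator), one can model a slate's observed reward as a sum of learned per-(position, item) contributions fit by least squares on logged data:

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_positions, n_logs = 20, 3, 500

# Synthetic logged data: random slates whose rewards are truly additive
# per slot, plus noise. In practice these come from production logs.
true_phi = rng.normal(size=(n_positions, n_items))
slates = rng.integers(0, n_items, size=(n_logs, n_positions))
rewards = np.array([true_phi[np.arange(n_positions), s].sum() for s in slates])
rewards += 0.1 * rng.normal(size=n_logs)

# One-hot (position, item) features; fit contributions by least squares.
X = np.zeros((n_logs, n_positions * n_items))
for i, slate in enumerate(slates):
    for pos, item in enumerate(slate):
        X[i, pos * n_items + item] = 1.0
phi_hat, *_ = np.linalg.lstsq(X, rewards, rcond=None)

def predicted_quality(slate):
    """Offline estimate of a new slate's quality under the additive model."""
    return sum(phi_hat[pos * n_items + item] for pos, item in enumerate(slate))
```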
Image-to-Image Translation

NIPS 2017 also exhibited a new image-to-image translation system, a model to hide images within images, and the use of feature transforms to improve universal style transfer.

Unsupervised Image-to-Image Translation

Researchers at Nvidia have proposed an unsupervised image-to-image translation framework based on coupled GANs. Unsupervised image-to-image translation learns a joint distribution of images in different domains using images from the marginal distributions of the individual domains. However, there exists an infinite set of joint distributions consistent with the given marginals, so nothing can be inferred about the joint distribution from the marginals without additional assumptions. Their paper "Unsupervised Image-to-Image Translation Networks" uses a shared-latent-space assumption to address this issue. The method produces high-quality results on various challenging unsupervised translation tasks, such as street scene, animal, and face image translation.

Deep Steganography

Steganography is commonly used to unobtrusively hide a small message within the noisy regions of a larger image. Google researchers, in their paper "Hiding Images in Plain Sight: Deep Steganography", have demonstrated the successful application of deep learning to hiding images: placing a full-size color image within another image of the same size. Deep neural networks are trained to create the hiding and revealing processes, and are designed to work specifically as a pair. Instead of encoding the secret message within the least significant bits of the carrier image, their approach compresses and distributes the secret image's representation across all of the available bits. The system is trained on images drawn randomly from the ImageNet database and works well on natural images.

Improving universal style transfer on images

NIPS 2017 witnessed another paper aimed at improving universal style transfer, which is used for transferring arbitrary visual styles to content images. The paper "Universal Style Transfer via Feature Transforms" by Nvidia researchers highlights feature transforms as a simple yet effective way to tackle the limitations of existing feed-forward methods, mainly the inability to generalize to unseen styles and compromised visual quality, without training on any pre-defined styles. The paper embeds a pair of feature transforms, whitening and coloring, into an image reconstruction network. The whitening and coloring transforms directly match the feature covariance of the content image to that of a given style image. The algorithm generates high-quality stylized images, with comparisons to a number of recent methods.
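The whitening-coloring transform itself is plain linear algebra on feature maps. A minimal numpy sketch, assuming the features (e.g., from a VGG encoder, which the paper pairs with a trained decoder) have been flattened to shape (channels, pixels):

```python
import numpy as np

def wct(content_feat, style_feat, eps=1e-5):
    """Whitening-coloring transform on (C, N) feature matrices."""
    c_mean = content_feat.mean(axis=1, keepdims=True)
    s_mean = style_feat.mean(axis=1, keepdims=True)
    fc = content_feat - c_mean
    fs = style_feat - s_mean
    channels = fc.shape[0]

    # Whitening: remove the correlations from the content features.
    cov_c = fc @ fc.T / (fc.shape[1] - 1) + eps * np.eye(channels)
    dc, ec = np.linalg.eigh(cov_c)
    whitened = ec @ np.diag(dc ** -0.5) @ ec.T @ fc

    # Coloring: impose the style covariance, then restore the style mean.
    cov_s = fs @ fs.T / (fs.shape[1] - 1) + eps * np.eye(channels)
    ds, es = np.linalg.eigh(cov_s)
    return es @ np.diag(ds ** 0.5) @ es.T @ whitened + s_mean
```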
Key Takeaways from NIPS 2017

The research papers covered in this and the previous post highlight that most organizations are at the forefront of machine learning and are actively exploring virtually all aspects of the field. Deep learning practices were also in trend: the conference focused on the current state and recent advances in deep learning, and a lot of talks and presentations were about industry-ready neural networks, suggesting a fast transition from research to industry. Researchers are also focusing on language understanding, speech recognition, translation, visual processing, and prediction, with many of these techniques relying on GANs as the backend. For live content coverage, you can visit NIPS' Facebook page.

EU approves labour protection laws for ‘Whistleblowers’ and ‘Gig economy’ workers with implications for tech companies

Savia Lobo
17 Apr 2019
5 min read
The European Union recently approved two new labour protection laws, this time for two less-hyped groups: whistleblowers, and those earning their income via the 'gig economy'. Under the first law, whistleblowers receive increased protection through landmark legislation aimed at encouraging reports of wrongdoing. For those working 'on-demand' jobs, collectively termed the gig economy, the second law sets minimum rights and demands increased transparency. Let's have a brief look at each of the laws newly approved by the EU.

Whistleblowers' shield against retaliation

On Tuesday, the EU parliament approved a new law safeguarding whistleblowers from any retaliation within an organization. The law protects whistleblowers against dismissal, demotion, and other forms of punishment. "The law now needs to be approved by EU ministers. Member states will then have two years to comply with the rules," the EU proposal states. Transparency International calls this "pathbreaking legislation", which will also give employees "greater legal certainty around their rights and obligations". The new law creates safe channels that allow whistleblowers to report a breach of EU law both within an organization and to public authorities. "It is the first time whistleblowers have been given EU-wide protection. The law was approved by 591 votes, with 29 votes against and 33 abstentions," the BBC reports. In cases where no appropriate action is taken by the organization's authorities even after reporting, whistleblowers are allowed to make a public disclosure of the wrongdoing by communicating with the media.

European Commission Vice President Frans Timmermans said, "Potential whistleblowers are often discouraged from reporting their concerns or suspicions for fear of retaliation. We should protect whistleblowers from being punished, sacked, demoted or sued in court for doing the right thing for society." He further added, "This will help tackle fraud, corruption, corporate tax avoidance and damage to people's health and the environment."

"The European Commission says just 10 members - France, Hungary, Ireland, Italy, Lithuania, Malta, the Netherlands, Slovakia, Sweden, and the UK - had a 'comprehensive law' protecting whistleblowers," the BBC reports. "Attempts by some states to water down the reform earlier this year were blocked at an early stage of the talks, with Luxembourg, Ireland, and Hungary seeking to have tax matters excluded. However, a coalition of EU states, including Germany, France, and Italy, eventually prevailed in keeping tax revelations within the proposal," Reuters reports. "If member states fail to properly implement the law, the European Commission can take formal disciplinary steps against the country and could ultimately refer the case to the European Court of Justice," the BBC reports. To know more about this new law for whistleblowers, read the official proposal.

EU grants protection to workers in the gig economy (casual or short-term employment)

In a vote on Tuesday, the Members of the European Parliament (MEPs) approved minimum rights for workers with on-demand, voucher-based, or platform jobs, such as Uber or Deliveroo. Genuinely self-employed workers, however, are excluded from the new rules. "The law states that every person who has an employment contract or employment relationship as defined by law, collective agreements or practice in force in each member state should be covered by these new rights," the BBC reports.
"This would mean that workers in casual or short-term employment, on-demand workers, intermittent workers, voucher-based workers, platform workers, as well as paid trainees and apprentices, deserve a set of minimum rights, as long as they meet these criteria and pass the threshold of working 3 hours per week and 12 hours per 4 weeks on average," according to the EU's official website. To this end, all workers need to be informed of their conditions from day one as a general principle, but no later than seven days where justified. The specific set of rights covering the new forms of employment includes:

- Workers with on-demand contracts or similar forms of employment should benefit from a minimum level of predictability, such as predetermined reference hours and reference days.
- They should also be able to refuse, without consequences, an assignment outside predetermined hours, or be compensated if the assignment is not cancelled in time.
- Member states shall adopt measures to prevent abusive practices, such as limits on the use and duration of such contracts.
- The employer should not prohibit, penalize, or hinder workers from taking jobs with other companies if this falls outside the work schedule established with that employer.

Enrique Calvet Chambon, the MEP responsible for seeing the law through, said, "This directive is the first big step towards the implementation of the European Pillar of Social Rights, affecting all EU workers. All workers who have been in limbo will now be granted minimum rights thanks to this directive, and the European Court of Justice rulings; from now on no employer will be able to abuse the flexibility in the labour market." To know more about this new law on the gig economy, visit the EU's official website.

19 nations including The UK and Germany give thumbs-up to EU's Copyright Directive

Facebook discussions with the EU resulted in changes of its terms and services for users

The EU commission introduces guidelines for achieving a 'Trustworthy AI'