
How-To Tutorials - Data

1210 Articles
Australia’s ACCC publishes a preliminary report recommending Google, Facebook be regulated and monitored for discriminatory and anti-competitive behavior

Sugandha Lahoti
10 Dec 2018
5 min read
The Australian Competition and Consumer Commission (ACCC) has today published a 378-page preliminary report to make the Australian government and the public aware of the impact of social media and digital platforms on targeted advertising and user data collection. The report also highlights the ACCC's concerns regarding the "market power held by these key platforms, including their impact on Australian businesses and, in particular, on the ability of media businesses to monetize their content."

The report follows an investigation launched late last year, when Treasurer Scott Morrison MP asked the ACCC to hold an inquiry into how online search engines, social media, and digital platforms impact media and advertising services markets. The inquiry demanded answers on the range and reliability of news available via Google and Facebook. The ACCC also expressed concerns about the large amount and variety of data which Google and Facebook collect on Australian consumers, data which users are not always actively willing to provide.

Why did the ACCC choose Google and Facebook?

Google and Facebook are the two largest digital platforms in Australia and the most visited websites in the country. They also have similar business models: both rely on consumer attention and data to sell advertising opportunities, and both hold substantial market power. Per the report, each month approximately 19 million Australians use Google Search, 17 million access Facebook, 17 million watch YouTube (which is owned by Google), and 11 million access Instagram (which is owned by Facebook). This widespread and frequent use of Google and Facebook means that these platforms occupy a key position for businesses looking to reach Australian consumers, including advertisers and news media businesses.

Recommendations made by the ACCC

The report contains 11 preliminary recommendations to these digital platforms and eight areas for further analysis. Per the report:

#1 The ACCC wants to amend the merger law to make it clearer that the following are relevant factors: the likelihood that an acquisition would result in the removal of a potential competitor, and the amount and nature of data which the acquirer would likely have access to as a result of the acquisition.

#2 The ACCC wants Facebook and Google to provide advance notice of the acquisition of any business with activities in Australia and to provide sufficient time to enable a thorough review of the likely competitive effects of the proposed acquisition.

#3 The ACCC wants suppliers of operating systems for mobile devices, computers, and tablets to provide consumers with options for internet browsers and search engines (rather than providing a default).

#4 The ACCC wants a regulatory authority to monitor, investigate, and report on whether digital platforms are engaging in discriminatory conduct by favoring their own business interests above those of advertisers or potentially competing businesses.

#5 The regulatory authority should also monitor, investigate, and report on the ranking of news and journalistic content by digital platforms and the provision of referral services to news media businesses.

#6 The ACCC wants the government to conduct a separate, independent review to design a regulatory framework covering the conduct of all news and journalistic content entities in Australia. This framework should focus on underlying principles, the extent of regulation, content rules, and enforcement.
#7 Per the ACCC, the ACMA (Australian Communications and Media Authority) should adopt a mandatory standard regarding take-down procedures for copyright-infringing content.

#8 The ACCC proposes amendments to the Privacy Act. These include:
- Strengthen notification requirements
- Introduce an independent third-party certification scheme
- Strengthen consent requirements
- Enable the erasure of personal information
- Increase the penalties for breach of the Privacy Act
- Introduce direct rights of action for individuals
- Expand resourcing for the OAIC (Office of the Australian Information Commissioner) to support further enforcement activities

#9 The ACCC wants the OAIC to develop a code of practice under Part IIIB of the Privacy Act to provide Australians with greater transparency and control over how their personal information is collected, used, and disclosed by digital platforms.

#10 Per the ACCC, the Australian government should adopt the Australian Law Reform Commission's recommendation to introduce a statutory cause of action for serious invasions of privacy.

#11 Per the ACCC, unfair contract terms should be illegal (not just voidable) under the Australian Consumer Law.

"The inquiry has also uncovered some concerns that certain digital platforms have breached competition or consumer laws, and the ACCC is currently investigating five such allegations to determine if enforcement action is warranted," ACCC Chair Rod Sims said. The ACCC is also seeking feedback on its preliminary recommendations and the eight proposed areas for further analysis and assessment. Feedback can be shared by email to platforminquiry@accc.gov.au by 15 February 2019.

Related reading:
- AI Now Institute releases Current State of AI 2018 Report
- Australia passes a rushed anti-encryption bill "to make Australians safe"; experts find "dangerous loopholes" that compromise online privacy and safety
- Australia's Facial recognition and identity system can have "chilling effect on freedoms of political discussion, the right to protest and the right to dissent": The Guardian report

Accountability and algorithmic bias: Why diversity and inclusion matters [NeurIPS Invited Talk]

Sugandha Lahoti
08 Dec 2018
4 min read
One of the most awaited machine learning conferences, NeurIPS 2018, is happening throughout this week in Montreal, Canada. It features a series of tutorials, invited talks, product releases, demonstrations, presentations, and announcements related to machine learning research. For the first time, NeurIPS invited a diversity and inclusion (D&I) speaker, Laura Gomez, to talk about the lack of diversity in the tech industry, which leads to biased algorithms, faulty products, and unethical tech.

Laura Gomez is the CEO of Atipica, which helps tech companies find and hire diverse candidates. Being a Latina woman herself, she faced oppression when seeking capital and funds for her startup while trying to establish herself in Silicon Valley. This experience led to her realization that there is a strong need to talk about why diversity and inclusion matter. Her efforts were not in vain: recently, she raised $2M in seed funding led by True Ventures. "At Atipica, we think of Inclusive AI in terms of data science, algorithms, and their ethical implications. This way you can rest assure our models are not replicating the biases of humans that hinder diversity while getting patent-pending aggregate demographic insights of your talent pool," reads the website.

In her talk, she recounted her journey as a Latina woman in the tech industry. She reminisced about being the only one like her to get an internship with Hewlett Packard, and about the fact that she hated it. Nevertheless, she decided to stay, determined not to let the industry turn her into a victim. She believes she made the right choice going forward with tech; now, years later, diversity is dominating the conversation in the industry. After HP, she worked at Twitter and YouTube, helping them translate and localize their applications for a global audience. She is also a founding advisor of Project Include, a non-profit organization run by women that uses data and advocacy to accelerate diversity and inclusion solutions in the tech industry.

She opened her talk by agreeing with a quote from Safiya Noble, who wrote Algorithms of Oppression: "Artificial Intelligence will become a major human rights issue in the twenty-first century." She believes we need to talk about difficult questions, such as where AI is heading and where we should hold ourselves and each other accountable. She urges people to evaluate their role in AI, bias, and inclusion, to find the empathy and value in difficult conversations, and to go beyond their immediate surroundings to consider the broader consequences. It is important to build accountable AI in a way that allows humanity to triumph.

She touched upon discriminatory moves by tech giants like Amazon and Google. Amazon recently killed off its AI recruitment tool because it couldn't stop discriminating against women. She also criticized Facebook's Myanmar operation, where Facebook data scientists were building algorithms to detect hate speech but didn't understand the importance of localization and language, nor did they internationalize their own algorithms to be inclusive of all countries. She also talked about algorithmic bias in library discovery systems, as well as how even 'black robots' are being impacted by racism. She also condemned Palmer Luckey's work helping U.S. immigration agents at the border wall identify Latin American refugees.
Finally, she urged people to take three major steps to progress towards being inclusive:
- Be an ally
- Think of inclusion as an approach, not a feature
- Work towards an ethical AI

Head over to the NeurIPS Facebook page for the entire talk and other sessions happening at the conference this week.

Related reading:
- NeurIPS 2018: Deep learning experts discuss how to build adversarially robust machine learning models
- NeurIPS 2018 paper: DeepMind researchers explore autoregressive discrete autoencoders (ADAs) to model music in raw audio at scale
- NeurIPS 2018: A quick look at data visualization for Machine learning by Google PAIR researchers [Tutorial]

AI Now Institute releases Current State of AI 2018 Report

Natasha Mathur
07 Dec 2018
7 min read
The AI Now Institute at New York University released its third annual report on the current state of AI yesterday. The 2018 AI Now Report focuses on themes such as industry AI scandals and rising inequality. It also assesses the gaps between AI ethics and meaningful accountability, and looks at the role of organizing and regulation in AI. Let's have a look at the key recommendations from the report.

Key Takeaways

Need for a sector-specific approach to AI governance and regulation

This year's report reflects on the need for stronger AI regulation by expanding the powers of sector-specific agencies (such as the United States Federal Aviation Administration and the National Highway Traffic Safety Administration) to audit and monitor these technologies by domain. Development of AI systems is rising, and there aren't adequate governance, oversight, or accountability regimes to make sure that these systems abide by the ethics of AI. The report states that general AI standards and certification models can't meet the expertise requirements of different sectors such as health, education, and welfare, which is a key requirement for enhanced regulation. "We need a sector-specific approach that does not prioritize the technology but focuses on its application within a given domain," reads the report.

Need for tighter regulation of facial recognition AI systems

Concerns are growing over facial recognition technology, which is causing privacy infringement, mass surveillance, racial discrimination, and other issues. As per the report, stringent regulation is needed that demands stronger oversight, public transparency, and clear limitations. Moreover, providing public notice shouldn't be the only criterion for companies to apply these technologies; there needs to be a "high threshold" for consent, keeping in mind the risks and dangers of mass surveillance technologies. The report highlights how "affect recognition", a subclass of facial recognition that claims to be capable of detecting personality, inner feelings, mental health, and more based on images or video of faces, needs special attention, as it is unregulated. It states that these claims do not have sufficient evidence behind them and are being abused in unethical and irresponsible ways. "Linking affect recognition to hiring, access to insurance, education, and policing creates deeply concerning risks, at both an individual and societal level," reads the report. It seems like progress is being made on this front: just yesterday, Microsoft recommended that tech companies publish documents explaining the technology's capabilities, limitations, and consequences in case their facial recognition systems get used in public.

New approaches needed for governance in AI

The report points out that internal governance structures at technology companies are not able to implement accountability effectively for AI systems. "Government regulation is an important component, but leading companies in the AI industry also need internal accountability structures that go beyond ethics guidelines," reads the report. This includes rank-and-file employee representation on the board of directors and external ethics advisory boards, along with independent monitoring and transparency efforts.
Need to waive trade secrecy and other legal claims

The report states that vendors and developers creating AI and automated decision systems for use in government should agree to waive any trade secrecy or other legal claims that would restrict the public from fully auditing and understanding their software. As per the report, corporate secrecy laws are a barrier, as they make it hard to analyze bias, contest decisions, or remedy errors. Companies wanting to use these technologies in the public sector should demand that vendors waive these claims before coming to an agreement.

Companies should protect workers who raise ethical concerns

It has become common for employees to organize and resist technology to promote accountability and ethical decision making. It is the responsibility of tech companies to protect their workers' ability to organize, whistleblow, and make ethical choices regarding their projects. "This should include clear policies accommodating and protecting conscientious objectors, ensuring workers the right to know what they are working on, and the ability to abstain from such work without retaliation or retribution," reads the report.

Need for more truth in advertising of AI products

The report highlights that the hype around AI has led to a gap between marketing promises and actual product performance, causing risks to both individuals and commercial customers. As per the report, AI vendors should be held to high standards when making promises, especially when there isn't enough information on the consequences of, or the scientific evidence behind, those promises.

Need to address exclusion and discrimination within the workplace

The report states that technology companies and the AI field focus on the "pipeline model," which aims to train and hire more employees. However, it is important for tech companies to address deeper issues, such as harassment on the basis of gender or race, within workplaces. They should also examine the relationship between exclusionary cultures and the products they build, so as to build tools that do not perpetuate bias and discrimination.

Detailed account of the "full stack supply chain"

As per the report, there is a need to better understand the parts of an AI system and the full supply chain on which it relies for better accountability. "This means it is important to account for the origins and use of training data, test data, models, the application program interfaces (APIs), and other components over a product lifecycle," reads the paper. This process is called accounting for the 'full stack supply chain' of AI systems, and it is necessary for a more responsible form of auditing. The full stack supply chain takes into consideration the true environmental and labor costs of AI systems. This includes energy use, labor for content moderation and training data creation, and reliance on workers for the maintenance of AI systems.

More funding and support for litigation and labor organizing on AI issues

The report states that there is a need for increased support for legal redress and civic participation. This includes offering support to public advocates representing people who have been excluded from social services because of algorithmic decision making, and to civil society organizations and labor organizers who support groups facing the dangers of job loss and exploitation.
Need for university AI programs to expand beyond the computer science discipline

The report states that university AI programs and syllabi need to expand their disciplinary orientation. This means including social and humanistic disciplines within university AI programs. For AI efforts to truly make a social impact, it is necessary to train the faculty and students within computer science departments to research the social world. Some have already started to push in this direction; for instance, Mitchell Baker, chairwoman and co-founder of Mozilla, has talked about the need for the tech industry to expand beyond technical skills by bringing in the humanities. "Expanding the disciplinary orientation of AI research will ensure deeper attention to social contexts, and more focus on potential hazards when these systems are applied to human populations," reads the paper.

For more coverage, check out the official AI Now 2018 report.

Related reading:
- Unity introduces guiding Principles for ethical AI to promote responsible use of AI
- Teaching AI ethics – Trick or Treat?
- Sex robots, artificial intelligence, and ethics: How desire shapes and is shaped by algorithms

How NeurIPS 2018 is taking on its diversity and inclusion challenges

Sugandha Lahoti
06 Dec 2018
3 min read
This year the Neural Information Processing Systems conference is asking serious questions about how to improve diversity, equity, and inclusion at NeurIPS. "Our goal is to make the conference as welcoming as possible to all," said the heads of the new diversity and inclusion chairs introduced this year.

https://twitter.com/InclusionInML/status/1069987079285809152

The Diversity and Inclusion chairs are headed by Hal Daume III, a professor at the University of Maryland and a researcher on machine learning and fairness at Microsoft Research, and Katherine Heller, an assistant professor at Duke University and research scientist at Google Brain. They opened the talk by acknowledging the respective privilege they carry as a white man and a white woman, and the fact that they don't reflect the diversity of experience in the conference room, much less the world.

They talked about three major goals with respect to inclusion at NeurIPS:
- Learn about the challenges that their colleagues have faced.
- Support those doing the hard work of amplifying the voices of those who have been historically excluded.
- Begin structural changes that will positively impact the community over the coming years.

They urged attendees to start building an environment where everyone can do their best work. They want people to:
- see other perspectives
- remember the feeling of being an outsider
- listen, do research, and learn
- make an effort and speak up

Concrete actions taken by the NeurIPS diversity and inclusion chairs

This year they assembled an advisory board and ran a demographics and inclusion survey. They also conducted events such as WIML (Women in Machine Learning), Black in AI, LatinX in AI, and Queer in AI. They established childcare subsidies and other activities in collaboration with Google and DeepMind to support families attending NeurIPS, offering a stipend of up to $100 USD per day. They revised the Code of Conduct to provide an experience for all participants that is free from harassment, bullying, discrimination, and retaliation. They added inclusion tips on Twitter, offering advice related to D&I efforts. The conference also offers pronoun stickers (they/them only), first-time attendee stickers, and information for participant needs. They have also made significant infrastructure improvements for visa handling: they held discussions with people handling visas on location, sent out early invitation letters for visas, and are choosing future locations with visa processing in mind.

In the future, they are looking to establish a legal team for details around the Code of Conduct. Further, they are looking to make institutional structural changes that support the community and to improve coordination around affinity groups and workshops.

For the first time, NeurIPS also invited a diversity and inclusion (D&I) speaker, Laura Gomez, to talk about the lack of diversity in the tech industry, which leads to biased algorithms, faulty products, and unethical tech.

Head over to the NeurIPS website for interesting tutorials, invited talks, product releases, demonstrations, presentations, and announcements.

Related reading:
- NeurIPS 2018: Deep learning experts discuss how to build adversarially robust machine learning models
- NeurIPS 2018 paper: DeepMind researchers explore autoregressive discrete autoencoders (ADAs) to model music in raw audio at scale
- NeurIPS 2018: A quick look at data visualization for Machine learning by Google PAIR researchers [Tutorial]

Google employees join hands with Amnesty International urging Google to drop Project Dragonfly

Sugandha Lahoti
28 Nov 2018
3 min read
Yesterday, Google employees signed a petition protesting Google's infamous Project Dragonfly. "We are Google employees and we join Amnesty International in calling on Google to cancel project Dragonfly," they wrote in a post on Medium. The petition also marks the first time over 300 Google employees (at the time of writing) have used their actual names in a public document.

Project Dragonfly is the secretive search engine that Google is allegedly developing to comply with Chinese censorship rules. It has been on the receiving end of constant backlash from human rights organizations and investigative reporters since it was revealed earlier this year. On Monday, it also faced critique from the human rights organization Amnesty International. Amnesty launched a petition opposing the project and coordinated protests outside Google offices around the world, including in San Francisco, Berlin, Toronto, and London.

https://twitter.com/amnesty/status/1067488964167327744

Yesterday, Google employees joined Amnesty and wrote an open letter to the firm: "We are protesting against Google's effort to create a censored search engine for the Chinese market that enables state surveillance. Our opposition to Dragonfly is not about China: we object to technologies that aid the powerful in oppressing the vulnerable, wherever they may be. Dragonfly in China would establish a dangerous precedent at a volatile political moment, one that would make it harder for Google to deny other countries similar concessions. Dragonfly would also enable censorship and government-directed disinformation, and destabilize the ground truth on which popular deliberation and dissent rely."

Employees have expressed their disdain over Google's decision, calling it a money-minting business. They have also highlighted Google's previous disappointments, including Project Maven, Dragonfly, and Google's support for abusers, and believe that "Google is no longer willing to place its values above its profits. This is why we're taking a stand."

A Google spokesperson redirected to the company's previous response on the topic: "We've been investing for many years to help Chinese users, from developing Android, through mobile apps such as Google Translate and Files Go, and our developer tools. But our work on search has been exploratory, and we are not close to launching a search product in China."

Twitterati have openly sided with Google employees in this matter.

https://twitter.com/Davidramli/status/1067582476262957057
https://twitter.com/shabirgilkar/status/1067642235724972032
https://twitter.com/nrambeck/status/1067517570276868097
https://twitter.com/kuminaidoo/status/1067468708291985408

Related reading:
- OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?
- Amnesty International takes on Google over Chinese censored search engine, Project Dragonfly
- Google's prototype Chinese search engine 'Dragonfly' reportedly links searches to phone numbers

Recode Decode #GoogleWalkout interview shows why data and evidence don’t always lead to right decisions in even the world’s most data-driven company

Natasha Mathur
23 Nov 2018
10 min read
Earlier this month, 20,000 Google employees, along with temps, vendors, and contractors, walked out of their respective Google offices to protest the discrimination, racism, and sexual harassment they encountered at Google's workplace. As part of the walkout, Google employees laid out five demands urging Google to bring about structural changes within the workplace.

In the latest episode of Recode Decode with Kara Swisher, released yesterday, six of the Google walkout organizers, namely Erica Anderson, Claire Stapleton, Meredith Whittaker, Stephanie Parker, Cecilia O'Neil-Hart, and Amr Gaber, spoke out about Google's dismissive approach towards the five demands laid out by Google employees. A day after the walkout, Google addressed these demands in a note written by Sundar Pichai, in which he admitted that they have "not always gotten everything right in the past" and are "sincerely sorry". Pichai also mentioned: "It's clear that to live up to the high bar we set for Google, we need to make some changes. Going forward, we will provide more transparency into how you raise concerns and how we handle them."

The 'walkout for real change' was a response to the New York Times report, published last month, that exposed how Google has protected senior executives (Andy Rubin, the Android founder, being one of them) who had been accused of sexual misconduct in the recent past. We'll now have a look at the major highlights from the podcast.

Key Takeaways

The podcast covers how the organizers formulated their demands, the rights of contractors at Google, the post-walkout town hall meeting, and what steps Google employees will take next.

How the walkout mobilized collective action and the formulation of demands

As per the Google employees, collating demands was a collective effort from the very beginning. They were inspired by stories of sexual harassment at Google that were circulating in an internal email chain. This prompted the organizers of the walkout to send out an email to a large group of women stating that they needed to do something about it, to which many employees suggested that they put out their demands. A doc was prepared in Google Doc Live that listed all the demands suggested by fellow Googlers. "It was just this truly collective action, living, moving in a Google Document that we were all watching and participating in," said Cecilia O'Neil-Hart, a marketer at YouTube. Cecilia also pointed out that the demands being collected were not new and represented the voices of many groups at Google. "It was just completely a process of defining what we wanted in solidarity with each other. I think it showed me the power of collective action, writing the demands quite literally as a collective," said Cecilia.

Rights of contractors

One of the demands laid out by the Google employees as part of the walkout states: "commitment to ending pay and opportunity inequity for all levels of the organization". They expected a change that applies not just to full-time employees but also to contract workers and subcontract workers, who work at Google with rights that are restricted and different from those of full-time employees.
"We have contractors that manage teams of upwards of 10, 20, even more, other people but left in this second-class state where they don't have healthcare benefits, they don't have paid sick leave and they definitely don't get access to the same well-being resources: counseling, professional development, any of that," adds Stephanie Parker, a policy specialist on Trust and Safety at YouTube.

Other examples of discrimination against contractors at Google include the shooting at YouTube headquarters in April, where contract workers (security guards, cafeteria workers, etc.) were excluded from the post-shooting town hall meeting conducted by Susan Wojcicki, CEO of YouTube. Also, while the shooting was taking place, all employees except the contractors were receiving security updates via text. Similarly, contractors were not allowed in the town hall meeting conducted six days after the walkout, although the demands applied to them just as much as to full-time employees. There is also systemic racism in hiring and promotion for certain job ladders like engineering, versus other job ladders, versus contract work.

Parker mentioned that by including contractors in the five demands, they wanted to bring to everyone's attention that despite Google striving to be the company with the best workplace and the best benefits, it is quite far off from leading in that space. "The solution is to convert them to full-time or to treat them fairly with respect. Not to throw up our hands and say, 'Oh well,'" said Parker.

Post-walkout town hall meeting

Six days after the walkout, a mail regarding the town hall meeting was sent over to the employees, which Google said was accidentally "leaked". Stapleton, a marketing manager at YouTube, says that "the town hall was really tough to watch" and that the Google executives "did not ever address, acknowledge, the list of demands nor did they adequately provide solutions to all the five. They did drop forced arbitration, but for sexual harassment only, not discrimination, which was a key omission."

As per the employees, Google seemed to use the same old methods to get the situation under control. Google said it would focus on committing to OKRs (Objectives and Key Results), i.e., the main goals for the company as a whole. Moreover, the executives tried to play down the other core concerns, such as discrimination (beyond the sexual kind), racism, and the abuse of power, while focusing on only one kind of behavior, i.e., sexual assault. The organizers mentioned how Google refused to address any issues surrounding the TVCs (temps, vendors, and contractors), despite being asked about it in the town hall. Also, Google did not acknowledge that the HR processes and systems within the company are not working. Instead, Google decided to conduct a survey to gauge how people really feel about the HR teams within the workplace. "They heard loud and clear from 20,000 of us that these processes and reporting lines that are in place are set up the wrong way and need to be redesigned so that we normal employees have more of a say and more of a look into the decision-making processes, and they didn't even acknowledge that as a valid sentiment or idea," said Parker. All in all, there wasn't much "leadership", and there wasn't an understanding that "accountability was necessary".

Employees want their demands to be met

Employees want an employee representative on the board to speak on behalf of all employees.
They want accountability systems in place, and they want Google to begin analyzing the cultures within companies that run on racism, discrimination, abuse of power, and sexism, the kind that excludes many from power and accrues resources to only a few. The employees acknowledge that Google is continuing to discuss the issue, but say they will have to keep pushing the conversation forward every step of the way. "I think we need to not be afraid to say the real words. I want to hear our execs say the real words like 'discrimination,' which was erased from their response to the demands. Like 'systemic racism'. I want to hear those real words," said Cecilia. Employees also want demand no. 2, i.e., ending pay inequity, to be specifically addressed by Google, as all they keep getting in response is that Google is "looking into it" and "studying" it. "I think that what they have to do is embrace the tough critique that they've gotten and try to understand where we're coming from and make these changes, and make them in collaboration with us, which has not happened," said Stapleton.

Employees continue to be cautiously hopeful

Employees believe that Google has incredible people at the company. Thousands of people came together and worked on their vision for the world, on something that really mattered. "You know, we've called this the 'Walkout for Real Change' for a reason. Even if all of our optimism comes true and the best outcome and our demands are met, real change happens over time and we're going to hold people accountable to that real change actually going down, and hold us accountable for demanding it also, because we've got to get the rest of the demands met," says Cecilia.

Our thoughts on this topic

Just as history has proven time and again, information and data can be used to drive a narrative that benefits the storyteller and their agenda. Based on feedback collected from workers across the company, the Google walkout organizers pointed out systemic issues within the company that enabled the sexually predatory behavior. They pointed out that sexual harassment is one of the symptoms and not the cause, and demanded that the root causes be addressed holistically through their set of five demands.

To extinguish a movement or dissension in its infancy, regimes and corporations throughout history have used the following tactics:
- Be the benevolent ruler
- Divide and conquer the crowd by appealing to individual group needs but never to everyone's collective demands
- Find a middle ground by agreeing to some demands while signaling that the other side also take a few steps forward, thereby disengaging those whose demands aren't met and weakening the movement's leadership
- Use information to support the status quo
- Promote the influencers into top management roles

It appears that Google is using many of these approaches to appease the walkout participants. Google management adopted classic labor negotiation tactics by sanctioning the protest and encouraging managers to participate, then agreeing to adopt the easiest item on the list of demands, one already implemented at some other tech companies, while restricting it to full-time employees only. By restricting the reforms to employees and creating a larger distance from TVCs, they seem to be thinning out the protesting crowd. By not engaging in open dialog on all the key issues highlighted, and by keeping key decision makers at the top out of the town hall, they have created a situation of deniability.
Lastly, by going back to surveying sentiments on key issues, they are relying not only on time to subdue the anger felt but also on the grassroots voice dissipating. Will this be the tipping point for Google employees to unionize?

Related reading:
- BuzzFeed Report: Google's sexual misconduct policy "does not apply retroactively to claims already compelled to arbitration"
- OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?
- Following Google, Facebook changes its forced arbitration policy for sexual harassment claims
Facebook's outgoing Head of communications and policy takes blame for hiring PR firm ‘Definers’ and reveals more

Melisha Dsouza
22 Nov 2018
4 min read
On 4th November, the New York Times published a scathing report on Facebook that placed the tech giant under scrutiny for its leadership morals. The report pointed out how Facebook has been following a strategy of 'delaying, denying and deflecting' blame for all the controversies surrounding it. One of the recent scandals it was involved in was hiring a PR firm called Definers, which did opposition research and shared content that criticized Facebook's rivals Google and Apple, diverting focus from the impact of Russian interference on Facebook. The firm also pushed the idea that liberal financier George Soros was behind a growing anti-Facebook movement.

Now, in a memo sent to Facebook employees and obtained by TechCrunch, Elliot Schrage (Facebook's outgoing Head of Communications and Policy) takes the blame for hiring Definers. Schrage, who announced in June, after the Cambridge Analytica scandal, that he was leaving, admitted that his team asked Definers to push negative narratives about Facebook's competitors. He also stated that Facebook asked Definers to conduct research on George Soros. His argument was that after Soros attacked Facebook in a speech at Davos, calling the company a "menace to society", they wanted to determine whether he had any financial motivation. According to the TechCrunch report, Schrage denied that the company asked the PR firm to distribute or create fake news.

"I knew and approved of the decision to hire Definers and similar firms. I should have known of the decision to expand their mandate," Schrage said in the memo. He further stressed his disappointment that so much of the company's internal discussion has become public. According to the memo, "This is a serious threat to our culture and ability to work together in difficult times."

Saving Mark and Sheryl from additional finger-pointing, Schrage added: "Over the past decade, I built a management system that relies on the teams to escalate issues if they are uncomfortable about any project, the value it will provide or the risks that it creates. That system failed here and I'm sorry I let you all down. I regret my own failure here."

In a follow-up note to the memo, Sheryl Sandberg (COO, Facebook) also shares accountability for hiring Definers: "I want to be clear that I oversee our Comms team and take full responsibility for their work and the PR firms who work with us." Conveniently enough, this memo comes after the announcement that Schrage is stepping down from his post at Facebook. His replacement, Facebook's new head of global policy and former U.K. Deputy Prime Minister Nick Clegg, will now be reviewing the company's work with all political consultants.

The entire scandal has drawn harsh criticism from media figures like Kara Swisher and academics like Scott Galloway. On an episode of Pivot with Kara Swisher and Scott Galloway, Swisher commented: "Sheryl Sandberg ... really comes off the worst in this story, although I still cannot stand the ability of people to pretend that this is not all Mark Zuckerberg's responsibility." She followed up with a jarring comment: "He is the CEO. He has 60 percent. He's an adult, and they're treating him like this sort of adult boy king who doesn't know what's going on. It's ridiculous.
He knows exactly what's going on." Galloway added that since Sheryl had "written eloquently on personal loss and the important discussion around gender equality", these accomplishments gave her "unfair" protection, and that it might also be true that she will be "unfairly punished". He raised questions about both Mark's and Sheryl's leadership, saying: "Can you think of any individuals who have made so much money doing so much damage? I mean, they make tobacco executives look like Mister Rogers." On 19th November, he tweeted a detailed theory on why Sandberg is still a part of Facebook: because "The Zuck can't be (fired)" and nobody wants to be the board that "fires the woman".

https://twitter.com/profgalloway/status/1064559077819326464

Here's another recent tweet thread from Scott, a sarcastic take on what a "Big Tech" company actually is:

https://twitter.com/profgalloway/status/1065315074259202048

Head over to CNBC to know more about this news.

Related reading:
- What is Facebook hiding? New York Times reveals Facebook's insidious crisis management strategy
- NYT Facebook exposé fallout: Board defends Zuckerberg and Sandberg; Media call and transparency report Highlights
- BuzzFeed Report: Google's sexual misconduct policy "does not apply retroactively to claims already compelled to arbitration"

OpenCV 4.0 releases with experimental Vulkan, G-API module and QR-code detector among others

Natasha Mathur
21 Nov 2018
2 min read
Two months after the OpenCV team announced the alpha release of OpenCV 4.0, the final version 4.0 of OpenCV is here. OpenCV 4.0 was announced last week and is now available as a C++11 library that requires a C++11-compliant compiler. The new release introduces features such as a G-API module, a QR code detector, performance improvements, and DNN improvements, among others.

OpenCV is an open source library of programming functions mainly aimed at real-time computer vision. OpenCV is cross-platform and free for use under the open-source BSD license. Let's have a look at what's new in OpenCV 4.0.

New Features

- G-API: OpenCV 4.0 comes with a completely new module, opencv_gapi. G-API is an engine for very efficient image processing, based on lazy evaluation and on-the-fly construction of the processing graph.
- QR code detector and decoder: OpenCV 4.0 adds a QR code detector and decoder to the opencv/objdetect module, along with a live sample; a short usage sketch appears at the end of this article. The decoder is currently built on top of the Quirc library.
- Kinect Fusion algorithm: The popular Kinect Fusion algorithm has been implemented, optimized for CPU and GPU (OpenCL), and integrated into the opencv_contrib/rgbd module. Kinect 2 support has also been updated in the opencv/videoio module to make the live samples work.

DNN improvements

- Support has been added for the Mask-RCNN model.
- A new integrated ONNX parser has been added.
- Support has been added for popular classification networks and for the YOLO object detection network.
- The performance of the DNN module improves when OpenCV 4.0 is built with Intel DLDT support, by utilizing more layers from DLDT.
- An experimental Vulkan backend has been added for platforms where OpenCL is not available.

Performance improvements

In OpenCV 4.0, hundreds of basic kernels have been rewritten using "wide universal intrinsics". Wide universal intrinsics map to SSE2, SSE4, AVX2, NEON or VSX intrinsics, depending on the target platform and the compile flags. This leads to better performance, even for already optimized functions. Support has also been added for IPP 2019 via the IPPICV component upgrade.

For more information, check out the official release notes.
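As a quick illustration of the new QR code API, here is a minimal sketch using OpenCV 4.0's Python bindings; the image filename is a placeholder for illustration. detectAndDecode returns the decoded string, the corner points of the detected code, and a rectified (straightened) copy of the code:

```python
import cv2

# Load an image containing a QR code ("qr_sample.png" is a placeholder path).
img = cv2.imread("qr_sample.png")
if img is None:
    raise SystemExit("Could not read the input image")

# OpenCV 4.0 ships the QR detector/decoder in the objdetect module.
detector = cv2.QRCodeDetector()

# Returns the decoded text, the bounding quadrilateral, and the rectified code.
data, points, straight_qr = detector.detectAndDecode(img)

if points is not None and data:
    print("Decoded QR payload:", data)
else:
    print("No QR code decoded")
```

Note that this class is new in 4.0; earlier OpenCV releases relied on third-party libraries such as ZBar for QR decoding.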
Related reading:
- Image filtering techniques in OpenCV
- 3 ways to deploy a QT and OpenCV application
- OpenCV and Android: Making Your Apps See

The US Department of Commerce wants to regulate export of AI and related products

Prasad Ramesh
21 Nov 2018
4 min read
This Monday, the Department of Commerce's Bureau of Industry and Security (BIS) published a proposal to control the export of AI from the USA. The move leans towards restricting AI tech from leaving the country in order to protect US national security.

The areas that come under the licensing proposal

Artificial intelligence, as we've seen in recent years, has great potential for both good and harm. The Department of Commerce is not taking any chances with it. The proposal lists many areas of AI that could potentially require a license to be exported to certain countries. Besides computer vision and natural language processing, military-specific products like adaptive camouflage and faceprint surveillance are also listed in the proposal. The major areas listed in the proposal are:
- Biotechnology, including genomic and genetic engineering
- Artificial intelligence (AI) and machine learning, including neural networks, computer vision, and natural language processing
- Position, Navigation, and Timing (PNT) technology
- Microprocessor technology, like stacked memory on chip
- Advanced computing technology, like memory-centric logic
- Data analytics technology, like data analytics by visualization and analysis algorithms
- Quantum information and sensing technology, like quantum computing, encryption, and sensing
- Logistics technology, like mobile electric power
- Additive manufacturing, like 3D printing
- Robotics, like micro drones and molecular robotics
- Brain-computer interfaces, like mind-machine interfaces
- Hypersonics, like flight control algorithms
- Advanced materials, like adaptive camouflage
- Advanced surveillance technologies, like faceprint and voiceprint technologies

David Edelman, a former adviser to ex-US president Barack Obama, said: "This is intended to be a shot across the bow, directed specifically at Beijing, in an attempt to flex their muscles on just how broad these restrictions could be."

Countries that could be affected by regulation on the export of AI

To determine the level of export controls, the department will consider the potential end uses and end users of the technology. The list of countries is not yet clear, but ones to which exports are already restricted, like embargoed countries, will be considered. China could also be one of them.

What does this mean for companies?

If your organization creates products in 'emerging technologies', there will be restrictions on the countries you can export to and also on the disclosure of technology to foreign nationals in the United States. Depending on the criteria, non-US citizens might even need licenses to participate in the research and development of such technology. This would restrict non-US citizens from participating in, and taking anything back from, say, an advanced AI research project. If the new regulations go into effect, they will affect the security review of foreign investments across these areas. When the list of technologies is finalized, many types of foreign investments will be subject to review, and deals could be halted or undone.

Public views on academic research

In addition to commercial applications and products, this regulation could also be bad news for academic research.

https://twitter.com/jordanbharrod/status/1065047269282627584
https://twitter.com/BryanAlexander/status/1064941028795400193

Even Google Home, Amazon Alexa, and iRobot Roomba could be affected.

https://twitter.com/R_D/status/1064511113956655105

But it does not look like research papers will really be affected.
The document states that the department does not intend to expand jurisdiction over 'fundamental research' in 'emerging technologies' that is intended to be published and is not currently subject to the EAR as per § 734.8. But will this affect open-source technologies? We really hope not.

Deadline for comments is less than 30 days away

BIS has invited comments on the proposal, covering the definition and categorization of emerging technologies and the impact of the controls on US technology leadership, among other topics. However, the short deadline of December 19, 2018, indicates their haste to implement licensing for the export of AI quickly. For more details, and to know where you can submit your comments, read the proposal.

Related reading:
- The US Air Force lays groundwork towards artificial general intelligence based on hierarchical model of intelligence
- Google open sources BERT, an NLP pre-training technique
- Teaching AI ethics – Trick or Treat?

#GoogleWalkout demanded a ‘truly equitable culture for everyone’; Pichai shares a “comprehensive” plan for employees to safely report sexual harassment

Melisha Dsouza
09 Nov 2018
4 min read
Last week, 20,000 Google employees, along with temps, vendors, and contractors, walked out to protest the discrimination, racism, and sexual harassment they encountered at Google's workplace. This global walkout by Google workers was a response to the New York Times report on Google published last month, which exposed how the company shielded senior executives accused of sexual misconduct. Yesterday, Google addressed these demands in a note written by Sundar Pichai to employees. He admits that they have "not always gotten everything right in the past" and that they are "sincerely sorry". This supposedly 'comprehensive' plan will provide more transparency into how employees raise concerns and how Google will handle them.

Here are some of the major changes that caught our attention:
- Following suit after Uber and Microsoft, Google has eliminated forced arbitration in cases of sexual harassment.
- To foster more transparency in reporting sexual harassment cases, employees can now bring support persons to meetings with HR.
- Google is planning to update and expand its mandatory sexual harassment training, conducting it annually instead of once every two years. If an employee fails to complete the training, they will receive a one-rating dock in the performance review system. This applies to senior management as well, who could be downgraded from 'exceeds expectations' to 'meets expectations'.
- Google will increase its focus on diversity, equity, and inclusion in 2019, through hiring, progression, and retention, in order to create a more inclusive culture for everyone.
- Google found that one of the most common factors in harassment complaints is that the perpetrator was under the influence of alcohol (~20% of cases). Restating the policy, the plan mentions that excessive consumption of alcohol is not permitted when an employee is at work, performing Google business, or attending a Google-related event, whether onsite or offsite. Going forward, all leaders at the company will be expected to create teams, events, offsites, and environments in which excessive alcohol consumption is strongly discouraged. They will be expected to follow the two-drink rule.

Although the plan is a step towards making workplace conditions stable, it leaves out some of the more inherent concerns related to structural change raised by the organizers of the Google walkout, for example, the structural inequity that separates 'full time' employees from contract workers. Contract workers make up more than half of Google's workforce and perform essential roles across the company. However, they receive few of the benefits associated with tech company employment. They are also largely women, people of color, immigrants, and people from working-class backgrounds.

"We demand a truly equitable culture, and Google leadership can achieve this by putting employee representation on the board and giving full rights and protections to contract workers, our most vulnerable workers, many of whom are Black and Brown women." - Google Walkout organizer Stephanie Parker

Google's plan to bring transparency to the workplace looks like a positive step towards improving its workplace culture. It will be interesting to see how the plan works out for Google's employees, and whether other organizations use it as an example for maintaining a peaceful workplace environment for their workers.
You can head over to Medium.com to read the #GoogleWalkout organizers' response to the update, and to Pichai's blog post for details on the announcement itself.

Related reading:
- Technical and hidden debts in machine learning – Google engineers give their perspective
- 90% Google Play apps contain third-party trackers, share user data with Alphabet, Facebook, Twitter, etc: Oxford University Study
- OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?
UN on Web Summit 2018: How we can create a safe and beneficial digital future for all

Bhagyashree R
07 Nov 2018
4 min read
On Monday, at the opening ceremony of Web Summit 2018, Antonio Guterres, the secretary-general of the United Nations (UN), spoke about the benefits and challenges that come with cutting-edge technologies. Guterres highlighted that the pace of change is happening so quickly that trends such as blockchain, IoT, and artificial intelligence can move from the cutting edge to the mainstream in no time.

Guterres was quick to pay tribute to technological innovation, detailing some of the ways it is helping UN organizations improve the lives of people all over the world. For example, UNICEF is now able to map connectivity for schools in remote areas, and the World Food Programme is using blockchain to make transactions more secure, efficient, and transparent. But these innovations nevertheless pose risks and create new challenges that we need to overcome.

Three key technological challenges the UN wants to tackle

Guterres identified three key challenges for the planet. Together they help inform a broader plan of what needs to be done.

The social impact of the third and fourth industrial revolutions

With the introduction of new technologies, in the next few decades we will see the creation of thousands of new jobs. These will be very different from what we are used to today and will likely require retraining and upskilling, as many traditional jobs will be automated. Guterres believes the unemployment caused by automation could be incredibly disruptive - maybe even destructive - for societies. He further added that we are not preparing fast enough to match the speed of these growing technologies. As a solution, Guterres said: "We will need to make massive investments in education but a different sort of education. What matters now is not to learn things but learn how to learn things." While many professionals will be able to acquire the skills to remain employable in the future, some will inevitably be left behind. To minimize the impact of these changes, safety nets will be essential to help millions of citizens transition into this new world and bring new meaning and purpose into their lives.

Misuse of the internet

The internet has connected the world in ways people wouldn't have thought possible a generation ago. But it has also opened up a whole new channel for hate speech, fake news, censorship, and control. The internet certainly isn't creating many of the challenges facing civic society on its own - but it won't be able to solve them on its own either. On this, Guterres said: "We need to mobilise the government, civil society, academia, scientists in order to be able to avoid the digital manipulation of elections, for instance, and create some filters that are able to block hate speech to move and to be a factor of the instability of societies."

The problem of control

Automation and AI pose risks that exceed the challenges of the third and fourth industrial revolutions. They also create urgent ethical dilemmas, forcing us to ask exactly what artificial intelligence should be used for. Smarter weapons might be a good idea if you're an arms manufacturer, but there needs to be a wider debate that takes in broader concerns and issues. "The weaponization of artificial intelligence is a serious danger and the prospects of machines that have the capacity by themselves to select and destroy targets is creating enormous difficulties or will create enormous difficulties," Guterres remarked. His solution might seem radical but it's also simple: ban them.
He went on to explain: "To avoid the escalation in conflict and guarantee that international military laws and human rights are respected in the battlefields, machines that have the power and the discretion to take human lives are politically unacceptable, are morally repugnant and should be banned by international law."

How we can address these problems

Conventional regulation can help to a certain extent, as in the case of weaponization, but such cases are limited. In most circumstances, technology moves so fast that legislation simply cannot keep up in any meaningful way. This is why we need to create platforms where governments, companies, academia, and civil society can come together to discuss and find ways to make digital technologies "a force for good".

You can watch Antonio Guterres' full talk on YouTube.

Tim Berners-Lee is on a mission to save the web he invented

MEPs pass a resolution to ban "Killer robots"

In 5 years, machines will do half of our job tasks of today; 1 in 2 employees need reskilling/upskilling now - World Economic Forum survey

Technical and hidden debts in machine learning - Google engineers give their perspective

Prasad Ramesh
06 Nov 2018
6 min read
In a paper, Google engineers have pointed out the various costs of maintaining a machine learning system. The paper, Hidden Technical Debt in Machine Learning Systems, talks about technical debt and other ML-specific debts that are hidden or hard to detect. The authors found that it is common to incur massive maintenance costs in real-world machine learning systems. They looked at several ML-specific risk factors to account for in system design. These factors include boundary erosion, entanglement, hidden feedback loops, undeclared consumers, data dependencies, configuration issues, changes in the external world, and a number of system-level anti-patterns.

Boundary erosion in complex models

In traditional software engineering, setting strict abstraction boundaries helps maintain logical consistency between the inputs and outputs of a given component. It is difficult to set such boundaries in machine learning systems, yet machine learning is needed precisely where the desired behavior cannot be effectively expressed in traditional software logic without depending on data. This results in boundary erosion in a couple of areas.

Entanglement

Machine learning systems mix signals together and entangle them, making isolated improvements impossible. A change to one input can change the importance and use of all the other inputs, so no improvement is truly isolated. This is referred to as the CACE principle: Change Anything Changes Everything. There are two possible ways to mitigate this:

Isolate models and serve ensembles. This is useful in situations where the sub-problems decompose naturally, and in many cases ensembles work well because the errors in the component models are uncorrelated. However, relying on the combination itself creates a strong entanglement: improving an individual model may make the overall system less accurate if the remaining errors become more strongly correlated with the other components.

Another strategy is to focus on detecting changes in prediction behavior as they occur.

Correction cascades

There are cases where a problem is only slightly different from another that already has a solution. It can be tempting to reuse the existing model and learn a small correction on top of it as a fast way to solve the new problem. But that correction model creates a new system dependency on the original model, which makes it significantly more expensive to analyze improvements to either model in the future. The cost increases when correction models are cascaded, and a correction cascade can create an improvement deadlock.

Visibility debt caused by undeclared consumers

Often a model's predictions are made widely accessible and are later consumed by other systems. Without access controls, these consumers may be undeclared, silently using the output of one model as an input to another system. These issues are referred to as visibility debt. Undeclared consumers may also create hidden feedback loops.

Data dependencies cost more than code dependencies

Data dependencies carry a similar capacity for building debt as code dependencies, but they are more difficult to detect. Without proper tooling to identify them, data dependencies can form large chains that are difficult to untangle. They come in two types.

Unstable data dependencies

To move quickly, it is often convenient to consume signals produced by other systems as inputs to your own. But some input signals are unstable: they can change behavior, qualitatively or quantitatively, over time. This can happen implicitly, as the other system updates over time, or through explicit changes. A mitigation strategy is to create and depend on versioned copies of the signal.
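To make the versioned-copy mitigation concrete, here is a minimal sketch in Python. It is illustrative only: the upstream click-rate signal, the VersionedSignal class, and the pin_signal helper are hypothetical, not part of any real feature-store API. The point is that training consumes a frozen, hash-identified snapshot rather than the live signal, so an upstream change cannot silently shift the model's inputs.

```python
# Minimal sketch (hypothetical API): pin an unstable upstream signal to a
# versioned, immutable copy before training against it.
import hashlib
import json


class VersionedSignal:
    """Freeze an upstream signal into an immutable, hash-identified copy."""

    def __init__(self, name, values):
        self.name = name
        self.values = list(values)  # copy, so later upstream mutation has no effect
        payload = json.dumps({"name": name, "values": self.values})
        self.version = hashlib.sha256(payload.encode()).hexdigest()[:12]


def pin_signal(live_signal, name):
    """Snapshot the live signal; training code depends only on the snapshot."""
    return VersionedSignal(name, live_signal)


# Usage: the model trains against clickrate@<version>, not the live feed,
# and moving to a newer version becomes an explicit, reviewable step.
live_clickrate = [0.12, 0.31, 0.08]  # produced by another team's system
pinned = pin_signal(live_clickrate, "clickrate")
print(f"training on {pinned.name}@{pinned.version}")
```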
Underutilized data dependencies

Underutilized data dependencies are input signals that provide little incremental modeling benefit. They make an ML system vulnerable to change where vulnerability is unnecessary. Underutilized dependencies can creep into a model in several ways: via legacy, bundled, epsilon, or correlated features.

Feedback loops

Live ML systems often end up influencing their own behavior as they are updated over time. This leads to analysis debt: it is difficult to predict how a given model will behave before it is released. These feedback loops are hard to detect and address when they act gradually over time, as may be the case when a model is updated infrequently. A direct feedback loop is one in which a model directly influences the selection of its own future training data. In a hidden feedback loop, two systems influence each other indirectly through the world.

Machine learning system anti-patterns

It is common for systems that incorporate machine learning methods to end up with high-debt design patterns:

Glue code: Using generic, general-purpose packages often results in a glue code design pattern, in which a massive amount of supporting code is written just to get data into and out of those packages.

Pipeline jungles: Pipeline jungles often appear in data preparation as a special case of glue code. They can evolve organically as new sources are added, and the result can become a jungle of scrapes, joins, and sampling steps.

Dead experimental codepaths: In the short term, it is often attractive to implement experiments as conditional branches within production code, since none of the surrounding structures need to be reworked. Over time, these accumulated codepaths create a growing debt due to the increasing difficulty of maintaining backward compatibility.

Abstraction debt: There is a general lack of support for strong abstractions in ML systems.

Common smells: A smell may indicate an underlying problem in a component or system. These can be data smells, multiple-language smells, or prototype smells.

Configuration debt

Debt can also accumulate when configuring a machine learning system. A large system has a wide range of configuration options covering features, data selection, verification methods, and so on. It is common for configuration to be treated as an afterthought, yet in a mature system the configuration lines can outnumber the lines of code, and each configuration line has potential for mistakes.

Dealing with external world changes

ML systems interact directly with the external world, and the external world is rarely stable. Some measures that can be taken to deal with this instability:

Fixing thresholds in dynamic systems: It is often necessary to pick a decision threshold for a given model to perform some action: to predict true or false, to mark an email as spam or not spam, to show or not show a given advertisement. When a model is retrained on new data, a manually fixed threshold can become stale.

Monitoring and testing: Unit testing and end-to-end testing cannot by themselves ensure that an ML system functions properly. For long-term system reliability, comprehensive live monitoring and automated response are critical. That raises the question of what to monitor; the authors point to three areas as starting points: prediction bias, limits for actions, and upstream producers. A sketch of a prediction-bias check follows below.

Other related areas in ML debt

In addition to the areas above, an ML system may also face debt from other areas, including data testing debt, reproducibility debt, process management debt, and cultural debt.

Conclusion

Moving quickly often introduces technical debt.
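As an illustration of the prediction-bias monitoring the authors suggest as a starting point, here is a minimal sketch assuming a binary classifier; the alert hook and the 5% tolerance are hypothetical placeholders, not values from the paper. The check compares the rate of positive predictions in a serving batch against the rate of observed positive labels: in a system behaving as intended the two distributions should roughly match, so a sudden gap is a useful automated warning that something upstream has changed.

```python
# Minimal prediction-bias monitor (sketch): compare the distribution of
# predicted labels against the distribution of observed labels.

def prediction_bias(predicted_rate, observed_rate):
    """Absolute gap between the predicted and observed positive rates."""
    return abs(predicted_rate - observed_rate)


def alert(message):
    # Stand-in for a real paging/alerting system (hypothetical hook).
    print(f"ALERT: {message}")


def monitor_batch(predictions, labels, tolerance=0.05):
    """Flag a serving batch whose predicted label distribution drifts."""
    pred_rate = sum(predictions) / len(predictions)
    obs_rate = sum(labels) / len(labels)
    bias = prediction_bias(pred_rate, obs_rate)
    if bias > tolerance:
        alert(f"prediction bias {bias:.3f} exceeds tolerance {tolerance}")


# Usage on one batch: here the model predicts positives at twice the
# observed rate, so the check fires.
monitor_batch(predictions=[1, 1, 0, 1, 0, 1], labels=[0, 1, 0, 0, 0, 1])
```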
The most important insight from the paper, according to the authors, is that technical debt is an issue that both engineers and researchers need to be aware of. Paying down machine learning related technical debt requires commitment, which can often only be achieved by a shift in team culture. Recognizing, prioritizing, and rewarding this effort is important for the long-term health of successful machine learning teams.

For more details, you can read the paper at the NIPS website.

Uses of Machine Learning in Gaming

Julia for machine learning. Will the new language pick up pace?

Machine learning APIs for Google Cloud Platform

Facebook's CEO, Mark Zuckerberg summoned for hearing by UK and Canadian Houses of Commons

Bhagyashree R
01 Nov 2018
2 min read
Yesterday, the chairs of the UK and Canadian Houses of Commons issued a letter calling for Mark Zuckerberg, Facebook's CEO, to appear before them. The primary aim of the hearing is to get a clear idea of what measures Facebook is taking to stop the spread of disinformation on the platform and to protect user data. It is scheduled to take place at the Westminster Parliament on Tuesday 27th November.

The committee has already gathered evidence on several data breaches and process failures, including the Cambridge Analytica scandal, and is now seeking answers from Mark Zuckerberg on what led to these incidents. Zuckerberg last attended a hearing in April this year, with the US Senate's Commerce and Judiciary committees, in which he was asked about the company's failure to protect its users' data, its perceived bias against conservative speech, and its use for selling illegal material like drugs. Since then he has not attended any hearings himself, instead sending other senior representatives such as Sheryl Sandberg, Facebook's COO. The letter pointed out: "You have chosen instead to send less senior representatives, and have not yourself appeared, despite having taken up invitations from the US Congress and Senate, and the European Parliament."

Throughout this year, Facebook has been hit by major security and data breaches. Last month, the platform faced a security issue that impacted almost 50 million user accounts; its engineering team discovered that hackers had found a way to exploit a series of bugs related to Facebook's View As feature. Earlier this year, Facebook faced a backlash over the Facebook-Cambridge Analytica data scandal, a major political scandal in which Cambridge Analytica used the personal data of millions of Facebook users for political purposes without their permission.

The report of the hearing will be shared in December, if Zuckerberg agrees to attend. The committee has requested his response by 7th November. Read the full letter issued by the committee.

Facebook is at it again. This time with Candidate Info where politicians can pitch on camera

Facebook finds 'no evidence that hackers accessed third party Apps via user logins', from last week's security breach

How far will Facebook go to fix what it broke: Democracy, Trust, Reality
Google employees ‘Walkout for Real Change’ today. These are their demands.

Natasha Mathur
01 Nov 2018
5 min read
More than 1,500 Google employees around the world are planning to walk out of their respective Google offices today to protest Google's handling of sexual misconduct within the workplace, according to the New York Times.

The walkout is part of the "women's walkout" organized earlier this week by more than 200 Google engineers in response to what employees see as Google's inadequate handling of recent sexual misconduct cases. Planning began last Friday, when Claire Stapleton, a product marketing manager at Google's YouTube, created an internal mailing list to organize the walkout, according to the New York Times. More than 200 employees had joined in over the weekend, a number that has since grown to more than 1,500.

The organizers took to Twitter yesterday to lay out five demands for change within the workplace. The protest has already started at Google's Tokyo and Singapore offices, and Google employees and contractors across the globe will be leaving work at 11:10 AM in their respective time zones.

Here are some glimpses from the walkout:

https://twitter.com/GoogleWalkout/status/1058199862502612993
https://twitter.com/EmmaThomson2/status/1058180157804994562
https://twitter.com/GoogleWalkout/status/1058018104930897920
https://twitter.com/GoogleWalkout/status/1058010748444700672
https://twitter.com/GoogleWalkout/status/1058003099581853697

The demands laid out by the Google employees are as follows:

An end to forced arbitration in cases of harassment and discrimination for all current and future employees. This means that Google should no longer require people to waive their right to sue. Every employee should also have the right to bring a co-worker, representative, or supporter of their choice when meeting with HR to file a harassment claim.

A commitment to end pay and opportunity inequity. This includes making sure that there are women of color at all levels of the organization, along with transparent data on the gender, race, and ethnicity compensation gap, across both level and years of industry experience. The methods and techniques used to aggregate such data should also be transparent.

A publicly disclosed sexual harassment transparency report. This includes the number of harassment claims at Google over time, the types of claims submitted, how many victims and accused have left Google, and details about exit packages and their worth.

A clear, uniform, and globally inclusive process for reporting sexual misconduct safely and anonymously. The current process is not working: HR's performance is assessed by senior management and directors, which pushes HR to put management's interests ahead of the employees who report harassment and discrimination. Accountability, safety, and the ability to report unsafe working conditions should not be dictated by employment status.

Elevate the Chief Diversity Officer to answer directly to the CEO and make recommendations directly to the Board of Directors. Appoint an employee representative to the Board.

The frustration among Google employees surfaced after a New York Times report brought to light shocking allegations of sexual misconduct at Google against Andy Rubin, the creator of Android. Per the report, Rubin was accused of misbehavior in 2014 and the allegations were confirmed by Google.
As a result, Rubin was asked to leave by former Google CEO Larry Page, but, discreditably, Google paid him a $90 million exit package and gave him a high-profile, well-respected farewell in October 2014. Meanwhile, senior executives such as Drummond, Alphabet's Chief Legal Officer, who was mentioned in the NY Times report for indulging in "inappropriate relationships" within the organization, continue to work in highly placed positions at Google and have faced no real punitive action for their past behavior.

"We don't want to feel that we're unequal or we're not respected anymore. Google's famous for its culture. But in reality, we're not even meeting the basics of respect, justice, and fairness for every single person here," Stapleton told the NY Times.

Google CEO Sundar Pichai had sent an email to all Google employees last Thursday, clarifying that the company has fired 48 people over the last two years for sexual harassment, of whom 13 were "senior managers and above", and that none of them received exit packages. Pichai further apologized in an email obtained by Axios this Tuesday, saying that the "apology at TGIF didn't come through, and it wasn't enough". Pichai also mentioned that he supports the engineers at Google who have organized the walkout. "I am taking in all your feedback so we can turn these ideas into action. We will have more to share soon. In the meantime, Eileen will make sure managers are aware of the activities planned for Thursday and that you have the support you need," wrote Pichai.

The same day, news came to light that Richard DeVaul, a director at X, a unit of Alphabet (Google's parent company), whose name was also mentioned in the New York Times report, had resigned from the company. DeVaul had been accused of sexually harassing Star Simpson, a hardware engineer, and did not receive an exit package on his resignation.

Public response to the walkout has been largely positive:

https://twitter.com/lizthegrey/status/1057859226100355072
https://twitter.com/amrtgaber/status/1057822987527761920
https://twitter.com/sparker2/status/1057846019122069508
https://twitter.com/LisaIronTongue/status/1057852658948595712

Ex-googler who quit Google on moral grounds writes to Senate about company's "Unethical" China censorship plan

OK Google, why are you ok with mut(at)ing your ethos for Project DragonFly?

Google takes steps towards better security, introduces new API policies for 3rd parties and a Titan Security system for mobile devices

‘We are not going to withdraw from the future’ says Microsoft’s Brad Smith on the ongoing JEDI bid, Amazon concurs

Prasad Ramesh
29 Oct 2018
5 min read
The Pentagon has been trying to get hold of AI and related technologies from tech giants. Google employees have quit over it, and Microsoft employees have asked their company to withdraw from the JEDI project. Last Friday, Microsoft President Brad Smith wrote about Microsoft's work with the US military and the company's vision in this area.

Amazon, Microsoft, IBM, and Oracle are the companies that have bid for the Joint Enterprise Defense Infrastructure (JEDI) project. JEDI is a department-wide cloud computing infrastructure that will give the Pentagon access to weapons systems enhanced with artificial intelligence and cloud computing.

Microsoft believes in defending the USA

"We are not going to withdraw from the future, in the most positive way possible, we are going to work to help shape it," said Brad Smith, President at Microsoft, indicating that Microsoft intends to provide its technology to the Pentagon. Microsoft did not shy away from bidding on the Pentagon's JEDI project, in contrast to Google, which opted out of the program earlier this month citing ethical concerns. Smith expressed Microsoft's intent to provide AI and related technologies to the US defense department, saying, "we want the people who defend USA to have access to the nation's best technology, including from Microsoft".

Smith stated that Microsoft's work in this area is based on three convictions:

Microsoft believes in the strong defense of the USA and wants its defenders to have access to the nation's best technology, including Microsoft's.

It wants to use its "knowledge and voice" to address ethical AI issues via the nation's "civic and democratic processes".

It gives its employees the ability to opt out of work on these projects, given that, as a global company, it employs people from many countries.

Smith shared that Microsoft has a long-standing history with the US Department of Defense (DoD). Its technology has been used throughout the US military, from the front office to field operations, including bases, ships, aircraft, and training facilities.

Amazon shares Microsoft's vision

Amazon shares Microsoft's vision of empowering US law enforcement and defense institutions with the latest technology, and it already provides cloud services to power the Central Intelligence Agency (CIA). Amazon CEO Jeff Bezos said: "If big tech companies are going to turn their back on the Department of Defense, this country is going to be in trouble."

Amazon also provides US law enforcement with its facial recognition technology, Rekognition. This has been a bone of contention not just for civil rights groups but also for some of Amazon's own employees. Rekognition is meant to help identify and incarcerate suspects, but its accuracy is questionable: in a study by the ACLU, Rekognition incorrectly matched 28 members of the US Congress. The American Civil Liberties Union (ACLU) has now filed a Freedom of Information Act (FOIA) request demanding that the Department of Homeland Security (DHS) disclose how DHS and Immigration and Customs Enforcement (ICE) use Rekognition for law enforcement and immigration checks.

Google's rationale for withdrawing from the JEDI project

Last week, in an interview with the Fox Network, Oracle founder Larry Ellison stated that it was shocking how Google viewed this matter. Google withdrew from the JEDI project following strong backlash from many of its employees.
In its official statement, Google gave the reasons for dropping out of the JEDI bidding as a misalignment with its ethical values, and the fact that it does not hold all the necessary clearances to work on government projects. However, Google is open to launching a customized search engine in China that complies with China's censorship rules, including the potential to surveil Chinese citizens.

Should AI be used in weapons?

This question is at the heart of the contentious topic of the tech industry working with the military. It is a serious question that has been debated over the years by scientists and experienced leaders; Elon Musk, researchers from DeepMind, and others have even pledged not to build lethal AI. Personally, I side with the researchers and believe AI should be used exclusively for the benefit of mankind, to enhance human lives and solve problems that improve people's lives, not against each other in a race to build weapons or to become a superpower. But then again, what would I know? Leading nations are in an AI arms race as we speak, with sophisticated national AI plans and agendas.

For more details on Microsoft's interest in working with the US military, visit the Microsoft website.

'Employees of Microsoft' ask Microsoft not to bid on US Military's Project JEDI in an open letter

Google employees quit over company's continued Artificial Intelligence ties with the Pentagon

Oracle's bid protest against U.S Defence Department's (Pentagon) $10 billion cloud contract