
How-To Tutorials - News

104 Articles

How social media enabled and amplified the Christchurch terrorist attack

Fatema Patrawala
19 Mar 2019
11 min read
The recent horrifying terrorist attack in New Zealand has cast new blame on how technology platforms police content. It raises hard questions: are global internet services designed to work this way, and is online viral hate uncontainable?

Fifty-one people have been reported dead and 50 more injured after the terrorist attacks on two New Zealand mosques on Friday. The victims included children as young as 3 and 4 years old, and elderly men and women.

The alleged shooter has been identified as a 28-year-old Australian man named Brenton Tarrant. Tarrant announced the attack on the anonymous-troll message board 8chan, where he posted images of the weapons days before the attack and made an announcement an hour before the shooting. On 8chan, Facebook and Twitter, he also posted links to a 74-page manifesto, titled “The Great Replacement,” blaming immigration for the displacement of whites in Oceania and elsewhere. The manifesto cites “white genocide” as a motive for the attack, and calls for “a future for white children” as its goal. He also live-streamed the attack on Facebook and posted a link to the stream on 8chan; copies quickly spread to YouTube and other platforms.

It’s terrifying and disgusting, especially since 8chan is one of the sites where disaffected internet misfits create memes and other messages to provoke dismay and sow chaos. “8chan became the new digital home for some of the most offensive people on the internet, people who really believe in white supremacy and the inferiority of women,” Ethan Chiel wrote.

“It’s time to stop shitposting,” the alleged shooter’s 8chan post reads, “and time to make a real-life effort post.” Many of the responses, anonymous by 8chan’s nature, celebrate the attack, with some posting congratulatory Nazi memes. A few seem to decry it, though only over logistical quibbles. Others lament that the whole affair might destroy the site, a concern that betrays its users’ priorities.

Social media encourages performance crime

The use of social media technology and livestreaming marks the attack as different from many other terrorist incidents. It is a form of violent “performance crime”: the video streaming is a central component of the violence itself, not somehow incidental to the crime, or a trophy for the perpetrator to re-watch later.

In the past, terrorism functioned according to what has been called the “theatre of terror”, which required the media to report on the spectacle of violence created by the group. With social media in everyone’s hands, it is now much easier for a perpetrator to both create the spectacle of horrific violence and distribute it widely without intermediaries.

There is a tragic and recent history of performance crime videos that use live streaming and social media video services as part of their tactics. In 2017, for example, the sickening murder video of an elderly man in Ohio was uploaded to Facebook, and the torture of a man with disabilities in Chicago was live-streamed. In 2015, the murder of two journalists was simultaneously broadcast on-air and live-streamed.

Tech companies on the radar

Social media companies scrambled to take action as the news, and the video, of the attack spread. Facebook managed to pull down Tarrant’s profiles and the video, but only after New Zealand police brought the live-stream to the company’s attention. It says it has been working “around the clock” to remove videos of the incident shared on its platform.
In a statement posted to Twitter on Sunday, Facebook said that within 24 hours of Friday’s shooting it had removed 1.5 million videos of the attack from its platform globally. YouTube said it had also removed an “unprecedented volume” of videos of the shooting. Twitter suspended Tarrant’s account, where he had posted links to the manifesto from several file-sharing sites.

The chaotic aftermath mostly unfolded while many North Americans slept, waking up to the news and its associated confusion. By morning on the East Coast, news outlets had already weighed in on whether technology companies might be partly to blame for catastrophes such as the New Zealand massacre because they failed to catch offensive content before it spread. One tweet said Google, Twitter and Facebook made a choice not to use the tools available to them to stop white supremacist terrorism.

https://twitter.com/samswey/status/1107055372949286912

Countries like Germany and France already have laws in place that demand social media sites move quickly to remove hate speech, fake news and illegal material. Sites that do not remove “obviously illegal” posts can face fines of up to 50m euros (£44.3m).

In the wake of the attack, a consortium of New Zealand’s major companies has pledged to pull their advertising from Facebook. In a joint statement, the Association of New Zealand Advertisers (ANZA) and the Commercial Communications Council asked domestic companies to think about where “their advertising dollars are spent, and carefully consider, with their agency partners, where their ads appear.” They added, “We challenge Facebook and other platform owners to immediately take steps to effectively moderate hate content before another tragedy can be streamed online.”

Additionally, internet service providers in New Zealand such as Vodafone, Spark and Vocus are blocking access to websites that do not respond to, or refuse to comply with, requests to remove reuploads of the shooter’s original live stream.

The free speech vs safety debate puts social media platforms in the crosshairs

Tech companies are facing new questions on content moderation following the New Zealand attack. The shooter posted a link to the live stream, and soon after he was apprehended, reuploads were found on other platforms like YouTube and Twitter. “Tech companies basically don’t see this as a priority,” the counter-extremism policy adviser Lucinda Creighton commented. “They say this is terrible, but what they’re not doing is preventing this from reappearing.”

Others affirmed the importance of quelling the spread of the manifesto, video, and related materials, for fear of producing copycats, or at least of furthering radicalization among those receptive to the message. The circulation of ideas might have motivated the shooter as much as, or even more than, ethnic violence.

As Charlie Warzel wrote at The New York Times, the New Zealand massacre seems to have been made to go viral. Tarrant teased his intentions and preparations on 8chan. When the time came to carry out the act, he provided a trove of resources for his anonymous audience, scattered to the winds of mirror sites and repositories. Once the live-stream started, one 8chan user posted “capped for posterity” on Tarrant’s thread, meaning that he had downloaded the stream’s video for archival and, presumably, future upload to other services, such as Reddit or 4chan, where other like-minded trolls or radicals would ensure the images spread even further.
As Warzel put it, “Platforms like Facebook, Twitter, and YouTube … were no match for the speed of their users.” The internet is a Pandora’s box that never had a lid.

Camouflaging stories is easy, even as companies work hard to build AI to catch them

Last year, Mark Zuckerberg defended himself and Facebook before Congress against myriad failures, including Russian operatives disrupting American elections and illegal housing ads that discriminate by race. Zuckerberg repeatedly invoked artificial intelligence as a solution for the problems his and other global internet companies have created. There is simply too much content for human moderators to process, even when pressed hard to do so under poor working conditions. The answer, Zuckerberg argued, is to train AI to do the work for them.

But that technique has proved insufficient, because detecting and scrubbing undesirable content automatically is extremely difficult. False positives enrage earnest users or foment conspiracy theories among paranoid ones, thanks to the black-box nature of computer systems. Worse, given a pool of billions of users, the clever ones will always find ways to trick any computer system, for example by slightly modifying images or videos to make them appear different to the computer but identical to human eyes. 8chan, as it happens, is largely populated by computer-savvy people who have self-organized to perpetrate exactly those kinds of tricks.
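To make that evasion concrete, here is a minimal, illustrative sketch (not any platform’s actual matching pipeline, which is proprietary). Flipping a single low-order bit in one pixel, invisible to a viewer, produces a completely different cryptographic fingerprint, which is why platforms also rely on perceptual hashes that tolerate small changes, and why determined uploaders then attack those with heavier edits. The `average_hash` helper below is a toy re-implementation of a common perceptual-hash idea, written inline rather than taken from any library.

```python
import hashlib
import numpy as np

def average_hash(img, size=8):
    """Tiny perceptual hash: downsample to size x size blocks, threshold against the mean."""
    h, w = img.shape
    small = img[:h - h % size, :w - w % size]
    small = small.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    bits = (small > small.mean()).astype(np.uint8)
    return bits.flatten()

rng = np.random.default_rng(1)
frame = rng.integers(0, 256, (64, 64), dtype=np.uint8)   # stand-in for one video frame
tweaked = frame.copy()
tweaked[0, 0] ^= 1                                       # flip one low-order bit: invisible to a viewer

# Exact-match fingerprinting breaks completely...
print(hashlib.sha256(frame.tobytes()).hexdigest()[:16])
print(hashlib.sha256(tweaked.tobytes()).hexdigest()[:16])

# ...while the perceptual fingerprint is unchanged.
diff = int(np.sum(average_hash(frame) != average_hash(tweaked)))
print("perceptual hash bit difference:", diff)
```

Running this prints two unrelated SHA-256 prefixes but a perceptual-hash difference of zero bits. Defeating the perceptual hash in turn requires larger transformations (crops, overlays, re-encodes), which is consistent with how reuploads kept evading detection in the reporting above.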
The primary sources of content are only part of the problem. Long after the deed, YouTube users have bolstered conspiracy theories about murders, successfully replacing truth with lies among broad populations of users who might not even know they are being deceived. Even stock-photo providers are licensing stills from the New Zealand shooter’s video; a Reuters image that shows the perpetrator wielding his rifle as he enters the mosque is simply credited, “Social media.”

Interpreting real motives is difficult on social media

The video is just the tip of the iceberg. Many smaller and less obviously inflammatory messages have no hope of being found, isolated, and removed by technology services. The shooter praised Donald Trump as a “symbol of renewed white identity” and baited the conservative commentator Candace Owens, who took the bait on Twitter in a post that got retweeted thousands of times by the morning after the attack. The shooter’s forum posts and video are littered with memes and inside references that bear special meaning within certain communities on 8chan, 4chan, Reddit, and other corners of the internet, offering tempting receptors for consumption and further spread.

Perhaps worst of all, the forum posts, the manifesto, and even the shooting itself might not have been carried out with the purpose that a literal read of their contents suggests. At first glance, it seems impossible to deny that this terrorist act was motivated by white-extremist hatred, an animosity that authorities like FBI experts and Facebook officials would want to snuff out before it spreads. But 8chan is notorious for users who behave ironically and rudely under the cover of anonymity, using humor, memes and urban slang to promote chaos and divisive rhetoric. The internet separates images from context and action from intention, then spreads those messages quickly among billions of people scattered around the globe. That structure makes it impossible to even know what individuals like Tarrant “really mean” by their words and actions. As it spreads, social-media content neuters earnest purpose entirely, putting it on the same level as anarchic randomness. What a message means collapses into how it gets used and interpreted. For 8chan trolls, any ideology might be as good as any other, so long as it produces chaos.

We all have a role to play

It’s easy to say that technology companies can do better. They can, and they should. But ultimately, content moderation is not the solution by itself. The problem is the media ecosystem they have created. The only surprise is that anyone would still be surprised that social media produce this tragic abyss, for this is what social media are supposed to do, what they were designed to do: spread the images and messages that accelerate interest and invoke raw emotions, without check, and absent concern for their consequences.

We hope that social media companies get better at filtering out violent content and explore alternative business models, and that governments think critically about cyber laws that protect both people and speech. But until they do, we should reflect on our own behavior too. As news outlets, we shape the narrative through our informed perspectives, which makes it imperative to publish legitimate and authentic content. As users, we should make deliberate choices about what we like and share on social platforms, and consider how our activity could feed an overall spectacle society that might inspire future perpetrator-produced videos of such gruesome crimes. In this era of social spectacle, we all have a role to play in ensuring that terrorists aren’t rewarded for their crimes with our clicks and shares.

The Indian government proposes to censor social media content and monitor WhatsApp messages
Virality of fake news on social media: Are weaponized AI bots to blame, questions Destin Sandlin
Mastodon 2.7, a decentralized alternative to social media silos, is now out!


Googlers launch industry-wide awareness campaign to fight against forced arbitration

Natasha Mathur
17 Jan 2019
6 min read
A group of Googlers launched a public awareness social media campaign from 9 AM to 6 PM EST yesterday. The group, called ‘Googlers for Ending Forced Arbitration’, shared information about arbitration on their Twitter and Instagram accounts throughout the day.

https://twitter.com/endforcedarb/status/1084813222505410560

As part of the campaign, the group tweeted that in surveying employees of 30+ tech companies and 10+ common temp/contractor suppliers in the industry, none could meet the three primary criteria needed for a transparent workplace. The three basic criteria are: an optional arbitration policy for all employees and all forms of discrimination (including contractors and temps), no class-action waivers, and no gag rule that keeps arbitration proceedings confidential.

The group shared some hard facts about arbitration and busted myths about it. Let’s look at some of the key highlights from yesterday’s campaign.

At least 60 million Americans are forced to use arbitration

The group states that the use of forced arbitration has grown significantly in the past seven years. Over 65% of companies with 1,000 or more employees now have mandatory arbitration procedures. Employees have no option to take their employers to court in cases of harassment or discrimination, and people of colour and women are often the ones most affected by this practice.

[Image: How employers use forced arbitration]

Forced arbitration is extremely unfair

Arbitration firms hired by companies almost always favour the companies over their employees, for fear of not being hired the next time if they decide in the employee’s favour. The group states that employees are 1.7 times more likely to win in federal courts and 2.6 times more likely to win in state courts than in arbitration. There are no public filings of complaint details, meaning the company answers to no one regarding the issues within the organization. The company can also limit its obligation to disclose the evidence you need to prove your case.

Arbitration hearings happen behind closed doors within a company

An arbitration hearing involves just the employee and their lawyer, the other party and their lawyer, and a panel of one to three arbitrators. Each party picks one arbitrator, who is also paid by the employer. In practice there is usually only a single arbitrator, as a three-arbitrator panel costs five times more than a single arbitrator, per the American Arbitration Association.

Forced arbitration requires employees to sign away their right to class-action lawsuits at the start of employment

The group states that, irrespective of whether there is a legal dispute, forced arbitration bans employees from coming together as a group, both in arbitration and in class-action lawsuits. Most employers also impose a “gag rule” which restricts employees from even talking about their experience with the arbitration process. Certain companies do give you an option to opt out of forced arbitration using an opt-out form, but the option comes with a time constraint that depends on your agreement with the company. For instance, companies such as Twitter, Facebook, and Adecco give their employees a chance to opt out of forced arbitration.
[Image: Arbitration opt-out option]

JAMS and AAA are among the top arbitration organizations used by major tech giants

JAMS (Judicial Arbitration and Mediation Services) is a private company used by employers like Google, Airbnb, Uber, Tesla, and VMware. JAMS does not publicly disclose the diversity of its arbitrators. Similarly, AAA (the American Arbitration Association) is a non-profit organization where usually retired judges or lawyers serve as arbitrators. Arbitrators at the AAA have an overall composition of 24% women and minorities. The AAA is one of the largest arbitration organizations, used by companies such as Facebook, Lyft, Oracle, Samsung, and Two Sigma.

Katherine Stone, a professor at UCLA law school, states that the procedures followed by these arbitration firms don’t allow much discovery, meaning the firms don’t usually permit depositions or the kinds of document exchange that precede a court hearing. “So, the worker goes into the hearing...armed with nothing, other than their own individual grievances, their own individual complaints, and their own individual experience. They can’t learn about the experience of others,” says Stone.

Female workers and African-American workers are the most likely to suffer from forced arbitration

58% of female workers and 59% of African-American workers face mandatory arbitration, with rates varying across workgroups. In the construction industry, which is highly male-dominated, forced arbitration is imposed at the lowest rate; in the education and health industries, which have majority-female workforces, the imposition rate is high.

[Image: Forced arbitration rate among different workgroups]

The Supreme Court has gradually allowed companies to expand arbitration to employees and consumers

The group states that the 1925 Federal Arbitration Act (FAA) legalized arbitration between shipping companies as a way of settling commercial disputes. The Supreme Court, however, gradually expanded this practice to employment and consumer disputes as well.

[Image: Supreme Court decisions]

Apart from sharing these facts, the group also shed light on dos and don’ts that employees should follow under forced arbitration clauses.

[Image: Dos and don’ts]

The social media campaign by Googlers for Ending Forced Arbitration represents an upsurge in strength and courage among employees within the tech industry: not just Google employees but also employees from other tech companies shared their experiences of forced arbitration. As part of the campaign, the group researched academic institutions, labour attorneys, advocacy groups, and the contracts of around 30 major tech companies. To follow all the highlights from the campaign, follow the End Forced Arbitration Twitter account.

Shareholders sue Alphabet’s board members for protecting senior execs accused of sexual harassment
Recode Decode #GoogleWalkout interview shows why data and evidence don’t always lead to right decisions in even the world’s most data-driven company
Tech Workers Coalition volunteers talk unionization and solidarity in Silicon Valley


Google rejects all 13 shareholder proposals at its annual meeting, despite protesting workers

Sugandha Lahoti
20 Jun 2019
11 min read
Yesterday at Google’s annual shareholder meeting, Alphabet, Google’s parent company, faced 13 independent stockholder proposals ranging from sexual harassment and diversity to the company’s policies on China and forced arbitration. There was also a proposal to limit Alphabet’s power by breaking up the company. However, as expected, every stockholder proposal was voted down after a few minutes of ceremonial voting, despite protesting workers. The company’s co-founders, Larry Page and Sergey Brin, were both no-shows, and Google CEO Sundar Pichai didn’t answer any questions.

Google has seen a massive escalation in employee activism since 2018. The company faced backlash from its employees over a censored version of its search engine for China, forced arbitration policies, and its mishandling of sexual misconduct. Google Walkout for Real Change organizers Claire Stapleton and Meredith Whittaker were also retaliated against by their supervisors for speaking out. According to widespread media reports, the US Department of Justice is readying an antitrust investigation into Google, reportedly examining whether the tech giant broke antitrust law in the operation of its online and advertising businesses.

(Source: Google’s annual meeting minutes)

Equal Shareholder Voting

Shareholders requested that Alphabet’s board initiate and adopt a recapitalization plan for all outstanding stock to have one vote per share. Currently, the company has a multi-class voting structure, where each share of Class A common stock has one vote and each share of Class B common stock has 10 votes. As a result, Page and Brin control over 51% of the company’s total voting power while owning less than 13% of stock, raising concerns that the interests of public shareholders may be subordinated to those of the co-founders. The board of directors rejected the proposal, stating that the current capital and governance structure has provided significant stability to the company and is therefore in the best interests of the stockholders.

Commit to not use Inequitable Employment Practices

This proposal urged Alphabet to commit not to use any “Inequitable Employment Practices,” to encourage focus on human capital management and improve accountability. “Inequitable Employment Practices” are mandatory arbitration of employment-related claims, non-compete agreements with employees, agreements with other companies not to recruit one another’s employees, and involuntary non-disclosure agreements (NDAs) that employees are required to sign in connection with settlement of claims that any Alphabet employee engaged in unlawful discrimination or harassment. Google rejected this proposal too, stating that its updated code of conduct already covers these requirements and that it does not believe implementing the proposal would provide any incremental value or benefit to its stockholders or employees.

Establishment of a Societal Risk Oversight Committee

Stockholders asked Alphabet to establish a Societal Risk Oversight Committee of the Board of Directors, composed of independent directors with relevant experience. The committee would provide an ongoing review of corporate policies and procedures, above and beyond legal and regulatory matters, to assess the potential societal consequences of the company’s products and services, and would offer guidance on strategic decisions.
As with the other board committees, a formal charter for the committee and a summary of its functions would be made publicly available. This proposal was also rejected. Alphabet said, “The current structure of our Board of Directors and its committees (Audit, Leadership Development and Compensation, and Nominating and Corporate Governance) already covers these issues.”

Report on Sexual Harassment Risk Management

Shareholders requested management to review its policies related to sexual harassment, to assess whether the company needs to adopt and implement additional policies, and to report its findings (omitting proprietary information and prepared at reasonable expense) by December 31, 2019. This was rejected as well, on the grounds that Google already has robust conduct policies in place, ongoing improvement and reporting in this area, and a concrete plan of action to do more.

Majority Vote for the Election of Directors

This proposal demanded that director nominees be elected by the affirmative vote of the majority of votes cast at an annual meeting of shareholders, with a plurality vote standard retained for contested director elections, i.e., when the number of director nominees exceeds the number of board seats. It also demanded that a director who receives less than such a majority vote be removed from the board immediately, or as soon as a replacement director can be qualified on an expedited basis. The proposal was rejected on this basis: “Our Board of Directors believes that current nominating and voting procedures for election to our Board of Directors, as opposed to a mandated majority voting standard, provide the board the flexibility to appropriately respond to stockholder interests without the risk of potential corporate governance complications arising from failed elections.”

Report on Gender Pay

Stockholders demanded that Google report on the company’s global median gender pay gap, including associated policy, reputational, competitive, and operational risks, and risks related to recruiting and retaining female talent. The report would be prepared at reasonable cost, omitting proprietary information, litigation strategy, and legal compliance information. Google says it already releases an annual report on its pay equity analyses, “ensuring they pay fairly and equitably,” and doesn’t think an additional report as detailed in the proposal would enhance Alphabet’s existing commitment to fostering a fair and inclusive culture.

Strategic Alternatives

This proposal outlined the need for an orderly process to retain advisors to study strategic alternatives. Its sponsors believe Alphabet may be too large and complex to be managed effectively and want a voluntary strategic reduction in the size of the company, rather than asset sales compelled by regulators. They also want a committee of independent directors to evaluate those alternatives in the exercise of their fiduciary responsibilities to maximize shareholder value. This proposal was also rejected, on this basis: “Our Board of Directors and management do not favor a given size of the company or focus on any strategy based on ideological grounds.
Instead, we develop a strategy based on the company’s customers, partners, users and the communities we serve, and focus on strategies that maximize long-term sustainable stockholder value.”

Nomination of an Employee Representative Director

This proposal asked that an Employee Representative Director be nominated for election to the board by shareholders at Alphabet’s 2020 annual meeting. Stockholders say that employee representation on Alphabet’s board would add knowledge and insight on issues critical to the success of the company, beyond that currently present on the board, and might result in more informed decision-making. Alphabet quoted the “Consideration of Director Nominees” section of its proxy statement, stating that the Nominating and Corporate Governance Committee of Alphabet’s Board of Directors looks for several critical qualities in screening and evaluating potential director candidates, and that only the best and most qualified candidates are elected to the Board of Directors. Accordingly, this proposal was also rejected.

Simple Majority Vote

The shareholders called for supermajority voting requirements to be eliminated and replaced by a requirement for a majority of the votes cast for and against applicable proposals, or a simple majority in compliance with applicable laws. Currently, the voice of regular shareholders is diminished because certain insider shares have 10 times as many votes per share as regular shares, and shareholders have no right to act by written consent. The rejection reason laid out by Google was that more stringent voting requirements in certain limited circumstances are appropriate and in the best interests of Google’s stockholders and the company.

Integrating Sustainability Metrics into Performance Measures

Shareholders requested that the board’s compensation committee prepare a report assessing the feasibility of integrating sustainability metrics into the performance measures that apply to senior executives under the company’s compensation plans or arrangements. They state that Alphabet remains predominantly white, male, and occupationally segregated: among Alphabet’s top 290 managers in 2017, just over one-quarter were women and only 17 managers were underrepresented people of color. Even with this evidence, the proposal was rejected; Alphabet said the company already supports corporate sustainability, including environmental, social and diversity considerations.

Google Search in China

Google employees, as well as human rights organizations, have called on Google to end work on Dragonfly. Google employees have quit to avoid working on products that enable censorship; 1,400 current employees signed a letter protesting Dragonfly. Employees said: “Currently we do not have the information required to make ethically-informed decisions about our work, our projects, and our employment.” Some employees have threatened to strike. Dragonfly may also be inconsistent with Google’s AI Principles. The proposal urged Google to publish a Human Rights Impact Assessment no later than October 30, 2019, examining the actual and potential impacts of censored Google search in China. Google rejected this proposal, stating that before it launches any new search product it would conduct proper human rights due diligence and confer with Global Network Initiative (GNI) partners and other key stakeholders.
If Google ever considers re-engaging this work, it will do so transparently, engaging and consulting widely.

Clawback Policy

Alphabet currently does not disclose an incentive compensation clawback policy in its proxy statement. This proposal urged Alphabet’s Leadership Development and Compensation Committee to adopt a clawback policy under which the committee would review the incentive compensation paid, granted or awarded to a senior executive in cases of misconduct resulting in a material violation of law or Alphabet policy that causes significant financial or reputational harm to Alphabet. Alphabet rejected this proposal based on the following claims: the company has announced significant revisions to its workplace culture policies; Google has ended forced arbitration for employment claims; and any violation of its Code of Conduct or other policies may result in disciplinary action, including termination of employment and forfeiture of unvested equity awards.

Report on Content Governance

Stockholders are concerned that Alphabet’s Google is failing to effectively address content governance concerns, posing risks to shareholder value. They requested that Alphabet issue a report to shareholders assessing the risks posed by content management controversies related to election interference, freedom of expression, and the spread of hate speech. Google said that it has already done significant work in combatting violent and extremist content, misinformation, and election interference, and has continued to keep the public informed about its efforts; it therefore does not believe that implementing this proposal would benefit its stockholders.

Read the full report here.

Protestors showed disappointment

As the annual meeting was conducted, thousands of activists protested across Google offices. A UK-based human rights organization called SumOfUs teamed up with Students for a Free Tibet to propose a breakup of Google. They organized protests outside 12 Google offices around the world to coincide with the shareholder meeting, including in San Francisco, Stockholm, and Mumbai.

https://twitter.com/SumOfUs/status/1141363780909174784

Alphabet board chairman John Hennessy opened the meeting by reflecting on Google’s mission to provide information to the world. “Of course this comes with a deep and growing responsibility to ensure the technology we create benefits society as a whole,” he said. “We are committed to supporting our users, our employees and our shareholders by always acting responsibly, inclusively and fairly.”

In the Q&A session that followed, protestors demanded to know why Page wasn’t in attendance. “It’s a glaring omission,” one of the protestors said. “I think that’s disgraceful.” Hennessy responded, “Unfortunately, Larry wasn’t able to be here,” but noted Page has been at every board meeting. The stockholders pushed back, saying the annual meeting is the only opportunity for investors to address him. Toward the end of the meeting, protestors including Google employees, low-paid contract workers, and local community members were shouting outside: “We shouldn’t have to be here protesting.
We should have been included.” One sign from the protest, listing the first names of the members of Alphabet’s board, read, “Sundar, Larry, Sergey, Ruth, Kent — You don’t speak for us!”

https://twitter.com/GoogleWalkout/status/1141454091455008769
https://twitter.com/LauraEWeiss16/status/1141400193390141440

This meeting also marked the official departure of former Google CEO Eric Schmidt and former Google Cloud chief Diane Greene from Alphabet’s board. Earlier this year, they said they wouldn’t be seeking re-election.

Google Walkout organizer Claire Stapleton resigns after facing retaliation from management
US regulators plan to probe Google on anti-trust issues; Facebook, Amazon & Apple also under legal scrutiny
Google employees lay down actionable demands after staging a sit-in to protest retaliation


Facebook's outgoing Head of Communications and Policy takes blame for hiring PR firm ‘Definers’ and reveals more

Melisha Dsouza
22 Nov 2018
4 min read
On 4th November, the New York Times published a scathing report on Facebook that put the tech giant under scrutiny for its leadership ethics. The report pointed out how Facebook has been following a strategy of ‘delaying, denying and deflecting’ blame for the controversies surrounding it. One recent scandal involved hiring a PR firm called Definers, which did opposition research and shared content criticizing Facebook’s rivals Google and Apple, diverting focus from the impact of Russian interference on Facebook. Definers also pushed the idea that liberal financier George Soros was behind a growing anti-Facebook movement.

Now, in a memo sent to Facebook employees and obtained by TechCrunch, Elliot Schrage, Facebook’s outgoing Head of Communications and Policy, takes the blame for hiring Definers. Schrage, who announced in June, after the Cambridge Analytica scandal, that he was leaving, admitted that his team asked Definers to push negative narratives about Facebook’s competitors. He also stated that Facebook asked Definers to conduct research on George Soros. His argument was that after Soros attacked Facebook in a speech at Davos, calling it a “menace to society,” the company wanted to determine whether he had any financial motivation. According to the TechCrunch report, Schrage denied that the company asked the PR firm to distribute or create fake news.

“I knew and approved of the decision to hire Definers and similar firms. I should have known of the decision to expand their mandate,” Schrage said in the memo. He further stressed his disappointment that so much of the company’s internal discussion has become public. According to the memo, “This is a serious threat to our culture and ability to work together in difficult times.”

Shielding Mark Zuckerberg and Sheryl Sandberg from additional finger-pointing, Schrage added, “Over the past decade, I built a management system that relies on the teams to escalate issues if they are uncomfortable about any project, the value it will provide or the risks that it creates. That system failed here and I’m sorry I let you all down. I regret my own failure here.”

In a follow-up note to the memo, Sheryl Sandberg (COO, Facebook) also shared accountability for hiring Definers: “I want to be clear that I oversee our Comms team and take full responsibility for their work and the PR firms who work with us.” Conveniently enough, this memo comes after the announcement that Schrage is stepping down from his post at Facebook. His replacement, Facebook’s new head of global policy and former U.K. Deputy Prime Minister Nick Clegg, will now be reviewing the company’s work with all political consultants.

The entire scandal has drawn harsh criticism from media figures like Kara Swisher and academics like Scott Galloway. On an episode of Pivot with Kara Swisher and Scott Galloway, Swisher commented that “Sheryl Sandberg ... really comes off the worst in this story, although I still cannot stand the ability of people to pretend that this is not all Mark Zuckerberg’s responsibility.” She followed up with a jarring comment: “He is the CEO. He has 60 percent. He’s an adult, and they’re treating him like this sort of adult boy king who doesn’t know what’s going on. It’s ridiculous.
He knows exactly what’s going on.”

Galloway added that since Sandberg had “written eloquently on personal loss and the important discussion around gender equality,” these accomplishments gave her “unfair” protection, and that it might also be true that she will be “unfairly punished.” He raised questions about both Zuckerberg’s and Sandberg’s leadership, saying, “Can you think of any individuals who have made so much money doing so much damage? I mean, they make tobacco executives look like Mister Rogers.” On 19th November, he tweeted a detailed theory on why Sandberg is still a part of Facebook: because “The Zuck can’t be (fired)” and nobody wants to be the board that “fires the woman.”

https://twitter.com/profgalloway/status/1064559077819326464

Here’s another recent tweet thread from Scott, a sarcastic take on what a “Big Tech” company actually is:

https://twitter.com/profgalloway/status/1065315074259202048

Head over to CNBC to know more about this news.

What is Facebook hiding? New York Times reveals Facebook’s insidious crisis management strategy
NYT Facebook exposé fallout: Board defends Zuckerberg and Sandberg; Media call and transparency report Highlights
BuzzFeed Report: Google’s sexual misconduct policy “does not apply retroactively to claims already compelled to arbitration”


Women win all open board director seats in Open Source Initiative 2019 board elections

Savia Lobo
19 Mar 2019
3 min read
The Open Source Initiative’s recently held 2019 board elections filled six seats on its eleven-person Board of Directors: two elected by the affiliate membership and four by the individual membership. Remarkably, women won all of the open seats.

The six seats include two returning directors, Carol Smith and Molly de Blanc, and three new directors, Pamela Chestek, Elana Hashman, and Hong Phuc Dang. Pamela Chestek (nominated by The Document Foundation) and Molly de Blanc (nominated by the Debian Project) captured the most votes from OSI affiliate members. The last seat is a tie between Christine Hall and Mariatta Wijaya, so a runoff election is required to identify the final OSI board director. The runoff election started yesterday, March 18th (opening at 12:00 a.m. / 00:00), and will end on Monday, March 25th (closing at 12:00 a.m. / 00:00).

Mariatta Wijaya, a core Python developer and a platform engineer at Zapier, said in a statement to Business Insider that she found not all open source projects were as welcoming, especially to women. That is one reason she ran for the board of the Open Source Initiative, an influential organization that promotes and protects open source software communities. Wijaya also said, “I really want to see better diversity across the people who contribute to open source. Not just the users, the creators of open source. I would love to see that diversity improve. I would like to see a better representation. I did find it a barrier initially, not seeing more people who look like me in this space, and I felt like an outsider.”

A post on Slashdot, a tech-focused social news website, discussed the six female candidates in misogynistic language and labeled each woman with how much of a “threat” she supposedly posed. Slashdot took the post down; shortly afterward, the OSI started seeing inappropriate comments posted on its own website.

https://twitter.com/alicegoldfuss/status/1102609189342371840

Molly de Blanc and Patrick Masson said this was the first time they had seen this type of harassment of female OSI board candidates, and that such harassment is not uncommon in open source. Joshua R. Simmons, an open source advocate and web developer, tweeted about “women winning 100% of the open seats in an election that drew attention from a cadre of horrible misogynists.”

https://twitter.com/joshsimmons/status/1107303020293832704

OSI President Simon Phipps said the OSI committee is “thrilled the electorate has picked an all-female cohort to the new Board.”

https://twitter.com/webmink/status/1107367907825274886

To know more about these elections in detail, head over to the OSI official blog post.

UPDATED: In the previous draft, Pamela Chestek, who was listed as a returning board member, is a new board member; and Carol Smith, who was listed as a new board member, is a returning member.

#GoogleWalkout demanded a ‘truly equitable culture for everyone’; Pichai shares a “comprehensive” plan for employees to safely report sexual harassment
MongoDB withdraws controversial Server Side Public License from the Open Source Initiative’s approval process
Google’s pay equity analysis finds men, not women, are underpaid; critics call out design flaws in the analysis


Twitter and Facebook removed accounts of Chinese state-run media agencies aimed at undermining Hong Kong protests

Sugandha Lahoti
20 Aug 2019
5 min read
Update August 23, 2019: After Twitter and Facebook, Google has shut down 210 YouTube channels that were tied to misinformation about the Hong Kong protests. The article has been updated accordingly.

Chinese state-run media agencies have been buying advertisements and promoted tweets on Twitter and Facebook to portray Hong Kong protestors and their pro-democracy demonstrations as violent. These ads, surfaced by Pinboard’s Twitter account, were circulated by the state-run news agency Xinhua, describing the protesters as “escalating violence” and calling for “order to be restored.” In reality, the Hong Kong protests have been widely described as peaceful marches. Pinboard warned and criticized Twitter about these tweets and asked for their takedown. Though Twitter and Facebook are banned in China, Chinese state-run media runs several English-language accounts to present its views to the outside world.

https://twitter.com/pinboard/status/1162711159000055808
https://twitter.com/Pinboard/status/1163072157166886913

Twitter bans 936 accounts managed by the Chinese state

Following this revelation, Twitter said in a blog post yesterday that it had discovered a “significant state-backed information operation focused on the situation in Hong Kong, specifically the protest movement.” It identified 936 accounts that were undermining “the legitimacy and political positions of the protest movement on the ground,” and found a larger, spammy network of approximately 200,000 accounts representing the most active portions of the campaign. These were suspended for a range of violations of Twitter’s platform manipulation policies. The accounts were able to access Twitter through VPNs and over a “specific set of unblocked IP addresses” from within China. “Covert, manipulative behaviors have no place on our service — they violate the fundamental principles on which our company is built,” said Twitter.

Twitter bans ads from Chinese state-run media

Twitter also banned advertising from Chinese state-run news media entities across the world, declaring that affected accounts will be free to continue using Twitter to engage in public conversation, but not its advertising products. The policy applies to news media entities that are either financially or editorially controlled by the state, Twitter said. Affected entities will be notified directly and given 30 days to offboard from advertising products, and no new campaigns will be allowed. However, Pinboard argues that 30 days is too long and that Twitter should suspend Xinhua’s ad account immediately.

https://twitter.com/Pinboard/status/1163676410998689793

Pinboard also calls on Twitter to disclose how much money it took from Xinhua, how many ads it ran for Xinhua since the start of the Hong Kong protests in June, and how those ads were targeted.

Facebook blocks Chinese accounts engaged in inauthentic behavior

Following a tip shared by Twitter, Facebook removed seven Pages, three Groups and five Facebook accounts involved in coordinated inauthentic behavior as part of a small network that originated in China and focused on Hong Kong. Unlike Twitter, however, Facebook did not announce any policy changes in response to the discovery. YouTube was also notably absent, at first, from the fight against Chinese misinformation propaganda.

https://twitter.com/Pinboard/status/1163694701716766720

On 22nd August, however, YouTube axed 210 channels found to be spreading misinformation about the Hong Kong protests.
“Earlier this week, as part of our ongoing efforts to combat coordinated influence operations, we disabled 210 channels on YouTube when we discovered channels in this network behaved in a coordinated manner while uploading videos related to the ongoing protests in Hong Kong,” Shane Huntley, director of software engineering for Google Security’s Threat Analysis Group, said in a blog post. “We found use of VPNs and other methods to disguise the origin of these accounts and other activity commonly associated with coordinated influence operations.”

Kyle Bass, Chief Investment Officer of Hayman Capital Management, called on all social media outlets to ban Chinese state-run propaganda sources. He tweeted, “Twitter, Facebook, and YouTube should BAN all State-backed propaganda sources in China. It’s clear that these 200,000 accounts were set up by the “state” of China. Why allow Xinhua, global times, china daily, or any others to continue to act? #BANthemALL”

Public acknowledges Facebook and Twitter’s role in exposing Chinese state media

Experts and journalists appreciated the role social media companies played in exposing those responsible and the way they responded to state interventions. Bethany Allen-Ebrahimian, President of the International China Journalist Association, called it huge news: “This is the first time that US social media companies are openly accusing the Chinese government of running Russian-style disinformation campaigns aimed at sowing discord,” she tweeted. She added, “We’ve been seeing hints that China has begun to learn from Russia’s MO, such as in Taiwan and Cambodia. But for Twitter and Facebook to come out and explicitly accuse the Chinese govt of a disinformation campaign is another whole level entirely.”

Adam Schiff, Representative (D-CA 28th District), tweeted, “Twitter and Facebook announced they found and removed a large network of Chinese government-backed accounts spreading disinformation about the protests in Hong Kong. This is just one example of how authoritarian regimes use social media to manipulate people, at home and abroad.” He added, “Social media platforms and the U.S. government must continue to identify and combat state-backed information operations online, whether they’re aimed at disrupting our elections or undermining peaceful protesters who seek freedom and democracy.”

Social media platforms took an appreciable step against Chinese state-run media actors attempting to manipulate their platforms to discredit grassroots organizing in Hong Kong. It will be interesting to see whether they continue to protect individual freedoms, and provide a safe and transparent platform, if state actors from countries where they have huge audiences, such as India or the US, adopt similar tactics to suppress or manipulate the public or target movements.

Facebook bans six toxic extremist accounts and a conspiracy theory organization
Cloudflare terminates services to 8chan following yet another set of mass shootings in the US
YouTube’s ban on “instructional hacking and phishing” videos receives backlash from the infosec community

Australia’s ACCC publishes a preliminary report recommending Google, Facebook be regulated and monitored for discriminatory and anti-competitive behavior

Sugandha Lahoti
10 Dec 2018
5 min read
The Australian Competition and Consumer Commission (ACCC) has today published a 378-page preliminary report to make the Australian government and the public aware of the impact of social media and digital platforms on targeted advertising and user data collection. The report also highlights the ACCC’s concerns regarding the “market power held by these key platforms, including their impact on Australian businesses and, in particular, on the ability of media businesses to monetize their content.”

The report follows an inquiry that Treasurer Scott Morrison MP asked the ACCC to hold late last year into how online search engines, social media, and digital platforms impact media and advertising services markets. The inquiry demanded answers on the range and reliability of news available via Google and Facebook. The ACCC also expressed concern about the large amount and variety of data which Google and Facebook collect on Australian consumers, which users are not actively willing to provide.

Why did the ACCC choose Google and Facebook?

Google and Facebook are the two largest digital platforms in Australia and the most visited websites in the country. They also have similar business models: both rely on consumer attention and data to sell advertising opportunities, and both have substantial market power. Per the report, each month approximately 19 million Australians use Google Search, 17 million access Facebook, 17 million watch YouTube (which is owned by Google) and 11 million access Instagram (which is owned by Facebook). This widespread and frequent use means these platforms occupy a key position for businesses looking to reach Australian consumers, including advertisers and news media businesses.

Recommendations made by the ACCC

The report contains 11 preliminary recommendations to these digital platforms and eight areas for further analysis. Per the report:

#1 The ACCC wants to amend merger law to make it clearer that the following are relevant factors: the likelihood that an acquisition would result in the removal of a potential competitor, and the amount and nature of data which the acquirer would likely gain access to as a result of the acquisition.

#2 The ACCC wants Facebook and Google to provide advance notice of the acquisition of any business with activities in Australia, with sufficient time to enable a thorough review of the likely competitive effects of the proposed acquisition.

#3 The ACCC wants suppliers of operating systems for mobile devices, computers, and tablets to provide consumers with options for internet browsers and search engines (rather than providing a default).

#4 The ACCC wants a regulatory authority to monitor, investigate and report on whether digital platforms are engaging in discriminatory conduct by favoring their own business interests over those of advertisers or potentially competing businesses.

#5 The regulatory authority should also monitor, investigate and report on the ranking of news and journalistic content by digital platforms and the provision of referral services to news media businesses.

#6 The ACCC wants the government to conduct a separate, independent review to design a regulatory framework covering all entities that produce news and journalistic content in Australia, focusing on underlying principles, the extent of regulation, content rules, and enforcement.
#7 Per the ACCC, the ACMA (Australian Communications and Media Authority) should adopt a mandatory standard for take-down procedures for copyright-infringing content.

#8 The ACCC proposes amendments to the Privacy Act: strengthen notification requirements; introduce an independent third-party certification scheme; strengthen consent requirements; enable the erasure of personal information; increase the penalties for breach of the Privacy Act; introduce direct rights of action for individuals; and expand resourcing for the OAIC (Office of the Australian Information Commissioner) to support further enforcement activities.

#9 The ACCC wants the OAIC to develop a code of practice under Part IIIB of the Privacy Act to provide Australians with greater transparency and control over how their personal information is collected, used and disclosed by digital platforms.

#10 Per the ACCC, the Australian government should adopt the Australian Law Reform Commission’s recommendation to introduce a statutory cause of action for serious invasions of privacy.

#11 Per the ACCC, unfair contract terms should be illegal (not just voidable) under the Australian Consumer Law.

“The inquiry has also uncovered some concerns that certain digital platforms have breached competition or consumer laws, and the ACCC is currently investigating five such allegations to determine if enforcement action is warranted,” ACCC Chair Rod Sims said.

The ACCC is seeking feedback on its preliminary recommendations and the eight proposed areas for further analysis and assessment. Feedback can be shared by email to platforminquiry@accc.gov.au by 15 February 2019.

AI Now Institute releases Current State of AI 2018 Report
Australia passes a rushed anti-encryption bill “to make Australians safe”; experts find “dangerous loopholes” that compromise online privacy and safety
Australia’s Facial recognition and identity system can have “chilling effect on freedoms of political discussion, the right to protest and the right to dissent”: The Guardian report


Diffractive Deep Neural Network (D2NN): UCLA-developed AI device can identify objects at the speed of light

Bhagyashree R
08 Aug 2018
3 min read
Researchers at the University of California, Los Angeles (UCLA) have developed a 3D-printed, all-optical deep learning architecture called the Diffractive Deep Neural Network (D2NN). A D2NN is a deep neural network physically formed by multiple layers of diffractive surfaces that work in collaboration to optically perform an arbitrary function. While the inference/prediction of the physical network is all-optical, the learning that leads to its design is done on a computer.

How does D2NN work?

A computer-simulated design was created first; the researchers then used a 3D printer to create very thin polymer wafers whose uneven surfaces diffract light coming from the object in different directions. The layers are composed of tens of thousands of artificial neurons, tiny pixels through which the light travels. Together, the layers form an “optical network” that shapes how incoming light propagates through them. The network identifies an object because the light coming from that object is diffracted mostly toward the single detector pixel assigned to that type of object. The network is trained on a computer to identify objects by learning the pattern of diffracted light each object produces as its light passes through the device. A minimal numerical sketch of this forward pass appears after this article.

What are its advantages?

- Scalable: it can easily be scaled up using numerous high-throughput, large-area 3D fabrication methods, such as soft lithography and additive manufacturing, along with wide-field optical components and detection systems.
- Easily reconfigurable: a D2NN can be improved by adding 3D-printed layers or replacing existing layers with newly trained ones.
- Lightning speed: once the device is trained and printed, it works at the speed of light.
- Efficient: no energy is consumed to run the device itself.
- Cost-effective: the device can be reproduced for less than $50.

What areas can it be used in?

- Image analysis
- Feature detection
- Object classification
- New microscope or camera designs that can perform unique imaging tasks

This new AI device could find applications in medical technologies, data-intensive tasks, robotics, security, and any application where image and video data are essential. Refer to UCLA’s official news article to know more in detail, or to the paper “All-optical machine learning using diffractive deep neural networks.”

OpenAI builds reinforcement learning based system giving robots human like dexterity
Datasets and deep learning methodologies to extend image-based applications to videos
AutoAugment: Google’s research initiative to improve deep learning performance
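To make the D2NN forward pass described above concrete, here is a minimal, illustrative NumPy simulation. This is not the UCLA team’s code: random phase masks stand in for the trained 3D-printed layers, the angular spectrum method models free-space diffraction between layers, and the predicted class is simply the brightest detector region. All parameter values (grid size, wavelength, layer spacing) are assumptions chosen for illustration; a real D2NN would first optimize the mask phases on a computer before printing.

```python
import numpy as np

def propagate(field, dist, wavelength, dx):
    """Free-space propagation of a coherent field via the angular spectrum method."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                      # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dist))

rng = np.random.default_rng(0)
N, LAYERS, CLASSES = 64, 5, 10                        # grid size, diffractive layers, classes
wavelength, dx, dist = 0.75e-3, 0.4e-3, 30e-3         # illustrative terahertz-scale numbers

# Each layer is a phase mask; in the real device these phases are learned on a
# computer and then 3D-printed as surface-height variations on polymer wafers.
masks = [np.exp(1j * rng.uniform(0, 2 * np.pi, (N, N))) for _ in range(LAYERS)]

def classify(image):
    field = image.astype(complex)                     # object encoded as input amplitude
    for mask in masks:
        field = propagate(field, dist, wavelength, dx) * mask
    field = propagate(field, dist, wavelength, dx)    # final hop to the detector plane
    intensity = np.abs(field) ** 2
    # One detector region per class: the brightest region wins, mirroring how a
    # trained D2NN steers most of the light onto the pixel assigned to the class.
    regions = np.array_split(np.arange(N), CLASSES)
    scores = [intensity[:, cols].sum() for cols in regions]
    return int(np.argmax(scores))

print(classify(rng.random((N, N))))                   # untrained masks -> arbitrary class
```

With trained masks in place of the random ones, this same forward pass is what the printed wafers compute physically, which is why inference runs at the speed of light and draws no power.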


Elon Musk's Neuralink unveils a “sewing machine-like” robot to control computers via the brain

Sugandha Lahoti
17 Jul 2019
8 min read
After two years of being super-secretive about their work, Neuralink, Elon Musk’s neurotechnology company, has finally presented its progress in brain-computer interface technology. The livestream, which was uploaded on YouTube, showcases a “sewing machine-like” robot that can implant ultrathin threads deep into the brain, giving people the ability to control computers and smartphones using their thoughts. For its brain-computer interface tech, the company has received $158 million in funding and has 90 employees.

Note: All images are taken from the Neuralink livestream video unless stated otherwise.

Elon Musk opened the presentation talking about the primary aim of Neuralink, which is to use brain-computer interface tech to understand and treat brain disorders, preserve and enhance the brain, and ultimately, and this may sound weird, “achieve a symbiosis with artificial intelligence”. He added, “This is not a mandatory thing. It is a thing you can choose to have if you want. This is something that I think will be really important on a civilization-level scale.”

Neuralink wants to build, record from, and selectively stimulate as many neurons as possible across diverse brain areas. They have three goals:

Increase by orders of magnitude the number of neurons you can read from and write to in safe, long-lasting ways.
At each stage, produce devices that serve critical unmet medical needs of patients.
Make inserting a computer connection into your brain as safe and painless as LASIK eye surgery.

The robot they have built was designed to be completely wireless, with a practical bandwidth that is usable at home and lasts for a long time. Their system has an N1 sensor, an 8mm wide, 4mm tall cylinder with 1,024 electrodes. It consists of a thin film that carries the threads. The threads are placed into the brain with thin needles by a robotic system, in a manner akin to a sewing machine, avoiding blood vessels. The robot peels the threads off the N1 sensor one by one and places them in the brain: a needle grabs each thread by a small loop and inserts it. The robot operates under the supervision of a human neurosurgeon, who lays out where the threads are placed. The actual needle the robot uses is 24 microns wide. The procedure makes a 2mm incision near the ear, which is dilated to 8mm.

[Image: The threads]
[Image: A robot implants threads using a needle]

For the first patients, the Neuralink team is looking at four sensors, which will be connected via very small wires under the scalp to an inductive coil behind the ear. This is encased in a wearable device they call the “Link”, which contains a Bluetooth radio and a battery. The implants will be controlled through an iPhone app.

[Image: Neuralink/MetaLab iPhone app. Source: NYT]

The goal is to drill four 8mm holes into paralyzed patients’ skulls and insert implants that will give them the ability to control computers and smartphones using their thoughts. For the first product, Neuralink is focusing on giving patients the ability to control their mobile device, and then redirect the output from their phone to a keyboard or a mouse. The company will seek U.S. Food and Drug Administration approval and aspires to a first-in-human clinical study by 2020, treating upper cervical spinal cord injury. Those patients are expected to get four 1,024-channel sensors, one each in the primary motor cortex, supplementary motor area, and premotor cortex, with closed-loop feedback into the primary somatosensory cortex.
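Neuralink has not published its on-device signal-processing code, so purely as a hedged illustration of what reading activity from many electrode channels involves, here is a toy threshold-based spike detector. The sample rate, noise model, and 5-sigma threshold are assumptions for the sketch, not Neuralink specifics.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 20_000                        # sample rate in Hz (an assumption)
n_channels, n_samples = 1024, fs   # one second across 1,024 channels

# Simulated electrode noise, plus a few injected negative-going spikes
# on channel 0 so the detector has something to find.
data = rng.normal(0.0, 1.0, size=(n_channels, n_samples))
for t in (2_000, 9_000, 15_000):
    data[0, t:t + 20] -= 8.0

# A common heuristic: estimate per-channel noise via the median absolute
# deviation, then flag samples crossing roughly 5 sigma.
sigma = np.median(np.abs(data), axis=1, keepdims=True) / 0.6745
crossings = data < -5.0 * sigma

print("spike samples on channel 0:", int(crossings[0].sum()))
print("channels with any threshold crossing:", int(crossings.any(axis=1).sum()))
```

Real systems add band-pass filtering, spike-shape templates, and on-chip compression, but the thresholding idea above is the usual starting point for measuring "the strength of brain spikes" mentioned below.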
As reported by Bloomberg, which received a media briefing before the event, Neuralink said it has performed at least 19 surgeries on animals with its robots and successfully placed the wires, which it calls “threads,” about 87% of the time. In one experiment, the team implanted a USB-C port in a lab rat’s head. A wire attached to the port transmitted the rat’s brain activity to a nearby computer, where software recorded and analyzed it, measuring the strength of brain spikes. The amount of data being gathered from the lab rat was about 10 times greater than what today’s most powerful sensors can collect.

The flexibility of the Neuralink threads would be an advance, Terry Sejnowski, the Francis Crick Professor at the Salk Institute for Biological Studies in La Jolla, Calif., told the New York Times. However, he noted that the Neuralink researchers still needed to prove that the insulation of their threads could survive for long periods in a brain’s environment, which has a salt solution that deteriorates many plastics.

Musk's bizarre attempts to revolutionize the world are far from reality

Elon Musk is known for his dramatic promises and showmanship as much as he is for his eccentric projects. But how far they are grounded in reality is another matter. In May he successfully launched his mammoth space mission, Starlink, sending 60 communications satellites to orbit; they will eventually be part of a single constellation providing high-speed internet to the globe. However, the launch went ahead only after being postponed twice to “update satellite software”. Not just that, three of the 60 satellites have lost contact with ground control teams, a SpaceX spokesperson said on June 28. Experts are already worried about how the Starlink constellation will contribute to the space debris problem. Currently, there are 2,000 operational satellites in orbit around Earth, according to the latest figures from the European Space Agency, and the completed Starlink constellation will drastically add to that number. Observers had also noticed that some Starlink satellites had not initiated orbit raising after being released.

Musk’s much-anticipated Hyperloop (first publicly mentioned in 2012) was supposed to shuttle passengers at near-supersonic speeds via pods traveling in a long, underground tunnel. But it was soon reduced to a car in a very small tunnel. When the underground tunnel was unveiled to the media in California in December last year, reporters climbed into electric cars made by Musk’s Tesla and were treated to a 40 mph ride along a bumpy path. Here as well there have been public concerns regarding its impact on public infrastructure and the environment. The biggest questions surrounding Hyperloop’s environmental impact are its effect on carbon dioxide emissions, the effect of the infrastructure on ecosystems, and the environmental footprint of the materials used to build it. Other concerns include noise pollution and how to repurpose Hyperloop tubes and tunnels at the end of their lifespan.

Researchers from Tencent Keen Security Lab have criticized Tesla’s self-driving car software, publishing a report detailing their successful attacks on Tesla firmware. These include remote control over the steering and an adversarial-example attack on the Autopilot that confuses the car into driving into the oncoming traffic lane. Musk had also promised a fully self-driving car for Tesla by 2020, which caused a lot of activity in the stock markets. But most are skeptical about this claim as well.
Whether Elon Musk’s AI-symbiosis visions will come into existence in the foreseeable future is questionable. Neuralink’s long-term goals are characteristically unrealistic, considering how little is known about the human brain; cognitive functions and their representation as brain signals are still an area where much further research is required. While Musk’s projects are known for their technical excellence, history shows a lack of thought about the broader consequences and costs of such innovations, such as their ethical concerns and environmental and societal impacts.

Neuralink’s implant is also prone to invading one’s privacy, as it will be storing sensitive medical information about a patient. There is also the likelihood of it violating one’s constitutional rights, such as freedom of speech and expression, among others. What does it mean to live in a world where one’s thoughts are constantly monitored and not truly one’s own? Then, because this is an implant, what if the electrodes malfunction and send wrong signals to the brain? Who will be accountable in such scenarios? Although the FDA will be probing into such questions, these are questions any responsible company should ask of itself proactively while developing life-altering products or services. They are equally important aspects, worthy of stage time in a product launch. Regardless, Musk’s bold claims and dramatic presentations are sure to hold the attention of investors and enthusiasts for now.

Elon Musk reveals big plans with Neuralink
SpaceX shares new information on Starlink after the successful launch of 60 satellites
What Elon Musk can teach us about Futurism & Technology Forecasting


Did you know Facebook shares the data you share with them for ‘security’ reasons with advertisers?

Natasha Mathur
28 Sep 2018
5 min read
Facebook is constantly under the spotlight these days when it comes to controversies regarding users’ data and privacy. A new research paper published by Princeton University researchers states that Facebook shares the contact information you handed over for security purposes with its advertisers. The study was first brought to light by a Gizmodo writer, Kashmir Hill.

“Facebook is not content to use the contact information you willingly put into your Facebook profile for advertising. It is also using contact information you handed over for security purposes and contact information you didn’t hand over at all, but that was collected from other people’s contact books, a hidden layer of details Facebook has about you that I’ve come to call ‘shadow contact information’,” writes Hill.

Recently, Facebook introduced a feature called custom audiences. Unlike with traditional audiences, the advertiser is allowed to target specific users. To do so, the advertiser uploads users’ PII (personally identifiable information) to Facebook. After the upload is done, Facebook matches the given PII against platform users, develops an audience comprising the matched users, and allows the advertiser to further target that specific audience. Essentially, with Facebook, the holy grail of marketing, which is targeting an audience of one, is practically possible; never mind whether that audience wanted it or not.

In today’s world, different social media platforms frequently collect various kinds of personally identifying information (PII), including phone numbers, email addresses, names, and dates of birth. The majority of this PII represents extremely accurate, unique, and verified user data. Because of this, these services have an incentive to exploit this personal information for other purposes, one such scenario being providing advertisers with more accurate audience targeting.

The paper, titled “Investigating sources of PII used in Facebook’s targeted advertising”, is written by Giridhari Venkatadri, Elena Lucherini, Piotr Sapiezynski, and Alan Mislove. “In this paper, we focus on Facebook and investigate the sources of PII used for its PII-based targeted advertising feature. We develop a novel technique that uses Facebook’s advertiser interface to check whether a given piece of PII can be used to target some Facebook user and use this technique to study how Facebook’s advertising service obtains users’ PII,” reads the paper.

The researchers developed a novel methodology, which involved studying how Facebook obtains the PII it uses to provide custom audiences to advertisers. “We test whether PII that Facebook obtains through a variety of methods (e.g., directly from the user, from two-factor authentication services, etc.) is used for targeted advertising, whether any such use is clearly disclosed to users, and whether controls are provided to users to help them limit such use,” reads the paper. The paper uses audience size estimates to study which sources of PII are used for PII-based targeted advertising, and the researchers applied this methodology to investigate which sources of PII were actually used by Facebook for its PII-based targeted advertising platform. They also examined what information gets disclosed to users and what control users have over their PII. A schematic sketch of the size-estimate idea follows.
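The paper’s exact tooling is not reproduced here; the following is a hedged, fully simulated sketch of the size-estimate logic as described above. The `size_estimate` function is a stand-in for Facebook’s advertiser interface, and the round-to-ten behavior and padding counts are illustrative assumptions, not the platform’s real thresholds.

```python
# The platform's hidden view of which records match real accounts (simulated).
MATCHED = {f"user{i}@example.com" for i in range(19)} | {"bob@example.com"}

def size_estimate(records):
    """Stand-in for the advertiser interface's audience size estimate.

    Counts matched records and rounds down to a multiple of 10, mimicking
    the coarse-grained estimates an advertiser actually sees.
    """
    matched = sum(1 for r in records if r in MATCHED)
    return (matched // 10) * 10

def is_targetable(candidate, padding):
    """Compare two audiences that differ only in the candidate record."""
    return size_estimate(padding + [candidate]) > size_estimate(padding)

# 19 known-matching padding records sit just below a rounding threshold,
# so a single extra matched record is enough to move the estimate.
padding = [f"user{i}@example.com" for i in range(19)]  # estimate: 10
print(is_targetable("bob@example.com", padding))       # True  (20 -> 20)
print(is_targetable("nobody@example.com", padding))    # False (19 -> 10)
```

The key design point is that coarse, rounded estimates leak one bit of information (matched or not) once the audience is padded to sit just below a rounding boundary, which is how a single phone number or email can be tested for targetability.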
What sources of PII are actually being used by Facebook?

The researchers found that Facebook allows its users to add contact information (email addresses and phone numbers) to their profiles. While any arbitrary email address or phone number can be added, it is not displayed to other users unless verified (through a confirmation email or confirmation SMS message, respectively). This is the most direct and explicit way of providing PII to advertisers.

The researchers then examined whether PII provided by users for security purposes, such as two-factor authentication (2FA) or login alerts, is being used for targeted advertising. They added and verified a phone number for 2FA on one of the authors’ accounts. The added phone number became targetable after 22 days. This proved that a phone number provided for 2FA was indeed used for PII-based advertising, despite the privacy controls that had been set on the account.

What control do users have over PII?

Facebook allows users the liberty of choosing who can see each piece of PII listed on their profiles, the current list of possible general settings being: Public, Friends, Only Me. Users can also restrict the set of users who can search for them using their email address or phone number; here users are provided with the following options: Everyone, Friends of Friends, and Friends. Facebook provides users a list of advertisers who have included them in a custom audience using their contact information, and users can opt out of receiving ads from individual advertisers listed there. But information about what PII is used by advertisers is not disclosed.

What information about how Facebook uses PII gets disclosed to the users?

On adding mobile phone numbers directly to one’s Facebook profile, no information about the uses of that number is directly disclosed; this information is only disclosed when adding a number from the Facebook website. As per the research results, there is very little disclosure to users, often in the form of generic statements that do not refer to the uses of the particular PII being collected, or to the fact that it may be used to allow advertisers to target users.

“Our paper highlights the need to further study the sources of PII used for advertising, and shows that more disclosure and transparency needs to be provided to the user,” say the researchers in the paper. For more information, check out the official research paper.

Ex-employee on contract sues Facebook for not protecting content moderators from mental trauma
How far will Facebook go to fix what it broke: Democracy, Trust, Reality
Mark Zuckerberg publishes Facebook manifesto for safeguarding against political interference

#GooglePayoutsForAll: A digital protest against Google’s $135 million execs payout for misconduct

Natasha Mathur
14 Mar 2019
6 min read
The Google Walkout for Real Change group tweeted out its protest earlier this week against the news of Google confirming that it paid $135 million as exit packages to two top execs accused of sexual assault. The group castigated the ‘multi-million dollar payouts’ and asked people to use the hashtag #GooglePayoutsForAll to demonstrate different and better ways this obscenely large amount of ‘hush money’ could have been used.

https://twitter.com/GoogleWalkout/status/1105556617662214145

The news of Google paying high exit packages to its senior execs, namely Amit Singhal (former Senior VP of Google search) and Andy Rubin (creator of Android), was first highlighted in a report by the New York Times last October. As per the report, Google paid $90 million to Rubin and $15 million to Singhal. A lawsuit filed on Monday this week by James Martin, an Alphabet shareholder, further confirmed this news. The lawsuit states that this decision taken by the directors of Alphabet caused significant financial harm to the company, apart from deteriorating its reputation, goodwill, and market capitalization.

Meredith Whittaker, one of the early organizers of the Google Walkout last November, tweeted, “$135 million could fix Flint's water crisis and still have $80 million left.” Vicki Tardif, another Googler, summed up the sentiments in her tweet: “$135M is 1.35 times what Google.org gave out in grants in 2016.” An ACLU researcher pointed out that $135M could, in addition to feeding the hungry, housing the homeless, and paying off some student loans, also support local journalism killed by online ads.

The public support for the call to protest using the hashtag #GooglePayoutsForAll has been awe-inspiring. Some shared their stories of injustice in cases of sexual assault, some condemned Google for its handling of sexual misconduct, while others put the amount of money Google wasted on these execs into a larger perspective.

Better ways Google could have used the $135 million it wasted on execs’ payouts, according to Twitter

Invest in people to reduce structural inequities in the company

$135M could have been paid to the actual victims who faced harassment and sexual assault.
https://twitter.com/xzzzxxzx/status/1105681517584572416

Google could have used the money to fix the wage and level gap for women of color within the company.
https://twitter.com/sparker2/status/1105511306465992705

$135 million could be used to adjust the 16% median pay gap of the 1,240 women working in Google’s UK offices.
https://twitter.com/crschmidt/status/1105645484104998913

$135M could have been used by Google for TVC benefits. It could also be used to provide rigorous training to Google employees on the impact misinformation within the company can have on women and other marginalized groups.
https://twitter.com/EricaAmerica/status/1105546835526107136

For $135M, Google could have paid the 114 creators featured in its annual “YouTube Rewind”, who are otherwise unpaid for their time and participation.
https://twitter.com/crschmidt/status/1105641872033230848

Improve communities by supporting social causes

Google could have paid $135M to RAINN, the largest American anti-sexual-assault nonprofit, covering its expenses for the next 18 years.
https://twitter.com/GoogleWalkout/status/1105450565193121792

Funding 1,800 school psychologists in public schools for one year.
https://twitter.com/markfickett/status/1105640930936324097

Building real, affordable housing solutions in collaboration with London Breed, SFGOV, and other Bay Area officials.
https://twitter.com/jillianpuente/status/1105922474930245636

$135M could provide insulin for nearly 10,000 people with Type 1 diabetes in the US.
https://twitter.com/GoogleWalkout/status/1105585078590210051

Paying for the first year of treatment for 1,000 people with stage IV breast cancer.
https://twitter.com/GoogleWalkout/status/1105845951938347008

Be a responsible corporate citizen

Funding approximately 5,300 low-cost electric vehicles for Google staff, saving around 25,300 metric tons of carbon dioxide in vehicle emissions per year.
https://twitter.com/crschmidt/status/1105698893361233926

Providing free Google Fiber internet to 225,000 homes for a year.
https://twitter.com/markfickett/status/1105641215389773825

Giving a $5/hr raise to 12,980 service workers at Silicon Valley tech campuses.
https://twitter.com/LAuerhahn/status/1105487572069801985

$135M could have been used for the construction of affordable homes, protecting 1,100 low-income families in San Jose from the coming rent hikes of Google’s planned mega-campus.
https://twitter.com/JRBinSV/status/1105478979543154688

#GooglePayoutsForAll: Another initiative to promote awareness of structural inequities in tech

The core idea behind launching #GooglePayoutsForAll on Twitter was to promote awareness among people regarding the real issues within the company. It urged people to discuss how Google is failing to maintain the ‘open culture’ it promises to the outside world. It also highlights how mottos such as “Don’t be evil” and “Do the right thing” only make for pretty wall decor, and that there is still a long way to go to see those ideals in action.

The group gained its name when more than 20,000 Google employees, along with vendors, contractors, and temps, organized the Google “walkout for real change” and walked out of their offices in November 2018. The walkout was a protest against the hushed and unfair handling of sexual misconduct within Google. Ever since then, Googlers have been consistently taking initiatives to bring more transparency, accountability, and fairness to the company.

For instance, the team launched an industry-wide awareness campaign against forced arbitration in January, sharing information about arbitration on their Twitter and Instagram accounts throughout the day. The campaign was a success, as Google finally ended its forced arbitration policy, which goes into effect this month for all employees (including contractors, temps, and vendors) and for all kinds of discrimination. Also, House and Senate members in the US proposed a bipartisan bill last month to prohibit companies from using forced arbitration clauses.

Although many found the #GooglePayoutsForAll idea praiseworthy, some believe the initiative doesn’t put any real pressure on Google to bring about real change within the company.
https://twitter.com/Jeffanie16/status/1105541489722081290
https://twitter.com/Jeffanie16/status/1105546783063752709
https://twitter.com/Jeffanie16/status/1105547341862457344

Now, we don’t necessarily disagree with this opinion; however, the initiative can’t be completely disregarded, as it managed to get people who would otherwise hesitate to open up talking extensively about the real issues within the company. As Liz Fong-Jones puts it, “Strikes and walkouts are more sustainable long-term than letting Google drive each organizer out one by one. But yes, people *are* taking action in addition to speaking up. And speaking up is a bold step in companies where workers haven't spoken up before.”

The Google Walkout group has not yet announced what it intends to do next following this digital protest. However, the group has been organizing meetups, such as the one earlier this month on March 6th, where it invited tech contract workers to discuss building solidarity to make work better for everyone. We are only seeing the beginning of a powerful worker movement taking shape in Silicon Valley.

Recode Decode #GoogleWalkout interview shows why data and evidence don’t always lead to right decisions in even the world’s most data-driven company
Liz Fong Jones, prominent ex-Googler shares her experience at Google and ‘grave concerns’ for the company
Google’s pay equity analysis finds men, not women, are underpaid; critics call out design flaws in the analysis


Google and Facebook working hard to clean image after the media backlash from the Christchurch terrorist attack

Fatema Patrawala
22 Mar 2019
8 min read
Last Friday’s uncontrolled spread of horrific videos of the Christchurch mosque attack, and the propaganda coup it handed to those espousing hateful ideologies, raised questions about social media. The tech companies scrambled to take action in time, given the speed and volume of content that was uploaded, re-uploaded, and shared by users worldwide. In Washington and Silicon Valley, the incident crystallized growing concerns about the extent to which government and market forces have failed to check the power of social media. The failure highlighted how social media companies struggle to police content on platforms that are massively lucrative and persistently vulnerable to outside manipulation, despite years of promises to do better.

After the white supremacist live-streamed the attack and uploaded the video to Facebook, Twitter, YouTube, and other platforms across the internet, these tech companies faced a backlash from the media and internet users worldwide, to the extent that they were regarded as complicit in promoting white supremacism. In response, Google and Facebook provided status reports on what they went through when the video was reported, the kinds of challenges they faced, and their next steps to combat such incidents in the future.

Google’s report so far...

Google, in an email to Motherboard, says it employs 10,000 people to moderate the company’s platforms and products. It also described the process it follows when a user reports a piece of potentially violating content, such as the attack video:

The user-flagged report goes to a human moderator to assess.
The moderator is instructed to flag all pieces of content related to the attack as “Terrorist Content,” including full-length or partial copies of the manifesto.
Because of the document’s length, moderators are told not to spend an extensive amount of time trying to confirm whether a piece of content contains part of the manifesto. Instead, if the moderator is unsure, they should err on the side of caution and still label the content as “Terrorist Content,” which will then be reviewed by a second moderator.
The second moderator is told to take time to verify that it is a piece of the manifesto, and to appropriately mark the content as terrorism no matter how long or short the section may be.
Moderators are told to mark the manifesto or video as terrorist content unless there is an Educational, Documentary, Scientific, or Artistic (EDSA) context to it.

Google further adds that it wants to preserve journalistic or educational coverage of the event, but does not want to allow the video or manifesto itself to spread throughout the company’s services without additional context. At some point, Google took the unusual step of automatically rejecting any footage of violence from the attack video, cutting out the process of a human determining the context of the clip. If, say, a news organization was impacted by this change, the outlet could appeal the decision, Google commented.

“We made the call to basically err on the side of machine intelligence, as opposed to waiting for human review,” YouTube’s Chief Product Officer Neal Mohan told the Washington Post in an article published Monday.

Google also tweaked the search function to show results from authoritative news sources. It suspended the ability to search for clips by upload date, making it harder for people to find copies of the attack footage.
"Since Friday’s horrific tragedy, we’ve removed tens of thousands of videos and terminated hundreds of accounts created to promote or glorify the shooter," a YouTube spokesperson said. “Our teams are continuing to work around the clock to prevent violent and graphic content from spreading, we know there is much more work to do,” the statement added. Facebook’s update so far... Facebook on Wednesday also shared an update on how they have been working with the New Zealand Police to support their investigation. It provided additional information on how their products were used to circulate videos and how they plan to improve them. So far Facebook has provided the following information: The video was viewed fewer than 200 times during the live broadcast. No users reported the video during the live broadcast. Including the views during the live broadcast, the video was viewed about 4,000 times in total before being removed from Facebook. Before Facebook was alerted to the video, a user on 8chan posted a link to a copy of the video on a file-sharing site. The first user report on the original video came in 29 minutes after the video started, and 12 minutes after the live broadcast ended. In the first 24 hours, Facebook removed more than 1.2 million videos of the attack at upload, which were therefore prevented from being seen on our services. Approximately 300,000 additional copies were removed after they were posted. As there were questions asked to Facebook about why artificial intelligence (AI) didn’t detect the video automatically. Facebook says AI has made massive progress over the years to proactively detect the vast majority of the content it can remove. But it’s not perfect. “To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare.” says Guy Rosen VP Product Management at Facebook. Guy further adds, “AI is an incredibly important part of our fight against terrorist content on our platforms, and while its effectiveness continues to improve, it is never going to be perfect. People will continue to be part of the equation, whether it’s the people on our team who review content, or people who use our services and report content to us. That’s why last year Facebook more than doubled the number of people working on safety and security to over 30,000 people, including about 15,000 content reviewers to report content that they find disturbing.” Facebook further plans to: Improve the image and video matching technology so that they can stop the spread of viral videos of such nature, regardless of how they were originally produced. React faster to this kind of content on a live streamed video. Continue to combat hate speech of all kinds on their platform. Expand industry collaboration through the Global Internet Forum to Counter Terrorism (GIFCT). Challenges Google and Facebook faced to report the video content According to Motherboard, Google saw an unprecedented number of attempts to post footage from the attack, sometimes as fast as a piece of content per second. But the challenge they faced was to block access to the killer’s so-called manifesto, a 74-page document that spouted racist views and explicit calls for violence. Google described the difficulties of moderating the manifesto, pointing to its length and the issue of users sharing the snippets of the manifesto that Google’s content moderators may not immediately recognise. 
Challenges Google and Facebook faced in moderating the video content

According to Motherboard, Google saw an unprecedented number of attempts to post footage from the attack, sometimes as fast as a piece of content per second. A further challenge was blocking access to the killer’s so-called manifesto, a 74-page document that spouted racist views and explicit calls for violence. Google described the difficulties of moderating the manifesto, pointing to its length and the issue of users sharing snippets that Google’s content moderators may not immediately recognise.

“The manifesto will be particularly challenging to enforce against given the length of the document and that you may see various segments of various lengths within the content you are reviewing,” says Google. A source with knowledge of Google’s strategy for moderating the New Zealand attack material said this can complicate moderation efforts because some outlets did use parts of the video and manifesto. UK newspaper The Daily Mail let readers download the terrorist’s manifesto directly from the paper’s own website, and Sky News Australia aired parts of the attack footage, BuzzFeed News reported.

Facebook, on the other hand, faces the challenge of automatically discerning such content from visually similar, innocuous content. For example, if thousands of videos from live-streamed video games are flagged by the systems, reviewers could miss the important real-world videos for which they could alert first responders to get help on the ground. Another challenge for Facebook is similar to the one Google faces: the proliferation of many different variants of the video makes it difficult for the image and video matching technology to prevent it from spreading further. Facebook found a core community of bad actors working together to continually re-upload edited versions of the video in ways designed to defeat its detection. Second, a broader set of people distributed the video and unintentionally made it harder to match copies; websites and pages, eager to get attention from people seeking out the video, re-cut and re-recorded the video into various formats. In total, Facebook found and blocked over 800 visually distinct variants of the video that were circulating.

Both companies seem to be working hard to improve their products and win back users’ trust and confidence.

How social media enabled and amplified the Christchurch terrorist attack
Google to be the founding member of CDF (Continuous Delivery Foundation)
Google announces the stable release of Android Jetpack Navigation


Wikileaks founder, Julian Assange, arrested for “conspiracy to commit computer intrusion”

Savia Lobo
12 Apr 2019
6 min read
Julian Assange, the WikiLeaks founder, was arrested yesterday in London in accordance with the U.S./UK Extradition Treaty. He was charged with assisting Chelsea Manning, a former intelligence analyst in the U.S. Army, to crack a password on a classified U.S. government computer.

The indictment states that in March 2010, Assange assisted Manning by cracking a password stored on U.S. Department of Defense computers connected to the Secret Internet Protocol Network (SIPRNet), a U.S. government network used for classified documents and communications. Being an intelligence analyst, Manning had access to certain computers and used these to download classified records to transmit to WikiLeaks. “Cracking the password would have allowed Manning to log on to the computers under a username that did not belong to her. Such a deceptive measure would have made it more difficult for investigators to determine the source of the illegal disclosures,” the indictment states.

“Manning confessed to leaking more than 725,000 classified documents to WikiLeaks following her deployment to Iraq in 2009—including battlefield reports and five Guantanamo Bay detainee profiles,” Gizmodo reports. In 2013, Manning was convicted of leaking the classified U.S. government documents to WikiLeaks. She was jailed in early March this year as a recalcitrant witness after she refused to answer a grand jury’s questions. According to court filings, after her arrest, Manning was held in solitary confinement in a Virginia jail for nearly a month.

Following Assange’s arrest, Ola Bini, a Swedish software developer and digital privacy activist who is allegedly close to Assange, was also detained. “The official said they are looking into whether he was part of a possible effort by Assange and Wikileaks to blackmail Ecuador’s President, Lenin Moreno,” the Washington Post reports. Bini was detained at Quito’s airport as he was preparing to board a flight to Japan. Martin Fowler, a British software developer and renowned author and speaker, tweeted about Bini’s arrest, saying that Bini is a strong advocate and developer supporting privacy and had not been able to speak to any lawyers.

https://twitter.com/martinfowler/status/1116520916383621121

Following Assange’s arrest, Hillary Clinton, the Democratic nominee in the 2016 presidential election, said, “The bottom line is that he has to answer for what he has done.” “WikiLeaks’ publication of Democratic emails stolen by Russian intelligence officers during the 2016 election season hurt Clinton’s presidential campaign,” the Washington Post reports.

Assange, an Australian citizen, was dragged out of Ecuador’s embassy in London after his seven-year asylum was revoked. He had been granted asylum by former Ecuadorian President Rafael Correa in 2012 for publishing sensitive information about U.S. national security interests. Australian PM Scott Morrison told the Australian Broadcasting Corp. that the charge is a “matter for the United States” and has nothing to do with Australia. Assange was granted asylum just after “he was released on bail while facing extradition to Sweden on sexual assault allegations. The accusations have since been dropped but he was still wanted for jumping bail,” the Washington Post states. A Swedish woman alleged that she was raped by Julian Assange during a visit to Stockholm in 2010.
After Assange’s arrest on Thursday, Elisabeth Massi Fritz, the lawyer for the unnamed woman, said in a text message sent to The Associated Press that “we are going to do everything” to have the Swedish case reopened “so Assange can be extradited to Sweden and prosecuted for rape.” She further added, “no rape victim should have to wait nine years to see justice be served.” “In 2017, Sweden’s top prosecutor dropped a long-running inquiry into a rape claim against Assange, saying there was no way to have Assange detained or charged within a foreseeable future because of his protected status inside the embassy,” the Washington Post reports.

In a tweet, WikiLeaks posted a photo of Assange with the words: “This man is a son, a father, a brother. He has won dozens of journalism awards. He’s been nominated for the Nobel Peace Prize every year since 2010. Powerful actors, including CIA, are engaged in a sophisticated effort to dehumanize, delegitimize and imprison him. #ProtectJulian.”

https://twitter.com/wikileaks/status/1116283186860953600

Duncan Ross, a data philanthropist, tweeted, “Random thoughts on Assange: 1) journalists don’t have to be nice people but 2) being a journalist (if he is) doesn’t put you above the law.”

https://twitter.com/duncan3ross/status/1116610139023237121

Edward Snowden, a former security contractor who leaked classified information about U.S. surveillance programs, says the arrest of WikiLeaks founder Julian Assange is a blow to media freedom. “Assange’s critics may cheer, but this is a dark moment for press freedom,” he tweets.

According to the Washington Post, in an interview with The Associated Press, Rafael Correa, Ecuador’s former president, was harshly critical of his successor’s decision to expel Assange from Ecuador’s embassy in London. He said that although Julian Assange denounced war crimes, he was only the person supplying the information: “It’s the New York Times, the Guardian and El Pais publishing it. Why aren’t those journalists and media owners thrown in jail?”

Yanis Varoufakis, economics professor and former Greek finance minister, tweeted, “It was never about Sweden, Putin, Trump or Hillary. Assange was persecuted for exposing war crimes. Will those duped so far now stand with us in opposing his disappearance after a fake trial where his lawyers will not even know the charges?”

https://twitter.com/yanisvaroufakis/status/1116308671645061120

The Democracy in Europe Movement 2025 (@DiEM_25) tweeted that Assange’s arrest is “a chilling demonstration of the current disregard for human rights and freedom of speech by establishment powers and the rising far-right.” The movement has also put up a petition against Assange’s extradition.

https://twitter.com/DiEM_25/status/1116379013461815296

Google employees filed petition to remove anti-trans, anti-LGBTQ and anti-immigrant Kay Coles James from the AI council
A security researcher reveals his discovery on 800+ Million leaked Emails available online
Leaked memo reveals that Facebook has threatened to pull investment projects from Canada and Europe if their data demands are not met

Tesla Autonomy Day takeaways: Full Self-Driving computer, Robotaxis launching next year, and more

Bhagyashree R
24 Apr 2019
6 min read
This Monday, Tesla’s “Autonomy Investor Day” kicked off at its headquarters in Palo Alto. At this invitation-only event, Elon Musk, the CEO of Tesla, along with his fellow executives, talked about the company’s new microchip, robotaxis hitting the road by next year, and more. Here are some of the key takeaways from the event.

The Full Self-Driving (FSD) computer

Tesla shared details of its new custom chip, the Full Self-Driving (FSD) computer, previously known as Autopilot Hardware 3.0. Musk believes that the FSD computer is “the best chip in the world…objectively.” Tesla replaced Nvidia’s Autopilot 2.5 computer with its own custom chip for Model S and Model X about a month ago; for Model 3 vehicles this change happened about 10 days ago. Musk said, “All cars being produced all have the hardware necessary — computer and otherwise — for full self-driving. All you need to do is improve the software.”

FSD is a high-performance, special-purpose chip built by Samsung with a focus on autonomy and safety. It delivers a 21x improvement in frames-per-second processing compared with the previous-generation Tesla Autopilot hardware, which was powered by Nvidia. The company further shared that retrofits will be offered in the next few months to current Tesla owners who bought the “Full Self-Driving package”. Here’s the new Tesla FSD computer:

[Image: The Tesla FSD computer. Credits: Tesla]

Musk shared that the company has already started working on a next-generation chip. The design of FSD was completed within two years, and Tesla is now about halfway through the design of the next-generation chip.

Musk’s claim of having built the best chip can be taken with a pinch of salt, as it could surely upset engineers at Nvidia, Mobileye, and other companies that have been in the chip-manufacturing market for a long time. Nvidia, in a blog post, along with applauding Tesla for its FSD computer, highlighted a “few inaccuracies” in the comparison made by Musk during the event: “It’s not useful to compare the performance of Tesla’s two-chip Full Self Driving computer against NVIDIA’s single-chip driver assistance system. Tesla’s two-chip FSD computer at 144 TOPs would compare against the NVIDIA DRIVE AGX Pegasus computer which runs at 320 TOPS for AI perception, localization and path planning.”

While pointing out the “inaccuracies”, Nvidia did miss the key point here: power consumption. “Having a system that can do 160 TOPS means little if it uses 500 watts while tesla's 144 TOPS system uses 72 watts,” a Redditor said. A quick performance-per-watt comparison is sketched below.
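A back-of-the-envelope check of that efficiency point, using only the TOPS and watt figures quoted above (Nvidia’s stated 320 TOPS for Pegasus and the power figures from the Reddit comment, not independently verified numbers):

```python
# Performance-per-watt from the figures quoted in this article.
systems = {
    "Tesla FSD computer": {"tops": 144, "watts": 72},
    "NVIDIA DRIVE AGX Pegasus": {"tops": 320, "watts": 500},
}
for name, s in systems.items():
    print(f"{name}: {s['tops'] / s['watts']:.2f} TOPS per watt")
# Tesla FSD computer: 2.00 TOPS per watt
# NVIDIA DRIVE AGX Pegasus: 0.64 TOPS per watt
```

On these quoted numbers, Tesla’s chip would be roughly three times more power-efficient even if it trails Pegasus in raw throughput, which is why the power point matters for a battery-powered car.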
Robotaxis will hit the roads in 2020

Musk shared that within the next year or so we will see Tesla’s robotaxis entering the ride-hailing market, giving competition to Uber and Lyft. Musk made the bold claim that, while the robotaxis will let users hail a Tesla for a ride like other ride-hailing services, they will have no drivers. Musk announced, “I feel very confident predicting that there will be autonomous robotaxis from Tesla next year — not in all jurisdictions because we won’t have regulatory approval everywhere.” He did not share many details on what regulations he was talking about.

The service will allow Tesla owners to add their properly equipped vehicles to Tesla’s own ride-sharing app, following a business model similar to Uber or Airbnb. The company will provide a dedicated number of robotaxis in areas where there are not enough loanable cars. Musk predicted that the average robotaxi will be able to yield $30,000 in gross profit per car annually. Of this profit, about 25% to 30% will go to Tesla; at a 30% cut, an owner would therefore be left with about $21,000 a year.

Musk’s plan of launching robotaxis next year looks ambitious, and experts and the media are quite skeptical about it. The Partners for Automated Vehicle Education (PAVE) industry group tweeted:

https://twitter.com/PAVECampaign/status/1120436981220237312
https://twitter.com/RobMcCargow/status/1120961462678245376

Musk says “Anyone relying on lidar is doomed”

Musk has been pretty vocal about his dislike of LIDAR, calling the technology “a crutch for self-driving cars”. When the topic came up at the event, Musk said: “Lidar is a fool’s errand. Anyone relying on lidar is doomed. Doomed! [They are] expensive sensors that are unnecessary. It’s like having a whole bunch of expensive appendices. Like, one appendix is bad, well now you have a whole bunch of them, it’s ridiculous, you’ll see.”

LIDAR, which stands for Light Detection and Ranging, is used by Uber, Waymo, Cruise, and many other companies building self-driving vehicles. LIDAR projects low-intensity, harmless, and invisible laser beams at a target, or, in the case of self-driving cars, all around. The reflected pulses are then measured for return time and wavelength to calculate the distance of an object from the sender. LIDAR is capable of producing pretty detailed visualizations of the environment around a self-driving car. However, Tesla believes that this same functionality can be provided by cameras. According to Musk, cameras offer much better resolution, and when combined with a neural net can predict depth very well.

Andrej Karpathy, Tesla’s Senior Director of AI, took the stage to explain the limitations of LIDAR. He said, “In that sense, lidar is really a shortcut. It sidesteps the fundamental problems, the important problem of visual recognition, that is necessary for autonomy. It gives a false sense of progress and is ultimately a crutch. It does give, like, really fast demos!” Karpathy further added, “You were not shooting lasers out of your eyes to get here.”

While true, many felt the reasoning was flawed. A Redditor in a discussion thread said, “Musk's argument that ‘you drove here using your own two eyes with no lasers coming out of them’ is reductive and flawed. It should be obvious to anyone that our eyes are more complex than simple stereo cameras. If the Tesla FSD system can reliably perceive depth at or above the level of the human eye in all conditions, then they have done something truly remarkable. Judging by how Andrej Karpathy deflected the question about how well the system works in snowy conditions, I would assume they have not reached that level.”

Check out the live stream of the Autonomy Day on Tesla’s official website.

Tesla v9 to incorporate neural networks for autopilot
Tesla is building its own AI hardware for self-driving cars
Nvidia Tesla V100 GPUs publicly available in beta on Google Compute Engine and Kubernetes Engine


The cruelty of algorithms: Heartbreaking open letter criticizes tech companies for showing baby ads after stillbirth

Bhagyashree R
13 Dec 2018
3 min read
2018 has thrown up a huge range of examples of the unintended consequences of algorithms. From the ACLU’s research in July, which showed how the algorithm in Amazon’s facial recognition software incorrectly matched images of members of Congress with mugshots, to Amazon’s sexist algorithm used in its hiring process, this has been a year in which the damage algorithms can cause has become apparent. But this week, an open letter by Gillian Brockell, who works at The Washington Post, highlighted the traumatic impact algorithmic personalization can have.

In it, Brockell detailed how personalized ads accompanied her pregnancy, and speculated about how much the major platforms that dominate our digital lives knew about it. “...I bet Amazon even told you [the tech companies to which the letter is addressed] my due date… when I created an Amazon registry,” she wrote. But she went on to explain how those very algorithms were incapable of processing the tragic death of her unborn baby, blind to the grief that would unfold in the aftermath: “Did you not see the three days’ silence, uncommon for a high-frequency user like me?”

https://twitter.com/STFUParents/status/1072759953545416706

But Brockell’s grief was compounded by the way those companies continued to engage with her through automated messaging. She explained that although she clicked the “It’s not relevant to me” option those ads offer users, this only led the algorithms to ‘decide’ that she had given birth, offering deals on strollers and nursing bras. As Brockell notes in her letter, stillbirths aren’t as rare as many think, with 26,000 happening in the U.S. alone every year. This fact only serves to emphasize the empathetic blind spots in the way algorithms are developed. “If you’re smart enough to realize that I’m pregnant, that I’ve given birth, then surely you’re smart enough to realize my baby died.”

Brockell’s open letter garnered so much attention on social media that a number of the companies at which it was directed responded. Speaking to CNBC, a Twitter spokesperson said, “We cannot imagine the pain of those who have experienced this type of loss. We are continuously working on improving our advertising products to ensure they serve appropriate content to the people who use our services.” Meanwhile, a Facebook advertising executive, Rob Goldman, responded, “I am so sorry for your loss and your painful experience with our products.” He also explained how these ads could be blocked: “We have a setting available that can block ads about some topics people may find painful — including parenting. It still needs improvement, but please know that we’re working on it & welcome your feedback.” Experian did not respond to requests for comment.

However, even after taking Goldman’s advice, Brockell revealed she was then shown adoption adverts:

https://twitter.com/gbrockell/status/1072992972701138945

“It crossed the line from marketing into Emotional Stalking,” said one Twitter user. While the political impact of algorithms has led to sustained commentary and criticism in 2018, this story reveals the personal impact algorithms can have. It highlights that as artificial intelligence systems become more and more embedded in everyday life, engineers will need an acute sensitivity and attention to detail regarding the potential use cases and consequences of the algorithms they build.

You can read Brockell’s post on Twitter.

Facebook’s artificial intelligence research team, FAIR, turns five. But what are its biggest accomplishments?
FAT Conference 2018 Session 3: Fairness in Computer Vision and NLP
FAT Conference 2018 Session 4: Fair Classification