
How-To Tutorials - News

104 Articles

Privacy experts urge the Senate Commerce Committee for a strong federal privacy bill “that sets a floor, not a ceiling”

Sugandha Lahoti
11 Oct 2018
9 min read
The Senate Commerce Committee held a hearing yesterday on consumer data privacy, focused on the perspective of privacy advocates and other experts. The advocates encouraged federal lawmakers to create strict data protection rules that give consumers more control over their personal data. The major focus was on implementing a strong, common federal consumer privacy bill "that sets a floor, not a ceiling." Witnesses included Andrea Jelinek, chair of the European Data Protection Board; Alastair Mactaggart, the advocate behind California's Consumer Privacy Act; Laura Moy, executive director of the Georgetown Law Center on Privacy and Technology; and Nuala O'Connor, president of the Center for Democracy and Technology.

The goal: protect user privacy, allow innovation

Committee Chairman John Thune said in his opening statement, "Over the last few decades, Congress has tried and failed to enact comprehensive privacy legislation. Also in light of recent security incidents including Facebook's Cambridge Analytica and another security breach, and of the recent data breach in Google+, it is increasingly clear that industry self-regulation in this area is not sufficient. A national standard for privacy rules of the road is needed to protect consumers."

Senator Edward Markey, in his opening statement, said that "data is the oil of the 21st century," adding that this has come at an unexpected cost to users: data-driven websites treat their customers' personal information as a commodity, collecting and selling it without permission. He said the goal of the hearing was to give users meaningful control over their personal information while maintaining a thriving, competitive data ecosystem in which entrepreneurs can continue to develop.

What did the industry tell the Senate Commerce Committee in its last hearing on consumer privacy?
A few weeks ago, the committee held a discussion with Google, Facebook, Amazon, AT&T, and other industry players to understand their perspective on the same topic. The industry unanimously agreed that privacy regulations need to be put in place. However, these companies pushed for the committee to make online privacy policy at the federal level rather than at the state level, to avoid a nightmarish patchwork of policies for businesses to comply with. They also shared that complying with GDPR has been quite resource-intensive. While they acknowledged that it was too soon to assess the impact of GDPR, they cautioned the Senate Commerce Committee that policies like the GDPR and CCPA could be detrimental to growth and innovation, and thereby eventually cost the consumer more. As such, they expressed interest in being part of the team that formulates the new federal privacy policy. They also believed that the FTC was the right body to oversee the implementation of the new privacy laws. Overall, that hearing's conversation between the committee and the industry was heavy with defensive stances and scripted, almost colluded, recommendations. The telcos wanted tech companies to do better. The message was that user privacy and tech innovation are deeply interlinked, and that a delicate balance must be struck to make privacy work in practice.

The key message from yesterday's hearing with privacy advocates and the EU regulator

This time, the hearing focused solely on establishing strict privacy laws and drafting clear guidelines on definitions of 'sensitive' data, prohibited uses of data, and limits on how long corporations can hold on to consumer data for various uses. A focal point of the hearing was giving users three key elements: knowledge, notice, and no.
Consumers need knowledge that their data is being shared and how it is used, notice when their data is compromised, and the ability to say no to the entities that want their personal information. The bill should also include limits on how companies can use consumers' information. It should prohibit companies from giving financial incentives to users in exchange for their personal information; privacy must not become a luxury good that only the fortunate can afford. The bill should also ban "take it or leave it" offerings, in which a company requires a consumer to forfeit their privacy in order to consume a product: companies should not be able to coerce users into providing their personal information by threatening to deprive them of a service. The law should include individual rights such as the ability to access, correct, delete, and remove information. Companies should only collect the user data that is absolutely necessary to carry out the service, and keep that private information safe and secure. The legislation should also include special protections for children and teenagers. The federal government should be given strong enforcement powers and robust rule-making authority to ensure the rules keep pace with changing technologies. Some of the witnesses believed that the FTC may not be the right body for this, and that a new entity focused on this aspect may do a better and more agile job.

"We can't be shy about data regulation", Laura Moy

Laura Moy of the Georgetown Law Center on Privacy and Technology talked at length about data regulation. "This is not a time to be shy about data regulation," Moy said. "Now is the time to intervene." She emphasized that information should not in any way be used for discrimination, nor should it be used to amplify hate speech, be sold to data brokers, or be used to target misinformation or disinformation.
She also talked about robust enforcement, saying she plans to call for legislation to "enable robust enforcement both by a federal agency and state attorneys general and foster regulatory agility." She addressed the question of whether companies should be able to tell consumers that if they don't agree to share non-essential data, they cannot receive products or services; she disagreed, saying that companies that do so violate the idea of free choice. She also addressed whether companies should be allowed to offer financial incentives in exchange for users' personal information.

"GDPR was not a revolution, but just an evolution of a law [that existed for 20 years]", Andrea Jelinek

Andrea Jelinek, chair of the European Data Protection Board, highlighted the key concepts of GDPR and how it could inspire a federal-level policy in the U.S. In her opening statement, she said, "The volume of digital information doubles every two years and deeply modifies our way of life. If we do not modify the roots of data processing gains with legislative initiatives, it will turn into a losing game for our economy, society, and each individual." She addressed how GDPR is being enforced in the investigation of Facebook by Ireland's data protection authority, and gave statistics on the number of GDPR investigations opened in the EU so far. As of October 1, there were 272 cases regarding identifying the lead and concerned supervisory authorities, 243 issues on mutual assistance under Article 61 of the GDPR, and 223 opinions regarding data protection impact assessments. The company practices that have generated the most complaints and concerns from consumers revolve around user consent.
She explained why GDPR went the "regulation route", choosing one data privacy policy for the entire continent instead of letting each member country have its own. Jelinek countered Google's point about compliance taking too much time and effort by noting that, given Google's size, it would have taken around 3.5 hours per employee to implement compliance. She also observed that this could have been reduced a lot had Google followed good data practices to begin with. She further clarified that GDPR was not a really new or disruptive regulatory framework: in addition to the two years companies were given to comply with the new rules, a 20-year-old data protection directive was already in place in Europe in various forms. In that sense, she said, GDPR was not a revolution, but just an evolution of a law that had existed for 20 years.

Californians for Consumer Privacy

Alastair Mactaggart, chairman of Californians for Consumer Privacy, talked about CCPA's two main elements: first, the right to know, which allows Californians to learn what information corporations have collected about them; and second, the right to tell businesses to stop selling their personal information. He said, "CCPA puts the focus on giving choice back to the consumer and enforced data security, a choice which is sorely needed." He also addressed questions such as whether he believes a federal law should also require permission for 13-, 14-, and 15-year-olds.

What should the new federal privacy law look like, according to CDT's O'Connor

Center for Democracy and Technology (CDT) president and CEO Nuala O'Connor said, "As with many new technological advancements and emerging business models, we have seen exuberance and abundance, and we have seen missteps and unintended consequences.
International bodies and US states have responded by enacting new laws, and it is time for the US federal government to pass omnibus federal privacy legislation to protect individual digital rights and human dignity, and to provide certainty, stability, and clarity to consumers and companies in the digital world." She also highlighted five pointers that should be kept in mind while designing the new federal privacy law:

  • A comprehensive federal privacy law should apply broadly to all personal data and unregulated commercial entities, not just to tech companies.
  • The law should include individual rights such as the ability to access, correct, delete, and remove information.
  • Congress should prohibit the collection, use, and sharing of certain types of data when not necessary for the immediate provision of the service.
  • The FTC should be expressly empowered to investigate data abuses that result in discriminatory advertising and other practices.
  • A federal privacy law should be clear on its face and provide specific guidance to companies and markets about legitimate data practices.

It is promising to see the Senate Commerce Committee sincerely taking notes from both industry and privacy advocates toward building strict privacy standards. One hopes this new legislation will focus more on protecting consumer data than on the businesses that profit from it. Only time will tell whether a bipartisan consensus on this important initiative will be reached. For a detailed version of this story, listen to the full Senate Commerce Committee hearing.

Consumer protection organizations submit a new data protection framework to the Senate Commerce Committee
Google, Amazon, AT&T met the U.S Senate Committee to discuss consumer data privacy
Facebook, Twitter open up at Senate Intelligence hearing, the committee does 'homework' this time


‘We are not going to withdraw from the future’ says Microsoft’s Brad Smith on the ongoing JEDI bid, Amazon concurs

Prasad Ramesh
29 Oct 2018
5 min read
The Pentagon has been trying to get hold of AI and related technologies from tech giants. Google employees quit over it, and Microsoft employees asked their company to withdraw from the JEDI project. Last Friday, Microsoft President Brad Smith wrote about Microsoft and the US military and the company's vision in this area. Amazon, Microsoft, IBM, and Oracle are the companies that have bid for the Joint Enterprise Defense Infrastructure (JEDI) project. JEDI is a department-wide cloud computing infrastructure that will give the Pentagon access to weapons systems enhanced with artificial intelligence and cloud computing.

Microsoft believes in defending the USA

"We are not going to withdraw from the future. In the most positive way possible, we are going to work to help shape it," said Brad Smith, indicating that Microsoft intends to provide its technology to the Pentagon. Microsoft did not shy away from bidding on the Pentagon's JEDI project, in contrast to Google, which opted out of the same program earlier this month citing ethical concerns. Smith expressed Microsoft's intent to provide AI and related technologies to the US defense department, saying, "we want the people who defend USA to have access to the nation's best technology, including from Microsoft". Smith stated that Microsoft's work in this area is based on three convictions:

  • Microsoft believes in the strong defense of the USA and wants the defenders to have access to the nation's best technology, including from Microsoft.
  • It wants to use its 'knowledge and voice' to address ethical AI issues via the nation's 'civic and democratic processes'.
  • It gives employees the option to opt out of work on these projects, given that as a global company it has employees from many countries.

Smith shared that Microsoft has a long-standing history with the US Department of Defense (DOD).
Their tech has been used throughout the US military, from the front office to field operations, including bases, ships, aircraft, and training facilities.

Amazon shares Microsoft's vision

Amazon shares Microsoft's vision of empowering US law enforcement and defense institutions with the latest technology. Amazon already provides cloud services to power the Central Intelligence Agency (CIA). Amazon CEO Jeff Bezos said: "If big tech companies are going to turn their back on the Department of Defense, this country is going to be in trouble." Amazon also provides US law enforcement with its facial recognition technology, Rekognition. This has been a bone of contention not just for civil rights groups but also for some of Amazon's employees. Rekognition is meant to help identify and incarcerate undesirable people, but it does not work with much accuracy: in a study by the ACLU, Rekognition incorrectly matched 28 members of the US Congress. The American Civil Liberties Union (ACLU) has now filed a Freedom of Information Act (FOIA) request demanding that the Department of Homeland Security (DHS) disclose how DHS and Immigration and Customs Enforcement (ICE) use Rekognition for law enforcement and immigration checks.

Google's rationale for withdrawing from the JEDI project

Last week, in an interview with the Fox Network, Oracle founder Larry Ellison stated that it was shocking how Google viewed this matter. Google withdrew from the JEDI project following strong backlash from many of its employees. In its official statement, Google gave as its reasons for dropping out of the JEDI bidding an ethical value misalignment, and the fact that it did not have all the necessary clearances to work on government projects. However, Google is open to launching a customized search engine in China that complies with China's censorship rules, including the potential to surveil Chinese citizens.

Should AI be used in weapons?
This question is at the heart of the contentious topic of the tech industry working with the military. It is a serious question that has been debated for years by scientists and experienced leaders. Elon Musk, researchers from DeepMind, and others have even pledged not to build lethal AI. Personally, I side with the researchers and believe AI should be used exclusively for the benefit of mankind, to enhance human lives and solve problems that improve people's lives, not against each other in a race to build weapons or become a superpower. But then again, what would I know? Leading nations are in an AI arms race as we speak, with sophisticated national AI plans and agendas. For more details on Microsoft's interest in working with the US military, visit the Microsoft website.

'Employees of Microsoft' ask Microsoft not to bid on US Military's Project JEDI in an open letter
Google employees quit over company's continued Artificial Intelligence ties with the Pentagon
Oracle's bid protest against U.S Defence Department's (Pentagon) $10 billion cloud contract


Cryptocurrency-based firm Tron acquires BitTorrent

Savia Lobo
26 Jul 2018
3 min read
Justin Sun, founder of the decentralized internet platform Tron, announced the acquisition of BitTorrent, the popular file-sharing network. As reported by TechCrunch, the blockchain-based platform is said to have acquired BitTorrent for about $126 million. The TRON Foundation is a decentralized platform for sharing entertainment content, including music and games. It uses blockchain and peer-to-peer (P2P) network technology to remove the need for middlemen, such as Google and Amazon, between content producers and consumers. BitTorrent, founded in 2004, is a popular peer-to-peer file-sharing company with 100 million users, and it also owns the popular µTorrent client software. BitTorrent is known for streaming movies and music with ease, speed, and reliability, and it has changed how and why we watch things online. With the BitTorrent acquisition, Justin wants to make Tron the largest decentralized ecosystem in the world. While that's an exciting prospect, users questioned whether Tron would charge them in cryptocurrency for the services offered. BitTorrent stated in its blog that it "has no plans to change what we do or charge for the services we provide. We have no plans to enable mining of cryptocurrency now or in the future." However, Tron's further plans for BitTorrent are still under wraps.

'TRON + BitTorrent: The world's largest decentralized ecosystem'

In an official letter, the Tron Foundation stated that the firm would continue BitTorrent's protocol legacy after integrating it into the Tron ecosystem.
https://twitter.com/BitTorrent/status/1021629735258841088

The letter also states, "With the integration of BitTorrent, TRON aims to liberate the Internet from the stranglehold of large corporations, give data rights back to the individual, and reignite the early 21st century vision of a free, transparent, decentralized network to connect the world, because the internet belongs to the people." Sun in his letter also called BitTorrent the genesis of the decentralization movement; Tron's developers, entrepreneurs, and community consider BitTorrent the original pioneer of decentralization technology. Sun stated, "We believe that joining the TRON network will further enhance BitTorrent and accelerate our mission of creating an Internet of options, not rules." Due to this acquisition, BitTorrent may lose its primary illegal user base. However, it continues to demonstrate its legal uses and will evolve further within TRON's ecosystem. TRON will also take control of BitTorrent's two popular torrent applications, the BitTorrent and µTorrent clients, which will remain free to download and supported by ads. This merger is a happy turning point for BitTorrent. The company was in a total mess some years back: it had not raised any money since 2008, after which it fired its dual CEOs. Despite its commitment to the notion of a decentralized internet, BitTorrent still attempted to function as a business through its apps and services, but these strategies did not work out well. TRON's acquisition has turned the tables for BitTorrent. It could be the Cinderella-meets-Prince-Charming story of this decade. Read more about BitTorrent's acquisition on TechCrunch.

Top 15 Cryptocurrency Trading Bots
Crypto-ML, a machine learning powered cryptocurrency platform
Can Cryptocurrency establish a new economic world order?


23andMe shares 5mn client genetic data with GSK for drug target discovery, a machine learning application in genetics research

Sugandha Lahoti
28 Jul 2018
3 min read
Genetics company 23andMe, which uses machine learning algorithms for human genome analysis, has entered a four-year collaboration with pharmaceutical giant GlaxoSmithKline. It will share genetic data from its 5 million customers with GSK to advance research into disease treatments. The collaboration will be used to identify novel drug targets, tackle new subsets of disease, and enable rapid progression of clinical programs. The 12-year-old firm has already published more than 100 scientific papers based on its customers' data. All activities within the collaboration will initially be co-funded, with either company having certain rights to reduce its funding share. "The goal of the collaboration is to gather insights and discover novel drug targets driving disease progression and develop therapies," GlaxoSmithKline said in a press release. GSK is also reported to have invested $300 million in 23andMe. During the four-year collaboration, GSK will use 23andMe's database and statistical analytics for drug target discovery. The collaboration will support GSK's LRRK2 inhibitor, which is in development as a potential treatment for Parkinson's disease. 23andMe's database of consented customers with a known LRRK2 variant status will be used to accelerate the progress of this programme. Together, GSK and 23andMe will target and recruit patients with defined LRRK2 mutations in order to reach clinical proof of concept. 23andMe has made it quite clear that participating in this program is voluntary and requires clients to affirmatively consent to participate. However, not everyone is clear on how this would work. The company has specified that any research involving customer data that has already been performed or published prior to receipt of a withdrawal request will not be reversed. This may have a negative effect, as people are generally not aware of all the privacy policies and generally don't read the Terms of Service.
Moreover, as Peter Pitts, president of the Center for Medicine in the Public Interest, notes, "If a person's DNA is used in research, that person should be compensated. Customers shouldn't be paying for the privilege of 23andMe working with a for-profit company in a for-profit research project." Both companies have promised maximum data protection for research participants. In a blog post, they note, "The continued protection of customers' data and privacy is the highest priority for both GSK and 23andMe. Both companies have stringent security protections in place when it comes to collecting, storing and transferring information about research participants." You can read more about the news in a blog by 23andMe founder Anne Wojcicki.

6 use cases of Machine Learning in Healthcare
Healthcare Analytics: Logistic Regression to Reduce Patient Readmissions
NIPS 2017 Special: How machine learning for genomics is bridging the gap between research and clinical trial success by Brendan Frey


“I'm concerned about Libra's model for decentralization”, says co-founder of Chainspace, Facebook’s blockchain acquisition

Fatema Patrawala
26 Jun 2019
7 min read
In February, Facebook made its debut in the blockchain space by acquiring Chainspace, a London-based, Gibraltar-registered blockchain venture. Chainspace was a small start-up founded by several academics from the University College London Information Security Research Group. The authors of the original Chainspace paper were Mustafa Al-Bassam, Alberto Sonnino, Shehar Bano, Dave Hrycyszyn and George Danezis, some of the UK's leading privacy engineering researchers. Following the acquisition, last week Facebook announced the launch of its new cryptocurrency, Libra, which is expected to go live by 2020. The Libra whitepaper has a wide array of authors, including the Chainspace co-founders Alberto Sonnino, Shehar Bano and George Danezis. According to Wired, David Marcus, a former PayPal president and a former Coinbase board member who resigned from that board last year, has been appointed by Facebook to lead the Libra project. Libra isn't like other cryptocurrencies such as Bitcoin or Ethereum: as per the Reuters report, the Libra blockchain will be permissioned, meaning that only entities authorized by the governing association will be able to run validator nodes. Mustafa Al-Bassam, one of the research co-founders of Chainspace who did not join Facebook, posted a detailed Twitter thread yesterday with his views on the new cryptocurrency.

https://twitter.com/musalbas/status/1143629828551270401

On Libra's decentralized model being less censorship-resistant

Mustafa says, "I don't have any doubt that the Libra team is building Libra for the right reasons: to create an open, decentralized payment system, not to empower Facebook. However, the road to dystopia is paved with good intentions, and I'm concerned about Libra's model for decentralization." He further pointed to a user comment on GitHub which reads, "Replace 'decentralized' with 'distributed' in readme".
Mustafa explains that Libra’s 100 node closed set of validators is seen more as decentralized in comparison to Bitcoin. Whereas Bitcoin has 4 pools that control >51% of hashpower. According to the Block Genesis, decentralized networks are particularly prone to Sybil attacks due to their permissionless nature. Mustafa takes this into consideration and poses a question if Libra is Sybil resistant, he comments, “I'm aware that the word "decentralization" is overused. I'm looking at decentralization, and Sybil-resistance, as a means to achieve censorship-resistance. Specifically: what do you have to do to reverse or censor transaction, how much does it cost, and who has that power? My concern is that Libra could end up creating a financial system that is *less* censorship-resistant than our current traditional financial system. You see, our current banking system is somewhat decentralized on a global scale, as money travels through a network of banks.” He further explains that, “In the banking system there is no majority of parties that can collude together to deny two banks the ability to maintain a relationship which each other - in the worst case scenario they can send physical cash to each other, which does not require a ledger. It's permissionless.” Mustafa adds to this point with a surreal imagination that if Libra was the only way to transfer currency and it is less censorship resistant than we’d be in worse situations, he says, “With cryptocurrency systems (even decentralized ones), there is always necessarily a majority of consensus nodes (e.g. a 51% attack) that can collude together from censor or reverse transactions. So if you're going to create digital cash, this is extremely important to consider. 
With Libra, censorship-resistance is even more important, as Libra could very well end up being the world's "de facto" currency, and if the Libra network is the only way to transfer that currency, and it's less censorship-resistant, we're worse off than where we started."

On Libra's permissioned consensus node selection authority

Mustafa says, "Libra's current permissioned consensus node selection authority is derived directly from nation state-enforced (Switzerland's) organization laws, rather than independently from stakeholders holding sovereign cryptographic keys." (Source: Libra whitepaper.) What this means is that the "root API" for Libra's node selection mechanism is the Libra Association, via the Swiss Federal Constitution and the Swiss courts, rather than public key cryptography. Mustafa also pointed out that the Libra Association members are large, $1b+, US-based companies. (Source: Libra whitepaper.) One could argue that governments can also regulate the people who hold public keys, but the key difference is that this can't be directly enforced without access to the private key. Mustafa illustrated this with an example from last year, when Iran tested how resistant global payments are to US censorship: Iran requested a 300 million euro cash withdrawal via Germany's central bank, which rejected it under US pressure. Mustafa added, "US sanctions have been bad on ordinary people in Iran, but they can at least use cash to transact with other countries. If people wouldn't even be able to use cash in the future because Libra digital cash isn't censorship-resistant, that would be *brutal*."

On Libra's proof-of-stake based permissionless mechanism

Mustafa argues that the Libra whitepaper confuses consensus with Sybil-resistance.
In his view, Sybil-resistant node selection through permissionless mechanisms such as proof-of-stake, which select a set of cryptographic keys that participate in consensus, is necessarily more censorship-resistant than the Association-based model. Proof-of-stake is a Sybil-resistance mechanism, not a consensus mechanism; the "longest chain rule", on the other hand, is a consensus mechanism. He notes that Libra has outlined a proof-of-stake-based permissionless roadmap and plans to transition to it within the next five years. Mustafa feels five years will be far too late, when the Group of Seven (G7) nations are already lining up a taskforce to control Libra. Mustafa also takes issue with the Libra whitepaper's claim that it needs to start permissioned for the next five years because permissionless, scalable, secure blockchains are an unsolved technical problem that needs the community's help to research. (Source: Libra whitepaper.) He counters, "It's as if they ignored the past decade of blockchain scalability research efforts. Secure layer-one scalability is a solved research problem. Ethereum 2.0, for example, is past the research stage and is now in the implementation stage, and will handle more than Libra's 1000tps." Mustafa also points out that Chainspace was in the middle of implementing a permissionless sharded blockchain with higher on-chain scalability than Libra's 1000tps; with Facebook's resources, this could easily have been accelerated and made a reality. There are many research-led blockchain projects that have implemented, or are implementing, scalability strategies that achieve more than Libra's 1000tps without heavily trading off security, so the "community" research on this is plentiful; in his words, Facebook is just being lazy.
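The idea that sharding can scale throughput without heavily trading off security rests on randomly sampling consensus participants into shard committees. A toy hypergeometric model (my own sketch, not from the article; all parameter values below are illustrative assumptions) estimates the chance that a randomly drawn committee is compromised, given a fixed Byzantine fraction in the global validator set:

```python
from math import comb

def shard_failure_prob(n: int, byz: int, m: int) -> float:
    """Probability that a committee of m validators, drawn uniformly at
    random from a set of n containing byz Byzantine members, contains at
    least ceil(m/3) Byzantine members (the classic BFT safety threshold)."""
    threshold = -(-m // 3)  # ceil(m/3)
    total = comb(n, m)
    # Hypergeometric tail: sum P(exactly k Byzantine members) for k >= threshold
    return sum(comb(byz, k) * comb(n - byz, m - k)
               for k in range(threshold, min(byz, m) + 1)) / total

# Illustrative numbers only: 2000 validators, 20% Byzantine, committees of 100
p = shard_failure_prob(2000, 400, 100)
```

With parameters like these, the tail probability is very small, which is the intuition behind keeping randomly sampled shards roughly uniformly secure as long as the global Byzantine fraction stays well below one third.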
He concludes, "I find it a great shame that Facebook has decided to be anti-social and launch a permissioned system as they need the community's help as scalable blockchains are an unsolved problem, instead of using their resources to implement on a decade of research in this area."

People appreciated Mustafa's detailed review of Libra; one tweet read, "This was a great thread, with several acute and correct observations."

https://twitter.com/ercwl/status/1143671361325490177

Another tweet asked, "Isn't a shard (let's say a blockchain sharded into 100 shards) by its nature trading off 99% of its consensus forming decentralization for 100x (minus overhead, so maybe 50x?) increased scalability?" Mustafa responded, "No because consensus participants are randomly sampled into shards from the overall consensus set, so shards should be roughly uniformly secure, and in the event that a shard misbehaves, fraud and data availability proofs kick in."

https://twitter.com/ercwl/status/1143673925643243522

One tweet also suggested that 1/3 of Libra validators could enforce censorship even against the will of the 2/3 majority, whereas censoring Bitcoin requires a majority of miners. Also, unlike Libra, there is no entry barrier other than capital to becoming a Bitcoin miner.

https://twitter.com/TamasBlummer/status/1143766691089977346

Let us know your views on Libra and how you expect it to perform.

Facebook launches Libra and Calibra in a move to seriously disrupt the financial sector
Facebook content moderators work in filthy, stressful conditions and experience emotional trauma daily, reports The Verge
Facebook releases Pythia, a deep learning framework for vision and language multimodal research
Natasha Mathur
14 Feb 2019
7 min read

Highlights from Jack Dorsey’s live interview by Kara Swisher on Twitter: on lack of diversity, tech responsibility, physical safety and more

Kara Swisher, Recode co-founder, interviewed Twitter CEO Jack Dorsey yesterday over Twitter. The interview (or 'Twitterview') was conducted in tweets using the hashtag #KaraJack. It started at 5 pm ET and lasted around 90 minutes. Let's have a look at the top highlights from the interview.

https://twitter.com/karaswisher/status/1095440667373899776

On Fixing what is broken on Social Media and Physical safety

Swisher asked Dorsey why he isn't moving faster in his efforts to fix the disaster that has been caused so far on social media. Dorsey replied that Twitter was trying to do "too much" in the past, but that it has become better at prioritizing now. The number one focus now is a person's "physical safety", i.e. the offline ramifications of the platform for Twitter users: "What people do offline with what they see online", says Dorsey. Examples of 'offline ramifications' include "doxxing" (a harassment technique that reveals a person's personal information on the internet) and coordinated harassment campaigns. Dorsey further added that replies, searches, trends, and mentions on Twitter are where most of the abuse happens, as these are the shared spaces people take advantage of. "We need to put our physical safety above all else. We don't have all the answers just yet. But that's the focus. I think it clarifies a lot of the work we need to do. Not all of it of course", said Dorsey.

On Tech responsibility and improving the health of digital conversation on Twitter

When Swisher asked Dorsey what grade he would give Silicon Valley and himself for embodying tech responsibility, he replied with a "C" for himself. He said that Twitter has made progress, but that it's scattered and 'not felt enough'. He did not comment on what he thought of Silicon Valley's work in this area. Swisher further highlighted that the goal of improving Twitter conversations has so far remained empty talk.
She asked Dorsey if Twitter has made any actual progress in the last 18-24 months when it comes to addressing the issues around the "health of conversation" (which eventually plays into safety). Dorsey said these issues are the most important thing to fix right now, and that it's a failure on Twitter's part to 'put the burden on victims'. He did not share a specific example of improvements made to the platform to further this goal. When Swisher questioned him on how he intends to fix the issue, Dorsey mentioned that Twitter intends to be more proactive when it comes to enforcing healthy conversations, so that reporting/blocking becomes the last resort. He mentioned that Twitter takes action against all offenders who go against its policies, but that the system works reactively, relying on someone to report abuse. "If they don't report, we don't see it. Doesn't scale. Hence the need to focus on proactive", said Dorsey. Since Twitter is constantly evolving its policies to address the 'current issues', it is rooting these in fundamental human rights (per the UN) and making physical safety the top priority alongside privacy.

On lack of diversity

https://twitter.com/jack/status/1095459084785004544

Swisher questioned Dorsey on his negligence in addressing these issues. "I think it is because many of the people who made Twitter never ever felt unsafe," adds Swisher. Dorsey admitted that the "lack of diversity" didn't help with empathy for what people (especially women) experience on Twitter every day. He further added that Twitter should be reflective of the people it's trying to serve, which is why it established a trust and safety council to get feedback. Swisher then asked him to provide three concrete examples of what Twitter has done to fix this. Dorsey mentioned that Twitter has: evolved its policies (e.g. its misgendering policy).
prioritized proactive enforcement by using machine learning to downrank bad actors, meaning it looks at the probability of abuse from any one account, since someone abusing one account is probably doing the same on other accounts. given users more control in the product, such as the ability to mute accounts with no profile picture. increased its focus on coordinated behavior/gaming.

On Dorsey's dual CEO role

Swisher asked him why he insists on being the CEO of two publicly traded companies (Twitter and Square Inc.) that both require maximum effort at the same time. Dorsey said that his main focus is on building leadership in both, and that it's not his ambition to be CEO of multiple companies "just for the sake of that". She further questioned him on whether he has any plans to hire someone as his "number 2". Dorsey said it's better to spread that kind of responsibility across several people, as it reduces dependencies and gives the company more options for future leadership. "I'm doing everything I can to help both. Effort doesn't come down to one person. It's a team", he said.

On Twitter breaks, Donald Trump and Elon Musk

When asked how he feels about people not feeling good after spending a while on Twitter, Dorsey said he feels "terrible" and that it's depressing.

https://twitter.com/jack/status/1095457041844334593

"We made something with one intent. The world showed us how it wanted to use it. A lot has been great. A lot has been unexpected. A lot has been negative. We weren't fast enough to observe, learn, and improve", said Dorsey. He further added that he does not feel good about how Twitter tends to incentivize outrage, fast takes, short-term thinking, echo chambers, and fragmented conversations. Swisher then questioned Dorsey on whether Twitter has ever intended to suspend Donald Trump, and whether Twitter's business/engagement would suffer when Trump is no longer president.
Dorsey replied that Twitter is independent of any account or person, and that although the number of political conversations has increased on Twitter, that's just one experience. He further added that Twitter is ready for the 2020 elections and has partnered with government agencies to improve communication around threats.

https://twitter.com/jack/status/1095462610462433280

Moreover, on being asked about the most exciting influential person on Twitter, Dorsey replied with Elon Musk. He said he likes how Elon is focused on solving existential problems and sharing his thinking openly. On being asked what he thought of how Alexandria Ocasio-Cortez is using Twitter, he replied that she is 'mastering the medium'. Although Swisher managed to interview Dorsey over Twitter, the 'Twitterview' soon got quite confusing and went out of order. The conversations seemed all over the place and, as Kurt Wagner, tech journalist from Recode, puts it, "in order to find a permanent thread of the chat, you had to visit one of either Kara or Jack's pages and continually refresh". This made for a difficult experience overall and points towards the current flaws within the conversation system on Twitter. Many users tweeted out their opinions regarding the same:

https://twitter.com/RTKumaraSwamy/status/1095542363890446336
https://twitter.com/waltmossberg/status/1095454665305739264
https://twitter.com/kayvz/status/1095472789870436352
https://twitter.com/sukienniko/status/1095520835861864448
https://twitter.com/LauraGaviriaH/status/1095641232058011648

Recode Decode #GoogleWalkout interview shows why data and evidence don't always lead to right decisions in even the world's most data-driven company
Twitter CEO, Jack Dorsey slammed by users after a photo of him holding 'smash Brahminical patriarchy' poster went viral
Jack Dorsey discusses the rumored 'edit tweet' button and tells users to stop caring about followers
Sugandha Lahoti
16 Jul 2018
4 min read

The software behind Silicon Valley’s Emmy-nominated 'Not Hotdog' app

This is great news for all Silicon Valley fans. The amazing Not Hotdog A.I. app, shown in the fourth episode of season 4, has been nominated for a Primetime Emmy Award. The Emmys have placed Silicon Valley and the app in the category "Outstanding Creative Achievement In Interactive Media Within a Scripted Program", among other popular shows. Other nominations include 13 Reasons Why for "Talk To The Reasons", a website that lets you chat with the characters; Rick and Morty for "Virtual Rick-ality", a virtual reality game; Mr. Robot for "Ecoin", a fictional global digital currency; and Westworld for the "Chaos Takes Control Interactive Experience", an online experience promoting the show's second season. Within a day of its launch, the 'Not Hotdog' application was trending on the App Store and on Twitter, grabbing the #1 spot on both Hacker News and Product Hunt, and won a Webby for Best Use of Machine Learning. The app uses state-of-the-art deep learning, with a mix of React Native, TensorFlow and Keras. It has averaged 99.9% crash-free users with a 4.5+/5 rating on the app stores. The 'Not Hotdog' app does what the name suggests: it identifies hotdogs, and things that are not hotdogs. It is available for both Android and iOS devices, and its description reads, "What would you say if I told you there is an app on the market that tell you if you have a hotdog or not a hotdog. It is very good and I do not want to work on it any more. You can hire someone else."

How the Not Hotdog app is built

The creator, Tim Anglade, used a sophisticated neural architecture for the Silicon Valley A.I. app that runs directly on your phone, and trained it with TensorFlow, Keras and Nvidia GPUs. Of course, the use case is not very useful, but the overall app is a substantial example of deep learning and edge computing in pop culture. The app also provides better privacy, as images never leave a user's device.
Consequently, users get a faster experience and offline availability, as processing doesn't go to the cloud. Using no cloud-based AI also means the company can run the app at zero cost, providing significant savings even under a load of millions of users. What is amazing about the app is that it was built by a single creator with limited resources (a single laptop and GPU, using hand-curated data). This speaks volumes about how much can be achieved, even with a limited amount of time and resources, by non-technical companies, individual developers, and hobbyists alike. The initial prototype of the app was built using Google Cloud Platform's Vision API and React Native. React Native is a good choice as it supports many devices. The Cloud Vision API, however, was quickly abandoned. In its place came edge computing: training the neural network directly on the laptop, then exporting it and embedding it into the mobile app, so that the neural network execution phase runs directly inside the user's phone.

How TensorFlow powers the Not Hotdog app

After React Native, the second part of the tech stack was TensorFlow. They used TensorFlow's transfer learning script to retrain the Inception architecture, which helps in dealing with a more specific image problem. Transfer learning helped them get better results much faster, and with less data, compared to training from scratch. Inception turned out to be too big to retrain. So, at the suggestion of Jeremy P. Howard, they explored and settled on SqueezeNet, which was explicitly positioned as a solution for embedded deep learning and had a pre-trained Keras model available on GitHub. The final architecture was largely based on Google's MobileNets paper, which gave their neural architecture Inception-like accuracy on simple problems with only about 4M parameters.
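Anglade's actual training code isn't shown here, but the retrain-a-pretrained-backbone idea can be sketched in a few lines of Keras. Everything below is illustrative: the head layers and hyperparameters are invented, and `weights=None` keeps the sketch offline, whereas a real run would use `weights="imagenet"` to reuse pretrained features.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Pretrained-style MobileNet backbone, used only as a feature extractor.
base = tf.keras.applications.MobileNet(
    input_shape=(224, 224, 3), include_top=False, pooling="avg",
    weights=None)  # a real run would pass weights="imagenet"
base.trainable = False  # transfer learning: freeze features, train only the head

# Small binary head: P(hotdog) vs. P(not hotdog).
model = models.Sequential([
    base,
    layers.Dropout(0.3),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

With the backbone frozen, only the final dense layer's weights are updated during `model.fit`, which is why transfer learning needs far less data and compute than training from scratch.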
YouTube has a $25 million plan to counter fake news and misinformation Microsoft’s Brad Smith calls for facial recognition technology to be regulated Too weird for Wall Street: Broadcom’s value drops after purchasing CA Technologies
Bhagyashree R
07 Nov 2018
4 min read

UN on Web Summit 2018: How we can create a safe and beneficial digital future for all

On Monday, at the opening ceremony of Web Summit 2018, Antonio Guterres, the secretary general of the United Nations (UN), spoke about the benefits and challenges that come with cutting-edge technologies. Guterres highlighted that change is happening so quickly that trends such as blockchain, IoT, and artificial intelligence can move from the cutting edge to the mainstream in no time. Guterres was quick to pay tribute to technological innovation, detailing some of the ways it is helping UN organizations improve the lives of people all over the world. For example, UNICEF is now able to map connectivity for schools in remote areas, and the World Food Programme is using blockchain to make transactions more secure, efficient and transparent. But these innovations nevertheless pose risks and create new challenges that we need to overcome.

Three key technological challenges the UN wants to tackle

Guterres identified three key challenges for the planet. Together they help inform a broader plan of what needs to be done.

The social impact of the third and fourth industrial revolutions

With the introduction of new technologies, in the next few decades we will see the creation of thousands of new jobs. These will be very different from what we are used to today, and will likely require retraining and upskilling. This will be critical as many traditional jobs will be automated. Guterres believes that the consequences of unemployment caused by automation could be incredibly disruptive, maybe even destructive, for societies. He further added that we are not preparing fast enough to match the speed of these growing technologies. As a solution, Guterres said: "We will need to make massive investments in education but a different sort of education. What matters now is not to learn things but learn how to learn things." While many professionals will be able to acquire the skills to become employable in the future, some will inevitably be left behind.
To minimize the impact of these changes, safety nets will be essential to help millions of citizens transition into this new world, and to bring new meaning and purpose into their lives.

Misuse of the internet

The internet has connected the world in ways people wouldn't have thought possible a generation ago. But it has also opened up a whole new channel for hate speech, fake news, censorship and control. The internet certainly isn't creating many of the challenges facing civic society on its own, but it won't be able to solve them on its own either. On this, Guterres said: "We need to mobilise the government, civil society, academia, scientists in order to be able to avoid the digital manipulation of elections, for instance, and create some filters that are able to block hate speech to move and to be a factor of the instability of societies."

The problem of control

Automation and AI pose risks that exceed the challenges of the third and fourth industrial revolutions. They also create urgent ethical dilemmas, forcing us to ask exactly what artificial intelligence should be used for. Smarter weapons might be a good idea if you're an arms manufacturer, but there needs to be a wider debate that takes in broader concerns and issues. "The weaponization of artificial intelligence is a serious danger and the prospects of machines that have the capacity by themselves to select and destroy targets is creating enormous difficulties or will create enormous difficulties," Guterres remarked. His solution might seem radical but it's also simple: ban them.
He went on to explain: "To avoid the escalation in conflict and guarantee that international military laws and human rights are respected in the battlefields, machines that have the power and the discretion to take human lives are politically unacceptable, are morally repugnant and should be banned by international law."

How we can address these problems

Typical forms of regulation can help to a certain extent, as in the case of weaponization, but such cases are limited. In the majority of circumstances, technologies move so fast that legislation simply cannot keep up in any meaningful way. This is why we need to create platforms where governments, companies, academia, and civil society can come together to discuss and find ways of allowing digital technologies to be "a force for good". You can watch Antonio Guterres' full talk on YouTube.

Tim Berners-Lee is on a mission to save the web he invented
MEPs pass a resolution to ban "Killer robots"
In 5 years, machines will do half of our job tasks of today; 1 in 2 employees need reskilling/upskilling now – World Economic Forum survey
Fatema Patrawala
29 Mar 2019
7 min read

Why did McDonalds acqui-hire $300 million machine learning startup, Dynamic Yield?

Mention McDonald's to someone today, and they're more likely to think of the Big Mac than Big Data. But that could soon change, as the fast-food giant embraces machine learning, with plans to become a tech innovator in a fittingly super-sized way. McDonald's stunned a lot of people when it announced its biggest acquisition in 20 years, one that reportedly cost it over $300 million: it plans to acquire Dynamic Yield, a New York-based startup that provides retailers with algorithmically driven "decision logic" technology. When you add an item to an online shopping cart, "decision logic" is the tech that nudges you about what other customers bought as well. Dynamic Yield's client list includes blue-chip retail clients like Ikea, Sephora, and Urban Outfitters. McDonald's vetted around 30 firms offering similar personalization engine services, and landed on Dynamic Yield. The startup has recently been valued in the hundreds of millions of dollars; people familiar with the details of the McDonald's offer put it at over $300 million, making it the company's largest purchase, as per a tweet by McDonald's CEO Steve Easterbrook.

https://twitter.com/SteveEasterbrk/status/1110313531398860800

The burger giant can certainly afford it; in 2018 alone it tallied nearly $6 billion of net income, and ended the year with a free cash flow of $4.2 billion.

McDonald's, a food-tech innovator from the start

Over the last several years, McDonald's has invested heavily in technology, bringing stores up to date with self-serve kiosks. The company also launched an app and partnered with Uber Eats in that time, in addition to a number of infrastructure improvements. It even relocated its headquarters less than a year ago from the suburbs to Chicago's vibrant West Town neighborhood, in a bid to attract young talent. Collectively, McDonald's serves around 68 million customers every single day.
And the majority of those customers are at the drive-thru window: they never get out of their car, instead placing and picking up their orders from the window. And that's where McDonald's is planning to deploy Dynamic Yield's tech first. "What we hadn't done is begun to connect the technology together, and get the various pieces talking to each other," says Easterbrook. "How do you transition from mass marketing to mass personalization? To do that, you've really got to unlock the data within that ecosystem in a way that's useful to a customer." Here's what that looks like in practice: when you drive up to place your order at a McDonald's today, a digital display greets you with a handful of banner items or promotions. As you inch up toward the ordering area, you eventually get to the full menu. Both of these, as currently implemented, are largely static, aside from obvious changes like rotating in new offers, or switching over from breakfast to lunch. But in a pilot program at a McDonald's restaurant in Miami, powered by Dynamic Yield, those displays have taken on new dexterity. In the new McDonald's machine-learning paradigm, the display screen shows customers what other items have been popular at that location, and prompts them with potential upsells. Thanks for your Happy Meal order; maybe you'd like a Sprite to go with it. "We've never had an issue in this business with a lack of data," says Easterbrook. "It's drawing the insight and the intelligence out of it."

Revenue likely to grow with the acquisition

McDonald's hasn't shared any specific insights gleaned so far, or numbers around the personalization engine's effect on sales. But it's not hard to imagine some of the possible scenarios. If someone orders two Happy Meals at 5 o'clock, for instance, that's probably a parent ordering for their kids; highlight a coffee or snack for them, and they might decide to treat themselves to a pick-me-up.
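Dynamic Yield's actual engine is proprietary, but the "customers also bought" upsell described above can be sketched as a simple co-occurrence recommender. The orders and menu items below are invented for illustration:

```python
from collections import Counter
from itertools import combinations

# Toy order history: each order is a set of menu items.
orders = [
    {"happy_meal", "sprite"},
    {"happy_meal", "coffee"},
    {"happy_meal", "sprite", "fries"},
    {"big_mac", "fries", "coke"},
    {"big_mac", "coke"},
]

# Count how often each pair of items appears in the same order.
pair_counts = Counter()
for order in orders:
    for a, b in combinations(sorted(order), 2):
        pair_counts[(a, b)] += 1

def suggest_upsell(cart):
    """Suggest the item most often bought alongside what's in the cart."""
    scores = Counter()
    for (a, b), n in pair_counts.items():
        if a in cart and b not in cart:
            scores[b] += n
        if b in cart and a not in cart:
            scores[a] += n
    return scores.most_common(1)[0][0] if scores else None

suggest_upsell({"happy_meal"})  # → "sprite", the most frequent companion
```

A production system would also weight by time of day, location, and weather, as the article describes, but the co-occurrence counting is the core of the "decision logic" idea.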
And as with any machine-learning system, the real benefits will likely come from the unexpected. While customer satisfaction may be the goal, the avenues McDonald's takes to get there will increase revenues along the way.

Customer personalization is another goal to achieve

As you may think, McDonald's didn't spend over $300 million on a machine-learning company only to juice up its drive-thru sales. An important part is figuring out how to leverage the "personalization" part of a personalization engine. Fine-tuned insights at the store level are one thing, but Easterbrook envisions something even more granular. "If customers are willing to identify themselves—there's all sorts of ways you can do that—we can be even more useful to them, because now we call up their favorites," according to Easterbrook, who stresses that privacy is paramount. As for what form that might ultimately take, Easterbrook raises a handful of possibilities. McDonald's already uses geofencing around its stores to know when a mobile-app customer is approaching, and prepares their order accordingly.

On the downside of this tech integration

When you know you have to change so much in your company, it's easy to forget some of the consequences. You race to implement all the new tech and don't adequately think about what your employees might make of it all. This seems to be happening to McDonald's. As the fast-food chain tries to catch up to food trends that have been established for some time, its employees seem unhappy about it. As Bloomberg reports, the more McDonald's introduces fresh beef, touchscreen ordering and delivery, the more its employees are thinking: "This is all too much work." One employee at a McDonald's franchise revealed at the beginning of this year: "Employee turnover is at an all-time high for us," adding, "Our restaurants are way too stressful, and people do not want to work in them."
Workers are walking away rather than dealing with new technologies and menu options. The result: customers will wait longer. Already, drive-through times at McDonald's slowed to 239 seconds last year -- more than 30 seconds slower than in 2016, according to QSR magazine. Turnover at U.S. fast-food restaurants jumped to 150%, meaning a store employing 20 workers would go through 30 in one year. Having said that, it comes as no surprise that McDonald's on Tuesday announced to the National Restaurant Association that it will no longer participate in lobbying efforts against minimum-wage hikes at the federal, state or local level. It makes sense when the company is already paying low wages and an all-time-high attrition rate looms as a bigger problem. Of course, technology is supposed to solve all the world's problems while simultaneously eliminating the need for many people. It looks like McDonald's has put all its eggs in the machine learning and automation basket. Would it not be a rich irony if people saw technology being introduced and walked out, deciding it was all too much trouble for just a burger?

25 Startups using machine learning differently in 2018: From farming to brewing beer to elder care
An AI startup now wants to monitor your kids' activities to help them grow 'securly'
Microsoft acquires AI startup Lobe, a no code visual interface tool to build deep learning models easily
Sugandha Lahoti
07 Dec 2017
6 min read

Top Research papers showcased at NIPS 2017 - Part 1

The 31st annual Conference on Neural Information Processing Systems (NIPS 2017) is being held in Long Beach, California from December 4-9, 2017. The 6-day conference is hosting a number of invited talks, demonstrations, tutorials, and paper presentations pertaining to the latest in machine learning, deep learning and AI research. This year the conference has grown larger than life, with a record-high 3,240 submitted papers, 678 accepted, and a completely sold-out event. Top researchers from Google, Microsoft, IBM, DeepMind, Facebook, and Amazon are among the prominent players who enthusiastically participated this year. Here is a quick roundup of some of the top research papers to date.

Generative Adversarial Networks

Generative Adversarial Networks (GANs) are a hot topic of research at the ongoing NIPS conference. GANs offer a way to train deep learning algorithms with far less labelled data, by making use of unlabelled data. Here are a few research papers on GANs.

Regularization can stabilize training of GANs

Microsoft researchers have proposed a new regularization approach that yields a stable GAN training procedure at low computational cost. Their new model overcomes a fundamental limitation of GANs arising from a dimensional mismatch between the model distribution and the true distribution, which causes their density ratio and the associated f-divergence to be undefined. Their paper "Stabilizing Training of Generative Adversarial Networks through Regularization" turns GAN models into reliable building blocks for deep learning, and the technique has been applied to several datasets, including image generation tasks.

AdaGAN: Boosting GAN Performance

Training GANs can at times be a hard task. They can also suffer from the problem of missing modes, where the model is not able to produce examples in certain regions of the space.
Google researchers have developed an iterative procedure called AdaGAN in their paper "AdaGAN: Boosting Generative Models", an approach inspired by boosting algorithms, where many potentially weak individual predictors are greedily aggregated to form a strong composite predictor. It adds a new component to a mixture model at every step by running a GAN algorithm on a re-weighted sample. The paper also addresses the problem of missing modes.

Houdini: Generating Adversarial Examples

The generation of adversarial examples is considered a critical milestone for evaluating and improving the robustness of machine learning systems. Current methods, however, are confined to classification tasks and are unable to target the actual performance measure of the problem at hand. To tackle this issue, Facebook researchers have come up with a paper titled "Houdini: Fooling Deep Structured Prediction Models", a novel, flexible approach for generating adversarial examples tailored to the final performance measure of the task in question (including combinatorial, non-decomposable measures).

Stochastic hard-attention for Memory Addressing in GANs

DeepMind researchers showcased a new method which uses stochastic hard-attention to retrieve memory content in generative models. Their paper, "Variational memory addressing in generative models", was presented on the second day of the conference and is an advancement over the popular differentiable soft-attention mechanism. The new technique allows developers to apply variational inference to memory addressing, leading to more precise memory lookups using target information, especially in models with large memory buffers and many confounding entries in the memory.

Image and Video Processing

A lot of hype was also around developing sophisticated models and techniques for image and video processing. Here is a quick glance at the top presentations.
Fader Networks: Image manipulation through disentanglement

Facebook researchers have introduced Fader Networks in their paper titled "Fader Networks: Manipulating Images by Sliding Attributes". These networks use an encoder-decoder architecture to reconstruct images by disentangling their salient information and the values of particular attributes directly in a latent space. Disentanglement helps in manipulating these attributes to generate variations of pictures of faces while preserving their naturalness. This innovative approach results in much simpler training schemes and scales to manipulating multiple attributes jointly.

Visual interaction networks for Video simulation

Another paper, titled "Visual interaction networks: Learning a physics simulator from video", proposes a new neural-network model that learns about physical objects without prior knowledge. DeepMind's Visual Interaction Network is used for video analysis and is able to infer the states of multiple physical objects from just a few frames of video. It then uses these to predict object positions many steps into the future, and can also deduce the locations of invisible objects.

Transfer, Reinforcement, and Continual Learning

A lot of research is going on in the fields of transfer, reinforcement, and continual learning to make stable and powerful deep learning models. Here are a few research papers presented in this domain.

Two new techniques for Transfer Learning

Currently, a large set of input/output (I/O) examples is required to learn any underlying input-output mapping. By leveraging information from related tasks, researchers at Microsoft have addressed the problem of the data and computation efficiency of program induction. Their paper "Neural Program Meta-Induction" uses two approaches for cross-task knowledge transfer.
The first is portfolio adaptation, where a set of induction models is pretrained on related tasks and the best model is adapted to the new task using transfer learning. The second is meta program induction, a k-shot learning approach that lets a model generalize to new tasks without requiring any additional training.

Hybrid Reward Architecture to solve the problem of generalization in Reinforcement Learning

A new paper from Microsoft, "Hybrid Reward Architecture for Reinforcement Learning", presents a method to address the generalization problem faced by typical deep RL methods. Hybrid Reward Architecture (HRA) takes a decomposed reward function as input and learns a separate value function for each component reward function. This is especially useful in domains where the optimal value function cannot easily be reduced to a low-dimensional representation. With this approach, the overall value function is much smoother and can be more easily approximated by a low-dimensional representation, enabling more effective learning.

Gradient Episodic Memory to counter catastrophic forgetting in continual learning models

Continual learning is about improving the ability of models to solve sequential tasks without forgetting previously acquired knowledge. In the paper "Gradient Episodic Memory for Continual Learning", Facebook researchers propose a set of metrics to evaluate models over a continuous stream of data. These metrics characterize models by their test accuracy and their ability to transfer knowledge across tasks. They also propose a model for continual learning, called Gradient Episodic Memory (GEM), that reduces catastrophic forgetting, and they demonstrate its performance against other methods on variants of the MNIST and CIFAR-100 datasets.
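To make GEM's core idea concrete, here is a minimal, hypothetical NumPy sketch (not the authors' code) of the single-constraint case: when the current task's gradient conflicts with the gradient computed on the episodic memory of past tasks, it is projected so that following it no longer increases the memory loss.

```python
import numpy as np

def gem_project(g, g_mem):
    """Single-constraint sketch of GEM's gradient projection.

    g     -- gradient of the loss on the current task
    g_mem -- gradient of the loss on the episodic memory of past tasks

    If the gradients conflict (negative inner product), remove the
    component of g that points against g_mem; otherwise keep g as-is.
    """
    dot = float(g @ g_mem)
    if dot >= 0:
        return g  # no interference with past tasks
    return g - (dot / float(g_mem @ g_mem)) * g_mem

# Toy example: the current-task update would undo progress on memory.
g = np.array([1.0, -1.0])
g_mem = np.array([0.0, 1.0])
g_proj = gem_project(g, g_mem)
print(g_proj)                  # conflicting component removed
print(float(g_proj @ g_mem))   # now >= 0: memory loss no longer increases
```

In the full paper the projection is solved as a quadratic program with one constraint per previous task; the one-constraint case above reduces to this closed-form projection.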
In our next post, we will cover a selection of papers presented so far at NIPS 2017 in the areas of Predictive Modelling, Machine Translation, and more. For live content coverage, you can visit NIPS’ Facebook page.
Fatema Patrawala
10 Apr 2019
10 min read

Online Safety vs Free Speech: UK’s "Online Harms" white paper divides the internet and puts tech companies in government crosshairs

The internet is an integral part of everyday life for so many people. It has added a new dimension to the spaces of imagination in which we all live, but the problems of the offline world have moved there, too. As the internet continues to grow and transform our lives, often for the better, we should not ignore the very real harms which people face online every day, and lawmakers around the world are taking decisive action to make people safer online.

On Monday, Europe drafted an EU regulation on preventing the dissemination of terrorist content online. Last week, the Australian parliament passed legislation to crack down on violent videos on social media. Recently, Sen. Elizabeth Warren, a US 2020 presidential hopeful, proposed strong antitrust laws to break up big tech companies like Amazon, Google, Facebook and Apple. On 3rd April, Warren introduced the Corporate Executive Accountability Act, a new piece of legislation that would make it easier to criminally charge company executives when Americans' personal data is breached. Last year, the German parliament enacted the NetzDG law, requiring large social media sites to remove posts that violate certain provisions of the German code, including broad prohibitions on "defamation of religion," "hate speech," and "insult."

And here is yet another tech regulation announcement from Monday: a white paper on online harms released by the UK government. The Department for Digital, Culture, Media and Sport (DCMS) has proposed an independent watchdog that will write a "code of practice" for tech companies. According to Jeremy Wright, Secretary of State for Digital, Culture, Media and Sport, and Sajid Javid, Home Secretary, "nearly nine in ten UK adults and 99% of 12 to 15 year olds are online. Two thirds of adults in the UK are concerned about content online, and close to half say they have seen hateful content in the past year.
The tragic recent events in New Zealand show just how quickly horrific terrorist and extremist content can spread online." They further emphasized not allowing such harmful behaviours and content to undermine the significant benefits that the digital revolution can offer. The white paper therefore puts forward ambitious plans for a new system of accountability and oversight for tech companies, moving far beyond self-regulation. It includes a new regulatory framework for online safety which will clarify companies' responsibilities to keep UK users safer online, with the most robust action to counter illegal content and activity.

The paper suggests three major steps for tech regulation:

establishing an independent regulator that can write a "code of practice" for social networks and internet companies
giving the regulator enforcement powers, including the ability to fine companies that break the rules
considering additional enforcement powers, such as the ability to fine company executives and force internet service providers to block sites that break the rules

Outlining the proposals, Culture Secretary Jeremy Wright discussed the fine percentage with BBC UK: "If you look at the fines available to the Information Commissioner around the GDPR rules, that could be up to 4% of company's turnover... we think we should be looking at something comparable here."

What are the kinds of 'online harms' cited in the paper?

The paper covers a range of issues that are clearly defined in law, such as spreading terrorist content, child sex abuse, so-called revenge pornography, hate crimes, harassment and the sale of illegal goods. It also covers harmful behaviour with a less clear legal definition, such as cyber-bullying, trolling and the spread of fake news and disinformation.
The paper cites that in 2018, US tech companies made over 18.4 million referrals of child sexual abuse material, under the heading of online CSEA (Child Sexual Exploitation and Abuse), to the National Center for Missing and Exploited Children (NCMEC). Of those, 113,948 were UK-related referrals in 2018, up from 82,109 in 2017. In the third quarter of 2018, Facebook reported removing 8.7 million pieces of content globally for breaching policies on child nudity and sexual exploitation.

Another type of online harm occurs when terrorists use online services to spread their vile propaganda and mobilise support. The paper emphasizes that terrorist content online threatens the UK's national security and the safety of the public. It gives the example of the five terrorist attacks in the UK during 2017, each of which had an online element. Online terrorist content remains a feature of contemporary radicalisation and is seen across terrorist investigations, including cases where suspects have become very quickly radicalised to the point of planning attacks. This is partly a result of the continued availability and deliberately attractive format of the terrorist material they are accessing online.

Further, it suggests that social networks must tackle material that advocates self-harm and suicide, which became a prominent issue after 14-year-old Molly Russell took her own life in 2017. After she died, her family found distressing material about depression and suicide on her Instagram account. Molly's father, Ian Russell, holds the social media giant partly responsible for her death. Home Secretary Sajid Javid said tech giants and social media companies had a moral duty "to protect the young people they profit from", adding: "Despite our repeated calls to action, harmful and illegal content - including child abuse and terrorism - is still too readily available online."

What does the new proposal suggest to tackle online harm?

The paper calls for an independent regulator to hold internet companies to account.
It does not specify whether a new body will be established or an existing one handed new powers. The regulator will define a "code of best practice" that social networks and internet companies must adhere to. This applies to tech companies like Facebook, Twitter and Google, and the rules would also apply to messaging services such as WhatsApp and Snapchat, and to cloud storage services. The regulator will have the power to fine companies and publish notices naming and shaming those that break the rules. The paper suggests the government is also considering fines for individual company executives, making search engines remove links to offending websites, and consulting on blocking harmful websites.

Another area discussed in the paper is developing a culture of transparency, trust and accountability as a critical element of the new regulatory framework. The regulator will have the power to require annual transparency reports from companies in scope, outlining the prevalence of harmful content on their platforms and what measures they are taking to address it. These reports will be published online by the regulator, so that users can make informed decisions about online use. Additionally, it suggests the spread of fake news could be tackled by forcing social networks to employ fact-checkers and promote legitimate news sources.

How it plans to deploy technology as part of the solution

The paper mentions that companies should invest in the development of safety technologies to reduce the burden on users to stay safe online. In November 2018, the UK Home Secretary co-hosted a hackathon with five major technology companies to develop a new tool to identify online grooming. The paper proposes that this tool be licensed for free to other companies, and plans more such innovative and collaborative efforts with them.
The government also plans to work with industry and civil society to develop a safety-by-design framework, linking up with existing legal obligations around data protection by design and secure-by-design principles. This will make it easier for startups and small businesses to embed safety during the development or update of products and services. It also plans to explore how AI can best be used to detect, measure and counter online harms, while ensuring its deployment remains safe and ethical.

A new project led by The Alan Turing Institute is setting out to address this issue. The 'Hate Speech: Measures and Counter-measures' project will use a mix of natural language processing techniques and qualitative analyses to create tools which identify and categorize different strengths and types of online hate speech. Other plans include launching online safety apps which will combine state-of-the-art machine-learning technology to track children's activity on their smartphones with the ability for children to self-report their emotional state.

Why is the white paper receiving critical comments?

Though the paper seems a welcome step towards sane internet regulation and looks sensible at first glance, it has been regarded in some quarters as too ambitious and in others as unrealistically feeble, reflecting the conflicting political pressures under which it was generated. TechUK, an umbrella group representing the UK's technology industry, said the government must be "clear about how trade-offs are balanced between harm prevention and fundamental rights". Jim Killock, executive director of Open Rights Group, said the government's proposals would "create state regulation of the speech of millions of British citizens". Matthew Lesh, head of research at free market think tank the Adam Smith Institute, went further, saying "The government should be ashamed of themselves for leading the western world in internet censorship.
The proposals are a historic attack on freedom of speech and the free press. At a time when Britain is criticising violations of freedom of expression in states like Iran, China and Russia, we should not be undermining our freedom at home."

No one doubts the harm done by child sexual abuse or terrorist propaganda online, but these things are already illegal. The difficulty is enforcement, which the white paper does nothing to address. Effective enforcement would demand a great deal of money and human time. The present system relies on a mixture of human reporting and algorithms, and the algorithms can be fooled without too much trouble: 300,000 of the 1.5 million copies of the Christchurch terrorist video uploaded to Facebook within 24 hours of the crime went undetected by automated systems.

There is also criticism of the white paper's vision, which says it wants "a free, open and secure internet with freedom of expression online" "where companies take effective steps to keep their users safe", but never explains how free expression will be protected, which seems to contradict the regulation itself. https://twitter.com/jimkillock/status/1115253155007205377

Beyond this, there is a conceptual problem. Much of the harm done on and by social media does not come from deliberate criminality, but from ordinary people released from the constraints of civility. It is here that the white paper fails most seriously. It talks about material - such as "intimidation, disinformation, the advocacy of self-harm" - that is harmful but not illegal, yet proposes to regulate it in the same way as material which is both. Even leaving aside politically motivated disinformation, this is an area where much deeper and clearer thought is needed. https://twitter.com/guy_herbert/status/1115180765128667137

There is no doubt that some forms of disinformation do serious harm, both to individuals and to society as a whole.
Regulating the internet is necessary, but it won't be easy or cheap, and too much of this white paper looks like an attempt to find cheap and easy solutions to really hard questions.

Tech companies in EU to face strict regulation on Terrorist content: One hour take down limit; Upload filters and private Terms of Service
Tech regulation to an extent of sentence jail: Australia's 'Sharing of Abhorrent Violent Material Bill' to Warren's 'Corporate Executive Accountability Act'
How social media enabled and amplified the Christchurch terrorist attack

Sugandha Lahoti
23 Oct 2018
5 min read

Following Linux, GNU publishes ‘Kind Communication Guidelines’ to benefit members of ‘disprivileged’ demographics

The GNU project published its Kind Communication Guidelines yesterday, encouraging contributors to be kinder in their communication to fellow contributors, especially to women and other members of disprivileged demographics. The news follows recent changes to the Code of Conduct for the Linux community. Last month, Linux maintainers revised their Code of Conflict, moving instead to a Code of Conduct. The change was committed by Linus Torvalds, who shortly afterwards took a self-imposed leave from the project to work on his behavior. By switching to a Code of Conduct, Linux placed emphasis on how contributors and maintainers work together to cultivate an open and safe community that people want to be involved in.

However, Linux's move was not received well by many of its developers. Some even threatened to pull out blocks of their code important to the project in revolt against the change. The main concern was that the new CoC could be randomly or selectively used as a tool to punish or remove anyone from the community. Read the summary of developers' views on the Code of Conduct that, according to them, justifies their decision.

GNU is taking a different approach from Linux in evolving its community into a more welcoming place for everyone. As opposed to a stricter code of conduct, which forces people to follow rules or suffer punishment, the Kind Communication Guidelines aim to guide people towards kinder communication rather than ordering them to be kind.

What do Stallman's 'Kindness' guidelines say?

In a post, Richard Stallman, President of the Free Software Foundation, said, "People are sometimes discouraged from participating in GNU development because of certain patterns of communication that strike them as unfriendly, unwelcoming, rejecting, or harsh.
This discouragement particularly affects members of disprivileged demographics, but it is not limited to them." He further adds, "Therefore, we ask all contributors to make a conscious effort, in GNU Project discussions, to communicate in ways that avoid that outcome—to avoid practices that will predictably and unnecessarily risk putting some contributors off." Stallman encourages contributors to lead by example and apply the following guidelines in their communication:

Do not give heavy-handed criticism

Do not criticize people for wrongs that you only speculate they may have done; try to understand their work. Respond to what people actually said, not to exaggerations of their views. Your criticism will not be constructive if it is aimed at a target other than their real views. It is helpful to show contributors that being imperfect is normal and to politely help them fix their problems. Reminders about problems should be gentle and not too frequent.

Avoid discrimination based on demographics

Treat other participants with respect, especially when you disagree with them. Stallman requests people to identify and acknowledge others by the names they use and their gender identity, and to avoid presuming or commenting on a person's typical desires, capabilities or actions as a member of some demographic group. These are off-topic in GNU Project discussions.

Personal attacks are a big no-no

Avoid making personal attacks or adopting a harsh tone towards a person. Go out of your way to show that you are criticizing a statement, not a person. Vice versa, if someone attacks or offends your personal dignity, don't "hit back" with another personal attack: "That tends to start a vicious circle of escalating verbal aggression. A private response, politely stating your feelings as feelings, and asking for peace, may calm things down." Avoid arguing unceasingly for your preferred course of action when a decision for some other course has already been made.
That tends to block the activity's progress.

Avoid indulging in political debates

Contributors are asked not to raise unrelated political issues in GNU Project discussions. The only political positions the GNU Project endorses are that users should have control of their own computing (for instance, through free software) and support for basic human rights in computing.

Stallman hopes that these guidelines will encourage more contribution to GNU projects, and that subsequent discussions will be friendlier and reach conclusions more easily. Read the full guidelines on the GNU blog. Reactions to GNU's move have been mostly positive.

https://twitter.com/MatthiasStrubel/status/1054406791088562177
https://twitter.com/0xUID/status/1054506057563824130
https://twitter.com/haverdal76/status/1054373846432673793
https://twitter.com/raptros_/status/1054415382063316993

Linus Torvalds and Richard Stallman have been fathers of the open source movement since its inception over twenty years ago. These moves underline that open source does have a toxic culture problem, but that it is evolving and sincerely working to become more open and welcoming, so that anyone can easily contribute to projects. We'll be watching this space closely to see which approach to inclusion works more effectively, and whether there are other approaches to making this transition smooth for everyone involved.

Stack Overflow revamps its Code of Conduct to explain what 'Be nice' means - kindness, collaboration, and mutual respect.
Linux drops Code of Conflict and adopts new Code of Conduct.
Mozilla drops "meritocracy" from its revised governance statement and leadership structure to actively promote diversity and inclusion

Sugandha Lahoti
01 Apr 2019
7 min read

Zuckerberg wants to set the agenda for tech regulation in yet another “digital gangster” move

Facebook has probably made the biggest April Fool's joke of this year. Over the weekend, Mark Zuckerberg, CEO of Facebook, penned a post detailing the need for tech regulation in four major areas: "harmful content, election integrity, privacy, and data portability". However, privacy advocates and tech experts were frustrated rather than pleased with the announcement, stating that given recent privacy scandals, Facebook's CEO shouldn't be the one making the rules.

The term 'digital gangster' was first coined by the Guardian when the Digital, Culture, Media and Sport Committee published its final report on Facebook's disinformation and fake news practices. Per the publisher, "Facebook behaves like a 'digital gangster' destroying democracy. It considers itself to be 'ahead of and beyond the law'. It 'misled' parliament. It gave statements that were 'not true'". Last week, Facebook rolled out a new Ad Library to provide more stringent transparency aimed at preventing interference in elections worldwide. It also rolled out a policy banning white nationalist content from its platforms.

Zuckerberg's four new regulation ideas

"I believe we need a more active role for governments and regulators. By updating the rules for the internet, we can preserve what's best about it — the freedom for people to express themselves and for entrepreneurs to build new things — while also protecting society from broader harms," writes Zuckerberg.

Reducing harmful content

For harmful content, Zuckerberg proposes a set of rules governing what types of content tech companies should consider harmful. According to him, governments should set "baselines" for online content that require filtering. He suggests that third-party organizations should also set standards governing the distribution of harmful content and measure companies against those standards. "Internet companies should be accountable for enforcing standards on harmful content," he writes.
"Regulation could set baselines for what's prohibited and require companies to build systems for keeping harmful content to a bare minimum." Ironically, over the weekend Facebook was accused of enabling the spread of anti-Semitic propaganda after refusing to take down repeatedly flagged hate posts. Facebook stated that it would not remove the posts, as they neither breach its hate speech rules nor violate UK law.

Preserving election integrity

The second regulation idea revolves around election integrity. Facebook has been taking steps in this direction by making significant changes to its advertising policies. Facebook's new Ad Library, released last week, provides advertising transparency on all active ads running on a Facebook page, including politics or issue ads. Ahead of the European Parliamentary election in May 2019, Facebook is also introducing ads transparency tools in the EU. Zuckerberg advises other tech companies to build searchable ad archives as well. "Deciding whether an ad is political isn't always straightforward. Our systems would be more effective if regulation created common standards for verifying political actors," Zuckerberg says. He also talks about updating online political advertising laws to cover political issues rather than focusing primarily on candidates and elections: "I believe legislation should be updated to reflect the reality of the threats and set standards for the whole industry."

What is surprising is that just 24 hours after Zuckerberg published his post committing to preserve election integrity, Facebook took down over 700 pages, groups, and accounts engaged in "coordinated inauthentic behavior" around Indian politics ahead of the country's national elections. According to DFRLab, who analyzed these pages, Facebook was in fact quite late in taking action against them.
Per DFRLab, "Last year, AltNews, an open-source fact-checking outlet, reported that a related website called theindiaeye.com was hosted on Silver Touch servers. Silver Touch managers denied having anything to do with the website or the Facebook page, but Facebook's statement attributed the page to "individuals associated with" Silver Touch. The page was created in 2016. Even after several regional media outlets reported that the page was spreading false information related to Indian politics, the engagements on posts kept increasing, with a significant uptick from June 2018 onward."

Adhering to privacy and data portability

For privacy, Zuckerberg talks about the need to develop a "globally harmonized framework" along the lines of the European Union's GDPR rules for the US and other countries. "I believe a common global framework — rather than regulation that varies significantly by country and state — will ensure that the internet does not get fractured, entrepreneurs can build products that serve everyone, and everyone gets the same protections," he writes. Which makes us wonder: what is stopping him from implementing EU-style GDPR on Facebook globally until a common framework is agreed upon by countries?

Lastly, he adds that "regulation should guarantee the principle of data portability", allowing people to freely port their data across different services. "True data portability should look more like the way people use our platform to sign into an app than the existing ways you can download an archive of your information. But this requires clear rules about who's responsible for protecting information when it moves between services." He also endorses the need for a standard data transfer format by supporting the open source Data Transfer Project.

Why this call for regulation now?

Zuckerberg's post comes at a strategic point in time, when Facebook is battling a large number of investigations, the most recent of which is a housing discrimination charge by the U.S.
Department of Housing and Urban Development (HUD), which alleges that Facebook is using its advertising tools to violate the Fair Housing Act. Notably, Zuckerberg's blog post also comes weeks after Senator Elizabeth Warren stated that, if elected president in 2020, her administration would break up Facebook. Facebook was quick to remove, and then restore, several ads placed by Warren that called for the breakup of Facebook and other tech giants.

A possible explanation for Zuckerberg's post is that Facebook will now be able to say it is actually pro-regulation. This means it can lobby governments towards decisions that would be most beneficial to the company. It may also set up its own work on political advertising and content moderation as the standard for other industries, and, by blaming decisions on third parties, possibly reduce scrutiny from lawmakers.

According to a report by Business Insider, just as Zuckerberg published his post, a large number of his previous posts and announcements were deleted from the Facebook blog. Reached for comment, a Facebook spokesperson told Business Insider that the posts were "mistakenly deleted" due to "technical errors." Whether this was a deliberate mistake or an unintentional one, we don't know.

Zuckerberg's post sparked a huge discussion on Hacker News, with most people drawing negative conclusions from his writeup. Here are some of the views:

"I think Zuckerberg's intent is to dilute the real issue (privacy) with these other three points. FB has a bad record when it comes to privacy and they are actively taking measures against it. For example, they lobby against privacy laws.
They create shadow profiles and they make it difficult or impossible to delete your account."

"harmful content, election integrity, privacy, data portability. Shut down Facebook as a company and three of those four problems are solved."

"By now it's pretty clear, to me at least, that Zuckerberg simply doesn't get it. He could have fixed the issues for over a decade. And even in 2019, after all the evidence of mismanagement and public distrust, he still refuses to relinquish any control of the company. This is a tone-deaf opinion piece."

Twitterati shared the same sentiment.

https://twitter.com/futureidentity/status/1112455687169327105
https://twitter.com/BrendanCarrFCC/status/1112150281066819584
https://twitter.com/davidcicilline/status/1112085338342727680
https://twitter.com/DamianCollins/status/1112082926232092672
https://twitter.com/MaggieL/status/1112152675699834880

Ahead of EU 2019 elections, Facebook expands its Ad Library to provide advertising transparency in all active ads
Facebook will ban white nationalism and separatism content in addition to white supremacy content
Are the lawmakers and media being really critical towards Facebook?
Sugandha Lahoti
12 Jun 2019
8 min read

Highlights from Mary Meeker’s 2019 Internet trends report

At Recode by Vox's 2019 Code Conference on Tuesday, Bond partner Mary Meeker gave her presentation onstage, covering the internet's latest trends. Meeker first started presenting these reports in 1995, underlining the most important statistics and technology trends on the internet. Last September, Meeker quit Kleiner Perkins to start her own firm, Bond; she is popularly known as the Queen of the Internet.

Mary Meeker's 2019 Internet Trends report highlighted that the internet is continuing to grow, slowly, as more users come online, especially with mobile devices. She also talked about increased internet ad spending, data growth, the rise of freemium subscription business models, interactive gaming, the on-demand economy and more. https://youtu.be/G_dwZB5h56E

The internet trends highlighted by Meeker include:

Internet Users
E-commerce and advertising
Internet Usage
Freemium business models
Data growth
Jobs and Work
Online Education
Immigration and Healthcare

Internet Users

More than 50% of the world's population now has access to the internet. There are 3.8 billion internet users in the world, with Asia-Pacific leading in both users and potential. China is the largest market, with 21% of total internet users, while India is at 12%. However, growth is slowing: 6% in 2018 versus 7% in 2017, because so many people have come online that new users are harder to come by. New smartphone unit shipments actually declined in 2018.

Per the global internet market cap leaders, the U.S. is stable at 18 of the top 30 and China at 7 of the top 30. These are the two leading countries where internet innovation is at an especially high level. Revenue growth for the internet market cap leaders continues to slow: 11 percent year-on-year in Q1 versus 13 percent in Q4.

Internet Usage

Internet usage saw solid growth, driven by investment in innovation. Digital media usage in the U.S.
is accelerating, up 7% versus 5% growth in 2017. The average US adult spends 6.3 hours each day with digital media, over half of which is spent on mobile. Wearables had 52 million users, a figure that doubled in four years. Roughly 70 million people globally listen to podcasts in the US, a figure that has also doubled in about four years.

Outside the US, there is especially high innovation in data-driven and direct fulfillment, which is growing very rapidly in China, as well as in financial services. Images are also becoming an increasingly relevant way to communicate: more than 50% of tweet impressions today involve images, video or other forms of media. Interactive gaming innovation is rising across platforms as interactive games like Fortnite become the new social media for certain people; gaming reached 2.4 billion users, up 6 percent year-on-year in 2018.

On the flip side, almost 26% of adults are constantly online, versus 21% three years ago. That number jumps to 39% for 18 to 29 year-olds surveyed. However, digital media users are taking action to reduce their usage, and businesses are also taking action to help users monitor their usage. Social media usage has decelerated, up 1% in 2018 versus 6% in 2017. Privacy concerns are high but moderating, as regulators and businesses improve consumer privacy controls. Encrypted messaging and traffic are rising rapidly: in Q1, 87 percent of global web traffic was encrypted, up from 53 percent three years ago.

Another usage concern is problematic content, which on the internet can be less filtered and more amplified. Images and streaming can be more powerful than text. Algorithms can amplify user patterns, social media can amplify trending topics, bad actors can amplify ideologies, unintended bad actors can amplify misinformation, and extreme views can amplify polarization.
However, internet platforms are driving efforts to reduce problematic content, as are consumers and businesses. 88% of people in the U.S. believe the internet has been mostly good for them, and 70% believe it has been mostly good for society. Cyber attacks continue to rise, including state-sponsored attacks, large-scale data provider attacks, and monetary extortion attacks.

E-commerce and online advertising

E-commerce is now 15 percent of retail sales. Its growth has slowed, up 12.4 percent in Q1 compared with a year earlier, but it still towers over growth in regular retail, which was just 2 percent in Q1. In online advertising, comparing the share of media time spent against the share of advertising dollars spent, mobile hit equilibrium in 2018, while desktop hit that point in 2015. Internet ad spending accelerated slightly in 2018, up 22 percent annually. Most of the spending still goes to Google and Facebook, but companies like Amazon and Twitter are getting a growing share. Some 62 percent of all digital display ad buying is for programmatic ads, which will continue to grow. Quarterly revenue growth at the leading internet advertising companies has been decelerating, at 20 percent in Q1. Google and Facebook still account for the majority of online ad revenue, but the growth of US advertising platforms like Amazon, Twitter, Snapchat, and Pinterest is outstripping the big players: Google’s ad revenue grew 1.4 times over the past nine quarters and Facebook’s grew 1.9 times, while the combined group of new players grew 2.6 times. Customer acquisition costs, the marketing spending needed to attract each new customer, are going up. That is unsustainable, because in some cases they surpass the long-term revenue those customers will bring. Meeker suggests cheaper ways to acquire customers, like free trials and unpaid tiers.
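Those cumulative revenue multiples over nine quarters can be converted into implied average per-quarter growth rates with a little arithmetic, which makes the gap between the incumbents and the newer platforms easier to compare. A minimal sketch (the function name and comparison framing are mine, not Meeker’s; the multiples are the report’s figures):

```python
def implied_quarterly_rate(cumulative_multiple: float, quarters: int) -> float:
    """Convert a cumulative growth multiple over N quarters into the
    implied average per-quarter growth rate (e.g. 1.4x over 9 quarters)."""
    return cumulative_multiple ** (1 / quarters) - 1

# Growth multiples over the past nine quarters, per the report:
for name, multiple in [("Google", 1.4), ("Facebook", 1.9), ("New platforms", 2.6)]:
    rate = implied_quarterly_rate(multiple, 9)
    print(f"{name}: ~{rate:.1%} per quarter")
```

Run as-is, this shows roughly 3.8% per quarter for Google, 7.4% for Facebook, and 11.2% for the combined newer platforms, so the 2.6x group is compounding about three times as fast as Google.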
Freemium business models

Freemium business models are growing and scaling. The freemium model pairs a free user experience, which enables more usage, engagement, social sharing, and network effects, with a premium user experience that drives monetization and product innovation. The model started in gaming and is now emerging in both consumer and enterprise products. One important factor in this growth is cloud deployment revenue, which grew about 58% year-over-year. Another enabler of freemium subscription businesses is efficient digital payments, which now account for more than 50% of day-to-day transactions around the world.

Data growth

Internet trends indicate that a number of data plumbers are helping companies collect data, manage connections, and optimize data. In a survey of retail customers, 91% preferred brands that provided personalized offers and recommendations, 83% were willing to passively share data in exchange for personalized services, and 74% were willing to actively share data in exchange for personalized experiences. Data volume and utilization are also evolving rapidly: enterprise surpassed consumer in 2018, and cloud is overtaking both. More data is now stored in the cloud than on private enterprise servers or consumer devices.

Jobs and Work

Strong economic indicators and internet-enabled services are reshaping work. Looking at global GDP, China, the US, and India are rising, while Europe is falling. Cross-border trade stands at 29% of global GDP and has been growing for many years. Concern about unemployment is very high outside the US but low within it. The consumer confidence index is high and rising, unemployment is at a 19-year low, job openings are at an all-time high, and wages are rising. On-demand work is creating internet-enabled opportunities and efficiencies. There are 7 million on-demand workers, up 22 percent year-on-year.
Remote work is also creating internet-enabled work opportunities and efficiencies: 5 percent of Americans now work remotely, up from 3 percent in 2000.

Online education

Education costs and student debt are rising in the US, while post-secondary enrollment is slowing. Online education enrollment is high across a diverse base of universities: public, private for-profit, and private not-for-profit. Top offline institutions are ramping up their online offerings at a rapid rate, most recently the University of Pennsylvania, University of London, University of Michigan, and CU Boulder. Google’s program of certificates for in-demand jobs, run in collaboration with Coursera, is growing rapidly.

Immigration and Healthcare

In the U.S., 60% of the most highly valued tech companies were founded by first- or second-generation Americans; they employed 1.9 million people last year. US entitlements account for 61% of government spending, versus 42% 30 years ago, and show no signs of slowing. Healthcare is steadily digitizing, driven by consumers, and the trends are very powerful: expect more telemedicine and on-demand consultations. For details and infographics, we recommend going through the slide deck of the Internet Trends report.

What Elon Musk and South African conservation can teach us about technology forecasting
Jim Balsillie on Data Governance Challenges and 6 Recommendations to tackle them
Experts present the most pressing issues facing global lawmakers on citizens’ privacy, democracy and the rights to freedom of speech


Tech regulation heats up: from Australia’s ‘Sharing of Abhorrent Violent Material Bill’ to Warren’s ‘Corporate Executive Accountability Act’

Fatema Patrawala
04 Apr 2019
6 min read
Businesses in powerful economies like the USA, UK, and Australia are arguably as powerful as governments, or more so, especially now that we inhabit a global economy in which an intricate web of connections can expose the appalling employment conditions of the Chinese workers who assemble the Apple smartphones we depend on. Amazon’s revenue is bigger than Kenya’s GDP; according to Business Insider, 25 major American corporations have revenues greater than the GDP of entire countries. Because corporations create millions of jobs and control vast amounts of money and resources, their sheer economic power dwarfs governments’ ability to regulate and oversee them. With the recent global-scale scandals the tech industry has found itself in, some resulting in the deaths of groups of people, governments are waking up to the urgent need to hold tech companies responsible. While some government laws are reactionary, others take a more cautious approach. One thing is for sure: 2019 will see a lot of tech regulation come into play. How effective it is, what intended and unintended consequences it bears, and how masterfully big tech wields its lobbying prowess, we’ll have to wait and see.

Holding tech platforms that enable hate and violence accountable

Australian govt passes law that criminalizes companies and execs for hosting abhorrent violent content

Today, the Australian parliament passed legislation to crack down on violent videos on social media. The bill, described by the attorney general, Christian Porter, as “most likely a world first”, was drafted in the wake of the Christchurch terrorist attack by a white supremacist Australian, when video of the perpetrator’s violent attack spread on social media faster than it could be removed.
The Sharing of Abhorrent Violent Material bill creates new offences for content service providers and hosting services that fail to notify the Australian federal police about, or fail to expeditiously remove, videos depicting “abhorrent violent conduct”. That conduct is defined as terrorist acts, murders, attempted murders, torture, rape, or kidnapping. The bill creates a regime under which the eSafety Commissioner can notify social media companies that they are deemed to be aware they are hosting abhorrent violent material, triggering an obligation to take it down. The Digital Industry Group, which represents Google, Facebook, Twitter, Amazon, and Verizon Media in Australia, has warned that the bill was passed without meaningful consultation and threatens penalties over content created by users. Sunita Bose, the group’s managing director, says, “with the vast volumes of content uploaded to the internet every second, this is a highly complex problem”. She adds that this “pass it now, change it later” approach to legislation creates immediate uncertainty for Australia’s tech industry. The chief executive of Atlassian, Scott Farquhar, said the legislation fails to define how “expeditiously” violent material must be removed and does not specify who within a social media company should be punished.

https://twitter.com/scottfarkas/status/1113391831784480768

The Law Council of Australia president, Arthur Moses, said criminalising social media companies and executives was a “serious step” that should not be legislated as a “knee-jerk reaction to a tragic event”, because of the potential for unintended consequences. In contrast to Australia’s knee-jerk legislation, the US House Judiciary Committee has organized a hearing on white nationalism and hate speech and their spread online, inviting social media platform executives and civil rights organizations to participate.
Holding companies accountable for reckless corporate behavior

Facebook has weathered scandal after scandal with impunity in recent years, given the lack of legislation in this space. It has repeatedly come under the public scanner for everything from data privacy breaches to disinformation campaigns and beyond. Adding to its ever-growing list of data scandals, yesterday CNN Business uncovered that hundreds of millions of Facebook records were stored on Amazon cloud servers in a way that allowed them to be downloaded by the public. Earlier this month, on 8th March, Sen. Warren proposed building strong antitrust laws and breaking up big tech companies like Amazon, Google, Facebook, and Apple. Yesterday, she introduced the Corporate Executive Accountability Act and reintroduced the “too big to fail” bill, a new piece of legislation that would make it easier to criminally charge company executives when Americans’ personal data is breached, among other negligent corporate behaviors. “When a criminal on the street steals money from your wallet, they go to jail. When small-business owners cheat their customers, they go to jail,” Warren wrote in a Washington Post op-ed published on Wednesday morning. “But when corporate executives at big companies oversee huge frauds that hurt tens of thousands of people, they often get to walk away with multimillion-dollar payouts.”

https://twitter.com/SenWarren/status/1113448794912382977
https://twitter.com/SenWarren/status/1113448583771185153

According to Warren, just one banker went to jail after the 2008 financial crisis. The CEO of Wells Fargo and his successor walked away from the megabank with multimillion-dollar pay packages after it was discovered that employees had created millions of fake accounts; the same goes for the Equifax CEO after its data breach. The new legislation Warren introduced would make it easier to hold corporate executives accountable for their companies’ wrongdoing.
Typically, it has been hard to prove a case against individual executives for turning a blind eye toward risky or questionable activity, because prosecutors have to prove intent: basically, that they meant to do it. This legislation would change that, Heather Slavkin Corzo, a senior fellow at the progressive nonprofit Americans for Financial Reform, told Vox. “It’s easier to show a lack of due care than it is to show the mental state of the individual at the time the action was committed,” she said. A summary of the legislation released by Warren’s office explains that it would “expand criminal liability to negligent executives of corporations with over $1 billion annual revenue” who:

Are found guilty, plead guilty, or enter into a deferred or non-prosecution agreement for any crime.
Are found liable or enter a settlement with any state or federal regulator for the violation of any civil law, if that violation affects the health, safety, finances, or personal data of 1% of the American population or 1% of the population of any state.
Are found liable or guilty of a second civil or criminal violation for a different activity while operating under a civil or criminal judgment of any court, a deferred prosecution or non-prosecution agreement, or a settlement with any state or federal agency.

Executives found guilty of these violations could get up to a year in jail, and a second violation could mean up to three years. The Corporate Executive Accountability Act is yet another push from Warren, who has focused much of her presidential campaign on holding corporations and their leaders responsible for both their market dominance and perceived corruption.

Elizabeth Warren wants to break up tech giants like Amazon, Google, Facebook, and Apple and build strong antitrust laws
Zuckerberg wants to set the agenda for tech regulation in yet another “digital gangster” move
Facebook under criminal investigations for data sharing deals: NYT report