Tech News - Security

470 Articles

OpenSSL 1.1.1 released with support for TLS 1.3, improved side channel security

Melisha Dsouza
12 Sep 2018
3 min read
Yesterday (11th of September), the OpenSSL team announced the stable release of OpenSSL 1.1.1. The product of two years of work and more than 500 commits, the release comes with many notable upgrades. The most important new feature in OpenSSL 1.1.1 is TLSv1.3, which was published last month as RFC 8446 by the Internet Engineering Task Force. Applications working with OpenSSL 1.1.0 can gain the benefits of TLSv1.3 simply by upgrading to the new OpenSSL version.

TLS 1.3 features

A reduction in the number of round trips required between the client and server, which improves connection times
0-RTT or "early data": the ability for clients to start sending encrypted data to the server straight away, without any round trips
Removal of various obsolete and insecure cryptographic algorithms, and encryption of more of the connection handshake, for improved security

For more details on TLS 1.3 read: Introducing TLS 1.3, the first major overhaul of the TLS protocol with improved security and speed

Updates in OpenSSL 1.1.1

A complete rewrite of the OpenSSL random number generator

The OpenSSL random number generator has been completely rewritten to introduce capabilities such as:

The default RAND method now utilizes an AES-CTR DRBG according to NIST standard SP 800-90Ar1
Support for multiple DRBG instances with seed chaining
Public and private DRBG instances
DRBG instances are made fork-safe
All global DRBG instances are kept on the secure heap if it is enabled
The public and private DRBG instances are per thread for lock-free operation

Support for various new cryptographic algorithms

The algorithms now supported by OpenSSL 1.1.1 include:

SHA3, SHA512/224 and SHA512/256
EdDSA (including Ed25519 and Ed448)
X448 (adding to the existing X25519 support in 1.1.0)
Multi-prime RSA
SM2, SM3, SM4
SipHash
ARIA (including TLS support)

Side-channel attack security improvements

The upgrade also introduces significant side-channel attack security improvements, support for the maximum fragment length TLS extension, and a new STORE module implementing a uniform, URI-based reader of stores containing keys, certificates, CRLs and numerous other objects.

OpenSSL 1.0.2 will receive full support only until the end of 2018 and security fixes only until the end of 2019. The team advises users of OpenSSL 1.0.2 to upgrade to OpenSSL 1.1.1 at the earliest. Head over to the OpenSSL blog for further details on the news.

GNU nano 3.0 released with faster file reads, new shortcuts and usability improvements
Haiku, the open source BeOS clone, to release in beta after 17 years of development
Ripgrep 0.10.0 released with PCRE2 and multi-line search support
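If you want to confirm that an upgraded stack is actually negotiating TLSv1.3, a minimal sketch using Python's standard ssl module (Python 3.7+ linked against OpenSSL 1.1.1 or newer) looks like the following; the host name is a placeholder for any TLS 1.3-capable server.

```python
import socket
import ssl

# Which OpenSSL build the interpreter is linked against; TLS 1.3 is only
# available when this reports OpenSSL 1.1.1 or newer.
print(ssl.OPENSSL_VERSION, "| TLS 1.3 available:", ssl.HAS_TLSv1_3)

context = ssl.create_default_context()
# Refuse anything older than TLS 1.3 so the handshake fails loudly if the
# server (or the local OpenSSL build) cannot negotiate it.
context.minimum_version = ssl.TLSVersion.TLSv1_3

host = "www.example.com"  # placeholder host
with socket.create_connection((host, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        # Expect 'TLSv1.3' and a 1.3 suite such as TLS_AES_256_GCM_SHA384
        print(tls.version(), tls.cipher())
```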


Will Facebook enforce its updated "remove, reduce, and inform" policy to curb fake news and manage problematic content?

Sugandha Lahoti
12 Apr 2019
6 min read
Facebook announced updates to its "remove, reduce, and inform" strategy to better control "problematic" content and fake news across Facebook, Instagram, and Messenger. No new tools or updates have been announced for WhatsApp. By problematic content, Facebook means content that is inappropriate but does not violate its community guidelines; the goal is to reduce its spread. Similarly, for Instagram, the company is reducing the spread of posts that are inappropriate but do not go against Instagram's Community Guidelines. These posts will not be recommended on the Explore and hashtag pages but can still appear in your feed if you follow the account that posts them. For instance, the company adds, "a sexually suggestive post will still appear in Feed but may not appear for the broader community in Explore or hashtag pages."

Facebook disclosed this news to a small group of journalists at an event organized at Menlo Park on Wednesday. "This strategy," Facebook said, "applies not only during critical times like elections but year-round."

Last week, WhatsApp introduced a 'Checkpoint Tipline' feature in India to verify messages during the election. "Launched by PROTO, an India-based media skilling startup, this tip line will help create a database of rumors to study misinformation during elections for Checkpoint," Facebook said in a statement. However, the tool turned out to be more for research purposes than for debunking fake news, as reported in an investigation led by BuzzFeed News. Per BuzzFeed, FAQs uploaded on PROTO's website suggest it is just meant for research purposes.

Increasing overall product integrity

Facebook has rolled out a Community Standards site where people can track the updates Facebook makes each month. All policy changes will be visible to the public, with specifics in some cases on why a certain change was made.

Facebook Groups admins will be held more accountable for Community Standards violations. Facebook will be looking at admin and moderator content violations in a group when deciding whether or not to take it down, and will treat admin-approved member posts as a stronger signal that the group violates Facebook's standards. This change has been released globally.

A new Group Quality feature will provide an overview of content removed and flagged for violations, and will also have a section for false news found in the group. This initiative is going to start globally in the coming weeks.

Facebook is also expanding its third-party collaborations for news flagging and fact-checking by adding The Associated Press to the third-party fact-checking program. AP will be debunking false and misleading video misinformation and Spanish-language content appearing on Facebook in the US. Surprisingly, fact-checking by AP has not been added globally. India is Facebook's largest market and is conducting its national elections over this month and the next. Current fact-checking agencies in India include AFP India, Boom, Fact Crescendo, Factly, India Today Fact Check, Newsmobile Fact Checker, and Vishvas.News. Facebook has made the admin and moderator policies as well as the Group Quality feature available globally, but not the AP inclusion.

Read also: Ahead of Indian elections, Facebook removes hundreds of assets spreading fake news and hate speech, but are they too late?

If a Facebook group is found to repeatedly share misinformation that has been rated false by independent fact-checkers, Facebook will reduce that group's overall News Feed distribution. Interestingly, Facebook has not suspended such groups, as it only removes or suspends content that "violates their policies", even if other content is deemed inappropriate.

A new "Click-Gap" signal will be incorporated into News Feed ranking so that people see less low-quality content in their News Feed. Per Facebook, "This new signal, Click-Gap, relies on the web graph, a conceptual 'map' of the internet in which domains with a lot of inbound and outbound links are at the center of the graph and domains with fewer inbound and outbound links are at the edges. Click-Gap looks for domains with a disproportionate number of outbound Facebook clicks compared to their place in the web graph. This can be a sign that the domain is succeeding on News Feed in a way that doesn't reflect the authority they've built outside it and is producing low-quality content."

Specifically for the Facebook and Messenger apps

The Context Button feature has been added to images to give people more background information about the publishers and articles they see in News Feed. Facebook is testing this feature for images that have been reviewed by third-party fact-checkers. Trust Indicators have also been added to the Context Button to surface a publication's fact-checking practices, ethics statements, corrections, ownership and funding, and editorial team. They are created by a consortium of news organizations known as the Trust Project. This feature started in March 2019 on English and Spanish content. Facebook will also be adding more information to the Page Quality tab, starting with information on a Page's status with respect to clickbait. Facebook will also allow people to remove their posts and comments from a group after they leave the group.

For Messenger

The Verified Badge is now officially a part of Messenger as a visible indicator of a verified account. There is also the inclusion of Messaging Settings and an updated Block feature for greater control. Messenger also has a Forward Indicator and Context Button to help prevent the spread of misinformation. The Forward Indicator lets someone know if a message they received was forwarded by the sender, while the Context Button provides more background on shared articles.

What's distressing is that there is a significant gap between policy updates and the actual implementation of Facebook's practices. Facebook continues to host Laura Loomer's inciting content on Instagram even after it was flagged, saying it does not violate their standards. Laura Loomer is an anti-Muslim conservative activist who published alarming posts that could potentially incite violence against Muslim congresswoman Ilhan Omar.

https://twitter.com/letsgomathias/status/1116461347259256832
https://twitter.com/justinhendrix/status/1116501676456910849

Facebook discussions with the EU resulted in changes to its terms of service for users.
Ahead of Indian elections, Facebook removes hundreds of assets spreading fake news and hate speech, but are they too late?
Ahead of EU 2019 elections, Facebook expands its Ad Library to provide advertising transparency in all active ads.
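Facebook has not published how Click-Gap is computed; the sketch below is purely illustrative of the idea quoted above, comparing a domain's share of Facebook-referred clicks against its share of the web graph's link mass. The scoring function and all numbers are hypothetical.

```python
def click_gap_score(fb_clicks: int, total_fb_clicks: int,
                    inbound_links: int, outbound_links: int,
                    total_links: int) -> float:
    """Hypothetical score: ratio of a domain's share of Facebook clicks to
    its share of the web graph's link mass. Values well above 1.0 suggest
    the domain is far more visible in News Feed than on the open web."""
    click_share = fb_clicks / max(total_fb_clicks, 1)
    graph_share = (inbound_links + outbound_links) / max(total_links, 1)
    return click_share / max(graph_share, 1e-9)

# Example: a domain drawing 2% of Facebook clicks but holding only 0.01%
# of web-graph links scores 200 and would be a candidate for down-ranking.
print(click_gap_score(fb_clicks=2_000_000, total_fb_clicks=100_000_000,
                      inbound_links=50, outbound_links=50,
                      total_links=1_000_000))
```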


Fortnite just fixed a bug that let attackers fully access user accounts, impersonate real players and buy V-Bucks

Amrata Joshi
17 Jan 2019
4 min read
Yesterday, Epic Games, the developer of the online video game Fortnite, acknowledged the existence of a bug in the game. The bug could let attackers access user accounts, impersonate real gamers, and purchase V-Bucks, Fortnite's in-game currency, with the victims' credit cards. It could also be used to eavesdrop on and record players' in-game conversations as well as background home conversations. Researchers at Check Point Research found the vulnerabilities two months ago and informed Epic Games, which then fixed them. In a statement to the Washington Post, Oded Vanunu, Check Point's head of products vulnerability research, said, "The chain of the vulnerabilities within the log-in flow provide[d] the hacker the ability to take full control of the account."

According to an analysis by market research company SuperData, Fortnite helped Epic Games lead the market for free-to-play games last year, earning $2.4 billion in revenue.

Ten months ago, a user shared his experience on Reddit of his account being hacked; the hacker spent all the money on his card buying V-Bucks. The post reads, "It appears my epic games account was hacked this past weekend, and they proceeded to spend all the money they could on v-bucks (which was all of it)." The victim also added, "I've never tried signing up for free v-bucks or anything of the sort. I think I've just used the same password email combo too many times and at some point it was leaked in some data breach." Despite a refund from the Epic team, the online gaming world doesn't look that safe, and the comments on the post make clear how worried users are. One user commented, "Well, after reading this I just deleted my PayPal from my Epic Games account. Definitely going to run with entering details each time instead of storing them." The thread also has comments suggesting two-factor verification, changing passwords frequently, and using prepaid cards for online games where possible.

In a statement to The Verge, Epic Games said, "We were made aware of the vulnerabilities and they were soon addressed. We thank Check Point for bringing this to our attention. As always, we encourage players to protect their accounts by not re-using passwords and using strong passwords, and not sharing account information with others."

Hackers deceive players in various ways, one of which is asking users to log into fake websites that promise to generate V-Bucks. These sites ask gamers to enter their game login credentials and personal information like name, address and credit card details, which then get misused. Usually, such scams are promoted via social media campaigns that claim gamers can "earn easy cash" or "make quick money".

Check Point's research, however, found a vulnerability in the game that didn't even require login details for attackers to strike. According to the researchers, an XSS (cross-site scripting) attack was responsible, which would just require users to click on a link sent to them by the attacker. As soon as the user clicked the link, their Fortnite username and password would immediately be captured by the attacker, without the need for them to enter any login credentials. According to the researchers, the bug would let hackers steal the authentication tokens that identify a gamer when he or she logs into the game through a third-party account such as Xbox Live or Facebook. After accessing a gamer's account in Fortnite with these security tokens, hackers could buy weapons, in-game currency, or even cosmetic accessories.

To know more about the bug in Fortnite, check out the report and YouTube video by Check Point.

Hyatt Hotels launches public bug bounty program with HackerOne
35-year-old vulnerabilities in SCP client discovered by F-Secure researcher
Fortnite server suffered a minor outage, Epic Games was quick to address the issue
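Check Point has not published Epic's exact fix; the generic defence against this class of bug (an unvalidated redirect that hands a single sign-on token to an attacker-controlled page) is to allow redirects only to an explicit allowlist of hosts. Below is a minimal, hypothetical sketch in Python; the host names are made up and this is not Epic's implementation.

```python
from urllib.parse import urlparse

# Hypothetical allowlist of hosts the login flow may redirect back to.
ALLOWED_REDIRECT_HOSTS = {"accounts.example-game.com", "www.example-game.com"}

def is_safe_redirect(url: str) -> bool:
    """Accept only absolute HTTPS URLs whose host is explicitly allowlisted,
    so an SSO token can never be bounced to an attacker-controlled page."""
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in ALLOWED_REDIRECT_HOSTS

print(is_safe_redirect("https://accounts.example-game.com/callback"))  # True
print(is_safe_redirect("https://evil.example.net/steal-token"))        # False
```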


China Telecom misdirected internet traffic, says Oracle report

Savia Lobo
06 Nov 2018
3 min read
The Naval War College published a paper titled "China's Maxim – Leave No Access Point Unexploited: The Hidden Story of China Telecom's BGP Hijacking" that contained a number of claims about purported efforts by the Chinese government to manipulate BGP routing in order to intercept internet traffic. Doug Madory, Director of Internet Analysis at Oracle's Internet Intelligence team, addresses the paper's claims in a recent blog post. He said, "I don't intend to address the paper's claims around the motivations of these actions. However, there is truth to the assertion that China Telecom (whether intentionally or not) has misdirected internet traffic (including out of the United States) in recent years. I know because I expended a great deal of effort to stop it in 2017."

SK Broadband, formerly known as Hanaro, experienced a brief routing leak on 9 December 2015 which lasted a little more than a minute. During the incident, SK's ASN, AS9318, announced over 300 Verizon routes that were picked up by OpenDNS's BGPstream service. This leak was announced exclusively through China Telecom (AS4134), one of SK Broadband's transit providers. Just minutes after that, AS9318 began transiting the same routes from Verizon APAC (AS703) to China Telecom (AS4134). China Telecom, in turn, began announcing them to international carriers such as Telia (AS1299), Tata (AS6453), GTT (AS3257) and Vodafone (AS1273), which resulted in AS paths such as:

… {1299, 6453, 3257, 1273} 4134 9318 703

Doug says, "Networks around the world who accepted these routes inadvertently sent traffic to Verizon APAC (AS703) through China Telecom (AS4134). Below is a traceroute mapping the path of internet traffic from London to address space belonging to the Australian government. Prior to this routing phenomenon, it never traversed China Telecom."

He added, "Over the course of several months last year, I alerted Verizon and other Tier 1 carriers of the situation and, ultimately, Telia and GTT (the biggest carriers of these routes) put filters in place to ensure they would no longer accept Verizon routes from China Telecom. That action reduced the footprint of these routes by 90% but couldn't prevent them from reaching those who were peering directly with China Telecom."

Focus of BGP hijack alerting

The common focus of BGP hijack alerting is looking for unexpected origins or immediate upstreams for routed address space. But traffic misdirection can occur at other parts of the AS path. In this scenario, Verizon APAC (AS703) likely established a settlement-free peering relationship with SK Broadband (AS9318), unaware that AS9318 would then send Verizon's routes exclusively on to China Telecom, which would in turn send them on to the global internet. Doug said, "We would classify this as a peer leak and the result was China Telecom's network being inserted into the inbound path of traffic to Verizon. The problematic routing decisions were occurring multiple AS hops from the origin, beyond its immediate upstream." Thus, he adds, the routes accepted from one's peers also need monitoring, which is a fairly rare practice. Blindly accepting routes from a peer enables the peer to insert itself into the path of your outbound traffic.

To know more about this news in detail, read Doug Madory's blog post.
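Monitoring for the kind of peer leak described above means inspecting every AS in a received path, not just the origin and the first upstream. Below is a minimal sketch, assuming you already collect AS paths (for example from a route collector feed) and maintain a list of ASNs that should never transit your prefixes; the ASNs are the ones from the incident and the helper is illustrative, not production routing code.

```python
# Illustrative ASNs from the incident described above:
# Verizon APAC = 703, SK Broadband = 9318, China Telecom = 4134.
OUR_ORIGIN = 703
UNEXPECTED_TRANSIT = {4134}  # ASNs never authorized to carry our routes

def flag_peer_leak(as_path):
    """Return True if a route for our prefix (rightmost ASN is the origin)
    is being carried through an AS we never authorized as transit."""
    if not as_path or as_path[-1] != OUR_ORIGIN:
        return False  # not our route
    return any(asn in UNEXPECTED_TRANSIT for asn in as_path[:-1])

# The leaked path seen during the incident: ... 1299 4134 9318 703
print(flag_peer_leak([1299, 4134, 9318, 703]))  # True: unauthorized transit
print(flag_peer_leak([1299, 703]))              # False: expected path
```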
US Supreme Court ends the net neutrality debate by rejecting the 2015 net neutrality repeal allowing the internet to be free and open again
Ex-Google CEO, Eric Schmidt, predicts an internet schism by 2028
Has the EU just ended the internet as we know it?


Jack Dorsey to testify explaining Twitter algorithms before the House Energy and Commerce Committee

Sugandha Lahoti
27 Aug 2018
2 min read
The House Energy and Commerce Committee announced that Twitter CEO Jack Dorsey will testify before the committee regarding Twitter algorithms and content monitoring. The hearing will take place on the afternoon of Wednesday, September 5, 2018.

https://twitter.com/HouseCommerce/status/1033099291185827841

A few days back, Jack Dorsey announced plans to rethink how Twitter works to combat fake news and data scandals. Last month, Twitter deleted 70 million fake accounts in an attempt to curb fake news and improve its algorithms. It has been constantly suspending accounts which are inauthentic, spammy or created via malicious automated bots.

Earlier this month, Apple, Facebook, and Spotify took action against Alex Jones. Initially, Twitter allowed Jones to continue using its service, but later imposed a seven-day "timeout" on him after he encouraged his followers to get their "battle rifles" ready against critics in the "mainstream media".

"Twitter is an incredibly powerful platform that can change the national conversation in the time it takes a tweet to go viral," said House Energy and Commerce Committee Chairman Greg Walden in a statement. "When decisions about data and content are made using opaque processes, the American people are right to raise concerns."

The hearing will focus on Twitter's algorithms and ask tough questions about how Twitter monitors and polices content. E&C expects Twitter to explain its content judgment calls and be transparent regarding the complex processes behind the social network's algorithms.

On Friday, U.S. President Donald Trump accused social media companies of silencing "millions of people" in an act of censorship, without offering evidence to support the claim.

https://twitter.com/realDonaldTrump/status/1032954224529817600

House Majority Leader Kevin McCarthy commented on the hearing, saying, "We all agree that transparency is the only way to fully restore Americans' trust in these important public platforms."

https://twitter.com/GOPLeader/status/1033118278728777729

Following Twitter, representatives from Google and Facebook are also scheduled to appear at next month's hearing.

Twitter takes down hundreds of fake accounts with ties to Russia and Iran
Twitter's disdain for third-party clients gets real
Time for Facebook, Twitter, and other social media to take responsibility or face regulation


Jack Dorsey discusses the rumored ‘edit tweet’ button and tells users to stop caring about followers

Natasha Mathur
13 Nov 2018
3 min read
Twitter CEO Jack Dorsey attended a town hall meeting at IIT Delhi yesterday, where he talked about his plans to add an "edit tweet" feature to the social media platform. He revealed he has mixed feelings about the feature, and said that he wants to ensure that it gets implemented the right way.

"You have to pay attention to what are the use cases for the edit button. A lot of people want the edit button because they want to quickly fix a mistake they made. Like a misspelling or tweeting the wrong URL. That's a lot more achievable than allowing people to edit any tweet all the way back in time," said Dorsey.

He also talked about the risks that could come with the "edit tweet" feature. It could, he pointed out, be used to change old tweets, leading to further misinformation and 'fake news'. Dorsey conceded, however, that an edit button remains high on users' wishlists.

https://twitter.com/KimKardashian/status/1006691477471125504

Dorsey elaborated on the conversations happening within Twitter about the feature. He said, "There's a bunch of things we could do to show a changelog and show how a tweet has been changed and we're looking at all this stuff. We've been considering edit for quite some time but we have to do it in the right way. We can't just rush it out. We can't make something which is distracting or takes anything away from the public record."

Dorsey says follower count is 'meaningless'

Dorsey also talked about the follower count feature, calling it "meaningless". According to the Twitter chief, people should stop focusing on the number of followers they have and instead focus on cultivating "meaningful conversations". Only last month, the news of Twitter planning to disable the 'like' button emerged, with precisely this reasoning. There appears to be a perception that the gamified elements of the platform are harming conversation. Dorsey admitted that "back then, we were not really thinking about all the dynamics that could ensue afterwards." Bemoaning the importance of followers, he argued that "what is more important is the number of meaningful conversations you're having on the platform. How many times do you receive a reply?"

What this means in reality remains to be seen. There will be many who still see Twitter's attitude to verification and abuse as the real issues to be tackled if the platform is to become a place for 'meaningful conversation'.

Twitter's CEO, Jack Dorsey's Senate Testimony: On Twitter algorithms, platform health, role in elections and more
Jack Dorsey to testify explaining Twitter algorithms before the House Energy and Commerce Committee
Twitter's trying to shed its skin to combat fake news and data scandals, says Jack Dorsey

Tim Cook talks about privacy, supports GDPR for USA at ICDPPC, ex-FB security chief calls him out

Prasad Ramesh
25 Oct 2018
4 min read
Apple CEO Tim Cook advocates data privacy and considers it a fundamental human right, a position he presents as representing the company's values. Shortly after his speech, Facebook's former security chief called him out over a series of tweets.

Cook on privacy

Cook gave a keynote speech at the ongoing International Conference of Data Protection and Privacy Commissioners (ICDPPC) in Brussels, Belgium. He expressed his ideas on data privacy and praised the EU's successful implementation of GDPR. The Apple CEO is in full support of a similar policy coming to the US: "We at Apple are in full support of a comprehensive federal privacy law in the United States." There are four essentials to such a law, he said:

The right to have personal data minimized
The right to knowledge
The right to access
The right to security

He talked about how data collection has become a kind of trade: "Today that trade has exploded into a data industrial complex. Our own information, from the everyday to the deeply personal, is being weaponized against us with military efficiency."

Cook did not explicitly mention any companies in his speech, but he was likely referring to the Facebook Cambridge Analytica scandal and Google being fined over privacy in the EU. There was also a recent Senate hearing on consumer data privacy. Cook added, "In the news almost every day, we bear witness to the harmful, even deadly, effects of these narrowed worldviews. We shouldn't sugarcoat the consequences. This is surveillance. And these stockpiles of personal data serve only to enrich the companies that collect them."

Cook on artificial intelligence

Cook believes that for artificial intelligence to be truly smart it should respect human values, including privacy. He went on to say that achieving great artificial intelligence systems with great privacy standards is not just a possibility but a responsibility, and that we should not lose our "humanity" in pursuit of artificial intelligence. In his words: "For artificial intelligence to be truly smart, it must respect human values, including privacy." How a system that makes decisions heavily based on data can avoid using people's data, or at least obscure it, is something to think about.

Ex-Facebook security chief on Cook's speech

Alex Stamos, ex-Facebook security chief and currently an adjunct professor, said that he agrees with almost everything Cook had to say. On Twitter, however, Stamos mentioned Apple blocking the download of VPNs and of encrypted messaging apps in China; these could have given Chinese citizens a way to connect to the internet and send private messages. Also, data on iCloud is supposed to be end-to-end encrypted, but Apple's Chinese partner Guizhou-Cloud Big Data stores iCloud data on Chinese government-run servers, which gives the government a possible route to user data. He tweeted: "We don't want the media to create an incentive structure that ignores treating Chinese citizens as less-deserving of privacy protections because a CEO is willing to bad-mouth the business model of their primary competitor, who uses advertising to subsidize cheaper devices."

https://twitter.com/alexstamos/status/1055192743033458688
https://twitter.com/alexstamos/status/1055192747970191360

Whether data can really be weaponized against us comes down to who has control over it. Viewed purely objectively, yes it can be. But it is the responsibility of the tech giants collecting, using and controlling that data to use it responsibly and keep it safe. It is understandable that a free-to-use model runs on user data, but companies should respect that data and the people from whom they collect it. There are also some efforts towards mobile OSes that promote privacy. In a world where everything is online, where we share and create profiles with personal details and use free services in exchange for our own data, complete privacy seems like a luxury.

Apple now allows U.S. users to download their personal data via its online privacy data portal
Privacy experts urge the Senate Commerce Committee for a strong federal privacy bill "that sets a floor, not a ceiling"
Chrome 69 privacy issues: automatic sign-ins and retained cookies; Chrome 70 to correct these


FireEye reports North Korean state-sponsored hacking group APT38 is targeting financial institutions

Savia Lobo
04 Oct 2018
3 min read
Yesterday, FireEye revealed a new group of hackers named APT38, a financially motivated, North Korean regime-backed group responsible for conducting destructive attacks against financial institutions, as well as for some of the world's largest cyber heists. FireEye Inc. is a cybersecurity firm that provides products and services to protect against advanced persistent threats and spear phishing. Earlier this year, FireEye helped Facebook find suspicious accounts linked to Russia and Iran on its platform and also alerted Google to election influence operations linked to Iranian groups.

Now FireEye researchers have released a special report titled APT38: Un-usual Suspects, to expose the methods used by the group. In the report, they said, "Based on observed activity, we judge that APT38's primary mission is targeting financial institutions and manipulating inter-bank financial systems to raise large sums of money for the North Korean regime." The researchers also state that the group has attempted to steal more than $1.1 billion and was responsible for some of the highest-profile attacks on financial institutions in the last few years. Some of the publicly reported attempted heists attributable to APT38 include:

Vietnam TP Bank in December 2015
Bangladesh Bank in February 2016
Far Eastern International Bank in Taiwan in October 2017
Bancomext in January 2018
Banco de Chile in May 2018

Sandra Joyce, FireEye's vice president of global intelligence, says, "The hallmark of this group is that it deploys destructive malware after stealing money from an organization, not only to cover its tracks, but [also] in order to distract defenders, complicate the incident response process, and gain time to get out the door."

Some details of the APT38 targeting

Since at least 2014, APT38 has conducted operations in more than 16 organizations in at least 11 countries. The total number of organizations targeted may be even higher considering the probable low incident reporting rate from affected organizations. The group is careful, calculated, and has demonstrated a desire to maintain access to a victim environment for as long as necessary to understand the network layout, required permissions, and system technologies needed to achieve its goals. On average, FireEye has observed APT38 remaining within a victim network for approximately 155 days, with the longest time within a compromised environment believed to be almost two years. In the publicly reported heists alone, APT38 has attempted to steal over $1.1 billion from financial institutions.

APT38 Attack Lifecycle

FireEye researchers believe that APT38's financial motivation, unique toolset, and the tactics, techniques and procedures (TTPs) observed during its carefully executed operations are distinct enough to be tracked separately from other North Korean cyber activity. The group shares characteristics with other operations known as 'Lazarus' and the actor FireEye calls TEMP.Hermit. On Tuesday, the U.S. government released details on malware it alleges Pyongyang's computer operatives have used to fraudulently withdraw money from ATMs in various countries. The unmasking of APT38 comes weeks after the Justice Department announced charges against Park Jin Hyok, a North Korean computer programmer, in connection with the 2014 hack of Sony Pictures and the 2017 WannaCry ransomware attack. According to Jacqueline O'Leary, a senior threat intelligence analyst at FireEye, Park has likely contributed to both APT38 and TEMP.Hermit operations. The North Korean government, however, has denied allegations that it sponsors such hacking.

Reddit posts an update to the FireEye's report on suspected Iranian influence operation
Facebook COO, Sandberg's Senate testimony
Google's Protect your Election program


Privacy International shares its findings on how popular Android apps send user data to Facebook without user consent

Natasha Mathur
02 Jan 2019
4 min read
Privacy International, a UK-registered charity that promotes the right to privacy, released a report last week showing how popular Android apps (Qibla Connect, Period Tracker Clue, Indeed, My Talking Tom, etc.) share user data with Facebook, even when the user has no Facebook account. The report raises questions about transparency and Facebook's use of app data.

As per the report, Facebook uses Facebook Business tools to routinely track users, non-users and logged-out users outside its platform. App developers use the Facebook Software Development Kit (SDK) to share data with Facebook. To track these data-sharing practices, Privacy International used mitmproxy, a free and open source interactive HTTPS proxy, to analyze the data sent to Facebook by 34 apps on Android. All of these apps were put to the test between August and December 2018, with the latest re-test done between the 3rd and 11th of December 2018.

Findings from the analysis

The report states that at least 61% of tested apps transferred data to Facebook the moment a user opened the app. It doesn't matter whether that person has a Facebook account or not, or whether they are logged into Facebook or not.

Privacy International found that the data that gets transmitted first is "events data". This kind of data tells Facebook that the Facebook SDK has been initialized, by transmitting events like "App installed" and "SDK Initialized". It tells Facebook that a specific app is being used by a user, every single time that user opens the app.

It was found that apps that automatically transfer data to Facebook share it together with a unique identifier, the Google advertising ID (AAID). These advertising IDs enable advertisers to link data about user behavior from different apps into a comprehensive profile, i.e. a clear and intimate picture of a person's activities, interests, behaviors, and routines. Such a profile can also reveal information about a person's health or religion. The analysis also revealed that event data such as "App installed", "SDK Initialized" and "Deactivate app" offers a detailed insight into the behavior of users and the apps they use.

Moreover, the report revealed that some of the apps send data to Facebook that is highly detailed and sometimes sensitive, often relating to people who are logged out of Facebook or who have no Facebook account at all. The report states that Facebook's Cookies Policy describes two ways in which people with no Facebook account can control Facebook's use of cookies to show them ads. Privacy International analyzed both and found that neither had much impact on the data-sharing process.

The report also notes that the default implementation of the Facebook SDK automatically transmits event data to Facebook, which has led many developers to file bug reports over concerns that the SDK shares user data without consent. After May 25th, 2018, when GDPR came into force, Facebook released a voluntary feature that enables developers to delay collecting logged events until they acquire user consent. Facebook responded to the report in an email saying, "Prior to our introduction of the 'delay' option, developers had the ability to disable transmission of automatic event logging data, except for a signal that the SDK had been initialized. Following the June change to our SDK, we also removed the signal that the SDK was initialized for developers that disabled automatic event logging."

However, Privacy International notes that before this voluntary feature was released, many apps using the Facebook SDK in the Android ecosystem could not prevent or delay the SDK from automatically collecting and sharing the fact that the SDK had been initialized. This data, in turn, informs Facebook that a user is using a particular app, when they use it and for how long. "Without any further transparency from Facebook, it is impossible to know for certain how the data that we have described in this report is being used. Our findings also raise a number of legal questions," says Privacy International.

For more information, check out the official Privacy International report.

ProPublica shares learnings of its Facebook Political Ad Collector project
Facebook halted its project 'Common Ground' after Joel Kaplan, VP, public policy, raised concerns over potential bias allegations
NYT says Facebook has been disclosing personal data to Amazon, Microsoft, Apple and other tech giants; Facebook denies claims with obfuscating press release
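Privacy International describes its methodology but the report does not include its scripts; the snippet below is a minimal sketch of the kind of mitmproxy addon that could log which requests a device sends to Facebook-owned endpoints while an app is exercised through the proxy. It assumes the test device trusts the mitmproxy CA and routes its traffic through the proxy, and the host list is illustrative.

```python
# fb_traffic_log.py -- run with: mitmdump -s fb_traffic_log.py
from mitmproxy import http

# Hosts of interest; graph.facebook.com is where Graph API traffic from the
# Facebook SDK typically goes, but the list here is only illustrative.
WATCHED_HOSTS = ("facebook.com", "graph.facebook.com")

def request(flow: http.HTTPFlow) -> None:
    """Called by mitmproxy for every intercepted HTTP(S) request."""
    host = flow.request.pretty_host
    if any(host == h or host.endswith("." + h) for h in WATCHED_HOSTS):
        body = flow.request.content or b""
        print(f"{flow.request.method} {host}{flow.request.path} "
              f"({len(body)} bytes of body)")
```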


The State of Mozilla 2017 report focuses on internet health and user privacy

Prasad Ramesh
29 Nov 2018
4 min read
The State of Mozilla 2017 report is out and covers the areas where Mozilla has made an impact, along with its activities in 2017-18. We look at some of the important details from the report.

Towards building a healthier internet

In the last two years, there have been scandals and news stories around big tech companies relating to data misuse, privacy violations and more, including the Cambridge Analytica scandal, Google tracking, and many others. Public and political trust in large tech companies has eroded following revelations of how some of these companies operate and treat user data. The Mozilla report says that the focus now is on how to limit these tech platforms and encourage them to adopt data regulation protocols. Mozilla seeks to fill the void where there is a lack of people who can make the right decisions towards building a better internet. The State of Mozilla 2017 report reads: "When the United States Federal Communications Commission attacks net neutrality or the Indian government undermines privacy with Aadhaar, we see people around the world—including hundreds of thousands of members of the Mozilla community—stand up and say, Things should not work this way."

Read also: Is Mozilla the most progressive tech organization on the planet right now?

The Mozilla Foundation and the Mozilla Corporation

Mozilla was founded in 1998 as an open source project, back when open source was truly open source, free of things like the Commons Clause. Mozilla comprises two organizations: the Mozilla Foundation, which supports emerging leaders and mobilizes citizens towards better health of the internet, and the Mozilla Corporation, a wholly owned subsidiary of the Foundation that creates Mozilla products and advances public policy.

The Mozilla Foundation

Mozilla invests in people and organizations with a common vision beyond building products. Another part of the State of Mozilla 2017 report reads: "Our core program areas work together to bring the most effective ideas forward, quickly and where they have the most impact. As a result of our work, internet users see a change in the products they use and the policies that govern them." Every year the Mozilla Foundation creates the open source Internet Health Report to shed light on what's been happening on the internet, specifically on its wellbeing. Their research draws on data from multiple sources in areas like privacy and security, open innovation, decentralization, web literacy, and digital inclusion. Per the health report, Mozilla spent close to a million dollars in 2017 on its agenda-setting work. Mozilla has also mobilized conscious internet users with campaigns around net neutrality in the US, India's Aadhaar biometric system, copyright reform in the EU, and more. It has also invested in connecting internet health leaders and worked on data and privacy issues across the globe, investing about $24M in this work in 2017.

The Mozilla Corporation

Mozilla says that to take charge of changing internet culture it needs to do more than build products. Following Firefox Quantum's success, its focus is to better enable people to take control of their online life. Another part of the State of Mozilla 2017 report highlights this vision, stating that "Over the coming years, we will become the leading provider of user agency and online privacy by developing long-term trusted relationships with 'conscious choosers' with a focus on helping people navigate their connected lives."

Mozilla pulled its ads from Facebook after the Cambridge Analytica scandal

After learning about the Cambridge Analytica incident and guided by the Mozilla Manifesto, Mozilla decided to pull its ads from Facebook. The Manifesto says, "Individuals' security and privacy on the Internet are fundamental and must not be treated as optional." After sending a message with this action, Mozilla also launched Facebook Container, a version of multi-account containers that prevents Facebook from tracking its users when they are not on the platform. Mozilla says that everyone has a right to keep their private information private and to control their own web experiences.

You can view the full State of Mozilla 2017 report at the Mozilla website.

Mozilla shares plans to bring desktop applications, games to WebAssembly and make deeper inroads for the future web
Mozilla criticizes EU's terrorist content regulation proposal, says it's a threat to user rights
Is Mozilla the most progressive tech organization on the planet right now?

GitHub increases reward payouts for its bug bounty program

Savia Lobo
20 Feb 2019
2 min read
GitHub announced yesterday that it is expanding its bug bounty program by adding more services to the scope, increasing the reward amounts offered to vulnerability researchers, and adding Legal Safe Harbor terms to its updated policy. All products and services under the github.com domain, including GitHub Education, Enterprise Cloud, Learning Lab, Jobs, the Desktop application, githubapp.com, and github.net, are part of the bug bounty scope.

Launched in 2014, GitHub's Security Bug Bounty program paid out $165,000 to researchers through the public bug bounty program in 2018. GitHub's researcher grants, private bug bounty programs, and a live-hacking event helped GitHub reach a milestone of $250,000 paid out to researchers last year.

GitHub's new Legal Safe Harbor terms cover three main sources of legal risk:

Protecting users' research activity, and authorizing it even if they cross the line for the purpose of research
Protecting researchers in the bug bounty program from legal exposure via third parties; unless GitHub gets the user's written permission, it will not share identifying information with a third party
Preventing researchers in the bug bounty program from being hit with site violations when they've broken the rules in the spirit of research

According to the GitHub blog post, "You won't be violating our site terms if it's specifically for bounty research. For example, if your in-scope research includes reverse engineering, you can safely disregard the GitHub Enterprise Agreement's restrictions on reverse engineering. Our safe harbor now provides a limited waiver for parts of other site terms and policies to protect researchers from legal risk from DMCA anti-circumvention rules or other contract terms that could otherwise prohibit things a researcher might need to do, like reverse engineering or de-obfuscating code."

As for the reward schedule, GitHub says it has increased the reward amounts at all levels:

Critical: $20,000–$30,000+
High: $10,000–$20,000
Medium: $4,000–$10,000
Low: $617–$2,000

"We no longer have a maximum reward amount for critical vulnerabilities. Although we've listed $30,000 as a guideline amount for critical vulnerabilities, we're reserving the right to reward significantly more for truly cutting-edge research," the GitHub blog states.

Switzerland launches a bug bounty program 'Public Intrusion test' to find vulnerabilities in its E-Voting systems
Hyatt Hotels launches public bug bounty program with HackerOne
EU to sponsor bug bounty programs for 14 open source projects from January 2019


Facebook pays users $20/month to install a ‘Facebook Research’ VPN that spies on their phone and web activities, TechCrunch reports

Savia Lobo
30 Jan 2019
4 min read
In a recent report, TechCrunch revealed that Facebook has been spying on users' data and internet habits by paying users aged between 13 and 35 $20 a month, plus referral fees, to install a 'Facebook Research' VPN via beta-testing services such as Applause, BetaBound, and uTest. The VPN allows Facebook to keep an eye on users' web and phone activity. The setup is similar to Facebook's Onavo Protect app, which was banned by Apple in June 2018 and fully discarded in August. Launched in 2016, the Facebook Research project was renamed Project Atlas in mid-2018 after the backlash against Onavo. One of the companies, uTest, was also running ads for a "paid social media research study" on Instagram and Snapchat, tweeted one of the contributing TechCrunch editors on the report.

https://twitter.com/JoshConstine/status/1090395755880173569

TechCrunch has since added that "Facebook now tells TechCrunch it will shut down the iOS version of its Research app in the wake of our report."

According to the TechCrunch report, "Facebook sidesteps the App Store and rewards teenagers and adults to download the Research app and give it root access to network traffic in what may be a violation of Apple policy so the social network can decrypt and analyze their phone activity."

Guardian Mobile Firewall's security expert Will Strafach told TechCrunch, "If Facebook makes full use of the level of access they are given by asking users to install the Certificate, they will have the ability to continuously collect the following types of data: private messages in social media apps, chats in instant messaging apps – including photos/videos sent to others, emails, web searches, web browsing activity, and even ongoing location information by tapping into the feeds of any location-tracking apps you may have installed."

As part of the study, users were even asked to provide screenshots of their Amazon purchases. For underage users, Applause requires parental permission, and Facebook is mentioned in the consent agreement. The agreement also includes this line about the company tracking your children: "There are no known risks associated with this project, however, you acknowledge that the inherent nature of the project involves the tracking of personal information via your child's use of Apps. You will be compensated by Applause for your child's participation."

As highlighted by TechCrunch, the Facebook Research app sent data to an address affiliated with Onavo.

https://twitter.com/chronic/status/1090397698803621889

A Facebook spokesperson wrote that the program is being misrepresented by TechCrunch and that there was never a lack of transparency surrounding it. In response, Josh Constine, editor at TechCrunch, tweeted, "Here is my rebuttal to Facebook's statement regarding the characterization of our story. We stand by our report, and have a fully updated version here," providing an updated report link followed by a snippet from the report.

https://twitter.com/JoshConstine/status/1090519765452353536
https://twitter.com/matthewstoller/status/1090605150673215494

According to Will Strafach, who did the actual app research for TechCrunch, "they didn't even bother to change the function names, the selector names, or even the 'ONV' class prefix. it's literally all just Onavo code with a different UI. Also, the Root Certificate they have users install so that they can access any TLS-encrypted traffic they'd like."

According to a user on Hacker News, "By using a VPN they forced all traffic to go through their servers, and with the root certificate, they are able to monitor and gather data from every single app and website users visit/use. Which would include medical apps, chat apps, Maps/gps apps and even core operating system apps. So for users using Facebook's VPN they are effectively able to mine data which actually belongs to other apps/websites."

Another user writes, "How is this not in violation of most wiretapping laws? Facebook is not the common carrier in these cases. Both parties of conversations with teens are not consenting to the wiretapping, which is not allowed in many US states. I'm not sure teenage consent is considered 'consent' and the parents aren't a party to the conversations Facebook is wiretapping. Facebook is both paying people and recording the electronic communications."

To know more about this news, head over to TechCrunch's complete report.

Facebook hires top EFF lawyer and Facebook critic as WhatsApp privacy policy manager
Facebook has blocked 3rd party ad monitoring plugin tools from the likes of ProPublica and Mozilla that let users see how they're being targeted by advertisers
Facebook releases a draft charter introducing a new content review board that would filter what goes online
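The root certificate is what lets the Research VPN decrypt the TLS traffic it proxies. Apps that pin the certificate they expect can detect this kind of interception. Below is a minimal sketch in Python; the host and pinned fingerprint are placeholders recorded out of band, and this is a generic defensive pattern rather than something Facebook or the affected apps are known to do.

```python
import hashlib
import ssl

HOST = "api.example.com"                               # placeholder service
PINNED_SHA256 = "replace-with-known-good-fingerprint"  # recorded out of band

def presented_cert_fingerprint(host: str, port: int = 443) -> str:
    """Hash the certificate actually presented on the wire."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

fingerprint = presented_cert_fingerprint(HOST)
if fingerprint != PINNED_SHA256:
    # A proxy re-signing traffic with its own root CA presents a different
    # certificate, so the pinned fingerprint no longer matches.
    print("Possible TLS interception:", fingerprint)
```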


Researchers design 'AnonPrint' for safer QR-code mobile payment: ACSAC 2018 Conference

Melisha Dsouza
07 Jan 2019
7 min read
Last month, researchers from the USA, China, and Hong Kong jointly published a paper titled 'Beware of Your Screen: Anonymous Fingerprinting of Device Screens for Off-line Protection'. The paper, presented at the 34th Annual Computer Security Applications Conference, describes a new technique to enhance the security of QR-based payment without undermining the payer's privacy. The technique takes advantage of the unique luminance unevenness of a payer's screen introduced by the imperfect manufacturing process. The paper presents a way to ensure that even when the payer's digital wallet has been compromised, an unauthorized payment cannot succeed. It also considers the privacy issues that would arise if the screen's features were naively used to authenticate the payer, since vendors could misuse them to link a person's different purchases together. To tackle this, the researchers present 'AnonPrint', which obfuscates the phone screen during each payment transaction.

QR-code mobile payment systems are used by almost everyone today, including banks, service providers, and other commercial organizations. These payment systems are deployed purely in software, without any hardware support. The paper highlights that in the absence of hardware support, a user's wallet "can be vulnerable to an OS-level adversary", which could be misused to generate a user's payment tokens. To counter this adversary, the researchers demonstrate a second-factor authentication mechanism based on the physical features of a phone's screen. The research takes advantage of the varied luminance levels of the pixels on the screen (which arise from flaws in the manufacturing process), which can be used to uniquely characterize the screen. An advantage of this method is that, since the adversary cannot observe the physical features of the screen, the physical fingerprint cannot be stolen even when the OS is fully compromised. This second factor also holds up when the secret key for generating QR codes is stolen or when a user's phone has been fully compromised by the adversary.

How is anonymous screen fingerprinting carried out?

To enable service providers to use the screen to enhance security while preserving user privacy, the researchers designed AnonPrint. AnonPrint randomly generates visual one-time masks, pixel patterns with dots set to various brightness levels, to obfuscate the distinguishable features of a user's screen. The technique randomly creates a smooth textured pattern for each transaction (a pattern also known to the provider) and displays it as the background of the QR code to disarrange the brightness of the screen, in line with the screen's real-world physical properties: neighboring dots are correlated and brightness levels change smoothly. This hides the physical properties of the screen, and the party that knows the mask, such as the payment service provider, can verify whether the features collected from the protected screen belong to the authorized device.

Here is an overview of how the system works. First, the user submits the original screen fingerprint of their device to the payment provider when they open an account. The wallet app is modified to synchronize a secret random seed with the provider. This seed can be derived by hashing the time of the payment together with a shared secret using a cryptographic hash function (e.g., SHA-256). The seed then bootstraps a pseudo-random number generator (PRNG) each time the wallet app needs to provide each party a sequence of random numbers for mask generation. The mask is displayed as the background of the QR payment token, from which the POS scanner extracts the obfuscated screen fingerprint in addition to decoding the QR code, finally passing the information to the payment provider. The provider retrieves the shared secret and the original screen fingerprint using the claimed ID. Next, the same mask used by the payer is reconstructed and used together with the original fingerprint as inputs for synthesizing a new obfuscated fingerprint. This is compared with the fingerprint from the payer's screen, and the transaction can be approved if the similarity of the two prints is above a certain threshold and other security checks are completed.

How does AnonPrint obfuscate the screen?

AnonPrint creates a 'mask' to hide the screen's hardware fingerprint for every payment transaction. Such a mask is automatically generated by the digital wallet app, seeding a PRNG with a random number synchronized with the payment service provider. To obfuscate the hardware fingerprint while maintaining a realistic-looking screen, the researchers performed the following steps:

(1) Random zone selection: they produced a 180*108 pure white image (with all pixels set to 255) as the background and randomly selected 20 mutually disjoint zones from it, each of size 16*16.
(2) Dot darkening: from each zone, they randomly chose 3 pixels and set their pixel value to a random number between 0 and 100.
(3) Smoothing: for every zone, AnonPrint blurs it using Gaussian smoothing, which "smoothes out" the dark color of the selected pixels into their neighboring pixels.
(4) Resizing: the mask image is resized and scaled to a 1800*1080 matrix whose values range from 220 to 255. The size of this image is identical to the original fingerprint.

Each user registers with the payment provider using an image of their unprotected screen with all pixels set to the maximum gray-scale. During the payment, an image of the masked screen is used to authenticate the payer. This is done on the payment service provider's side by reconstructing the mask using the shared secret and then obfuscating the registered fingerprint for comparison with the image from the vendor.
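The following is a condensed sketch of those four mask-generation steps in Python, not the authors' code. It assumes numpy and scipy are available, derives the PRNG seed from the shared secret and a timestamp as the paper outlines, and uses an arbitrary blur strength and a grid-based zone selection since the paper's exact parameters are not given here.

```python
import hashlib
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def anonprint_mask(shared_secret: bytes, timestamp: str) -> np.ndarray:
    # Seed a PRNG that both the wallet and the provider can reproduce.
    digest = hashlib.sha256(shared_secret + timestamp.encode()).digest()
    rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))

    mask = np.full((108, 180), 255.0)          # small all-white canvas
    # Pick 20 disjoint 16x16 zones on a coarse grid (a simplification of
    # the paper's "mutually disjoint" selection).
    cells = [(r, c) for r in range(0, 96, 16) for c in range(0, 176, 16)]
    for idx in rng.choice(len(cells), size=20, replace=False):
        r, c = cells[idx]
        zone = mask[r:r + 16, c:c + 16]
        # Darken 3 random dots to a value between 0 and 100 ...
        for _ in range(3):
            zone[rng.integers(16), rng.integers(16)] = rng.integers(0, 101)
        # ... then blur so the dark spots bleed smoothly into neighbours
        # (sigma is an arbitrary choice for illustration).
        mask[r:r + 16, c:c + 16] = gaussian_filter(zone, sigma=2)

    # Upscale to 1800*1080 and compress values into the 220-255 band so the
    # mask looks like gentle brightness variation rather than visible dots.
    big = zoom(mask, 10, order=1)
    return 220 + (big - big.min()) / (big.max() - big.min() + 1e-9) * 35
```

The provider, knowing the same shared secret and timestamp, can regenerate the identical mask and apply it to the registered fingerprint before comparing it with the image captured by the vendor's scanner.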
Results and Discussion

The researchers conducted experiments on a collection of 100 smartphones, including iPhone, Samsung, and many other models. All 100 phones were used to evaluate how effectively the screen fingerprint identifies a device, while 50 of them were used to separately evaluate the anonymity protection and the effectiveness of AnonPrint. An iPhone 6s was used to capture the images for screen fingerprinting. The team implemented an Android application that displays the QR code and obfuscates the screen using masks derived from the given random numbers for anonymous payment. To collect the fingerprints from each device, they displayed a QR code without obfuscation and then showed 5 different masks on the screen with the same code, each time taking a picture of the screen and using the image to extract fingerprints.

The experiments concluded that for 88.75% of transactions, vendors can accurately identify other transactions from the same customer simply by looking at the features of their screens. They also showed that AnonPrint indeed breaks the vendors' ability to link screen fingerprints, and that the overhead AnonPrint introduces (only 50ms) is small for offline payment. Fingerprint verification takes 2.4 seconds on average to complete.

The research results look promising, and it will be interesting to see potential implementations in today's QR-payment systems. You can head over to the paper for a detailed explanation of every experiment conducted to check fingerprint accuracy, anonymity protection, fingerprint verification, and much more.

NeurIPS 2018 paper: DeepMind researchers explore autoregressive discrete autoencoders (ADAs) to model music in raw audio at scale

Cyber security researcher withdraws public talk on hacking Apple's Face ID from Black Hat Conference 2019: Reuters report

Stanford researchers introduce DeepSolar, a deep learning framework that mapped every solar panel in the US

Facebook hires top EFF lawyer and Facebook critic as WhatsApp privacy policy manager

Melisha Dsouza
30 Jan 2019
2 min read
Nate Cardozo, former top legal counsel at the Electronic Frontier Foundation (EFF), has been hired to take on a privacy role at WhatsApp. Cardozo, who until recently worked as a Senior Information Security Counsel with the EFF, is also known as a prominent Facebook critic. He has already worked with private companies on privacy policies that protect user rights; now, joining the Facebook team, he says that "the privacy team I'll be joining knows me well, and knows exactly how I feel about tech policy, privacy, and encrypted messaging. And that's who they want at managing privacy at WhatsApp. I couldn't pass up that opportunity."

In 2015, Cardozo wrote a blog post questioning Facebook's ethics, alleging that Facebook's data model "depends on our collective confusion and apathy about privacy" to keep tracking users' behaviour on the social media site.

This can also be seen as a strategic move by Facebook, which hired Cardozo immediately after the attention it drew to its intention of integrating WhatsApp, Instagram, and Facebook Messenger. Security- and privacy-minded observers expressed concerns about what else the integration could mean for user security. The EFF has also been critical of Facebook over WhatsApp user privacy and over secretly released Facebook documents that outlined illicit practices at the company.

While hiring such a prominent EFF lawyer may put several privacy-minded people at ease, users on Hacker News are speculating whether the move is a calculated one, 'removing a critic and making him an ally' and taking him out of the position of constantly stirring controversy.

Alongside Nate, Facebook has also hired Robyn Greene from the Open Technology Institute to focus on law enforcement access and data protection at Facebook. She announced the news on Twitter yesterday.

https://twitter.com/Robyn_Greene/status/1090343419237617664

You can head over to CNBC for more insights on this news.

GDPR complaint claims Google and IAB leaked 'highly intimate data' of web users for behavioral advertising

Biometric Information Privacy Act: It is now illegal for Amazon, Facebook or Apple to collect your biometric data without consent in Illinois

Advocacy groups push FTC to fine Facebook and break it up for repeatedly violating the consent order and unfair business practices

The much-loved reverse chronological Twitter timeline is back as Twitter attempts to break the ‘filter bubble’

Natasha Mathur
19 Sep 2018
3 min read
Twitter CEO Jack Dorsey announced on Monday that Twitter is bringing back the much-loved, original reverse chronological timeline for the Twitter news feed. You can enable it by changing a setting.

https://twitter.com/jack/status/1042038232647647232

Twitter is also working on giving users a way to easily toggle between the two themes, that is, a timeline of the tweets most relevant to you and a timeline of all the latest tweets. To switch to the reverse chronological timeline, go to Settings on Twitter, select the privacy option, go to the content section, and uncheck the box that says "Timeline - show the best tweets first". Twitter has also removed the 'in case you missed it' section from the settings.

https://twitter.com/TwitterSupport/status/1041838957896450048

The reverse chronological theme was Twitter's original way of presenting content, long before the 'top tweets' algorithm became the default option back in 2016. When Twitter announced that it was changing its timeline so that it would no longer show tweets in chronological order, a lot of people were unhappy. In fact, people despised the new theme so much that the hashtag #RIPTwitter started trending. With its 2016 algorithm, Twitter focused mainly on bringing the top, most talked-about tweets to light. But a majority of Twitter users felt differently. People enjoyed the simpler reverse-chronological Twitter, where they could get real-time updates from close friends, family, celebrities, and others, rather than a Twitter that shows only the most relevant tweets stacked together.

Twitter defended the new approach, tweeting yesterday that "We've learned that when showing the best Tweets first, people find Twitter more relevant and useful. However, we've heard feedback from people who at times prefer to see the most recent Tweets".

Twitter has also been making a lot of changes recently, after CEO Jack Dorsey testified before the House Energy and Commerce Committee regarding Twitter's algorithms and content monitoring. Twitter says it wants people to have more control over their timeline.

https://twitter.com/TwitterSupport/status/1042155714205016064

Public reaction to the change has been largely positive, with a lot of people criticizing the company's top-tweets timeline.

https://twitter.com/_Case/status/1041841407739260928
https://twitter.com/terryb600/status/1041847173770620929
https://twitter.com/smithant/status/1041884671921930240
https://twitter.com/alliecoyne/status/1041850426159583232
https://twitter.com/fizzixrat/status/1041881429477654528

One common pattern observed is that people brought up Facebook a lot while discussing the change.

https://twitter.com/_Case/status/1042068118997270528
https://twitter.com/Depoetic/status/1041842498459578369
https://twitter.com/schachin/status/1041925075698503680

Twitter seems to have dodged a bullet by giving back to its users what they truly want.

Twitter's trying to shed its skin to combat fake news and data scandals, says Jack Dorsey

Facebook, Twitter open up at Senate Intelligence hearing, committee does 'homework' this time

Jack Dorsey to testify explaining Twitter algorithms before the House Energy and Commerce Committee