
Tech News - Cybersecurity

373 Articles
Mandrill email API outage unresolved; leaving users frustrated

Savia Lobo
06 Feb 2019
2 min read
At the beginning of this week, Mandrill, a transactional email API for MailChimp users, experienced an outage during which users were able to send but unable to receive emails. The Mandrill team tweeted that they were also seeing ongoing errors with scheduled mail and webhooks and would resolve the issue soon.

https://twitter.com/mandrillapp/status/1092611488982945793

Sebastian Lauwers, VP of Engineering at Dixa, a customer service software company, tweeted that the issue was taking too long to resolve, asking why Mandrill needed nearly 23 hours to sort it out.

https://twitter.com/teotwaki/status/1092624972252618754

Today, a user with the username GuyPostington posted on Hacker News an email received from Mandrill. The email explains the reason for Mandrill's outage and how the team will address it. Mandrill uses a sharded Postgres setup as one of its main datastores. According to the email, "On Sunday, February 3, at 10:30 pm EST, 1 of our 5 physical Postgres instances saw a significant spike in writes. The spike in writes triggered a Transaction ID Wraparound issue. When this occurs, database activity is completely halted. The database sets itself in read-only mode until offline maintenance (known as vacuuming) can occur." They have also tweeted the same.

The team further mentioned that because the database is so large, the vacuum process takes a significant amount of time and resources, and there is no clear way to track its progress. Addressing the issue, they write, "We don't have an estimated time for when the vacuum process and cleanup work will be complete. While we have a parallel set of tasks going to try to get the database back in working order, these efforts are also slow and difficult with a database of this size. We're trying everything we can to finish this process as quickly as possible, but this could take several days, or longer." The email also states that once the outage is resolved, Mandrill plans to offer refunds to all affected users.

To know about this news in detail, visit Mandrill's Tweet thread.

Microsoft Cloud services' DNS outage results in deleting several Microsoft Azure database records

Internet Outage or Internet Manipulation? New America lists government interference, DDoS attacks as top reasons for Internet Outages across the world

Outage in the Microsoft 365 and Gmail made users unable to log into their accounts
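The "Transaction ID Wraparound" the email describes comes from Postgres tracking row visibility with a 32-bit transaction counter: once the oldest unfrozen transaction ID gets close to 2^31 transactions old, the database stops accepting writes until a vacuum freezes old rows. A minimal sketch of that modular arithmetic (the constants mirror Postgres's XID space, but the function names and safety margin are illustrative assumptions, not Mandrill's setup):

```python
# Illustrative sketch of PostgreSQL's 32-bit transaction ID (XID)
# wraparound arithmetic; the helper names and the safety margin are
# simplified assumptions, not Mandrill's actual configuration.

XID_SPACE = 2**32          # XIDs live in a 32-bit circular space
WRAP_LIMIT = 2**31         # an XID more than ~2.1B transactions old "wraps"

def xid_age(current_xid: int, frozen_xid: int) -> int:
    """Circular distance from the oldest unfrozen XID to the current one."""
    return (current_xid - frozen_xid) % XID_SPACE

def must_stop_for_vacuum(current_xid: int, frozen_xid: int,
                         safety_margin: int = 1_000_000) -> bool:
    """Postgres forces read-only mode before the age can reach 2^31."""
    return xid_age(current_xid, frozen_xid) >= WRAP_LIMIT - safety_margin

# Normal operation: the oldest unfrozen transaction is only 4M XIDs old.
print(must_stop_for_vacuum(current_xid=5_000_000, frozen_xid=1_000_000))  # False
# After a huge write spike burns through XIDs, writes must halt until
# an (offline) vacuum freezes the old rows.
print(must_stop_for_vacuum(current_xid=2_147_500_000, frozen_xid=0))      # True
```

In a live system the equivalent check is querying `age(datfrozenxid)` per database, which is why a vacuum on a shard this large can take days with no easy way to track progress.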

NSA’s EternalBlue leak leads to 459% rise in illicit crypto mining, Cyber Threat Alliance report

Melisha Dsouza
20 Sep 2018
3 min read
"Illicit mining is the 'canary in the coal mine' of cybersecurity threats. If illicit cryptocurrency mining is taking place on your network, then you most likely have worse problems and we should consider the future of illicit mining as a strategic threat." - Neil Jenkins, Chief Analytic Officer for the Cyber Threat Alliance

A leaked software tool from the US National Security Agency has led to a surge in illicit cryptocurrency mining, researchers said on Wednesday. The report released by the Cyber Threat Alliance, an association of cybersecurity firms and experts, states that it detected a 459 percent increase over the past year in illicit crypto mining, a technique hackers use to steal the processing power of computers to create cryptocurrency.

One reason for the sharp rise in illicit mining was last year's leak, by a group of hackers known as the Shadow Brokers, of EternalBlue, a software exploit developed by the NSA to target vulnerabilities in the Windows operating system. Countless organizations are still being victimized by this exploit, even though a patch for EternalBlue has been available for 18 months. Incidentally, the rise in hacking coincides with the growing use of virtual currencies such as Bitcoin, Ethereum, and Monero. Hackers have discovered ways to tap into the processing power of unsuspecting computer users to illicitly generate currency. Neil Jenkins said in a blog post that the rise in crypto-mining malware highlights "broader cybersecurity threats". Crypto mining, once non-existent, is now on virtually every top firm's threat list.

The report further notes that 85 percent of illicit cryptocurrency malware mines Monero, while 8 percent mines Bitcoin. Even though Bitcoin is better known than Monero, the latter offers more privacy and anonymity, which helps cybercriminals hide both their mining activities and their transactions using the currency. Transaction addresses and values are obscured in Monero by default, making it incredibly difficult for investigators to trace a cybercrime footprint.

The blog advises network defenders to make illicit mining harder for cybercriminals by improving cyber hygiene. Detection of crypto mining, and the incident response plans for it, should also be improved. Head over to techxplore for more insights on this news.

NSA researchers present security improvements for Zephyr and Fuchsia at Linux Security Summit 2018

Top 15 Cryptocurrency Trading Bots

Cryptojacking is a growing cybersecurity threat, report warns

ICANN calls for DNSSEC across unsecured domain names amidst increasing malicious activity in the DNS infrastructure

Amrata Joshi
25 Feb 2019
3 min read
Last week, the Internet Corporation for Assigned Names and Numbers (ICANN) called for the full deployment of the Domain Name System Security Extensions (DNSSEC) across all unsecured domain names. ICANN took this decision because of increasing reports of malicious activity targeting the DNS infrastructure. According to ICANN, there is an ongoing and significant risk to key parts of the Domain Name System (DNS) infrastructure. The DNS, which translates domain names into numerical internet addresses, has been the victim of attacks using a variety of methodologies.

https://twitter.com/ICANN/status/1099070857119391745?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Etweet

Last month, security company FireEye revealed that hackers associated with Iran were hijacking DNS records, rerouting users from a legitimate web address to a malicious server in order to steal passwords. This "DNSpionage" campaign targeted governments in the United Arab Emirates and Lebanon. The Department of Homeland Security's Cybersecurity and Infrastructure Security Agency had warned that U.S. agencies were also under attack; in its first emergency order, issued amid a government shutdown, the agency directed federal agencies to take action against DNS tampering. David Conrad, ICANN's chief technology officer, told the AFP news agency that the hackers are "going after the Internet infrastructure itself."

ICANN is urging domain owners to deploy DNSSEC, a more secure version of DNS that is difficult to manipulate. DNSSEC cryptographically signs DNS data, making it harder to spoof. Some attacks target the DNS by replacing the addresses of intended servers with the addresses of machines controlled by attackers; this type of attack only works when DNSSEC is not in use. ICANN also reaffirmed its commitment to collaborative efforts to ensure the security, stability, and resiliency of the internet's global identifier systems.

This month, ICANN offered a checklist of recommended security precautions for members of the domain name industry, including registries, registrars, resellers, and related others, to proactively protect their systems. ICANN aims to ensure that internet users reach their desired online destination by preventing "man in the middle" attacks in which a user is unknowingly redirected to a potentially malicious site.

Some users who have previously been victims of DNS hijacking think this move won't help them. One user commented on Hacker News, "This is nonsense, and possibly crossing the border from ignorant nonsense to malicious nonsense." Another user said, "There is in fact very little evidence that we "need" the authentication provided by DNSSEC." Others were similarly skeptical; one comment reads, "DNSSEC is quite famously a solution in search of a problem."

To know more about this news, check out ICANN's official post.

Internet governance project (IGP) survey on IPV6 adoption, initial reports

Root Zone KSK (Key Sign Key) Rollover to resolve DNS queries was successfully completed

RedHat shares what to expect from next week's first-ever DNSSEC root key rollover
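To see why ICANN views DNSSEC as a defense against these hijacks, consider a toy sign-then-verify sketch. Real DNSSEC uses public-key RRSIG/DNSKEY records chained to the root rather than a shared secret; the HMAC key below is purely an illustrative stand-in for the signing step:

```python
import hashlib
import hmac

# Toy illustration of the idea behind DNSSEC: a resolver accepts a DNS
# answer only if its signature verifies, so a spoofed record is rejected.
# Real DNSSEC is asymmetric (RRSIG/DNSKEY with a chain of trust to the
# root); this shared "zone key" is an illustrative assumption only.

ZONE_KEY = b"example-zone-signing-key"   # hypothetical key material

def sign_record(name: str, rtype: str, rdata: str) -> str:
    """Sign one DNS record's fields."""
    msg = f"{name}|{rtype}|{rdata}".encode()
    return hmac.new(ZONE_KEY, msg, hashlib.sha256).hexdigest()

def verify_record(name: str, rtype: str, rdata: str, signature: str) -> bool:
    """Accept the record only if the signature matches its contents."""
    return hmac.compare_digest(sign_record(name, rtype, rdata), signature)

sig = sign_record("example.com.", "A", "93.184.216.34")
print(verify_record("example.com.", "A", "93.184.216.34", sig))  # True
# An attacker who reroutes the name to their own server cannot produce a
# valid signature for the altered address:
print(verify_record("example.com.", "A", "6.6.6.6", sig))        # False
```

This is exactly the class of server-address swap described above: without a verifiable signature the forged answer is rejected, which is why the attack "only works when DNSSEC is not in use."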
ESET Scientists reveal Fancy Bear’s first documented use of UEFI rootkit targeting European governments

Melisha Dsouza
28 Sep 2018
3 min read
ESET researchers stated that they have found evidence that 'Fancy Bear' (a Russia-backed hacking group) is using the 'LoJax' malware to target certain government organizations in Europe. The research was presented on Thursday at the 2018 Microsoft BlueHat conference. This is the first recorded case of a UEFI rootkit that is 'active' and still in use. The researchers have not explicitly named the governments that were targeted, stating only that the hackers were active against the Balkans and some central and eastern European countries.

This attempt to target European governments is another of Fancy Bear's tactics after hacking into the Democratic National Committee. The group had previously targeted senators, social media sites, and the French presidential elections, and leaked Olympic athletes' confidential medical files, demonstrating its hacking abilities.

The LoJax UEFI rootkit

LoJax is known for its brutal persistence, which makes it challenging to remove from a system. It embeds itself in the computer's firmware and launches when the OS boots up. Because it sits in a computer's flash memory, removing LoJax takes time, effort, and extreme care to reflash the memory with new firmware.

In May 2018, Arbor Networks suggested that this Russian hacker group was utilizing Absolute Software's 'LoJack', a legitimate laptop recovery solution, for unscrupulous means. Hackers tampered with samples of the LoJack software and programmed them to communicate with a command-and-control (C2) server controlled by Fancy Bear rather than the legitimate Absolute Software server. The modified version was named LoJax to distinguish it from Absolute Software's legitimate solution. LoJax is implemented as a UEFI/BIOS module so that it can survive operating system wipes or hard drive replacement. The UEFI rootkit was found bundled with a toolset able to patch a victim's system firmware and install malware at the system's deepest level.

In at least one recorded case, the hackers behind the malware were able to write a malicious UEFI module into a system's SPI flash memory, leading to the execution of malicious code on disk during the boot process. ESET further added that the malicious UEFI module is being bundled into exploit kits that can access and patch UEFI/BIOS settings. Alongside the malware, three other tools were found in Fancy Bear's refreshed kit:

A tool that dumps information related to PC settings into a text file

A tool that saves an image of the system firmware by reading the contents of the SPI flash memory where the UEFI/BIOS is located

A tool that adds the malicious UEFI module to the firmware image and writes it back to the SPI flash memory

The researchers affirm that the UEFI rootkit shows the increased severity of the hacking group's capabilities. However, there are preventative measures to safeguard a system against this notorious group of hackers: Fancy Bear's rootkit isn't properly signed, so a computer's Secure Boot feature can prevent the attack by verifying every component in the boot process. Secure Boot can be switched on in a computer's pre-boot settings.

For more insights on this news, head over to ZDNet.

Microsoft claims it halted Russian spearphishing cyberattacks

Russian censorship board threatens to block search giant Yandex due to pirated content

UN meetings ended with US & Russia avoiding formal talks to ban AI enabled killer robots
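The Secure Boot defense works because each stage of the boot chain is verified before it executes, so an unsigned implant like LoJax fails the check. A highly simplified sketch of the idea (real UEFI Secure Boot verifies X.509 signatures against the firmware's db/dbx key databases; the hash allow-list and component names here are illustrative assumptions):

```python
import hashlib

# Simplified model of why Secure Boot stops an unsigned UEFI module:
# every component in the boot chain must be recognized as trusted before
# it is allowed to run. Real Secure Boot checks X.509 signatures against
# the firmware's db/dbx databases; this hash allow-list is a stand-in.

trusted_hashes = set()

def enroll(component: bytes) -> None:
    """Simulate enrolling a legitimate, vendor-signed boot component."""
    trusted_hashes.add(hashlib.sha256(component).hexdigest())

def verify_boot_chain(components) -> bool:
    """Refuse to boot if any component in the chain is unrecognized."""
    for blob in components:
        if hashlib.sha256(blob).hexdigest() not in trusted_hashes:
            return False   # halt: untrusted module, e.g. a LoJax-style implant
    return True

enroll(b"vendor-bootloader-v2")
enroll(b"os-kernel-5.x")

# A clean chain boots; a chain containing an unsigned implant does not.
print(verify_boot_chain([b"vendor-bootloader-v2", b"os-kernel-5.x"]))      # True
print(verify_boot_chain([b"vendor-bootloader-v2", b"lojax-uefi-module"]))  # False
```

This is the reason an improperly signed rootkit is blocked: the implanted module simply never matches anything in the trust store, and the boot halts before the malicious code runs.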

The second instance of Windows zero-day vulnerability disclosed in less than two months

Savia Lobo
24 Oct 2018
3 min read
Two months ago, a security researcher going by the name SandboxEscaper disclosed a local privilege escalation exploit in Windows. The researcher is back with another Windows zero-day vulnerability, disclosed on Twitter yesterday. A Proof-of-Concept (PoC) for the vulnerability was also published on GitHub.

https://twitter.com/SandboxEscaper/status/1054744201244692485

Security experts who analyzed the PoC state that this zero-day vulnerability affects only recent versions of Windows, such as Windows 10 (all versions, including the latest October 2018 Update), Server 2016, and even the new Server 2019. An attacker can use it to elevate their privileges on systems to which they already have access. Will Dormann, software vulnerability analyst at CERT/CC, says this is because the Data Sharing Service (dssvc.dll) "does not seem to be present on Windows 8.1 and earlier systems." According to ZDNet, experts who analyzed the PoC say, "The PoC, in particular, was coded to delete files for which a user would normally need admin privileges to do so. With the appropriate modifications, other actions can be taken."

The second zero-day Windows exploit

This zero-day exploit is quite similar to the previous exploit released by SandboxEscaper in August, said Kevin Beaumont, an infosec geek at Vault-Tec. "It allows non-admins to delete any file by abusing a new Windows service not checking permissions again", he added. Microsoft released a security patch for the previous vulnerability in the September 2018 Patch Tuesday updates. While SandboxEscaper's PoC for the previous exploit wrote garbage data to a Windows PC, the PoC for the second zero-day will delete crucial Windows files, crashing the operating system and forcing users through a system restore process. Hence, Mitja Kolsek, CEO of ACROS Security, advised users to avoid running this recent PoC.

Kolsek's company released an update for their product (called 0patch) that blocks exploitation attempts until Microsoft releases an official fix. Kolsek and his team are currently working on porting their 'micro-patch' to all affected Windows versions. As per ZDNet, malware authors integrated SandboxEscaper's first zero-day into various malware distribution campaigns, and experts believe they can use this zero-day to delete OS files or DLLs and replace them with malicious versions. SandboxEscaper argues that this second zero-day can be just as useful for attackers as the first.

To know more about this news in detail, head over to ZDNet's website.

'Peekaboo' Zero-Day Vulnerability allows hackers to access CCTV cameras, says Tenable Research

Implementing Identity Security in Microsoft Azure [Tutorial]

Upgrade to Git 2.19.1 to avoid a Git submodule vulnerability that causes arbitrary code execution
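The bug class Beaumont describes, a privileged service performing file operations on behalf of a caller without checking the caller's own permissions, can be sketched abstractly. Everything below (the names, the ownership table, both functions) is an illustrative toy model, not the actual Windows service or API:

```python
# Toy model of the bug class behind the Data Sharing Service zero-day:
# a service running with high privileges deletes a file on behalf of a
# caller without checking whether the caller may touch that file.
# All names and the permission table are illustrative assumptions.

FILE_OWNERS = {
    r"C:\Users\alice\notes.txt": "alice",
    r"C:\Windows\System32\kernel32.dll": "SYSTEM",
}

def vulnerable_delete(caller: str, path: str) -> bool:
    """Runs as SYSTEM and never checks the caller: anyone deletes anything."""
    return path in FILE_OWNERS

def patched_delete(caller: str, path: str) -> bool:
    """The fix: act with the *caller's* permissions, not the service's."""
    owner = FILE_OWNERS.get(path)
    return owner is not None and caller in (owner, "SYSTEM")

# A non-admin caller can destroy a system file through the buggy path,
# but not once the service checks the caller's rights.
print(vulnerable_delete("mallory", r"C:\Windows\System32\kernel32.dll"))  # True
print(patched_delete("mallory", r"C:\Windows\System32\kernel32.dll"))     # False
```

On Windows the real remediation is for the service to impersonate the client's token before touching the filesystem; the toy `patched_delete` captures that idea in miniature.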

California’s new bot disclosure law bans bots from pretending to be human to sell products or influence elections

Savia Lobo
03 Oct 2018
3 min read
Last week, California's Governor Jerry Brown signed a bill into law that will ban automated accounts, more commonly known as bots, from pretending to be real people in pursuit of selling products or influencing elections. The bill was approved on September 28 and takes effect on July 1, 2019.

As per the California Senate, "This bill would, with certain exceptions, make it unlawful for any person to use a bot to communicate or interact with another person in California online with the intent to mislead the other person about its artificial identity for the purpose of knowingly deceiving the person about the content of the communication in order to incentivise a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election."

The law will help tackle the social media manipulation used for foreign interference. Bots caused major issues during the 2016 U.S. presidential election and have since grown into a menace that platforms like Twitter have been trying to combat. The 2016 election saw Russian-controlled bots actively manipulating opinions, retweeting Donald Trump's tweets 470,000 times and Hillary Clinton's fewer than 50,000 times. The main aim of this effort is to target bots that spread misinformation.

Twitter said that it took down 9.9 million potentially spammy or automated accounts per week in May and has placed warnings on suspicious accounts. Twitter has also announced an update on its "election integrity" project ahead of the US mid-term elections in November. The changes include updated rules on fake accounts and on sharing stolen information; Twitter said it would now take stock avatar photos and copied profile bios into account when determining whether an account is genuine.

Robert Hertzberg, the California state senator who pushed for the new law forcing bots to disclose their lack of humanity, told The New York Times he was the subject of a bot attack over a bail reform bill. He decided to fight bots with bots by launching @Bot_Hertzberg in January. As the new California law requires, the account discloses its automated nature. "*I AM A BOT.*" states the account's Twitter profile. "Automated accounts like mine are made to misinform & exploit users. But unlike most bots, I'm transparent about being a bot."

To know more about the bill in detail, check out the California Senate bill.

Sentiment Analysis of the 2017 US elections on Twitter

Facebook, Twitter takes down hundreds of fake accounts with ties to Russia and Iran, suspected to influence the US midterm elections

DCLeaks and Guccifer 2.0: How hackers used social engineering to manipulate the 2016 U.S. elections
USC researchers present identification and mitigation techniques to combat fake news

Natasha Mathur
24 Jan 2019
6 min read
A group of researchers from the University of Southern California published a paper titled "Combating Fake News: A Survey on Identification and Mitigation Techniques" that discusses existing methods and techniques for the identification and mitigation of fake news. The paper categorizes existing work on fake news detection and mitigation into three types:

fake news identification using content-based methods (classifies news based on the content of the information to be verified)

identification using feedback-based methods (classifies news based on the user responses it receives over social media)

intervention-based solutions (offers computational solutions for identifying the spread of false information along with methods to mitigate the impact of fake news)

These methods are further broken down in the paper's "Categorization of existing methods" figure.

"The scope of the methods discussed in content-based and feedback-based identification is limited to classifying news from a snapshot of features extracted from social media. For practical applications, we need techniques that can dynamically interpret and update the choice of actions for combating fake news based on real-time content propagation dynamics", reads the paper. Techniques that provide such computational methods and algorithms are discussed extensively in the paper. Let's have a look at some of these strategies.

Mitigation strategies: decontamination, competing cascades, and multi-stage intervention

The paper presents three mitigation strategies aimed at reversing the effect of fake news by introducing true news on social media platforms. This ensures that users are exposed to the truth and that the impact of fake news on user opinions is mitigated. The computational methods designed for this purpose first need to consider the information diffusion models widely used in social networks, such as the Independent Cascade (IC) and Linear Threshold (LT) models, as well as point process models such as the Hawkes process.

Decontamination

The paper describes the strategy introduced by Nam P. Nguyen in his paper "Containment of misinformation spread in online social networks". The strategy decontaminates users who have been exposed to fake news. It uses a diffusion process (which estimates the spread of information over the population) modelled with the Independent Cascade (IC) or Linear Threshold (LT) model. A simple greedy algorithm then selects the best set of users and starts a diffusion process for true news so that at least a fraction of the selected users can be decontaminated. The algorithm iteratively selects the next best user to include in the set based on the marginal gain obtained by including that user (i.e. the number of users activated or reached by the true news in expectation if the set additionally included the chosen user).

Competing cascades

The paper also covers an intervention strategy based on competing cascades: while the fake news is propagating through the network, a true news cascade is introduced to compete with it. The paper discusses an "influence blocking maximization objective" by Xinran He as an optimal strategy for spreading true news in the presence of a fake news cascade. The process strategically selects a set of k users with the objective of minimizing the number of users who are activated by fake news at the end of the diffusion. This model assumes that once a user is activated by either the fake or the true cascade, that user remains activated under that cascade.
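The decontamination recipe, estimating spread with the Independent Cascade model and greedily adding the user with the largest marginal gain in expected reach, can be sketched as follows. The graph, activation probability, and Monte Carlo settings are illustrative assumptions, not values from the surveyed papers:

```python
import random

# Minimal sketch of the greedy seed selection used in the decontamination
# strategy: estimate expected spread under the Independent Cascade (IC)
# model via Monte Carlo simulation, then iteratively pick the user whose
# inclusion yields the largest marginal gain. The toy graph and the
# activation probability are illustrative, not from the paper.

GRAPH = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
P = 0.5           # per-edge activation probability
TRIALS = 500      # Monte Carlo trials for the expected-spread estimate

def simulate_ic(seeds, rng):
    """One IC run: each newly active node tries once to activate neighbors."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        node = frontier.pop()
        for nbr in GRAPH[node]:
            if nbr not in active and rng.random() < P:
                active.add(nbr)
                frontier.append(nbr)
    return len(active)

def expected_spread(seeds, rng):
    return sum(simulate_ic(seeds, rng) for _ in range(TRIALS)) / TRIALS

def greedy_seeds(k):
    """Pick k seed users by largest marginal gain in expected reach."""
    rng = random.Random(42)
    chosen = []
    for _ in range(k):
        best = max((u for u in GRAPH if u not in chosen),
                   key=lambda u: expected_spread(chosen + [u], rng))
        chosen.append(best)
    return chosen

print(greedy_seeds(2))
```

In the decontamination setting the chosen seeds are the users from whom the true-news cascade is started, so that in expectation a target fraction of the previously exposed users is reached.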
Multi-stage intervention

Another strategy discussed in the paper is the "multi-stage intervention strategy" proposed by Mehrdad Farajtabar in the paper "Fake News Mitigation via Point Process Based Intervention". This strategy allows "external interventions to adapt as necessary to the observed propagation dynamics of fake news", states the paper. The purpose of the external interventions is to incentivize certain users to share more true news, counteracting the fake news process over the network. At each step of the intervention, budget and user-activity constraints are imposed, which makes it possible to track the optimal amount of external incentivization needed to achieve the desired objective, i.e. minimizing the difference between fake and true news exposures. This strategy uses a reinforcement-learning-based policy iteration framework to derive the optimal amount of external incentivization.

Identification strategies: network monitoring and crowd-sourcing

The paper discusses identification mechanisms that actively detect and prevent the spread of misinformation on social media platforms.

Network monitoring

The paper presents a strategy based on network monitoring that involves intercepting information from a list of suspected fake news sources using computer-aided social media accounts or real paid user accounts. These accounts filter the information they receive and block fake news. The strategy determines a "network monitor placement" by finding the part of the network with the highest probability of fake news transmission. Another monitor placement solution involves a Stackelberg game between leader (attacker) and follower (defender) nodes. The paper also mentions an idea implemented by various network monitoring sites: using multiple human or machine classifiers to improve detection robustness, since something missed by one fact-checker might be caught by another.

Crowd-sourcing

Another identification strategy in the paper makes use of crowd-sourced user feedback on social media platforms, which lets users report or flag fake news articles. These crowd-sourced signals are used to prioritize the fact-checking of news articles by capturing "the trade-off between a collection of evidence v/s the harm caused from more users being exposed to fake news (exposures) to determine when the news needs to be verified", states the paper. The fact-checking events are represented using point process models, which makes it possible to derive an optimal fact-checking intensity proportional to the rate of exposure to misinformation and the evidence collected as flags. The paper also mentions an online learning algorithm that leverages user flags more accurately by jointly inferring the flagging accuracies of users while identifying fake news.

"The literature surveyed here has demonstrated significant advances in addressing the identification and mitigation of fake news. Nevertheless, there remain many challenges to overcome in practice," state the researchers. For more information, check out the official research paper.

Fake news is a danger to democracy. These researchers are using deep learning to model fake news to understand its impact on the elections

Four 2018 Facebook patents to battle fake news and improve news feed

Facebook patents its news feed filter tool to provide more relevant news to its users

Researchers find a way to spy on remote screens through the Webcam mic and machine learning

Fatema Patrawala
03 Sep 2018
6 min read
With a little help from machine learning, you might be able to tell what the people on the other end of a Hangouts session are really looking at on their screens. Based on research presented at the CRYPTO 2018 conference in Santa Barbara last week, your webcam could give away details of what's on your screen if the person on the other end is listening the right way. All they need to do is process the audio picked up by your microphone.

Daniel Genkin of the University of Michigan, Mihir Pattani of the University of Pennsylvania, Roei Schuster of Cornell Tech and Tel Aviv University, and Eran Tromer of Tel Aviv University and Columbia University investigated a potential new avenue of remote surveillance dubbed "Synesthesia": a side-channel attack that can reveal the contents of a remote screen, providing access to potentially sensitive information based solely on "content-dependent acoustic leakage from LCD screens."

Anyone who remembers working with cathode ray tube (CRT) monitors is familiar with the phenomenon of coil whine. Even though LCD screens consume far less power than old CRTs, they still generate the same sort of noise, though in a totally different frequency range. Because of the way computer screens render a display, sending signals to each pixel of each line with varying intensity levels for each sub-pixel, the power sent to each pixel fluctuates as the monitor goes through its refresh scans. Variations in the intensity of each pixel create fluctuations in the sound produced by the screen's power supply, leaking information about the image being refreshed: information that can be processed with machine learning algorithms to extract details about what's being displayed.

That audio could be captured and recorded in a number of ways, as demonstrated by the researchers:

over a device's embedded microphone or an attached webcam microphone during a Skype, Google Hangouts, or other streaming audio chat

through recordings from a nearby device, such as a Google Home or Amazon Echo

over a nearby smartphone

with a parabolic microphone from distances up to 10 meters

Even a reasonably cheap microphone could pick up and record the audio from a display, even though the signal sits just at the edge of human hearing. And it turns out that audio can be exploited with a little bit of machine learning black magic.

The researchers began by attempting to recognize simple, repetitive patterns. They created a simple program that displays patterns of alternating horizontal black and white stripes of equal thickness (in pixels), "which shall be referred to as Zebras," the researchers recounted in their paper. These "zebras" each had a different period, measured by the distance in pixels between black stripes. As the program ran, the team recorded the sound emitted by a Soyo DYLM2086 monitor. With each different stripe period, the frequency of the ultrasonic noise shifted in a predictable manner.

The variations in the audio only really provide reliable data about the average intensity of a particular line of pixels, so the leakage cannot directly reveal the content of a screen. However, by applying supervised machine learning in three different types of attacks, the researchers demonstrated that it was possible to extract a surprising amount of information about what was on the remote screen. After training, a neural-network-generated classifier was able to reliably identify which of the Alexa top 10 websites was being displayed on a screen based on audio captured over a Google Hangouts call, with 96.5 percent accuracy.
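The zebra calibration can be mimicked in a few lines: render alternating stripes, reduce each scanned line to its average intensity (the quantity the acoustic leakage tracks), and read the stripe period back out of that per-line signal. The screen dimensions and values below are illustrative, and real recovery works in the frequency domain of the recorded audio rather than by counting samples:

```python
# Sketch of the "zebra" calibration idea: the acoustic leakage tracks the
# average intensity of each refreshed pixel line, so stripes of period p
# modulate that per-line signal at a rate that depends on p. The screen
# size and pixel values are illustrative assumptions.

HEIGHT, WIDTH = 64, 8

def zebra(period):
    """Horizontal black/white stripes, switching every `period` rows."""
    return [[255 if (row // period) % 2 else 0] * WIDTH
            for row in range(HEIGHT)]

def line_intensities(image):
    """What the leakage 'sees': the mean intensity of each scanned line."""
    return [sum(row) / len(row) for row in image]

def stripe_period(signal):
    """Recover the stripe period as the index of the first transition."""
    first = signal[0]
    return next(i for i, v in enumerate(signal) if v != first)

# The rendered period is recoverable from the per-line intensity signal,
# which is why each zebra produces a distinct, predictable tone.
for p in (2, 4, 8):
    print(p, stripe_period(line_intensities(zebra(p))))
```

Each period prints back its own value, mirroring the paper's observation that every zebra period maps to a predictable shift in the ultrasonic tone.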
In a second experiment, the researchers were able to reliably capture on-screen keyboard strokes on a display in portrait mode (the typical tablet and smartphone configuration) with 96.4 percent accuracy, for transition times of one and three seconds between key "taps." On a landscape-mode display, accuracy of the classifiers was much lower, with a first-guess success rate of only 40.8 percent. However, the correct typed word was in the top three choices 71.9 percent of the time for landscape mode, meaning that further human analysis could still result in accurate data capture. (The correct typed word was in the top three choices for the portrait mode classifier 99.6 percent of the time.) In a third experiment, the researchers used guided machine learning in an attempt to extract text from displayed content based on the audio—a much more fine-grained sort of data than detecting changes in screen keyboard intensity. In this case, the experiment focused on a test set of 100 English words and also used somewhat ideal display settings for this sort of capture: all the letters were capitalized (in the Fixedsys Excelsior typeface with a character size 175 pixels wide) and black on an otherwise white screen. The results, as the team reported them, were promising: The per-character validation set accuracy (containing 10% of our 10,000 trace collection) ranges from 88% to 98%, except for the last character where the accuracy was 75%. Out of 100 recordings of test words, for two of them preprocessing returned an error. For 56 of them, the most probable word on the list was the correct one. For 72 of them, the correct word appeared in the list of top-five most probable words. While these tests were all done with a single monitor type, the researchers also demonstrated that a "cross screen" attack was possible—by using a remote connection to display the same image on a remote screen and recording the audio, it was possible to calibrate a baseline for the targeted screen. 
It's clear that there are limits to the practicality of acoustic side channels as a means of remote surveillance. But as people move to mobile devices such as smartphones and tablets for more computing tasks, with embedded microphones, limited screen sizes, and a more predictable display environment, the potential for these sorts of attacks could rise. And mitigating the risk would require re-engineering of current screen technology. So, while it remains a small risk, it's certainly one that those working with sensitive data will need to keep in mind, especially if they're spending much time in Google Hangouts with that data on-screen.

Google Titan Security key with secure FIDO two factor authentication is now available for purchase
6 artificial intelligence cybersecurity tools you need to know
Defending Democracy Program: How Microsoft is taking steps to curb increasing cybersecurity threats to democracy
Ethereum community postpones Constantinople, post vulnerability detection from ChainSecurity

Savia Lobo
16 Jan 2019
2 min read
The Ethereum developers announced yesterday that they are pulling back the Constantinople hard fork upgrade after a vulnerability that could allow hackers to steal users' funds was reported. The upgrade was scheduled to launch today, January 16th.

The issue, a 'reentrancy attack' enabled by Ethereum Improvement Proposal (EIP) 1283, was identified by the smart contract audit firm ChainSecurity, which also described the bug in detail in a Medium blog post yesterday.

According to the Ethereum official blog, "Security researchers like ChainSecurity and TrailOfBits ran (and are still running) analysis across the entire blockchain. They did not find any cases of this vulnerability in the wild. However, there is still a non-zero risk that some contracts could be affected."

According to a statement by the Ethereum core developers and the Ethereum security community, "Because the risk is non-zero and the amount of time required to determine the risk with confidence is longer [than] the amount of time available before the planned Constantinople upgrade, a decision was reached to postpone the fork out of an abundance of caution."

ChainSecurity's blog post explains the cause of the potential vulnerability and also suggests how smart contracts can be tested for it. EIP-1283 introduces cheaper gas costs for SSTORE operations. Had the upgrade gone ahead, smart contracts already on the chain that use certain code patterns would have become vulnerable to a reentrancy attack, even though they were not vulnerable before the upgrade.

Afri Schoedon, the hard fork coordinator at Ethereum, said, "We will decide (sic) further steps on Friday in the all-core-devs call. For now it will not happen this week. Stay tuned for instructions."

To know more about this news in detail, visit the Ethereum official blog.
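Why cheaper SSTOREs re-open a reentrancy window can be shown with toy gas accounting. The gas constants below are real protocol values (the 2,300-gas stipend forwarded by Solidity's `transfer()`/`send()`, the pre-Constantinople 5,000-gas cost of rewriting a non-zero storage slot, and EIP-1283's 200-gas cost for re-writing an already-dirty slot), but the one-line affordability check is an illustrative sketch, not an EVM implementation.

```python
# Toy gas accounting for the EIP-1283 reentrancy concern.
STIPEND = 2300            # gas forwarded by Solidity's transfer()/send()
SSTORE_PRE_1283 = 5000    # rewrite a non-zero slot, pre-Constantinople
SSTORE_DIRTY_1283 = 200   # EIP-1283 cost to re-write an already-dirty slot

def reentrant_write_possible(sstore_cost, gas=STIPEND):
    """Can a contract re-entered via transfer() afford a storage write?"""
    return gas >= sstore_cost

print(reentrant_write_possible(SSTORE_PRE_1283))    # False
print(reentrant_write_possible(SSTORE_DIRTY_1283))  # True
```

Contracts were written assuming the 2,300-gas stipend was too small to change state, so a re-entered callee was effectively read-only; under EIP-1283 pricing that assumption would have silently stopped holding for certain code patterns.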
Ethereum classic suffered a 51% attack; developers deny, state a new ASIC card was tested
Ethereum's 1000x Scalability Upgrade 'Serenity' is coming with better speed and security: Vitalik Buterin at Devcon
Origin DApp: A decentralized marketplace on Ethereum mainnet aims to disrupt gig economy platforms like Airbnb and Uber
Researchers highlight design weaknesses in the 4G and 5G Cellular Paging Protocols

Savia Lobo
25 Feb 2019
4 min read
A few researchers from Purdue University and the University of Iowa have recently found three new security flaws in the 4G and 5G protocols that can allow intruders to intercept calls and track a user's device location. The research paper, 'Privacy Attacks to the 4G and 5G Cellular Paging Protocols Using Side Channel Information', describes design weaknesses of the 4G/5G cellular paging protocol that attackers can misuse to confirm a victim's presence in a particular cell area from the victim's soft identity alone (e.g., phone number, Twitter handle), using a novel attack called ToRPEDO (TRacking via Paging mEssage DistributiOn). The paper also presents two further attacks, PIERCER and IMSI-Cracking, which can be carried out via the ToRPEDO attack. The researchers state in the paper, "All of our attacks have been validated in a realistic setting for 4G using cheap software-defined radio and open-source protocol stack."

According to TechCrunch, "Hussain, along with Ninghui Li and Elisa Bertino at Purdue University, and Mitziu Echeverria and Omar Chowdhury at the University of Iowa are set to reveal their findings at the Network and Distributed System Security Symposium in San Diego on Tuesday."

The three security flaws in the 4G/5G cellular paging protocols

The ToRPEDO attack

The ToRPEDO attack exploits a 4G/5G paging protocol weakness. It enables an attacker who already knows the victim's phone number to verify the victim's presence in a particular cellular area and, in the process, to identify the victim's paging occasion. ToRPEDO can enable an adversary to verify a victim's coarse-grained location information, inject fabricated paging messages, and mount denial-of-service attacks.

The PIERCER attack

This attack exploits a 4G paging deployment vulnerability that allows an attacker to determine a victim's international mobile subscriber identity (IMSI) on the 4G network.
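The determinism ToRPEDO exploits can be sketched in a few lines. This is a simplified rendering of the LTE idle-mode paging arithmetic (in the spirit of 3GPP TS 36.304, where the paging frame is derived from UE_ID = IMSI mod 1024): the defaults for the DRX cycle `T` and the parameter `N` are illustrative assumptions, and real values come from the cell's broadcast configuration.

```python
def paging_frame(imsi: int, T: int = 128, N: int = 128) -> int:
    """Simplified LTE paging-frame calculation.

    The radio frame in which a phone listens for pages is a FIXED
    function of its IMSI -- the determinism that ToRPEDO exploits.
    Illustrative sketch; not the full TS 36.304 computation.
    """
    ue_id = imsi % 1024                  # UE identity index
    return (T // N) * (ue_id % N) % T    # frame number of the paging occasion

# Pages triggered for the same phone (e.g. by placing and quickly
# cancelling calls) always land in the same frame, so an attacker
# sniffing the paging channel can confirm the victim is in the cell.
print(paging_frame(310_150_123_456_789))
```

Because the occasion never changes for a given subscriber, repeated triggered pages that keep landing in one frame are enough to verify the victim's presence without ever decrypting anything.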
The IMSI-Cracking attack

This attack leaks the victim's IMSI on both 4G and 5G. The researchers demonstrate how, using the ToRPEDO attack as a sub-step, attackers can retrieve a victim device's persistent identity (i.e., IMSI) with a brute-force IMSI-Cracking attack.

One of the co-authors, Syed Rafiul Hussain, told TechCrunch, "Any person with a little knowledge of cellular paging protocols can carry out this attack." According to Hussain, all four major U.S. operators (AT&T, Verizon, which owns TechCrunch, Sprint, and T-Mobile) are affected by ToRPEDO, and the attacks can be carried out with radio equipment costing as little as $200, TechCrunch reports.

Hussain said the flaws were reported to the GSMA, an industry body that represents mobile operators. The GSMA recognized the flaws, but a spokesperson was unable to provide comment when reached. It isn't known when the flaws will be fixed.

One of the users wrote on HackerNews, "Most people consider the fact that your handset will readily talk to any base station that's on the air to be a feature. Try to imagine how things would work if you had to authenticate and authorize every station on the network. It's true that anyone who gets on the air and speaks the air protocol can screw with your phone. Those people are also violating multiple laws and regulations in the course of doing so."

To know more about these flaws in detail, head over to the complete research paper.

Read Next

Security researchers discloses vulnerabilities in TLS libraries and the downgrade Attack on TLS 1.3
Remote Code Execution Flaw in APT Linux Package Manager allows man-in-the-middle attack
Internet Outage or Internet Manipulation? New America lists government interference, DDoS attacks as top reasons for Internet Outages across the world
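The brute-force step can be sketched as a filter over candidate IMSIs: keep only those whose derived paging occasion matches what was observed via ToRPEDO. Everything here is an illustrative assumption: the simplified occasion function, the hypothetical operator prefix, the 15-digit IMSI layout, and the truncated search width are not the paper's parameters.

```python
def paging_frame(imsi, T=128, N=128):
    """Simplified, illustrative paging-occasion function (not TS 36.304)."""
    return (T // N) * ((imsi % 1024) % N) % T

def candidate_imsis(observed_frame, prefix=310150, width=10**4):
    """Keep candidate IMSIs consistent with one observed paging frame.

    `prefix` is a hypothetical MCC+MNC; the subscriber part is searched
    over a deliberately truncated range for the sketch.
    """
    base = prefix * 10**9  # hypothetical 15-digit IMSI layout
    return [base + msin for msin in range(width)
            if paging_frame(base + msin) == observed_frame]

# One observation already eliminates the vast majority of candidates;
# further observations (or PIERCER-style side information) shrink the
# remaining space toward a unique IMSI.
cands = candidate_imsis(paging_frame(310150 * 10**9 + 1234))
print(len(cands), cands[:3])
```

The point of the sketch is the pruning: each observed paging frame is consistent with only a small fraction of the IMSI space, which is why the researchers describe IMSI recovery as feasible by brute force once the paging occasion is known.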
Qubes OS 4.0.1-rc1 has been released!

Savia Lobo
06 Nov 2018
2 min read
Yesterday, the Qubes OS community announced the first release candidate of Qubes OS 4.0.1, the first of at least two planned point releases for version 4.0. Qubes OS, a free and open source security-oriented operating system, aims to provide security through isolation. Virtualization in Qubes OS is performed by Xen; the user environments can be based on Fedora, Debian, Whonix, and Microsoft Windows.

The community announced the release of 3.2.1-rc1 one month ago. Since no serious problems have been discovered in 3.2.1-rc1, they plan to build the final version of Qubes 3.2.1 at the end of this week.

Features of Qubes OS 4.0.1-rc1

- All 4.0 dom0 updates to date
- Fedora 29 TemplateVM
- Debian 9 TemplateVM
- Whonix 14 Gateway and Workstation TemplateVMs
- Linux kernel 4.14

The next release candidate

The second release candidate, 4.0.1-rc2, will include a fix for the Nautilus bug reported in #4460, along with any other available fixes for bugs reported against this release candidate.

To know more about Qubes OS 4.0.1-rc1, visit its official release document.

QubesOS' founder and endpoint security expert, Joanna Rutkowska, resigns; joins the Golem Project to focus on cloud trustworthiness
Harvard Law School launches its Caselaw Access Project API and bulk data service making almost 6.5 million cases available
Google now requires you to enable JavaScript to sign-in as part of its enhanced security features
Facebook unfriends Twitter. Cross-posted tweets on Facebook disappear temporarily.

Prasad Ramesh
30 Aug 2018
2 min read
Following Facebook's move earlier this month to restrict cross-posts from other platforms via changes to its API platform, many users noticed their old Twitter posts disappearing from Facebook this week. The cross-posting option lets users publish their Twitter posts to Facebook automatically.

The absence of cross-posting was first noticed by users who relied heavily on it to keep their Facebook profiles active; without the feature, the Twitter app for Facebook was not of much use. A large number of old posts disappeared, first noticed around August 26, leaving users furious. Some users' profiles were left fairly empty, since they relied on cross-posting to keep their accounts active.

The Facebook API platform changes are part of Facebook's plan to take strict measures against misuse of its platform after the Cambridge Analytica scandal at the start of this year. Since then, Facebook has been taking a variety of steps to prevent data misuse; stopping third parties from being able to post to Facebook is one of them.

TechCrunch was the first to report on the issue, the sudden disappearance of cross-posts from Twitter, and Facebook confirmed to them the same day that it was looking into it. The API changes to prevent cross-posting would not, by themselves, have been expected to mass-delete all the older posts. Rather, following these changes from Facebook, Twitter asked for its app to be deleted from Facebook's platform, and users' old Twitter posts on Facebook were deleted as a result.

It turns out that this was just a bug, and it is now fixed. In a statement to Axios, Facebook cleared up the confusion, saying: "A Twitter admin requested their app be deleted, which resulted in content that people had cross-posted from Twitter to Facebook also being temporarily removed from people's profiles. However, we have since restored the past content and it's now live on people's profiles."

You can find the original report on TechCrunch.
Facebook Watch is now available world-wide challenging video streaming rivals, YouTube, Twitch, and more
A new conservative employee group within Facebook to protest Facebook's "intolerant" liberal policies
Facebook's AI algorithm finds 20 Myanmar Military Officials guilty of spreading hate and misinformation, leads to their ban
Facebook, Twitter, and other tech giants to fight against India’s new “intermediary guidelines” Reuters reports

Melisha Dsouza
14 Jan 2019
4 min read
According to a report by Reuters released late last month, the Indian Information Technology ministry has proposed rules that will compel major technology companies such as Facebook, WhatsApp, and Twitter to take down unlawful content affecting the "sovereignty and integrity of India". Under the rules, such content will have to be taken down within 24 hours of notification by a court or a government body. The rules are proposed with the aim of achieving 'a safer social media'. The proposal drafted by the ministry is open for public comment until 31st January 2019, after which it will be adopted as law, either 'with or without changes'.

Now, Reuters reports that sources familiar with the matter have revealed that the tech giants are preparing to fight these content-regulation rules. India is one of the world's biggest Internet markets, with about 300 million Facebook users, more than 200 million WhatsApp users, and millions of Twitter users as well. Reuters also reports that many U.S. and Indian lobby groups representing these top tech companies have started seeking legal opinions on the impact of the rules, and have been advised by law firms on drafting objections to be filed with the IT ministry.

According to the Ministry of Electronics and Information Technology, the draft Intermediary Guidelines will "curb misuse of Social Media for mob lynching and other violence". Last year, fake messages about child traffickers and kidnappers circulated through WhatsApp sparked mob lynchings in India.

Mozilla Corp. called the proposal "a blunt and disproportionate" solution to regulating harmful online content, adding that the rules could lead to over-censorship of online content. Gopalakrishnan S, joint secretary at India's IT ministry, said that the proposal would 'make social media safer' and 'not curb freedom of speech'.
Industry executives and civil rights activists disagree. They state that the rules could be used by the government of Prime Minister Narendra Modi to increase surveillance of the public, given that the proposal comes just ahead of India's national election to be held in May. Sources also expressed concern to Reuters that the rules will put users' privacy at stake through round-the-clock monitoring of online content: the rules require companies with more than 5 million Indian users to have a local office and a nodal officer for 24x7 coordination with law enforcement. The rules also mandate that, when questioned by the government, companies must reveal the origin of a message, calling into question user confidentiality on platforms like WhatsApp that use end-to-end encryption to protect user privacy.

Twitter was abuzz with mixed sentiments. While some supported the goal of banishing fake news and misinformation from the internet, others were concerned about targeted surveillance.

https://twitter.com/akhileshsharma1/status/1081499612698083328
https://twitter.com/subhapa/status/1083240653272825856
https://twitter.com/subhapa/status/1083256991156453377

While the rules come just in time to prevent malicious actors from misusing social media platforms to spread fake news and sway voters, we cannot help but notice the strict obligations tech giants will face if this draft becomes law. You can head over to Reuters for the full coverage of this news.

US government privately advised by top Amazon executive on web portal worth billions to the Amazon; The Guardian reports
Australia passes a rushed anti-encryption bill "to make Australians safe"; experts find "dangerous loopholes" that compromise online privacy and safety
Australia's Assistance and Access (A&A) bill, popularly known as the anti-encryption law, opposed by many including the tech community