
Tech News - Security

470 Articles

Git-bug: A new distributed bug tracker embedded in git

Melisha Dsouza
20 Aug 2018
3 min read
git-bug is a distributed bug tracker embedded in git. Because it uses git's internal storage, no extra files are added to your project, and you can push your bugs to the same git remote you already use to collaborate with other people. The main idea behind implementing a distributed bug tracker in Git was to stop relying on a web service somewhere to deal with bugs; thanks to this implementation, browsing and editing bug reports offline is no longer a pain.

While git-bug addresses a pressing need, note that the project is not yet ready for full-fledged use: it is currently a proof of concept, released just three days ago at version 0.2.0. Reddit is abuzz with views on the release, with some users praising the idea and others pointing out its shortcomings.

Installing git-bug, Linux packages needed, and CLI usage

To install git-bug, all you need to do is execute the following command:

go get github.com/MichaelMure/git-bug

If it's not done already, add the golang binary directory to your PATH:

export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

You can also use pre-compiled binaries by following three simple steps:
Head over to the release page and download the appropriate binary for your system.
Copy the binary anywhere in your PATH.
Rename the binary to git-bug (or git-bug.exe on Windows).

The only Linux package currently available is for Arch Linux (AUR).

Further, you can drive git-bug from the CLI with the following commands:
Create a new bug: git bug new (your favorite editor will open to write a title and a message).
Push your new entry to a remote: git bug push [<remote>]
Pull for updates: git bug pull [<remote>]
List existing bugs: git bug ls

Use commands like show, comment, open or close to display and modify bugs. For more details about each command, you can run git bug <command> --help or scan the command's documentation.

Features of git-bug

#1 Interactive user interface for the terminal
Use the git bug termui command to browse and edit bugs. A short video on the project page demonstrates how easy and interactive it is.

#2 Launch a rich web UI
Take a look at the web UI that you get with git bug webui. This web UI is entirely packed inside the same Go binary and serves static content through a localhost HTTP server. It connects to the backend through a GraphQL API; take a look at the schema for more clarity.

Planned features include media embedding, import/export of GitHub issues, an extendable data model to support arbitrary bug trackers, and an inflatable raptor.

Every new release is expected to come with exciting new features, but it is also coupled with a few minor constraints; you can check out some of the minor inconveniences listed on the GitHub page. We can't wait for the release to be in fully working condition. But before that, if you need any additional information on how git-bug works, head over to the GitHub page.

Snapchat source code leaked and posted to GitHub
GitHub open sources its GitHub Load Balancer (GLB) Director
Homebrew's Github repo got hacked in 30 mins. How can open source projects fight supply chain attacks?
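For readers who want to script the workflow, here is a minimal sketch, assuming git-bug is already installed, on your PATH, and run inside a git repository with a configured remote. It simply drives the same CLI commands listed above from Python via subprocess; it is not part of git-bug itself.

```python
import subprocess

def git_bug(*args):
    """Run a git-bug subcommand and return its stdout as text."""
    result = subprocess.run(
        ["git", "bug", *args],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# List existing bugs (non-interactive, safe to script).
print(git_bug("ls"))

# Pull bug updates from the default remote, then push local changes back.
git_bug("pull")
git_bug("push")
```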

Researchers reveal Light Commands: laser-based audio injection attacks on voice-control devices like Alexa, Siri and Google Assistant

Fatema Patrawala
06 Nov 2019
5 min read
Researchers from the University of Electro-Communications in Tokyo and the University of Michigan released a paper on Monday that raises alarming questions about the security of voice-control devices. In the paper, the researchers present ways in which they were able to manipulate Siri, Alexa, and other devices using "Light Commands", a vulnerability in MEMS (micro-electro-mechanical systems) microphones.

Light Commands was discovered in May this year. It allows attackers to remotely inject inaudible and invisible commands into voice assistants such as Google Assistant, Amazon Alexa, Facebook Portal, and Apple Siri using light. This vulnerability can become more dangerous as voice-control devices gain more popularity.

How Light Commands work

Consumers use voice-control devices for many applications, for example to unlock doors, make online purchases, and more with simple voice commands. The research team tested a handful of such devices and found that Light Commands can work on any smart speaker or phone that uses MEMS microphones. These systems contain tiny components that convert audio signals into electrical signals. By shining a laser through a window at the microphones inside smart speakers, tablets, or phones, a faraway attacker can remotely send inaudible and potentially invisible commands which are then acted upon by Alexa, Portal, Google Assistant, or Siri.

Many users do not enable voice authentication or passwords to protect their devices from unauthorized use. Hence, an attacker can use light-injected voice commands to unlock the victim's smart-lock-protected home doors, or even locate, unlock, and start various vehicles.

The researchers also showed that Light Commands can be executed at long distances. To prove this, they demonstrated the attack in a 110-meter hallway, the longest hallway available during the research, and captured several videos of the demonstration. (Source: Light Commands research paper — experimental setup for exploring attack range in the 110 m corridor.)

The Light Commands attack can be executed using a simple laser pointer, a laser driver, and a sound amplifier. A telephoto lens can be used to focus the laser for long-range attacks.

Detecting Light Commands attacks

The researchers also describe how one can detect whether a device is being attacked by Light Commands. Although command injection via light makes no sound, an attentive user can notice the attacker's light beam reflected on the target device. Alternatively, one can attempt to monitor the device's verbal response and light-pattern changes, both of which serve as command confirmation. The researchers add that, so far, they have not seen any case where the Light Commands attack has been maliciously exploited.

Limitations in executing the attack

Light Commands do have some limitations in execution:
Lasers must point directly at a specific component within the microphone to transmit audio information.
Attackers need a direct line of sight and a clear pathway for lasers to travel.
Most light signals are visible to the naked eye and would expose attackers. Also, voice-control devices respond out loud when activated, which could alert nearby people of foul play.
Controlling advanced lasers with precision requires a certain degree of experience and equipment, so there is a high barrier to entry for long-range attacks.
How to mitigate such attacks

In the paper, the researchers suggest adding an additional layer of authentication in voice assistants to mitigate the attack. They also suggest that manufacturers can use sensor-fusion techniques, such as acquiring audio from multiple microphones: when the attacker uses a single laser, only one microphone receives a signal while the others receive nothing, so manufacturers can attempt to detect such anomalies and ignore the injected commands.

Another proposed approach is reducing the amount of light reaching the microphone's diaphragm. This can be done with a barrier that physically blocks straight light beams, eliminating the line of sight to the diaphragm, or with a non-transparent cover on top of the microphone hole that reduces the amount of light hitting the microphone. However, the researchers also concede that such physical barriers are only effective up to a point, as an attacker can always increase the laser power in an attempt to pass through the barriers and create a new light path.

Users discuss the photoacoustic effect at play

On Hacker News, this research has gained much attention, with users applauding the researchers for the demonstration. Some discuss the price and features of the laser pointers and laser drivers needed to hack the voice assistants. Others discuss how such techniques come into play; one of them says:

"I think the photoacoustic effect is at play here. Discovered by Alexander Graham Bell, it has a variety of applications. It can be used to detect trace gases in gas mixtures at the parts-per-trillion level, among other things. An optical beam chopped at an audio frequency goes through a gas cell. If it is absorbed, there's a pressure wave at the chopping frequency proportional to the absorption. If not, there isn't. Synchronous detection (e.g. lock-in amplifiers) knocks out any signal not at the chopping frequency. You can see even tiny signals when there is no background. Hearing aid microphones make excellent and inexpensive detectors, so I think that the mics in modern phones would be comparable. Contrast this with standard methods where one passes a light beam through a cell into a detector, looking for a small change in a large signal. https://chem.libretexts.org/Bookshelves/Physical_and_Theoret... Hats off to the Michigan team for this very clever (and unnerving) demonstration."

Smart Spies attack: Alexa and Google Assistant can eavesdrop or vish (voice phish) unsuspecting users, disclose researchers from SRLabs
How Chaos Engineering can help predict and prevent cyber-attacks preemptively
An unpatched security issue in the Kubernetes API is vulnerable to a "billion laughs" attack
Intel's DDIO and RDMA enabled microprocessors vulnerable to new NetCAT attack
Wikipedia hit by massive DDoS (Distributed Denial of Service) attack; goes offline in many countries
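To make the sensor-fusion suggestion above concrete, here is a minimal illustrative sketch (not from the paper; the threshold, signal shapes, and sample data are assumptions) of how a device with several MEMS microphones could flag a command that arrives on only one channel:

```python
import numpy as np

def looks_like_light_injection(mic_signals, ratio_threshold=10.0):
    """Flag a suspected single-channel injection.

    mic_signals: 2D array, shape (n_mics, n_samples), one row per microphone.
    A laser aimed at one microphone produces energy on that channel only,
    whereas real speech reaches every microphone at comparable levels.
    """
    energy = np.sqrt(np.mean(np.square(mic_signals), axis=1))  # RMS per mic
    loudest, rest = energy.max(), np.delete(energy, energy.argmax())
    # Suspicious if one channel carries far more energy than all the others.
    return loudest > ratio_threshold * (rest.mean() + 1e-12)

# Toy demo with synthetic data: speech-like noise on all mics vs. one hot mic.
rng = np.random.default_rng(0)
speech = rng.normal(0, 1.0, size=(4, 16000))        # comparable on all mics
injected = rng.normal(0, 0.01, size=(4, 16000))
injected[2] += rng.normal(0, 1.0, size=16000)        # only mic 2 "hears" it

print(looks_like_light_injection(speech))    # False
print(looks_like_light_injection(injected))  # True
```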

Metasploit 5.0 released!

Savia Lobo
14 Jan 2019
3 min read
Last week, the Metasploit team announced the release of its fifth major version, Metasploit 5.0. This update introduces multiple new features, including Metasploit's new database and automation APIs, evasion modules and libraries, expanded language support, and improved performance. Metasploit 5.0 includes support for three different module languages: Go, Python, and Ruby.

What's new in Metasploit 5.0?

Database as a RESTful service
Metasploit 5.0 adds the ability to run the database by itself as a RESTful service on top of the existing PostgreSQL database backend from the 4.x versions. With this, multiple Metasploit consoles can easily interact with it. The change also offloads some bulk operations to the database service, which improves performance by allowing parallel processing of database and regular msfconsole operations.

New JSON-RPC API
The new API will be beneficial for users who want to integrate Metasploit with new tools and languages. Until now, Metasploit supported automation only via its own unique network protocol, which made it difficult to test or debug using standard tools like curl.

A new common web service framework
Metasploit 5.0 also adds a common web service framework to expose both the database and the automation APIs. This framework supports advanced authentication and concurrent operations and paves the way for future services.

New evasion modules and libraries
The Metasploit team announced a new evasion module type, along with a couple of example modules, in 2018. Using these module types, users can easily develop their own evasions, and a set of convenient libraries lets developers add new on-the-fly mutations to payloads. A recent module uses these evasion libraries to generate unique persistent services. With Metasploit 5.0's generation libraries, users can now write shellcode in C.

Running an exploit module against multiple targets
The ability to execute an exploit module against more than one target at a time was a long-requested feature. Previously, an exploit module was limited to one host at a time, which meant any attempt at mass exploitation required writing a script or manual interaction. With Metasploit 5.0, any module can target multiple hosts by setting RHOSTS to a range of IPs or by referencing a hosts file with the file:// option.

Improved search mechanism
Metasploit's slow search has been upgraded, and it now starts much faster out of the box. This means that searching for modules is always fast, regardless of how you use Metasploit. In addition, modules have gained a lot of new metadata capabilities.

New metashell feature
The new metashell feature allows users to background sessions with the background command, upload/download files, or even run resource scripts, all without needing to upgrade to a Meterpreter session first.

For backward compatibility, Metasploit 5.0 still supports running with just a local database, or with no database at all. It also supports the original MessagePack-based RPC protocol.

To know more about this news in detail, read the release notes on GitHub.

Weaponizing PowerShell with Metasploit and how to defend against PowerShell attacks [Tutorial]
Pentest tool in focus: Metasploit
Getting Started with Metasploitable2 and Kali Linux
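As an illustration of what the new automation surface enables, here is a minimal sketch of calling a JSON-RPC-style endpoint from Python. The URL, port, path, token header, and method name below are placeholders, not confirmed Metasploit values (consult the Metasploit 5.0 API documentation for the real ones); the point is simply that standard HTTP tooling now works where a custom protocol was needed before.

```python
import requests

# Placeholder values: adjust host, port, path, token, and method name to
# whatever your Metasploit 5.0 instance actually exposes.
API_URL = "https://127.0.0.1:5443/api/v1/json-rpc"   # hypothetical endpoint
API_TOKEN = "your-api-token-here"                     # hypothetical token

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "health.check",   # hypothetical method name, for illustration
    "params": [],
}

response = requests.post(
    API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
    verify=False,  # self-signed certificates are common on local instances
)
print(response.json())
```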

An unsecured Elasticsearch server exposed 1.2 billion user records containing their personal and social information

Bhagyashree R
26 Nov 2019
4 min read
Last month, Vinny Troia, the founder of Data Viper, and Bob Diachenko, an independent cybersecurity consultant, discovered a "wide-open" Elasticsearch server. The server exposed the personal information of about 1.2 billion unique users, including their names, email addresses, phone numbers, and LinkedIn and Facebook profile information. The Elasticsearch server did not have any kind of authentication whatsoever and was accessible via a web browser. "No password or authentication of any kind was needed to access or download all of the data," the report adds.

What the investigation of the Elasticsearch server revealed

Troia and Diachenko came across the Elasticsearch server while looking for exposures on the web scanning services BinaryEdge and Shodan. Upon further investigation, the researchers speculated that the data originated from two different data enrichment companies: People Data Labs and OxyData.io. Data enrichment, as the name suggests, is the process of enhancing existing raw data to make it useful for businesses. Data enrichment companies can provide access to large stores of data merged from multiple third-party sources, which enables businesses to gain deeper insights into their current and potential customers.

Elasticsearch stores its data in an index, which is similar to a 'database' in a relational database. The researchers found that the majority of the data spanned four separate data indexes, labeled "PDL" and "OXY", and that each user record carried a "source" field matching either PDL or OXY, respectively. After the researchers de-duplicated the nearly 3 billion user records in the PDL indexes, they found roughly 1.2 billion unique people and 650 million unique email addresses. These numbers matched the statistics provided by the company on its website.

The data within the three PDL indexes included slightly varied information. While some records focused on scraped LinkedIn information, email addresses, and phone numbers, others included information on individual social media profiles such as a person's Facebook, Twitter, and GitHub URLs. After analyzing the data under the OXY index, the researchers found a scrape of LinkedIn data, including recruiter information.

What made the case confusing was that the Elasticsearch server was hosted on Google Cloud Services, while People Data Labs appears to be using Amazon Web Services. When contacted about the Elasticsearch server, both companies denied that the server belonged to them. In an interview with Wired, PDL co-founder Sean Thorne said, "The owner of this server likely used one of our enrichment products, along with a number of other data-enrichment or licensing services. Once a customer receives data from us, or any other data providers, the data is on their servers and the security is their responsibility. We perform free security audits, consultations, and workshops with the majority of our customers."

This news sparked a discussion on Hacker News. While some users were stunned by the sheer negligence of leaving the Elasticsearch server wide open, others questioned the core business model of these companies. A user commented, "It has to exist on a private network behind a firewall with ports open to application servers and other es nodes only. Running things on a public IP address is a choice that should not be taken lightly.
Clustering over the public internet is not a thing with Elasticsearch (or similar products)."

"It's a tragedy that all of this data was available to anyone in a public database instead of.... checks notes... available to anyone who was willing to sign up for a free account that allowed them 1,000 queries. It seems like PDL's core business model is irresponsible regarding their stewardship of the data they've harvested," another user added.

Read the full report on Data Viper's official website.

Adobe confirms security vulnerability in one of their Elasticsearch servers that exposed 7.5 million Creative Cloud accounts
Following Capital One data breach, GitHub gets sued and AWS security questioned by a U.S. Senator
US Customs and Border Protection reveal data breach that exposed thousands of traveler photos and license plate images
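To see why an unauthenticated Elasticsearch node is so dangerous, here is a minimal sketch using Elasticsearch's standard REST endpoints (_cat/indices and _search); the address and index name are placeholders, and the point is that no credentials are involved at any step, which is why such a node must sit behind a firewall or an authenticating proxy.

```python
import requests

# Placeholder address: any Elasticsearch node exposed without authentication.
BASE = "http://203.0.113.10:9200"   # hypothetical IP from the documentation range

# Enumerate indexes via the standard _cat API; no password is ever requested.
indices = requests.get(f"{BASE}/_cat/indices?format=json", timeout=10).json()
for idx in indices:
    print(idx["index"], idx["docs.count"])

# Pull a handful of documents from one index, again with no credentials.
sample = requests.get(f"{BASE}/some-index/_search?size=3", timeout=10).json()
for hit in sample["hits"]["hits"]:
    print(hit["_source"])
```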

NordVPN reveals it was affected by a data breach in 2018

Savia Lobo
22 Oct 2019
3 min read
NordVPN, a popular virtual private network provider, has revealed that it was subject to a data breach in 2018. The breach came to light a few months ago when an expired internal security key was exposed, allowing anyone outside the company unauthorized access. NordVPN did not inform users at the time because it wanted to be "100 percent sure that each component within our infrastructure is secure."

Details of the breach trace back to March 2018, when one of the data centers in Finland from which NordVPN rents servers showed signs of unauthorized access. The attacker gained access to the server by exploiting an insecure remote management system left in place by the provider. In a press release, NordVPN explained that "only 1 of more than 3000 servers we had at the time was affected," and said the company immediately terminated its contract with the data center provider after it learned of the hack. Even though the company had intrusion detection systems installed to catch data breaches, it could not anticipate a remote management system left behind by the data center provider; NordVPN says it was unaware that such a system existed.

The company also said, "We are taking all the necessary means to enhance our security. We have undergone an application security audit, are working on a second no-logs audit right now, and are preparing a bug bounty program." It further added, "We will give our all to maximize the security of every aspect of our service, and next year we will launch an independent external audit ... of our infrastructure to make sure we did not miss anything else."

NordVPN said that the attacker did not gain access to activity logs, user credentials, or any other sensitive information. NordVPN maintains what it says is a strict "zero logs" policy: "We don't track, collect, or share your private data," the company says on its website. In a statement to TechCrunch, NordVPN spokesperson Laura Tyrell said, "The server itself did not contain any user activity logs; none of our applications send user-created credentials for authentication, so usernames and passwords couldn't have been intercepted either." She further added, "On the same note, the only possible way to abuse the website traffic was by performing a personalized and complicated man-in-the-middle attack to intercept a single connection that tried to access NordVPN."

Based on a few records posted online, other VPN providers such as TorGuard and VikingVPN may have also been compromised. A spokesperson for TorGuard told TechCrunch that a "single server" was compromised in 2017 but denied that any VPN traffic was accessed.

Users are furious that NordVPN did not inform them in time.
https://twitter.com/figalmighty/status/1186566775330066432
https://twitter.com/bleepsec/status/1186557192549404672

To know more about this news in detail, you can read NordVPN's complete press release.

DoorDash data breach leaks personal details of 4.9 million customers, workers, and merchants
StockX confirms a data breach impacting 6.8 million customers
Following Capital One data breach, GitHub gets sued and AWS security questioned by a U.S. Senator

Using deep learning methods to detect malware in Android Applications

Savia Lobo
10 Jan 2019
5 min read
Researchers from North China Electric Power University recently published a paper titled 'A Review on The Use of Deep Learning in Android Malware Detection'. The researchers highlight the fact that Android applications can be built not only by legitimate developers but also by malware authors with criminal intent, who design and spread malicious applications that disrupt the normal operation of Android phones and tablets, steal personal information and credential data, or, even worse, lock the phone and ask for ransom. In this paper, they explain how deep learning methods can be used as a countermeasure in Android malware detection.

Android malware detection techniques

The researchers note that one critical characteristic of mobile phones is that they are sensor-based, event-driven systems, which permits malware to respond to incoming SMS messages, position changes, and so forth, increasing the sophistication required of automated malware-analysis techniques. Moreover, apps can use services and activities and integrate varied programming languages (e.g. Java and C++) in one application. Each application is analyzed in the following stages:

Static analysis
Static analysis screens parts of the application without actually executing them. It incorporates signature-based, permission-based, and component-based analysis. The signature-based strategy extracts features and creates distinctive signatures to identify specific malware; hence, it falls short in recognizing variants or unidentified malware. The permission-based strategy inspects permission requests to distinguish malware. Component-based techniques decompile the app to extract and inspect the definitions and byte-code connections of significant components (i.e. activities, services, etc.) in order to identify exposures. The principal drawbacks of static analysis are the lack of real execution paths and suitable execution conditions.

Dynamic analysis
This technique involves executing the application on either a virtual machine or a physical device, which yields a less abstract view of the application than static analysis. The code paths executed during runtime are a subset of all available paths. The principal objective of the analysis is to achieve high code coverage, since every feasible event ought to be triggered to observe any possible malicious behavior.

Hybrid analysis
The hybrid analysis technique consolidates static features with dynamic features gathered by examining the application while it is running. This can boost the accuracy of detection, but its principal drawback is that it consumes Android system resources and takes a long time to perform.

Use of deep learning in Android malware detection

Currently available machine learning has several weaknesses, and open issues related to the use of deep learning in Android malware detection include:
Deep learning lacks transparency to provide an interpretation of the decisions created by its methods, while malware analysts need to understand how a decision was made.
There is no assurance that classification models built with deep learning will perform well in different conditions, on new data that does not match previous training data.
Deep learning studies complex correlations between input and output features with no innate representation of causality.
Deep learning models are not autonomous and need continual retraining and rigorous parameter adjustments.
Beyond these weaknesses, deep learning models in the training phase are subject to data poisoning attacks, which are implemented simply by manipulating the training data so that a model learns to commit errors. In the testing phase, the models are exposed to several attack types, including:

Adversarial attacks: the model's inputs are ones an adversary has crafted deliberately to cause the model to make mistakes.
Evasion attack: the intruder exploits malevolent instances at test time to have them incorrectly classified as benign by a trained classifier, without influencing the training data. This can breach system integrity, either with a targeted or with an indiscriminate attack.
Impersonate attack: this attack mimics data instances from targets. The attacker crafts particular adversarial instances such that current deep learning-based models mistakenly assign original instances labels different from the imitated ones.
Inversion attack: this attack uses the APIs exposed by machine learning systems to gather fundamental data about the target system's models. It is divided into two types: the white-box attack, in which an aggressor can freely access and download learning models and other supporting data, and the black-box attack, in which the aggressor only knows the APIs exposed by the learning models and the outputs observed after providing inputs.

According to the researchers, hardening deep learning models against different adversarial attacks, and detecting, describing, and measuring concept drift, are vital directions for future work in Android malware detection. They also note that the limitations of deep learning methods, such as the lack of transparency and the need for constant retraining, must be addressed to build more efficient models. To know more about this research in detail, read the research paper.

Researchers introduce a deep learning method that converts mono audio recordings into 3D sounds using video scenes
IEEE Computer Society predicts top ten tech trends for 2019: assisted transportation, chatbots, and deep learning accelerators among others
Stanford researchers introduce DeepSolar, a deep learning framework that mapped every solar panel in the US
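To make the permission-based static analysis described above concrete, here is a minimal, hedged sketch: the permission vocabulary, labels, and training data are entirely synthetic, and a real system would extract permissions from actual APK manifests and train on a large labeled corpus. It only shows how requested permissions become a feature vector fed to a small neural network classifier.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hypothetical permission vocabulary; a real pipeline would build this from
# the AndroidManifest.xml of each APK in the training corpus.
PERMISSIONS = ["INTERNET", "READ_SMS", "SEND_SMS", "READ_CONTACTS",
               "ACCESS_FINE_LOCATION", "RECEIVE_BOOT_COMPLETED"]

def to_features(requested):
    """Binary vector: 1 if the app requests the permission, else 0."""
    return np.array([1 if p in requested else 0 for p in PERMISSIONS])

# Synthetic labeled samples purely for illustration (1 = malicious).
X = np.array([
    to_features({"INTERNET"}),
    to_features({"INTERNET", "ACCESS_FINE_LOCATION"}),
    to_features({"READ_SMS", "SEND_SMS", "RECEIVE_BOOT_COMPLETED"}),
    to_features({"READ_CONTACTS", "SEND_SMS", "INTERNET"}),
])
y = np.array([0, 0, 1, 1])

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
clf.fit(X, y)

# Score a new, unseen app by the permissions it requests.
print(clf.predict([to_features({"SEND_SMS", "READ_SMS", "INTERNET"})]))
```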

Google’s Project Zero reveals a “High severity” copy-on-write security flaw found in macOS kernel

Savia Lobo
04 Mar 2019
3 min read
A security researcher from Google's Project Zero team recently revealed a high-severity flaw in the macOS kernel's copy-on-write (COW) behavior, a resource-management technique also referred to as shadowing. The researcher informed Apple about the flaw back in November 2018, but the company has yet to fix it even after exceeding the 90-day disclosure deadline, which is why the bug is now being made public with a "high severity" label.

According to a post on Monorail, the issue-tracking tool for Chromium-related projects, "The copy-on-write behavior works not only with anonymous memory but also with file mappings. This means that, after the destination process has started reading from the transferred memory area, memory pressure can cause the pages holding the transferred memory to be evicted from the page cache. Later, when the evicted pages are needed again, they can be reloaded from the backing filesystem."

"This means that if an attacker can mutate an on-disk file without informing the virtual management subsystem, this is a security bug. MacOS permits normal users to mount filesystem images. When a mounted filesystem image is mutated directly (e.g. by calling pwrite() on the filesystem image), this information is not propagated into the mounted filesystem," the post further reads.

According to a Google project member, "We've been in contact with Apple regarding this issue, and at this point no fix is available. Apple is intending to resolve this issue in a future release, and we're working together to assess the options for a patch. We'll update this issue tracker entry once we have more details."

A user commented on Hacker News, "Given the requirements that a secondary process should even be able to modify a file that is already open, I guess the expected behavior is that the 1st process's version should remain cached in memory while allowing the on-disk (CoW) version to be updated? While also informing the 1st process of the update and allowing the 1st process to reload/reopen the file if it chooses to do so. If this is the intended/expected behavior, then it follows that pwrite() and other syscalls should inform the kernel and prevent the original cache from being flushed."

To know more about this news, head over to the bug issue post.

Drupal releases security advisory for 'serious' Remote Code Execution vulnerability
Google's home security system, Nest Secure's had a hidden microphone; Google says it was an "error"
Firedome's 'Endpoint Protection' solution for improved IoT security
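To see what a copy-on-write file mapping looks like in practice, here is a minimal sketch of generic POSIX behavior demonstrated from Python; it is not the macOS exploit itself, and the file path is a placeholder. It maps a file privately and then mutates the underlying file with pwrite(), which is the kind of "change behind the memory subsystem's back" the Project Zero report is about.

```python
import mmap, os

path = "/tmp/cow-demo.bin"
with open(path, "wb") as f:
    f.write(b"A" * mmap.PAGESIZE)

fd = os.open(path, os.O_RDWR)
# MAP_PRIVATE requests a copy-on-write file mapping: pages are shared with
# the page cache until the mapping itself is written to.
view = mmap.mmap(fd, mmap.PAGESIZE, flags=mmap.MAP_PRIVATE,
                 prot=mmap.PROT_READ)
print("mapped contents before:", view[:4])

# Mutate the on-disk file behind the mapping's back.
os.pwrite(fd, b"B" * mmap.PAGESIZE, 0)

# POSIX leaves it unspecified whether a private mapping observes this write;
# the macOS bug is that a directly mutated filesystem image changes what gets
# reloaded later without the virtual memory subsystem ever being told.
print("mapped contents after pwrite:", view[:4])

view.close()
os.close(fd)
```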

Kali Linux 2018.1 released

Savia Lobo
04 Apr 2018
2 min read
Kali Linux 2018.1, the first of several Kali Linux releases planned for this year, is now available. This release contains all the updates and bug fixes since version 2017.3, released in November 2017. The 2018.1 version is powered by the new Linux 4.14.12 kernel, which brings added support for newer hardware and improved performance, so ethical hackers and penetration testers can use Kali more efficiently to enhance security.

The release also supports two notable kernel features:
AMD Secure Memory Encryption, a new feature in AMD processors that enables automatic encryption and decryption of DRAM. With this feature, systems will no longer be vulnerable to cold-boot attacks because, even with physical access, the memory will not be readable.
Increased memory limits: this release includes support for 5-level paging, a feature of upcoming processors that will support 4 PB (petabytes) of physical memory and 128 PB of virtual memory.

Several packages, including zaproxy, secure-socket-funneling, pixiewps, seclists, burpsuite, dbeaver, and reaver, have been updated in Kali 2018.1. Also, for those using Hyper-V to run the Kali virtual machines provided by Offensive Security, the Hyper-V virtual machine is now generation 2. This means the Hyper-V VM is now UEFI-based and supports expanding/shrinking of the HDD. Generation 2 also includes Hyper-V integration services, which support Dynamic Memory, Network Monitoring/Scaling, and Replication.

Know more about Kali's latest release on the Kali Linux Blog.

DARPA on the hunt to catch deepfakes with its AI forensic tools underway

Natasha Mathur
08 Aug 2018
5 min read
The U.S. Defense Advanced Research Projects Agency (DARPA) has come out with AI-based forensic tools to catch deepfakes, as first reported by MIT Technology Review yesterday. According to MIT Technology Review, more tools are currently under development to expose fake images and revenge-porn videos on the web. DARPA's deepfake mission project was announced earlier this year. (Image: Alec Baldwin on Saturday Night Live, face-swapped with Donald Trump.)

As mentioned in the MediFor blog post, "While many manipulations are benign, performed for fun or for artistic value, others are for adversarial purposes, such as propaganda or misinformation campaigns." This is one of the major reasons why DARPA's forensics experts are keen on finding methods to detect deepfake videos and images.

How did deepfakes originate?

Back in December 2017, a Reddit user named "DeepFakes" posted extremely real-looking explicit videos of celebrities, using deep learning techniques to insert celebrities' faces into adult movies. With deep learning, one can combine and superimpose existing images and videos onto original images or videos to create realistic-seeming fake videos. As per MIT Technology Review, "Video forgeries are done using a machine-learning technique -- generative modeling -- [which] lets a computer learn from real data before producing fake examples that are statistically similar." Video tampering is done using two neural networks -- generative adversarial networks -- which work in conjunction "to produce ever more convincing fakes."

Why are deepfakes toxic?

An app named FakeApp was released earlier this year which made creating deepfakes quite easy. FakeApp uses neural networking tools developed by Google's AI division and trains itself to perform image-recognition tasks using trial and error. Since its release, the app has been downloaded more than 120,000 times, and there are tutorials online on how to create deepfakes. Apart from this, there are regular requests on deepfake forums asking users for help in creating face-swap porn videos of ex-girlfriends, classmates, politicians, celebrities, and teachers. Deepfakes can even be used to create fake news, such as world leaders declaring war on a country. The toxic potential of this technology has led to growing concern, as deepfakes have become a powerful tool for harassing people. Once deepfakes found their way onto the web, websites such as Twitter and PornHub banned them from their platforms. Reddit also announced a ban on deepfakes earlier this year, entirely killing the "deepfakes" subreddit, which had more than 90,000 subscribers.

MediFor: DARPA's AI weapon to counter deepfakes

DARPA's Media Forensics group, known as MediFor, works along with other researchers to develop AI tools against deepfakes. It is currently focusing on four techniques to catch the audiovisual discrepancies present in a forged video: analyzing lip sync, detecting speaker inconsistency, detecting scene inconsistency, and spotting content insertions.

One technique comes from a team led by Professor Siwei Lyu of SUNY Albany. Lyu mentioned that they "generated about 50 fake videos and tried a bunch of traditional forensics methods. They worked on and off, but not very well." Because deepfakes are created from static images, Lyu noticed that the faces in deepfake videos rarely blink, and that eye movement, if present, is quite unnatural.
An academic paper titled "In Ictu Oculi: Exposing AI Generated Fake Face Videos by Detecting Eye Blinking," by Yuezun Li, Ming-Ching Chang and Siwei Lyu, explains a method to detect forged videos. It makes use of Long-term Recurrent Convolutional Networks (LRCN). According to the paper, people on average blink about 17 times a minute, or 0.283 times per second; this rate increases during conversation and decreases while reading.

There are other techniques used for eye-blink detection, such as detecting the eye state by computing the vertical distance between eyelids, measuring the eye aspect ratio (EAR), and using a convolutional neural network (CNN) to detect open and closed eye states. But Li, Chang, and Lyu take a different approach: they rely on a Long-term Recurrent Convolutional Network (LRCN) model. They first perform pre-processing to identify facial features and normalize the video frame orientation, then pass cropped eye images into the LRCN for evaluation. The technique is quite effective and compares favorably with other approaches, with a reported accuracy of 0.99 (LRCN) versus 0.98 (CNN) and 0.79 (EAR).

However, Lyu says that a skilled video editor can fix the non-blinking deepfakes by using images that show blinking eyes. Lyu's team has another, as-yet-undisclosed technique in the works to counter even that. Others at DARPA are on the lookout for similar cues such as strange head movements and odd eye color; these small details are bringing the team ever closer to reliable detection of deepfakes.

As mentioned in the MIT Technology Review post, "the arrival of these forensics tools may simply signal the beginning of an AI-powered arms race between video forgers and digital sleuths." MediFor states that "If successful, the MediFor platform will automatically detect manipulations, provide detailed information about how these manipulations were performed, and reason about the overall integrity of visual media to facilitate decisions regarding the use of any questionable image or video."

Deepfakes need to stop, and DARPA seems all set to fight against them.

Twitter allegedly deleted 70 million fake accounts in an attempt to curb fake news
A new WPA/WPA2 security attack in town: Wi-fi routers watch out!
YouTube has a $25 million plan to counter fake news and misinformation
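The eye aspect ratio mentioned above has a simple closed form: given six landmark points p1..p6 around an eye, EAR = (|p2−p6| + |p3−p5|) / (2·|p1−p4|), and it drops sharply when the eye closes. Here is a minimal sketch (the landmark coordinates are synthetic, and the threshold is a commonly used value, not one taken from the paper; a real detector would obtain landmarks from a facial-landmark model) of computing it and flagging a blink:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: array of six (x, y) landmarks ordered p1..p6 around one eye."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

BLINK_THRESHOLD = 0.2   # typical cut-off; tune per dataset

# Synthetic landmarks for an open and a nearly closed eye (illustration only).
open_eye = np.array([[0, 0], [1, 1.0], [2, 1.0], [3, 0], [2, -1.0], [1, -1.0]], float)
closed_eye = np.array([[0, 0], [1, 0.1], [2, 0.1], [3, 0], [2, -0.1], [1, -0.1]], float)

for name, eye in [("open", open_eye), ("closed", closed_eye)]:
    ear = eye_aspect_ratio(eye)
    print(f"{name}: EAR={ear:.2f}, blink={ear < BLINK_THRESHOLD}")
```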

Smart Spies attack: Alexa and Google Assistant can eavesdrop or vish (voice phish) unsuspecting users, disclose researchers from SRLabs

Sugandha Lahoti
22 Oct 2019
3 min read
In a new study, security researchers from SRLabs have exposed a serious vulnerability, dubbed the Smart Spies attack, in smart speakers from Amazon and Google. According to SRLabs, smart speaker voice apps — Skills for Alexa and Actions on Google Home — can be abused to eavesdrop on users or vish (voice-phish) their passwords. The researchers demonstrated that with the Smart Spies attack they can get these smart speakers to silently record users or ask for their Google account passwords by simply uploading malicious software disguised as an Alexa Skill or Google Action.

The SRLabs team added the unpronounceable "�. " (U+D801, dot, space) character sequence at various locations inside the backend of a normal Alexa/Google Home app. They tell a user that an app has failed, insert the "�. " sequence to induce a long pause, and then prompt the user with a phishing message after a few minutes. This tricks users into believing the phishing message has nothing to do with the app they previously interacted with. Using this sequence, the voice assistants keep listening for further commands much longer than usual, and anything the user says is automatically transcribed and can be sent directly to the attacker.

This revelation is unsurprising considering Alexa and Google Home have been caught phishing and eavesdropping before. In June of this year, two lawsuits were filed in Seattle alleging that Amazon records voiceprints of children using its Alexa devices without their consent. Later, Amazon employees were found listening to Echo audio recordings, followed by Google's language experts doing the same.

SRLabs researchers urge users to be more aware of the Smart Spies attack and of the potential for malicious voice apps that abuse their smart speakers. They caution users to pay closer attention to third-party app sources when installing a new voice app on their speakers.

Measures suggested to Google and Amazon to avoid the Smart Spies attack

Amazon and Google need to implement better protection, starting with a more thorough review process of third-party Skills and Actions made available in their voice app stores. The voice app review needs to check explicitly for copies of built-in intents.
Unpronounceable characters like "�. " and silent SSML messages should be removed to prevent arbitrarily long pauses in the speakers' output.
Suspicious output texts including "password" deserve particular attention or should be disallowed completely.

In a statement provided to Ars Technica, Amazon said it has put new mitigations in place to prevent and detect skills from being able to do this kind of thing in the future, and that it takes down skills whenever this kind of behavior is identified. Google also told Ars Technica that it has review processes to detect this kind of behavior and has removed the Actions created by the security researchers. The company is conducting an internal review of all third-party Actions and has temporarily disabled some Actions while this takes place.

On Twitter, people condemned Google and Amazon and cautioned others not to buy their smart speakers.
https://twitter.com/ClaudeRdCardiff/status/1186577801459187712
https://twitter.com/Jake_Hanrahan/status/1186082128095825920

For more information, read the blog post on the Smart Spies attack by SRLabs.
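The trick boils down to making the assistant emit a long stretch of silence so the user thinks the app has exited while it is actually still listening or about to deliver a phishing prompt. As a purely illustrative sketch — the payload below is a generic SSML fragment assembled as a plain string, not SRLabs' actual skill code, and break durations are assumptions — this is roughly what building such a "silent" response looks like:

```python
# Illustrative only: shows how a long silent window can be assembled from
# standard SSML break tags; SRLabs' proof of concept instead relied on an
# unpronounceable character sequence inside the skill's backend responses.
SILENT_CHUNK = '<break time="10s"/>'          # ten seconds of silence per tag
long_pause = SILENT_CHUNK * 30                # roughly five minutes of "nothing"

phishing_prompt = (
    "An important security update is available for your device. "
    "Please say 'start update' followed by your password."
)

ssml_response = f"<speak>There was an error. Goodbye.{long_pause}{phishing_prompt}</speak>"
print(ssml_response[:80] + "...")
```

This is exactly why the researchers recommend that store reviews strip silent SSML and unpronounceable characters, and flag any output text containing "password".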
Google's language experts are listening to some recordings from its AI assistant
Amazon's partnership with NHS to make Alexa offer medical advice raises privacy concerns and public backlash
Amazon is being sued for recording children's voices through Alexa without consent

.NET Core releases May 2019 updates

Amrata Joshi
15 May 2019
3 min read
This month, during Microsoft Build 2019, the team behind .NET Core announced that .NET 5 will be coming in 2020. Yesterday, the .NET Core team released the .NET Core May 2019 updates for versions 1.0.16, 1.1.14, 2.1.11 and 2.2.5. The updates include security and reliability fixes as well as updated packages.

Security fixes in this update

.NET Core tampering vulnerability (CVE-2019-0820)
A denial-of-service vulnerability exists when .NET Core improperly processes RegEx strings. An attacker who successfully exploits this vulnerability can cause a denial of service against a .NET application; even a remote, unauthenticated attacker can exploit it by issuing specially crafted requests to a .NET Core application. The update addresses this vulnerability by correcting how .NET Core applications handle RegEx string processing. The security advisory covers .NET Core 1.0, 1.1, 2.1 and 2.2.

Denial-of-service vulnerabilities in .NET Core and ASP.NET Core (CVE-2019-0980 and CVE-2019-0981)
A denial-of-service vulnerability exists when .NET Core and ASP.NET Core improperly handle web requests. An attacker who successfully exploits this vulnerability can cause a denial of service against a .NET Core or ASP.NET Core application. The vulnerability can be exploited remotely and without authentication by issuing specially crafted requests. The update addresses this by correcting how .NET Core and ASP.NET Core web applications handle web requests. The security advisory covers these two vulnerabilities in .NET Core and ASP.NET Core 1.0, 1.1, 2.1, and 2.2.

ASP.NET Core denial-of-service vulnerability (CVE-2019-0982)
A denial-of-service vulnerability exists when ASP.NET Core improperly handles web requests. An attacker who successfully exploits this vulnerability can cause a denial of service against an ASP.NET Core web application, remotely and without authentication, by issuing specially crafted requests. The update addresses this by correcting how ASP.NET Core web applications handle web requests. The security advisory covers ASP.NET Core 2.1 and 2.2.

Docker images

The .NET Docker images have been updated, as have the microsoft/dotnet, microsoft/dotnet-samples, and microsoft/aspnetcore repos.

Users can get the latest .NET Core updates on the .NET Core download page. To know more about this news, check out the official announcement.

.NET 5 arriving in 2020!
Docker announces collaboration with Microsoft's .NET at DockerCon 2019
.NET for Apache Spark Preview is out now!
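The RegEx advisory follows the classic regular-expression denial-of-service (ReDoS) pattern: a pattern with nested quantifiers backtracks exponentially on inputs that almost match. The sketch below is a generic illustration in Python, not the undisclosed .NET-specific pattern, of why a single crafted string can pin a CPU:

```python
import re
import time

# Nested quantifiers force catastrophic backtracking when the match fails.
evil_pattern = re.compile(r"^(a+)+$")

for n in (16, 18, 20, 22):
    crafted = "a" * n + "!"          # almost matches, then fails at the end
    start = time.perf_counter()
    evil_pattern.match(crafted)
    elapsed = time.perf_counter() - start
    # Each extra 'a' roughly doubles the work: exponential blow-up.
    print(f"n={n}: {elapsed:.3f}s")
```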

Vevo’s YouTube account Hacked: Popular videos deleted

Vijin Boricha
12 Apr 2018
2 min read
In this ever-growing technology era, one has to ensure the data they put on the internet is in safe hands. No matter which platform you use to share data, there is always a risk of it being misused. Recently, a group of hackers managed to breach Vevo's YouTube channel, taking down its most-watched videos. The security breach alarmed a lot of viewers, who saw something unexpected when searching for popular music videos like 'Despacito'. The hackers not only took down these videos but also replaced them with a different thumbnail and video title. Apparently, the thumbnail used was of a masked gang with guns, taken from the Netflix show Casa de Papel, and the video titles contained the hackers' nicknames (Prosox and Kuroi'sh).

Immediately after the news spread like wildfire, YouTube clarified that it was Vevo that was hacked and not YouTube. Vevo is owned by the big three record companies in the United States: Warner Music Group, Universal Music Group, and Sony Music Entertainment. Vevo only hosts music videos from artists signed to Sony Music Entertainment and Universal Music Group, and those are published on YouTube.

YouTube also pointed out a big difference between YouTube and Vevo: anyone with a Google account can upload a video to YouTube, but this isn't the case for Vevo. Vevo is managed by administrators responsible for uploading videos to the website and the Vevo YouTube channel, which means only authorized personnel have access to Vevo's platform as broadcast on YouTube; these personnel do not have access to the rest of YouTube. It was Vevo's servers that were hacked, as all the affected videos came from that server. Since the attack targeted specific music artists, it is still unclear whether the hackers got in through individual artist accounts or achieved a wider breach of Vevo accounts. So far, only one hacker has claimed that they used scripts to alter video titles.

Vevo has already started fixing its security gaps and claims that the affected videos and catalog have been restored to full working order. It is also currently investigating the source of the breach. You can learn more about this developing story, originally reported by the BBC.

Check out other latest news:
Cryptojacking is a growing cybersecurity threat, report warns
Top 5 cloud security threats to look out for in 2018

Homebrew's Github repo got hacked in 30 mins. How can open source projects fight supply chain attacks?

Savia Lobo
14 Aug 2018
5 min read
On 31st July 2018, Eric Holmes, a security researcher, gained access to Homebrew's GitHub repo with surprising ease (he documents his experience in an in-depth Medium post). Homebrew is a free and open-source software package management system, home to well-known packages like node, git, and many more, that simplifies the installation of software on macOS. Eric gained git push access to Homebrew/brew and Homebrew/homebrew-core, and was able to make his first commit to Homebrew's GitHub repo within 30 minutes.

Attack = higher chances of obtaining user credentials

After getting easy access to Homebrew's GitHub repositories, Eric's prime motive was to uncover user credentials of some members of the Homebrew GitHub org. For this, he made use of an OSINT tool by Michael Henriksen called gitrob, which automates the credential search, but he could not find anything interesting. Next, he explored Homebrew's previously disclosed issues on https://hackerone.com/Homebrew, which led him to the observation that Homebrew runs a Jenkins instance that is (intentionally) publicly exposed at https://jenkins.brew.sh.

Digging further, Eric noticed that the builds in the "Homebrew Bottles" project were making authenticated pushes to the BrewTestBot/homebrew-core repo, which led him to an exposed GitHub API token. The token opened commit access to these core Homebrew repos:
Homebrew/brew
Homebrew/homebrew-core
Homebrew/formulae.brew.sh

Eric stated in his post, "If I were a malicious actor, I could have made a small, likely unnoticed change to the openssl formulae, placing a backdoor on any machine that installed it." Via such a backdoor, intruders could have gained access to private company networks that use Homebrew, which could lead to a data breach on a large scale.

Eric reported the issue to Homebrew developer Mike McQuaid, who publicly disclosed it at https://brew.sh/2018/08/05/security-incident-disclosure/. Within a few hours the credentials had been revoked, replaced, and sanitised within Jenkins so they would not be revealed in future. Homebrew/brew and Homebrew/homebrew-core were updated so that non-administrators on those repositories cannot push directly to master. The Homebrew team also worked with GitHub to audit the access token and ensure it wasn't used maliciously and didn't make any unexpected commits to the core Homebrew repos.

As an ethical hacker, Eric reported the vulnerabilities he found to the Homebrew team and did no harm to the repo itself. But not all projects may have such happy endings.

How can one safeguard their systems from supply chain attacks?

The precautions Eric Holmes took were creditable: he informed the Homebrew developers. However, not every hacker has good intentions, and it is an organization's responsibility to keep a check on all the supply chains it depends on.

Keep a check on all the libraries
One should not allow random libraries into the supply chain, because it is difficult to partition libraries from an organization's custom code; both run with the same privileges, putting the company's security at risk. Make sure to set policies around the code the company is willing to allow: only projects with high popularity, active committers, and evidence of process should be admitted.

Establish guidelines
Each company should create guidelines for the secure use of the libraries it selects.
For this, define up front what each library is expected to be used for, show developers how to safely install, configure, and use each library within their code, and identify dangerous methods and how to use them safely.

Maintain a thorough inventory
Every organization should keep track of which open source libraries it is using, and set up notifications so it learns promptly which new vulnerabilities affect its applications and servers.

Protect at runtime
Organizations should also make use of runtime application self-protection (RASP) to prevent both known and unknown library vulnerabilities from being exploited. If new vulnerabilities are noticed, a RASP infrastructure enables one to respond in minutes.

The software supply chain is an essential part of creating and deploying applications quickly; hence, one should take great care to avoid any misuse via this channel. Read the detailed story of Homebrew's narrow escape on its blog post, and Eric's firsthand account of how he planned the attack and the motivation behind it on his Medium post.

DCLeaks and Guccifer 2.0: Hackers used social engineering to manipulate the 2016 U.S. elections
Twitter allegedly deleted 70 million fake accounts in an attempt to curb fake news
YouTube has a $25 million plan to counter fake news and misinformation
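When a GitHub token leaks the way Homebrew's did, the first question is what it can actually do. Here is a minimal sketch (the token value is a placeholder) that checks a token's identity and granted scopes using GitHub's documented REST API, which reports the scopes of a classic personal access token in the X-OAuth-Scopes response header:

```python
import requests

TOKEN = "ghp_replace_with_the_token_under_review"   # placeholder value

resp = requests.get(
    "https://api.github.com/user",
    headers={"Authorization": f"token {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()

print("Token belongs to:", resp.json().get("login"))
# GitHub lists the OAuth scopes granted to the token in a response header.
print("Granted scopes:", resp.headers.get("X-OAuth-Scopes"))
# Anything like 'repo' or 'public_repo' here means the token can push code.
```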

A universal bypass tricks Cylance AI antivirus into accepting all top 10 Malware revealing a new attack surface for machine learning based security

Sugandha Lahoti
19 Jul 2019
4 min read
Researchers from Skylight Cyber, an Australian cybersecurity firm, have tricked BlackBerry Cylance's AI-based antivirus product. They identified a peculiar bias of the product towards a specific game engine and exploited it to trick the product into accepting malicious files. The discovery suggests that companies working on artificial-intelligence-driven cybersecurity need to rethink how they build their products. The bypass is not necessarily limited to Cylance; the researchers chose it because it is a leading vendor in the field and its product is publicly available. The researchers, Adi Ashkenazy and Shahar Zini of Skylight Cyber, say they can reverse the model of any AI-based EPP (Endpoint Protection Platform) product and find a bias enabling a universal bypass. Essentially, if you can truly understand how a model works and the type of features it uses to reach a decision, you have the potential to fool it consistently.

How did the researchers trick Cylance into thinking bad is good?

Cylance's machine-learning model has been trained to favor a particular benign file, causing it to ignore malicious code if it sees strings from that benign file attached to a malicious file. The researchers took advantage of this by appending strings from a non-malicious file to a malicious one, tricking the system into thinking the malicious file was safe and avoiding detection. The trick works even if the Cylance engine had previously concluded the same file was malicious before the benign strings were appended.

The Cylance engine keeps a scoring mechanism ranging from -1000 for the most malicious files to +1000 for the most benign, and it whitelists certain families of executable files to avoid triggering false positives on legitimate software. The researchers suspected that the machine-learning model would be biased toward code in those whitelisted files, so they extracted strings from an online gaming program that Cylance had whitelisted and appended them to malicious files. The Cylance engine tagged the files as benign, shifting scores from high negative numbers to high positive ones.
https://youtu.be/NE4kgGjhf1Y

The researchers tested the technique against the WannaCry ransomware, the SamSam ransomware, the popular Mimikatz hacking tool, and hundreds of other known malicious files. The method proved successful for 100% of the top 10 malware families for May 2019, and close to 90% of a larger sample of 384 malware files.

"As far as I know, this is a world-first, proven global attack on the ML [machine learning] mechanism of a security company," Adi Ashkenazy, CEO of Skylight Cyber, told Motherboard, which first reported the news. "After around four years of super hype [about AI], I think this is a humbling example of how the approach provides a new attack surface that was not possible with legacy [antivirus software]."

Gregory Webb, chief executive officer of malware protection firm Bromium Inc., told SiliconAngle that the news raises doubts about the concept of categorizing code as "good" or "bad". "This exposes the limitations of leaving machines to make decisions on what can and cannot be trusted," Webb said. "Ultimately, AI is not a silver bullet."

Martijn Grooten, a security researcher, also added his views on the Cylance bypass story.
He states, "This is why we have good reasons to be concerned about the use of AI/ML in anything involving humans, because it can easily reinforce and amplify existing biases."

The Cylance team has now confirmed the global bypass issue and will release a hotfix in the next few days. "We are aware that a bypass has been publicly disclosed by security researchers. We have verified there is an issue which can be leveraged to bypass the anti-malware component of the product. Our research and development teams have identified a solution and will release a hotfix automatically to all customers running current versions in the next few days," the team wrote in a blog post.

You can go through the blog post by the Skylight Cyber researchers for additional information.

Microsoft releases security updates: a "wormable" threat similar to WannaCry ransomware discovered
25 million Android devices infected with 'Agent Smith', a new mobile malware
FireEye reports infrastructure-crippling Triton malware linked to Russian government tech institute
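To see why string-level bias is enough for a universal bypass, here is a toy sketch: the weights, strings, and scoring mechanics are invented for illustration, and Cylance's real model and whitelisting are far more complex. It only shows how a score in the -1000 to +1000 range can be dragged positive by appending "benign" strings to an otherwise damning file.

```python
# Toy model: each known string contributes a fixed weight to the file's score.
# Positive weights mimic strings the model associates with whitelisted software.
WEIGHTS = {
    b"CreateRemoteThread": -400,   # typical of process injection
    b"bitcoin_wallet":     -350,   # typical of ransomware notes
    b"GameEngineInit":      300,   # strings lifted from a whitelisted game
    b"RenderFrame":         280,
    b"AudioMixerStart":     260,
}

def score(file_bytes):
    s = sum(w for needle, w in WEIGHTS.items() if needle in file_bytes)
    return max(-1000, min(1000, s))   # clamp to the -1000..+1000 range

malicious = b"...CreateRemoteThread...bitcoin_wallet..."
benign_strings = b" GameEngineInit RenderFrame AudioMixerStart"

print(score(malicious))                     # strongly negative -> flagged
print(score(malicious + benign_strings))    # dragged positive -> waved through
```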

New iPhone exploit checkm8 is unpatchable and can possibly lead to permanent jailbreak on iPhones

Sugandha Lahoti
30 Sep 2019
4 min read
An unnamed iOS researcher who goes by the Twitter handle @axi0mX has released a new iOS exploit, checkm8, that affects all iOS devices running on A5 to A11 chipsets. The exploit targets vulnerabilities in Apple's bootrom (secure boot ROM), which can give phone owners and hackers deep-level access to their iOS devices. Because the flaw sits in boot code, Apple is unable to block or patch it with a future software update, so the exploit can lead to a permanent, unblockable jailbreak on iPhones. Jailbreaking allows hackers to get root access, enabling them to install software that is unavailable in the Apple App Store, run unsigned code, read and write to the root filesystem, and more.
https://twitter.com/axi0mX/status/1178299323328499712

The researcher considers checkm8 possibly the biggest news in the iOS jailbreak community in years, because bootrom jailbreaks are mostly permanent and cannot be patched; fixing them requires physical modifications to device chipsets, which can only happen through recalls or mass replacements. It is also the first bootrom-level exploit publicly released for an iOS device since the iPhone 4, which came out almost a decade ago.

axi0mX had previously released another jailbreak-enabling exploit, called alloc8, in 2017. alloc8 exploits a powerful vulnerability in the bootrom's malloc function and applies to iPhone 3GS devices. checkm8, by contrast, impacts devices from the iPhone 4S (A5 chip) through the iPhone 8 and iPhone X (A11 chip). The only exception is the A12 processor found in the iPhone XS / XR and 11 / 11 Pro, for which Apple has patched the flaw. A full jailbreak with Cydia on the latest iOS version is possible, but requires additional work.

Explaining the reason for making this iOS exploit public, @axi0mX said "a bootrom exploit for older devices makes iOS better for everyone. Jailbreakers and tweak developers will be able to jailbreak their phones on latest version, and they will not need to stay on older iOS versions waiting for a jailbreak. They will be safer." The researcher adds, "I am releasing my exploit for free for the benefit of iOS jailbreak and security research community. Researchers and developers can use it to dump SecureROM, decrypt keybags with AES engine, and demote the device to enable JTAG. You still need additional hardware and software to use JTAG."

For now, the checkm8 exploit is released in beta and there is no actual jailbreak yet: you can't simply download a tool, crack your device, and start downloading apps and modifications to iOS. axi0mX's exploit is available on GitHub. The code isn't recommended for users without the proper technical skills, as it could easily result in bricked devices. Nonetheless, it remains an unpatchable issue and poses security risks for iOS users. Apple has not yet acknowledged the checkm8 exploit.

A number of people tweeted about this iOS exploit and tried it.
https://twitter.com/FCE365/status/1177558724719853568
https://twitter.com/SparkZheng/status/1178492709863976960
https://twitter.com/dangoodin001/status/1177951602793046016

The past year saw a number of iOS exploits. Last month, Apple accidentally reintroduced in iOS 12.4 a bug that had been patched in iOS 12.3, and a security researcher who goes by the name Pwn20wnd on Twitter released unc0ver v3.5.2, a jailbreaking tool that can jailbreak A7-A11 devices.
In July, two members of the Google Project Zero team revealed six "interactionless" security bugs that can affect iOS by exploiting the iMessage client; four of these bugs can execute malicious code on a remote iOS device without any prior user interaction.

Researchers release a study into Bug Bounty Programs and Responsible Disclosure for ethical hacking in IoT
'Dropbox Paper' leaks out email addresses and names on sharing document publicly
DoorDash data breach leaks personal details of 4.9 million customers, workers, and merchants