Search icon CANCEL
Subscription
0
Cart icon
Your Cart (0 item)
Close icon
You have no products in your basket yet
Save more on your purchases! discount-offer-chevron-icon
Savings automatically calculated. No voucher code required.
Arrow left icon
Explore Products
Best Sellers
New Releases
Books
Videos
Audiobooks
Learning Hub
Newsletter Hub
Free Learning
Arrow right icon
timer SALE ENDS IN
0 Days
:
00 Hours
:
00 Minutes
:
00 Seconds

How-To Tutorials - Security

174 Articles
article-image-secure-private-cloud-iam
Savia Lobo
10 May 2018
11 min read
Save for later

How to secure a private cloud using IAM

Savia Lobo
10 May 2018
11 min read
In this article, we look at securing the private cloud using IAM. For IAM, OpenStack uses the Keystone project. Keystone provides the identity, token, catalog, and policy services, which are used specifically by OpenStack services. It is organized as a group of internal services exposed on one or many endpoints. For example, an authentication call validates the user and project credentials with the identity service. [box type="shadow" align="" class="" width=""]This article is an excerpt from the book,'Cloud Security Automation'. In this book, you'll learn how to work with OpenStack security modules and learn how private cloud security functions can be automated for better time and cost-effectiveness.[/box] Authentication Authentication is an integral part of an OpenStack deployment and so we must be careful about the system design. Authentication is the process of confirming a user's identity, which means that a user is actually who they claim to be. For example, providing a username and a password when logging into a system. Keystone supports authentication using the username and password, LDAP, and external authentication methods. After successful authentication, the identity service provides the user with an authorization token, which is further used for subsequent service requests. Transport Layer Security (TLS) provides authentication between services and users using X.509 certificates. The default mode for TLS is server-side only authentication, but we can also use certificates for client authentication. However, in authentication, there can also be the case where a hacker is trying to access the console by guessing your username and password. If we have not enabled the policy to handle this, it can be disastrous. For this, we can use the Failed Login Policy, which states that a maximum number of attempts are allowed for a failed login; after that, the account is blocked for a certain number of hours and the user will also get a notification about it. However, the identity service provided in Keystone does not provide a method to limit access to accounts after repeated unsuccessful login attempts. For this, we need to rely on an external authentication system that blocks out an account after a configured number of failed login attempts. Then, the account might only be unlocked with further side-channel intervention, or on request, or after a certain duration. We can use detection techniques to the fullest only when we have a prevention method available to save them from damage. In the detection process, we frequently review the access control logs to identify unauthorized attempts to access accounts. During the review of access control logs, if we find any hints of a brute force attack (where the user tries to guess the username and password to log in to the system), we can define a strong username and password or block the source of the attack (IP) through firewall rules. When we define firewall rules on Keystone node, it restricts the connection, which helps to reduce the attack surface. Apart from this, reviewing access control logs also helps to examine the account activity for unusual logins and suspicious actions, so that we can take corrective actions such as disabling the account. To increase the level of security, we can also utilize MFA for network access to the privileged user accounts. Keystone supports external authentication services through the Apache web server that can provide this functionality. 
Servers can also enforce client-side authentication using certificates. This will help to get rid of brute force and phishing attacks that may compromise administrator passwords. Authentication methods – internal and external Keystone stores user credentials in a database or may use an LDAP-compliant directory server. The Keystone identity database can be kept separate from databases used by other OpenStack services to reduce the risk of a compromise of the stored credentials. When we use the username and password to authenticate, identity does not apply policies for password strength, expiration, or failed authentication attempts. For this, we need to implement external authentication services. To integrate an external authentication system or organize an existing directory service to manage users account management, we can use LDAP. LDAP simplifies the integration process. In OpenStack authentication and authorization, the policy may be delegated to another service. For example, an organization that is going to deploy a private cloud and already has a database of employees and users in an LDAP system. Using this LDAP as an authentication authority, requests to the Identity service (Keystone) are transferred to the LDAP system, which allows or denies requests based on its policies. After successful authentication, the identity service generates a token for access to the authorized services. Now, if the LDAP has already defined attributes for the user such as the admin, finance, and HR departments, these must be mapped into roles and groups within identity for use by the various OpenStack services. We need to define this mapping into Keystone node configuration files stored at /etc/keystone/keystone.conf. Keystone must not be allowed to write to the LDAP used for authentication outside of the OpenStack Scope, as there is a chance to allow a sufficiently privileged Keystone user to make changes to the LDAP directory, which is not desirable from a security point of view. This can also lead to unauthorized access of other information and resources. So, if we have other authentication providers such as LDAP or Active Directory, then user provisioning always happens at other authentication provider systems. For external authentication, we have the following methods: MFA: The MFA service requires the user to provide additional layers of information for authentication such as a one-time password token or X.509 certificate (called MFA token). Once MFA is implemented, the user will have to enter the MFA token after putting the user ID and password in for a successful login. Password policy enforcement: Once the external authentication service is in place, we can define the strength of the user passwords to conform to the minimum standards for length, diversity of characters, expiration, or failed login attempts. Keystone also supports TLS-based client authentication. TLS client authentication provides an additional authentication factor, apart from the username and password, which provides greater reliability on user identification. It reduces the risk of unauthorized access when usernames and passwords are compromised. However, TLS-based authentication is not cost effective as we need to have a certificate for each of the clients. Authorization Keystone also provides the option of groups and roles. Users belong to groups where a group has a list of roles. All of the OpenStack services, such as Cinder, Glance, nova, and Horizon, reference the roles of the user attempting to access the service. 
OpenStack policy enforcers always consider the policy rule associated with each resource and use the user’s group or role, and their association, to determine and allow or deny the service access. Before configuring roles, groups, and users, we should document your required access control policies for the OpenStack installation. The policies must be as per the regulatory or legal requirements of the organization. Additional changes to the access control configuration should be done as per the formal policies. These policies must include the conditions and processes for creating, deleting, disabling, and enabling accounts, and for assigning privileges to the accounts. One needs to review these policies from time to time and ensure that the configuration is in compliance with the approved policies. For user creation and administration, there must be a user created with the admin role in Keystone for each OpenStack service. This account will provide the service with the authorization to authenticate users. Nova (compute) and Swift (object storage) can be configured to use the Identity service to store authentication information. For the test environment, we can have tempAuth, which records user credentials in a text file, but it is not recommended for the production environment. The OpenStack administrator must protect sensitive configuration files from unauthorized modification with mandatory access control frameworks such as SELinux or DAC. Also, we need to protect the Keystone configuration files, which are stored at /etc/keystone/keystone.conf, and also the X.509 certificates. It is recommended that cloud admin users must authenticate using the identity service (Keystone) and an external authentication service that supports two-factor authentication. Getting authenticated with two-factor authentication reduces the risk of compromised passwords. It is also recommended in the NIST guideline called NIST 800-53 IA-2(1). Which defines MFA for network access to privileged accounts, when one factor is provided by a separate device from the system being accessed. Policy, tokens, and domains In OpenStack, every service defines the access policies for its resources in a policy file, where a resource can be like an API access, it can create and attach Cinder volume, or it can create an instance. The policy rules are defined in JSON format in a file called policy.json. Only administrators can modify the service-based policy.json file, to control the access to the various resources. However, one has to also ensure that any changes to the access control policies do not unintentionally breach or create an option to breach the security of any resource. Any changes made to policy.json are applied immediately and it does not need any service restart. After a user is authenticated, a token is generated for authorization and access to an OpenStack environment. A token can have a variable lifespan, but the default value is 1 hour. It is also recommended to lower the lifespan of the token to a certain level so that within the specified timeframe the internal service can complete the task. If the token expires before task completion, the system can be unresponsive. Keystone also supports token revocation. For this, it uses an API to revoke a token and to list the revoked tokens. In OpenStack Newton release, there are four supported token types: UUID, PKI, PKIZ, and fernet. After the OpenStack Ocata release, there are two supported token types: UUID and fernet. 
We'll see all of these token types in detail here: UUID: These tokens are persistent tokens. UUID tokens are 32 bytes in length, which must be persisted in the backend. They are stored in the Keystone backend, along with the metadata for authentication. All of the clients must pass their UUID token to the Keystone (identity service) in order to validate it. PKI and PKIZ: These are signed documents that contain the authentication content, as well as the service catalog. The difference between the PKI and PKIZ is that PKIZ tokens are compressed to help mitigate the size issues of PKI (sometimes PKI tokens becomes very long). Both of these tokens have become obsolete after the Ocata release. The length of PKI and PKIZ tokens typically exceeds 1,600 bytes. The Identity service uses public and private key pairs and certificates in order to create and validate these tokens. Fernet: These tokens are the default supported token provider for OpenStack Pike Release. It is a secure messaging format explicitly designed for use in API tokens. They are nonpersistent, lightweight (fall in the range of 180 to 240 bytes), and reduce the operational overhead. Authentication and authorization metadata is neatly bundled into a message-packed payload, which is then encrypted and signed in as a fernet token. In the OpenStack, the Keystone Service domain is a high-level container for projects, users, and groups. Domains are used to centrally manage all Keystone-based identity components. Compute, storage, and other resources can be logically grouped into multiple projects, which can further be grouped under a master account. Users of different domains can be represented in different authentication backends and have different attributes that must be mapped to a single set of roles and privileges in the policy definitions to access the various service resources. Domain-specific authentication drivers allow the identity service to be configured for multiple domains, using domain-specific configuration files stored at keystone.conf. Federated identity Federated identity enables you to establish trusts between identity providers and the cloud environment (OpenStack Cloud). It gives you secure access to cloud resources using your existing identity. You do not need to remember multiple credentials to access your applications. Now, the question is, what is the reason for using federated identity? This is answered as follows: It enables your security team to manage all of the users (cloud or noncloud) from a single identity application It enables you to set up different identity providers on the basis of the application that somewhere creates an additional workload for the security team and leads the security risk as well It gives ease of life to users by proving them a single credential for all of the apps so that they can save the time they spend on the forgot password page Federated identity enables you to have a single sign-on mechanism. We can implement it using SAML 2.0. To do this, you need to run the identity service provider under Apache. We learned about securing your private cloud and the authentication process therein. If you've enjoyed this article, do check out 'Cloud Security Automation' for a hands-on experience of automating your cloud security and governance. Top 5 cloud security threats to look out for in 2018 Cloud Security Tips: Locking Your Account Down with AWS Identity Access Manager (IAM)
Read more
  • 0
  • 0
  • 23050

article-image-why-wall-street-unfriended-facebook-stocks-lost-over-120-billion-in-market-value-after-q2-2018-earnings-call
Natasha Mathur
27 Jul 2018
6 min read
Save for later

Why Wall Street unfriended Facebook: Stocks fell $120 billion in market value after Q2 2018 earnings call

Natasha Mathur
27 Jul 2018
6 min read
After been found guilty of providing discriminatory advertisements on its platform earlier this week, Facebook hit yet another wall yesterday as its stock closed falling down by 18.96% on Thursday with shares selling at $176.26. This means that the company lost around $120 billion in market value overnight, making it the largest loss of value ever in a day for a US-traded company since Intel Corp’s two-decade-old crash. Intel had lost a little over $18 billion in one day, 18 years back. Despite the 41.24% revenue growth compared to last year, this was Facebook’s biggest stock market drop ever. Here’s the stock chart from NASDAQ showing the figures:   Facebook’s market capitalization was worth $629.6 on Wednesday. As soon as Facebook’s Earnings calls concluded by the end of market trading on Thursday, it’s worth dropped to $510 billion after the close. Also, as Facebook’s market shares continued to drop down during Thursday’s market, it left its CEO, Mark Zuckerberg with less than $70 billion, wiping out nearly $17 billion of his personal stake, according to Bloomberg. Also, he was demoted from the third to the sixth position on Bloomberg’s Billionaires Index. Active user growth starting to stagnate in mature markets According to David Wehner, CFO at Facebook, “the Daily active users count on Facebook reached 1.47 billion, up 11% compared to last year, led by growth in India, Indonesia, and the Philippines. This number represents approximately 66% of the 2.23 billion monthly active users in Q2”. Facebook’s daily active users He also mentioned that  “MAUs (monthly active users) were up 228M or 11% compared to last year. It is worth noting that MAU and DAU in Europe were both down slightly quarter-over-quarter due to the GDPR rollout, consistent with the outlook we gave on the Q1 call”. Facebook’s Monthly Active users In fact, Facebook has implemented several privacy policy changes in the last few months. This is due to the European Union's General Data Protection Regulation ( GDPR ) as the company's earnings report revealed the effects of the GDPR rules. Revenue Growth Rate is falling too Speaking of revenue expectations, Wehner gave investors a heads up that revenue growth rates will decline in the third and fourth quarters. Wehner states that the company’s “total revenue growth rate decelerated approximately 7 percentage points in Q2 compared to Q1. Our total revenue growth rates will continue to decelerate in the second half of 2018, and we expect our revenue growth rates to decline by high single-digit percentages from prior quarters sequentially in both Q3 and Q4.”  Facebook reiterated further that these numbers won’t get better anytime soon.                                                 Facebook’s Q2 2018 revenue Wehner further spoke explained the reasons for the decline in revenue,“There are several factors contributing to that deceleration..we expect the currency to be a slight headwind in the second half ...we plan to grow and promote certain engaging experiences like Stories that currently have lower levels of monetization. We are also giving people who use our services more choices around data privacy which may have an impact on our revenue growth”. Let’s look at other performance indicators Other financial highlights of Q2 2018 are as follows: Mobile advertising revenue represented 91% of advertising revenue for q2 2018, which is up from approx. 87% of the advertising revenue in Q2 2017. 
Capital expenditures for Q2 2018 were $3.46 billion which is up from $1.4 billion in Q2 2017. Headcount was 30,275 around June 30, which is an increase of 47% year-over-year. Cash, Cash equivalents, and marketable securities were $42.3 billion at the end of Q2 2018, an increase from $35.45 billion at the end of the Q2 2017. Wehner also mentioned that the company “continue to expect that full-year 2018 total expenses will grow in the range of 50-60% compared to last year. In addition to increases in core product development and infrastructure -- growth is driven by increasing investments -- safety & security, AR/VR, marketing, and content acquisition”. Another reason for the overall loss is that Facebook has been dealing with criticism for quite some time now over its content policies, its issues regarding user’s private data and its changing rules for advertisers. In fact, it is currently investigating data analytics firm Crimson Hexagon over misuse of data. Mark Zuckerberg also said over a conference call with financial analysts that Facebook has been investing heavily in “safety, security, and privacy” and that how they’re “investing - in security that it will start to impact our profitability, we’re starting to see that this quarter - we run this company for the long term, not for the next quarter”. Here’s what the public feels about the recent wipe-out: https://twitter.com/TeaPainUSA/status/1022586648155054081 https://twitter.com/alistairmilne/status/1022550933014753280 So, why did Facebook’s stocks crash? As we can see, Facebook’s performance itself in Q2 2018 has been better than its performance last year for the same quarter as far as revenue goes. Ironically, scandals and lawsuits have had little impact on Facebook’s growth. For example, Facebook recovered from the Cambridge Analytica scandal fully within two months as far share prices are concerned. The Mueller indictment report released earlier this month managed to arrest growth for merely a couple of days before the company bounced back. The discriminatory advertising verdict against Facebook, had no impact on its bullish growth earlier this week. This brings us to conclude that the public sentiments and market reactions against Facebook have very different underlying reasons. The market’s strong reactions are mainly due to concerns over the active user growth slowdown, the lack of monetization opportunities on the more popular Instagram platform, and Facebook’s perceived lack of ability to evolve successfully to new political and regulatory policies such as the GDPR. Wall Street has been indifferent to Facebook’s long list of scandals, in some ways, enabling the company’s ‘move fast and break things’ approach. In his earnings call on Thursday, Zuckerberg hinted that Facebook may not be keen on ‘growth at all costs’ by saying things like “we’re investing so much in security that it will significantly impact our profitability” and then Wehner adding, “Looking beyond 2018, we anticipate that total expense growth will exceed revenue growth in 2019.” And that has got Wall street unfriending Facebook with just a click of the button! Is Facebook planning to spy on you through your mobile’s microphones? Facebook to launch AR ads on its news feed to let you try on products virtually Decoding the reasons behind Alphabet’s record high earnings in Q2 2018  
Read more
  • 0
  • 0
  • 22853

article-image-security-researcher-publicly-releases-second-steam-zero-day-after-being-banned-from-valves-bug-bounty-program
Savia Lobo
22 Aug 2019
6 min read
Save for later

Security researcher publicly releases second Steam zero-day after being banned from Valve's bug bounty program

Savia Lobo
22 Aug 2019
6 min read
Updated with Valve’s response: Valve, in a statement on August 22, said that its HackerOne bug bounty program, should not have turned away Kravets when he reported the second vulnerability and called it “a mistake”. A Russian security researcher, Vasily Kravets, has found a second zero-day vulnerability in the Steam gaming platform, in a span of two weeks. The researcher said he reported the first Steam zero-day vulnerability earlier in August, to its parent company, Valve, and tried to have it fixed before public disclosure. However, “he said he couldn't do the same with the second because the company banned him from submitting further bug reports via its public bug bounty program on the HackerOne platform,” ZDNet reports. Source: amonitoring.ru This first flaw was a “privilege-escalation vulnerability that can allow an attacker to level up and run any program with the highest possible rights on any Windows computer with Steam installed. It was released after Valve said it wouldn’t fix it (Valve then published a patch, that the same researcher said can be bypassed),” according to Threatpost. Although Kravets was banned from the Hacker One platform, he disclosed the second flaw that enables a local privilege escalation in the Steam client on Tuesday and said that the flaw would be simple for any OS user to exploit. Kravets told Threatpost that he is not aware of a patch for the vulnerability. “Any user on a PC could do all actions from exploit’s description (even ‘Guest’ I think, but I didn’t check this). So [the] only requirement is Steam,” Kravets told Threatpost. He also said, “It’s sad and simple — Valve keeps failing. The last patch, that should have solved the problem, can be easily bypassed so the vulnerability still exists. Yes, I’ve checked, it works like a charm.” Another security researcher, Matt Nelson also said he had found the exact same bug as Kravets had, which “he too reported to Valve's HackerOne program, only to go through a similar bad experience as Kravets,” ZDNet reports. He said both Valve and HackerOne took five days to acknowledge the bug and later refused to patch it. Further, they locked the bug report when Nelson wanted to disclose the bug publicly and warn users. “Nelson later released proof-of-concept code for the first Steam zero-day, and also criticized Valve and HackerOne for their abysmal handling of his bug report”, ZDNet reports. https://twitter.com/enigma0x3/status/1148031014171811841 “Despite any application itself could be harmful, achieving maximum privileges can lead to much more disastrous consequences. For example, disabling firewall and antivirus, rootkit installation, concealing of process-miner, theft any PC user’s private data — is just a small portion of what could be done,”  said Kravets. Kravets demonstrated the second Steam zero-day and also detailed the vulnerability on his website. Per Threatpost as of August 21, “Valve did not respond to a request for comment about the vulnerability, bug bounty incident and whether a patch is available. HackerOne did not have a comment.” Other researchers who have participated in Valve’s bug bounty program are infuriated over Valve’s decision to not only block Kravets from submitting further bug reports, but also refusing to patch the flaw. 
https://twitter.com/Viss/status/1164055856230440960 https://twitter.com/kamenrannaa/status/1164408827266998273 A user on Reddit writes, “If management isn't going to take these issues seriously and respect a bug bounty program, then you need to bring about some change from within. Now they are just getting bug reports for free.” Nelson said the Hacker One “representative said the vulnerability was out of scope to qualify for Valve’s bug bounty program,” Ars Technica writes. Further, when Nelson said that he was not seeking any monetary gains and only wanted the public to be aware of the vulnerability, the HackerOne representative asked Nelson to “please familiarize yourself with our disclosure guidelines and ensure that you’re not putting the company or yourself at risk. https://www.hackerone.com/disclosure-guidelines.” https://twitter.com/enigma0x3/status/1160961861560479744 Nelson also reported the vulnerability directly to Valve. Valve, first acknowledged the report and “noted that I shouldn’t expect any further communication.” He never heard anything more from the company. In am email to Ars Technica, Nelson writes, “I can certainly believe that the scoping was misinterpreted by HackerOne staff during the triage efforts. It is mind-blowing to me that the people at HackerOne who are responsible for triaging vulnerability reports for a company as large as Valve didn’t see the importance of Local Privilege Escalation and simply wrote the entire report off due to misreading the scope.” A HackerOne spokeswoman told Ars Technica, “We aim to explicitly communicate our policies and values in all cases and here we could have done better. Vulnerability disclosure is an inherently murky process and we are, and have always been, committed to protecting the interests of hackers. Our disclosure guidelines emphasize mutual respect and empathy, encouraging all to act in good faith and for the benefit of the common good.” Katie Moussouris, founder and CEO of Luta Security, also said, “Silencing the researcher on one issue is in complete violation of the ISO standard practices, and banning them from reporting further issues is simply irresponsible to affected users who would otherwise have benefited from these researchers continuing to engage and report issues privately to get them fixed. The norms of vulnerability disclosure are being warped by platforms that put profits before people.” Valve agrees that turning down Kravets’ request was “a mistake” Valve, in a statement on August 22, said that its HackerOne bug bounty program, should not have turned away Kravets when he reported the second vulnerability and called it a mistake. In an email statement to ZDNet, a Valve representative said that “the company has shipped fixes for the Steam client, updated its bug bounty program rules, and is reviewing the researcher's ban on its public bug bounty program.” The company also writes, “Our HackerOne program rules were intended only to exclude reports of Steam being instructed to launch previously installed malware on a user’s machine as that local user. Instead, misinterpretation of the rules also led to the exclusion of a more serious attack that also performed local privilege escalation through Steam.  In regards to the specific researchers, we are reviewing the details of each situation to determine the appropriate actions. We aren’t going to discuss the details of each situation or the status of their accounts at this time.” To know more about this news in detail, read Kravets’ blog post. 
You could also check out Threatpost’s detailed coverage. Puppet launches Puppet Remediate, a vulnerability remediation solution for IT Ops A second zero-day found in Firefox was used to attack Coinbase employees; fix released in Firefox 67.0.4 and Firefox ESR 60.7.2 The EU Bounty Program enabled in VLC 3.0.7 release, this version fixed the most number of security issue
Read more
  • 0
  • 0
  • 22832

article-image-finishing-attack-report-and-withdraw
Packt
21 Dec 2016
11 min read
Save for later

Finishing the Attack: Report and Withdraw

Packt
21 Dec 2016
11 min read
In this article by Michael McPhee and Jason Beltrame, the author of the book Penetration Testing with Raspberry Pi - Second Edition we will look at the final stage of the Penetration Testing Kill Chain, which is Reporting and Withdrawing. Some may argue the validity and importance of this step, since much of the hard-hitting effort and impact. But, without properly cleaning up and covering our tracks, we can leave little breadcrumbs with can notify others to where we have been and also what we have done. This article covers the following topics: Covering our tracks Masking our network footprint (For more resources related to this topic, see here.) Covering our tracks One of the key tasks in which penetration testers as well as criminals tend to fail is cleaning up after they breach a system. Forensic evidence can be anything from the digital network footprint (the IP address, type of network traffic seen on the wire, and so on) to the logs on a compromised endpoint. There is also evidence on the used tools, such as those used when using a Raspberry Pi to do something malicious. An example is running more ~/.bash_history on a Raspberry Pi to see the entire history of the commands that were used. The good news for Raspberry Pi hackers is that they don't have to worry about storage elements such as ROM since the only storage to consider is the microSD card. This means attackers just need to re-flash the microSD card to erase evidence that the Raspberry Pi was used. Before doing that, let's work our way through the clean up process starting from the compromised system to the last step of reimaging our Raspberry Pi. Wiping logs The first step we should perform to cover our tracks is clean any event logs from the compromised system that we accessed. For Windows systems, we can use a tool within Metasploit called Clearev that does this for us in an automated fashion. Clearev is designed to access a Windows system and wipe the logs. An overzealous administrator might notice the changes when we clean the logs. However, most administrators will never notice the changes. Also, since the logs are wiped, the worst that could happen is that an administrator might identify that their systems have been breached, but the logs containing our access information would have been removed. Clearev comes with the Metasploit arsenal. To use clearev once we have breached a Windows system with a Meterpreter, type meterpreter > clearev. There is no further configuration, which means clearev just wipes the logs upon execution. The following screenshot shows what that will look like: Here is an example of the logs before they are wiped on a Windows system: Another way to wipe off logs from a compromised Windows system is by installing a Windows log cleaning program. There are many options available to download, such as ClearLogs found at http://ntsecurity.nu/toolbox/clearlogs/. Programs such as these are simple to use, we can just install and run it on a target once we are finished with our penetration test. We can also just delete the logs manually using the C: del %WINDR%* .log /a/s/q/f command. This command directs all logs using /a including subfolders /s, disables any queries so we don't get prompted, and /f forces this action. Whichever program you use, make sure to delete the executable file once the log files are removed so that the file isn't identified during a future forensic investigation. For Linux systems, we need to get access to the /var/log folder to find the log files. 
Once we have access to the log files, we can simply open them and remove all entries. The following screenshot shows an example of our Raspberry Pi's log folder: We can just delete the files using the remove command, rm, such as rm FILE.txt or delete the entire folder; however, this wouldn't be as stealthy as wiping existing files clean of your footprint. Another option is in Bash. We can simply type > /path/to/file to empty the contents of a file, without removing it necessarily. This approach has some stealth benefits. Kali Linux does not have a GUI-based text editor, so one easy-to-use tool that we can install is gedit. We'll use apt-get install gedit to download it. Once installed, we can find gedit under the application dropdown or just type gedit in the terminal window. As we can see from the following screenshot, it looks like many common text file editors. Let's click on File and select files from the /var/log folder to modify them. We also need to erase the command history since the Bash shell saves the last 500 commands. This forensic evidence can be accessed by typing the more ~/.bash_history command. The following screenshot shows the first of the hundreds of commands we recently ran on my Raspberry Pi: To verify the number of stored commands in the history file, we can type the echo $HISTSIZE command. To erase this history, let's type export HISTSIZE=0. From this point, the shell will not store any command history, that is, if we press the up arrow key, it will not show the last command. These commands can also be placed in a .bashrc file on Linux hosts. The following screenshot shows that we have verified if our last 500 commands are stored. It also shows what happens after we erase them: It is a best practice to set this command prior to using any commands on a compromised system, so that nothing is stored upfront. You could log out and log back in once the export HISTSIZE=0 command is set to clear your history as well. You should also do this on your C&C server once you conclude your penetration test if you have any concerns of being investigated. A more aggressive and quicker way to remove our history file on a Linux system is to shred it with the shred –zu /root/.bash_history command. This command overwrites the history file with zeros and then deletes the log files. We can verify this using the less /root/.bash_history command to see if there is anything left in your history file, as shown in the following screenshot: Masking our network footprint Anonymity is a key ingredient when performing our attacks, unless we don't mind someone being able to trace us back to our location and giving up our position. Because of this, we need a way to hide or mask where we are coming from. This approach is perfect for a proxy or groups of proxies if we really want to make sure we don't leave a trail of breadcrumbs. When using a proxy, the source of an attack will look as though it is coming from the proxy instead of the real source. Layering multiple proxies can help provide an onion effect, in which each layer hides the other, and makes it very difficult to determine the real source during any forensic investigation. Proxies come in various types and flavors. There are websites devoted for hiding our source online, and with a quick Google search, we can see some of the most popular like hide.me, Hidestar, NewIPNow, ProxySite and even AnonyMouse. Following is a screenshot from NewIPNow website. 
Administrators of proxies can see all traffic as well as identify both the target and the victims that communicate through their proxy. It is highly recommended that you research any proxy prior to using it as some might use information captured without your permission. This includes providing forensic evidence to authorities or selling your sensitive information. Using ProxyChains Now, if web based proxies are not what we are looking for, we can use our Raspberry Pi as a proxy server utilizing the ProxyChains application. ProxyChains is very easy application to setup and start using. First, we need to install the application. This can be accomplished this by running the following command from the CLI: root@kali:~# apt-get install proxychains Once installed, we just need to edit the ProxyChains configuration located at /etc/proxychains.conf, and put in the proxy servers we would like to use: There are lots of options out there for finding public proxies. We should certainly use with some caution, as some proxies will use our data without our permission, so we'll be sure to do our research prior to using one. Once we have one picked out and have updated our proxychains.conf file, we can test it out. To use ProxyChains, we just need to follow the following syntax: proxychains <command you want tunneled and proxied> <opt args> Based on that syntax, to run a nmap scan, we would use the following command: root@kali:~# proxychains nmap 192.168.245.0/24 ProxyChains-3.1 (http://proxychains.sf.net) Starting Nmap 7.25BETA1 ( https://nmap.org ) Clearing the data off the Raspberry Pi Now that we have covered our tracks on the network side, as well as on the endpoint, all we have left is any of the equipment that we have left behind. This includes our Raspberry Pi. To reset our Raspberry Pi back to factory defaults, we can refer back to installing Kali Linux. For re-installing Kali or the NOOBS software. This will allow us to have clean image running once again. If we had cloned your golden image we could just re-image our Raspberry Pi with that image. If we don't have the option to re-image or reinstall your Raspberry Pi, we do have the option to just destroy the hardware. The most important piece to destroy would be the microSD card (see image following), as it contains everything that we have done on the Pi. But, we may want to consider destroying any of the interfaces that you may have used (USB WiFi, Ethernet or Bluetooth adapters), as any of those physical MAC addresses may have been recorded on the target network, and therefore could prove that device was there. If we had used our onboard interfaces, we may even need to destroy the Raspberry Pi itself. If the Raspberry Pi is in a location that we cannot get to reclaim it or to destroy it, our only option is to remotely corrupt it so that we can remove any clues of our attack on the target. To do this, we can use the rm command within Kali. The rm command is used to remove files and such from the operating systems. As a cool bonus, rm has some interesting flags that we can use to our advantage. These flags include the –r and the –f flag. The –r flag indicates to perform the operation recursively, so everything in that directory and preceding will be removed while the –f flag is to force the deletion without asking. So running the command rm –fr * from any directory will remove all contents within that directory and anything preceding that. Where this command gets interesting is if we run it from / a.k.a. the top of the directory structure. 
Since the command will remove everything in that directory and preceding, running it from the top level will remove all files and hence render that box unusable. As any data forensics person will tell us, that data is still there, just not being used by the operation system. So, we really need to overwrite that data. We can do this by using the dd command. We used dd back when we were setting up the Raspberry Pi. We could simply use the following to get the job done: dd if=/dev/urandom of=/dev/sda1 (where sda1 is your microSD card) In this command we are basically writing random characters to the microSD card. Alternatively, we could always just reformat the whole microSD card using the mkfs.ext4 command: mkfs.ext4 /dev/sda1 ( where sda1 is your microSD card ) That is all helpful, but what happens if we don't want to destroy the device until we absolutely need to – as if we want the ability to send over a remote destroy signal? Kali Linux now includes a LUKS Nuke patch with its install. LUKS allows for a unified key to get into the container, and when combined with Logical Volume Manager (LVM), can created an encrypted container that needs a password in order to start the boot process. With the Nuke option, if we specify the Nuke password on boot up instead of the normal passphrase, all the keys on the system are deleted and therefore rendering the data inaccessible. Here are some great links to how and do this, as well as some more details on how it works: https://www.kali.org/tutorials/nuke-kali-linux-luks/ http://www.zdnet.com/article/developers-mull-adding-data-nuke-to-kali-linux/ Summary In this article we sawreports themselves are what our customer sees as our product. It should come as no surprise that we should then take great care to ensure they are well organized, informative, accurate and most importantly, meet the customer's objectives. Resources for Article: Further resources on this subject: Penetration Testing [article] Wireless and Mobile Hacks [article] Building Your Application [article]
Read more
  • 0
  • 0
  • 22518

article-image-analyzing-data-packets
Packt
08 Feb 2016
7 min read
Save for later

Analyzing Data Packets

Packt
08 Feb 2016
7 min read
In this article by Samir Datt, the author of the book Learning Network Forensics, you will learn to get your hands dirty by actually capturing and analyzing network traffic. We will learn how to use different software tools to capture and analyze network traffic with real-world scenarios of accessing data over the Internet and the resultant network capture. The article will cover the following topics: Packet sniffing and analysis using NetworkMiner Case study – sniffing out an insider (For more resources related to this topic, see here.) Packet sniffing and analysis using NetworkMiner NetworkMiner is a passive network sniffing or network forensic tool. It is called a passive tool as it does not send out requests—it sits silently on the network, capturing every packet in the promiscuous mode. NetworkMiner is host-centric. This means that it will classify data based on hosts rather than packets, which is what most sniffers such as Wireshark do. The different steps to NetworkMiner usage are as follows: Download and install the NetworkMiner. Then, configure it. Capture the data in NetworkMiner. Finally, analyze the data. NetworkMiner is available for download at SourceForge: http://sourceforge.net/projects/networkminer/. Though NetworkMiner is not as well known as it should be, it's host-centric approach is refreshingly different and effective. Allowing the users to classify traffic based on the IP addresses and not packets helps us to zero in on activities related to the specific computers that are under suspicion or are being investigated. The NetworkMiner interface is shown in the following screenshot: To begin using NetworkMiner, we start by selecting a network adapter from the drop-down list. NetworkMiner places this adapter in the promiscuous mode. Clicking Start begins NetworkMiner on the task of packet collection. While NetworkMiner has the capability of collecting data packets across the network, its real strength comes in to play after the data has been collected. In most of the scenarios, it makes more sense to use Wireshark to capture packets and then use NetworkMiner to do the analysis on the .pcap file that is captured. As soon as data capturing begins, NetworkMiner swings into action by sorting the packets based on the host IP addresses. This is extremely useful since it allows us to identify traffic that is specific to a single IP on the network. Consider that we have a single suspect with a known IP on the network, then we can focus our investigative resources on just that single IP address. Some really great additional features include the ability to identify the media access control (MAC) address of the network interface card (NIC) in use and also the OS of the suspect system. In fact, the icon on the left-hand side of the IP address shows the OS icon, if detected, as shown in the following screenshot: As we can see in the preceding image, some of the devices that are connected to the network under investigation are Windows and BSD devices. The next tab is the Frames tab. The Frames tab view is similar to that of Wireshark and is perhaps one of the lesser used tabs in NetworkMiner, due to the fact that there are so many other richer options available, as shown in the following screenshot: It gives us inputs on the packet length, source and destination IP address, as well as time to live (TTL) of the packet. NetworkMiner has the ability to collate the packets and then reconstruct the constituent files for viewing by the investigator. These files are shown in the Files tab. 
Assuming that some files were copied/accessed over a network share, it would be possible to view the reconstructed file in the Files tab. The Files tab also depicts the SSL certificates used over a network. This can also be useful from an investigation perspective, as shown in the following screenshot: Similarly, if pictures have been viewed over the network, these are reconstructed in the Images tab. In fact, this can be quite useful especially, when scanned documents are a part of the network traffic. This may happen when the bad guys try to avoid detection from the keyword-based searching. The following is an image depicting the Images tab: The reconstructed graphics are usually depicted as thumbnails. Right-clicking the thumbnail allows us to open the graphic in a picture editor/viewer. DNS queries are also accessible via another tab, as shown in the following image: There are additional tabs available that are notable from the perspective of an investigation. One of these is the Credentials tab. This stores the information related to interactions involving the exchange of credentials with resources that require logons. It is not uncommon to find username and passwords for plain-text logons listed under this tab. One can also find user accounts for popular sites such as Gmail and Facebook. A screenshot of the Credentials tab is as follows: In a number of cases, it is possible to determine the username and passwords of certain websites. Another great feature in NetworkMiner is the ability to import a set of keywords that are to be used to search within packets in the captured .pcap file. This allows us to separate packets that contain our keywords of interest. A screenshot is as follows: Case study – tracking down an insider XYZ Corporation, a medium-sized Government contractor, found that it had begun to lose business to a tiny competitor that seemed to know exactly what the sales team at XYZ Corp was planning. The senior management suspected that an insider was leaking information to the competitor. A network forensic 007 was called in to investigate the problem. A preliminary information-gathering exercise was initiated and a list of keywords was compiled to help in identifying packets that contained information of interest. A list of possible suspects, who had access to the confidential information, was also compiled. The specific network segment relating to the department in question was put under network surveillance. Wireshark was deployed to capture all the network traffic. Additional storage was made available to store the .pcap files generated by Wireshark. The collected .pcap files were analyzed using NetworkMiner. 
The following screenshot depicts Wireshark capturing traffic: An in-depth analysis of network traffic produced the following findings: An image showing the registration certificate of the company that was competing with XYZ Corp, providing the names of the directors The address of the company in the registration certificate was the residential address of the sales manager of XYZ Corp E-mail communications using personal e-mail addresses between the directors of the competing company and the senior manager sales of XYZ Corp Further offline analysis showed that the sales manager's wife was related to the director of the competing company It was also seen that the sales manager was connecting to the office Wi-Fi network using his android phone The sales manager was noted to be accessing cloud storage using his phone and transferring important files and contact lists It was noted that the sales manager was also in close communication with a female employee in the accounts department and that the connection was intimate The information collected so far was very indicative of the sales manager's involvement with competitors. Based on the preceding network forensics exercise, it was recommended that a full-fledged digital forensic exercise should be initiated, including that of his assigned laptop and phone device. It was also recommended that sufficient corroborating evidence should be collected using log analysis, RAM analysis, and disk forensics to initiate legal/breach of trust action against the suspect(s). Summary In this article, we moved our skills up a notch. You learned how to analyze the captured packets to see what is happening on the network. We also studied how to see the traffic from the specific IP addresses as well as protocol-specific traffic. We also understood how to look for specific traffic based on keywords. Files, private credentials, and images have been examined to identify activities of interest. We have now become a lot better at investigating network activity. Resources for Article:   Further resources on this subject: Introduction to Mobile Forensics [article] Securing vCloud Using the vCloud Networking and Security App Firewall [article] Configuring the alerts [article]
Read more
  • 0
  • 0
  • 22483

article-image-introducing-mobile-forensics
Packt
07 Sep 2016
21 min read
Save for later

Introducing Mobile Forensics

Packt
07 Sep 2016
21 min read
In this article by Oleg Afonin and Vladimir Katalov, the authors of the book Mobile Forensics – Advanced Investigative Strategies, we will see that today's smartphones are used less for calling and more for socializing, this has resulted in the smartphones holding a lot of sensitive data about their users. Mobile devices keep the user's contacts from a variety of sources (including the phone, social networks, instant messaging, and communication applications), information about phone calls, sent and received text messages, and e-mails and attachments. There are also browser logs and cached geolocation information; pictures and videos taken with the phone's camera; passwords to cloud services, forums, social networks, online portals, and shopping websites; stored payment data; and a lot of other information that can be vital for an investigation. (For more resources related to this topic, see here.) Needless to say, this information is very important for corporate and forensic investigations. In this book, we'll discuss not only how to gain access to all this data, but also what type of data may be available in each particular case. Tablets are no longer used solely as entertainment devices. Equipped with powerful processors and plenty of storage, even the smallest tablets are capable of running full Windows, complete with the Office suite. While not as popular as smartphones, tablets are still widely used to socialize, communicate, plan events, and book trips. Some smartphones are equipped with screens as large as 6.4 inches, while many tablets come with the ability to make voice calls over cellular network. All this makes it difficult to draw a line between a phone (or phablet) and a tablet. Every smartphone on the market has a camera that, unlike a bigger (and possibly better) camera, is always accessible. As a result, an average smartphone contains more photos and videos than a dedicated camera; sometimes, it's gigabytes worth of images and video clips. Smartphones are also storage devices. They can be used (and are used) to keep, carry, and exchange information. Smartphones connected to a corporate network may have access to files and documents not meant to be exposed. Uncontrolled access to corporate networks from employees' smartphones can (and does) cause leaks of highly-sensitive information. Employees come and go. With many companies allowing or even encouraging bring your own device policies, controlling the data that is accessible to those connecting to a corporate network is essential. What You Get Depends on What You Have Unlike personal computers that basically present a single source of information (the device itself consisting of hard drive(s) and volatile memory), mobile forensics deals with multiple data sources. Depending on the sources that are available, investigators may use one or the other tool to acquire information. The mobile device If you have access to the mobile device, you can attempt to perform physical or logical acquisition. Depending on the device itself (hardware) and the operating system it is running, this may or may not be possible. However, physical acquisition still counts as the most complete and up-to-date source of evidence among all available. Generally speaking, physical acquisition is available for most Android smartphones and tablets, older Apple hardware (iPhones up to iPhone 4, the original iPad, iPad mini, and so on), and recent Apple hardware with known passcode. As a rule, Apple devices can only be physically acquired if jailbroken. 
Since a jailbreak obtains superuser privileges by exploiting a vulnerability in iOS, and Apple actively fixes such vulnerabilities, physical acquisition of iOS devices remains iffy. A physical acquisition technique has been recently developed for some Windows phone devices using Cellebrite Universal Forensic Extraction Device (UFED). Physical acquisition is also available for 64-bit Apple hardware (iPhone 5S and newer, iPad mini 2, and so on). It is worth noting that physical acquisition of 64-bit devices is even more restrictive compared to the older 32-bit hardware, as it requires not only jailbreaking the device and unlocking it with a passcode, but also removing the said passcode from the security settings. Interestingly, according to Apple, even Apple itself cannot extract information from 64-bit iOS devices running iOS 8 and newer, even if they are served a court order. Physical acquisition is available on a limited number of BlackBerry smartphones running BlackBerry OS 7 and earlier. For BlackBerry smartphones, physical acquisition is available for unlocked BlackBerry 7 and lower devices, where supported, using Cellebrite UFED Touch/4PC through the bootloader method. For BlackBerry 10 devices where device encryption is not enabled, a chip-off can successfully acquire the device memory by parsing the physical dump using Cellebrite UFED. Personal computer Notably, the user's personal computer can help acquiring mobile evidence. The PC may contain the phone's offline data backups (such as those produced by Apple iTunes) that contain most of the information stored in the phone and available (or unavailable) during physical acquisition. Lockdown records are created when an iOS device is physically connected to the computer and authorized through iTunes. Lockdown records may be used to gain access to an iOS device without entering the passcode. In addition, the computer may contain binary authentication tokens that can be used to access respective cloud accounts linked to user's mobile devices. Access to cloud storage Many smartphones and tablets, especially those produced by Apple, offer the ability to back up information into an online cloud. Apple smartphones, for example, will automatically back up their content to Apple iCloud every time they are connected to a charger within the reach of a known Wi-Fi network. Windows phone devices exhibit similar behavior. Google, while not featuring full cloud backups like Apple or Microsoft, collects and retains even more information through Google Mobile Services (GMS). This information can also be pulled from the cloud. Since cloud backups are transparent, non-intrusive and require no user interaction, they are left enabled by default by many smartphone users, which makes it possible for an investigator to either acquire the content of the cloud storage or request it from the respective company with a court order. In order to successfully access the phone's cloud storage, one needs to know the user's authentication credentials (login and password). It may be possible to access iCloud by using binary authentication tokens extracted from the user's computer. With manufacturers quickly advancing in their security implementations, cloud forensics is quickly gaining importance and recognition among digital forensic specialists. Stages of mobile forensics This section will briefly discuss the general stages of mobile forensics and is not intended to provide a detailed explanation of each stage. 
There is more than sufficient documentation, easily accessible on the Internet, that provides an intimate level of detail regarding the stages of mobile forensics. The most important concept for the reader to understand is this: have the least impact on the mobile device during all the stages. In other words, an examiner should first work on the continuum from the least-intrusive method to the most-intrusive method, which can be dictated by the type of data that needs to be obtained from the mobile device and the complexity of the hardware/software of the mobile device.

Stage one – device seizure

This stage pertains to the physical seizure of the device so that it comes under the control and custody of the investigator/examiner. Consideration must also be given to the legal authority or written consent to seize, extract, and search this data. The physical condition of the device at the time of seizure should be noted, ideally through digital photographic documentation and written notes, such as:

Is the device damaged? If yes, document the type of damage.
Is the device on or off?
What is the device date and time if the device is on?
If the device is on, what apps are running or observable on the device desktop?
If the device is on, is the device desktop accessible to check for passcode and security settings?

Several other aspects of device seizure are described in the following sections, as they will affect post-seizure analysis: radio isolation, turning the device off if it is on, remote wipe, and anti-forensics.

Seizing – what and how to seize?

When it comes to properly acquiring a mobile device, one must be aware of the many differences in how computers and mobile devices operate. Seizing, handling, storing, and extracting mobile devices must follow a different route compared to desktop and even laptop computers. Unlike PCs, which can be either online or offline (including the energy-saving states of sleep and hibernation), smartphones and tablets use a different, always-connected modus operandi. Tremendous amounts of activity are carried out in the background, even while the device is apparently sleeping. Activities can be scheduled or triggered by a large number of events, including push events from online services and events that are initiated remotely by the user.

Another thing to consider when acquiring a mobile device is security. Mobile devices are carried around a lot, and they are designed to be inherently more secure than desktop PCs. Non-removable storage and soldered RAM chips, optional or enforced data encryption, remote kill switches, secure lock screens, and locked bootloaders are just a few security measures to be mentioned.

The use of Faraday bags

Faraday bags are commonly used to temporarily store seized devices without powering them down. A Faraday bag blocks wireless connectivity to cellular networks, Wi-Fi, Bluetooth, satellite navigation, and any other radios used in mobile devices. Faraday bags are normally designed to shield the range of radio frequencies used by local cellular carriers and satellite navigation (typically the 700-2,600 MHz range), as well as the 2.4-5 GHz range used by Wi-Fi networks and Bluetooth. Many Faraday bags are made of specially coated metallic shielding material that blocks a wide range of radio frequencies.

Keeping the power on

When dealing with a seized device, it is essential to prevent the device from powering off. Never switching off a working device is one thing; preventing it from powering down is another.
Since mobile devices consume power even while the display is off, the standard practice is to connect the device to a charger and place it into a wireless-blocking Faraday bag. This will prevent the mobile device from shutting down after reaching the low-power state. Why exactly do we need this procedure? The thing is, you may be able to extract more information from a device that was used (unlocked at least once) after the last boot cycle compared to a device that boots up in your lab and whose passcode you don't know. To illustrate the potential outcome, let's say you seized an iPhone that is locked with an unknown passcode. The iPhone happens to be jailbroken, so you can attempt to use Elcomsoft iOS Forensic Toolkit to extract information. If the device is locked and you don't know the passcode, you will have access to a very limited set of data:

Recent geolocation information: Since the main location database remains encrypted, it is only possible to extract limited location data. This limited location data is only accessible if the device was unlocked at least once after the boot has completed. As a result, if you keep the device powered on, you may pull recent geolocation history from this device. If, however, the device shuts down and is only powered on in the lab, the geolocation data will remain inaccessible until the device is unlocked.

Incoming calls (numbers only) and text messages: Incoming text messages are temporarily retained unencrypted before the first unlock after cold boot. Once the device is unlocked for the first time after cold boot, the messages will be transferred into the main encrypted database. This means that acquiring a device that was never unlocked after a cold start will only allow access to text messages received by the device during the time it remained locked after the boot. If the iPhone being acquired was unlocked at least once after it was booted (for example, if the device was seized in a turned-on state), you may be able to access significantly more information. The SMS database is decrypted on first unlock, allowing you to pull all text messages and not just those that were received while the device remained locked.

App and system logs (installs and updates, net access logs, and so on).

SQLite temp files, including write-ahead logs (WAL): These WAL files may include messages received by applications such as Skype, Viber, Facebook Messenger, and so on. Once the device is unlocked, the data is merged with the corresponding apps' main databases. When extracting a device after a cold boot (never unlocked), you will only have access to notifications received after the boot. If, however, you are extracting a device that was unlocked at least once after booting up, you may be able to extract the complete database with all messages (depending on the data protection class selected by the developer of a particular application).

Dealing with the kill switch

Mobile operating systems such as Apple iOS, recent versions of Google Android, all versions of BlackBerry OS, and Microsoft Windows phone 8/8.1 (Windows 10 mobile) have an important security feature designed to prevent unauthorized persons from accessing information stored in the device. The so-called kill switch enables the owner to lock or erase the device if the device is reported lost or stolen. While used by legitimate customers to safeguard their data, this feature is also used by suspects who may attempt to remotely destroy evidence if their mobile device is seized.
In a recent case, reported as Morristown man accused of remotely wiping nude photos of underage girlfriend on confiscated phone (http://wate.com/2015/04/07/morristown-man-accused-of-remotely-wiping-nude-photos-of-underage-girlfriend-on-confiscated-phone/), the accused used the remote kill switch to wipe data stored on his iPhone. Using a Faraday bag is essential to prevent suspects from accessing the kill switch. However, even if the device in question has already been wiped remotely, it does not necessarily mean that all the data is completely lost. Apple iOS, Windows phone 8/8.1, Windows 10 mobile, and the latest version of Android (Android 6.0 Marshmallow) support cloud backups (although Android cloud backups contain limited amounts of data). When it comes to BlackBerry 10, the backups are strictly offline, yet the decryption key is tied to the user's BlackBerry ID and stored on BlackBerry servers. The ability to automatically upload backup copies of data into the cloud is a double-edged sword. While offering more convenience to the user, cloud backups make remote acquisition techniques possible. Depending on the platform, all or some information from the device can be retrieved from the cloud, either by making use of a forensic tool (for example, Elcomsoft Phone Breaker or Oxygen Forensic Detective) or by serving a government request to the corresponding company (Apple, Google, Microsoft, or BlackBerry).

Mobile device anti-forensics

There are numerous anti-forensic methods that target the evidence acquisition methods used by law enforcement. It is common for the police to seize a device, connect it to a charger, and place it into a Faraday bag. The anti-forensic method used by some technologically advanced suspects on Android phones involves rooting the device and installing a tool that monitors the wireless connectivity of the device. If the tool detects that the device has been idle, connected to a charger, and without wireless connectivity for a predefined period, it performs a factory reset. Since there is no practical way of determining whether such protection is active on the device prior to acquisition, simply following established guidelines presents a risk of evidence being destroyed. If there are reasonable grounds to suspect such a system may be in place, the device can be powered down (while accepting the risk of full-disk encryption preventing subsequent acquisition).

While rooting or jailbreaking devices generally makes them susceptible to advanced acquisition methods, we've seen users who unlocked their bootloader to install a custom recovery, protected access to this custom recovery with a password, and relocked the bootloader. A locked bootloader combined with password-protected access to a custom recovery is an extremely tough combination to break. In several reports, we've become aware of the following anti-forensic technique used by a group of cyber criminals. The devices were configured to automatically wipe user data if certain predefined conditions were met. In this case, the predefined conditions triggering the wipe matched the typical acquisition scenario of placing the device inside a Faraday bag and connecting it to a charger. Once the device reported being charged without wireless connectivity (but not in airplane mode) for a certain amount of time, a special tool triggered a full factory reset of the device. Notably, this is only possible on rooted/jailbroken devices. So far, this anti-forensic technique has not received wide recognition.
It's used by a small minority of smartphone users, mostly those into cybercrime. The probability of a smartphone being configured this way is low enough that changes to published guidelines are generally not considered necessary.

Stage two – data acquisition

This stage refers to the various methods of extracting data from the device. The methods of data extraction that can be employed are influenced by the following:

Type of mobile device: The make, model, hardware, software, and vendor configuration.
Availability of a diverse set of hardware and software extraction/analysis tools at the examiner's disposal: There is no tool that does it all; an examiner needs to have access to a number of tools that can assist with data extraction.
Physical state of the device: Has the device been exposed to damage, such as physical damage, water, or biological fluids such as blood? Often the type of damage can dictate the types of data extraction measures that will be employed on the device.

There are several different types of data extractions that determine how much data is obtained from the device:

Physical: A binary image of the device has the most potential to recover deleted data and obtains the largest amount of data from the device. This can be the most challenging type of extraction to obtain.
File system: This is a representation of the files and folders from the user area of the device, and can contain deleted data, particularly within databases. This method will contain less data than a physical data extraction.
Logical: This acquires the least amount of data from the device. Examples of this are call history, messages, contacts, pictures, movies, audio files, and so on. This is referred to as low-hanging fruit. No deleted data or source files are obtained. Often the resulting output will be a series of reports produced by the extraction tool. This is often the easiest and quickest type of extraction.
Photographic documentation: This method is typically used when all other data extraction avenues are exhausted. In this procedure, the examiner uses a digital camera to photographically document the content being displayed by the device. This is a time-consuming method when there is an extensive amount of information to photograph.

Specific data-extraction concepts are explained here: bootloader, jailbreak, rooting, ADB, debug, and SIM cloning.

Root, jailbreak, and unlocked bootloader

Rooting or jailbreaking mobile devices in general makes them susceptible to a wide range of exploits. In the context of mobile forensics, rooted devices are easy to acquire since many forensic acquisition tools rely on root/jailbreak to perform physical acquisition. Devices with unlocked bootloaders allow booting unsigned code, effectively permitting full access to the device even if it's locked with a passcode. However, if the device is encrypted and the passcode is part of the encryption key, bypassing passcode protection may not automatically enable access to encrypted data. Rooting or jailbreaking enables unrestricted access to the filesystem, bypassing the operating system's security measures and allowing the acquisition tool to read information from protected areas. This is one of the reasons for banning rooted devices (as well as devices with unlocked bootloaders) from corporate premises. Installing a jailbreak on iOS devices always makes the phone less secure, enabling third-party code to be injected and run at a system level.
This fact is well known to forensic experts who make use of tools such as Cellebrite UFED or Elcomsoft iOS Forensic Toolkit to perform physical acquisition of jailbroken Apple smartphones. Some Android devices allow unlocking the bootloader, which enables easy and straightforward rooting of the device. While not all Android devices with unlocked bootloaders are rooted, installing root access during acquisition of a bootloader-unlocked device has a much higher chance of success compared to devices that are locked down. Tools such as Cellebrite UFED, Forensic Toolkit (FTK), Oxygen Forensic Suite, and many others can make use of the phone's root status in order to inject acquisition applets and image the device. Unlocked bootloaders can be exploited as well if you use UFED. In addition, a bootloader-level exploit exists and is used in UFED to perform acquisition of many Android and Windows phone devices based on the Qualcomm reference platform, even if their bootloader is locked.

Android ADB debugging

Android has a hidden Developer Options menu. Accessing this menu requires a conscious effort of tapping on the OS build number multiple times. Some users enable Developer Options out of curiosity. Once enabled, the Developer Options menu may or may not be possible to hide. Among other things, the Developer Options menu lists an option called USB debugging or ADB debugging. If enabled, this option allows controlling the device via the ADB command line, which in turn allows experts using Android debugging tools (adb.exe) to connect to the device from a PC even if it's locked with a passcode. Activated USB debugging exposes a lot of possibilities and can make acquisition possible even if the device is locked with a passcode.

Memory card

Most smartphone devices and tablets (except iOS devices) have the capability of increasing their storage capacity by using a microSD card. An examiner would remove the memory card from the mobile device/tablet, use either hardware or software write-protection methods, and create a bit-stream forensic image of the memory card, which can then be analyzed using forensic software tools, such as X-Ways, Autopsy (Sleuth Kit), Forensic Explorer (GetData), EnCase, or FTK (AccessData).

Stage three – data analysis

This stage of mobile device forensics entails analysis of the acquired data from the device and its components (SIM card and memory card, if present). Most mobile forensic acquisition tools that acquire the data from the device memory can also parse the extracted data and provide the examiner with functionality within the tool to perform analysis. This entails review of any non-deleted and deleted data. When reviewing non-deleted data, it would be prudent to also perform a manual review of the device to ensure that the extracted and parsed data matches what is displayed by the device. As mobile device storage capacities have increased, it is suggested that a limited subset of data records from the relevant areas be reviewed. So, for example, if a mobile device has over 200 call records, several records from missed calls, incoming calls, and outgoing calls can be checked on the device against the corresponding records in the extracted data. By doing this manual review, it is then possible to discover any discrepancies in the extracted data. Manual device review can only be completed when the device is still in the custody of the examiner. There are situations where, after the data extraction has been completed, the device is released back to the investigator or owner.
In situations such as this, the examiner should document that very limited or no manual verification can be performed due to these circumstances. Finally, the reader should be keenly aware that more than one analysis tool can be used to analyze the acquired data. Multiple analysis tools should be considered, especially when a specific type of data cannot be parsed by one tool, but can be analyzed by another. Summary In this article, we've covered the basics of mobile forensics. We discussed the amount of evidence available in today's mobile devices and covered the general steps of mobile forensics. We also discussed how to seize, handle, and store mobile devices, and looked at how criminals can use technology to prevent forensic access. We provided a general overview of the acquisition and analysis steps. For more information on mobile forensics, you can refer to the following books by Packt: Practical Mobile Forensics - Second Edition: https://www.packtpub.com/networking-and-servers/practical-mobile-forensics-second-edition Mastering Mobile Forensics: https://www.packtpub.com/networking-and-servers/mastering-mobile-forensics Learning iOS Forensics: https://www.packtpub.com/networking-and-servers/learning-ios-forensics  Resources for Article: Further resources on this subject: Mobile Forensics [article] Mobile Forensics and Its Challanges [article] Introduction to Mobile Forensics [article]

Public Key Infrastructure (PKI) and other Concepts in Cryptography for CISSP Exam

Packt
28 Oct 2009
10 min read
Public key infrastructure

Public Key Infrastructure (PKI) is a framework that enables the integration of various services that are related to cryptography. The aim of PKI is to provide confidentiality, integrity, access control, authentication, and most importantly, non-repudiation. Non-repudiation is a concept, or a way, to ensure that the sender or receiver of a message cannot deny either sending or receiving such a message in the future. One of the important audit checks for non-repudiation is a time stamp. The time stamp is an audit trail that provides information about the time the message is sent by the sender and the time the message is received by the receiver.

Encryption and decryption, digital signature, and key exchange are the three primary functions of a PKI. RSA and elliptic curve algorithms provide all three primary functions: encryption and decryption, digital signatures, and key exchanges. The Diffie-Hellman algorithm supports key exchanges, while the Digital Signature Standard (DSS) is used in digital signatures. Public key encryption is the encryption methodology used in PKI and was initially proposed by Diffie and Hellman in 1976. The algorithm is based on mathematical functions and uses asymmetric cryptography, that is, a pair of keys.

Consider a simple document-signing function as an example. In PKI, every user will have two keys, known as a "key pair". One key is known as a private key and the other is known as a public key. The private key is never revealed and is kept with the owner, and the public key is accessible by everyone and is stored in a key repository. A key can be used to encrypt as well as to decrypt a message. Most importantly, a message that is encrypted with a private key can only be decrypted with the corresponding public key. Similarly, a message that is encrypted with a public key can only be decrypted with the corresponding private key.

In this example, Bob wants to send a confidential document to Alice electronically. Bob has four issues to address before this electronic transmission can occur:

Ensuring the contents of the document are encrypted such that the document is kept confidential.
Ensuring the document is not altered during transmission.
Since Alice does not know Bob, he has to somehow prove that the document is indeed sent by him.
Ensuring Alice receives the document and that she cannot deny receiving it in the future.

PKI supports all of the above four requirements with methods such as secure messaging, message digests, digital signatures, and non-repudiation services.

Secure messaging

To ensure that the document is protected from eavesdropping and not altered during transmission, Bob will first encrypt the document using Alice's public key. This ensures two things: one, that the document is encrypted, and two, that only Alice can open it, as the document requires Alice's private key to open it. To summarize, encryption is accomplished using the public key of the receiver, and the receiver decrypts with his or her private key. In this method, Bob can ensure that the document is encrypted and only the intended receiver (Alice) can open it. However, Bob cannot ensure that the contents are not altered (integrity) during transmission by document encryption alone.

Message digest

In order to ensure that the document is not altered during transmission, Bob performs a hash function on the document. The hash value is a computational value based on the contents of the document. This hash value is known as the message digest.
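As a rough illustration (this specific command is our addition, and document.pdf is only a placeholder file name), a SHA-256 message digest can be computed with the OpenSSL command-line tool:

    # Bob computes a SHA-256 message digest of the document he is about to send
    openssl dgst -sha256 document.pdf
    # Output has the form: SHA256(document.pdf)= <64 hexadecimal characters>

Any change to the file, even a single byte, produces a completely different digest, which is what makes the comparison described next meaningful.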
By performing the same hash function on the decrypted document, Alice can obtain the message digest and compare it with the one sent by Bob to ensure that the contents are not altered. This process ensures the integrity requirement.

Digital signature

In order to prove that the document is sent by Bob to Alice, Bob needs to use a digital signature. Using a digital signature means applying the sender's private key to the message, or document, or to the message digest. This process is known as signing. Only by using the sender's public key can the message be decrypted. Bob will encrypt the message digest with his private key to create a digital signature. In this scenario, Bob will encrypt the document using Alice's public key and sign it using his digital signature. This ensures that Alice can verify that the document was sent by Bob, by verifying the digital signature (created with Bob's private key) using Bob's public key. Remember that a private key and the corresponding public key are mathematically linked. Alice can also verify that the document is not altered by validating the message digest, and can open the encrypted document using her private key. Message authentication is an authenticity verification procedure that facilitates the verification of the integrity of the message as well as the authenticity of the source from which the message is received.

Digital certificate

By digitally signing the document, Bob has assured that the document is sent by him to Alice. However, he has not yet proved that he is Bob. To prove this, Bob needs to use a digital certificate. A digital certificate is an electronic identity issued to a person, system, or organization by a competent authority after verifying the credentials of the entity. A digital certificate contains a public key that is unique for each entity. A certification authority issues digital certificates. In PKI, digital certificates are used for authenticity verification of an entity. An entity can be an individual, system, or organization.

An organization that is involved in issuing, distributing, and revoking digital certificates is known as a Certification Authority (CA). A CA acts as a notary by verifying an entity's identity. One of the important PKI standards pertaining to digital certificates is X.509. It is a standard published by the International Telecommunication Union (ITU) that specifies the standard format for digital certificates. PKI also provides key exchange functionality that facilitates the secure exchange of public keys such that the authenticity of the parties can be verified.

Key management procedures

Key management consists of four essential procedures concerning public and private keys. They are as follows:

Secure generation of keys—Ensures that private and public keys are generated in a secure manner.
Secure storage of keys—Ensures that keys are stored securely.
Secure distribution of keys—Ensures that keys are not lost or modified during distribution.
Secure destruction of keys—Ensures that keys are destroyed completely once the useful life of the key is over.

Type of keys

NIST Special Publication 800-57, titled Recommendation for Key Management - Part 1: General, specifies the following nineteen types of keys:

Private signature key—It is a private key of a public key pair and is used to generate digital signatures. It is also used to provide authentication, integrity, and non-repudiation.
Public signature verification key—It is the public key of the asymmetric (public) key pair. It is used to verify the digital signature.
Symmetric authentication key—It is used with symmetric key algorithms to provide assurance of the integrity and source of messages.
Private authentication key—It is the private key of the asymmetric (public) key pair. It is used to provide assurance of the integrity of information.
Public authentication key—Public key of an asymmetric (public) pair that is used to determine the integrity of information and to authenticate the identity of entities.
Symmetric data encryption key—It is used to apply confidentiality protection to information.
Symmetric key wrapping key—It is a key-encrypting key that is used to encrypt other symmetric keys.
Symmetric and asymmetric random number generation keys—They are used to generate random numbers.
Symmetric master key—It is a master key that is used to derive other symmetric keys.
Private key transport key—They are the private keys of asymmetric (public) key pairs, which are used to decrypt keys that have been encrypted with the associated public key.
Public key transport key—They are the public keys of asymmetric (public) key pairs that are used to encrypt keys so that they can only be decrypted with the associated private key.
Symmetric agreement key—It is used to establish keys such as key wrapping keys and data encryption keys using a symmetric key agreement algorithm.
Private static key agreement key—It is a private key of asymmetric (public) key pairs that is used to establish keys such as key wrapping keys and data encryption keys.
Public static key agreement key—It is a public key of asymmetric (public) key pairs that is used to establish keys such as key wrapping keys and data encryption keys.
Private ephemeral key agreement key—It is a private key of asymmetric (public) key pairs used only once to establish one or more keys such as key wrapping keys and data encryption keys.
Public ephemeral key agreement key—It is a public key of asymmetric (public) key pairs that is used in a single key establishment transaction to establish one or more keys.
Symmetric authorization key—This key is used to provide privileges to an entity using a symmetric cryptographic method.
Private authorization key—It is a private key of an asymmetric (public) key pair that is used to provide privileges to an entity.
Public authorization key—It is a public key of an asymmetric (public) key pair that is used to verify privileges for an entity that knows the associated private authorization key.

Key management best practices

Key usage refers to using a key for a cryptographic process, and should be limited to using a single key for only one cryptographic process. This is to ensure that the strength of the security provided by the key is not weakened. When a specific key is authorized for use by legitimate entities for a period of time, or the effect of a specific key for a given system lasts for a specific period, that time span is known as a cryptoperiod. The purpose of defining a cryptoperiod is to limit successful cryptanalysis by a malicious entity. Cryptanalysis is the science of analyzing and deciphering codes and ciphers.
The following assurance requirements are part of the key management process:

Integrity protection—Assuring the source and format of the keying material by verification
Domain parameter validity—Assuring parameters used by some public key algorithms during the generation of key pairs and digital signatures, and the generation of shared secrets that are subsequently used to derive keying material
Public key validity—Assuring that the public key is arithmetically correct
Private key possession—Assuring that possession of the private key is obtained before using the public key

Cryptographic algorithm and key size selection are the two important key management parameters that provide adequate protection to the system and the data throughout their expected lifetime.

Key states

A cryptographic key goes through different states from its generation to destruction. These states are defined as key states. The movement of a cryptographic key from one state to another is known as a key transition. NIST SP800-57 defines the following six key states:

Pre-activation state—The key has been generated, but not yet authorized for use
Active state—The key may be used to cryptographically protect information
Deactivated state—The cryptoperiod of the key has expired, but the key is still needed to perform cryptographic operations
Destroyed state—The key is destroyed
Compromised state—The key has been released to or determined by an unauthorized entity
Destroyed compromised state—The key is destroyed after a compromise, or the compromise is found after the key is destroyed
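To tie together the signing flow described in this article, here is a rough end-to-end sketch using the OpenSSL command-line tool. The commands and file names (bob_private.pem, document.pdf, and so on) are illustrative placeholders added here, not part of the original article:

    # Generate Bob's key pair (the private key is kept secret, the public key is published)
    openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out bob_private.pem
    openssl pkey -in bob_private.pem -pubout -out bob_public.pem

    # Bob signs the SHA-256 digest of the document with his private key
    openssl dgst -sha256 -sign bob_private.pem -out document.sig document.pdf

    # Alice verifies the signature with Bob's public key; "Verified OK" confirms integrity and origin
    openssl dgst -sha256 -verify bob_public.pem -signature document.sig document.pdf

In a full PKI deployment, Bob's public key would be distributed inside an X.509 certificate issued by a CA rather than as a bare .pem file, which is what allows Alice to trust that the key really belongs to Bob.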

Experts discuss Dark Patterns and deceptive UI designs: What are they? What do they do? How do we stop them?

Sugandha Lahoti
29 Jun 2019
12 min read
Dark patterns are often used online to deceive users into taking actions they would otherwise not take under effective, informed consent. Dark patterns are generally used by shopping websites, social media platforms, mobile apps, and services as a part of their user interface design choices. Dark patterns can lead to financial loss, trick users into giving up vast amounts of personal data, or induce compulsive and addictive behavior in adults and children. Using dark patterns is unambiguously unlawful in the United States (under Section 5 of the Federal Trade Commission Act and similar state laws), the European Union (under the Unfair Commercial Practices Directive and similar member state laws), and numerous other jurisdictions.

Earlier this week, at the Russell Senate Office Building, a panel of experts met to discuss the implications of dark patterns in the session, Deceptive Design and Dark Patterns: What are they? What do they do? How do we stop them? The session included remarks from Senators Mark Warner and Deb Fischer, sponsors of the DETOUR Act, and a panel of experts including Tristan Harris (Co-Founder and Executive Director, Center for Humane Technology). The entire panel of experts included:

Tristan Harris (Co-Founder and Executive Director, Center for Humane Technology)
Rana Foroohar (Global Business Columnist and Associate Editor, Financial Times)
Amina Fazlullah (Policy Counsel, Common Sense Media)
Paul Ohm (Professor of Law and Associate Dean, Georgetown Law School), also the moderator
Katie McInnis (Policy Counsel, Consumer Reports)
Marshall Erwin (Senior Director of Trust & Security, Mozilla)
Arunesh Mathur (Dept. of Computer Science, Princeton University)

Dark patterns are growing in social media platforms, video games, shopping websites, and are increasingly used to target children

The expert session was inaugurated by Arunesh Mathur (Dept. of Computer Science, Princeton University), who talked about his new study by researchers from Princeton University and the University of Chicago. The study suggests that shopping websites are abundant with dark patterns that rely on consumer deception. The researchers conducted a large-scale study, analyzing almost 53K product pages from 11K shopping websites to characterize and quantify the prevalence of dark patterns. In doing so, they discovered 1,841 instances of dark patterns on shopping websites, which together represent 15 types of dark patterns. One of the dark patterns was Sneak into Basket, which adds additional products to users' shopping carts without their consent. For example, you would buy a bouquet on a website and the website, without your consent, would add a greeting card in the hopes that you will actually purchase it.

Katie McInnis agreed and added that dark patterns not only undermine the choices that are available to users on social media and shopping platforms, but they can also cost users money. User interfaces are sometimes designed to push a user away from protecting their privacy, making them tough to evaluate.

Amina Fazlullah, Policy Counsel at Common Sense Media, said that dark patterns are also being used to target children. Manipulative apps use design techniques to shame or confuse children into in-app purchases or try to keep them on the app for longer. Most children are unable to discern these manipulative techniques.
Sometimes the screen will have icons or buttons that appear to be a part of gameplay, and children will click on them not realizing that they're either being asked to make a purchase, being shown an ad, or being directed onto another site. There are also games which ask for payments or microtransactions to continue the game.

Mozilla uses product transparency to curb dark patterns

Marshall Erwin, Senior Director of Trust & Security at Mozilla, talked about the negative effects of dark patterns and how Mozilla makes its own products more transparent. They have a set of checks and principles in place to avoid dark patterns:

No surprises: Whatever users figure out or come to understand about what is happening in the browser should be consistent with their expectations. If users are surprised, the browser needs to make a change, either by stopping the activity entirely or by creating additional transparency that helps people understand.

Anti-tracking technology: Cross-site tracking, enabled by cookies, is one of the most pervasive and pernicious dark patterns across the web today. Browsers should take action to decrease the attack surface in the browser and actively protect people from those patterns online. Mozilla and Apple have introduced anti-tracking technology to actively intervene and protect people from the diverse parties that are probably not trustworthy.

DETOUR Act by Senators Warner and Fischer

In April, Warner and Fischer introduced the Deceptive Experiences To Online Users Reduction (DETOUR) Act, bipartisan legislation to prohibit large online platforms from using dark patterns to trick consumers into handing over their personal data. The act focuses on the activities of large online service providers (over a hundred million users visiting in a given month). Under this act, you cannot use practices that trick users in order to obtain information or consent. There will be new controls on conducting 'psychological experiments on your users', and you will no longer be able to target children under 13 with the goal of hooking them into your service. It extends additional rulemaking and enforcement abilities to the Federal Trade Commission.

"Protecting users' personal data and user autonomy online are truly bipartisan issues": Senator Mark Warner

In his presentation, Warner talked about how 2019 is the year when we need to recognize dark patterns and their ongoing manipulation of American consumers. While we've all celebrated the benefits that social media has brought to communities, there is also an enormous dark underbelly, he says. It is important that Congress steps up and that senators play a role in ensuring that Americans and their private data are not misused or manipulated going forward. Protecting users' personal data and user autonomy online are truly bipartisan issues. This is not a liberal-versus-conservative issue; it's much more future versus past, and about how we get this future right in a way that takes advantage of social media tools but also puts some of the appropriate constraints in place.

He says that the driving notion behind the DETOUR Act is that users should have choice and autonomy when it comes to their personal data. When a company like Facebook asks you to upload your phone contacts or some other highly valuable data to their platform, you ought to have a simple choice: yes or no.
Companies that run experiments on you without your consent are coercive, and the DETOUR Act aims to put appropriate protections in place that defend users' ability to make informed choices. In addition to prohibiting large online platforms from using dark patterns to trick consumers into handing over their personal data, the bill would also require informed consent for behavioral experimentation. In the process, the bill will send a clear message to the platform companies and the FTC that they are now in the business of preserving users' autonomy when it comes to the use of their personal data. The goal, Warner says, is simple: to bring some transparency to what remains a very opaque market and give consumers the tools they need to make informed choices about how and when to share their personal information.

"Curbing the use of dark patterns will be foundational to increasing trust online": Senator Deb Fischer

Fischer argued that tech companies are increasingly tailoring users' online experiences in ways that are more granular. On one hand, she says, you get a more personalized user experience and platforms are more responsive; however, it's this variability that allows companies to take that design just a step too far. Companies are constantly competing for users' attention, and this increases the motivation for more intrusive and invasive user design. The ability of online platforms to guide the visual interfaces that billions of people view is an incredible influence. It forces us to assess the impact of design on user privacy and well-being.

Fundamentally, the DETOUR Act would prohibit large online platforms from purposely using deceptive user interfaces, that is, dark patterns. The DETOUR Act would provide a better accountability system for improved transparency and autonomy online. The legislation would take an important step toward restoring the hidden options. It would give users a tool to get out of the maze that coaxes you to just click on 'I agree'. A privacy framework that involves consent cannot function properly if it doesn't ensure the user interface presents fair and transparent options. The DETOUR Act would enable the creation of a professional standards body which can register with the Federal Trade Commission. This would serve as a self-regulatory body to develop best practices for UI design, with the FTC as a backup.

She adds, "We need clarity for the enforcement of dark patterns that don't directly involve our wallets. We need policies that place value on user choice and personal data online. We need a stronger mechanism to protect the public interest when the goal for tech companies is to make people engage more and more. User consent remains weakened by the presence of dark patterns and unethical design. Curbing the use of dark patterns will be foundational to increasing trust online. The DETOUR Act does provide a key step in getting there."

"The DETOUR Act is calling attention to asymmetry and preventing deceptive asymmetry": Tristan Harris

Tristan says that companies are now competing not on manipulating your immediate behavior but on manipulating and predicting the future. For example, Facebook has something called loyalty prediction, which allows it to sell to an advertiser the ability to predict when you're going to become disloyal to a brand. It can sell that opportunity to another advertiser probably before you know you're going to switch. The DETOUR Act is a huge step in the right direction because it's about calling attention to asymmetry and preventing deceptive asymmetry.
We need a new relationship with this asymmetric power, by having a duty of care. It's about ensuring that asymmetrically powerful technologies are in the service of the systems that they are supposed to protect. He says we need to switch to a regenerative energy economy that actually treats attention as sacred, not directly tying profit to user extraction.

Top questions raised by the panel and online viewers

Does A/B testing result in dark patterns?

Dark patterns are often a result of A/B testing, where a designer may try things that lead to better engagement or nudge users in a way from which the company benefits. However, A/B testing isn't the problem; it's the intention behind how A/B testing is being used. Companies and other organizations should have oversight of the different experiments that they are conducting to see if A/B testing is actually leading to some kind of concrete harm. The challenge in the space is drawing a line between A/B testing features, optimizing for engagement, and decreasing friction.

Are consumers smart enough to tackle dark patterns on their own, or do we need legislation?

It's well established that children, whose brains are just developing, are unable to discern these types of deceptive techniques, so especially for kids, these types of practices should be banned. For vulnerable families who are juggling all sorts of concerns around income and access to jobs, transportation, and health care, putting this on their plate as well is just unreasonable. Dark patterns are deployed for an array of opaque reasons the average user will never recognize. From a consumer perspective, going through and identifying dark pattern techniques (which these platform companies have spent hundreds of thousands of dollars developing to be as opaque and as tricky as possible) is an unrealistic expectation to put on consumers. This is why the DETOUR Act and this type of regulation are absolutely necessary and the only way forward.

What is it about the largest online providers that makes us want to focus on them first or only? Is it their scale, or do they have more powerful dark patterns? Is it because they're just harming more people, or is it politics?

Sometimes larger companies stay wary of indulging in dark patterns because they carry a greater risk of getting caught and facing PR backlash. However, they do engage in manipulative practices, and that warrants a lot of attention. Moreover, targeting bigger companies is just one part of a more comprehensive privacy enforcement environment. Hitting companies that have a large number of users is also great for consumer engagement. Obviously there is a need to target more broadly, but this is a starting point.

If Facebook were to suddenly reclass itself and its advertising business model, would you still trust them?

No, the leadership that's in charge of Facebook now cannot be trusted, especially given the organizational cultures that have been building. There are change efforts going on inside of Google and Facebook right now, but they are getting gridlocked. Even if employees want to see policies being changed, they still have bonus structures and employee culture to keep in mind.

We recommend you go through the full hearing here. You can read more about the DETOUR Act here. U.S. senators introduce a bipartisan bill that bans social media platforms from using 'dark patterns' to trick its users.
How social media enabled and amplified the Christchurch terrorist attack A new study reveals how shopping websites use ‘dark patterns’ to deceive you into buying things you may not want

Planning and Preparation

Packt
05 Jul 2017
9 min read
In this article by Jason Beltrame, the author of the book Penetration Testing Bootcamp, we will see that proper planning and preparation are key to a successful penetration test. It is definitely not as exciting as some of the tasks we will do within the penetration test later, but it lays the foundation of the penetration test. There are a lot of moving parts to a penetration test, and you need to make sure that you stay on the correct path and know just how far you can and should go. The last thing you want to do in a penetration test is cause a customer outage because you took down their application server with an exploit test (unless, of course, they want us to get to that depth) or scanned the wrong network. Performing any of these actions would cause our penetration-testing career to be a rather short-lived one. In this article, the following topics will be covered:

Why does penetration testing take place?
Building the systems for the penetration test
Penetration system software setup

Why does penetration testing take place?

There are many reasons why penetration tests happen. Sometimes, a company may want to have a stronger understanding of their security footprint. Sometimes, they may have a compliance requirement that they have to meet. Either way, understanding why penetration testing is happening will help you understand the goal of the company. Plus, it will also let you know whether you are performing an internal penetration test or an external penetration test. External penetration tests will follow the flow of an external user and see what they have access to and what they can do with that access. Internal penetration tests are designed to test internal systems, so typically, the penetration box will have full access to that environment, being able to test all software and systems for known vulnerabilities. Since tests have different objectives, we need to treat them differently; therefore, our tools and methodologies will be different.

Understanding the engagement

One of the first tasks you need to complete prior to starting a penetration test is to have a meeting with the stakeholders and discuss various data points concerning the upcoming penetration test. This meeting could involve you as an external entity performing a penetration test for a client, or you as an internal security employee doing the test for your own company. The important element here is that the meeting should happen either way, and the same type of information needs to be discussed. During the scoping meeting, the goal is to discuss various items of the penetration test so that you have not only everything you need, but also full management buy-in with clearly defined objectives and deliverables. Full management buy-in is a key component of a successful penetration test. Without it, you may have trouble getting required information from certain teams, and you may face scope creep or general pushback.

Building the systems for the penetration test

With a clear understanding of expectations, deliverables, and scope, it is now time to start working on getting our penetration systems ready to go. For the hardware, I will be utilizing a decently powered laptop. The laptop specifications are a MacBook Pro with 16 GB of RAM, a 256 GB SSD, and a quad-core 2.3 GHz Intel i7 running VMware Fusion. I will also be using the Raspberry Pi 3. The Raspberry Pi 3 is a 1.2 GHz ARMv8 64-bit quad core, with 1 GB of RAM and a 32 GB microSD.
Obviously, there is quite a power discrepancy between the laptop and the Raspberry Pi. That is okay though, because I will be using both these devices differently. Any task that requires any sort of processing power will be done on the laptop. I love using the Raspberry Pi because of its small form factor and flexibility. It can be placed in just about any location we need, and if needed, it can be easily concealed.

For software, I will be using Kali Linux as my operating system of choice. Kali is a security-oriented Linux distribution that contains a bunch of security tools already installed. Its predecessor, BackTrack, was also a very popular security operating system. One of the benefits of Kali Linux is that it is also available for the Raspberry Pi, which is perfect in our circumstance. This way, we can have a consistent platform between the devices we plan to use in our penetration-testing labs. Kali Linux can be downloaded from their site at https://www.kali.org. For the Raspberry Pi, the Kali images are managed by Offensive Security at https://www.offensive-security.com. Even though I am using Kali Linux as my software platform of choice, feel free to use whichever software platform you feel most comfortable with. We will be using a bunch of open source tools for testing. A lot of these tools are available for other distributions and operating systems.

Penetration system software setup

Setting up Kali Linux on both systems is a bit different since they are different platforms. We won't be diving into a lot of detail on the install, but we will be hitting all the major points. This is the process you can use to get the software up and running. We will start with the installation on the Raspberry Pi:

1. Download the images from Offensive Security at https://www.offensive-security.com/kali-linux-arm-images/.
2. Open the Terminal app on OS X.
3. Using the utility xz, you can decompress the Kali image that was downloaded: xz -d kali-2.1.2-rpi2.img.xz
4. Next, insert the USB microSD card reader with the microSD card into the laptop and verify the disks that are installed so that you know the correct disk to put the Kali image on: diskutil list
5. Once you know the correct disk, you can unmount the disk to prepare to write to it: diskutil unmountDisk /dev/disk2
6. Now that you have the correct disk unmounted, you will want to write the image to it using the dd command. This process can take some time, so if you want to check on the progress, you can press Ctrl + T anytime: sudo dd if=kali-2.1.2-rpi2.img of=/dev/disk2 bs=1m
7. Since the image is now written to the microSD drive, you can eject it with the following command: diskutil eject /dev/disk2
8. You then remove the USB microSD card reader, place the microSD card in the Raspberry Pi, and boot it up for the first time. The default login credentials are as follows: Username: root, Password: toor
9. You then change the default password on the Raspberry Pi to make sure no one can get into it, with the following command: passwd
10. Making sure the software is up to date is important for any system, especially a secure penetration-testing system. You can accomplish this with the following commands: apt-get update, apt-get upgrade, apt-get dist-upgrade

After a reboot, you are ready to go on the Raspberry Pi. Next, it's on to setting up the Kali Linux install on the Mac. Since you will be installing Kali as a VM within Fusion, the process will vary compared to another hypervisor or installing on a bare metal system.
Personally, I like having the flexibility of OS X running alongside so that I can run commands on there as well:

1. Similar to the Raspberry Pi setup, you need to download the image. You will do that directly via the Kali website, which offers virtual images for download as well. If you go to select these, you will be redirected to the Offensive Security site at https://www.offensive-security.com/kali-linux-vmware-virtualbox-image-download/.
2. Now that you have the Kali Linux image downloaded, you need to extract the VMDK. We used 7z via the CLI to accomplish this task.
3. Since the VMDK is ready to import, you will need to go into VMware Fusion and navigate to File | New.
4. Click on Create a custom virtual machine. You can select the OS as Other | Other and click on Continue.
5. Now, you will need to import the previously decompressed VMDK. Click on the Use an existing virtual disk radio button, and hit Choose virtual disk. Browse to the VMDK. Click on Continue. Then, on the last screen, click on the Finish button. The disk should now start to copy. Give it a few minutes to complete.
6. Once completed, the Kali VM will boot. Log in with the credentials we used in the Raspberry Pi image: Username: root, Password: toor
7. You then need to change the default password that was set to make sure no one can get into it. Open up a terminal within the Kali Linux VM and use the following command: passwd
8. Make sure the software is up to date, like you did for the Raspberry Pi. To accomplish this, you can use the following commands: apt-get update, apt-get upgrade, apt-get dist-upgrade

Once this is complete, the laptop VM is ready to go.

Summary

Now that we have reached the end of this article, we should have everything that we need for the penetration test. Having had the scoping meeting with all the stakeholders, we were able to get answers to all the questions that we required. Once we completed the planning portion, we moved on to the preparation phase. In this case, the preparation phase involved setting up Kali Linux on the Raspberry Pi as well as setting it up as a VM on the laptop. We went through the steps of installing and updating the software on each platform, as well as some basic administrative tasks. Resources for Article: Further resources on this subject: Introducing Penetration Testing [article] Web app penetration testing in Kali [article] BackTrack 4: Security with Penetration Testing Methodology [article]

CNCF-led open source Kubernetes security audit reveals 37 flaws in Kubernetes cluster; recommendations proposed

Vincy Davis
09 Aug 2019
7 min read
Last year, the Cloud Native Computing Foundation (CNCF) initiated a process of conducting third-party security audits for its own projects. The aim of these security audits was to improve the overall security of the CNCF ecosystem. CoreDNS, Envoy, and Prometheus are some of the CNCF projects which underwent these audits, resulting in the identification of several security issues and vulnerabilities in the projects. With the help of the audit results, CoreDNS, Envoy, and Prometheus addressed their security issues and later provided users with documentation for the same.

CNCF CTO Chris Aniszczyk says, "The main takeaway from these initial audits is that a public security audit is a great way to test the quality of an open source project along with its vulnerability management process and more importantly, how resilient the open source project's security practices are." He has also announced that, later this year, CNCF will initiate a bounty program for researchers who identify bugs and other cybersecurity shortcomings in their projects.

After tasting initial success, CNCF formed a Security Audit Working Group to provide security audits to its graduated projects, using the funds provided by the CNCF community. CNCF's graduated projects include Kubernetes, Envoy, and Fluentd, among others. Due to the complexity and wide scope of the project, the Working Group appointed two firms, Trail of Bits and Atredis Partners, to perform the Kubernetes security audits. Trail of Bits applies high-end security research to identify security vulnerabilities, reduce risk, and strengthen code. Similarly, Atredis Partners also does complex, research-driven security testing and consulting.

Kubernetes security audit findings

Three days ago, the Trail of Bits team released an assessment report called the Kubernetes Security Whitepaper, which includes all the key aspects of the Kubernetes attack surface and security architecture. It aims to empower administrators, operators, and developers to make better design and implementation decisions. The Security Whitepaper presents a list of potential threats to a Kubernetes cluster. https://twitter.com/Atlas_Hugged/status/1158767960640479232

Kubernetes cluster vulnerabilities

A Kubernetes cluster consists of several base components such as kubelet, kube-apiserver, kube-scheduler, kube-controller-manager, and a kube-apiserver storage backend. Components like controllers and schedulers in Kubernetes assist in networking, scheduling, or environment management. Once a base Kubernetes cluster is configured, Kubernetes clusters are managed by operator-defined objects. These operator-defined objects are referred to as abstractions, which represent the state of the Kubernetes cluster. To provide easy configuration and portability, the abstractions are designed to be component-agnostic, which again increases the operational complexity of a Kubernetes cluster.
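As a quick, illustrative aside (not part of the audit report), the base components mentioned above can usually be observed in a running cluster. Assuming a kubeadm-style deployment where the control plane runs as pods in the kube-system namespace, they can be listed with kubectl:

    # List control-plane and system components running in the kube-system namespace
    kubectl get pods -n kube-system -o wide
    # Typical output includes kube-apiserver, etcd, kube-scheduler, kube-controller-manager, kube-proxy, and CoreDNS pods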
Since Kubernetes is a large system with many functionalities, the security audit was conducted on eight selected components within the larger Kubernetes ecosystem: kube-apiserver, etcd, kube-scheduler, kube-controller-manager, cloud-controller-manager, kubelet, kube-proxy, and the container runtime.

The Trail of Bits team first identified three types of attackers within a Kubernetes cluster: external attackers (who do not have access to the cluster), internal attackers (who have transited a trust boundary), and malicious internal users (who abuse their privileges within the cluster).

The security audit resulted in a total of 37 findings (5 high severity, 17 medium severity, 8 low severity, and 7 informational) across the access control, authentication, timing, and data validation of a Kubernetes cluster. Some of the findings include:

Insecure TLS is in use by default
Credentials are exposed in environment variables and command-line arguments
Names of secrets are leaked in logs
No certificate revocation
seccomp is not enabled by default

Recommendations for Kubernetes cluster administrators and developers

The Trail of Bits team has proposed a list of best practices and guideline recommendations for cluster administrators and developers.

Recommendations for cluster administrators

Attribute Based Access Control vs Role Based Access Control: Role-Based Access Control (RBAC) can be configured dynamically while a cluster is operational. In contrast, Attribute Based Access Control (ABAC) is static in nature, which increases the difficulty of ensuring proper deployment and enforcement of controls.

RBAC best practices: Administrators are advised to test their RBAC policies to ensure that the policies defined on the cluster are backed by an appropriate component configuration and that the policies properly restrict behavior (a kubectl sketch of this kind of check appears at the end of this article).

Node-host configurations and permissions: Appropriate authentication and access controls should be in place for the cluster nodes, as an attacker with network access can use Kubernetes components to compromise other nodes.

Default settings and backwards compatibility: Kubernetes contains many default settings that negatively impact the security of a cluster. Cluster operators and administrators must therefore ensure that component and workload settings can be rapidly changed and redeployed in case of a compromise or an update.

Networking: Due to the complexity of Kubernetes networking, there are many recommendations for maintaining a secure network. Among them: proper segmentation and isolation rules for the underlying cluster hosts should be defined, and a host executing control-plane components should be isolated to the greatest extent possible.

Environment considerations: The security of a cluster’s operating environment should be addressed. If a cluster is hosted on a cloud provider, administrators should ensure that best-practice hardening rules are implemented.

Logging and alerting: Centralized logging of both workload and cluster host logs is recommended to enable debugging and event reconstruction.

Recommendations for developers

Avoid hardcoding paths to dependencies: Developers are advised to be conservative and cautious when handling external paths. Users should be warned if a path was not found, and should have an option to specify it through a configuration variable.

File permissions checking: Kubernetes should provide users with the ability to perform file permissions checking, and enable this feature by default.
This will help prevent common file permissions misconfigurations and help promote more secure practices.

Monitoring processes on Linux: A Linux process is uniquely identified in user space via a process identifier, or PID. A PID will point to a given process as long as the process is alive; if it dies, the PID can be reused by another spawned process.

Moving processes to a cgroup: When moving a given process to a less restricted cgroup, it is necessary to validate that the process is the correct process after performing the move.

Future cgroup considerations for Kubernetes: Both Kubernetes and the components it uses (runc, Docker) have no support for cgroups v2. Currently, this is not an issue; however, it is a topic worth tracking, as the situation might change in the future.

Future process handling considerations for Kubernetes: Tracking and participating in the development of process (or thread) handling on Linux is highly recommended.

Kubernetes security audit sets precedent for other open source projects

By conducting security audits and open sourcing the findings, Kubernetes, a widely used container-orchestration system, is setting a great precedent for other projects. This shows Kubernetes’ commitment to maintaining security in its ecosystem. Though the number of security flaws found in the audit may upset a Kubernetes developer, it also assures them that Kubernetes is trying its best to stay ahead of potential attackers. The Security Whitepaper and the threat model provided with the security audit are expected to be of great help to Kubernetes community members as future references. Developers have also appreciated CNCF for undertaking great efforts in securing the Kubernetes system.

https://twitter.com/thekonginc/status/1159578833768501248
https://twitter.com/krisnova/status/1159656574584930304
https://twitter.com/zacharyschafer/status/1159658866931589125

To know more details about the security audit of Kubernetes, check out the Kubernetes Security Whitepaper.

Kubernetes 1.15 releases with extensibility around core Kubernetes APIs, cluster lifecycle stability, and more!
Introducing Ballista, a distributed compute platform based on Kubernetes and Rust
CNCF releases 9 security best practices for Kubernetes, to protect a customer’s infrastructure
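To make the RBAC testing recommendation above a little more concrete, here is a minimal kubectl sketch. The dev namespace and app-sa service account are hypothetical placeholders; substitute the subjects and resources that your own policies are supposed to restrict.

# Check whether a specific service account can read Secrets in a namespace
kubectl auth can-i get secrets --namespace dev --as system:serviceaccount:dev:app-sa

# List everything the current user is allowed to do in that namespace
kubectl auth can-i --list --namespace dev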

Attackers wiped many GitHub, GitLab, and Bitbucket repos with ‘compromised’ valid credentials leaving behind a ransom note

Savia Lobo
07 May 2019
5 min read
Last week, Git repositories were hit by a suspicious activity where attackers targeted GitHub, GitLab, and Bitbucket users, wiping code and commits from multiple repositories. The surprising fact is that attackers used valid credentials, i.e. a password or personal access token to break into these repositories. Not only did they sweep the entire repository, but they also left a ransom note demanding 0.1 Bitcoin (BTC). On May 3, GitLab’s Director of Security, Kathy Wang, said, “We identified the source based on a support ticket filed by Stefan Gabos yesterday, and immediately began investigating the issue. We have identified affected user accounts and all of those users have been notified. As a result of our investigation, we have strong evidence that the compromised accounts have account passwords being stored in plaintext on deployment of a related repository.” According to GitLab’s official post, “All total, 131 users and 163 repositories were, at a minimum, accessed by the attacker. Affected accounts were temporarily disabled, and the owners were notified.” This incident first took place on May 2, 2019 at around 10 pm GMT when GitLab received the first report of a repository being wiped off with one commit named ‘WARNING’, which contained a single file containing the ransom note asking the targets to transfer 0.1 BTC (approx. $568) to the attacker’s Bitcoin address, if they want to get their data back. If they failed to transfer the amount, the targets were threatened that their code would be hosted as public. Here’s the ransom note that was left behind: “To recover your lost data and avoid leaking it: Send us 0.1 Bitcoin (BTC) to our Bitcoin address 1ES14c7qLb5CYhLMUekctxLgc1FV2Ti9DA and contact us by Email at admin@gitsbackup.com with your Git login and a Proof of Payment. If you are unsure if we have your data, contact us and we will send you a proof. Your code is downloaded and backed up on our servers. If we dont receive your payment in the next 10 Days, we will make your code public or use them otherwise.” “The targets who had their repos compromised use multiple Git-repository management platforms, with the only other connection between the reports besides Git being that the victims were using the cross-platform SourceTree free Git client”, The Bleeping Computer reports. GitLab, however, commented that they have notified the affected GitLab users and are working to resolve the issue soon. According to BitcoinAbuse.com, a website that tracks Bitcoin addresses used for suspicious activity, there have been 27 abuse reports with the first report filed on May 2. “When searching for it on GitHub we found 392 impacted repositories which got all their commits and code wiped using the 'gitbackup' account which joined the platform seven years ago, on January 25, 2012. Despite that, none of the victims have paid the ransom the hackers have asked for, seeing that the Bitcoin address received only 0.00052525 BTC on May 3 via a single transaction, which is the equivalent of roughly $2.99”, Bleeping Computer mentions. A GitHub spokesperson told the Bleeping Computers, “GitHub has been thoroughly investigating these reports, together with the security teams of other affected companies, and has found no evidence GitHub.com or its authentication systems have been compromised. 
At this time, it appears that account credentials of some of our users have been compromised as a result of unknown third-party exposures.” Team GitLab has further recommended that all GitLab users enable two-factor authentication and use SSH keys to strengthen their GitLab accounts.

Read Also: Liz Fong-Jones on how to secure SSH with Two Factor Authentication (2FA)

One of the StackExchange users said, “I also have 2FA enabled, and never got a text message indicating they had a successful brute login.” Another StackExchange user received a response from Atlassian, the company behind Bitbucket and the cross-platform free Git client SourceTree: "Within the past few hours, we detected and blocked an attempt — from a suspicious IP address — to log in with your Atlassian account. We believe that someone used a list of login details stolen from third-party services in an attempt to access multiple accounts."

Bitbucket users impacted by this breach received an email stating, “We are in the process of restoring your repository and expect it to be restored within the next 24 hours. We believe that this was part of a broader attack against several git hosting services, where repository contents were deleted and replaced with a note demanding the payment of ransom. We have not detected any other compromise of Bitbucket. We have proactively reset passwords for those compromised accounts to prevent further malicious activity. We will also work with law enforcement in any investigation that they pursue. We encourage you and your team members to reset all other passwords associated with your Bitbucket account. In addition, we recommend enabling 2FA on your Bitbucket account.”

According to Stefan Gabos’ thread on the StackExchange Security forum, the attacker does not actually delete anything, but merely alters Git commit headers, so in some cases the commits and code can still be recovered. “All evidence suggests that the hacker has scanned the entire internet for Git config files, extracted credentials, and then used these logins to access and ransom accounts at Git hosting services”, ZDNet reports.

https://twitter.com/bad_packets/status/1124429828680085504

To know more about this news and further updates visit GitLab’s official website.

DockerHub database breach exposes 190K customer data including tokens for GitHub and Bitbucket repositories
Facebook confessed another data breach; says it “unintentionally uploaded” 1.5 million email contacts without consent
Understanding the cost of a cybersecurity attack: The losses organizations face
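If one of your repositories was hit and the remote now holds only the attacker's single 'WARNING' commit, the original history may still be reachable from a local clone, since the report above suggests the commits were hidden rather than destroyed. The following is a rough recovery sketch; the commit hash is a placeholder, and you should verify the recovered state before force-pushing anything.

# Find the last good commit in the local reflog
git reflog

# Or search the object database for dangling commits
git fsck --lost-found

# Point the branch back at the last good commit and push it to the remote
git reset --hard <last-good-sha>
git push --force origin HEAD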

Migration to Spring Security 3

Packt
21 May 2010
5 min read
(For more resources on Spring, see here.)

During the course of this article we will:

Review important enhancements in Spring Security 3
Understand configuration changes required in your existing Spring Security 2 applications when moving them to Spring Security 3
Illustrate the overall movement of important classes and packages in Spring Security 3

Once you have completed the review of this article, you will be in a good position to migrate an existing application from Spring Security 2 to Spring Security 3.

Migrating from Spring Security 2

You may be planning to migrate an existing application to Spring Security 3, or trying to add functionality to a Spring Security 2 application and looking for guidance. We'll try to address both of these concerns in this article.

First, we'll run through the important differences between Spring Security 2 and 3, both in terms of features and configuration. Second, we'll provide some guidance in mapping configuration or class name changes. These will better enable you to translate the examples from Spring Security 3 back to Spring Security 2 (where applicable).

A very important migration note is that Spring Security 3 mandates a migration to Spring Framework 3 and Java 5 (1.5) or greater. Be aware that in many cases, migrating these other components may have a greater impact on your application than the upgrade of Spring Security!

Enhancements in Spring Security 3

Significant enhancements in Spring Security 3 over Spring Security 2 include the following:

The addition of Spring Expression Language (SpEL) support for access declarations, both in URL patterns and method access specifications.
Additional fine-grained configuration around authentication and access successes and failures.
Enhanced capabilities of method access declaration, including annotation-based pre- and post-invocation access checks and filtering, as well as highly configurable security namespace XML declarations for custom backing bean behavior.
Fine-grained management of session access and concurrency control using the security namespace.
Noteworthy revisions to the ACL module, with the removal of the legacy ACL code in o.s.s.acl and the resolution of some important issues with the ACL framework.
Support for OpenID Attribute Exchange, and other general improvements to the robustness of OpenID.
New Kerberos and SAML single sign-on support through the Spring Security Extensions project.

Other more innocuous changes encompassed a general restructuring and cleaning up of the codebase and the configuration of the framework, so that the overall structure and usage make much more sense. The authors of Spring Security have made efforts to add extensibility where none previously existed, especially in the areas of login and URL redirection.

If you are already working in a Spring Security 2 environment, you may not find compelling reasons to upgrade if you aren't pushing the boundaries of the framework. However, if you have found limitations in the available extension points, code structure, or configurability of Spring Security 2, you'll welcome many of the minor changes that we discuss in detail in the remainder of this article.

Changes to configuration in Spring Security 3

Many of the changes in Spring Security 3 will be visible in the security namespace style of configuration. Although this article cannot cover all of the minor changes in detail, we'll try to cover those changes that will be most likely to affect you as you move to Spring Security 3.
Rearranged AuthenticationManager configuration

The most obvious changes in Spring Security 3 deal with the configuration of the AuthenticationManager and any related AuthenticationProvider elements. In Spring Security 2, the AuthenticationManager and AuthenticationProvider configuration elements were completely disconnected; declaring an AuthenticationProvider didn't require any notion of an AuthenticationManager at all:

<authentication-provider>
    <jdbc-user-service data-source-ref="dataSource"/>
</authentication-provider>

In Spring Security 2, it was also possible to declare the <authentication-manager> element as a sibling of any AuthenticationProvider:

<authentication-manager alias="authManager"/>
<authentication-provider>
    <jdbc-user-service data-source-ref="dataSource"/>
</authentication-provider>
<ldap-authentication-provider server-ref="ldap://localhost:10389/"/>

In Spring Security 3, all AuthenticationProvider elements must be children of the <authentication-manager> element, so this would be rewritten as follows:

<authentication-manager alias="authManager">
    <authentication-provider>
        <jdbc-user-service data-source-ref="dataSource"/>
    </authentication-provider>
    <ldap-authentication-provider server-ref="ldap://localhost:10389/"/>
</authentication-manager>

Of course, this means that the <authentication-manager> element is now required in any security namespace configuration. If you had defined a custom AuthenticationProvider in Spring Security 2, you would have decorated it with the <custom-authentication-provider> element as part of its bean definition:

<bean id="signedRequestAuthenticationProvider"
      class="com.packtpub.springsecurity.security.SignedUsernamePasswordAuthenticationProvider">
    <security:custom-authentication-provider/>
    <property name="userDetailsService" ref="userDetailsService"/>
    <!-- ... -->
</bean>

While moving this custom AuthenticationProvider to Spring Security 3, we would remove the decorator element and instead configure the AuthenticationProvider using the ref attribute of the <authentication-provider> element as follows:

<authentication-manager alias="authenticationManager">
    <authentication-provider ref="signedRequestAuthenticationProvider"/>
</authentication-manager>

Of course, the source code of our custom provider would change due to class relocations and renaming in Spring Security 3; look later in the article for basic guidelines, and in the code download for this article for a detailed mapping.

New configuration syntax for session management options

In addition to continuing support for the session fixation and concurrency control features from prior versions of the framework, Spring Security 3 adds new configuration capabilities for customizing the URLs and classes involved in session and concurrency control management. If your older application was configuring session fixation protection or concurrent session control, the configuration settings have a new home in the <session-management> directive of the <http> element. In Spring Security 2, these options would be configured as follows:

<http ... session-fixation-protection="none">
    <!-- ... -->
    <concurrent-session-control exception-if-maximum-exceeded="true" max-sessions="1"/>
</http>

The analogous configuration in Spring Security 3 removes the session-fixation-protection attribute from the <http> element and consolidates it as follows:

<http ...>
    <session-management session-fixation-protection="none">
        <concurrency-control error-if-maximum-exceeded="true" max-sessions="1"/>
    </session-management>
</http>

You can see that the new logical organization of these options is much more sensible and leaves room for future expansion.

DCLeaks and Guccifer 2.0: How hackers used social engineering to manipulate the 2016 U.S. elections

Savia Lobo
16 Jul 2018
5 min read
It’s been more than a year since the Republican party’s Donald Trump won the U.S. Presidential election against Democrat Hillary Clinton. However, Robert Mueller recently indicted 12 Russian military officers for meddling in the 2016 U.S. Presidential elections. They had hacked into the Democratic National Committee, the Democratic Congressional Campaign Committee, and the Clinton campaign. Mueller stated that the Russians did it using Guccifer 2.0 and DCLeaks, following which Twitter decided to suspend both accounts on its platform.

According to the San Diego Union-Tribune, Twitter found the handles of both Guccifer 2.0 and DCLeaks dormant for more than a year and a half. It also verified that the accounts had been used to disseminate emails stolen from both Clinton’s camp and the Democratic Party’s organization, and subsequently suspended both accounts.

Guccifer 2.0 is an online persona created by the conspirators, who falsely claimed it belonged to a Romanian hacker in order to hide evidence of Russian involvement in the 2016 U.S. presidential elections. The persona is also associated with the leak of documents from the Democratic National Committee (DNC) through WikiLeaks. DCLeaks published the stolen data on its website, dcleaks.com. The DCLeaks site is known to be a front for the Russian cyberespionage group Fancy Bear. Mueller, in his indictment, stated that both Guccifer 2.0 and DCLeaks have ties to the GRU (the Russian military intelligence unit called the Main Intelligence Directorate) hackers.

How hackers social engineered their way into the Clinton campaign

The attackers are said to have used the hacking technique known as spear phishing, together with malware known as X-Agent. X-Agent is a tool that can collect documents, keystrokes, and other information from computers and smartphones and send it back, through encrypted channels, to servers owned by the hackers.

As per Mueller’s indictment report, the conspirators created an email account in the name of a known member of the Clinton campaign. The address deviated from the real one by a single letter and looked almost identical to it. Spear-phishing emails were sent from this fake account to the work accounts of more than thirty different Clinton campaign employees. The link embedded in these spear-phishing emails pointed to a document titled ‘hillary-clinton-favorable-rating.xlsx’; in reality, however, the recipients were being directed to a GRU-created website. X-Agent uses an encrypted tunneling tool known as X-Tunnel, which connected to known GRU-associated servers.

Using the X-Agent malware, the GRU agents had targeted more than 300 individuals within the Clinton campaign, the Democratic National Committee, and the Democratic Congressional Campaign Committee by March 2016. At the same time, the hackers stole nearly 50,000 emails from the Clinton campaign, and by June 2016 they had gained control over 33 DNC computers and infected them with the malware. The indictment further explains that although the attackers tried to hide their tracks by erasing the activity logs, a Linux-based version of X-Agent programmed with the GRU-registered domain “linuxkrnl.net” was discovered in these networks.

Was the Trump campaign impervious to such an attack?

Roger Stone, a former Trump campaign adviser, was said to be in contact with Guccifer 2.0 during the presidential campaign.
As per Mueller’s indictment, Guccifer 2.0 sent Stone this message: “Please tell me if i can help u anyhow...it would be a great pleasure for me...What do u think about the info on the turnout model for the democrats entire presidential campaign.” “Pretty standard” was the reply given to Guccifer 2.0. However, Stone said that his conversations with the person behind the account were “innocuous.” Stone further added, “This exchange is entirely public and provides no evidence of collaboration or collusion with Guccifer 2.0 or anyone else in the alleged hacking of the DNC emails.” Stone also stated that he never discussed this innocuous communication with Trump or his presidential campaign.

U.S. Deputy Attorney General Rod Rosenstein had indicated that this Russian attack was part of the Kremlin’s multifaceted approach to boosting Trump’s 2016 campaign while denigrating Clinton’s. Twitter stated that both accounts were suspended for being connected to a network of accounts previously suspended for operating in violation of its rules. However, Hillary Clinton’s supporters expressed their anger, saying that Twitter’s response was too little, too late.

With the midterm elections only months away, one question on everyone’s mind is: how are the intelligence community and the Department of Justice going to ensure the elections are fair and secure? There is a high chance of such attacks recurring. When asked a similar question at a cybersecurity conference, Homeland Security Secretary Kirstjen Nielsen responded, “Today I can say with confidence that we know whom to contact in every state to share threat information. That ability did not exist in 2016.”

Following is Rod Rosenstein’s interview with PBS NewsHour.

Social engineering attacks – things to watch out while online!
YouTube has a $25 million plan to counter fake news and misinformation
Twitter allegedly deleted 70 million fake accounts in an attempt to curb fake news

Ways to improve performance of your server in ModSecurity 2.5

Packt
30 Nov 2009
13 min read
A typical HTTP request To get a better picture of the possible delay incurred when using a web application firewall, it helps to understand the anatomy of a typical HTTP request, and what processing time a typical web page download will incur. This will help us compare any added ModSecurity processing time to the overall time for the entire request. When a user visits a web page, his browser first connects to the server and downloads the main resource requested by the user (for example, an .html file). It then parses the downloaded file to discover any additional files, such as images or scripts, that it must download to be able to render the page. Therefore, from the point of view of a web browser, the following sequence of events happens for each file: Connect to web server. Request required file. Wait for server to start serving file. Download file. Each of these steps adds latency, or delay, to the request. A typical download time for a web page is on the order of hundreds of milliseconds per file for a home cable/DSL user. This can be slower or faster, depending on the speed of the connection and the geographical distance between the client and server. If ModSecurity adds any delay to the page request, it will be to the server processing time, or in other words the time from when the client has connected to the server to when the last byte of content has been sent out to the client. Another aspect that needs to be kept in mind is that ModSecurity will increase the memory usage of Apache. In what is probably the most common Apache configuration, known as "prefork", Apache starts one new child process for each active connection to the server. This means that the number of Apache instances increases and decreases depending on the number of client connections to the server.As the total memory usage of Apache depends on the number of child processes running and the memory usage of each child process, we should look at the way ModSecurity affects the memory usage of Apache. A real-world performance test In this section we will run a performance test on a real web server running Apache 2.2.8 on a Fedora Linux server (kernel 2.6.25). The server has an Intel Xeon 2.33 GHz dual-core processor and 2 GB of RAM. We will start out benchmarking the server when it is running just Apache without having ModSecurity enabled. We will then run our tests with ModSecurity enabled but without any rules loaded. Finally, we will test ModSecurity with a ruleset loaded so that we can draw conclusions about how the performance is affected. The rules we will be using come supplied with ModSecurity and are called the "core ruleset". The core ruleset The ModSecurity core ruleset contains over 120 rules and is shipped with the default ModSecurity source distribution (it's contained in the rules sub-directory). This ruleset is designed to provide "out of the box" protection against some of the most common web attacks used today. Here are some of the things that the core ruleset protects against: Suspicious HTTP requests (for example, missing User-Agent or Accept headers) SQL injection Cross-Site Scripting (XSS) Remote code injection File disclosure We will examine these methods of attack, but for now, let's use the core ruleset and examine how enabling it impacts the performance of your web service. Installing the core ruleset To install the core ruleset, create a new sub-directory named modsec under your Apache conf directory (the location will vary depending on your distribution). 
Then copy all the .conf files from the rules sub-directory of the source distribution to the new modsec directory:

mkdir /etc/httpd/conf/modsec
cp /home/download/modsecurity-apache/rules/modsecurity_crs_*.conf /etc/httpd/conf/modsec

Finally, enter the following lines in your httpd.conf file and restart Apache to make it read the new rule files:

# Enable ModSecurity core ruleset
Include conf/modsec/*.conf

Putting the core rules in a separate directory makes it easy to disable them—all you have to do is comment out the above Include line in httpd.conf, restart Apache, and the rules will be disabled.

Making sure it works

The core ruleset contains a file named modsecurity_crs_10_config.conf. This file contains some of the basic configuration directives needed to turn on the rule engine and configure request and response body access. Since we have already configured these directives, we do not want this file to conflict with our existing configuration, so we need to disable it. To do this, we simply rename the file so that it has a different extension, as Apache only loads *.conf files with the Include directive we used above:

$ mv modsecurity_crs_10_config.conf modsecurity_crs_10_config.conf.disabled

Once we have restarted Apache, we can test that the core ruleset is loaded by attempting to access a URL that it should block. For example, try surfing to http://yourserver/ftp.exe and you should get the error message Method Not Implemented, ensuring that the core rules are loaded.

Performance testing basics

So what effect does loading the core ruleset have on web application response time, and how do we measure this? We could measure the response time for a single request with and without the core ruleset loaded, but this wouldn't have any statistical significance—it could happen that just as one of the requests was being processed, the server started to execute a processor-intensive scheduled task, causing a delayed response time. The best way to compare the response times is to issue a large number of requests and look at the average time it takes for the server to respond.

An excellent tool—and the one we are going to use to benchmark the server in the following tests—is called httperf. Written by David Mosberger of Hewlett Packard Research Labs, httperf allows you to simulate high workloads against a web server and obtain statistical data on the performance of the server. You can obtain the program at http://www.hpl.hp.com/research/linux/httperf/ where you'll also find a useful manual page in PDF format and a link to the research paper published together with the first version of the tool.

Using httperf

We'll run httperf with the options --hog (use as many TCP ports as needed) and --uri /index.html (request the static web page index.html), and we'll use --num-conn 1000 (initiate a total of 1000 connections). We will be varying the number of requests per second (specified using --rate) to see how the server responds under different workloads.
This is what the typical output from httperf looks like when run with the above options:

$ ./httperf --hog --server=bytelayer.com --uri /index.html --num-conn 1000 --rate 50
Total: connections 1000 requests 1000 replies 1000 test-duration 20.386 s
Connection rate: 49.1 conn/s (20.4 ms/conn, <=30 concurrent connections)
Connection time [ms]: min 404.1 avg 408.2 max 591.3 median 404.5 stddev 16.9
Connection time [ms]: connect 102.3
Connection length [replies/conn]: 1.000
Request rate: 49.1 req/s (20.4 ms/req)
Request size [B]: 95.0
Reply rate [replies/s]: min 46.0 avg 49.0 max 50.0 stddev 2.0 (4 samples)
Reply time [ms]: response 103.1 transfer 202.9
Reply size [B]: header 244.0 content 19531.0 footer 0.0 (total 19775.0)
Reply status: 1xx=0 2xx=1000 3xx=0 4xx=0 5xx=0
CPU time [s]: user 2.37 system 17.14 (user 11.6% system 84.1% total 95.7%)
Net I/O: 951.9 KB/s (7.8*10^6 bps)
Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

The output shows us the number of TCP connections httperf initiated per second ("Connection rate"), the rate at which it requested files from the server ("Request rate"), and the actual reply rate that the server was able to provide ("Reply rate"). We also get statistics on the reply time—the "reply time – response" is the time taken from when the first byte of the request was sent to the server to when the first byte of the reply was received—in this case around 103 milliseconds. The transfer time is the time to receive the entire response from the server.

The page we will be requesting in this case, index.html, is 20 KB in size, which is a pretty average size for an HTML document. httperf requests the page one time per connection and doesn't follow any links in the page to download additional embedded content or script files, so the number of such links in the page is of no relevance to our test.

Getting a baseline: Testing without ModSecurity

When running benchmarking tests like this one, it's always important to get a baseline result so that you know the performance of your server when the component you're measuring is not involved. In our case, we will run the tests against the server when ModSecurity is disabled. This will allow us to tell which impact, if any, running with ModSecurity enabled has on the server.

Response time

The following chart shows the response time, in milliseconds, of the server when it is running without ModSecurity. The number of requests per second is on the horizontal axis:

As we can see, the server consistently delivers response times of around 300 milliseconds until we reach about 75 requests per second. Above this, the response time starts increasing, and at around 500 requests per second the response time is almost a second per request. This data is what we will use for comparison purposes when looking at the response time of the server after we enable ModSecurity.

Memory usage

Finding the memory usage on a Linux system can be quite tricky. Simply running the Linux top utility and looking at the amount of free memory doesn't quite cut it, and the reason is that Linux tries to use almost all free memory as a disk cache. So even on a system with several gigabytes of memory and no memory-hungry processes, you might see a free memory count of only 50 MB or so. Another problem is that Apache uses many child processes, and to accurately measure the memory usage of Apache we need to sum the memory usage of each child process.
What we need is a way to measure the memory usage of all the Apache child processes so that we can see how much memory the web server truly uses. To solve this, here is a small shell script that I have written that runs the ps command to find all the Apache processes. It then passes the PID of each Apache process to pmap to find the memory usage, and finally uses awk to extract the memory usage (in KB) for summation. The result is that the memory usage of Apache is printed to the terminal. The actual shell command is only one long line, but I've put it into a file called apache_mem.sh to make it easier to use:

#!/bin/sh
# apache_mem.sh
# Calculate the Apache memory usage
ps -ef | grep httpd | grep ^apache | awk '{ print $2 }' | xargs pmap -x | grep 'total kB' | awk '{ print $3 }' | awk '{ sum += $1 } END { print sum }'

Now, let's use this script to look at the memory usage of all of the Apache processes while we are running our performance test. The following graph shows the memory usage of Apache as the number of requests per second increases:

Apache starts out consuming about 300 MB of memory. Memory usage grows steadily, and at about 150 requests per second it starts climbing more rapidly. At 500 requests per second, the memory usage is over 2.4 GB—more than the amount of physical RAM of the server. The fact that this is possible is because of the virtual memory architecture that Linux (and all modern operating systems) use. When there is no more physical RAM available, the kernel starts swapping memory pages out to disk, which allows it to continue operating. However, since reading and writing to a hard drive is much slower than to memory, this starts slowing down the server significantly, as evidenced by the increase in response time seen in the previous graph.

CPU usage

In both of the tests above, the server's CPU usage was consistently around 1 to 2%, no matter what the request rate was. You might have expected a graph of CPU usage in the previous and subsequent tests, but while I measured the CPU usage in each test, it turned out to run at this low utilization rate for all tests, so a graph would not be very useful. Suffice it to say that in these tests, CPU usage was not a factor.

ModSecurity without any loaded rules

Now, let's enable ModSecurity—but without loading any rules—and see what happens to the response time and memory usage. Both SecRequestBodyAccess and SecResponseBodyAccess were set to On, so if there is any performance penalty associated with buffering requests and responses, we should see it now that we are running ModSecurity without any rules.

The following graph shows the response time of Apache with ModSecurity enabled:

We can see that the response time graph looks very similar to the response time graph we got when ModSecurity was disabled. The response time starts increasing at around 75 requests per second, and once we pass 350 requests per second, things really start going downhill. The memory usage graph is also almost identical to the previous one: Apache uses around 1.3 MB extra per child process when ModSecurity is loaded, which equals a total increase in memory usage of 26 MB for this particular setup. Compared to the total amount of memory Apache uses when the server is idle (around 300 MB), this equals an increase of about 10%.

ModSecurity with the core ruleset loaded

Now for the really interesting test: we'll run httperf against ModSecurity with the core ruleset loaded and look at what that does to the response time and memory usage.
Response time

The following graph shows the server response time with the core ruleset loaded:

At first, the response time is around 340 ms, which is about 35 ms slower than in the previous tests. Once the request rate gets above 50, the server response time starts deteriorating. As the request rate grows, the response time gets worse and worse, reaching a full 5 seconds at 100 requests per second. I have capped the graph at 100 requests per second, as the server performance has already deteriorated enough at this point to allow us to see the trend. We see that the point at which the response time starts to deteriorate has gone down from 75 to 50 requests per second now that we have enabled the core ruleset. This equals a 33% reduction in the maximum number of requests per second that the server can handle.
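To reproduce a sweep like the one used in these tests on your own server, a small wrapper script around httperf is convenient. The following is a rough sketch only: the hostname, URI, and rate list are placeholders to adjust for your environment, and note that recent httperf builds spell the connection-count option --num-conns.

#!/bin/sh
# benchmark_sweep.sh - run httperf at increasing request rates and
# print the request rate and reply time reported for each run
for rate in 25 50 75 100 150 200 300 400 500; do
    echo "=== rate: $rate req/s ==="
    httperf --hog --server example.com --uri /index.html \
            --num-conns 1000 --rate $rate | grep -E 'Request rate|Reply time'
done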

Incident Response and Live Analysis

Packt
10 Jun 2016
30 min read
In this article by Ayman Shaaban and Konstantin Sapronov, authors of the book Practical Windows Forensics, we describe the stages of preparing to respond to an incident, a matter to which much attention should be paid. In some cases, the lack of necessary tools during an incident leads to the inability to perform the necessary actions at the right time. Taking into account that the reaction time to an incident depends on the efficiency of the incident handling process, it becomes clear that the technical preparation of the IR team must be handled very carefully. The whole set of requirements for the IR team can be divided into several categories:

Skills
Hardware
Software

(For more resources related to this topic, see here.)

Let's consider the main issues that may arise during the preparation of the incident response team in more detail. If we want to build a computer security incident response team, we need people with a certain set of skills and technical expertise to perform technical tasks and effectively communicate with other external contacts. Now, we will consider the skills of the members of the team. The set of skills that members of the team need to have can be divided into two groups:

Personal skills
Technical skills

Personal skills

Personal skills are very important for a successful response team. This is because interaction with team members who are technical experts but have poor social skills can lead to misunderstanding and misinterpretation of results, the consequences of which may affect the team's reputation. A list of key personal skills is discussed in the following sections.

Written communication

For many IR teams, a large part of their communication occurs through written documents. These communications can take many forms, including e-mails concerning incidents, documentation of event or incident reports, vulnerabilities, and other technical information notifications. Incident response team members must be able to write clearly and concisely, describe activities accurately, and provide information that is easy for their readers to understand.

Oral communication

The ability to communicate effectively through spoken communication is also an important skill to ensure that the incident response team members say the right words to the right people.

Presentation skills

Not all technical experts have good presentation skills. They may not be comfortable in front of a large audience. Gaining confidence in presentation skills will take time and effort for the team's members to become more experienced and comfortable in such situations.

Diplomacy

The members of the incident response team interact with people who may have a variety of goals and needs. Skilled incident response team members will be able to anticipate potential points of contention, respond appropriately, maintain good relationships, and avoid offending others. They will also understand that they are representing the IR team and their organization. Diplomacy and tact are very important.

The ability to follow policies and procedures

Another important skill that members of the team need is the ability to follow and support the established policies and procedures of the organization or team.

Team skills

IR staff must be able to work in the team environment as productive and cordial team players. They need to be aware of their responsibilities, contribute to the goals of the team, and work together to share information, workload, and experiences.
They must be flexible and willing to adapt to change. They also need skills to interact with other parties. Integrity The nature of IR work means that team members often deal with information that is sensitive and, occasionally, they might have access to information that is newsworthy. The team's members must be trustworthy, discrete, and able to handle information in confidence according to the guidelines, any constituency agreements or regulations, and/or any organizational policies and procedures. In their efforts to provide technical explanations or responses, the IR staff must be careful to provide appropriate and accurate information while avoiding the dissemination of any confidential information that could detrimentally affect another organization's reputation, result in the loss of the IR team's integrity, or affect other activities that involve other parties. Knowing one's limits Another important ability that the IR team's members must have is the ability to be able to readily admit when they have reached the limit of their own knowledge or expertise in a given area. However difficult it is to admit a limitation, individuals must recognize their limitations and actively seek support from their team members, other experts, or their management. Coping with stress The IR team's members often could be in stressful situations. They need to be able to recognize when they are becoming stressed, be willing to make their fellow team members aware of the situation, and take (or seek help with) the necessary steps to control and maintain their composure. In particular, they need the ability to remain calm in tense situations—ranging from an excessive workload to an aggressive caller to an incident where human life or a critical infrastructure may be at risk. The team's reputation, and the individual's personal reputation, will be enhanced or will suffer depending on how such situations are handled. Problem solving IR team members are confronted with data every day, and sometimes, the volume of information is large. Without good problem-solving skills, staff members could become overwhelmed with the volumes of data that are related to incidents and other tasks that need to be handled. Problem-solving skills also include the ability for the IR team's members to "think outside the box" or look at issues from multiple perspectives to identify relevant information or data. Time management Along with problem-solving skills, it is also important for the IR team's members to be able to manage their time effectively. They will be confronted with a multitude of tasks ranging from analyzing, coordinating, and responding to incidents, to performing duties, such as prioritizing their workload, attending and/or preparing for meetings, completing time sheets, collecting statistics, conducting research, giving briefings and presentations, traveling to conferences, and possibly providing on-site technical support. Technical skills Another important component of the skills needed for an IR team to be effective is the technical skills of their staff. These skills, which define the depth and breadth of understanding of the technologies that are used by the team, and the constituency it serves, are outlined in the following sections. In turn, the technical skills, which the IR team members should have, can be divided into two groups: security fundamentals and incident handling skills. Security fundamentals Let's look at some of the security fundamentals in the following subsections. 
Security principles The IR team's members need to have a general understanding of the basic security principles, such as the following: Confidentiality Availability Authentication Integrity Access control Privacy Nonrepudiation Security vulnerabilities and weaknesses To understand how any specific attack is manifested in a given software or hardware technology, the IR team's members need to be able to first understand the fundamental causes of vulnerabilities through which most attacks are exploited. They need to be able to recognize and categorize the most common types of vulnerabilities and associated attacks, such as those that might involve the following: Physical security issues Protocol design flaws (for example, man-in-the-middle attacks or spoofing) Malicious code (for example, viruses, worms, or Trojan horses) Implementation flaws (for example, buffer overflow or timing windows/race conditions) Configuration weaknesses User errors or indifference The Internet It is important that the IR team's members also understand the Internet. Without this fundamental background information, they will struggle or fail to understand other technical issues, such as the lack of security in underlying protocols and services that are used on the Internet or to anticipate the threats that might occur in the future. Risks The IR team's members need to have a basic understanding of computer security risk analysis. They should understand the effects on their constituency of various types of risks (such as potentially widespread Internet attacks, national security issues as they relate to their team and constituency, physical threats, financial threats, loss of business, reputation, or customer confidence, and damage or loss of data). Network protocols Members of the IR team need to have a basic understanding of the common (or core) network protocols that are used by the team and the constituency that they serve. For each protocol, they should have a basic understanding of the protocol, its specifications, and how it is used. In addition to this, they should understand the common types of threats or attacks against the protocol, as well as strategies to mitigate or eliminate such attacks. For example, at a minimum, the staff should be familiar with protocols, such as IP, TCP, UDP, ICMP, ARP, and RARP. They should understand how these protocols work, what they are used for, the differences between them, some of the common weaknesses, and so on. In addition to this, the staff should have a similar understanding of protocols, such as TFTP, FTP, HTTP, HTTPS, SNMP, SMTP, and any other protocols. The specialist skills include a more in-depth understanding of security concepts and principles in all the preceding areas in addition to expert knowledge in the mechanisms and technologies that lead to flaws in these protocols, the weaknesses that can be exploited (and why), the types of exploitation methods that would likely be used, and the strategies to mitigate or eliminate these potential problems. They should have expert understanding of additional protocols or Internet technologies (DNSSEC, IPv6, IPSEC, and other telecommunication standards that might be implemented or interface with their constituent's networks, such as ATM, BGP, broadband, voice over IP, wireless technology, other routing protocols, or new emerging technologies, and so on). They could then provide expert technical guidance to other members of the team or constituency. 
Network applications and services The IR team's staff need a basic understanding of the common network applications and services that the team and the constituency use (DNS, NFS, SSH, and so on). For each application or service they should understand the purpose of the application or service, how it works, its common usages, secure configurations, and the common types of threats or attacks against the application or service, as well as mitigation strategies. Network security issues The members of the IR team should have a basic understanding of the concepts of network security and be able to recognize vulnerable points in network configurations. They should understand the concepts and basic perimeter security of network firewalls (design, packet filtering, proxy systems, DMZ, bastion hosts, and so on), router security, the potential for information disclosure of data traveling across the network (for example, packet monitoring or "sniffers"), or threats that are related to accepting untrustworthy information. Host or system security issues In addition to understanding security issues at a network level, the IR team's members need to understand security issues at a host level for the various types of operating systems (UNIX, Windows, or any other operating systems that are used by the team or constituency). Before understanding the security aspects, the IR team's member must first have the following: Experience using the operating system (user security issues) Some familiarity with managing and maintaining the operating system (as an administrator) Then, for each operating system, the IR team member needs to know how to perform the following: Configure (harden) the system securely Review configuration files for security weaknesses Identify common attack methods Determine whether a compromise attempt occurred Determine whether an attempted system compromise was successful Review log files for anomalies Analyze the results of attacks Manage system privileges Secure network daemons Recover from a compromise Malicious code The IR team's members must understand the different types of malicious code attacks that occur and how these can affect their constituency (system compromises, denial of service, loss of data integrity, and so on). Malicious code can have different types of payloads that can cause a denial of service attack or web defacement, or the code can contain more "dynamic" payloads that can be configured to result in multifaceted attack vectors. Staff should understand not only how malicious code is propagated through some of the obvious methods (disks, e-mail, programs, and so on), but they should also understand how it can propagate through other means, such as PostScript, Word macros, MIME, peer-to-peer file sharing, or boot-sector viruses that affect operating systems running on PC and Macintosh platforms. The IR team's staff must be aware of how such attacks occur and are propagated, the risks and damage associated with such attacks, prevention and mitigation strategies, detection and removal processes, and recovery techniques. Specialist skills include expertise in performing analysis, black box testing, reverse engineering malicious code that is associated with such attacks, and in providing advice to the team on the best approaches for an effective response. Programming skills Some team members need to have system and network programming experience. 
The team should ensure that a range of programming languages is covered on the operating systems that the team and the constituency use. For example, the team should have experience in the following: C Python Awk Java Shell (all variations) Other scripting tools These scripts or programming tools can be used to assist in the analysis and handling of incident information (for example, writing different scripts to count and sort through various logs, search databases, look up information, extract information from logs or files, and collect and merge data). Incident handling skills Local team policies and protocols Understanding and identifying intruder techniques Communication with sites Incident analysis Maintenance of incident records The hardware for IR and Jump Bag Certainly, a set of equipment that may be required during the processing of the incident should be prepared in advance, and this matter should be given much attention. This set is called the Jump Bag. The formation of such a kit is largely due to the budget the organization could afford. Nevertheless, there is a certain necessary minimum, which will allow the team to handle incidents in small quantities. If the budget allows it, it is possible to buy a turnkey solution, which includes all the necessary equipment and the case for its transportation. As an instance of such a solution, FREDL + Ultra Kit could be recommended. FREDL is short for Forensic Recovery of Evidence Device Laptop. With Ultra Kit, this solution will cost about 5000 USD. Ultra Kit contains a set of write-blockers and a set of adapters and connecters to obtain images of hard drives with a different interface: More details can be found on the manufacturer's website at https://www.digitalintelligence.com/products/ultrakit/. Certainly, if we ignore the main drawback of such a solution, this decision has a lot of advantages as compared to the cost. Besides this, you get a complete starter kit to handle the incident. Besides, Ultra Kit allows you to safely transport equipment without fear of damage. The FRED-L laptop is based on a modern hardware, and the specifications are constantly updated to meet modern requirements. Current specifications can be found on the manufacturer's website at http://www.digitalintelligence.com/products/fredl/. However, if you want to replace the expensive solution, you could build a cheaper alternative that will save 20-30% of the budget. It is possible to buy the components included in the review of decisions separately. As a workstation, you can choose a laptop with the following specifications: Intel Core i7-6700K Skylake Quad Core Processor, 4.0 GHz, 8MB Intel Smart Cache 16 GB PC4-17000 DDR4 2133 Memory 256 GB Solid State Internal SATA Drive Intel Z170 Express Chipset NVIDIA GeForce GTX 970M with 6 GB GDDR5 VRAM This specification will provide a comfortable workstation to work on the road. As a case study for the transport of the equipment, we recommend paying attention to Pelican (http://www.pelican.com) cases. In this case, the manufacturer can choose the equipment to meet your needs. One of the typical tasks in handling of incidents is obtaining images from hard drives. For this task, you can use a duplicator or a bunch of write-blockers and computer. Duplicators are certainly a more convenient solution; their usage allows you to quickly get the disk image without using additional software. Their main drawback is the price. 
However, if you often have to image hard drives and you have a few thousand dollars, the purchase of a duplicator is a good investment. If imaging hard drives is a relatively rare task and you have a limited budget, you can purchase a write-blocker, which will cost 300-500 USD; however, it has to be used together with a computer and software. To pick up the necessary equipment, you can visit http://www.insectraforensics.com, where you can find equipment from different manufacturers. Also, do not forget about the hard drives themselves; it is worth buying a few high-capacity drives so that storage space does not become a bottleneck.
To summarize, responders need to include the following items in a basic set:
Several network cables (straight-through or loopback)
A serial cable with a serial-to-USB adapter
Network serial adapters
Hard drives (various sizes)
Flash drives
A Linux live DVD
A portable drive duplicator with a write-blocker
Various drive interface adapters
A four-port hub
A digital camera
Cable ties
Cable snips
Assorted screws and hex drivers
Notebooks and pens
Chain of Custody forms
Incident handling procedure

Software
After talking about the hardware, we should not forget about the software that you should always have on hand. The variety of software that can be used while processing an incident allows you to select tools based on preferences, skills, and budget. Some prefer command-line utilities, and some find a GUI more convenient to use. Sometimes, the use of certain tools is dictated by the circumstances under which you need to work. We strongly recommend that you prepare and thoroughly test the entire set of required software in advance.

Live versus post mortem
The initial reaction to an incident is a very important step in the process of computer incident management; the success of the investigation depends on carrying out this step correctly. Moreover, a correct and timely response is needed to reduce the damage caused by the incident. The traditional approach of analyzing disk images is not always practical, and in some cases it is simply not possible. In today's world, the development of computer technology has led to many companies having distributed networks across many cities, countries, and continents. With this physical distribution, disconnecting every computer from the network and following the traditional investigation of each machine is not possible. In such cases, the incident responder should be able to carry out a preliminary assessment remotely and as soon as possible: view a list of running processes, open network connections, and open files, and get a list of the users logged on to the system. Then, if necessary, a full investigation can be carried out.
In this article, we will look at some approaches that the responder may apply in a given situation. There are also cases where, even with physical access to the machine, live response is the only feasible way of responding to an incident, for example, when we are dealing with large disk arrays. In this case, there are several problems at once. The first problem is that finding enough space to store images of such large amounts of data is difficult. In addition to this, the time that may be required to analyze large amounts of data is unreasonably high. Typically, such large volumes of data belong to heavily loaded servers serving hundreds of thousands of users, so taking them offline, or even rebooting them, is not acceptable for the business.
Another scenario that requires the Live Forensics approach is when an encrypted filesystem is used. In cases where the analyst doesn't have the key to decrypt the disk, Live Forensics is a good alternative for obtaining data from a system whose filesystem is encrypted. This is not an exhaustive list of cases where live analysis is applicable.
It is worth noting one very important point: during live analysis, it is not possible to avoid changing the system. Connecting external USB devices or making network connections, logging a user on, or launching an executable file will cause modifications in the system in a variety of log files, registry keys, and so on. Therefore, you need to understand what changes were caused by the actions of responders and document them.

Volatile data
Under the principle of "order of volatility", you must first collect information that is classified as volatile data (the list of network connections, the list of running processes, logon sessions, and so on), which will be irretrievably lost if the computer is powered off. Then, you can start to collect nonvolatile data, which can also be obtained with the traditional approach of analyzing a disk image. The main difference in this case is that with Live Forensics this set of data is easier to obtain from a working machine. This article will focus on the collection of volatile data. Typically, this category includes the following data:
System uptime and the current time
Network parameters (NetBIOS name cache, active connections, the routing table, and so on)
NIC configuration settings
Logged-on users and active sessions
Loaded drivers
Running services
Running processes and their related parameters (loaded DLLs, open handles, and ownership)
Autostart modules
Shared drives and files opened remotely
Recording the time and date of the data collection allows you to define the time interval within which the investigator will perform the analysis of the system:
(date /t) & (time /t) > %COMPUTERNAME%\systime.txt
systeminfo | find "Boot Time" >> %COMPUTERNAME%\systime.txt
The last command shows how long the machine has been running since the last reboot. Using the %COMPUTERNAME% environment variable, we can set up separate directories for each machine in case we need to repeat the process of collecting information on different computers in the network. In some cases, signs of compromise are clearly visible in the analysis of network activity. The next set of commands allows you to get this information:
nbtstat -c > %COMPUTERNAME%\NetNameCache.txt
netstat -a -n -o > %COMPUTERNAME%\NetStat.txt
netstat -rn > %COMPUTERNAME%\NetRoute.txt
ipconfig /all > %COMPUTERNAME%\NIC.txt
promqry > %COMPUTERNAME%\NSniff.txt
The first command uses nbtstat.exe to obtain information from the NetBIOS name cache; it displays the NetBIOS names and their corresponding IP addresses. The second and third commands use netstat.exe to record all of the active connections, the listening ports, and the routing table. For information about the settings of the network interfaces, the ipconfig.exe command is used. The last command starts the Microsoft promqry utility, which identifies network interfaces on the local machine that are operating in promiscuous mode. This mode is required by network sniffers, so detecting it indicates that the computer may be running software that listens to network traffic.
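If the same volatile data has to be gathered from several machines, it is convenient to wrap these commands into a single collection script. The following batch sketch only illustrates the approach described above; it assumes that promqry.exe and the other tools are available on the PATH or in the responder's toolkit directory, and the output layout is our own convention rather than a standard:

@echo off
rem Minimal volatile-data collection sketch (one output folder per machine)
set OUTDIR=%COMPUTERNAME%
if not exist "%OUTDIR%" mkdir "%OUTDIR%"

rem Record the collection date/time and the last boot time first (order of volatility)
(date /t & time /t) > "%OUTDIR%\systime.txt"
systeminfo | find "Boot Time" >> "%OUTDIR%\systime.txt"

rem Network state: name cache, connections, routing table, NIC settings, promiscuous mode
nbtstat -c > "%OUTDIR%\NetNameCache.txt"
netstat -a -n -o > "%OUTDIR%\NetStat.txt"
netstat -rn > "%OUTDIR%\NetRoute.txt"
ipconfig /all > "%OUTDIR%\NIC.txt"
promqry > "%OUTDIR%\NSniff.txt"

Running such a script from removable media and writing the output back to that media keeps changes to the target system to a minimum, but, as noted above, every action taken by the responder should still be documented.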
To enumerate all the logged-on users on the computer, you can use the Sysinternals tools:
psloggedon -x > %COMPUTERNAME%\LoggedUsers.txt
logonsessions -p >> %COMPUTERNAME%\LoggedOnUsers.txt
The PsLoggedOn.exe command lists both types of users: those who are logged on to the computer locally, and those who logged on remotely over the network. Using the -x switch, you can get the time at which each user logged on. With the -p key, logonsessions will display all of the processes that were started by the user during the session. It should be noted that logonsessions must be run with administrator privileges.
To get a list of all drivers that are loaded into the system, you can use the WDK drivers.exe utility:
drivers.exe > %COMPUTERNAME%\drivers.txt
The next set of commands obtains the list of running processes and related information:
tasklist /svc > %COMPUTERNAME%\taskdserv.txt
psservice > %COMPUTERNAME%\trasklst.txt
tasklist /v > %COMPUTERNAME%\taskuserinfo.txt
pslist /t > %COMPUTERNAME%\tasktree.txt
listdlls > %COMPUTERNAME%\lstdlls.txt
handle -a > %COMPUTERNAME%\lsthandles.txt
The tasklist.exe utility run with the /svc switch enumerates the list of running processes and the services running in their context. While that command displays the list of running services, PsService retrieves information on services using the registry and the SCM database. Services are a traditional way through which attackers retain access to a previously compromised system. Services can be configured to run automatically without user intervention, and they can be launched as part of another process, such as svchost.exe. In addition to this, remote access can be provided through completely legitimate services, such as telnet or ftp.
To associate users with the processes they are running, use tasklist with the /v switch. To enumerate the list of DLLs loaded in each process and the full path to each DLL, you can use listdlls.exe from Sysinternals. Another utility, handle.exe, can be used to list all the handles that processes have open; this covers registry keys, files, ports, mutexes, and so on. Both of these utilities need to be run with administrator privileges. These tools can help identify malicious DLLs that were injected into processes, as well as the files to which those processes have access.
The next group of commands allows you to get a list of programs that are configured to start automatically:
autorunsc.exe -a > %COMPUTERNAME%\autoruns.txt
at > %COMPUTERNAME%\at.txt
schtasks /query > %COMPUTERNAME%\schtask.txt
The first command starts the Sysinternals autoruns utility and displays a list of executables that run at system startup and when users log on. This utility allows you to detect malware that uses popular and well-known methods to persist in the system. The other two commands (at and schtasks) display the list of commands that run on a schedule. Running the at command also requires administrator privileges.
Although services are often used to install backdoor mechanisms, services run in the system constantly and can therefore be easily detected during live response. An attacker may instead create a backdoor that runs on a schedule in order to avoid detection; for example, an attacker could create a task that runs the malware only outside working hours.
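To make such scheduled backdoors easier to spot, it can help to also capture the scheduled tasks in verbose form, where the command line, the account the task runs under, and the run times are all listed. This is only a suggested extension of the commands above; schtasks supports list and CSV output formats:

schtasks /query /fo LIST /v > %COMPUTERNAME%\schtask_verbose.txt
rem CSV output of the same data is easier to filter or load into a spreadsheet
schtasks /query /fo CSV /v > %COMPUTERNAME%\schtask_verbose.csv

Reviewing the "Task To Run" and "Next Run Time" fields for entries scheduled outside working hours, or pointing to unusual executables, is a quick first pass over this output.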
To get a list of shared drives and files that are opened remotely, you can use the following two commands:
psfile > %COMPUTERNAME%\openfileremote.txt
net share > %COMPUTERNAME%\drives.txt

Nonvolatile data
After the volatile data has been collected, you can continue to collect nonvolatile data. This data can also be obtained at the stage of analyzing the disk, but as we mentioned earlier, analysis of the disk is not possible in some cases. This data includes the following:
The list of installed software and updates
User information
Metadata about a filesystem's timestamps
Registry data
However, when collecting this data on a live, running system, there are difficulties associated with the fact that many of these files cannot be copied in the usual way, as they are locked by the operating system. To do this, use one of the specialized utilities. One such utility is RawCopy.exe, authored by Joakim Schicht. This is a console application that copies files off NTFS volumes using a low-level disk reading method. The application has two mandatory parameters, the target file and the output path:
-param1: This is the full path to the target file to extract; it also supports an index number ($MFT record number) instead of a file path
-param2: This is a valid path to the output directory
This tool lets you copy files that are usually not accessible because the system has locked them, for instance, the registry hives such as SYSTEM and SAM, files inside System Volume Information, or any other file on the volume. The input file can be specified either by its full file path or by its $MFT record number (index number).
Here's an example of copying the SYSTEM hive off a running system:
RawCopy.exe C:\WINDOWS\system32\config\SYSTEM %COMPUTERNAME%\SYSTEM
Here's an example of extracting the $MFT by specifying its index number:
RawCopy.exe C:\0 %COMPUTERNAME%\mft
Here's an example of extracting the file with MFT reference number 30224, including all its attributes (such as $DATA), and dumping it into C:\tmp:
RawCopy.exe C:\30224 C:\tmp -AllAttr
To download RawCopy, go to https://github.com/jschicht/RawCopy.
Knowing what software is installed and which updates have been applied helps further the investigation, because it shows possible ways the system may have been compromised through a vulnerability in the software. One of the first actions an attacker takes is to scan the system to detect active services and exploit the vulnerabilities in them. Thus, services that were not patched can be used for remote penetration of the system. One way to obtain the list of installed software and updates is to use the systeminfo utility:
systeminfo > %COMPUTERNAME%\sysinfo.txt
Moreover, skilled attackers may perform the same actions themselves and install the necessary updates in order to hide the traces of their penetration into the system. After identifying vulnerable services and successfully exploiting them, the attacker creates an account for themselves in order to subsequently use legitimate ways to enter the system.
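Since newly created accounts are a common trace of such activity, it is worth adding a snapshot of the local accounts and of the local Administrators group membership to the nonvolatile data set. The following commands are a minimal sketch; the output filenames are our own convention and not part of any standard toolkit:

net user > %COMPUTERNAME%\localusers.txt
net localgroup administrators > %COMPUTERNAME%\localadmins.txt
rem WMIC adds SIDs and the disabled flag, which helps spot recently added or re-enabled accounts
wmic useraccount get name,sid,disabled,localaccount >> %COMPUTERNAME%\localusers.txt

Comparing this snapshot against the organization's account inventory, or against an earlier baseline, often reveals accounts created by the intruder.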
Therefore, the analysis of data about the users of the system reveals the following traces of compromise:
The Recent folder contents, including LNK files and jump lists
LNK files in the Office Recent folder
The Network Recent folder contents
The entire Temp folder
The entire Temporary Internet Files folder
The PrivacIE folder
The Cookies folder
The Java Cache folder contents
Now, let's consider the preceding cases as follows.
Collecting the Recent folder is done as follows:
robocopy.exe %RECENT% %COMPUTERNAME%\Recent /ZB /copy:DAT /r:0 /ts /FP /np /E /log:%COMPUTERNAME%\Recent\log.txt
Here, %RECENT% depends on the version of Windows.
For Windows 5.x (Windows 2000, Windows XP, and Windows 2003), this is as follows:
%RECENT% = %systemdrive%\Documents and Settings\%USERNAME%\Recent
For Windows 6.x (Windows Vista and newer), this is as follows:
%RECENT% = %systemdrive%\Users\%USERNAME%\AppData\Roaming\Microsoft\Windows\Recent
Collecting the Office Recent folder is done as follows:
robocopy.exe %RECENT_OFFICE% %COMPUTERNAME%\Recent_Office /ZB /copy:DAT /r:0 /ts /FP /np /E /log:%COMPUTERNAME%\Recent_Office\log.txt
Here, %RECENT_OFFICE% depends on the version of Windows.
For Windows 5.x (Windows 2000, Windows XP, and Windows 2003), this is as follows:
%RECENT_OFFICE% = %systemdrive%\Documents and Settings\%USERNAME%\Application Data\Microsoft\Office\Recent
For Windows 6.x (Windows Vista and newer), this is as follows:
%RECENT_OFFICE% = %systemdrive%\Users\%USERNAME%\AppData\Roaming\Microsoft\Windows\Office\Recent
Collecting the Network Shares Recent folder is done as follows:
robocopy.exe %NetShares% %COMPUTERNAME%\NetShares /ZB /copy:DAT /r:0 /ts /FP /np /E /log:%COMPUTERNAME%\NetShares\log.txt
Here, %NetShares% depends on the version of Windows.
For Windows 5.x (Windows 2000, Windows XP, and Windows 2003), this is as follows:
%NetShares% = %systemdrive%\Documents and Settings\%USERNAME%\NetHood
For Windows 6.x (Windows Vista and newer), this is as follows:
%NetShares% = %systemdrive%\Users\%USERNAME%\AppData\Roaming\Microsoft\Windows\Network Shortcuts
Collecting the Temporary folder is done as follows:
robocopy.exe %TEMP% %COMPUTERNAME%\TEMP /ZB /copy:DAT /r:0 /ts /FP /np /E /log:%COMPUTERNAME%\TEMP\log.txt
Here, %TEMP% depends on the version of Windows.
For Windows 5.x (Windows 2000, Windows XP, and Windows 2003), this is as follows:
%TEMP% = %systemdrive%\Documents and Settings\%USERNAME%\Local Settings\Temp
For Windows 6.x (Windows Vista and newer), this is as follows:
%TEMP% = %systemdrive%\Users\%USERNAME%\AppData\Local\Temp
Collecting the Temporary Internet Files folder is done as follows:
robocopy.exe %TEMP_INTERNET_FILES% %COMPUTERNAME%\TEMP_INTERNET_FILES /ZB /copy:DAT /r:0 /ts /FP /np /E /log:%COMPUTERNAME%\TEMP_INTERNET_FILES\log.txt
Here, %TEMP_INTERNET_FILES% depends on the version of Windows.
For Windows 5.x (Windows 2000, Windows XP, and Windows 2003), this is as follows:
%TEMP_INTERNET_FILES% = %systemdrive%\Documents and Settings\%USERNAME%\Local Settings\Temporary Internet Files
For Windows 6.x (Windows Vista and newer), this is as follows:
%TEMP_INTERNET_FILES% = %systemdrive%\Users\%USERNAME%\AppData\Local\Microsoft\Windows\Temporary Internet Files
Collecting the PrivacIE folder is done as follows:
robocopy.exe %PRIVACYIE% %COMPUTERNAME%\PrivacIE /ZB /copy:DAT /r:0 /ts /FP /np /E /log:%COMPUTERNAME%\PrivacIE\log.txt
Here, %PRIVACYIE% depends on the version of Windows.
For Windows 5.x (Windows 2000, Windows XP, and Windows 2003), this is as follows:
%PRIVACYIE% = %systemdrive%\Documents and Settings\%USERNAME%\PrivacIE
For Windows 6.x (Windows Vista and newer), this is as follows:
%PRIVACYIE% = %systemdrive%\Users\%USERNAME%\AppData\Roaming\Microsoft\Windows\PrivacIE
Collecting the Cookies folder is done as follows:
robocopy.exe %COOKIES% %COMPUTERNAME%\Cookies /ZB /copy:DAT /r:0 /ts /FP /np /E /log:%COMPUTERNAME%\Cookies\log.txt
Here, %COOKIES% depends on the version of Windows.
For Windows 5.x (Windows 2000, Windows XP, and Windows 2003), this is as follows:
%COOKIES% = %systemdrive%\Documents and Settings\%USERNAME%\Cookies
For Windows 6.x (Windows Vista and newer), this is as follows:
%COOKIES% = %systemdrive%\Users\%USERNAME%\AppData\Roaming\Microsoft\Windows\Cookies
Collecting the Java Cache folder is done as follows:
robocopy.exe %JAVACACHE% %COMPUTERNAME%\JAVACACHE /ZB /copy:DAT /r:0 /ts /FP /np /E /log:%COMPUTERNAME%\JAVACACHE\log.txt
Here, %JAVACACHE% depends on the version of Windows.
For Windows 5.x (Windows 2000, Windows XP, and Windows 2003), this is as follows:
%JAVACACHE% = %systemdrive%\Documents and Settings\%USERNAME%\Application Data\Sun\Java\Deployment\cache
For Windows 6.x (Windows Vista and newer), this is as follows:
%JAVACACHE% = %systemdrive%\Users\%USERNAME%\AppData\LocalLow\Sun\Java\Deployment\cache

Remote live response
However, as mentioned earlier, it is often necessary to collect this information remotely. On Windows systems, this is often done using the Sysinternals PsExec utility. PsExec lets you execute commands on remote computers and does not require any installation on the target system. The way the program works is that the psexec.exe executable carries another executable, psexesvc.exe, as an embedded resource. This file runs as a Windows service on the target machine. Before executing the command, PsExec unpacks this hidden resource to the administrative share Admin$ (C:\Windows) of the remote computer, as the file Admin$\system32\psexesvc.exe. After copying it, PsExec installs and starts the service using the Windows service management API functions. Then, once psexesvc has started, a data connection (for sending input commands and retrieving results) is established between psexesvc and psexec. Upon completion of the work, PsExec stops the service and removes it from the target computer.
If the remote collection of information has to be carried out from a working machine running a UNIX-like OS, the Winexe utility can be used. Winexe is a GNU/Linux-based application that allows users to execute commands remotely on Windows NT/2000/XP/2003/Vista/7/8 systems. It installs a service on the remote system, executes the command, and uninstalls the service. Winexe allows execution of most Windows shell commands:
winexe -U [Domain/]User%Password //host command
To launch a Windows shell from inside your Linux system, use the following command:
winexe -U HOME/Administrator%Pass123 //192.168.0.1 "cmd.exe"

Summary
In this article, we discussed what we should have in the Jump Bag to handle a computer incident and what skills the members of the IR team require. We also took a look at live response, collected volatile and nonvolatile information from a live system, discussed different tools for collecting this information, and looked at when a live response approach should be used as an alternative to traditional forensics.