
How-To Tutorials - Cybersecurity

90 Articles

Marriott’s Starwood guest database faces a massive data breach affecting 500 million user data

Savia Lobo
03 Dec 2018
5 min read
Last week, the hospitality company Marriott International disclosed details of a massive data breach that exposed the personal and financial information of its customers. According to Marriott, the breach had been ongoing for the past four years and collected information about customers who made reservations in its Starwood subsidiary. The breach included details of approximately 500 million guests. For approximately 327 million of these guests, the exposed information includes some combination of name, mailing address, phone number, email address, passport number, Starwood Preferred Guest (“SPG”) account information, date of birth, gender, arrival and departure information, reservation date, and communication preferences.

The four-year-long breach that hit Marriott’s customer data

On September 8, 2018, Marriott received an alert from an internal security tool reporting that an attempt had been made to access the Starwood guest reservation database in the United States. Marriott then carried out an investigation, which revealed that attackers had had access to the Starwood network since 2014. According to Marriott’s news center, “On November 19, 2018, the investigation determined that there was unauthorized access to the database, which contained guest information relating to reservations at Starwood properties* on or before September 10, 2018.” For some of the 500 million guests, the exposed information includes payment card details such as numbers and expiration dates. However, “the payment card numbers were encrypted using Advanced Encryption Standard encryption (AES-128). There are two components needed to decrypt the payment card numbers, and at this point, Marriott has not been able to rule out the possibility that both were taken.
For the remaining guests, the information was limited to name and sometimes other data such as mailing address, email address, or other information”, stated the Marriott news release. Arne Sorenson, Marriott’s President and Chief Executive Officer, said, “We will continue to support the efforts of law enforcement and to work with leading security experts to improve. Finally, we are devoting the resources necessary to phase out Starwood systems and accelerate the ongoing security enhancements to our network.” Marriott has also reported the incident to law enforcement and is notifying regulatory authorities.

This is not the first time Starwood data was breached

Marriott did not say exactly when the breach hit it four years ago in 2014. However, its subsidiary Starwood revealed in November 2015, a few days after Marriott announced it was acquiring the company, that more than 50 of Starwood’s properties had been breached. According to Starwood’s disclosure at the time, that earlier breach stretched back at least one year, to November 2014. According to Krebs on Security, “Back in 2015, Starwood said the intrusion involved malicious software installed on cash registers at some of its resort restaurants, gift shops and other payment systems that were not part of its guest reservations or membership systems.” In December 2016, KrebsOnSecurity stated that “banks were detecting a pattern of fraudulent transactions on credit cards that had one thing in common: They’d all been used during a short window of time at InterContinental Hotels Group (IHG) properties, including Holiday Inns and other popular chains across the United States.” Marriott said that its own network has not been affected by the four-year breach and that the investigation identified unauthorized access only to the separate Starwood network.
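Marriott’s statement that two components are needed to decrypt the AES-128-encrypted card numbers hints at key splitting. A toy sketch of the idea, assuming a simple XOR split (an illustration only; Marriott has not disclosed its actual scheme):

```python
import os

def split_key(key: bytes):
    """Split a key into two components with XOR so that neither alone reveals it."""
    component1 = os.urandom(len(key))
    component2 = bytes(a ^ b for a, b in zip(key, component1))
    return component1, component2

def recombine(c1: bytes, c2: bytes) -> bytes:
    """Recover the original key; requires BOTH components."""
    return bytes(a ^ b for a, b in zip(c1, c2))

key = os.urandom(16)              # a 128-bit AES key
c1, c2 = split_key(key)
assert recombine(c1, c2) == key   # both components together recover the key
assert c1 != key and c2 != key    # either component alone is useless
```

This is why an attacker needs to exfiltrate both components, not just one, to decrypt the stored card numbers.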
“Marriott is providing its affected guests in the United States, Canada, and the United Kingdom a free year’s worth of service from WebWatcher, one of several companies that advertise the ability to monitor the cybercrime underground for signs that the customer’s personal information is being traded or sold”, said Krebs on Security.

What should compromised users do?

Companies, as a defense measure, can pay threat hunters to look out for new intrusions. They can also test their own networks and employees for weaknesses, and arrange drills to test their breach response preparedness. Individuals who re-use the same password should try a password manager, which remembers strong passwords/passphrases and essentially lets you use a single strong master password/passphrase across all websites. Krebs on Security’s “assume you’re compromised” philosophy “involves freezing your credit files with the major credit bureaus and regularly ordering free copies of your credit file from annualcreditreport.com to make sure nobody is monkeying with your credit (except you).” Rob Rosenberger, co-founder of Vmyths, tweeted advice urging everyone who booked a room at a Starwood property since 2014 to change their mother’s maiden name and social security number soon. https://twitter.com/vmyths/status/1069273409652224000

To know more about the Marriott breach in detail, visit Marriott’s official website.

Uber fined by British ICO and Dutch DPA for nearly $1.2m over a data breach from 2016
Dell reveals details on its recent security breach
Twitter on the GDPR radar for refusing to provide a user his data due to ‘disproportionate effort’ involved


Malicious code in npm ‘event-stream’ package targets a bitcoin wallet and causes 8 million downloads in two months

Savia Lobo
28 Nov 2018
3 min read
Last week, Ayrton Sparling, a Computer Science major at CSUF, California, disclosed that the popular npm package event-stream contains a malicious package named flatmap-stream. He disclosed the issue via a GitHub issue on the EventStream repository. The event-stream npm package was originally created and maintained by Dominic Tarr; however, the package had not been updated for a long time. According to Thomas Hunter’s post on Medium, “Ownership of event-stream, was transferred by the original author to a malicious user, right9ctrl. The malicious user was able to gain the trust of the original author by making a series of meaningful contributions to the package.” The malicious owner then added a malicious library named flatmap-stream to the event-stream package as a dependency, so every user who installed event-stream (via the malicious 3.3.6 version) downloaded and invoked it. The malicious library racked up nearly 8 million downloads since it was included in September 2018.

The malicious package represents a highly targeted attack and affects an open source app called bitpay/copay. Copay is a secure bitcoin wallet platform for both desktop and mobile devices. “We know the malicious package specifically targets that application because the obfuscated code reads the description field from a project’s package.json file, then uses that description to decode an AES256 encrypted payload”, said Thomas in his post. Following the disclosure, many users on Twitter and GitHub have voiced support for Dominic. In a statement on the event-stream issue, Dominic stated, “I've shared publish rights with other people before. Of course, if I had realized they had a malicious intent I wouldn't have, but at the time it looked like someone who was actually trying to help me”.
https://twitter.com/dominictarr/status/1067186943304159233 In support of Dominic, André Staltz, an open source hacker, tweeted: https://twitter.com/andrestaltz/status/1067157915398746114

Users affected by this malicious code are advised to remove the package from their applications by reverting to version 3.3.4 of event-stream. If an application deals with bitcoin, its activity over the last three months should be inspected to check whether any mined or transferred bitcoins failed to make it into the expected wallet. Even if the application does not deal with bitcoin but is especially sensitive, an inspection of its activity over the last three months for suspicious behavior, notably data sent on the network to unintended destinations, is still recommended. To know more about this in detail, visit the event-stream repository.

A new data breach on Facebook due to malicious browser extensions allowed almost 81,000 users’ private data up for sale, reports BBC News
Wireshark for analyzing issues and malicious emails in POP, IMAP, and SMTP [Tutorial]
Machine learning based Email-sec-360° surpasses 60 antivirus engines in detecting malicious emails
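One way to check whether a project pulled in the compromised release is to scan its lockfile for the flagged versions. A minimal, hypothetical check (the versions event-stream 3.3.6 and flatmap-stream 0.1.1 are those reported publicly for this incident):

```python
import json

# Versions publicly reported as carrying the malicious payload.
MALICIOUS = {"event-stream": "3.3.6", "flatmap-stream": "0.1.1"}

def find_compromised(lockfile_text: str):
    """Return (name, version) pairs from a package-lock.json that match known-bad releases."""
    lock = json.loads(lockfile_text)
    hits = []
    for name, info in lock.get("dependencies", {}).items():
        if MALICIOUS.get(name) == info.get("version"):
            hits.append((name, info["version"]))
    return hits

# Example lockfile with one compromised dependency.
sample = json.dumps({"dependencies": {
    "event-stream": {"version": "3.3.6"},
    "lodash": {"version": "4.17.11"}}})
print(find_compromised(sample))  # [('event-stream', '3.3.6')]
```

A real audit would also walk nested dependency trees, which this sketch omits for brevity.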


How to secure a private cloud using IAM

Savia Lobo
10 May 2018
11 min read
In this article, we look at securing the private cloud using IAM. For IAM, OpenStack uses the Keystone project. Keystone provides the identity, token, catalog, and policy services, which are used specifically by OpenStack services. It is organized as a group of internal services exposed on one or many endpoints. For example, an authentication call validates the user and project credentials with the identity service.

[box type="shadow" align="" class="" width=""]This article is an excerpt from the book 'Cloud Security Automation'. In this book, you'll learn how to work with OpenStack security modules and how private cloud security functions can be automated for better time and cost-effectiveness.[/box]

Authentication

Authentication is an integral part of an OpenStack deployment, so we must be careful about the system design. Authentication is the process of confirming a user's identity, that is, verifying that a user actually is who they claim to be, for example by providing a username and a password when logging into a system. Keystone supports authentication using a username and password, LDAP, and external authentication methods. After successful authentication, the identity service provides the user with an authorization token, which is used for subsequent service requests. Transport Layer Security (TLS) provides authentication between services and users using X.509 certificates. The default mode for TLS is server-side-only authentication, but we can also use certificates for client authentication.

There is also the case where an attacker tries to access the console by guessing usernames and passwords. If we have not enabled a policy to handle this, it can be disastrous. For this, we can use a failed login policy, which allows only a maximum number of failed login attempts; after that, the account is blocked for a certain number of hours and the user is notified.
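The failed login policy described above can be sketched as a small in-memory lockout tracker. This is an illustration only, not Keystone code, and the thresholds are arbitrary:

```python
import time

MAX_ATTEMPTS = 5        # failed attempts allowed before lockout
LOCKOUT_SECONDS = 3600  # block the account for one hour

class LoginGuard:
    """Track failed logins per user and lock accounts that exceed the limit."""
    def __init__(self):
        self.failures = {}  # username -> consecutive failure count
        self.locked = {}    # username -> unlock timestamp

    def is_locked(self, user, now=None):
        now = time.time() if now is None else now
        return self.locked.get(user, 0) > now

    def record_failure(self, user, now=None):
        now = time.time() if now is None else now
        count = self.failures.get(user, 0) + 1
        self.failures[user] = count
        if count >= MAX_ATTEMPTS:
            # Lock the account and reset the counter; a real system
            # would also notify the user here.
            self.locked[user] = now + LOCKOUT_SECONDS
            self.failures[user] = 0

guard = LoginGuard()
for _ in range(5):
    guard.record_failure("alice", now=0)
assert guard.is_locked("alice", now=10)        # locked after 5 failures
assert not guard.is_locked("alice", now=3601)  # lock expires after an hour
```

A production implementation would persist this state and, as the text notes, typically live in an external authentication service rather than Keystone itself.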
However, the identity service in Keystone does not provide a method to limit access to accounts after repeated unsuccessful login attempts. For this, we need to rely on an external authentication system that locks out an account after a configured number of failed login attempts. The account might then be unlocked only with further side-channel intervention, on request, or after a certain duration.

Detection techniques are most useful when a prevention method is available to limit the damage. In the detection process, we frequently review the access control logs to identify unauthorized attempts to access accounts. If the review of access control logs reveals signs of a brute-force attack (where an attacker tries to guess usernames and passwords to log in to the system), we can enforce stronger usernames and passwords or block the source of the attack (its IP) through firewall rules. Defining firewall rules on the Keystone node restricts connections, which helps to reduce the attack surface. Reviewing access control logs also helps to examine account activity for unusual logins and suspicious actions, so that we can take corrective actions such as disabling the account. To increase the level of security, we can also use MFA for network access to privileged user accounts; Keystone supports external authentication services through the Apache web server that can provide this functionality. Servers can also enforce client-side authentication using certificates. This helps guard against brute-force and phishing attacks that may compromise administrator passwords.

Authentication methods – internal and external

Keystone stores user credentials in a database or may use an LDAP-compliant directory server. The Keystone identity database can be kept separate from databases used by other OpenStack services to reduce the risk of a compromise of the stored credentials.
When we use a username and password to authenticate, the identity service does not apply policies for password strength, expiration, or failed authentication attempts. For this, we need to implement external authentication services. To integrate an external authentication system, or to use an existing directory service for user account management, we can use LDAP, which simplifies the integration process. In OpenStack, authentication and authorization policy may be delegated to another service. For example, consider an organization that is going to deploy a private cloud and already has a database of employees and users in an LDAP system. Using this LDAP as an authentication authority, requests to the identity service (Keystone) are transferred to the LDAP system, which allows or denies requests based on its policies. After successful authentication, the identity service generates a token for access to the authorized services. If the LDAP already defines attributes for the user, such as the admin, finance, and HR departments, these must be mapped into roles and groups within identity for use by the various OpenStack services. We define this mapping in the Keystone configuration file, stored at /etc/keystone/keystone.conf.

Keystone must not be allowed to write to the LDAP directory used for authentication outside of the OpenStack scope, as a sufficiently privileged Keystone user could otherwise make changes to the LDAP directory, which is undesirable from a security point of view and can also lead to unauthorized access to other information and resources. So, if we have other authentication providers such as LDAP or Active Directory, user provisioning always happens in those authentication provider systems.

For external authentication, we have the following methods:

MFA: The MFA service requires the user to provide additional layers of information for authentication, such as a one-time password token or an X.509 certificate (called an MFA token).
Once MFA is implemented, the user has to enter the MFA token after entering the user ID and password for a successful login.

Password policy enforcement: Once the external authentication service is in place, we can require user passwords to conform to minimum standards for length, diversity of characters, expiration, or failed login attempts.

Keystone also supports TLS-based client authentication. TLS client authentication provides an additional authentication factor, apart from the username and password, which makes user identification more reliable. It reduces the risk of unauthorized access when usernames and passwords are compromised. However, TLS-based authentication is not cost-effective, as we need a certificate for each client.

Authorization

Keystone also provides groups and roles. Users belong to groups, and a group has a list of roles. All of the OpenStack services, such as Cinder, Glance, Nova, and Horizon, reference the roles of the user attempting to access the service. OpenStack policy enforcers always consider the policy rule associated with each resource and use the user’s group or role, and their association, to allow or deny access to the service. Before configuring roles, groups, and users, we should document the required access control policies for the OpenStack installation. The policies must satisfy the regulatory or legal requirements of the organization. Additional changes to the access control configuration should be made according to the formal policies. These policies must include the conditions and processes for creating, deleting, disabling, and enabling accounts, and for assigning privileges to the accounts. These policies should be reviewed from time to time to ensure that the configuration remains in compliance with the approved policies.
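The one-time password tokens mentioned under MFA above are typically HOTP/TOTP codes. A minimal HOTP implementation per RFC 4226 (illustrative; Keystone delegates this to the external authentication service):

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Compute an RFC 4226 HOTP code for a shared secret and counter."""
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()  # HMAC-SHA1 of the counter
    offset = digest[-1] & 0x0F                             # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 Appendix D test vector: counter 0 with this key yields 755224.
assert hotp(b"12345678901234567890", 0) == "755224"
```

TOTP (used by most authenticator apps) is the same construction with the counter derived from the current time.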
For user creation and administration, a user with the admin role in Keystone must be created for each OpenStack service. This account provides the service with the authorization to authenticate users. Nova (compute) and Swift (object storage) can be configured to use the identity service to store authentication information. For a test environment, we can use tempAuth, which records user credentials in a text file, but this is not recommended for production. The OpenStack administrator must protect sensitive configuration files from unauthorized modification with mandatory access control frameworks such as SELinux, or with DAC. We also need to protect the Keystone configuration files, stored at /etc/keystone/keystone.conf, as well as the X.509 certificates.

It is recommended that cloud admin users authenticate using the identity service (Keystone) together with an external authentication service that supports two-factor authentication. Authenticating with two factors reduces the risk of compromised passwords. This is also recommended in the NIST guideline NIST 800-53 IA-2(1), which calls for MFA for network access to privileged accounts, where one factor is provided by a device separate from the system being accessed.

Policy, tokens, and domains

In OpenStack, every service defines the access policies for its resources in a policy file, where a resource can be an API access, creating and attaching a Cinder volume, or creating an instance. The policy rules are defined in JSON format in a file called policy.json. Only administrators can modify the service-based policy.json file to control access to the various resources. However, one also has to ensure that any change to the access control policies does not unintentionally breach, or create an option to breach, the security of any resource. Any change made to policy.json is applied immediately and does not require a service restart.
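A hypothetical policy.json fragment of the kind described above (the rule names and roles are illustrative, not taken from a real OpenStack service's default policy):

```json
{
    "admin_only": "role:admin",
    "volumes:create": "role:admin or role:member",
    "volumes:delete": "rule:admin_only"
}
```

Each key names a resource/action, and its value is a rule expression evaluated against the requesting user's roles; `rule:` references let one rule reuse another.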
After a user is authenticated, a token is generated for authorization and access to an OpenStack environment. A token can have a variable lifespan, but the default value is one hour. It is recommended to lower the lifespan of the token only to a level at which the internal services can still complete their tasks within the specified timeframe; if the token expires before task completion, the system can become unresponsive. Keystone also supports token revocation, exposing an API to revoke a token and to list revoked tokens.

In the OpenStack Newton release, there are four supported token types: UUID, PKI, PKIZ, and fernet. Since the OpenStack Ocata release, there are two supported token types: UUID and fernet. We'll look at each of these token types in detail here:

UUID: These are persistent tokens. UUID tokens are 32 bytes in length and must be persisted in the backend. They are stored in the Keystone backend, along with the metadata for authentication. All clients must pass their UUID token to Keystone (the identity service) in order to validate it.

PKI and PKIZ: These are signed documents that contain the authentication content, as well as the service catalog. The difference is that PKIZ tokens are compressed to help mitigate the size issues of PKI (PKI tokens sometimes become very long). Both token types became obsolete after the Ocata release. The length of PKI and PKIZ tokens typically exceeds 1,600 bytes. The identity service uses public and private key pairs and certificates in order to create and validate these tokens.

Fernet: These tokens are the default token provider as of the OpenStack Pike release. Fernet is a secure messaging format explicitly designed for use in API tokens. Fernet tokens are nonpersistent, lightweight (in the range of 180 to 240 bytes), and reduce operational overhead.
Authentication and authorization metadata is bundled into a message-packed payload, which is then encrypted and signed as a fernet token.

In the OpenStack Keystone service, a domain is a high-level container for projects, users, and groups. Domains are used to centrally manage all Keystone-based identity components. Compute, storage, and other resources can be logically grouped into multiple projects, which can further be grouped under a master account. Users of different domains can be represented in different authentication backends and have different attributes that must be mapped to a single set of roles and privileges in the policy definitions to access the various service resources. Domain-specific authentication drivers allow the identity service to be configured for multiple domains, using domain-specific configuration files referenced from keystone.conf.

Federated identity

Federated identity enables you to establish trust between identity providers and the cloud environment (the OpenStack cloud). It gives you secure access to cloud resources using your existing identity, so you do not need to remember multiple credentials to access your applications. Why use federated identity? There are several reasons:

It enables your security team to manage all users (cloud or non-cloud) from a single identity application.
Without it, setting up a different identity provider per application creates additional workload for the security team and increases security risk.
It makes life easier for users by providing a single credential for all apps, saving them the time they would otherwise spend on the forgot-password page.

Federated identity gives you a single sign-on mechanism, which can be implemented using SAML 2.0. To do this, you need to run the identity service provider under Apache.

We learned about securing your private cloud and the authentication process therein.
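The fernet token design described above, a signed (and encrypted) payload that needs no backend persistence, can be approximated with a stdlib sketch. This is a simplification under stated assumptions: real fernet also encrypts the payload with AES-128-CBC, and this toy version only signs, so it is not fernet-compatible:

```python
import base64
import hashlib
import hmac
import json

# Illustrative key; Keystone rotates real fernet keys on disk.
SIGNING_KEY = b"an-example-signing-key"

def issue_token(payload: dict) -> bytes:
    """Serialize a payload and append an HMAC-SHA256 signature."""
    body = json.dumps(payload, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, body, hashlib.sha256).digest()
    return base64.urlsafe_b64encode(body + sig)

def validate_token(token: bytes) -> dict:
    """Verify the signature and return the payload; no server-side state needed."""
    raw = base64.urlsafe_b64decode(token)
    body, sig = raw[:-32], raw[-32:]
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    return json.loads(body)

token = issue_token({"user": "alice", "project": "demo"})
assert validate_token(token)["user"] == "alice"
```

The key property shown is the nonpersistent one: any service holding the key can validate a token locally, with no token table in the backend.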
If you've enjoyed this article, do check out 'Cloud Security Automation' for a hands-on experience of automating your cloud security and governance.

Top 5 cloud security threats to look out for in 2018
Cloud Security Tips: Locking Your Account Down with AWS Identity Access Manager (IAM)


Why Wall Street unfriended Facebook: Stocks fell $120 billion in market value after Q2 2018 earnings call

Natasha Mathur
27 Jul 2018
6 min read
After being found guilty of serving discriminatory advertisements on its platform earlier this week, Facebook hit yet another wall yesterday as its stock closed down 18.96% on Thursday, with shares at $176.26. The company lost around $120 billion in market value overnight, the largest one-day loss of value ever for a US-traded company since Intel Corp’s crash two decades ago; Intel had lost a little over $18 billion in one day, 18 years back. Despite 41.24% revenue growth compared to last year, this was Facebook’s biggest stock market drop ever. Here’s the stock chart from NASDAQ showing the figures.

Facebook’s market capitalization was worth $629.6 billion on Wednesday. As soon as Facebook’s earnings call concluded at the end of market trading on Thursday, its worth dropped to $510 billion after the close. As Facebook’s shares continued to fall during Thursday’s trading, its CEO, Mark Zuckerberg, was left with less than $70 billion, wiping out nearly $17 billion of his personal stake, according to Bloomberg. He was also demoted from the third to the sixth position on Bloomberg’s Billionaires Index.

Active user growth starting to stagnate in mature markets

According to David Wehner, CFO at Facebook, “the Daily active users count on Facebook reached 1.47 billion, up 11% compared to last year, led by growth in India, Indonesia, and the Philippines. This number represents approximately 66% of the 2.23 billion monthly active users in Q2”.

Facebook’s daily active users

He also mentioned that “MAUs (monthly active users) were up 228M or 11% compared to last year. It is worth noting that MAU and DAU in Europe were both down slightly quarter-over-quarter due to the GDPR rollout, consistent with the outlook we gave on the Q1 call”.

Facebook’s monthly active users

In fact, Facebook has implemented several privacy policy changes in the last few months.
This is due to the European Union's General Data Protection Regulation (GDPR), whose effects showed up in the company's earnings report.

Revenue growth rate is falling too

Speaking of revenue expectations, Wehner gave investors a heads-up that revenue growth rates will decline in the third and fourth quarters. Wehner stated that the company’s “total revenue growth rate decelerated approximately 7 percentage points in Q2 compared to Q1. Our total revenue growth rates will continue to decelerate in the second half of 2018, and we expect our revenue growth rates to decline by high single-digit percentages from prior quarters sequentially in both Q3 and Q4.” Facebook reiterated that these numbers won’t get better anytime soon.

Facebook’s Q2 2018 revenue

Wehner further explained the reasons for the decline in revenue: “There are several factors contributing to that deceleration..we expect the currency to be a slight headwind in the second half ...we plan to grow and promote certain engaging experiences like Stories that currently have lower levels of monetization. We are also giving people who use our services more choices around data privacy which may have an impact on our revenue growth”.

Let’s look at other performance indicators

Other financial highlights of Q2 2018 are as follows:

Mobile advertising revenue represented 91% of advertising revenue for Q2 2018, up from approximately 87% in Q2 2017.
Capital expenditures for Q2 2018 were $3.46 billion, up from $1.4 billion in Q2 2017.
Headcount was 30,275 as of June 30, an increase of 47% year-over-year.
Cash, cash equivalents, and marketable securities were $42.3 billion at the end of Q2 2018, up from $35.45 billion at the end of Q2 2017.
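As a quick sanity check, the market-cap figures quoted earlier are internally consistent: an 18.96% one-day fall from a $629.6 billion capitalization matches both the roughly $120 billion loss and the roughly $510 billion close.

```python
# Figures from the article, in billions of dollars.
cap_before = 629.6   # Wednesday's market capitalization
drop = 0.1896        # Thursday's percentage fall

cap_after = cap_before * (1 - drop)
loss = cap_before - cap_after
print(round(cap_after, 1), round(loss, 1))  # 510.2 119.4
```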
Wehner also mentioned that the company “continue to expect that full-year 2018 total expenses will grow in the range of 50-60% compared to last year. In addition to increases in core product development and infrastructure -- growth is driven by increasing investments -- safety & security, AR/VR, marketing, and content acquisition”.

Another reason for the overall loss is that Facebook has been dealing with criticism for quite some time now over its content policies, its handling of users’ private data, and its changing rules for advertisers. In fact, it is currently investigating the data analytics firm Crimson Hexagon over misuse of data. Mark Zuckerberg also said on a conference call with financial analysts that Facebook has been investing heavily in “safety, security, and privacy” and that, as they’re “investing - in security that it will start to impact our profitability, we’re starting to see that this quarter - we run this company for the long term, not for the next quarter”.

Here’s what the public feels about the recent wipe-out:
https://twitter.com/TeaPainUSA/status/1022586648155054081
https://twitter.com/alistairmilne/status/1022550933014753280

So, why did Facebook’s stocks crash?

As we can see, Facebook’s Q2 2018 performance was better than its performance in the same quarter last year as far as revenue goes. Ironically, scandals and lawsuits have had little impact on Facebook’s growth. For example, Facebook recovered fully from the Cambridge Analytica scandal within two months as far as share prices are concerned. The Mueller indictment report released earlier this month arrested growth for merely a couple of days before the company bounced back. The discriminatory advertising verdict against Facebook had no impact on its bullish growth earlier this week. This brings us to conclude that public sentiment and market reactions against Facebook have very different underlying reasons.
The market’s strong reaction is mainly due to concerns over the slowdown in active user growth, the lack of monetization opportunities on the more popular Instagram platform, and Facebook’s perceived inability to adapt successfully to new political and regulatory policies such as the GDPR. Wall Street has been indifferent to Facebook’s long list of scandals, in some ways enabling the company’s ‘move fast and break things’ approach. In his earnings call on Thursday, Zuckerberg hinted that Facebook may no longer be keen on ‘growth at all costs’, saying things like “we’re investing so much in security that it will significantly impact our profitability”, with Wehner adding, “Looking beyond 2018, we anticipate that total expense growth will exceed revenue growth in 2019.” And that has got Wall Street unfriending Facebook with just a click of a button!

Is Facebook planning to spy on you through your mobile’s microphones?
Facebook to launch AR ads on its news feed to let you try on products virtually
Decoding the reasons behind Alphabet’s record high earnings in Q2 2018


Security researcher publicly releases second Steam zero-day after being banned from Valve's bug bounty program

Savia Lobo
22 Aug 2019
6 min read
Updated with Valve’s response: In a statement on August 22, Valve said that its HackerOne bug bounty program should not have turned Kravets away when he reported the second vulnerability, calling it “a mistake”.

A Russian security researcher, Vasily Kravets, has found a second zero-day vulnerability in the Steam gaming platform within a span of two weeks. The researcher said he reported the first Steam zero-day vulnerability earlier in August to Steam's parent company, Valve, and tried to have it fixed before public disclosure. However, “he said he couldn't do the same with the second because the company banned him from submitting further bug reports via its public bug bounty program on the HackerOne platform,” ZDNet reports.

Source: amonitoring.ru

The first flaw was a “privilege-escalation vulnerability that can allow an attacker to level up and run any program with the highest possible rights on any Windows computer with Steam installed. It was released after Valve said it wouldn’t fix it (Valve then published a patch, that the same researcher said can be bypassed),” according to Threatpost. Although Kravets was banned from the HackerOne platform, he disclosed the second flaw, which enables a local privilege escalation in the Steam client, on Tuesday, saying that it would be simple for any OS user to exploit. Kravets told Threatpost that he is not aware of a patch for the vulnerability. “Any user on a PC could do all actions from exploit’s description (even ‘Guest’ I think, but I didn’t check this). So [the] only requirement is Steam,” Kravets told Threatpost. He also said, “It’s sad and simple — Valve keeps failing. The last patch, that should have solved the problem, can be easily bypassed so the vulnerability still exists.
Yes, I’ve checked, it works like a charm.” Another security researcher, Matt Nelson also said he had found the exact same bug as Kravets had, which “he too reported to Valve's HackerOne program, only to go through a similar bad experience as Kravets,” ZDNet reports. He said both Valve and HackerOne took five days to acknowledge the bug and later refused to patch it. Further, they locked the bug report when Nelson wanted to disclose the bug publicly and warn users. “Nelson later released proof-of-concept code for the first Steam zero-day, and also criticized Valve and HackerOne for their abysmal handling of his bug report”, ZDNet reports. https://twitter.com/enigma0x3/status/1148031014171811841 “Despite any application itself could be harmful, achieving maximum privileges can lead to much more disastrous consequences. For example, disabling firewall and antivirus, rootkit installation, concealing of process-miner, theft any PC user’s private data — is just a small portion of what could be done,”  said Kravets. Kravets demonstrated the second Steam zero-day and also detailed the vulnerability on his website. Per Threatpost as of August 21, “Valve did not respond to a request for comment about the vulnerability, bug bounty incident and whether a patch is available. HackerOne did not have a comment.” Other researchers who have participated in Valve’s bug bounty program are infuriated over Valve’s decision to not only block Kravets from submitting further bug reports, but also refusing to patch the flaw. https://twitter.com/Viss/status/1164055856230440960 https://twitter.com/kamenrannaa/status/1164408827266998273 A user on Reddit writes, “If management isn't going to take these issues seriously and respect a bug bounty program, then you need to bring about some change from within. 
Now they are just getting bug reports for free.” Nelson said the HackerOne representative told him “the vulnerability was out of scope to qualify for Valve’s bug bounty program,” Ars Technica writes. Further, when Nelson said that he was not seeking any monetary gain and only wanted the public to be aware of the vulnerability, the HackerOne representative asked Nelson to “please familiarize yourself with our disclosure guidelines and ensure that you’re not putting the company or yourself at risk. https://www.hackerone.com/disclosure-guidelines.” https://twitter.com/enigma0x3/status/1160961861560479744 Nelson also reported the vulnerability directly to Valve. Valve first acknowledged the report and “noted that I shouldn’t expect any further communication.” He never heard anything more from the company. In an email to Ars Technica, Nelson writes, “I can certainly believe that the scoping was misinterpreted by HackerOne staff during the triage efforts. It is mind-blowing to me that the people at HackerOne who are responsible for triaging vulnerability reports for a company as large as Valve didn’t see the importance of Local Privilege Escalation and simply wrote the entire report off due to misreading the scope.” A HackerOne spokeswoman told Ars Technica, “We aim to explicitly communicate our policies and values in all cases and here we could have done better. Vulnerability disclosure is an inherently murky process and we are, and have always been, committed to protecting the interests of hackers. 
Our disclosure guidelines emphasize mutual respect and empathy, encouraging all to act in good faith and for the benefit of the common good.” Katie Moussouris, founder and CEO of Luta Security, also said, “Silencing the researcher on one issue is in complete violation of the ISO standard practices, and banning them from reporting further issues is simply irresponsible to affected users who would otherwise have benefited from these researchers continuing to engage and report issues privately to get them fixed. The norms of vulnerability disclosure are being warped by platforms that put profits before people.” Valve agrees that turning down Kravets’ request was “a mistake” Valve, in a statement on August 22, said that its HackerOne bug bounty program should not have turned away Kravets when he reported the second vulnerability, calling the decision a mistake. In an email statement to ZDNet, a Valve representative said that “the company has shipped fixes for the Steam client, updated its bug bounty program rules, and is reviewing the researcher's ban on its public bug bounty program.” The company also writes, “Our HackerOne program rules were intended only to exclude reports of Steam being instructed to launch previously installed malware on a user’s machine as that local user. Instead, misinterpretation of the rules also led to the exclusion of a more serious attack that also performed local privilege escalation through Steam. In regards to the specific researchers, we are reviewing the details of each situation to determine the appropriate actions. We aren’t going to discuss the details of each situation or the status of their accounts at this time.” To know more about this news in detail, read Kravets’ blog post. You could also check out Threatpost’s detailed coverage. 


Public Key Infrastructure (PKI) and other Concepts in Cryptography for CISSP Exam

Packt
28 Oct 2009
10 min read
Public key infrastructure Public Key Infrastructure (PKI) is a framework that enables the integration of various services related to cryptography. The aim of PKI is to provide confidentiality, integrity, access control, authentication, and most importantly, non-repudiation. Non-repudiation ensures that the sender or receiver of a message cannot later deny having sent or received it. One of the important audit checks for non-repudiation is a time stamp. The time stamp is an audit trail that records the time the message was sent by the sender and the time it was received by the receiver. Encryption and decryption, digital signature, and key exchange are the three primary functions of a PKI. RSA and elliptic curve algorithms provide all three primary functions: encryption and decryption, digital signatures, and key exchange. The Diffie-Hellman algorithm supports key exchange, while the Digital Signature Standard (DSS) is used for digital signatures. Public key encryption is the encryption methodology used in PKI and was initially proposed by Diffie and Hellman in 1976. The algorithm is based on mathematical functions and uses asymmetric cryptography, that is, a pair of keys. The image above represents a simple document-signing function. In PKI, every user has two keys known as a "pair of keys". One key is known as the private key and the other as the public key. The private key is never revealed and is kept with the owner, while the public key is accessible to everyone and is stored in a key repository. A key can be used to encrypt as well as to decrypt a message. Most importantly, a message that is encrypted with a private key can only be decrypted with the corresponding public key. Similarly, a message that is encrypted with a public key can only be decrypted with the corresponding private key. 
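The public/private key relationship described above can be demonstrated with textbook-sized RSA numbers. The sketch below uses the classic tiny parameters (p=61, q=53) purely for illustration; real systems use keys of 2048 bits or more, so this is in no way a secure implementation.

```python
# Toy RSA with textbook-sized primes: for illustration only, hopelessly insecure.
p, q = 61, 53
n = p * q                 # public modulus (3233)
e = 17                    # public exponent
d = 2753                  # private exponent: (e * d) % ((p - 1) * (q - 1)) == 1

message = 65              # a message encoded as an integer smaller than n

# Encrypt with the public key (n, e); only the matching private key can undo it
ciphertext = pow(message, e, n)
recovered = pow(ciphertext, d, n)

print(ciphertext)            # → 2790
print(recovered == message)  # → True
```

Swapping the roles of e and d gives the other direction the text mentions: something "encrypted" with the private key is recoverable with the public key, which is the basis of signing.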
In the example image above, Bob wants to send a confidential document to Alice electronically. Bob has four issues to address before this electronic transmission can occur: Ensuring the contents of the document are encrypted such that the document is kept confidential. Ensuring the document is not altered during transmission. Since Alice does not know Bob, he has to somehow prove that the document is indeed sent by him. Ensuring Alice receives the document and that she cannot deny receiving it in the future. PKI supports all four of these requirements with methods such as secure messaging, message digests, digital signatures, and non-repudiation services. Secure messaging To ensure that the document is protected from eavesdropping and not altered during transmission, Bob will first encrypt the document using Alice's public key. This ensures two things: one, that the document is encrypted, and two, that only Alice can open it, as doing so requires Alice's private key. To summarize, encryption is accomplished using the public key of the receiver, and the receiver decrypts with his or her private key. In this method, Bob can ensure that the document is encrypted and that only the intended receiver (Alice) can open it. However, Bob cannot ensure that the contents are unaltered (integrity) during transmission by document encryption alone. Message digest In order to ensure that the document is not altered during transmission, Bob performs a hash function on the document. The hash value is a computational value based on the contents of the document. This hash value is known as the message digest. By performing the same hash function on the received document, Alice can obtain the message digest herself and compare it with the one sent by Bob to ensure that the contents were not altered. This process ensures the integrity requirement. 
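The digest comparison Alice performs can be sketched with Python's standard hashlib module. The document contents here are hypothetical; any hash function could stand in for SHA-256.

```python
import hashlib

document = b"Quarterly results attached."       # hypothetical document contents

# Bob computes the message digest and sends it along with the document
digest_sent = hashlib.sha256(document).hexdigest()

# Alice recomputes the digest over what she received and compares the two
received = b"Quarterly results attached."
print(hashlib.sha256(received).hexdigest() == digest_sent)   # → True

# Any alteration in transit, however small, changes the digest
tampered = b"Quarterly results attached!"
print(hashlib.sha256(tampered).hexdigest() == digest_sent)   # → False
```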
Digital signature In order to prove that the document is sent by Bob to Alice, Bob needs to use a digital signature. Using a digital signature means applying the sender's private key to the message, or document, or to the message digest. This process is known as signing. Only by using the sender's public key can the message be decrypted. Bob will encrypt the message digest with his private key to create a digital signature. In the scenario illustrated in the image above, Bob will encrypt the document using Alice's public key and sign it using his digital signature. This ensures that Alice can verify that the document was sent by Bob, by verifying the digital signature (created with Bob's private key) using Bob's public key. Remember, a private key and its corresponding public key are mathematically linked. Alice can also verify that the document was not altered by validating the message digest, and can open the encrypted document using her private key. Message authentication is an authenticity verification procedure that facilitates the verification of the integrity of the message as well as the authenticity of the source from which the message is received. Digital certificate By digitally signing the document, Bob has assured that the document was sent by him to Alice. However, he has not yet proved that he is Bob. To prove this, Bob needs to use a digital certificate. A digital certificate is an electronic identity issued to a person, system, or organization by a competent authority after verifying the credentials of the entity. A digital certificate binds a unique public key to each entity. A certification authority issues digital certificates. In PKI, digital certificates are used for authenticity verification of an entity. An entity can be an individual, system, or organization. An organization that is involved in issuing, distributing, and revoking digital certificates is known as a Certification Authority (CA). 
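The signing step (encrypting the digest with the sender's private key, verifying with the public key) can be sketched with textbook RSA parameters (n=3233, e=17, d=2753). These numbers are insecure toy values used only to make the private/public role reversal concrete; the document contents are hypothetical.

```python
import hashlib

# Toy RSA key pair with textbook parameters (NOT secure, illustration only)
n, e, d = 3233, 17, 2753

document = b"Contract, version 1"                # hypothetical document

# Bob hashes the document and "encrypts" the digest with his PRIVATE key: signing
digest = int.from_bytes(hashlib.sha256(document).digest(), "big") % n
signature = pow(digest, d, n)

# Alice "decrypts" the signature with Bob's PUBLIC key and compares digests
verified_digest = pow(signature, e, n)
print(verified_digest == digest)   # → True: only the private key could produce this
```

Note the role reversal relative to secure messaging: confidentiality uses the receiver's public key, while signing uses the sender's private key.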
A CA acts as a notary by verifying an entity's identity. One of the important PKI standards pertaining to digital certificates is X.509, a standard published by the International Telecommunication Union (ITU) that specifies the standard format for digital certificates. PKI also provides key exchange functionality that facilitates the secure exchange of public keys such that the authenticity of the parties can be verified. Key management procedures Key management consists of four essential procedures concerning public and private keys. They are as follows: Secure generation of keys—Ensures that private and public keys are generated in a secure manner. Secure storage of keys—Ensures that keys are stored securely. Secure distribution of keys—Ensures that keys are not lost or modified during distribution. Secure destruction of keys—Ensures that keys are destroyed completely once the useful life of the key is over. Types of keys NIST Special Publication 800-57, titled Recommendation for Key Management - Part 1: General, specifies the following nineteen types of keys: Private signature key—It is the private key of an asymmetric (public) key pair and is used to generate digital signatures. It is also used to provide authentication, integrity, and non-repudiation. Public signature verification key—It is the public key of the asymmetric (public) key pair. It is used to verify the digital signature. Symmetric authentication key—It is used with symmetric key algorithms to provide assurance of the integrity and source of messages. Private authentication key—It is the private key of the asymmetric (public) key pair. It is used to provide assurance of the integrity of information. Public authentication key—Public key of an asymmetric (public) pair that is used to determine the integrity of information and to authenticate the identity of entities. Symmetric data encryption key—It is used to apply confidentiality protection to information. 
Symmetric key wrapping key—It is a key-encrypting key that is used to encrypt other symmetric keys. Symmetric and asymmetric random number generation keys—They are used to generate random numbers. Symmetric master key—It is a master key that is used to derive other symmetric keys. Private key transport key—They are the private keys of asymmetric (public) key pairs, which are used to decrypt keys that have been encrypted with the associated public key. Public key transport key—They are the public keys of asymmetric (public) key pairs that are used to encrypt keys that can then be decrypted with the associated private key. Symmetric agreement key—It is used to establish keys such as key wrapping keys and data encryption keys using a symmetric key agreement algorithm. Private static key agreement key—It is a private key of asymmetric (public) key pairs that is used to establish keys such as key wrapping keys and data encryption keys. Public static key agreement key—It is a public key of asymmetric (public) key pairs that is used to establish keys such as key wrapping keys and data encryption keys. Private ephemeral key agreement key—It is a private key of asymmetric (public) key pairs used only once to establish one or more keys such as key wrapping keys and data encryption keys. Public ephemeral key agreement key—It is a public key of asymmetric (public) key pairs that is used in a single key establishment transaction to establish one or more keys. Symmetric authorization key—This key is used to provide privileges to an entity using a symmetric cryptographic method. Private authorization key—It is a private key of an asymmetric (public) key pair that is used to provide privileges to an entity. Public authorization key—It is a public key of an asymmetric (public) key pair that is used to verify privileges for an entity that knows the associated private authorization key.
Key management best practices Key usage refers to using a key for a cryptographic process; usage should be limited to a single key for only one cryptographic process, to ensure that the strength of the security provided by the key is not weakened. When a specific key is authorized for use by legitimate entities for a period of time, or the effect of a specific key on a given system lasts for a specific period, that time span is known as a cryptoperiod. The purpose of defining a cryptoperiod is to limit the opportunity for successful cryptanalysis by a malicious entity. Cryptanalysis is the science of analyzing and deciphering codes and ciphers. The following assurance requirements are part of the key management process: Integrity protection—Assuring the source and format of the keying material by verification Domain parameter validity—Assuring parameters used by some public key algorithms during the generation of key pairs and digital signatures, and the generation of shared secrets that are subsequently used to derive keying material Public key validity—Assuring that the public key is arithmetically correct Private key possession—Assuring that possession of the private key is established before the public key is used Cryptographic algorithm and key size selection are the two important key management parameters that provide adequate protection to the system and the data throughout their expected lifetime. Key states A cryptographic key goes through different states from its generation to destruction. These states are defined as key states. The movement of a cryptographic key from one state to another is known as a key transition. 
NIST SP 800-57 defines the following six key states: Pre-activation state—The key has been generated, but not yet authorized for use Active state—The key may be used to cryptographically protect information Deactivated state—The cryptoperiod of the key has expired, but the key is still needed to perform cryptographic operations Destroyed state—The key is destroyed Compromised state—The key has been released to or determined by an unauthorized entity Destroyed compromised state—The key is destroyed after a compromise, or the compromise is discovered after the key is destroyed
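The six states and the transitions between them can be modeled as a small state machine. The transition table below is a simplified reading of SP 800-57, not the normative diagram; it exists only to show how key transitions can be enforced in code.

```python
from enum import Enum, auto

class KeyState(Enum):
    PRE_ACTIVATION = auto()
    ACTIVE = auto()
    DEACTIVATED = auto()
    DESTROYED = auto()
    COMPROMISED = auto()
    DESTROYED_COMPROMISED = auto()

# Simplified transition table (assumption: a plausible subset of SP 800-57)
ALLOWED = {
    KeyState.PRE_ACTIVATION: {KeyState.ACTIVE, KeyState.DESTROYED, KeyState.COMPROMISED},
    KeyState.ACTIVE: {KeyState.DEACTIVATED, KeyState.COMPROMISED, KeyState.DESTROYED},
    KeyState.DEACTIVATED: {KeyState.DESTROYED, KeyState.COMPROMISED},
    KeyState.COMPROMISED: {KeyState.DESTROYED_COMPROMISED},
    KeyState.DESTROYED: {KeyState.DESTROYED_COMPROMISED},
    KeyState.DESTROYED_COMPROMISED: set(),          # terminal state
}

def transition(current: KeyState, new: KeyState) -> KeyState:
    """Perform a key transition, rejecting moves the table does not allow."""
    if new not in ALLOWED[current]:
        raise ValueError(f"illegal key transition: {current.name} -> {new.name}")
    return new

state = KeyState.PRE_ACTIVATION
state = transition(state, KeyState.ACTIVE)
state = transition(state, KeyState.DEACTIVATED)
print(state.name)   # → DEACTIVATED
```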
Attackers wiped many GitHub, GitLab, and Bitbucket repos with ‘compromised’ valid credentials leaving behind a ransom note

Savia Lobo
07 May 2019
5 min read
Last week, Git repositories were hit by suspicious activity in which attackers targeted GitHub, GitLab, and Bitbucket users, wiping code and commits from multiple repositories. The surprising fact is that the attackers used valid credentials, i.e. a password or personal access token, to break into these repositories. Not only did they wipe entire repositories, but they also left a ransom note demanding 0.1 Bitcoin (BTC). On May 3, GitLab’s Director of Security, Kathy Wang, said, “We identified the source based on a support ticket filed by Stefan Gabos yesterday, and immediately began investigating the issue. We have identified affected user accounts and all of those users have been notified. As a result of our investigation, we have strong evidence that the compromised accounts have account passwords being stored in plaintext on deployment of a related repository.” According to GitLab’s official post, “All total, 131 users and 163 repositories were, at a minimum, accessed by the attacker. Affected accounts were temporarily disabled, and the owners were notified.” The incident began on May 2, 2019 at around 10 pm GMT, when GitLab received the first report of a repository being wiped and replaced with a single commit named ‘WARNING’, containing one file with the ransom note asking the targets to transfer 0.1 BTC (approx. $568) to the attacker’s Bitcoin address if they wanted their data back. If they failed to transfer the amount, the targets were threatened that their code would be made public. Here’s the ransom note that was left behind: “To recover your lost data and avoid leaking it: Send us 0.1 Bitcoin (BTC) to our Bitcoin address 1ES14c7qLb5CYhLMUekctxLgc1FV2Ti9DA and contact us by Email at admin@gitsbackup.com with your Git login and a Proof of Payment. If you are unsure if we have your data, contact us and we will send you a proof. Your code is downloaded and backed up on our servers. 
If we dont receive your payment in the next 10 Days, we will make your code public or use them otherwise.” “The targets who had their repos compromised use multiple Git-repository management platforms, with the only other connection between the reports besides Git being that the victims were using the cross-platform SourceTree free Git client”, The Bleeping Computer reports. GitLab, however, commented that they have notified the affected GitLab users and are working to resolve the issue soon. According to BitcoinAbuse.com, a website that tracks Bitcoin addresses used for suspicious activity, there have been 27 abuse reports with the first report filed on May 2. “When searching for it on GitHub we found 392 impacted repositories which got all their commits and code wiped using the 'gitbackup' account which joined the platform seven years ago, on January 25, 2012. Despite that, none of the victims have paid the ransom the hackers have asked for, seeing that the Bitcoin address received only 0.00052525 BTC on May 3 via a single transaction, which is the equivalent of roughly $2.99”, Bleeping Computer mentions. A GitHub spokesperson told the Bleeping Computers, “GitHub has been thoroughly investigating these reports, together with the security teams of other affected companies, and has found no evidence GitHub.com or its authentication systems have been compromised. At this time, it appears that account credentials of some of our users have been compromised as a result of unknown third-party exposures.” Team GitLab has further recommended all GitLab users to enable two-factor authentication and use SSH keys to strengthen their GitLab account. 
One of the StackExchange users said, “I also have 2FA enabled, and never got a text message indicating they had a successful brute login.” One StackExchange user received a response from Atlassian, the company behind Bitbucket and the cross-platform free Git client SourceTree: "Within the past few hours, we detected and blocked an attempt — from a suspicious IP address — to log in with your Atlassian account. We believe that someone used a list of login details stolen from third-party services in an attempt to access multiple accounts." Bitbucket users impacted by this breach received an email stating, “We are in the process of restoring your repository and expect it to be restored within the next 24 hours. We believe that this was part of a broader attack against several git hosting services, where repository contents were deleted and replaced with a note demanding the payment of ransom. We have not detected any other compromise of Bitbucket. We have proactively reset passwords for those compromised accounts to prevent further malicious activity. We will also work with law enforcement in any investigation that they pursue. We encourage you and your team members to reset all other passwords associated with your Bitbucket account. In addition, we recommend enabling 2FA on your Bitbucket account.” In his thread on the StackExchange Security forum, Stefan Gabos mentions that the hacker does not actually delete data but merely alters Git commit headers, so in some cases code commits can be recovered. “All evidence suggests that the hacker has scanned the entire internet for Git config files, extracted credentials, and then used these logins to access and ransom accounts at Git hosting services”, ZDNet reports. https://twitter.com/bad_packets/status/1124429828680085504 To know more about this news and further updates visit GitLab’s official website. 
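The attack vector ZDNet describes relies on credentials embedded directly in exposed .git/config files. A minimal sketch of the check a defender could run over such a file (the config contents below are hypothetical):

```python
import re

# Hypothetical contents of an exposed .git/config -- the kind of file the
# attackers reportedly scanned the internet for
config = """
[remote "origin"]
    url = https://alice:hunter2@gitlab.com/alice/project.git
[remote "backup"]
    url = git@github.com:alice/project.git
"""

# Credentials embedded in a remote URL take the form scheme://user:password@host
EMBEDDED_CREDS = re.compile(r"://[^/\s:@]+:[^/\s@]+@")

leaks = [line.strip() for line in config.splitlines() if EMBEDDED_CREDS.search(line)]
for line in leaks:
    print("plaintext credentials found:", line)
```

Only the first remote is flagged: the SSH-style URL carries no password, which is one reason SSH keys (plus 2FA) are the recommended alternative.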

Migration to Spring Security 3

Packt
21 May 2010
5 min read
During the course of this article we will: Review important enhancements in Spring Security 3 Understand configuration changes required in your existing Spring Security 2 applications when moving them to Spring Security 3 Illustrate the overall movement of important classes and packages in Spring Security 3 Once you have completed the review of this article, you will be in a good position to migrate an existing application from Spring Security 2 to Spring Security 3. Migrating from Spring Security 2 You may be planning to migrate an existing application to Spring Security 3, or trying to add functionality to a Spring Security 2 application and looking for guidance. We'll try to address both of your concerns in this article. First, we'll run through the important differences between Spring Security 2 and 3—both in terms of features and configuration. Second, we'll provide some guidance in mapping configuration or class name changes. These will better enable you to translate the examples from Spring Security 3 back to Spring Security 2 (where applicable). A very important migration note is that Spring Security 3 mandates a migration to Spring Framework 3 and Java 5 (1.5) or greater. Be aware that in many cases, migrating these other components may have a greater impact on your application than the upgrade of Spring Security! Enhancements in Spring Security 3 Significant enhancements in Spring Security 3 over Spring Security 2 include the following: The addition of Spring Expression Language (SpEL) support for access declarations, both in URL patterns and method access specifications. Additional fine-grained configuration around authentication and access successes and failures. Enhanced capabilities of method access declaration, including annotation-based pre- and post-invocation access checks and filtering, as well as highly configurable security namespace XML declarations for custom backing bean behavior. 
Fine-grained management of session access and concurrency control using the security namespace. Noteworthy revisions to the ACL module, including the removal of the legacy ACL code in o.s.s.acl and fixes for some important issues in the ACL framework. Support for OpenID Attribute Exchange, and other general improvements to the robustness of OpenID. New Kerberos and SAML single sign-on support through the Spring Security Extensions project. Other more innocuous changes encompassed a general restructuring and cleanup of the codebase and the configuration of the framework, such that the overall structure and usage make much more sense. The authors of Spring Security have made efforts to add extensibility where none previously existed, especially in the areas of login and URL redirection. If you are already working in a Spring Security 2 environment, you may not find compelling reasons to upgrade if you aren't pushing the boundaries of the framework. However, if you have found limitations in the available extension points, code structure, or configurability of Spring Security 2, you'll welcome many of the minor changes that we discuss in detail in the remainder of this article. Changes to configuration in Spring Security 3 Many of the changes in Spring Security 3 will be visible in the security namespace style of configuration. Although this article cannot cover all of the minor changes in detail, we'll try to cover those changes that will be most likely to affect you as you move to Spring Security 3. Rearranged AuthenticationManager configuration The most obvious changes in Spring Security 3 deal with the configuration of the AuthenticationManager and any related AuthenticationProvider elements. In Spring Security 2, the AuthenticationManager and AuthenticationProvider configuration elements were completely disconnected—declaring an AuthenticationProvider didn't require any notion of an AuthenticationManager at all. 
For example, in Spring Security 2 an AuthenticationProvider could be declared on its own:

<authentication-provider>
    <jdbc-user-service data-source-ref="dataSource"/>
</authentication-provider>

It was also valid in Spring Security 2 to declare the <authentication-manager> element as a sibling of any AuthenticationProvider:

<authentication-manager alias="authManager"/>
<authentication-provider>
    <jdbc-user-service data-source-ref="dataSource"/>
</authentication-provider>
<ldap-authentication-provider server-ref="ldap://localhost:10389/"/>

In Spring Security 3, all AuthenticationProvider elements must be children of the <authentication-manager> element, so this would be rewritten as follows:

<authentication-manager alias="authManager">
    <authentication-provider>
        <jdbc-user-service data-source-ref="dataSource"/>
    </authentication-provider>
    <ldap-authentication-provider server-ref="ldap://localhost:10389/"/>
</authentication-manager>

Of course, this means that the <authentication-manager> element is now required in any security namespace configuration. If you had defined a custom AuthenticationProvider in Spring Security 2, you would have decorated it with the <custom-authentication-provider> element as part of its bean definition:

<bean id="signedRequestAuthenticationProvider"
      class="com.packtpub.springsecurity.security.SignedUsernamePasswordAuthenticationProvider">
    <security:custom-authentication-provider/>
    <property name="userDetailsService" ref="userDetailsService"/>
    <!-- ... -->
</bean>

While moving this custom AuthenticationProvider to Spring Security 3, we would remove the decorator element and instead configure the AuthenticationProvider using the ref attribute of the <authentication-provider> element as follows:

<authentication-manager alias="authenticationManager">
    <authentication-provider ref="signedRequestAuthenticationProvider"/>
</authentication-manager>

Of course, the source code of our custom provider would change due to class relocations and renaming in Spring Security 3: look later in the article for basic guidelines, and in the code download for this article to see a detailed mapping. New configuration syntax for session management options In addition to continuing support for the session fixation and concurrency control features from prior versions of the framework, Spring Security 3 adds new configuration capabilities for customizing URLs and classes involved in session and concurrency control management. If your older application was configuring session fixation protection or concurrent session control, the configuration settings have a new home in the <session-management> directive of the <http> element. In Spring Security 2, these options would be configured as follows:

<http ... session-fixation-protection="none">
    <!-- ... -->
    <concurrent-session-control exception-if-maximum-exceeded="true" max-sessions="1"/>
</http>

The analogous configuration in Spring Security 3 removes the session-fixation-protection attribute from the <http> element and consolidates as follows:

<http ...>
    <session-management session-fixation-protection="none">
        <concurrency-control error-if-maximum-exceeded="true" max-sessions="1"/>
    </session-management>
</http>

You can see that the new logical organization of these options is much more sensible and leaves room for future expansion.

DCLeaks and Guccifer 2.0: How hackers used social engineering to manipulate the 2016 U.S. elections

Savia Lobo
16 Jul 2018
5 min read
It’s been more than a year since the Republican party’s Donald Trump won the U.S. Presidential election against Democrat Hillary Clinton. However, Robert Mueller recently indicted 12 Russian military officers who meddled with the 2016 U.S. Presidential election by hacking into the Democratic National Committee, the Democratic Congressional Campaign Committee, and the Clinton campaign. Mueller’s indictment states that the Russians did it using the Guccifer 2.0 and DCLeaks personas, following which Twitter decided to suspend both accounts on its platform. According to the San Diego Union-Tribune, Twitter found the handles of both Guccifer 2.0 and DCLeaks dormant for more than a year and a half. It also verified that the accounts had disseminated emails stolen from both Clinton’s camp and the Democratic Party’s organization. Subsequently, it suspended both accounts. Guccifer 2.0 is an online persona created by the conspirators, who falsely claimed to be a Romanian hacker to hide evidence of Russian involvement in the 2016 U.S. presidential election. The persona is also associated with the leaks of documents from the Democratic National Committee (DNC) through WikiLeaks. DCLeaks published the stolen data on the website dcleaks.com. The DCLeaks site is known to be a front for the Russian cyberespionage group Fancy Bear. Mueller, in his indictment, stated that both Guccifer 2.0 and DCLeaks have ties to GRU (Russian military intelligence, the Main Intelligence Directorate) hackers. How hackers social engineered their way into the Clinton campaign The attackers are said to have used the technique known as spear phishing, together with malware called X-Agent: a tool that can collect documents, keystrokes, and other information from computers and smartphones through encrypted channels back to servers owned by the hackers. 
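The lookalike-address trick at the heart of this spear-phishing campaign (a sender address one letter off from a trusted one) is exactly what edit-distance checks target. A minimal sketch, using entirely hypothetical addresses rather than any from the indictment:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic dynamic-programming recurrence."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def is_lookalike(sender: str, trusted: str, max_dist: int = 1) -> bool:
    """Flag senders that are suspiciously close to, but not equal to, a trusted address."""
    return sender != trusted and edit_distance(sender, trusted) <= max_dist

# Hypothetical addresses: the fake deviates from the real one by a single character
trusted = "jane.doe@campaign.example"
print(is_lookalike("jane.d0e@campaign.example", trusted))   # → True
print(is_lookalike("press@newsroom.example", trusted))      # → False
```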
As per Mueller's indictment report, the conspirators created an email account in the name of a known member of the Clinton campaign. This email address deviated from the original by a single letter and looked almost identical to it. Spear-phishing emails were sent from this fake account to the work accounts of more than thirty different Clinton campaign employees. The embedded link within these spear-phishing emails directed the recipient to a document titled 'hillary-clinton-favorable-rating.xlsx'; in reality, the recipients were being directed to a GRU-created website. X-Agent uses an encrypted "tunneling protocol" tool known as X-Tunnel, which connected to known GRU-associated servers. Using the X-Agent malware, the GRU agents had targeted more than 300 individuals within the Clinton campaign, Democratic National Committee, and Democratic Congressional Campaign Committee by March 2016. At the same time, hackers stole nearly 50,000 emails from the Clinton campaign, and by June 2016 they had gained control of 33 DNC computers and infected them with the malware. The indictment further explains that although the attackers erased activity logs to hide their tracks, a Linux-based version of X-Agent programmed with the GRU-registered domain "linuxkrnl.net" was discovered on these networks.
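The one-letter-deviation trick described above can be caught mechanically. Here is a minimal sketch that flags sender addresses within a small edit distance of a trusted address; the addresses and the threshold are illustrative, not taken from the indictment:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                   # deletion
                           cur[j - 1] + 1,                # insertion
                           prev[j - 1] + (ca != cb)))     # substitution
        prev = cur
    return prev[-1]

def is_lookalike(sender: str, trusted: str, max_distance: int = 2) -> bool:
    """Flag addresses suspiciously close to, but not equal to, a trusted one."""
    d = edit_distance(sender.lower(), trusted.lower())
    return 0 < d <= max_distance

# Hypothetical example: a transposed letter in the domain
print(is_lookalike("j.smith@clinton-campaing.org",
                   "j.smith@clinton-campaign.org"))  # True
```

A real mail filter would also check registered domains and homoglyphs, but even this simple distance check catches the single-character substitutions and transpositions used in lookalike accounts.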
However, Stone said that his conversations with the person behind the account were "innocuous." Stone further added, "This exchange is entirely public and provides no evidence of collaboration or collusion with Guccifer 2.0 or anyone else in the alleged hacking of the DNC emails." Stone also stated that he never discussed this communication with Trump or his presidential campaign. Rod Rosenstein, U.S. Deputy Attorney General, indicated that this Russian attack was part of the Kremlin's multifaceted approach to boosting Trump's 2016 campaign while undermining Clinton's. Twitter stated that both accounts were suspended for being connected to a network of accounts previously suspended for operating in violation of its rules. However, Hillary Clinton's supporters expressed their anger, stating that Twitter's response was too slow and too limited. With the midterm elections only months away, one question on everyone's mind is: how are the intelligence community and the Department of Justice going to ensure the elections are fair and secure? There is a high chance of such attacks recurring. When asked a similar question at a cybersecurity conference, Homeland Security Secretary Kirstjen Nielsen responded, "Today I can say with confidence that we know whom to contact in every state to share threat information. That ability did not exist in 2016." Following is Rod Rosenstein's interview with PBS NewsHour.   Social engineering attacks – things to watch out for while online! YouTube has a $25 million plan to counter fake news and misinformation Twitter allegedly deleted 70 million fake accounts in an attempt to curb fake news    

Ways to improve performance of your server in ModSecurity 2.5

Packt
30 Nov 2009
13 min read
A typical HTTP request To get a better picture of the possible delay incurred when using a web application firewall, it helps to understand the anatomy of a typical HTTP request, and what processing time a typical web page download will incur. This will help us compare any added ModSecurity processing time to the overall time for the entire request. When a user visits a web page, his browser first connects to the server and downloads the main resource requested by the user (for example, an .html file). It then parses the downloaded file to discover any additional files, such as images or scripts, that it must download to be able to render the page. Therefore, from the point of view of a web browser, the following sequence of events happens for each file: Connect to web server. Request required file. Wait for server to start serving file. Download file. Each of these steps adds latency, or delay, to the request. A typical download time for a web page is on the order of hundreds of milliseconds per file for a home cable/DSL user. This can be slower or faster, depending on the speed of the connection and the geographical distance between the client and server. If ModSecurity adds any delay to the page request, it will be to the server processing time, or in other words the time from when the client has connected to the server to when the last byte of content has been sent out to the client. Another aspect that needs to be kept in mind is that ModSecurity will increase the memory usage of Apache. In what is probably the most common Apache configuration, known as "prefork", Apache starts one new child process for each active connection to the server. 
This means that the number of Apache instances increases and decreases depending on the number of client connections to the server. As the total memory usage of Apache depends on the number of child processes running and the memory usage of each child process, we should look at the way ModSecurity affects the memory usage of Apache. A real-world performance test In this section we will run a performance test on a real web server running Apache 2.2.8 on a Fedora Linux server (kernel 2.6.25). The server has an Intel Xeon 2.33 GHz dual-core processor and 2 GB of RAM. We will start out benchmarking the server when it is running just Apache without ModSecurity enabled. We will then run our tests with ModSecurity enabled but without any rules loaded. Finally, we will test ModSecurity with a ruleset loaded so that we can draw conclusions about how performance is affected. The rules we will be using come supplied with ModSecurity and are called the "core ruleset". The core ruleset The ModSecurity core ruleset contains over 120 rules and is shipped with the default ModSecurity source distribution (it's contained in the rules sub-directory). This ruleset is designed to provide "out of the box" protection against some of the most common web attacks used today. Here are some of the things that the core ruleset protects against: suspicious HTTP requests (for example, missing User-Agent or Accept headers), SQL injection, cross-site scripting (XSS), remote code injection, and file disclosure. We will examine these methods of attack, but for now, let's use the core ruleset and examine how enabling it impacts the performance of your web server. Installing the core ruleset To install the core ruleset, create a new sub-directory named modsec under your Apache conf directory (the location will vary depending on your distribution). 
Then copy all the .conf files from the rules sub-directory of the source distribution to the new modsec directory:

mkdir /etc/httpd/conf/modsec
cp /home/download/modsecurity-apache/rules/modsecurity_crs_*.conf /etc/httpd/conf/modsec

Finally, enter the following lines in your httpd.conf file and restart Apache to make it read the new rule files:

# Enable ModSecurity core ruleset
Include conf/modsec/*.conf

Putting the core rules in a separate directory makes it easy to disable them—all you have to do is comment out the above Include line in httpd.conf, restart Apache, and the rules will be disabled. Making sure it works The core ruleset contains a file named modsecurity_crs_10_config.conf. This file contains some of the basic configuration directives needed to turn on the rule engine and configure request and response body access. Since we have already configured these directives, we do not want this file to conflict with our existing configuration, and so we need to disable it. To do this, we simply rename the file so that it has a different extension, as Apache only loads *.conf files with the Include directive we used above:

$ mv modsecurity_crs_10_config.conf modsecurity_crs_10_config.conf.disabled

Once we have restarted Apache, we can test that the core ruleset is loaded by attempting to access a URL that it should block. For example, try surfing to http://yourserver/ftp.exe and you should get the error message Method Not Implemented, confirming that the core rules are loaded. Performance testing basics So what effect does loading the core ruleset have on web application response time, and how do we measure it? We could measure the response time for a single request with and without the core ruleset loaded, but this wouldn't have any statistical significance—it could happen that just as one of the requests was being processed, the server started to execute a processor-intensive scheduled task, causing a delayed response time. 
The best way to compare the response times is to issue a large number of requests and look at the average time it takes for the server to respond. An excellent tool—and the one we are going to use to benchmark the server in the following tests—is httperf. Written by David Mosberger of Hewlett-Packard Research Labs, httperf allows you to simulate high workloads against a web server and obtain statistical data on the performance of the server. You can obtain the program at http://www.hpl.hp.com/research/linux/httperf/ where you'll also find a useful manual page in PDF format and a link to the research paper published together with the first version of the tool. Using httperf We'll run httperf with the options --hog (use as many TCP ports as needed), --uri /index.html (request the static web page index.html), and --num-conn 1000 (initiate a total of 1000 connections). We will be varying the number of requests per second (specified using --rate) to see how the server responds under different workloads. 
This is what the typical output from httperf looks like when run with the above options:

$ ./httperf --hog --server=bytelayer.com --uri /index.html --num-conn 1000 --rate 50
Total: connections 1000 requests 1000 replies 1000 test-duration 20.386 s
Connection rate: 49.1 conn/s (20.4 ms/conn, <=30 concurrent connections)
Connection time [ms]: min 404.1 avg 408.2 max 591.3 median 404.5 stddev 16.9
Connection time [ms]: connect 102.3
Connection length [replies/conn]: 1.000
Request rate: 49.1 req/s (20.4 ms/req)
Request size [B]: 95.0
Reply rate [replies/s]: min 46.0 avg 49.0 max 50.0 stddev 2.0 (4 samples)
Reply time [ms]: response 103.1 transfer 202.9
Reply size [B]: header 244.0 content 19531.0 footer 0.0 (total 19775.0)
Reply status: 1xx=0 2xx=1000 3xx=0 4xx=0 5xx=0
CPU time [s]: user 2.37 system 17.14 (user 11.6% system 84.1% total 95.7%)
Net I/O: 951.9 KB/s (7.8*10^6 bps)
Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

The output shows us the number of TCP connections httperf initiated per second ("Connection rate"), the rate at which it requested files from the server ("Request rate"), and the actual reply rate that the server was able to provide ("Reply rate"). We also get statistics on the reply time—the "reply time – response" is the time taken from when the first byte of the request was sent to the server to when the first byte of the reply was received—in this case around 103 milliseconds. The transfer time is the time to receive the entire response from the server. The page we will be requesting in this case, index.html, is 20 KB in size, which is a pretty average size for an HTML document. httperf requests the page one time per connection and doesn't follow any links in the page to download additional embedded content or script files, so the number of such links in the page is of no relevance to our test. 
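The figures in an httperf report are internally consistent, and recomputing a few of them is a good way to check that you are reading the output correctly. A small sketch, using the numbers from the report above:

```python
# Figures taken from the httperf report above
request_rate = 49.1   # req/s
request_size = 95.0   # bytes sent per request
reply_total = 19775.0 # bytes received per reply (header + content)

# Net I/O should be roughly (bytes out + bytes in) * request rate
net_io_kb_s = (request_size + reply_total) * request_rate / 1024
print(round(net_io_kb_s, 1))  # close to the reported 951.9 KB/s

# Average connection time ~= connect + response + transfer
conn_ms = 102.3 + 103.1 + 202.9
print(conn_ms)  # 408.3, close to the reported avg of 408.2 ms
```

If these cross-checks disagree wildly in your own runs, it usually means some requests failed or were retried, which the Errors lines will confirm.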
Getting a baseline: Testing without ModSecurity When running benchmarking tests like this one, it's always important to get a baseline result so that you know the performance of your server when the component you're measuring is not involved. In our case, we will run the tests against the server when ModSecurity is disabled. This will allow us to tell what impact, if any, running with ModSecurity enabled has on the server. Response time The following chart shows the response time, in milliseconds, of the server when it is running without ModSecurity. The number of requests per second is on the horizontal axis: As we can see, the server consistently delivers response times of around 300 milliseconds until we reach about 75 requests per second. Above this, the response time starts increasing, and at around 500 requests per second the response time is almost a second per request. This data is what we will use for comparison purposes when looking at the response time of the server after we enable ModSecurity. Memory usage Finding the memory usage on a Linux system can be quite tricky. Simply running the Linux top utility and looking at the amount of free memory doesn't quite cut it, and the reason is that Linux tries to use almost all free memory as a disk cache. So even on a system with several gigabytes of memory and no memory-hungry processes, you might see a free memory count of only 50 MB or so. Another problem is that Apache uses many child processes, and to accurately measure the memory usage of Apache we need to sum the memory usage of each child process. What we need is a way to measure the memory usage of all the Apache child processes so that we can see how much memory the web server truly uses. To solve this, here is a small shell script I have written that runs the ps command to find all the Apache processes. 
It then passes the PID of each Apache process to pmap to find the memory usage, and finally uses awk to extract the memory usage (in KB) for summation. The result is that the total memory usage of Apache is printed to the terminal. The actual shell command is only one long pipeline, but I've put it into a file called apache_mem.sh to make it easier to use:

#!/bin/sh
# apache_mem.sh
# Calculate the Apache memory usage
ps -ef | grep httpd | grep ^apache | awk '{ print $2 }' | \
  xargs pmap -x | grep 'total kB' | awk '{ print $3 }' | \
  awk '{ sum += $1 } END { print sum }'

Now, let's use this script to look at the memory usage of all of the Apache processes while we are running our performance test. The following graph shows the memory usage of Apache as the number of requests per second increases: Apache starts out consuming about 300 MB of memory. Memory usage grows steadily, and at about 150 requests per second it starts climbing more rapidly. At 500 requests per second, the memory usage is over 2.4 GB—more than the amount of physical RAM of the server. This is possible because of the virtual memory architecture that Linux (and all modern operating systems) use. When there is no more physical RAM available, the kernel starts swapping memory pages out to disk, which allows it to continue operating. However, since reading from and writing to a hard drive is much slower than to memory, this starts slowing down the server significantly, as evidenced by the increase in response time seen in the previous graph. CPU usage In both of the tests above, the server's CPU usage was consistently around 1 to 2%, no matter what the request rate was. You might have expected a graph of CPU usage in the previous and subsequent tests, but while I measured the CPU usage in each test, it turned out to run at this low utilization rate throughout, so a graph would not be very useful. Suffice it to say that in these tests, CPU usage was not a factor. 
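The same summation the shell pipeline performs can be done in a few lines of Python, which is handy if you want to log the figure over time. The sample `pmap -x` tail lines below are made up for illustration:

```python
def total_apache_kb(pmap_outputs: list) -> int:
    """Sum the 'total kB' figures from several pmap -x outputs,
    mirroring the shell pipeline above (third column of the total line)."""
    total = 0
    for out in pmap_outputs:
        for line in out.splitlines():
            if line.startswith("total kB"):
                total += int(line.split()[2])
    return total

# Made-up pmap -x tails for three Apache child processes
samples = [
    "...\ntotal kB  10240  8192  2048",
    "...\ntotal kB  10344  8201  2100",
    "...\ntotal kB  10180  8190  2010",
]
print(total_apache_kb(samples))  # 30764
```

In practice you would collect the real outputs with `subprocess` calls to ps and pmap; the parsing logic stays the same.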
ModSecurity without any loaded rules Now, let's enable ModSecurity—but without loading any rules—and see what happens to the response time and memory usage. Both SecRequestBodyAccess and SecResponseBodyAccess were set to On, so if there is any performance penalty associated with buffering requests and responses, we should see it now that we are running ModSecurity without any rules. The following graph shows the response time of Apache with ModSecurity enabled: We can see that the response time graph looks very similar to the one we got when ModSecurity was disabled. The response time starts increasing at around 75 requests per second, and once we pass 350 requests per second, things really start going downhill. The memory usage graph is also almost identical to the previous one: Apache uses around 1.3 MB extra per child process when ModSecurity is loaded, which equals a total increase in memory usage of 26 MB for this particular setup. Compared to the total amount of memory Apache uses when the server is idle (around 300 MB), this equals an increase of about 10%. ModSecurity with the core ruleset loaded Now for the really interesting test: we'll run httperf against ModSecurity with the core ruleset loaded and look at what that does to the response time and memory usage. Response time The following graph shows the server response time with the core ruleset loaded: At first, the response time is around 340 ms, which is about 35 ms slower than in the previous tests. Once the request rate gets above 50, the server response time starts deteriorating. As the request rate grows, the response time gets worse and worse, reaching a full 5 seconds at 100 requests per second. I have capped the graph at 100 requests per second, as the server performance has already deteriorated enough at this point to allow us to see the trend. 
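The per-child overhead quoted above is easy to turn into a rule of thumb for sizing. A quick sketch of the arithmetic, with the child count inferred from the figures in the text:

```python
base_total_mb = 300.0     # idle Apache footprint from the baseline test
extra_per_child_mb = 1.3  # added per child by loading ModSecurity
total_extra_mb = 26.0     # total increase reported

# Number of child processes implied by the two figures
children = round(total_extra_mb / extra_per_child_mb)
print(children)  # 20 child processes in this setup

# Relative overhead against the idle footprint
overhead_pct = total_extra_mb / base_total_mb * 100
print(round(overhead_pct, 1))  # 8.7, i.e. "about 10%" as stated
```

The same arithmetic, scaled to your own MaxClients setting, gives a quick estimate of how much extra RAM enabling ModSecurity will cost on a prefork server.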
We see that the point at which the response time starts increasing has gone down from 75 to 50 requests per second now that we have enabled the core ruleset. This equals a 33% reduction in the maximum number of requests per second the server can handle.

Communication and Network Security

Packt
21 Jun 2016
7 min read
In this article by M. L. Srinivasan, the author of the book CISSP in 21 Days, Second Edition, the communication and network security domain deals with the security of voice and data communications through local area, wide area, and remote access networking. Candidates are expected to have knowledge in the areas of secure communications; securing networks; threats, vulnerabilities, attacks, and countermeasures to communication networks; and protocols that are used in remote access. (For more resources related to this topic, see here.) Observe the following diagram. This represents the seven layers of the OSI model. This article covers protocols and security in the fourth layer, the Transport layer: Transport layer protocols and security The Transport layer does two things. One is to pack the data given out by applications into a format that is suitable for transport over the network, and the other is to unpack the data received from the network into a format suitable for applications. In this layer, some of the important protocols are Transmission Control Protocol (TCP), User Datagram Protocol (UDP), Stream Control Transmission Protocol (SCTP), Datagram Congestion Control Protocol (DCCP), and Fibre Channel Protocol (FCP). The process of packaging the data received from the applications is called encapsulation, and the output of such a process is called a datagram. Similarly, the process of unpacking the datagram received from the network is called decapsulation. Moving from the seventh layer down, when the fourth layer's header is placed on the data, it becomes a datagram. When the datagram is encapsulated with the third layer's header, it becomes a packet; the encapsulated packet becomes a frame; and the frame is put on the wire as bits. The following section describes some of the important protocols in this layer along with security concerns and countermeasures. 
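The encapsulation chain described above (data, then datagram, then packet, then frame) can be illustrated with plain byte strings. The header contents here are made-up placeholders, not real TCP/IP/Ethernet fields:

```python
# Toy illustration of encapsulation: each layer prepends its own header.
app_data = b"GET /index.html"                 # Layer 7: application data

tcp_header = b"<TCP:src=49152,dst=80>"        # Layer 4 header (placeholder)
segment = tcp_header + app_data               # -> datagram/segment

ip_header = b"<IP:10.0.0.2->93.184.216.34>"   # Layer 3 header (placeholder)
packet = ip_header + segment                  # -> packet

eth_header = b"<ETH:aa:bb:cc->dd:ee:ff>"      # Layer 2 header (placeholder)
frame = eth_header + packet                   # -> frame, then bits on the wire

# Decapsulation is the reverse: each layer strips its own header.
assert frame[len(eth_header):] == packet
assert packet[len(ip_header):] == segment
assert segment[len(tcp_header):] == app_data
```

Each receiving layer only inspects and removes its own header, which is why the layers can evolve independently.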
Transmission Control Protocol (TCP) TCP is a core Internet protocol that provides reliable delivery mechanisms over the Internet. TCP is a connection-oriented protocol: a protocol that guarantees the delivery of datagrams (packets) to the destination application by way of a suitable mechanism (for example, the three-way handshake of SYN, SYN-ACK, and ACK in TCP). The reliability of datagram delivery with such a protocol is high due to the acknowledgment by the receiver. This protocol has two functions. The primary function of TCP is the transmission of datagrams between applications; the secondary one is the controls necessary for ensuring reliable transmissions. Applications where delivery needs to be assured, such as e-mail, the World Wide Web (WWW), and file transfer, use TCP for transmission. Threats, vulnerabilities, attacks, and countermeasures One of the common threats to TCP is service disruption. A common vulnerability is half-open connections exhausting server resources. Denial-of-Service attacks such as TCP SYN attacks, as well as connection hijacking such as IP spoofing attacks, are possible. A half-open connection is a vulnerability in the TCP implementation. TCP uses a three-way handshake to establish or terminate connections. Refer to the following diagram: In a three-way handshake, the client (workstation) first sends a request to the server (for example, www.SomeWebsite.com). This is called a SYN request. The server acknowledges the request by sending a SYN-ACK, and in the process it creates a buffer for the connection. The client completes the handshake with a final acknowledgment, the ACK. TCP requires this setup since the protocol needs to ensure the reliability of packet delivery. If the client does not send the final ACK, the connection is called half open. Since the server has created a buffer for that connection, a certain amount of memory or server resource is consumed. 
If thousands of such half-open connections are created maliciously, the server resources may be completely consumed, resulting in Denial-of-Service to legitimate requests. TCP SYN attacks technically establish thousands of half-open connections to consume the server resources. There are two actions an attacker might take. One is that the attacker or malicious software sends thousands of SYNs to the server and withholds the ACKs. This is called SYN flooding. Depending on the network bandwidth and the server resources, over a span of time all the resources will be consumed, resulting in Denial-of-Service. If the source IP is blocked by some means, the attacker or the malicious software will try to spoof the source IP addresses to continue the attack. This is called SYN spoofing. SYN attacks such as SYN flooding and SYN spoofing can be controlled using SYN cookies with cryptographic hash functions. In this method, the server does not create the connection at the SYN-ACK stage. Instead, the server creates a cookie from the computed hash of the source IP address, source port, destination IP, destination port, and some random values based on the algorithm, and sends it in the SYN-ACK. When the server receives the ACK, it checks the details and only then creates the connection. A cookie is a piece of information, usually in the form of a text file, sent by the server to a client. Cookies are generally stored on the client computer by the browser, and they are used for purposes such as authentication, session tracking, and management. User Datagram Protocol (UDP) UDP is a connectionless protocol similar to TCP. However, UDP does not guarantee the delivery of data packets. A protocol that does not guarantee the delivery of datagrams (packets) to the destination is called a connectionless protocol. In other words, the final acknowledgment is not mandatory in UDP. UDP uses one-way communication, and the delivery speed of its datagrams is high. 
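Returning to the SYN-cookie countermeasure described above: the idea is that the server derives a value from the connection 4-tuple plus a secret, sends it as the initial sequence number, and allocates state only when a valid ACK comes back. A minimal sketch follows; the function names are illustrative, and a real implementation (such as the Linux kernel's) also encodes a timestamp and the client's MSS in the cookie:

```python
import hashlib
import hmac

SECRET = b"server-side secret"  # illustrative; rotated periodically in practice

def make_cookie(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> int:
    """Derive a 32-bit cookie from the connection 4-tuple.
    Sent as the initial sequence number in the SYN-ACK; no buffer is allocated."""
    msg = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big")

def verify_cookie(cookie: int, src_ip: str, src_port: int,
                  dst_ip: str, dst_port: int) -> bool:
    """On the final ACK, recompute and compare; only then create the connection."""
    return cookie == make_cookie(src_ip, src_port, dst_ip, dst_port)

c = make_cookie("203.0.113.7", 51000, "192.0.2.10", 80)
print(verify_cookie(c, "203.0.113.7", 51000, "192.0.2.10", 80))   # genuine ACK
print(verify_cookie(c, "198.51.100.9", 51000, "192.0.2.10", 80))  # spoofed source
```

Because the server keeps no per-SYN state, a flood of SYNs with withheld ACKs consumes no connection buffers, which is exactly what defeats SYN flooding and SYN spoofing.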
UDP is predominantly used where the loss of intermittent packets is acceptable, such as video or audio streaming. Threats, vulnerabilities, attacks, and countermeasures Service disruptions are common threats, and validation weaknesses facilitate such threats. UDP flood attacks cause service disruptions, and controlling UDP packet size acts as a countermeasure to such attacks. Internet Control Message Protocol (ICMP) ICMP is used to discover service availability in network devices, servers, and so on. ICMP expects response messages from devices or systems to confirm service availability. Threats, vulnerabilities, attacks, and countermeasures Service disruptions are common threats, and validation weaknesses facilitate them. ICMP flood attacks, such as the ping of death, cause service disruptions; controlling ICMP packet size acts as a countermeasure to such attacks. Pinging is the process of sending an Internet Control Message Protocol (ICMP) ECHO_REQUEST message to servers or hosts to check whether they are up and running. In this process, the server or host on the network responds to a ping request, and such a response is called an echo. A ping of death refers to sending large numbers of ICMP packets to the server to crash the system. Other protocols in the transport layer Stream Control Transmission Protocol (SCTP): This is a connection-oriented protocol similar to TCP, but it provides facilities such as multi-streaming and multi-homing for better performance and redundancy. It is used in UNIX-like operating systems. Datagram Congestion Control Protocol (DCCP): As the name implies, this is a Transport layer protocol that is used for congestion control. Applications here include Internet telephony and video/audio streaming over the network. Fibre Channel Protocol (FCP): This protocol is used in high-speed networking. One of its prominent applications is the Storage Area Network (SAN). 
A Storage Area Network (SAN) is a network architecture used to attach remote storage devices, such as tape drives and disk arrays, to a local server. This facilitates using storage devices as if they were local devices. Summary This article covered protocols and security in the transport layer, which is the fourth layer of the OSI model. Resources for Article: Further resources on this subject: The GNS3 orchestra [article] CISSP: Vulnerability and Penetration Testing for Access Control [article] CISSP: Security Measures for Access Control [article]


Mozilla announces $3.5 million award for ‘Responsible Computer Science Challenge’ to encourage teaching ethical coding to CS graduates

Melisha Dsouza
11 Oct 2018
3 min read
Mozilla, along with Omidyar Network, Schmidt Futures, and Craig Newmark Philanthropies, has launched an initiative for professors, graduate students, and teaching assistants at U.S. colleges and universities to integrate ethics into undergraduate computer science education and demonstrate its relevance. This competition, titled the 'Responsible Computer Science Challenge', has been launched to foster the idea of 'ethical coding'. Code written by computer scientists is widely used in fields ranging from data collection to analysis, and poorly designed code can have a negative impact on a user's privacy and security. The challenge seeks creative approaches to integrating ethics and societal considerations into undergraduate computer science education. Ideas pitched by contestants will be judged by an independent panel of experts from academia, for-profit and non-profit organizations, and tech companies. The best proposals will be awarded a share of up to $3.5 million over the next two years. "We are looking to encourage ways of teaching ethics that make sense in a computer science program, that make sense today, and that make sense in understanding questions of data." -Mitchell Baker, founder and chairwoman of the Mozilla Foundation What is this challenge all about? Professors are encouraged to tweak class material, for example by integrating a reading assignment on ethics with each project, or having computer science lessons co-taught with teaching assistants from the ethics department. The coursework introduced should encourage students to use their logical skills and come up with ideas to incorporate humanistic principles. The challenge consists of two stages: a Concept Development and Pilot Stage and a Spread and Scale Stage. The first stage will award winning proposals up to $150,000 to try out their ideas firsthand, for instance at the university where the educator teaches. 
The second stage will select the best of the pilots and grant them $200,000 to help them scale to other universities. Baker asserts that the competition and its prize money will yield substantial, relevant, and practical ideas. Ideas will be judged on the potential of their approach, feasibility of success, difference from existing solutions, impact on society, new perspectives brought to ethics, and scalability of the solution. Mozilla's competition is a welcome venture at a time when many top universities, like Harvard and MIT, are taking initiatives to integrate ethics within their computer science departments. To know all about the competition, head over to Mozilla's official blog. You can also check out the entire coverage of this story at Fast Company. Mozilla drops “meritocracy” from its revised governance statement and leadership structure to actively promote diversity and inclusion Mozilla optimizes calls between JavaScript and WebAssembly in Firefox, making it almost as fast as JS to JS calls Mozilla’s new Firefox DNS security updates spark privacy hue and cry


Equifax data breach could have been “entirely preventable”, says House oversight and government reform committee staff report

Savia Lobo
11 Dec 2018
5 min read
Update: On July 22, 2019, Equifax announced a global settlement including up to $425 million to help people affected by the data breach.  Two days back, the House Oversight and Government Reform Committee released a staff report concluding that Equifax's data breach, announced on September 7, 2017 and affecting 143 million U.S. consumers, could have been "entirely preventable". On September 14, 2017, the Committee opened an investigation into the Equifax data breach. After the 14-month-long investigation, the staff report highlights the circumstances of the cyber attack, which compromised the authenticating details, such as dates of birth and Social Security numbers, of more than half of American consumers. In August 2017, three weeks before Equifax publicly announced the breach, Richard Smith, the former CEO of Equifax, boasted that the company was managing "almost 1,200 times" the amount of data held in the Library of Congress every day. However, Equifax failed to implement an adequate security program to protect this sensitive data. As a result, Equifax suffered one of the largest data breaches in U.S. history. The loopholes that led to a massive data breach Equifax had serious gaps between IT policy development and execution According to the Committee, Equifax failed to implement clear lines of authority within its internal IT management structure. This led to an execution gap between IT policy development and operation, which restricted the company's ability to implement security initiatives in a comprehensive and timely manner. On March 7, 2017, a critical vulnerability in the Apache Struts software was publicly disclosed. Equifax used Apache Struts to run certain applications on legacy operating systems. The following day, the Department of Homeland Security alerted Equifax to this critical vulnerability. 
Equifax's Global Threat and Vulnerability Management (GTVM) team emailed this alert to over 400 people on March 9, instructing anyone who had Apache Struts running on their system to apply the necessary patch within 48 hours. The GTVM team also held a meeting on March 16 about this vulnerability. Equifax, however, did not fully patch its systems. Equifax's Automated Consumer Interview System (ACIS), a custom-built internet-facing consumer dispute portal developed in the 1970s, was running a version of Apache Struts containing the vulnerability. Equifax did not patch the Apache Struts software located within ACIS, leaving its systems and data exposed.

Equifax had complex and outdated IT systems

Equifax's aggressive growth strategy led to the acquisition of multiple companies, information technology (IT) systems, and data. The acquisition strategy may have been successful for the company's bottom line and stock price, but this growth also brought increasing complexity to Equifax's IT systems and expanded data security risk. Both the complexity and the antiquated nature of Equifax's custom-built legacy systems made IT security especially challenging.

The company failed to implement responsible security measures

Per the Committee, Equifax knew of the potential security risks posed by expired SSL certificates. An internal vulnerability assessment tracker entry dated January 20, 2017, stated "SSLV devices are missing certificates, limiting visibility to web-based attacks on [intrusion prevention system]". Despite this, the company had allowed over 300 security certificates to expire, including 79 certificates for monitoring business-critical domains. Had Equifax implemented a certificate management process with defined roles and responsibilities, the SSL certificate on the device monitoring the ACIS platform would have been active when the intrusion began on May 13, 2017.
The company would have been able to see the suspicious traffic to and from the ACIS platform much earlier, potentially mitigating or preventing the data breach.

On August 30, 2018, the GAO (U.S. Government Accountability Office) published a report detailing Equifax's information security remediation activities to date. According to GAO, "a misconfigured monitoring device allowed encrypted web traffic to go uninspected through the Equifax network. To prevent this from happening again, GAO reported Equifax developed new policies and implemented new tools to ensure network traffic is monitored continuously."

In its 2018 Annual Proxy Statement to investors, Equifax reported on how its Board of Directors was enhancing Board oversight in an effort to strengthen Equifax's cybersecurity posture. Equifax's new CEO, Mark Begor, told news outlets, "We didn't have the right defenses in place, but we are investing in the business to protect this from ever happening again." To know more about this news in detail, read the complete Equifax Data Breach report.

Affected users can now file a claim

On July 24, 2019, Equifax announced a settlement of up to $425 million to help people affected by its data breach. This global settlement was done with the Federal Trade Commission, the Consumer Financial Protection Bureau, and 50 U.S. states and territories. Users whose personal information was exposed in the Equifax data breach can now file a claim on the Equifax breach settlement website. Those who are unsure if their data was exposed can find out using the Eligibility tool. To know about the benefits a user would receive on this claim, read FTC's official blog post.

A new data breach on Facebook due to malicious browser extensions allowed almost 81,000 users' private data up for sale, reports BBC News

Uber fined by British ICO and Dutch DPA for nearly $1.2m over a data breach from 2016

Marriott's Starwood guest database faces a massive data breach affecting 500 million user data
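The expired-certificate lapse described in the report is the kind of failure a routine scripted check can catch. Below is a minimal sketch (a hypothetical helper, not anything Equifax or GAO describe) that computes the days remaining on a certificate, using the `notAfter` string format that Python's `ssl` module returns:

```python
import ssl
import socket
from datetime import datetime, timezone

def days_remaining(not_after: str, now: datetime) -> int:
    """Days until expiry, given a notAfter string in the format returned
    by ssl.SSLSocket.getpeercert(), e.g. 'May 13 12:00:00 2017 GMT'."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    return (expires.replace(tzinfo=timezone.utc) - now).days

def check_live_cert(host: str, port: int = 443) -> int:
    """Fetch a host's certificate over TLS and return days remaining."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return days_remaining(cert["notAfter"], datetime.now(timezone.utc))
```

Running a check like this across monitored domains and alerting when the result drops below a threshold is the essence of the certificate management process the Committee found missing.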
Savia Lobo
03 Oct 2018
4 min read

“Intel ME has a Manufacturing Mode vulnerability, and even giant manufacturers like Apple are not immune,” say researchers

Yesterday, a group of European information security researchers announced that they have discovered a vulnerability in Intel's Management Engine (Intel ME). They say that the root of this problem is an undocumented Intel ME mode known as Manufacturing Mode. Undocumented commands enable overwriting the SPI flash memory, implementing a doomsday scenario: the vulnerability allows local exploitation of a known ME vulnerability (INTEL-SA-00086).

What is Manufacturing Mode?

Intel ME Manufacturing Mode is intended for the configuration and testing of the end platform during manufacturing. However, this mode and its potential risks are not described anywhere in Intel's public documentation. Ordinary users do not have the ability to disable this mode, since the relevant utility (part of Intel ME System Tools) is not officially available. As a result, there is no software that can protect, or even notify, the user if this mode is enabled.

This mode allows configuring critical platform settings stored in one-time-programmable memory (FUSEs). These settings include those for BootGuard (the mode, policy, and hash for the digital signing key for the ACM and UEFI modules). Some of them are referred to as FPFs (Field Programmable Fuses).

An output of the -FPFs option in FPT

In addition to FPFs, in Manufacturing Mode the hardware manufacturer can specify settings for Intel ME, which are stored in the Intel ME internal file system (MFS) on SPI flash memory. These parameters can be changed by reprogramming the SPI flash. The parameters are known as CVARs (Configurable NVARs, Named Variables). CVARs, just like FPFs, can be set and read via FPT.

Manufacturing Mode vulnerability in Intel chips within Apple laptops

The researchers analyzed several platforms from a number of manufacturers, including Lenovo and Apple MacBook Pro laptops. The Lenovo models did not have any issues related to Manufacturing Mode.
However, they found that the Intel chipsets within the Apple laptops were running in Manufacturing Mode and included the vulnerability CVE-2018-4251. This information was reported to Apple, and the vulnerability was patched in June in the macOS High Sierra update 10.13.5.

By exploiting CVE-2018-4251, an attacker could write old versions of Intel ME (such as versions containing vulnerability INTEL-SA-00086) to memory without needing an SPI programmer and without physical access to the computer. Thus, a local vector is possible for exploitation of INTEL-SA-00086, which enables running arbitrary code in ME.

The researchers also note that in the INTEL-SA-00086 security bulletin, Intel does not mention enabled Manufacturing Mode as a method for local exploitation in the absence of physical access. Instead, the company incorrectly claims that local exploitation is possible only if access settings for SPI regions have been misconfigured.

How can users save themselves from this vulnerability?

To keep users safe, the researchers decided to describe how to check the status of Manufacturing Mode and how to disable it. Intel System Tools includes MEInfo, which provides thorough diagnostic information about the current state of ME and the platform overall. They demonstrated this utility in their previous research about the undocumented HAP (High Assurance Platform) mode and showed how to disable ME. When called with the -FWSTS flag, the utility displays a detailed description of the HECI status registers and the current status of Manufacturing Mode (when the fourth bit of the FWSTS status register is set, Manufacturing Mode is active).

Example of MEInfo output

They also created a program for checking the status of Manufacturing Mode in case the user for whatever reason does not have access to Intel ME System Tools.
Here is what the script shows on affected systems:

mmdetect script

To disable Manufacturing Mode, FPT has a special option (-CLOSEMNF) that allows setting the recommended access rights for SPI flash regions in the descriptor. Here is what happens when -CLOSEMNF is entered:

Process of closing Manufacturing Mode with FPT

Thus, the researchers demonstrated that Intel ME has a Manufacturing Mode problem. Even major commercial manufacturers such as Apple are not immune to configuration mistakes on Intel platforms. Also, there is no public information on the topic, leaving end users in the dark about weaknesses that could result in data theft, persistent irremovable rootkits, and even 'bricking' of hardware.

To know about this vulnerability in detail, visit Positive Research's blog.

Meet 'Foreshadow': The L1 Terminal Fault in Intel's chips

SpectreRSB targets CPU return stack buffer, found on Intel, AMD, and ARM chipsets

Intel faces backlash on Microcode Patches after it prohibited Benchmarking or Comparison
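The FWSTS check described above reduces to testing a single bit of a register value. A minimal sketch of that test (assuming, per the article, that Manufacturing Mode is reported in bit 4 of the FWSTS register; the register values below are illustrative, not real readings):

```python
MANUFACTURING_MODE_BIT = 4  # per the researchers: the fourth bit of FWSTS

def manufacturing_mode_active(fwsts: int) -> bool:
    """Return True if the Manufacturing Mode flag is set in a FWSTS
    register value (as read from the HECI device's config space)."""
    return bool(fwsts & (1 << MANUFACTURING_MODE_BIT))
```

Reading the actual register requires platform-specific access to the HECI PCI device, which is what the researchers' mmdetect script handles.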
Packt
30 Oct 2014
18 min read

Untangle VPN Services

This article by Abd El-Monem A. El-Bawab, the author of Untangle Network Security, covers the Untangle solution, OpenVPN. OpenVPN is an SSL/TLS-based VPN, which is mainly used for remote access, as it is easy to configure and uses clients that can work on multiple operating systems and devices. OpenVPN can also provide site-to-site connections (only between two Untangle servers) with limited features.

(For more resources related to this topic, see here.)

OpenVPN

Untangle's OpenVPN is an SSL-based VPN solution based on the well-known open source application, OpenVPN. Untangle's OpenVPN is mainly used for client-to-site connections, with a client that is easy to deploy and configure and is widely available for Windows, Mac, Linux, and smartphones. Untangle's OpenVPN can also be used for site-to-site connections, but the two sites need to have Untangle servers. Site-to-site connections between Untangle and third-party devices are not supported.

How OpenVPN works

In reference to the OSI model, an SSL/TLS-based VPN will only encrypt the application layer's data, while the lower layers' information will be transferred unencrypted. In other words, the application packets will be encrypted. The IP addresses of the server and client are visible, as is the port number that the server uses for communication between the client and server, but the actual application port number is not visible. Furthermore, the destination IP address will not be visible; only the VPN server IP address is seen.

Secure Sockets Layer (SSL) and Transport Layer Security (TLS) refer to the same technology. SSL is the predecessor of TLS; it was originally developed by Netscape, and several releases were produced (V.1 to V.3) until it was standardized under the TLS name.

The steps to create an SSL-based VPN are as follows:

The client will send a message to the VPN server that it wants to initiate an SSL session.
Also, it will send a list of all the ciphers (hash and encryption protocols) that it supports.

The server will respond with a set of selected ciphers and will send its digital certificate to the client. The server's digital certificate includes the server's public key.

The client will try to verify the server's digital certificate by checking it against trusted certificate authorities and by checking the certificate's validity (valid from and valid through dates).

The server may need to authenticate the client before allowing it to connect to the internal network. This could be achieved either by asking for a valid username and password or by using the user's digital identity certificate. Untangle NGFW uses the digital certificates method.

The client will create a session key (which will be used to encrypt the data transferred between the two devices) and will send this key to the server, encrypted using the server's public key. Thus, no third party can obtain the session key, as the server is the only party that has the private key and hence the only device that can decrypt it.

The server will acknowledge to the client that it received the session key and is ready for the encrypted data transfer.

Configuring Untangle's OpenVPN server settings

After installing the OpenVPN application, the application will be turned off. You'll need to turn it on before you can use it. You can configure Untangle's OpenVPN server settings under OpenVPN settings | Server. The settings configure how OpenVPN acts as a server for remote clients (which can be clients on Windows, Linux, or any other operating system, or another Untangle server). The different available settings are as follows:

Site Name: This is the name of the OpenVPN site that is used to identify the server among other OpenVPN servers inside your organization. This name should be unique across all Untangle servers in the organization. A random name is automatically chosen for the site name.
Site URL: This is the URL that the remote client will use to reach this OpenVPN server. This can be configured under Config | Administration | Public Address. If you have more than one WAN interface, the remote client will first try to initiate the connection using the settings defined in the public address. If this fails, it will randomly try the IPs of the remaining WAN interfaces.

Server Enabled: If checked, the OpenVPN server will run and accept connections from the remote clients.

Address Space: This defines the IP subnet that will be used to assign IPs to the remote VPN clients. The value in Address Space must be unique and separate from all existing networks and other OpenVPN address spaces. A default address space will be chosen that does not conflict with the existing configuration.

Configuring Untangle's OpenVPN remote client settings

Untangle's OpenVPN allows you to create OpenVPN clients to give your office employees, when they are out of the company, the ability to remotely access your internal network resources via their PCs and/or smartphones. Also, an OpenVPN client can be imported to another Untangle server to provide a site-to-site connection. Each OpenVPN client will have its unique IP (from the address space range defined previously). Thus, each OpenVPN client can only be used by one user. For multiple users, you'll have to create multiple clients, as using the same client for multiple users will result in client disconnection issues.

Creating a remote client

You can create remote access clients by clicking on the Add button located under OpenVPN Settings | Server | Remote Clients. A new window will open, which has the following settings:

Enabled: If this checkbox is checked, it will allow the client connection to the OpenVPN server. If unchecked, it will not allow the client connection.

Client Name: Give a unique name for the client; this will help you identify the client. Only alphanumeric characters are allowed.
Group: Specify the group the client will be a member of. Groups are used to apply similar settings to their members.

Type: Select Individual Client for remote access and Network for site-to-site VPN.

The following screenshot shows a remote access client created for JDoe:

After configuring the client settings, you'll need to press the Done button and then the OK or Apply button to save this client configuration. The new client will be available under the Remote Clients tab, as shown in the following screenshot:

Understanding remote client groups

Groups are used to group clients together and apply similar settings to the group members. By default, there will be a Default Group. Each group has the following settings:

Group Name: Give a suitable name for the group that describes the group settings (for example, full tunneling clients) or the target clients (for example, remote access clients).

Full Tunnel: If checked, all the traffic from the remote clients will be sent to the OpenVPN server, which allows Untangle to filter traffic directed to the Internet. If unchecked, the remote client will run in the split tunnel mode, which means that the traffic directed to local resources behind Untangle is sent through the VPN, and the traffic directed to the Internet is sent via the machine's default gateway. You can't use Full Tunnel for site-to-site connections.

Push DNS: If checked, the remote OpenVPN client will use the DNS settings defined by the OpenVPN server. This is useful to resolve local names and services.

Push DNS server: If the OpenVPN server is selected, remote clients will use the OpenVPN server for DNS queries. If set to Custom, the DNS servers configured here will be used for DNS queries.

Push DNS Custom 1: If Push DNS server is set to Custom, the value configured here will be used as the primary DNS server for the remote client. If blank, no setting will be pushed to the remote client.
Push DNS Custom 2: If Push DNS server is set to Custom, the value configured here will be used as the secondary DNS server for the remote client. If blank, no setting will be pushed to the remote client.

Push DNS Domain: The configured value will be pushed to the remote clients to extend their domain search path during DNS resolution.

The following screenshot illustrates all these settings:

Defining the exported networks

Exported networks are used to define the internal networks behind the OpenVPN server that the remote client can reach after a successful connection. Additional routes will be added to the remote client's routing table stating that the exported networks (the main site's internal subnets) are reachable through the OpenVPN server. By default, each static non-WAN interface network will be listed in the Exported Networks list. You can modify the default settings or create new entries. The Exported Networks settings are as follows:

Enabled: If checked, the defined network will be exported to the remote clients.

Export Name: Enter a suitable name for the exported network.

Network: This defines the exported network. The exported network should be written in CIDR form.

These settings are illustrated in the following screenshot:

Using OpenVPN remote access clients

So far, we have been configuring the client settings but haven't created the actual package to be used on remote systems. We can get the remote client package by pressing the Download Client button located under OpenVPN Settings | Server | Remote Clients, which will start the process of building the OpenVPN client to be distributed.

There are three available options to download the OpenVPN client. The first option is to download the client as a .exe file to be used with the Windows operating system. The second option is to download the client configuration files, which can be used with the Apple and Linux operating systems.
The third option is similar to the second one, except that the configuration file will be imported to another Untangle NGFW server, which is used for site-to-site scenarios. The following screenshot illustrates this:

The configuration files include the following files:

<Site_name>.ovpn
<Site_name>.conf
keys/<Site_name>-<User_name>.crt
keys/<Site_name>-<User_name>.key
keys/<Site_name>-<User_name>-ca.crt

The certificate files are for client authentication, and the .ovpn and .conf files hold the defined connection settings (that is, the OpenVPN server IP, the port used, and the ciphers used). The following screenshot shows the .ovpn file for the site Untangle-1849:

As shown in the following screenshot, the created file (openvpn-JDoe-setup.exe) includes the client name, which helps you identify the different clients and simplifies the process of distributing each file to the right user:

Using an OpenVPN client with Windows OS

Using an OpenVPN client with the Windows operating system is very simple. To do this, perform the following steps:

Set up the OpenVPN client on the remote machine. The setup is very easy; it's just a next, next, install, and finish setup. It is important to set up and run the application as an administrator in order to allow the client to write the VPN routes to the Windows routing table. You should run the client as an administrator every time you use it so that the client can create the required routes.

Double-click on the OpenVPN icon on the Windows desktop. The application will run in the system tray.

Right-click on the system tray icon of the application and select Connect.
The client will start to initiate the connection to the OpenVPN server, and a window with the connection status will appear, as shown in the following screenshot:

Once the VPN tunnel is initiated, a notification will appear from the client with the IP assigned to it, as shown in the following screenshot:

If the OpenVPN client was running in the task bar and there was an established connection, the client will automatically reconnect to the OpenVPN server if the tunnel was dropped due to Windows being asleep.

By default, the OpenVPN client will not start at Windows login. We can change this, and allow it to start without requiring administrative privileges, by going to Control Panel | Administrative Tools | Services and changing the OpenVPN service's Startup Type to Automatic. Then, in the start parameters field, put --connect <Site_name>.ovpn; you can find the <Site_name>.ovpn file under C:\Program Files\OpenVPN\config.

Using OpenVPN with non-Windows clients

The method to configure OpenVPN clients to work with Untangle is the same for all non-Windows clients. Simply download the .zip file provided by Untangle, which includes the configuration and certificate files, and place them into the application's configuration folder. The steps are as follows:

Download and install any of the following OpenVPN-compatible clients for your operating system:

For Mac OS X, Untangle, Inc. suggests using Tunnelblick, which is available at http://code.google.com/p/tunnelblick

For Linux, OpenVPN clients for different Linux distros can be found at https://openvpn.net/index.php/access-server/download-openvpn-as-sw.html

OpenVPN Connect for iOS is available at https://itunes.apple.com/us/app/openvpn-connect/id590379981?mt=8

OpenVPN for Android 4.0+ is available at https://play.google.com/store/apps/details?id=net.openvpn.openvpn

Log in to the Untangle NGFW server, download the .zip client configuration file, and extract the files from the .zip file.
Place the configuration files into any of the following OpenVPN-compatible applications:

Tunnelblick: Manually copy the files into the Configurations folder located at ~/Library/Application Support/Tunnelblick.

Linux: Copy the extracted files into /etc/openvpn, and then you can connect using sudo openvpn /etc/openvpn/<Site_name>.conf.

iOS: Open iTunes and select the files from the config ZIP file to add to the app on your iPhone or iPad.

Android: From the OpenVPN for Android application, click on "all your precious VPNs". In the top-right corner, click on the folder icon, and then browse to the folder where you have the OpenVPN .conf file. Click on the file and hit Select. Then, in the top-right corner, hit the little floppy disc icon to save the import. Now, you should see the imported profile. Click on it to connect to the tunnel. For more information on this, visit http://forums.untangle.com/openvpn/30472-openvpn-android-4-0-a.html.

Run the OpenVPN-compatible client.

Using OpenVPN for site-to-site connection

To use OpenVPN for a site-to-site connection, one Untangle NGFW server will run in the OpenVPN server mode, and the other server will run in the client mode. We will need to create a client that will be imported into the remote server. The client settings are shown in the following screenshot:

We will need to download the client configuration that is meant to be imported on another Untangle server (the third option on the client download menu), and then import this zipped client configuration file on the remote server. To import the client, on the remote server under the Client tab, browse to the .zip file and press the Submit button. The client will be shown as follows:

You'll need to restart the two servers before being able to use the OpenVPN site-to-site connection. The site-to-site connection is bidirectional.
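The .ovpn profile described earlier holds the connection settings in plain text, so a distributed profile can easily be inspected programmatically. A minimal sketch follows; the sample directives are illustrative of a typical OpenVPN client profile, not an exact Untangle export:

```python
def parse_ovpn(text: str) -> dict:
    """Extract simple key/value directives from an OpenVPN client profile."""
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith(("#", ";")):
            continue  # skip blank lines and comments
        key, _, value = line.partition(" ")
        conf[key] = value
    return conf

# Illustrative profile content (hypothetical host and default port 1194)
sample = """\
client
proto udp
remote vpn.example.com 1194
cipher AES-128-CBC
"""

profile = parse_ovpn(sample)
host, port = profile["remote"].split()
```

A check like this is handy when verifying that the exported profile points at the public address you expect before distributing it to users.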
Reviewing the connection details

The currently connected clients (whether OS clients or another Untangle NGFW server) will appear under Connected Remote Clients located under the Status tab. The screen will show the client name, its external address, and the address assigned to it by OpenVPN, in addition to the connection start time and the amount of data transmitted and received (in MB) during this connection.

For the site-to-site connection, the client server will show the name of the remote server and whether the connection is established or not, in addition to the amount of transmitted and received data in MB.

Event logs show a detailed connection history, as shown in the following screenshot:

In addition, there are two reports available for Untangle's OpenVPN:

Bandwidth usage: This report shows the maximum and average data transfer rate (KB/s) and the total amount of data transferred that day.

Top users: This report shows the top users connected to the Untangle OpenVPN server.

Troubleshooting Untangle's OpenVPN

In this section, we will discuss some points to consider when dealing with Untangle NGFW OpenVPN.

OpenVPN acts as a router, as it will route between different networks. Using OpenVPN with Untangle NGFW in the bridge mode (the Untangle NGFW server is behind another router) requires additional configuration:

Create a static route on the router that will route any traffic from the VPN range (the VPN address pool) to the Untangle NGFW server.

Create a Port Forward rule for the OpenVPN port 1194 (UDP) on the router to Untangle NGFW.

Verify that your setting is correct by going to Config | Administration | Public Address, as it is used by Untangle to configure OpenVPN clients, and ensure that the configured address is resolvable from outside the company.
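The last check above, that the public address is resolvable, can be scripted with the standard library. A sketch (run it from a machine outside the network; the host is an example, and 1194/UDP is OpenVPN's default port):

```python
import socket

def vpn_endpoint_resolves(host: str, port: int = 1194) -> bool:
    """Return True if the VPN server's public name resolves to an address.
    This only checks DNS resolution, not UDP reachability."""
    try:
        socket.getaddrinfo(host, port, proto=socket.IPPROTO_UDP)
        return True
    except socket.gaierror:
        return False
```

Note that because OpenVPN uses UDP, a name resolving successfully does not prove the tunnel port is open; the port forward rule above still has to be in place.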
If the OpenVPN client is connected, but you can't access anything, perform the following steps:

Verify that the hosts you are trying to reach are exported in Exported Networks.

Try to ping the Untangle NGFW LAN IP address (if exported).

Try to bring up the Untangle NGFW GUI by entering the IP address in a browser.

If the preceding tasks work, your tunnel is up and operational. If you can't reach any clients inside the network, check for the following conditions:

The client machine's firewall is not preventing the connection from the OpenVPN client.

The client machine uses Untangle as a gateway or has a static route that sends the VPN address pool to Untangle NGFW.

In addition, some port forwarding rules on Untangle NGFW are needed for OpenVPN to function properly. The required ports are 53, 445, 389, 88, 135, and 1025.

If the site-to-site tunnel is set up correctly, but the two sites can't talk to each other, the reason may be as follows:

If your sites have IPs from the same subnet (this typically happens when you use a service from the same ISP for both branches), OpenVPN may fail, as it considers that no routing is needed between IPs in the same subnet; you should ask your ISP to change the IPs.

To get DNS resolution to work over the site-to-site tunnel, you'll need to go to Config | Network | Advanced | DNS Server | Local DNS Servers and add the IP of the DNS server on the far side of the tunnel. Enter the domain in the Domain List column and use the FQDN when accessing resources. You'll need to do this on both sides of the tunnel for it to work from either side.

If you are using site-to-site VPN in addition to client-to-site VPN, but the OpenVPN client is able to connect to the main site only, you'll need to add the VPN address pool to Exported Hosts and Networks.

Lab-based training

This section will provide training for the OpenVPN site-to-site and client-to-site scenarios. In this lab, we will mainly use Untangle-01, Untangle-03, and a laptop (192.168.1.7).
The ABC bank started a project with Acme schools. As a part of this project, the ABC bank team needs to periodically access files located on Acme-FS01, so the two parties decided to opt for OpenVPN. However, Acme's network team doesn't want to leave access wide open for ABC bank members, so they set firewall rules to limit ABC bank's access to the file server only. In addition, the IT team director wants to have VPN access from home to the Acme network, which they decided to accomplish using OpenVPN.

The following diagram shows the environment used in the site-to-site scenario:

To create the site-to-site connection, we will need to perform the following steps:

Enable the OpenVPN server on Untangle-01.

Create a Network type client with a remote network of 172.16.1.0/24.

Download the client and import it under the Client tab in Untangle-03.

Restart the two servers.

After the restart, you have a site-to-site VPN connection. However, the Acme network is wide open to the ABC bank, so we need to create a limiting firewall rule. On Untangle-03, create a rule that will allow any traffic that comes from the OpenVPN interface, whose source is 172.16.136.10 (the Untangle-01 client IP), and which is directed to 172.16.1.7 (Acme-FS01). The rule is shown in the following screenshot:

Also, we will need a general block rule that comes after the preceding rule in the rule evaluation order.

The environment used for the client-to-site connection is shown in the following diagram:

To create a client-to-site VPN connection, we need to perform the following steps:

Enable the OpenVPN server on Untangle-03.

Create an Individual Client type client on Untangle-03.

Distribute the client to the intended user (that is, 192.168.1.7).

Install OpenVPN on your laptop.

Connect using the installed OpenVPN client and try to ping Acme-DC01 using its name. The ping will fail because the client is not able to query the Acme DNS. So, in the Default Group settings, change Push DNS Domain to Acme.local.
Changing the group settings will not affect the OpenVPN client until the client is restarted. Once the client is restarted, the ping will succeed.

Summary

In this article, we covered the VPN services provided by Untangle NGFW and looked in depth at how each solution works. This article also provided a guide on how to configure and deploy the services. Untangle provides a free solution that is based on the well-known open source OpenVPN, which provides an SSL-based VPN.

Resources for Article:

Further resources on this subject:

Important Features of Gitolite [Article]

Target Exploitation [Article]

IPv6 on Packet Tracer [Article]
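The allow-then-block ordering in the lab matters because firewall rules are evaluated top-down and the first match wins. The evaluation logic can be modeled as follows (a sketch for reasoning about rule order, not Untangle's actual rule engine; addresses are the lab's example values):

```python
def evaluate(rules, src, dst):
    """Return the action of the first rule matching (src, dst).
    Rules are (src, dst, action) tuples; None acts as a wildcard."""
    for rule_src, rule_dst, action in rules:
        if ((rule_src is None or rule_src == src)
                and (rule_dst is None or rule_dst == dst)):
            return action
    return "pass"  # assuming traffic passes when no rule matches

rules = [
    ("172.16.136.10", "172.16.1.7", "pass"),  # ABC bank -> Acme-FS01 only
    (None, None, "block"),                    # general block rule, evaluated last
]
```

Swapping the two rules would make the wildcard block match everything first, which is why the general block rule must come after the specific allow rule in the evaluation order.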