
How-To Tutorials - Web Development

Drupal 7 fields/CCK: Using the file field modules

Packt · 08 Jul 2011 · 4 min read
Adding and configuring file fields to content types

There are many cases where we need to attach files to website content. For instance, a restaurant owner might like to upload the latest menu in PDF format, or a financial institution might upload a new product catalog so customers can download and print it if they need to. The File module is built into the Drupal 7 core. It provides the ability to attach files to content easily, to choose the attachment display format, and to manage file locations. Furthermore, the File module is integrated with Fields and provides a file field type, so we can attach files to content using the field system already discussed, making the process of managing files much more streamlined.

Time for action – adding and configuring a file field to the Recipe content type

In this section, we will add a file field to the Recipe content type, which will allow files to be attached to Recipe content. Follow these steps:

1. Click on the Structure link in the administration menu at the top of the page. The page that loads displays a list of options. Click on the Content types link to go to the Content types administration page.
2. Since we want to add a file field to the Recipe content type, click on the manage fields link in the Recipe row.
3. This page displays the existing fields of the Recipe content type. In the Label field enter "File", and in the Field name field enter "file". In the field type select list, choose File as the field type; the field widget will automatically switch to File as well. After the values are entered, click on Save.
4. A new page provides the field settings for the file field we are creating. There are two checkboxes; enable both of them. The last radio button option is selected by default. Then click on the Save field settings button at the bottom of the page.
5. Clicking on Save field settings stores the values we selected and directs us to the file field settings administration page.
6. We can leave the Label field as it is, since it is filled automatically with the value we entered previously. We will also leave the Required field at its default, because we do not want to force users to attach files to every recipe. In the Help text field, enter "Attach files to this recipe".
7. In the Allowed file extensions section, enter the file extensions that are allowed to be uploaded. In this case, we will enter "txt, pdf, zip". In the File directory section, enter the name of a subdirectory that will store the uploaded files; in this case, we will enter "recipe_files".
8. In the Maximum upload size section, enter a value to limit the size of uploaded files. We will enter "2MB".
9. The Enable Description field checkbox allows users to enter a description of the uploaded files. We will enable this option, because we would like users to describe what they upload.
10. In the Progress indicator section, select which indicator will be used while files upload. We select Throbber as the progress indicator for this field.
You will notice that the bottom part of the page is exactly the same as in the previous section. We can ignore it and click on the Save settings button to store all the values we have entered. Drupal directs us back to the manage fields administration page with a message saying the configuration for the file field has been saved successfully. The new file field row is added to the fields table, which displays the details of the field we just created.
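If you prefer to script this configuration rather than click through the UI, the same result can be reached in code. The following is a minimal sketch using the Drupal 7 Field API; the field name and bundle are the ones chosen in the walkthrough above, and you would typically run something like this from an install or update hook of your own:

<?php
// Define the field itself: a file field named field_file.
$field = array(
  'field_name' => 'field_file',
  'type' => 'file',
);
field_create_field($field);

// Attach an instance of the field to the Recipe content type,
// mirroring the settings entered in the UI above.
$instance = array(
  'field_name' => 'field_file',
  'entity_type' => 'node',
  'bundle' => 'recipe',
  'label' => 'File',
  'description' => 'Attach files to this recipe',
  'settings' => array(
    'file_extensions' => 'txt pdf zip',
    'file_directory' => 'recipe_files',
    'max_filesize' => '2MB',
    'description_field' => 1,
  ),
  'widget' => array('type' => 'file_generic'),
);
field_create_instance($instance);
?>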

EJB 3.1: Introduction to Interceptors

Packt · 06 Jul 2011 · 7 min read
Introduction

Most applications have cross-cutting functions that must be performed. These may include logging, transaction management, security, and other aspects of an application. Interceptors provide a way to achieve these cross-cutting activities: they add functionality to a business method without modifying the business method itself. Because the added functionality is not intermeshed with the business logic, the result is a cleaner and easier-to-maintain application.

Aspect Oriented Programming (AOP) is concerned with providing support for these cross-cutting functions in a transparent fashion. While interceptors do not provide as much support as dedicated AOP languages, they do offer a good level of it. Interceptors can be:

- Used to keep business logic separate from non-business-related activities
- Easily enabled and disabled
- Used to provide consistent behavior across an application

Interceptors are specific methods invoked around a method or methods of a target EJB. We will use the term target to refer to the class containing the method(s) an interceptor executes around. The interceptor's method is executed before the EJB's method. When the interceptor method executes, it is passed an InvocationContext object, which provides information relating to the state of the interceptor and the target. Within the interceptor method, the InvocationContext's proceed method can be called, which results either in the target's business method being executed or, as we will see shortly, in the next interceptor in the chain being invoked. When the business method returns, the interceptor continues execution. This permits execution of code both before and after a business method runs.

Interceptors can be used with:

- Stateless session EJBs
- Stateful session EJBs
- Singleton session EJBs
- Message-driven beans

The @Interceptors annotation defines which interceptors will be executed for all or individual methods of a class. Interceptor classes share the lifecycle of the EJB they are applied to; in the case of stateful EJBs, this means the interceptor can be passivated and activated. In addition, interceptors support dependency injection, which is performed using the EJB's naming context.

More than one interceptor can be used at a time. The sequence of interceptor execution is referred to as an interceptor chain. For example, an application may need to start a transaction based on the privileges of a user, and these actions should also be logged. An interceptor can be defined for each of these activities: validating the user, starting the transaction, and logging the event. The use of interceptor chaining is illustrated in the Using interceptors to handle application statistics recipe.

Lifecycle callbacks such as @PreDestroy and @PostConstruct can also be used within interceptors. They can access interceptor state information, as discussed in the Using lifecycle methods in interceptors recipe.

Interceptors are useful for:

- Validating parameters, and potentially changing them, before they are sent to a method
- Performing security checks
- Performing logging
- Performing profiling
- Gathering statistics

An example of parameter validation can be found in the Using the InvocationContext to verify parameters recipe. Security checks are illustrated in the Using interceptors to enforce security recipe.
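Before moving on to those recipes, here is a minimal sketch of the pattern they build on. The LoggingInterceptor and OrderBean names are hypothetical, not from the book; the annotations are standard EJB 3.1:

import javax.interceptor.AroundInvoke;
import javax.interceptor.InvocationContext;

public class LoggingInterceptor {
    // Runs around every business method it is bound to.
    @AroundInvoke
    public Object log(InvocationContext context) throws Exception {
        System.out.println("Entering: " + context.getMethod().getName());
        try {
            // Invokes the next interceptor in the chain, or the business method.
            return context.proceed();
        } finally {
            System.out.println("Leaving: " + context.getMethod().getName());
        }
    }
}

// In a separate file: binding the interceptor to a target EJB.
import javax.ejb.Stateless;
import javax.interceptor.Interceptors;

@Stateless
@Interceptors(LoggingInterceptor.class)
public class OrderBean {
    public void placeOrder() { /* business logic */ }
}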
The use of interceptor chaining to record a method's hit count and the time spent in the method is discussed in the Using interceptors to handle application statistics recipe. Interceptors can also be used in conjunction with timer services.

The recipes in this article are based largely around a conference registration application, developed in the first recipe. It will be necessary to create this application before the other recipes can be demonstrated.

Creating the Registration Application

A RegistrationApplication is developed in this recipe. It provides attendees with the ability to register for a conference: the application records their personal information using an entity and other supporting EJBs. This recipe details how to create the application.

Getting ready

The RegistrationApplication consists of the following classes:

- Attendee – An entity representing a person attending the conference
- AbstractFacade – A facade base class
- AttendeeFacade – The facade class for the Attendee class
- RegistrationManager – Used to control the registration process
- RegistrationServlet – The GUI interface for the application

The steps used to create this application include:

1. Creating the Attendee entity and its supporting classes
2. Creating a RegistrationManager EJB to control the registration process
3. Creating a RegistrationServlet to drive the application

The RegistrationManager will be the primary vehicle for the demonstration of interceptors.

How to do it...

Create a Java EE application called RegistrationApplication. Add a packt package to the EJB module and a servlet package in the application's WAR module. Next, add an Attendee entity to the packt package. This entity possesses four fields: name, title, company, and id. The id field should be auto-generated. Add getters and setters for the fields. Also add a default constructor and a three-argument constructor for the first three fields. The major components of the class are shown below without the getters and setters.

@Entity
public class Attendee implements Serializable {
    private String name;
    private String title;
    private String company;
    private static final long serialVersionUID = 1L;

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    public Attendee() {
    }

    public Attendee(String name, String title, String company) {
        this.name = name;
        this.title = title;
        this.company = company;
    }
}

Next, add an AttendeeFacade stateless session bean, derived from the AbstractFacade class (the AbstractFacade class is not shown here).

@Stateless
public class AttendeeFacade extends AbstractFacade<Attendee> {
    @PersistenceContext(unitName = "RegistrationApplication-ejbPU")
    private EntityManager em;

    protected EntityManager getEntityManager() {
        return em;
    }

    public AttendeeFacade() {
        super(Attendee.class);
    }
}

Add a RegistrationManager stateful session bean to the packt package. Add a single method, register, to the class. The method should be passed three strings for the name, title, and company of the attendee, and should return an Attendee reference. Use dependency injection to add a reference to the AttendeeFacade. In the register method, create a new Attendee and then use the AttendeeFacade's create method to persist it. Finally, return a reference to the Attendee.
@Stateful
public class RegistrationManager {
    @EJB
    AttendeeFacade attendeeFacade;
    Attendee attendee;

    public Attendee register(String name, String title, String company) {
        attendee = new Attendee(name, title, company);
        attendeeFacade.create(attendee);
        return attendee;
    }
}

In the servlet package of the WAR module, add a servlet called RegistrationServlet. Use dependency injection to add a reference to the RegistrationManager. In the try block of the processRequest method, use the register method to register an attendee and then display the attendee's name.

public class RegistrationServlet extends HttpServlet {
    @EJB
    RegistrationManager registrationManager;

    protected void processRequest(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html;charset=UTF-8");
        PrintWriter out = response.getWriter();
        try {
            out.println("<html>");
            out.println("<head>");
            out.println("<title>Servlet RegistrationServlet</title>");
            out.println("</head>");
            out.println("<body>");
            Attendee attendee = registrationManager.register("Bill Schroder", "Manager", "Acme Software");
            out.println("<h3>" + attendee.getName() + " has been registered</h3>");
            out.println("</body>");
            out.println("</html>");
        } finally {
            out.close();
        }
    }
    ...
}

Execute the servlet. The browser should display a message confirming that the attendee has been registered.

How it works...

The Attendee entity holds the registration information for each participant. The RegistrationManager session bean has only a single method at this time; in later recipes we will augment this class to add other capabilities. The RegistrationServlet is the client for the EJBs.

EJB 3.1: Working with Interceptors

Packt · 06 Jul 2011 · 3 min read
The recipes in this article are based largely around the conference registration application developed in the first recipe of the previous article, Introduction to Interceptors. It will be necessary to create that application before the other recipes in this article can be demonstrated.

Using interceptors to enforce security

While security is an important aspect of many applications, programmatic security can clutter up business logic. The use of declarative annotations has come a long way in making security easier to use and less intrusive. However, there are still times when programmatic security is necessary; when it is, interceptors can help remove the security code from the business logic.

Getting ready

The process for using an interceptor to enforce security involves:

1. Configuring and enabling security for the application server
2. Adding a @DeclareRoles annotation to the target class and the interceptor class
3. Creating a security interceptor

How to do it...

Configure the application to handle security as detailed in the Configuring the server to handle security recipe. Add @DeclareRoles("employee") to the RegistrationManager class.

Add a SecurityInterceptor class to the packt package. Inject a SessionContext object into the class; we will use this object to perform programmatic security. Also use the @DeclareRoles annotation. Next, add an interceptor method, verifyAccess, to the class. Use the SessionContext object and its isCallerInRole method to determine whether the user is in the "employee" role. If so, invoke the proceed method and display a message to that effect; otherwise, throw an EJBAccessException.

@DeclareRoles("employee")
public class SecurityInterceptor {
    @Resource
    private SessionContext sessionContext;

    @AroundInvoke
    public Object verifyAccess(InvocationContext context) throws Exception {
        System.out.println("SecurityInterceptor: Invoking method: " + context.getMethod().getName());
        if (sessionContext.isCallerInRole("employee")) {
            Object result = context.proceed();
            System.out.println("SecurityInterceptor: Returned from method: " + context.getMethod().getName());
            return result;
        } else {
            throw new EJBAccessException();
        }
    }
}

Execute the application. The user should be prompted for a username and password. Provide a user in the employee role, and the application should execute to completion. Depending on the interceptors in place, you will see console output similar to the following:

INFO: Default Interceptor: Invoking method: register
INFO: SimpleInterceptor entered: register
INFO: SecurityInterceptor: Invoking method: register
INFO: InternalMethod: Invoking method: register
INFO: register
INFO: Default Interceptor: Invoking method: create
INFO: Default Interceptor: Returned from method: create
INFO: InternalMethod: Returned from method: register
INFO: SecurityInterceptor: Returned from method: register
INFO: SimpleInterceptor exited: register
INFO: Default Interceptor: Returned from method: register

How it works...

The @DeclareRoles annotation was used to specify that users in the employee role are associated with the class. The isCallerInRole method checked whether the current user is in the employee role. When the target method is called, if the user is authorized, the InvocationContext's proceed method is executed.
If the user is not authorized, the target method is not invoked and an exception is thrown.

See also: EJB 3.1: Controlling Security Programmatically Using JAAS
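The console output above shows several interceptors firing around register; the book wires these up in other recipes. As a rough sketch of one plausible binding (the interceptor list here is illustrative, and a "default" interceptor like the one in the output is normally bound for all beans in ejb-jar.xml rather than with an annotation):

@Stateful
public class RegistrationManager {
    @EJB
    AttendeeFacade attendeeFacade;
    Attendee attendee;

    // Interceptors listed here execute in order, before the method body.
    @Interceptors({SimpleInterceptor.class, SecurityInterceptor.class})
    public Attendee register(String name, String title, String company) {
        attendee = new Attendee(name, title, company);
        attendeeFacade.create(attendee);
        return attendee;
    }
}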

An overview of web services in Sakai

Packt · 06 Jul 2011 · 16 min read
Connecting to Sakai is straightforward, and simple tasks, such as automatic course creation, take only a few lines of programming effort.

There are significant advantages to having web services in the enterprise. If a developer writes an application that calls a number of web services, the application does not need to know the hidden details behind the services; it just needs to agree on what data to send. This loosely couples the application to the services. Later, if you replace one web service with another, programmers do not need to change the code on the application side.

SOAP works well with most organizations' firewalls, as SOAP uses the same protocol as web browsers. System administrators tend to protect an organization's network by closing unused ports to the outside world, which means that most of the time no extra network configuration is required to enable web services. Another simplifying factor is that a programmer does not need to know the details of SOAP or REST, as there are libraries and frameworks that hide the underlying magic.

In the Sakai implementation of SOAP, adding a new service is as simple as writing a small amount of Java code within a text file, which is compiled automatically and run the first time the service is called. This is great for rapid application development and deployment, as the system administrator does not need to restart Sakai for each change. Just as importantly, the Sakai services use the well-known libraries from the Apache Axis project.

SOAP is an XML message-passing protocol that, in the case of Sakai, sits on top of the Hypertext Transfer Protocol (HTTP). HTTP is the protocol used by web browsers to obtain web pages from a server. The client sends messages in XML format to a service, including the information that the service needs; the service then returns a message with the results, or an error message.

The architects introduced SOAP-based web services to Sakai first, adding RESTful services later. Unlike SOAP, instead of sending XML via HTTP POSTs to one URL that points to a service, REST sends requests to a URL that includes information about the entity, such as a user, with which the client wishes to interact. For example, a REST URL for viewing an address book item could look similar to http://host/direct/addressbook_item/15. Applying URLs in this way makes for understandable, human-readable address spaces, and this more intuitive approach simplifies coding. Further, SOAP XML passing requires that both the client and the server parse the XML, and at times the parsing effort is expensive in CPU cycles and response times.

The Entity Broker is an internal service that makes life easier for programmers and helps them manipulate entities. Entities in Sakai are managed pieces of data such as representations of courses, users, grade books, and so on. In the newer versions of Sakai, the Entity Broker has the power to expose entities as RESTful services. In contrast, for SOAP services, if you want a new service, you need to write it yourself. Over time, the Entity Broker exposes more and more entities RESTfully, delivering more hooks to integrate with other enterprise systems for free. Both SOAP and REST services sit on top of the HTTP protocol.

Protocols

This section explains how web browsers talk to servers in order to gather web pages.
It explains how to use the telnet command and a visual tool called TCPMON (http://ws.apache.org/commons/tcpmon/tcpmontutorial.html) to gain insight into how web services and Web 2.0 technologies work.

Playing with Telnet

It turns out that message passing occurs via text commands between the browser and the server. Web browsers use HTTP to get web pages and embedded content from the server and to send form information to the server. HTTP talks between the client and server via text (7-bit ASCII) commands. When humans talk with each other, they have a wide vocabulary; HTTP uses fewer than twenty words. You can experiment with HTTP directly by using a Telnet client to send commands to a web server. For example, if your demonstration Sakai instance is running on port 8080, the following commands will get you the login page:

telnet localhost 8080
GET /portal/login

The GET command does what it sounds like and gets a web page. Forms can use the GET verb to send data at the end of the URL. For example, GET /portal/login?name=alan&age=15 sends the variables name=alan and age=15 to the server.

Installing TCPMON

You can use the TCPMON tool to view requests and responses from a web browser such as Firefox. One of TCPMON's abilities is to act as an invisible man in the middle, recording the messages between the web browser and the server. Once set up, requests sent from the browser go to TCPMON, which passes each request on to the server; the server passes back a response, and TCPMON, a transparent proxy, returns the response to the web browser. This allows us to look at all requests and responses graphically.

First, you set up TCPMON to listen on a given port number (by convention, normally port 8888) and then you configure your web browser to send its requests through the proxy. Then, when you type the address of a given page into the web browser, instead of going directly to the relevant server, the browser sends the request to the proxy, which passes it on and relays the response back. TCPMON displays both the requests and the responses in a window.

You can download TCPMON from the Apache Commons site. After downloading and unpacking it, you can, from within the build directory, run either tcpmon.bat for the Windows environment or tcpmon.sh for the UNIX/Linux environment. To configure a proxy, click on the Admin tab, set the Listen Port to 8888, and select the Proxy radio button. Clicking on Add then creates a new tab, where the requests and responses will be displayed later.

Your favorite web browser now has to recognize the newly set-up proxy. For Firefox 3, you can do this by selecting the menu option Edit/Preferences, then choosing the Advanced tab and the Network tab. You will need to set the HTTP proxy option to 127.0.0.1 and the port number to 8888, and ensure that the No proxies text input is blank. Clicking on the OK button enables the new settings. To use the proxy from within Internet Explorer 7 for a Local Area Network (LAN), edit the dialog box found under Tools | Internet Options | Connections | LAN settings.

Once the proxy is working, typing http://localhost:8080/portal/login in the address bar will seamlessly return the login page of your local Sakai instance. Otherwise, you will see an error message similar to Proxy Server Refused Connection (Firefox) or Internet Explorer cannot display the webpage.
To turn off the proxy settings, select the No Proxies radio button and click on OK in Firefox 3; in Internet Explorer 7, untick the Use a proxy server for the LAN box and click on OK.

Requests and returned status codes

When TCPMON is running a proxy on port 8888, it allows you to view the requests from the browser and the responses in an extra tab. Notice the extra information that the browser sends as part of the request. HTTP/1.1 defines the protocol and version level, and the lines below GET are the header variables. The User-Agent header defines which client sends the request. The Accept headers tell the server what the capabilities of the browser are, and the Cookie header defines the value stored in a cookie.

HTTP is stateless in principle: each response is based only on the current request. To get around this, persistent information can be stored in cookies. Web browsers normally store their representation of a cookie as a little text file or in a small database on the end user's computer. Sakai uses the supporting features of a servlet container, such as Tomcat, to maintain state via cookies. A cookie stores a session ID, and when the server sees the session ID, it can look up the request's server-side state. This state contains information such as whether the user is logged in, or what he or she has ordered. The web browser deletes its local representation of the cookie each time the browser closes; a cookie that is deleted when the web browser closes is known as a session cookie.

The server response starts with the protocol followed by a status number. HTTP/1.1 200 OK tells the web browser that the server is using HTTP version 1.1 and was able to return the requested web page successfully. 2xx status codes imply success. 3xx status codes imply some form of redirection and tell the web browser where to try to pick up the requested resource. 4xx status codes are for client errors, such as malformed requests or lack of permission to obtain the resource; 4xx states are fertile ground for security managers looking in log files for attempted hacking. 5xx status codes mostly have to do with a failure of the server itself and are mostly of interest to system administrators and programmers during the debugging cycle. In most cases, 5xx status numbers are about either high server load or a broken piece of code. Sakai is changing rapidly, and even with the most vigorous testing there are bound to be occasional hiccups. You will find accurate details of the full range of status codes at http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html.

Another important part of the response is Content-Type, which tells the web browser what type of material the response is returning, so the browser knows how to handle it. For example, the web browser may want to run a plug-in for video types and display text natively. Content-Length, in characters, is normally also given. After the header information is finished, there is a newline followed by the content itself.

Web browsers interpret any redirects that are returned by sending extra requests. They also interpret any HTML pages and make multiple requests for resources such as JavaScript files and images. Modern browsers do not wait until the server returns all the requests, but render the HTML page live as the server returns the parts.

The GET verb is not very efficient for posting a large amount of data, as the URL has a length limit of around 2000 characters.
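To make the request and response structure concrete, here is an illustrative exchange; the header values are invented for the example rather than captured from a live Sakai server:

GET /portal/login HTTP/1.1
Host: localhost:8080
User-Agent: Mozilla/5.0
Accept: text/html
Cookie: JSESSIONID=0123456789abcdef

HTTP/1.1 200 OK
Content-Type: text/html;charset=UTF-8
Content-Length: 5210

<html>
...the login page markup...
</html>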
There are further drawbacks to GET: the end user can see the form data, and the browser may encode entities such as spaces, making the URL unreadable. There is also a security aspect. If you type passwords into forms submitted via GET, others may see your password or other details; this is not a good idea, especially at Internet cafés, where the next user who logs on can see the password in the browsing history. The POST verb is a better choice.

Let us take as an example the Sakai demonstration login page (http://localhost:8080/portal/login). The login page itself contains a form tag that points to the relogin page with the POST method:

<form method="post" action="http://localhost:8080/portal/relogin" enctype="application/x-www-form-urlencoded">

Note that the HTML tag also defines the content type. Key features of the POST request compared to GET are:

- The form values are stored as content after the header values
- There is a newline between the end of the header and the data
- The request declares the amount of data it carries via the Content-Length header value

The essential POST values for a login form with user admin (eid=admin) and password admin (pw=admin) will look like:

POST http://localhost:8080/portal/relogin HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Content-Length: 31

eid=admin&pw=admin&submit=Login

POST requests can contain much more information than GET requests, and they hide the values from the address bar of the web browser. This is not secure: the header is just as visible as the URL, so POST values are neither hidden nor protected. The only viable solution is for your web browser to encrypt your transactions using SSL/TLS (http://www.ietf.org/rfc/rfc2246.txt), which happens every time you connect to a server using an HTTPS URL.

SOAP

Sakai uses the Apache Axis framework, which the developers have configured to accept SOAP calls via POST. SOAP sends messages in a specific XML format with the Content-Type (otherwise known as MIME type) application/soap+xml. A programmer does not need to know much more than that, as the client libraries take care of the majority of the excruciating low-level details. An example SOAP message generated by the Perl module SOAP::Lite (http://www.soaplite.com/) for creating a login session in Sakai will look like the following POST data:

<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope
    xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    soap:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <soap:Body>
    <login>
      <c-gensym3 xsi:type="xsd:string">admin</c-gensym3>
      <c-gensym5 xsi:type="xsd:string">admin</c-gensym5>
    </login>
  </soap:Body>
</soap:Envelope>

There is an envelope with a body containing data for the service to consume. The important point to remember is that both the client and the server have to be able to parse the specific XML schema. SOAP messages can include extra security features, but Sakai does not require these; the architects expect organizations to encrypt web services using SSL/TLS.

The last extra SOAP-related complexity is the Web Service Description Language (http://www.w3.org/TR/wsdl). Web services may change location or exist in multiple locations for redundancy. The service writer can define the location of the services and the data types involved with those services in another file, in XML format.

JSON

Also worth mentioning is JavaScript Object Notation (JSON), another popular format passed using HTTP.
When web developers realized that they could force browsers to load parts of a web page at a time, it significantly improved the quality of the web browsing experience for the end user. This asynchronous loading enables all kinds of whiz-bang features, such as typing in a search term and choosing from a set of search term completions before pressing the Submit button. Asynchronous loading delivers more responsive, richer web pages that feel more like traditional desktop applications than a plain old web page. JSON is one of the formats of choice for passing asynchronous requests and responses. The asynchronous communication normally occurs through HTTP GET or POST, but with a specific content structure designed to be human-readable and script-language parser-friendly. JSON calls have the file extension .json as part of the URL. As mentioned in RFC 4627, an example image object communicated in JSON looks like:

{
  "Image": {
    "Width": 800,
    "Height": 600,
    "Title": "View from 15th Floor",
    "Thumbnail": {
      "Url": "http://www.example.com/image/481989943",
      "Height": 125,
      "Width": "100"
    },
    "IDs": [116, 943, 234, 38793]
  }
}

By blurring the boundaries between client and server, a lot of the presentation and business logic is placed on the client side in scripting languages such as JavaScript. The scripting language orchestrates the loading of parts of pages and the generation of widget sets. Frameworks such as jQuery (http://jquery.com/) and MyFaces (http://myfaces.apache.org/) significantly ease the client-side programming burden.

REST

To understand REST, you need to understand the other verbs in HTTP (http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html). The full HTTP set is OPTIONS, GET, HEAD, POST, PUT, DELETE, and TRACE. The HEAD verb returns only the headers of the response, without the content, and is useful for clients that want to see whether the content has changed since the last request. PUT requests that the content in the request be stored at the particular location mentioned in the request. DELETE is for deleting an entity.

REST uses the URL of the request to route to the resource, and the HTTP verbs to express the operation: in general, POST creates an item, PUT updates an item, DELETE deletes an item, and GET returns information about the item. In SOAP, you point directly at the service the client calls, or indirectly via the web service description; in REST, part of the URL describes the resource or resources you wish to work with. For example, a hypothetical address book application that lists all e-mail addresses in HTML format would look similar to the following:

GET /email

To list the addresses in XML or JSON format:

GET /email.xml
GET /email.json

To get the first e-mail address in the list:

GET /email/1

To create a new e-mail address (remembering to add the rest of the e-mail details to the body of the request):

POST /email

To delete address 5 from the list:

DELETE /email/5

To obtain address 5 in other formats such as JSON or XML, use file extensions at the end of the URL, for example:

GET /email/5.json
GET /email/5.xml

RESTful services are intuitively more descriptive than SOAP services, and they enable easy switching of the format from HTML to JSON to fuel the dynamic and asynchronous loading of websites.
Due to REST's direct use of HTTP verbs, the methodology also fits well with the most common application type: CRUD (Create, Read, Update, and Delete) applications, such as the site or user tools within Sakai. Now that we have discussed the theory, in the next section we shall discuss which Sakai-related SOAP services already exist.
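To give a flavor of what such a RESTful exchange looks like on the wire, here is a sketch of an Entity Broker style request against the address book entity mentioned earlier; the entity ID and the JSON fields are hypothetical:

GET /direct/addressbook_item/15.json HTTP/1.1
Host: localhost:8080

HTTP/1.1 200 OK
Content-Type: application/json

{"id": 15, "name": "Alan", "email": "alan@example.com"}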

Drupal 7 Themes: Dynamic Theming

Packt · 05 Jul 2011 · 11 min read
Designating a separate admin theme

Let's start with one of the simplest techniques: designating a separate theme for your admin interface. The Drupal 7 system comes bundled with the Seven theme, which is purpose-built for use by the administration interface. Seven is assigned as your site's admin theme by default. You can, however, change to any theme you desire. Changing the admin theme is done directly from within the admin system's Theme Manager. To change the admin theme, follow these steps:

1. Log in and access your site's admin system.
2. Select the Appearance option from the Management menu.
3. After the Theme Manager loads in your browser, scroll down to the bottom of the page, where you will find a combo box labeled Administration theme.
4. Select the theme you desire from the combo box.
5. Click Save configuration, and your selected theme should appear immediately.

The Administration theme combo box displays all the enabled themes on your site. If you don't see what you want listed, scroll back up and make sure you have enabled the theme you desire. If the theme is not listed in the Theme Manager at all, you will need to install it first!

Additionally, note the option listed below the Administration theme combo box: Use the administration theme when editing or creating content. Though this option is enabled by default, you may want to de-select it. If you do, the system will use the frontend theme for content creation and editing. In some cases this is more desirable, as it allows you to see the page in context instead of inside the admin theme; it provides, in other words, a more realistic view of the final content item.

Using multiple page templates

Apart from basic blog sites, most websites today employ different page layouts for different purposes. In some cases this is as simple as one layout for the home page and another for the internal pages; other sites take this much further and deliver different layouts based on content, function, level of user access, or other criteria.

There are various ways you can meet this need with Drupal. Some approaches are quite simple and can be executed directly from the administration interface; others require you to work with the files that make up your Drupal theme. Creative use of configuration and block assignments can address some needs. Most people, however, will need to use multiple templates to achieve the variety they desire. The bad news is that there is no admin system shortcut for controlling multiple templates in Drupal: you must manually create the various templates and customize them to suit your needs. The good news is that creating and implementing additional templates is not terribly difficult, and it is possible to attain a high degree of granularity with the techniques described next. Indeed, should you be so inclined, you could literally define a distinct template for each individual page of your site!

While there are many good reasons for running multiple page templates, you should not create additional templates solely for the purpose of disabling regions to hide blocks. While that approach will work, it results in a performance hit for the site, as the system still produces the blocks, only to wind up not displaying them.
The better practice is to control your block visibility through the Blocks Manager.

Drupal employs an order of precedence, implemented using a naming convention, and you can unlock the granularity of the system through proper application of that convention. It is possible, for example, to associate templates with every element on the path, or with specific users, or with a particular functionality or node type, all through the simple process of creating a copy of an existing template and naming it appropriately. In Drupal terms, this is called creating template suggestions.

When the system detects multiple templates, it prefers the specific to the general; if it fails to find multiple templates, it applies the relevant default template from the Drupal core. The fundamental methodology of the system is to use the most specific template file it finds and ignore other, more general templates. This basic principle, combined with proper naming of the templates, gives you control over the template that will be applied in various situations; a concrete illustration follows this section.

The default suggestions provided by the Drupal system should be sufficient for the vast majority of theme developers. However, if you find that you need additional suggestions beyond those provided, it is possible to extend your site and add new ones. See http://drupal.org/node/190815 for an example of this advanced Drupal theming technique.

Let's take a series of four examples to show how this system feature can provide solutions to common problems:

1. Use a unique template for your site's home page
2. Use a different template for a group of pages
3. Assign a specific template to a specific page
4. Designate a specific template for a specific user

Creating a unique home page template

Let's assume that you wish to set up a unique look and feel for the home page of a site. The ability to employ a different appearance for the home page and the interior pages is one of the most common requests web developers hear. There are several techniques you can employ to achieve this result; which is right for you depends on the extent and nature of the variation required and, to a lesser extent, on the flexibility of the theme you presently employ. For many people, a combination of the techniques will be used.

Another factor to consider is the abilities of the people who will be managing and maintaining the site. There is often a conflict between what is easiest for the developers and what is easiest for the site administrators. You need to keep this in mind and strive to create manageable structures. It is, for example, much easier for a client to manage a site that populates the home page dynamically than to have to create content in multiple places and remember to assign things in the proper fashion. In this regard, using dedicated templates for the home page is generally preferable.

One option to address this issue is the creative use of configuration and assignment: you can achieve a degree of variety within a theme, without creating dedicated templates, by controlling the visibility and positioning of the blocks on the home page. Another option you may want to consider is using a contributed module to assist with this task; the Panels and Views modules in particular are quite useful for assembling complex home page layouts. See Useful Extensions for Themers for more information on these extensions.
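As the concrete illustration promised above, here is the rough order in which Drupal 7 looks for page templates when rendering the page at node/2 (the node ID is hypothetical); the first file found wins:

page--node--2.tpl.php   (this specific node)
page--node--%.tpl.php   (any individual node page)
page--node.tpl.php      (the node section as a whole)
page.tpl.php            (the site-wide default)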
If configuration and assignment alone do not give you enough flexibility, you will want to consider using a dedicated template that is purpose-built for your home page content. To create a dedicated template for your home page, follow these steps:

1. Access the Drupal installation on your server.
2. Copy your theme's existing page.tpl.php file (if your theme does not have a page.tpl.php file, copy the default page.tpl.php from the folder /modules/system).
3. Paste it back into the same directory as the original file and rename it page--front.tpl.php.
4. Make any changes you desire to the new page--front.tpl.php.
5. Save the file.
6. Clear the Drupal theme cache.

That's it; it's really that easy. The system will now automatically display your new template file for the site's home page and use the default page.tpl.php for the rest of the site. Note that page--front.tpl.php will be applied to whatever page you specify as the site's front page using the site configuration settings. To override the default home page setting, visit the Site Information page from the Configuration Manager and enter the path of the page you desire into the field labeled Default home page. Next, let's use the same technique to associate a template with a group of pages.

The file naming syntax has changed slightly in Drupal 7. In the past, multiple words contained in a file name were consistently separated with a single hyphen. In Drupal 7, a single hyphen is only used for compound words; a double hyphen is used for targeting a template. For example, page--front.tpl.php uses the double hyphen, as it indicates that we are targeting the page template when displayed for the front page. In contrast, maintenance-page.tpl.php shows the single-hyphen syntax, as it is a compound name. Remember, suggestions only work when placed in the same directory as the base template; in other words, to get page--front.tpl.php to work, you must place it in the same directory as page.tpl.php.

Using a different template for a group of pages

You can provide a template to be used by any distinct group of pages. The approach is the same as in the previous section, but the name for the template file derives from the path of the pages in the group. For example, to theme the pages that relate to users, you would create the template page--user.tpl.php.

A note on templates and URLs: Drupal bases the template order of precedence on the default path generated by the system. If the site is using a module like Pathauto, which alters the path that appears to site visitors, remember that your templates will still be selected based on the original paths. The exception is page--front.tpl.php, which is applied to whatever page you specify as the site's front page in the site's Configuration Manager.

Drupal provides a suggestion name for each of the default page groupings in the system; page--user.tpl.php above is one example. The steps involved in creating a template specific to a group of pages are the same as those used for the dedicated home page template:

1. Access the Drupal installation on your server.
2. Copy your theme's existing page.tpl.php file (if your theme does not have one, copy the default from /modules/system).
3. Paste it back into the same directory as the original file and rename it with the appropriate suggestion, for example page--user.tpl.php.
4. Make any changes you desire to the new template.
5. Save the file.
6. Clear the Drupal theme cache.

Note that these group-level names set the template for all the pages within the group. If you need a more granular solution, that is, a template for a sub-group or an individual page within the group, see the discussion in the following sections.

Assigning a specific template to a specific page

Taking this to its extreme, you can associate a specific template with a specific page. By way of example, assume you wish to provide a unique template for a specific content item located at http://www.demosite.com/node/2. The path of the page gives you the key to naming the template: create a copy of the page.tpl.php file and rename it page--node--2.tpl.php.

Using template suggestion wildcards

One of the most interesting changes in Drupal 7 is the introduction of template suggestion wildcards. In the past, you had to specify the integer value for individual nodes, for example page--user--1.tpl.php. If you wished to also style the pages for the entire group of users, you had the choice of either creating page--user.tpl.php, which affects all user pages including the login forms, or creating individual templates to cover each of the individual users. With Drupal 7, we can now simply use a wildcard in place of the integer value, for example page--user--%.tpl.php. The new template page--user--%.tpl.php will affect all the individual user pages without affecting the login pages.

Designating a specific template for a specific user

Assume that you want to add a personalized theme for the user with the ID of 1 (the first user in your Drupal system and, for many sites, the ID used by the super user). To do this, copy the existing page.tpl.php file, rename it to reflect its association with the specific user, and make any changes you desire to the new file. To associate the new template file with the user, name the file page--user--1.tpl.php. Now, when the user with ID 1 logs into the site, they will be presented with this template. Only user 1 will see this template, and only when he or she is logged in and visiting the user page.
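For orientation, the template files discussed above are ordinary PHP templates. A dedicated template normally starts life as an exact copy of page.tpl.php, but a deliberately stripped-down sketch of what such a file might contain looks like this; $site_name and $page['content'] are standard Drupal 7 page template variables:

<?php // page--front.tpl.php: a minimal sketch, not a production template. ?>
<div id="page">
  <h1><?php print $site_name; ?></h1>
  <?php print render($page['content']); ?>
</div>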

Drupal 7 Themes: Creating Dynamic CSS Styling

Packt · 05 Jul 2011 · 7 min read
This article builds on the techniques covered in the previous article on Dynamic Theming. In addition to creating templates that are displayed conditionally, the Drupal system also enables you to apply CSS selectively. Drupal creates unique identifiers for various elements of the system, and you can use those identifiers to create specific CSS selectors. As a result, you can provide styling that responds to the presence (or absence) of specific conditions on any given page.

Employing $classes for conditional styling

One of the most useful dynamic styling tools is $classes. This variable is intended specifically as an aid to dynamic CSS styling. It allows for the easy creation of CSS selectors that respond to either the layout of the page or the status of the person viewing it. This technique is typically used to control styling where one, two, or three columns may be displayed, or to trigger display changes for authenticated users.

Prior to Drupal 6, $layout was used to detect the page layout; Drupal 6 replaced it with $body_classes; and in Drupal 7 it is $classes. While each was intended to serve a similar purpose, do not try to use the previous incarnations with Drupal 7, as they are no longer supported!

By default, $classes is included with the body tag in the system's html.tpl.php file; this means it is available to all themes without any additional steps on your part. With the variable in place, the class associated with the body tag changes automatically in response to the conditions on the page at that time. All you need to do to take advantage of this is create the CSS selectors you wish to see applied in the various situations.

If you are not certain what this looks like and how it can be used, simply view the home page of your site with the Bartik theme active. Use your browser's view source option to examine the body tag of the page. You will see something like this:

<body class="html front not-logged-in one-sidebar sidebar-first page-node">

The class definition you see there is the result of $classes. By way of comparison, log in to your site and repeat the test. The body class will now look something like this:

<body class="html front logged-in one-sidebar sidebar-first page-node">

In this example, we see that the class has changed to reflect that the user viewing the page is now logged in. Additional classes may appear, depending on the status of the person viewing the page and the additional modules installed.

While the system implements this technique in relation to the body tag, its usage is not limited to that scenario; you can use $classes with any template and in a variety of situations. If you'd like to see a variation of this technique in action (without having to create it from scratch), take a look at the Bartik theme: open the node.tpl.php file and you can see the $classes variable added to the div at the top of the file, which allows this template to also employ the conditional classes tool. Note that the placement of $classes is not critical; it does not have to be at the top of the file. You can call it at any point where it is needed.
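For example, with the body classes shown above in place, a theme stylesheet might vary presentation like this; the rules themselves are invented for illustration:

/* Home page only. */
body.front #content { font-size: 1.1em; }

/* Hide something from anonymous visitors. */
body.not-logged-in .comment-form { display: none; }

/* Adjust widths when a single, first sidebar is present. */
body.one-sidebar.sidebar-first #content { width: 75%; }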
Returning to placement: you could, for example, add $classes to a specific ordered list by printing it out in conjunction with the li tag, like this:

<li class="<?php print $classes; ?>">

$classes is, in short, a tremendously useful aid to dynamic theming. It becomes even more attractive once you master adding your own variables to it, as discussed in the next section.

Adding new variables to $classes

To make things even more interesting (and useful), you can add new variables to $classes through the variable process functions. This is implemented in similar fashion to other preprocess functions. Let's look at an example, in this case taken from Drupal.org. The purpose here is to add a striping class keyed to the zebra variable and make it available through $classes. To set this up, follow these steps:

1. Access your theme's template.php file. If you don't have one, create it.
2. Add the following to the file:

<?php
function mythemename_preprocess_node(&$vars) {
  // Add a striping class.
  $vars['classes_array'][] = 'node-' . $vars['zebra'];
}
?>

3. Save the file.

The variable will now be available in any template in which you implement $classes.

Creating dynamic selectors for nodes

Another handy resource you can tap for CSS styling purposes is Drupal's node ID system. By default, Drupal generates a unique ID for each node of the website. Node IDs are assigned at the time of node creation and remain stable for the life of the node. You can use the unique node identifier as a means of activating a unique selector. To make use of this resource, simply create a selector as follows:

#node-[nid] { }

For example, assume you wish to add a border to the node with the ID of 2. Simply create a new selector in your theme's stylesheet, as shown:

#node-2 { border: 1px solid #336600; }

As a result, the node with the ID of 2 will now be displayed with a 1-pixel-wide solid border. The styling affects only that specific node.

Creating browser-specific stylesheets

A common solution for managing some of the difficulties of achieving true cross-browser compatibility is to offer stylesheets that target specific browsers. Internet Explorer tends to be the biggest culprit in this area, with IE6 being particularly cringe-worthy. Ironically, Internet Explorer also provides us with one of the best tools for addressing this issue: a proprietary technology known as Conditional Comments. It is possible to easily add conditional stylesheets to your Drupal system through the use of this technology, but it requires the addition of a contributed module called Conditional Stylesheets. While it is possible to set up conditional stylesheets without the module, it is more work, requiring you to add multiple lines of code to your template.php. With the module installed, you just add the stylesheet declarations to your .info file and then, using a simple syntax, set the conditions for their use. Note also that the Conditional Stylesheets module is in the queue for inclusion in Drupal 8, so it is certainly worth looking at now. To learn more, visit the project site at http://drupal.org/project/conditional_styles. If, in contrast, you would like to do things manually by creating a preprocess function to add the stylesheet and target it by browser key, please see http://drupal.org/node/744328.

Summary

This article covers the basics needed to make your Drupal theme responsive to its contents and users.
By applying the techniques discussed in this article, you can control the theming of pages based on content, the state of the pages, or the users viewing them. Taking the principles one step further, you can also make the theming of elements within a page conditional. The ability to control the templates used and the styling of the page and its elements is what we call dynamic theming.

WordPress 3 Security: Overall Risk to Site and Server

Packt · 04 Jul 2011 · 7 min read
How proactive we can be depends on our hosting plan. Then again, harping back to my point about security's best friend, awareness, even Automattic bloggers could do with a heads-up. Just as site and server security each rely on the other, this section mixes the two to outline the big picture of woe and general despair.

The overall concern isn't hard to grasp. The server, like any computer, is a filing cabinet. It has many drawers, or ports, that each contain the files upon which a service (or daemon) depends. Fortunately, most drawers can be sealed, welded shut. But are they? Then again, some administrative drawers, for instance those containing control panels, must be accessible to us, and only to us, using a super-secure key, with the service files themselves providing no frailty to assist forcing an entry. Others, generally in our case the web files drawer, cannot even be locked because, of course, were it so, no one could access our sites. To compound the concern, there's a risk that someone rummaging about in one drawer can internally access the others and, from there, any networked cabinets.

Let's break down our site and server vulnerabilities, vying them against some common attack scenarios which, it should be noted, merely tip the iceberg of malicious possibility. Just keep smiling.

Physical server vulnerabilities

Just how secure is the filing cabinet? We've covered physical security and expanded on the black art of social engineering. Clearly, we have to trust our web hosts to maintain the data center and to screen their personnel and contractors. Off-server backup is vital.

Open ports with vulnerable services

We manage ports, and hence differing types of network traffic, primarily with a firewall, which allows or denies data packets depending on the port to which they navigate. FTP packets, for example, navigate to the server's port 21. The web service queues up for 80. Secure web traffic (HTTPS rather than HTTP) heads for 443. And so on. Regardless of whether or not, say, an FTP server is installed, if 21 is closed then traffic is denied.

So here's the problem. Say you allow an FTP service with a known weakness. Along comes a hacker, exploits the deficiency, and gains a foothold into the machine via its port. Similarly, every service listening on every port is a potential shoo-in for a hacker.

Attacking services with a (Distributed) Denial of Service attack

Many in the blogging community will be aware of the Digg of death, a nice problem to have where a post's popularity, duly Digged, leads to a sudden rush of traffic that, if the web host doesn't intervene and suspend the site, can overwhelm server resources and even crash the box. What has happened here is an unintentional denial of service, this time via the web service on port 80. As with most attacks, DoS attacks come in many forms, but the malicious purpose, often concentrated at big sites or networks and sometimes pursued for a commercial or political advantage, is generally to flood services and, ultimately, to disable HTTP. As we introduced earlier, the distributed variety is the most powerful, synchronizing the combined processing power of a zombie network, or botnet, against the target.

Access and authentication issues

In most cases, we simply deny access by disabling the service and closing its port. Many of us, after all, only ever need web and administration ports. Only?
Blimey! Server ports, such as for direct server access or using a more user-friendly middleman such as cPanel, could be used to gain unwanted entry if the corresponding service can be exploited or if a hacker can glean your credentials. Have some typical scenarios.

Buffer overflow attacks

This highly prevalent kind of memory attack is assisted by poorly written software and utilizes a scrap of code that's often introduced through a web form field or via a port-listening service, such as that dodgy FTP daemon mentioned previously. Take a simplistic example. You've got a slug of RAM in the box and, on submitting data to a form, that queues up in a memory space, a buffer, where it awaits processing. Now, imagine someone submits malicious code that's longer, containing more bits, than the programmer allowed for. Again, the data queues in its buffer but, being too long, it overflows, overwriting the form's expected command and having itself executed instead.

So what about the worry of swiped access credentials? Again, possibilities abound.

Intercepting data with man-in-the-middle attacks

The MITM is where someone sits between your keystrokes and the server, scouring the data. That could be, for example, a rootkit, a data logger, or a network or wireless sniffer. If your data transits unencrypted, in plain text, as is the case with FTP or HTTP and commonly with e-mail, then everything is exposed. That includes login credentials.

Cracking authentication with password attacks

Brute force attacks, on the other hand, run through alphanumeric and special character combinations against a login function, such as for a control panel or the Dashboard, until the password is cracked. They're helped immensely when the username is known, so there's a hint not to use that regular old WordPress chestnut, admin. Brute forcing can be time-consuming, but can also be coordinated between multiple zombies, warp-speeding the process with their combined processing power. Dictionary attacks, meanwhile, throw A-Z word lists against the password, and hybrid attacks morph brute force and dictionary techniques to crack naïve keys such as pa55worD.

The many dangers of cross-site scripting (XSS)

XSS crosses bad code—adds it—with an unsecured site. Site users become a secondary target here because when they visit a hacked page, and their browser properly downloads everything as it resolves, they retrieve the bad code to become infected locally. An in-vogue example is the iframe injection, which adds a link that leads to, say, a malicious download on another server. When a visitor duly views the page, downloading it locally, malware and all, the attacker has control over that user's PC. Lovely.

There's more. Oh so much more. Books more, in fact. There's too much to mention here, but another classic tactic is to use XSS for cookie stealing. ... All that's involved here is a code injection to some poor page that reports to a log file on the hacker's server. Page visitors have their cookies chalked up to the log and have their sessions hijacked, together with their session privileges. If the user's logged into webmail, so is the hacker. If it's online banking, goodbye to your funds. If the user's a logged-in WordPress administrator, you get the picture.

Assorted threats with cross-site request forgery (CSRF)

This is not the same as XSS, but there are similarities, the main one being that, again, a blameless if poorly built site is crossed with malicious code to cause an effect.
A user logs into your site and, in the regular way, is granted a session cookie. The user surfs some pages, one of them having been decorated with some imaginative code from an attacker which the user’s browser correctly downloads. Because that script said to do something to your site and because the unfortunate user hadn’t logged out of your site, relinquishing the cookie, the action is authorized by the user’s browser. What may happen to your site, for example, depends on the user’s privileges so could vary from a password change or data theft to a nice new theme effect called digital soup.  
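The good news is that, on the application side at least, both of these cross-site nasties are blunted by two disciplines WordPress already hands us: escaping output and verifying intent with nonces. As a purely illustrative sketch (the myplugin_* names and the form field are hypothetical; the API calls wp_nonce_field(), check_admin_referer(), and esc_html() are stock WordPress), the pattern looks something like this:

    <?php
    // Illustrative plugin fragment; myplugin_* names are hypothetical,
    // the API calls are standard WordPress functions.

    // 1. Render the form with a nonce, so a forged cross-site request,
    //    which cannot know the nonce value, fails verification.
    function myplugin_render_form() {
        echo '<form method="post">';
        wp_nonce_field( 'myplugin_save' );  // hidden, time-limited token
        echo '<input type="text" name="myplugin_note" />';
        echo '<input type="submit" value="Save" />';
        echo '</form>';
    }

    // 2. Handle the submission: verify the nonce (CSRF), then escape
    //    anything user-supplied before echoing it back (XSS).
    function myplugin_handle_post() {
        check_admin_referer( 'myplugin_save' );  // stops if nonce is missing or stale
        $note = isset( $_POST['myplugin_note'] ) ? $_POST['myplugin_note'] : '';
        echo esc_html( $note );  // neuters an injected <script> or <iframe>
    }
    ?>

It won't stop a brute forcer or a buffer overflow, of course, but it does close the two doors we've just walked through.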
WordPress 3 Security: Risks and Threats

Packt
04 Jul 2011
11 min read
You may think that most of this is irrelevant to WordPress security. Sadly, you'd be wrong. Your site is only as safe as the weakest link: of the devices that assist in administering it or its server; of your physical security; or of your computing and online discipline. To sharpen the point with a simple example, whether you have an Automattic-managed wordpress.com blog or unmanaged dedicated site hosting, if a hacker grabs a password on your local PC, then all bets are off. If a hacker can borrow your phone, then all bets are off. If a hacker can coerce you to a malicious site, then all bets are off. And so on.

Let's get one thing clear. There is no such thing as total security and anyone who says any different is selling something. Then again, what we can achieve, given ongoing attention, is to boost our understanding, to lock our locations, to harden our devices, to consolidate our networks, to screen our sites and, certainly not least of all, to discipline our computing practice. Even this carries no guarantee. Tell you what though, it's pretty darned tight. Let's jump in and, who knows, maybe even have a laugh here and there to keep us awake.

Calculated risk

So what is the risk? Here's one way to look at the problem:

RISK = VULNERABILITY x THREAT

A vulnerability is a weakness, a crack in your armour. That could be a dodgy wireless setup or a poorly coded plugin, a password-bearing sticky note, or an unencrypted e-mail. It could just be the tired security guy. It could be 1001 things, and then more besides. The bottom line vulnerability though, respectfully, is our ignorance.

A threat, on the other hand, is an exploit, some means of hacking the flaw, in turn compromising an asset such as a PC, a router, a phone, your site. That's the sniffer tool that intercepts your wireless, the code that manipulates the plugin, a colleague that reads the sticky, whoever reads your mail, or the social engineer who tiptoes around security.

The risk is the likelihood of getting hacked. If you update the flawed plugin, for instance, then the threat is redundant, reducing the risk. Some risk remains because, when a further vulnerability is found, there will be someone, somewhere, who will tailor an exploit to threaten it. This ongoing struggle to minimize risk is the cat and mouse that is security. To minimize risk, we defend vulnerabilities against threats.

You may be wondering, why bother calculating risk? After all, any vulnerability requires attention. You'd not be wrong but, such is the myriad complexity of securing multiple assets, any of which can add risk to our site, and given that budgets or our time are at issue, we need to prioritize. Risk factoring helps by initially flagging glaring concerns and, ideally assisted by a security policy, ensuring sensible ongoing maintenance. Securing a site isn't a one-time deal. Such is the threatscape, it's an ongoing discipline.

An overview of our risk

Let's take a WordPress site, highlight potential vulnerabilities, and chew over the threats. WordPress is an interactive blogging application written in PHP and working in conjunction with a SQL database to store data and content. The size and complexity of this content manager is extended with third party code such as plugins and themes.
The framework and WordPress sites are installed on a web server and that, the platform, and its file system are administered remotely.

- WordPress. Powering multi-millions of standalone sites plus another 20 million blogs at wordpress.com, Automattic's platform is an attack target coveted by hackers. According to wordpress.org, 40% of self-hosted sites run the gauntlet with versions 2.3 to 2.9.
- Interactive. Just being online, let alone offering interaction, sites are targets. A website, after all, is effectively an open drawer in an otherwise lockable filing cabinet, the server. Now, we're inviting people server-side not just to read but to manipulate files and data.
- Application, size, and complexity. Not only do applications require security patching but, given the sheer size and complexity of WordPress, there are more holes to plug. Then again, being a mature beast, a non-custom, hardened WordPress site is in itself robust.
- PHP, third party code, plugins, and themes. Here's a whole new dynamic. The use of poorly written or badly maintained PHP and other code adds a slew of attack vectors.
- SQL database. Containing our most valuable assets, content and data, MySQL and other database apps are directly available to users, making them immediate targets for hackers.
- Data. User data from e-mails to banking information is craved by cybercriminals and its compromise, else that of our content, costs sites anything from reputation to a drop or ban in search results, as well as carrying the remedial cost of time and money.
- Content and media. Content is regularly copied without permission. Likewise with media, which can also be linked to and displayed on other sites while you pay for its storage and bandwidth. Upload, FTP, and private areas provide further opportunities for mischief.
- Sites. Sites-plural adds risk because a compromise to one can be a compromise to all.
- Web server. Server technologies and wider networks may be hacked directly or via WordPress, jeopardizing sites and data, and being used as springboards for wider attacks.
- File system. Inadequately secured files provide a means of site and server penetration.
- Administered remotely. Casual or unsecured content, site, server, and network administration allows for multi-faceted attacks and, conversely, requires discipline, a secure local working environment, and impenetrable local-to-remote connectivity.

Meet the hackers

This isn't some cunning ploy by yours-truly to see for how many readers I can attain visitor's rights, you understand. The fact is, to catch a thief one has to think like one. Besides, not all hackers are such bad hats. Far from it. Overall there are three types - white hat, grey hat, and black hat - each with their sub-groups.

White hat

One important precedent sets white hats above and beyond other groups: permission. Also known as ethical hackers, these decent upstanding folks are motivated:

- To learn about security
- To test for vulnerabilities
- To find and monitor malicious activity
- To report issues
- To advise others
- To do nothing illegal
- To abide by a set of ethics to not harm anyone

So when we're testing our security to the limit, that should include us. Keep that in mind.

Black hat

Out-and-out dodgy dealers. They have nefarious intent and are loosely sub-categorized:

Botnets

A botnet is a network of automated robots, or scripts, often involved in malicious activity such as spamming or data-mining. The network tends to be comprised of zombie machines, such as your server, which are called upon at will to cause general mayhem.
Botnet operators, the actual black hats, have no interest in damaging most sites. Instead they want quiet control of the underlying server resources so their malbots can, by way of more examples, spread malware or mount Denial of Service (DoS) attacks, the latter using multiple zombies to shower queries on a server to saturate resources and drown out a site.

Cybercriminals

These are hackers and gangs whose activity ranges from writing and automating malware to data-mining, the extraction of sensitive information to extort or sell for profit. They tend not to make nice enemies, so I'll just add that they're awfully clever.

Hacktivists

Politically-minded and often inclined towards freedom of information, hacktivists may fit into one of the previous groups, but would argue that they have a justifiable cause.

Scrapers

While not technically hackers, scrapers steal content - often on an automated basis from site feeds - for the benefit of their generally charmless blogs or blog farms.

Script kiddies

This broad group ranges from well-intentioned novices (white hat) to online graffiti artists who, when successfully evading community service, deface sites for kicks. Armed with tutorials galore and a share full of malicious warez, the hell-bent are a great threat because, seeking bragging rights, they spew as much damage as they possibly can.

Spammers

Again not technically hackers, but this vast group leeches off blogs and mailing lists to promote their businesses, which frequently seem to revolve around exotic pharmaceutical products. They may automate bomb marketing or embed hidden links but, however educational their comments may be, spammers are generally, but not always, just a nuisance and a benign threat.

Misfits

Not jargon this time, this miscellaneous group includes disgruntled employees, the generally unloved, and that guy over the road who never really liked you.

Grey hat

Grey hatters may have good intentions, but seem to have a knack for misplacing their moral compass, so there's a qualification for going into politics. One might argue, for that matter, that government intelligence departments provide a prime example.

Hackers and crackers

Strictly speaking, hackers are white hat folks who just like pulling things apart to see how they work. Most likely, as kids, they preferred Meccano to Lego. Crackers are black or grey hat. They probably borrowed someone else's Meccano, then built something explosive. Over the years, the lines between hacker and cracker have become blurred to the point that put-out hackers often classify themselves as ethical hackers. This author would argue the point but, largely in the spirit of living language, won't, instead referring to all those trying to break in, for good or bad, as hackers. Let your conscience guide you as to which is which in each instance and, failing that, find a good priest.

Physically hacked off

So far, we have tentatively flagged the importance of a safe working environment and of a secure network from fingertips to page query. We'll begin to tuck in now, first looking at the physical risks to consider along our merry way. Risk falls into the broad categories of physical and technical, and this tome is mostly concerned with the latter. Then again, with physical weaknesses being so commonly exploited by hackers, often as an information-gathering preface to a technical attack, it would be lacking not to mention this security aspect and, moreover, not to sweet-talk the highly successful area of social engineering.
Physical risk boils down to the loss or unauthorized use of (materials containing) data:

- Break-in or, more likely still, a cheeky walk-in
- Dumpster diving or collecting valuable information, literally from the trash
- Inside jobs because a disgruntled (ex-)employee can be a dangerous sort
- Lost property when you leave the laptop on the train
- Social engineering which is a topic we'll cover separately, so that's ominous
- Something just breaks ... such as the hard-drive

Password-strewn sticky notes aside, here are some more specific red flags to consider when trying to curtail physical risk:

- Building security, whether it's attended or not. By the way, who's got the keys? A cleaner, a doorman, the guy you sacked?
- Discarded media or paper clues that haven't been criss-cross shredded. Your rubbish is your competitor's profit.
- Logged-on PCs left unlocked, unsecured, and unattended or with hard drives unencrypted and lacking strong admin and user passwords for the BIOS and OS.
- Media, devices, PCs and their internal/external hardware. Everything should be pocketed or locked away, perhaps in a safe.
- No Ethernet jack point protection and no idea about the accessibility of the cable beyond the building. No power-surge protection could be a false economy too.

This list is not exhaustive. For mid-sized to larger enterprises, it barely scratches the surface and you, at least, do need to employ physical security consultants to advise on anything from office location to layout as well as to train staff to create a security culture. Otherwise, if you work in a team, at least you need a policy detailing each and every one of these elements, whether they impact your work directly or indirectly. You may consider designating and sub-designating who is responsible for what and policing, for example, kit that leaves the office. Don't forget cell and smart phones and even diaries.
Moodle: History Teaching using Chats, Books and Plugins

Packt
29 Jun 2011
4 min read
The Chat Module

Students naturally gravitate towards the Chat module in Moodle. It is one of the modules that they effortlessly use whilst working on another task. I often find that they have logged in and are discussing work-related tasks in a way that enables them to move forward on a particular task. Another use for the Chat module is to conduct a discussion outside the classroom timetabled lesson when students know that you are available to help them with issues. This is especially relevant to students who embark on study leave in preparation for examinations. It can be a lonely and stressful period. Knowing that they can log in to a chat that has been planned in advance means that they can prepare issues that they wish to discuss about their workload and find out how their peers are tackling the same issues. The teacher can ensure that the chat stays on message and provide useful input at the same time.

Setting up a Chatroom

We want to set up a chat with students who are on holiday but have some examination preparation to do for a lesson that will take place straight after their return to school. Ideally, we would have informed the students prior to starting their holiday that this session would be available to anyone who wished to take part.

1. Log in to the Year 7 History course and turn on editing.
2. In the Introduction section, click the Add an activity dropdown.
3. Select Chat.
4. Enter an appropriate name for the chat.
5. Enter some relevant information in the Introduction text.
6. Select the date and time for the chat to begin.
7. Beside Repeat sessions, select No repeats – publish the specified time only.
8. Leave other elements at their default settings.
9. Click Save changes.

The following screenshot is the result of clicking Add an activity from the drop-down menu:

If we wanted to set up the chatroom so that the chat took place at the same time each day or each week, then it is possible to select the appropriate option from the Repeat sessions dropdown. The remaining options make it possible for students to go back and view sessions that they have taken part in.

Entering the chatroom

When a student or teacher logs in to the course for the appointed chat, they will see the chat symbol in the Introduction section. Clicking on the symbol enables them to enter the chatroom via a simple chat window or a more accessible version where checking the box ensures that only new messages appear on the screen, as shown in the following screenshot:

As long as another student or teacher has entered the chatroom, a chat can begin when users type a message and await a response. The Chat module is a useful way for students to collaborate with each other and with their teacher if they need to. It comes into its own when students are logging in to discuss how to make progress with their collaborative wiki story about a murder in the monastery or when students preparing for an examination share tips and advice to help each other through the experience. Collaboration is the key to effective use of the Chat module and teachers need not fear its potential for timewasting if this point is emphasized in the activities that they are working on.

Plugins

A brief visit to www.moodle.org and a search for 'plugins' reveals an extensive list of modules that are available for use with Moodle but stand outside the standard installation. If you have used a blogging tool such as Wordpress you will be familiar with the concept of plugins.
Over the last few years, developers have built up a library of plugins which can be used to enhance your Moodle experience. Every teacher has different ways of doing things and it is well worth exploring the plugins database and related forums to find out what teachers are using and how they are using it. There is, for example, a plugin for writing individual learning plans for students and another plugin called Quickmail which enables you to send an email to everyone on your course even more quickly than the conventional way.

Installing plugins

Plugins need to be installed and they need administrator rights to run at all. The Book module, for example, requires a zip file to be downloaded from the plugins database onto your computer and the files then need to be extracted to a folder in the Mod folder of your Moodle's software directory. Once it is in the correct folder, the administrator then needs to run the installation. Installation has been successful if you are able to log in to the course and see the Book module as an option in the Add a resource dropdown.
HTML5: Audio and Video Elements

Packt
28 Jun 2011
9 min read
Understanding audio and video file formats

There are plenty of different audio and video file formats. These files may include not just video but also audio and metadata—all in one file. These file types include:

- .avi – A blast from the past, the Audio Video Interleave file format was invented by Microsoft. It does not support most modern audio and video codecs in use today.
- .flv – Flash video. This used to be the only video file format Flash fully supported. Now it also includes support for .mp4.
- .mp4 or .mpv – MPEG4 is based on Apple's QuickTime player and requires that software for playback.

How it works...

Each of the previously mentioned video file formats requires a browser plugin or some sort of standalone software for playback. Next, we'll look at new open-source audio and video file formats that don't require plugins or special software and the browsers that support them.

H.264 has become one of the most commonly used high definition video formats, used on Blu-ray Discs as well as many Internet video streaming sites including Flash, the iTunes Music Store, Silverlight, Vimeo, YouTube, cable television broadcasts, and real-time videoconferencing. In addition, there is a patent on H.264, so it is, by definition, not open source. Browsers that support the H.264 video file format include Safari and Internet Explorer 9. Google has now partially rejected the H.264 format and is leaning more toward its support of the new WebM video file format instead.

Ogg might be a funny sounding name, but its potential is very serious, I assure you. Ogg is really two things: Ogg Theora, which is a video file format; and Ogg Vorbis, which is an audio file format. Theora is really much more of a video file compression format than it is a playback file format, though it can be used that way also. It has no patents and is therefore considered open source. Fun fact: According to Wikipedia, "Theora is named after Theora Jones, Edison Carter's controller on the Max Headroom television program." Browsers that support the Ogg video file format include Firefox, Chrome, and Opera.

WebM is the newest entrant in the online video file format race. This open source audio/video file format development is sponsored by Google. A WebM file contains both an Ogg Vorbis audio stream as well as a VP8 video stream. It is fairly well supported by media players including Miro, Moovidia, VLC, Winamp, and more, including preliminary support by YouTube. The makers of Flash say it will support WebM in the future, as will Internet Explorer 9. Browsers that currently support WebM include Chrome, Firefox, and Opera.

There's more...

So far this may seem like a laundry list of audio and video file formats with spotty browser support at best. If you're starting to feel that way, you'd be right. The truth is no one audio or video file format has emerged as the one true format to rule them all. Instead, we developers will often have to serve up the new audio and video files in multiple formats while letting the browser decide whichever one it's most comfortable and able to play. That's a drag for now but here's hoping in the future we settle on fewer formats with more consistent results.

Audio file formats

There are a number of audio file formats as well. Let's take a look at those.

AAC – Advanced Audio Coding files are better known as AACs. This audio file format was created by design to sound better than MP3s using the same bitrate. Apple uses this audio file format for its iTunes Music Store. Since the AAC audio file format supports DRM, Apple offers files in both protected and unprotected formats.
There is an AAC patent, so by definition we can't exactly call this audio file format open source. All Apple hardware products, including their mobile iPhone and iPad devices as well as Flash, support the AAC audio file format. Browsers that support AAC include Safari, Chrome, and Internet Explorer 9.

MP3 – MPEG-1 Audio Layer 3 files are better known as MP3s. Unless you've been hiding under a rock, you know MP3s are the most ubiquitous audio file format in use today. Capable of playing two channels of sound, these files can be encoded using a variety of bitrates up to 320. Generally, the higher the bitrate, the better the audio file sounds. That also means larger file sizes and therefore slower downloads. There is an MP3 patent, so by definition we can't exactly call this audio file format open source either. Browsers that support MP3 include Safari, Chrome, and Internet Explorer 9.

Ogg – We previously discussed the Ogg Theora video file format. Now, let's take a look at the Ogg Vorbis audio format. As mentioned before, there is no patent on Ogg files and they are therefore considered open source. Another fun fact: According to Wikipedia, "Vorbis is named after a Discworld character, Exquisitor Vorbis in Small Gods by Terry Pratchett."

File format agnosticism

We've spent a lot of time examining these various video and audio file formats. Each has its own plusses and minuses and each is supported (or not) by various browsers. Some work better than others; some sound and look better than others. But here's the good news: the new HTML5 <video> and <audio> elements themselves are file-format agnostic! Those new elements don't care what kind of video or audio file you're referencing. Instead, they serve up whatever you specify and let each browser do whatever it's most comfortable doing.

Can we stop the madness one day?

The bottom line is that until one new HTML5 audio and one new HTML5 video file format emerges as the clear choice for all browsers and devices, audio and video files are going to have to be encoded more than once for playback.

Creating accessible audio and video

In this section we will pay attention to those people who rely on assistive technologies.

How to do it...

First, we'll start with Kroc Camen's "Video for Everybody" code chunk and examine how to make it accessibility friendly, to ultimately look like this:

    <div id="videowrapper">
      <video controls height="360" width="640">
        <source src="__VIDEO__.MP4" type="video/mp4" />
        <source src="__VIDEO__.OGV" type="video/ogg" />
        <object width="640" height="360" type="application/x-shockwave-flash" data="__FLASH__.SWF">
          <param name="movie" value="__FLASH__.SWF" />
          <param name="flashvars" value="controlbar=over&amp;image=__POSTER__.JPG&amp;file=__VIDEO__.MP4" />
          <img src="__VIDEO__.JPG" width="640" height="360" alt="__TITLE__"
            title="No video playback capabilities, please download the video below" />
        </object>
        <track kind="captions" src="videocaptions.srt" srclang="en" />
        <p>Final fallback content</p>
      </video>
      <div id="captions"></div>
      <p><strong>Download Video:</strong>
        Closed Format: <a href="__VIDEO__.MP4">"MP4"</a>
        Open Format: <a href="__VIDEO__.OGV">"Ogg"</a>
      </p>
    </div>

How it works...

The first thing you'll notice is that we've wrapped the new HTML5 video element in a wrapper div. While this is not strictly necessary semantically, it will give us a nice "hook" to tie our CSS into:

    <div id="videowrapper">

Much of the next chunk should be recognizable from the previous section.
Nothing has changed here:

    <video controls height="360" width="640">
      <source src="__VIDEO__.MP4" type="video/mp4" />
      <source src="__VIDEO__.OGV" type="video/ogg" />
      <object width="640" height="360" type="application/x-shockwave-flash" data="__FLASH__.SWF">
        <param name="movie" value="__FLASH__.SWF" />
        <param name="flashvars" value="controlbar=over&amp;image=__POSTER__.JPG&amp;file=__VIDEO__.MP4" />
        <img src="__VIDEO__.JPG" width="640" height="360" alt="__TITLE__"
          title="No video playback capabilities, please download the video below" />
      </object>

So far, we're still using the approach of serving the new HTML5 video element to those browsers capable of handling it and using Flash as our first fallback option. But what happens next if Flash isn't an option gets interesting:

    <track kind="captions" src="videocaptions.srt" srclang="en" />

What the heck is that, you might be wondering.

"The track element allows authors to specify explicit external timed text tracks for media elements. It does not represent anything on its own." - W3C HTML5 specification

Here's our chance to use another new part of the HTML5 spec: the new <track> element. Now, we can reference the type of external file specified in kind="captions". As you can guess, kind="captions" is for a caption file, whereas kind="descriptions" is for an audio description. Of course src calls the specific file and srclang sets the source language for the new HTML5 track element. In this case, en represents English. Unfortunately, no browsers currently support the new track element.

Lastly, we allow one last bit of fallback content in case the user can't use the new HTML5 video element or Flash, when we give them something purely text based:

    <p>Final fallback content</p>

Now, even if the user can't see an image, they'll at least have some descriptive content served to them.

Next, we'll create a container div to house our text-based captions. Since no browser currently supports closed captioning for the new HTML5 audio or video element, we'll have to leave room to include our own:

    <div id="captions"></div>

Lastly, we'll include Kroc's text prompts to download the HTML5 video in closed or open file formats:

    <p><strong>Download Video:</strong>
      Closed Format: <a href="__VIDEO__.MP4">"MP4"</a>
      Open Format: <a href="__VIDEO__.OGV">"Ogg"</a>
    </p>
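The same multi-source, fallback-driven pattern works for sound via the new HTML5 <audio> element. As a minimal sketch (the file names are placeholders in the same spirit as Kroc's __VIDEO__ tokens), serving an MP3 alongside an Ogg Vorbis file covers the browsers discussed earlier:

    <audio controls>
      <source src="__AUDIO__.MP3" type="audio/mpeg" />
      <source src="__AUDIO__.OGG" type="audio/ogg" />
      <p>Your browser doesn't support the audio element.
        <a href="__AUDIO__.MP3">Download the MP3</a> instead.</p>
    </audio>

As with video, the browser walks the source list in order and plays the first format it can handle, so the ordering of the list is a quiet way of stating your format preference.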
GNU Octave: data analysis examples

Packt
28 Jun 2011
7 min read
Loading data files

When performing a statistical analysis of a particular problem, you often have some data stored in a file. You can save your variables (or the entire workspace) using different file formats and then load them back in again. Octave can, of course, also load data from files generated by other programs. There are certain restrictions when you do this, which we will discuss here. In the following matter, we will only consider ASCII files, that is, readable text files.

When you load data from an ASCII file using the load command, the data is treated as a two-dimensional array. We can then think of the data as a matrix where lines represent the matrix rows and columns the matrix columns. For this matrix to be well defined, the data must be organized such that all the rows have the same number of columns (and therefore the columns the same number of rows). For example, the content of a file called series.dat can be:

    1 232 334
    2 245 334
    3 456 342
    4 555 321

Next we load this into Octave's workspace:

    octave:1> load -ascii series.dat;

whereby the data is stored in the variable named series. In fact, Octave is capable of loading the data even if you do not specify the ASCII format. The number of rows and columns are then:

    octave:2> size(series)
    ans =
       4   3

I prefer the file extension .dat, but again this is optional and can be anything you wish, say .txt, .ascii, .data, or nothing at all.

In the data files you can have:

- Octave comments
- Data blocks separated by blank lines (or equivalent empty rows)
- Tabs or single and multi-space for number separation

Thus, the following data file will successfully load into Octave:

    # First block
    1 232 334
    2 245 334
    3 456 342
    4 555 321

    # Second block
    1 231 334
    2 244 334
    3 450 341
    4 557 327

The resulting variable is a matrix with 8 rows and 3 columns. If you know the number of blocks or the block sizes, you can then separate the blocked data.

Now, the following data stored in the file bad.dat will not load into Octave's workspace:

    1 232.1 334
    2 245.2
    3 456.23
    4 555.6

because line 1 has three columns whereas lines 2-4 have two columns. If you try to load this file, Octave will complain:

    octave:3> load -ascii bad.dat
    error: load: bad.dat: inconsistent number of columns near line 2
    error: load: unable to extract matrix size from file 'bad.dat'

Simple descriptive statistics

Consider an Octave function mcintgr and its vectorized version mcintgrv. This function can evaluate the integral for a mathematical function f in some interval [a; b] where the function is positive. The Octave function is based on the Monte Carlo method and the return value, that is, the integral, is therefore a stochastic variable. When we calculate a given integral, we should as a minimum present the result as a mean or another appropriate measure of a central value together with an associated statistical uncertainty. This is true for any other stochastic variable, whether it is the height of the pupils in a class, the length of a plant's leaves, and so on. In this section, we will use Octave for the most simple statistical description of stochastic variables.

Histogram and moments

Let us calculate the integral given in Equation (5.9) one thousand times using the vectorized version of the Monte Carlo integrator:

    octave:4> for i=1:1000
    > s(i) = mcintgrv("sin", 0, pi, 1000);
    > endfor

The array s now contains a sequence of numbers which we know are approximately 2.
Before we make any quantitative statistical description, it is always a good idea to first plot a histogram of the data as this gives an approximation to the true underlying probability distribution of the variable s. The easiest way to do this is by using Octave's hist function, which can be called using:

    octave:5> hist(s, 30, 1)

The first argument, s, to hist is the stochastic variable, the second is the number of bins that s should be grouped into (here we have used 30), and the third argument gives the sum of the heights of the histogram (here we set it to 1). The histogram is shown in the figure below. If hist is called via the command hist(s), s is grouped into ten bins and the sum of the heights of the histogram is equal to sum(s).

From the figure, we see that mcintgrv produces a sequence of random numbers that appear to be normal (or Gaussian) distributed with a mean of 2. This is what we expected. It then makes good sense to describe the variable via the sample mean defined as:

$\bar{s} = \frac{1}{N}\sum_{i=1}^{N} s_i$

where N is the number of samples (here 1000) and $s_i$ the i'th data point, as well as the sample variance given by:

$\mathrm{var}(s) = \frac{1}{N-1}\sum_{i=1}^{N} (s_i - \bar{s})^2$

The variance is a measure of the distribution width and therefore an estimate of the statistical uncertainty of the mean value. Sometimes, one uses the standard deviation instead of the variance. The standard deviation is simply the square root of the variance, $\mathrm{std}(s) = \sqrt{\mathrm{var}(s)}$.

To calculate the sample mean, sample variance, and the standard deviation in Octave, you use:

    octave:6> mean(s)
    ans = 1.9999
    octave:7> var(s)
    ans = 0.002028
    octave:8> std(s)
    ans = 0.044976

In the statistical description of the data, we can also include the skewness, which measures the symmetry of the underlying distribution around the mean. If it is positive, it is an indication that the distribution has a long tail stretching towards positive values with respect to the mean. If it is negative, it has a long negative tail. The skewness is often defined as:

$\mathrm{skewness}(s) = \frac{\frac{1}{N}\sum_{i=1}^{N} (s_i - \bar{s})^3}{\mathrm{std}(s)^3}$

We can calculate this in Octave via:

    octave:9> skewness(s)
    ans = -0.15495

This result is a bit surprising because we would assume from the histogram that the data set represents numbers picked from a normal distribution, which is symmetric around the mean and therefore has zero skewness. It illustrates an important point—be careful to use the skewness as a direct measure of the distribution's symmetry—you need a very large data set to get a good estimate.

You can also calculate the kurtosis, which measures the flatness of the sample distribution compared to a normal distribution. Negative kurtosis indicates a relatively flatter distribution around the mean and a positive kurtosis that the sample distribution has a sharp peak around the mean. The kurtosis is defined by the following:

$\mathrm{kurtosis}(s) = \frac{\frac{1}{N}\sum_{i=1}^{N} (s_i - \bar{s})^4}{\mathrm{std}(s)^4} - 3$

It can be calculated by the kurtosis function:

    octave:10> kurtosis(s)
    ans = -0.02310

The kurtosis has the same problem as the skewness—you need a very large sample size to obtain a good estimate.

Sample moments

As you may know, the sample mean, variance, skewness, and kurtosis are examples of sample moments. The mean is related to the first moment, the variance the second moment, and so forth. Now, the moments are not uniquely defined. One can, for example, define the k'th absolute sample moment $p_k^a$ and k'th central sample moment $p_k^c$ as:

$p_k^a = \frac{1}{N}\sum_{i=1}^{N} s_i^k, \qquad p_k^c = \frac{1}{N}\sum_{i=1}^{N} (s_i - \bar{s})^k$

Notice that the first absolute moment is simply the sample mean, but the first central sample moment is zero.
In Octave, you can easily retrieve the sample moments using the moment function. For example, to calculate the second central sample moment you use:

    octave:11> moment(s, 2, 'c')
    ans = 0.002022

Here the first input argument is the sample data, the second defines the order of the moment, and the third argument specifies whether we want the central moment 'c' or the absolute moment 'a', which is the default. Compare the output with the output from Command 7—why is it not the same?
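A hint for that closing question, sketched directly in Octave: var normalizes by N - 1 (the unbiased sample variance), while the central moment above divides by N, so the two should differ by a factor of (N - 1)/N, which is easy to check:

    octave:12> N = numel(s);
    octave:13> var(s) * (N - 1) / N    % rescaled variance should match moment(s, 2, 'c')

This is a sketch of the idea rather than a transcript; the exact printed digits will depend on your own run of the integrator.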
Android Application Testing: Adding Functionality to the UI

Packt
27 Jun 2011
10 min read
The user interface is in place. Now we start adding some basic functionality. This functionality will include the code to handle the actual temperature conversion.

Temperature conversion

From the list of requirements from the previous article we can obtain this statement: When one temperature is entered in one field the other one is automatically updated with the conversion.

Following our plan, we must implement this as a test to verify that the correct functionality is there. Our test would look something like this:

    @UiThreadTest
    public final void testFahrenheitToCelsiusConversion() {
        mCelsius.clear();
        mFahrenheit.clear();
        final double f = 32.5;
        mFahrenheit.requestFocus();
        mFahrenheit.setNumber(f);
        mCelsius.requestFocus();
        final double expectedC = TemperatureConverter.fahrenheitToCelsius(f);
        final double actualC = mCelsius.getNumber();
        final double delta = Math.abs(expectedC - actualC);
        final String msg = "" + f + "F -> " + expectedC + "C but was " +
            actualC + "C (delta " + delta + ")";
        assertTrue(msg, delta < 0.005);
    }

Firstly, as we already know, to interact with the UI changing its values we should run the test on the UI thread, and thus it is annotated with @UiThreadTest.

Secondly, we are using a specialized class to replace EditText, providing some convenience methods like clear() or setNumber(). This would improve our application design.

Next, we invoke a converter, named TemperatureConverter, a utility class providing the different methods to convert between different temperature units and using different types for the temperature values.

Finally, as we will be truncating the results to provide them in a suitable format presented in the user interface, we should compare against a delta to assert the value of the conversion.

Creating the test as it is will force us to follow the planned path. Our first objective is to add the needed code to get the test to compile and then to satisfy the test's needs.

The EditNumber class

In our main project, not in the tests one, we should create the class EditNumber extending EditText, as we need to extend its functionality. We use Eclipse's help to create this class using File | New | Class or its shortcut in the Toolbars. This screenshot shows the window that appears after using this shortcut:

The following list describes the most important fields and their meaning in the previous screen:

- Source folder: The source folder for the newly-created class. In this case the default location is fine.
- Package: The package where the new class is created. In this case the default package com.example.aatg.tc is fine too.
- Name: The name of the class. In this case we use EditNumber.
- Modifiers: Modifiers for the class. In this particular case we are creating a public class.
- Superclass: The superclass for the newly-created type. We are creating a custom View and extending the behavior of EditText, so this is precisely the class we select for the supertype. Remember to use Browse... to find the correct package.
- Which method stubs would you like to create? These are the method stubs we want Eclipse to create for us. Selecting Constructors from superclass and Inherited abstract methods would be of great help. As we are creating a custom View, we should provide the constructors that are used in different situations, for example when the custom View is used inside an XML layout.
- Do you want to add comments? Some comments are added automatically when this option is selected. You can configure Eclipse to personalize these comments.

Once the class is created, we need to change the type of the fields first in our test:

    public class TemperatureConverterActivityTests extends
            ActivityInstrumentationTestCase2<TemperatureConverterActivity> {

        private TemperatureConverterActivity mActivity;
        private EditNumber mCelsius;
        private EditNumber mFahrenheit;
        private TextView mCelsiusLabel;
        private TextView mFahrenheitLabel;
        ...

Then change any cast that is present in the tests. Eclipse will help you do that.

If everything goes well, there are still two problems we need to fix before being able to compile the test:

- We still don't have the methods clear() and setNumber() in EditNumber
- We don't have the TemperatureConverter utility class

To create the methods we are using Eclipse's helpful actions. Let's choose Create method clear() in type EditNumber. Same for setNumber() and getNumber().

Finally, we must create the TemperatureConverter class. Be sure to create it in the main project and not in the test project. Having done this, in our test select Create method fahrenheitToCelsius in type TemperatureConverter. This fixes our last problem and leads us to a test that we can now compile and run.

Surprisingly, or not, when we run the tests, they will fail with an exception:

    09-06 13:22:36.927: INFO/TestRunner(348): java.lang.ClassCastException: android.widget.EditText
    09-06 13:22:36.927: INFO/TestRunner(348): at com.example.aatg.tc.test.TemperatureConverterActivityTests.setUp(TemperatureConverterActivityTests.java:41)
    09-06 13:22:36.927: INFO/TestRunner(348): at junit.framework.TestCase.runBare(TestCase.java:125)

That is because we updated all of our Java files to include our newly-created EditNumber class but forgot to change the XMLs, and this could only be detected at runtime. Let's proceed to update our UI definition:

    <com.example.aatg.tc.EditNumber
        android:layout_height="wrap_content"
        android:id="@+id/celsius"
        android:layout_width="match_parent"
        android:layout_margin="@dimen/margin"
        android:gravity="right|center_vertical"
        android:saveEnabled="true" />

That is, we replace the original EditText by com.example.aatg.tc.EditNumber, which is a View extending the original EditText.

Now we run the tests again and we discover that all tests pass. But wait a minute, we haven't implemented any conversion or any handling of values in the new EditNumber class and all tests passed with no problem. Yes, they passed because we don't have enough restrictions in our system and the ones in place simply cancel themselves out.

Before going further, let's analyze what just happened. Our test invoked the mFahrenheit.setNumber(f) method to set the temperature entered in the Fahrenheit field, but setNumber() is not implemented and it is an empty method as generated by Eclipse that does nothing at all. So the field remains empty.

Next, the value for expectedC—the expected temperature in Celsius—is calculated invoking TemperatureConverter.fahrenheitToCelsius(f), but this is also an empty method as generated by Eclipse. In this case, because Eclipse knows about the return type, it returns a constant 0. So expectedC becomes 0.

Then the actual value for the conversion is obtained from the UI, in this case invoking getNumber() from EditNumber. But once again this method was automatically generated by Eclipse and, to satisfy the restriction imposed by its signature, it must return a value that Eclipse fills with 0.
The delta value is again 0, as calculated by Math.abs(expectedC - actualC). And finally our assertion assertTrue(msg, delta < 0.005) is true because delta=0 satisfies the condition, and the test passes.

So, is our methodology flawed, as it cannot detect a simple situation like this? No, not at all. The problem here is that we don't have enough restrictions and they are satisfied by the default values used by Eclipse to complete auto-generated methods. One alternative could be to throw exceptions in all of the auto-generated methods, something like RuntimeException("not yet implemented"), to detect their use when not implemented. But we will be adding enough restrictions in our system to easily trap this condition.

TemperatureConverter unit tests

It seems, from our previous experience, that the default conversion implemented by Eclipse always returns 0, so we need something more robust. Otherwise this will only return a valid result when the parameter takes the value of 32F.

The TemperatureConverter is a utility class not related to the Android infrastructure, so a standard unit test will be enough to test it.

We create our tests using Eclipse's File | New | JUnit Test Case, filling in some appropriate values, and selecting the method to generate a test as shown in the next screenshot.

Firstly, we create the unit test by extending junit.framework.TestCase and selecting com.example.aatg.tc.TemperatureConverter as the class under test:

Then by pressing the Next > button we can obtain the list of methods we may want to test:

We have implemented only one method in TemperatureConverter, so it's the only one appearing in the list. Other classes implementing more methods will display all the options here.

It's good to note that even if the test method is auto-generated by Eclipse it won't pass. It will fail with the message Not yet implemented to remind us that something is missing. Let's start by changing this:

    /**
     * Test method for {@link com.example.aatg.tc.TemperatureConverter#fahrenheitToCelsius(double)}.
     */
    public final void testFahrenheitToCelsius() {
        for (double c: conversionTableDouble.keySet()) {
            final double f = conversionTableDouble.get(c);
            final double ca = TemperatureConverter.fahrenheitToCelsius(f);
            final double delta = Math.abs(ca - c);
            final String msg = "" + f + "F -> " + c + "C but is " + ca +
                " (delta " + delta + ")";
            assertTrue(msg, delta < 0.0001);
        }
    }

Creating a conversion table with values for different temperature conversions we know from other sources would be a good way to drive this test:

    private static final HashMap<Double, Double> conversionTableDouble =
        new HashMap<Double, Double>();

    static {
        // initialize (c, f) pairs
        conversionTableDouble.put(0.0, 32.0);
        conversionTableDouble.put(100.0, 212.0);
        conversionTableDouble.put(-1.0, 30.20);
        conversionTableDouble.put(-100.0, -148.0);
        conversionTableDouble.put(32.0, 89.60);
        conversionTableDouble.put(-40.0, -40.0);
        conversionTableDouble.put(-273.0, -459.40);
    }

We may just run this test to verify that it fails, giving us this trace:

    junit.framework.AssertionFailedError: -40.0F -> -40.0C but is 0.0 (delta 40.0)
    at com.example.aatg.tc.test.TemperatureConverterTests.testFahrenheitToCelsius(TemperatureConverterTests.java:62)
    at java.lang.reflect.Method.invokeNative(Native Method)
    at android.test.AndroidTestRunner.runTest(AndroidTestRunner.java:169)
    at android.test.AndroidTestRunner.runTest(AndroidTestRunner.java:154)
    at android.test.InstrumentationTestRunner.onStart(InstrumentationTestRunner.java:520)
    at android.app.Instrumentation$InstrumentationThread.run(Instrumentation.java:1447)

Well, this was something we were expecting, as our conversion always returns 0. Implementing our conversion, we discover that we need an ABSOLUTE_ZERO_F constant:

    public class TemperatureConverter {

        public static final double ABSOLUTE_ZERO_C = -273.15d;
        public static final double ABSOLUTE_ZERO_F = -459.67d;

        private static final String ERROR_MESSAGE_BELOW_ZERO_FMT =
            "Invalid temperature: %.2f%c below absolute zero";

        public static double fahrenheitToCelsius(double f) {
            if (f < ABSOLUTE_ZERO_F) {
                throw new InvalidTemperatureException(
                    String.format(ERROR_MESSAGE_BELOW_ZERO_FMT, f, 'F'));
            }
            return ((f - 32) / 1.8d);
        }
    }

Absolute zero is the theoretical temperature at which entropy would reach its minimum value. To be able to reach this absolute zero state, according to the laws of thermodynamics, the system should be isolated from the rest of the universe. Thus it is an unreachable state. However, by international agreement, absolute zero is defined as 0K on the Kelvin scale, as -273.15°C on the Celsius scale, and as -459.67°F on the Fahrenheit scale.

We are creating a custom exception, InvalidTemperatureException, to indicate a failure providing a valid temperature to the conversion method. This exception is created simply by extending RuntimeException:

    public class InvalidTemperatureException extends RuntimeException {
        public InvalidTemperatureException(String msg) {
            super(msg);
        }
    }

Running the tests again, we now discover that the testFahrenheitToCelsiusConversion test fails; however, testFahrenheitToCelsius succeeds. This tells us that conversions are now correctly handled by the converter class but there are still some problems with the UI handling this conversion. A closer look at the failure trace reveals that there's something still returning 0 when it shouldn't. This reminds us that we are still lacking a proper EditNumber implementation. Before proceeding to implement the mentioned methods, let's create the corresponding tests to verify that what we are implementing is correct.
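For orientation, here is a hedged sketch of roughly where EditNumber is heading. This is not the book's final listing, just a minimal implementation that would satisfy clear(), setNumber(), and getNumber() as the tests use them; treating an empty or unparseable field as zero is an assumption made for illustration:

    package com.example.aatg.tc;

    import android.content.Context;
    import android.util.AttributeSet;
    import android.widget.EditText;

    public class EditNumber extends EditText {

        // Constructor used when the view is inflated from an XML layout;
        // the other EditText constructors would be added the same way.
        public EditNumber(Context context, AttributeSet attrs) {
            super(context, attrs);
        }

        /** Empties the field. */
        public void clear() {
            setText("");
        }

        /** Shows the given number in the field. */
        public void setNumber(double number) {
            super.setText(Double.toString(number));
        }

        /** Returns the field's value, or 0 when it holds no parseable number. */
        public double getNumber() {
            try {
                return Double.parseDouble(getText().toString());
            } catch (NumberFormatException e) {
                return 0.0d;  // assumption: an empty/invalid field reads as zero
            }
        }
    }

With stubs like these in place, the conversion test would fail for the right reason (no conversion logic wired to the UI yet) rather than because of Eclipse's silent defaults.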
Android application testing: TDD and the temperature converter

Packt
27 Jun 2011
7 min read
Getting started with TDD

Briefly, Test Driven Development is the strategy of writing tests along the development process. These test cases are written in advance of the code that is supposed to satisfy them. A single test is added, then the code needed to satisfy the compilation of this test, and finally the full set of test cases is run to verify their results. This contrasts with other approaches to the development process where the tests are written at the end, when all the coding has been done.

Writing the tests in advance of the code that satisfies them has several advantages. First, the tests get written one way or another, whereas if they are left till the end it is highly probable that they are never written. Second, developers take more responsibility for the quality of their work. Design decisions are taken in single steps, and finally the code satisfying the tests is improved by refactoring it.

This UML activity diagram depicts the Test Driven Development process to help us understand it:

The following sections explain the individual activities depicted in this activity diagram.

Writing a test case

We start our development process with writing a test case. This apparently simple process will put some machinery to work inside our heads. After all, it is not possible to write some code, test it or not, if we don't have a clear understanding of the problem domain and its details. Usually, this step will get you face to face with the aspects of the problem you don't understand, and which you need to grasp if you want to model and write the code.

Running all tests

Once the test is written, the obvious following step is to run it, together with the other tests we have written so far. Here, the importance of an IDE with built-in support for the testing environment is perhaps more evident than in other situations, and this could cut the development time by a good fraction. It is expected that, firstly, our test fails, as we still haven't written any code!

To be able to complete our test, we usually write additional code and take design decisions. The additional code written is the minimum possible to get our test to compile. Consider here that not compiling is failing.

When we get the test to compile and run, if the test fails then we try to write the minimum amount of code necessary to make the test succeed. This may sound awkward at this point but the following code example in this article will help you understand the process.

Optionally, instead of running all tests again you can just run the newly added test first to save some time, as sometimes running the tests on the emulator can be rather slow. Then run the whole test suite to verify that everything is still working properly. We don't want to add a new feature by breaking an existing one.

Refactoring the code

When the test succeeds, we refactor the code added to keep it tidy, clean, and minimal. We run all the tests again to verify that our refactoring has not broken anything and, if the tests are again satisfied and no more refactoring is needed, we finish our task.

Running the tests after refactoring is an incredible safety net which has been put in place by this methodology. If we made a mistake refactoring an algorithm, extracting variables, introducing parameters, changing signatures, or whatever your refactoring is composed of, this testing infrastructure will detect the problem.
Furthermore, if some refactoring or optimization might not be valid for every possible case, we can verify it for every case used by the application and expressed as a test case.

What is the advantage? Personally, the main advantage I've seen so far is that you reach your destination quickly, and it is much more difficult to be diverted into implementing options in your software that will never be used. This implementation of unneeded features is a waste of your precious development time and effort. And as you may already know, judiciously administering these resources may be the difference between successfully reaching the end of the project or not.

Probably, Test Driven Development cannot be indiscriminately applied to any project. I think that, as with any other technique, you should use your judgment and expertise to recognize where it can be applied and where not. But keep this in mind: there are no silver bullets.

The other advantage is that you always have a safety net for your changes. Every time you change a piece of code, you can be absolutely sure that other parts of the system are not affected as long as there are tests verifying that the conditions haven't changed.

Understanding the testing requirements

To be able to write a test about any subject, we should first understand the subject under test. We also mentioned that one of the advantages is that you focus on your destination quickly instead of revolving around the requirements. Translating requirements into tests and cross-referencing them is perhaps the best way to understand the requirements, and to be sure that there is always an implementation and verification for all of them. Also, when the requirements change (something that is very frequent in software development projects), we can change the tests verifying these requirements and then change the implementation to be sure that everything was correctly understood and mapped to code.

Creating a sample project—the Temperature Converter

Our examples will revolve around an extremely simple Android sample project. It doesn't try to show all the fancy Android features but focuses on testing and gradually building the application from the tests, applying the concepts learned before.

Let's pretend that we have received a list of requirements to develop an Android temperature converter application. Though oversimplified, we will be following the steps you normally would to develop such an application. However, in this case we will introduce the Test Driven Development techniques in the process.

The list of requirements

More often than not, the list of requirements is very vague and there is a high number of details not fully covered.
As an example, let's pretend that we receive this list from the project owner:

The application converts temperatures from Celsius to Fahrenheit and vice versa
The user interface presents two fields to enter the temperatures, one for Celsius and the other for Fahrenheit
When a temperature is entered in one field, the other one is automatically updated with the conversion
If there are errors, they should be displayed to the user, possibly using the same fields
Some space in the user interface should be reserved for the on-screen keyboard, to ease the application's operation when several conversions are entered
Entry fields should start empty
Values entered are decimal values with two digits after the point
Digits are right aligned
Last entered values should be retained even after the application is paused

User interface concept design

Let's assume that we receive this conceptual user interface design from the User Interface Design team:

Creating the projects

Our first step is to create the project. As we mentioned earlier, we are creating a main project and a test project. The following screenshot shows the creation of the TemperatureConverter project (all values are typical Android project values):

When you are ready to continue, press the Next > button in order to create the related test project. The creation of the test project is displayed in this screenshot. All values will be selected for you based on your previous entries:
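With the projects in place, the TDD process begins by translating each requirement into a test. As a hedged sketch, the "entry fields should start empty" requirement could be expressed like this before any UI code exists; TemperatureConverterActivity and the R.id field IDs are illustrative assumptions, to be created only after the test demands them:

```java
import android.test.ActivityInstrumentationTestCase2;
import android.widget.EditText;

// A sketch of a test for the requirement "Entry fields should start empty".
// TemperatureConverterActivity and the field IDs are assumptions made for
// illustration; in TDD they would be created after writing this test.
public class TemperatureConverterActivityTest
        extends ActivityInstrumentationTestCase2<TemperatureConverterActivity> {

    private EditText celsius;
    private EditText fahrenheit;

    public TemperatureConverterActivityTest() {
        super(TemperatureConverterActivity.class);
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        final TemperatureConverterActivity activity = getActivity();
        celsius = (EditText) activity.findViewById(R.id.celsius);
        fahrenheit = (EditText) activity.findViewById(R.id.fahrenheit);
    }

    public void testFieldsStartEmpty() {
        assertEquals("", celsius.getText().toString());
        assertEquals("", fahrenheit.getText().toString());
    }
}
```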
Android Application Testing: Getting Started

Packt
24 Jun 2011
9 min read
We will avoid introductions to Android and the Open Handset Alliance (http://www.openhandsetalliance.com) as they are covered in many books already, and I am inclined to believe that if you are reading an article covering this more advanced topic you have started with Android development before. However, we will review the main concepts behind testing and the techniques, frameworks, and tools available to deploy your testing strategy on Android.

Brief history

Initially, when Android was introduced at the end of 2007, there was very little support for testing in the platform, and for some of us, very accustomed to using testing as a component intimately coupled with the development process, it was time to start developing some frameworks and tools to permit this approach.

By that time Android had some rudimentary support for unit testing using JUnit (https://junit.org/junit5/), but it was not fully supported and even less documented. In the process of writing my own library and tools, I discovered Phil Smith's Positron, an Open Source library and a very suitable alternative to support testing on Android, so I decided to extend his excellent work and bring some new and missing pieces to the table. Some aspects of test automation were not included, and I started a complementary project to fill that gap; it was consequently named Electron. And although the positron is the anti-particle of the electron, and they annihilate if they collide, take for granted that that was not the idea, but more the conservation of energy and the generation of some visible light and waves.

Later on, Electron entered the first Android Development Challenge (ADC1) in early 2008 and, though it obtained a rather good score in some categories, frameworks had no place in that competition. Should you be interested in the origin of testing on Android, you can find some articles and videos published on my personal blog (http://dtmilano.blogspot.co.uk/search/label/electron).

By that time unit tests could be run on Eclipse. However, testing was not done on the real target but on a JVM on the local development computer.

Google also provided application instrumentation code through the Instrumentation class. When running an application with instrumentation turned on, this class is instantiated for you before any of the application code, allowing you to monitor all of the interaction the system has with the application. An Instrumentation implementation is described to the system through the AndroidManifest.xml file.

Software bugs

It doesn't matter how hard you try, how much time you invest in design, or even how careful you are when programming: mistakes are inevitable and bugs will appear. Bugs and software development are intimately related. However, the term bugs to describe flaws, mistakes, or errors had been used in hardware engineering for many decades before computers were even invented. Notwithstanding the story about the term bug being coined by Mark II operators at Harvard University, Thomas Edison wrote this in 1878 in a letter to Puskás Tivadar, showing the early adoption of the term:

"It has been just so in all of my inventions. The first step is an intuition, and comes with a burst, then difficulties arise — this thing gives out and [it is] then that 'Bugs' — as such little faults and difficulties are called — show themselves and months of intense watching, study and labor are requisite before commercial success or failure is certainly reached."
How bugs severely affect your projects

Bugs affect many aspects of your software development project, and it is clearly understood that the sooner in the process you find and squash them, the better. It doesn't matter whether you are developing a simple application to publish on the Android Market, re-branding the Android experience for an operator, or creating a customized version of Android for a device manufacturer: bugs will delay your shipment and will cost you money.

Of all the software development methodologies and techniques, Test Driven Development, an agile component of the software development process, is likely the one that forces you to face your bugs earliest in the development process, and thus it is also likely that you will solve more problems up front. Furthermore, the increase in productivity can be clearly appreciated in a project where a software development team uses this technique versus one that is, at best, writing tests at the end of the development cycle. If you have been involved in software development for the mobile industry, you will have reasons to believe that with all the rush this stage never occurs. It's funny, because usually this rush is to solve problems that could have been avoided.

In a study conducted by the National Institute of Standards and Technology (USA) in 2002, it was reported that software bugs cost the country's economy $59.5 billion annually. More than a third of this cost could be avoided if better software testing were performed. But please don't misunderstand this message. There are no silver bullets in software development, and what will lead you to an increase in productivity and manageability of your project is the discipline of applying these methodologies and techniques to stay in control.

Why, what, how, and when to test

You should understand that early bug detection saves a huge amount of project resources and reduces software maintenance costs. This is the best known reason to write software tests for your development project. Increased productivity will soon be evident. Additionally, writing the tests will give you a deeper understanding of the requirements and the problem to be solved. You will not be able to write tests for a piece of software you don't understand. This is also the reasoning behind writing tests to clearly understand legacy or third-party code, and having the infrastructure to confidently change or update it.

The more your code is covered by your tests, the higher your expectations of discovering the hidden bugs can be. If during this coverage analysis you find that some areas of your code are not exercised, additional tests should be added to cover this code as well. This technique requires a special instrumented Android build to collect probe data, and it must be disabled for any release code because the impact on performance could severely affect application behavior.

To fill in this gap, enter EMMA (http://emma.sourceforge.net/), an open-source toolkit for measuring and reporting Java code coverage that can instrument classes for coverage offline. It supports various coverage types:

class
method
line
basic block

Coverage reports can be obtained in different output formats. EMMA is supported to some degree by the Android framework, and it is possible to build an EMMA-instrumented version of Android.
This screenshot shows how an EMMA code coverage report is displayed in the Eclipse editor, showing green lines when the code has been tested, provided the corresponding plugin is installed.

(Move the mouse over the image to enlarge it.)

Unfortunately, the plugin doesn't support Android tests yet, so right now you can use it for your JUnit tests only. Android coverage analysis reports are only available through HTML.

Tests should be automated, and you should run some or all of them every time you introduce a change or addition to your code, in order to ensure that all the conditions that were met before are still met and that the new code satisfies the tests as expected. This leads us to the introduction of Continuous Integration, which relies on the automation of tests and build processes. If you don't use automated testing, it is practically impossible to adopt Continuous Integration as part of the development process, and it is very difficult to ensure that changes do not break existing code.

What to test

Strictly speaking you should test every statement in your code, but this also depends on different criteria and can be reduced to testing the main paths of execution or just some methods. Usually there's no need to test something that can't be broken; for example, it usually makes no sense to test getters and setters, as you probably won't be testing the Java compiler on your own code, and the compiler will already have performed its tests.

In addition to the functional areas you should test, there are some specific areas of Android applications that you should consider. We will look at these in the following sections.

Activity lifecycle events

You should test that your activities handle lifecycle events correctly. If your activity should save its state during the onPause() or onDestroy() events and later be able to restore it in onCreate(Bundle savedInstanceState), you should be able to reproduce and test all these conditions and verify that the state was correctly saved and restored (a sketch of such a test follows at the end of this section).

Configuration-changed events should also be tested, as some of these events cause the current Activity to be recreated. You should test correct handling of the event and check that the newly created Activity preserves the previous state. Configuration changes are triggered even by rotation events, so you should test your application's ability to handle these situations.

Database and filesystem operations

Database and filesystem operations should be tested to ensure that they are handled correctly. These operations should be tested in isolation at the lower system level, at a higher level through ContentProviders, or from the application itself. To test these components in isolation, Android provides some mock objects in the android.test.mock package.

Physical characteristics of the device

Well before delivering your application you should be sure that all of the different devices it can run on are supported, or at least that you detect unsupported situations and take pertinent measures. Among other characteristics of the devices, you may find that you should test:

Network capabilities
Screen densities
Screen resolutions
Screen sizes
Availability of sensors
Keyboard and other input devices
GPS
External storage

In this respect, Android Virtual Devices play an important role, because it is practically impossible to have access to all of the devices with all of the possible combinations of features, but you can configure an AVD for almost every situation.
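As promised above, here is a hedged sketch of a lifecycle test that drives an activity through save and pause by hand using the Instrumentation. MyActivity, R.id.input, and the "input_value" state key are assumptions made for illustration; they stand in for your own activity, view, and an onSaveInstanceState() override that stores the field's value under that key:

```java
import android.os.Bundle;
import android.test.ActivityInstrumentationTestCase2;
import android.widget.EditText;

// A sketch of a lifecycle test: enter a value, force a save/pause cycle,
// and verify the state was written out. MyActivity and the "input_value"
// key are illustrative assumptions, not part of any specific project.
public class MyActivityLifecycleTest
        extends ActivityInstrumentationTestCase2<MyActivity> {

    public MyActivityLifecycleTest() {
        super(MyActivity.class);
    }

    public void testStateIsSaved() {
        final MyActivity activity = getActivity();
        final EditText input = (EditText) activity.findViewById(R.id.input);
        final Bundle state = new Bundle();

        getInstrumentation().runOnMainSync(new Runnable() {
            public void run() {
                // Simulate the user entering a value...
                input.setText("37.2");
                // ...then drive the lifecycle by hand through Instrumentation.
                getInstrumentation().callActivityOnSaveInstanceState(activity, state);
                getInstrumentation().callActivityOnPause(activity);
            }
        });

        // Verify that the activity wrote its state into the Bundle.
        assertTrue(state.containsKey("input_value"));
    }
}
```

A complementary test would pass the saved Bundle back through onCreate() and assert that the field is restored, covering both halves of the save/restore contract.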
However, as mentioned before, leave your final tests for actual devices, where the real users will run the application, to understand its behavior.
How to Create a Lesson in Moodle 2

Packt
24 Jun 2011
7 min read
History Teaching with Moodle 2

Create a History course in Moodle packed with lessons and activities to make learning and teaching History interactive and fun

Approaching the lesson

We plan to introduce our Year 7 History class to the idea of the Domesday Book as a means by which William reinforced his control over the country. William was naturally curious about the country he had just conquered. He was particularly keen to find out how much it was worth. He despatched officials to every village with detailed questions to ask about the land that they worked on and the animals that they farmed with. He also sent soldiers who threatened to kill people who lied. All of the records from these village surveys were collated into the Domesday Book. Many Saxons detested the process, and the name of the book is derived from this attitude of loathing towards something they regarded as intrusive and unfair. William died before the process could be completed.

Clear lesson objectives can be stated at the start of the lesson. Students would be expected to work through each page and answer questions identical to those found in the Quiz module. The lesson gives students the opportunity to return to a page if the required level of understanding has not been achieved. The lesson questions help students to reach an understanding at their own pace.

The short video clips we intend to use will come from the excellent National Archives website. It has links to short sequences of approximately ninety seconds in which actors take on the roles of villagers and commissioners and offer a variety of opinions about the nature and purpose of the survey that they are taking part in.

At the end of the lesson, we want the students to have an understanding of:

The purpose of the Domesday Book
How the information was compiled
A variety of attitudes towards the whole process

Our starting point is to create a flow diagram that captures the routes a student might take through the lesson:

The students will see the set of objectives, a short introduction to the Domesday Book, and a table of contents. They can select the videos in any order. When they have watched each video and answered the questions associated with the content, they will be asked to write longer answers to a series of summative questions. These answers are marked individually by the teacher, who thus gets a good overall idea of how well the students have absorbed the information. The assessment of these questions could easily include our essay outcomes marking scale. The lesson ends when the student has completed all of the answers.

The lesson requires:

A branch table (the table of contents)
Four question pages based upon a common template
One end of branch page
A question page for the longer answers
An end of lesson page

The lesson awards marks for the correct answers to questions on each page, in much the same way as if they were part of a quiz. Since we are only adding one question per page, the scores for these questions are of less significance than a student's answers to the essay questions at the end of the lesson. It is, after all, these summative questions that allow the students to demonstrate their understanding of the content they have been working with. Moodle allows this work to be marked in exactly the same way as if it were an essay. This time it will be in the form of an online essay and will take up its place in the Gradebook.
We are, therefore, not interested in a standard mark for the students' participation in the lesson, and when we set the lesson up this will become apparent through the choices we make.

Setting up a lesson

It is important to have a clear idea of the lesson structure before starting to create the lesson. We have used paper and pen to create a flow diagram. We know which images, videos, and text are needed on each page, and we have a clear idea of the formative and summative questions that will enable us to challenge our students and assess how well they have understood the significance of the Domesday Book. We are now in a position to create the lesson:

Enter the Year 7 History course and turn on editing.

In Topic 1, select Add an Activity and click Lesson.

In the Name section, enter an unambiguous name for the lesson, as this is the text that students will click on to enter the lesson. Enter the values as shown in the following screenshot:

In the General section, we do not want to impose a time limit on the lesson. We do need to state how many options there are likely to be on each question page. For multiple choice questions, there are usually four options.

In the Grade section, we want the essay that they compose at the end of the lesson to be marked in the same way that other essays have been marked.

In the Grade options, our preference is to avoid using the lesson questions as an assessment activity. We want it to be a practice lesson where students can work through the activities without needing to earn a score, so we have turned off scoring. The students' final essay submission will be marked in line with our marking policy. Students can retake the lesson as many times as they want.

In the Flow control section, we have clicked the Show advanced button to see all of the options available. We want students to be able to navigate the pages to check answers and go back to review answers if necessary. They can take the lesson as often as they want, as we intend it to be used for revision purposes for a timed essay or the summer examination. We have ignored the opportunity to add features such as menus and progress bars, as we will be creating our own navigation system. This section also concerns the look and feel of the pages if set to a slide show, an option we are not planning to use.

We are planning to create a web link on each page rather than have students download files, so we will not be using the Popup to file or web page option. If you are concerned about the stability of your Internet connection for the web links to the videos you plan to show, there is an alternative: download the files to your computer and convert them to .flv files. They can then be uploaded to the file picker in the usual way, and a link can be created to each one using the Choose a file button shown here. Moodle's video player would play the videos, and you would not be reliant on an unstable Internet connection to see the results.

The Dependent on section allows further restrictions to be imposed that are not appropriate for this lesson. We do, however, want to mark the essay that will be submitted in accordance with the custom marking scheme developed earlier in the course, so the box in the Outcomes section must be checked.

Clicking the Save and return to course button ensures that the newly created lesson, The Domesday Book, awaits in Topic 1.