
How-To Tutorials - Server-Side Web Development

406 Articles

Creating a Basic Vaadin Project

Packt
13 Dec 2011
10 min read
Understanding Vaadin

In order to understand Vaadin, we should first understand what its goal is regarding the development of web applications.

Vaadin's philosophy

Classical HTML-over-HTTP application frameworks are coupled to the inherent request/response nature of the HTTP protocol. This simple process translates as follows:

- The client makes a request to access a URL.
- The code located on the server parses request parameters (optional).
- The server writes the response stream accordingly.
- The response is sent to the client.

All major frameworks (and most minor ones, by the way) do not question this model: Struts, Spring MVC, Ruby on Rails, and others completely adhere to this approach and are built upon this request/response way of looking at things. It is no mystery that HTML/HTTP application developers tend to comprehend applications through a page-flow filter. On the contrary, traditional client-server application developers think in components and data binding, because that is the most natural way for them to design applications (for example, a select box of countries or a name text field).

A few recent web frameworks, such as JSF, have tried to bridge the gap between components and page flow, with limited success: the developer handles components, but they are displayed on a page, not a window, and he/she still has to manage the flow from one page to another. The Play Framework (http://www.playframework.org/) takes a radical stance on the page-flow subject, stating that the Servlet API is a useless abstraction on top of the request/response model, and sticks to that model even more closely.

Vaadin's philosophy is two-fold:

- It lets developers design applications through components and data bindings.
- It isolates developers as much as possible from the request/response model, so that they think in screens rather than in page flows.

This philosophy lets developers design their applications the way they did before the web revolution. In fact, fat-client developers can learn Vaadin in a few hours and start creating applications in no time. The downside is that developers who learned their craft with the thin client, and who have no prior experience of fat-client development, will have a harder time understanding Vaadin, as they are inclined to think in page flow. However, they will be more productive in the long run.

Vaadin's architecture

In order to achieve its goal, Vaadin uses an original architecture. The first fact of interest is that it comprises both a server side and a client side:

- The client side manages rendering and user interactions in the browser.
- The server side handles events coming from the client and sends changes made to the user interface back to the client.

Communication between both tiers is done over the HTTP protocol. We will have a look at each of these tiers.

Client-server communication

Messages in Vaadin use three layers: HTTP, JSON, and UIDL. The former two are completely unrelated to the Vaadin framework and are supported by independent third parties; UIDL is internal.

HTTP protocol

Using the HTTP protocol with Vaadin has two main advantages:

- There is no need to install anything on the client, as browsers handle HTTP (and HTTPS, for that matter) natively.
- Firewalls that let HTTP traffic pass (a likely occurrence) will let Vaadin applications function normally.

JSON message format

Messages between the Vaadin client and server use JavaScript Object Notation (JSON).
JSON is an alternative to XML, with several differences. First of all, the JSON syntax is lighter than the XML syntax: XML needs both a start and an end tag, whereas JSON marks up a value with a name and a pair of braces. For example, the following two snippets convey the same information, but the XML version is noticeably longer than the JSON one. For a more in-depth comparison of JSON and XML, refer to http://json.org/xml.html.

    <person>
      <firstName>John</firstName>
      <lastName>Doe</lastName>
    </person>

    {"person": {"firstName": "John", "lastName": "Doe"}}

The difference varies from message to message, but on average it is about 40%. It is a real asset only for big messages, and if you add server-side GZIP compression, the size difference starts to disappear. The reduced size is no disadvantage, though. Finally, XML designers go to great lengths to differentiate between child tags and attributes, the former being more readable to humans and the latter to machines. JSON message design is much simpler, as JSON has no attributes.

UIDL "schema"

The last layer added on top of JSON and HTTP is the User Interface Definition Language (UIDL). UIDL describes complex user interfaces with JSON syntax. The good news about these technologies is that Vaadin developers won't be exposed to them.

The client part

The client tier is a very important tier in web applications, as it is the one with which the end user directly interacts. For this, Vaadin uses the excellent Google Web Toolkit (GWT) framework. GWT development involves the following mandatory steps:

- The code is developed in Java.
- The GWT compiler then transforms the Java code into JavaScript.
- Finally, the generated JavaScript is bundled with the default HTML and CSS files (which can be modified) as a web application.

Although novel and unique, this approach provides interesting key features that catch the interest of end users, developers, and system administrators alike:

- Disconnected capability, in conjunction with HTML5 client-side data stores
- Displaying applications on small form factors, such as handheld devices
- Development in the Java language only
- Excellent scalability, as most of the code is executed on the client side, thus freeing the server side from additional computation

On the other hand, there is no such thing as a free lunch. There are definite disadvantages to using GWT, such as the following:

- The coding/compilation/deployment process adds a degree of complexity to standard Java web application development.
- Although a Google GWT plugin is available for Eclipse and NetBeans, IDEs do not provide standard GWT development support. Using the GWT development mode, directly or through such a plugin, is really necessary, because without it developing is much slower and debugging almost impossible. For more information about GWT dev mode, please refer to http://code.google.com/intl/en/webtoolkit/doc/latest/DevGuideCompilingAndDebugging.html.
- There is a consensus in the community that GWT has a steeper learning curve than most classic web application frameworks, although the same can be said for others, such as JSF.
- If custom JavaScript is necessary, you have to bind it to Java with the help of a layer named JavaScript Native Interface (JSNI), which is both counter-intuitive and complex.
- With pure GWT, developers have to write the server-side code themselves (if there is any).
Finally, if everything is done on the client side, it poses a great security risk: even with obfuscated code, the business logic is still completely open to inspection by attackers.

Vaadin uses GWT features extensively and tries to downplay its disadvantages as much as possible. This is all possible because of the Vaadin server part.

The server part

Vaadin's server-side code plays a crucial role in the framework. The biggest difference between Vaadin and GWT is that developers do not code the client side; instead, they code the server side that generates the former. In particular, in GWT applications the browser loads static resources (the HTML and associated JavaScript), whereas in Vaadin the browser accesses the servlet that serves those same resources from a JAR (or the WEB-INF folder). The good thing is that this completely shields the developer from the client code, so he/she cannot make unwanted changes. It may also be seen as a disadvantage, as it prevents the developer from changing the generated JavaScript before deployment. It is possible to add custom JavaScript, although it is rarely necessary.

In Vaadin, you code only the server part!

There are two important tradeoffs that Vaadin makes in order to achieve this:

- First, as opposed to GWT, the user-interface-related code runs on the server, meaning Vaadin applications are not as scalable as pure GWT ones. This should not be a problem in most applications, but if it is, you should probably leave Vaadin for the less intensive parts of the application and stick to GWT, or switch to an entirely new technology. While Vaadin applications are not as scalable as applications architected around a pure JavaScript frontend and a SOA backend, a study found that a single Amazon EC2 instance could handle more than 10,000 concurrent users per minute, which is much more than your average application needs. The complete results can be found at http://vaadin.com/blog/-/blogs/vaadin-scalabilitystudy-quicktickets.
- Second, each user interaction creates an event from the browser to the server. This can lead to changes in the user interface's model in memory, which in turn propagates modifications to the JavaScript UI on the client. The consequence is that Vaadin applications simply cannot run while disconnected from the server. If your requirements include an offline mode, then forget Vaadin.

Terminal and adapter

As in any loosely coupled architecture, not all Vaadin framework server classes converse with the client side. In fact, this is the responsibility of one simple interface: com.vaadin.terminal.Terminal. In turn, this interface is used by a part of the framework aptly named the Terminal Adapter, for it is designed around the Gang of Four Adapter pattern (http://www.vincehuston.org/dp/adapter.html). This design allows the client and server code to be completely independent of each other, so that one can be changed without changing the other. Another benefit of the Terminal Adapter is that you could have, for example, other implementations for things such as Swing applications. Yet the only terminal implementation provided by the current Vaadin release is the web browser, namely com.vaadin.terminal.gwt.server.WebBrowser. However, this does not mean that it will always be the case in the future. If you are interested, browse the Vaadin add-ons directory regularly to check for other implementations, or, as an alternative, create your own!
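To make the earlier point that you code only the server part concrete, the following is a minimal sketch of a Vaadin 6-style application class (the version that was current when this article was written). The class name and label texts are illustrative, not taken from the article; the point is simply that the components and the click listener all live in server-side Java, and Vaadin handles rendering them in the browser.

    import com.vaadin.Application;
    import com.vaadin.ui.Button;
    import com.vaadin.ui.Label;
    import com.vaadin.ui.Window;

    // Illustrative only: a "Hello world" style Vaadin 6 application.
    public class HelloVaadinApplication extends Application {
        @Override
        public void init() {
            final Window main = new Window("Hello Vaadin");
            main.addComponent(new Label("All of this code runs on the server."));
            main.addComponent(new Button("Click me", new Button.ClickListener() {
                public void buttonClick(Button.ClickEvent event) {
                    // The click arrives as a server-side event; the resulting
                    // UI change is sent back to the browser as a UIDL/JSON response.
                    main.showNotification("Handled on the server");
                }
            }));
            setMainWindow(main);
        }
    }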
Client-server synchronization

The biggest challenge when representing the same model on two heterogeneous tiers is synchronization between them. An update on one tier should be reflected on the other, or at least fail gracefully if synchronization is not possible (an unlikely occurrence considering modern-day infrastructure). Vaadin's answer to this problem is a synchronization key generated by the server and passed to the client on each request. The next request must send it back to the server, or else the server will restart the current session's application. This can be the cause of the infamous and sometimes frustrating "Out of Sync" error, so keep that in mind.

Adding Worksheets and Resources with Moodle

Packt
27 Oct 2009
12 min read
We're teaching the topic of Rivers and Flooding, so to start with we'll need to introduce our class to some basic facts about rivers and how they work. We aren't going to generate any new material yet; we're just going to upload to Moodle what we have already produced in previous years.

Putting a worksheet on Moodle

The way Moodle works is that we must first upload our worksheet into the course file storage area. Then, in that central section of our course page, we make a link to the worksheet from some appropriately chosen words. Our students click on these words to get to the worksheet. We've got an introductory factsheet (done in Word) about the River Thames. Let's get it into Moodle.

Time for action - uploading a factsheet on to Moodle

We need to get the worksheet uploaded into Moodle. To get this done, we have to follow a few simple steps:

- Go to your course page and click on the Turn editing on button. Don't worry about all of the new symbols (icons) that appear.
- In the section where you want the worksheet to be displayed, look for the two drop-down boxes.
- Click on the Add a resource box (I'll go through all its options when we have a recap, later).
- Select Link to a file or web site.
- In Name, type the text that you want the students to click on, and in Summary (if you want) add a short description.
- Once you're done with the above steps, click on Choose or upload a file. This takes you to the course files storage area.
- Click on Make a folder, and in the dialog box that is displayed, choose a suitable name for the folder all your worksheets will be stored in (we'll use Worksheets). Click on Create.
- Click on the folder that you just created (it will be empty except for Parent Folder, which takes you back to the main course files).
- Click on Upload a file. You'll be prompted to browse your computer's hard drive for the worksheet.
- Find the worksheet, select it with your cursor, and click Open.
- Click Upload this file. Once the file has been uploaded, it will appear in the folder.

What just happened?

We just uploaded our first ever worksheet to Moodle. It's now in the course files. Next, we need to make a link on the page that students can click on to get to that worksheet. I know what you're thinking! Thirteen steps, and there's still no sign of our River Thames worksheet on the course page in Moodle. Is it going to be this long-winded every time? Don't worry! There are only two (at worst three) steps left. And although it seems to be a lot of effort the first time, it gets much quicker as we move on. We are also trying to be organized from the start by putting our worksheets neatly into a folder, so we took a couple of extra steps that we won't have to do next time; the folder will already be there for us. Of course, you can just click on Upload a file and get your worksheets straight into the course files without any sort of order, and they will display for your students just as well. But when you have a lot of worksheets loaded, it will become harder and harder to locate them unless you have a system.

Time for action - displaying our factsheet on our course page

Carrying on from where we left off, we now need to create the link that students will click on to get the course started: Click on the word Choose to the right of your worksheet. (We are choosing to put this on Moodle.)
The River Thames worksheet now shows in the Location box, under Link to a file or web site. We are almost there! Scroll down and make sure that you have selected the New window option in the Window box. At the bottom of the screen, click on Save and return to course. Done!

The option Search for web page would take you to Google or another search engine to find a web site. You could put that web site into the Location box instead, and it would make a clickable link for your students to follow.

What just happened?

Congratulations! You've now made a link to the factsheet about the River Thames that will get our Rivers and Flooding course started! The final step above takes us back to the course page, where we'll see the words that we wrote in the Name box. They'll be in blue with a line underneath. This tells us it's a clickable link that will take us to the factsheet. If you can do that once, you can do it many times.

Have a go hero - putting a slideshow onto Moodle

It's important to go through the steps again, pretty quickly, so that you become familiar with them and are able to speed the process up. So why not take one of your slideshows (maybe done in PowerPoint) and upload that to Moodle? Start by creating a folder called Slideshows, so that in future it will be available for any slideshows that you upload. Or, if you're too tired, just upload another sheet into our Worksheets folder and display that.

Putting a week's worth of slideshows into Moodle

Now let's suppose that we have already prepared a week's worth of slideshows. It could equally be a month's worth of worksheets, or a year's worth of exam papers. Basically, what we're going to do is upload several items all at once. This is very useful, because once you get used to uploading and displaying worksheets, you will very quickly start thinking about how tedious it would be to put them on Moodle one at a time, especially if you are studying ten major world rivers and have to go through all of those steps ten times. Well, you don't! Let's use my River Processes slideshows as our example. I have them saved in a folder on My Computer (as opposed to being shoved at random in a drawer, obviously!). Under normal circumstances, Moodle won't let you upload whole folders just like that. You have to either compress or zip them first (that basically means squeezing them up a bit, so they slide into cyberspace more smoothly). We first need to leave Moodle for a while and go to our own computer. I'm using Windows; for Macs, it will be slightly different.

Time for action - getting a whole folder of work into Moodle in one go

To view the slideshows, we need to upload the folder containing them from the hard drive of our computer into Moodle:

- Find the folder that you want to upload, right-click on it, and select Compressed (zipped) Folder within the Send To option. You'll get another folder with the same name, but in ZIP format.
- Go to your Moodle course page, and in the Administration box, click Files. We're in the course files storage area; this is another way in, if you ever need one! You can upload anything straight into here, and then provide a link to a file or web site.
- As we have done before, click on Upload and upload the zipped folder (it ends in .zip).
- Now click on Unzip, which is displayed to the right of your folder name, and the folder will be restored to its normal size.

What just happened?
We put a bunch of slideshows about how rivers work into a folder on our computer. We then zipped the folder to make it slide into Moodle and, once it was uploaded, we unzipped it to get it back to normal. If you want to be organized, select the checkbox displayed to the left of the zipped folder and select delete completely. We don't need the zipped folder now, as we have got the original folder back.

We now have two choices. Using the Link to a file or web site option in the Add a resource block, we can display each slideshow, in an orderly manner, in the list. We did this with our Thames factsheet, so we know how to do this. Alternatively, we can simply display the folder and let the students open it to get to the slideshows. We're going to opt for the second choice. Why? Bearing in mind that appearances are vital, it would look much neater on our course page if we had a dinky little briefcase icon. The student can click on the briefcase icon to see the list of slideshows, rather than scrolling down a long list on the page. Let us see how this is done.

Time for action - displaying a whole folder on Moodle

Let us upload the entire folder, which contains the related slideshows, onto Moodle. This will require us to perform only four steps:

- With editing turned on, click on Add a resource and choose Display a directory.
- In the Name field, type something meaningful for the students to click on and add a description in the Summary field, if you wish.
- Click on Display a directory and find the one that you want; for us, RiverProcesses.
- Scroll down, and click on Save and return to course.

What just happened?

We made a link to a week's worth of slideshows on our course page, instead of displaying them one at a time. If we look at the outcome, instead of the icon of a slideshow, such as the PowerPoint icon, we get a folder icon. When the text next to it is clicked, the folder opens, and all of the slideshows inside can be viewed. It is much easier on the eye, when you go to the course page, than going through a long list of stuff.

Making a 'click here' type link to the River Thames web site

Let's learn how to create a link that will lead us to the River Thames web site, or in fact to any web site. We're investigating the Thames at the moment, so this will be really helpful. Just imagine how much simpler it would be for our students to get to a site in one click, rather than type the address by hand, spell it wrong, and have it not work. The method we will learn now is easy. In fact, it's so easy that you could do it yourself with only one hint from me.

Have a go hero - linking to a web site

Do you recollect that we uploaded our worksheet and used Link to a file or web site? We linked it to a file (our worksheet). Here, you just need to link to a web site, and everything else is just the same. When you get to the Link to a file or web site box, instead of clicking Choose or upload a file..., just type in, or copy and paste, the web site that you want to link to (making sure you include only one http://). Remember that we saw earlier that if you click on Search for web page..., it will take you to Google or some other search engine to find a web site that you'd like to link to. That's it! Try it! Go back to your course page, click on the words that you specified as the Name for the web page link, and check whether it works.
It should open the web page in a new window, so that once finished, our students can click on the X to close the site and will still have Moodle running in the background.

Recap - where do we stand now?

We have covered a lot so far. We have learned to:

- Upload and display individual worksheets (as we did with the River Thames factsheet)
- Upload and display whole folders of worksheets (as we did with the River Processes slideshows folder)
- Make a 'click here' type link to any web site that we want, so that our students need only click on this link to get to that web site

We're now going to take a break from filling up our course for a while and step to one side. Our first venture into Moodle's features was the Link to a file or web site option, but there are many more yet to be investigated. Let's have a closer look at those Add a resource... options, so that we know where we are heading. The table below shows all of the Add a resource... options: what they are, which ones we need, and what we can safely ignore. You might recognize one or two already. We shall meet the others in a moment.

CodeIgniter MVC – The Power of Simplicity!

Packt
26 Nov 2013
6 min read
"Simplicity Wins Big!"

Back in the 80s, many contracts required the use of the Ada programming language. Ada was complex and hard to maintain compared to C/C++, and today it has faded much like Pascal. C/C++ is the simplicity winner in the real-time systems arena.

In telecom, there were two standards for network device management protocols in the 90s: CMIP (Common Management Information Protocol) and SNMP (Simple Network Management Protocol). Initially, all telecom requirement papers demanded CMIP support. After several years, research found that developing and maintaining the same system took roughly ten times the effort with CMIP compared to SNMP. SNMP is the simplicity winner in the network management arena!

In VoIP, or media over IP, H.323 and SIP (Session Initiation Protocol) were competing protocols in the early 2000s. H.323 encoded its messages in a cryptic binary format; SIP makes everything textual and easy to understand in a text editor. Today almost all endpoint devices are powered by SIP, while H.323 has become a niche protocol for the VoIP backbone. SIP is the simplicity winner in the VoIP arena!

Back in 2010, I was looking for a good PHP platform on which to develop the web application for my startup Logodial's first product, Zappix (http://zappix.com). I got a recommendation to use Drupal. I tried the platform and found it very heavy to manipulate and bend to the exact user interaction flow and experience I had in mind. Many times I had to compromise, and the overhead of the platform was horrible: make a Hello World app and tons of irrelevant code gets pulled into the project; try to write free-form JavaScript and you find yourself struggling against the platform, which gets in the way of the creativity of client-side JavaScript and its add-ons. I decided to look for a better platform for my needs.

Later on, I heard about Zend Framework, an MVC (Model-View-Controller) framework. I tried to work with it, as it is MVC-based with a lot of OOP usage, but I found it heavy. The documentation seems great at first sight, but the more I used it, looking for vivid examples and explanations, the more I found myself in endless circular loops of links; it lacked clear explanations and vivid examples. The feeling was that for every matchbox-moving task I needed a semi-trailer of declarations and calls to get it done. Still, it was MVC, which I greatly liked.

Keeping on with my search, I was looking for a simple but powerful MVC framework for PHP, my favorite server-side language. One day in early 2011, I got a note from a friend that there was a light and cool platform named CodeIgniter (CI in brief). I checked the documentation link, http://ellislab.com/codeigniter/user-guide/, and was amazed by the very clean, simple, well-organized, and well-explained browsing experience. Examples? Yes, lots of clear examples, and a great community. It was so great and simple. I felt the platform designers had made every effort to write the simplest and most vivid code, reusable and cleanly OOP from the infrastructure down to the last function. I made a trial web app, loading helpers and libraries and using them, and greatly enjoyed the experience. Fast forward: today I see a mature CodeIgniter as a Lego-like playground that I know well. I have written tons of models, helpers, libraries, controllers, and views.
CodeIgniter's simplicity lets me do things fast and keep them clear, well maintained, and expandable. Over time I have gathered the most useful helpers and libraries, Ajax server- and browser-side solutions for reuse, and good links to useful add-ons such as the free grid plugin for CI, Grocery CRUD (http://www.grocerycrud.com/), which keeps improving day by day. Today I see CodeIgniter as a mature, scalable (see the AT&T and Sprint call-center web apps based on CI) champion of reusability and simplicity.

The high-level architecture of the CodeIgniter MVC has the controller(s) as the hub of the application session.

The main CI controller use cases are:

- Handling requests from the web browser as HTTP URI calls, with submitted parameters (for example, submitting a login with credentials) or without parameters (for example, home page navigation).
- Handling asynchronous Ajax requests from the web client, mostly as JSON HTTP POST requests and responses.
- Serving CRON job requests that create HTTP URI requests, calling controller methods silently from the CRON PHP module, much like browser navigation.

The main features of CI views:

- Rendered by a controller, optionally with a set of parameters (scalars, arrays, objects).
- Full open access to all the helpers, libraries, and models that their rendering controller has.
- The freedom to integrate any JavaScript or third-party web client-side plugins.

The main features of CI helpers:

- Flat sets of functions, protected from duplication risks.
- Can be loaded for use by any controller and accessed by any rendered view.
- Can access any CI resource or library via the &get_instance() service.

The main features of CI libraries:

- OOP classes that can extend other third-party classes (for example, see the Google Map wrapper example in the new book).
- Can access any of the CI resources of other libraries and built-in services via &get_instance().
- Can be used by the CI project controllers and all their rendered views.

The main features of CI models:

- Similar to libraries, but with access to the default database (which can be expanded to multiple databases) and to any other CI resource via &get_instance().
- OOP classes that can extend other third-party classes (for example, see the Google Map wrapper example in the new book).
- Can access any of the CI resources of other libraries and built-in services via &get_instance().

CodeIgniter keeps increasing in popularity because it has a simple yet high-quality OOP core that enables creativity, reusability, and clear naming conventions, and because it is easy to extend (a user class extends a CI class) while more and more third-party plugins (packages of views and/or models and/or libraries and/or helpers) become available. I found CodeIgniter flexible, a great enabler of reusability, light in infrastructure, and powered by an active global community that fuels developer creativity. For day-to-day work, CI offers code clarity, high performance, and a minimal, controllable footprint (you decide which helpers, libraries, and models to load for each controller). Above all, CI is blessed with a very fast learning curve for PHP developers and with many blogs and community sites for sharing knowledge and raising and resolving issues. CodeIgniter is the simplicity winner I have found for server-side MVC web apps. (A minimal controller/model sketch follows the summary below.)

Summary

This article introduced the CodeIgniter framework as a starting point for building web-based applications.
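To tie the controller-as-hub flow above together, here is a minimal sketch in CodeIgniter 2.x style. The file, class, and method names (Welcome, Greeting_model, welcome_view) are illustrative, not taken from the article.

    <?php
    // File: application/controllers/welcome.php  (illustrative name)
    class Welcome extends CI_Controller {
        public function index()
        {
            // A controller can load helpers, libraries, and models...
            $this->load->helper('url');
            $this->load->model('greeting_model');

            // ...and then render a view, passing it a set of parameters.
            // application/views/welcome_view.php would simply echo $message.
            $data['message'] = $this->greeting_model->get_message();
            $this->load->view('welcome_view', $data);
        }
    }

    // File: application/models/greeting_model.php  (illustrative name)
    class Greeting_model extends CI_Model {
        public function get_message()
        {
            // A model can reach the default database via $this->db;
            // a static string keeps this sketch short.
            return 'Hello from a CodeIgniter model';
        }
    }

Loading the view hands the $data array to the view file, where the 'message' key becomes a $message variable that the view can echo.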

Manage Your Money with Simple Invoices

Packt
13 May 2010
6 min read
As a freelancer I have one primitive motive: I want to do work and get paid. Getting paid means I need to generate invoices and keep track of them. I've tried to manage my invoices via spreadsheets and documents, but keeping track of my payments in a series of disconnected files is a fragile and inefficient process. Simple Invoices provides a solution to this.

Simple Invoices is a relatively young project, and working with it requires that you're willing to do some manual configuration and tolerate the occasional problem. To install and work with the application, you need to be familiar with running a web server on OS X, Windows, or Linux. The next section, Web Server Required, lists some out-of-the-box server packages that let you run a server environment on your personal computer; it's point-and-click easy and perfect for an individual user. Not up for running a web server, but still need a reliable invoicing application? No problem: visit www.simpleinvoices.com for a list of hosted solutions. Let's get started.

Web Server Required

Simple Invoices is a web application that requires Apache, PHP, and MySQL to function. Even if you're not a system administrator, you can still run a web server on your computer, regardless of your operating system. Windows users can get the required software by installing WAMP from www.wampserver.com. OS X users can install MAMP from www.mamp.info. Linux users can install Apache, MySQL, and PHP5 using their distribution's software repositories. The database administration tool phpMyAdmin makes managing the MySQL database intuitive. Both the WAMP and MAMP installers contain phpMyAdmin, and we'll use it to set up our databases. Take a moment to set up your web server before continuing with the Simple Invoices installation.

Install Simple Invoices

Our first step will be to prepare the MySQL database:

- Open a web browser and navigate to http://localhost/phpmyadmin (replace localhost with the actual server address). A login screen will prompt you for a user name and password. Enter the root login information for your MySQL install. MAMP users might try root for both the user name and password; WAMP users might try root with no password. If you plan on keeping your WAMP or MAMP server installed, setting a new root password for your MySQL database is a good idea, even if you do not allow external connections to your server.
- After you log in to phpMyAdmin, you will see a list of databases in the left sidebar; the main content window displays a set of tabs, including Databases, SQL, and Status. Let's create the database.
- Click on the Privileges tab to display a list of all users and associated access permissions. Find the Add a New User link and click on it.
- On the Add New User page, complete the following fields:
  - User Name: enter simpleinvoices
  - Host: select Local
  - Password: specify a password for the user, then retype it in the field provided
  - Database for user: select the Create database with same name and grant all privileges option
- Scroll to the bottom of the page and click the Go button.

This procedure creates the database user and the database at the same time. If you wanted to use a database name that was different from the user name, you could have selected None for the Database for user option and added the database manually via the Databases tab in phpMyAdmin.
If you prefer to work with MySQL directly, the SQL for the steps we just ran is as follows (the *** in the first line is the password):

    CREATE USER 'simpleinvoices'@'localhost' IDENTIFIED BY '***';
    GRANT USAGE ON *.* TO 'simpleinvoices'@'localhost' IDENTIFIED BY '***'
        WITH MAX_QUERIES_PER_HOUR 0 MAX_CONNECTIONS_PER_HOUR 0
             MAX_UPDATES_PER_HOUR 0 MAX_USER_CONNECTIONS 0;
    CREATE DATABASE IF NOT EXISTS `simpleinvoices`;
    GRANT ALL PRIVILEGES ON `simpleinvoices`.* TO 'simpleinvoices'@'localhost';

Now that the database is set up, let's download the stable version of Simple Invoices by visiting www.simpleinvoices.org and following the Download link. The versions are identified by year and release number; at the time of this writing, the stable version is 2010.1. Unzip the Simple Invoices download file into a subdirectory on your web server. Because I like to install a lot of software, I like to keep the application name in my directory structure, so my example installation goes into a directory named simpleinvoices. That makes my installation available at http://localhost/simpleinvoices. Pick a directory path that makes sense for you.

Not sure where the root of your web server resides? Here are the default locations for the various server environments:

- WAMP: C:\wamp\www
- MAMP: /Applications/MAMP/htdocs
- Linux: /var/www

Linux users will need to set the ownership of the tmp directory to the web user and make the tmp directory writable. For an Ubuntu system, the appropriate commands are:

    chown -R www-data tmp
    chmod -R 775 tmp

The command syntax assumes we're working from the Simple Invoices installation directory on the web server. The web user on Ubuntu and other Debian-based systems is www-data. The -R option in both commands applies the change to all sub-directories and files. With the chmod command, you are granting write access to the web user. If you have problems, or feel like being less secure, you can reduce this step to one command: chmod -R 777 tmp.

We're almost ready to open the Simple Invoices installer, but before we go to the web browser, we need to define the database connection in the config/config.ini file. At a minimum, we need to specify database.params.username and database.params.password with the values we used to set up the database (a sketch of these settings appears at the end of this section). If you skip this step and try to open Simple Invoices in your web browser, you will receive an error message indicating that your config.ini settings are incorrect.

Now we're ready to start Simple Invoices and step through the graphical installer. Open a web browser and navigate to your installation (for example, http://localhost/simpleinvoices):

- Step 1: Install Database displays in the browser. Review the database connection information and click the Install Database button.
- Step 2: Import essential data displays. Click the Install Essential Data button to advance the installation.
- Step 3: Import sample data displays. We can choose to import sample data or start using the application. The sample data contains a few example billers, customers, and invoices. We're going to set all of that up from scratch, so I recommend you click the Start using Simple Invoices button.

At this point the Simple Invoices dashboard displays, with a yellow note that instructs us to configure a biller, a customer, and a product before we create our first invoice.
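The original article shows the relevant config.ini settings in a screenshot that is not reproduced here. The sketch below illustrates what those lines typically look like; only the database.params.username and database.params.password keys are named in the text above, so treat the host and dbname keys (and all values) as assumptions to adapt to your own setup.

    ; config/config.ini -- database section (illustrative values)
    database.params.host = localhost
    database.params.username = simpleinvoices
    database.params.password = your-password-here
    database.params.dbname = simpleinvoices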
You might notice that the default access to Simple Invoices is not protected by a username and password. We can force authentication by adding a user and password via the People > Users screen. Then set the authentication.enabled field in config.ini equal to true.
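For the authentication step just described, the change is a single setting. The key name comes from the text above; the comment is only a reminder and is not taken from the article.

    ; config/config.ini -- turn on the login screen after creating a user
    ; under People > Users
    authentication.enabled = true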

Enabling Apache Axis2 clustering

Packt
25 Feb 2011
6 min read
Clustering for high availability and scalability is one of the main requirements of any enterprise deployment, and this is also true for Apache Axis2. High availability refers to the ability to serve client requests while tolerating failures. Scalability is the ability to serve a large number of clients sending a large number of requests without any degradation in performance.

Many large-scale enterprises are adopting web services as the de facto middleware standard. These enterprises have to process millions of transactions per day, or even more. A large number of clients, both human and machine, connect simultaneously to these systems and initiate transactions. Therefore, the servers hosting the web services for these enterprises have to support that level of performance and concurrency. In addition, almost all the transactions happening in such enterprise deployments are critical to the business of the organization, which imposes another requirement on production-ready web services servers: very low downtime.

It is impossible to support that level of scalability and high availability from a single server, no matter how powerful the server hardware or how efficient the server software is. Web services clustering is needed to solve this. It allows you to deploy and manage several instances of identical web services across multiple web services servers running on different machines. Client requests are then distributed among these machines using a suitable load balancing system to achieve the required level of availability and scalability.

Setting up a simple Axis2 cluster

Enabling Axis2 clustering is a simple task. Let us look at setting up a simple two-node cluster:

- Extract the Axis2 distribution into two different directories and change the HTTP and HTTPS ports in the respective axis2.xml files.
- Locate the "Clustering" element in the axis2.xml files and set its enable attribute to true (a minimal sketch of this element follows this section).
- Start the two Axis2 instances using Simple Axis Server. You should see some messages indicating that clustering has been enabled.

That is it! Wasn't that extremely simple? In order to verify that state replication is working, we can deploy a stateful web service on both instances. This web service should set a value in the ConfigurationContext in one operation and try to retrieve that value in another operation. We can call the set-value operation on one node, and then call the retrieve operation on the other node. The value set and the value retrieved should be equal. Next, we will look at the clustering configuration language in detail.

Writing a highly available clusterable web service

In general, you do not have to do anything extra to make your web service clusterable; any regular web service is clusterable. In the case of stateful web services, you need to store the Java-serializable, replicable properties in the Axis2 ConfigurationContext, ServiceGroupContext, or ServiceContext. Please note that stateful variables you maintain elsewhere will not be replicated. If you have properly configured Axis2 clustering for state replication, the Axis2 infrastructure will replicate these properties for you. In the next section, you will look at the details of configuring a cluster for state replication.
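As a rough illustration of the second step above, the clustering element in axis2.xml looks something like the following sketch. The class value shown is the Tribes-based clustering agent bundled with Axis2 around the time this was written, but the element's exact contents and child parameters differ between Axis2 versions, so check the file shipped with your own distribution rather than copying this verbatim.

    <!-- axis2.xml (both nodes): the clustering element ships disabled by
         default; child parameters are omitted here for brevity -->
    <clustering class="org.apache.axis2.clustering.tribes.TribesClusteringAgent"
                enable="true">
        <!-- keep the default parameters from your distribution's axis2.xml -->
    </clustering>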
Let us look at a simple stateful Axis2 web service deployed in the soapsession scope:

    public class ClusterableService {
        private static final String VALUE = "value";

        public void setValue(String value) {
            ServiceContext serviceContext =
                MessageContext.getCurrentMessageContext().getServiceContext();
            serviceContext.setProperty(VALUE, value);
        }

        public String getValue() {
            ServiceContext serviceContext =
                MessageContext.getCurrentMessageContext().getServiceContext();
            return (String) serviceContext.getProperty(VALUE);
        }
    }

You can deploy this service on two Axis2 nodes in a cluster. You can write a client that calls the setValue operation on the first node and then calls the getValue operation on the second node. You will see that the value you set on the first node can be retrieved from the second node. What happens is that when you call the setValue operation on the first node, the value is set in the respective ServiceContext and replicated to the second node. Therefore, when you call getValue on the second node, the replicated value has already been set in the respective ServiceContext. As you may have already noticed, you do not have to do anything additional to make a web service clusterable; Axis2 does the state replication transparently. However, if you require control over state replication, Axis2 provides that option as well. Let us rewrite the same web service, this time taking control of the state replication:

    public class ClusterableService {
        private static final String VALUE = "value";

        public void setValue(String value) {
            ServiceContext serviceContext =
                MessageContext.getCurrentMessageContext().getServiceContext();
            serviceContext.setProperty(VALUE, value);
            Replicator.replicate(serviceContext);
        }

        public String getValue() {
            ServiceContext serviceContext =
                MessageContext.getCurrentMessageContext().getServiceContext();
            return (String) serviceContext.getProperty(VALUE);
        }
    }

Replicator.replicate() will immediately replicate any property changes in the provided Axis2 context. So, how does this setup increase availability? Say you sent a setValue request to node 1, and node 1 failed soon after replicating that value to the cluster. Node 2 will still have the originally set value, so the web service clients can continue unhindered.

Stateless Axis2 web services

Stateless Axis2 web services give the best performance, as no state replication is necessary for such services. These services can still be deployed on a load-balancer-fronted Axis2 cluster to achieve horizontal scalability. Again, no code change or special coding is necessary to deploy such web services on a cluster. Stateless web services may be deployed in a cluster either to achieve failover behavior or scalability.

Setting up a failover cluster

A failover cluster is generally fronted by a load balancer, with one or more nodes designated as primary nodes and some others designated as backup nodes. Such a cluster can be set up with or without high availability. If all the state is replicated from the primaries to the backups, then when a failure occurs the clients can continue without a hitch, which ensures high availability. However, this state replication has its overhead. If you are deploying only stateless web services, you can run a setup without any state replication. In a pure failover cluster (that is, without any state replication), if the primary fails, the load balancer will route all subsequent requests to the backup node, but some state may be lost, so the clients will have to handle some degree of that failure.
The load balancer can be configured in such a way that all requests are generally routed to the primary node, with a failover node provided in case the primary fails.

Increasing horizontal scalability

To achieve horizontal scalability, an Axis2 cluster is fronted by a load balancer (LB), which spreads the load across the cluster according to some load balancing algorithm. The round-robin algorithm is one popular and simple choice, and it works well when all hardware and software on the nodes are identical. Generally, a horizontally scalable cluster will maintain its response time and will not degrade in performance under increasing load; throughput will also increase as the load increases. The number of nodes in the cluster is generally a function of the expected maximum peak load. In such a cluster, all nodes are active.

Web Services Testing and soapUI

Packt
16 Nov 2012
8 min read
SOA and web services

SOA is a distinct approach for separating concerns and building business solutions from loosely coupled and reusable components. SOA is no longer just a nice-to-have feature for most enterprises; it is widely used in organizations to achieve strategic advantages. By adopting SOA, organizations can enable their business applications to respond quickly and efficiently to the business, process, and integration changes that usually occur in any enterprise environment.

Service-oriented solutions

If a software system is built by following the principles associated with SOA, it can be considered a service-oriented solution. Organizations generally tend to build service-oriented solutions in order to gain flexibility in their businesses, merge or acquire new businesses, and achieve competitive advantages. To understand the use and purpose of SOA and service-oriented solutions, let's have a look at a simplified case study.

Case study

Smith and Co. is a large motor insurance policy provider located in North America. The company uses a software system to perform all of its operations associated with insurance claim processing. The system consists of various modules, including the following:

- Customer enrollment and registration
- Insurance policy processing
- Insurance claim processing
- Customer management
- Accounting
- Service provider management

With the enormous success and client satisfaction of the insurance claims processed by the company during the recent past, Smith and Co. acquired InsurePlus Inc., one of its competing insurance providers, a few months back. InsurePlus also provides some motor insurance claim policies similar to those that Smith and Co. provides to its clients. Therefore, the company management has decided to integrate the insurance claim processing systems used by both companies and deliver one solution to their clients.

Smith and Co. uses a lot of Microsoft technologies, and all of its software applications, including the overall insurance policy management system, are built on the .NET framework. On the other hand, InsurePlus uses J2EE heavily, and its insurance processing applications are all based on Java technologies. To make the integration problem worse, InsurePlus has a legacy customer management application component as well, which runs on an AS/400 system.

The IT departments of both companies faced numerous difficulties when they tried to integrate the software applications of Smith and Co. and InsurePlus Inc. They had to write a lot of adapter modules so that the applications could communicate with each other and perform the protocol conversions needed. In order to overcome these and future integration issues, the IT management of Smith and Co. decided to adopt SOA in their business application development methodology and convert the insurance processing system into a service-oriented solution. As a first step, a lot of wrapper services (web services that encapsulate the logic of the different insurance processing modules) were built and exposed as web services, so the individual modules were able to communicate with each other with minimal integration concerns. By adopting SOA, their applications used a common language, XML, for message transmission, and hence a heterogeneous system such as the .NET-based insurance policy handling system at Smith and Co.
was able to communicate with the Java-based applications running at InsurePlus Inc. By implementing a service-oriented solution, the system at Smith and Co. was able to merge with a lot of other legacy systems with minimal integration overhead.

Building blocks of SOA

When studying typical service-oriented solutions, we can identify three major building blocks, as follows:

- Web services
- Mediation
- Composition

Web services

Web services are the individual units of business logic in SOA. Web services communicate with each other and with other programs or applications by sending messages. A web service has a public interface definition, which is the central piece of information that assigns the service an identity and enables its invocation. The service container is the SOA middleware component in which the web service is hosted for consuming applications to interact with; it allows developers to build, deploy, and manage web services, and it represents the server-side processor role in web service frameworks. A list of commonly used web service frameworks can be found at http://en.wikipedia.org/wiki/List_of_web_service_frameworks; there you can find popular web service middleware such as Windows Communication Foundation (WCF), Apache CXF, Apache Axis2, and so on. Apache Axis2 can be found at http://axis.apache.org/. The service container contains the business logic, which interacts with the service consumer via a service interface.

Mediation

Usually, message transmission between nodes in a service-oriented solution does not occur only via typical point-to-point channels. Instead, once a message is received, it can flow through multiple intermediaries and be subjected to various transformations and conversions as necessary. This behavior is commonly referred to as message mediation and is another important building block in service-oriented solutions. Similar to how the service container is used as the hosting platform for web services, a broker is the corresponding SOA middleware component for message mediation. Usually, an enterprise service bus (ESB) acts as the broker in service-oriented solutions.

Composition

In service-oriented solutions, we cannot expect individual web services running alone to provide the desired business functionality. Instead, multiple web services work together and participate in various service compositions. Usually, the web services are pulled together dynamically at runtime based on the rules specified in business process definitions. The management or coordination of these business processes is governed by the process coordinator, which is the SOA middleware component associated with web service compositions.

Simple Object Access Protocol

Simple Object Access Protocol (SOAP) can be considered the foremost messaging standard for use with web services. It is defined by the World Wide Web Consortium (W3C) at http://www.w3.org/TR/2000/NOTE-SOAP-20000508/ as follows: "SOAP is a lightweight protocol for exchange of information in a decentralized, distributed environment. It is an XML based protocol that consists of three parts: an envelope that defines a framework for describing what is in a message and how to process it, a set of encoding rules for expressing instances of application-defined datatypes, and a convention for representing remote procedure calls and responses." The SOAP specification has been universally accepted as the standard transport protocol for messages processed by web services.
There are two versions of the SOAP specification, and both are widely used in service-oriented solutions: SOAP 1.1 and SOAP 1.2. Regardless of the specification version, the format of a SOAP message remains the same: a SOAP message is an XML document that consists of a mandatory SOAP envelope, an optional SOAP header, and a mandatory SOAP body.

The SOAP Envelope is the wrapper element that holds all child nodes inside a SOAP message. The SOAP Header element is an optional block where meta information is stored. Using headers, SOAP messages can carry different types of supplemental information related to the delivery and processing of messages. This indirectly provides statelessness for web services, because with SOAP headers, services do not necessarily need to store message-specific logic. Typically, SOAP headers can include the following:

- Message processing instructions
- Security policy metadata
- Addressing information
- Message correlation data
- Reliable messaging metadata

The SOAP body is the element where the actual message contents are hosted; these contents are usually referred to as the message payload.

Consider a sample SOAP message (a sketch of such a message appears at the end of this section): we can clearly identify the three elements, envelope, header, and body. The header element includes a set of child elements such as <wsa:To>, <wsa:ReplyTo>, <wsa:Address>, <wsa:MessageID>, and <wsa:Action>; these header blocks are part of the WS-Addressing specification, and similarly, any header element associated with the WS-* specifications can be included inside the SOAP header element. The <s:Body> element carries the actual message payload; in this example, it is the <p:echoString> element with one child element.

When working with SOAP messages, identifying the version of a SOAP message is an important requirement. At first glance, you can determine the version of the specification used through the namespace identifier of the <Envelope> element. If the message conforms to the SOAP 1.1 specification, it is http://schemas.xmlsoap.org/soap/envelope/; otherwise, http://www.w3.org/2003/05/soap-envelope is the namespace identifier of SOAP 1.2 messages.

Alternatives to SOAP

Though SOAP is considered the standard protocol for web services communication, it is not the only transport protocol in use. SOAP was designed to be extensible so that other standards could be integrated into it; the WS-* extensions such as WS-Security, WS-Addressing, and WS-ReliableMessaging are associated with SOAP messaging due to this extensible nature. In addition to platform and language agnosticism, SOAP messages can be transmitted over various transports such as HTTP, HTTPS, JMS, and SMTP, among others. However, there are a few drawbacks associated with SOAP messaging: performance degradation due to heavy XML processing, and the complexities associated with the use of the various WS-* specifications, are two of the most common disadvantages of the SOAP messaging model. Because of these concerns, we can identify some alternative approaches to SOAP.
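Referring back to the sample message discussed above, here is an illustrative SOAP 1.1 envelope. The WS-Addressing header blocks and the echoString payload mirror the elements named in the text, but the endpoint, action, message ID, and payload values are placeholders, not taken from the article.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- The SOAP 1.1 envelope namespace identifies the version, as noted above -->
    <s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
                xmlns:wsa="http://www.w3.org/2005/08/addressing">
      <s:Header>
        <!-- WS-Addressing header blocks: metadata about delivery and processing -->
        <wsa:To>http://localhost:8080/services/EchoService</wsa:To>
        <wsa:Action>urn:echoString</wsa:Action>
        <wsa:MessageID>urn:uuid:11111111-2222-3333-4444-555555555555</wsa:MessageID>
      </s:Header>
      <s:Body>
        <!-- The message payload -->
        <p:echoString xmlns:p="http://example.org/echo">
          <p:value>Hello, world</p:value>
        </p:echoString>
      </s:Body>
    </s:Envelope>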

Deployment and Maintenance

Packt
20 Jul 2015
21 min read
In this article by Sandro Pasquali, author of Deploying Node.js, we will learn about the following:

Automating the deployment of applications, including a look at the differences between continuous integration, delivery, and deployment
Using Git to track local changes and triggering deployment actions via webhooks when appropriate
Using Vagrant to synchronize your local development environment with a deployed production server
Provisioning a server with Ansible

Note that application deployment is a complex topic with many dimensions that are often considered within unique sets of needs. This article is intended as an introduction to some of the technologies and themes you will encounter. Also, note that the scaling issues are part and parcel of deployment.

(For more resources related to this topic, see here.)

Using GitHub webhooks

At the most basic level, deployment involves automatically validating, preparing, and releasing new code into production environments. One of the simplest ways to set up a deployment strategy is to trigger releases whenever changes are committed to a Git repository through the use of webhooks. Paraphrasing the GitHub documentation, webhooks provide a way for notifications to be delivered to an external web server whenever certain actions occur on a repository.

In this section, we'll use GitHub webhooks to create a simple continuous deployment workflow, adding more realistic checks and balances. We'll build a local development environment that lets developers work with a clone of the production server code, make changes, and see the results of those changes immediately. As this local development build uses the same repository as the production build, the build process for a chosen environment is simple to configure, and multiple production and/or development boxes can be created with no special effort.

The first step is to create a GitHub (www.github.com) account if you don't already have one. Basic accounts are free and easy to set up. Now, let's look at how GitHub webhooks work.

Enabling webhooks

Create a new folder and insert the following package.json file:

{
  "name": "express-webhook",
  "main": "server.js",
  "dependencies": {
    "express": "~4.0.0",
    "body-parser": "^1.12.3"
  }
}

This ensures that Express 4.x is installed and includes the body-parser package, which is used to handle POST data. Next, create a basic server called server.js:

var express = require('express');
var app = express();
var bodyParser = require('body-parser');
var port = process.env.PORT || 8082;

app.use(bodyParser.json());

app.get('/', function(req, res) {
  res.send('Hello World!');
});

app.post('/webhook', function(req, res) {
  // We'll add this next
});

app.listen(port);
console.log('Express server listening on port ' + port);

Enter the folder you've created, and build and run the server with npm install; npm start. Visit localhost:8082/ and you should see "Hello World!" in your browser.

Whenever any file changes in a given repository, we want GitHub to push information about the change to /webhook. So, the first step is to create a GitHub repository for the Express server mentioned in the code. Go to your GitHub account and create a new repository with the name 'express-webhook'. The following screenshot shows this:

Once the repository is created, enter your local repository folder and run the following commands:

git init
git add .
git commit -m "first commit"
git remote add origin git@github.com:<your username>/express-webhook

You should now have a new GitHub repository and a local linked version.
The next step is to configure this repository to broadcast the push event on the repository. Navigate to the following URL:

https://github.com/<your_username>/express-webhook/settings

From here, navigate to Webhooks & Services | Add webhook (you may need to enter your password again). You should now see the following screen:

This is where you set up webhooks. Note that the push event is already set as default, and, if asked, you'll want to disable SSL verification for now.

GitHub needs a target URL to use POST on change events. If you have your local repository in a location that is already web accessible, enter that now, remembering to append the /webhook route, as in http://www.example.com/webhook. If you are building on a local machine or on another limited network, you'll need to create a secure tunnel that GitHub can use. A free service to do this can be found at http://localtunnel.me/. Follow the instructions on that page, and use the custom URL provided to configure your webhook. Other good forwarding services can be found at https://forwardhq.com/ and https://meetfinch.com/.

Now that webhooks are enabled, the next step is to test the system by triggering a push event. Create a new file called readme.md (add whatever you'd like to it), save it, and then run the following commands:

git add readme.md
git commit -m "testing webhooks"
git push origin master

This will push changes to your GitHub repository. Return to the Webhooks & Services section for the express-webhook repository on GitHub. You should see something like this:

This is a good thing! GitHub noticed your push and attempted to deliver information about the changes to the webhook endpoint you set, but the delivery failed as we haven't configured the /webhook route yet—that's to be expected. Inspect the failed delivery payload by clicking on the last attempt—you should see a large JSON file. In that payload, you'll find something like this:

"committer": {
  "name": "Sandro Pasquali",
  "email": "spasquali@gmail.com",
  "username": "sandro-pasquali"
},
"added": ["readme.md"],
"removed": [],
"modified": []

It should now be clear what sort of information GitHub will pass along whenever a push event happens. You can now configure the /webhook route in the demonstration Express server to parse this data and do something with that information, such as sending an e-mail to an administrator. For example, use the following code:

app.post('/webhook', function(req, res) {
  console.log(req.body);
});

The next time your webhook fires, the entire JSON payload will be displayed. Let's take this to another level, breaking down the autopilot application to see how webhooks can be used to create a build/deploy system.

Implementing a build/deploy system using webhooks

To demonstrate how to build a webhook-powered deployment system, we're going to use a starter kit for application development. Go ahead and use fork on the repository at https://github.com/sandro-pasquali/autopilot.git. You now have a copy of the autopilot repository, which includes scaffolding for common Gulp tasks, tests, an Express server, and a deploy system that we're now going to explore. The autopilot application implements special features depending on whether you are running it in production or in development. While autopilot is a little too large and complex to fully document here, we're going to take a look at how major components of the system are designed and implemented so that you can build your own or augment existing systems.
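Before stepping through autopilot's components, it may help to see a minimal sketch of what a hand-rolled /webhook route could do with a push payload. This is not autopilot's code: the commits array and its added/modified/removed fields follow the payload excerpt shown earlier, and notifyAdmin is a purely hypothetical helper.

app.post('/webhook', function(req, res) {
  // Acknowledge immediately; GitHub only needs a 2xx response.
  res.status(200).end();

  var payload = req.body || {};
  var commits = payload.commits || [];

  // Collect every file touched by this push.
  var changed = [];
  commits.forEach(function(commit) {
    changed = changed
      .concat(commit.added || [])
      .concat(commit.modified || [])
      .concat(commit.removed || []);
  });

  console.log('Push received with ' + commits.length + ' commit(s)');
  console.log('Files affected: ' + changed.join(', '));

  // notifyAdmin(payload, changed); // hypothetical: e-mail an administrator, trigger a build, and so on
});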
Here's what we will examine:

How to create webhooks on GitHub programmatically
How to catch and read webhook payloads
How to use payload data to clone, test, and integrate changes
How to use PM2 to safely manage and restart servers when code changes

If you haven't already used fork on the autopilot repository, do that now. Clone the autopilot repository onto a server or someplace else where it is web-accessible. Follow the instructions on how to connect and push to the fork you've created on GitHub, and get familiar with how to pull and push changes, commit changes, and so on.

PM2 delivers a basic deploy system that you might consider for your project (https://github.com/Unitech/PM2/blob/master/ADVANCED_README.md#deployment).

Install the cloned autopilot repository with npm install; npm start. Once npm has installed dependencies, an interactive CLI application will lead you through the configuration process. Just hit the Enter key for all the questions, which will set defaults for a local development build (we'll build in production later). Once the configuration is complete, a new development server process controlled by PM2 will have been spawned. You'll see it listed in the PM2 manifest under autopilot-dev in the following screenshot:

You will make changes in the /source directory of this development build. When you eventually have a production server in place, you will use git push on the local changes to push them to the autopilot repository on GitHub, triggering a webhook. GitHub will use POST on the information about the change to an Express route that we will define on our server, which will trigger the build process. The build runner will pull your changes from GitHub into a temporary directory, install, build, and test the changes, and if all is well, it will replace the relevant files in your deployed repository. At this point, PM2 will restart, and your changes will be immediately available. Schematically, the flow looks like this:

To create webhooks on GitHub programmatically, you will need to create an access token. The following diagram explains the steps from A to B to C:

We're going to use the Node library at https://github.com/mikedeboer/node-github to access GitHub. We'll use this package to create hooks on GitHub using the access token you've just created. Once you have an access token, creating a webhook is easy:

var GitHubApi = require("github");

// Instantiate a client before authenticating (the github module expects this)
var github = new GitHubApi({
  version: "3.0.0"
});

github.authenticate({
  type: "oauth",
  token: <your token>
});

github.repos.createHook({
  "user": <your github username>,
  "repo": <github repo name>,
  "name": "web",
  "secret": <any secret string>,
  "active": true,
  "events": ["push"],
  "config": {
    "url": "http://yourserver.com/git-webhook",
    "content_type": "json"
  }
}, function(err, resp) {
  ...
});

Autopilot performs this on startup, removing the need for you to manually create a hook. Now, we are listening for changes. As we saw previously, GitHub will deliver a payload indicating what has been added, what has been deleted, and what has changed. The next step for the autopilot system is to integrate these changes.

It is important to remember that, when you use webhooks, you do not have control over how often GitHub will send changesets—if more than one person on your team can push, there is no predicting when those pushes will happen. The autopilot system uses Redis to manage a queue of requests, executing them in order. You will need a way to manage multiple changes. For now, let's look at a straightforward way to build, test, and integrate changes.
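The sketch below illustrates that idea in plain Node; it is not the autopilot implementation. A simple in-memory array stands in for autopilot's Redis-backed queue, child_process runs the familiar commands, and the temporary directory naming and the commented-out integration step are assumptions for illustration only.

var exec = require('child_process').exec;

var queue = [];
var building = false;

// Queue a changeset and process one build at a time.
function enqueue(job) {
  queue.push(job);
  if (!building) {
    next();
  }
}

function next() {
  var job = queue.shift();
  if (!job) {
    building = false;
    return;
  }
  building = true;

  // Clone into a temporary directory, then install and test.
  var dir = '/tmp/build-' + Date.now();
  var steps = [
    'git clone ' + job.repoUrl + ' ' + dir,
    'cd ' + dir + ' && npm install',
    'cd ' + dir + ' && npm test'
  ].join(' && ');

  exec(steps, function(err, stdout, stderr) {
    if (err) {
      console.error('Build failed; production left untouched:', stderr);
    } else {
      console.log('Build passed; changed files can now be copied into the deployed repository');
      // integrateChanges(dir, job.changed); // hypothetical step that swaps in the changed files
    }
    next();
  });
}

Autopilot implements the same flow with a Redis queue and a forked process per build, which is what we will look at next.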
In your code bundle, visit autopilot/swanson/push.js. This is a process runner on which fork has been used by buildQueue.js in that same folder. The following information is passed to it: The URL of the GitHub repository that we will clone The directory to clone that repository into (<temp directory>/<commit hash>) The changeset The location of the production repository that will be changed Go ahead and read through the code. Using a few shell scripts, we will clone the changed repository and build it using the same commands you're used to—npm install, npm test, and so on. If the application builds without errors, we need only run through the changeset and replace the old files with the changed files. The final step is to restart our production server so that the changes reach our users. Here is where the real power of PM2 comes into play. When the autopilot system is run in production, PM2 creates a cluster of servers (similar to the Node cluster module). This is important as it allows us to restart the production server incrementally. As we restart one server node in the cluster with the newly pushed content, the other clusters continue to serve old content. This is essential to keeping a zero-downtime production running. Hopefully, the autopilot implementation will give you a few ideas on how to improve this process and customize it to your own needs. Synchronizing local and deployed builds One of the most important (and often difficult) parts of the deployment process is ensuring that the environment an application is being developed, built, and tested within perfectly simulates the environment that application will be deployed into. In this section, you'll learn how to emulate, or virtualize, the environment your deployed application will run within using Vagrant. After demonstrating how this setup can simplify your local development process, we'll use Ansible to provision a remote instance on DigitalOcean. Developing locally with Vagrant For a long while, developers would work directly on running servers or cobble together their own version of the production environment locally, often writing ad hoc scripts and tools to smoothen their development process. This is no longer necessary in a world of virtual machines. In this section, we will learn how to use Vagrant to emulate a production environment within your development environment, advantageously giving you a realistic box to work on testing code for production and isolating your development process from your local machine processes. By definition, Vagrant is used to create a virtual box emulating a production environment. So, we need to install Vagrant, a virtual machine, and a machine image. Finally, we'll need to write the configuration and provisioning scripts for our environment. Go to http://www.vagrantup.com/downloads and install the right Vagrant version for your box. Do the same with VirtualBox here at https://www.virtualbox.org/wiki/Downloads. You now need to add a box to run. For this example, we're going to use Centos 7.0, but you can choose whichever you'd prefer. Create a new folder for this project, enter it, and run the following command: vagrant box add chef/centos-7.0 Usefully, the creators of Vagrant, HashiCorp, provide a search service for Vagrant boxes at https://atlas.hashicorp.com/boxes/search. You will be prompted to choose your virtual environment provider—select virtualbox. All relevant files and machines will now be downloaded. Note that these boxes are very large and may take time to download. 
You'll now create a configuration file for Vagrant called Vagrantfile. As with npm, the init command quickly sets up a base file. Additionally, we'll need to inform Vagrant of the box we'll be using: vagrant init chef/centos-7.0 Vagrantfile is written in Ruby and defines the Vagrant environment. Open it up now and scan it. There is a lot of commentary, and it makes a useful read. Note the config.vm.box = "chef/centos-7.0" line, which was inserted during the initialization process. Now you can start Vagrant: vagrant up If everything went as expected, your box has been booted within Virtualbox. To confirm that your box is running, use the following code: vagrant ssh If you see a prompt, you've just set up a virtual machine. You'll see that you are in the typical home directory of a CentOS environment. To destroy your box, run vagrant destroy. This deletes the virtual machine by cleaning up captured resources. However, the next vagrant up command will need to do a lot of work to rebuild. If you simply want to shut down your machine, use vagrant halt. Vagrant is useful as a virtualized, production-like environment for developers to work within. To that end, it must be configured to emulate a production environment. In other words, your box must be provisioned by telling Vagrant how it should be configured and what software should be installed whenever vagrant up is run. One strategy for provisioning is to create a shell script that configures our server directly and point the Vagrant provisioning process to that script. Add the following line to Vagrantfile: config.vm.provision "shell", path: "provision.sh" Now, create that file with the following contents in the folder hosting Vagrantfile: # install nvmcurl https://raw.githubusercontent.com/creationix/nvm/v0.24.1/install.sh | bash# restart your shell with nvm enabledsource ~/.bashrc# install the latest Node.jsnvm install 0.12# ensure server default versionnvm alias default 0.12 Destroy any running Vagrant boxes. Run Vagrant again, and you will notice in the output the execution of the commands in our provisioning shell script. When this has been completed, enter your Vagrant box as the root (Vagrant boxes are automatically assigned the root password "vagrant"): vagrant sshsu You will see that Node v0.12.x is installed: node -v It's standard to allow password-less sudo for the Vagrant user. Run visudo and add the following line to the sudoers configuration file: vagrant ALL=(ALL) NOPASSWD: ALL Typically, when you are developing applications, you'll be modifying files in a project directory. You might bind a directory in your Vagrant box to a local code editor and develop in that way. Vagrant offers a simpler solution. Within your VM, there is a /vagrant folder that maps to the folder that Vagrantfile exists within, and these two folders are automatically synced. So, if you add the server.js file to the right folder on your local machine, that file will also show up in your VM's /vagrant folder. Go ahead and create a new test file either in your local folder or in your VM's /vagrant folder. You'll see that file synchronized to both locations regardless of where it was originally created. Let's clone our express-webhook repository from earlier in this article into our Vagrant box. 
Add the following lines to provision.sh: # install various packages, particularly for gityum groupinstall "Development Tools" -yyum install gettext-devel openssl-devel perl-CPAN perl-devel zlib-devel-yyum install git -y# Move to shared folder, clone and start servercd /vagrantgit clone https://github.com/sandro-pasquali/express-webhookcd express-webhooknpm i; npm start Add the following to Vagrantfile, which will map port 8082 on the Vagrant box (a guest port representing the port our hosted application listens on) to port 8000 on our host machine: config.vm.network "forwarded_port", guest: 8082, host: 8000 Now, we need to restart the Vagrant box (loading this new configuration) and re-provision it: vagrant reloadvagrant provision This will take a while as yum installs various dependencies. When provisioning is complete, you should see this as the last line: ==> default: Express server listening on port 8082 Remembering that we bound the guest port 8082 to the host port 8000, go to your browser and navigate to localhost:8000. You should see "Hello World!" displayed. Also note that in our provisioning script, we cloned to the (shared) /vagrant folder. This means the clone of express-webhook should be visible in the current folder, which will allow you to work on the more easily accessible codebase, knowing it will be automatically synchronized with the version on your Vagrant box. Provisioning with Ansible Configuring your machines by hand, as we've done previously, doesn't scale well. For one, it can be overly difficult to set and manage environment variables. Also, writing your own provisioning scripts is error-prone and no longer necessary given the existence of provisioning tools, such as Ansible. With Ansible, we can define server environments using an organized syntax rather than ad hoc scripts, making it easier to distribute and modify configurations. Let's recreate the provision.sh script developed earlier using Ansible playbooks: Playbooks are Ansible's configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce or a set of steps in a general IT process. Playbooks are expressed in the YAML format (a human-readable data serialization language). To start with, we're going to change Vagrantfile's provisioner to Ansible. First, create the following subdirectories in your Vagrant folder: provisioningcommontasks These will be explained as we proceed through the Ansible setup. Next, create the following configuration file and name it ansible.cfg: [defaults]roles_path = provisioninglog_path = ./ansible.log This indicates that Ansible roles can be found in the /provisioning folder, and that we want to keep a provisioning log in ansible.log. Roles are used to organize tasks and other functions into reusable files. These will be explained shortly. Modify the config.vm.provision definition to the following: config.vm.provision "ansible" do |ansible|ansible.playbook = "provisioning/server.yml"ansible.verbose = "vvvv"end This tells Vagrant to defer to Ansible for provisioning instructions, and that we want the provisioning process to be verbose—we want to get feedback when the provisioning step is running. Also, we can see that the playbook definition, provisioning/server.yml, is expected to exist. 
Create that file now: ---- hosts: allsudo: yesroles:- commonvars:env:user: 'vagrant'nvm:version: '0.24.1'node_version: '0.12'build:repo_path: 'https://github.com/sandro-pasquali'repo_name: 'express-webhook' Playbooks can contain very complex rules. This simple file indicates that we are going to provision all available hosts using a single role called common. In more complex deployments, an inventory of IP addresses could be set under hosts, but, here, we just want to use a general setting for our one server. Additionally, the provisioning step will be provided with certain environment variables following the forms env.user, nvm.node_version, and so on. These variables will come into play when we define the common role, which will be to provision our Vagrant server with the programs necessary to build, clone, and deploy express-webhook. Finally, we assert that Ansible should run as an administrator (sudo) by default—this is necessary for the yum package manager on CentOS. We're now ready to define the common role. With Ansible, folder structures are important and are implied by the playbook. In our case, Ansible expects the role location (./provisioning, as defined in ansible.cfg) to contain the common folder (reflecting the common role given in the playbook), which itself must contain a tasks folder containing a main.yml file. These last two naming conventions are specific and required. The final step is creating the main.yml file in provisioning/common/tasks. First, we replicate the yum package loaders (see the file in your code bundle for the full list): ---- name: Install necessary OS programsyum: name={{ item }} state=installedwith_items:- autoconf- automake...- git Here, we see a few benefits of Ansible. A human-readable description of yum tasks is provided to a looping structure that will install every item in the list. Next, we run the nvm installer, which simply executes the auto-installer for nvm: - name: Install nvmsudo: noshell: "curl https://raw.githubusercontent.com/creationix/nvm/v{{ nvm.version }}/install.sh | bash" Note that, here, we're overriding the playbook's sudo setting. This can be done on a per-task basis, which gives us the freedom to move between different permission levels while provisioning. We are also able to execute shell commands while at the same time interpolating variables: - name: Update .bashrcsudo: nolineinfile: >dest="/home/{{ env.user }}/.bashrc"line="source /home/{{ env.user }}/.nvm/nvm.sh" Ansible provides extremely useful tools for file manipulation, and we will see here a very common one—updating the .bashrc file for a user. The lineinfile directive makes the addition of aliases, among other things, straightforward. The remainder of the commands follow a similar pattern to implement, in a structured way, the provisioning directives we need for our server. All the files you will need are in your code bundle in the vagrant/with_ansible folder. Once you have them installed, run vagrant up to see Ansible in action. One of the strengths of Ansible is the way it handles contexts. When you start your Vagrant build, you will notice that Ansible gathers facts, as shown in the following screenshot: Simply put, Ansible analyzes the context it is working in and only executes what is necessary to execute. If one of your tasks has already been run, the next time you try vagrant provision, that task will not run again. This is not true for shell scripts! 
In this way, editing playbooks and reprovisioning does not consume time redundantly changing what has already been changed. Ansible is a powerful tool that can be used for provisioning and much more complex deployment tasks. One of its great strengths is that it can run remotely—unlike most other tools, Ansible uses SSH to connect to remote servers and run operations. There is no need to install it on your production boxes. You are encouraged to browse the Ansible documentation at http://docs.ansible.com/index.html to learn more. Summary In this article, you learned how to deploy a local build into a production-ready environment and the powerful Git webhook tool was demonstrated as a way of creating a continuous integration environment. Resources for Article: Further resources on this subject: Node.js Fundamentals [Article] API with MongoDB and Node.js [Article] So, what is Node.js? [Article]

Introduction to Moodle
Packt, 28 Sep 2011, 5 min read

  (For more resources on Moodle, see here.) The Moodle philosophy Moodle is designed to support a style of learning called Social Constructionism. This style of learning is interactive. The social constructionist philosophy believes that people learn best when they interact with the learning material, construct new material for others, and interact with other students about the material. The difference between a traditional class and a class following the social constructionist philosophy is the difference between a lecture and a discussion. Moodle does not require you to use the social constructionist method for your courses. However, it best supports this method. For example, Moodle allows you to add several kinds of static course material. This is course material that a student reads, but does not interact with: Web pages Links to anything on the Web (including material on your Moodle site) A directory of files A label that displays any text or image However, Moodle also allows you to add interactive course material. This is course material that a student interacts with, by answering questions, entering text, or uploading files: Assignment (uploading files to be reviewed by the teacher) Choice (a single question) Lesson (a conditional, branching activity) Quiz (an online test) Moodle also offers activities where students interact with each other. These are used to create social course material: Chat (live online chat between students) Forum (you can have zero or more online bulletin boards for each course) Glossary (students and/or teachers can contribute terms to site-wide glossaries) Wiki (this is a familiar tool for collaboration to most younger students and many older students) Workshop (this supports the peer review and feedback of assignments that students upload) In addition, some of Moodle's add-on modules add even more types of interaction. For example, one add-on module enables students and teachers to schedule appointments with each other. The Moodle experience Because Moodle encourages interaction and exploration, your students' learning experience will often be non-linear. Moodle can be used to enforce a specific order upon a course, using something called conditional activities. Conditional activities can be arranged in a sequence. Your course can contain a mix of conditional and non-linear activities. In this section, I'll take you on a tour of a Moodle learning site. You will see the student's experience from the time that the student arrives at the site, through entering a course, to working through some material in the course. You will also see some student-to-student interaction, and some functions used by the teacher to manage the course. The Moodle Front Page The Front Page of your site is the first thing that most visitors will see. This section takes you on a tour of the Front Page of my demonstration site. Probably the best Moodle demo sites are http://demo.moodle.net/ and http://school.demo.moodle.net/. Arriving at the site When a visitor arrives at a learning site, the visitor sees the Front Page. You can require the visitor to register and log in before seeing any part of your site, or you can allow an anonymous visitor to see a lot of information about the site on the Front Page, which is what I have done: (Move the mouse over the image to enlarge.) One of the first things that a visitor will notice is the announcement at the top and centre of the page, Moodle 2.0 Book Almost Ready!. 
Below the announcement are two activities: a quiz, Win a Prize: Test Your Knowledge of E-mail History, and a chat room, Global Chat Room. Selecting either of these activities will require to the visitor to register with the site, as shown in the following screenshot: Anonymous, guest, and registered access Notice the line Some courses may allow guest access at the middle of the page. You can set three levels of access for your site, and for individual courses: Anonymous access allows anyone to see the contents of your site's Front Page. Notice that there is no Anonymous access for courses. Even if a course is open to Guests, the visitor must either manually log in as the user Guest, or you must configure the site to automatically log in a visitor as Guest. Guest access requires the user to login as Guest. This allows you to track usage, by looking at the statistics for the user Guest. However, as everyone is logged in as the user Guest, you can't track individual users. Registered access requires the user to register on your site. You can allow people to register with or without e-mail confirmation, require a special code for enrolment, manually create their accounts yourself, import accounts from another system, or use an outside system (like an LDAP server) for your accounts. The Main menu Returning to the Front Page, notice the Main menu in the upper-left corner. This menu consists of two documents that tell the user what the site is about, and how to use it. In Moodle, icons tell the user what kind of resource will be accessed by a link. In this case, the icon tells the user that the first resource is a PDF (Adobe Acrobat) document, and the second is a web page. Course materials that students observe or read, such as web or text pages, hyperlinks, and multimedia files are called Resources.

Overview of REST Concepts and Developing your First Web Script using Alfresco
Packt, 30 Aug 2010, 10 min read

(For more resources on Alfresco, see here.) Web Scripts allow you to develop entire web applications on Alfresco by using just a scripting language—JavaScript and a templating language—FreeMarker. They offer a lightweight framework for quickly developing even complex interfaces such as Alfresco Share and Web Studio. Besides this, Web Scripts can be used to develop Web Services for giving external applications access to the features of the Alfresco repository. Your Web Services, implemented according to the principles of the REST architectural style, can be easily reused by disparate, heterogeneous systems. Specifically, in this article, you will learn: What REST means and how it compares to SOAP What elements are needed to implement a Web Script A lightweight alternative to SOAP Web Services The term Web Services is generally intended to denote a large family of specifications and protocols, of which SOAP is only a small part, which are often employed to let applications provide and consume services over the World Wide Web (WWW). This basically means exchanging XML messages over HTTP. The main problem with the traditional approach to Web Services is that any implementation has to be compliant with a huge, and complicated set of specifications. This makes the application itself complex and typically hard to understand, debug, and maintain. A whole cottage industry has grown with the purpose of providing the tools necessary for letting developers abstract away this complexity. It is virtually impossible to develop any non-trivial application without these tools based on SOAP. In addition, one or more of the other Web Services standards such as WS-Security, WS-Transaction, or WS-Coordination are required. It is also impossible for any one person to have a reasonably in-depth knowledge of a meaningful portion of the whole Web Services stack (sometimes colloquially referred to as WS-*). Recently, a backlash against this heavyweight approach in providing services over the Web has begun and some people have started pushing for a different paradigm, one that did not completely ignore and disrupt the architecture of the World Wide Web. The main objection that the proponents of the REST architectural style, as this paradigm is called, raise with respect to WS-* is that the use of the term Web in Web Services is fraudulent and misleading. The World Wide Web, they claim, was designed in accordance with REST principles and this is precisely why it was able to become the largest, most scalable information architecture ever realized. WS-*, on the other hand, is nothing more than a revamped, RPC-style message exchange paradigm. It's just CORBA once again, only this time over HTTP and using XML, to put it bluntly. As it has purportedly been demonstrated, this approach will never scale to the size of the World Wide Web, as it gets in the way of important web concerns such as cacheability, the proper usage of the HTTP protocol methods, and of well-known MIME types to decouple clients from servers. Of course, you don't have to buy totally into the REST philosophy—which will be described in the next section—in order to appreciate the elegance, simplicity, and usefulness of Alfresco Web Scripts. After all, Alfresco gives you the choice to use either Web Scripts or the traditional, SOAP-based, Web Services. But you have to keep in mind that the newer and cooler pieces of Alfresco, such as Surf, Share, Web Studio, and the CMIS service, are being developed using Web Scripts. 
It is, therefore, mandatory that you know how the Web Scripts work, how to develop them, and how to interact with them, if you want to be part of this brave new world of RESTful services. REST concepts The term REST had been introduced by Roy T. Fielding, one of the architects of the HTTP protocol, in his Ph.D dissertation titled Architectural Styles and the Design of Network-based Software Architectures (available online at http://www.ics.uci.edu/ ~fielding/pubs/dissertation/top.htm). Constraints In his work, Dr. Fielding introduces an "architectural style for distributed hypermedia systems" called Representational State Transfer (REST). It does so by starting from an architectural style that does not impose any constraints on implementations (called the Null Style) and progressively adds new constraints that together define what REST is. Those constraints are: Client-Server interaction Statelessness Cacheability Uniform Interface Layered System Code-On-Demand (optional) Fielding then goes on to define the main elements of the REST architectural style. Foremost among those are resources and representations. In contrast with distributed object systems, where data is always hidden behind an interface that only exposes operations that clients may perform on said data, "REST components communicate by transferring a representation of a resource in a format matching one of an evolving set of standard data types, selected dynamically based on the capabilities or desires of the recipient and the nature of the resource." Resources It is important to understand what a resource is and what it isn't. A resource is some information that can be named. It can correspond to a specific entity on a data management system such as a record in a database or a document in a DMS such as Alfresco. However, it can also map to a set of entities, such as a list of search results, or a non-virtual object like a person in the physical world. In any case, a resource is not the underlying entity. Resources need to be named, and in a globally distributed system such as the World Wide Web, they must be identified in a way that guarantees the universality and possibly the univocity of identifiers. On the Web, resources are identified using Uniform Resource Identifiers (URI). A specific category of URIs are Uniform Resource Locators (URL) , which provide a way for clients to locate, that is to find, a resource anywhere on the Web, in addition to identifying it. It is also assumed that URIs never change over the lifetime of a resource, no matter how much the internal state of the underlying entities changes over time. This allows the architecture of the Web to scale immensely, as the system does not need to rely on centralized link servers that maintain references separated from the content. Representations Representations are sequences of bytes intended to capture the current or intended state of a resource, as well as metadata (in the form of name / value pairs) about the resource or the representation itself. The format of a representation is called its media type. Examples of media types are plain text, HTML , XML, JPEG, PDF, and so on. When servers and clients use a set of well-known, standardized media types, interoperability between systems is greatly simplified. Sometimes, it is possible for clients and servers to negotiate a specific format from a set that is supported by both. 
Control data, which is exchanged between systems together with the representation, is used to determine the purpose of a message or the behavior of any intermediaries. Control data can be used by the client, for instance, to inform the server that the representation being transferred is meant to be the intended new state of the resource, or it can be used by the server to control how proxies, or the client itself, may cache representations. The most obvious example of control data on the Web is HTTP methods and result codes. By using the PUT method, for example, a client usually signals to a server that it is sending an updated representation of the resource. REST in practice As we mentioned, REST is really just an abstract architectural style, not a specific architecture, network protocol, or software system. While no existing system exactly adheres to the full set of REST principles, the World Wide Web is probably the most well-known and successful implementation of them. Developing Web Services that follow the REST paradigm boils down to following a handful of rules and using HTTP the way it was meant to be used. The following sections detail some of those rules. Use URLs to identify resources It is important that you design the URLs for your Web Service in such a way that they identify resources and do not describe the operations performed on said resources. It is a common mistake to use URLs such as: /widgetService/createNewWidget /widgetService/readWidget?id=1 /widgetService/updateWidget?id=1 /widgetService/deleteWidget?id=1 whenever, for instance, you want to design a web service for doing CRUD operations on widgets. A proper, RESTful URL space for this kind of usage scenario could instead be something like the following: /widgets/ To identify a collection of widgets /widgets/id To identify a single widget. Then again, a RESTful interaction with a server that implements the previous service would be along the lines of the following (where we have indicated the HTTP verb together with the URL): POST /widgets/ To create a new widget, whose representation is contained in the body of the request GET /widgets/ To obtain a representation (listing) of all widgets of the collection GET /widgets/1 To obtain a representation of the widget having id=1 POST /widgets/1 To update a widget by sending a new representation (the PUT verb could be used here as well) DELETE /widgets/1 To delete a widget You can see here how URLs representing resources and the appropriate usage of HTTP methods can be used to implement a correctly designed RESTful Web Service for CRUD operations on server-side objects. Use HTTP methods properly There are four main methods that a client can use to tell a server which kind of operation to perform. You can call them commands, if you like. These are GET, POST, PUT, and DELETE. The HTTP 1.1 specification lists some other methods, such as HEAD, TRACE, and OPTIONS, but we can ignore them as they are not frequently used. GET GET is meant to be used for requests that are not intended to modify the state of a resource. This does not mean that the processing by the server of a GET request must be free of side effects—it is perfectly legal, for instance, to increment a counter of page views. GET requests, however, should be idempotent. The property of idempotency means that a sequence of N identical requests should have the same side effects as a single request. The methods GET, HEAD, PUT, and DELETE share this property. 
Basically, by using GET, a client signals that it intends to retrieve the representation of a resource. The server can perform any operation that causes side effects as part of the execution of the method, but the client cannot be held accountable for them. PUT PUT is generally used to send the modified representation of a resource. It is idempotent as well—multiple, identical PUT requests have the same effect as a single request. DELETE DELETE can be used to request the removal of a resource. This is another idempotent method. POST The POST method is used to request that the server accepts the entity enclosed in the request as a new subordinate of the resource identified by the URI named in the request. POST is a bit like the Swiss army knife of HTTP and can be used for a number of purposes, including: Annotation of existing resources Posting a message to a bulletin board, newsgroup, or mailing list Providing a block of data, such as the result of submitting a form, to a data-handling process Extending a database through an append operation POST is not an idempotent method. One of the main objections proponents of REST raise with respect to traditional Web Service architectures is that, with the latter, POST is used for everything. While you shouldn't feel compelled to use every possible HTTP method in your Web Service (it is perfectly RESTful to use only GET and POST), you should at least know the expectations behind them and use them accordingly.
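To make the mapping from verbs to operations concrete, here is a minimal sketch of the widget service described above. It is written as a plain Node.js/Express application purely for illustration, not as Alfresco Web Script code; the in-memory store is an assumption, and the update route uses PUT, which, as noted earlier, is just as valid as POST for updates.

var express = require('express');
var app = express();

app.use(express.json());   // JSON body parsing (available in Express 4.16+)

var widgets = {};           // in-memory store, for illustration only
var nextId = 1;

// POST /widgets creates a new widget from the representation in the request body
app.post('/widgets', function (req, res) {
  var id = nextId++;
  widgets[id] = req.body;
  res.status(201).location('/widgets/' + id).json({ id: id });
});

// GET /widgets returns a representation (listing) of the collection
app.get('/widgets', function (req, res) {
  res.json(widgets);
});

// GET /widgets/:id returns a representation of a single widget
app.get('/widgets/:id', function (req, res) {
  var widget = widgets[req.params.id];
  if (!widget) { return res.status(404).end(); }
  res.json(widget);
});

// PUT /widgets/:id replaces the stored representation (idempotent)
app.put('/widgets/:id', function (req, res) {
  widgets[req.params.id] = req.body;
  res.status(204).end();
});

// DELETE /widgets/:id removes the resource (idempotent)
app.delete('/widgets/:id', function (req, res) {
  delete widgets[req.params.id];
  res.status(204).end();
});

app.listen(3000);

Each URL names a resource, and the HTTP method alone says what should happen to it, which is exactly the point made above.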

New Modules for Moodle 2
Packt, 07 Mar 2011, 5 min read

  Moodle 2.0 First Look Discover what's new in Moodle 2.0, how the new features work, and how it will impact you         Read more about this book       (For more resources on Moodle, see here.) Blogs—before and after There has always been a blogging option in a standard Moodle install. However, some users have found it unsatisfactory because of the following reasons: The blog is attached to the user profile so you can only have one blog There is no way to attach a blog or blog entry to a particular course There is no way for other people to comment on your blog For this reason, alternative blog systems (such as the contributed OU blog module) have become popular as they give users a wider range of options. The standard blog in Moodle 2.0 has changed, and now: A blog entry can optionally be associated with a course It is possible to comment on a blog entry Blog entries from outside of Moodle can be copied in It is now possible to search blog entries Where's my blog? Last year when Emma studied on Moodle 1.9, if she wanted to make a blog entry she would click on her name to access her profile and she'd see a blog tab like the one shown in following screenshot: Alternatively, if her tutor had added the blog menu block, she could click on Add a new entry and create her blog post there as follows: The annoyance was that if she added a new entry in the blog menu of her ICT course, her classmates in her Art course could see that entry (even, confusingly, if the blog menu had a link to entries for just that course). If we follow Emma into the Beginners' French course in Moodle 2.0, we see that she can access her profile from the navigation block by clicking on My profile and then selecting View Profile. (She can also view her profile by clicking on her username as she could in Moodle 1.9). If she then clicks on Blogs she can view all the entries she made anywhere in Moodle and can also add a new entry: As before, Emma can also add her entry through the blog menu, so let's take a look at that. Her tutor, Stuart needs to have added this block to the course. The Blog Menu block To add this to a course a teacher such as Stuart needs to turn on the editing and select Blog menu from the list of available blocks: The Blog menu displays the following links: View all entries for this course: Here's where Emma and others can read blog entries specific to that course. This link shows users all the blog posts for the course they are currently in. View my entries about this course: Here's where Emma can check the entries she has already made associated with this course. This link shows users their own blog posts for the course they are currently in. Add an entry about this course: Here's where Emma can add a blog entry related only to this course. When she does that, she is taken to the editing screen for adding a new blog entry, which she starts as shown in the following screenshot: Just as in Moodle 1.9, she can attach documents, choose to publish publicly or keep to herself and add tags. The changes come as we scroll down. At the bottom of the screen is a section which associates her entry with the course she is presently in: Once she has saved it, she sees her post appear as follows: View all of my entries: Here Emma may see every entry she has made, regardless of which course it was in or whether she made it public or private. Add a new entry: Emma can choose to add a new blog entry here (as she could from her profile) which doesn't have to be specific to any particular course. 
If she sets it to "anyone on this site", then other users can read her blog wherever they are in Moodle. Search: At the bottom of the Blog menu block is a search box. This enables users to enter a word or phrase and see if anyone has mentioned it in a blog entry The Recent Blog Entries block As our teacher in the Beginners' French course Stuart has enabled the Recent Blog Entries block, there is also a block showing the latest blog entries. Emma's is the most recent entry on the course so hers appears as a link, along with all other recent course entries. Course specific blogs Just to recap and double check—if Emma now visits her other course, How to Be Happy and checks out the View my entries about this course entries link in the Blog menu, she does not see her French course blog post, but instead, sees an entry she has associated with this course: The tutor for this course, Andy, has added the blog tags block. The blog tags block This block is not new; however, it's worth pointing out that the tags are NOT course-specific, and so Emma sees the tags she added to the entries in both courses alongside the tags from other users:  

Using jQuery and jQueryUI Widget Factory plugins with RequireJS
Packt, 18 Jun 2013, 5 min read

(For more resources related to this topic, see here.)

How to do it...

We must declare the jquery alias name within our Require.js configuration file.

require.config({
  // 3rd party script alias names
  paths: {
    // Core Libraries
    // --------------
    // jQuery
    "jquery": "libs/jquery",
    // Plugins
    // -------
    "somePlugin": "libs/plugins/somePlugin"
  }
});

If a jQuery plugin does not register itself as AMD compatible, we must also create a Require.js shim configuration to make sure Require.js loads jQuery before the jQuery plugin.

shim: {
  // Twitter Bootstrap plugins depend on jQuery
  "bootstrap": ["jquery"]
}

We will now be able to dynamically load a jQuery plugin with the require() method.

// Dynamically loads a jQuery plugin using the require() method
require(["somePlugin"], function() {
  // The callback function is executed after the plugin is loaded
});

We will also be able to list a jQuery plugin as a dependency to another module.

// Sample file
// -----------
// The define method is passed a dependency array and a callback function
define(["jquery", "somePlugin"], function ($) {
  // Wraps all logic inside of a jQuery.ready event
  $(function() {
  });
});

When using a jQueryUI Widget Factory plugin, we create Require.js path names for both the jQueryUI Widget Factory and the jQueryUI Widget Factory plugin:

"jqueryui": "libs/jqueryui",
"selectBoxIt": "libs/plugins/selectBoxIt"

Next, create a shim configuration property:

// The jQueryUI Widget Factory depends on jQuery
"jqueryui": ["jquery"],
// The selectBoxIt plugin depends on both jQuery and the jQueryUI Widget Factory
"selectBoxIt": ["jqueryui"]

We will now be able to dynamically load the jQueryUI Widget Factory plugin with the require() method:

// Dynamically loads the jQueryUI Widget Factory plugin, selectBoxIt, using the require() method
require(["selectBoxIt"], function() {
  // The callback function is executed after selectBoxIt.js (and all of its dependencies) have been loaded
});

We will also be able to list the jQueryUI Widget Factory plugin as a dependency to another module:

// Sample file
// -----------
// The define method is passed a dependency array and a callback function
define(["jquery", "selectBoxIt"], function ($) {
  // Wraps all logic inside of a jQuery.ready event
  $(function() {
  });
});

How it works...

Luckily for us, jQuery adheres to the AMD specification and registers itself as a named AMD module. If you are confused about how/why they are doing that, let's take a look at the jQuery source:

// Expose jQuery as an AMD module
if ( typeof define === "function" && define.amd && define.amd.jQuery ) {
  define( "jquery", [], function () { return jQuery; } );
}

jQuery first checks to make sure there is a global define() function available on the page. Next, jQuery checks if the define function has an amd property, which all AMD loaders that adhere to the AMD API should have. Remember that in JavaScript, functions are first class objects, and can contain properties. Finally, jQuery checks to see if the amd property contains a jQuery property, which should only be there for AMD loaders that understand the issues with loading multiple versions of jQuery in a page that all might call the define() function. Essentially, jQuery is checking that an AMD script loader is on the page, and then registering itself as a named AMD module (jquery). Since jQuery exports itself as the named AMD module, jquery, you must use this exact name when setting the path configuration to your own version of jQuery, or Require.js will throw an error.
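By way of comparison, a plugin author can make a plugin AMD-friendly with a small optional wrapper around the plugin body. The following sketch is hypothetical (the plugin name and body are invented), but it shows the pattern: register as an anonymous module listing jquery as a dependency when an AMD loader is present, and fall back to the global jQuery object otherwise.

// A hypothetical jQuery plugin with optional AMD registration
(function (factory) {
  if (typeof define === "function" && define.amd) {
    // An AMD loader is on the page: register anonymously and depend on the named "jquery" module
    define(["jquery"], factory);
  } else {
    // No AMD loader: assume jQuery is available as a global
    factory(jQuery);
  }
}(function ($) {
  // The plugin body receives jQuery and attaches itself to $.fn as usual
  $.fn.myPlugin = function () {
    return this.each(function () {
      $(this).attr("data-my-plugin", "enabled");
    });
  };
}));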
If a jQuery plugin registers itself as an anonymous AMD module and jQuery is also listed with the proper lowercased jquery alias name within your Require.js configuration file, using the plugin with the require() and define() methods will work as you expect. Unfortunately, most jQuery plugins are not AMD compatible, and do not wrap themselves in an optional define() method and list jquery as a dependency. To get around this issue, we can use the Require.js shim object configuration like we have seen before to tell Require. js that a file depends on jQuery. The shim configuration is a great solution for jQuery plugins that do not register themselves as AMD modules. Unfortunately, unlike jQuery, the jQueryUI does not currently register itself as a named AMD module, which means that plugin authors that use the jQueryUI Widget Factory cannot provide AMD compatibility. Since the jQueryUI Widget Factory is not AMD compatible, we must use a workaround involving the paths and shim configuration objects to properly define the plugin as an AMD module. There's more... You will most likely always register your own files as anonymous AMD modules, but jQuery is a special case. Registering itself as a named AMD module allows other third-party libraries that depend on jQuery, such as jQuery plugins, to become AMD compatible by calling the define() method themselves and using the community agreed upon module name, jquery, to list jQuery as a dependency. Summary This article demonstrates how to use jQuery and jQueryUI Widget Factory plugins with Require.js. Resources for Article : Further resources on this subject: So, what is KineticJS? [Article] HTML5 Presentations - creating our initial presentation [Article] Tips & Tricks for Ext JS 3.x [Article]

Hands-on Tutorial on EJB 3.1 Security
Packt, 15 Jun 2011, 9 min read

EJB 3.1 Cookbook Security is an important aspect of many applications. Central to EJB security is the control of access to classes and methods. There are two approaches to controlling access to EJBs. The first, and the simplest, is through the use of declarative annotations to specify the types of access permitted. The second approach is to use code to control access to the business methods of an EJB. This second approach should not be used unless the declarative approach does not meet the needs of the application. For example, access to a method may be denied during certain times of the day or during certain maintenance periods. Declarative security is not able to handle these types of situations. In order to incorporate security into an application, it is necessary to understand the Java EE environment and its terminology. The administration of security for the underlying operating system is different from that provided by the EE server. The EE server is concerned with realms, users and groups. The application is largely concerned with roles. The roles need to be mapped to users and groups of a realm for the application to function properly. A realm is a domain for a server that incorporates security policies. It possesses a set of users and groups which are considered valid users of an application. A user typically corresponds to an individual while a group is a collection of individuals. Group members frequently share a common set of responsibilities. A Java EE server may manage multiple realms. An application is concerned with roles. Access to EJBs and their methods is determined by the role of a user. Roles are defined in such a manner as to provide a logical way of deciding which users/groups can access which methods. For example, a management type role may have the capability to approve a travel voucher whereas an employee role should not have that capability. By assigning certain users to a role and then specifying which roles can access which methods, we are able to control access to EJBs. The use of groups makes the process of assigning roles easier. Instead of having to map each individual to a role, the user is assigned to a group and the group is mapped to a role. The business code does not have to check every individual. The Java EE server manages the assignment of users to groups. The application needs only be concerned with controlling a group's access. A group is a server level concept. Roles are application level. One group can be associated with multiple applications. For example, a student group may use a student club and student registration application while a faculty group might also use the registration application but with more capability. A role is simply a name for a set of capabilities. For example, an auditor role may be to review and certify a set of accounts. This role would require read access to many, if not all, of the accounts. However, modification privileges may be restricted. Each application has its own set of roles which have been defined to meet the security needs of the application. The EE server manages realms consisting of users, groups, and resources. The server will authenticate users using Java's underlying security features. The user is then referred to as a principal and has a credential containing the user's security attributes. During the deployment of an application, users and groups are mapped to roles of the application using a deployment descriptor. 
The configuration of the deployment descriptor is normally the responsibility of the application deployer. During the execution of the application, the Java Authentication and Authorization Service (JAAS) API authenticates a user and creates a principal representing the user. The principal is then passed to an EJB.

Security in a Java EE environment can be viewed from different perspectives. When information is passed between clients and servers, transport-level security comes into play. Security at this level can include Secure HTTP (HTTPS) and Secure Sockets Layer (SSL). Messages can be sent across a network in the form of Simple Object Access Protocol (SOAP) messages. These messages can be encrypted. The EE container for EJBs provides application-level security, which is the focus of this article. Most servers provide unified security support between the web container and the EJB container. For example, calls from a servlet in a web container to an EJB are handled automatically, resulting in a flexible security mechanism.

Most of the recipes presented in this article are interrelated. If your intention is to try out the code examples, then make sure you cover the first two recipes as they provide the framework for the execution of the other recipes. In the first recipe, Creating the SecurityApplication, we create the foundation application for the remaining recipes. In the second recipe, Configuring the server to handle security, the basic steps needed to configure security for an application are presented. The use of declarative security is covered in the Controlling security using declarations recipe, while programmatic security is discussed in the next article on Controlling security programmatically. The Understanding and declaring roles recipe examines roles in more detail, and the Propagating identity recipe talks about how the identity of a user is managed in an application.

Creating the SecurityApplication

In this article, we will create a SecurityApplication built around a simple Voucher entity to persist travel information. This is a simplified version of an application that allows a user to submit a voucher and a manager to approve or disapprove it. The voucher entity itself will hold only minimal information.

Getting ready

The illustration of security will be based on a series of classes:

Voucher – An entity holding travel-related information
VoucherFacade – A facade class for the entity
AbstractFacade – The base class of the VoucherFacade
VoucherManager – A class used to manage vouchers and where most of the security techniques will be demonstrated
SecurityServlet – A servlet used to drive the demonstrations

All of these classes will be members of the packt package in the EJB module, except for the servlet, which will be placed in the servlet package of the WAR module.

How to do it...

Create a Java EE application called SecurityApplication with an EJB and a WAR module. Add a packt package to the EJB module and an entity called Voucher to the package.

Add five private instance variables to hold a minimal amount of travel information: name, destination, amount, approved, and an id. Also, add a default and a three-argument constructor to the class to initialize the name, destination, and amount fields. The approved field is also set to false. The intent of this field is to indicate whether the voucher has been approved or not. Though not shown below, also add getter and setter methods for these fields. You may want to add other methods, such as a toString method, if desired.
@Entity
public class Voucher implements Serializable {
    private String name;
    private String destination;
    private BigDecimal amount;
    private boolean approved;
    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    public Voucher() {
    }

    public Voucher(String name, String destination, BigDecimal amount) {
        this.name = name;
        this.destination = destination;
        this.amount = amount;
        this.approved = false;
    }
    ...
}

Next, add an AbstractFacade class and a VoucherFacade class derived from it. The VoucherFacade class is shown below. As with other facade classes found in previous chapters, the class provides a way of accessing an entity manager and the base class methods of the AbstractFacade class.

@Stateless
public class VoucherFacade extends AbstractFacade<Voucher> {
    @PersistenceContext(unitName = "SecurityApplication-ejbPU")
    private EntityManager em;

    protected EntityManager getEntityManager() {
        return em;
    }

    public VoucherFacade() {
        super(Voucher.class);
    }
}

Next, add a stateful EJB called VoucherManager. Inject an instance of the VoucherFacade class using the @EJB annotation. Also add an instance variable for a Voucher. We need a createVoucher method that accepts name, destination, and amount arguments, and then creates and subsequently persists the Voucher. Also, add get methods to return the name, destination, and amount of the voucher.

@Stateful
public class VoucherManager {
    @EJB
    VoucherFacade voucherFacade;
    Voucher voucher;

    public void createVoucher(String name, String destination, BigDecimal amount) {
        voucher = new Voucher(name, destination, amount);
        voucherFacade.create(voucher);
    }

    public String getName() {
        return voucher.getName();
    }

    public String getDestination() {
        return voucher.getDestination();
    }

    public BigDecimal getAmount() {
        return voucher.getAmount();
    }
    ...
}

Next, add three methods:

submit – This method is intended to be used by an employee to submit a voucher for approval by a manager. To help explain the example, display a message showing when the method has been submitted.
approve – This method is used by a manager to approve a voucher. It should set the approved field to true and return true.
reject – This method is used by a manager to reject a voucher. It should set the approved field to false and return false.

@Stateful
public class VoucherManager {
    ...
    public void submit() {
        System.out.println("Voucher submitted");
    }

    public boolean approve() {
        voucher.setApproved(true);
        return true;
    }

    public boolean reject() {
        voucher.setApproved(false);
        return false;
    }
}

To complete the application framework, add a package called servlet to the WAR module and a servlet called SecurityServlet to the package. Use the @EJB annotation to inject a VoucherManager instance field into the servlet. In the try block of the processRequest method, add code to create a new voucher and then use the submit method to submit it. Next, display a message indicating the submission of the voucher.
public class SecurityServlet extends HttpServlet {
    @EJB
    VoucherManager voucherManager;

    protected void processRequest(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        response.setContentType("text/html;charset=UTF-8");
        PrintWriter out = response.getWriter();
        try {
            voucherManager.createVoucher("Susan Billings", "SanFrancisco", BigDecimal.valueOf(2150.75));
            voucherManager.submit();
            out.println("<html>");
            out.println("<head>");
            out.println("<title>Servlet SecurityServlet</title>");
            out.println("</head>");
            out.println("<body>");
            out.println("<h3>Voucher was submitted</h3>");
            out.println("</body>");
            out.println("</html>");
        } finally {
            out.close();
        }
    }
    ...
}

Execute the SecurityServlet. Its output should display the "Voucher was submitted" message.

How it works...

In the Voucher entity, notice the use of BigDecimal for the amount field. This java.math package class is a better choice for currency data than float or double. Its use avoids problems which can occur with rounding. The @GeneratedValue annotation, used with the id field, tells the persistence provider to generate the primary key value automatically.

In the VoucherManager class, notice the injection of the stateless VoucherFacade session EJB into the stateful VoucherManager EJB. Each invocation of a VoucherFacade method may be executed against a different instance of VoucherFacade. This is the correct use of a stateless session EJB. The injection of a stateful EJB into a stateless EJB is not recommended.
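The recipes that follow use VoucherManager to demonstrate most of the security techniques. As a preview of where this framework is heading, a minimal sketch of how declarative security might eventually be applied to it is shown below; the employee and manager role names are assumptions used for illustration and are not yet configured at this point.

@Stateful
@DeclareRoles({"employee", "manager"})
public class VoucherManager {
    ...
    // Only users in the employee role may submit a voucher.
    @RolesAllowed("employee")
    public void submit() {
        System.out.println("Voucher submitted");
    }

    // Only users in the manager role may approve a voucher.
    @RolesAllowed("manager")
    public boolean approve() {
        voucher.setApproved(true);
        return true;
    }
    ...
}

The @DeclareRoles annotation simply names the roles the bean refers to, while @RolesAllowed restricts which of those roles may call each method; the server still has to map those roles to real users and groups, which is the sort of configuration covered in the Configuring the server to handle security recipe.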


RadRails Views

Packt
21 Oct 2009
6 min read
Opening the RadRails Views Some of the views that we will go through in this article are available as part of the Rails default perspective, which means you don't need to do anything special to open them; they will appear as tabbed views in a pane at the bottom of your workbench. Just look for the tab name of the view you want to see and click on it to make it visible. However, there are some views that are not opened by default, or maybe you closed them at some point accidentally, or maybe you changed to the Debug perspective and you want to display some of the RadRails views there. When you need to open a view whose tab is not displaying, you can go to the Window menu, and select the Show View option. If you are in the Rails perspective, all the available views will be displayed in that menu, as you can see in the screenshot above. When opening this menu from a different perspective, you will not see the RadRails views here, but you can select Other.... If this is the case, in the Show View dialog, most of the views will appear under the Ruby category, except for the Generators, Rails API, and Rake Tasks views, which are located under Rails. Documentation Views As happens with any modern programming language, Ruby has an extensive API. There are lots of libraries and classes and even with Ruby being an intuitive language with a neat consistent API, often we need to read the documentation. As you probably know, Ruby provides a standard documentation format called RDoc, which uses the comments in the source code to generate documentation. We can access this RDoc documentation in different ways, mainly in HTML format through a browser or by using the command-line tool RI. This produces a plain-text output directly at the command shell, in a similar way to the man command in a UNIX system. RadRails doesn't add any new functionality to the built-in documentation, but provides some convenient views so we can explore it without losing the context of our project's source. Ruby Interactive (RI) View This view provides a fast and comfortable way of browsing the local documentation in the same way as you would use RI from the command line. You can look either for a class or a method name. Just start typing at the input box at the top left corner of the view and the list below will display the matching entries. That's a nice improvement over the command line interface, since you can see the results as you type instead of having to run a complete search every time. If you know the name of both the class and the method you are looking for, then you can write them using the hash (pound) sign as a separator. For example, to get the documentation for the sum method of the class Enumerable you would write Enumerable#sum. The documentation will display in the right pane, with a convenient highlighting of the referenced methods and classes. Even if the search results of RI don't look very attractive compared to the output of the HTML-based documentation views, RI has the advantage of searching locally on your computer, so you can use it even when working off-line. Ruby Core, Ruby Standard Library, and Rails API There are three more views related to documentation in RadRails: Ruby Core API, Ruby Standard Library API, and Rails API. Unlike the RI view, these ones look for the information over the Internet, so you will not be able to use them unless you are on-line. 
On the other hand, the information is displayed in a more attractive way than with RI, and it provides links to the source code of the consulted methods, so if the documentation is not enough, you can always take a look at the inner details of the implementation. The Ruby Core API view displays the documentation of the classes included in Ruby's core. These are the classes you can directly use without a previous require statement. The documentation rendered is that at http://www.ruby-doc.org/core/. You are probably familiar with this type of layout, since it's the default RDoc output. The upper pane displays the navigation links, and the lower pane shows the detail of the documentation. The navigation is divided into three frames. The one to the left shows the files in which the source code is, the one in the middle shows the Classes and Modules, and in the third one you can find all the methods in the API. The Ruby Standard Library API is composed of all the classes and modules that are not a part of Ruby's core, but are typically distributed as a part of the Ruby installation. You can directly use these classes after a require statement in your code. The Ruby Standard Library API View displays the information from http://www.ruby-doc.org/stdlib. In this case, the navigation is the same as in Ruby Core, but with an additional area to the left, in which you can see all the available packages (the ones you would require for using the classes within your code). When you select a package link, you will see the files, classes, and methods for that single package. The last of the documentation views displays information about the Rails API. It includes the documentation of ActiveRecord, the ActionPack, ActiveSupport, and the rest of the Rails components. The information is obtained from http://api.rubyonrails.org. In this case the layout is slightly different because the information about the files, classes, and methods is displayed to the left instead at the top of the view. Apart from that, the behavior is identical to that of the Ruby Core API view. Since some of the API descriptions are fairly long, it can be convenient to maximize the documentation views when you are using them. Remember you can maximize any of the views by double-clicking its tab or by using the maximize icon on the view's toolbar. Double-clicking again will restore the view to the original size and position.

Understanding TDD

Packt
03 Sep 2015
31 min read
 In this article by Viktor Farcic and Alex Garcia, the authors of the book Test-Driven Java Development, we will go through TDD in a simple procedure of writing tests before the actual implementation. It's an inversion of a traditional approach where testing is performed after the code is written. (For more resources related to this topic, see here.) Red-green-refactor Test-driven development is a process that relies on the repetition of a very short development cycle. It is based on the test-first concept of extreme programming (XP) that encourages simple design with a high level of confidence. The procedure that drives this cycle is called red-green-refactor. The procedure itself is simple and it consists of a few steps that are repeated over and over again: Write a test. Run all tests. Write the implementation code. Run all tests. Refactor. Run all tests. Since a test is written before the actual implementation, it is supposed to fail. If it doesn't, the test is wrong. It describes something that already exists or it was written incorrectly. Being in the green state while writing tests is a sign of a false positive. Tests like these should be removed or refactored. While writing tests, we are in the red state. When the implementation of a test is finished, all tests should pass and then we will be in the green state. If the last test failed, implementation is wrong and should be corrected. Either the test we just finished is incorrect or the implementation of that test did not meet the specification we had set. If any but the last test failed, we broke something and changes should be reverted. When this happens, the natural reaction is to spend as much time as needed to fix the code so that all tests are passing. However, this is wrong. If a fix is not done in a matter of minutes, the best thing to do is to revert the changes. After all, everything worked not long ago. Implementation that broke something is obviously wrong, so why not go back to where we started and think again about the correct way to implement the test? That way, we wasted minutes on a wrong implementation instead of wasting much more time to correct something that was not done right in the first place. Existing test coverage (excluding the implementation of the last test) should be sacred. We change the existing code through intentional refactoring, not as a way to fix recently written code. Do not make the implementation of the last test final, but provide just enough code for this test to pass. Write the code in any way you want, but do it fast. Once everything is green, we have confidence that there is a safety net in the form of tests. From this moment on, we can proceed to refactor the code. This means that we are making the code better and more optimum without introducing new features. While refactoring is in place, all tests should be passing all the time. If, while refactoring, one of the tests failed, refactor broke an existing functionality and, as before, changes should be reverted. Not only that at this stage we are not changing any features, but we are also not introducing any new tests. All we're doing is making the code better while continuously running all tests to make sure that nothing got broken. At the same time, we're proving code correctness and cutting down on future maintenance costs. Once refactoring is finished, the process is repeated. It's an endless loop of a very short cycle. Speed is the key Imagine a game of ping pong (or table tennis). 
The game is very fast; sometimes it is hard to even follow the ball when professionals play the game. TDD is very similar. TDD veterans tend not to spend more than a minute on either side of the table (test and implementation). Write a short test and run all tests (ping), write the implementation and run all tests (pong), write another test (ping), write implementation of that test (pong), refactor and confirm that all tests are passing (score), and then repeat—ping, pong, ping, pong, ping, pong, score, serve again. Do not try to make the perfect code. Instead, try to keep the ball rolling until you think that the time is right to score (refactor). Time between switching from tests to implementation (and vice versa) should be measured in minutes (if not seconds). It's not about testing T in TDD is often misunderstood. Test-driven development is the way we approach the design. It is the way to force us to think about the implementation and to what the code needs to do before writing it. It is the way to focus on requirements and implementation of just one thing at a time—organize your thoughts and better structure the code. This does not mean that tests resulting from TDD are useless—it is far from that. They are very useful and they allow us to develop with great speed without being afraid that something will be broken. This is especially true when refactoring takes place. Being able to reorganize the code while having the confidence that no functionality is broken is a huge boost to the quality. The main objective of test-driven development is testable code design with tests as a very useful side product. Testing Even though the main objective of test-driven development is the approach to code design, tests are still a very important aspect of TDD and we should have a clear understanding of two major groups of techniques as follows: Black-box testing White-box testing The black-box testing Black-box testing (also known as functional testing) treats software under test as a black-box without knowing its internals. Tests use software interfaces and try to ensure that they work as expected. As long as functionality of interfaces remains unchanged, tests should pass even if internals are changed. Tester is aware of what the program should do, but does not have the knowledge of how it does it. Black-box testing is most commonly used type of testing in traditional organizations that have testers as a separate department, especially when they are not proficient in coding and have difficulties understanding it. This technique provides an external perspective on the software under test. Some of the advantages of black-box testing are as follows: Efficient for large segments of code Code access, understanding the code, and ability to code are not required Separation between user's and developer's perspectives Some of the disadvantages of black-box testing are as follows: Limited coverage, since only a fraction of test scenarios is performed Inefficient testing due to tester's lack of knowledge about software internals Blind coverage, since tester has limited knowledge about the application If tests are driving the development, they are often done in the form of acceptance criteria that is later used as a definition of what should be developed. Automated black-box testing relies on some form of automation such as behavior-driven development (BDD). 
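To ground the black-box idea in code, here is a minimal sketch of a black-box style JUnit test. The ShoppingCart class and its methods are hypothetical and exist only for this illustration; the point is that the test drives the object purely through its public interface and asserts on observable behavior, with no knowledge of how the cart stores its items internally.

import static org.junit.Assert.assertEquals;

import java.math.BigDecimal;
import org.junit.Test;

public class ShoppingCartSpec {

    // Only the public API (addItem and getTotal) is used;
    // the internal data structure of the cart is irrelevant to this test.
    @Test
    public void whenTwoItemsAreAddedThenTotalIsTheirSum() {
        ShoppingCart cart = new ShoppingCart();
        cart.addItem("book", new BigDecimal("10.00"));
        cart.addItem("pen", new BigDecimal("2.50"));
        assertEquals(new BigDecimal("12.50"), cart.getTotal());
    }
}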
The white-box testing White-box testing (also known as clear-box testing, glass-box testing, transparent-box testing, and structural testing) looks inside the software that is being tested and uses that knowledge as part of the testing process. If, for example, an exception should be thrown under certain conditions, a test might want to reproduce those conditions. White-box testing requires internal knowledge of the system and programming skills. It provides an internal perspective on the software under test. Some of the advantages of white-box testing are as follows: Efficient in finding errors and problems Required knowledge of internals of the software under test is beneficial for thorough testing Allows finding hidden errors Programmers introspection Helps optimizing the code Due to the required internal knowledge of the software, maximum coverage is obtained Some of the disadvantages of white-box testing are as follows: It might not find unimplemented or missing features Requires high-level knowledge of internals of the software under test Requires code access Tests are often tightly coupled to the implementation details of the production code, causing unwanted test failures when the code is refactored. White-box testing is almost always automated and, in most cases, has the form of unit tests. When white-box testing is done before the implementation, it takes the form of TDD. The difference between quality checking and quality assurance The approach to testing can also be distinguished by looking at the objectives they are trying to accomplish. Those objectives are often split between quality checking (QC) and quality assurance (QA). While quality checking is focused on defects identification, quality assurance tries to prevent them. QC is product-oriented and intends to make sure that results are as expected. On the other hand, QA is more focused on processes that assure that quality is built-in. It tries to make sure that correct things are done in the correct way. While quality checking had a more important role in the past, with the emergence of TDD, acceptance test-driven development (ATDD), and later on behavior-driven development (BDD), focus has been shifting towards quality assurance. Better tests No matter whether one is using black-box, white-box, or both types of testing, the order in which they are written is very important. Requirements (specifications and user stories) are written before the code that implements them. They come first so they define the code, not the other way around. The same can be said for tests. If they are written after the code is done, in a certain way, that code (and the functionalities it implements) is defining tests. Tests that are defined by an already existing application are biased. They have a tendency to confirm what code does, and not to test whether client's expectations are met, or that the code is behaving as expected. With manual testing, that is less the case since it is often done by a siloed QC department (even though it's often called QA). They tend to work on tests' definition in isolation from developers. That in itself leads to bigger problems caused by inevitably poor communication and the police syndrome where testers are not trying to help the team to write applications with quality built-in, but to find faults at the end of the process. The sooner we find problems, the cheaper it is to fix them. 
Tests written in the TDD fashion (including its flavors such as ATDD and BDD) are an attempt to develop applications with quality built-in from the very start. It's an attempt to avoid having problems in the first place. Mocking In order for tests to run fast and provide constant feedback, code needs to be organized in such a way that the methods, functions, and classes can be easily replaced with mocks and stubs. A common word for this type of replacements of the actual code is test double. Speed of the execution can be severely affected with external dependencies; for example, our code might need to communicate with the database. By mocking external dependencies, we are able to increase that speed drastically. Whole unit tests suite execution should be measured in minutes, if not seconds. Designing the code in a way that it can be easily mocked and stubbed, forces us to better structure that code by applying separation of concerns. More important than speed is the benefit of removal of external factors. Setting up databases, web servers, external APIs, and other dependencies that our code might need, is both time consuming and unreliable. In many cases, those dependencies might not even be available. For example, we might need to create a code that communicates with a database and have someone else create a schema. Without mocks, we would need to wait until that schema is set. With or without mocks, the code should be written in a way that we can easily replace one dependency with another. Executable documentation Another very useful aspect of TDD (and well-structured tests in general) is documentation. In most cases, it is much easier to find out what the code does by looking at tests than the implementation itself. What is the purpose of some methods? Look at the tests associated with it. What is the desired functionality of some part of the application UI? Look at the tests associated with it. Documentation written in the form of tests is one of the pillars of TDD and deserves further explanation. The main problem with (traditional) software documentation is that it is not up to date most of the time. As soon as some part of the code changes, the documentation stops reflecting the actual situation. This statement applies to almost any type of documentation, with requirements and test cases being the most affected. The necessity to document code is often a sign that the code itself is not well written.Moreover, no matter how hard we try, documentation inevitably gets outdated. Developers shouldn't rely on system documentation because it is almost never up to date. Besides, no documentation can provide as detailed and up-to-date description of the code as the code itself. Using code as documentation, does not exclude other types of documents. The key is to avoid duplication. If details of the system can be obtained by reading the code, other types of documentation can provide quick guidelines and a high-level overview. Non-code documentation should answer questions such as what the general purpose of the system is and what technologies are used by the system. In many cases, a simple README is enough to provide the quick start that developers need. Sections such as project description, environment setup, installation, and build and packaging instructions are very helpful for newcomers. From there on, code is the bible. Implementation code provides all needed details while test code acts as the description of the intent behind the production code. 
Tests are executable documentation with TDD being the most common way to create and maintain it. Assuming that some form of Continuous Integration (CI) is in use, if some part of test-documentation is incorrect, it will fail and be fixed soon afterwards. CI solves the problem of incorrect test-documentation, but it does not ensure that all functionality is documented. For this reason (among many others), test-documentation should be created in the TDD fashion. If all functionality is defined as tests before the implementation code is written and execution of all tests is successful, then tests act as a complete and up-to-date information that can be used by developers. What should we do with the rest of the team? Testers, customers, managers, and other non coders might not be able to obtain the necessary information from the production and test code. As we saw earlier, two most common types of testing are black-box and white-box testing. This division is important since it also divides testers into those who do know how to write or at least read code (white-box testing) and those who don't (black-box testing). In some cases, testers can do both types. However, more often than not, they do not know how to code so the documentation that is usable for developers is not usable for them. If documentation needs to be decoupled from the code, unit tests are not a good match. That is one of the reasons why BDD came in to being. BDD can provide documentation necessary for non-coders, while still maintaining the advantages of TDD and automation. Customers need to be able to define new functionality of the system, as well as to be able to get information about all the important aspects of the current system. That documentation should not be too technical (code is not an option), but it still must be always up to date. BDD narratives and scenarios are one of the best ways to provide this type of documentation. Ability to act as acceptance criteria (written before the code), be executed frequently (preferably on every commit), and be written in natural language makes BDD stories not only always up to date, but usable by those who do not want to inspect the code. Documentation is an integral part of the software. As with any other part of the code, it needs to be tested often so that we're sure that it is accurate and up to date. The only cost-effective way to have accurate and up-to-date information is to have executable documentation that can be integrated into your continuous integration system. TDD as a methodology is a good way to move towards this direction. On a low level, unit tests are a best fit. On the other hand, BDD provides a good way to work on a functional level while maintaining understanding accomplished using natural language. No debugging We (authors of this article) almost never debug applications we're working on! This statement might sound pompous, but it's true. We almost never debug because there is rarely a reason to debug an application. When tests are written before the code and the code coverage is high, we can have high confidence that the application works as expected. This does not mean that applications written using TDD do not have bugs—they do. All applications do. However, when that happens, it is easy to isolate them by simply looking for the code that is not covered with tests. Tests themselves might not include some cases. In that situation, the action is to write additional tests. 
With high code coverage, finding the cause of some bug is much faster through tests than spending time debugging line by line until the culprit is found. With all this in mind, let's go through the TDD best practices. Best practices Coding best practices are a set of informal rules that the software development community has learned over time, which can help improve the quality of software. While each application needs a level of creativity and originality (after all, we're trying to build something new or better), coding practices help us avoid some of the problems others faced before us. If you're just starting with TDD, it is a good idea to apply some (if not all) of the best practices generated by others. For easier classification of test-driven development best practices, we divided them into four categories: Naming conventions Processes Development practices Tools As you'll see, not all of them are exclusive to TDD. Since a big part of test-driven development consists of writing tests, many of the best practices presented in the following sections apply to testing in general, while others are related to general coding best practices. No matter the origin, all of them are useful when practicing TDD. Take the advice with a certain dose of skepticism. Being a great programmer is not only about knowing how to code, but also about being able to decide which practice, framework or style best suits the project and the team. Being agile is not about following someone else's rules, but about knowing how to adapt to circumstances and choose the best tools and practices that suit the team and the project. Naming conventions Naming conventions help to organize tests better, so that it is easier for developers to find what they're looking for. Another benefit is that many tools expect that those conventions are followed. There are many naming conventions in use, and those presented here are just a drop in the ocean. The logic is that any naming convention is better than none. Most important is that everyone on the team knows what conventions are being used and are comfortable with them. Choosing more popular conventions has the advantage that newcomers to the team can get up to speed fast since they can leverage existing knowledge to find their way around. Separate the implementation from the test code Benefits: It avoids accidentally packaging tests together with production binaries; many build tools expect tests to be in a certain source directory. Common practice is to have at least two source directories. Implementation code should be located in src/main/java and test code in src/test/java. In bigger projects, the number of source directories can increase but the separation between implementation and tests should remain as is. Build tools such as Gradle and Maven expect source directories separation as well as naming conventions. You might have noticed that the build.gradle files that we used throughout this article did not have explicitly specified what to test nor what classes to use to create a .jar file. Gradle assumes that tests are in src/test/java and that the implementation code that should be packaged into a jar file is in src/main/java. Place test classes in the same package as implementation Benefits: Knowing that tests are in the same package as the code helps finding code faster. As stated in the previous practice, even though packages are the same, classes are in the separate source directories. All exercises throughout this article followed this convention. 
Name test classes in a similar fashion to the classes they test

Benefits: Knowing that tests have a similar name to the classes they are testing helps in finding the classes faster.

One commonly used practice is to name tests the same as the implementation classes, with the suffix Test. If, for example, the implementation class is TickTackToe, the test class should be TickTackToeTest. However, in all cases, with the exception of those we used throughout the refactoring exercises, we prefer the suffix Spec. It helps to make a clear distinction that test methods are primarily created as a way to specify what will be developed. Testing is a great subproduct of those specifications.

Use descriptive names for test methods

Benefits: It helps in understanding the objective of tests.

Using method names that describe tests is beneficial when trying to figure out why some tests failed or when the coverage should be increased with more tests. It should be clear what conditions are set before the test, what actions are performed, and what is the expected outcome. There are many different ways to name test methods, and our preferred method is to name them using the Given/When/Then syntax used in BDD scenarios. Given describes (pre)conditions, When describes actions, and Then describes the expected outcome. If some test does not have preconditions (usually set using @Before and @BeforeClass annotations), Given can be skipped. Let's take a look at one of the specifications we created for our TickTackToe application:

@Test
public void whenPlayAndWholeHorizontalLineThenWinner() {
    ticTacToe.play(1, 1); // X
    ticTacToe.play(1, 2); // O
    ticTacToe.play(2, 1); // X
    ticTacToe.play(2, 2); // O
    String actual = ticTacToe.play(3, 1); // X
    assertEquals("X is the winner", actual);
}

Just by reading the name of the method, we can understand what it is about. When we play and the whole horizontal line is populated, then we have a winner. Do not rely only on comments to provide information about the test objective. Comments do not appear when tests are executed from your favorite IDE, nor do they appear in reports generated by CI or build tools.

Processes

TDD processes are the core set of practices. Successful implementation of TDD depends on practices described in this section.

Write a test before writing the implementation code

Benefits: It ensures that testable code is written; ensures that every line of code gets tests written for it.

By writing or modifying the test first, the developer is focused on requirements before starting to work on the implementation code. This is the main difference compared to writing tests after the implementation is done. The additional benefit is that with the tests written first, we are avoiding the danger that the tests work as quality checking instead of quality assurance. We're trying to ensure that quality is built in, as opposed to checking later whether we met quality objectives.

Only write new code when the test is failing

Benefits: It confirms that the test does not work without the implementation.

If tests are passing without the need to write or modify the implementation code, then either the functionality is already implemented or the test is defective. If new functionality is indeed missing, then the test always passes and is therefore useless. Tests should fail for the expected reason. Even though there are no guarantees that the test is verifying the right thing, with fail first and for the expected reason, confidence that verification is correct should be high.
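As a small illustration of the fail-first rule, the sketch below adds a hypothetical multiply capability to a StringCalculator class similar to the one used in the examples later in this article; the method name and its behavior are assumptions made up for this illustration.

// Step 1 (red): the specification is written first and must fail,
// because StringCalculator does not have a multiply method yet.
@Test
public final void whenTwoNumbersAreUsedThenMultiplyReturnsTheirProduct() {
    Assert.assertEquals(3 * 6, StringCalculator.multiply("3,6"));
}

// Step 2 (green): only after the test has failed for the expected reason
// is the simplest implementation written to make it pass.
public static int multiply(String numbers) {
    String[] parts = numbers.split(",");
    return Integer.parseInt(parts[0]) * Integer.parseInt(parts[1]);
}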
Rerun all tests every time the implementation code changes Benefits: It ensures that there is no unexpected side effect caused by code changes. Every time any part of the implementation code changes, all tests should be run. Ideally, tests are fast to execute and can be run by the developer locally. Once code is submitted to version control, all tests should be run again to ensure that there was no problem due to code merges. This is specially important when more than one developer is working on the code. Continuous integration tools such as Jenkins (http://jenkins-ci.org/), Hudson (http://hudson-ci.org/), Travis (https://travis-ci.org/), and Bamboo (https://www.atlassian.com/software/bamboo) should be used to pull the code from the repository, compile it, and run tests. All tests should pass before a new test is written Benefits: The focus is maintained on a small unit of work; implementation code is (almost) always in working condition. It is sometimes tempting to write multiple tests before the actual implementation. In other cases, developers ignore problems detected by existing tests and move towards new features. This should be avoided whenever possible. In most cases, breaking this rule will only introduce technical debt that will need to be paid with interest. One of the goals of TDD is that the implementation code is (almost) always working as expected. Some projects, due to pressures to reach the delivery date or maintain the budget, break this rule and dedicate time to new features, leaving the task of fixing the code associated with failed tests for later. These projects usually end up postponing the inevitable. Refactor only after all tests are passing Benefits: This type of refactoring is safe. If all implementation code that can be affected has tests and they are all passing, it is relatively safe to refactor. In most cases, there is no need for new tests. Small modifications to existing tests should be enough. The expected outcome of refactoring is to have all tests passing both before and after the code is modified. Development practices Practices listed in this section are focused on the best way to write tests. Write the simplest code to pass the test Benefits: It ensures cleaner and clearer design; avoids unnecessary features. The idea is that the simpler the implementation, the better and easier it is to maintain the product. The idea adheres to the keep it simple stupid (KISS) principle. This states that most systems work best if they are kept simple rather than made complex; therefore, simplicity should be a key goal in design, and unnecessary complexity should be avoided. Write assertions first, act later Benefits: This clarifies the purpose of the requirements and tests early. Once the assertion is written, the purpose of the test is clear and the developer can concentrate on the code that will accomplish that assertion and, later on, on the actual implementation. Minimize assertions in each test Benefits: This avoids assertion roulette; allows execution of more asserts. If multiple assertions are used within one test method, it might be hard to tell which of them caused a test failure. This is especially common when tests are executed as part of the continuous integration process. If the problem cannot be reproduced on a developer's machine (as may be the case if the problem is caused by environmental issues), fixing the problem may be difficult and time consuming. When one assert fails, execution of that test method stops. 
If there are other asserts in that method, they will not be run and information that can be used in debugging is lost. Last but not least, having multiple asserts creates confusion about the objective of the test. This practice does not mean that there should always be only one assert per test method. If there are other asserts that test the same logical condition or unit of functionality, they can be used within the same method. Let's go through a few examples:

@Test
public final void whenOneNumberIsUsedThenReturnValueIsThatSameNumber() {
    Assert.assertEquals(3, StringCalculator.add("3"));
}

@Test
public final void whenTwoNumbersAreUsedThenReturnValueIsTheirSum() {
    Assert.assertEquals(3+6, StringCalculator.add("3,6"));
}

The preceding code contains two specifications that clearly define what the objective of those tests is. By reading the method names and looking at the assert, there should be clarity on what is being tested. Consider the following for example:

@Test
public final void whenNegativeNumbersAreUsedThenRuntimeExceptionIsThrown() {
    RuntimeException exception = null;
    try {
        StringCalculator.add("3,-6,15,-18,46,33");
    } catch (RuntimeException e) {
        exception = e;
    }
    Assert.assertNotNull("Exception was not thrown", exception);
    Assert.assertEquals("Negatives not allowed: [-6, -18]", exception.getMessage());
}

This specification has more than one assert, but they are testing the same logical unit of functionality. The first assert is confirming that the exception exists, and the second that its message is correct. When multiple asserts are used in one test method, they should all contain messages that explain the failure. This way, debugging the failed assert is easier. In the case of one assert per test method, messages are welcome but not necessary, since it should be clear from the method name what the objective of the test is.

@Test
public final void whenAddIsUsedThenItWorks() {
    Assert.assertEquals(0, StringCalculator.add(""));
    Assert.assertEquals(3, StringCalculator.add("3"));
    Assert.assertEquals(3+6, StringCalculator.add("3,6"));
    Assert.assertEquals(3+6+15+18+46+33, StringCalculator.add("3,6,15,18,46,33"));
    Assert.assertEquals(3+6+15, StringCalculator.add("3,6\n15"));
    Assert.assertEquals(3+6+15, StringCalculator.add("//;\n3;6;15"));
    Assert.assertEquals(3+1000+6, StringCalculator.add("3,1000,1001,6,1234"));
}

This test has many asserts. It is unclear what the functionality is, and if one of them fails, it is unknown whether the rest would work or not. It might be hard to understand the failure when this test is executed through some of the CI tools.

Do not introduce dependencies between tests

Benefits: The tests work in any order independently, whether all or only a subset is run.

Each test should be independent from the others. Developers should be able to execute any individual test, a set of tests, or all of them. Often, due to the test runner's design, there is no guarantee that tests will be executed in any particular order. If there are dependencies between tests, they might easily be broken with the introduction of new ones.

Tests should run fast

Benefits: These tests are used often.

If it takes a lot of time to run tests, developers will stop using them or run only a small subset related to the changes they are making. The benefit of fast tests, besides fostering their usage, is quick feedback. The sooner the problem is detected, the easier it is to fix it. Knowledge about the code that produced the problem is still fresh.
If the developer already started working on the next feature while waiting for the completion of the execution of the tests, he might decide to postpone fixing the problem until that new feature is developed. On the other hand, if he drops his current work to fix the bug, time is lost in context switching. Tests should be so quick that developers can run all of them after each change without getting bored or frustrated. Use test doubles Benefits: This reduces code dependency and test execution will be faster. Mocks are prerequisites for fast execution of tests and ability to concentrate on a single unit of functionality. By mocking dependencies external to the method that is being tested, the developer is able to focus on the task at hand without spending time in setting them up. In the case of bigger teams, those dependencies might not even be developed. Also, the execution of tests without mocks tends to be slow. Good candidates for mocks are databases, other products, services, and so on. Use set-up and tear-down methods Benefits: This allows set-up and tear-down code to be executed before and after the class or each method. In many cases, some code needs to be executed before the test class or before each method in a class. For this purpose, JUnit has @BeforeClass and @Before annotations that should be used as the setup phase. @BeforeClass executes the associated method before the class is loaded (before the first test method is run). @Before executes the associated method before each test is run. Both should be used when there are certain preconditions required by tests. The most common example is setting up test data in the (hopefully in-memory) database. At the opposite end are @After and @AfterClass annotations, which should be used as the tear-down phase. Their main purpose is to destroy data or a state created during the setup phase or by the tests themselves. As stated in one of the previous practices, each test should be independent from the others. Moreover, no test should be affected by the others. Tear-down phase helps to maintain the system as if no test was previously executed. Do not use base classes in tests Benefits: It provides test clarity. Developers often approach test code in the same way as implementation. One of the common mistakes is to create base classes that are extended by tests. This practice avoids code duplication at the expense of tests clarity. When possible, base classes used for testing should be avoided or limited. Having to navigate from the test class to its parent, parent of the parent, and so on in order to understand the logic behind tests introduces often unnecessary confusion. Clarity in tests should be more important than avoiding code duplication. Tools TDD, coding and testing in general, are heavily dependent on other tools and processes. Some of the most important ones are as follows. Each of them is too big a topic to be explored in this article, so they will be described only briefly. Code coverage and Continuous integration (CI) Benefits: It gives assurance that everything is tested Code coverage practice and tools are very valuable in determining that all code, branches, and complexity is tested. Some of the tools are JaCoCo (http://www.eclemma.org/jacoco/), Clover (https://www.atlassian.com/software/clover/overview), and Cobertura (http://cobertura.github.io/cobertura/). Continuous Integration (CI) tools are a must for all except the most trivial projects. 
Some of the most used tools are Jenkins (http://jenkins-ci.org/), Hudson (http://hudson-ci.org/), Travis (https://travis-ci.org/), and Bamboo (https://www.atlassian.com/software/bamboo). Use TDD together with BDD Benefits: Both developer unit tests and functional customer facing tests are covered. While TDD with unit tests is a great practice, in many cases, it does not provide all the testing that projects need. TDD is fast to develop, helps the design process, and gives confidence through fast feedback. On the other hand, BDD is more suitable for integration and functional testing, provides better process for requirement gathering through narratives, and is a better way of communicating with clients through scenarios. Both should be used, and together they provide a full process that involves all stakeholders and team members. TDD (based on unit tests) and BDD should be driving the development process. Our recommendation is to use TDD for high code coverage and fast feedback, and BDD as automated acceptance tests. While TDD is mostly oriented towards white-box, BDD often aims at black-box testing. Both TDD and BDD are trying to focus on quality assurance instead of quality checking. Summary You learned that it is a way to design the code through short and repeatable cycle called red-green-refactor. Failure is an expected state that should not only be embraced, but enforced throughout the TDD process. This cycle is so short that we move from one phase to another with great speed. While code design is the main objective, tests created throughout the TDD process are a valuable asset that should be utilized and severely impact on our view of traditional testing practices. We went through the most common of those practices such as white-box and black-box testing, tried to put them into the TDD perspective, and showed benefits that they can bring to each other. You discovered that mocks are a very important tool that is often a must when writing tests. Finally, we discussed how tests can and should be utilized as executable documentation and how TDD can make debugging much less necessary. Now that we are armed with theoretical knowledge, it is time to set up the development environment and get an overview and comparison of different testing frameworks and tools. Now we will walk you through all the TDD best practices in detail and refresh the knowledge and experience you gained throughout this article. Resources for Article: Further resources on this subject: RESTful Services JAX-RS 2.0[article] Java Refactoring in NetBeans[article] Developing a JavaFX Application for iOS [article]


Moodle 2.0 FAQs

Packt
14 Oct 2010
8 min read
Moodle 2.0 First Look

Discover what's new in Moodle 2.0, how the new features work, and how it will impact you
Get an insight into the new features of Moodle 2.0
Discover the benefits of brand new additions such as Comments and Conditional Activities
Master the changes in administration with Moodle 2.0
The first and only book that covers all of the fantastic new features of Moodle 2.0

Read more about this book

(For more resources on Moodle, see here.)

Question: What are the basic requirements for Moodle 2.0 to function?

Answer: It's important that either you (if you're doing this yourself) or your Moodle admin or webhost are aware of the requirements for Moodle 2.0. It needs:

PHP 5.2.8 or later
One of the following databases: MySQL 5.0.25 or later (InnoDB storage engine highly recommended), PostgreSQL 8.3 or later, Oracle 10.2 or later, or MS SQL 2005 or later
One of the following browsers: Firefox 3 or later, Safari 3 or later, Google Chrome 4 or later, Opera 9 or later, or MS Internet Explorer 7 or later

Question: How can I upgrade to Moodle 2.0?

Answer: If you already have an installation of Moodle, you will find instructions for upgrading in the docs on the main Moodle site at http://docs.moodle.org/en/Upgrading_to_Moodle_2.0. If you are upgrading from an earlier version of Moodle (such as 1.8), then you should upgrade to Moodle 1.9 first before going to 2.0. You must update incrementally; shortcuts – for example, updating from 1.7 directly to 2.0 – are simply not possible. Read the docs carefully if you are planning on upgrading from very early versions such as 1.5 or 1.6.

Question: What are the potential problems with upgrading?

Answer: There are a few challenges that one may come across while upgrading from Moodle 1.9 to 2.0, which are listed below:

Themes: The way themes work has changed completely. While this allows for more flexible coding and templating, it does mean that if you had a customized theme it will not transfer over to Moodle 2.0 without some redesigning beforehand.
Third-party add-ons and custom code: The same applies to third-party add-ons and custom code: it is highly unlikely they will work without significant alterations.
Backup and Restore: Making courses from 1.9 or earlier restore into Moodle 2.0 has proved very problematic and is still not entirely achievable. Although this is a priority for the Moodle developers, there is at the time of writing only a workaround involving restoring your course to a 1.9 site and then upgrading it to 2.0.

Question: How can teachers and students manage their learning?

Answer: Two new features of Moodle 2.0 help teachers and students manage their learning:

Conditional activities: A way to organize a course so that tasks are only available dependent on certain grades being obtained or criteria being met beforehand.
Completion tracking: A way for students to have checkboxes next to their tasks that are either automatically marked as complete or which students themselves can manually mark if they feel they've finished the exercise – or alternatively a way for whole courses to be checked off as finished.

Question: What are the changes in the Themes structure for Moodle 2.0?

Answer: The themes structure has been completely rewritten for Moodle 2.0. Themes that worked in 1.9 needed to be updated to work in 2.0. There is a wide variety of attractive new themes available.
If you need to update your own theme or would like information on Moodle 2.0 theming, you will find the documentation at http://docs.moodle.org/en/Development:Themes_2.0 helpful. New to Moodle 2.0 are the following: Designer Mode: Turn this on so you're not served cached versions of themes, if you are designing themes or developing code. Allow theme changes in the URL: Enabling this will let users alter their theme via their Moodle URL using the syntax Allow blocks to use the dock: Enabling this will allow users to dock blocks if the theme supports it.   Question: Can we customize the MyMoodle page in Moodle 2.0? Answer: Yes, we can customize the default MyMoodle page. It's worth noting that on the MyMoodle page we can add blocks to the middle as well as the sides. With editing turned on, we're given the option to move a block to a central location.   Question: Can we Comment on the Moodle blog? Answer: Commenting on the Moodle blog is a bit of a workaround really; the Moodle blog doesn't really have a built-in commenting facility like, say WordPress. Rather, Moodle is making use of the new Comments feature which ordinarily appears as a block anywhere you want to add it.   Question: What are the improvements in the Blog option in Moodle 2.0 as compared to the previous version? Answer: There has always been a blogging option in a standard Moodle install. However, some users have found it unsatisfactory because of the following reasons: The blog is attached to the user profile so you can only have one blog There is no way to attach a blog or blog entry to a particular course There is no way for other people to comment on your blog For this reason, alternative blog systems (such as the contributed OU blog module) have become popular as they give users a wider range of options. The standard blog in Moodle 2.0 has changed, and now: A blog entry can optionally be associated with a course It is possible to comment on a blog entry Blog entries from outside of Moodle can be copied in It is now possible to search blog entries   Question: How to enable/disable the docking facility in Moodle 2.0? Answer: The docking facility can be managed in Moodle 2.0 as follows: The "docking" facility may be enabled or disabled for themes in Site Administration | Appearance | Themes | Theme settings. If we click the icon shown in the following screenshot, we also have the option of "docking" this over to the far left as a narrow tab.   Question: Has the HTML editor been replaced by some other editing tool? What is its advantage? Answer: In Moodle 2.0, the HTML editor has been replaced with a version known as Tiny MCE, a very popular Open Source editor you might have encountered in content management systems or blogging software such as WordPress. Along with Internet Explorer and Firefox, it will work with web browsers such as Safari, Chrome, and Opera, unlike Moodle's previous HTML editor. The following screenshot shows the new editor (on the bottom) with the original editor (on the top): There are many more options available to us when adding descriptions of our materials or summaries of our courses. However, one of the most powerful new features is the ability to add and embed media directly from within this new HTML editor.   Question: What have been the improvements related to Moodle Quiz? 
Answer: The following are the improvements to Moodle Quiz: The set up page has been simplified Creating questions has been simplified It's possible to flag questions for later referral Questions can be accessed with one click in the post-quiz review and correct/ incorrect questions are color-coded in an easy-to access navigation block   Question: What are Cohorts? Answer: Cohort is Moodle 2.0's take on the long wished for site-wide groups. When we click on the link we're taken to the following screen where we click on Add to enter details of the cohort we want to create:   Question: Has there been any modification in the Filters menu as compared to the previous versions On/Off options? Answer: The Manage Filters in Moodle 2.0 equates to the Filters menu in Moodle 1.9. The Manage Filters screen looks like the following screenshot (note—the screenshot only displays the first three filters): Previously, filters were either On or Off. Now we have three choices: Disabled: Nobody, in any course, can enable a filter. On: A filter is enabled by default and teachers can disable if they wish to. Off but available: A filter is off but teachers can enable it in their own courses.   Question: What are the changes in Site Administration? Answer: Perhaps the simplest way to explore this is to look at how this menu has altered since Moodle 1.9: Notifications/Registrations: A small but important change in Moodle 1.9, the Notifications screen contained a button you could click to register your site with http://moodle.org/. The page this took you to now has its own billing in Moodle 2.0, as the Registration link. Community hubs: The main Moodle community hub is known as MOOCH and you register with it here. You can also register your site with other community hubs. If you register with hubs, then teachers can add a Community block in their courses where users can search for a suitable course to enroll in or download. Summary In this article we took a look at the queries regarding what Moodle 2.0 has to offer with the exciting new modules and enhanced features, and the major overhauls in the file uploading and navigation system. Further resources on this subject: Moodle 1.9 Math [Book] Moodle Administration [Book] Moodle 1.9 for Teaching Special Education Children (5-10): Beginner's Guide [Book] Moodle 2.0: What's New in Add a Resource [Article] What's New in Moodle 2.0 [Article]