
How-To Tutorials - Web Development

1802 Articles

User Access Control in Drupal 6

Packt
23 Oct 2009
16 min read
Before we continue, it is worth pointing out that while adding the basic functionality, you are more than likely using the administrative user (user number 1) for all the site's development needs. That is absolutely fine, but once the major changes to the site are complete, you should begin using a normal administrative user that has only the permissions required to complete your day-to-day tasks. The next section highlights the general philosophy behind user access, which should make the reason for this clear.

Planning an Access Policy

When you think about how your site should work, focus on what will be required of yourself, other community members, or even anonymous users. For instance:

- Will there be a team of moderators working to ensure that the content of the site conforms to the dictates of good taste and avoids material that is tantamount to hate speech, and so on?
- Will there be subject experts who are allowed to create and maintain their own content?
- How much will anonymous visitors be allowed to become involved, or will they be forced to merely window shop without being able to contribute?

Some of you might feel that the site should grow organically with the community, and so you want to be extremely flexible in your approach. However, you can take it as given that Drupal's access policies are already flexible, given how easy they are to reconfigure, so it is good practice to start out with a sensible set of access rules, even if they are going to change over time. If you need to make modifications later, so be it, but at least there will be a coherent set of rules from the start. The first and foremost rule of security that can be applied directly to our situation is:

Grant a user permissions sufficient for completing the intended task, and no more!

Our entire approach is going to be governed by this rule. With a bit of thought you should be able to see why this is so important. The last thing anyone wants is for an anonymous user to be able to modify the personal blog of a respected industry expert. This means that each type of user should have carefully controlled permissions that effectively block their ability to act outside the scope of their remit. One upshot of this is that it is better to create a larger number of specific roles than to create one or two generic roles and allow everyone to use those catch-all permissions.

A role constitutes a number of permissions that define what actions any members of that role can and can't perform. We will explore roles in detail in the next section!

Drupal gives us fine-grained control over what users can accomplish, and you should make good use of this facility. It may help to think of your access control using the following figure (this does not necessarily represent the actual roles on your site—it's just an example):

The shaded region represents the total set of permissions available for the site. Contained within this set are the various roles that exist either by default, like the Anonymous users role, or that you create in order to cater for the different types of users the site will require—in this case, the Blog Writer users and Forum Moderator users roles. From the diagram you can see that the Anonymous users role has the smallest set of permissions, because it occupies the smallest area of the diagram.
This set of permissions is totally encapsulated by the Forum Moderator users and Blog Writer users roles—meaning that forum moderators and blog writers can do everything an anonymous user can, and a whole lot more. Remember, it is not compulsory that forum moderators encapsulate all the permissions of anonymous users. You can assign any permissions to any role—it's just that in this context it makes sense that a forum moderator should be able to do everything an anonymous user can and more. Of course, the blog writers have a slightly different remit. While they share some privileges in common with the forum moderators, they also have a few of their own. Your permissions as the primary or administrative user encompass the entire set, because there should be nothing that you cannot control.

It is up to you to decide which roles are best for the site, but before attempting this it is important to ask: what are roles and how are they used in the first place? To answer this question, let's take a look at the practical side of things in more detail.

Roles

It may seem a bit odd that we are not beginning a practical look at access control with a discussion of users. After all, it is all about what users can and cannot do! The problem with immediately talking about users is that the focus of a single user is too narrow, and we can learn far more about controlling access by taking a broader view using roles. Once we have learned everything there is to know about roles, actually working with users becomes a trivial matter.

As mentioned, a user role in Drupal defines a set of rules that must be obeyed by all the users in that role. It may be helpful to think of a role as a character in a play. In a play, an actor must always be true to their character (in the same way a user must be faithful to their role in Drupal)—in other words, there is a defined way to behave and the character never deviates (no matter which actor portrays the character).

Creating a role in Drupal is very easy. Click the User management link under Administer and select the Roles tab to bring up the following:

As you can see, we have two roles already defined by default—the anonymous user and the authenticated user. It is not possible to change these, and so the Operations column is permanently set to locked. To begin with, the anonymous user (any user who is browsing the site without logging in) has very few permissions set, and you would more than likely want to keep it this way, despite the fact that it is possible to give anonymous users any and all permissions. Similarly, the authenticated user, by default, has only a few more permissions than the anonymous user, and it is also sensible to keep these to a minimum. We will see in a little while how to go about deciding who should have which permissions.

In order to add a new role, type in a name for the role, click Add role, and you're done. But what name do you want to add? That's the key question! If you are unsure about what name to use, then it is most likely you haven't defined the purpose of the role properly. To see how this is done, let's assume we require a forum moderator who will be a normal user in every way, except for the ability to work directly on the forums (to take some of the burden of responsibility off the administrator's hands), to create new topics, and to edit content if necessary.
To get the ball rolling, type in forum moderator and click Add role—actually, you might even want to be more specific and use something like conservation forum moderator if there will be teams of forum moderators—you get the general idea. Now the roles page should display the new role with the option to edit it, shown in the Operations column. Click edit role in order to change the name of the role or delete it completely. Alternatively, click edit permissions to deal with the permissions for this specific role (we discuss permissions in a moment, so let's leave this for now).

Our work is just beginning, because now we need to grant or deny the various permissions that the forum moderator role will need in order to successfully fulfill its purpose. New roles are not given any permissions at all to begin with—this makes sense, because the last thing we want is to create a role only to find that it has the same permissions as the administrative user. Chances are you will need to add several roles depending on the needs of the site, so add at least a blogger user that can edit their own blog—we will need a few different types to play with later on. Let's move on and take a look at how to flesh out this new role by setting permissions.

Permissions

In order to work with permissions, click the Permissions link under User management and you should be presented with a screen much like the following (notice the new forum moderator role on the right-hand side of the page):

As you can see, this page lists all of the available permissions down the left-hand column and allows you to enable or disable each permission by checking or un-checking boxes in the relevant column. It is easy enough to see that one traverses the list, selecting those permissions required for each role. What is not so easy is determining what should and shouldn't be enabled in the first place.

Notice too that the permissions given in the list on the left-hand side pertain to specific modules. This means that if we change the site's setup by adding or removing modules, then we will also have to change the permissions on this page. Most times when a module is added, you will need to ensure that its permissions are set as required, because by default no permissions are granted.

What else can we learn from the permissions page shown in the previous screenshot? Well, what does each permission mean, precisely? There are quite a few verbs that allow for completely different actions. The following list covers the more common, generic ones, although you might find one or two others crop up every now and then to cater for a specific module:

- administer: gives the user the ability to affect the function of a module. For example, granting administer rights to the locale module means that the user can add or remove languages, manage strings, and even export .po files. This permission should only ever be given to trusted users, and never to anonymous users.
- access: gives the user the ability to make use of a module without being able to affect it in any way. For example, granting access rights to the comment module allows a user to view comments without being able to delete, edit, or reply to them.
- create: gives the user the ability to create content of some sort. For example, granting rights to create stories allows users to do so, but does not also give them the ability to edit those stories.
- edit any/own: gives the user the ability to work with either anyone's content or specifically the content they have created—depending on whether edit any or edit own is selected. For example, granting edit own rights to the blog module means that the user can modify their own blogs at will.
- delete any/own: applies to content-related modules such as Node and empowers users to remove either anyone's content or only content posted by themselves. For example, setting delete own blog entry allows users to take back any blog postings they may regret having published.

There are also other module-specific permissions available, and it is recommended that you play around with and understand any new permission(s) you set.

Previously, assigning the edit own permission automatically provided the delete own permission. For added security, delete own permissions for individual core content types have been removed from all roles and should be assigned separately.

How do we go about setting up the required permissions for the forum moderator user? If we look down the list of permissions shown on the Permissions page, we see the following forum-related options (at the moment, the forum moderator permissions are those in the outermost column):

Enabling these three options, and then testing out what new powers are made available, should quickly demonstrate that this is not quite what we want. If you are wondering how to actually test this out, you need to create a new user and then assign them to the forum moderator role. The following section on Users explains how to create new users and administer them properly. Jump ahead quickly and check that out so that you have a new user to work with if you are unsure how it is done. The following point might make your life a bit easier:

Use two browsers to test out your site. The demo site's development machine has IE and Firefox. Keep one browser for the administrator and the other for anonymous or other users in order to test out changes. This will save you from having to log in and log out whenever testing new permissions.

When testing the new permissions one way or another, you will find that the forum moderator can access and work with all of the forums—assuming you have created any. However, notice that there are node module permissions available, which is quite interesting because most content in Drupal is actually a node. How will this affect the forum moderator? Disable the forum module permissions for the forum moderator role and then enable all the node options for the authenticated user before saving and logging out. Log back in as the forum moderator and it will be clear that despite having revoked the forum-based options for this user, it is possible to post to or edit anything in the forum quite easily by selecting the Create content link in the main menu. Is this what you expected?

It should be precisely what you expect, because the forum moderator is an authenticated user, and so has acquired the permissions that came from the authenticated user role. In addition, the forum posts are all nodes, and any authenticated user can add and edit nodes, so even though the forum moderator is not explicitly allowed to work with forums, through the generic node permissions we get the same result: defined roles are given the authenticated user permissions.
Actually, the result is not entirely the same, because the forum moderator can now also configure all the different types of content on the site, as well as edit any type of content, including other people's blogs. This is most certainly undesirable, so log back in as the primary user and remove the node permissions (except the first one) from the authenticated user role. With that done, you can now spend some time building a fairly powerful and comprehensive role-based access control plan. As an addendum, you might find that despite having a good amount of control over who does what, there are some things that are not easily done without help from elsewhere.

Users

A single user account can be given as many or as few permissions as you like via the use of roles. Drupal users cannot really do anything unless they have a role that defines the manner in which they can operate within the Drupal framework. Hence, we discussed roles first.

Users can be created in two ways. The most common way is by registering on the site—if you haven't already, go ahead and register a new user on your site by clicking the Create new account link on the homepage, just to test things out. Remember to supply a valid email address, otherwise you won't be able to sign in properly. This will create an authenticated user, with any and all permissions that have been assigned to the authenticated user role.

The second way is to use the administrative user to create a new user. In order to do so, log on as the administrative user and click on Users in User management under Administer. Select the Add user tab and follow the instructions on that page. For example, I created a new forum moderator user by ensuring that the relevant role was checked:

You will need to supply Drupal with usernames, email addresses, and passwords. Once there are a couple of users to play around with, it's time to begin working with them.

Administering Users

The site's administrator is given complete access to the other users' account information. By clicking on the edit link shown to the right of each user account (under the Operations column heading) on the Users page under User management, it is possible to make any changes you require to a given user. Before we do, though, it's worth noting that the administration page itself is fairly powerful in terms of being able to administer individual users or groups of users with relative ease:

The upper box, Show only users where, allows you to specify several filter conditions to cut down the result set and make it more manageable. This will become more and more important as the site accumulates users. Once the various filter options have been applied, the Update options allow you to apply whatever changes are needed to the list of selected users (selected by checking the relevant checkbox next to their names). Having both broad, sweeping powers as well as fine-grained control over users is one of the most valuable facilities provided by Drupal, and you will no doubt become very familiar with this page in due course.

Click on the edit link next to the forum moderator user and take a look at the Roles section. Notice that it is now possible to stipulate which roles this user belongs to. At present there are only two new roles to be assigned (yours might vary depending on which roles have been created on your setup):

Whenever a user is added to another role, they obtain the combined permissions of these roles.
With this in mind, you should go about delegating roles in the following fashion (a sketch of the underlying permission model follows this list):

- Define the most basic user of the site by setting the anonymous user permissions.
- Set permissions for a basic authenticated user (i.e. any Tom, Dick, or Harry that registers on your site).
- Create special roles by adding only the specific additional permissions that are required by that role, and no more. Don't re-assign permissions that the authenticated user already has.
- Create new users by combining whatever roles are required for their duties or needs.
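Because every added role only contributes extra permissions, a user's effective permission set is simply the union of the sets granted by their roles. The following is a minimal, language-neutral sketch of that additive model in Java (Drupal itself implements this in PHP; the permission and role names here are hypothetical, chosen to echo the example above):

    import java.util.EnumSet;

    public class RoleUnionDemo {
        // Hypothetical permissions, for illustration only
        enum Permission { ACCESS_CONTENT, POST_COMMENTS, CREATE_FORUM_TOPICS, EDIT_OWN_BLOG }

        public static void main(String[] args) {
            EnumSet<Permission> authenticatedUser =
                    EnumSet.of(Permission.ACCESS_CONTENT, Permission.POST_COMMENTS);
            // Only the extras, per the advice above: don't re-grant what
            // the authenticated user role already has
            EnumSet<Permission> forumModerator =
                    EnumSet.of(Permission.CREATE_FORUM_TOPICS);

            // A user in both roles holds the union of the two sets
            EnumSet<Permission> effective = EnumSet.copyOf(authenticatedUser);
            effective.addAll(forumModerator);
            System.out.println(effective);
            // prints [ACCESS_CONTENT, POST_COMMENTS, CREATE_FORUM_TOPICS]
        }
    }

This is also why revoking a permission from the forum moderator role earlier had no visible effect: the same permission was still flowing in from the authenticated user role.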


Drupal 6 Performance Optimization Using DB Maintenance and Boost: Part 1

Packt
23 Mar 2010
8 min read
These are not required modules, but rather recommended modules to add to your Drupal performance arsenal. The way this article will work is that we'll outline the purpose of each module, install and configure it, and then use it on a specific topic within your site. This will give you some practice using contributed Drupal modules and also a look at the variety of performance-based modules that are available from the Drupal project community.

Using the DB Maintenance module

The DB Maintenance module can be used to optimize your MySQL database tables. Depending on the type of database you are running, the module allows you to use a function called OPTIMIZE TABLE, which troubleshoots and then optimizes various errors in your MySQL tables. For MyISAM tables, OPTIMIZE TABLE will repair your database tables if they have deleted rows. For BDB and InnoDB tables, the function will rebuild the entire table. You can use this module in tandem with phpMyAdmin to determine whether or not you need to optimize your database tables. The benefit of this module is that it allows you to keep your database optimized and defragmented—similar to keeping your computer's hard drive optimized and defragmented so that it runs faster—and you can do all this from the Drupal administrative interface.

The project page where you can download the module is here: http://drupal.org/project/db_maintenance. Download the module tar.gz and extract it to your desktop. Then upload the files through FTP, or upload and extract using a cPanel utility if your host provides one. The module should go in your /sites/all/modules directory. Once you upload and extract the module folder, enable the module on your modules admin page and save your configuration. We'll use the version that's recommended for Drupal 6.x, which is 6.x-1.1. You can try out the beta version, but you should not run a beta version on a production-level website unless you've tested it sufficiently in a sandbox environment.

Once you save your module configuration, you'll notice that the module adds a link to its settings and configuration page under your main Site configuration section. Go to Site configuration | DB maintenance to access the configuration admin screen for the module. The DB maintenance screen contains a checkbox at the top allowing you to log OPTIMIZE queries. If you check this box, your watchdog recent log entries will include all table optimization entries and give you detailed information on the tables that were optimized.

At the time of writing this article, the 1.1 version of the DB Maintenance module contained bugs that caused glitches with, or entirely prevented, the logging of this module's queries to the recent log entries. You may also experience these glitches. The module's developers are aware of the issues, because they have been posted to the issue queue on the module project page at http://drupal.org/.

Let's go ahead and check this box. You can then select the frequency with which you would like to run the optimization. The choices are: Run during every cron, Hourly, Bi-Hourly, Daily, Bi-Daily, Weekly, Bi-Weekly, Monthly, and Bi-Monthly. You can also click on the Optimize now link to force the optimization to occur immediately without scheduling it in advance. We'll click on this link for the purpose of this demo, but in future you may want to schedule the optimization. We'll then run a cron job through the Status report, or a module such as Poormanscron, and the tables will be optimized.
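Under the hood, "optimizing" a table is a single SQL statement per table. Purely as an illustration of what the module issues against MySQL, here is a minimal JDBC sketch (the connection URL, credentials, and table list are hypothetical; the module itself does this through Drupal's PHP database layer):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class OptimizeTablesDemo {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details for a local Drupal 6 database
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:mysql://localhost:3306/drupal6", "drupal", "secret");
                 Statement stmt = conn.createStatement()) {
                for (String table : new String[] {"node", "users", "comments"}) {
                    // Repairs/defragments MyISAM tables; rebuilds InnoDB/BDB tables
                    stmt.execute("OPTIMIZE TABLE " + table);
                }
            }
        }
    }

Whether run this way, via phpMyAdmin, or via the module, the statement is the same; the module's value is scheduling it and exposing it in the Drupal admin interface.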
Next, you can select the tables in your Drupal database that you want to optimize. A nice feature of this module is that it allows you to multi-select database tables—all of them, a few, or just one. This gives you the same flexibility and functionality as your phpMyAdmin tool, but you can run everything from within your Drupal interface. It's like a phpMyAdmin lite version right in your Drupal site. This is a preferred option for those developers who may not have immediate access to a client's phpMyAdmin or a host's database management utility.

Choose a selection of tables that you want to optimize, or select all the tables. For this demo, I'm going to optimize all of my content type tables, so I'll select all of those. I'll also optimize my block tables:

- blocks
- blocks_roles
- content_type_blog
- content_type_book
- content_type_forum
- content_type_page
- content_type_photo
- content_type_poll
- content_type_story
- content_type_webform

Once you've selected the tables you want to optimize, click on the Optimize now link. As with any module or optimization enhancement that you make to your Drupal site, it is good practice to run a full backup of your MySQL database before performing any maintenance, including optimizing tables using the DB Maintenance module. This way you will have a full backup of your data if you run into any issues that the module could potentially create. It's better to play it safe and perform the backup first. Once you click on the Optimize now link, you should receive a message notifying you that the database tables are optimized. This concludes our discussion and walkthrough of the DB Maintenance module. Let's now turn to the Boost module and use it to speed up our site's page and content loads.

Using the Boost module

We're going to turn our attention to the Boost module in this section. Boost is a contributed module that allows you to run incredibly advanced static page caching on your Drupal site. This caching mechanism helps to increase performance and scalability, especially if your site gets heavy traffic and anonymous page visits and is on a shared hosting environment. This is usually the first contributed performance-based module to turn to when you host your Drupal site on a shared server. Developers running Drupal sites on shared servers, serving predominantly anonymous users, will definitely want to try out this module. It's also a fun module to use from a technical standpoint, because you can see the results immediately as you configure it.

The Drupal project page for the module is here: http://drupal.org/project/boost. There is a wealth of detailed information about the module on this project page, including announcements about upcoming conference presentations that focus on the Boost module, testimonials, install instructions, and links to documentation and associated modules that you may want to run alongside Boost. It is very popular and has quite a following in the Drupal development community. I definitely recommend reading about this module and all of its install and configuration instructions in detail before attempting to use it. The install paragraph suggests reading through the module's README.txt file before running the install for details on how the module works. There is also detailed documentation on the module here: http://drupal.org/node/545664. Note that the one requirement for using this module is that your Drupal site must have clean URLs configured and enabled.
It's a good idea to make sure you are running clean URLs on your site before you start installing and configuring Boost. Additionally, there are some recommended modules that the developers encourage you to install in tandem with the Boost module. We will install two of these modules: Global Redirect and Transliteration. The Global Redirect module runs a number of checks on your website, including the following:

- Checks the current URL for a Drupal path alias and does a 301 redirect to the alias if it is not being used.
- Checks the current URL for a trailing / and removes the slash if it's present in Drupal URLs.
- Checks whether the current URL is the same as the site's front page and redirects to the front page if it locates a match.
- Checks to see if you are using clean URLs. If you do have clean URLs enabled, this module ensures URLs are accessed using the clean URL method rather than an unclean method (for example, ?q=user).
- Checks access to the URL. If a user does not have permission to view the URL, then no redirects are allowed. This helps to protect private URL aliases.
- Checks to ensure the alias matches the URL it is aliasing. So if you have a URL alias such as /about that directs to node/23, a user on your site can access the page using either of those URLs.

The Transliteration module removes white space and non-ASCII characters in your URLs. For example, it will try to add underscores to fill white space in a URL. Installing and enabling these two modules will help remove glitches and errors in your site's path structure. If you haven't already, we'll also take the time now to install the Poormanscron module and set up and configure automatic cron runs instead of having to continue running cron manually. We'll return to installing and configuring Poormanscron later in this article, but just keep it on your radar for now. Let's go ahead and install the Boost module and take a closer look at some of its features.
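Before doing so, it may help to picture what Boost-style static caching does conceptually: serve a pre-rendered HTML file to anonymous visitors so that no PHP or database work happens at all. The following is a hedged, language-neutral illustration in Java servlet terms (Boost itself is implemented in PHP plus Apache mod_rewrite rules, and detects anonymity via the Drupal session cookie; the cache path and the session-based anonymity check here are simplifying assumptions):

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import javax.servlet.*;
    import javax.servlet.http.HttpServletRequest;

    // Conceptual sketch: serve a pre-rendered HTML file for anonymous GET
    // requests and fall through to the full application otherwise.
    public class StaticCacheFilter implements Filter {
        private static final Path CACHE_DIR = Paths.get("/var/cache/boost");

        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            // Crude stand-in for "anonymous": no existing session
            boolean anonymous = request.getSession(false) == null;
            Path cached = CACHE_DIR.resolve(request.getRequestURI().substring(1) + ".html");
            if (anonymous && "GET".equals(request.getMethod()) && Files.exists(cached)) {
                res.setContentType("text/html");
                Files.copy(cached, res.getOutputStream()); // cache hit: no rendering work
            } else {
                chain.doFilter(req, res); // cache miss or logged-in user: build the page
            }
        }

        public void init(FilterConfig config) {}
        public void destroy() {}
    }

The real module also handles cache expiry on content changes, which is where much of its configuration effort goes.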


Overview of TDD

Packt
06 Nov 2015
11 min read
In this article, by Ravi Gupta, Harmeet Singh, and Hetal Prajapati, authors of the book Test-Driven JavaScript Development, we explain how testing is one of the most important phases in the development of any project. In the traditional software development model, testing is usually executed after the code for the functionality is written. Test-driven development (TDD) makes a big difference by writing tests before the actual code. You are going to learn TDD for JavaScript and see how this approach can be utilized in your projects. In this article, you are going to learn the following:

- Complexity of web pages
- Understanding TDD
- Benefits of TDD and common myths

(For more resources related to this topic, see here.)

Complexity of web pages

When Tim Berners-Lee wrote the first-ever web browser around 1990, it was supposed to run HTML, with neither CSS nor JavaScript. Who knew that the WWW would become the most powerful communication medium? Since then, a number of technologies and tools have emerged to help us write code and run it for our needs. We do a lot these days with the help of the Internet. We shop, read, learn, share, and collaborate... well, a few words are not going to suffice to explain what we do on the Internet, are they? Over time, our needs have grown to a very complex level, and so has the complexity of the code written for websites. It's not plain HTML anymore, not some CSS styles, not some basic JavaScript tweaks. That time has passed. Pick any site you visit daily, view the source by opening the browser's developer tools, and look at the source code of the site. What do you see? Too much code? Too many styles? Too many scripts? The JavaScript and CSS code is too large to keep inline, so we need to keep it in different files, sometimes even different folders, to keep it organized.

Now, what happens before you publish all the code live? You test it. You test each line and see if it works fine. Well, that's a programmer's job. Zero defects—that's what every organization tries to achieve. When that is the focus, testing comes into the picture, and more importantly, a development style that is essentially test driven. As the title of this article says, we're going to keep our focus on test-driven JavaScript development.

Understanding Test-driven development

TDD, short for Test-driven development, is a process for software development. Kent Beck, who is known for the development of TDD, refers to this as "Rediscovery." Kent's answer to a question on Quora can be found at https://www.quora.com/Why-does-Kent-Beck-refer-to-the-rediscovery-of-test-driven-development.

"The original description of TDD was in an ancient book about programming. It said you take the input tape, manually type in the output tape you expect, then program until the actual output tape matches the expected output. After I'd written the first xUnit framework in Smalltalk I remembered reading this and tried it out. That was the origin of TDD for me. When describing TDD to older programmers, I often hear, "Of course. How else could you program?" Therefore I refer to my role as "rediscovering" TDD."

If you go and try to find references to TDD, you will even find a few references from 1968. It's not a new technique, though it did not get much attention for a long time. Recently, interest in TDD has been growing, and as a result there are a number of tools on the Web. For example, Jasmine, Mocha, DalekJS, JsUnit, QUnit, and Karma are among the popular tools and frameworks.
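The cycle at the heart of TDD—write a failing test, write just enough code to make it pass, then refactor—is the same in any language. The book's examples use JavaScript with frameworks like those above; here is a minimal sketch of one red-green iteration using Java and JUnit instead (the Calculator class is hypothetical):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class CalculatorTest {
        // Step 1 (red): write the test first. It fails to even compile
        // until Calculator exists, and fails to pass until add() is right.
        @Test
        public void addsTwoNumbers() {
            assertEquals(5, new Calculator().add(2, 3));
        }
    }

    // Step 2 (green): write just enough production code to pass,
    // then refactor freely with the test as a safety net.
    class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

The discipline is in the ordering: the test exists, and fails, before the production code does.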
More specifically, test-driven JavaScript development is getting popular these days. Test-driven development is a software development process that requires a developer to write a test before writing production code. A developer writes a test, expects a behavior, and writes code to make the test pass. It is needless to mention that the test will always fail at the start.

The need for testing

To err is human. As developers, it's not easy to find defects in our own code, and often we think our code is perfect. But there is always a chance that a defect is present. Every organization or individual wants to deliver the best software they can. This is one major reason why every piece of software is well tested before its release. Testing helps to detect and correct defects. There are a number of reasons why testing is needed. They are as follows:

- To check if the software is functioning as per the requirements
- There will not be just one device or one platform to run your software
- The end user will perform an action that, as a programmer, you never expected

A study conducted by the National Institute of Standards and Technology (NIST) in 2002 reported that software bugs cost the U.S. economy around $60 billion annually. With better testing, more than one-third of that cost could be avoided. The earlier a defect is found, the cheaper it is to fix. A defect found post-release costs 10-100 times more to fix than one detected and fixed early. The report of the NIST study can be found at http://www.nist.gov/director/planning/upload/report02-3.pdf. If we plot the cost of fixing a defect against time, the curve is exponential: the cost increases as the project matures. Sometimes it's not possible to fix a defect without making changes in the architecture. In those cases, the cost is sometimes so high that developing the software from scratch seems like a better option.

Benefits of TDD and common myths

Every methodology has its own benefits and myths. The following sections analyze the key benefits and the most common myths of TDD.

Benefits

TDD has its own advantages over regular development approaches. There are a number of benefits which help in making the decision to use TDD over the traditional approach:

- Automated testing: If you have seen a website's code, you know that it's not easy to maintain and test all the scripts manually and keep them working. A tester may miss a few checks, but automated tests won't. Manual testing is error prone and slow.
- Lower cost of overall development: With TDD, the amount of debugging is significantly decreased. You develop some code and run the tests; if they fail, re-doing the development is significantly faster than debugging and fixing it. TDD aims at detecting defects and correcting them at an early stage, which is much cheaper than detecting and correcting them at a later stage or post-release. Debugging becomes far less frequent, and a significant amount of time is saved. With the help of tools/test runners like Karma, JsTestDriver, and so on, running every JavaScript test in a browser is not needed, which saves significant time in validation and verification while development goes on.
- Increased productivity: Apart from time and financial benefits, TDD helps to increase productivity, since the developer becomes more focused and tends to write quality code that passes and fulfills the requirement.
- Clean, maintainable, and flexible code: Since tests are written first, production code is often very neat and simple. When a new piece of code is added, all the tests can be run at once to see if anything failed with the change. Since we try to keep our tests atomic, and our methods also address a single goal, the code automatically becomes clean. At the end of the application development, there will be thousands of test cases guaranteeing that every piece of logic can be tested. The same test cases also act as documentation for users who are new to the development of the system, since these tests act as examples of how the code works.
- Improved quality and reduced bugs: Complex code invites bugs. When developers change anything in neat and simple code, they tend to leave few or no bugs at all. They tend to focus on purpose and write code to fulfill the requirement.
- Keeps technical debt to a minimum: This is one of the major benefits of TDD. Not writing unit tests and documentation is a big contributor to a project's technical debt. Since TDD encourages you to write tests first, and well-written tests act as documentation, it keeps this part of the technical debt to a minimum. As Wikipedia says, technical debt can be defined as tasks to be performed before a unit can be called complete. If the debt is not repaid, interest adds up and makes it harder to make changes at a later stage. More about technical debt can be found at https://en.wikipedia.org/wiki/Technical_debt.

Myths

Along with the benefits, TDD has some myths as well. Let's check a few of them:

- Complete code coverage: TDD enforces writing tests first, and developers write the minimum amount of code to pass the test, so close to 100% code coverage is achieved. But that does not guarantee that nothing is missed and the code is bug free. Code coverage tools do not cover all the paths—there can be infinite possibilities in loops. Of course it's not possible or feasible to check all paths, but a developer is supposed to take care of the major and critical ones, and to cover business logic, flow, and process code most of the time. There is no need to test integration parts, setter-getter methods for properties, configuration, UI, and so on; mocking and stubbing are to be used for integrations.
- No need to debug the code: Though test-first development makes one think that debugging is not needed, that's not always true. You need to know the state of the system when a test fails. That will help you correct the code and write it further.
- No need for QA: TDD cannot always cover everything. QA plays a very important role in testing. UI defects and integration defects are more likely to be caught by QA. Even when developers are excellent, there are chances of errors. QA will try every kind of input and unexpected behavior that even a programmer did not cover with test cases. They will always try to crash the system with random inputs and discover defects.
- I can code faster without tests and can also validate for zero defects: This may hold true for very small software and websites, where the code is small and writing test cases may increase the overall time of development and delivery. But for bigger products, TDD helps a lot to identify defects at a very early stage and gives a chance to correct them at very low cost. As the earlier discussion of the cost of fixing defects across phases and testing types showed, the cost of correcting a defect increases with time.
Truly, whether TDD is required for a project or not depends on context.

- TDD ensures a good design and architecture: TDD encourages developers to write quality code, but it is not a replacement for good design practice. Will a team of developers alone be enough to ensure a stable and scalable architecture? Design should still be done by following standard practices.
- You need to write all tests first: Another myth says that you need to write all the tests first and then the actual production code. In practice, an iterative approach is used: write some tests first, then some code, run the tests, fix the code, run the tests, write more tests, and so on. With TDD, you always test parts of the software as you keep developing.

There are many myths, and covering all of them is not possible. The point is, TDD offers developers a better opportunity to deliver quality code. TDD helps organizations deliver close to zero-defect products.

Summary

In this article, you learned what TDD is, along with its benefits and myths.

Resources for Article:

Further resources on this subject:
- Understanding outside-in [article]
- Jenkins Continuous Integration [article]
- Understanding TDD [article]


Customizing PrestaShop Theme Part 1

Packt
22 Jul 2010
7 min read
(For more resources on PrestaShop 1.3, see here.)

The most basic level is using the back office panel to customize the layout. Using this knowledge, we can make some quick and easy changes without needing any technical knowledge. If you need more advanced changes than what can be achieved here, you will need to edit the theme and CSS files, which will be explained in the next article, Customizing PrestaShop Theme Part 2. It must be noted that all the design changes you can make in the back office can also be achieved by customizing the theme files (which involves editing the files' markup). Although theme-file editing encompasses everything the back office settings can do, there are reasons to choose the latter, "hack"-free option even if you are an advanced user: when you update to the next PrestaShop version, you will have to update some of your modified files yourself, as such changes may not be carried over automatically.

We have to decide what kind of layout we want. Just like the interior design of a building you are erecting, you need to visualize the spaces and how users will navigate your retail outlet. You will also need to know what kind of resources can help your store function successfully. Customers in real brick-and-mortar stores do not have to ask a lot of questions, as they are prone to browsing the items while having the advantage of feeling, smelling, holding, or trying the items at the same time. An online store does not have this advantage, so consider features and functions that can act as a "replacement" for it, such as a 30-day return policy. In a real shop, customers may ask questions at the customer service desk. The same thing can be done in your online store: you can add a lot of the information your customers may need while balancing it with a good design, navigation, and browsing experience. This ensures that customers find the information they need and reduces the need to repetitively answer the same queries. This is one of the main reasons why an online store exists: information can be obtained easily, 24x7.

Therefore, in an online shop you will have to decide what kind of features you want to introduce—for example, one block for product information and another for customer service information, where customers can learn about the return policy, how to make payments, and so on. This is just the background needed to decide on the functions of your store. We will not be discussing what makes good navigation or whether one approach is more effective than another. We will learn how to use this knowledge of theming for PrestaShop-based stores to build your online store—or, if you are a web designer, your clients' online stores—in keeping with the stores' concepts. You will also learn how to go about applying the necessary modules to complement your theme setup. Before we start this article, you should get acquainted with the back office panel. This will help you understand what we are exploring here.

In this article, we will stick with the default PrestaShop theme and learn how to:

- Install, uninstall, enable, and disable module blocks in the center, left, and right columns.
- Transplant and position modules by moving them to columns and within the columns.
The default layout

Let's have a look again at your current storefront and how the theme is governed by the back office control panel. By looking at the screenshot, you can tell which back office items you need to modify, replace, or set according to your needs. The basic layout outline can be seen in the following screenshot:

Besides the back office control over appearance, our theme is also affected by the modules that control the functionality of the store. At this stage, we will be working with the existing modules in PrestaShop. This is where you decide whether your site visitors will see the product categories, the top-selling products, your product listing, the specials, your featured products, and so on. If you run an e-commerce store with a payment option that links automatically to a payment gateway, you may want to study each of these modules a bit more as well. You will also notice that the default theme uses a three-column layout with a header in the top block and a footer at the bottom. Through the back office panel, all the default blocks in the left and right columns can be moved or transplanted interchangeably. Some of the blocks in the header (top blocks) can be moved into the left or right column. The featured product block and the editorial block, which are in the center column, are pretty much stuck in this position.

Modules

The Modules tab allows you to control the modules you want to use in the store. You will be able to transplant the modules and move them around according to the site navigation you want, within some limitations at this stage. You can move them up or down in the columns. You may also position them in the left or the right column, or you may disable them. You also have the option of adding a new module or choosing from those available on the PrestaStore. PrestaShop comes with some modules already installed, and the number of newly developed ones is growing every day. Now let's move on to Back Office | Modules, as shown in the following screenshot:

We will go through the listing and get some idea of each one. However, we will focus in greater detail on the modules that affect theming directly. Among the existing modules in this version (PrestaShop 1.3.1) that are readily available for installation are:

- Advertisement – 1 module: Google Adsense.
- Products module – 6 modules: Cross selling, RSS products feed, Products Comments, Products Category, Product tooltips, Send to a Friend module.
- Stats Engines – 5 modules: Artichow, Google Chart, Visifire, XML/SWF Charts, ExtJS.
- Payment – 8 modules: Bank Wire, Cash on delivery (COD), Cheque, Google Checkout, Hipay, Moneybookers, Paypal, PaypalAPI.
- Tools – 14 modules (but only 12 modules listed): Birthday Present, Canonical URL, Home text editor, Customers follow-up, Google sitemap, Featured Products on the homepage, Customers loyalty and rewards, Mail alerts, Newsletter, Customer referral program, SMS Tm4b, and Watermark.
- Blocks – 23 modules: Block advertising, Top seller block, Cart block, Categories block, Currency block, Info block, Language block, Link block, Manufacturers block, My Account block, New products block, Newsletter block, Block payment logo, Permanent links block, RSS feed block, Quick Search block, Specials block, Suppliers block, Tags block, User info block, Footer links block, Viewed products block, Wish list block.
- Stats – 25 modules: Google Analytics, Pages not found, Search engine keywords, Best categories, Best customers, Best products, Best suppliers, Best vouchers, Carrier distribution, Catalog statistics v1.0, Catalog evaluation, Data mining for statistics, Geolocation, Condensed stats for the Back Office homepage, Visitors online, Newsletter, Visitors origin, Registered Customer Info, Product details, Customer accounts, Sales and orders, Shop search, Visits and Visitors, Tracking - Front office.

You should also see an Icon legend on the right that reads the following:

Apart from these three options of installing, enabling, and disabling, you may also add new modules using the button in the top-left corner of the Modules tab. There are also plenty of third-party modules that can be used to make the store more interactive and attractive, but discussing them is not within the scope of this article.


The Importance of Securing Web Services

Packt
23 Jul 2014
10 min read
(For more resources related to this topic, see here.)

In the upcoming sections of this article, we are going to briefly explain several concepts about the importance of securing web services.

The importance of security

Security management is one of the main aspects to consider when designing applications. No matter what, neither the functionality nor the information of an organization can be exposed to all users without any kind of restriction. Consider a human resources application that allows employees' salaries to be looked up. If the company manager needs to know the salary of one of their employees, that is unremarkable; but in the same context, imagine that one of the employees wants to know the salaries of their colleagues. If access to this information is completely open, it could generate problems among employees with varied salaries.

Security management options

Java provides several options for security management. We will explain some of them and demonstrate how to implement them. All authentication methods are essentially based on delivering credentials from the client to the server. There are several methods for performing this:

- BASIC authentication
- DIGEST authentication
- CLIENT CERT authentication
- Using API keys

Security management in applications built with Java, including those exposing RESTful web services, relies on JAAS.

Basic authentication by providing user credentials

This is possibly one of the most-used techniques in all kinds of applications. Before gaining access to the application's functionality, the user is asked to enter a username and password; both are validated in order to verify that the credentials are correct (that is, that they belong to an application user). We are 99 percent sure you have used this technique at least once—maybe through a customized mechanism or, if you used the JEE platform, probably through JAAS. This kind of control is known as basic authentication.

In order to have a working example, let's start our application server, JBoss AS 7, then go to the bin directory and execute the file add-user.bat (the .sh file for UNIX users). Finally, we will create a new user as follows:

As a result, we will have a new user in the JBOSS_HOME/standalone/configuration/application-users.properties file. JBoss is already set up with a default security domain called other; it uses the information stored in the file we just mentioned in order to authenticate. Now we are going to configure the application to use this security domain. Inside the WEB-INF folder of the resteasy-examples project, let's create a file named jboss-web.xml with the following content:

    <?xml version="1.0" encoding="UTF-8"?>
    <jboss-web>
        <security-domain>other</security-domain>
    </jboss-web>

All right, let's configure the web.xml file in order to add the security constraints.
The following block of code shows the web.xml with the security elements added (in the book, the additions are highlighted in bold):

    <?xml version="1.0" encoding="UTF-8"?>
    <web-app version="3.0" xmlns="http://java.sun.com/xml/ns/javaee"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                 http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd">
        <!-- Roles -->
        <security-role>
            <description>Any role</description>
            <role-name>*</role-name>
        </security-role>
        <!-- Resource / Role Mapping -->
        <security-constraint>
            <display-name>Area secured</display-name>
            <web-resource-collection>
                <web-resource-name>protected_resources</web-resource-name>
                <url-pattern>/services/*</url-pattern>
                <http-method>GET</http-method>
                <http-method>POST</http-method>
            </web-resource-collection>
            <auth-constraint>
                <description>User with any role</description>
                <role-name>*</role-name>
            </auth-constraint>
        </security-constraint>
        <login-config>
            <auth-method>BASIC</auth-method>
        </login-config>
    </web-app>

From a terminal, go to the home folder of the resteasy-examples project and execute mvn jboss-as:redeploy. Now we are going to test our web service as we did earlier, using SoapUI, performing a request with the POST method. SoapUI shows us an HTTP 401 error, which means that the request wasn't authorized. This is because we performed the request without delivering the credentials to the server.

Digest access authentication

This authentication method makes use of a hash function to encrypt the password entered by the user before sending it to the server. This is obviously much safer than the BASIC authentication method, in which the user's password travels in plain text that can be easily read by whoever intercepts it. To overcome such drawbacks, digest MD5 authentication applies a hash function to the combination of the username, the application security realm, and the password. As a result, we obtain an encrypted string that can hardly be interpreted by an intruder.

Now, in order to perform what we explained before, we need to generate a password for our example user, using the parameters we talked about earlier: username, realm, and password. Let's go into the JBOSS_HOME/modules/org/picketbox/main/ directory from a terminal and type the following:

    java -cp picketbox-4.0.7.Final.jar org.jboss.security.auth.callback.RFC2617Digest username MyRealmName password

We will obtain the following result:

    RFC2617 A1 hash: 8355c2bc1aab3025c8522bd53639c168

Through this process we obtain the encrypted password and use it in our password storage file (JBOSS_HOME/standalone/configuration/application-users.properties). We must replace the existing password for the user username in that file with this hash; we have to replace it because the old password doesn't contain the application's realm name information. Next, we have to modify the web.xml file, changing the value of the auth-method tag from BASIC to DIGEST, and set the application realm name this way:

    <login-config>
        <auth-method>DIGEST</auth-method>
        <realm-name>MyRealmName</realm-name>
    </login-config>

Now, let's create a new security domain in JBoss so we can manage the DIGEST authentication mechanism.
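Incidentally, the A1 value printed by the RFC2617Digest utility above is nothing more than an MD5 digest over username:realm:password, as defined in RFC 2617. As a hedged aside (this assumes the PicketBox utility implements the plain RFC 2617 A1 definition), the same value can be computed with a few lines of standard Java, using the three values from the command above:

    import java.math.BigInteger;
    import java.security.MessageDigest;

    public class A1HashDemo {
        public static void main(String[] args) throws Exception {
            // A1 = MD5(username ":" realm ":" password), per RFC 2617
            String a1 = "username" + ":" + "MyRealmName" + ":" + "password";
            byte[] md5 = MessageDigest.getInstance("MD5").digest(a1.getBytes("UTF-8"));
            // Print as 32 lowercase hex characters, zero-padded
            System.out.println(String.format("%032x", new BigInteger(1, md5)));
        }
    }

This also makes clear why the stored hash is tied to the realm: change the realm name and the A1 value must be regenerated.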
In the JBOSS_HOME/standalone/configuration/standalone.xml file, in the <security-domains> section, let's add the following entry:

    <security-domain name="domainDigest" cache-type="default">
        <authentication>
            <login-module code="UsersRoles" flag="required">
                <module-option name="usersProperties" value="${jboss.server.config.dir}/application-users.properties"/>
                <module-option name="rolesProperties" value="${jboss.server.config.dir}/application-roles.properties"/>
                <module-option name="hashAlgorithm" value="MD5"/>
                <module-option name="hashEncoding" value="RFC2617"/>
                <module-option name="hashUserPassword" value="false"/>
                <module-option name="hashStorePassword" value="true"/>
                <module-option name="passwordIsA1Hash" value="true"/>
                <module-option name="storeDigestCallback" value="org.jboss.security.auth.callback.RFC2617Digest"/>
            </login-module>
        </authentication>
    </security-domain>

Finally, in the application, change the security domain name in the jboss-web.xml file as shown in the following snippet:

    <?xml version="1.0" encoding="UTF-8"?>
    <jboss-web>
        <security-domain>java:/jaas/domainDigest</security-domain>
    </jboss-web>

We change the authentication method from BASIC to DIGEST in the web.xml file, and we enter the name of the security realm. All these changes are applied in the login-config tag, this way:

    <login-config>
        <auth-method>DIGEST</auth-method>
        <realm-name>MyRealmName</realm-name>
    </login-config>

Now, restart the application server and redeploy the application on JBoss. To do this, execute the following command in the terminal:

    mvn jboss-as:redeploy

Authentication through certificates

This is a mechanism in which a trust agreement is established between the server and the client through certificates. They must be signed by an agency established to ensure that the certificate presented for authentication is legitimate, known as a CA (certificate authority). This security mechanism requires that our application server uses HTTPS as the communication protocol, so we must enable HTTPS. Let's add a connector in the standalone.xml file. Look for the following line:

    <connector name="http"

Add the following block of code:

    <connector name="https" protocol="HTTP/1.1" scheme="https" socket-binding="https" secure="true">
        <ssl password="changeit"
             certificate-key-file="${jboss.server.config.dir}/server.keystore"
             verify-client="want"
             ca-certificate-file="${jboss.server.config.dir}/server.truststore"/>
    </connector>

Next, we add the security domain:

    <security-domain name="RequireCertificateDomain">
        <authentication>
            <login-module code="CertificateRoles" flag="required">
                <module-option name="securityDomain" value="RequireCertificateDomain"/>
                <module-option name="verifier" value="org.jboss.security.auth.certs.AnyCertVerifier"/>
                <module-option name="usersProperties" value="${jboss.server.config.dir}/my-users.properties"/>
                <module-option name="rolesProperties" value="${jboss.server.config.dir}/my-roles.properties"/>
            </login-module>
        </authentication>
        <jsse keystore-password="changeit"
              keystore-url="file:${jboss.server.config.dir}/server.keystore"
              truststore-password="changeit"
              truststore-url="file:${jboss.server.config.dir}/server.truststore"/>
    </security-domain>

As you can see, we need two files: my-users.properties and my-roles.properties. Both are empty and located in the JBOSS_HOME/standalone/configuration path.
Authentication through certificates

This is a mechanism in which a trust relationship is established between the server and the client through certificates. These must be signed by an agency set up to guarantee that the certificate presented for authentication is legitimate; such an agency is known as a certification authority (CA). This security mechanism requires that our application server use HTTPS as the communication protocol, so we must enable HTTPS. Let's add a connector in the standalone.xml file; look for the following line:

<connector name="http"

Add the following block of code:

<connector name="https" protocol="HTTP/1.1" scheme="https"
           socket-binding="https" secure="true">
  <ssl password="changeit"
       certificate-key-file="${jboss.server.config.dir}/server.keystore"
       verify-client="want"
       ca-certificate-file="${jboss.server.config.dir}/server.truststore"/>
</connector>

Next, we add the security domain:

<security-domain name="RequireCertificateDomain">
  <authentication>
    <login-module code="CertificateRoles" flag="required">
      <module-option name="securityDomain" value="RequireCertificateDomain"/>
      <module-option name="verifier"
        value="org.jboss.security.auth.certs.AnyCertVerifier"/>
      <module-option name="usersProperties"
        value="${jboss.server.config.dir}/my-users.properties"/>
      <module-option name="rolesProperties"
        value="${jboss.server.config.dir}/my-roles.properties"/>
    </login-module>
  </authentication>
  <jsse keystore-password="changeit"
        keystore-url="file:${jboss.server.config.dir}/server.keystore"
        truststore-password="changeit"
        truststore-url="file:${jboss.server.config.dir}/server.truststore"/>
</security-domain>

As you can see, we need two files: my-users.properties and my-roles.properties. Both are empty and located in the JBOSS_HOME/standalone/configuration path.

We are going to add the <user-data-constraint> tag in the web.xml in this way:

<security-constraint>
  ...
  <user-data-constraint>
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>

Then, change the authentication method to CLIENT-CERT:

<login-config>
  <auth-method>CLIENT-CERT</auth-method>
</login-config>

And finally, change the security domain in the jboss-web.xml file in the following way:

<?xml version="1.0" encoding="UTF-8"?>
<jboss-web>
  <security-domain>RequireCertificateDomain</security-domain>
</jboss-web>

Now, restart the application server, and redeploy the application with Maven:

mvn jboss-as:redeploy

API keys

With the advent of cloud computing, it is not difficult to think of applications that integrate with many others available in the cloud. Right now, it's easy to see how applications interact with Flickr, Facebook, Twitter, Tumblr, and so on through the use of API keys. This authentication method is used primarily when we need to authenticate from another application but do not want to access the private user data hosted there; on the contrary, if you do want to access that information, you must use OAuth. Today it is very easy to get an API key. You simply sign up with one of the many cloud providers and obtain credentials consisting of a KEY and a SECRET, which are what you need to interact with the provider's authentication services. Keep in mind that when creating an API key, you accept the terms of the supplier, which clearly state what we can and cannot do, protecting the supplier against abusive users trying to affect their services. In a typical flow, the client application sends its KEY with each request and uses the SECRET to sign it, and the provider validates both before serving the request.

Summary

In this article, we went through some models of authentication. We can apply any of them to the web service functionality we created. As you will have realized, it is important to choose the correct security mechanism; otherwise, information is exposed and can easily be intercepted and used by third parties. Therefore, tread carefully.

Resources for Article:

Further resources on this subject:
RESTful Java Web Services Design [Article]
Debugging REST Web Services [Article]
RESTful Services JAX-RS 2.0 [Article]

Apache OFBiz Service Engine: Part 1

Packt
21 Oct 2009
6 min read
Defining a Service

We first need to define a service. Our first service will be named learningFirstService. In the folder ${component:learning}, create a new folder called servicedef. In that folder, create a new file called services.xml and enter into it this:

<?xml version="1.0" encoding="UTF-8" ?>
<services xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:noNamespaceSchemaLocation="http://www.ofbiz.org/dtds/services.xsd">
  <description>Learning Component Services</description>
  <service name="learningFirstService" engine="java"
      location="org.ofbiz.learning.learning.LearningServices"
      invoke="learningFirstService">
    <description>Our First Service</description>
    <attribute name="firstName" type="String" mode="IN" optional="true"/>
    <attribute name="lastName" type="String" mode="IN" optional="true"/>
  </service>
</services>

In the file ${component:learning}/ofbiz-component.xml, add after the last <entity-resource> element this:

<service-resource type="model" loader="main" location="servicedef/services.xml"/>

That tells our component learning to look for service definitions in the file ${component:learning}/servicedef/services.xml. It is important to note that all service definitions are loaded at startup; therefore any changes to any of the service definition files will require a restart!

Creating the Java Code for the Service

In the package org.ofbiz.learning.learning, create a new class called LearningServices with one static method learningFirstService:

package org.ofbiz.learning.learning;

import java.util.Map;

import org.ofbiz.service.DispatchContext;
import org.ofbiz.service.ServiceUtil;

public class LearningServices {
    public static final String module = LearningServices.class.getName();

    public static Map learningFirstService(DispatchContext dctx, Map context) {
        Map resultMap = ServiceUtil.returnSuccess("You have called on service 'learningFirstService' successfully!");
        return resultMap;
    }
}

Services must return a map. This map must contain at least one entry. This entry must have the key responseMessage (see org.ofbiz.service.ModelService.RESPONSE_MESSAGE), having a value of one of the following:

success or ModelService.RESPOND_SUCCESS
error or ModelService.RESPOND_ERROR
fail or ModelService.RESPOND_FAIL

By using ServiceUtil.returnSuccess() to construct the minimal return map, we do not need to bother adding the responseMessage key and value pair. Another entry that is often used is the one with the key successMessage (ModelService.SUCCESS_MESSAGE). By doing ServiceUtil.returnSuccess("Some message"), we will get a return map with an entry successMessage of value "Some message". Again, ServiceUtil insulates us from having to learn the convention in key names.
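Before we test it, one aside: a failure response is built the same way. The following sketch (an illustration we are adding here, not part of the learning component) shows how a service might signal an error, letting ServiceUtil set responseMessage to error for us:

public static Map learningFailingService(DispatchContext dctx, Map context) {
    String firstName = (String) context.get("firstName");
    if (firstName == null) {
        // ServiceUtil.returnError builds a map with responseMessage = "error"
        // and an errorMessage entry holding our text
        return ServiceUtil.returnError("firstName is missing!");
    }
    return ServiceUtil.returnSuccess();
}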
Testing Our First Service

Stop OFBiz, recompile our learning component, and restart OFBiz so that the modified ofbiz-component.xml and the new services.xml can be loaded.

In ${component:learning}/widget/learning/LearningScreens.xml, insert a new Screen Widget:

<screen name="TestFirstService">
  <section>
    <widgets>
      <section>
        <condition><if-empty field-name="formTarget"/></condition>
        <actions>
          <set field="formTarget" value="TestFirstService"/>
          <set field="title" value="Testing Our First Service"/>
        </actions>
        <widgets/>
      </section>
      <decorator-screen name="main-decorator" location="${parameters.mainDecoratorLocation}">
        <decorator-section name="body">
          <include-form name="TestingServices"
              location="component://learning/widget/learning/LearningForms.xml"/>
          <label text="Full Name: ${parameters.fullName}"/>
        </decorator-section>
      </decorator-screen>
    </widgets>
  </section>
</screen>

In the file ${component:learning}/widget/learning/LearningForms.xml, insert a new Form Widget:

<form name="TestingServices" type="single" target="${formTarget}">
  <field name="firstName"><text/></field>
  <field name="lastName"><text/></field>
  <field name="planetId"><text/></field>
  <field name="submit"><submit/></field>
</form>

Notice how the formTarget field is being set in the screen and used in the form. For now, don't worry about the Full Name label we are setting from the screen. Our service will eventually set that.

In the file ${webapp:learning}/WEB-INF/controller.xml, insert a new request map:

<request-map uri="TestFirstService">
  <event type="service" invoke="learningFirstService"/>
  <response name="success" type="view" value="TestFirstService"/>
</request-map>

The control servlet currently has no way of knowing how to handle an event of type service, so in controller.xml we must add a new handler element immediately under the other <handler> elements:

<handler name="service" type="request" class="org.ofbiz.webapp.event.ServiceEventHandler"/>
<handler name="service-multi" type="request" class="org.ofbiz.webapp.event.ServiceMultiEventHandler"/>

We will cover service-multi services later. Finally, add a new view map:

<view-map name="TestFirstService" type="screen"
    page="component://learning/widget/learning/LearningScreens.xml#TestFirstService"/>

Fire an HTTP OFBiz request TestFirstService at the webapp learning, and see that we have successfully invoked our first service.
Service Parameters

Just like Java methods, OFBiz services can have input and output parameters, and just like Java methods, the parameter types must be declared.

Input Parameters (IN)

Our first service is defined with two parameters:

<attribute name="firstName" type="String" mode="IN" optional="true"/>
<attribute name="lastName" type="String" mode="IN" optional="true"/>

Any parameters sent to the service by the end user as form parameters, but not in the service's list of declared input parameters, will be dropped. The other parameters are converted to a Map by the framework and passed into our static method as the second parameter. Add a new method, handleParameters, to our LearningServices class:

public static Map handleParameters(DispatchContext dctx, Map context) {
    String firstName = (String) context.get("firstName");
    String lastName = (String) context.get("lastName");
    String planetId = (String) context.get("planetId");
    String message = "firstName: " + firstName + "<br/>";
    message = message + "lastName: " + lastName + "<br/>";
    message = message + "planetId: " + planetId;
    Map resultMap = ServiceUtil.returnSuccess(message);
    return resultMap;
}

We can now make our service definition invoke this method instead of the learningFirstService method by opening our services.xml file and replacing:

<service name="learningFirstService" engine="java"
    location="org.ofbiz.learning.learning.LearningServices"
    invoke="learningFirstService">

with:

<service name="learningFirstService" engine="java"
    location="org.ofbiz.learning.learning.LearningServices"
    invoke="handleParameters">

Once again shut down, recompile, and restart OFBiz. Enter for the fields First Name, Last Name, and Planet Id the values Some, Name, and Earth, respectively. Submit and notice that only the first two parameters went through to the service. The parameter planetId was dropped silently, as it was not declared in the service definition.

Modify the service learningFirstService in the file ${component:learning}/servicedef/services.xml, and add below the second parameter a third one like this:

<attribute name="planetId" type="String" mode="IN" optional="true"/>

Restart OFBiz and submit the same values for the three form fields, and see all three parameters go through to the service.
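Output parameters are declared in exactly the same way, using mode="OUT". As a taste of where that Full Name label is heading, here is a hedged sketch (our own illustration, with a hypothetical method name) of a service that returns a fullName value; any OUT attribute is simply an extra entry in the returned map:

<attribute name="fullName" type="String" mode="OUT" optional="true"/>

public static Map handleOutputParameters(DispatchContext dctx, Map context) {
    String firstName = (String) context.get("firstName");
    String lastName = (String) context.get("lastName");
    Map resultMap = ServiceUtil.returnSuccess();
    // The declared OUT parameter is just another key in the result map;
    // the framework exposes it to the screen so ${parameters.fullName} can read it
    resultMap.put("fullName", firstName + " " + lastName);
    return resultMap;
}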
Layout in Dojo: Part 2

Packt
15 Oct 2009
12 min read
GridContainer

There are a lot of sites that let you add RSS feeds and assorted widgets to a personal page, and which then let you arrange them by dragging the widgets themselves around the page. One of the best-known examples is iGoogle, Google's personal homepage for users, with a staggering number of widgets that are easy to move around. This functionality is called a GridContainer in Dojo. If you're not familiar with the concept and have never used a service which lets you rearrange widgets, it works like this:

The GridContainer defines a number of different columns, called zones. Each column can contain any number of child widgets, including other containers (like AccordionContainer or BorderContainer). Each child widget becomes draggable and can be dragged into a new position within its own column, or dragged to a new position in another column. As the widget gets dragged, it uses a semi-transparent 'avatar', and possible target drop zones open up and close themselves dynamically under the cursor, until the widget is dropped on one of them. When a widget is dropped, the target column automatically rearranges itself to make the new widget fit.

Here is an example from test_GridContainer.html in /dojox/layout/tests/. From the beginning, the GridContainer has three columns (zones) defined, which contain a number of child widgets. One of them is a Calendar widget, which is then dragged to the second column from its original position in the third. Note the new target area being offered by the second column; this will be closed again if we continue to move the cursor over to the first column. Also, in this example, an opacity of 1.0 (fully opaque) is set on the avatar, so it looks just like the original widget.

Finally, the widget is dropped onto the second column, and both the source and target column arrange their widgets according to whether one has been added or removed. The implication of this is that it becomes very simple to create highly dynamic interfaces. Some examples might be:

An internal "dashboard" for management or other groups in the company which needs rearrangeable views on different data sources.
Portlets done right.
Using dojox.charting to create different diagrammatic views on data sources read from the server, letting the user create new diagrams and rearrange them in patterns or groups meaningful to the current viewer.
A simple front-end for a CMS system, where the editor widget is used to enter text, and the user can add, delete, or change paragraphs as well as drag them around and rearrange their order.
An example of how to create a GridContainer using markup (abbreviated) is as follows:

<div id="GC1" dojoType="dojox.layout.GridContainer"
    nbZones="3" opacity="0.7" allowAutoScroll="true"
    hasResizableColumns="false" withHandles="true"
    acceptTypes="dijit.layout.ContentPane, dijit.TitlePane,
                 dijit.ColorPalette, dijit._Calendar">
  <div dojoType="dijit.layout.ContentPane" class="cpane" label="Content Pane">
    Content Pane n?1 !
  </div>
  <div dojoType="dijit.TitlePane" title="Ergo">
    Non ergo erunt homines deliciis ...
  </div>
  <div dojoType="dijit.layout.ContentPane" class="cpane" label="Content Pane">
    Content Pane n?2 !
  </div>
  <div dojoType="dijit.layout.ContentPane" title="Intellectum">
    Intellectum est enim mihi quidem in multis, et maxime in me ipso,
    sed paulo ante in omnibus, cum M....
  </div>
  <div dojoType="dijit.layout.ContentPane" class="cpane" label="Content Pane">
    Content Pane n?3 !
  </div>
  <div dojoType="dijit.layout.ContentPane" class="cpane" label="Content Pane">
    Content Pane n?4 !
  </div>
  <div dojoType="dijit._Calendar"></div>
</div>

The GridContainer wraps all of its contents. These are not added in a hierarchical manner; instead, all widgets are declared inside the GridContainer element. When the first column's height is filled, the next widget in the list gets added to the next column, and so on. This is quite an unusual method of layout, and we might see some changes to this mode of layout since the GridContainer is very much beta [2008]. The properties for the GridContainer are the following:

// i18n: Object
// Contains i18n resources.
i18n: null,

// isAutoOrganized: Boolean
// Define auto organisation of children into the grid container.
isAutoOrganized: true,

// isRightFixed: Boolean
// Define if the right border has a fixed size.
isRightFixed: false,

// isLeftFixed: Boolean
// Define if the left border has a fixed size.
isLeftFixed: false,

// hasResizableColumns: Boolean
// Allow or not resizing of columns by a grip handle.
hasResizableColumns: true,

// nbZones: Integer
// The number of dropped zones.
nbZones: 1,

// opacity: Integer
// Define the opacity of the DnD Avatar.
opacity: 1,

// minColWidth: Integer
// Minimum column width in percentage.
minColWidth: 20,

// minChildWidth: Integer
// Minimum child width in pixels (only used for IE6, which doesn't
// handle the min-width css property).
minChildWidth: 150,

// acceptTypes: Array
// The gridcontainer will only accept the children that fit the types.
// In order to do that, the child must have a widgetType or a
// dndType attribute corresponding to the accepted type.
acceptTypes: [],

// mode: String
// Location to add columns; must be set to left or right (default).
mode: "right",

// allowAutoScroll: Boolean
// Auto-scrolling enabled inside the GridContainer.
allowAutoScroll: false,

// timeDisplayPopup: Integer
// Display time of popup in milliseconds.
timeDisplayPopup: 1500,

// isOffset: Boolean
// If true: leave the mouse at its original location when moving
// (allows specifying a proper offset).
// If false: current behavior, mouse in the upper left corner of the widget.
isOffset: false,

// offsetDrag: Object
// Allows specifying your own offset (x and y), only when the
// parameter isOffset is true.
offsetDrag: {},

// withHandles: Boolean
// Specify if there is a specific drag handle on widgets.
withHandles: false,

// handleClasses: Array
// Array of classes of nodes that will act as drag handles.
handleClasses: [],

The property isAutoOrganized, which is set to true by default, can be set to false, which will leave holes in your source columns and require you to manage the space in the target columns yourself.
The opacity variable is the opacity of the 'avatar' of the dragged widget, where 1 is completely solid and 0 is completely transparent. The hasResizableColumns variable also adds SplitContainer/BorderContainer splitters between columns, so that the user can change the size ratio between columns. The minColWidth/minChildWidth variables manage the minimum widths of columns and child widgets in relation to resizing events.

The acceptTypes variable is an important property, which lets you define which classes you allow to be dropped on a column. In the above example code, that string is set to dijit.layout.ContentPane, dijit.TitlePane, dijit.ColorPalette, dijit._Calendar. This makes it impossible to drop an AccordionContainer on a column. The reason for having this control is that certain elements, such as status bars or menus, should stay fixed while still sitting inside one of the columns. The withHandles variable can be set to true if you want each widget to get a visible 'drag handle' appended to it.

RadioGroup

The source code of dojox.layout.RadioGroup admits that it is probably poorly named, because it has little to do with radio buttons or groups of them per se, even if this was probably the case when it was conceived. The RadioGroup extends the StackContainer, doing something you probably had ideas about the first time you saw it – adding flashy animations when changing which child container is shown.

One example of how to use StackContainer and its derivatives is an information box for a list of friends. Each information box is created as a ContentPane which loads its content from a URL. As the user clicks on or hovers over the next friend on a nearby list, an event is triggered to show the next item (ContentPane) in the stack. Enter the RadioGroup, which defines its own set of buttons that mirror the ContentPanes which it wraps.

The unit test dojox/layout/tests/test_RadioGroup.html defines a small RadioGroup in the following way:

<div dojoType="dojox.layout.RadioGroup"
    style="width:300px; height:300px; float:left;" hasButtons="true">
  <div dojoType="dijit.layout.ContentPane" title="Dojo" class="dojoPane"
      style="width:300px; height:300px;"></div>
  <div dojoType="dijit.layout.ContentPane" title="Dijit" class="dijitPane"
      style="width:300px; height:300px;"></div>
  <div dojoType="dijit.layout.ContentPane" title="Dojox" class="dojoxPane"
      style="width:300px; height:300px;"></div>
</div>

As you can see, it does not take much space. In the test, the ContentPanes are filled with only the logos for the different parts of Dojo, defined as background images by CSS classes. The RadioGroup iterates over each child ContentPane and creates a "hover button" for it, which is connected to an event handler that manages the transition. So if you don't have any specific styling for your page and just want to get a quick mock-up done, the RadioGroup is very easy to work with.

The default RadioGroup works very much like its parent class, StackContainer, mostly providing a simple wrapper that generates mouseover buttons. In the same file that defines the basic RadioGroup, there are two more widgets: RadioGroupFade and RadioGroupSlide. These have exactly the same kind of markup as their parent class, RadioGroup. RadioGroupFade looks like this in its entirety:

dojo.declare("dojox.layout.RadioGroupFade", dojox.layout.RadioGroup, {
  // summary: An extension on a stock RadioGroup, that fades the panes.
  _hideChild: function(page){
    // summary: hide the specified child widget
    dojo.fadeOut({
      node: page.domNode,
      duration: this.duration,
      onEnd: dojo.hitch(this, "inherited", arguments)
    }).play();
  },

  _showChild: function(page){
    // summary: show the specified child widget
    this.inherited(arguments);
    dojo.style(page.domNode, "opacity", 0);
    dojo.fadeIn({
      node: page.domNode,
      duration: this.duration
    }).play();
  }
});

As you can see, all it does is override two functions from RadioGroup which manage how to show and hide child nodes upon transitions. The basic idea is to use the integral Dojo animations fadeIn and fadeOut for the effects. The other class, RadioGroupSlide, is a little bit longer, but not by much. It goes beyond basic animations and uses a specific easing function. In the beginning of its definition is this variable:

// easing: Function
// A hook to override the default easing of the pane slides.
easing: "dojo.fx.easing.backOut",

Later on, in the overridden _hideChild and _showChild functions, this variable is used when creating a standalone animation:

...
this._anim = dojo.animateProperty({
  node: page.domNode,
  properties: {
    left: 0,
    top: 0
  },
  duration: this.duration,
  easing: this.easing,
  onEnd: dojo.hitch(page, function(){
    if(this.onShow){
      this.onShow();
    }
    if(this._loadCheck){
      this._loadCheck();
    }
  })
});
this._anim.play();

What this means is that it is very simple to change (once again) what kind of animation is used when hiding the current child and showing the next, which can be very useful. Also, you can see that it is very simple to create your own subclass widget out of RadioGroup which can use custom actions when child nodes are changed.

ResizeHandle

The ResizeHandle tucks a resize handle, as the name implies, into the corner of an existing element or widget. The element which defines the resize handle itself need not be a child element or even adjacent to the element which is to receive the handle. Instead, the id of the target element is defined as an argument to the ResizeHandle, as shown here:

<div dojoType="dijit.layout.ContentPane" title="Test window"
    style="width: 300px; height: 200px; padding: 10px;
    border: 1px solid #dedede; position: relative; background: white;"
    id="testWindow">
  ...
  <div id="hand1" dojoType="dojox.layout.ResizeHandle" targetId="testWindow"></div>
</div>

In this example, a simple ContentPane is defined first, with some custom styling to make it stand out a little bit. Further on in the same page comes a ResizeHandle definition, which sets the targetId property of the newly created ResizeHandle to that of the ContentPane ('testWindow'). The definition of the ResizeHandle class shows some predictable goodies along with one or two surprises:

// targetContainer: DomNode
// Over-ride targetId and attach this handle directly to a reference of a DomNode.
targetContainer: null,

// resizeAxis: String
// One of: x|y|xy. Limits resizing to a single axis, defaults to xy.
resizeAxis: "xy",

// activeResize: Boolean
// If true, node will size realtime with mouse movement;
// if false, node will create a virtual node, and only resize the target on mouseUp.
activeResize: false,

// activeResizeClass: String
// css class applied to the virtual resize node.
activeResizeClass: 'dojoxResizeHandleClone',

// animateSizing: Boolean
// Only applicable if activeResize = false. On mouseup, animate the node to the new size.
animateSizing: true,
// animateMethod: String
// One of "chain" or "combine" ... visual effect only. combine will "scale"
// the node to size, chain will alter width, then height.
animateMethod: 'chain',

// animateDuration: Integer
// Time in ms to run the sizing animation. If animateMethod="chain",
// total animation playtime is 2*animateDuration.
animateDuration: 225,

// minHeight: Integer
// Smallest height in px the resized node can be.
minHeight: 100,

// minWidth: Integer
// Smallest width in px the resized node can be.
minWidth: 100,

As could be expected, it is simple to decide whether the resizing is animated during the mouse move or afterwards (activeResize: true/false). If afterwards, the animateDuration declares the length of the animation in milliseconds. A very useful property is the ability to lock the resizing action to just one of the two axes. The resizeAxis property defaults to xy, but can be set to only x or only y as well. Each restricts resizing to a single axis and also changes the resize cursor to give correct feedback about which axis is 'active' at the moment. If you at any point want to remove the handle, calling destroy() on it will remove it from the target node without any repercussions.
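To tie the RadioGroup discussion together, here is a hedged markup sketch of overriding the slide easing. Passing the easing as a markup attribute is our assumption based on the 'hook' comment in the source, and dojo.fx.easing.bounceOut stands in for whichever easing function you prefer; remember to dojo.require both dojox.layout.RadioGroup and dojo.fx.easing before the page is parsed.

<div dojoType="dojox.layout.RadioGroupSlide"
    easing="dojo.fx.easing.bounceOut"
    style="width:300px; height:300px;" hasButtons="true">
  <div dojoType="dijit.layout.ContentPane" title="One">First pane</div>
  <div dojoType="dijit.layout.ContentPane" title="Two">Second pane</div>
</div>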

Moodle for Online Communities

Packt
14 Apr 2014
9 min read
(For more resources related to this topic, see here.)

Now that you're familiar with the ways to use Moodle for different types of courses, it is time to take a look at how groups of people can come together as an online community and use Moodle to achieve their goals. For example, individuals who have the same interests and want to discuss and share information in order to transfer knowledge can do so very easily in a Moodle course that has been set up for that purpose.

There are many practical uses of Moodle for online communities. For example, members of an association or employees of a company can come together to achieve a goal and finish a task. In this case, Moodle provides a perfect place to interact, collaborate, and create a final project or achieve a task. Online communities can also be focused on learning and achievement, and Moodle can be a perfect vehicle for encouraging online communities to support each other to learn, take assessments, and display their certificates and badges. Moodle is also a good platform for a Massive Open Online Course (MOOC).

In this article, we'll create flexible Moodle courses that are ideal for online communities and that can be easily modified to harness the power of individuals in many different locations to teach and learn new knowledge and skills. We'll show you the benefits of Moodle and how to use it for the following online communities and purposes:

Knowledge-transfer-focused communities
Task-focused communities
Communities focused on learning and achievement

Moodle and online communities

It is often easy to think of Moodle as a learning management system that is used primarily by organizations for their students or employees. The community tends to be well defined, as it usually consists of students pursuing a common end, employees of a company, or members of an association or society. However, there are many informal groups and communities that come together because they share interests, the desire to gain knowledge and skills, the need to work together to accomplish tasks, and the wish to let people know that they've reached milestones and acquired marketable abilities.

For example, an online community may form around the topic of climate change. The group, which may use social media to communicate with each other, would like to share information and get in touch with like-minded individuals. While it's true that they can connect via Facebook, Twitter, and other social media formats, they may lack a platform that gives a "one-stop shopping" solution. Moodle makes it easy to share documents, videos, maps, graphics, audio files, and presentations. It also allows the users to interact with each other via discussion forums.

Because we can use but not control social networks, it's important to be mindful of security issues. For that reason, Moodle administrators may wish to consider ways to back up or duplicate key posts or insights within the Moodle installation so that they can be preserved and stored.

In another example, individuals may come together to accomplish a specific task. For example, a group of volunteers may come together to organize a 5K run fundraiser for epilepsy awareness. For such a case, Moodle has an array of activities and resources that make it possible to collaborate in the planning and publicity of the event, and even in the creation of post-event summary reports and press releases.
Finally, let's consider a person who may wish to ensure that potential employers know the kinds of skills they possess. They can display the certificates they've earned by completing online courses, as well as their badges, digital certificates, mentions in high-achievers lists, and other gamified evidence of achievement.

There are also the MOOCs, which bring together instructional materials, guided group discussions, and automated assessments. With its features and flexibility, Moodle is a perfect platform for MOOCs.

Building a knowledge-based online community

For our knowledge-based online community, let's consider a group of individuals who would like to know more about climate change and its impact. To build a knowledge-based online community, these are the steps we need to perform:

Choose a mobile-friendly theme.
Customize the appearance of your site.
Select resources and activities.

Moodle makes it possible for people from all locations and affiliations to come together and share information in order to achieve a common objective. We will see how to do this in the following sections.

Choosing the best theme for your knowledge-based Moodle online communities

As many of the users in the community access Moodle using smartphones, tablets, laptops, and desktops, it is a good idea to select a theme that is responsive, which means that it will be automatically formatted to display properly on all devices. You can learn more about themes for Moodle, review them, find out about the developers, read comments, and then download them at https://moodle.org/plugins/browse.php?list=category&id=3.

There are many good responsive themes, such as the popular Buckle theme and the Clean theme, that also allow you to customize them. These are core and contributed themes, which is to say that they were created by developers and are either part of the Moodle installation or available for free download. If you have Moodle 2.5 or a later version installed, your installation of Moodle includes many responsive themes. If it does not, you will need to download and install a theme. In order to select an installed theme, perform the following steps:

In the Site administration menu, click on the Appearance menu.
Click on Themes.
Click on Theme selector.
Click on the Change theme button.
Review all the themes.
Click on the Use theme button next to the theme you want to choose and then click on Continue.

Using the best settings for knowledge-based Moodle online communities

There are a number of things you can do to customize the appearance of your site so that it is very functional for knowledge-transfer-based Moodle online communities. The following is a brief checklist of items:

Select Topics format under the Course format section in the Course default settings window. By selecting topics, you'll be able to organize your content around subjects.
Use the General section, which is included as the first topic in all courses. It has the News forum link. You can use this for announcements highlighting resources shared by the community.
Include the name of the main contact, along with his/her photograph and a brief biographical sketch, in News forum. You'll create the sense that there is a real "go-to" person who is helping guide the endeavor.
Incorporate social media to encourage sharing and dissemination of new information. Brief updates are very effective, so you may consider including a Twitter feed by adding your Twitter account as one of your social media sites.
Even though your main topic of discussion may contain hundreds of subtopics that are of great interest, when you create your Moodle course, it's best to limit the number of subtopics to four or five. If you offer too many choices, your users will be too scattered and will not have a chance to connect with each other. Think of your Moodle site as a meeting point. Do you want to have too many breakout sessions and rooms, or do you want to have a main networking site? Think of how you would like to encourage users to mingle and interact.

Selecting resources and activities for a knowledge-based Moodle online community

The following are the items to include if you want to configure Moodle so that it is ideal for individuals who have come together to gain knowledge on a specific topic or problem:

Resources: Be sure to include multiple types of files: documents, videos, audio files, and presentations.
Activities: Include Quiz and other such activities that allow individuals to test their knowledge.
Communication-focused activities: Set up a discussion forum to enable community members to post their thoughts and respond to each other.

The key to creating an effective Moodle course for knowledge-transfer-based communities is to give the individual members a chance to post critical and useful information, no matter what the format or the size, and to accommodate social networks.

Building a task-based online community

Let's consider a group of individuals who are getting together to plan a fundraising event. They need to plan activities, develop materials, and prepare a final report. Moodle can make it fairly easy for people to work together to plan events, collaborate on the development of materials, and share information for a final report.

Choosing the best theme for your task-based Moodle online communities

If you're working with volunteers or people who are using Moodle just to complete a task, you may have quite a few Moodle "newbies". Since people will be unfamiliar with navigating Moodle and finding the places they need to go, you'll need a theme that is clear, attention-grabbing, and that includes easy-to-follow directions.

There are a few themes that are ideal for collaborations and multiple functional groups. We highly recommend the Formal white theme because it is highly customizable from the Theme settings page. You can easily customize the background, text colors, logos, font size, font weight, block size, and more, enabling you to create a clear, friendly, and brand-recognizable site. Formal white is a standard theme, kept up to date, and can be used on many versions of Moodle. You can learn more about the Formal white theme and download it from the Moodle themes directory at https://moodle.org/plugins/browse.php?list=category&id=3.

In order to customize the appearance of your entire site, perform the following steps:

In the Site administration menu, click on Appearance.
Click on Themes.
Click on Theme settings.
Review all the theme settings.
Enter the custom information in each box.

Installing Drupal 7

Packt
24 Nov 2010
9 min read
Drupal 7 First Look

Learn the new features of Drupal 7, how they work and how they will impact you

Get to grips with all of the new features in Drupal 7
Upgrade your Drupal 6 site, themes, and modules to Drupal 7
Explore the new Drupal 7 administration interface and map your Drupal 6 administration interface to the new Drupal 7 structure
Complete coverage of the DBTNG database layer with usage examples and all API changes for both Themes and Modules

Read more about this book

(For more resources on Drupal, see here.)

Drupal's installation process has always been very easy to use, and the Drupal 7 installation makes things even easier. Before beginning to install Drupal 7, you will need a web server running the Apache HTTPD web server. You can also use IIS on Microsoft Windows, but the Apache server is preferred, and you will be able to obtain support from the community more easily if you use it.

Want to easily install Apache onto a Microsoft Windows machine? Try XAMPP, which is published by Apache Friends. This package includes Apache, MySQL, and PHP with a standard Microsoft Windows installer. You can download XAMPP from http://www.apachefriends.org/en/xampp.html. Other options include WAMP (http://www.wampserver.com/en/) and MoWeS Portable (http://www.chsoftware.net/en/mowes/mowesportable/mowes.htm).

Your server will also need PHP installed on it. Drupal requires at least PHP version 5.2.0. As of this writing, there are some hosts that still do not have PHP 5.2.0 or later installed on their shared hosting accounts, and Red Hat does not include PHP 5.2.0 or later in its default distribution. Check with your host or system administrator before installing Drupal to make sure that the correct version is available.

In addition to the web server and PHP, you will also need a database. MySQL and PostgreSQL are the databases that are most frequently used with Drupal, and of the two, MySQL is much more widely used. That being said, you can use Drupal with many different databases, and the new DBTNG database abstraction layer will make it easier to deploy to any database. If you are using MySQL, you will need version 5.0.15 or later installed. If you are using PostgreSQL, you will need PostgreSQL 8.3.0 or later. SQLite is also officially supported for use with Drupal, and you will need version 3.4.2 or later.

After you have a server set up with the proper software, you can download Drupal and begin the installation process.

Obtaining Drupal

If you have used previous versions of Drupal, the process for downloading Drupal is the same as always. If you are new to Drupal, you will use the following process:

Go to the Drupal project page on Drupal.org: http://drupal.org/project/drupal.
Find the latest official release of Drupal 7 and click on the Download link. The release will be named 7.0 or similar.
Your browser will ask whether you want to download or open the file. Make sure to download it to your computer.

The file you downloaded is a .tar.gz file, which is a compressed archive similar to a .zip file. You will need to extract the files from this archive onto your computer. If your computer doesn't already have a program that can open .tar.gz files, try 7-Zip, an open source application that easily handles these files. You can download 7-Zip from http://www.7-zip.org. After you have extracted the files, you will need to copy them to your web server's document root.
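On a Linux or Mac machine (or under Cygwin on Windows), the extract-and-copy step can be done from a terminal; the archive name and document root below are typical values, so adjust them to match your download and server layout:

tar -xzf drupal-7.0.tar.gz        # unpack the compressed archive
cp -R drupal-7.0/. /var/www/html/ # copy everything, including hidden files

Note the trailing /. in the copy command: Drupal ships with hidden files such as .htaccess that are easy to leave behind if you copy drupal-7.0/* instead.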
You are now ready to start the installation process. Simply navigate to http://yoursite.com/install.php. Let's step through the installation process in detail now.

Selecting an installation profile

The first step in the installation process is selecting an installation profile. Drupal prompts you with a screen asking which installation profile you want to use during the installation.

By default, Drupal comes with two installation profiles, the Standard profile and the Minimal profile. Custom distributions may come with additional profiles.

Minimal profile

The Minimal profile installs a basic configuration of Drupal with only the required functionality enabled. This profile is even more minimal than the base Drupal 6 installation. This profile should be used if you are very familiar with setting up Drupal and don't want some of the additional features activated in the Standard profile.

Standard profile

The Standard Drupal profile installs and activates several commonly used features to make your Drupal site more useful immediately. These additional features include:

Search form installed on the left sidebar.
Powered by Drupal block enabled in the footer.
A basic page content type is automatically created to store static content on your site.
An article content type is automatically created to store time-specific content. The article content type replaces the story content type from Drupal 6.
Both content types are set up with RDF capabilities.
User profiles have pictures enabled by default. Profile pictures can have a maximum size of 1024x1024 pixels and be up to 800 KB when they are uploaded. They will be displayed using the thumbnail image style.
A taxonomy called Tags is created to allow easy categorization of content on your site.
The article content type is enhanced by adding an image field, which allows PNG, GIF, and JPG files to be attached to the article.
An administrator role is created that has all permissions activated for it. As new modules are activated, the administrator role will automatically be updated with the permissions for the new module.
The Seven theme is activated for the administration section of the site.

In most cases, you will want to start with the Standard installation profile, especially if you are setting up an entirely new site or if you are new to Drupal.

Language selection

The next step in the installation is choosing the language with which you want to install Drupal. By default, Drupal only includes an English installer. If you want to install Drupal in another language, you will need to download a translation from Drupal.org. A complete list of translations is available at http://drupal.org/project/translations.

After you download the translation you want to use, you will need to unpack it and copy it to your document folder. The process to unpack and copy the files is similar to the one we used when we unpacked and copied the core Drupal files to your server. For now, we will continue with the English installation.

Requirements check

Drupal will now check the requirements of your server to ensure that it meets the minimum requirements to run Drupal and that everything is ready for the installation to proceed. If Drupal does discover any problems, it will give you information about how to correct them. In our case, it looks like we forgot to set up our settings file. The settings file tells Drupal which database to connect to, as well as the connection information.

To create a settings file, navigate to your document root and then to the sites/default folder. Copy the default.settings.php file to settings.php. You do not need to change any of the information within the file.
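From a terminal, that copy is a one-liner; the path assumes your document root is the current directory:

cp sites/default/default.settings.php sites/default/settings.php

On some hosts you may also need to make the file writable for the duration of the installation (for example, chmod 666 settings.php); remember to remove the write permission once the installer finishes.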
After you have corrected any problems, click on the proceed with the installation link. Drupal will re-evaluate the requirements and let you know if anything else needs to be changed. This screen has been enhanced in Drupal 7 to provide much more information about your current server settings.

Database configuration

The next step in installing Drupal is configuring the database where Drupal will store the content and configuration information for your site. The functionality of this screen has also been enhanced in Drupal 7. The key difference is that Drupal 7 will automatically check which types of databases are available to you based on your server setup. Then, it will only allow you to select a database which will work.

If you want to run Drupal using a different database server than your web server, you can use the ADVANCED OPTIONS link to configure the database server and port. You can also use ADVANCED OPTIONS if you are setting up multiple sites within a single database.

For a Standard installation, enter the name of your database as well as the username and password for the database. This functionality remains the same as in Drupal 6. You will need to create a database outside of the Drupal installation. The actual steps for creating a new database vary depending on your website host. Many hosts have installed phpMyAdmin, which allows you to manage your databases with an easy-to-use web-based interface. If you use phpMyAdmin to create your database, you will need to log in to phpMyAdmin and create a database from the home page. You can create a new user for the database in the Privileges tab.

After you have entered your database settings, click on the Save and continue button. Drupal will now configure the database and set up your site. As the installation proceeds, Drupal will display its progress. The installation may take several minutes to complete.

In the unlikely event that you have problems during the installation, try emptying the database, increasing the amount of memory available to Drupal, and increasing the maximum execution time for a PHP script. You can increase the available memory and execution time in your php.ini file.
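The exact values depend on your server, but the relevant php.ini directives look like the following; 128M and 120 seconds are generous example values, not official requirements:

; Maximum amount of memory a script may consume
memory_limit = 128M

; Maximum execution time of each script, in seconds
max_execution_time = 120

Remember to restart your web server after editing php.ini so that the new values take effect.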

Moodle 1.9 Theme Design: Customizing the Header and Footer (Part 2)

Packt
23 Apr 2010
6 min read
Customizing the footer

Obviously, the second thing that we are going to do after we have made changes to our Moodle header file is to carry on and change the footer.html file. The following tasks will be slightly easier than changing the header logo and title text within our Moodle site, as there is much less code and subsequently much less to change.

Removing the Moodle logo

The first thing that we will notice about the footer in Moodle is that it has the Moodle logo on the front page of your Moodle site and a Home button on all other pages. In addition to this, there is the login info text that shows who is logged in, and a link to log out. More often than not, Moodle themers will want to remove the Moodle logo so that they can give their Moodle site its own branding. So let's get stuck in with the next exercise, but don't forget that this logo credits the Moodle community.

Time for action – deleting the Moodle logo

Navigate to your mytheme directory, right-click on the footer.html file, and choose Open With | WordPad. Find the following two lines of code:

echo $loggedinas;
echo $homelink;

Comment out the second line using a PHP comment:

echo $loggedinas;
/* echo $homelink; */

Save the footer.html file and refresh your browser window. You should now see the footer without the Moodle logo.

What just happened?

In this exercise, we learned which parts of the PHP code in the footer.html file control where the Moodle logo appears in the Moodle footer. We also learned how to comment out the PHP code that controls the rendering of the Moodle logo so that it does not appear. You could try to put the Moodle logo back if you want.

Removing the login info text and link

Now that we have removed the Moodle logo, which of course is completely up to you, you might also want to remove the login info link. This link is used exactly like the one in the top right-hand corner of your Moodle site, insofar as it acts as a place where you can log in and log out, and provides details of who you are logged in as. The only thing to consider here is that if you decide to remove the login info link from the header.html file and also remove it from the footer, you will have no easy way of logging in or out of Moodle. So it is always wise to leave it either in the header or the footer. You might also consider the advantages of having this here, as some Moodle pages, such as large courses, are very long. So, once the user has scrolled way down the page, he/she has a place to log out if needed.

The following task is very simple and will require you to go through similar steps as the "deleting the logo" exercise. The only difference is that you will comment out a different line of code.

Time for action – deleting the login info text

Navigate to your mytheme directory, right-click on the footer.html file, and choose Open With | WordPad (or an editor of your choice). Find the following two lines of code:

echo $loggedinas;
echo $homelink;

Comment out the first line by using a PHP comment as shown below:

/* echo $loggedinas; */
echo $homelink;

Save the footer.html file and refresh your browser window. You will see the footer without the Moodle logo or the login info link.

What just happened?

In this task, we learned which parts of the PHP code in footer.html control whether the Moodle login info text appears in the Moodle footer, similar to the Moodle logo in the previous exercise. We also learned how to comment out the code that controls the rendering of the login info text so that it does not appear.
Have a go hero – adding your own copyright or footer text

The next thing that we are going to do in this article is add some custom footer text where the Moodle logo and the login info text were before we removed them. It's completely up to you what to add in the next exercises. If you would just like to add some text to the footer, then please do. However, as part of the following tasks, we are going to add some copyright text and format it using some very basic HTML.

Time for action – adding your own footer text

Navigate to your mytheme directory, right-click on the footer.html file, and choose Open With | WordPad. At the very top of the file, paste the following text or choose your own footer text to include:

My School © 2009/10 All rights reserved.

Save footer.html and refresh your browser. You will see that your footer text is now at the bottom of the page. However, this text is aligned to the left, as all text in a browser would be by default. Open the footer.html file again (if it isn't open already) and wrap the following code around the footer text that you have just added:

<div align="right">My School &copy; 2009/10 All rights reserved</div>

Save your footer.html file and refresh your browser. You will see that the text is now aligned to the right.

What just happened?

We just added some very basic footer text to our footer.html file, saved it, and viewed it in our web browser. We have demonstrated here that it is very easy to add our own text to the footer.html file. We have also added some basic HTML formatting to move the text from the left to the right-hand side of the footer. There are other ways to do so, which involve the use of CSS. For instance, we could have given the <div> tag a CSS class and used a CSS selector to align the text to the right.

Have a go hero – adding your own footer logo

Now try to see if you can edit the footer.html and add the same logo as you have in the header.html into the footer. Remember that you can put the logo code anywhere outside of a PHP code block. So try to copy the header logo code and paste it into the footer.html. Finally, based on what we have learned, try to align the logo to the right as we did with the footer text.
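If you get stuck on that last challenge, a minimal sketch might look like the following; logo.jpg is a placeholder path, so reuse whichever image file your header.html actually references:

<div align="right">
  <!-- reuse the same logo image as the header; the path is a placeholder -->
  <img src="logo.jpg" alt="My School logo" />
</div>

As with the copyright text, a CSS class plus a text-align or float rule in your theme's stylesheet would be the cleaner long-term solution.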
Adding Image Content in Joomla!

Packt
23 Feb 2010
6 min read
Images and why we use them in websites

Research suggests that images enhance learning by illustrating concepts visually and by providing visual memory cues to the viewer. We have been using images to describe, tell stories, and record history throughout human evolution. I have been in a few situations where, if a language barrier existed between me and the other party, we resorted to a pencil and napkin in order to communicate effectively. Pictures can convey a message that might take many thousands of words to describe. This non-textual communication, and the visual emotions that the use of images can generate, make them the perfect medium to complement or replace the text in our web pages.

An image can easily take a thousand words to describe, which would look uninteresting to the reader and take up valuable space on our web pages. Instead, the picture utilizes only a fraction of the space that our description would use and tells a story in itself. Not only do images describe a story, a moment in time, or a fantasy situation; they can provide a visual stimulus which portrays a style or theme and sets a mood for the viewer. Many Joomla! templates now contain a high percentage of graphical images in order to produce interesting designs and effects, such as this commercial example from http://www.yootheme.com.

Many templates now utilize rounded edges for the graphics, or faded effects, and some create a 3D-type effect on our two-dimensional computer screens. This is achieved by using shading and image-layering effects. Others use images to create interesting navigation effects which could not be achieved without them.

Besides the design, navigation, and branding effects that images help provide, we use images inside our content and modules to advertise and communicate to our site visitors. One important trick as web developers is to make sure our images are as optimized as they can be before asking our viewers to load them into their web browsers. This ensures a pleasant user experience, because a site that is slow to load, or has missing images, can be a big turn-off to your site visitors.

Image formats and which ones to use

Images can often make up 50 percent of a Joomla! website, sometimes even more. The images that get loaded into the browser can be part of your template design, or site images we have loaded into our modules and articles. Choosing an appropriate format for this image content will help optimize the loading times of your web pages, which is one of the most important considerations when building a multimedia-rich website.

There are a few simple rules we can use in order to choose an appropriate image format. The most important criterion for the final file is its size. The previous image is exported using Adobe Fireworks on a Mac computer. External image editors such as the GNU Image Manipulation Program (GIMP) are good open source solutions for manipulating and optimizing images for the web. GIMP can be downloaded by visiting http://www.gimp.org.

With each image requiring loading by the user's browser, page-loading times can easily be affected by the quality and quantity of images you use in your web pages. This in turn results in end-user frustration (especially on dial-up Internet connections) and loss of site traffic.

Digital images

Before we proceed further into file sizes and file types, it is important to mention a note about digital images.
Because all of our web page images are stored or viewed on a computer device, they are in a digital format (an electronic file stored using zeros and ones). A digital image is built up of tiny elements called pixels. Pixels are the building blocks of all digital images: small adjoining squares in a matrix across the length and width of your image. Pixels are so small that you cannot easily see them when the image is viewed on your computer screen.

Each pixel is an element containing a single solid color, and when put together, all of these tiny dots make up your final image. Usually, more pixels per area make up a higher-quality image, but there is a point, when viewed with the human eye on electronic devices, beyond which you cannot actually see the extra detail that the additional pixels bring to the image.

The physical dimensions of a digital image are measured in pixels, and all digital images have what is called an image resolution (the image height and width dimensions in pixels). It is possible to save your digital images in numerous formats and compressions in order to optimize the file quality and size. Image editing programs, such as Adobe Fireworks, are specifically designed with web optimization and export options.

Lossy and lossless data compression

Data compression is an important part of our digital world. File compression is useful because it helps in reducing the consumption of expensive resources such as transmission bandwidth and computer hard disk space. Almost all digital images that we use on the web have been resized or compressed to some degree. When editing or saving files in popular software editing programs, many options are provided to help optimize digital images.

Lossy and lossless compression are two terms that describe the compression type of a file, and whether or not the file retains all of the original data once compression and then decompression of that file has taken place. The preferred formats for digital web images (such as JPEG and GIF) all fall into one of the following compression types:

Lossy: A lossy data compression is one where the compression attempts to eliminate redundant or unnecessary file information. Generally, once a file has been compressed, the information that was lost during compression cannot be recovered again. Hence, it is a degradable compression type; as the file is compressed and decompressed each time, the quality of the file will suffer from generation loss. JPEG and MP3 files are good examples of formats that use lossy compression.

Lossless: Lossless file compression makes use of data compression that retains every single bit of data that was in the original file before compression. Lossless compression is often used when an identical representation of the original data is required once a file has been compressed and then decompressed again. As an example, it is used for the popular ZIP file format. Image formats such as PNG and GIF fall into the lossless compression type.

For now, it is not so important to go into any more detail on these compression types, but the image formats above fall into both of these compression categories. It is possible to optimize your website images by choosing the correct format to save and present them to your site users.
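To see why compression matters, a quick back-of-the-envelope calculation helps; the dimensions here are just an example:

An uncompressed 24-bit (3 bytes per pixel) image of 800 x 600 pixels needs:
    800 x 600 x 3 = 1,440,000 bytes, roughly 1.4 MB

Saved as a JPEG at a typical web-quality setting, the same photograph will often shrink to somewhere around 100 KB: an order of magnitude less data for every visitor to download.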
Getting Started with Twitter Flight

Packt
16 Oct 2013
6 min read
Simplicity

First and foremost, Flight is simple. Most frameworks provide layouts, data models, and utilities that tend to produce big and confusing APIs, and their learning curves are steep. In contrast, you'll probably only ever use about ten Flight methods, and three of those are almost identical. All components and mixins follow the same simple format: once you've learned one, you've learned them all. Simplicity means fast ramp-up times for new developers, who should be able to understand individual components quickly.

Efficient complexity management

In most frameworks, the complexity of the code increases almost exponentially with the number of features. Dependency diagrams often look like a set of trees, each with branches and roots intermingling to create a dense thicket of connections. A simple change in one place could easily create an unforeseen error in another, or a chain of failures that could take down the whole application.

Flight applications are instead built up from reliable, reusable artifacts known as components. Each component knows nothing of the rest of the application; it simply listens for and triggers events. Components behave like cells in an organism: they have well-defined inputs and outputs, are exhaustively testable, and are loosely coupled. A component's cellular nature means that introducing more components has almost no effect on the overall complexity of the code, unlike traditional architectures. The structure remains flat, without any spaghetti code.

This is particularly important in large applications. Complexity management matters in any application, but when you're dealing with hundreds or thousands of components, you need to know that they aren't going to have unforeseen knock-on effects. This flat, cellular structure also makes Flight well suited to large projects with large or remote development teams: each developer can work on an independent component without first having to understand the architecture of the entire application.

Reusability

Flight components have well-defined interfaces and are loosely coupled, making it easy to reuse them within an application, and even across different applications. This separates Flight from other frameworks such as Backbone or AngularJS, where functionality is buried inside layers of complexity and is usually impossible to extract.

Not only does this make it easier and faster to build complex applications in Flight, it also offers developers the opportunity to give back to the community. There are already a lot of useful Flight components and mixins being open sourced. Try searching for "flight-" on Bower or GitHub, or check out the list at http://flight-components.jit.su/.

Twitter has already been taking advantage of this reusability within the company, sharing components such as Typeahead (Twitter's search autocomplete) between Twitter.com and TweetDeck, something that would have been unimaginable a year earlier.

Agnostic architecture

Flight has an agnostic architecture. It doesn't matter which templating system is used, or even whether one is used at all; server-side, client-side, or plain old static HTML are all the same to Flight. Flight does not impose a data model, so any method of data storage and processing can be used behind the scenes. This gives the developer the freedom to change any part of the stack without affecting the UI, and the ability to introduce Flight into an existing application without conflict.
Improved Performance

Performance is about a lot more than how fast the code executes or how efficient it is with memory; time to first load is a very important factor. When a user loads a web page, the request must be processed, data must be gathered, and a response sent, which the client then renders. Server-side processing and data gathering are fast; it is latency and client-side interpretation that make rendering slow.

One of the largest factors in response and rendering speed is the sheer amount of code being sent over the wire: the more code required to render a page, the slower the rendering will be. Most modern JavaScript frameworks use deferred loading (for example, via RequireJS) to reduce the amount of code sent in the first response. However, all of this code is still needed before a page can be rendered, because the layout and templating systems exist only on the client.

Flight's architecture allows templates to be compiled and rendered on the server, so the first response is a fully formed web page. Flight components can then be attached to existing DOM nodes and determine their state from the HTML, rather than having to request data over XMLHttpRequest (XHR) and generate the HTML themselves.

Well-organized freedom

Back in the good old days of JavaScript development, it was all a bit of a free-for-all. Everyone had their own way of doing things; code was idiosyncratic rather than idiomatic. In a lot of ways this was a pain: it was hard to onboard new developers, and harder still to keep a codebase consistent and well organized. On the other hand, there was very little boilerplate code, and it was possible to get things done without first having to read lengthy documentation on a big API.

jQuery built on this ideal and reduced the amount of boilerplate code required. It made JavaScript code easier to read and write, while not imposing any particular requirements in terms of architecture. What jQuery failed to do (and was never intended to do) was provide an application-level structure. It remained all too easy for code to become a spaghetti mess of callbacks and anonymous functions.

Flight solves this problem by providing much-needed structure while maintaining a simple, architecture-agnostic approach. Its API empowers developers, helping them to create elegant and well-ordered code while retaining a sense of freedom. Put simply, it's a joy to use.

Summary

The Flight API is small and should be familiar to any jQuery user, producing a shallow learning curve. The atomic nature of components makes them reliable and reusable and creates truly scalable applications, while Flight's agnostic architecture allows developers to use any technology stack and even introduce Flight into existing applications.
User Authentication with CodeIgniter 1.7 using Twitter oAuth

Packt
21 May 2010
6 min read
How oAuth works

Getting used to how Twitter oAuth works takes a little time. When a user comes to your login page, you send a GET request to Twitter for a set of request codes. These request codes are used to verify the user on the Twitter website. The user then goes through to Twitter to either allow or deny your application access to their account. If they allow access, they are taken back to your application, and the URL they are sent to will have an oAuth token appended to the end; this is used in the next step. Back at your application, you then send another GET request for a set of access codes from Twitter. These access codes are used to verify that the user has come directly from Twitter and has not tried to spoof an oAuth token in their web browser.

Registering a Twitter application

Before we write any code, we need to register an application with Twitter. This will give us the two access codes that we need: the first is a consumer key, and the second is a secret key. Both are used to identify our application, so if someone posts a message to Twitter through our application, our application name will show up alongside the user's tweet.

To register a new application with Twitter, go to http://www.twitter.com/apps/new. You'll be asked for a photo for your application and other information, such as the website URL, callback URL, and a description, among other things. You must select the checkbox that reads Yes, use Twitter for login, or you will not be able to authenticate any accounts with your application keys. Once you've filled out the form, you'll be able to see your consumer key and consumer secret code. You'll need these later. Don't worry though; you'll be able to get back to them at any time, so there's no need to save them to your hard drive.

Downloading the oAuth library

Before we write any of our CodeIgniter wrapper library, we need to download the oAuth PHP library. This allows us to use the oAuth protocol without writing the code from scratch ourselves. You can find the PHP library on the oAuth website at www.oauth.net/code. Scroll down to PHP and click on the link to download the basic PHP library, or just visit http://oauth.googlecode.com/svn/code/php/; the file you need is named OAuth.php.

Download this file and save it in the folder system/application/libraries/twitter/ (you'll need to create the twitter folder). We're simply going to create a folder for each protocol so that we can easily distinguish between them. Once you've done that, create a new file in the system/application/libraries/ folder, called Twitter_oauth.php. This is the file that will contain functions to obtain both request and access tokens from Twitter, and to verify the user's credentials.

The next section of the article goes through the process of creating this library alongside the Controller implementation, because the whole process requires work on both the frontend and the backend. Bear with me, as it can get a little confusing, especially when implementing a brand new type of system such as Twitter oAuth.

Library base class

Let's break things down into small sections. The following code is a version of the base class with all its guts pulled out. It simply loads the oAuth library and sets up a set of variables in which to store certain information.
Below this, I'll go over what each of the variables is there for.

<?php
require_once(APPPATH . 'libraries/twitter/OAuth.php');

class Twitter_oauth
{
    var $consumer;
    var $token;
    var $method;
    var $http_status;
    var $last_api_call;
}
?>

The first variable you'll see is $consumer; it is used to store the credentials for our application keys and, as and when we get them, the user tokens. The second variable on the list is $token, which is used to store the user credentials; a new instance of the oAuth class OAuthConsumer is created and stored in this variable. Thirdly, the variable $method is used to store the oAuth signature method (the way we sign our oAuth calls). Finally, the last two variables, $http_status and $last_api_call, are used to store the last HTTP status code and the URL of the last API call, respectively. These two variables are used solely for debugging purposes.

Controller base class

The Controller is the main area where we'll be working, so it is crucial that we design it in a way that keeps us from repeating our code. We'll therefore keep our consumer key and consumer secret key in the Controller. Take a look at the base of our class to get a better idea of what I mean:

<?php
session_start();

class Twitter extends Controller
{
    var $data;

    function Twitter()
    {
        parent::Controller();

        // Fill these in with the values Twitter gave you when you
        // registered your application.
        $this->data['consumer_key'] = "";
        $this->data['consumer_secret'] = "";
    }
}

The global variable $data is used to store our consumer key and consumer secret. These must not be left empty; Twitter provides both when you create your application. We use them when instantiating the library class, which is why we need them available throughout the Controller instead of in just one function.

We also enable sessions in the Controller, as we want to temporarily store some of the data that we get from Twitter in a session. We could use the CodeIgniter Session Library, but it doesn't offer us as much flexibility as native PHP sessions, because native sessions don't need to rely on cookies and a database; so we'll stick with native sessions for this Controller.
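To make the library's job concrete, here is a hedged sketch of what a request-token function inside Twitter_oauth might look like, built on the OAuth.php classes we downloaded. The Twitter endpoint URL and the http_request() helper are illustrative assumptions, not the finished library code:

function get_request_token()
{
    // Assumed endpoint; check Twitter's current API documentation.
    $url = 'https://api.twitter.com/oauth/request_token';

    // Build and sign a request using only our consumer credentials;
    // there is no user token yet at this stage, hence the NULLs.
    $request = OAuthRequest::from_consumer_and_token(
        $this->consumer, NULL, 'GET', $url, array());
    $request->sign_request($this->method, $this->consumer, NULL);

    // http_request() is a hypothetical helper that fetches the signed
    // URL (with cURL, for example) and returns the raw response body.
    $response = $this->http_request($request->to_url());

    // Twitter answers with oauth_token=...&oauth_token_secret=...
    parse_str($response, $values);
    return new OAuthConsumer($values['oauth_token'],
                             $values['oauth_token_secret']);
}

The access-token call follows the same pattern, swapping in the access_token endpoint and signing with the token Twitter hands back after the user approves the application.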
Find and Install Add-Ons that Expand Plone Functionality

Packt
10 Jun 2010
11 min read
Background

It seems like every application platform uses a different name for its add-ons: modules, components, libraries, packages, extensions, plug-ins, and more. Add-on packages for the Zope web application server are generally called Products. A Zope product is a bundle of Zope or Plone functionality contained in a Python module or modules. Like Plone, add-on products are distributed in source code, so that you may always read and examine them. Plone itself is actually a set of tightly connected Zope products and Python modules.

Plone add-on products may be divided into three major categories:

Skins or themes that change Plone's look and feel or add visual elements like portlets. These are typically the simplest of Plone products.
Products that add new content types with specialized functionality. Some are simple extensions of built-in types; others have custom workflows and behaviors.
Products that add to or change the behavior of Plone itself.

Where to Find Products

Plone.org's Products section at http://plone.org/products is the place to look for Plone products. At the time of this writing, Plone.org contains listings for 765 products and 1,901 product releases. The Products section is itself built with a Plone product, the Plone Software Center (often called the PSC), which adds content types for projects, software releases, project roadmaps, issue trackers, and project documentation.

Using the Plone Product Pages

Visiting the Plone product pages for the first time may be a bewildering experience due to the number of available products. However, by specifying a product category and target Plone version, you will quickly narrow the selection to the point where it's worth reading descriptions and following the links to product pages.

Product pages typically contain product descriptions, software releases, and links to documentation, an issue tracker, a version-control repository, and contact resources. Each release will have release notes, a change log, and a list of Plone versions with which the release has been tested. If the release has a product package available, it can be downloaded here. Some releases do not have associated software packages. This may be because the release is still in a planning stage and the listing is mainly meant to document the product's development roadmap, or because development is still at an early stage and the software is only available from a version-control repository.

The release notes commonly include a list of dependencies, and you should make special note of that along with the compatible Plone versions. Many products require the installation of other, supporting products. Some require that your server or test workstation have particular system libraries or utilities. Product pages may also link to a variety of additional resources: product-specific documentation, other release pages, an issue tracker, a roadmap for future development, a contact form for the project, and a version-control repository.

Playing it Safe with Add-On Products

Plone 3 is probably one of the most rigorously tested open source software packages in existence. While no software is defect-free, Plone's core development team is on the leading edge of software development methodologies and works under a strong testing culture that requires developers to prove their components work correctly before they ever become part of Plone. Plone's library of add-on products is a very different story.
Add-on products are contributed by a diverse community of developers. Some add-on products follow the same development and maintenance methodologies as Plone itself; others are haphazard experiments. To complicate matters, today's haphazard experiment may be, if it succeeds, next year's rigorously developed and reliable product. (Much of the Plone core codebase began as add-on products.) And this year's reliable standby may lose the devotion of its developers and not be upgraded to work with the next version of Plone.

If you're new to the world of open source software, this may seem dismaying. Don't be discouraged. It is not hard to evaluate the status of a product, and the Plone community is happy to help. Be encouraged by evidence of continual, exciting innovation. Most importantly, stop thinking of yourself as a consumer. Take an interest in the community process that produces good products. Test some early releases and file bug reports and feature requests. Participate in, or help document, test, and fund the development of the products that are most important to you.

Product Choice Strategy

Trying out new Plone add-on products is great fun, but incorporating them into production websites requires planning and judgement if you're going to have good long-run results.

New versions of Plone pose a particular challenge. Major new releases of Plone don't just add features: with every major version of Plone, the application programming interface (API) and presentation templates change. This is not done arbitrarily, and there is usually a good deal of warning before a major change, but it means that add-on products often need to be updated before they will work with a major new version of Plone. It is worth pointing out that major versions are released roughly every 18 months, and that minor version upgrades generally do not pose compatibility problems for the vast majority of add-on products.

This means that when a new version of Plone appears on the scene, you won't be able to migrate your Plone site to it until compatible product versions are available for all the add-on products in use on the site. If you're using mainstream, well-supported products, this may happen very quickly; many products are upgraded to work with new Plone versions during the beta and release-candidate stages of Plone development. Some products take longer, and some may not make the jump at all. The products least likely to be updated are often the ones made obsolete by new functionality.

This creates a somewhat ironic situation when a new version of Plone arrives: the quickest adopters are often those with the least history with the platform, while the slowest adopters are sometimes the sites most heavily invested in the new features. Consider, as a prime example, Plone.org itself: a very active, very large community site that must be conservatively managed and stick with proven versions of add-on products. Plone.org often does not migrate to a new Plone version until many months after release.

Is this a problem? Not really, unless you need both the newest features of the newest Plone version and the functionality of a more slowly developed add-on product. If that's the case, prepare to make an investment of time or money in supporting product development, and possibly in writing some custom migration scripts. If you want to be more conservative, try the following strategy:

Enjoy testing many products and keeping up with new developments by trying them out on a test server.
Learn the built-in Plone functionality well, and use it in preference to add-on products whenever possible.
Make sure you have a good understanding of the maturity level and degree of developer support for add-on products.
Incorporate the smallest number of add-on products reasonably possible into your production sites.
Don't be just a consumer: when you commit to a product, help support it by filing bug reports and feature requests, contributing translations, documentation, or code, and answering questions about it on the Plone mailing lists or #plone IRC channel.

Evaluating a Product

Judging the maturity of a Plone product is generally easy. Start with the product's project page on Plone.org. The product page may offer you a "Current release" and one or more "Experimental releases". Anything marked as a current release should be stable on its tested Plone versions. If you need a release that works with an earlier version of Plone than the ones supported by the current release, follow the "List all releases..." link.

Releases in the "Experimental" list will be marked as "alpha", "beta", or "Release Candidate". These terms are well defined in practice:

Alpha releases are truly experimental, and are usually posted in order to get early feedback. Interfaces and implementations are likely still in flux. Download an alpha release only for testing in an experimental environment, and only for purposes of previewing new features and giving feedback to developers. Do not plan on keeping any content you develop using an alpha release, as there may be no upgrade path to later releases.
With a beta release, feature sets and programming interfaces should be stable or changing only incrementally. It's reasonable to start testing the integration of the product with the platform and with other products. There will typically be an upgrade path to future releases. Bug reports will be welcome and will help develop the product.
Release candidates have a fixed feature set and no known major issues. Templates and messages should be complete, so that translators may work on language files with some confidence that their work won't be lost. If you encounter a bug in a release-candidate product, please file an issue report immediately.

Products may be re-released repeatedly at any release state. For alpha, beta, and RC releases, each additional release increments the release count, but not the version number. So, "PloneFormGen 1.2 (Beta release 6)" is the sixth beta release of version 1.2 of PloneFormGen. Once a product release reaches current-release status, each new maintenance release increments the version number by 0.0.1: "PloneFormGen 1.1.3" is thus the third maintenance release of version 1.1 of that product.

Don't make too much of version numbers or release counts; release status is a better indicator of maturity. If your site is mission-critical, don't use beta releases on it. However, if you test carefully before deploying, you may find that some products late in their beta development are ready for live use on sites where an error or glitch wouldn't be intolerable.

Testing a Product

Conscientious Plone site administrators maintain an off-line mirror of their production sites on a secondary server (or even a desktop computer) that they may use for testing purposes. Always test a new product on a test server. Before deploying, test it on a server that has precisely the combination of products in use on your production server. Ideally, test with a copy of the database of your live server.
Check the functionality of not only the new product, but also the products you're already using. The latter is particularly important if you're using products that alter the base functionality of Plone or Zope.

Looking to the Future

Evaluating product maturity and testing the product will help you judge its current status, but what about the future? What are the signs of a product that's likely to be well maintained and available for future versions of Plone? There are no guarantees, but here are some signs that experienced Plone integrators look for:

Developing in public. This is open source software. Look to see if the product is being developed with a public roadmap for the future, and with a public version-control repository. Plone.org provides product authors with great tools for indicating release plans, and makes a Subversion (SVN) version-control repository available to all product authors. Look to see if they're using these facilities.
Issue tracker status. Every released product should have a public issue (bug) tracker. Look for it. Look to see if it's being maintained, and whether issues are actively responded to. No issue tracker, or lots of old, uncategorized issues, are bad signs.
Support for multiple Plone versions. If a product has been around a while, look to see if versions are available for at least a couple of Plone releases. This might be the previous and current releases, or the current and next releases.
Internationalization. Excellent products attract translations.
Good development methodologies. This is the hardest criterion for a non-developer to judge, but a forthcoming version of the Plone Software Center will ask developers to rate themselves on compliance with a set of community standards. My guess is that product developers will be pretty honest about these ratings.

Several of these criteria have something in common: they allow the Plone community to participate in product maintenance and development. The best projects belong to the community, not to any single author. One of the best ways to get a quick read on the quality of an add-on product is to hop on the #plone IRC channel and ask. Chances are you'll run into someone who can share their experiences and offer insight. You may even run into the product author him or herself!
An overview of web services in Sakai

Packt
06 Jul 2011
16 min read
Connecting to Sakai is straightforward, and simple tasks, such as automatic course creation, take only a few lines of programming effort.

There are significant advantages to having web services in the enterprise. If a developer writes an application that calls a number of web services, the application does not need to know the hidden details behind the services; it just needs to agree on what data to send. This loosely couples the application to the services. Later, if you replace one web service with another, programmers do not need to change the code on the application side.

SOAP works well with most organizations' firewalls, as SOAP uses the same protocol as web browsers. System administrators have a tendency to protect an organization's network by closing unused ports to the outside world. This means that most of the time there is no extra network configuration effort required to enable web services. Another simplifying factor is that a programmer does not need to know the details of SOAP or REST, as there are libraries and frameworks that hide the underlying magic.

For the Sakai implementation of SOAP, adding a new service is as simple as writing a small amount of Java code within a text file, which is then compiled automatically and run the first time the service is called. This is great for rapid application development and deployment, as the system administrator does not need to restart Sakai for each change. Just as importantly, the Sakai services use the well-known libraries from the Apache Axis project.

SOAP is an XML message-passing protocol that, in the case of Sakai, sits on top of the Hyper Text Transfer Protocol (HTTP). HTTP is the protocol used by web browsers to obtain web pages from a server. The client sends messages in XML format to a service, including the information that the service needs; the service then returns a message with the results or an error message.

The architects introduced SOAP-based web services to Sakai first, adding RESTful services later. Unlike SOAP, instead of sending XML via HTTP posts to one URL that points to a service, REST sends requests to a URL that includes information about the entity, such as a user, with which the client wishes to interact. For example, a REST URL for viewing an address book item could look similar to http://host/direct/addressbook_item/15. Applying URLs in this way makes for understandable, human-readable address spaces, and this more intuitive approach simplifies coding. Further, SOAP XML passing requires that both the client and the server parse the XML, and at times the parsing effort is expensive in CPU cycles and response times.

The Entity Broker is an internal service that makes life easier for programmers and helps them manipulate entities. Entities in Sakai are managed pieces of data such as representations of courses, users, grade books, and so on. In the newer versions of Sakai, the Entity Broker has the power to expose entities as RESTful services. In contrast, for SOAP services, if you wanted a new service, you would need to write it yourself. Over time, the Entity Broker exposes more and more entities RESTfully, delivering more hooks for integrating with other enterprise systems.

Both SOAP and REST services sit on top of the HTTP protocol.

Protocols

This section explains how web browsers talk to servers in order to gather web pages.
It explains how to use the telnet command and a visual tool called TCPMON (http://ws.apache.org/commons/tcpmon/tcpmontutorial.html) to gain insight into how web services and Web 2.0 technologies work.

Playing with Telnet

It turns out that message passing occurs via text commands between the browser and the server. Web browsers use HTTP to get web pages and embedded content from the server and to send form information to the server. HTTP passes text (7-bit ASCII) commands between the client and server. When humans talk with each other, they have a wide vocabulary; HTTP uses fewer than twenty words. You can experiment with HTTP directly by using a Telnet client to send commands to a web server. For example, if your demonstration Sakai instance is running on port 8080, the following commands will get you the login page:

telnet localhost 8080
GET /portal/login

The GET command does what it sounds like and gets a web page. Forms can use the GET verb to send data at the end of the URL. For example, GET /portal/login?name=alan&age=15 sends the variables name=alan and age=15 to the server.

Installing TCPMON

You can use the TCPMON tool to view requests and responses from a web browser such as Firefox. One of TCPMON's abilities is to act as an invisible man in the middle, recording the messages between the web browser and the server. Once set up, the requests sent from the browser go to TCPMON, which passes each request on to the server. The server passes back a response, and TCPMON, a transparent proxy, returns the response to the web browser. This allows us to look at all requests and responses graphically.

First, you set TCPMON to listen on a given port number (by convention, normally port 8888), and then you configure your web browser to send its requests through the proxy. When you type the address of a page into the web browser, instead of going directly to the relevant server, the browser sends the request to the proxy, which passes it on and relays the response back. TCPMON displays both the requests and the responses in a window.

You can download TCPMON from the Apache Commons site linked above. After downloading and unpacking, from within the build directory you can run either tcpmon.bat for the Windows environment or tcpmon.sh for the UNIX/Linux environment. To configure a proxy, click on the Admin tab, set the Listen Port to 8888, and select the Proxy radio button. After that, clicking on Add will create a new tab, where the requests and responses will be displayed later.

Your favorite web browser now has to recognize the newly set-up proxy. For Firefox 3, select the menu option Edit/Preferences, then choose the Advanced tab and the Network tab. Set the HTTP proxy to 127.0.0.1 and the port number to 8888, and ensure that the No proxies text input is blank. Clicking on the OK button enables the new settings. To use the proxy from within Internet Explorer 7 for a Local Area Network (LAN), edit the dialog box found under Tools | Internet Options | Connections | LAN settings.

Once the proxy is working, typing http://localhost:8080/portal/login in the address bar will seamlessly return the login page of your local Sakai instance. Otherwise, you will see an error message similar to Proxy Server Refused Connection for Firefox, or Internet Explorer cannot display the webpage.
To turn off the proxy settings, simply select the No Proxies radio box and click on OK for Firefox 3; for Internet Explorer 7, unselect the Use a proxy server for the LAN tick box and click on OK.

Requests and returned status codes

When TCPMON is running a proxy on port 8888, it allows you to view the requests from the browser and the responses in an extra tab. Notice the extra information that the browser sends as part of a request. HTTP/1.1 defines the protocol and version level, and the lines below GET are the header variables. The User-Agent defines which client sends the request. The Accept headers tell the server what the capabilities of the browser are, and the Cookie header defines the value stored in a cookie.

HTTP is stateless, in principle: each response is based only on the current request. However, to get around this, persistent information can be stored in cookies. Web browsers normally store their representation of a cookie as a little text file or in a small database on the end user's computer. Sakai uses the supporting features of a servlet container, such as Tomcat, to maintain state in cookies. A cookie stores a session ID, and when the server sees the session ID, it can look up the request's server-side state. This state contains information such as whether the user is logged in, or what he or she has ordered. The web browser deletes the local representation of the cookie each time the browser closes; a cookie that is deleted when a web browser closes is known as a session cookie.

The server response starts with the protocol followed by a status number. HTTP/1.1 200 OK tells the web browser that the server is using HTTP version 1.1 and was able to return the requested web page successfully:

2xx status codes imply success.
3xx status codes imply some form of redirection, and tell the web browser where to try to pick up the requested resource.
4xx status codes are for client errors, such as malformed requests or lack of permission to obtain the resource. 4xx states are fertile grounds for security managers looking in log files for attempted hacking.
5xx status codes mostly have to do with a failure of the server itself and are mostly of interest to system administrators and programmers during the debugging cycle. In most cases, 5xx status numbers are about either high server load or a broken piece of code. Sakai is changing rapidly, and even with the most vigorous testing, there are bound to be occasional hiccups.

You will find accurate details of the full range of status codes at http://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html.

Another important part of the response is Content-Type, which tells the web browser which type of material the response is returning, so the browser knows how to handle it. For example, the web browser may want to run a plug-in for video types and display text natively. Content-Length, in characters, is normally also given. After the header information is finished, there is a newline followed by the content itself.

Web browsers interpret any redirects that are returned by sending extra requests. They also interpret HTML pages and make multiple requests for resources such as JavaScript files and images. Modern browsers do not wait until the server returns all the requests, but render the HTML page live as the parts arrive.
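If you would rather script this than type into Telnet, the same conversation can be driven from PHP. The following is a small sketch, assuming a Sakai demonstration instance on localhost:8080, that opens a raw socket, sends a GET request, and prints the status line, headers, and content:

<?php
// Open a raw TCP connection to the local Sakai instance.
$fp = fsockopen('localhost', 8080, $errno, $errstr, 30);
if (!$fp) {
    die("Connection failed: $errstr ($errno)\n");
}

// HTTP/1.1 requires a Host header; Connection: close ends the exchange.
fwrite($fp, "GET /portal/login HTTP/1.1\r\n");
fwrite($fp, "Host: localhost\r\n");
fwrite($fp, "Connection: close\r\n\r\n");

// The first line printed is the status line, e.g. "HTTP/1.1 200 OK",
// followed by the headers, a blank line, and then the content.
while (!feof($fp)) {
    echo fgets($fp, 1024);
}
fclose($fp);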
The GET verb is not very efficient for posting a large amount of data, as the URL has a length limit of around 2,000 characters. Further, the end user can see the form data, and the browser may encode entities such as spaces, making the URL unreadable. There is also a security aspect: if you type passwords into forms submitted using GET, others may see your password or other details. This is not a good idea, especially at Internet cafés, where the next user who logs on can see the password in the browsing history. The POST verb is a better choice.

Let us take as an example the Sakai demonstration login page (http://localhost:8080/portal/login). The login page contains a form tag that points to the relogin page with the POST method:

<form method="post" action="http://localhost:8080/portal/relogin"
      enctype="application/x-www-form-urlencoded">

Note that the HTML tag also defines the content type. Key features of the POST request compared to GET are:

The form values are stored as content after the header values.
There is a newline between the end of the header and the data.
The request states the amount of data through the Content-Length header value.

The essential POST values for a login form with user admin (eid=admin) and password admin (pw=admin) will look like:

POST http://localhost:8080/portal/relogin HTTP/1.1
Content-Type: application/x-www-form-urlencoded
Content-Length: 31

eid=admin&pw=admin&submit=Login

POST requests can contain much more information than GET requests, and they hide the values from the address bar of the web browser. This is not secure: the header is just as visible as the URL, so POST values are neither hidden nor secure. The only viable solution is for your web browser to encrypt your transactions using SSL/TLS (http://www.ietf.org/rfc/rfc2246.txt), and this occurs every time you connect to a server using an HTTPS URL.

SOAP

Sakai uses the Apache Axis framework, which the developers have configured to accept SOAP calls via POST. SOAP sends messages in a specific XML format with the Content-Type, otherwise known as MIME type, application/soap+xml. A programmer does not need to know much more than that, as the client libraries take care of the majority of the excruciating low-level details. An example SOAP message generated by the Perl module SOAP::Lite (http://www.soaplite.com/) for creating a login session in Sakai will look like the following POST data:

<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope
    xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:xsd="http://www.w3.org/2001/XMLSchema"
    soap:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <soap:Body>
    <login>
      <c-gensym3 xsi:type="xsd:string">admin</c-gensym3>
      <c-gensym5 xsi:type="xsd:string">admin</c-gensym5>
    </login>
  </soap:Body>
</soap:Envelope>

There is an envelope with a body containing data for the service to consume. The important point to remember is that both the client and the server have to be able to parse the specific XML schema. SOAP messages can include extra security features, but Sakai does not require these; the architects expect organizations to encrypt web services using SSL/TLS.

The last extra SOAP-related complexity is the Web Service Description Language (http://www.w3.org/TR/wsdl). Web services may change location or exist in multiple locations for redundancy. The service writer can define the location of the services and the data types involved with those services in another file, in XML format.
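The article builds its SOAP examples with Perl's SOAP::Lite, but any SOAP-capable client will do. As a hedged illustration, here is how the same login call might look from PHP's built-in SoapClient; the SakaiLogin.jws endpoint path is the conventional one on a demonstration instance, so adjust the host, port, and credentials for your own server:

<?php
// Sketch: log in to a local Sakai demonstration instance over SOAP.
$client = new SoapClient(
    'http://localhost:8080/sakai-axis/SakaiLogin.jws?wsdl');

// login() returns a session identifier that subsequent web service
// calls pass along to prove the user is authenticated.
$session = $client->login('admin', 'admin');
echo "Sakai session: $session\n";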
JSON

Also worth mentioning is JavaScript Object Notation (JSON), another popular format passed over HTTP. When web developers realized that they could make browsers load parts of a web page at a time, it significantly improved the quality of the web browsing experience for the end user. This asynchronous loading enables all kinds of whiz-bang features, such as being able to choose from a set of search-term completions as you type, before pressing the Submit button. Asynchronous loading delivers more responsive and richer web pages that feel more like traditional desktop applications than plain old web pages.

JSON is one of the formats of choice for passing asynchronous requests and responses. The asynchronous communication normally occurs through HTTP GET or POST, but with a specific content structure that is designed to be human-readable and script-language parser-friendly. JSON calls have the file extension .json as part of the URL. As mentioned in RFC 4627, an example image object communicated in JSON looks like:

{
  "Image": {
    "Width": 800,
    "Height": 600,
    "Title": "View from 15th Floor",
    "Thumbnail": {
      "Url": "http://www.example.com/image/481989943",
      "Height": 125,
      "Width": "100"
    },
    "IDs": [116, 943, 234, 38793]
  }
}

By blurring the boundaries between client and server, a lot of the presentation and business logic ends up on the client side in scripting languages such as JavaScript. The scripting language orchestrates the loading of parts of pages and the generation of widget sets. Frameworks such as jQuery (http://jquery.com/) and MyFaces (http://myfaces.apache.org/) significantly ease the client-side programming burden.

REST

To understand REST, you need to understand the other verbs in HTTP (http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html). The full HTTP set is OPTIONS, GET, HEAD, POST, PUT, DELETE, and TRACE. The HEAD verb returns only the headers of the response, without the content, and is useful for clients that want to see whether the content has changed since the last request. PUT requests that the content in the request be stored at a particular location mentioned in the request. DELETE is for deleting an entity.

REST uses the URL of the request to route to the resource, and the HTTP verbs to act on it: in general, POST is for creating an item, PUT for updating an item, DELETE for deleting an item, and GET for returning information on the item. In SOAP, you point directly at the service the client calls, or indirectly via the web service description. In REST, however, part of the URL describes the resource or resources you wish to work with. For example, a hypothetical address book application that lists all e-mail addresses in HTML format would look similar to the following:

GET /email

To list the addresses in XML or JSON format:

GET /email.xml
GET /email.json

To get the first e-mail address in the list:

GET /email/1

To create a new e-mail address (remembering to add the rest of the e-mail details to the request):

POST /email

To delete address 5 from the list:

DELETE /email/5

To obtain address 5 in other formats, such as JSON or XML, use file extensions at the end of the URL:

GET /email/5.json
GET /email/5.xml

RESTful services are intuitively more descriptive than SOAP services, and they enable easy switching of the format from HTML to JSON to fuel the dynamic and asynchronous loading of websites.
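To make these verbs concrete from a client's point of view, here is a hedged PHP cURL sketch against the hypothetical /email service above; the host and paths are purely illustrative:

<?php
// Hypothetical REST client for the /email address book service.
$base = 'http://localhost:8080';

// GET /email/5.json - read item 5 in JSON format.
$ch = curl_init("$base/email/5.json");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$item = json_decode(curl_exec($ch), true);
curl_close($ch);

// DELETE /email/5 - remove the same item.
$ch = curl_init("$base/email/5");
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, 'DELETE');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
curl_close($ch);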
Due to the direct use of HTTP verbs by REST, this methodology also fits well with the most common application type: CRUD (Create, Read, Update, and Delete) applications, such as the site or user tools within Sakai. Now that we have discussed the theory, in the next section we shall discuss which Sakai-related SOAP services already exist.