
How-To Tutorials - Web Development

1802 Articles

BackTrack 5: Advanced WLAN Attacks

Packt
13 Sep 2011
4 min read
(For more resources on BackTrack, see here.)

Man-in-the-Middle attack

MITM attacks are probably one of the most potent attacks on a WLAN system. There are different configurations that can be used to conduct the attack. We will use the most common one: the attacker is connected to the Internet using a wired LAN and is creating a fake access point on his client card. This access point broadcasts an SSID similar to a local hotspot in the vicinity. A user may accidentally connect to this fake access point and continue to believe that he is connected to the legitimate access point. The attacker can now transparently forward all the user's traffic over the Internet using the bridge he has created between the wired and wireless interfaces. In the following lab exercise, we will simulate this attack.

Time for action – Man-in-the-Middle attack

Follow these instructions to get started:

1. To create the Man-in-the-Middle attack setup, we will first create a soft access point called mitm on the hacker laptop using airbase-ng. We run the command airbase-ng --essid mitm -c 11 mon0. It is important to note that airbase-ng, when run, creates an interface at0 (a tap interface). Think of this as the wired-side interface of our software-based access point mitm.

2. Let us now create a bridge on the hacker laptop, consisting of the wired (eth0) and wireless (at0) interfaces. The succession of commands used for this is: brctl addbr mitm-bridge, brctl addif mitm-bridge eth0, brctl addif mitm-bridge at0, ifconfig eth0 0.0.0.0 up, ifconfig at0 0.0.0.0 up.

3. We can assign an IP address to this bridge and check connectivity with the gateway. Please note that we could do the same using DHCP as well. We assign an IP address to the bridge interface with the command ifconfig mitm-bridge 192.168.0.199 up. We can then try pinging the gateway 192.168.0.1 to ensure we are connected to the rest of the network.

4. Let us now turn on IP forwarding in the kernel so that routing and packet forwarding can happen correctly, using echo 1 > /proc/sys/net/ipv4/ip_forward.

5. Now let us connect a wireless client to our access point mitm. It will automatically get an IP address over DHCP (the server running on the wired-side gateway). The client machine in this case receives the IP address 192.168.0.197. We can ping the wired-side gateway 192.168.0.1 to verify connectivity, and we see that the host responds to the ping requests.

6. We can also verify that the client is connected by looking at the airbase-ng terminal on the hacker machine.

7. It is interesting to note here that, because all the traffic is being relayed from the wireless interface to the wired side, we have full control over the traffic. We can verify this by starting Wireshark and sniffing on the at0 interface.

8. Let us now ping the gateway 192.168.0.1 from the client machine. We can now see the packets in Wireshark (apply a display filter for ICMP), even though the packets are not destined for us. This is the power of Man-in-the-Middle attacks!

What just happened?

We have successfully created the setup for a wireless Man-in-the-Middle attack. We did this by creating a fake access point and bridging it with our Ethernet interface. This ensured that any wireless client connecting to the fake access point would "perceive" that it is connected to the Internet via the wired LAN.
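For convenience, here are the commands from this exercise collected into one place. The interface names (mon0, eth0) and the 192.168.0.x addresses are specific to this lab setup and will differ on other systems:

# 1. Create the soft access point "mitm"; this also creates the tap interface at0
airbase-ng --essid mitm -c 11 mon0 &

# 2. Bridge the wired interface with airbase-ng's tap interface
brctl addbr mitm-bridge
brctl addif mitm-bridge eth0
brctl addif mitm-bridge at0
ifconfig eth0 0.0.0.0 up
ifconfig at0 0.0.0.0 up

# 3. Address the bridge and verify gateway connectivity
ifconfig mitm-bridge 192.168.0.199 up
ping -c 3 192.168.0.1

# 4. Enable IP forwarding in the kernel
echo 1 > /proc/sys/net/ipv4/ip_forward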
Have a go hero – Man-in-the-Middle over pure wireless

In the previous exercise, we bridged the wireless interface with a wired one. As we noted earlier, this is only one of the possible connection architectures for an MITM attack. Other combinations are possible as well. An interesting one would be to have two wireless interfaces: one creates the fake access point, and the other is connected to the authorized access point. Both these interfaces are bridged, so when a wireless client connects to our fake access point, it is connected to the authorized access point through the attacker machine. Please note that this configuration requires two wireless cards on the attacker laptop. Check if you can conduct this attack using the in-built card on your laptop along with an external one, starting from the sketch below. This should be a good challenge!
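As a hypothetical starting point for this challenge, assuming the fake AP runs on mon0 (backed by the internal card), the external card appears as wlan1, and the authorized network is open: note that many drivers refuse to bridge a station-mode wireless interface unless WDS/4-address mode is enabled, so expect driver-specific work here.

# Hypothetical sketch only: interface names and the open authorized
# network are assumptions; adjust for your drivers and encryption.
airbase-ng --essid mitm -c 11 mon0 &   # fake AP, creates the tap interface at0
iwconfig wlan1 essid "AuthorizedAP"    # associate the second card to the real AP
brctl addbr mitm-bridge
brctl addif mitm-bridge at0            # fake AP side
brctl addif mitm-bridge wlan1          # authorized AP side
ifconfig at0 0.0.0.0 up
ifconfig wlan1 0.0.0.0 up
echo 1 > /proc/sys/net/ipv4/ip_forward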

Routing in Kohana 3

Packt
12 Sep 2011
8 min read
(For more resources on this topic, see here.) The reader can benefit from the previous article on Request Flow in Kohana 3.

Routing in Kohana

If you remember, the bootstrap file comes preconfigured with a default route that follows a very simple structure:

Route::set('default', '(<controller>(/<action>(/<id>)))')
    ->defaults(array(
        'controller' => 'welcome',
        'action'     => 'index',
    ));

This tells Kohana that when it parses the URL for any request, it first finds the base_url, and then the next segment will contain the controller, then the action, then an ID. These are all optional segments, with the default controller and action being set in the array. We have taken advantage of this route with other controllers like our Profile and Messages controllers. When we visit http://localhost/egotist/profile, the route sets the controller to profile, and since no action or ID is explicitly defined in the URL, the default action of 'index' is used. When we requested http://localhost/egotist/messages/get_messages from within our Profile Controller, we also followed this route; however, neither default was needed, and the route asked for the Messages Controller and its get_messages action.

In our Profile controller, we are only using one array of example messages to test functionality and the expected behavior of our application. When we implement a data store and have multiple users with profiles in our application, we will need a way to decipher which profile a user wants to see. Because the default route already has an available parameter for ID, we can use it to pass an ID to our Profile Controller's index action, and have the Messages Controller then find the proper messages for that user.

Time for action – Making profiles dynamic using ID

Once a database is tied to our application, and more than one user has a profile, we will need some way of knowing which profile to display. A simple and effective way to do this is to pass a user ID in the route, and have our controller use that ID to find the right messages for the right user. Let's add some more test data to our messages system, and use an ID to display the right messages.

Open the Profile Controller, named profile.php, in our application/classes/controller/ directory. Since the action_index() method is the controller action that is called when a profile is viewed, we will need to edit it to look for the ID parameter in the URI, like this:

public function action_index()
{
    $content = View::factory('profile/public')
        ->set('username', 'Test User')
        ->bind('messages', $messages);
    $id = (int) $this->request->param('id');
    $messages_uri = "messages/get_messages/$id";
    $messages = Request::factory($messages_uri)->execute()->response;
    $this->template->content = $content;
}

Now, we are retrieving the ID from the route and passing it along in our request to the Messages Controller. This means that class must also be updated. Open the messages.php file located in application/classes/controller/ and modify its action_get_messages() method as follows:

public function action_get_messages()
{
    $id = (int) $this->request->param('id');
    $messages = array(
        1 => array(
            'This is test message one for user 1',
            'This is test message two for user 1',
            'This is test message three for user 1'
        ),
        2 => array(
            'This is test message one for user 2',
            'This is test message two for user 2',
            'This is test message three for user 2'
        )
    );
    $messages = array_key_exists($id, $messages) ? $messages[$id] : NULL;
    $this->request->response = View::factory('profile/messages')
        ->set('messages', $messages);
}

Open the page http://localhost/egotist/profile/index/2/ in a browser. Browsing to http://localhost/egotist/profile/index/1/ will show the messages for user 1, i.e., the test messages placed in the message array under key 1.

What just happened?

At the very beginning of our index action in our Profile Controller, we set our $id variable by getting the ID parameter from the route. Since Kohana has parsed our route for us, we can access these parameters via the request object's param() method. Once we got the ID variable, we then created and executed the request for the Messages Controller's get_messages action, and passed the ID to that method for it to use. In the Messages Controller, we used the same method to extract the ID from the request, and then used that ID to determine which messages from the messages array to display.

Although this works fine for illustrating routing for these two users, the code is far from ready, even without a data store or real user data; it does, however, show how the parameters can be read and used. Because most of the functionality in the controller will be replaced with our database and more precise data being passed around, we can overlook the incompleteness of the current controller actions, and begin looking at creating a URL that is better looking than http://localhost/egotist/profile/index/2/ for finding a user profile by ID.

Creating friendly URLs using custom routes

Consider how nice it would be if our users could browse to a profile without putting 'index' in the action portion of the URI, like this: http://localhost/egotist/profile/2. This looks much more pleasing, and is more in line with what we would like our URLs to look like in web apps. It is in fact very easy to have Kohana use a route to remove the index action from the URI. Routes not only make our URLs more pleasing and descriptive, but they also make our application easier to maintain in the long run. We have more control over where our users are being directed from how the URL is constructed, without having to create controller actions designed to handle routing.

Time for action – Creating a custom route

So far, we have been using the default route that is in our application bootstrap. As our application grows, so will the number of available 'starting points' for our users' requests. Not every controller, action, or parameter has to comply with the default route, and this gives us a lot of flexibility and freedom. We can add a custom route to handle users' profiles by adding it to our bootstrap.php file.

Open the bootstrap.php file located in the application/ directory and modify the routes block so it looks like this:

/**
 * Set the routes. Each route must have a minimum of a name, a URI,
 * and a set of defaults for the URI.
 */
Route::set('profile', 'profile/<id>')
    ->defaults(array(
        'controller' => 'profile',
        'action'     => 'index',
    ));

Route::set('default', '(<controller>(/<action>(/<id>)))')
    ->defaults(array(
        'controller' => 'welcome',
        'action'     => 'index',
    ));

Now, we can view the profile pages without having to pass the index action in the URL. Open http://localhost/egotist/profile/2 in a browser. Browsing to profiles with a more friendly URL is made possible through Kohana's routes.

What just happened?

By setting routes using the Route::set static method, we are essentially creating filters that will be used to match requests with routes. We can name these routes; in this case we have one named default, and one named profile. Kohana uses the second parameter in the set() method to compare against the requested URI, and will call the first route that matches the request. Because it uses the first route that matches, the ordering of route definitions is very important. If we put the default route before the profile route, the profile route would never be used, as the default route would always match first. Because it looks for a match, it does not use discretion when determining the right route for a request. So if we browse to http://localhost/egotist/profile/index/2, we will be directed to the default route, and get the same result. The default route may not be available for all the routes we create in the future, so create routes that are as explicit as possible for our needs. Right now, our application assumes that any data passed after a controller segment named 'profile' must be the ID we are looking for. In our current application setup, we only need digits. If a user passes non-numeric data into the URL for the ID parameter, we do not want it to match that route. This can be accomplished easily inside the Route::set() method, as sketched below.
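A sketch of that restriction, assuming the Kohana 3.0-era API used throughout this article, where Route::set() takes an optional third argument mapping parameter names to regular expressions:

// Only match when <id> is purely numeric; anything else falls
// through to the next route in the list (the default route).
Route::set('profile', 'profile/<id>', array('id' => '[0-9]+'))
    ->defaults(array(
        'controller' => 'profile',
        'action'     => 'index',
    ));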

Request Flow in Kohana 3

Packt
12 Sep 2011
12 min read
(For more resources on this topic, see here.) The reader can benefit from the previous article on Routing in Kohana 3.

Hierarchy is King in Kohana

Kohana is layered in more ways than one. First, it has a cascading filesystem. This means the framework loads files for each of its core parts in a hierarchical order, which is explained in more detail in just a bit. Next, Kohana allows controllers to initiate requests, making the application workflow follow a hierarchical design pattern. These features are the foundation of HMVC, which essentially is a cascading filesystem, flexible routing and request handling, and the ability to execute sub-requests, combined with a standard MVC pattern. The framework manages locating and loading the right file by using a core method named Kohana::find_file(). This method searches the filesystem in a predetermined order to load the proper class first. The order the method searches in is (with default paths):

1. Application path (/application)
2. Modules (/modules), as ordered in bootstrap.php
3. System path (/system)

Cascading filesystem

As the framework loads, it creates a merged filesystem based on the order of loading described above. One of the benefits of loading files this way is the ease of overloading classes that would be loaded later in the flow. We never have to, nor should we, alter a file in the system directory: we can override the default behavior of any method by overloading it in the application directory. Another great advantage is the consistency this mechanism offers. We know the exact load order for every class in any application, making it much easier to create custom code and know exactly where it needs to live.

Picture an example application being merged into the final file structure that will be used when completing a request, with some classes in the application layer overriding files in the modules. This makes it easier to visualize how modules extend and enhance the framework by building on the system core. Our application then sits on top of the system and module files, and can build on and extend the functionality of the module and system layers. Kohana also makes it easy to load third-party libraries, referred to as vendor libraries, into the filesystem.

Each of the three layers has five basic folders into which Kohana looks:

Classes (/classes) contain all autoloaded class files. This directory includes our Controllers, Models, and their supporting classes. Autoloading allows us to use classes without having to include them manually: any class inside this directory will automatically be searched for and loaded when it is used.

Config files (/config) contain arrays that can be parsed and loaded using the core method Kohana::config(). Some config files are required to configure and properly load modules, while others may be created by us to make our application easier to maintain, or to keep vendor libraries tidy by moving config data into the framework. Config files are the only files in the cascading filesystem that are not overloaded; all config files are merged with their parent files.

Internationalization files (/i18n) make it much easier to create language files that work with our applications to deliver the content that best suits the language of our users.

Messages (/messages) are much like configuration files, in that they are arrays that are loaded by a core Kohana method: Kohana::message() parses and returns the messages for a specific array.
This functionality is very useful when creating forms and actions for our applications.

View files (/views) are the presentation layer of our applications, where view files and template files live.

Request flow in Kohana

Now that we have seen how the framework merges files to create the set of files to load on request, this is a good place to look at the flow of a request through Kohana. Remember that controllers can invoke requests, creating a bit of a loop between controllers, models, and views, but the framework always runs in the same order, beginning with the index.php file.

The index.php file sets the paths to the application, modules, and system directories and saves them as constants that are then defined for global use. These constants are APPPATH, MODPATH, and SYSPATH, and they hold the application, modules, and system paths respectively. After the error-reporting levels are set, Kohana looks to see if the install.php file exists. If the install file is not found, Kohana takes the next step in loading the framework by loading the core Kohana class. Next, the index file looks for the application's Kohana class, first in the application directory, then in the system path. This is the first example of Kohana looking for our files before it looks for its own. The last thing the index file does is bootstrap our application, by requiring the bootstrap.php file and loading it.

You probably remember having to configure the bootstrap file when we installed Kohana. This is the file in which we set our base URL and modules; however, it is a bit more important than just basic installation and configuration. The bootstrap begins by setting some basic environment settings, like the default timezone and locale. Next, it enables the autoloader and defines the application environment. This tells the framework whether our application is in a production or development environment, allowing it to make decisions based on environment-specific settings. Next, the default options are set and Kohana is initialized, with the Kohana::init() method being called. After Kohana's initialization, it sets up logging, configuration reading, and then modules.

Modules are defined using an array, with the module name as the key and the path to the module as the value. This array determines the module load order, and modules are all subject to the same rules and conventions as core and application code. Each module is added to the cascading filesystem as described above, allowing its files to override any that are added later when the system files are merged. Modules can contain their own init files, named init.php, that act similar to the application bootstrap, adding routes specific to the module for our application to use.

The last thing the bootstrap does in a normal request flow is to set the routes for the application. Kohana ships with a default route that loads the index action in the welcome controller. The Route object's set() method accepts the route name, the URI pattern, and an array of defaults for the parameters in the URI. By setting default controllers, actions, and params, we can have dynamic URLs that have default values if none are passed. If no controller or action is passed in the URI on a vanilla Kohana install, the welcome controller will be loaded and the index action invoked, as outlined in the array passed to the Route::set() method in the bootstrap. Both of these bootstrap steps are sketched below.
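A sketch of how these last two steps typically look at the bottom of a Kohana 3 bootstrap.php; the module names and paths are illustrative rather than taken from the case-study application:

// Enable modules, in load order: module name => path.
Kohana::modules(array(
    'database'  => MODPATH.'database',
    'orm'       => MODPATH.'orm',
    'userguide' => MODPATH.'userguide',
));

// The default route shipped with Kohana: controller, action, and id
// are all optional segments, falling back to welcome/index.
Route::set('default', '(<controller>(/<action>(/<id>)))')
    ->defaults(array(
        'controller' => 'welcome',
        'action'     => 'index',
    ));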
Once all the application routes are set via the Route::set() method and the init.php files residing in modules, Request::instance() is invoked, setting the request loop into action. As the Request object processes the request, it looks through the routes until it finds the right controller to load. The Request object then instantiates the controller, passing the request to the controller for it to use. The Controller::before() method is then called, which acts much like a constructor. Because it is called first, the before() method allows any logic that needs to be performed before a controller action to execute. Once the before() method is complete, the object continues to load the requested functions, just like when a constructor is complete. The controller action, a method in the controller class, is then called, and once complete, it returns the request response. The action method is where the business logic for the request resides. Once the action is complete, the Controller::after() method is called, much like the destructor of a standard PHP class. Because of the hierarchical structure of Kohana, any controller can initiate a new request, making it possible for other controllers to be loaded, invoking more controller actions, which generate request responses. Once all the requests have been fulfilled, Kohana renders the final request response.

The Kohana request flow can seem long and complex, but it can also be looked at as very clean and organized. By using a front controller design pattern, all requests are handled by just one file: index.php. Each and every request that is handled by our applications will begin with this one file. From there the application is bootstrapped, and then the controller designed to handle the specific request is found, executed, and displayed. Although, as we have seen, there is more happening underneath, for most of our applications this simple way of looking at the request flow will make it easy to create powerful web applications using Kohana.

Using the Request object

The request flow in Kohana is interesting, and it is easy to see how it can be powerful on a high level, but the best way to understand HMVC and routing in Kohana is to look at some actual code, and see what the resulting outcome is for real-world scenarios. Kohana's Request object determines the proper controller to invoke, and acts as a wrapper for the response. If we look at the Template Controller that we are extending in our Application Controller for the case study site, we can follow the inheritance path back to Kohana's template controller, and see the request response. One of the best ways to understand what is happening inside the framework is to drill down through the filesystem and look at the actual code. One of the great advantages of open source frameworks is the ability to read the code that makes the library run. Opening the welcome controller located at application/classes/controller/welcome.php, we see the following class declaration:

class Controller_Welcome extends Controller_Application

The first thing we see in the base controller class is that it extends another Controller, and then we see the declaration of an object property named $request. This variable holds the Kohana_Request object, the class that created the original controller call. In the constructor, we can see that the Kohana_Request object is type-hinted for the argument, and that it sets the $request object property on instantiation.
All that is left in the base Controller class is the before() and after() methods, with no functionality. We can then open our Application Controller, located at application/classes/controller/application.php. The class declaration in this controller looks like this:

abstract class Controller_Application extends Controller_Template

In this file, we can see the before() method loading the template view into the template variable, and in the after() method we see the Request object ($this->request) having its response body set, ready to be rendered. This class, in turn, extends the Template Controller. The Template Controller is part of the Kohana system. Since we have not created any template controllers in our application or modules, the original template controller that ships with Kohana is being loaded. It is located at system/classes/controller/template.php. The class declaration in this controller looks like:

abstract class Controller_Template extends Kohana_Controller_Template

Here things take a twist, and for the first time we are going to have to leave the /classes/controller/ structure to find an inherited class. The Kohana_Controller_Template class lives in system/classes/kohana/controller/template.php. The class is fairly short and simple, and it has this class declaration:

abstract class Kohana_Controller_Template extends Controller

This controller (system/classes/controller.php) is the base controller that all requested controller classes must extend. Examining this class lets us see the Request class enter the controller loop and the template view get sent to the Request object as the response. Walking through the Welcome Controller's heritage is a great way of seeing how the Request object loads a controller, and how the parent classes all contribute to the request flow in Kohana. It may seem like pointless complexity at first; however, the benefits of transparent extension are very powerful, and Kohana makes the mechanisms work behind the scenes.

But one question still remains: how is the Request object aware of the controllers and routes? Although dissecting Kohana's Request class could be an article unto itself, a lot can be answered by looking at the constructor in the system/classes/kohana/request.php file. The constructor is given the URI, and then stores the object properties that the object will later use to execute the request. The Request class does have a couple of key methods that can be very helpful. The first is Request::controller(), which returns the name of the controller for the request, and the other is Request::action(), which similarly returns the name of the action for the request. After loading the routes, the constructor iterates through them, determines any matches for the URI, and begins setting the controller, action, and parameters for the route. If there are no matches for optional segments of the route, the defaults are stored in the object properties.

When the Request object's execute() method is called, the request is processed and the response is returned. This happens by first processing the before() method, then the controller action being requested, then the after() method for the class, followed by any other requests until all have completed. The topic of initiating a request from within a controller has arisen a few times, and it is the best way to illustrate the hierarchical aspect of HMVC. Let's take a look at this process by creating a controller method that initiates a new request, and see how it completes.
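As a minimal sketch of such a sub-request, using the same Request::factory()->execute() API the Profile Controller used earlier in this series (the URI and property names here are illustrative, not the case study's final code):

public function action_index()
{
    // Build and execute an internal (HMVC) sub-request. Kohana matches
    // the URI against the routes, runs the matched controller action,
    // and returns the request object so we can read its response.
    $messages = Request::factory('messages/get_messages/1')->execute()->response;

    // Embed the sub-request's output in the parent request's template.
    $this->template->content = $messages;
}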

Storing Passwords Using FreeRADIUS Authentication

Packt
08 Sep 2011
6 min read
In the previous article we covered the authentication methods used when working with FreeRADIUS. This article by Dirk van der Walt, author of FreeRADIUS Beginner's Guide, teaches methods for storing passwords and how they work. Passwords do not need to be stored in clear text, and it is better to store them in a hashed format. There are, however, limitations to the kinds of authentication protocols that can be used when the passwords are stored as a hash, which we will explore in this article.

(For more resources on this subject, see here.)

Storing passwords

Username and password combinations have to be stored somewhere. The following list mentions some of the popular places:

- Text files: You should be familiar with this method by now.
- SQL databases: FreeRADIUS includes modules to interact with SQL databases. MySQL is very popular and widely used with FreeRADIUS.
- Directories: Microsoft's Active Directory or Novell's eDirectory are typical enterprise-size directories. OpenLDAP is a popular open source alternative.

The users file and the SQL database that can be used by FreeRADIUS store the username and password as AVPs. When the value of this AVP is in clear text, it can be dangerous if the wrong person gets hold of it. Let's see how this risk can be minimized.

Hash formats

To reduce this risk, we can store the passwords in a hashed format. A hashed format of a password is like a digital fingerprint of that password's text value. There are many different ways to calculate this hash, for example MD5 or SHA1. The end result of a hash should be a one-way, fixed-length encrypted string that uniquely represents the password. It should be impossible to retrieve the original password out of the hash. To make the hash even more secure and more immune to dictionary attacks, we can add a salt to the function that generates the hash. A salt is randomly generated bits used in combination with the password as input to the one-way hash function. With FreeRADIUS we store the salt along with the hash. It is therefore essential to have a random salt with each hash to make a rainbow table attack difficult.

The pap module, which is used for PAP authentication, can use passwords stored in several hash formats to authenticate users; the most common ones are covered below. Both the MD5 and SHA1 hash functions can be used with a salt to make them more secure.

Time for action – hashing our password

We will replace the Cleartext-Password AVP in the users file with a more secure hashed password AVP in this section. There seems to be general confusion on how the hashed password should be created and presented. We will help you clarify this issue in order to produce working hashes for each format. A valuable URL to assist us with the hashes is the OpenLDAP FAQ: http://www.openldap.org/faq/data/cache/419.html. There are a few sections that show how to create different types of password hashes. We can adapt this for our own use in FreeRADIUS.

Crypt-Password

Crypt password hashes have their origins in Unix computing. Stronger hashing methods are preferred over crypt, although crypt is still widely used. The following Perl one-liner will produce a crypt password for passme with the salt value of 'salt':

#> perl -e 'print(crypt("passme","salt")."\n");'

Use this output and change Alice's check entry in the users file from:

"alice" Cleartext-Password := "passme"

to:

"alice" Crypt-Password := "sa85/iGj2UWlA"

Restart the FreeRADIUS server in debug mode. Run the authentication request against it again.
Ensure that pap now uses the crypt password by looking for the following line in the FreeRADIUS debug feedback:

[pap] Using CRYPT password "sa85/iGj2UWlA"

MD5-Password

The MD5 hash is often used to check the integrity of a file. When downloading a Linux ISO image you are also typically supplied with the MD5 sum of the file. You can then confirm the integrity of the file by using the md5sum command. We can also generate an MD5 hash from a password. We will use Perl to generate and encode the MD5 hash in the correct format required by the pap module. The creation of this password hash involves external Perl modules, which you may have to install before the script can be used. The following steps will show you how:

1. Create a Perl script with the following contents; we'll name it 4088_04_md5.pl:

#! /usr/bin/perl -w
use strict;
use Digest::MD5;
use MIME::Base64;
unless($ARGV[0]){
    print "Please supply a password to create a MD5 hash from.\n";
    exit;
}
my $ctx = Digest::MD5->new;
$ctx->add($ARGV[0]);
print encode_base64($ctx->digest,'')."\n";

2. Make the 4088_04_md5.pl file executable:

chmod 755 4088_04_md5.pl

3. Get the MD5 password for passme:

./4088_04_md5.pl passme

4. Use this output and update Alice's entry in the users file to:

"alice" MD5-Password := "ugGBYPwm4MwukpuOBx8FLQ=="

5. Restart the FreeRADIUS server in debug mode. Run the authentication request against it again. Ensure that pap now uses the MD5 password by looking for the following line in the FreeRADIUS debug feedback:

[pap] Using MD5 encryption.

SMD5-Password

This is an MD5 password with salt. The creation of this password hash also involves external Perl modules, which you may have to install before the script can be used.

1. Create a Perl script with the following contents; we'll name it 4088_04_smd5.pl:

#! /usr/bin/perl -w
use strict;
use Digest::MD5;
use MIME::Base64;
unless(($ARGV[0])&&($ARGV[1])){
    print "Please supply a password and salt to create a salted MD5 hash from.\n";
    exit;
}
my $ctx = Digest::MD5->new;
$ctx->add($ARGV[0]);
my $salt = $ARGV[1];
$ctx->add($salt);
print encode_base64($ctx->digest . $salt ,'')."\n";

2. Make the 4088_04_smd5.pl file executable:

chmod 755 4088_04_smd5.pl

3. Get the SMD5 value for passme using a salt value of 'salt':

./4088_04_smd5.pl passme salt

Remember that you should use a random value for the salt. We only used 'salt' here for the demonstration.

4. Use this output and update Alice's entry in the users file to:

"alice" SMD5-Password := "Vr6uPTrGykq4yKig67v5kHNhbHQ="

5. Restart the FreeRADIUS server in debug mode. Run the authentication request against it again. Ensure that pap now uses the SMD5 password by looking for the following line in the FreeRADIUS debug feedback:

[pap] Using SMD5 encryption.
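The pap module can also verify salted SHA1 (SSHA) passwords. As an exercise, the SMD5 script above can be adapted for SHA1; this is a sketch under the assumption that the Digest::SHA1 CPAN module is installed and that the SSHA-Password attribute in your FreeRADIUS 2.x dictionary expects the same digest-plus-salt Base64 layout as SMD5 (verify both against your version):

#! /usr/bin/perl -w
# Hypothetical SSHA variant of 4088_04_smd5.pl: SHA1 over password+salt,
# with the salt appended to the digest before Base64 encoding.
use strict;
use Digest::SHA1;
use MIME::Base64;
unless(($ARGV[0])&&($ARGV[1])){
    print "Please supply a password and salt to create a salted SHA1 hash from.\n";
    exit;
}
my $ctx = Digest::SHA1->new;
$ctx->add($ARGV[0]);    # the password
my $salt = $ARGV[1];
$ctx->add($salt);       # the salt also goes into the hash input
print encode_base64($ctx->digest . $salt, '')."\n";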

FreeRADIUS: Working with Authentication Methods

Packt
08 Sep 2011
6 min read
Authentication is a process where we establish if someone is who he or she claims to be. The most common way is by a unique username and password. This article by Dirk van der Walt, author of FreeRADIUS Beginner's Guide, teaches authentication methods and how they work. Extensible Authentication Protocol (EAP) is covered later in a dedicated article.

In this article we shall:

- Discuss the PAP, CHAP, and MS-CHAP authentication protocols
- See when and how authentication is done in FreeRADIUS
- Explore ways to store passwords
- Look at other authentication methods

(For more resources on this subject, see here.)

Authentication protocols

This section will give you background on three common authentication protocols. These protocols involve the supply of a username and password. The radtest program uses the Password Authentication Protocol (PAP) by default when testing authentication. PAP is not the only authentication protocol, but it is probably the most generic and widely used. The authentication protocols you should know about are PAP, CHAP, and MS-CHAP. Each of these protocols involves a username and password. The next article, on the Extensible Authentication Protocol (EAP), will introduce us to more authentication protocols.

An authentication protocol is typically used on the data link layer that connects the client with the NAS. The network layer will only be established after the authentication is successful. The NAS acts as a broker to forward the requests from the user to the RADIUS server. The data link layer and network layer are layers inside the Open Systems Interconnection model (OSI model). A discussion of this model can be found in almost any book on networking: http://en.wikipedia.org/wiki/OSI_model

PAP

PAP was one of the first protocols used to facilitate the supply of a username and password when making point-to-point connections. With PAP the NAS takes the PAP ID and password and sends them in an Access-Request packet as the User-Name and User-Password. PAP is simpler compared to CHAP and MS-CHAP because the NAS simply hands the RADIUS server a username and password, which are then checked. This username and password come directly from the user through the NAS to the server in a single action.

Although PAP transmits passwords in clear text, using it should not always be frowned upon. This password is only in clear text between the user and the NAS. The user's password will be encrypted when the NAS forwards the request to the RADIUS server. If PAP is used inside a secure tunnel it is as secure as the tunnel. This is similar to when your credit card details are tunnelled inside an HTTPS connection and delivered to a secure web server. HTTPS stands for Hypertext Transfer Protocol Secure and is a web standard that uses Secure Socket Layer/Transport Layer Security (SSL/TLS) to create a secure channel over an insecure network. Once this secure channel is established, we can transfer sensitive data, like credit card details, through it. HTTPS is used daily to secure many millions of transactions over the Internet. See the following schematic of a typical captive portal configuration.

A PAP Access-Request involves two AVPs: User-Name and User-Password. As you can see, the value of User-Password is encrypted between the NAS and the RADIUS server. Transporting the user's password from the user to the NAS may be a security risk if it can be captured by a third party.

CHAP

CHAP stands for Challenge-Handshake Authentication Protocol and was designed as an improvement to PAP.
It prevents you from transmitting a cleartext password. CHAP was created in the days when dial-up modems were popular and concern about PAP's cleartext passwords was high. After a link is established to the NAS, the NAS generates a random challenge and sends it to the user. The user then responds to this challenge by returning a one-way hash calculated over an identifier (sent along with the challenge), the challenge, and the user's password. The user's response is then used by the NAS to create an Access-Request packet, which is sent to the RADIUS server. Depending on the reply from the RADIUS server, the NAS will return CHAP Success or CHAP Failure to the user. The NAS can also request at random intervals that the authentication process be repeated by sending a new challenge to the user. This is another reason why it is considered more secure than PAP.

One major drawback of CHAP is that although the password is transmitted encrypted, the password source has to be in clear text for FreeRADIUS to perform password verification. The FreeRADIUS FAQ discusses the dangers of transmitting a cleartext password compared to storing all the passwords in clear text on the server. The AVPs involved in a CHAP request are User-Name and CHAP-Password, with the challenge carried either in the CHAP-Challenge AVP or in the Request Authenticator field.

MS-CHAP

MS-CHAP is a challenge-handshake authentication protocol created by Microsoft. There are two versions, MS-CHAP version 1 and MS-CHAP version 2. The challenge sent by the NAS is identical in format to the standard CHAP challenge packet. This includes an identifier and an arbitrary challenge. The response from the user is also identical in format to the standard CHAP response packet. The only difference is the format of the Value field, which is sub-formatted to contain MS-CHAP-specific fields. One of these fields (NT-Response) contains the username and password in a very specific encrypted format. The reply from the user will be used by the NAS to create an Access-Request packet, which is sent to the RADIUS server. Depending on the reply from the RADIUS server, the NAS will return a Success Packet or Failure Packet to the user.

The RADIUS server is not involved in sending out the challenge. If you sniff the RADIUS traffic between an NAS and a RADIUS server, you can confirm that there is only an Access-Request followed by an Access-Accept or Access-Reject. The sending of a challenge to the user, and the receiving of a response from her or him, happens between the NAS and the user. MS-CHAP also has some enhancements that are not part of CHAP, like the user's ability to change his or her password, or the inclusion of more descriptive error messages. The protocol is tightly integrated with the LAN Manager and NT password hashes. FreeRADIUS will convert a user's cleartext password to an LM-Password and an NT-Password in order to determine if the password hash that came out of the MS-CHAP request is correct.

Although there are known weaknesses with MS-CHAP, it remains widely used and very popular. Never say never: if your current requirement for the RADIUS deployment does not include the use of MS-CHAP, rather cater for the possibility that one day you may use it. The most popular EAP protocol makes use of MS-CHAP, and EAP is crucial in Wi-Fi authentication. Because MS-CHAP is vendor specific, VSAs instead of AVPs are part of the Access-Request between the NAS and RADIUS server. These are used together with the User-Name AVP. Now that we know more about the authentication protocols, let's see how FreeRADIUS handles them.
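You can also drive each of these protocols from the command line with radtest; this sketch assumes a FreeRADIUS 2.x radtest that supports the -t type flag (check your version's usage output) and reuses the alice/passme/testing123 test values used elsewhere in this series:

radtest -t pap    alice passme 127.0.0.1 100 testing123   # password sent as User-Password
radtest -t chap   alice passme 127.0.0.1 100 testing123   # password proven via CHAP-Password
radtest -t mschap alice passme 127.0.0.1 100 testing123   # password proven via MS-CHAP VSAs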

Getting Started with FreeRADIUS

Packt
07 Sep 2011
6 min read
After FreeRADIUS has been installed, it needs to be configured for our requirements. This article by Dirk van der Walt, author of FreeRADIUS Beginner's Guide, will help you to get familiar with FreeRADIUS. It assumes that you already know the basics of the RADIUS protocol.

In this article we shall:

- Perform a basic configuration of FreeRADIUS and test it
- Discover ways of getting help
- Learn the recommended way to configure and test FreeRADIUS
- See how everything fits together with FreeRADIUS

(For more resources on this subject, see here.)

Before you start

This article assumes that you have a clean installation of FreeRADIUS. You will need root access to edit FreeRADIUS configuration files for the basic configuration and testing.

A simple setup

We start this article by creating a simple setup of FreeRADIUS with the following:

- The localhost defined as an NAS device (RADIUS client)
- Alice defined as a test user

After we have defined the client and the test user, we will use the radtest program to fill the role of a RADIUS client and test the authentication of Alice.

Time for action – configuring FreeRADIUS

FreeRADIUS is set up by modifying configuration files. The location of these files depends on how FreeRADIUS was installed:

- If you have installed the standard FreeRADIUS packages that are provided with the distribution, it will be under /etc/raddb on CentOS and SLES. On Ubuntu it will be under /etc/freeradius.
- If you have built and installed FreeRADIUS from source using the distribution's package management system, it will also be under /etc/raddb on CentOS and SLES, and under /etc/freeradius on Ubuntu.
- If you have compiled and installed FreeRADIUS using configure, make, make install, it will be under /usr/local/etc/raddb.

The following instructions assume that the FreeRADIUS configuration directory is your current working directory:

1. Ensure that you are root in order to be able to edit the configuration files.

2. FreeRADIUS includes a default client called localhost. This client can be used by RADIUS client programs on the localhost to help with troubleshooting and testing. Confirm that the following entry exists in the clients.conf file:

client localhost {
    ipaddr = 127.0.0.1
    secret = testing123
    require_message_authenticator = no
    nastype = other
}

3. Define Alice as a FreeRADIUS test user. Add the following lines at the top of the users file. Make sure the second and third lines are indented by a single tab character:

"alice" Cleartext-Password := "passme"
	Framed-IP-Address = 192.168.1.65,
	Reply-Message = "Hello, %{User-Name}"

4. Start the FreeRADIUS server in debug mode. Make sure that there is no other instance running by shutting it down through the startup script. We assume Ubuntu in this case:

$> sudo su
#> /etc/init.d/freeradius stop
#> freeradius -X

You can also use the more brutal method of kill -9 $(pidof freeradius) or killall freeradius on Ubuntu, and kill -9 $(pidof radiusd) or killall radiusd on CentOS and SLES, if the startup script does not stop FreeRADIUS.

5. Ensure FreeRADIUS has started correctly by confirming that the last line on your screen says the following:

Ready to process requests.

If this did not happen, read through the output of the FreeRADIUS server started in debug mode to see what problem was identified, and a possible location thereof.

6. Authenticate Alice using the following command:

$> radtest alice passme 127.0.0.1 100 testing123

The debug output of FreeRADIUS will show how the Access-Request packet arrives and how the FreeRADIUS server responds to this request.
Radtest will also show the response of the FreeRADIUS server:

Sending Access-Request of id 17 to 127.0.0.1 port 1812
        User-Name = "alice"
        User-Password = "passme"
        NAS-IP-Address = 127.0.1.1
        NAS-Port = 100
rad_recv: Access-Accept packet from host 127.0.0.1 port 1812, id=147, length=40
        Framed-IP-Address = 192.168.1.65
        Reply-Message = "Hello, alice"

What just happened?

We have created a test user on the FreeRADIUS server. We have also used the radtest command as a client to the FreeRADIUS server to test authentication. Let's elaborate on some interesting and important points.

Configuring FreeRADIUS

Configuration of the FreeRADIUS server is logically divided into different files. These files are modified to configure a certain function, component, or module of FreeRADIUS. There is, however, a main configuration file that sources the various sub-files. This file is called radiusd.conf. The default configuration is suitable for most installations. Very few changes are required to make FreeRADIUS useful in your environment.

Clients

Although there are many files inside the FreeRADIUS server configuration directory, only a few require further changes. The clients.conf file is used to define clients to the FreeRADIUS server. Before an NAS can use the FreeRADIUS server, it has to be defined as a client on the FreeRADIUS server. Let's look at some points about client definitions.

Sections

A client is defined by a client section. FreeRADIUS uses sections to group and define various things. A section starts with a keyword indicating the section name, followed by enclosing brackets. Inside the enclosing brackets are various settings specific to that section. Sections can also be nested. Sometimes the section's keyword is followed by a single word to differentiate between sections of the same type. This allows us to have different client entries in clients.conf. Each client has a short name to distinguish it from the others. The clients.conf file is not the only file where client sections can be defined, although it is the usual and most logical place; nested client definitions can also appear inside a server section.

Client identification

The FreeRADIUS server identifies a client by its IP address. If an unknown client sends a request to the server, the request will be silently ignored.

Shared secret

The client and server are also required to have a shared secret, which will be used to encrypt and decrypt certain AVPs. The value of the User-Password AVP is encrypted using this shared secret. When the shared secret differs between the client and server, the FreeRADIUS server will detect it and warn you when running in debug mode:

Failed to authenticate the user.
WARNING: Unprintable characters in the password. Double-check the shared secret on the server and the NAS!

Message-Authenticator

When defining a client you can enforce the presence of the Message-Authenticator AVP in all the requests. Since we will be using the radtest program, which does not include it, we disable it for localhost by setting require_message_authenticator to no.

Nastype

The nastype is set to other. The value of nastype determines how the checkrad Perl script behaves. Checkrad is used to determine if a user is already using resources on an NAS. Since localhost does not have this function or need, we define it as other.
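Pulling these client options together, a sketch of what an additional client entry for a real NAS might look like; the short name, IP address, and secret below are placeholders rather than values from the book:

client office-ap {
    ipaddr = 192.168.1.10        # clients are identified by their source IP
    secret = S0meStr0ngSecret    # shared secret used to encrypt AVPs such as User-Password
    nastype = other              # controls how the checkrad script behaves
}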
Common errors

If the server is down, or the packets from radtest cannot reach the server because of a firewall between them, radtest will try three times and then give up with the following message:

radclient: no response from server for ID 133 socket 3

If you run radtest as a normal user, it may complain about not having access to the FreeRADIUS dictionary file, which is required for normal operations. The way to solve this is either to change the permissions on the reported file or to run radtest as root.

How to Add Static Material to a Course in Moodle

Packt
06 Sep 2011
7 min read
(For more resources on Moodle, see here.)

Kinds of static course material that can be added

Static course material is added from the Add a resource drop-down menu. Using this menu, you can create:

- Web pages
- Links to anything on the Web
- Files
- A label that displays any text or image
- Multimedia

Adding links

On your Moodle site, you can show content from anywhere on the Web by using a link. You can also link to files that you've uploaded into your course. By default, this content appears in a frame within your course. You can also choose to display it in a new window. When using content from outside sites, you need to consider the legality and reliability of using the link. Is it legal to display the material on your Moodle site? Will the material still be there when your course is running? In this example, I've linked to an online resource from the BBC, which is a fairly reliable source.

Remember that the bottom of the window displays Window Settings, so you can choose to display this resource in its own window. You can also set the size of the window. You may want to make it appear in a smaller window, so that it does not completely obscure the window of your Moodle site. This will make it clearer to the student that he or she has opened a new window.

To add a link to a resource on the Web:

1. Log in to your course as a Teacher or Site Administrator.

2. In the upper-right corner of the page, if you see a button that reads Turn editing on, click on it. If it reads Turn editing off, then you do not need to click on it. (You will also find this button in the Settings menu on the leftmost side of the page.)

3. From the Add a resource... drop-down menu, select URL. Moodle displays the Adding a new URL page.

4. Enter a Name for the link. This is the name that people will see on the home page of your course.

5. Enter a Description for the link. When the student sees the course's home page, they will see the Name and not the Description. However, whenever this resource is selected from the Navigation bar, the Description will be displayed.

6. In the External URL field, enter the Web address for this link.

7. From the Display drop-down menu, select the method that you want Moodle to use when displaying the linked page. Embed will insert the linked page into a Moodle page: your students will see the Navigation bar, any blocks that you have added to the course, and navigation links across the top of the page, just like when they view any other page in Moodle, while the center of the page will be occupied by the linked page. Open will take the student away from your site, and open the linked page in the window that was occupied by Moodle. In pop-up will launch a new window containing the linked page on top of the Moodle page. Automatic will make Moodle choose the best method for displaying the linked page.

8. The checkboxes for Display URL name and Display URL description affect the display of the page only if Embed is chosen as the display method. If selected, the Name of the link will be displayed above the embedded page, and the Description will be displayed below it.

9. Under Options, the Show Advanced button will display fields that allow you to set the size of the pop-up window. If you don't select In pop-up as the display method, these fields have no effect.

10. Under Parameters, you can add parameters to the link.
In a Web link, a parameter adds information about the course or student to the link. If you have Web programming experience, you might take advantage of this feature. For more about passing parameters in URLs, see http://en.wikipedia.org/wiki/Query_string.

11. Under Common Module Settings, the Visible setting determines whether this resource is visible to students. Teachers and site Administrators can always see the resource. Setting this to Hide will completely hide the resource. Teachers can hide some resources and activities at the beginning of a course, and reveal them as the course progresses.

Show/Hide versus Restrict availability: If you want a resource to be visible, but not available, then use the Restrict Availability settings further down the page. Those settings enable you to have a resource's name and its description appear, but still make the resource unavailable. You might want to do this for resources that will be used later in a course, when you don't want the student to work ahead of the syllabus.

12. The ID number field allows you to enter an identifier for this resource, which will appear in the Gradebook. If you export grades from the Gradebook and then import them into an external database, you might want the ID number here to match the ID number that you use in that database.

13. The Restrict Availability settings allow you to set two kinds of conditions that control whether this resource is available to the student. The Accessible from and Accessible until settings enable you to set dates for when this resource will be available. The Grade condition setting allows you to specify the grade that a student must achieve in another Activity in this course before being able to access this Resource. The setting for Before activity is available determines whether the Resource will be visible while it is unavailable; if it is visible but unavailable, Moodle will display the conditions needed to make it available (achieve a grade, wait for a date, and so on).

14. Click on one of the Save buttons at the bottom of the page to save your work.

Adding pages

Under the Add a resource drop-down menu, select Page to add a Web page to a course. A link to the page that you create will appear on the course's home page.

Moodle's HTML editor

When you add a Page to your course, Moodle displays a Web page editor. This editor is based on an open source web page editor called TinyMCE. You can use this editor to compose a web page for your course. This page can contain almost anything that a web page outside of Moodle can contain.

Pasting text into a Moodle page

Many times, we prefer to write text in our favorite word processor instead of writing it in Moodle. Or we may find text elsewhere that we can legally copy and paste into a Moodle page. Moodle's text editor allows you to do this. To paste text into a page, you can just use the appropriate keyboard shortcut: try Ctrl + V for Windows PCs and Apple + V for Macintoshes. If you use this method, the format of the text will be preserved. To paste plain text, without the format of the original text, click on the Paste as Plain Text icon instead.

When you paste text from a Microsoft Word document into a web page, it usually includes a lot of non-standard HTML code. This code doesn't work well in all browsers, and makes it more difficult to edit the HTML code in your page. Many advanced web page editors, such as Adobe Dreamweaver, have the ability to clean up Word HTML code. Moodle's web page editor can also clean up Word HTML code.
When pasting text that was copied from Word, use the Paste from Word icon. This will strip out most of Word's non-standard HTML code.

Yii 1.1: Using Zii Components

Packt
05 Sep 2011
10 min read
(For more resources on this subject, see here.)

Introduction

Yii has a useful library called Zii. It's bundled with the framework and includes some classes aimed at making the developer's life easier. Its most handy components are grids and lists, which allow you to present data in both the admin and user parts of a website in a very fast and efficient way. In this article you'll learn how to use and adjust these components to fit your needs. You'll also learn about data providers. They are part of the core framework, not Zii, but since they are used extensively with grids and lists, we'll review them here. We'll use the Sakila sample database, version 0.8, available from the official MySQL website: http://dev.mysql.com/doc/sakila/en/sakila.html.

Using data providers

Data providers are used to encapsulate common data model operations such as sorting, pagination, and querying. They are used extensively with grids and lists. Because both widgets and providers are standardized, you can display the same data using different widgets, and you can get data for a widget from various providers. Switching providers and widgets is relatively transparent. Currently there are CActiveDataProvider, CArrayDataProvider, and CSqlDataProvider, implemented to get data from ActiveRecord models, arrays, and SQL queries respectively. Let's try all these providers to fill a grid with data.

Getting ready

1. Create a new application using yiic webapp as described in the official guide.
2. Download the Sakila database from http://dev.mysql.com/doc/sakila/en/sakila.html and execute the downloaded SQLs: first schema, then data.
3. Configure the DB connection in protected/config/main.php.
4. Use Gii to create a model for the film table.

How to do it...

1. Let's start with a view for a grid controller. Create protected/views/grid/index.php:

<?php $this->widget('zii.widgets.grid.CGridView', array(
    'dataProvider' => $dataProvider,
))?>

2. Then create protected/controllers/GridController.php:

<?php
class GridController extends Controller
{
    public function actionAR()
    {
        $dataProvider = new CActiveDataProvider('Film', array(
            'pagination'=>array(
                'pageSize'=>10,
            ),
            'sort'=>array(
                'defaultOrder'=>array('title'=>false),
            )
        ));
        $this->render('index', array(
            'dataProvider' => $dataProvider,
        ));
    }

    public function actionArray()
    {
        $yiiDevelopers = array(
            array(
                'name'=>'Qiang Xue',
                'id'=>'2',
                'forumName'=>'qiang',
                'memberSince'=>'Jan 2008',
                'location'=>'Washington DC, USA',
                'duty'=>'founder and project lead',
                'active'=>true,
            ),
            array(
                'name'=>'Wei Zhuo',
                'id'=>'3',
                'forumName'=>'wei',
                'memberSince'=>'Jan 2008',
                'location'=>'Sydney, Australia',
                'duty'=>'project site maintenance and development',
                'active'=>true,
            ),
            array(
                'name'=>'Sebastián Thierer',
                'id'=>'54',
                'forumName'=>'sebas',
                'memberSince'=>'Sep 2009',
                'location'=>'Argentina',
                'duty'=>'component development',
                'active'=>true,
            ),
            array(
                'name'=>'Alexander Makarov',
                'id'=>'415',
                'forumName'=>'samdark',
                'memberSince'=>'Mar 2010',
                'location'=>'Russia',
                'duty'=>'core framework development',
                'active'=>true,
            ),
            array(
                'name'=>'Maurizio Domba',
                'id'=>'2650',
                'forumName'=>'mdomba',
                'memberSince'=>'Aug 2010',
                'location'=>'Croatia',
                'duty'=>'core framework development',
                'active'=>true,
            ),
            array(
                'name'=>'Y!!',
                'id'=>'1644',
                'forumName'=>'Y!!',
                'memberSince'=>'Aug 2010',
                'location'=>'Germany',
                'duty'=>'core framework development',
                'active'=>true,
            ),
            array(
                'name'=>'Jeffrey Winesett',
                'id'=>'15',
                'forumName'=>'jefftulsa',
                'memberSince'=>'Sep 2010',
                'location'=>'Austin, TX, USA',
                'duty'=>'documentation and marketing',
                'active'=>true,
            ),
            array(
                'name'=>'Jonah Turnquist',
                'id'=>'127',
                'forumName'=>'jonah',
                'memberSince'=>'Sep 2009 - Aug 2010',
                'location'=>'California, US',
                'duty'=>'component development',
                'active'=>false,
            ),
            array(
                'name'=>'István Beregszászi',
                'id'=>'1286',
                'forumName'=>'pestaa',
                'memberSince'=>'Sep 2009 - Mar 2010',
                'location'=>'Hungary',
                'duty'=>'core framework development',
                'active'=>false,
            ),
        );
        $dataProvider = new CArrayDataProvider($yiiDevelopers, array(
            'sort'=>array(
                'attributes'=>array('name', 'id', 'active'),
                'defaultOrder'=>array('active' => true, 'name' => false),
            ),
            'pagination'=>array(
                'pageSize'=>10,
            ),
        ));
        $this->render('index', array(
            'dataProvider' => $dataProvider,
        ));
    }

    public function actionSQL()
    {
        $count=Yii::app()->db->createCommand('SELECT COUNT(*) FROM film')->queryScalar();
        $sql='SELECT * FROM film';
        $dataProvider=new CSqlDataProvider($sql, array(
            'keyField'=>'film_id',
            'totalItemCount'=>$count,
            'sort'=>array(
                'attributes'=>array('title'),
                'defaultOrder'=>array('title' => false),
            ),
            'pagination'=>array(
                'pageSize'=>10,
            ),
        ));
        $this->render('index', array(
            'dataProvider' => $dataProvider,
        ));
    }
}

3. Now run the grid/aR, grid/array, and grid/sql actions and try using the grids.

How it works...

The view is pretty simple and stays the same for all data providers. We are calling the grid widget and passing the data provider instance to it. Let's review the actions one by one, starting with actionAR:

$dataProvider = new CActiveDataProvider('Film', array(
    'pagination'=>array(
        'pageSize'=>10,
    ),
    'sort'=>array(
        'defaultOrder'=>array('title'=>false),
    ),
));

CActiveDataProvider works with ActiveRecord models. The model class is passed as the first argument of the class constructor. The second argument is an array that defines the class's public properties. In the code above, we are setting pagination to 10 items per page and default sorting by title. Note that instead of using a string we are using an array where keys are column names and values are true or false: true means the order is DESC, while false means the order is ASC. Defining the default order this way allows Yii to render a triangle showing the sorting direction in the column header.

In actionArray we are using CArrayDataProvider, which can consume any array:

$dataProvider = new CArrayDataProvider($yiiDevelopers, array(
    'sort'=>array(
        'attributes'=>array('name', 'id', 'active'),
        'defaultOrder'=>array('active' => true, 'name' => false),
    ),
    'pagination'=>array(
        'pageSize'=>10,
    ),
));

The first argument accepts an associative array where keys are column names and values are corresponding values. The second argument accepts an array with the same options as in the CActiveDataProvider case.

In actionSQL, we are using CSqlDataProvider, which consumes an SQL query and modifies it automatically to allow pagination. The first argument accepts a string with SQL, and the second an array with data provider parameters. This time we need to supply the total count of records via totalItemCount manually, because this provider does not implement the calculateTotalItemCount method; for this purpose we execute an extra SQL query. We also need to define keyField, since the primary key of this table is not id but film_id.

To sum up, all data providers accept the following properties:

- pagination: a CPagination object, or an array of initial values for a new instance of CPagination.
- sort: a CSort object, or an array of initial values for a new instance of CSort.
- totalItemCount: we need to set this only if the provider, such as CSqlDataProvider, does not implement the calculateTotalItemCount method.

There's more...
You can use data providers without any special widgets. Replace protected/views/grid/ index.php content with the following: <?php foreach($dataProvider->data as $film):?> <?php echo $film->title?><?php endforeach?> <?php $this->widget('CLinkPager',array( 'pages'=>$dataProvider->pagination))?> Further reading To learn more about data providers refer to the following API pages:   http://www.yiiframework.com/doc/api/CDataProvider http://www.yiiframework.com/doc/api/CActiveDataProvider http://www.yiiframework.com/doc/api/CArrayDataProvider http://www.yiiframework.com/doc/api/CSqlDataProvider http://www.yiiframework.com/doc/api/CSort http://www.yiiframework.com/doc/api/CPagination Using grids Zii grids are very useful to quickly create efficient application admin pages or any pages you need to manage data on. Let's use Gii to generate a grid, see how it works, and how we can customize it. Getting ready Carry out the following steps: Create a new application using yiic webapp as described in the official guide. Download the Sakila database from http://dev.mysql.com/doc/sakila/en/sakila.html. Execute the downloaded SQLs: first schema then data. Configure the DB connection in protected/config/main.php. Use Gii to create models for customer, address, and city tables. How to do it... Open Gii, select Crud Generator, and enter Customer into the Model Class field. Press Preview and then Generate. Gii will generate a controller in protected/controllers/CustomerController.php and a group of views under protected/views/customer/. Run customer controller and go to Manage Customer link. After logging in you should see the grid generated: How it works... Let's start with the admin action of customer controller: public function actionAdmin(){ $model=new Customer('search'); $model->unsetAttributes(); // clear any default values if(isset($_GET['Customer'])) $model->attributes=$_GET['Customer']; $this->render('admin',array( 'model'=>$model, ));} Customer model is created with search scenario, all attribute values are cleaned up, and then filled up with data from $_GET. On the first request $_GET is empty but when you are changing the page, or filtering by first name attribute using the input field below the column name, the following GET parameters are passed to the same action via an AJAX request: Customer[address_id] =Customer[customer_id] =Customer[email] =Customer[first_name] = alexCustomer[last_name] =Customer[store_id] =Customer_page = 2ajax = customer-grid Since scenario is search, the corresponding validation rules from Customer::rules are applied. For the search scenario, Gii generates a safe rule that allows to mass assigning for all fields: array('customer_id, store_id, first_name, last_name, email, address_id, active, create_date, last_update', 'safe','on'=>'search'), Then model is passed to a view protected/views/customer/admin.php. It renders advanced search form and then passes the model to the grid widget: <?php $this->widget('zii.widgets.grid.CGridView', array( 'id'=>'customer-grid', 'dataProvider'=>$model->search(), 'filter'=>$model, 'columns'=>array( 'customer_id', 'store_id', 'first_name', 'last_name', 'email', 'address_id', /* 'active', 'create_date', 'last_update', */ array( 'class'=>'CButtonColumn', ), ),)); ?> Columns used in the grid are passed to columns. When just a name is passed, the corresponding field from data provider is used. Also we can use custom column represented by a class specified. 
In this case we are using CButtonColumn that renders view, update and delete buttons that are linked to the same named actions and are passing row ID to them so action can be done to a model representing specific row from database. filter property accepts a model filled with data. If it's set, a grid will display multiple text fields at the top that the user can fill to filter the grid. The dataProvider property takes an instance of data provider. In our case it's returned by the model's search method : public function search(){ // Warning: Please modify the following code to remove attributes that // should not be searched. $criteria=new CDbCriteria; $criteria->compare('customer_id',$this->customer_id); $criteria->compare('store_id',$this->store_id); $criteria->compare('first_name',$this->first_name,true); $criteria->compare('last_name',$this->last_name,true); $criteria->compare('email',$this->email,true); $criteria->compare('address_id',$this->address_id); $criteria->compare('active',$this->active); $criteria->compare('create_date',$this->create_date,true); $criteria->compare('last_update',$this->last_update,true); return new CActiveDataProvider(get_class($this), array( 'criteria'=>$criteria, ));} This method is called after the model was filled with $_GET data from the filtering fields so we can use field values to form the criteria for the data provider. In this case all numeric values are compared exactly while string values are compared using partial match.
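Since providers and widgets are interchangeable, the same provider returned by search() could just as well feed a list instead of a grid. The following is a minimal sketch of that idea, not part of the original recipe; the _customer item view is a hypothetical partial you would create yourself:

<?php $this->widget('zii.widgets.CListView', array(
    'dataProvider'=>$model->search(),
    // Each row is rendered by the (hypothetical) partial _customer.php,
    // which receives the current model as $data.
    'itemView'=>'_customer',
))?>

Inside protected/views/customer/_customer.php, a single line such as <?php echo CHtml::encode($data->first_name); ?> is enough to get started.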

Plone 4 Development: Creating a Custom Workflow

Packt
30 Aug 2011
7 min read
Professional Plone 4 Development: Build robust, content-centric web applications with Plone 4.

Keeping control with workflow

As we have alluded to before, managing permissions directly anywhere other than the site root is usually a bad idea. Every content object in a Plone site is subject to security, and will in most cases inherit permission settings from its parent. If we start making special settings in particular folders, we will quickly lose control. However, if settings are always acquired, how can we restrict access to particular folders, or prevent authors from editing published content whilst still giving them rights to work on items in a draft state? The answer to both of these problems is workflow.

Workflows are managed by the portal_workflow tool. This controls a mapping of content types to workflow definitions, and sets a default workflow for types not explicitly mapped. The workflow tool allows a workflow chain of multiple workflows to be assigned to a content type. Each workflow is given its own state variable, so multiple workflows can manage permissions concurrently. Plone's user interface does not explicitly support more than one workflow per type, but this can be used in combination with custom user interface elements to address complex security and workflow requirements.

The workflow definitions themselves are objects found inside the portal_workflow tool, under the Contents tab. Each definition consists of states, such as private or published, and transitions between them. Transitions can be protected by permissions or restricted to particular roles. Although it is fairly common to protect workflow transitions by role, this is not actually a very good use of the security system; it would be much more sensible to use an appropriate permission. The exception is when custom roles are used solely for the purpose of defining roles in a workflow.

Some transitions are automatic, which means that they will be invoked as soon as an object enters a state that has this transition as a possible exit (that is, provided the relevant guard conditions are met). More commonly, transitions are invoked following some user action, normally through the State drop-down menu in Plone's user interface. It is possible to execute code immediately before or after a transition is executed.

States may be used simply for information purposes. For example, it is useful to be able to mark a content object as "published" and be able to search for all published content. More commonly, states are also used to control content item security. When an object enters a particular state, either its initial state when it is first created, or a state that is the result of a workflow transition, the workflow tool can set a number of permissions according to a predefined permission map associated with the target state. The permissions that are managed by a particular workflow are listed under the Permissions tab on the workflow definition, and each state carries its own permission map for them.

If you change workflow security settings, your changes will not take effect immediately, since permissions are only modified upon workflow transitions. To synchronize permissions with the current workflow definitions, use the Update security settings button at the bottom of the Workflows tab of the portal_workflow tool. Note that this can take a long time on large sites, because it needs to find all content items using the old settings and update their permissions. If you use the Types control panel in Plone's Site Setup to change workflows, this reindexing happens automatically.

Workflows can also be used to manage role-to-group assignments in the same way they can be used to manage role-to-permission assignments. This feature is rarely used in Plone, however.

All workflows manage a number of workflow variables, whose values can change with transitions and be queried through the workflow tool. These are rarely changed, however, and Plone relies on a number of the default ones. These include the previous transition (action), the user ID of the person who performed that transition (actor), any associated comments (comments), the date/time of the last transition (time), and the full transition history (review_history).

Finally, workflows can define work lists, which are used by Plone's Review list portlet to show pending tasks for the current user. A work list in effect performs a catalog search using the workflow's state variable. In Plone, the state variable is always called review_state.

The workflow system is very powerful, and can be used to solve many kinds of problems where objects of the same type need to be in different states. Learning to use it effectively can pay off greatly in the long run.

Interacting with workflow in code

Interacting with workflow from our own code is usually straightforward. To get the workflow state of a particular object, we can do:

from Products.CMFCore.utils import getToolByName

wftool = getToolByName(context, 'portal_workflow')
review_state = wftool.getInfoFor(context, 'review_state')

However, if we are doing a search using the portal_catalog tool, each result it returns already has the review state as metadata:

from Products.CMFCore.utils import getToolByName

catalog = getToolByName(context, 'portal_catalog')
for result in catalog(dict(
        portal_type=('Document', 'News Item',),
        review_state=('published', 'public', 'visible',),
        )):
    review_state = result.review_state
    # do something with the review_state

To change the workflow state of an object, we can use the following line of code:

wftool.doActionFor(context, action='publish')

The action here is the name of a transition, which must be available to the current user from the current state of context. There is no (easy) way to directly specify the target state. This is by design: recall that transitions form the paths between states, and may involve additional security restrictions or the triggering of scripts. Again, the Doc tab for the portal_workflow tool and its sub-objects (the workflow definitions and their states and transitions) should be your first port of call if you need more detail. The workflow code can be found in Products.CMFCore.WorkflowTool and Products.DCWorkflow.

Installing a custom workflow

It is fairly common to create custom workflows when building a Plone website. Plone ships with several useful workflows, but security and approvals processes tend to differ from site to site, so we will often find ourselves creating our own workflows. Workflows are a form of customization, so we should ensure they are installable using GenericSetup. However, the workflow XML syntax is quite verbose, so it is often easier to start from the ZMI and export the workflow definition to the filesystem.

Designing a workflow for Optilux Cinemas

It is important to get the design of a workflow policy right, considering the different roles that need to interact with the objects, and the permissions they should have in the various states. Draft content should be visible to cinema staff, but not customers, and should go through review before being published. The diagram accompanying this section illustrates this workflow.

This workflow will be made the default, and should therefore apply to most content. However, we will keep the standard Plone policy of omitting workflow for the File and Image types. This means that permissions for content items of these types will be acquired from the Folder in which they are contained, making them simpler to manage. In particular, it is not necessary to separately publish linked files and embedded images when publishing a Page.

Because we need to distinguish between logged-in customers and staff members, we will introduce a new role called StaffMember. This role will be granted the View permission by default for all items in the site, much as the Manager and Site Administrator roles are in a default installation (although workflow may override this). We will let the Site Administrator role represent site administrators, and the Reviewer role represent content reviewers, as they do in a default Plone installation. We will also create a new group, Staff, which is given the StaffMember role. Among other things, this will allow us to easily grant the Reader, Editor, and Contributor roles in particular folders to all staff from the Sharing screen.

The preceding workflow is designed for content production and review. This is probably the most common use for workflow in Plone, but it is by no means the only use case. For example, the author once used workflows to control the payment status on an Invoice content type. As you become more proficient with the workflow engine, you will find it useful in a number of scenarios.
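As a small illustration of how such a policy might be exercised from code, the sketch below submits a draft item for review using only the APIs introduced above. The 'private' state and 'submit' transition names are assumptions based on Plone's stock workflows, not taken from the Optilux design:

from Products.CMFCore.utils import getToolByName

def submit_for_review(context):
    # Push a draft item into the review queue, assuming the current
    # workflow defines a 'submit' transition out of the 'private' state.
    wftool = getToolByName(context, 'portal_workflow')
    if wftool.getInfoFor(context, 'review_state') == 'private':
        wftool.doActionFor(context, action='submit')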

Plone 4 Development: Understanding Zope Security

Packt
30 Aug 2011
6 min read
Security primitives

Zope's security is declarative: views, actions, and attributes on content objects are declared to be protected by permissions. Zope takes care of verifying that the current user has the appropriate access rights for a resource. If not, an AccessControl.Unauthorized exception will be raised. This is caught by an error handler which will either redirect the user to a login screen or show an access denied error page.

Permissions are not granted to users directly. Instead, they are assigned to roles. Users can be given any number of roles, either site-wide or in the context of a particular folder, in which case they are referred to as local roles. Global and local roles can also be assigned to groups, in which case all users in that group will have the particular role. (In fact, Zope considers users and groups largely interchangeable, and refers to them more generally as principals.) This makes security settings much more flexible than if they were assigned to individual users.

Users and groups

Users and groups are kept in user folders, which are found in the ZMI with the name acl_users. There is one user folder at the root of the Zope instance, typically containing only the default Zope-wide administrator that is created by our development buildout the first time it is run. There is also an acl_users folder inside Plone, which manages Plone's users and groups.

Plone employs the Pluggable Authentication Service (PAS), a particularly flexible kind of user folder. In PAS, users, groups, their roles, their properties, and other security-related policy are constructed using various interchangeable plugins. For example, an LDAP plugin could allow users to authenticate against an LDAP repository. In day-to-day administration, users and groups are normally managed from Plone's Users and Groups control panel.

Permissions

Plone relies on a large number of permissions to control various aspects of its functionality. Permissions can be viewed from the Security tab in the ZMI, which lets us assign permissions to roles at a particular object. Note that most permissions are set to Acquire (the default), meaning that they cascade down from the parent folder. Role assignments are additive when permissions are set to acquire.

Sometimes it is appropriate to change permission settings at the root of the Plone site (which can be done using the rolemap.xml GenericSetup import step; more on that follows), but managing permissions from the Security tab anywhere else is almost never a good idea. Keeping track of which security settings are made where in a complex site can be a nightmare.

Permissions are the most granular piece of the security puzzle, and can be seen as a consequence of a user's roles in a particular context. Security-aware code should almost always check permissions, rather than roles, because roles can change depending on the current folder and security policy of the site, or even based on an external source such as an LDAP or Active Directory repository.

Permissions can be logically divided into three main categories:

Those that relate to basic content operations, such as View and Modify portal content. These are used by almost all content types, and defined as constants in the module Products.CMFCore.permissions. Core permissions are normally managed by workflow.

Those that control the creation of particular types of content, such as ATContentTypes: Add Image. These are usually set at the Plone site root to apply to the whole site, but they may be managed by workflow on folders.

Those that control site-wide policy. For example, the Portlets: Manage portlets permission is usually given to the Manager and Site Administrator roles, because this is typically an operation that only the site's administrator will need to perform. These permissions are usually set at the site root and acquired everywhere else. Occasionally, it may be appropriate to change them here. For example, the Add portal member permission controls whether anonymous users can add themselves (that is, "join" the site) or not. Note that there is a control panel setting for this, under Security in Site Setup.

Developers can create new permissions when necessary, although they are encouraged to reuse the ones in Products.CMFCore.permissions if possible. The most commonly used permissions are:

Access contents information (constant: AccessContentsInformation; Zope Toolkit name: zope2.AccessContentsInformation): low-level Zope permission controlling access to objects.

View (constant: View; Zope Toolkit name: zope2.View): access to the main view of a content object.

List folder contents (constant: ListFolderContents; Zope Toolkit name: cmf.ListFolderContents): ability to view folder listings.

Modify portal content (constant: ModifyPortalContent; Zope Toolkit name: cmf.ModifyPortalContent): edit operations on content.

Change portal events (no constant or Zope Toolkit name): modification of the Event content type (largely a historical accident).

Manage portal (constant: ManagePortal; Zope Toolkit name: cmf.ManagePortal): operations typically restricted to the Manager role.

Request review (constant: RequestReview; Zope Toolkit name: cmf.RequestReview): ability to submit content for review in many workflows.

Review portal content (constant: ReviewPortalContent; Zope Toolkit name: cmf.ReviewPortalContent): ability to approve or reject items submitted for review in many workflows.

Add portal content (constant: AddPortalContent; Zope Toolkit name: cmf.AddPortalContent): ability to add new content in a folder. Note that most content types have their own "add" permissions; in that case, both this permission and the type-specific permission are required.

The constants listed here are defined in Products.CMFCore.permissions. The Zope Toolkit names are the equivalent names found in ZCML files in packages such as Products.CMFCore, Products.Five, and (at least from Zope 2.13) AccessControl. They contain directives such as:

<permission id="zope2.View" title="View" />

This is how permissions are defined in the Zope Toolkit. Custom permissions can also be created in this way. Sometimes, we will use ZCML directives which expect a permission attribute, such as:

<browser:page
    name="some-view"
    class=".someview.SomeView"
    for="*"
    permission="zope2.View"
    />

The permission attribute here must be a Zope Toolkit permission ID. The title of the <permission /> directive is used to map Zope 2-style permissions (which are really just strings) to Zope Toolkit permission IDs. To declare that a particular view or other resource defined in ZCML should not be subject to security checks, we can use the special permission zope.Public.
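To make the advice about checking permissions rather than roles concrete, here is a minimal sketch (our own example, not from the book) using AccessControl's security manager:

from AccessControl import getSecurityManager
from Products.CMFCore.permissions import ModifyPortalContent

def can_edit(context):
    # Ask the security machinery whether the current user holds the
    # Modify portal content permission in this context. This respects
    # roles, local roles, and workflow-managed permission maps.
    sm = getSecurityManager()
    return bool(sm.checkPermission(ModifyPortalContent, context))

Because the check is expressed in terms of a permission, it keeps working no matter which roles or workflow states end up granting that permission.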
Professional Plone 4 Development: Developing a Site Strategy

Packt
26 Aug 2011
9 min read
Professional Plone 4 Development: Build robust, content-centric web applications with Plone 4.

(For more resources on Plone, see here.)

Creating a policy package

Our policy package is just a package that can be installed as a Plone add-on. We will use a GenericSetup extension profile in this package to turn a standard Plone installation into one that is configured to our client's needs. We could have used a full-site GenericSetup base profile instead, but by using an extension profile we avoid replicating the majority of the configuration that is done by Plone.

We will use ZopeSkel to create an initial skeleton for the package, which we will call optilux.policy, adopting the optilux.* namespace for all Optilux-specific packages. In your own code, you should of course use a different namespace. It is usually a good idea to base this on the owning organization's name, as we have done here. Note that package names should be all lowercase, without spaces, underscores, or other special characters.

If you intend to release your code into the Plone Collective, you can use the collective.* namespace, although other namespaces are allowed too. The plone.* namespace is reserved for packages in the core Plone repository, where the copyright has been transferred to the Plone Foundation. You should normally not use this without first coordinating with the Plone Framework Team.

We go into the src/ directory of the buildout and run the following command:

$ ../bin/zopeskel plone optilux.policy

This uses the plone ZopeSkel template to create a new package called optilux.policy. It will ask us a few questions. We will stick with "easy" mode for now, and answer True when asked whether to register a GenericSetup profile.

Note that ZopeSkel will download some packages used by its local command support. This may mean the initial bin/zopeskel command takes a little while to complete, and assumes that we are currently connected to the internet.

A local command is a feature of PasteScript, upon which ZopeSkel is built. ZopeSkel registers an addcontent command, which can be used to insert additional snippets of code, such as view registrations or new content types, into the initial skeleton generated by ZopeSkel. We will not use this feature, preferring instead to retain full control over the code we write and avoid the potential pitfalls of code generation. If you wish to use this feature, you will either need to install ZopeSkel and PasteScript into the global Python environment, or add PasteScript to the ${zopeskel:eggs} option in buildout.cfg, so that you get access to the bin/paster command. Run bin/zopeskel --help from the buildout root directory for more information about ZopeSkel and its options.

Distribution details

Let us now take a closer look at what ZopeSkel has generated for us, considering which files should be added to version control and which should be ignored.

setup.py (version control: yes): contains instructions for how Setuptools/Distribute (and thus Buildout) should manage the package's distribution. We will make a few modifications to this file later.

optilux.policy.egg-info/ (version control: yes): contains additional distribution configuration. In this case, ZopeSkel keeps track of which template was used to generate the initial skeleton using this file.

*.egg (version control: no): ZopeSkel downloads a few eggs that are used for its local command support (Paste, PasteScript, and PasteDeploy) into the distribution directory root. If you do not intend to use the local command support, you can delete these. You should not add these to version control.

README.txt (version control: yes): if you intend to release your package to the public, you should document it here. PyPI requires that this file be present in the root of a distribution. It is also read into the long_description variable in setup.py. PyPI will attempt to render this as reStructuredText markup (see http://docutils.sourceforge.net/rst.html).

docs/ (version control: yes): contains additional documentation, including the software license (which should be the GNU General Public License, version 2, for any packages that import directly from any of Plone's GPL-licensed packages) and a change log.

Changes to setup.py

Before we can progress, we will make a few modifications to setup.py. Our revised file looks similar to the following code:

from setuptools import setup, find_packages
import os

version = '2.0'

setup(name='optilux.policy',
      version=version,
      description="Policy package for the Optilux Cinemas project",
      long_description=open("README.txt").read() + "\n" +
                       open(os.path.join("docs", "HISTORY.txt")).read(),
      # Get more strings from
      # http://pypi.python.org/pypi?%3Aaction=list_classifiers
      classifiers=[
          "Framework :: Plone",
          "Programming Language :: Python",
      ],
      keywords='',
      author='Martin Aspeli',
      author_email='optilude@gmail.com',
      url='http://optilux-cinemas.com',
      license='GPL',
      packages=find_packages(exclude=['ez_setup']),
      namespace_packages=['optilux'],
      include_package_data=True,
      zip_safe=False,
      install_requires=[
          'setuptools',
          'Plone',
      ],
      extras_require={
          'test': ['plone.app.testing',]
      },
      entry_points="""
      # -*- Entry points: -*-
      [z3c.autoinclude.plugin]
      target = plone
      """,
      # setup_requires=["PasteScript"],
      # paster_plugins=["ZopeSkel"],
      )

The changes are as follows:

We have added an author name, e-mail address, and updated project URL. These are used as metadata if the distribution is ever uploaded to PyPI. For internal projects, they are less important.

We have declared an explicit dependency on the Plone distribution, that is, on Plone itself. This ensures that when our package is installed, so is Plone. We will shortly update our main working set to contain only the optilux.policy distribution; this dependency ensures that Plone is installed as part of our application policy.

We have then added a [test] extra, which adds a dependency on plone.app.testing. We will install this extra as part of the test working set below, making plone.app.testing available in the test runner (but not in the Zope runtime).

Finally, we have commented out the setup_requires and paster_plugins options. These are used to support ZopeSkel local commands, which we have decided not to use. The main reason to comment them out is to avoid having Buildout download these additional dependencies into the distribution root directory, saving time and reducing the number of files in the build. Also note that, unlike distributions downloaded by Buildout in general, there is no "offline" support for these options.

Changes to configure.zcml

We will also make a minor change to the generated configure.zcml file, removing the line:

<five:registerPackage package="." initialize=".initialize" />

This directive is used to register the package as an old-style Zope 2 product. The main reason to do this is to ensure that the initialize() function is called on Zope startup. This may be a useful hook, but most of the time it is superfluous, and requires additional test setup that can make tests more brittle.

We can also remove the (empty) initialize() function itself from the optilux/policy/__init__.py file, effectively leaving the file blank. Do not delete __init__.py, however, as it is needed to make this directory into a Python package.

Updating the buildout

Before we can use our new distribution, we need to add it to our development buildout. We will consider two scenarios:

The distribution is under version control in a repository module separate to the development buildout itself. This is the recommended approach.

The distribution is not under version control, or is kept inside the version control module of the buildout itself. The example source code that comes with this article is distributed as a simple archive, so it uses this approach.

Given the approach we have taken to separating out our buildout configuration into multiple files, we must first update packages.cfg to add the new package. Under the [sources] section, we could add:

[sources]
optilux.policy = svn https://some-svn-server/optilux.policy/trunk

Or, for distributions without a separate version control URL:

[sources]
optilux.policy = fs optilux.policy

We must also update the main and test working sets in the same file:

[eggs]
main =
    optilux.policy
test =
    optilux.policy [test]

Finally, we must tell Buildout to automatically add this distribution as a develop egg when running the development buildout. This is done near the top of buildout.cfg:

auto-checkout = optilux.policy

We must rerun buildout to let the changes take effect:

$ bin/buildout

We can test that the package is now available for import using the zopepy interpreter:

$ bin/zopepy
>>> from optilux import policy
>>>

The absence of an ImportError tells us that this package is now known to the Zope instance in the buildout. To be absolutely sure, you can also open the bin/instance script in a text editor (bin/instance-script.py on Windows) and look for a line in the sys.path mangling referencing the package.

Working sets and component configuration

It is worth deliberating a little more on how Plone and our new policy package are loaded and configured.

At build time: Buildout installs the [instance] part, which will generate the bin/instance script. The plone.recipe.zope2instance recipe calculates a working set from its eggs option, which in our buildout references ${eggs:main}. This contains exactly one distribution: optilux.policy. This in turn depends on the Plone distribution, which in turn causes Buildout to install all of Plone.

Here, we have made a policy decision to depend on a "big" Plone distribution that includes some optional add-ons. We could also have depended on the smaller Products.CMFPlone distribution (which works for Plone 4.0.2 onwards), which includes only the core of Plone, perhaps adding specific dependencies for the add-ons we are interested in.

When declaring actual dependencies used by distributions that contain reusable code instead of just policy, you should always depend on the packages you import from or otherwise depend on, and no more. That is, if you import from Products.CMFPlone, you should depend on this, and not on the Plone meta-egg (which itself contains no code, but only declares dependencies on other distributions, including Products.CMFPlone). To learn more about the rationale behind the Products.CMFPlone distribution, see http://dev.plone.org/plone/ticket/10877.

At runtime: The bin/instance script starts Zope. Zope loads the site.zcml file (parts/instance/etc/site.zcml) as part of its startup process. This automatically includes the ZCML configuration for packages in the Products.* namespace, including Products.CMFPlone, Plone's main package. Plone uses z3c.autoinclude to automatically load the ZCML configuration of packages that opt in using the z3c.autoinclude.plugin entry point with target = plone. The optilux.policy distribution contains such an entry point, so it will be configured, along with any packages or files it explicitly includes from its own configure.zcml file.
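Since the package registers a GenericSetup profile, its configure.zcml will also carry a profile registration. The following is a hedged sketch of what that registration might look like; the exact name, title, and description are assumptions, as ZopeSkel's generated values may differ:

<configure
    xmlns="http://namespaces.zope.org/zope"
    xmlns:genericsetup="http://namespaces.zope.org/genericsetup"
    i18n_domain="optilux.policy">

    <!-- Registers the extension profile so the package can be installed
         into a Plone site (values here are illustrative). -->
    <genericsetup:registerProfile
        name="default"
        title="Optilux site policy"
        directory="profiles/default"
        description="Turns a plain Plone site into the Optilux Cinemas site."
        provides="Products.GenericSetup.interfaces.EXTENSION"
        />

</configure>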

Magento 1.4 Theming: Making Our Theme Shine

Packt
23 Aug 2011
8 min read
Magento 1.4 Theming Cookbook: Over 40 recipes to create a fully functional, feature rich, customized Magento theme.

(For more resources on this subject, see here.)

Looks quite interesting, doesn't it? Let's get started!

Introduction

You will find all the necessary code in the code bundle; however, some JavaScript libraries won't be included. Don't worry, they are very easy to find and download!

Using Cufón to include any font we like in our theme

This technique, which allows us to use any font we like on our site, is quite common these days, and is very, very useful. Years ago, if we wanted to include some uncommon font in our design, we could only do it by using a substitute image. Nowadays we have many options, such as PHP extensions, @font-face, Flash (sIFR), and, of course, Cufón. That last option is the one we are about to use.

Getting ready

If you would like to copy the code instead of writing it, remember you can find all the necessary code in the code bundle in Chapter 5, recipe 1.

How to do it...

Ready to start this first recipe? Let's go:

First we have to decide which font to use; this is totally up to you. In this example I'm going to use this one: http://www.dafont.com/aldo.font. It's a free font, and looks very good! Thanks to Sacha Rein. Just download it or find any other font of your liking.

Now we need to use the Cufón generator in order to create the font package. We will go to this URL: http://cufon.shoqolate.com/generate/. On that screen we only have to follow all the instructions and click on the Let's do this button. This will create a file called Aldo_600.font.js, which is the one we will be using in this example.

We also need something more: the Cufón library itself. Download it by clicking the Download button; this gives us a file called cufon-yui.js. We now have all the necessary tools. It's time to make use of them.

First place cufon-yui.js and Aldo_600.font.js inside skin/frontend/default/rafael/js.

Now we need to edit our layout file. Open app/design/frontend/default/rafael/layout/page.xml and look for this piece of code:

<action method="addCss"><stylesheet>css/960_24_col.css</stylesheet></action>
<action method="addCss"><stylesheet>css/styles.css</stylesheet></action>

Just below it, add the following:

<action method="addJs"><script>../skin/frontend/default/rafael/js/cufon-yui.js</script></action>
<action method="addJs"><script>../skin/frontend/default/rafael/js/Aldo_600.font.js</script></action>

See, we are adding our two JavaScript files. The path is relative to Magento's root JavaScript folder. With this done, our theme will load these two JS files.

Now we are going to make use of them. Open app/design/frontend/default/rafael/template/page/2columns-right.phtml and add the following code at the bottom of the file:

<script type="text/javascript">
    Cufon.replace('h1');
</script>
</body>
</html>

And that's all we need. Check the difference! Every h1 tag of our site will now use our Aldo font, or any other font of your liking. The possibilities are endless!

How it works...

This recipe was quite easy, but full of possibilities. We first downloaded the necessary JavaScript files, and then made use of them with Magento's addJs layout tag. Later we were able to use the libraries in our template as in any other HTML file.

SlideDeck content slider

Sometimes our home page is crowded with info, and we still need to place more and more things. A great way of doing so is using sliders, and SlideDeck offers a very configurable one. In this recipe we are going to see how to add it to our Magento theme. You shouldn't miss this recipe!

How to do it...

Making use of the slider in our theme is quite simple. Let's see:

First we need to go to http://www.slidedeck.com/ and download SlideDeck Lite, as we will be using the Lite version in this recipe. Once downloaded we will end up with a file called slidedeck-1.2.1-litewordpress-1.3.6.zip. Unzip it.

Go to skin/frontend/default/rafael/css and place the following files in it:

slidedeck.skin.css
slidedeck.skin.ie.css
back.png
corner.png
slides.png
spines.png

Next, place the following files in skin/frontend/default/rafael/js:

jquery-1.3.2.min.js
slidedeck.jquery.lite.pack.js

Good, now everything is in place. Next, open app/design/frontend/default/rafael/layout/page.xml and look for the following piece of code:

<block type="page/html_head" name="head" as="head">
    <action method="addJs"><script>prototype/prototype.js</script></action>
    <action method="addJs" ifconfig="dev/js/deprecation"><script>prototype/deprecation.js</script></action>
    <action method="addJs"><script>lib/ccard.js</script></action>

Place the jQuery load tag just before the prototype one, so we have no conflict between the libraries:

<block type="page/html_head" name="head" as="head">
    <action method="addJs"><script>../skin/frontend/default/rafael/js/jquery-1.3.2.min.js</script></action>
    <action method="addJs"><script>prototype/prototype.js</script></action>
    <action method="addJs" ifconfig="dev/js/deprecation"><script>prototype/deprecation.js</script></action>

Also add the following code before the closing of the block:

<action method="addCss"><stylesheet>css/slidedeck.skin.css</stylesheet></action>
<action method="addCss"><stylesheet>css/slidedeck.skin.ie.css</stylesheet></action>
<action method="addJs"><script>../skin/frontend/default/rafael/js/slidedeck.jquery.lite.pack.js</script></action>

The following steps require us to log in to the administrator panel. Go to http://127.0.0.1/magento/index.php/admin, then to CMS | Pages, and open the home page. Once inside, go to the Content tab and click on Show/Hide editor. Find the following code:

<div class="grid_18">
    <a href="#"><img src="{{skin url='img_products/big_banner.jpg'}}" alt="Big banner" title="Big banner" /></a>
</div><!-- Big banner -->

And replace it with the following:

<div class="grid_18">
    <style type="text/css">
        #slidedeck_frame { width: 610px; height: 300px; }
    </style>
    <div id="slidedeck_frame" class="skin-slidedeck">
        <dl class="slidedeck">
            <dt>Slide 1</dt>
            <dd>Sample slide content</dd>
            <dt>Slide 2</dt>
            <dd>Sample slide content</dd>
        </dl>
    </div>
    <script type="text/javascript">
        jQuery(document).ready(function($) {
            $('.slidedeck').slidedeck();
        });
    </script>
    <!-- Help support SlideDeck! Place this noscript tag on your page when you deploy a SlideDeck to provide a link back! -->
    <noscript>
        <p>Powered By <a href="http://www.slidedeck.com" title="Visit SlideDeck.com">SlideDeck</a></p>
    </noscript>
</div><!-- Big banner -->

Save the page and reload the frontend. Our front page should now show a slider. And we are done! Remember you can place any content you want inside these tags:

<dl class="slidedeck">
    <dt>Slide 1</dt>
    <dd>Sample slide content</dd>
    <dt>Slide 2</dt>
    <dd>Sample slide content</dd>
</dl>

How it works...

This recipe is also quite easy. We only need to load the necessary JavaScript files and edit the content of the home page, in order to add the necessary code.
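Cufón, from the first recipe, is not limited to a single tag either. As a quick sketch (the extra selectors and the explicit fontFamily option are our own example, not part of the recipe), the call in 2columns-right.phtml could be extended like this:

<script type="text/javascript">
    // Replace further heading levels; fontFamily picks the generated
    // font explicitly, useful if several fonts have been loaded.
    Cufon.replace('h1', { fontFamily: 'Aldo' });
    Cufon.replace('h2', { fontFamily: 'Aldo' });
</script>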

Magento: Exploring Themes

Packt
16 Aug 2011
6 min read
Magento 1.4 Themes Design

Magento terminology

Before you look at Magento themes, it's beneficial to know the difference between what Magento calls interfaces and what Magento calls themes, and the distinguishing factors of websites and stores.

Magento websites and Magento stores

The terms websites and stores have a slightly different meaning in Magento than in general use and in other systems. For example, if your business is called M2, you might have three Magento stores (managed through the same installation of Magento) called:

Blue Store
Red Store
Yellow Store

In this case, Magento refers to M2 as the website, and the stores are Blue Store, Red Store, and Yellow Store. Each store then has one or more store views associated with it too. The simplest Magento website consists of a store and a store view (usually of the same name).

A slightly more complex Magento setup may have one store view for each of several stores. This is a useful technique if you want to manage more than one store in the same Magento installation, with each store selling different products (for example, the Blue Store sells blue products and the Yellow Store sells yellow products). A store might make use of more than one store view to present customers with a multilingual website; for example, our Blue Store may have English, French, and Japanese store views associated with it.

Magento interfaces

An interface consists of one or more Magento themes that determine how your stores look and function for your customers. Interfaces can be assigned at two levels in Magento: at the website level, and at the store view level.

If you assign an interface at the website level of your Magento installation, all stores associated with that website inherit the interface. For example, imagine your website is known as M2 in Magento and it contains three stores: Blue Store, Red Store, and Yellow Store. If you assign an interface at the website level (that is, M2), then Blue Store, Red Store, and Yellow Store all inherit this interface. If you assign the interface at the store view level of Magento, then each store view can retain a different interface.

Magento packages

A Magento package typically contains a base theme, which contains all of the templates and other files that Magento needs to run successfully, and a custom theme. Let's take a typical example of a Magento store, M2. This may have two packages:

The base package, whose base theme is in the app/design/frontend/base/ directory.

A custom package, which itself consists of two themes: the package's default theme in the app/design/frontend/our-custom-theme/default/ directory, which acts as a base theme within the package, and the custom theme itself (the non-default theme) in the app/design/frontend/our-custom-theme/custom-theme/ directory.

By default, Magento will look for a required file in the following order:

Custom theme directory: app/design/frontend/our-custom-theme/custom-theme/
Custom theme's default directory: app/design/frontend/our-custom-theme/default/
Base directory: app/design/frontend/base/

Magento themes

A Magento theme fits into the Magento hierarchy in a number of positions: it can act as an interface or as a store view. There's more to discover about Magento themes yet, though: there are two types of Magento theme, a base theme (called a default theme in Magento 1.3) and a non-default theme.

Base theme

A base theme provides all conceivable files that a Magento store requires to run without error, so that non-default themes built to customize a Magento store will not cause errors if a file does not exist within them. The base theme does not contain all of the CSS and images required to style your store, as you'll be doing this with your non-default theme.

Don't change the base package! It is important that you do not edit any files in the base package and that you do not attempt to create a custom theme in the base package, as this will make it difficult to upgrade Magento cleanly. Make sure any custom themes you are working on are within their own design package; for example, your theme's files should be located at app/design/frontend/your-package-name/default and skin/frontend/your-package-name/default.

Default themes

A default theme in Magento 1.4 changes aspects of your store but does not need to include every file required by Magento as a base theme does; it must just contain at least one file for at least one aspect of a theme (that is, locales, skins, templates, layout).

Default themes in Magento 1.3: In Magento 1.3, the default theme acted the way the base theme does in Magento 1.4, providing every file that your Magento store required to operate.

Non-default themes

A non-default theme changes aspects of a Magento store but does not need to include every file required by Magento as the base theme does; it must just contain at least one file for at least one aspect of a theme (that is, locales, skins, templates, layout). In this way, non-default themes are similar to a default theme in Magento. Non-default themes can be used to alter your Magento store for different seasonal events such as Christmas, Easter, Eid, Passover, and other religious festivals, as well as events in your industry's corporate calendar such as annual exhibitions and conferences.

Blocks in Magento

Magento uses blocks to differentiate between the various components of its functionality, with the idea that this makes it easier for Magento developers and Magento theme designers to customize the functionality and the look and feel of Magento respectively. There are two types of blocks in Magento: content blocks and structural blocks.

Content blocks

A content block displays the generated XHTML provided by Magento for any given feature. Content blocks are used within Magento structural blocks. Examples of content blocks in Magento include the following:

The search feature
Product listings
The mini cart
Category listings
Site navigation links
Callouts (advertising blocks)

Simply put, content blocks are the what of a Magento theme: they define what type of content appears within any given page or view within Magento.

Structural blocks

In Magento, a structural block exists only to maintain a visual hierarchy to a page. Typical structural blocks in a Magento theme include:

Header
Primary area
Left column
Right column
Footer
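To make the fallback order concrete, here is a hypothetical lookup; the package and theme names reuse the example directories above. When a page needs the template page/2columns-right.phtml, Magento checks these paths in order and uses the first one that exists:

app/design/frontend/our-custom-theme/custom-theme/template/page/2columns-right.phtml
app/design/frontend/our-custom-theme/default/template/page/2columns-right.phtml
app/design/frontend/base/default/template/page/2columns-right.phtml

The same fallback applies to skin files under skin/frontend/, which is why a theme only needs to contain the files it actually changes.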
Magento 1.4 Theming: Working with Existing Themes

Packt
16 Aug 2011
5 min read
Magento 1.4 Theming Cookbook: Over 40 recipes to create a fully functional, feature rich, customized Magento theme.

(For more resources on this subject, see here.)

Introduction

As we have just noted, this article is going to be more practical. We are going to see some basic things that will help us modify the appearance of our site, from looking for free templates to the process of installing them and making some tiny changes. I am sure this article will help us establish a good foundation; most of the things we are going to see are quite simple, so this is going to be a good introduction.

Installing a theme through Magento Connect

We are going to see two ways in which we can install a new Magento theme. The first one is through Magento Connect. This involves a number of steps that we are going to follow along this recipe.

Getting ready

First we need to go to Magento Connect: http://www.magentocommerce.com/magentoconnect. There, search for the theme Magento Classic Theme Free, or, alternatively, load this link: http://www.magentocommerce.com/magento-connect/TemplatesMaster/extension/928/magento-classic-theme.

How to do it...

This theme, the Magento Classic Theme, is the one we are going to use throughout this article. Follow these steps to install it:

First we need to log into our www.magentocommerce.com site account. This is a necessary step to get the extension key. The extension key can be found in a box, along with some other useful info, such as the compatible Magento versions and the price (free in this case). There is also a button that says Get Extension Key; click on it.

A box then appears in which we must accept the license agreement, select the Magento version we are using, and again click on Get Extension Key. On the last screen we obtain the extension key; copy it, as we are going to use it very soon.

For the next steps we need to go to our Magento installation's admin area. There, carry out the following instructions:

Go to the System menu, then Magento Connect, and finally Magento Connect Manager. The Magento Connect Manager screen will appear. Here we need to insert the same login info as we used in our Magento admin screen. In the next screen we need to insert the key we have just copied. When we click on the install button the installation process starts. It will only take a few seconds, and the theme will be installed into our site. Fast and easy!

How it works...

We have just selected a theme from the Magento site and installed it into our own Magento installation through the Magento Connect Manager. We can do this as many times as we want, and install as many themes as we need to.

Installing a theme manually

In this recipe we are going to see another method of installing themes into our Magento installation: installing a theme manually. Why is this important? Because we may want to use themes that are not available through Magento Connect, such as themes purchased elsewhere or downloaded from free theme sites.

Getting ready

In order to follow this method we need a downloaded theme. You can get the Magento Classic theme from this link: http://blog.templates-master.com/free-magento-classic-theme/. Once done, we will get a file called f002.classic_1.zip; just unzip it. Inside we will find several folders:

graphic source: This folder has all the PSD files necessary to help us modify the theme.
installation: This folder is where we can find the installation instructions.
template source 1.3.2: The theme for Magento version 1.3.2.
template source 1.4.0.1: The theme for Magento 1.4.0.1.
template source 1.4.1.0: The theme for Magento 1.4.1.0.

Though our Magento version is not 1.4.1.0, we can use the theme without problems. What can we find inside that folder? Another two folders: app and skin. This is the folder structure we should expect from a Magento theme.

How to do it...

Installing a theme is as easy as copying those two folders into our Magento installation, with all their subfolders and files. In our Magento installation you will see another app and skin folder, so the contents of the downloaded theme's folders will combine with the already existing ones. No overwriting should occur. And that's all; we don't need to do anything else.

Selecting the recently installed theme

Good, now we have installed one theme, one way or the other. So, how do we enable this theme? It's quite easy; we can achieve just that from our site's admin panel.

Getting ready

If you haven't logged into your Magento installation, do it now.

How to do it...

Once we are inside the admin panel of our site, we need to carry out the following steps to select our new theme:

Go to the System menu, and then click on Configuration. Look at the General block and select Design. There, in Current Package Name, we can leave the value as default. In the Themes section, in the Default field, we can enter the name of the theme, f002 for this example. We are done. Click on the Save config button and refresh the frontend. You will be able to see the changes.

How it works...

We are done. We have just selected the new theme. As promised, it was quite easy, wasn't it? Try it yourself!
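For the manual copy step, a shell session along these lines would do it; /path/to/magento is a placeholder for your actual Magento root, and the quotes are needed because the folder name contains spaces:

$ cd f002.classic_1/"template source 1.4.1.0"
$ cp -R app skin /path/to/magento/

Because the theme ships its files under its own theme directories, the copy only adds new files alongside Magento's existing app/ and skin/ trees rather than overwriting core files.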

Understanding and Developing Node Modules

Packt
11 Aug 2011
5 min read
Node Web Development: A practical introduction to Node, the exciting new server-side JavaScript web development stack.

What's a module?

Modules are the basic building block for constructing Node applications. We have already seen modules in action; every JavaScript file we use in Node is itself a module. It's time to see what they are and how they work. The following code, which pulls in the fs module, gives us access to its functions:

var fs = require('fs');

The require function searches for modules, and loads the module definition into the Node runtime, making its functions available. The fs object (in this case) contains the code (and data) exported by the fs module. Let's look at a brief example of this before we start diving into the details. Ponder over this module, simple.js:

var count = 0;
exports.next = function() { return count++; }

This defines an exported function and a local variable. Now let's use it:

$ node
> var s = require('./simple');
> s.next();
0
> s.next();
1
> s.next();
2

The object returned from require('./simple') is the same object, exports, we assigned a function to inside simple.js. Each call to s.next calls the function next in simple.js, which returns (and increments) the value of the count variable, explaining why s.next returns progressively bigger numbers. The rule is that anything (functions, objects) assigned as a field of exports is exported from the module, and objects inside the module but not assigned to exports are not visible to any code outside the module. This is an example of encapsulation. Now that we've got a taste of modules, let's take a deeper look.

Node modules

Node's module implementation is strongly inspired by, but not identical to, the CommonJS module specification. The differences between them might only be important if you need to share code between Node and other CommonJS systems. A quick scan of the Modules/1.1.1 spec indicates that the differences are minor, and for our purposes it's enough to just get on with the task of learning to use Node without dwelling too long on the differences.

How does Node resolve require('module')?

In Node, modules are stored in files, one module per file. There are several ways to specify module names, and several ways to organize the deployment of modules in the file system. It's quite flexible, especially when used with npm, the de-facto standard package manager for Node.

Module identifiers and path names

Generally speaking, the module name is a path name, but with the file extension removed. That is, when we write require('./simple'), Node knows to add .js to the file name and load in simple.js. Modules whose file names end in .js are of course expected to be written in JavaScript. Node also supports binary code native libraries as Node modules; in this case the file name extension to use is .node. It's outside our scope to discuss the implementation of a native code Node module, but this gives you enough knowledge to recognize them when you come across them.

Some Node modules are not files in the file system, but are baked into the Node executable. These are the Core modules, the ones documented on nodejs.org. Their original existence is as files in the Node source tree, but the build process compiles them into the binary Node executable.

There are three types of module identifiers: relative, absolute, and top-level. Relative module identifiers begin with "./" or "../", and absolute identifiers begin with "/". These follow POSIX file system semantics, with path names being relative to the file being executed. Absolute module identifiers are, obviously, relative to the root of the file system. Top-level module identifiers do not begin with ".", "..", or "/", and instead are simply the module name. These modules are stored in one of several directories, such as a node_modules directory, or those directories listed in the array require.paths, designated by Node to hold these modules.

Local modules within your application

The universe of all possible modules splits neatly into two kinds: those modules that are part of a specific application, and those that aren't. Hopefully the modules that aren't part of a specific application were written to serve a generalized purpose. Let's begin with the implementation of modules used within your application.

Typically your application will have a directory structure of module files sitting next to each other in the source control system, and then deployed to servers. These modules will know the relative path to their sibling modules within the application, and should use that knowledge to refer to each other using relative module identifiers.

For example, to help us understand this, let's look at the structure of an existing Node package, the Express web application framework. It includes several modules structured in a hierarchy that the Express developers found to be useful. You can imagine creating a similar hierarchy for applications reaching a certain level of complexity, subdividing the application into chunks larger than a module but smaller than an application. Unfortunately there isn't a word to describe this in Node, so we're left with a clumsy phrase like "subdivide into chunks larger than a module". Each subdivided chunk would be implemented as a directory with a few modules in it.

In this example, the most likely relative module reference is to utils.js. Depending on the source file which wants to use utils.js, it would use one of the following require statements:

var utils = require('./lib/utils');
var utils = require('./utils');
var utils = require('../utils');
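The following short sketch ties the three identifier types together; the file layout and names here are hypothetical, not taken from Express:

// Relative: resolved against the directory containing this file.
var utils = require('./lib/utils');        // loads ./lib/utils.js

// Absolute: resolved from the root of the file system (rarely useful).
var shared = require('/srv/myapp/shared'); // loads /srv/myapp/shared.js

// Top-level: searched for in node_modules (or require.paths), which is
// how third-party packages such as Express are loaded.
var express = require('express');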