
How-To Tutorials - Web Development


YUI Test

Packt
11 Mar 2013
YUI Test is one of the most popular JavaScript unit testing frameworks. Although YUI Test is part of the Yahoo! User Interface (YUI) JavaScript library (YUI is an open source JavaScript and CSS library that can be used to build Rich Internet Applications), it can be used to test any independent JavaScript code that does not use the YUI library. YUI Test provides a simple syntax for creating JavaScript test cases that can run either from the browser or from the command line; it also provides a clean mechanism for testing asynchronous (Ajax) JavaScript code. If you are familiar with the syntax of xUnit frameworks (such as JUnit), the YUI Test syntax will feel familiar.

In YUI Test, there are different ways to display test results. You can display them in the browser console, or develop custom test runner pages. Custom test runner pages are preferable because they work in every browser; some browsers do not support the console object. The console object is supported in Firefox with Firebug installed, Safari 3+, Internet Explorer 8+, and Chrome.

Before writing your first YUI test, you need to know the structure of a custom YUI test runner page. We will create a test runner page, BasicRunner.html, that will be the basis for all the test runner pages used in this article. To build it, first include the YUI JavaScript file yui-min.js, from the Yahoo! Content Delivery Network (CDN), in the BasicRunner.html file. At the time of this writing, the latest version of YUI Test is 3.6.0, which is the one used here.

After including the YUI JavaScript file, we need to create and configure a YUI instance using the YUI().use API. The YUI().use API takes the list of YUI modules to be loaded; for the purpose of testing, we need the 'test' and 'console' modules (the 'test' module is responsible for creating the tests, while the 'console' module is responsible for displaying the test results in a nifty console component). The YUI().use API also takes a callback function that is called asynchronously once the modules are loaded; the Y parameter of the callback represents the YUI instance. You write the tests inside this callback and then create a console component using the Y.Console object. The console object is rendered as a block element by setting the style attribute to 'block', and the results within the console can be displayed in the sequence of their execution by setting the newestOnTop attribute to false. Finally, the console component is rendered on the log div element. Now you can run the tests, and they will be displayed automatically by the YUI console component; without any developed tests, the runner page simply shows an empty console. A minimal version of the complete BasicRunner.html page, assembled from these steps, is sketched below.
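This is a sketch rather than the book's original listing: it assumes YUI 3.6.0 served from the Yahoo! CDN and uses a div with the ID log as the console's render target.

<!DOCTYPE html>
<html>
<head>
    <title>BasicRunner</title>
    <script src="http://yui.yahooapis.com/3.6.0/build/yui/yui-min.js"></script>
</head>
<body>
    <!-- The console component is rendered on this div -->
    <div id="log"></div>
    <script>
    YUI().use('test', 'console', function (Y) {
        // Tests would be defined here, before the console is created.

        // Display results in execution order, as a block element:
        var console = new Y.Console({
            style: 'block',
            newestOnTop: false
        });
        console.render('#log');
    });
    </script>
</body>
</html>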
Writing your first YUI test

A YUI test can contain test suites, test cases, and test functions. A YUI test suite is a group of related test cases. Each test case includes one or more test functions for the JavaScript code, and every test function should contain one or more assertions in order to perform the tests and verify the outputs. The YUI Test.Suite object is responsible for creating a YUI test suite, while the YUI Test.Case object creates a YUI test case. The add method of the Test.Suite object is used for attaching a test case object to the test suite.

As an example, consider two test cases. The first test case is named testcase1 and contains two test functions, testFunction1 and testFunction2; in YUI Test, you can create a test function simply by starting the function name with the word "test". The second test case is named testcase2 and contains a single test function, testAnotherFunction. A test suite is created with the name Testsuite, and finally testcase1 and testcase2 are added to it. YUI Test also gives you the option of creating a friendly name for a test function: a test case might contain tests with the names "The test should do X" and "The test should do Y". Both forms are sketched below.
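A sketch of that suite, with illustrative assertions filled in (the Y.Assert calls and tested values are placeholders, not the book's originals); it belongs inside the runner page's YUI().use callback:

var testcase1 = new Y.Test.Case({
    name: 'testcase1',
    testFunction1: function () {
        Y.Assert.areEqual(2, 1 + 1);   // placeholder assertion
    },
    testFunction2: function () {
        Y.Assert.isTrue(true);         // placeholder assertion
    }
});

var testcase2 = new Y.Test.Case({
    name: 'testcase2',
    testAnotherFunction: function () {
        Y.Assert.areSame('a', 'a');    // placeholder assertion
    }
});

// Friendly test names are plain string keys:
var friendlyCase = new Y.Test.Case({
    name: 'some Testcase',
    'The test should do X': function () {
        Y.Assert.isTrue(true);
    },
    'The test should do Y': function () {
        Y.Assert.isTrue(true);
    }
});

var suite = new Y.Test.Suite('Testsuite');
suite.add(testcase1);
suite.add(testcase2);

Y.Test.Runner.add(suite);
Y.Test.Runner.run();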
Summary

In this article, we learned the basics of the YUI Test framework, which is used for testing JavaScript applications, and walked through writing a YUI test using the Test.Suite object (responsible for creating a YUI test suite) and the Test.Case object (for creating a YUI test case).


A look into the high-level programming operations for the PHP language

Packt
11 Mar 2013
Accessing documentation

PhpStorm offers four different operations that help you access documentation: Quick Definition, Quick Documentation, Parameter Info, and External Documentation. The first one, Quick Definition, presents the definition of a given symbol; you can use it for a variable, function, method, or class. Quick Documentation allows easy access to DocBlocks; it can be used for all kinds of symbols: variables, functions, methods, and classes. The next operation, Parameter Info, presents simplified information about a function or method signature. Finally, External Documentation opens the official PHP documentation available at php.net.

Their shortcuts are as follows:

- Quick Definition: Ctrl + Shift + I (Mac: alt + Space bar)
- Quick Documentation: Ctrl + Q (Mac: F1)
- Parameter Info: Ctrl + P (Mac: command + P)
- External Documentation: Shift + F1 (Mac: shift + F1)

The Esc (Mac: shift + esc) hotkey closes any of the previous windows.

If you place the cursor inside the parentheses of $s = str_replace(); and run Parameter Info (Ctrl + P, Mac: command + P), you will get a hint showing all the parameters of the str_replace() function. If that is not enough, place the cursor inside the str_replace function name and press Shift + F1 (Mac: shift + F1); you will get the manual page for the function.

If you want to test the next operation, open the project created in the Quick start – your first PHP application section and place the cursor inside the class name Controller in the src/My/HelloBundle/Controller/DefaultController.php file. The place where you should put the cursor is denoted with a bar (|) in the following code:

class DefaultController extends Cont|roller
{
}

The Quick Definition operation will show you the class definition, and the Quick Documentation operation will show you the documentation defined with PhpDoc blocks. PhpDoc is a formal standard for commenting PHP code; the official documentation is available at http://www.phpdoc.org.

Generators

PhpStorm enables you to do the following:

- Implement magic methods
- Override inherited methods
- Generate constructors, getters, setters, and docblocks

All of these operations are available under Code | Generate (Alt + Insert, Mac: command + N). Perform the following steps:

1. Create a new class Person and place the cursor at the position marked |:

class Person
{
    |
}

The Generate dialog box lists the available operations, and the Implement Methods dialog box lists all available magic methods.

2. Create a class with two private properties:

class Lorem
{
    private $ipsum;
    private $dolor;
}

3. Go to Code | Generate | Getters and Setters, select both properties in the dialog box, and press OK. PhpStorm will generate the following methods:

class Lorem
{
    private $ipsum;
    private $dolor;

    public function setDolor($dolor)
    {
        $this->dolor = $dolor;
    }

    public function getDolor()
    {
        return $this->dolor;
    }

    public function setIpsum($ipsum)
    {
        $this->ipsum = $ipsum;
    }

    public function getIpsum()
    {
        return $this->ipsum;
    }
}

4. Next, go to Code | Generate | DocBlocks and in the dialog box select all the properties and methods. PhpStorm will generate docblocks for each of the selected properties and methods, for example:

/**
 * @param $dolor
 */
public function setDolor($dolor)
{
    $this->dolor = $dolor;
}

Summary

We just learned that in some cases you don't have to type the code at all, as it can be generated automatically.
The generators discussed in this article lift the burden of typing setters, getters, and magic methods from your shoulders. We also looked at the different ways to access documentation.


Modules

Packt
08 Mar 2013
We are going to master modules, which help us break our application up into sensible components. Our configuration file for modules is as follows:

var guidebookConfig = function($routeProvider) {
  $routeProvider
    .when('/', {
      controller: 'ChaptersController',
      templateUrl: 'view/chapters.html'
    })
    .when('/chapter/:chapterId', {
      controller: 'ChaptersController',
      templateUrl: 'view/chapters.html'
    })
    .when('/addNote/:chapterId', {
      controller: 'AddNoteController',
      templateUrl: 'view/addNote.html'
    })
    .when('/deleteNote/:chapterId/:noteId', {
      controller: 'DeleteNoteController',
      templateUrl: 'view/addNote.html'
    });
};

var Guidebook = angular.module('Guidebook', []).config(guidebookConfig);

The last line is where we define our Guidebook module. This creates a namespace for our application. Look at how we defined the add and delete note controllers:

Guidebook.controller('AddNoteController',
  function ($scope, $location, $routeParams, NoteModel) {
    var chapterId = $routeParams.chapterId;
    $scope.cancel = function() {
      $location.path('/chapter/' + chapterId);
    }
    $scope.createNote = function() {
      NoteModel.addNote(chapterId, $scope.note.content);
      $location.path('/chapter/' + chapterId);
    }
  }
);

Guidebook.controller('DeleteNoteController',
  function ($scope, $location, $routeParams, NoteModel) {
    var chapterId = $routeParams.chapterId;
    NoteModel.deleteNote(chapterId, $routeParams.noteId);
    $location.path('/chapter/' + chapterId);
  }
);

Since Guidebook is a module, we're really making a call to module.controller() to define our controller. We also called Guidebook.service(), which is really just module.service(), to define our models.

One advantage of defining controllers, models, and other components as part of a module is that it groups them together under a common name. This way, if we have multiple applications on the same page, or content from another framework, it's easy to remember which components belong to which application.

The second advantage of defining components as part of a module is that AngularJS will do some heavy lifting for us. Take our NoteModel, for instance. We defined it as a service by calling module.service(), and AngularJS provides some extra functionality to services within a module. One example of that extra functionality is dependency injection. This is why, in our DeleteNoteController above, we can include our note model simply by asking for it in the controller's function. This only works because we registered both DeleteNoteController and NoteModel as parts of the same module.

Let's talk a little more about NoteModel. When we define controllers on our module, we call module.controller(); for our models, we called module.service(). Why isn't there a module.model()? It turns out models are a good bit more complicated than controllers. In our Guidebook application, our models were relatively simple, but what if we were mapping our models to some external API, or a full-fledged relational database? Because there are so many different types of models with vastly different levels of complexity, AngularJS provides three ways to define a model: as a service, as a factory, and as a provider.
We'll now see how we define a service in our NoteModel:

Guidebook.service('NoteModel', function() {
  this.getNotesForChapter = function(chapterId) { ... };
  this.addNote = function(chapterId, noteContent) { ... };
  this.deleteNote = function(chapterId, noteId) { ... };
});

It turns out this is the simplest type of model: we're simply defining an object with a number of functions that our controllers can call. This is all we needed in our case, but what if we were doing something a little more complicated? Let's pretend that instead of using HTML5 local storage to store and retrieve our note data, we're getting it from a web server somewhere. We'll need to add some logic to set up and manage the connection with that server.

The service definition can help us a little with the setup logic; we're returning a function, so we could easily add some initialization logic. When we get into managing the connection, though, things get a little messy. We'll probably want a way to refresh the connection in case it drops. Where would that go? With our service definition, our only option is to add it as a function, like we have for getNotesForChapter(). This is really hacky: we're now exposing how the model is implemented to the controller and, worse, allowing the controller to refresh the connection. In this case, we would be better off using a factory. Here's what our NoteModel would look like if we defined it as a factory:

Guidebook.factory('NoteModel', function() {
  return {
    getNotesForChapter: function(chapterId) { ... },
    addNote: function(chapterId, noteContent) { ... },
    deleteNote: function(chapterId, noteId) { ... }
  };
});

Now we can add more complex initialization logic to our model. We could define a few functions for managing the connection privately in our factory initialization, and give our data methods references to them. The following is what that might look like:

Guidebook.factory('NoteModel', function() {
  var refreshConnection = function() { ... };
  return {
    getNotesForChapter: function(chapterId) { ... refreshConnection(); ... },
    addNote: function(chapterId, noteContent) { ... refreshConnection(); ... },
    deleteNote: function(chapterId, noteId) { ... refreshConnection(); ... }
  };
});

Isn't that so much cleaner? We've once again kept our model's internal workings completely separate from our controller.

Let's get even crazier. What if we backed our models with a complex database server with multiple endpoints? How could we configure which endpoint to use? With a service or a factory, we could initialize this in the model. But what if we have a whole bunch of models? Do we really want to hardcode that logic into each one individually? At this point, the model shouldn't be making this decision; we want to choose our endpoints in a configuration file somewhere. Reading from a configuration file in either a service or a factory would be awkward. Models should focus solely on storing and retrieving data, not messing around with configuration updates. Provider to the rescue! If we define our NoteModel as a provider, we can do all kinds of configuration from a neatly abstracted configuration file.
Here's how our NoteModel looks if we convert it into a provider:

Guidebook.provider('NoteModel', function() {
  this.endpoint = 'defaultEndpoint';

  this.setEndpoint = function(newEndpoint) {
    this.endpoint = newEndpoint;
  };

  this.$get = function() {
    var endpoint = this.endpoint;
    var refreshConnection = function() {
      // reconnect to endpoint
    };
    return {
      getNotesForChapter: function(chapterId) { ... refreshConnection(); ... },
      addNote: function(chapterId, noteContent) { ... refreshConnection(); ... },
      deleteNote: function(chapterId, noteId) { ... refreshConnection(); ... }
    };
  };
});

Now if we add a configuration file to our application, we can configure the endpoint from outside the model:

Guidebook.config(function(NoteModelProvider) {
  NoteModelProvider.setEndpoint('anEndpoint');
});

Providers give us a very powerful architecture. If we have several models and several endpoints, we can configure all of them in one single configuration file. This is ideal, as our models remain single-purpose and our controllers still have no knowledge of the internals of the models they depend on. A sketch of how the Guidebook module attaches to a page follows.
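For completeness, here is a hedged sketch of how the Guidebook module might be wired into a host page; the script file names are illustrative, and ng-view assumes the routing setup shown in the configuration above:

<!DOCTYPE html>
<html ng-app="Guidebook">
<head>
    <!-- file names are illustrative -->
    <script src="angular.js"></script>
    <script src="guidebook.js"></script>
</head>
<body>
    <!-- the matched route's templateUrl is rendered here -->
    <div ng-view></div>
</body>
</html>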
Summary

Modules provide an easy, flexible way to create components in AngularJS. We've seen three different ways to define a model, and we've discussed the benefits of using a module to create an application namespace.


Planning your lessons using iPad

Packt
04 Mar 2013
Getting ready

We begin by downloading the Planbook app from the App Store. It costs $9.99 but is definitely worth the money. You may also like to download its Mac app from the Mac App Store if you wish to keep your iPad and Mac in sync.

How to do it...

The home screen of the app shows the available Planbooks, but on your first launch of the app there will be none. To start creating lesson plans, you need to choose the option Create New Planbook or import one from your Dropbox folder. Planbook provides excellent customization features to suit your exact needs. You can add as many courses as you want by clicking on Add Courses, specify a date range in the "Dates you want included in your Planbook" textbox, and even select the type of schedule you use.

Now that you have specified the major features of your plan, you need to click on Continue to specify the time range of each course of a day. Planbook provides standard iOS date pickers to choose start and end times for each course. After putting in all the details for a lesson plan, click on Continue, which will take you back to the home screen, where you can now see the Planbook you just made in the list of available files. If you wish, you can edit the name of your Planbook, delete it, or e-mail it. As we want to move forward to create your detailed plan, you should select Open Planbook File for your plan.

You should now see your lesson plan for one entire week. Each course is represented by a different color for each day, and its contents are shown in the very same place, minimizing unnecessary navigation. As you have created the complete outline of your lesson plan, it is time to specify the content of each course. It's easy: just tap on the course you want to create content for, and a screen with many user-friendly customization features will appear. Here you can enter your course content divided into six different text fields in the Text Fields column at the right-hand side, with even the title of each field being editable. You can specify assignments in the Assignments textbox for the course, use attachments via the Files and Web Links option related to the course content, and even add keywords in Keywords, which can help with quick lookup.

If you wish to see your lesson plan on the basis of a single day, simply click on the Day tab at the bottom and you will see a nice, clean day plan. Tapping on a course still works to edit it, and you can tap on a course with two fingers to view it in detail. You will find this small feature very useful when you need a quick look at the course content while in class.

One of the reasons I was so impressed with this app is the set of powerful editing and navigation options it provides. The gear icon at the bottom-right corner of each course pops up a box of tools. You can use this box to:

- Bump Lesson: Shifts the course further by one day. The course content for the selected day becomes blank.
- Yank Lesson: Removes the course for the selected day. The entire course shifts back by one day.
- Make Non-School Day: Either shifts all the courses of the day further by one day, or deletes the content of all courses of the day, depending on the user's choice.
You have a Courses button in the top-right corner, which lets you show or hide courses in the plan. You also have a Share button beside the Courses button that lets you print or e-mail the Planbook, in a day-wise or week-wise arrangement depending on the active tab. To navigate between weeks or days, swipe from left to right or from right to left to move to the previous or next week/day respectively. You can also click on the Week or Day options at the bottom of the screen to navigate to that particular week or day.

How it works...

We have now created a Planbook as per your requirements, and you can create multiple Planbooks in the same way. The Planbook app saves each Planbook as a file on the device that you can retrieve, modify, and delete at any later stage, even when you are offline.

There's more...

The iPad App Store has many other apps apart from Planbook (which we chose for its simplicity and powerful features) that facilitate planning lessons. There is an app called My LessonPlan, which costs $2.99 and is particularly advantageous in classes where students also have access to personal devices. iPlanLessons, costing $6.99, is another powerful app and can be considered an alternative to Planbook.

Summary

In this article we saw how the Planbook app makes it very easy to create, modify, and share plans for daily planning purposes. We also saw that it is an easy-to-use app for a beginner who is not yet comfortable and proficient with iPad apps.


Core .NET Recipes

Packt
26 Feb 2013
Implementing the validation logic using the Repository pattern

The Repository pattern abstracts out data-based validation logic. It is a common misconception that implementing the Repository pattern requires a relational database such as MS SQL Server as the backend; any collection can be treated as a backend for a Repository pattern. The only point to keep in mind is that the business logic or validation logic must treat it as a database for saving, retrieving, and validating its data. In this recipe, we will see how to use a generic collection as the backend and abstract out the validation logic for it. The validation logic makes use of an entity that represents the data related to the user, and a class that acts as the repository for the data, allowing certain operations. In this case, the operation will include checking whether the user ID chosen by the user is unique or not.

How to do it...

The following steps will help check the uniqueness of a user ID chosen by the user:

1. Launch Visual Studio .NET 2012.
2. Create a new project of the Class Library project type. Name it CookBook.Recipes.Core.CustomValidation.
3. Add a folder to the project and set the folder name to DataModel.
4. Add a new class and name it User.cs. Open the User class and, using the automatic property functionality of .NET, create the following public properties:

   - UserName (string)
   - DateOfBirth (DateTime)
   - Password (string)

   The final code of the User class will be as follows:

   namespace CookBook.Recipes.Core.CustomValidation
   {
       /// <summary>
       /// Contains details of the user being registered
       /// </summary>
       public class User
       {
           public string UserName { get; set; }
           public DateTime DateOfBirth { get; set; }
           public string Password { get; set; }
       }
   }

5. Next, let us create the repository. Add a new folder and name it Repository.
6. Add an interface to the Repository folder, name it IRepository.cs, and add the following methods:

   - AddUser: adds a new user; takes a User object and returns void.
   - IsUsernameUnique: determines whether the username is already taken or not; takes a string and returns a bool.

   After adding the methods, IRepository will look like the following code:

   namespace CookBook.Recipes.Core.CustomValidation
   {
       public interface IRepository
       {
           void AddUser(User user);
           bool IsUsernameUnique(string userName);
       }
   }

7. Next, let us implement IRepository. Create a new class in the Repository folder, name it MockRepository, and make it implement IRepository:

   namespace CookBook.Recipes.Core.Data.Repository
   {
       public class MockRepository : IRepository
       {
           #region IRepository Members

           /// <summary>
           /// Adds a new user to the collection
           /// </summary>
           public void AddUser(User user)
           {
           }

           /// <summary>
           /// Checks whether a user with the username already exists
           /// </summary>
           public bool IsUsernameUnique(string userName)
           {
           }

           #endregion
       }
   }

8. Now, add a private static variable of type List<User> to the MockRepository class and name it _users. It will hold the registered users; it is static so that it can hold usernames across multiple instantiations. Add a constructor to the class.
Then initialize the _users list and add two users to it:

public class MockRepository : IRepository
{
    #region Variables
    private static List<User> _users;
    #endregion

    public MockRepository()
    {
        _users = new List<User>();
        _users.Add(new User()
        {
            UserName = "wayne27",
            DateOfBirth = new DateTime(1950, 9, 27),
            Password = "knight"
        });
        _users.Add(new User()
        {
            // The username of this second entry was lost in the original
            // listing; any unique value works here.
            UserName = "user2",
            DateOfBirth = new DateTime(1955, 9, 27),
            Password = "justice"
        });
    }

    // AddUser and IsUsernameUnique as before
}

Now let us add the code to check whether the username is unique. Add the following statements to the IsUsernameUnique method:

bool exists = _users.Exists(u => u.UserName == userName);
return !exists;

The method turns out to be as follows:

public bool IsUsernameUnique(string userName)
{
    bool exists = _users.Exists(u => u.UserName == userName);
    return !exists;
}

Modify the AddUser method so that it looks as follows:

public void AddUser(User user)
{
    _users.Add(user);
}

How it works...

The core of the validation logic lies in the IsUsernameUnique method of the MockRepository class. The reason to place the logic in a different class, rather than in the attribute itself, is to decouple the attribute from the validation logic. It is also an attempt to make it future-proof: if tomorrow we want to test the username against a list generated from an XML file, we don't have to modify the attribute; we just change how IsUsernameUnique works, and the change is reflected in the attribute. Also, creating a Plain Old CLR Object (POCO) to hold values entered by the user stops the validation logic from directly accessing the source of input, that is, the Windows application.

Coming back to the IsUsernameUnique method, it makes use of the predicate feature provided by .NET. A predicate lets us loop over a collection and find a particular item; it can be a static function, a delegate, or a lambda. In our case it is a lambda:

bool exists = _users.Exists(u => u.UserName == userName);

In the previous statement, .NET loops over _users and passes the current item to u. We then use the item held by u to check whether its username is equal to the username entered by the user. The Exists method returns true if the username is already present. However, we want to know whether the username is unique, so we flip the value returned by Exists in the return statement:

return !exists;

Creating a custom validation attribute by extending the validation data annotation

.NET provides data annotations as part of the System.ComponentModel.DataAnnotations namespace. Data annotations are a set of attributes that provide out-of-the-box validation, among other things. However, sometimes none of the built-in validations will suit your specific requirements; in such a scenario, you will have to create your own validation attribute. This recipe will show you how to do that by extending the validation attribute. The attribute developed will check whether the supplied username is unique or not; we will make use of the validation logic implemented in the previous recipe to create a custom validation attribute named UniqueUserValidator.

How to do it...
The following steps will help you create a custom validation attribute to meet your specific requirements:

1. Launch Visual Studio 2012 and open the CustomValidation solution.
2. Add a reference to System.ComponentModel.DataAnnotations.
3. Add a new class to the project, name it UniqueUserValidator, and add the following using statements:

using System.ComponentModel.DataAnnotations;
using CookBook.Recipes.Core.CustomValidation.MessageRepository;
using CookBook.Recipes.Core.Data.Repository;

4. Derive the class from ValidationAttribute, which is a part of the System.ComponentModel.DataAnnotations namespace:

namespace CookBook.Recipes.Core.CustomValidation
{
    public class UniqueUserValidator : ValidationAttribute
    {
    }
}

5. Add a property of type IRepository to the class and name it Repository. Add a constructor and initialize Repository to an instance of MockRepository:

namespace CookBook.Recipes.Core.CustomValidation
{
    public class UniqueUserValidator : ValidationAttribute
    {
        public IRepository Repository { get; set; }

        public UniqueUserValidator()
        {
            this.Repository = new MockRepository();
        }
    }
}

6. Override the IsValid method of ValidationAttribute. Convert the object argument to a string, call the IsUsernameUnique method of IRepository with the string value as a parameter, and return the result:

namespace CookBook.Recipes.Core.CustomValidation
{
    public class UniqueUserValidator : ValidationAttribute
    {
        public IRepository Repository { get; set; }

        public UniqueUserValidator()
        {
            this.Repository = new MockRepository();
        }

        public override bool IsValid(object value)
        {
            string valueToTest = Convert.ToString(value);
            return this.Repository.IsUsernameUnique(valueToTest);
        }
    }
}

We have completed the implementation of our custom validation attribute. Now let's test it out.

7. Add a new Windows Forms Application project to the solution and name it CustomValidationApp. Add references to System.ComponentModel.DataAnnotations and the CustomValidation project.
8. Rename Form1.cs to Register.cs and open Register.cs in the design mode. Add controls for username, date of birth, and password, plus two buttons. Name the input control and button as follows:

   - Textbox: txtUsername
   - Button: btnOK

   Since we are validating the User Name field, our main concern is the textbox for the username and the OK button; the names of the other controls are left out for brevity.

9. Switch to the code view mode. In the constructor, add an event handler for the Click event of btnOK:

public Register()
{
    InitializeComponent();
    this.btnOK.Click += new EventHandler(btnOK_Click);
}

void btnOK_Click(object sender, EventArgs e)
{
}

10. Open the User class of the CookBook.Recipes.Core.CustomValidation project and annotate the UserName property with UniqueUserValidator. After modification, the User class will be as follows:

namespace CookBook.Recipes.Core.CustomValidation
{
    /// <summary>
    /// Contains details of the user being registered
    /// </summary>
    public class User
    {
        [UniqueUserValidator(ErrorMessage = "User name already exists")]
        public string UserName { get; set; }
        public DateTime DateOfBirth { get; set; }
        public string Password { get; set; }
    }
}

11. Go back to Register.cs in the code view mode.
Add the following using statements:

using System.ComponentModel;
using System.ComponentModel.DataAnnotations;
using CookBook.Recipes.Core.CustomValidation;
using CookBook.Recipes.Core.Data.Repository;

Add the following code to the event handler of btnOK:

//create a new user
User user = new User()
{
    UserName = txtUsername.Text,
    DateOfBirth = dtpDob.Value
};

//create a validation context for the user instance
ValidationContext context = new ValidationContext(user, null, null);

//holds the validation errors
IList<ValidationResult> errors = new List<ValidationResult>();

if (!Validator.TryValidateObject(user, context, errors, true))
{
    foreach (ValidationResult result in errors)
        MessageBox.Show(result.ErrorMessage);
}
else
{
    IRepository repository = new MockRepository();
    repository.AddUser(user);
    MessageBox.Show("New user added");
}

Hit F5 to run the application. In the textbox, add a username, say, dreamwatcher, and click on OK. You will get a message box stating New user added. Enter the same username again and hit the OK button; this time you will get a message saying User name already exists. This means our attribute is working as desired. Go to File | Save Solution As…, enter CustomValidation for Name, and click on OK. We will be making use of this solution in the next recipe.

How it works...

To understand how UniqueUserValidator works, we have to understand how it is implemented and how it is used. Let's start with how it is implemented. It extends ValidationAttribute, the base class for all the validation-related attributes provided by data annotations:

public class UniqueUserValidator : ValidationAttribute

This allows us to make use of the public and protected methods and attribute arguments of ValidationAttribute as if they were part of our attribute. Next, we have a property of type IRepository, which gets initialized to an instance of MockRepository. We have used the interface-based approach so that the attribute will only need a minor change if we decide to test the username against a database table or a list generated from a file. In such a scenario, we would just change the following statement:

this.Repository = new MockRepository();

to something such as:

this.Repository = new DBRepository();

Next, we overrode the IsValid method. This is the method that gets called when we use UniqueUserValidator. The parameter of the IsValid method is an object, so we have to typecast it to string and call the IsUsernameUnique method of the Repository property. That is what the following statements accomplish:

string valueToTest = Convert.ToString(value);
return this.Repository.IsUsernameUnique(valueToTest);

Now let us see how we used the validator. We did it by decorating the UserName property of the User class:

[UniqueUserValidator(ErrorMessage = "User name already exists")]
public string UserName { get; set; }

As already mentioned, deriving from ValidationAttribute lets us use its properties as well; that's why we can use ErrorMessage even though we have not implemented it. Next, we have to tell .NET to use the attribute to validate the username that has been set.
That is done by the following statements in the OK button's Click handler in the Register class:

ValidationContext context = new ValidationContext(user, null, null);
//holds the validation errors
IList<ValidationResult> errors = new List<ValidationResult>();
if (!Validator.TryValidateObject(user, context, errors, true))

First, we instantiate an object of ValidationContext. Its main purpose is to set up the context in which validation will be performed; in our case the context is the User object. Next, we call the TryValidateObject method of the Validator class with the User object, the ValidationContext object, and a list to hold the errors. We also tell the method that we need to validate all properties of the User object by setting the last argument to true. That's how we invoke the validation routine provided by .NET.

Using XML to generate a localized validation message

In the last recipe you saw that we can pass error messages to be displayed to the validation attribute. However, by default, the attributes accept only a message in the English language. To display a localized custom message, it needs to be fetched from an external source such as an XML file or a database. In this recipe, we will see how to use an XML file as a backend for localized messages.

How to do it...

The following steps will help you generate a localized validation message using XML:

1. Open CustomValidation.sln in Visual Studio .NET 2012.
2. Add an XML file to the CookBook.Recipes.Core.CustomValidation project. Name it Messages.xml. In the Properties window, set Build Action to Embedded Resource.
3. Add the following to the Messages.xml file:

<?xml version="1.0" encoding="utf-8" ?>
<messages>
  <en>
    <message key="not_unique_user">User name is not unique</message>
  </en>
  <fr>
    <message key="not_unique_user">Nom d'utilisateur n'est pas unique</message>
  </fr>
</messages>

4. Add a folder to the CookBook.Recipes.Core.CustomValidation project. Name it MessageRepository.
5. Add an interface to the MessageRepository folder and name it IMessageRepository. Add a method named GetMessages that returns IDictionary<string, string> and accepts a string parameter:

namespace CookBook.Recipes.Core.CustomValidation.MessageRepository
{
    public interface IMessageRepository
    {
        IDictionary<string, string> GetMessages(string locale);
    }
}

6. Add a class to the MessageRepository folder and name it XmlMessageRepository. Add the following using statements (System.Reflection is needed for the Assembly class used below):

using System.Xml;
using System.Reflection;

7. Implement the IMessageRepository interface. The class will look like the following code once we implement the interface:

namespace CookBook.Recipes.Core.CustomValidation.MessageRepository
{
    public class XmlMessageRepository : IMessageRepository
    {
        #region IMessageRepository Members
        public IDictionary<string, string> GetMessages(string locale)
        {
            return null;
        }
        #endregion
    }
}

8. Modify GetMessages so that it looks like the following code:

public IDictionary<string, string> GetMessages(string locale)
{
    XmlDocument xDoc = new XmlDocument();
    xDoc.Load(Assembly.GetExecutingAssembly().GetManifestResourceStream("CustomValidation.Messages.xml"));
    XmlNodeList resources = xDoc.SelectNodes("messages/" + locale + "/message");
    SortedDictionary<string, string> dictionary = new SortedDictionary<string, string>();
    foreach (XmlNode node in resources)
    {
        dictionary.Add(node.Attributes["key"].Value, node.InnerText);
    }
    return dictionary;
}
Next, let us see how to modify UniqueUserValidator so that it can localize the error message.

How it works...

The Messages.xml file and the GetMessages method of XmlMessageRepository form the core of the logic to generate a locale-specific message. Messages.xml contains the key to each message within a locale tag. We created the locale tags using the two-letter ISO name of a locale, so for English it is <en></en> and for French it is <fr></fr>. Each locale tag contains message tags, and the key attribute tells us which message tag contains which error message:

<message key="not_unique_user">User name is not unique</message>

Here, not_unique_user is the key to the "User name is not unique" error message.

In the GetMessages method, we first load the XML file. Since the file is set as an embedded resource, we read it as a resource. To do so, we get the executing assembly, that is, CustomValidation, and call GetManifestResourceStream, passing the qualified name of the resource, which in this case is CustomValidation.Messages.xml:

xDoc.Load(Assembly.GetExecutingAssembly().GetManifestResourceStream("CustomValidation.Messages.xml"));

Then we construct an XPath expression to the message tags using the locale passed as the parameter, and select the message nodes:

XmlNodeList resources = xDoc.SelectNodes("messages/" + locale + "/message");

After getting the node list, we loop over it to construct a dictionary. The value of the key attribute of each node becomes a key of the dictionary, and the text of the node becomes the corresponding value:

SortedDictionary<string, string> dictionary = new SortedDictionary<string, string>();
foreach (XmlNode node in resources)
{
    dictionary.Add(node.Attributes["key"].Value, node.InnerText);
}

The dictionary is then returned by the method. Next, let's understand how this dictionary could be used by UniqueUserValidator; one possible wiring is sketched below.
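This sketch is an assumption, not the book's listing: it injects XmlMessageRepository into UniqueUserValidator, picks the locale from the current UI culture, and assumes using System.Threading in addition to the earlier using statements.

public class UniqueUserValidator : ValidationAttribute
{
    public IRepository Repository { get; set; }
    public IMessageRepository Messages { get; set; }

    public UniqueUserValidator()
    {
        this.Repository = new MockRepository();
        this.Messages = new XmlMessageRepository();
    }

    public override bool IsValid(object value)
    {
        string valueToTest = Convert.ToString(value);
        if (this.Repository.IsUsernameUnique(valueToTest))
            return true;

        // Pick the message for the current UI culture ("en", "fr", ...).
        // Using the thread's UI culture here is an assumption.
        string locale = Thread.CurrentThread.CurrentUICulture.TwoLetterISOLanguageName;
        IDictionary<string, string> messages = this.Messages.GetMessages(locale);
        this.ErrorMessage = messages["not_unique_user"];
        return false;
    }
}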


Yii: Adding Users and User Management to Your Site

Packt
21 Feb 2013
Mission Checklist

This project assumes that you have a web development environment prepared. The files for this project include a Yii project directory with a database schema. To prepare for the project, carry out the following steps, replacing the username lomeara with your own username:

1. Copy the project files into your working directory:

   cp -r ~/Downloads/project_files/"Chapter 3"/project_files ~/projects/ch3

2. Make the directories that Yii uses web-writable:

   cd ~/projects/ch3/
   sudo chown -R lomeara:www-data protected/runtime assets protected/models protected/controllers protected/views

3. If you have a link for a previous project, remove it from the webroot directory:

   rm /opt/lampp/htdocs/cddb

4. Create a link in the webroot directory to the copied directory:

   cd /opt/lampp/htdocs
   sudo ln -s ~/projects/ch3 cbdb

5. Import the project into NetBeans (remember to set the project URL to http://localhost/cbdb) and configure it for Yii development with PHPUnit.
6. Create a database named cbdb and load the database schema (~/projects/ch3/protected/data/schema.sql) into it.
7. If you are not using the XAMPP stack, or if your access to MySQL is password protected, review and update the Yii configuration file (in NetBeans it is ch3/Source Files/protected/config/main.php).

Adding a User Object with CRUD

As a foundation for our user management system, we will add a User table to the database and then use Gii to build a quick functional interface.

Engage Thrusters

Let's set the first building block by adding a User table containing the following information:

- A username
- A password hash
- A reference to a person entry for first name and last name

In NetBeans, open a SQL Command window for the cbdb database and run the following command:

CREATE TABLE `user` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `username` varchar(20) NOT NULL,
  `pwd_hash` char(34) NOT NULL,
  `person_id` int(10) unsigned NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `username` (`username`),
  CONSTRAINT `userperson_ibfk_2` FOREIGN KEY (`person_id`) REFERENCES `person` (`id`) ON DELETE CASCADE
) ENGINE=InnoDB;

Open a web browser to the Gii URL http://localhost/cbdb/index.php/gii (the password configured in the sample code is yiibook) and use Gii to generate a model from the user table. Then, use Gii to generate CRUD from the user model.

Back in NetBeans, add a link to the user index in your site's logged-in menu (ch3 | Source Files | protected | views | layouts | main.php). It should look like this:

} else {
    $this->widget('zii.widgets.CMenu', array(
        'activeCssClass' => 'active',
        'activateParents' => true,
        'items' => array(
            array('label' => 'Home', 'url' => array('/site/index')),
            array('label' => 'Comic Books', 'url' => array('/book'),
                'items' => array(
                    array('label' => 'Publishers', 'url' => array('/publisher')),
                )
            ),
            array('label' => 'Users', 'url' => array('/user/index')),
            array('label' => 'Logout ('.Yii::app()->user->name.')',
                'url' => array('/site/logout'))
        ),
    ));
} ?>

Right-click on the project name, run the site, and log in with the default username and password (admin/admin). You will see a menu that includes a link named Users. If you click on the Users link and then click on Create User, you will see a pretty awful-looking user-creation screen. We are going to fix that. First, we will update the user form to include fields for first name, last name, password, and repeat password. Edit ch3 | Source Files | protected | views | user | _form.php and add those fields.
Start by changing all instances of $model to $user. Then, add a call to errorSummary on the person data under the errorSummary call on user:

<?php echo $form->errorSummary($user); ?>
<?php echo $form->errorSummary($person); ?>

Add rows for first name and last name at the beginning of the form:

<div class="row">
    <?php echo $form->labelEx($person,'fname'); ?>
    <?php echo $form->textField($person,'fname',array('size'=>20,'maxlength'=>20)); ?>
    <?php echo $form->error($person,'fname'); ?>
</div>
<div class="row">
    <?php echo $form->labelEx($person,'lname'); ?>
    <?php echo $form->textField($person,'lname',array('size'=>20,'maxlength'=>20)); ?>
    <?php echo $form->error($person,'lname'); ?>
</div>

Replace the pwd_hash row with the following two rows:

<div class="row">
    <?php echo $form->labelEx($user,'password'); ?>
    <?php echo $form->passwordField($user,'password',array('size'=>20,'maxlength'=>64)); ?>
    <?php echo $form->error($user,'password'); ?>
</div>
<div class="row">
    <?php echo $form->labelEx($user,'password_repeat'); ?>
    <?php echo $form->passwordField($user,'password_repeat',array('size'=>20,'maxlength'=>64)); ?>
    <?php echo $form->error($user,'password_repeat'); ?>
</div>

Finally, remove the row for person_id. These changes are going to completely break the user create/update form for the time being. We want to capture the password data and ultimately make a hash out of it to store securely in the database. To collect the form inputs, we will add password fields to the User model that do not correspond to values in the database. Edit the User model (ch3 | Source Files | protected | models | User.php) and add two public variables to the class:

class User extends CActiveRecord
{
    public $password;
    public $password_repeat;

In the same User model file, modify the attributeLabels function to include labels for the new password fields:

public function attributeLabels()
{
    return array(
        'id' => 'ID',
        'username' => 'Username',
        'password' => 'Password',
        'password_repeat' => 'Password Repeat'
    );
}

In the same User model file, update the rules function with the following rules: require username; limit the length of username and password; compare password with password repeat; and accept only safe values. We will come back to this and improve it, but for now it should look like the following:

public function rules()
{
    // NOTE: you should only define rules for those attributes
    // that will receive user inputs.
    return array(
        array('username', 'required'),
        array('username', 'length', 'max'=>20),
        array('password', 'length', 'max'=>32),
        array('password', 'compare'),
        array('password_repeat', 'safe'),
    );
}

In order to store the user's first and last name, we must change the Create action in the User controller (ch3 | Source Files | protected | controllers | UserController.php) to create a Person object in addition to a User object. Change the variable name $model to $user, and add an instance of the Person model:

public function actionCreate()
{
    $user = new User;
    $person = new Person;

    // Uncomment the following line if AJAX validation is needed
    // $this->performAjaxValidation($user);

    if (isset($_POST['User'])) {
        $user->attributes = $_POST['User'];
        if ($user->save())
            $this->redirect(array('view', 'id' => $user->id));
    }

    $this->render('create', array(
        'user' => $user,
        'person' => $person,
    ));
}

Don't reload the create user page yet. First, update the last line of the user create view (ch3 | Source Files | protected | views | user | create.php) to send a User object and a Person object.
<?php echo $this->renderPartial('_form', array('user'=>$user, 'person'=>$person)); ?>

Make a change to the attributeLabels function in the Person model (ch3 | Source Files | protected | models | Person.php) to display clearer labels for first name and last name:

public function attributeLabels()
{
    return array(
        'id' => 'ID',
        'fname' => 'First Name',
        'lname' => 'Last Name',
    );
}

The resulting user form looks pretty good, but if you try to submit it, you will receive an error. To fix this, we will change the User Create action in the User controller (ch3 | Source Files | protected | controllers | UserController.php) to check and save both User and Person data:

if (isset($_POST['User'], $_POST['Person'])) {
    $person->attributes = $_POST['Person'];
    if ($person->save()) {
        $user->attributes = $_POST['User'];
        $user->person_id = $person->id;
        if ($user->save())
            $this->redirect(array('view', 'id' => $user->id));
    }
}

Great! Now you can create users, but if you try to edit a user entry, you see another error. This fix requires a couple more changes. First, in the User controller (ch3 | Source Files | protected | controllers | UserController.php), change the loadModel function to load the user model with its related person information:

$model = User::model()
    ->with('person')
    ->findByPk((int)$id);

Next, in the same file, change the actionUpdate function. Add a call to save the person data if the user save succeeds:

if ($model->save()) {
    $model->person->attributes = $_POST['Person'];
    $model->person->save();
    $this->redirect(array('view', 'id' => $model->id));
}

Then, in the user update view (ch3 | Source Files | protected | views | user | update.php), add the person information to the form render:

<?php echo $this->renderPartial('_form', array('user'=>$model, 'person' => $model->person)); ?>

One more piece of user management housekeeping: try deleting a user, then look in the database for the user and the person info. Oops. It didn't clean up after itself, did it? Update the User controller (ch3 | Source Files | protected | controllers | UserController.php) once again and change the call to delete in the user delete action:

$this->loadModel($id)->person->delete();

As noted earlier, the captured password still needs to be hashed before it is stored in pwd_hash; one possible approach is sketched below.
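The hashing step itself is not shown above, so the following is a hedged sketch of one way to do it, not necessarily the book's implementation. It uses PHP's crypt() with an MD5-style salt, which yields a 34-character string matching the pwd_hash char(34) column, and it could live in the User model:

protected function beforeSave()
{
    if (parent::beforeSave()) {
        if (!empty($this->password)) {
            // '$1$' selects CRYPT_MD5; the result is 34 characters,
            // matching the pwd_hash char(34) column.
            $salt = '$1$' . substr(uniqid(), -8) . '$';
            $this->pwd_hash = crypt($this->password, $salt);
        }
        return true;
    }
    return false;
}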
Objective Complete - Mini Debriefing

We have added a new object, User, to our site, and associated it with the Person object to capture the user's first and last name. Gii helped us get the basic structure of our user management function in place, and then we altered the model, view, and controller to bring the pieces together.

Rich Internet Application (RIA) – Canvas

Packt
19 Feb 2013
RIA — Canvas (Become an expert)

If you started your career in web design or development in the late 90s to early 2000s, there is a good chance that at some point you were asked to build a zany, cool, and bouncy website using (then) Macromedia Flash. After it was acquired by Adobe in 2005, Flash transformed from a stage-based, procedural-script-running, hard-coded, embedded object into a platform for Rich Internet Applications (RIAs). With the arrival of Adobe Flex as an SDK for Flash's ActionScript 3.0, the company tried to lure more developers into Flash development; later, Adobe donated Flex to the Apache Foundation. All this, and yet no browser vendor ever shipped the Flash plugin integrated with their products. Flash-based applications took time to develop, never had open source frameworks supporting the final product, and suffered many memory-hogging security threats. The biggest blow to the technology came when Apple decided not to support Flash on any of the iPad, iPhone, or iPod devices. The message was loud and clear: the web needed a new platform for Rich Internet Applications, one that could be seamlessly integrated in browsers without any third-party plugin at the visitor's end. Presenting HTML5 Canvas.

Getting ready

The HTML5 canvas element provides a canvas (surprise!) with a specified height and width inside a web page, which can be manipulated with JavaScript to generate graphics and Rich Internet Applications.

How to do it...

It is just the same as it was with video or audio:

<canvas id="TestCanvas" width="300" height="300">
Your browser does not support the Canvas element from HTML5.
</canvas>

The preceding syntax gives us a blank block element with the specified height and width, which can now be identified from JavaScript by the ID TestCanvas:

<script>
var test = document.getElementById("TestCanvas");
var col1 = test.getContext("2d");
col1.fillStyle = "#808";
col1.fillRect(0, 0, 20, 80);
</script>

A variable named test is defined with the document.getElementById() method to identify the canvas on the web page. The getContext object, a built-in HTML5 object, is stored in another variable called col1; the value 2d provides properties and methods for drawing paths, boxes, circles, text, images, and more. The fillStyle property defines the fill color of the drawing, and the fillRect(x,y,width,height) method takes four parameters to draw a rectangle at the given x and y coordinates. The origin of the x and y coordinates lies at the top-left corner of the canvas, unlike graph paper (which most of us are used to), where it lies in the bottom-left corner.

The graph can be extended to multiple columns with additional getContext variables:

<script>
var test = document.getElementById("TestCanvas");
var col1 = test.getContext("2d");
col1.fillStyle = "#808";
col1.fillRect(10, 0, 20, 80);
var col2 = test.getContext("2d");
col2.fillStyle = "#808";
col2.fillRect(40, 0, 20, 100);
var col3 = test.getContext("2d");
col3.fillStyle = "#808";
col3.fillRect(70, 0, 20, 120);
</script>

This renders three columns of increasing height. The getContext variables can be used with other drawing methods as well. To draw a line we use the moveTo(x,y) and lineTo(x,y) methods:

line.moveTo(10, 10);
line.lineTo(150, 250);
line.stroke();

The moveTo() method defines the starting point of the line and the lineTo() method defines the end point on the x and y coordinates.
The stroke() method, without any value assigned to it, connects the two assigned points with a line stroke. stroke() and fill() are the ink methods used to make a graphic visible. To draw a circle we use the arc(x,y,r,start,stop) method:

circle.beginPath();
circle.arc(150, 150, 80, 0, 2*Math.PI);
circle.fill();

With the arc() method, we must use either the fill() method or the stroke() method to get a visible area. For further exploration, here are a few more canvas members for text that can be tried out:

- font: specifies the font properties for the text
- fillText(text,x,y): draws normal, filled text on the canvas
- strokeText(text,x,y): draws outlined text without any fill color

Here are the syntaxes for the preceding properties:

text.font = "30px Arial";
text.fillText("HTML5 Canvas", 10, 50);
text.strokeText("HTML5 Canvas Text", 10, 100);

And for the last example, we will draw a raster image into the canvas using its ID:

var img = document.getElementById("canvas-bg");
draw.drawImage(img, 10, 10);

Just as with the ID for the canvas, the image is selected by the document.getElementById() method (draw here is a 2d context obtained as before), and then we can use the image as a background for the selected canvas. The image with the ID canvas-bg can be placed in a hidden div tag and later used as a background for a graph, a chart, or any other graphic. One of the most practical applications of text and image drawing on a canvas is the customization of a product, with a label image and text over it.

How it works...

There are many places where Canvas may be implemented in regular web development practice. It can be used to generate real-time charts, product customization applications, or more complex or simpler applications, depending on the requirement. Canvas is an HTML5 element, and the key (for Canvas) always remains with the JavaScript driving it. It is supported by all browsers apart from IE8 and below.

There's more...

It always helps when a developer knows the resources available at their disposal.

Open source JS frameworks for Canvas

There are many open source JavaScript frameworks and libraries available for easy development of graphics with Canvas. A few noteworthy ones are KineticJS and GoJS. Another framework is ThreeJS, which uses WebGL and allows 3D rendering for your web graphics. A complete page combining this article's snippets is sketched below.
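Putting the snippets together, a complete, runnable page might look like the following sketch; for brevity a single context variable is reused for all the shapes, which is equivalent to fetching getContext("2d") repeatedly:

<!DOCTYPE html>
<html>
<body>
<canvas id="TestCanvas" width="300" height="300">
Your browser does not support the Canvas element from HTML5.
</canvas>
<script>
var test = document.getElementById("TestCanvas");
var ctx = test.getContext("2d");

// The three-column chart
ctx.fillStyle = "#808";
ctx.fillRect(10, 0, 20, 80);
ctx.fillRect(40, 0, 20, 100);
ctx.fillRect(70, 0, 20, 120);

// A line from (10,10) to (150,250)
ctx.beginPath();
ctx.moveTo(10, 10);
ctx.lineTo(150, 250);
ctx.stroke();

// A filled circle centered at (150,150) with radius 80
ctx.beginPath();
ctx.arc(150, 150, 80, 0, 2 * Math.PI);
ctx.fill();

// Filled and stroked text
ctx.font = "30px Arial";
ctx.fillText("HTML5 Canvas", 10, 50);
ctx.strokeText("HTML5 Canvas Text", 10, 100);
</script>
</body>
</html>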

Apache Solr Configuration

Packt
19 Feb 2013
17 min read
(For more resources related to this topic, see here.)

During the writing of this article, I used Solr version 4.0 and Jetty version 8.1.5. If another version of Solr is mandatory for a feature to run, it will be mentioned. If you don't have any experience with Apache Solr, please refer to the Apache Solr tutorial, which can be found at http://lucene.apache.org/solr/tutorial.html.

Running Solr on Jetty

The simplest way to run Apache Solr on a Jetty servlet container is to run the provided example configuration based on embedded Jetty. But that's not the case here. In this recipe, I would like to show you how to configure and run Solr on a standalone Jetty container.

Getting ready

First of all you need to download the Jetty servlet container for your platform. You can get your download package from an automatic installer (such as apt-get), or you can download it yourself from http://jetty.codehaus.org/jetty/

How to do it...

The first thing is to install the Jetty servlet container, which is beyond the scope of this article, so we will assume that you have Jetty installed in the /usr/share/jetty directory or that you copied the Jetty files to that directory.

Let's start by copying the solr.war file to the webapps directory of the Jetty installation (so the whole path would be /usr/share/jetty/webapps). In addition to that we need a temporary directory in the Jetty installation, so let's create the temp directory in the Jetty installation directory.

Next we need to copy and adjust the solr.xml file from the context directory of the Solr example distribution to the context directory of the Jetty installation. The final file contents should look like the following code:

<?xml version="1.0"?>
<!DOCTYPE Configure PUBLIC "-//Jetty//Configure//EN"
 "http://www.eclipse.org/jetty/configure.dtd">
<Configure class="org.eclipse.jetty.webapp.WebAppContext">
 <Set name="contextPath">/solr</Set>
 <Set name="war"><SystemProperty name="jetty.home"/>/webapps/solr.war</Set>
 <Set name="defaultsDescriptor"><SystemProperty name="jetty.home"/>/etc/webdefault.xml</Set>
 <Set name="tempDirectory"><Property name="jetty.home" default="."/>/temp</Set>
</Configure>

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.packtpub.com. If you purchased this book elsewhere, you can visit http://www.packtpub.com/support and register to have the files e-mailed directly to you.

Now we need to copy the jetty.xml, webdefault.xml, and logging.properties files from the etc directory of the Solr distribution to the configuration directory of Jetty, so in our case to the /usr/share/jetty/etc directory.

The next step is to copy the Solr configuration files to the appropriate directory. I'm talking about files such as schema.xml, solrconfig.xml, solr.xml, and so on. Those files should be in the directory specified by the solr.solr.home system variable (in my case this was the /usr/share/solr directory).
Please remember to preserve the directory structure you'll see in the example deployment, so for example, the /usr/share/solr directory should contain the solr.xml file (and in addition zoo.cfg in case you want to use SolrCloud) with contents like so:

<?xml version="1.0" encoding="UTF-8" ?>
<solr persistent="true">
 <cores adminPath="/admin/cores" defaultCoreName="collection1">
  <core name="collection1" instanceDir="collection1" />
 </cores>
</solr>

All the other configuration files should go to the /usr/share/solr/collection1/conf directory (place the schema.xml and solrconfig.xml files there along with any additional configuration files your deployment needs). Your cores may have names other than the default collection1, so please be aware of that.

The last configuration step is to update the /etc/default/jetty file and add -Dsolr.solr.home=/usr/share/solr to the JAVA_OPTIONS variable of that file. The whole line with that variable could look like the following:

JAVA_OPTIONS="-Xmx256m -Djava.awt.headless=true -Dsolr.solr.home=/usr/share/solr/"

If you didn't install Jetty with apt-get or similar software, you may not have the /etc/default/jetty file. In that case, add the -Dsolr.solr.home=/usr/share/solr parameter to the Jetty startup.

We can now run Jetty to see if everything is OK. To start Jetty that was installed, for example, using the apt-get command, use the following command:

/etc/init.d/jetty start

You can also run Jetty with a java command. Run the following command in the Jetty installation directory:

java -Dsolr.solr.home=/usr/share/solr -jar start.jar

If there were no exceptions during the startup, we have a running Jetty with Solr deployed and configured. To check if Solr is running, try going to the following address with your web browser: http://localhost:8983/solr/. You should see the Solr front page with cores, or a single core, mentioned. Congratulations! You just successfully installed, configured, and ran the Jetty servlet container with Solr deployed.

How it works...

For the purpose of this recipe, I assumed that we needed a single core installation with only the schema.xml and solrconfig.xml configuration files. A multicore installation is very similar – it differs only in terms of the Solr configuration files.

The first thing we did was copy the solr.war file and create the temp directory. The WAR file is the actual Solr web application. The temp directory will be used by Jetty to unpack the WAR file.

The solr.xml file we placed in the context directory enables Jetty to define the context for the Solr web application. As you can see in its contents, we set the context to be /solr, so our Solr application will be available under http://localhost:8983/solr/. We also specified where Jetty should look for the WAR file (the war property), where the web application descriptor file is (the defaultsDescriptor property), and finally where the temporary directory will be located (the tempDirectory property).

The next step is to provide configuration files for the Solr web application. Those files should be in the directory specified by the solr.solr.home system variable. I decided to use the /usr/share/solr directory to ensure that I'll be able to update Jetty without the need to override or delete the Solr configuration files. When copying the Solr configuration files, you should remember to include all the files and the exact directory structure that Solr needs.
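For reference, the minimal layout described in this recipe looks like the following (the core name and any extra configuration files will vary with your deployment):

/usr/share/solr/
    solr.xml            (plus zoo.cfg if you use SolrCloud)
    collection1/
        conf/
            schema.xml
            solrconfig.xml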
So in the directory specified by the solr.solr.home variable, the solr.xml file should be available – the one that describes the cores of your system. The solr.xml file is pretty simple – there should be a root element called solr. Inside it there should be a cores tag (with the adminPath attribute set to the address where Solr's cores administration API is available and the defaultCoreName attribute that says which core is the default one). The cores tag is a parent for the core definitions – each core should have its own core tag with a name attribute specifying the core name and an instanceDir attribute specifying the directory where the core-specific files will be available (such as the conf directory).

If you installed Jetty with the apt-get command or similar, you will need to update the /etc/default/jetty file to include the solr.solr.home variable for Solr to be able to see its configuration directory.

After all those steps we are ready to launch Jetty. If you installed Jetty with apt-get or similar software, you can run Jetty with the first command shown in the example. Otherwise you can run Jetty with a java command from the Jetty installation directory.

After running the example query in your web browser you should see the Solr front page as a single core. Congratulations! You just successfully configured and ran the Jetty servlet container with Solr deployed.

There's more...

There are a few tasks you can do to counter some problems when running Solr within the Jetty servlet container. Here are the most common ones that I encountered during my work.

I want Jetty to run on a different port

Sometimes it's necessary to run Jetty on a port other than the default one. We have two ways to achieve that:

Adding an additional startup parameter, jetty.port. The startup command would look like the following:

java -Djetty.port=9999 -jar start.jar

Changing the jetty.xml file – to do that you need to change the following line:

<Set name="port"><SystemProperty name="jetty.port" default="8983"/></Set>

to:

<Set name="port"><SystemProperty name="jetty.port" default="9999"/></Set>

Buffer size is too small

Buffer overflow is a common problem when our queries are getting too long and too complex – for example, when we use many logical operators or long phrases. When the standard header buffer is not enough, you can resize it to meet your needs. To do that, add the following line to the Jetty connector in the jetty.xml file. Of course the value shown in the example can be changed to the one that you need:

<Set name="headerBufferSize">32768</Set>

After adding the value, the connector definition should look more or less like the following snippet:

<Call name="addConnector">
 <Arg>
  <New class="org.mortbay.jetty.bio.SocketConnector">
   <Set name="port"><SystemProperty name="jetty.port" default="8080"/></Set>
   <Set name="maxIdleTime">50000</Set>
   <Set name="lowResourceMaxIdleTime">1500</Set>
   <Set name="headerBufferSize">32768</Set>
  </New>
 </Arg>
</Call>

Running Solr on Apache Tomcat

Sometimes you need to choose a servlet container other than Jetty. Maybe your client has other applications running on another servlet container, or maybe you just don't like Jetty. Whatever your requirements are that put Jetty out of the scope of your interest, the first thing that comes to mind is a popular and powerful servlet container – Apache Tomcat. This recipe will give you an idea of how to properly set up and run Solr in the Apache Tomcat environment.
Getting ready

First of all we need an Apache Tomcat servlet container. It can be found at the Apache Tomcat website – http://tomcat.apache.org. I concentrated on Tomcat version 7.x because at the time of writing this book it was mature and stable. The version that I used during the writing of this recipe was Apache Tomcat 7.0.29, which was the newest one at the time.

How to do it...

To run Solr on Apache Tomcat we need to follow these simple steps:

Firstly, you need to install Apache Tomcat. The Tomcat installation is beyond the scope of this book, so we will assume that you have already installed this servlet container in the directory specified by the $TOMCAT_HOME system variable.

The second step is preparing the Apache Tomcat configuration files. To do that we need to add the following attribute to the connector definition in the server.xml configuration file:

URIEncoding="UTF-8"

The portion of the modified server.xml file should look like the following code snippet:

<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" URIEncoding="UTF-8" />

The third step is to create a proper context file. To do that, create a solr.xml file in the $TOMCAT_HOME/conf/Catalina/localhost directory. The contents of the file should look like the following code:

<Context path="/solr" docBase="/usr/share/tomcat/webapps/solr.war" debug="0" crossContext="true">
 <Environment name="solr/home" type="java.lang.String" value="/usr/share/solr/" override="true"/>
</Context>

The next thing is the Solr deployment. To do that we need the apache-solr-4.0.0.war file, which contains the necessary files and libraries to run Solr, to be copied to the Tomcat webapps directory and renamed solr.war.

The one last thing we need to do is add the Solr configuration files. The files that you need to copy are files such as schema.xml, solrconfig.xml, and so on. Those files should be placed in the directory specified by the solr/home variable (in our case /usr/share/solr/). Please don't forget that you need to ensure the proper directory structure. If you are not familiar with the Solr directory structure, please take a look at the example deployment that is provided with the standard Solr package.

Please remember to preserve the directory structure you'll see in the example deployment, so for example, the /usr/share/solr directory should contain the solr.xml file (and in addition zoo.cfg in case you want to use SolrCloud) with contents like so:

<?xml version="1.0" encoding="UTF-8" ?>
<solr persistent="true">
 <cores adminPath="/admin/cores" defaultCoreName="collection1">
  <core name="collection1" instanceDir="collection1" />
 </cores>
</solr>

All the other configuration files should go to the /usr/share/solr/collection1/conf directory (place the schema.xml and solrconfig.xml files there along with any additional configuration files your deployment needs). Your cores may have names other than the default collection1, so please be aware of that.

Now we can start the servlet container by running the following command:

bin/catalina.sh start

In the log file you should see a message like this:

Info: Server startup in 3097 ms

To ensure that Solr is running properly, you can run a browser and point it to an address where Solr should be visible, like the following: http://localhost:8080/solr/

If you see the page with links to the administration pages of each of the cores defined, that means that your Solr is up and running.

How it works...
Let's start from the second step, as the installation part is beyond the scope of this book. As you probably know, Solr uses UTF-8 file encoding. That means that we need to ensure that Apache Tomcat will be informed that all requests and responses made should use that encoding. To do that, we modified the server.xml file in the way shown in the example.

The Catalina context file (called solr.xml in our example) says that our Solr application will be available under the /solr context (the path attribute). We also specified the WAR file location (the docBase attribute). We also said that we are not using debug (the debug attribute), and we allowed Solr to access other context manipulation methods. The last thing is to specify the directory where Solr should look for the configuration files. We do that by adding the solr/home environment variable with the value attribute set to the path of the directory where we have put the configuration files.

The solr.xml file is pretty simple – there should be a root element called solr. Inside it there should be a cores tag (with the adminPath attribute set to the address where the Solr cores administration API is available and the defaultCoreName attribute describing which core is the default one). The cores tag is a parent for the core definitions – each core should have its own core tag with a name attribute specifying the core name and an instanceDir attribute specifying the directory where the core-specific files will be available (such as the conf directory).

The shell command that is shown starts Apache Tomcat. There are some other options of the catalina.sh (or catalina.bat) script; the descriptions of these options are as follows:

stop: This stops Apache Tomcat
restart: This restarts Apache Tomcat
debug: This starts Apache Tomcat in debug mode
run: This runs Apache Tomcat in the current window, so you can see the output on the console from which you ran Tomcat

After opening the example address in the web browser, you should see a Solr front page with a core (or cores if you have a multicore deployment). Congratulations! You just successfully configured and ran the Apache Tomcat servlet container with Solr deployed.

There's more...

There are some other tasks that are common problems when running Solr on Apache Tomcat.

Changing the port on which we see Solr running on Tomcat

Sometimes it is necessary to run Apache Tomcat on a port other than 8080, which is the default one. To do that, you need to modify the port attribute of the connector definition in the server.xml file located in the $TOMCAT_HOME/conf directory. If you would like your Tomcat to run on port 9999, this definition should look like the following code snippet:

<Connector port="9999" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" URIEncoding="UTF-8" />

while the original definition looks like the following snippet:

<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" URIEncoding="UTF-8" />

Installing a standalone ZooKeeper

You may know that in order to run SolrCloud (the distributed Solr installation) you need to have Apache ZooKeeper installed. ZooKeeper is a centralized service for maintaining configuration information and naming, and for providing distributed synchronization. SolrCloud uses ZooKeeper to synchronize configuration and cluster states (such as elected shard leaders), and that's why it is crucial to have a highly available and fault tolerant ZooKeeper installation.
If you have a single ZooKeeper instance and it fails, then your SolrCloud cluster will crash too. So, this recipe will show you how to install ZooKeeper so that it's not a single point of failure in your cluster configuration.

Getting ready

The installation instructions in this recipe cover installing ZooKeeper version 3.4.3, but they should be usable for any minor release change of Apache ZooKeeper. To download ZooKeeper please go to http://zookeeper.apache.org/releases.html. This recipe will show you how to install ZooKeeper in a Linux-based environment. You also need Java installed.

How to do it...

Let's assume that we decided to install ZooKeeper in the /usr/share/zookeeper directory of our server and that we want to have three servers (with IP addresses 192.168.1.1, 192.168.1.2, and 192.168.1.3) hosting the distributed ZooKeeper installation.

After downloading the ZooKeeper installation, we create the necessary directory:

sudo mkdir /usr/share/zookeeper

Then we unpack the downloaded archive to the newly created directory. We do that on all three servers.

Next we need to change our ZooKeeper configuration file and specify the servers that will form the ZooKeeper quorum, so we edit the /usr/share/zookeeper/conf/zoo.cfg file and add the following entries:

clientPort=2181
dataDir=/usr/share/zookeeper/data
tickTime=2000
initLimit=10
syncLimit=5
server.1=192.168.1.1:2888:3888
server.2=192.168.1.2:2888:3888
server.3=192.168.1.3:2888:3888

Each server also needs a myid file in its data directory (/usr/share/zookeeper/data in our case) containing just its server number (1, 2, or 3, matching the server.N entries), so that each instance knows which member of the quorum it is.

And now, we can start the ZooKeeper servers with the following command:

/usr/share/zookeeper/bin/zkServer.sh start

If everything went well you should see something like the following:

JMX enabled by default
Using config: /usr/share/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

And that's all. Of course you can also add the ZooKeeper service to start automatically during your operating system startup, but that's beyond the scope of this recipe and the book itself.

How it works...

Let's skip the first part, because creating the directory and unpacking the ZooKeeper server there is quite simple. What I would like to concentrate on are the configuration values of the ZooKeeper server. The clientPort property specifies the port on which our SolrCloud servers should connect to ZooKeeper. The dataDir property specifies the directory where ZooKeeper will hold its data. So far, so good, right? Now the more advanced properties: the tickTime property, specified in milliseconds, is the basic time unit for ZooKeeper. The initLimit property specifies how many ticks the initial synchronization phase can take. Finally, the syncLimit property specifies how many ticks can pass between sending a request and receiving an acknowledgement.

There are also three additional properties present: server.1, server.2, and server.3. These define the addresses of the ZooKeeper instances that will form the quorum. Each value has three parts separated by colon characters. The first part is the IP address of the ZooKeeper server, and the second and third parts are the ports used by ZooKeeper instances to communicate with each other.
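As a quick sanity check (not part of the original recipe), you can ask each node for its role in the quorum, and then point a SolrCloud node at the ensemble. The paths and addresses below assume the installation described above:

# On each of the three servers:
/usr/share/zookeeper/bin/zkServer.sh status
# One node should report "Mode: leader" and the other two "Mode: follower".

# A Solr 4.0 node can then be started against the ensemble, for example:
java -Dsolr.solr.home=/usr/share/solr \
     -DzkHost=192.168.1.1:2181,192.168.1.2:2181,192.168.1.3:2181 \
     -jar start.jar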

An Introduction to Risk Analysis

Packt
18 Feb 2013
21 min read
(For more resources related to this topic, see here.)

Risk analysis

First, we must understand what risk is and how it is calculated, and then implement a solution to mitigate or reduce the calculated risk. At this point in the process of developing agile security architecture, we have already defined our data. The following sections assume we know what the data is, just not the true impact to the enterprise if a threat is realized.

What is risk analysis?

Simply stated, risk analysis is the process of assessing the components of risk (threats, impact, and probability) as they relate to an asset, in our case enterprise data. To ascertain risk, the probability of impact to enterprise data must first be calculated. A simple risk analysis output may be the decision to spend capital to protect an asset, based on the value of the asset and the scope of impact if the risk is not mitigated. This is the most general form of risk analysis, and there are several methods that can be applied to produce a meaningful output.

Risk analysis is directly affected by the maturity of the organization, in terms of being able to show value to the enterprise as a whole and understanding the applied risk methodology. If the enterprise does not have a formal risk analysis capability, it will be difficult for the security team to use this method to properly implement security architecture for enterprise initiatives. Without this capability, the enterprise will either spend on the products with the best marketing, or not spend at all. Let's take a closer look at the risk analysis components and figure out where useful analysis data can be obtained.

Assessing threats

First, we must define what a threat is in order to identify probable threats. It may be difficult to determine threats to the enterprise data if this analysis has never been completed. A threat is anything that can act negatively towards the enterprise assets. It may be a person, virus, malware, or a natural disaster. Due to the broad scope of threats, actions may be purposeful or unintentional in nature, adding to the unpredictability of impact.

Once a threat is defined, the attributes of threats must be identified and documented. The documentation of threats should include the type of threat, identified threat groupings, motivations if any, and methods of action. In order to gain an understanding of the threats pertinent to the enterprise, researching past events may be helpful. Historically, there have been challenges in getting realistic breach data, but better reporting of post-breach findings continues to reduce the uncertainty of analysis. Another method of getting data is leveraging the security technologies already implemented, to build a realistic perspective of threats.

The following are a few sample questions to guide you in the discovery of threats:

What is being detected by the existing infrastructure?
What are others in the same industry observing?
What post-breach data is available in the same industry vertical?
Who would want access to this data?
What would motivate a person to attempt unauthorized access to the data?
Possible motivations include:

Data theft
Destruction
Notoriety
Hacktivism
Retaliation

A sample table of data type, threat, and motivation is shown as follows:

Data                                       | Threat                | Motivation
Credit card numbers                        | Hacker                | Theft, cybercrime
Trade secrets                              | Competitor            | Competitive advantage
Personally Identifiable Information (PII)  | Disgruntled employee  | Retaliation, destruction
Company confidential documents             | Accidental leak       | None
Client list                                | Natural disaster      | None

This should be developed with as much detail as possible to form a realistic view of threats to the enterprise. There may also be several variations of threats and motivations for threat action on enterprise data. For example, a competitor may access trade secrets for competitive advantage, or a hacker may take action as part of hacktivism to bring negative press to the enterprise. The more you can elaborate on the possible threats and motivations that exist, the better you will be able to reduce the list to probable threats by challenging the data you have gathered. It is important to continually challenge the logic used, to keep the most realistic perspective.

Assessing impact

Now that the probable threats have been identified, what kind of damage can be done, or what negative impact can be enacted upon the enterprise and the data? Impact is the outcome of threats acting against the enterprise. This could be a denial-of-service state, where the agent, a hacker, uses a tool to starve the enterprise Internet web servers of resources, causing a denial-of-service state for legitimate users. Another impact could be the loss of customer credit cards, resulting in online fraud, reputation loss, and countless dollars in cleanup and remediation efforts.

There are immediate impacts and residual impacts. Immediate impacts are rather easy to determine because, typically, this is what we see in the news if it is a big enough issue. Hopefully, the impact data does not come from first-hand experience, but in case it does, executives should take action and learn from their mistakes. If there is no real-life experience of the impact, researching breach data will help, using Internet sites such as DATALOSS db (http://datalossdb.org). Also, understanding the value of the data to the enterprise and its customers will aid in impact calculation. I think the latter impact analysis is more useful, but if the enterprise is unsure, then relying on breach data may be the only option.

The following are a few sample discovery questions for business impact analysis:

How is the enterprise affected by threat actions?
Will we go out of business?
Will we lose market share?
If the data is deleted or manipulated, can it be recovered or restored?
If the building is destroyed, do we have disaster recovery and business continuity capabilities?

To get a more accurate assessment of the probable impact or total cost to the enterprise, map out what data is most desirable to steal, destroy, or manipulate. Align the identified threats to the identified data, and apply an impact level to the data indicating whether the enterprise would suffer critical to minor loss. These should be as accurate as possible. Work the scenarios out on paper and base the impact analysis on the outcome of the exercises. The following is a sample table presenting the identification and assessment of impact based on threat for a retailer. This is generally called a business impact analysis.
Data                            | Threat                | Impact
Credit card numbers             | Hacker                | Critical
Trade secrets                   | Competitor            | Medium
PII                             | Disgruntled employee  | High
Company confidential documents  | Accidental leak       | Low
Client list                     | Natural disaster      | Medium

The enterprise's industry vertical may affect the impact analysis. For instance, a retailer may suffer greater impact if credit card numbers are stolen than if its client list were stolen. Both scenarios have impact, but one may warrant greater protection and more restricted access to limit the scope of impact and reduce immediate and residual loss. Business impact should be measured by how the threat actions affect the business overall. Is it an annoyance, or does it mean the business can no longer function? Natural disasters should also be accounted for and considered when assessing enterprise risk.

Assessing probability

Now that all conceivable threats have been identified, along with the business impact for each scenario, how do we really determine risk? Shouldn't risk be based on how likely the threat is to take action, succeed, and cause an impact? Yes! The threat can be the most perilous thing imagined, but if threat actions may only occur once in three thousand years, investment in protecting against the threat may not be warranted, at least in the near term.

Probability data is as difficult, if not more difficult, to find than threat data. However, this calculation has the most influence on the derived risk. If the identified impact is expected to happen twice a year and the business impact is critical, perhaps security budget should be allocated to security mechanisms that mitigate or reduce the impact. The risk of the latter scenario would be higher because it is more probable; not merely possible, but probable. Anything is possible.

I have heard an analogy for this that makes the point. In the game of Russian roulette, a semi-automatic pistol either has a bullet in the chamber or it does not; this is possibility. With a revolver and a quick spin of the cylinder, you now have a 1 in 6 chance that a bullet will be fired when the firing pin strikes forward. This is oversimplified to illustrate possibility versus probability. There are several variables in the example that could affect the outcome, such as a misfire, or the safety catch being enabled, stopping the gun's ability to fire. These would be calculated to form an accurate risk value. Make sense? This is how we need to approach probability.

Technically, it is a semi-accurate estimation, because there is just not enough detailed information on breaches and attacks to draw absolute conclusions. One approach may be to research what is happening in the same industry using online resources and peer groups, and then make intelligent estimates to determine if the enterprise could be affected too. Generally, there are outlier scenarios that require the utmost attention regardless; start here if these have not been identified as probable risk scenarios for the enterprise.

The following are a few sample probability estimation questions:

Has this event occurred before in the enterprise?
Is there data to suggest it is happening now?
Are there documented instances for similar enterprises?
Do we know anything in regard to occurrence?
Are the identified threat and impact really probable?
The following table is the continuation of our risk analysis for our fictional retailer:

Data                            | Threat                | Impact    | Probability
Credit card numbers             | Hacker                | Critical  | High
Trade secrets                   | Competitor            | Medium    | Low
PII                             | Disgruntled employee  | High      | Medium
Company confidential documents  | Accidental leak       | Low       | Low
Client list                     | Natural disaster      | Medium    | High

Based on the outcome of the probability exercises for identified threats and impacts, risk can be calculated and the appropriate course of action(s) developed and implemented.

Assessing risk

Now that the enterprise has agreed on what data has value, identified threats to the data, rated the impact to the enterprise, and estimated the probability of the impact occurring, the next logical step is to calculate the risk of the scenarios. Essentially, there are two methods to analyze and present risk: qualitative and quantitative. The decision to use one over the other should be based on the maturity of the enterprise's risk office. In general, a quantitative risk analysis will use descriptive labels like a qualitative method, but it involves considerably more financial and mathematical analysis.

Qualitative risk analysis

Qualitative risk analysis provides a perspective of risk in levels, with labels such as Critical, High, Medium, and Low. The enterprise must still define what each level means in a general financial perspective. For instance, a Low risk level may equate to a monetary loss of $1,000 to $100,000. The dollar ranges associated with each risk level will vary by enterprise. This must be agreed on by the entire enterprise, so that when risk is discussed, everyone knows what each label means financially. Do not confuse the estimated financial loss with the more detailed quantitative risk analysis approach; it is a simple valuation metric for deciding how much investment should be made based on probable monetary loss.

The following section is an example qualitative risk analysis presenting the type of input required. Notice that this is not a deep analysis of each of these inputs; it is designed to provide a relatively accurate perspective of the risk associated with the scenario being analyzed.

Qualitative risk analysis exercise

Scenario: Hacker attacks website to steal credit card numbers located in a backend database.
Threat: External hacker.
Threat capability: Novice to pro.
Threat capability logic: There are several script-kiddie level tools available to wage SQL injection attacks. SQL injection is also well documented, and professional hackers can use advanced techniques in conjunction with the automated tools.
Vulnerability: 85 percent (how effective the threat would be against current mitigating mechanisms).
Estimated impact: High, Medium, or Low, as indicated in the following table.

Risk    | Estimated loss ($)
High    | > 1,000,000
Medium  | 500,000 to 900,000
Low     | < 500,000

Quantitative risk analysis

Quantitative risk analysis is an in-depth assessment of what the monetary loss to the enterprise would be if the identified risk were realized. In order to facilitate this analysis, the enterprise must have a good understanding of its processes, to determine a relatively accurate dollar amount for items such as systems, data restoration services, and the man-hour breakdown for recovery or remediation of an impacting event.
Typically, enterprises with a mature risk office will undertake this type of analysis to drive priority budget items or find areas to increase insurance, effectively transferring business risk. This will also allow for accurate communication to the board and enterprise executives, who can then know at any given time the amount of risk the enterprise has assumed. With the quantitative approach, a more accurate assessment of the threat types, threat capabilities, vulnerability, threat action frequency, and expected loss per threat action is required, and these must be as accurate as possible. As with qualitative risk analysis, the output of this analysis has to be compared to the cost of mitigating the identified threat. Ideally, the cost to mitigate would be less than the loss expectancy over a determined period of time. This is a simple return on investment (ROI) calculation.

Let's look again at the scenario used in the qualitative analysis and run it through a quantitative analysis. We will then compare against the price of a security product that would mitigate the risk, to see if it is worth the capital expense. Before we begin the quantitative risk analysis, there are a couple of terms that need to be explained:

Annual loss expectancy (ALE): The ALE is the calculation of what the financial loss to the enterprise would be from the threat event over a single year. This is directly related to threat frequency. In the scenario, the event is expected once every three years, so dividing the single loss expectancy by the number of years between occurrences provides the ALE.

Cost of protection (COP): The COP is the capital expense associated with the purchase or implementation of a security mechanism to mitigate or reduce the risk scenario. An example would be a firewall that costs $150,000, or $50,000 per year of protection over the loss expectancy period. If the cost of protection over the same period is lower than the loss, this is a good indication that the capital expense is financially worthwhile.

Quantitative risk analysis exercise

Scenario: Hacker attacks website to steal credit card numbers located in a backend database.
Threat: External hacker.
Threat capability: Novice to pro.
Threat capability logic: There are several script-kiddie level tools available to wage SQL injection attacks. SQL injection is also well documented, and professional hackers can use advanced techniques in conjunction with the automated tools.
Vulnerability: 85 percent (how effective the threat would be against current mitigating mechanisms).
Single loss expectancy: $250,000.
Threat frequency: once every three years.
ALE: $83,000.
COP: $150,000 (over 3 years).

We will divide both the total loss and the cost of protection over three years because, typically, capital expenses are depreciated over three to four years, and the loss is expected once every three years. This gives us the annualized ALE and COP for the cost-benefit equation. This is a simplified example, but the math would look as follows:

$83,000 (ALE) - $50,000 (annual COP) = $33,000 (annual cost benefit)

The annual loss is $33,000 more than the annual cost of protecting against the threat. The assumption in our example is that the $250,000 figure is 85% of the total asset value; since the existing controls already provide 15% protection, the full asset value is approximately $294,000. This step can be shortcut out of the equation if the ALE and rate of occurrence are known.
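This arithmetic is simple enough to capture in a few lines of code for repeated what-if runs. Here is a minimal sketch; the figures are the hypothetical ones from the exercise above, and the function names are our own:

// A sketch of the annualized cost-benefit calculation from the exercise.
function annualLossExpectancy(singleLossExpectancy, yearsBetweenEvents) {
  return singleLossExpectancy / yearsBetweenEvents;
}

function annualCostBenefit(ale, totalCostOfProtection, depreciationYears) {
  var annualCop = totalCostOfProtection / depreciationYears;
  return ale - annualCop; // positive: expected loss exceeds the protection cost
}

var ale = annualLossExpectancy(250000, 3);       // ~83,333
var benefit = annualCostBenefit(ale, 150000, 3); // ~33,333
console.log("ALE: $" + Math.round(ale) + ", annual cost benefit: $" + Math.round(benefit));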
When trying to figure out threat capability, be as realistic about the threat as possible first. This will help you better assess vulnerability, because you will have a more accurate perspective on how realistic the threat is to the enterprise. For instance, if your scenario requires cracking advanced encryption and extensive system experience, the threat capability would be expert, indicating that current security controls may be acceptable against the majority of threat agents, reducing probability and calculated risk. We tend to exaggerate in security to justify a purchase. We need to stop this trend and focus on the best areas to spend precious budget dollars.

The ultimate goal of a quantitative risk analysis is to ensure that spend for protection does not far exceed the threat the enterprise is protecting against. This is beneficial for the security team in justifying the expense of security budget line items. When the analysis is complete, there should still be a qualitative risk label associated with the risk. Using the above scenario, an annualized risk of $83,000 indicates this scenario is extremely low risk based on the risk levels defined in the qualitative risk exercise, even if the full SLE of $250,000 is used. Does this analysis accurately represent acceptable loss? After an assessment is complete, it is good practice to ensure all assumptions still hold true, especially the risk labels and associated monetary amounts.

Applying risk analysis to trust models

Well, now we can apply our risk methodology to our trust models to decide whether we can continue with our implementation as is, or whether we need to change our approach based on risk. Our trust models, which are essentially use cases, rely on completing the risk analysis, which in turn decides the trust level and the security mechanisms required to reduce the enterprise risk to an acceptable level. It would be foolish to think that we can shove all requests for similar access directly into one of these buckets without further analysis to determine the real risk associated with the request. After completing one of the risk analysis types we just covered, risk guidance can be provided for the scenario (and I stress guidance). For the sake of simplicity an implementation path may be chosen, but it will lead to compromises in the overall security of the enterprise, and this is cautioned against.

I have re-presented the table for one scenario, the external application user. This is a better representation of how a trust model should look, with risk and security enforcement established for the scenario. If an enterprise is aware of how it conducts business, then a focused effort in this area should produce a realistic list of interactions with data: by whom, with what level of trust, and, based on risk, what controls need to be present and enforced by policy and standards.

User type                    | External
Allowed access               | Tier 1 DMZ only, least privilege
Trust level                  | 1 - Not trusted
Risk                         | Medium
Policy                       | Acceptable use, monitoring, access restrictions
Required security mechanisms | FW, IPS, web application firewall

The user is assumed to have access to log in to the web application and to have further possible interaction with the backend database(s). This should be a focal point for testing, because this is the biggest area of risk in this scenario. Threats such as SQL injection, which can be waged against a web application with little to no experience, are commonplace. Enterprises that have e-commerce websites typically do not restrict who can create an account. This should feed into the trust decision and ultimately the security architecture applied.
Deciding on a risk analysis methodology

We have covered the two general types of risk analysis, qualitative and quantitative, but which is best? It depends on several factors: the risk awareness of the enterprise, the risk analysts' capabilities, the available risk analysis data, and the influence of risk in the enterprise. If the idea of risk analysis or IT risk analysis is new to the enterprise, then a slow approach with qualitative analysis is recommended, to get everyone thinking about risk and what it means to the business. It will be imperative to get enterprise-wide agreement on the risk labels. Using the less involved method does not mean you will not be questioned on the data used in the analysis, so be prepared to defend the data and explain the estimation methods leveraged.

If it is decided to use a quantitative risk analysis method, a considerable amount of effort is required, along with meticulous loss figures and knowledge of the environment. This method is considered the most effective, requiring risk expertise, resources, and an enterprise-wide commitment to risk analysis. This method is more accurate, though it can be argued that since both methods require some level of estimation, the accuracy lies in accurate estimation skills. I use the Douglas Hubbard school of thought on estimating with 90 percent accuracy. You will find his works at his website, http://www.hubbardresearch.com/. I highly recommend his title How to Measure Anything: Finding the Value of "Intangibles" in Business (Tantor Media) to learn estimation skills. It may be beneficial to have an external firm perform the analysis if the engagement is significant in size.

The benefit of both should be that the enterprise is able to make risk-aware decisions on how to securely implement IT solutions. Both should be presented with common risk levels such as High, Medium, and Low; essentially the common language everyone can speak, knowing a range of financial risk without all the intimate details of how it was arrived at.

Other thoughts on risk and new enterprise endeavors

Now that you have been presented with the types of risk analysis, they should be applied as tools to best approach the new technologies being implemented in the networks of our enterprises. Unfortunately, broad brush strokes of trusted and untrusted approaches are being applied that may or may not be accurate without risk analysis as a decision input. Two examples where this can be very costly are the new BYOD and cloud initiatives. At first glance these are the two most risky business maneuvers an enterprise can attempt from an information security perspective. Deciding if this really is the case requires an analysis based on trust models and data-centric security architecture. If the proper security mechanisms are implemented and security is applied from users to data, the risk can be reduced to a tolerable level.

The BYOD business model has many positive benefits for the enterprise, especially capital expense reduction. However, implementing a BYOD or cloud solution without further analysis of risk can introduce significant risk beyond the benefit of the initiative. Do not be quick to spread fear in order to avoid facing the changing landscape we have worked so hard to build and secure. It is different, but at one time, what we know today as the norm was new too. Be cautious but creative, or IT security will be discredited for what will be perceived as a difficult interaction. This is not the desired perception for IT security.
Strive to understand the business case and the risk to business assets (data, systems, people, processes, and so on), and then apply sound security architecture as we have discussed so far. Begin evangelizing the new approach to security in the enterprise by developing trust models that everyone can understand. Use this as the introduction to agile security architecture, and get input to create models based on risk. By providing a risk-based perspective on emerging technologies and other radical requests, a methodical approach can bring better adoption and overall increased security in the enterprise.

Summary

In this article, we took a look at analyzing risk by presenting quantitative and qualitative methods, including an exercise for each approach. The overall goal of security is to be integrated into business processes, so it is truly a part of the business and not an expensive afterthought simply there to patch a security problem.

Resources for Article:

Further resources on this subject:
Microsoft Enterprise Library: Security Application Block [Article]
Microsoft Enterprise Library: Authorization and Security Cache [Article]
Getting Started with Enterprise Library [Article]

Blocking versus Non blocking scripts

Packt
08 Feb 2013
7 min read
(For more resources related to this topic, see here.)

Blocking versus non-blocking

The reason we've put this library into the head of the HTML page, and not the footer, is because we actually want Modernizr to be a blocking script; this way it will test for, and if applicable create or shim, any elements before the DOM is rendered. We also want to be able to tell what features are available to us before the page is rendered. The script will load, Modernizr will test the availability of the new semantic elements and, if necessary, shim in the ones that fail the tests, and the rest of the page load will be on its merry way.

Now, when I say that we want the script to be "blocking", what I mean is that the browser will wait to render or download any more of the page content until the script has finished loading. In essence, everything will move in a serial process, and future processes will be "blocked" from occurring until after this takes place. This is also referred to as single threaded. More commonly, as you may already be aware, JavaScripts are called upon in the footer of the page, typically just before the closing body tag, or by way of self-construction through an anonymous or immediate function, which only builds itself once the DOM is already parsed.

The async attribute

Even more recently, included page scripts can have the async attribute added to their script tags, which tells the browser to download other scripts in parallel. I like to think of serial versus parallel script downloading in terms of a phone conversation, each script being a single phone call. For each call made, a conversation is held, and once complete, the next phone call is made, until there aren't any numbers left to be dialled. Parallel or asynchronous would be like having all of the callers on a conference call at one time. The browser, as the person making all these calls at once, has the superpower to hold all these conversations at the same time. Blocking scripts, in contrast, are phone calls whose conversations contain pieces of information the browser needs to know before dialling up the other scripts on this metaphoric conference call.

Blocking to allow shimming

For our needs, however, we want Modernizr to block, so that all feature tests and shimming can be done before the DOM is rendered. The piece of information the browser needs before calling out to the other scripts and parts of the page is which features exist, and whether or not semantic HTML5 elements need to be simulated. Doing otherwise could mean tragedy: the CSS could target elements that don't exist because our shim wasn't there to serve its purpose. It would be similar to a roofer trying to attach shingles to a roof without any nails. Think of shimming as the nails that let the CSS attach its selectors to their respective DOM nodes. Browsers such as IE typically ignore elements they don't recognize by default, so the shims make the styles hold to the replicated semantic elements, and blocking the page ensures that happens in a timely manner.

Shimming, which is also referred to as a "shiv", is when JavaScript recreates an HTML5 element that doesn't exist natively in the browser. The elements are thus "shimmed" in for use in styling. The browser will often ignore elements that don't exist natively otherwise.

Say, for example, the browser used to render the page did not support the new HTML5 section element tag. If the page wasn't shimmed to accommodate this before the render tree was constructed, you would run the risk of the CSS not working on those section elements. Looking at the reference chart on http://caniuse.com, this is a real possibility for anyone using IE 8 or earlier.
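To make the mechanics concrete, here is a rough sketch of the kind of work a shim does for old IE. This illustrates the technique only; it is not Modernizr's actual internals:

<script>
  // Create stand-ins for the HTML5 elements so that old IE,
  // which ignores unknown elements, will let CSS select them.
  var html5Elements = ['header', 'nav', 'section', 'article', 'aside', 'footer'];
  for (var i = 0; i < html5Elements.length; i++) {
    document.createElement(html5Elements[i]);
  }
</script>
<style>
  /* The new elements also need to be told to render as blocks. */
  header, nav, section, article, aside, footer { display: block; }
</style>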
Now that we've adequately covered how to load Modernizr in the page header, we can move back to the HTML.

Adding the navigation

Now that we have verified that all of the JavaScript is connected, we can start adding more visual HTML elements. I'm going to add five sections to the page and a fixed navigation header to scroll to each of them. Once that is all in place and working, we'll disable the default HTML actions in the navigation and control everything with JavaScript. By doing this, there will be a nice graceful fallback for the two people on the planet that have JavaScript disabled. Just kidding, maybe it's only one person. All joking aside, a no-JavaScript fallback will be in place in the event that it is disabled on the page. If everything checks out as it should, you'll see the expected confirmation printed in the JavaScript console in developer tools.

While we're at it, let's remove the h1 tag as well. Since we now know for a fact that Modernizr is great, we don't need to "hello world" it. Once the h1 tag is removed, it's time for a bit of navigation. The HTML used is as follows:

<!-- Placing everything in the <header> html5 tag. -->
<header>
  <div id="navbar">
    <div id="nav">
      <!-- Wrap the navigation in the new html5 nav element -->
      <nav>
        <a href="#frame-1">Section One</a>
        <a href="#frame-2">Section Two</a>
        <a href="#frame-3">Section Three</a>
        <a href="#frame-4">Section Four</a>
        <a href="#frame-5">Section Five</a>
      </nav>
    </div>
  </div>
</header>

This is a fairly straightforward navigation at the moment. The entire fragment is placed inside the HTML5 header element of the page. A div tag with the id of navbar will be used for targeting. I prefer to use HTML5 purely for semantic markup of the page as much as possible, and to use div tags to target with styles. You could just as easily add CSS selectors to the new elements, and they would be picked up as if they were any other inline or block element.

The section frames

After the nav element we'll add the page section frames. Each frame will be a div element, and each div element will have an id matching the href attribute of the corresponding anchor in the navigation. For example, the first frame will have the id of frame-1, which matches the href attribute of the first anchor tag in the navigation. Everything will also be wrapped in a div tag with the id of main. Each panel or section will have the class name of frame, which allows us to apply common styles across sections, as shown in the following code snippet:

<div id="main">
  <div id="frame-1" class="frame"></div>
  <div id="frame-2" class="frame"></div>
  <div id="frame-3" class="frame"></div>
  <div id="frame-4" class="frame"></div>
  <div id="frame-5" class="frame"></div>
</div>

Summary

In this article we saw the basics of blocking versus non-blocking scripts. We saw how the async attribute allows the browser to download scripts in parallel, and why Modernizr should block: so that feature tests and shimming happen before the page renders. We then added the navigation and the section frames to the page.

Resources for Article:

Further resources on this subject:
HTML5: Generic Containers [Article]
HTML5 Games Development: Using Local Storage to Store Game Data [Article]
Building HTML5 Pages from Scratch [Article]

Meteor.js JavaScript Framework: Why Meteor Rocks!

Packt
08 Feb 2013
18 min read
(For more resources related to this topic, see here.)

Modern web applications

Our world is changing. With continual advancements in display, computing, and storage capacities, what wasn't possible just a few years ago is now not only possible, but critical to the success of a good application. The Web in particular has undergone significant change.

The origin of the web app (client/server)

From the beginning, web servers and clients have mimicked the dumb terminal approach to computing, where a server with significantly more processing power than a client will perform operations on data (writing records to a database, math calculations, text searches, and so on), transform the data into a readable format (turn a database record into HTML, and so on), and then serve the result to the client, where it's displayed for the user. In other words, the server does all the work, and the client acts as more of a display, or dumb terminal. The design pattern for this is called...wait for it...the client/server design pattern.

This design pattern, borrowed from the dumb terminals and mainframes of the 60s and 70s, was the beginning of the Web as we know it, and has continued to be the design pattern we think of when we think of the Internet.

The rise of the machines (MVC)

Before the Web (and ever since), desktops were able to run a program such as a spreadsheet or a word processor without needing to talk to a server. This type of application could do everything it needed to, right there on the big and beefy desktop machine. During the early 90s, desktop computers got faster and better. Even more and more beefy. At the same time, the Web was coming alive. People started having the idea that a hybrid between the beefy desktop application (a fat app) and the connected client/server application (a thin app) would produce the best of both worlds. This kind of hybrid app (quite the opposite of a dumb terminal) was called a smart app.

There were many business-oriented smart apps created, but the easiest examples are found in computer games. Massively Multiplayer Online games (MMOs), first-person shooters, and real-time strategies are smart apps where information (the data model) is passed between machines through a server. The client in this case does a lot more than just display the information. It performs most of the processing (the control) and transforms the data into something to be displayed (the view).

This design pattern is simple, but very effective. It's called the Model View Controller (MVC) pattern. The model is all the data. In the context of a smart app, the model is provided by a server. The client makes requests to the server for the model. Once the client gets the model, it performs actions/logic on that data, and then prepares it to be displayed on the screen. This part of the application (talk to the server, modify the data model, and prep data for display) is called the controller. The controller sends commands to the view, which displays the information and reports back to the controller when something happens on the screen (a button click, for example). The controller receives that feedback, performs logic, and updates the model. Lather, rinse, repeat.

Because web browsers were built to be "dumb clients", the idea of using a browser as a smart app was out of the question. Instead, smart apps were built on frameworks such as Microsoft .NET, Java, or Macromedia (now Adobe) Flash. As long as you had the framework installed, you could visit a web page to download/run a smart app.
Sometimes you could run the app inside the browser, sometimes you could download it first, but either way, you were running a new type of web app, where the application could talk to the server and share the processing workload.

The browser grows up (MVVM)

Beginning in the early 2000s, a new twist on the MVC pattern started to emerge. Developers started to realize that, for connected/enterprise "smart apps", there was actually a nested MVC pattern. The server (controller) was performing business logic on the database information (model) through the use of business objects, and then passing that information on to a client application (a "view"). The client was receiving this information from the server, and treating it as its own personal "model". The client would then act as a proper controller, perform logic, and send the information to the view to be displayed on the screen.

So, the "view" for the server MVC was the "model" for the second MVC. Then came the thought, "why stop at two?" There was no reason an application couldn't have multiple nested MVCs, with each view becoming the model for the next MVC. In fact, on the client side, there's actually a good reason to do so. Separating actual display logic (such as "this submit button goes here" and "the text area changed value") from the client-side object logic (such as "the user can submit this record" and "the phone # has changed") allows a large majority of the code to be reused. The object logic can be ported to another application, and all you have to do is change out the display logic to extend the same model and controller code to a different application or device.

From 2004-2005, this idea was refined and modified for smart apps by Martin Fowler (who called it the presentation model) and by Microsoft (who called it the Model View View-Model, or MVVM). While not strictly the same thing as a nested MVC, the MVVM design pattern applied the concept of a nested MVC to the frontend application. As browser technologies (HTML and JavaScript) matured, it became possible to create smart apps that use the MVVM design pattern directly inside an HTML web page. This pattern makes it possible to run a full-sized application directly from a browser. No more downloading multiple frameworks or separate apps. You can now get the same functionality from visiting a URL as you previously could from buying a packaged product.

A giant Meteor appears!

Meteor takes the MVVM pattern to the next level. By applying templating through handlebars.js (or other template libraries) and using instant updates, it truly enables a web application to act and perform like a complete, robust smart application. Let's walk through some concepts of how Meteor does this, and then we'll begin to apply this to our Lending Library application.

Cached and synchronized data (the model)

Meteor supports a cached-and-synchronized data model that is the same on the client and the server. When the client notices a change to the data model, it first caches the change locally, and then tries to sync with the server. At the same time, it is listening to changes coming from the server. This allows the client to have a local copy of the data model, so it can send the results of any changes to the screen quickly, without having to wait for the server to respond.

In addition, you'll notice that this is the beginning of the MVVM design pattern, within a nested MVC. In other words, the server publishes data changes, and treats those data changes as the "view" in its own MVC pattern. The client subscribes to those changes, and treats the changes as the "model" in its MVVM pattern. A code example of this is very simple inside of Meteor (although you can make it more complex and therefore more controlled if you'd like):

var lists = new Meteor.Collection("lists");

What this one line does is declare that there is a lists data model. Both the client and server will have a version of it, but they treat their versions differently. The client will subscribe to changes announced by the server, and update its model accordingly. The server will publish changes, listen to change requests from the client, and update its model (its master copy) based on those change requests. Wow. One line of code that does all that! Of course there is more to it, but that's beyond the scope of this article, so we'll move on. To better understand Meteor data synchronization, see the Publish and subscribe section of the Meteor documentation at http://docs.meteor.com/#publishandsubscribe.
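If you do want the more controlled version hinted at above, the pattern looks roughly like the following sketch. The record set name and the optional filter are our own inventions; with Meteor's default autopublish package enabled, this wiring is done for you:

// On the server: publish a record set backed by the lists collection.
Meteor.publish("lists", function () {
  return lists.find(); // could filter here, e.g. lists.find({owner: this.userId})
});

// On the client: subscribe to that record set by name.
Meteor.subscribe("lists");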
The client subscribes to those changes, and treats the changes as the "model" in its MVVM pattern. A code example of this is very simple inside of Meteor (although you can make it more complex, and therefore more controlled, if you'd like):

var lists = new Meteor.Collection("lists");

What this one line does is declare that there is a lists data model. Both the client and server will have a version of it, but they treat their versions differently. The client will subscribe to changes announced by the server, and update its model accordingly. The server will publish changes, listen to change requests from the client, and update its model (its master copy) based on those change requests. Wow. One line of code that does all that! Of course there is more to it, but that's beyond the scope of this article, so we'll move on.

To better understand Meteor data synchronization, see the Publish and subscribe section of the Meteor documentation at http://docs.meteor.com/#publishandsubscribe.
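To make the division of labor concrete, here is a minimal sketch of what the conversation looks like when you manage it explicitly. This is an illustration rather than code our Lending Library needs yet; a fresh Meteor project includes the autopublish package, which is why the one-line collection declaration above works without any of this:

var lists = new Meteor.Collection("lists");

if (Meteor.isServer) {
  // the server publishes a cursor of lists records;
  // this published data is the "view" of the server's own MVC
  Meteor.publish("lists", function () {
    return lists.find();
  });
}

if (Meteor.isClient) {
  // the client subscribes, treating the published records
  // as the "model" of its MVVM pattern
  Meteor.subscribe("lists");
}

Both Meteor.publish and Meteor.subscribe are described in the documentation linked above; the publication name "lists" is our own choice here and simply has to match on both sides.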
Templated HTML (the view)

The Meteor client renders HTML through the use of templates. Templates in HTML are also called view data bindings. Without getting too deep, a view data binding is a shared piece of data that will be displayed differently if the data changes. The HTML code has a placeholder. In that placeholder, different HTML code will be placed, depending on the value of a variable. If the value of that variable changes, the code in the placeholder will change with it, creating a different view.

Let's look at a very simple data binding – one that you don't technically need Meteor for – to illustrate the point. In LendLib.html, you will see an HTML (Handlebars) template expression:

<div id="categories-container">
  {{> categories}}
</div>

That expression is a placeholder for an HTML template, found just below it:

<template name="categories">
  <h2 class="title">my stuff</h2>...

So, {{> categories}} is basically saying "put whatever is in the template categories right here." And the HTML template with the matching name is providing that. If you want to see how data changes will change the display, change the h2 tag to an h4 tag, and save the change:

<template name="categories">
  <h4 class="title">my stuff</h4>...

You'll see the effect in your browser ("my stuff" becomes itsy bitsy). That's a template – or view data binding – at work! Change the h4 back to an h2 and save the change. Unless you like the change. No judgment here...okay, maybe a little bit of judgment. It's ugly, and tiny, and hard to read. Seriously, you should change it back before someone sees it and makes fun of you!

Alright, now that we know what a view data binding is, let's see how Meteor uses them. Inside the categories template in LendLib.html, you'll find even more Handlebars templates:

<template name="categories">
  <h4 class="title">my stuff</h4>
  <div id="categories" class="btn-group">
    {{#each lists}}
      <div class="category btn btn-inverse">
        {{Category}}
      </div>
    {{/each}}
  </div>
</template>

The first Handlebars expression is part of a pair, and is a for-each statement. {{#each lists}} tells the interpreter to perform the action below it (in this case, make a new div) for each item in the lists collection. lists is the piece of data. {{#each lists}} is the placeholder. Now, inside the #each lists expression, there is one more Handlebars expression:

{{Category}}

Because this is found inside the #each expression, Category is an implied property of lists. That is to say that {{Category}} is the same as saying this.Category, where this is the current item in the for-each loop. So the placeholder is saying "Add the value of this.Category here."

Now, if we look in LendLib.js, we will see the values behind the templates:

Template.categories.lists = function () {
  return lists.find({}, {sort: {Category: 1}});
};

Here, Meteor is declaring a template variable named lists, found inside a template called categories. That variable happens to be a function. That function is returning all the data in the lists collection, which we defined previously. Remember this line?

var lists = new Meteor.Collection("lists");

That lists collection is returned by the declared Template.categories.lists, so that when there's a change to the lists collection, the variable gets updated, and the template's placeholder is changed as well. Let's see this in action. On your web page pointing to http://localhost:3000, open the browser console and enter the following line:

> lists.insert({Category:"Games"});

This will update the lists data collection (the model). The template will see this change, and update the HTML code/placeholder. The for-each loop will run one additional time, for the new entry in lists, and you'll see the following screen:

In regards to the MVVM pattern, the HTML template code is part of the client's view. Any changes to the data are reflected in the browser automatically.

Meteor's client code (the View-Model)

As discussed in the preceding section, LendLib.js contains the template variables, linking the client's model to the HTML page, which is the client's view. Any logic that happens inside of LendLib.js as a reaction to changes from either the view or the model is part of the View-Model. The View-Model is responsible for tracking changes to the model and presenting those changes in such a way that the view will pick up the changes. It's also responsible for listening to changes coming from the view. By changes, we don't mean a button click or text being entered. Instead, we mean a change to a template value. A declared template is the View-Model, or the model for the view. That means that the client controller has its model (the data from the server) and it knows what to do with that model, and the view has its model (a template) and it knows how to display that model.
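Pulling the pieces together, the client side of this View-Model amounts to very little code. The following sketch is assembled from the snippets above, with nothing new introduced, so you can see the whole chain from collection to template helper in one place:

if (Meteor.isClient) {
  // the client's cached, synchronized copy of the data model
  var lists = new Meteor.Collection("lists");

  // the template helper: hands the lists cursor to the
  // categories template, which renders it via {{#each lists}}
  Template.categories.lists = function () {
    return lists.find({}, {sort: {Category: 1}});
  };
}

When lists.insert({Category:"Games"}) changes the model, the helper re-runs and the template re-renders; that round trip is the reactive glue between the model and the view.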
Let's create some templates

We'll now see a real-life example of the MVVM design pattern, and work on our Lending Library at the same time. Adding categories through the console has been a fun exercise, but it's not a long-term solution. Let's make it so we can do that on the page instead. Open LendLib.html and add a new button just before the {{#each lists}} expression:

<div id="categories" class="btn-group">
  <div class="category btn btn-inverse" id="btnNewCat">+</div>
  {{#each lists}}

This will add a plus button to the page. Now, we'll want to change out that button for a text field if we click on it. So let's build that functionality using the MVVM pattern, and make it based on the value of a variable in the template. Add the following lines of code:

<div id="categories" class="btn-group">
  {{#if new_cat}}
  {{else}}
    <div class="category btn btn-inverse" id="btnNewCat">+</div>
  {{/if}}
  {{#each lists}}

The first line {{#if new_cat}} checks to see if new_cat is true or false. If it's false, the {{else}} section triggers, and it means we haven't yet indicated we want to add a new category, so we should be displaying the button with the plus sign. In this case, since we haven't defined it yet, new_cat will be false, and so the display won't change. Now let's add the HTML code to display, if we want to add a new category:

<div id="categories" class="btn-group">
  {{#if new_cat}}
    <div class="category">
      <input type="text" id="add-category" value="" />
    </div>
  {{else}}
    <div class="category btn btn-inverse" id="btnNewCat">+</div>
  {{/if}}
  {{#each lists}}

Here we've added an input field, which will show up when new_cat is true. The input field won't show up unless it is, so for now it's hidden. So how do we make new_cat equal true?

Save your changes if you haven't already, and open LendLib.js. First, we'll declare a Session variable, just below our lists template declaration:

Template.categories.lists = function () {
  return lists.find({}, {sort: {Category: 1}});
};

// We are declaring the 'adding_category' flag
Session.set('adding_category', false);

Now, we declare the new template variable new_cat, which will be a function returning the value of adding_category:

// We are declaring the 'adding_category' flag
Session.set('adding_category', false);

// This returns true if adding_category has been assigned a value of true
Template.categories.new_cat = function () {
  return Session.equals('adding_category', true);
};

Save these changes, and you'll see that nothing has changed. Ta-daaa! In reality, this is exactly as it should be, because we haven't done anything to change the value of adding_category yet. Let's do that now. First, we'll declare our click event, which will change the value in our Session variable:

Template.categories.new_cat = function () {
  return Session.equals('adding_category', true);
};

Template.categories.events({
  'click #btnNewCat': function (e, t) {
    Session.set('adding_category', true);
    Meteor.flush();
    focusText(t.find("#add-category"));
  }
});

Let's take a look at the following line:

Template.categories.events({

This line is declaring that there will be events found in the categories template. Now let's take a look at the next line:

'click #btnNewCat': function (e, t) {

This line tells us that we're looking for a click event on the HTML element with an id="btnNewCat" (which we already created in LendLib.html).

Session.set('adding_category', true);
Meteor.flush();
focusText(t.find("#add-category"));

We set the Session variable adding_category = true, we flush the DOM (clear up anything wonky), and then we set the focus onto the input box with the expression id="add-category". One last thing to do, and that is to quickly add the helper function focusText(). Just before the closing tag for the if (Meteor.isClient) function, add the following code:

/////Generic Helper Functions/////
//this function puts our cursor where it needs to be.
function focusText(i) {
  i.focus();
  i.select();
};
} //------closing bracket for if(Meteor.isClient){}

Now when you save the changes, and click on the plus (+) button, you'll see the following input box:

Fancy! It's still not useful, but we want to pause for a second and reflect on what just happened. We created a conditional template in the HTML page that will either show an input box or a plus button, depending on the value of a variable. That variable belongs to the View-Model. That is to say that if we change the value of the variable (like we do with the click event), then the view automatically updates. We've just completed an MVVM pattern inside a Meteor application!
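If you want to see the View-Model react without touching the UI at all, you can drive the flag by hand from the browser console. This is a hypothetical experiment, not part of the application's code:

// flip the flag and watch the view update on its own
Session.set('adding_category', true);    // the input box appears
Session.equals('adding_category', true); // -> true
Session.set('adding_category', false);   // the plus button returns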
To really bring this home, let's add a change to the lists collection (also part of the View-Model, remember?) and figure out a way to hide the input field when we're done. First, we need to add a listener for the keyup event. Or to put it another way, we want to listen for when the user types something in the box and hits Enter. When that happens, we want to have a category added, based on what the user typed. First, let's declare the event handler. Just after the click event for #btnNewCat, let's add another event handler:

focusText(t.find("#add-category"));
},
'keyup #add-category': function (e,t){
  if (e.which === 13) {
    var catVal = String(e.target.value || "");
    if (catVal) {
      lists.insert({Category:catVal});
      Session.set('adding_category', false);
    }
  }
}
});

We add a "," at the end of the click function, and then add the keyup event handler.

if (e.which === 13)

This line checks to see if we hit the Enter/Return key.

var catVal = String(e.target.value || "");
if (catVal)

This checks to see if the input field has any value in it.

lists.insert({Category:catVal});

If it does, we want to add an entry to the lists collection.

Session.set('adding_category', false);

Then we want to hide the input box, which we can do by simply modifying the value of adding_category. One more thing to add, and we're all done. If we click away from the input box, we want to hide it, and bring back the plus button. We already know how to do that inside the MVVM pattern by now, so let's add a quick function that changes the value of adding_category. Add one more comma after the keyup event handler, and insert the following event handler:

Session.set('adding_category', false);
    }
  }
},
'focusout #add-category': function(e,t){
  Session.set('adding_category', false);
}
});

Save your changes, and let's see this in action! In your web browser, on http://localhost:3000, click on the plus sign, add the word Clothes, and hit Enter. Your screen should now resemble the following:

Feel free to add more categories if you want. Also, experiment with clicking on the plus button, typing something in, and then clicking away from the input field.

Summary

In this article you've learned about the history of web applications, and seen how we've moved from a traditional client/server model to a full-fledged MVVM design pattern. You've seen how Meteor uses templates and synchronized data to make things very easy to manage, providing a clean separation between our view, our view logic, and our data. Lastly, you've added more to the Lending Library, making a button to add categories, and you've done it all using changes to the View-Model, rather than directly editing the HTML.

Resources for Article :

Further resources on this subject: How to Build a RSS Reader for Windows Phone 7 [Article] Applying Special Effects in 3D Game Development with Microsoft Silverlight 3: Part 2 [Article] Top features of KnockoutJS [Article]

Why CoffeeScript?

Packt
31 Jan 2013
9 min read
(For more resources related to this topic, see here.)

CoffeeScript

CoffeeScript compiles to JavaScript and follows its idioms closely. It's quite possible to rewrite any CoffeeScript code in JavaScript and it won't look drastically different. So why would you want to use CoffeeScript? As an experienced JavaScript programmer, you might think that learning a completely new language is simply not worth the time and effort. But ultimately, code is for programmers. The compiler doesn't care how the code looks or how clear its meaning is; either it will run or it won't. We aim to write expressive code as programmers so that we can read, reference, understand, modify, and rewrite it. If the code is too complex or filled with needless ceremony, it will be harder to understand and maintain. CoffeeScript gives us the means to clarify our ideas and write more readable code.

It's a misconception to think that CoffeeScript is very different from JavaScript. There might be some drastic syntax differences here and there, but in essence, CoffeeScript was designed to polish the rough edges of JavaScript to reveal the beautiful language hidden beneath. It steers programmers towards JavaScript's so-called "good parts" and holds strong opinions of what constitutes good JavaScript. One of the mantras of the CoffeeScript community is: "It's just JavaScript", and I have also found that the best way to truly comprehend the language is to look at how it generates its output, which is actually quite readable and understandable code. Throughout this article, we'll highlight some of the differences between the two languages, often focusing on the things in JavaScript that CoffeeScript tries to improve. In this way, I would not only like to give you an overview of the major features of the language, but also prepare you to be able to debug your CoffeeScript from its generated code once you start using it more often, as well as being able to convert existing JavaScript. Let's start with some of the things CoffeeScript fixes in JavaScript.

CoffeeScript syntax

One of the great things about CoffeeScript is that you tend to write much shorter and more succinct programs than you normally would in JavaScript. Some of this is because of the powerful features added to the language, but it also makes a few tweaks to the general syntax of JavaScript to transform it to something quite elegant. It does away with all the semicolons, braces, and other cruft that usually contributes to a lot of the "line noise" in JavaScript. To illustrate this, let's look at an example. First comes the CoffeeScript, then the JavaScript it generates:

fibonacci = (n) ->
  return 0 if n == 0
  return 1 if n == 1
  (fibonacci n-1) + (fibonacci n-2)

alert fibonacci 10

var fibonacci;

fibonacci = function(n) {
  if (n === 0) {
    return 0;
  }
  if (n === 1) {
    return 1;
  }
  return (fibonacci(n - 1)) + (fibonacci(n - 2));
};

alert(fibonacci(10));

To run the code examples in this article, you can use the great Try CoffeeScript online tool, at http://coffeescript.org. It allows you to type in CoffeeScript code, which will then display the equivalent JavaScript in a side pane. You can also run the code right from the browser (by clicking the Run button in the upper-left corner).

At first, the two languages might appear to be quite drastically different, but hopefully as we go through the differences, you'll see that it's all still JavaScript with some small tweaks and a lot of nice syntactical sugar.
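If you would rather generate and inspect the output locally instead of in the browser tool, the compiler itself is scriptable from JavaScript. The following is a minimal sketch assuming Node.js with the coffee-script npm package installed; the compile function and its bare option (which omits the top-level safety wrapper) come from that package's API:

// compile a CoffeeScript snippet and print the generated JavaScript
var CoffeeScript = require('coffee-script');

var source = "square = (n) -> n * n\nalert square 10";

var js = CoffeeScript.compile(source, {bare: true});

console.log(js);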
Semicolons and braces

As you might have noticed, CoffeeScript does away with all the trailing semicolons at the end of a line. You can still use a semicolon if you want to put two expressions on a single line. It also does away with enclosing braces (also known as curly brackets) for code blocks such as if statements, switch, and the try..catch block.

Whitespace

You might be wondering how the parser figures out where your code blocks start and end. The CoffeeScript compiler does this by using syntactical whitespace. This means that indentation is used for delimited code blocks instead of braces. This is perhaps one of the most controversial features of the language. If you think about it, in almost all languages, programmers tend to already use indentation of code blocks to improve readability, so why not make it part of the syntax? This is not a new concept, and was mostly borrowed from Python. If you have any experience with a significant whitespace language, you will not have any trouble with CoffeeScript indentation. If you don't, it might take some getting used to, but it makes for code that is wonderfully readable and easy to scan, while shaving off quite a few keystrokes. I'm willing to bet that if you do take the time to get over some initial reservations you might have, you might just grow to love block indentation.

Blocks can be indented with tabs or spaces, but be careful about being consistent using one or the other, or CoffeeScript will not be able to parse your code correctly.

Parentheses

You'll see that the clause of the if statement does not need to be enclosed within parentheses. The same goes for the alert function; you'll see that the single string parameter follows the function call without parentheses as well. In CoffeeScript, parentheses are optional in function calls with parameters, clauses for if..else statements, as well as while loops. Although functions with arguments do not need parentheses, it is still a good idea to use them in cases where ambiguity might exist. The CoffeeScript community has come up with a nice idiom: wrapping the whole function call in parentheses. The use of the alert function in CoffeeScript is shown in the following pairs (CoffeeScript first, then the generated JavaScript):

alert square 2 * 2.5 + 1
alert(square(2 * 2.5 + 1));

alert (square 2 * 2.5) + 1
alert((square(2 * 2.5)) + 1);

Functions are first class objects in JavaScript. This means that when you refer to a function without parentheses, it will return the function itself, as a value. Thus, in CoffeeScript you still need to add parentheses when calling a function with no arguments.

By making these few tweaks to the syntax of JavaScript, CoffeeScript arguably already improves the readability and succinctness of your code by a big factor, and also saves you quite a lot of keystrokes. But it has a few other tricks up its sleeve. Most programmers who have written a fair amount of JavaScript would probably agree that one of the phrases that gets typed the most frequently would have to be the function definition function(){}. Functions are really at the heart of JavaScript, yet not without their many warts.

CoffeeScript has great function syntax

The fact that you can treat functions as first class objects as well as being able to create anonymous functions is one of JavaScript's most powerful features. However, the syntax can be very awkward and make the code hard to read (especially if you start nesting functions). But CoffeeScript has a fix for this.
Have a look at the following snippets, CoffeeScript first, then the generated JavaScript:

-> alert 'hi there!'
square = (n) -> n * n

var square;

(function() {
  return alert('hi there!');
});

square = function(n) {
  return n * n;
};

Here, we are creating two anonymous functions; the first just displays a dialog and the second will return the square of its argument. You've probably noticed the funny -> symbol and might have figured out what it does. Yep, that is how you define a function in CoffeeScript. I have come across a couple of different names for the symbol, but the most accepted term seems to be a thin arrow or just an arrow. Notice that the first function definition has no arguments and thus we can drop the parentheses. The second function does have a single argument, which is enclosed in parentheses placed in front of the -> symbol. With what we now know, we can formulate a few simple substitution rules to convert JavaScript function declarations to CoffeeScript. They are as follows:

- Replace the function keyword with ->
- If the function has no arguments, drop the parentheses
- If it has arguments, move the whole argument list with parentheses in front of the -> symbol
- Make sure that the function body is properly indented and then drop the enclosing braces

Return isn't required

You might have noted that in both the functions, we left out the return keyword. By default, CoffeeScript will return the last expression in your function. It will try to do this in all the paths of execution. CoffeeScript will try turning any statement (fragment of code that returns nothing) into an expression that returns a value. CoffeeScript programmers will often refer to this feature of the language by saying that everything is an expression. This means you don't need to type return anymore, but keep in mind that this can, in many cases, alter your code subtly, because of the fact that you will always return something. If you need to return a value from a function before the last statement, you can still use return.

Function arguments

Function arguments can also take an optional default value. In the following code snippet you'll see that the optional value specified is assigned in the body of the generated JavaScript:

square = (n=1) -> alert(n * n)

var square;

square = function(n) {
  if (n == null) {
    n = 1;
  }
  return alert(n * n);
};

In JavaScript, each function has an array-like structure called arguments with an indexed property for each argument that was passed to the function. You can use arguments to pass in a variable number of parameters to a function. Each parameter will be an element in arguments and thus you don't have to refer to parameters by name. Although the arguments object acts somewhat like an array, it is not in fact a "real" array and lacks most of the standard array methods. Often, you'll find that arguments doesn't provide the functionality needed to inspect and manipulate its elements like they are used with an array.

Summary

We saw how it can help you write shorter, cleaner, and more elegant code than you normally would in JavaScript and avoid many of its pitfalls. We came to realize that even though CoffeeScript's syntax seems to be quite different from JavaScript, it actually maps pretty closely to its generated output.

Resources for Article :

Further resources on this subject: ASP.Net Site Performance: Improving JavaScript Loading [Article] Build iPhone, Android and iPad Applications using jQTouch [Article] An Overview of the Node Package Manager [Article]

Getting Started with Modernizr

Packt
31 Jan 2013
4 min read
(For more resources on this subject, see here.)

Detect and design with features, not user agents (browsers)

What if you could build your website based on features instead of for the individual browser idiosyncrasies by manufacturer and version, making your website not just backward compatible but also forward compatible? You could quite potentially build a fully backward and forward compatible experience using a single code base across the entire UA spectrum. What I mean by this is instead of baking in an MSIE 7.0 version, an MSIE 8.0 version, a Firefox version, and so on of your website, and then using JavaScript to listen for, or sniff out, the browser version in play, it would be much simpler to instead build a single version of your website that supports all of the older generation, latest generation, and in many cases even future generation technologies, such as a video API, box-shadow, and first-of-type.

Think of your website as a full-fledged cable television network broadcasting over 130 channels, and your users as customers that sign up for only the most basic package available, of only 15 channels. Any time that they upgrade their cable (browser) package to one offering additional channels (features), they can begin enjoying them immediately, because you have already been broadcasting to each one of those channels the entire time. What happens now is that a proverbial line is drawn in the sand, and the site is built on the assumption that a particular set of features will exist and are thus supported. If not, fallbacks are in place to allow a smooth degradation of the experience as usual, but more importantly the site is built to adopt features that the browser will eventually have.

Modernizr can detect CSS features, such as @font-face, box-shadow, and CSS gradients. It can also detect HTML5 elements, such as canvas, localstorage, and application cache. In all it can detect over 40 features for you, the developer.

Another term commonly used to describe this technique is "progressive enhancement". When the time finally comes that the user decides to upgrade their browser, the new features that the more recent browser version brings with it, for example text-shadow, will automatically be detected and picked up by your website, with no extra work or code from you. Without any additional work on your part, any text that is assigned text-shadow attributes will turn on at the flick of a switch, so that user's experience will smoothly and progressively be enhanced.

What is Modernizr? More importantly, why should you use it?

At its foundation, Modernizr is a feature-detection library powered by none other than JavaScript. Here is an example of conditionally adding CSS classes based on the browser, also known as the User Agent: a set of IE conditional comments that add classes such as lt-ie7, lt-ie8, and lt-ie9 to the html element. When the browser parses the HTML document and finds a match, that class will be conditionally added to the page. Now that the browser version has been found, the developer can use CSS to alter the page based on the version of the browser that is used to parse the page. In the following example, IE 7, IE 8, and IE 9 all use a different method for a drop shadow attribute on an anchor element:

/* IE7 Conditional class using UA sniffing */
.lt-ie7 a{
  display: block;
  float: left;
  background: url( drop-shadow.gif );
}
.lt-ie8 a{
  display: inline-block;
  background: url( drop-shadow.png );
}

.lt-ie9 a{
  display: inline-block;
  box-shadow: 10px 5px 5px rgba(0,0,0,0.5);
}

The problem with the conditional method of applying styles is that not only does it require more code, but it also leaves a burden on the developer to know what browser version is capable of a given feature, in this case box-shadow. Here is the same example using Modernizr. Note how Modernizr has done the heavy lifting for you, irrespective of whether or not the box-shadow feature is supported by the browser:

/* Box shadow using Modernizr CSS feature detected classes */
.box-shadow a{
  box-shadow: 10px 5px 5px rgba(0,0,0,0.5);
}

.no-box-shadow a{
  background: url( drop-shadow.gif );
}

The Modernizr namespace

The Modernizr JavaScript library in your web page's header is a lightweight feature library that will test your browser for the features that it may or may not support, and store those results in a JavaScript namespace aptly named Modernizr. You can then use those test results as you build your website. From this point on, everything you need to know about what features your user's browser can and cannot support can be checked for in two separate places. Going back to the cable television analogy, you now know what channels (features) your user does and does not have, and can act accordingly.
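The second of those two places is script. Alongside the CSS classes, each test result is exposed as a boolean property on the Modernizr object, so the box-shadow fallback from earlier can also be handled in JavaScript. Here is a small sketch; the element ID is hypothetical, and the property name boxshadow matches Modernizr's lowercase naming for this test:

if (Modernizr.boxshadow) {
  // native box-shadow is supported; the CSS rule above applies
} else {
  // fall back to the image-based drop shadow
  var link = document.getElementById('shadowed-link'); // hypothetical element
  link.style.background = 'url(drop-shadow.gif)';
}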

Downloading and setting up Bootstrap

Packt
30 Jan 2013
4 min read
(For more resources related to this topic, see here.)

Getting ready

Twitter Bootstrap is more than a set of code. It is an online community. To get started, you will do well to familiarize yourself with Twitter Bootstrap's home base: http://twitter.github.com/bootstrap/ Here you'll find the following:

The documentation: If this is your first visit, grab a cup of coffee and spend some time perusing the pages, scanning the components, reading the details, and soaking it in. (You'll see this is going to be fun.)

The download button: You can get the latest and greatest versions of Twitter Bootstrap's CSS, JavaScript plugins, and icons, compiled and ready for action, coming to you in a convenient ZIP folder. This is where we'll start.

Downloading the example code

You can download the example code files for all Packt books you have purchased from your account at http://www.PacktPub.com. If you purchased this book elsewhere, you can visit http://www.PacktPub.com/support and register to have the files e-mailed directly to you.

How to do it…

Whatever your experience level, as promised, I'll walk you through all the necessary steps. Here goes!

1. Go to the Bootstrap homepage: http://twitter.github.com/bootstrap/
2. Click on the large Download Bootstrap button.
3. Locate the download file and unzip or extract it. You should get a folder named simply bootstrap. Inside this folder you should find the folders and files shown in the following screenshot:
4. From the homepage, click on the main navigation item: Get started.
5. Scroll down, or use the secondary navigation, to navigate to the heading: Examples. The direct link is: http://twitter.github.com/bootstrap/getting-started.html#examples
6. Right-click and download the leftmost example, labeled Basic Marketing Site. You'll see that it is an HTML file, named hero.html
7. Save (or move) it to your main bootstrap folder, right alongside the folders named css, img, and js.
8. Rename the file index.html (a standard name for what will become our homepage). You should now see something similar to the following screenshot:
9. Next, we need to update the links to the stylesheets. Why? When you downloaded the starter template file, you changed the relationship between the file and its stylesheets. We need to let it know where to find the stylesheets in this new file structure. Open index.html (formerly, hero.html) in your code editor.

Need a code editor? Windows users: You might try Notepad++ (http://notepad-plus-plus.org/download/) Mac users: Consider TextWrangler (http://www.barebones.com/products/textwrangler/)

10. Find these lines near the top of the file (lines 11-18 in version 2.0.2); they are the link tags that pull in Bootstrap's stylesheets.
11. Update the href attributes in both link tags so that they point to your css folder, for example, href="css/bootstrap.css" and href="css/bootstrap-responsive.css" (the standard stylesheet names shipped with Bootstrap 2 are assumed here). Save your changes!
12. You're set to go! Open it up in your browser! (Double-click on index.html.) You should see something like this:

Congratulations! Your first Bootstrap site is underway.

Problems? Don't worry. If your page doesn't look like this yet, let me help you spot the problem. Revisit the steps above and double-check a couple of things:

- Are your folders and files in the right relationship? (see step 3 as detailed previously)
- In your index.html, did you update the href attributes in both stylesheet links? (These should be lines 11 and 18 as of Twitter Bootstrap version 2.1.0.)

There's more…

Of course, this is not the only way you could organize your files. Some developers prefer to place stylesheets, images, and JavaScript files all within a larger folder named assets or library.
The organization method I've presented is recommended by the developers who contribute to the HTML5 Boilerplate. One advantage of this approach is that it reduces the length of the paths to our site assets. Thus, whereas others might have a path to a background image such as this:

url('assets/img/bg.jpg');

In the organization scheme I've recommended, it will be shorter:

url('img/bg.jpg');

This is not a big deal for a single line of code. However, when you consider that there will be many links to stylesheets, JavaScript files, and images running throughout your site files, reducing each path by a few characters can add up. And in a world where speed matters, every bit counts. Shorter paths save characters, reduce file size, and help support faster web browsing.

Summary

This article gave us a quick introduction to Twitter Bootstrap. We've got a fair idea as to how to download and set up our Bootstrap. By following these simple steps, we can easily create our first Bootstrap site.

Resources for Article :

Further resources on this subject: Starting Up Tomcat 6: Part 1 [Article] Build your own Application to access Twitter using Java and NetBeans: Part 2 [Article] Integrating Twitter with Magento [Article]

Layout with Ext.NET

Packt
30 Jan 2013
16 min read
(For more resources related to this topic, see here.)

Border layout

The Border layout is perhaps one of the more popular layouts. While quite complex at first glance, it is popular because it turns out to be quite flexible to design and to use. It offers common elements often seen in complex web applications, such as an area for header content, footer content, a main content area, plus areas to either side. All are separately scrollable and resizable if needed, among other benefits. In Ext speak, these areas are called Regions, and are given names of North, South, Center, East, and West regions. Only the Center region is mandatory. It is also the one without any given dimensions; it will resize to fit the remaining area after all the other regions have been set. A West or East region must have a width defined, and North or South regions must have a height defined. These can be defined using the Width or Height property (in pixels) or using the Flex property which helps provide ratios.

Each region can be any Ext.NET component; a very common option is Panel or a subclass of Panel. There are limits, however: for example, a Window is intended to be floating so cannot be one of the regions. This offers a lot of flexibility and can help avoid nesting too many Panels in order to show other components such as GridPanels or TabPanels, for example. Here is a screenshot showing a simple Border layout being applied to the entire page (that is, the viewport) using a 2-column style layout:

We have configured a Border layout with two regions; a West region and a Center region. The Border layout is applied to the whole page (this is an example of using it with Viewport). Here is the code:

<%@ Page Language="C#" %>
<!DOCTYPE html>
<html>
<head runat="server">
    <title>Border Layout Example</title>
</head>
<body>
    <ext:ResourceManager runat="server" Theme="Gray" />
    <ext:Viewport runat="server" Layout="border">
        <Items>
            <ext:Panel Region="West" Split="true" Title="West" Width="200" Collapsible="true" />
            <ext:Panel Region="Center" Title="Center content" />
        </Items>
    </ext:Viewport>
</body>
</html>

The code has a Viewport configured with a Border layout via the Layout property. Then, into the Items collection two Panels are added, for the West and Center regions.

The value of the Layout property is case insensitive and can take variations, such as Border, border, borderlayout, BorderLayout, and so on.

As regions of a Border layout we can also configure options such as whether you want split bars, whether Panels are collapsible, and more. Our example uses the following:

The West region Panel has been configured to be collapsible (using Collapsible="true"). This creates a small button in the title area which, when clicked, will smoothly animate the collapse of that region (which can then be clicked again to open it). When collapsed, the title area itself can also be clicked, which will float the region into view, rather than permanently opening it (allowing the user to glimpse at the content and mouse away to close the region). This floating capability can be turned off by using Floatable="false" on the Panel.

Split="true" gives a split bar with a collapse button between the regions.
This next example shows a more complex Border layout where all regions are used:

The markup used for the previous is very similar to the first example, so we will only show the Viewport portion:

<ext:Viewport runat="server" Layout="border">
    <Items>
        <ext:Panel Region="North" Split="true" Title="North" Height="75" Collapsible="true" />
        <ext:Panel Region="West" Split="true" Title="West" Width="150" Collapsible="true" />
        <ext:Panel runat="server" Region="Center" Title="Center content" />
        <ext:Panel Region="East" Split="true" Title="East" Width="150" Collapsible="true" />
        <ext:Panel Region="South" Split="true" Title="South" Height="75" Collapsible="true" />
    </Items>
</ext:Viewport>

Although each Panel has a title set via the Title property, it is optional. For example, you may want to omit the title from the North region if you want an application header or banner bar, where the title bar could be superfluous.

Different ways to create the same components

The previous examples were shown using the specific Layout="Border" markup. However, there are a number of ways this can be marked up or written in code. For example:

- You can code these entirely in markup as we have seen
- You can create these entirely in code
- You can use a mixture of markup and code to suit your needs

Here are some quick examples:

Border layout from code

This is the code version of the first two-panel Border layout example:

<%@ Page Language="C#" %>

<script runat="server">
    protected void Page_Load(object sender, EventArgs e)
    {
        var viewport = new Viewport
        {
            Layout = "border",
            Items = {
                new Ext.Net.Panel
                {
                    Region = Region.West,
                    Title = "West",
                    Width = 200,
                    Collapsible = true,
                    Split = true
                },
                new Ext.Net.Panel
                {
                    Region = Region.Center,
                    Title = "Center content"
                }
            }
        };

        this.Form.Controls.Add(viewport);
    }
</script>

<!DOCTYPE html>
<html>
<head runat="server">
    <title>Border Layout Example</title>
</head>
<body>
    <form runat="server">
        <ext:ResourceManager runat="server" Theme="Gray" />
    </form>
</body>
</html>

There are a number of things going on here worth mentioning:

- The appropriate panels have been added to the Viewport's Items collection
- Finally, the Viewport is added to the page via the form's Controls collection

If you are used to programming with ASP.NET, you normally add a control to the Controls collection of an ASP.NET control. However, when Ext.NET controls add themselves to each other, it is usually done via the Items collection. This helps create a more optimal initialization script. This also means that only Ext.NET components participate in the layout logic. There is also the Content property in markup (or ContentControls property in code-behind) which can be used to add non-Ext.NET controls or raw HTML, though they will not take part in the layout. It is important to note that configuring Items and Content together should be avoided, especially if a layout is set on the parent container. This is because the parent container will only use the Items collection. Some layouts may hide the Content section altogether or have other undesired results. In general, use only one at a time, not both.

Because the Viewport is the outer-most control, it is added to the Controls collection of the form itself. Another important thing to bear in mind is that the Viewport must be the only top-level visible control. That means it cannot be placed inside a div, for example; it must be added directly to the body or to the <form runat="server"> only. In addition, there should not be any sibling controls (except floating widgets, like Window).
Mixing markup and code

The same 2-panel Border layout can also be mixed in various ways. For example:

<%@ Page Language="C#" %>

<script runat="server">
    protected void Page_Load(object sender, EventArgs e)
    {
        this.WestPanel.Title = "West";
        this.WestPanel.Split = true;
        this.WestPanel.Collapsible = true;

        this.Viewport1.Items.Add(new Ext.Net.Panel
        {
            Region = Region.Center,
            Title = "Center content"
        });
    }
</script>

<!DOCTYPE html>
<html>
<head runat="server">
    <title>Border Layout Example</title>
</head>
<body>
    <ext:ResourceManager runat="server" />
    <ext:Viewport ID="Viewport1" runat="server" Layout="Border">
        <Items>
            <ext:Panel ID="WestPanel" runat="server" Region="West" Width="200" />
        </Items>
    </ext:Viewport>
</body>
</html>

In the previous example, the Viewport and the initial part of the West region have been defined in markup. The Center region Panel has been added via code and the rest of the West Panel's properties have been set in code-behind. As with most ASP.NET controls, you can mix and match these as you need.

Loading layout items via User Controls

A powerful capability that Ext.NET provides is being able to load layout components from User Controls. This is achieved by using the UserControlLoader component. Consider this example:

<ext:Viewport runat="server" Layout="Border">
    <Items>
        <ext:UserControlLoader Path="WestPanel.ascx" />
        <ext:Panel Region="Center" />
    </Items>
</ext:Viewport>

In this code, we have replaced the West region Panel that was used in earlier examples with a UserControlLoader component and set the Path property to load a user control in the same directory as this page. That user control is very simple for our example:

<%@ Control Language="C#" %>
<ext:Panel runat="server" Region="West" Split="true" Title="West" Width="200" Collapsible="true" />

In other words, we have simply moved our Panel from our earlier example into a user control and loaded that instead. Though a small example, this demonstrates some useful reuse capability. Also note that although we used the UserControlLoader in this Border layout example, it can be used anywhere else as needed, as it is an Ext.NET component.

The containing component does not have to be a Viewport

Note also that the containing component does not have to be a Viewport. It can be any other appropriate container, such as another Panel or a Window. Let's do just that:

<ext:Window runat="server" Layout="Border" Height="200" Width="400" Border="false">
    <Items>
        <ext:Panel Region="West" Split="true" Title="West" Width="150" Collapsible="true" />
        <ext:Panel Region="Center" Title="Center content" />
    </Items>
</ext:Window>

The container has changed from a Viewport to a Window (with dimensions). It will produce this:

More than one item with the same region

In previous versions of Ext JS and Ext.NET you could only have one component in a given region, for example, only one North region Panel, one West region Panel, and so on. New to Ext.NET 2 is the ability to have more than one item in the same region. This can be very flexible and improve performance slightly. This is because in the past, if you wanted the appearance of, say, multiple West columns, you would need to create nested Border layouts (which is still an option of course). But now, you can simply add two components to a Border layout and give them the same region value. Nested Border layouts are still possible in case the flexibility is needed (and helps make porting from an earlier version easier).
First, here is an example using nested Border layouts to achieve three vertical columns:

<ext:Window runat="server" Layout="Border" Height="200" Width="400" Border="false">
    <Items>
        <ext:Panel Region="West" Split="true" Title="West" Width="100" Collapsible="true" />
        <ext:Panel Region="Center" Layout="Border" Border="false">
            <Items>
                <ext:Panel Region="West" Split="true" Title="Inner West" Width="100" Collapsible="true" />
                <ext:Panel Region="Center" Title="Inner Center" />
            </Items>
        </ext:Panel>
    </Items>
</ext:Window>

This code will produce the following output:

The previous code is only a slight variation of the example preceding it, but has a few notable changes:

- The Center region Panel has itself been given the layout as Border. This means that although this is a Center region for the window that it is a part of, this Panel is itself another Border layout.
- The nested Border layout then has two further Panels, an additional West region and an additional Center region.
- Note, the Title has also been removed from the outer Center region so that when they are rendered, they line up to look like three Panels next to each other.

Here is the same example, but without using a nested border Panel and instead, just adding another West region Panel to the containing Window:

<ext:Window runat="server" Layout="Border" Height="200" Width="400" Border="false">
    <Items>
        <ext:Panel Region="West" Split="true" Title="West" Width="100" Collapsible="true" />
        <ext:Panel Region="West" Split="true" Title="Inner West" Width="100" Collapsible="true" />
        <ext:Panel Region="Center" Title="Center content" Border="false" />
    </Items>
</ext:Window>

Regions are not limited to Panels only

A common problem with layouts is to start off creating more deeply nested controls than needed, and the example earlier shows that it is not always needed. Multiple items with the same region help to prevent nesting Border layouts unnecessarily. Another inefficiency typical with Border layout usage is using too many containing Panels in each region. For example, there may be a Center region Panel which then contains a TabPanel. However, as TabPanel is a subclass of Panel it can be given a region directly, therefore avoiding an unnecessary Panel to contain the TabPanel:

<ext:Window runat="server" Layout="Border" Height="200" Width="400" Border="false">
    <Items>
        <ext:Panel Region="West" Split="true" Title="West" Width="100" Collapsible="true" />
        <ext:TabPanel Region="Center">
            <Items>
                <ext:Panel Title="First Tab" />
                <ext:Panel Title="Second Tab" />
            </Items>
        </ext:TabPanel>
    </Items>
</ext:Window>

This code will produce the following output:

The differences with the nested Border layout example shown earlier are:

- The outer Center region has been changed from Panel to TabPanel.
- TabPanels manage their own items' layout, so Layout="Border" is removed.
- The TabPanel also has Border="false" taken out (so it is true by default).
- The inner Panels have had their regions, Split, and other border related attributes taken out. This is because they are not inside a nested Border layout now; they are tabs.

Other Panels, such as TreePanel or GridPanel, can also be used as we will see. Something that can be fiddly from time to time is knowing which borders to take off and which ones to keep when you have nested layouts and controls like this. There is a logic to it, but sometimes a quick bit of trial and error can also help figure it out!
As a programmer this sounds minor and unimportant, but usually you want to prevent the borders becoming too thick, as aesthetically it can be off-putting, whereas just the right amount of borders can help make the application look clean and professional. You can always give components a class via the Cls property and then in CSS you can fine-tune the borders (and other styles of course) as you need.

Weighted regions

Another feature new to Ext.NET 2 is that regions can be given weighting to influence how they are rendered and spaced out. Prior versions would require nested Border layouts to achieve this. To see how this works, consider this example, which puts a South region inside the Center Panel only:

To achieve this output, if we used the old way, the nested Border layouts, we would do something like this:

<ext:Window runat="server" Layout="Border" Height="200" Width="400" Border="false">
    <Items>
        <ext:Panel Region="West" Split="true" Title="West" Width="100" Collapsible="true" />
        <ext:Panel Region="Center" Layout="Border" Border="false">
            <Items>
                <ext:Panel Region="Center" Title="Center" />
                <ext:Panel Region="South" Split="true" Title="South" Height="100" Collapsible="true" />
            </Items>
        </ext:Panel>
    </Items>
</ext:Window>

In the preceding code, we make the Center region itself be a Border layout with an inner Center region and a South region. This way the outer West region takes up all the space on the left. If the South region was part of the outer Border layout, then it would span across the entire bottom area of the window. But the same effect can be achieved using weighting. This means you do not need nested Border layouts; the three Panels can all be items of the containing window, which means a few less objects being created on the client:

<ext:Window runat="server" Layout="Border" Height="200" Width="400" Border="false">
    <Items>
        <ext:Panel Region="West" Split="true" Title="West" Width="100" Collapsible="true" Weight="10" />
        <ext:Panel Region="Center" Title="Center" />
        <ext:Panel Region="South" Split="true" Title="South" Height="100" Collapsible="true" />
    </Items>
</ext:Window>

The way region weights work is that the region with the highest weight is assigned space from the border before other regions. If more than one region has the same weight as another, they are assigned space based on their position in the owner's Items collection (that is, first come, first served). In the preceding code, we set the Weight property to 10 on the West region only, so it is rendered first and thus takes up all the space it can before the other two are rendered. This allows for many flexible options, and Ext.NET has an example where you can configure different values to see the effects of different weights: http://examples.ext.net/#/Layout/BorderLayout/Regions_Weights/

As the previous examples show, there are many ways to define a layout, offering flexibility especially when generating layouts dynamically from code-behind. With that grounding, we can now speed up our look at many other types of layouts.

Summary

This article covered one of the numerous layout options available in Ext.NET, that is, the Border layout, to help you organize your web applications.

Resources for Article :

Further resources on this subject: Your First ASP.NET MVC Application [Article] Customizing and Extending the ASP.NET MVC Framework [Article] Tips & Tricks for Ext JS 3.x [Article]