How-To Tutorials - Web Development

Architecture of Backbone

Packt · 04 Nov 2015 · 18 min read
In this article by Abiee Echamea, author of the book Mastering Backbone.js, you will see that one of the best things about Backbone is the freedom to build applications with the libraries of your choice, no batteries included. Backbone is not a framework but a library. Building applications with it can be challenging, as no structure is provided. The developer is responsible for code organization and for wiring the pieces of code across the application; it's a big responsibility. Bad decisions about code organization can lead to buggy and unmaintainable applications that nobody wants to see.

In this article, you will learn the following topics:

- Delegating the right responsibilities to Backbone objects
- Splitting the application into small and maintainable scripts

The big picture

We can split the application into two big logical parts. The first is an infrastructure part, or root application, which is responsible for providing common components and utilities to the whole system. It has handlers to show error messages, activate menu items, manage breadcrumbs, and so on. It also owns common views such as dialog layouts or the loading progress bar.

A root application is the main entry point to the system. It bootstraps the common objects, sets the global configuration, instantiates routers, attaches general services to a global application, renders the main application layout at the body element, sets up third-party plugins, starts the Backbone history, and instantiates, renders, and initializes components such as a header or breadcrumb. However, the root application itself does nothing; it is just the infrastructure that provides services to the other parts, which we can call subapplications or modules.

Subapplications are small applications that run business-value code. This is where the real work happens. Subapplications are focused on a specific domain area, for example, invoices, mailboxes, or chats, and should be decoupled from the other applications. Each subapplication has its own router, entities, and views. To decouple subapplications from the root application, communication is made through a message bus implemented with Backbone.Events or the Backbone.Radio plugin, such that services are requested from the application by triggering events instead of calling methods on an object.

Figure 1.1 shows a component diagram of the application. As you can see, the root application depends on the routers of the subapplications, because Backbone.history requires all the routers to be instantiated before its start method is called, and the root application does this. Once Backbone.history is started, the browser's URL is processed and a route handler in a subapplication is triggered; this is the entry point for subapplications. Additionally, a default route can be defined in the root application for any route that is not handled by the subapplications.

Figure 1.1: Logical organization of a Backbone application

When you build Backbone applications in this way, you know exactly which object has which responsibility, so debugging and improving the application is easier. Remember, divide and conquer. By doing this, you also make your code more testable, improving its robustness.
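The snippets later in this article assume a global App object acting as this message bus. As a minimal sketch of the idea (the real root application carries far more infrastructure than this):

```js
// A bare-bones application-wide message bus built on Backbone.Events.
var App = _.extend({}, Backbone.Events);

// The root application owns the handler for a common service...
App.on('server:error', function(response) {
  // ...such as rendering a shared error dialog.
  console.error('Server error:', response.status);
});

// A subapplication requests the service by triggering an event,
// never by calling a root-application method directly.
App.trigger('server:error', {status: 500});
```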
Responsibilities of the Backbone objects

One of the biggest issues with the Backbone documentation is that it gives no clues about how to use its objects. Developers have to figure out the responsibilities of each object across the application, and unless you already have some experience working with Backbone, this is not an easy task. The next sections describe the best uses for each Backbone object. In this way, you will have a clearer idea about the scope of Backbone's responsibilities, and this will be the starting point for designing our application architecture. Keep in mind that Backbone is a library of foundation objects, so you will need to bring your own objects and structure to make an awesome Backbone application.

Models

This is the place where general business logic lives; specific business logic should be placed elsewhere. General business logic consists of rules that are so general that they can be used in multiple use cases, while specific business logic is a use case itself. Let's imagine a shopping cart. A model can be an item in the cart. The logic behind this model can include calculating the total by multiplying the unit price by the quantity, or setting a new quantity. In this scenario, assume that the shop has a business rule that a customer can buy the same product only three times. This is a specific business rule because it is specific to this business; how many stores do you know with this rule? Business rules like these belong elsewhere and should be kept out of models.

It's also a good idea to validate the model data before sending requests to the server. Backbone helps us with the validate method here, so it's reasonable to put validation logic in models too. Models often synchronize their data with the server, so direct calls to the server, such as AJAX calls, should be encapsulated at the model level. Models are the most basic pieces of information and logic; keep this in mind.

Collections

Consider collections as data repositories, similar to a database. Collections are often used to fetch data from the server and render their contents as lists or tables. It's not usual to see business logic here. Resource servers have different ways of dealing with lists of resources. For instance, while some servers accept a skip parameter for pagination, others have a page parameter for the same purpose. Another case is responses: one server may respond with a plain array, while another prefers sending an object with a data, list, or some other key under which an array of objects is placed. There is no standard way. Collections can deal with these issues, making server requests transparent to the rest of the application.

Views

Views have the responsibility of handling the Document Object Model (DOM). Views work closely with template engines, rendering the templates and putting the results in the DOM. Views listen for low-level events using the jQuery API and transform them into domain events. Views abstract the user interactions, transforming the user's actions into data structures for the application. For example, clicking a save button in a form view will create a plain object with the information from the inputs and trigger a domain event such as save:contact with this object attached. Then a domain-specific object can apply domain logic to the data and show a result. Business logic in views should be avoided; basic form validations are allowed, such as accepting only numbers, but complex validations should be done on the model.
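The form-view pattern just described can be sketched in a few lines. The ContactFormView name, selectors, and fields here are illustrative, not from the book:

```js
// Low-level DOM events go in; domain events come out.
var ContactFormView = Backbone.View.extend({
  events: {
    'click .save': 'onSaveClicked' // jQuery-level event
  },

  onSaveClicked: function(event) {
    event.preventDefault();
    // Collect the raw inputs into a plain data structure...
    var data = {
      name: this.$('#name').val(),
      email: this.$('#email').val()
    };
    // ...and re-emit it as a domain event for a mediator to handle.
    this.trigger('save:contact', data);
  }
});
```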
Routers

Routers have a simple responsibility: listening for URL changes in the browser and transforming them into a call to a handler. A router knows which handler to call for a given URL, and it also decodes the URL parameters and passes them to the handlers. The root application bootstraps the infrastructure, but routers decide which subapplication will be executed. In this way, routers are a kind of entry point.

Domain objects

It is possible to develop Backbone applications using only the Backbone objects described in the previous section, but for a medium-to-large application, this is not sufficient. We need to introduce a new kind of object with well-delimited responsibilities that uses and coordinates the Backbone foundation objects.

Subapplication facade

This object is the public interface of a subapplication. Any interaction with the subapplication should be done through its methods. Direct calls to internal objects of the subapplication are discouraged. Typically, methods on this controller are called from the router, but they can be called from anywhere. The main responsibility of this object is to simplify the subapplication internals, so its work is to fetch the data from the server through models or collections and, in case an error occurs during the process, to show an error message to the user. Once the data is loaded in a model or collection, it creates a subapplication controller that knows which views should be rendered and has the handlers to deal with their events. In short, the subapplication facade transforms the URL request into a Backbone data object, shows the right error messages, creates a subapplication controller, and delegates control to it.

The subapplication controller or mediator

This object acts as an air traffic controller for the views, models, and collections. Given a Backbone data object, it will instantiate and render the appropriate views and then coordinate them. However, the coordination task is not easy in complex layouts. For loose-coupling reasons, a view cannot call the methods or events of other views directly. Instead, a view triggers an event, and the controller handles the event and orchestrates the views' behavior, if necessary. Note how the views are isolated, handling just their own portion of the DOM and triggering events when they need to communicate something. Business logic for simple use cases can be implemented here, but for more complex interactions, another strategy is needed. This object implements the mediator pattern, allowing other basic objects such as views and models to stay simple and loosely coupled.

The logic workflow

The application starts by bootstrapping the common components, then initializes all the routers available for the subapplications and starts Backbone.history (see Figure 1.2). After initialization, the URL in the browser will trigger a route for a subapplication, and the route handler instantiates a subapplication facade object and calls the method that knows how to handle the request. The facade creates a Backbone data object, such as a collection, and fetches the data from the server by calling its fetch method. If an error occurs while fetching the data, the subapplication facade asks the root application to show the error, for example, a 500 Internal Server Error.
Figure 1.2: Abstract architecture for subapplications

Once the data is in a model or collection, the subapplication facade instantiates the subapplication object that knows the business rules for the use case and passes the model or collection to it. That object then renders one or more views with the information from the model or collection and places the results in the DOM. The views listen for DOM events, for example, click, and transform them into higher-level events to be consumed by the application object. The subapplication object listens for events on models and views and coordinates them when an event is triggered. When the business rules are not too complex, they can be implemented on this application object, such as deleting a model. Models and views can be kept in sync with Backbone events, or with a library for bindings such as Backbone.Stickit.

In the next section, we will describe this process step by step, with code examples for a better understanding of the concepts explained.

Route handling

The entry point for a subapplication is given by its routes, which ideally share the same namespace. For instance, a contacts subapplication can have these routes:

- contacts: Lists all the available contacts
- contacts/page/:page: Paginates the contacts collection
- contacts/new: Shows a form to create a new contact
- contacts/view/:id: Shows a contact given its ID
- contacts/edit/:id: Shows a form to edit a contact

Note how all the routes start with the contacts prefix. It's a good practice to use the same prefix for all the subapplication routes. In this way, the user will know where he/she is in the application, and you will have a clean separation of responsibilities. Use the same prefix for all URLs in one subapplication; avoid mixing in routes from other subapplications.

When the user points the browser to one of these routes, a route handler is triggered. The handler function parses the URL request and delegates the request to the subapplication object, as follows:

```js
var ContactsRouter = Backbone.Router.extend({
  routes: {
    "contacts": "showContactList",
    "contacts/page/:page": "showContactList",
    "contacts/new": "createContact",
    "contacts/view/:id": "showContact",
    "contacts/edit/:id": "editContact"
  },

  showContactList: function(page) {
    page = page || 1;
    page = page > 0 ? page : 1;
    var region = new Region({el: '#main'});
    var app = new ContactsApp({region: region});
    app.showContactList(page);
  },

  createContact: function() {
    var region = new Region({el: '#main'});
    var app = new ContactsApp({region: region});
    app.showNewContactForm();
  },

  showContact: function(contactId) {
    var region = new Region({el: '#main'});
    var app = new ContactsApp({region: region});
    app.showContactById(contactId);
  },

  editContact: function(contactId) {
    var region = new Region({el: '#main'});
    var app = new ContactsApp({region: region});
    app.showContactEditorById(contactId);
  }
});
```

Validation of the URL parameters should be done in the router, as shown in the showContactList method. Once the validation is done, ContactsRouter instantiates an application object, ContactsApp, which is a facade for the Contacts subapplication; finally, ContactsRouter calls an API method to handle the user request. The router doesn't know anything about business logic; it just knows how to decode URL requests and which object to call in order to handle each request. Here, the region object points to an existing DOM node and is passed to the application to tell it where the application should be rendered.
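The router relies on a Region object that this excerpt never defines. Based purely on how it is used here (a constructor taking an el and a show method accepting a view), a minimal stand-in could look like the following; the real implementation in the book may differ:

```js
// Minimal Region: owns a DOM node and swaps Backbone views in and out.
var Region = function(options) {
  this.el = options.el;
};

Region.prototype.show = function(view) {
  // Remove the previous view first so its DOM event handlers are released.
  if (this.currentView) {
    this.currentView.remove();
  }
  this.currentView = view;
  view.render();
  Backbone.$(this.el).html(view.el);
};
```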
The subapplication facade

A subapplication is composed of smaller pieces that handle specific use cases. In the case of the contacts app, a use case can be seeing a contact, creating a new contact, or editing a contact. The implementation of these use cases is separated into different objects that handle the views, events, and business logic for a specific use case. The facade basically fetches the data from the server, handles connection errors, and creates the objects needed for the use case, as shown here (note that Backbone's fetch expects the error callback to be named error; the original listing used fail, which would never fire):

```js
function ContactsApp(options) {
  this.region = options.region;

  this.showContactList = function(page) {
    App.trigger("loading:start");
    new ContactCollection().fetch({
      success: _.bind(function(collection, response, options) {
        this._showList(collection);
        App.trigger("loading:stop");
      }, this),
      error: function(collection, response, options) {
        App.trigger("loading:stop");
        App.trigger("server:error", response);
      }
    });
  };

  this._showList = function(contacts) {
    var contactList = new ContactList({region: this.region});
    contactList.showList(contacts);
  };

  this.showNewContactForm = function() {
    this._showEditor(new Contact());
  };

  this.showContactEditorById = function(contactId) {
    new Contact({id: contactId}).fetch({
      success: _.bind(function(model, response, options) {
        this._showEditor(model);
        App.trigger("loading:stop");
      }, this),
      error: function(model, response, options) {
        App.trigger("loading:stop");
        App.trigger("server:error", response);
      }
    });
  };

  this._showEditor = function(contact) {
    var contactEditor = new ContactEditor({region: this.region});
    contactEditor.showEditor(contact);
  };

  this.showContactById = function(contactId) {
    new Contact({id: contactId}).fetch({
      success: _.bind(function(model, response, options) {
        this._showViewer(model);
        App.trigger("loading:stop");
      }, this),
      error: function(model, response, options) {
        App.trigger("loading:stop");
        App.trigger("server:error", response);
      }
    });
  };

  this._showViewer = function(contact) {
    var contactViewer = new ContactViewer({region: this.region});
    contactViewer.showContact(contact);
  };
}
```

The simplest handler is showNewContactForm, which is called when the user wants to create a new contact. It creates a new Contact object and passes it to the _showEditor method, which will render an editor for a blank Contact. The handler doesn't need to know how to do this, because the ContactEditor application will do the job.

The other handlers follow the same pattern: trigger an event so that the root application shows a loading widget to the user while the data is fetched from the server, and once the server responds successfully, call another method to handle the result. If an error occurs during the operation, trigger an event so that the root application shows a friendly error to the user.

Handlers receive an object and create an application object that renders a set of views and handles the user interactions. The object created will respond to the user's actions. Imagine, for example, the object handling a form used to save a contact: when the user clicks the save button, it will handle the save process, perhaps show a message such as "Are you sure you want to save the changes?", and take the right action.

The subapplication mediator

The responsibility of the subapplication mediator object is to render the required layout and views to be shown to the user. It knows which views need to be rendered and in which order, so it instantiates the views, with models if needed, and puts the results in the DOM.
After rendering the necessary views, it listens for user interactions in the form of Backbone events triggered from the views; methods on the object handle the interactions as described in the use cases. The mediator pattern is applied to this object to coordinate efforts between the views. For example, imagine that we have a form with contact data. As the user types into the edit form, another view renders a preview business card for the contact; in this case, the form view triggers changes to the application object, and the application object tells the business card view to use the new set of data each time. As you can see, the views are decoupled, and this is the objective of the application object.

The following snippet shows the application object that shows a list of contacts. It creates a ContactListView view, which knows how to render a collection of contacts, and passes it the contacts collection to be rendered:

```js
var ContactList = function(options) {
  _.extend(this, Backbone.Events);
  this.region = options.region;

  this.showList = function(contacts) {
    var contactList = new ContactListView({
      collection: contacts
    });
    this.region.show(contactList);
    this.listenTo(contactList, "item:contact:delete", this._deleteContact);
  };

  this._deleteContact = function(contact) {
    if (confirm('Are you sure?')) {
      contact.collection.remove(contact);
    }
  };

  this.close = function() {
    this.stopListening();
  };
};
```

The ContactListView view is responsible for transforming this into DOM nodes and responding to collection events such as adding a new contact or removing one. Once the view is initialized, it is rendered on the region previously specified. When the view is finally in the DOM, the application listens for the "item:contact:delete" event, which is triggered if the user clicks on the delete button rendered for each contact.

To see a contact, a ContactViewer application is responsible for managing the use case, as follows:

```js
var ContactViewer = function(options) {
  _.extend(this, Backbone.Events);
  this.region = options.region;

  this.showContact = function(contact) {
    var contactView = new ContactView({model: contact});
    this.region.show(contactView);
    this.listenTo(contactView, "contact:delete", this._deleteContact);
  };

  this._deleteContact = function(contact) {
    if (confirm("Are you sure?")) {
      contact.destroy({
        success: function() {
          App.router.navigate("/contacts", true);
        },
        error: function() {
          alert("Something went wrong");
        }
      });
    }
  };
};
```

It's the same situation: the application creates a view that manages the DOM interactions, renders it on the specified region, and listens for events. From the details view of a contact, users can delete it. As with the list, a _deleteContact method handles the event, but the difference is that when a contact is deleted, the application is redirected to the list of contacts, which is the expected behavior. You can see how the handler uses the root application infrastructure by calling the navigate method of the global App.router.

The handler forms for creating and editing contacts are very similar, so the same ContactEditor can be used for both cases.
This object will show a form to the user and wait for the save action, as shown in the following code (note that Backbone's model.save takes attributes as its first argument, so the options object is passed second):

```js
var ContactEditor = function(options) {
  _.extend(this, Backbone.Events);
  this.region = options.region;

  this.showEditor = function(contact) {
    var contactForm = new ContactForm({model: contact});
    this.region.show(contactForm);
    this.listenTo(contactForm, "contact:save", this._saveContact);
  };

  this._saveContact = function(contact) {
    contact.save(null, {
      success: function() {
        alert("Successfully saved");
        App.router.navigate("/contacts");
      },
      error: function() {
        alert("Something went wrong");
      }
    });
  };
};
```

In this case, the model can have modifications in its data. In simple layouts, the views and model can work nicely with model-view data bindings, so no extra code is needed. Here we will assume that the model is updated as the user enters information in the form, for example, with Backbone.Stickit.

When the save button is clicked, a "contact:save" event is triggered, and the application responds with the _saveContact method. See how the method issues a save call on the standard Backbone model and waits for the result. On successful requests, a message is displayed and the user is redirected to the contact list. On errors, a message tells the user that the application found a problem while saving the contact.

The implementation details of the views are outside the scope of this article, but you can infer the work done by these objects from the snippets in this section.

Summary

In this article, we started by describing, in a general way, how a Backbone application works. We described two main parts: a root application and subapplications. A root application provides common infrastructure to the other smaller and more focused applications that we call subapplications. Subapplications are loosely coupled with the other subapplications and should own their resources, such as views, controllers, routers, and so on. A subapplication manages a small part of the system and no more. Communication between the subapplications and the root application happens through an event-driven bus, such as Backbone.Events or Backbone.Radio. The user interacts with the application through views that a subapplication renders. A subapplication mediator orchestrates the interaction between the views, models, and collections. It also handles business logic such as saving or deleting a resource.

Microsoft Dynamics NAV 2009: Using the journals and entries in a custom application

Packt · 10 Jun 2010 · 9 min read
Designing a journal

Now it is time to start on the product part of the Squash Application. In this part, we will no longer reverse engineer in detail. We will learn how to search the standard functionality and reuse parts of it in our own software. For this part, we will look at resources in Microsoft Dynamics NAV. Resources are similar to products (Items) but far less complex, which makes them easier to explore and learn from.

Squash Court master data

Our company has 12 courts that we want to register in Microsoft Dynamics NAV. This master data is comparable to resources, so we'll go ahead and copy that functionality. Resources are not attached to umbrella data like the vendor/squash player tables. We need the number series again, so we'll add a new number series to our squash setup table. The Squash Court table should look like this after creation:

Chapter objects

Some objects are required for this chapter. A description of how to import these objects can be found in the Appendix. After the import process is completed, make sure that your current database is the default database for the RoleTailored client and run Page 123456701, Squash Setup. From this page, select the action Initialise Squash Application. This will execute the C/AL code in the InitSquashApp function of this page, which will prepare demo data for us to play with. The objects are prepared and tested in a Microsoft Dynamics NAV 2009 SP1 W1 database.

Reservations

When running a squash court, we want to be able to keep track of reservations. Looking at standard Dynamics NAV functionality, it might be a good idea to create a Squash Player Journal. The journal can create entries for reservations that can then be invoiced. A journal needs the full journal object structure; this journal is prepared in the objects delivered with this article. Creating a new journal from scratch is a lot of work and can easily lead to mistakes. It is easier and safer to copy an existing journal structure from the standard application that is similar to the journal we need for our design. In our example, we have copied the Resource Journals. You can export these objects to text format, and then rename and renumber the objects to be reused easily. The squash journal objects are renumbered and renamed from the resource journal.

All journals have the same structure. The template, batch, and register tables are almost always the same, whereas the journal line and ledger entry tables contain function-specific fields. Let's have a look at all of them, one by one.

Journal Template

The Journal Template has several fields, as shown in the following screenshot. Let's discuss these fields in more detail:

- Name: The unique name. It is possible to define as many templates as required, but usually one template per Form ID and one for recurring will do. If you want journals with different source codes, you need more templates.
- Description: A readable and understandable description of its purpose.
- Test Report ID: All templates have a test report that allows the user to check for posting errors.
- Form ID: For some journals, more UI objects are required. For example, the General Journals have a special form for bank and cash.
- Posting Report ID: This report is printed when a user selects Post and Print.
- Force Posting Report: Use this option when a posting report is mandatory.
- Source Code: Here you can enter a trail code for all the postings done via this journal.
- Reason Code: This functionality is similar to source codes.
- Recurring: Whenever you post lines from a recurring journal, new lines are automatically created with a posting date defined in the recurring date formula.
- No. Series: When you use this feature, the Document No. in the journal line is automatically populated with a new number from this number series.
- Posting No. Series: Use this feature for recurring journals.

Journal Batch

The Journal Batch has various fields, as shown in the following screenshot. Let's discuss these fields in more detail:

- Journal Template Name: The name of the journal template this batch refers to.
- Name: Each batch should have a unique code.
- Description: A readable description explaining the purpose of this batch.
- Reason Code: When populated, this reason code will overrule the reason code from the journal template.
- No. Series: When populated, this number series will overrule the number series from the journal template.
- Posting No. Series: When populated, this posting number series will overrule the posting number series from the journal template.

Register

The Register table has various fields, as shown in the following screenshot. Let's discuss these fields in more detail:

- No.: This field is automatically and incrementally populated for each transaction with this journal. There are no gaps between the numbers.
- From Entry No.: A reference to the first ledger entry created with this transaction.
- To Entry No.: A reference to the last ledger entry created with this transaction.
- Creation Date: Always populated with the real date when the transaction was posted.
- User ID: The ID of the end user who posted the transaction.

The Journal

The journal line has a number of mandatory fields that are required for all journals, and some fields that are required for its designed functionality. In our case, the journal should create a reservation that can then be invoiced. This requires some information to be populated in the lines.

Reservation

The reservation process is a logistical process that requires us to know the number of the squash court, and the date and time of the reservation. We also need to know how long the players want to play. To check the reservation, it might also be useful to store the number of the squash player.

Invoicing

For the invoicing part, we need to know the price we need to invoice. It might also be useful to store the cost, so we can see our profit. For the system to figure out the proper G/L account for the turnover, we also need to define a General Product Posting Group.

The fields on the journal line are as follows:

- Journal Template Name: A reference to the current journal template.
- Line No.: Each journal has a virtually unlimited number of lines; this number is automatically incremented by 10000, allowing lines to be created in between.
- Entry Type: Reservation or invoice.
- Document No.: This number can be given to the squash player as a reservation number. When the entry type is invoice, it is the invoice number.
- Posting Date: Usually the reservation date, but when the entry type is invoice, it might be the date of the invoice, which might differ from the posting date in the general ledger.
- Squash Player No.: A reference to the squash player who has made the reservation.
- Squash Court No.: A reference to the squash court.
- Description: Automatically updated with the number of the squash court and the reservation date and times, but it can be changed by the user.
- Reservation Date: The actual date of the reservation.
- From Time: The starting time of the reservation. We allow only whole or half hours.
- To Time: The ending time of the reservation. We allow only whole or half hours. This is automatically populated when people enter a quantity.
- Quantity: The number of hours of playing time. We only allow units of 0.5 to be entered here. This is automatically calculated when the times are populated.
- Unit Cost: The cost to run a squash court for one hour.
- Total Cost: The cost for this reservation.
- Unit Price: The invoice price for this reservation per hour. This depends on whether or not the squash player is a member.
- Total Price: The total invoice price for this reservation.
- Shortcut Dimension Code 1 & 2: A reference to the dimensions used for this transaction.
- Applies-to Entry No.: When a reservation is invoiced, this is the reference to the squash entry number of the reservation.
- Source Code: Inherited from the journal batch or template and used when posting the transaction.
- Chargeable: This option determines whether the reservation will be invoiced; reservations that are not chargeable will not get an invoice.
- Journal Batch Name: A reference to the journal batch that is used for this transaction.
- Reason Code: Inherited from the journal batch or template, and used when posting the transaction.
- Recurring Method: When the journal is a recurring journal, this field determines whether the amount field is blanked after posting the lines.
- Recurring Frequency: This field determines the new posting date after the recurring lines are posted.
- Gen. Bus. Posting Group: The combination of general business and product posting groups determines the G/L account for turnover when we invoice the reservation. The Gen. Bus. Posting Group is inherited from the bill-to customer.
- Gen. Prod. Posting Group: This will be inherited from the squash player.
- External Document No.: When a squash player wants us to note a reference number, we can store it here.
- Posting No. Series: When the journal template has a posting number series, it is populated here to be used when posting.
- Bill-to Customer No.: This determines who is paying for the reservation. We will inherit this from the squash player.

So now we have a place to enter reservations, but there is something to do before we can start. Some fields were determined to be inherited or calculated:

- The time fields need calculation logic to stop people from entering wrong values (see the sketch after this list)
- The Unit Price should be calculated
- The Unit Cost, posting groups, and Bill-to Customer No. need to be inherited

As the final cherry on top, we will look at implementing dimensions.
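The first item on that list maps naturally onto OnValidate triggers of the journal line table. The following C/AL sketch shows one possible shape for the time and quantity logic; the field names follow the article, but the trigger bodies, error texts, and millisecond arithmetic are assumptions, not the book's actual code:

```
// Sketch: Squash Journal Line triggers (C/AL, NAV 2009).

// "From Time" - OnValidate(): allow only whole and half hours.
// Subtracting two times yields milliseconds; 1800000 ms = 30 minutes.
IF ("From Time" - 000000T) MOD 1800000 <> 0 THEN
  ERROR('Reservations must start on the whole or half hour.');

// Quantity - OnValidate(): enforce units of 0.5 and derive the rest.
IF Quantity * 2 <> ROUND(Quantity * 2, 1) THEN
  ERROR('Quantity must be entered in units of 0.5.');
"To Time" := "From Time" + ROUND(Quantity * 3600000, 1); // 1 hour = 3600000 ms
"Total Cost" := Quantity * "Unit Cost";
"Total Price" := Quantity * "Unit Price";
```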

Form Validation with Codeigniter 1.7

Packt · 26 Apr 2010 · 8 min read
Form validation is an important part of any application. Take a look at your favorite web application and notice how many forms it has; it is important that they be secure. It is also important to have rules that must be adhered to; this helps maintain a layer of security. In this article by Adam Griffiths, author of CodeIgniter 1.7 Professional Development, you will:

- Learn how the form validation process works
- Build a contact form
- Apply validation rules to the form's input fields
- Use callbacks to create your own rules

We will cover database interaction separately.

Why should I validate my forms?

The answer to this question is simple: security. If you simply left your forms bare, with no validation, and then stored this information directly in a database, you would be liable to attack. People could simply enter some SQL code and see a dump of part or all of your database. By using form validation and creating rules, you will disallow most, if not all, of these practices from occurring. By setting validation rules, you can limit the types of data allowed in your forms. Best of all, the Form Validation Library makes it easy to re-populate your form fields and to show individual errors for each field, making the overall end-user experience better, which can mean a lot in an environment with many forms. Even if you are only building a contact form, it is a good idea to validate it to stop people from abusing it.

Using the Form Validation Library

In this article, we'll go over a contact form and how to use the Form Validation Library methods for validation.

The form validation process

The form validation process is different for the developer and for the user. Read on to see how the user interacts with the forms, as well as how the developer will create them.

The user's process

A form is displayed to the user, who then fills it in and submits it. The Form Validation Library then checks the form against any rules that the developer has set. If an error occurs, the library returns these errors and they are shown against the form, with the fields re-populated. This process repeats until a valid form is submitted.

The development process

You create a form, along with a dynamic value from a form helper function; this will re-populate the data if needed. You also display individual or global errors in the form view file. You set validation rules, which must be adhered to. Then you check to see if the validation process has been run, and if it has not, you load the form view file.

Contact form

We validate the form data using the Form Validation Library to complete tasks such as checking for empty fields and validating the e-mail address, and then we send the e-mail off. All of the code shown should be in the index() function of your email controller.

Loading the assets

We need to load two libraries for our contact form: the Form Validation Library and the Email class. We can do this in one line, by passing an array to the load->library function:

```php
$this->load->library(array('email', 'form_validation'));
```

We also need to load two helpers: the email helper and the form helper. We will do this in the same way as we loaded the two libraries in the previous line of code:

```php
$this->load->helper(array('email', 'form'));
```

Setting the rules

The next step in using the Form Validation Library is to set the rules for the form. These rules must be adhered to.
The way we set rules is by using the set_rules() function of the Form Validation Library. We use the function as follows:

```php
$this->form_validation->set_rules('field_name', 'human_name', 'rules');
```

As you can see, the function accepts three parameters. The first is the name of the form field that you wish to set the rule for. The second is the human-readable name you wish to assign to the field. The final parameter is where you pass any validation rules.

List of validation rules

The following rules are readily available for use:

- required
- matches[field_name]
- min_length[x]
- max_length[x]
- exact_length[x]
- alpha
- alpha_numeric
- alpha_dash
- numeric
- integer
- is_natural
- is_natural_no_zero
- valid_email
- valid_emails
- valid_ip
- valid_base64

As you can see, some of these rules take a single parameter. The matches[] rule will return TRUE if the field matches the field name passed to it. The min_length[], max_length[], and exact_length[] rules take an integer as a parameter and check whether the minimum length, maximum length, or exact length, respectively, matches the rule. The rules with no parameters are pretty much self-explanatory. You are able to use more than one rule: simply separate rules with a vertical bar (|) and they will cascade. These rules can also be called as discrete functions, and you may also use any native PHP function that accepts one parameter as a rule:

```php
$this->form_validation->required($string);
$this->form_validation->is_array($string); // native PHP function as a rule
```

Prepping data

We can also use various prepping functions to prepare the data before we apply rules to it. Here's a list of the prepping rules we can use:

- xss_clean
- prep_for_form
- prep_url
- strip_image_tags
- encode_php_tags

The first function listed is xss_clean. This basically strips out any code and unwanted characters and replaces them with HTML entities. The prep_for_form function converts special characters so that HTML data can be shown in a form without breaking it. The prep_url function simply adds http:// to a URL if it is missing. The strip_image_tags function removes image tags, leaving the raw image URL. The encode_php_tags function converts PHP tags into entities. You may also use any native PHP function that accepts one parameter as a prepping rule.

The rules

Now that we know how to set rules, and what rules we can use, we can go ahead and set the rules necessary for our form. All fields should be required, and the e-mail field should be validated to ensure that the e-mail address is correctly formatted. We also want to run all of the data through the XSS filter:

```php
$this->form_validation->set_rules('name', 'Name', 'required|xss_clean');
$this->form_validation->set_rules('email', 'Email Address', 'required|valid_email|xss_clean');
$this->form_validation->set_rules('subject', 'Subject', 'required|xss_clean');
$this->form_validation->set_rules('message', 'Message', 'required|xss_clean');
```

Check the validation process

Instead of checking one of the form field's POST values to see if the form has been submitted, we simply check to see if the Form Validation Library has run. We do this by using the following code:

```php
if ($this->form_validation->run() === FALSE)
{
    // load the contact form
}
else
{
    // send the email
}
```

It's fairly simple: if the Form Validation Library hasn't processed a form, we display the form to the user; if the library has processed a form and there are no errors, we'll send the e-mail off.
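When the run() check fails, the contact form view is loaded again. In the view, the form helper's set_value() function re-populates each field and form_error() prints that field's error message. A minimal sketch of such an email view (the field names match the rules above; the markup is illustrative):

```php
<?php echo form_open('email'); ?>

<?php echo form_error('name'); ?>
<input type="text" name="name" value="<?php echo set_value('name'); ?>" />

<?php echo form_error('email'); ?>
<input type="text" name="email" value="<?php echo set_value('email'); ?>" />

<?php echo form_error('subject'); ?>
<input type="text" name="subject" value="<?php echo set_value('subject'); ?>" />

<?php echo form_error('message'); ?>
<textarea name="message"><?php echo set_value('message'); ?></textarea>

<input type="submit" value="Send" />
</form>
```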
Sending the email

As you'll notice, we retrieve the field data in the same way as we did earlier:

```php
$name    = $this->input->post('name');
$email   = $this->input->post('email');
$subject = $this->input->post('subject');
$message = $this->input->post('message');

$this->email->from($email, $name);
$this->email->to('youremail@yourdomain.ext');

$this->email->subject($subject);
$this->email->message($message);

$this->email->send();
```

Final controller code

Here is the entirety of our controller code:

```php
<?php
class Email extends Controller {

    function Email()
    {
        parent::Controller();
    } // function Email()

    function index()
    {
        $this->load->library(array('email', 'form_validation'));
        $this->load->helper(array('email', 'form'));

        $this->form_validation->set_rules('name', 'Name', 'required|xss_clean');
        $this->form_validation->set_rules('email', 'Email Address', 'required|valid_email|xss_clean');
        $this->form_validation->set_rules('subject', 'Subject', 'required|xss_clean');
        $this->form_validation->set_rules('message', 'Message', 'required|xss_clean');

        if ($this->form_validation->run() == FALSE)
        {
            $this->load->view('email'); // load the contact form
        }
        else
        {
            $name    = $this->input->post('name');
            $email   = $this->input->post('email');
            $subject = $this->input->post('subject');
            $message = $this->input->post('message');

            $this->email->from($email, $name);
            $this->email->to('youremail@yourdomain.ext');
            $this->email->subject($subject);
            $this->email->message($message);
            $this->email->send();
        }
    } // function index()
} // class Email extends Controller
?>
```
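The introduction promised callbacks for custom rules, which this excerpt doesn't reach. For completeness, the standard CodeIgniter pattern is a controller method referenced with the callback_ prefix in the rule string; the rule and message below are illustrative:

```php
// In the rules: the "callback_" prefix tells the library to call our own method.
$this->form_validation->set_rules('subject', 'Subject', 'required|callback_subject_check|xss_clean');

// In the same controller: return TRUE to pass, FALSE to fail.
function subject_check($str)
{
    if (stripos($str, 'viagra') !== FALSE) // a crude spam check, for illustration
    {
        $this->form_validation->set_message('subject_check', 'The %s field contains a banned word.');
        return FALSE;
    }
    return TRUE;
}
```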

AJAX Implementation in APEX

Packt · 01 Jun 2010 · 8 min read
APEX introduced AJAX support in version 2.0 (the product was called HTML DB back then). The support includes a dedicated AJAX framework that allows us to use AJAX in our APEX applications, and it covers both the client and the server sides.

AJAX support on the client side

The APEX built-in JavaScript library includes a special JavaScript file with the implementation of the AJAX client-side components. In earlier versions this file was called htmldb_get.js, and in APEX 3.1 it was changed to apex_get_3_1.js. In version 3.1, APEX also started to implement a JavaScript namespace in the apex_ns_3_1.js file. Within that file, there is a definition of an apex.ajax namespace. I'm not mentioning the names of these files just for the sake of it. As the AJAX framework is not officially documented within the APEX documentation, these files can be a very important and useful source of information.

By default, these files are automatically loaded into every application page as part of the #HEAD# substitution string in the Header section of the page template. This means that, by default, AJAX functionality is available to us on every page of our application, without taking any extra measures.

The htmldb_Get object

The APEX implementation of AJAX is based on the htmldb_Get object and, as we'll see, creating a new instance of htmldb_Get is always the first step in performing an AJAX request. The htmldb_Get constructor function has seven parameters:

```js
function htmldb_Get(obj, flow, req, page, instance, proc, queryString)
```

1—obj

The first parameter is a String that can be set to null, the name of a page item (DOM element), or an element ID.

Setting this parameter to null will cause the result of the AJAX request to be assigned to a JavaScript variable. We should use this value every time we need to process the AJAX result, as in the cases where we return XML or JSON formatted data, or when we are relying on the returned result further along in our JavaScript code flow. The APEX built-in JavaScript library defines, in the apex_builder.js file (which is also loaded into every application page, just like apex_get_3_1.js), a JavaScript global variable called gReturn. You can use this variable and assign it the AJAX result.

Setting this parameter to the name (ID) of a page item will set the item's value property to the result of the AJAX call. You should make sure that the result of the AJAX call matches the nature of the item's value property. For example, if you are returning a text string into a text item, it will work just fine. However, if you are returning an HTML snippet of code into the same item, you'll most likely not get the result you wanted.

Setting this parameter to a DOM element ID that is not an input item on the page will set the element's innerHTML property to the result of the AJAX call. Injecting HTML code using the innerHTML property is a cross-browser issue. Moreover, we can't always set innerHTML along the DOM tree. To avoid potential problems, I strongly recommend that you use this option with <div> elements only.

2—flow

This parameter represents the application ID. If we are calling htmldb_Get() from an external JavaScript file, this parameter should be set to $v('pFlowId') or its equivalent in version 3.1 or before ($x('pFlowId').value or html_GetElement('pFlowId').value). This is also the default value, in case this parameter is left null.
If we are calling htmldb_Get() as part of inline JavaScript code, we can use the Substitution String notation &APP_ID. (just to remind you, the trailing period is part of the syntax). Less common, but if you are using the Oracle Web Toolkit to generate dynamic code (for dynamic content) that includes AJAX, you can also use the bind variable notation :APP_ID. (In this case, the period is just a punctuation mark.)

3—req

This String parameter stands for the REQUEST value. Using the keyword APPLICATION_PROCESS with this parameter allows us to name an application-level On Demand PL/SQL Anonymous Block process that will be fired as part of the AJAX server-side processing, for example, 'APPLICATION_PROCESS=demo_code'. This parameter is case sensitive and, as a String, should be enclosed in quotes. If, as part of the AJAX call, we are not invoking an on-demand process, this parameter should be set to null (which is its default value).

4—page

This parameter represents an application page ID. The APEX AJAX process allows us to invoke any application page, run it in the background on the server side, and then clip portions of the generated HTML code for this page into the AJAX calling page. In these cases, we should set this parameter to the ID of the page we want to pull from. The default value of this parameter is 0 (this stands for page 0). However, this value can be problematic at times, especially when page 0 has not been defined in the application, or when there are inconsistencies between the Authorization scheme or the page Authentication (such as Public and Required Authentication) of page 0 and those of the AJAX calling page. These inconsistencies can fail the execution of the AJAX process. In cases where you are not pulling information from another page, the safe bet is to set this parameter to the page ID of the AJAX calling page, using $v('pFlowStepId') or its equivalent for versions earlier than 3.1. In the case of inline code, the &APP_PAGE_ID. Substitution String can also be used. Using the calling page ID as the default value for this parameter can be considered good practice even for upcoming APEX versions, where implementation of page-level on-demand processes will probably be introduced. I hope you remember that, as of version 3.2, we can only define on-demand processes at the application level.

5—instance

This parameter represents the APEX session ID and should almost always be left null (personally, I have never encountered the need to set it otherwise). In this case, it will be populated with the result of $v('pInstance') or its earlier equivalents.

6—proc

This String parameter allows us to invoke a stored or packaged procedure in the database as part of the AJAX process. The common behavior of the APEX AJAX framework is to use an application-level On Demand PL/SQL Anonymous Block process as the logic of the AJAX server-side component. In that case, the on-demand process is named through the third parameter (req) using the keyword APPLICATION_PROCESS, and this parameter (proc) should be left null. The parameter will then be populated with its default value of 'wwv_flow.show' (the single quotes are part of the syntax, as this is a String parameter).

However, the APEX AJAX framework also allows us to invoke an external (to APEX) stored (or packaged) procedure as the logic of the AJAX server side. In this case, we can utilize already existing logic in the database.
Moreover, we can benefit from the "regular" advantages of stored procedures, such as pre-compiled code, for better performance, or the option to use wrapped PL/SQL packages, which can protect our business logic better (the APEX on-demand PL/SQL process can be accessed at the database level as clear text).

The parameter should be formatted as a URL and can be in the form of a relative URL. In this case, the system will complete the relative URL into a full-path URL based on the current window.location.href property. As with all stored or packaged procedures that we wish to use in our APEX application, the user (and in the case of using DAD, the APEX public user) should have the proper privileges on the stored procedure. In case the stored procedure, or the packaged procedure, doesn't have a public synonym defined for it, the procedure name should be qualified with the owner schema. For example, with inline code we can use:

'#OWNER#.my_package.my_proc'

For external code, you should retrieve the owner and make it available on the page (for example, assign it to a JavaScript global variable), or define a public synonym for the owner schema and package.

7—queryString

This parameter allows us to add parameters to the stored (packaged) procedure that we named in the previous parameter, proc. As we are ultimately dealing with constructing a URL that will be POSTed to the server, this parameter should take the form of POST parameters in a query string: pairs of name=value, delimited by an ampersand (&).

Let's assume that my_proc has two parameters: p_arg1 and p_arg2. In this case, the queryString parameter should be set similar to the following:

'p_arg1=Hello&p_arg2=World'

As we are talking about components of a URL, the values should be escaped so that they form a legal URL. You can use the APEX built-in JavaScript function htmldb_Get_escape() to do that.

If you are using the req parameter to invoke an APEX on-demand process with your AJAX call, the proc and queryString parameters should be left null. In this case, you can close the htmldb_Get() syntax right after the page parameter. If, on the other hand, you are invoking a stored (packaged) procedure, the req parameter should be set to null.
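Putting the parameters together, a typical call to an on-demand application process in APEX 3.x looks like the following sketch. The process name demo_code and the item P1_SEARCH are placeholders; the add() method posts a page-item value along with the request:

```js
function doDemoRequest() {
  // null first argument: we handle the result ourselves via gReturn.
  var get = new htmldb_Get(null, $v('pFlowId'),
                           'APPLICATION_PROCESS=demo_code',
                           $v('pFlowStepId'));

  // Post a page-item value to the server-side process.
  get.add('P1_SEARCH', $v('P1_SEARCH'));

  // The on-demand process output ends up in gReturn.
  gReturn = get.get();
  get = null; // release the object

  alert(gReturn);
}
```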

Constructing Common UI Widgets

Packt · 22 Apr 2015 · 21 min read
One of the biggest features that draws developers to Ext JS is the vast array of UI widgets available out of the box. The ease with which they can be integrated with each other, and the attractive and consistent visuals each of them offers, are also big attractions. No other framework can compete on this front, and this is a huge reason Ext JS leads the field of large-scale web applications.

In this article by Stuart Ashworth and Andrew Duncan, authors of the book Ext JS Essentials, we will look at how UI widgets fit into the framework's structure, how they interact with each other, and how we can retrieve and reference them. We will then delve under the surface and investigate the lifecycle of a component and the stages it goes through during the lifetime of an application.

Anatomy of a UI widget

Every UI element in Ext JS extends from the base component class Ext.Component. This class is responsible for rendering UI elements to the HTML document. Components are generally sized and positioned by the layouts used by their parent components, and they participate in the automatic component lifecycle process. You can imagine an instance of Ext.Component as a single section of the user interface, in a similar way that you might think of a DOM element when building traditional web interfaces.

Each subclass of Ext.Component builds upon this simple fact and is responsible for generating more complex HTML structures or combining multiple Ext.Components to create a more complex interface. Ext.Component classes, however, can't contain other Ext.Components. To combine components, one must use the Ext.container.Container class, which itself extends from Ext.Component. This class allows multiple components to be rendered inside it and have their size and positioning managed by the framework's layout classes.

Components and HTML

Creating and manipulating UIs using components requires a slightly different way of thinking than you may be used to when creating interactive websites with libraries such as jQuery. The Ext.Component class provides a layer of abstraction from the underlying HTML and allows us to encapsulate additional logic to build and manipulate this HTML. This concept is different from the way other libraries allow you to manipulate UI elements and presents a hurdle for new developers to get over.

The Ext.Component class generates HTML for us, which we rarely need to interact with directly; instead, we manipulate the configuration and properties of the component. The following code shows a simple Ext.Component instance:

```js
var simpleComponent = Ext.create('Ext.Component', {
    html    : 'Ext JS Essentials!',
    renderTo: Ext.getBody()
});
```

A simple <div> tag is created, which is given some CSS classes and an autogenerated ID, and has the html config displayed inside it. This generated HTML is created and managed by the Ext.dom.Element class, which wraps a DOM element and its children, offering us numerous helper methods to interrogate and manipulate it. After it is rendered, each Ext.Component instance has the element instance stored in its el property. You can then use this property to manipulate the underlying HTML that represents the component.

As mentioned earlier, the el property won't be populated until the component has been rendered to the DOM. You should put logic that depends on altering the raw HTML of the component in an afterrender event listener, or override the afterRender method.
The following example shows how you can manipulate the underlying HTML once the component has been rendered. It sets the background color of the element to red:

```js
Ext.create('Ext.Component', {
    html     : 'Ext JS Essentials!',
    renderTo : Ext.getBody(),
    listeners: {
        afterrender: function(comp) {
            comp.el.setStyle('background-color', 'red');
        }
    }
});
```

It is important to understand that digging into and updating the HTML and CSS that Ext JS creates for you is a dangerous game to play and can result in unexpected behavior when the framework tries to update things itself. There is usually a framework way to achieve the manipulation you want, which we recommend you look for first. We always advise new developers not to fight the framework too much when starting out. Instead, we encourage them to follow its conventions and patterns, rather than wrestling it into doing things the way they may have previously done when developing traditional websites and web apps.

The component lifecycle

When a component is created, it follows a lifecycle process that is important to understand, so as to have an awareness of the order in which things happen. By understanding this sequence of events, you will have a much better idea of where your logic will fit, and you will ensure you have control over your components at the right points.

The creation lifecycle

The following process is followed when a new component is instantiated and rendered to the document by adding it to an existing container. When a component is shown explicitly (for example, a floating component shown without being added to a parent), some additional steps are included. These are denoted with a * in the following process.

constructor

First, the class' constructor function is executed, which triggers all of the other steps in turn. By overriding this function, we can add any setup code required for the component.

Config options processed

The next thing to be handled is the config options that are present in the class. This involves each option's apply and update methods being called, if they exist, meaning the values are available via the getter from now onwards.

initComponent

The initComponent method is now called and is generally used to apply configurations to the class and perform any initialization logic.

render

Once added to a container, or when the show method is called, the component is rendered to the document.

boxready

At this stage, the component is rendered and has been laid out by its parent's layout class, and it is ready at its initial size. This event will only happen once, on the component's first layout.

activate (*)

If the component is a floating item, the activate event will fire, showing that the component is the active one on the screen. This will also fire when the component is brought back into focus, for example, in a Tab panel when a tab is selected.

show (*)

Similar to the previous step, the show event will fire when the component is finally visible on screen.

The destruction process

When we are removing a component from the Viewport and want to destroy it, it will follow a destruction sequence that we can use to ensure things are cleaned up sufficiently, so as to avoid memory leaks and so on. The framework takes care of the majority of this cleanup for us, but it is important that we tidy up any additional things we instantiate.

hide (*)

When a component is manually hidden (using the hide method), this event will fire and any additional hide logic can be included here.
2. deactivate (*): Similar to the activate step, this is fired when the component becomes inactive. As with the activate step, this happens when floating and nested components are hidden and are no longer the item in focus.
3. destroy: This is the final step in the teardown process, in which the component and its internal properties and objects are cleaned up. At this stage, it is best to remove event handlers, destroy subclasses, and ensure any other references are released.

Component Queries

Ext JS boasts a powerful system for retrieving references to components, called Component Queries. This is a CSS/XPath-style query syntax that lets us target broad sets of components, or specific components, within our application. For example, within our controller, we may want to find a button with the text "Save" within a component of type MyForm. In this section, we will demonstrate the Component Query syntax and how it can be used to select components. We will also go into detail about how it can be used within Ext.container.Container classes to scope selections.

xtypes

Before we dive in, it is important to understand the concept of xtypes in Ext JS. An xtype is a shorthand name for an Ext.Component that allows us to identify its declarative component configuration objects. For example, we can create a new Ext.Component as a child of an Ext.container.Container using an xtype with the following code:

Ext.create('Ext.Container', {
  items: [
    {
      xtype: 'component',
      html : 'My Component!'
    }
  ]
});

Using xtypes allows you to lazily instantiate components when required, rather than having them all created upfront. Common component xtypes include:

  Class                      xtype
  Ext.tab.Panel              tabpanel
  Ext.container.Container    container
  Ext.grid.Panel             gridpanel
  Ext.Button                 button

xtypes form the basis of our Component Query syntax, in the same way that element types (for example, div, p, and span) do for CSS selectors. We will use these heavily in the following examples.
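To make the lazy instantiation point concrete, here is a brief sketch; the component contents are invented for illustration and are not from the original article. The first child is instantiated immediately with Ext.create, while the second is only a plain configuration object that the framework turns into a component on our behalf:

// Eager: we instantiate the component ourselves, up front
var eagerChild = Ext.create('Ext.Component', {
  html: 'Created immediately'
});

// Lazy: just a config object with an xtype; the framework
// performs the instantiation for us when the container needs it
Ext.create('Ext.container.Container', {
  renderTo: Ext.getBody(),
  items   : [
    eagerChild,
    {
      xtype: 'component',
      html : 'Instantiated by the framework'
    }
  ]
});

Declaring children as config objects in this way also keeps component definitions declarative, which is why you will see this style used almost everywhere in Ext JS code.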
Sample component structure

We will use the following sample component structure, a panel with a child tab panel, form, and buttons, to perform our example queries on:

var panel = Ext.create('Ext.panel.Panel', {
  height  : 500,
  width   : 500,
  renderTo: Ext.getBody(),
  layout  : {
    type : 'vbox',
    align: 'stretch'
  },
  items   : [
    {
      xtype : 'tabpanel',
      itemId: 'mainTabPanel',
      flex  : 1,
      items : [
        {
          xtype : 'panel',
          title : 'Users',
          itemId: 'usersPanel',
          layout: {
            type : 'vbox',
            align: 'stretch'
          },
          tbar  : [
            {
              xtype : 'button',
              text  : 'Edit',
              itemId: 'editButton'
            }
          ],
          items : [
            {
              xtype  : 'form',
              border : 0,
              items  : [
                {
                  xtype     : 'textfield',
                  fieldLabel: 'Name',
                  allowBlank: false
                },
                {
                  xtype     : 'textfield',
                  fieldLabel: 'Email',
                  allowBlank: false
                }
              ],
              buttons: [
                {
                  xtype : 'button',
                  text  : 'Save',
                  action: 'saveUser'
                }
              ]
            },
            {
              xtype  : 'grid',
              flex   : 1,
              border : 0,
              columns: [
                {
                  header   : 'Name',
                  dataIndex: 'Name',
                  flex     : 1
                },
                {
                  header   : 'Email',
                  dataIndex: 'Email'
                }
              ],
              store  : Ext.create('Ext.data.Store', {
                fields: [
                  'Name',
                  'Email'
                ],
                data  : [
                  {
                    Name : 'Joe Bloggs',
                    Email: 'joe@example.com'
                  },
                  {
                    Name : 'Jane Doe',
                    Email: 'jane@example.com'
                  }
                ]
              })
            }
          ]
        }
      ]
    },
    {
      xtype       : 'component',
      itemId      : 'footerComponent',
      html        : 'Footer Information',
      extraOptions: {
        option1: 'test',
        option2: 'test'
      },
      height      : 40
    }
  ]
});

Queries with Ext.ComponentQuery

The Ext.ComponentQuery class is used to perform Component Queries, with the query method used primarily. This method accepts two parameters: a query string, and an optional Ext.container.Container instance to use as the root of the selection (that is, only components below this one in the hierarchy will be returned). The method will return an array of matching components, or an empty array if none are found. We will work through a number of scenarios and use Component Queries to find specific sets of components.

Finding components based on xtype

As we have seen, we use xtypes like element types in CSS selectors.
We can select all of the Ext.panel.Panel instances using their xtype, panel:

var panels = Ext.ComponentQuery.query('panel');

We can also add the concept of hierarchy by including a second xtype, separated by a space. The following code will select all Ext.Button instances that are descendants (at any level) of an Ext.panel.Panel class:

var buttons = Ext.ComponentQuery.query('panel button');

We could also use the > character to limit the selection to buttons that are direct descendants of a panel:

var directDescendantButtons = Ext.ComponentQuery.query('panel > button');

Finding components based on attributes

It is simple to select a component based on the value of a property. We use an XPath-style syntax to specify the attribute and the value. The following code will select buttons with an action attribute of saveUser:

var saveButtons = Ext.ComponentQuery.query('button[action="saveUser"]');

Finding components based on itemIds

ItemIds are commonly used to retrieve components, and they are specially optimized for performance within the ComponentQuery class. They only need to be unique within their parent container, rather than globally unique like the id config. To select a component based on its itemId, we prefix the itemId with a # symbol:

var usersPanel = Ext.ComponentQuery.query('#usersPanel');

Finding components based on member functions

It is also possible to identify matching components based on the result of a function of that component. For example, we can select all text fields whose values are valid (that is, for which a call to the isValid method returns true):

var validFields = Ext.ComponentQuery.query('form > textfield{isValid()}');

Scoped Component Queries

All of our previous examples search the entire component tree to find matches, but often we may want to keep our searches local to a specific container and its descendants. This can reduce the complexity of the query and improve performance, as fewer components have to be processed. Ext.Containers have three handy methods to do this: up, down, and query. We will take each of these in turn and explain their features.

up

This method accepts a selector and will traverse up the hierarchy to find a single matching parent component. This can be useful for finding the grid panel that a button belongs to, so that an action can be taken on it:

var grid = button.up('gridpanel');

down

This returns the first descendant component that matches the given selector:

var firstButton = grid.down('button');

query

The query method performs much like Ext.ComponentQuery.query, but is automatically scoped to the current container. This means that it will search all descendant components of the current container and return all matching ones as an array:

var allButtons = grid.query('button');

Hierarchical data with trees

Now that we know and understand components, their lifecycle, and how to retrieve references to them, we will move on to more specific UI widgets. The tree panel component allows us to display hierarchical data in a way that reflects the data's structure and relationships. In our application, we are going to use a tree panel to represent our navigation structure, allowing users to see how the different areas of the app are linked and structured.

Binding to a data source

Like all other data-bound components, tree panels must be bound to a data store; in this particular case, it must be an Ext.data.TreeStore instance or subclass, as the tree panel takes advantage of the extra features added to this specialist store class.
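To give a feel for what such a store involves, a minimal TreeStore definition might look like the following sketch. The class name, field, and inline data are invented for illustration; the BizDash store used below will differ, and a real application would normally load its nodes through a proxy rather than from inline data:

Ext.define('MyApp.store.Navigation', {
  extend : 'Ext.data.TreeStore',
  storeId: 'Navigation',

  fields : ['Label'],

  // Inline root data, purely for illustration
  root   : {
    expanded: true,
    children: [
      { Label: 'Dashboard', leaf: true },
      {
        Label   : 'Reports',
        expanded: true,
        children: [
          { Label: 'Sales', leaf: true }
        ]
      }
    ]
  }
});

Once instantiated (for example, by listing it in an application's stores configuration), the store registers itself under its storeId and can then be referenced by name from any tree panel.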
We will make use of the BizDash.store.Navigation TreeStore to bind to our tree panel.

Defining a tree panel

The tree panel is defined in the Ext.tree.Panel class (which has an xtype of treepanel), which we will extend to create a custom class called BizDash.view.navigation.NavigationTree:

Ext.define('BizDash.view.navigation.NavigationTree', {
  extend: 'Ext.tree.Panel',
  alias : 'widget.navigation-NavigationTree',

  store : 'Navigation',

  columns: [
    {
      xtype    : 'treecolumn',
      text     : 'Navigation',
      dataIndex: 'Label',
      flex     : 1
    }
  ],

  rootVisible: false,
  useArrows  : true
});

We configure the tree to be bound to our TreeStore by using its storeId, in this case, Navigation. A tree panel is a subclass of the Ext.panel.Table class (just like the Ext.grid.Panel class), which means it must have a columns configuration present. This tells the component what values to display as part of the tree. In a simple, traditional tree, we might only have one column showing the item and its children; however, we can define multiple columns and display additional fields in each row. This would be useful if we were displaying, for example, files and folders and wanted additional columns for the file type and file size of each item.

In our example, we are only going to have one column, displaying the Label field. We do this by using the treecolumn xtype, which is responsible for rendering the tree's navigation elements. Without a treecolumn defined, the component won't display correctly. The treecolumn xtype's configuration allows us to define which of the attached data model's fields to use (dataIndex), the column's header text (text), and the fact that the column should fill the available horizontal space (flex). Additionally, we set rootVisible to false, so the data's root node is hidden, as it has no real meaning other than holding the rest of the data together. Finally, we set useArrows to true, so items with children use an arrow instead of the +/- icon.

Summary

In this article, we have learnt how Ext JS components fit together and the lifecycle that they follow when created and destroyed. We covered the component lifecycle and Component Queries.

Resources for Article:

Further resources on this subject:
So, what is Ext JS? [article]
Function passing [article]
Static Data Management [article]
Request Flow in Kohana 3

Packt
12 Sep 2011
12 min read
(For more resources on this topic, see here.)

The reader can benefit from the previous article on Routing in Kohana 3.

Hierarchy is King in Kohana

Kohana is layered in more ways than one. First, it has a cascading filesystem: the framework loads files for each of its core parts in a hierarchical order, explained in more detail in just a bit. Next, Kohana allows controllers to initiate requests, making the application workflow follow a hierarchical design pattern. These features are the foundation of HMVC, which is essentially a cascading filesystem, flexible routing and request handling, and the ability to execute sub-requests, combined with a standard MVC pattern.

The framework locates and loads the right file using a core method named Kohana::find_file(). This method searches the filesystem in a predetermined order to load the proper class first. The order the method searches in is (with default paths):

1. Application path (/application)
2. Modules (/modules), as ordered in bootstrap.php
3. System path (/system)

Cascading filesystem

As the framework loads, it creates a merged filesystem based on the order of loading described above. One of the benefits of loading files this way is the ease of overloading classes that would be loaded later in the flow. We never have to, nor should we, alter a file in the system directory; we can override the default behavior of any method by overloading it in the application directory. Another great advantage is the consistency this mechanism offers. We know the exact load order for every class in any application, making it much easier to create custom code and know exactly where it needs to live.

This image shows an example application being merged into the final file structure that will be used when completing a request. We can see how some classes in the application layer override files in the modules. This makes it easier to visualize how modules extend and enhance the framework by building on the system core. Our application then sits on top of the system and module files, and can build on and extend the functionality of the module and system layers. Kohana also makes it easy to load third-party libraries, referred to as vendor libraries, into the filesystem.

Each of the three layers has five basic folders into which Kohana looks:

- Classes (/classes) contain all autoloaded class files. This directory includes our Controllers, Models, and their supporting classes. Autoloading allows us to use classes without having to include them manually; any classes inside this directory are automatically located and loaded when they are used.
- Config files (/config) are files containing arrays that can be parsed and loaded using the core method Kohana::config(). Some config files are required to configure and properly load modules, while others may be created by us to make our application easier to maintain, or to keep vendor libraries tidy by moving config data into the framework. Config files are the only files in the cascading filesystem that are not overloaded; all config files are merged with their parent files.
- Internationalization files (/i18n) make it much easier to create language files that work with our applications to deliver content that best suits the language of our users.
- Messages (/messages) are much like configuration files, in that they are arrays that are loaded by a core Kohana method. Kohana::message() parses and returns the messages for a specific array. This functionality is very useful when creating forms and actions for our applications.
- View files (/views) are the presentation layer of our applications, where view files and template files live.

Request flow in Kohana

Now that we have seen how the framework merges files to create a set of files to load on request, this is a good place to look at the flow of the files in Kohana. Remember that controllers can invoke requests, creating a bit of a loop between controllers, models, and views, but the framework always runs in the same order, beginning with the index.php file.

The index.php file sets the paths to the application, modules, and system directories and saves them as constants that are then defined for global use. These constants are APPPATH, MODPATH, and SYSPATH, and they hold the paths for the application, modules, and system directories respectively. After the error-reporting levels are set, Kohana looks to see if the install.php file exists. If the install file is not found, Kohana takes the next step in loading the framework by loading the core Kohana class. Next, the index file looks for the application's Kohana class, first in the application directory, then in the system path. This is the first example of Kohana looking for our files before it looks for its own. The last thing the index file does is bootstrap our application, by requiring the bootstrap.php file and loading it.

You probably remember having to configure the bootstrap file when we installed Kohana. This is the file in which we set our base URL and modules; however, it is a bit more important than just basic installation and configuration. The bootstrap begins by setting some basic environment settings, such as the default timezone and locale. Next, it enables the autoloader and defines the application environment; this tells the framework whether our application is in a production or development environment, allowing it to make decisions based on environment-specific settings. Next, the default options are set and Kohana is initialized, with the Kohana::init() method being called. After Kohana's initialization, it sets up logging, configuration reading, and then modules.

Modules are defined using an array, with the module name as the key and the path to the module as the value. The module load order is determined by this array, and modules are all subject to the same rules and conventions as core and application code. Each module is added to the cascading filesystem as described above, allowing files to override any that may be added later when the system files are merged. Modules can contain their own init files, named init.php, that act similarly to the application bootstrap, adding routes specific to the module for our application to use.

The last thing the bootstrap does in a normal request flow is to set the routes for the application. Kohana ships with a default route that loads the index action in the welcome controller. The Route object's set method accepts the route's name and URI pattern, with defaults supplied for the parameters defined in the URI. By setting default controllers, actions, and params, we can have dynamic URLs with default values if none are passed. If no controller or action is passed in the URI on a vanilla Kohana install, the welcome controller is loaded and the index action invoked, as outlined in the defaults passed to the Route::set() call in the bootstrap.
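To make this concrete, the default route as it ships in a vanilla Kohana 3 bootstrap.php looks like this (check your own bootstrap if your install has been customized):

Route::set('default', '(<controller>(/<action>(/<id>)))')
    ->defaults(array(
        'controller' => 'welcome',
        'action'     => 'index',
    ));

The nested parentheses make each URI segment optional, so a request with no controller or action falls through to the defaults: the welcome controller and its index action.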
Once all the application routes are set via the Route::set() method and the init.php files residing in modules, Request::instance() is invoked, setting the request loop into action. As the Request object processes the request, it looks through the routes until it finds the right controller to load. The Request object then instantiates the controller, passing the request to the controller for it to use.

The Controller::before() method is then called, acting much like a constructor. By being called first, the before() method allows any logic that needs to run before a controller action to execute. Once the before() method is complete, the object continues to the requested action, just as when a constructor finishes. The controller action, a method in the controller class, is then called, and once complete, it returns the request response. The action method is where the business logic for the request resides. Once the action is complete, the Controller::after() method is called, much like the destructor of a standard PHP class.

Because of the hierarchical structure of Kohana, any controller can initiate a new request, making it possible for other controllers to be loaded, invoking more controller actions, which generate request responses. Once all the requests have been fulfilled, Kohana renders the final request response.

The Kohana request flow can seem long and complex, but it can also be seen as very clean and organized. By using a front controller design pattern, all requests are handled by just one file: index.php. Each and every request handled by our applications begins with this one file. From there the application is bootstrapped, and then the controller designed to handle the specific request is found, executed, and displayed. Although, as we have seen, there is more happening under the hood, for most of our applications this simple way of looking at the request flow makes it easy to create powerful web applications using Kohana.

Using the Request object

The request flow in Kohana is interesting, and it is easy to see how it can be powerful at a high level, but the best way to understand HMVC and routing in Kohana is to look at some actual code and see the resulting outcome in real-world scenarios. Kohana's Request object determines the proper controller to invoke and acts as a wrapper for the response. If we look at the Template Controller that we are extending in our Application Controller for the case study site, we can follow the inheritance path back to Kohana's template controller and see the request response. One of the best ways to understand what is happening inside the framework is to drill down through the filesystem and look at the actual code; one of the great advantages of open source frameworks is the ability to read the code that makes the library run.

Opening the welcome controller, located at application/classes/controller/welcome.php, we see the following class declaration:

class Controller_Welcome extends Controller_Application

The first thing we see in the base controller class is that it extends another controller, and then we see the declaration of an object property named $request. This variable holds the Kohana_Request object, the class that created the original controller call. In the constructor, we can see that the Kohana_Request object is type-hinted for the argument, and that the $request object property is set on instantiation.
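That constructor is only a few lines long; a simplified sketch of it (paraphrased from system/classes/kohana/controller.php, so check your own copy for the exact code) looks like this:

public function __construct(Request $request)
{
    // Store the request that created this controller so that
    // before(), the action methods, and after() can all use it
    $this->request = $request;
}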
All that is left in the base Controller class is the before() and after() methods, with no functionality. We can then open our Application Controller, located at application/classes/controller/application.php. The class declaration in this controller looks like this:

abstract class Controller_Application extends Controller_Template

In this file, we can see the before() method loading the template view into the template variable and, in the after() method, the Request object ($this->request) having its response body set, ready to be rendered. This class, in turn, extends the Template Controller.

The Template Controller is part of the Kohana system. Since we have not created any template controllers in our application or modules, the original template controller that ships with Kohana is being loaded. It is located at system/classes/controller/template.php, and its class declaration looks like this:

abstract class Controller_Template extends Kohana_Controller_Template

Here things take a twist, and for the first time we have to leave the /classes/controller/ structure to find an inherited class. The Kohana_Controller_Template class lives in system/classes/kohana/controller/template.php. The class is fairly short and simple, and it has this class declaration:

abstract class Kohana_Controller_Template extends Controller

This controller (system/classes/controller.php) is the base controller that all requested controller classes must extend. Examining this class lets us see the Request class enter the controller loop, and the template view get sent to the Request object as the response. Walking through the Welcome Controller's heritage is a great way of seeing how the Request object loads a controller, and how the parent classes all contribute to the request flow in Kohana. It may seem like pointless complexity at first; however, the benefits of transparent extension are very powerful, and Kohana makes the mechanisms work behind the scenes.

But one question still remains: how is the Request object aware of the controllers and routes? Although dissecting Kohana's Request class could be an article unto itself, a lot can be answered by looking at the constructor in the system/classes/kohana/request.php file. The constructor is given the URI, and then stores the object properties that the object will later use to execute the request. The Request class also has a couple of key methods that can be very helpful: Request::controller(), which returns the name of the controller for the request, and Request::action(), which similarly returns the name of the action. After loading the routes, the constructor iterates through them, determines any matches for the URI, and sets the controller, action, and parameters for the route. If there are no matches for optional segments of the route, the defaults are stored in the object properties.

When the Request object's execute() method is called, the request is processed and the response is returned. This happens by first processing the before() method, then the controller action being requested, then the after() method for the class, followed by any other requests until all have completed. The topic of initiating a request from within a controller has arisen a few times, and it is the best way to illustrate the hierarchical aspect of HMVC. Let's take a look at this process by creating a controller method that initiates a new request, and see how it completes.
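The excerpt breaks off here, but to sketch the idea just described: a controller action can spawn an internal (HMVC) sub-request and embed its output. The controller name and URI below are hypothetical, and the exact return value of execute() varies between Kohana 3.x releases, so treat this as the shape of the technique rather than copy-and-paste code:

class Controller_Dashboard extends Controller_Application
{
    public function action_index()
    {
        // An internal sub-request: another controller is located and run
        // through its own before()/action/after() cycle, and its rendered
        // output is captured here instead of being sent to the browser
        $widget = Request::factory('widgets/recent')->execute();

        $this->template->content = $widget;
    }
}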
Creating Functions and Operations

Packt
10 Aug 2015
18 min read
In this article by Alex Libby, author of the book Sass Essentials, we will learn how to use operators and functions to construct a whole site theme from just a handful of colors, or to define font sizes for the entire site from a single value. Okay, so let's get started!

(For more resources related to this topic, see here.)

Creating values using functions and operators

Imagine a scenario where you're creating a masterpiece that has taken days to put together, with a stunning choice of colors that has taken almost as long as the building of the project, and yet the client isn't happy with the color choice. What to do? At this point, I'm sure that while you're all smiles to the customer, you'd be quietly cursing the amount of work they've just landed you with, this late on a Friday. Sound familiar? I'll bet you scrap the colors and go back to poring over lots of color combinations, right? It'll work, but it will surely take a lot more time and effort.

There's a better way to achieve this: instead of creating or choosing lots of different colors, we only need to choose one and create all of the others automatically. How? Easy! When working with Sass, we can use a little bit of simple math to build our color palette. One of the key tenets of Sass is its ability to work out values dynamically, using nothing more than a little simple math; we could define font sizes from H1 to H6 automatically, create new shades of colors, or even work out the right percentages to use when creating responsive sites! We will take a look at each of these examples throughout the article, but for now, let's focus on the principles of creating our colors using Sass.

Creating colors using functions

We can use simple math and functions to create just about any type of value, but colors are where these two really come into their own. The great thing about Sass is that we can work out the hex value for just about any color we want to, from a limited range of starting colors. This can easily be done using techniques such as adding two values together, or subtracting one value from another.

To get a feel of how the color operators work, head over to the official documentation at http://sass-lang.com/documentation/file.SASS_REFERENCE.html#color_operations; it is worth reading!

There is nothing wrong with adding or subtracting hex values directly; it's a perfectly valid option, and it will result in a valid hex code when compiled. But would you know, just by reading such a sum, that both values involved were actually deep shades of blue? Therein lies the benefit of using functions: instead of using math operators, we can simply say this:

p {
  color: darken(#010203, 10%);
}

This, I am sure you will agree, is easier to understand as well as being infinitely more readable! The use of functions opens up a world of opportunities for us. We can use any one of an array of functions, such as lighten(), darken(), mix(), or adjust-hue(), to get a feel of how easy it is to work out new values.

If we head over to http://jackiebalzer.com/color, we can see that the author has exploded a number of Sass (and Compass, which we will use later) functions, so we can see what colors are displayed, along with their numerical values, as soon as we change the initial two values. Okay, we could play with the site ad infinitum, but I feel a demo coming on: to explore the effects of using the color functions to generate new colors, let's construct a simple demo.
For this exercise, we will dig up a copy of the colorvariables demo and modify it so that we're only assigning one color variable, not six. I will assume you are using Koala to compile the code. Okay, let's make a start:

1. Start by opening a copy of colorvariables.scss in your favorite text editor and removing lines 1 to 15 from the start of the file.
2. Next, add the following lines, so that we are left with this at the start of the file:

$darkRed: #a43;
$white: #fff;
$black: #000;

$colorBox1: $darkRed;
$colorBox2: lighten($darkRed, 30%);
$colorBox3: adjust-hue($darkRed, 35%);
$colorBox4: complement($darkRed);
$colorBox5: saturate($darkRed, 30%);
$colorBox6: adjust-color($darkRed, $green: 25);

3. Save the file as colorfunctions.scss. We need a copy of the markup file to go with this code, so go ahead and extract a copy of colorvariables.html from the code download, saving it as colorfunctions.html in the root of our project area. Don't forget to change the link for the CSS file within to colorfunctions.css!
4. Fire up Koala, then drag and drop colorfunctions.scss from our project area over the main part of the application window to add it to the list.
5. Right-click on the file name and select Compile, and then wait for it to show Success in a green information box.

If we preview the results of our work in a browser, we should see six colored boxes appear. At this point, we have a working set of colors; granted, we might have to work a little on making sure that they all work together, but the key point here is that we have only specified one color, and the others are all calculated automatically by Sass. Now that we are only defining one color by default, how easy is it to change the colors in use? Well, it is a cinch to do so. Let's try it out with the help of the SassMeister playground.

Changing the colors in use

We can easily change the values used in the code and refresh the browser after each change. However, this isn't a quick way to figure out which colors work; to get a quicker response, there is an easier way: use the online Sass playground at http://www.sassmeister.com. This is the perfect way to try out different colors, as the site automatically recompiles the code and updates the result as soon as we make a change. Try copying the HTML and SCSS code into the play area to view the result.

All the box colors work on the principle that we take a base color (in this case, $darkRed, or #a43), then adjust it either by a percentage or a numeric value. When compiled, Sass calculates what the new value should be and uses this in the CSS. Take, for example, the color used for #box6, which compiles to a dark orange with a brown tone.

To get a feel of some of the functions that we can use to create new colors (or shades of existing colors), take a look at the main documentation at http://sass-lang.com/documentation/Sass/Script/Functions.html, or https://www.makerscabin.com/web/sass/learn/colors. These sites list a variety of different functions that we can use to create our masterpiece. We can also extend the functions that we have in Sass with the help of custom functions, such as the toolbox available at https://github.com/at-import/color-schemer; this may be worth a look. In our demo, we used a dark red color as our base.
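To see the knock-on effect for yourself, change just the base value and recompile; every derived box color shifts to match. The replacement hex value here is an arbitrary illustration:

// Swap the single base color; $colorBox1 through $colorBox6 all
// recalculate from it on the next compile. (The variable name no
// longer matches its value, of course; in real code we would use a
// neutral name such as $baseColor instead.)
$darkRed: #2a6070;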
If we're ever stuck for ideas on colors, or want to get the right HEX, RGB(A), or even HSL(A) codes, there are dozens of sites online that will give us these values. Here are a couple that you can try:

- HSLa Explorer, by Chris Coyier, available at https://css-tricks.com/examples/HSLaExplorer/
- HSL Color Picker, by Brandon Mathis, available at http://hslpicker.com/

If we know the name but want to get a Sass value, we can always try the list of 1,500+ colors at https://github.com/FearMediocrity/sass-color-palettes/blob/master/colors.scss. What's more, the list can easily be imported into our CSS, although it would make better sense to simply copy the chosen values into our Sass file and compile from there instead.

Mixing colors

One thing we've not discussed, but which is equally useful, is that we are not limited to using functions on their own; we can mix and match any number of functions to produce our colors. A great way to choose colors, and to get the appropriate blend of functions to use, is http://sassme.arc90.com/. Using the available sliders, we can choose our color and get the appropriate functions to use in our Sass code.

In most cases, we will likely only need to use two functions (a mix of darken and adjust-hue, for example); if we are using more than two or three functions, we should perhaps rethink our approach! In such cases, a better alternative is to use Sass's mix() function, as follows:

$white: #fff;
$berry: hsl(267, 100%, 35%);

p {
  color: mix($white, $berry, 0.7);
}

...which will give the following valid CSS:

p {
  color: #5101b3;
}

This is a useful alternative to the approach we've just touched on; after all, would you understand what adjust_hue(desaturate(darken(#db4e29, 2), 41), 67) would give as a color? Granted, it is something of an extreme calculation; nonetheless, it is technically valid. If we use mix() instead, it matches more closely what we might do when mixing paint, for example. After all, how else would we lighten its color, if not by adding a light-colored paint?

Okay, let's move on. What's next? I hear you ask. Well, so far we've used core Sass for all our functions, but it's time to go a little further afield. Let's take a look at how you can use external libraries to add extra functionality. In our next demo, we're going to introduce Compass, which you will often see being used with Sass.

Using an external library

So far, we've looked at using core Sass functions to produce our colors. Nothing wrong with this; the question is, can we take things a step further? Absolutely. Once we've gained some experience with these functions, we can introduce custom functions (or helpers) that expand what we can do. A great library for this purpose is Compass, available at http://www.compass-style.org; we'll make use of it to change the colors we created for our earlier boxes demo, in the section Creating colors using functions.

Compass is a CSS authoring framework which provides extra mixins and reusable patterns to add extra functionality to Sass. In our demo, we're using shade(), which is one of the several color helpers provided by the Compass library. Let's make a start:

1. We're using Compass in this demo, so we'll begin by installing the library. To do this, fire up Command Prompt, then navigate to our project area.
2. We need to make sure that our RubyGems system software is up to date, so at the Command Prompt, enter the following and then press Enter:

gem update --system

3. Next, we'll install Compass itself. At the prompt, enter this command and press Enter:

gem install compass

4. Compass works best when we get it to create a project shell (or template) for us. To do this, first browse to http://www.compass-style.org/install, and then enter the details of your project in the "Tell us about your project…" area, leaving anything in grey text blank. This produces a set of commands; enter each at the Command Prompt, pressing Enter each time.
5. Navigate back to the Command Prompt. We need to compile our SCSS code, so go ahead and enter this command at the prompt (or copy and paste it), then press Enter:

compass watch --sourcemap

6. Next, extract a copy of the colorlibrary folder from the code download, and save it to the project area.
7. In colorlibrary.scss, comment out the existing line for $backgrd_box6_color, and add the following immediately below it:

$backgrd_box6_color: shade($backgrd_box5_color, 25%);

8. Save the changes to colorlibrary.scss. If all is well, Compass's watch facility should kick in and recompile the code automatically. To verify that this has been done, look in the css subfolder of the colorlibrary folder, and you should see both the compiled CSS and the source map files present.

If you find Compass compiles files into unexpected folders, try using the following command to specify the source and destination folders when compiling:

compass watch --sass-dir sass --css-dir css

If all is well, when previewing the results in a browser window we will see the boxes again, with Box 6 now a deep shade of red, verging on brown. To confirm that the change has taken place as required, we can fire up a DOM inspector such as Firebug; a quick check confirms that the color has indeed changed. If we explore even further, we can see that the compiled code shows the original line for Box 6 commented out, and that we're using the new function from the Compass helper library.

This is a great way to push the boundaries of what we can do when creating colors. To learn more about using the Compass helper functions, it's worth exploring the official documentation at http://compass-style.org/reference/compass/helpers/colors/.

We used the shade() function in our code, which darkens the color used. There is a key difference between this and using something such as darken() to perform the same change. To get a feel of the difference, take a look at the article on the CreativeBloq website at http://www.creativebloq.com/css3/colour-theming-sass-and-compass-6135593, which explains it very well.

The documentation is a little lacking in terms of how to use the color helpers; the key is not to treat them as if they were normal mixins or functions, but simply to reference them in our code. To explore more about how to use these functions, take a look at the article by Antti Hiljá at http://clubmate.fi/how-to-use-the-compass-helper-functions/. We can, of course, create mixins to build palettes; for a more complex example of how such a mixin can be created using Compass, take a look at http://www.zingdesign.com/how-to-generate-a-colour-palette-with-compass/.

Okay, let's move on. So far, we've talked about using functions to manipulate colors; the flip side is that we are likely to use operators to manipulate values such as font sizes.
For now, let's change tack and take a look at creating new values for changing font sizes.

Changing font sizes using operators

We have already talked about using functions to create practically any value. We've seen how to do it with colors; we can apply similar principles to creating font sizes too. In this case, we set a base font size (in the same way that we set a base color), and then simply increase or decrease font sizes as desired. In this instance, we won't use functions but standard math operators, such as add, subtract, or divide. When working with these operators, there are a few points to remember:

- Sass math functions preserve units; this means we can't operate on numbers with different units, such as adding a px value to a rem value, but we can work with numbers that can be converted to the same format, such as inches to centimeters.
- If we multiply two values with the same units, this will produce square units (that is, 10px * 10px == 100px * px). At the same time, px * px will throw an error, as it is an invalid unit in CSS.
- There are some quirks when working with / as a division operator: in most instances, it is used to separate two values, such as defining a pair of font size values. However, if the value is surrounded by parentheses, used as part of another arithmetic expression, or stored in a variable, then it will be treated as a division operator. For full details, it is worth reading the relevant section in the official documentation at http://sass-lang.com/documentation/file.Sass_REFERENCE.html#division-and-slash.

With these in mind, let's create a simple demo; a perfect use for Sass is to automatically work out sizes from H1 through to H6. We could just do this in a simple text editor, but this time, let's break with tradition and build our demo directly in a session on http://www.sassmeister.com. We can then play around with the values set and see the effects of the changes immediately. If we're happy with the results of our work, we can copy the final version into a text editor and save the files as standard SCSS (or CSS) files.

Let's begin by browsing to http://www.sassmeister.com and adding the following markup to the HTML window:

<html>
<head>
  <meta charset="utf-8" />
  <title>Demo: Assigning colors using variables</title>
  <link rel="stylesheet" type="text/css" href="css/colorvariables.css">
</head>
<body>
  <h1>The cat sat on the mat</h1>
  <h2>The cat sat on the mat</h2>
  <h3>The cat sat on the mat</h3>
  <h4>The cat sat on the mat</h4>
  <h5>The cat sat on the mat</h5>
  <h6>The cat sat on the mat</h6>
</body>
</html>

Next, add the following to the SCSS window. We first set a base value of 3.0, followed by a starting color of #b26d61, a dark, moderate red:

$baseSize: 3.0;
$baseColor: #b26d61;

We now need to add our H1 to H6 styles. (The rem mixin for font sizing mentioned in the original demo was created by Chris Coyier; see https://css-tricks.com/snippets/css/less-mixin-for-rem-font-sizing/.)
We first set the font size, followed by the font color, using either the base color set earlier or a function to produce a different shade:

h1 {
  font-size: $baseSize;
  color: $baseColor;
}

h2 {
  font-size: ($baseSize - 0.2);
  color: darken($baseColor, 20%);
}

h3 {
  font-size: ($baseSize - 0.4);
  color: lighten($baseColor, 10%);
}

h4 {
  font-size: ($baseSize - 0.6);
  color: saturate($baseColor, 20%);
}

h5 {
  font-size: ($baseSize - 0.8);
  color: $baseColor - 111;
}

h6 {
  font-size: ($baseSize - 1.0);
  color: rgb(red($baseColor) + 10, 23, 145);
}

SassMeister will automatically compile the code to produce valid CSS. Try changing the base size of 3.0 to a different value; using http://www.sassmeister.com, we can instantly see how this affects the overall size of each heading. We simply subtract an increasing value from $baseSize and use the result as the font size for each heading level; if pixel values were preferred, we could multiply the base variable and concatenate the appropriate unit using a plus (+) symbol.

You can see a similar example by Andy Baudoin as a CodePen, at http://codepen.io/baudoin/pen/HdliD/. He makes good use of nesting to display the color and strength of each shade. Note that it uses a little JavaScript to add the text of the color that each line represents; this can be ignored, as it does not affect the Sass used in the demo.

The great thing about using a site such as SassMeister is that we can play around with values and immediately see the results. For more details on using number operations in Sass, browse to the official documentation at http://sass-lang.com/documentation/file.Sass_REFERENCE.html#number_operations. Okay, onwards we go. Let's turn our attention to creating something a little more substantial; we're going to create a complete site theme using the power of Sass and a few simple calculations.

Summary

Phew! What a tour! One of the key concepts of Sass is the use of functions and operators to create values, so let's take a moment to recap what we have covered throughout this article. We kicked off with a look at creating color values using functions, before discovering how we can mix and match different functions to create different shades, or use external libraries to add extra functionality to Sass. We then moved on to another key use of functions, with a look at defining different font sizes using standard math operators.

Resources for Article:

Further resources on this subject:
Nesting, Extend, Placeholders, and Mixins [article]
Implementation of SASS [article]
Constructing Common UI Widgets [article]
Understanding the DotNetNuke Core Architecture

Packt
06 Apr 2010
13 min read
Architecture overview

As opposed to traditional web applications, which may rely on a multitude of web pages to deliver content, DotNetNuke uses a single main page called Default.aspx. The content for this page is generated dynamically by using a tabID value to retrieve the skin and modules needed to build the requested page from the DotNetNuke database.

Before we move on, we should discuss what is meant by a tab and a page. As you read this article, you will notice the word "tab" is sometimes used when referring to pages in your DotNetNuke portal. In the original IBuySpy application, pages were referred to as tabs because they resembled tabs when added to the page. The IBuySpy application, a skeleton ASP.NET framework application, was created by Microsoft to demonstrate ASP.NET features, and DotNetNuke was originally derived from it. This naming continued in the early versions of the DotNetNuke project. Starting with version 3.0, and continuing with version 5.2.x, there has been an ongoing effort to rename most of these instances to reflect what they really are: pages. Most references to "tabs" have been changed to "pages", but the conversion is not complete. For this reason, you will see both tabs and pages in the database, in the project files, and in this text, and we will use the terms interchangeably as we look into the core architecture of DNN.

We will begin with a general overview of what happens when a user requests a page on your DotNetNuke portal. The process for rendering a page works like this: a user navigates to a page on your portal; this calls the Default.aspx page, passing the tabid parameter in the querystring to let the application identify the page being requested. The URL http://www.dotnetnuke.com/Default.aspx?tabid=476 demonstrates this.

DotNetNuke 3.2 introduced URL rewriting. This takes the querystring shown above and rewrites it into a format that helps increase search engine hits. We will cover the HTTP module responsible for this in more detail later in this article. The rewritten URL would resemble http://localhost/DotNetNuke/Home.aspx, assuming that the page name for tabid 476 is Home. While referring to URLs in this article, we will use the non-rewritten version. URL rewriting can be turned off in the friendly URL section of the Host Settings page.

The querystring value (?tabid=476) is sent to the database, where the information required for the page is retrieved, as shown in the following diagram.

The portal that the user is accessing can be determined in a number of ways, but as you can see from the Tabs table (see the following screenshot), each page/tab contains a reference to the portal it belongs to in the PortalID field. Once the server has a reference to the page that the user requested (using the tabID), it can determine which modules belong to that page. Although there are many more tables involved in this process, you can see that these tables hold not only the page and modules needed to generate the page, but also which pane to place them on (PaneName) and which container skin to apply (ContainerSrc). All of this information is returned to the web server, and the Default.aspx page is constructed with it and returned to the user who requested it, along with the required modules and skins, as shown in the following diagram.
Now, this is of course a very general overview of the process, but as we work through this article, we will delve deeper into the code that makes this process work and, in the end, follow a request working its way through the framework to deliver a page to a user.

Diving into the core

There are over 160,000 lines of code in the DotNetNuke application, so there is no practical (or even possible) way to cover the entire code base. In this section, we will go in depth into what I believe are the main portions of the code base: the PortalSettings class, together with the companion classes found in the portals folder; the web.config file, including the HTTP modules and providers; and the Global.asax and Globals.vb files.

We will start our discussion of the core with two objects that play an integral part in the construction of the architecture. The Context object and the PortalSettings class will both be referred to quite often in the code, so it is important that you have a good understanding of what they do.

Using the Context object in your application

ASP.NET has taken intrinsic objects like the Request and the Application objects and wrapped them, together with other relevant items, into an intrinsic object called Context. The Context object (HttpContext) can be found in the System.Web namespace. The following list describes some of the objects that make up the HttpContext object:

- Application: Gets the HttpApplicationState object for the current HTTP request.
- Cache: Gets the Cache object for the current HTTP request.
- Current: Gets the HttpContext object for the current HTTP request. This is a static (Shared in VB) property of the HttpContext class, through which you access all the other instance properties discussed here, which together enable you to process and respond to an HTTP request.
- Items: Gets a key-value collection that can be used to organize and share data between an IHttpModule and an IHttpHandler during an HTTP request.
- Request: Gets the HttpRequest object for the current HTTP request. This is used to extract data submitted by the client, information about the client itself (for example, its IP address), and details of the current request.
- Response: Gets the HttpResponse object for the current HTTP response. This is used to send data to the client, together with other response-related information such as headers, cacheability, redirect information, and so on.
- Server: Gets the HttpServerUtility object that provides methods used in processing web requests.
- Session: Gets the HttpSessionState instance for the current HTTP request.
- User: Gets or sets security information for the current HTTP request.

Notice that most of the descriptions talk about the "current" request or response object. The Global.asax file, which we will look at soon, reacts to every single request made to your application, and so it is only concerned with whoever is "currently" accessing a resource. The HttpContext object contains all HTTP-specific information about an individual HTTP request. In particular, the HttpContext.Current property gives you the context for the current request from anywhere in the application domain. The DotNetNuke core relies on HttpContext.Current to hold everything from the application name to the portal settings, and through this it makes them available to you.

The PortalSettings class

The portal settings play a major role in the dynamic generation of your pages, and as such they will be referred to quite often in the other portions of the code.
The portal settings are represented by the PortalSettings class, which you will find in the Entities\Portal\PortalSettings.vb file. As you can see from the private variables in this class, most of what goes on in your portal will at some point need to access this object. It holds everything from the ID of the portal to the default language and, as we will see later, it is responsible for determining the skins and modules needed for each page.

Private _PortalId As Integer
Private _PortalName As String
Private _HomeDirectory As String
Private _LogoFile As String
Private _FooterText As String
Private _ExpiryDate As Date
Private _UserRegistration As Integer
Private _BannerAdvertising As Integer
Private _Currency As String
Private _AdministratorId As Integer
Private _Email As String
Private _HostFee As Single
Private _HostSpace As Integer
Private _PageQuota As Integer
Private _UserQuota As Integer
Private _AdministratorRoleId As Integer
Private _AdministratorRoleName As String
Private _RegisteredRoleId As Integer
Private _RegisteredRoleName As String
Private _Description As String
Private _KeyWords As String
Private _BackgroundFile As String
Private _GUID As Guid
Private _SiteLogHistory As Integer
Private _AdminTabId As Integer
Private _SuperTabId As Integer
Private _SplashTabId As Integer
Private _HomeTabId As Integer
Private _LoginTabId As Integer
Private _UserTabId As Integer
Private _DefaultLanguage As String
Private _TimeZoneOffset As Integer
Private _Version As String
Private _ActiveTab As TabInfo
Private _PortalAlias As PortalAliasInfo
Private _AdminContainer As SkinInfo
Private _AdminSkin As SkinInfo
Private _PortalContainer As SkinInfo
Private _PortalSkin As SkinInfo
Private _Users As Integer
Private _Pages As Integer

The PortalSettings class itself is simple. It is filled by using one of the constructors that accept one or more parameters. These constructors then call the private GetPortalSettings method. The method is passed a tabID and a PortalInfo object. You already know that the tabID represents the ID of the page being requested, but the PortalInfo object is something new. This class can be found in the same folder as the PortalSettings class, and it contains the basic information about a portal, such as PortalID, PortalName, and Description. From the PortalSettings object, however, we can retrieve all the information associated with the portal. If you look at the code inside the constructors, you will see that the PortalController object is used to retrieve the PortalInfo object.

The PortalInfo object is saved in the cache for the time specified on the Host Settings page. A drop-down box on the Host Settings page (DesktopModules\Admin\HostSettings\HostSettings.ascx) is used to set the cache level:

- No caching: 0
- Light caching: 1
- Moderate caching: 3
- Heavy caching: 6

The value in this dropdown ranges from 0 to 6; the code in the DataCache object takes the value set in the drop-down and multiplies it by 20 to determine the cache duration. Once the cache time is set, the method checks whether the PortalSettings object already resides in the cache. Retrieving these settings from the database for every request would cause your site to run slowly, so placing them in a cache for the duration you select helps increase the speed of your site. Recent versions of DotNetNuke have focused heavily on providing an extensive caching service.
An example of this can be seen in the following code:

Dim cacheKey As String = String.Format(DataCache.PortalCacheKey, PortalId.ToString())
Return CBO.GetCachedObject(Of PortalInfo)(New CacheItemArgs(cacheKey, _
    DataCache.PortalCacheTimeOut, DataCache.PortalCachePriority, PortalId), _
    AddressOf GetPortalCallback)

We can see in the previous code that the CBO object is used to return an object from the cache. CBO is an object that appears frequently throughout the DotNetNuke core; its primary function is to return a populated business object. This is done in several ways, using different methods provided by CBO; some methods map data from an IDataReader to the properties of a business object. In this example, however, the GetCachedObject method handles the logic needed to determine whether the object should be retrieved from the cache or from the database. If the object is not already cached, it will use the GetPortalCallback method passed to it to retrieve the portal settings from the database. This method is located in the PortalController class (Entities\Portal\PortalController.vb) and is responsible for retrieving the portal information from the database:

Dim portalID As Integer = DirectCast(cacheItemArgs.ParamList(0), Integer)
Return CBO.FillObject(Of PortalInfo)(DataProvider.Instance _
    .GetPortal(portalID, Entities.Host.Host.ContentLocale.ToString))

This fills the PortalInfo object (Entities\Portal\PortalInfo.vb), which, as mentioned, holds the portal information. This object in turn is returned to the GetCachedObject method. Once this is complete, the object is cached, to help avoid calling the database during the next request for the portal information. There is also a section of code (not shown) that verifies whether the object was successfully stored in the cache and adds an entry to the event log if the item failed to be cached:

' if we retrieved a valid object and we are using caching
If objObject IsNot Nothing AndAlso timeOut > 0 Then
    ' save the object in the cache
    DataCache.SetCache(cacheItemArgs.CacheKey, objObject, _
        cacheItemArgs.CacheDependency, Cache.NoAbsoluteExpiration, _
        TimeSpan.FromMinutes(timeOut), cacheItemArgs.CachePriority, _
        cacheItemArgs.CacheCallback)
    ...
End If

After the portal settings are saved, the properties of the current tab are retrieved and populated in the ActiveTab property. The current tab is retrieved by using the tabID that was originally passed to the constructor. This is handled by the VerifyPortalTab method, and it is done by getting a list of all of the tabs for the current portal. Like the portal settings themselves, the tabs are saved in the cache to boost performance; the calls to the caching provider are this time handled by the TabController (Entities\Tabs\TabController.vb). In the VerifyPortalTab method, the code loops through all of the host and non-host tabs returned by the TabController until the current tab is located.
' find the tab in the portalTabs collection
If TabId <> Null.NullInteger Then
    If portalTabs.TryGetValue(TabId, objTab) Then
        'Check if Tab has been deleted (is in recycle bin)
        If Not (objTab.IsDeleted) Then
            Me.ActiveTab = objTab.Clone()
            isVerified = True
        End If
    End If
End If

' find the tab in the hostTabs collection
If Not isVerified AndAlso TabId <> Null.NullInteger Then
    If hostTabs.TryGetValue(TabId, objTab) Then
        'Check if Tab has been deleted (is in recycle bin)
        If Not (objTab.IsDeleted) Then
            Me.ActiveTab = objTab.Clone()
            isVerified = True
        End If
    End If
End If

If the tab was not found in either of these collections, the code attempts to use the splash page, home page, or the first page of the non-host pages. After the current tab is located, further handling of some of its properties is done back in the GetPortalSettings method. This includes formatting the path for the skin and default container used by the page, as well as collecting information on the modules placed on the page.

Me.ActiveTab.SkinSrc = _
    SkinController.FormatSkinSrc(Me.ActiveTab.SkinSrc, Me)
Me.ActiveTab.SkinPath = _
    SkinController.FormatSkinPath(Me.ActiveTab.SkinSrc)
...
For Each kvp As KeyValuePair(Of Integer, ModuleInfo) In _
    objModules.GetTabModules(Me.ActiveTab.TabID)
    ' clone the module object
    ' (to avoid creating an object reference to the data cache)
    Dim cloneModule As ModuleInfo = kvp.Value.Clone
    ' set custom properties
    If Null.IsNull(cloneModule.StartDate) Then
        cloneModule.StartDate = Date.MinValue
    End If
    If Null.IsNull(cloneModule.EndDate) Then
        cloneModule.EndDate = Date.MaxValue
    End If
    ' container
    If cloneModule.ContainerSrc = "" Then
        cloneModule.ContainerSrc = Me.ActiveTab.ContainerSrc
    End If
    cloneModule.ContainerSrc = _
        SkinController.FormatSkinSrc(cloneModule.ContainerSrc, Me)
    cloneModule.ContainerPath = _
        SkinController.FormatSkinPath(cloneModule.ContainerSrc)
    ' process tab panes
    If objPaneModules.ContainsKey(cloneModule.PaneName) = False Then
        objPaneModules.Add(cloneModule.PaneName, 0)
    End If
    cloneModule.PaneModuleCount = 0
    If Not cloneModule.IsDeleted Then
        objPaneModules(cloneModule.PaneName) = _
            objPaneModules(cloneModule.PaneName) + 1
        cloneModule.PaneModuleIndex = _
            objPaneModules(cloneModule.PaneName) - 1
    End If
    Me.ActiveTab.Modules.Add(cloneModule)
Next

We have now discussed some of the highlights of the PortalSettings object, as well as how it is populated with the information it contains. In doing so, we also saw a brief example of the robust caching service provided by DotNetNuke. You will see the PortalSettings class referenced many times in the core DotNetNuke code, so gaining a good understanding of how this class works will help you to become more familiar with the DNN code base. You will also find this object to be very helpful while building custom extensions for DotNetNuke. The caching provider itself is a large topic and reaches beyond the scope of this article. However, simply understanding how to work with it in the ways shown in these examples should satisfy the needs of most developers. It is important to note that you can get any type of object cached by DNN by passing a key for your object to the DataCache.SetCache method, together with the data and some optional arguments. While fetching the object back with DataCache.GetCache, you pass in the same key and check the result. A non-null (non-Nothing in VB) return value means you have fetched the object successfully from the cache; otherwise, you would need to fetch it from the database.
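It is worth seeing that round trip in isolation. The following minimal sketch caches a custom object with the same DataCache helper; the cache key and the LoadSettingsFromDatabase helper are hypothetical stand-ins for your own code:

Dim cacheKey As String = "MyModule:Settings"            ' hypothetical key
Dim settings As Object = DataCache.GetCache(cacheKey)
If settings Is Nothing Then
    ' cache miss: fetch the data from the database, then cache it
    settings = LoadSettingsFromDatabase()               ' hypothetical helper
    DataCache.SetCache(cacheKey, settings)
End If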
Creating an Application using ASP.NET MVC, AngularJS and ServiceStack

Packt
23 Jul 2014
8 min read
(For more resources related to this topic, see here.)

Routing considerations for ASP.NET MVC and AngularJS

In the previous example, we had to make changes to the ASP.NET MVC routing so that it ignores the requests handled by the ServiceStack framework. Since the AngularJS application currently uses hashbang URLs, we don't need to make any other changes to the ASP.NET MVC routing.

Changing an AngularJS application to use the HTML5 History API instead of hashbang URLs requires a lot more work, as it will conflict directly with the ASP.NET MVC routing. You need to set up IIS URL rewriting and use the URL Rewrite module for IIS 7 and higher, which is available at www.iis.net/downloads/microsoft/url-rewrite. AngularJS application routes have to be mapped using this module to the ASP.NET MVC view that hosts the client-side application. We also need to ensure that web service request paths are excluded from URL rewriting. You can explore some changes required for the HTML5 navigation mode in the project found in the Example2 folder from the source code for this article. The HTML5 History API is not supported in Internet Explorer 8 and 9.

Using ASP.NET bundling and minification features for AngularJS files

So far, we have referenced and included JavaScript and CSS files directly in the _Layout.cshtml file. This makes it difficult to reuse script references between different views, and the assets are not concatenated and minified when deployed to a production environment. Microsoft provides a NuGet package called Microsoft.AspNet.Web.Optimization that contains this essential functionality. When you create a new ASP.NET MVC project, it gets installed and configured with default options.

First, we need to add a new BundleConfig.cs file, which will define collections of scripts and style sheets under a virtual path, such as ~/bundles/app, that does not match a physical file. This file will contain the following code:

bundles.Add(new ScriptBundle("~/bundles/app").Include(
    "~/scripts/app/app.js",
    "~/scripts/app/services/*.js",
    "~/scripts/app/controllers/*.js"));

You can explore these changes in the project found in the Example3 folder from the source code for this article. If you take a look at the BundleConfig.cs file, you will see three script bundles and one style sheet bundle defined. Nothing is stopping you from defining only one script bundle instead, to reduce the resource requests further. We can now reference the bundles in the _Layout.cshtml file and replace the previous scripts with the following code:

@Scripts.Render("~/bundles/basejs")
@Scripts.Render("~/bundles/angular")
@Scripts.Render("~/bundles/app")

Each time we add a new file to a location like ~/scripts/app/services/, it will automatically be included in its bundle.
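For reference, a complete BundleConfig class along these lines might look like the following minimal sketch. The bundle names and file paths other than ~/bundles/app are illustrative assumptions (the article only names the app bundle explicitly), and the class is wired up from Application_Start with BundleConfig.RegisterBundles(BundleTable.Bundles):

using System.Web.Optimization;

public class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        // base third-party scripts (illustrative bundle name and contents)
        bundles.Add(new ScriptBundle("~/bundles/basejs").Include(
            "~/scripts/jquery-{version}.js"));

        // AngularJS scripts (illustrative contents)
        bundles.Add(new ScriptBundle("~/bundles/angular").Include(
            "~/scripts/angular.js"));

        // application scripts; the wildcards pick up newly added files
        bundles.Add(new ScriptBundle("~/bundles/app").Include(
            "~/scripts/app/app.js",
            "~/scripts/app/services/*.js",
            "~/scripts/app/controllers/*.js"));

        // the single style sheet bundle (illustrative name)
        bundles.Add(new StyleBundle("~/bundles/css").Include(
            "~/content/*.css"));
    }
}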
If we add the following line of code to the BundleConfig.RegisterBundles method, the scripts and style sheets defined in a bundle will be minified (all of the whitespace, line separators, and comments will be removed) and concatenated into a single file when we run the application:

BundleTable.EnableOptimizations = true;

If we take a look at the page source, the script section now looks like the following code:

<script src="/bundles/basejs?v=bWXds_q0E1qezGAjF9o48iD8-hlMNv7nlAONwLLM0Wo1"></script>
<script src="/bundles/angular?v=k-PtTeaKyBiBwT4gVnEq9YTPNruD0u7n13IOEzGTvfw1"></script>
<script src="/bundles/app?v=OKa5fFQWjXSQCNcBuWm9FJLcPFS8hGM6uq1SIdZNXWc1"></script>

Using this process, the previous separate requests for each script or style sheet file are reduced to a request for one or more bundles that are much smaller in content due to concatenation and minification. For convenience, there is a new EnableOptimizations value in web.config that will enable or disable the concatenation and minification of the asset bundles.

Securing the AngularJS application

We previously discussed that we need to ensure that all browser requests are secured and validated on the server for specific scenarios. Any browser request can be manipulated and changed, even unintentionally, so we cannot rely on client-side validation alone. When discussing securing an AngularJS application, there are a couple of alternatives available, of which I'll mention the following:

- You can use client-side authentication and employ a web service call to authenticate the current user. You can create a time-limited authentication token that will be passed with each data request. This approach involves additional code in the AngularJS application to handle authentication.
- You can rely on server-side authentication and use an ASP.NET MVC view that will handle any unauthenticated request. This view will redirect to the view that hosts the AngularJS application only when the authentication is successful. The AngularJS application will implicitly use an authentication cookie that is set on the server side, and it does not need any additional code to handle authentication.

I prefer server-side authentication, as it can be reused with other server-side views and reduces the code required to implement it on both the client side and the server side. We can implement server-side authentication in at least two ways, as follows:

- We can use the ASP.NET Identity system or the older ASP.NET Membership system for scenarios where we need to integrate with an existing application
- We can use built-in ServiceStack authentication features, which have a wide range of options with support for many authentication providers. This approach has the benefit that we can add a set of web service methods that can be used for authentication outside of the ASP.NET MVC context.

The last approach ensures the best integration between ASP.NET MVC and ServiceStack, and it allows us to introduce a ServiceStack NuGet package that provides new productivity benefits for our sample application.

Using the ServiceStack.Mvc library

ServiceStack has a library that allows deeper integration with ASP.NET MVC through the ServiceStack.Mvc NuGet package. This library provides access to the ServiceStack dependency injection system for ASP.NET MVC applications. It also introduces a new base controller class called ServiceStackController; this can be used by ASP.NET MVC controllers to gain access to the ServiceStack caching, session, and authentication infrastructures.
To install this package, you need to run the following command in the NuGet Package Manager Console:

Install-Package ServiceStack.Mvc -Version 3.9.71

The following line needs to be added to the AppHost.Configure method, and it will register a ServiceStack controller factory class for ASP.NET MVC:

ControllerBuilder.Current.SetControllerFactory(new FunqControllerFactory(container));

The ControllerBuilder.Current.SetControllerFactory method is an ASP.NET MVC extension point that allows the replacement of its DefaultControllerFactory class with a custom one. This class is tasked with matching requests with controllers, among other responsibilities. The FunqControllerFactory class provided in the new NuGet package inherits the DefaultControllerFactory class and ensures that all controllers that have dependencies managed by the ServiceStack dependency injection system will be resolved at application runtime. To exemplify this, the BicycleRepository class is now referenced in the HomeController class, as shown in the following code:

public class HomeController : Controller
{
    public BicycleRepository BicycleRepository { get; set; }

    //
    // GET: /Home/
    public ActionResult Index()
    {
        ViewBag.BicyclesCount = BicycleRepository.GetAll().Count();
        return View();
    }
}

The application menu now displays the current number of bicycles as initialized in the BicycleRepository class. If we add a new bicycle and refresh the browser page, the menu bicycle count is updated. This highlights the fact that the ASP.NET MVC application uses the same BicycleRepository instance as the ServiceStack web services. You can explore this example in the project found in the Example4 folder from the source code for this article.

Using the ServiceStack.Mvc library, we have reached a new milestone by bridging ASP.NET MVC controllers with ServiceStack services. In the next section, we will effectively transition to a single server-side application with unified caching, session, and authentication infrastructures.

The building blocks of the ServiceStack security infrastructure

ServiceStack has built-in, optional authentication and authorization provided by its AuthFeature plugin, which builds on two other important components, as follows:

- Caching: Every service or controller powered by ServiceStack has optional access to an ICacheClient interface instance that provides cache-related methods. The interface needs to be registered as an instance of one of the many caching providers available: an in-memory cache, a relational database cache, a cache based on a key-value data store using Redis, a memcached-based cache, a Microsoft Azure cache, and even a cache based on Amazon DynamoDB.
- Sessions: These are enabled by the SessionFeature ServiceStack plugin and rely on the caching component when the AuthFeature plugin is not enabled. Every service or controller powered by ServiceStack has an ISession property that provides read and write access to the session data. Each ServiceStack request automatically has two cookies set: an ss-id cookie, which is a regular session cookie, and an ss-pid cookie, which is a permanent cookie with an expiry date set far in the future. You can also gain access to a typed session as part of the AuthFeature plugin, which will be explored next.
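As a hedged sketch of how these building blocks are typically wired together in an AppHost for ServiceStack 3.x (the in-memory cache and the credentials provider are illustrative choices; other cache and authentication providers are registered the same way):

public override void Configure(Funq.Container container)
{
    // register a cache client; sessions and authentication build on it
    container.Register<ICacheClient>(new MemoryCacheClient());

    // enable credentials-based (username/password) authentication;
    // AuthFeature also enables session support behind the scenes
    Plugins.Add(new AuthFeature(() => new AuthUserSession(),
        new IAuthProvider[] { new CredentialsAuthProvider() }));
}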
ASP.NET and jQuery: how to create rich content

Packt
27 Apr 2011
7 min read
In a nutshell, the jQuery UI library provides the following:

- Completely configurable widgets such as accordion, tabs, progressbar, datepicker, slider, autocomplete, dialog, and button
- Interactive features such as draggable, droppable, resizable, selectable, and sortable
- Advanced animation effects such as show, hide, toggle, blind, bounce, explode, fade, highlight, pulsate, puff, scale, and slide
- Customisable themes to suit the look and feel of your website

Using jQuery and ASP.NET together

In this article, we will primarily take a look at integrating jQuery UI with ASP.NET to build rich content quickly and easily.

Read more: ASP.NET: Using jQuery UI Widgets

Getting started

Let's start by creating a new ASP.NET website, Chapter9, in Visual Studio. Go to the download page of jQuery UI at http://jqueryui.com/download, which allows customizable downloads based on the features required in the web application. For the purpose of this article, we will download the default build. jQuery UI allows various custom themes; we will select the Sunny theme for our project. Save the downloaded file. The download basically consists of the following:

- css folder consisting of the theme files
- development-bundle folder consisting of demos, documents, raw script files, and so on
- js folder consisting of the minified versions of the jQuery library and jQuery UI

Save the css folder mentioned earlier in the current project. Save the minified versions of jQuery UI and the jQuery library in a script folder, js, in the project. In addition to including the jQuery library on ASP.NET pages, also include the UI library as shown:

<script src="js/jquery-1.4.1.js" type="text/javascript"></script>
<script src="js/jquery-ui-1.8.9.custom.min.js" type="text/javascript"></script>

Include the downloaded theme on the aspx pages as follows:

<link href="css/sunny/jquery-ui-1.8.9.custom.css" rel="stylesheet" type="text/css" />

Now let's move on to the recipes and explore some of the powerful functionalities of jQuery UI.

Creating an accordion control

The jQuery UI accordion widget allows the creation of collapsible panels on the page without the need for a page refresh. Using an accordion control, a single panel is displayed at a time while the remaining panels are hidden.

Getting ready

Create a new web form, Recipe1.aspx, in the current project. Add a main content panel to the page. Within the main panel, add pairs of headers and subpanels as shown next:

<asp:Panel id="contentArea" runat="server">
  <h3><a href="#">Section 1</a></h3>
  <asp:Panel ID="Panel1" runat="server">
    Lorem ipsum dolor sit amet, consectetur adipisicing elit, sed do
    eiusmod tempor incididunt ut labore et dolore magna aliqua.
  </asp:Panel>
  <h3><a href="#">Section 2</a></h3>
  <asp:Panel ID="Panel2" runat="server">
    Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris
    nisi ut aliquip ex ea commodo consequat.
  </asp:Panel>
  <h3><a href="#">Section 3</a></h3>
  <asp:Panel ID="Panel3" runat="server">
    Duis aute irure dolor in reprehenderit in voluptate velit esse
    cillum dolore eu fugiat nulla pariatur.
  </asp:Panel>
  <h3><a href="#">Section 4</a></h3>
  <asp:Panel ID="Panel4" runat="server">
    Excepteur sint occaecat cupidatat non proident, sunt in culpa qui
    officia deserunt mollit anim id est laborum.
  </asp:Panel>
</asp:Panel>

Add some styling to the main content panel as required:

#contentArea
{
  width: 300px;
  height: 100%;
}

Our accordion markup is now ready.
We will now transform this markup into an accordion control using the functionality of jQuery UI.

How to do it...

In the document.ready() function of the jQuery script block, apply the accordion() method to the main content panel:

$("#contentArea").accordion();

Thus, the complete jQuery UI solution for the problem at hand is as follows:

<script language="javascript" type="text/javascript">
  $(document).ready(function(){
    $("#contentArea").accordion();
  });
</script>

How it works...

Run the web page. The accordion control is displayed. Click on the respective panel headers to display the required panels. Note that the accordion control only displays the active panel at a time; the remaining panels are hidden from the user.

There's more...

For detailed documentation on the jQuery UI accordion widget, please visit http://jqueryui.com/demos/accordion/.

Creating a tab control

The jQuery UI tabs widget helps to create tab controls quickly and easily on ASP.NET web pages. The tab control helps in organizing content on a page, thus improving the presentation of bulky content. With the help of the jQuery UI tabs widget, the content can also be retrieved dynamically using AJAX. In this recipe, we will see a simple example of applying this powerful widget to ASP.NET forms.

Getting ready

Create a new web form, Recipe2.aspx, in the current project. Add an ASP.NET container panel to the page. Within this container panel, add subpanels corresponding to the tab contents. Also add hyperlinks to each of the subpanels. Thus, the complete aspx markup of the web form is as shown next:

<form id="form1" runat="server">
  <asp:panel id="contentArea" runat="server">
    <ul>
      <li><a href="#tab1">Tab 1</a></li>
      <li><a href="#tab2">Tab 2</a></li>
      <li><a href="#tab3">Tab 3</a></li>
      <li><a href="#tab4">Tab 4</a></li>
    </ul>
    <asp:panel ID="tab1" runat="server">
      <p>Lorem ipsum dolor sit amet, consectetur adipisicing elit,
      sed do eiusmod tempor incididunt ut labore et dolore magna
      aliqua.</p>
    </asp:panel>
    <asp:panel ID="tab2" runat="server">
      <p>Ut enim ad minim veniam, quis nostrud exercitation ullamco
      laboris nisi ut aliquip ex ea commodo consequat.</p>
    </asp:panel>
    <asp:panel ID="tab3" runat="server">
      <p>Duis aute irure dolor in reprehenderit in voluptate velit
      esse cillum dolore eu fugiat nulla pariatur.</p>
    </asp:panel>
    <asp:panel ID="tab4" runat="server">
      <p>Excepteur sint occaecat cupidatat non proident, sunt in
      culpa qui officia deserunt mollit anim id est laborum.</p>
    </asp:panel>
  </asp:panel>
</form>

Next, we will see how we can transform this markup into a tab control using jQuery UI.

How to do it...

In the document.ready() function of the jQuery script block, apply the tabs() method to the container panel as follows:

$("#contentArea").tabs();

Thus, the complete jQuery UI solution for creating the tab control is as follows:

<script language="javascript" type="text/javascript">
  $(document).ready(function(){
    $("#contentArea").tabs();
  });
</script>

How it works...

Run the web form. The page displays the tabbed content. Click on the respective tab headers to view the required content.

There's more...

For detailed documentation on the jQuery UI tabs widget, visit the jQuery website.
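Both widgets also accept a configuration object that changes their behavior. As a hedged illustration, the following options are valid for the jQuery UI 1.8 accordion used in this article, but option names vary between jQuery UI versions, so check the documentation for your release before relying on them:

<script type="text/javascript">
  $(document).ready(function () {
    $("#contentArea").accordion({
      // allow every panel to be collapsed at the same time
      collapsible: true,
      // size each panel to its own content rather than the tallest panel
      autoHeight: false,
      // open panels on mouseover instead of click
      event: "mouseover"
    });
  });
</script>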
Cocoa and Objective-C: Animating CALayers

Packt
27 May 2011
6 min read
Cocoa and Objective-C Cookbook

Understanding the CALayer class

In this recipe, we will use multiple layers to draw our custom view. Using a delegate method, we will draw the background of our view. A second layer will be used to include an image centered in the view. When complete, the sample application will resemble the following screenshot:

Getting ready

In Xcode, create a new Cocoa Application and name it CALayer.

How to do it...

1. In the Xcode project, right-click on the Classes folder and choose Add…, then choose New File…. Under the MacOS X section, select Cocoa Class, then select Objective-C class. Finally, choose NSView in the Subclass of popup. Name the new file MyView.m.
2. In the Xcode project, expand the Frameworks group and then expand Other Frameworks. Right-click on Other Frameworks and choose Add…, then choose Existing Frameworks…. Find QuartzCore.framework in the list of frameworks and choose Add.
3. Click on MyView.m to open it and add the following import:

#import <QuartzCore/QuartzCore.h>

4. Remove the initWithFrame: and drawRect: methods.
5. Add the following awakeFromNib method:

- (void) awakeFromNib
{
    CALayer *largeLayer = [CALayer layer];
    [largeLayer setName:@"large"];
    [largeLayer setDelegate:self];
    [largeLayer setBounds:[self bounds]];
    [largeLayer setBorderWidth:4.0];
    [largeLayer setLayoutManager:[CAConstraintLayoutManager layoutManager]];

    CALayer *smallLayer = [CALayer layer];
    [smallLayer setName:@"small"];
    CGImageRef image = [self convertImage:[NSImage imageNamed:@"LearningJQuery"]];
    [smallLayer setBounds:CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image))];
    [smallLayer setContents:(id)image];
    [smallLayer addConstraint:[CAConstraint constraintWithAttribute:kCAConstraintMidY
                                                         relativeTo:@"superlayer"
                                                          attribute:kCAConstraintMidY]];
    [smallLayer addConstraint:[CAConstraint constraintWithAttribute:kCAConstraintMidX
                                                         relativeTo:@"superlayer"
                                                          attribute:kCAConstraintMidX]];
    CFRelease(image);

    [largeLayer addSublayer:smallLayer];
    [largeLayer setNeedsDisplay];
    [self setLayer:largeLayer];
    [self setWantsLayer:YES];
}

6. Add the following two methods to the MyView class as well:

- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)context
{
    CGContextSetRGBFillColor(context, .5, .5, .5, 1);
    CGContextFillRect(context, [layer bounds]);
}

- (CGImageRef) convertImage:(NSImage *)image
{
    CGImageSourceRef source = CGImageSourceCreateWithData(
        (CFDataRef)[image TIFFRepresentation], NULL);
    CGImageRef imageRef = CGImageSourceCreateImageAtIndex(source, 0, NULL);
    CFRelease(source);
    return imageRef;
}

7. Open the MyView.h file and add the following method declaration:

- (CGImageRef) convertImage:(NSImage *)image;

8. Right-click on the CALayer project in Xcode's project view and choose Add…, then choose Existing Files…. Choose the LearningJQuery.jpg image and click on Add.
9. Double-click on the MainMenu.xib file in the Xcode project. From Interface Builder's Library palette, drag a Custom View into the application window.
10. From Interface Builder's Inspector palette, select the Identity tab and set the Class popup to MyView.
11. Back in Xcode, choose Build and Run from the toolbar to run the application.

How it works...

We are creating two layers for our view. The first layer is a large layer, which will be the same size as our MyView view. We set the large layer's delegate to self so that we can draw the layer in the drawLayer: delegate method. The drawLayer: delegate method simply fills the layer with a mid-gray color. Next, we set the bounds property and a border width property on the larger layer.
Next, we create a smaller layer whose contents will be the image that we included in the project. We also add a layout manager to this layer and configure the constraints of the layout manager to keep the smaller layer centered both horizontally and vertically relative to the larger view, using the superlayer keyword. Lastly, we set the small layer as a sublayer of the large layer and force a redraw of the large layer by calling setNeedsDisplay. Next, we set the large layer as MyView's layer. We also need to call setWantsLayer:YES on the MyView to enable the use of layers in our view.

There's more...

Since we used a layout manager to center the image in the view, the layout manager will also handle the centering of the image when the user resizes the view or window. To see this in action, modify the Size properties in Interface Builder for the custom view as shown in the screenshot below:

Animation by changing properties

Cocoa provides a way to animate views by changing properties using implied animations. In this recipe, we will resize our custom view when the resize button is clicked, by changing the view's frame size.

Getting ready

In Xcode, create a new Cocoa Application and name it ChangingProperties.

How to do it...

1. In the Xcode project, right-click on the Classes folder and choose Add…, then choose New File…. Under the MacOS X section, select Cocoa Class, then select Objective-C class. Finally, choose NSView from the Subclass of popup. Name the new file MyView.m.
2. Click on the MyView.m file to open it and add the following in the drawRect: method:

NSBezierPath *path = [NSBezierPath bezierPathWithRoundedRect:[self bounds]
                                                     xRadius:8.0
                                                     yRadius:8.0];
[path setClip];
[[NSColor whiteColor] setFill];
[NSBezierPath fillRect:[self bounds]];
[path setLineWidth:3.0];
[[NSColor grayColor] setStroke];
[path stroke];

3. Click on ChangingPropertiesAppDelegate.h to open it. Next, insert an import for the MyView.h header file:

#import "MyView.h"

4. Add the following variables to the class interface:

NSButton *button;
MyView *myView;

5. Add the following properties to the class interface:

@property (assign) IBOutlet NSButton *button;
@property (assign) IBOutlet MyView *myView;

6. Add the method declaration for when the Resize button is clicked:

- (IBAction) resizeButtonHit:(id)sender;

7. Click on the ChangingPropertiesAppDelegate.m file to open it. Add our synthesized variables below the synthesized window variable:

@synthesize button;
@synthesize myView;

8. Create a global static boolean for tracking the size of the view:

static BOOL isSmall = YES;

9. Add the resizeButtonHit: method to the class implementation:

- (IBAction) resizeButtonHit:(id)sender
{
    NSRect small = NSMakeRect(20, 250, 150, 90);
    NSRect large = NSMakeRect(20, 100, 440, 240);
    if (isSmall == YES)
    {
        [[myView animator] setFrame:large];
        isSmall = NO;
    }
    else
    {
        [[myView animator] setFrame:small];
        isSmall = YES;
    }
}

10. Double-click on the MainMenu.xib file in the Xcode project. From Interface Builder's Library palette, drag a Custom View into the application window.
11. From Interface Builder's Inspector palette, select the Identity tab and set the Class popup to MyView.
12. From the Library palette, drag a Push Button into the application window. Adjust the layout of the Custom View and button so that it resembles the screenshot below:
13. Right-click on the Changing Properties App Delegate so that you can connect the outlets to the MyView Custom View, the Resize Push Button, and the resizeButtonHit action.
14. Back in Xcode, choose Build and Run from the toolbar to run the application.

How it works...

We define two sizes for our view: one small size that is the same as the initial size of the view, and one large size. Depending on the state of the isSmall global variable, we set the view's frame size to one of our predefined sizes. Note that we set the view's frame via the view's animator property. By using this property, we make use of the implicit animations available in the view.

There's more...

Using the same technique, we can animate several other properties of the view, such as its position or opacity. For more information on which properties can be implicitly animated, see Apple's Core Animation Programming Guide.
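For instance, fading the view is a one-line change. This hedged sketch reuses the myView outlet from the recipe; alphaValue is one of the NSView properties that the animator proxy animates implicitly:

// fade the view to 25% opacity with the same implicit animation
[[myView animator] setAlphaValue:0.25];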
Testing a Save As Dialog in Java using Swing

Packt
21 Oct 2009
10 min read
In this article, we will use an example from our demonstration application, Ikon Do It. The code from this article is in the packages jet.ikonmaker and jet.ikonmaker.test.

The Ikon Do It 'Save as' Dialog

The 'Ikon Do It' application has a Save as function that allows the icon on which we are currently working to be saved with another name. Activating the Save as button displays a very simple dialog for entering the new name. The following figure shows the 'Ikon Do It' Save as dialog.

Not all values are allowed as possible new names. Certain characters (such as '*') are prohibited, as are names that are already used. In order to make testing easy, we implemented the dialog as a public class called SaveAsDialog, rather than as an inner class of the main user interface component. We might normally balk at giving such a trivial component its own class, but it is easier to test when written this way, and it makes a good example. Also, once a simple version of this dialog is working and tested, it is possible to think of enhancements that would definitely make it too complex to be an inner class. For example, there could be a small status area that explains why a name is not allowed (the current implementation just disables the Ok button when an illegal name is entered, which is not very user-friendly).

The API for SaveAsDialog is as follows. Names of icons are represented by IkonName instances. A SaveAsDialog is created with a list of existing IkonNames. It is shown with a show() method that blocks until either Ok or Cancel is activated. If Ok is pressed, the value entered can be retrieved using the name() method. Here then are the public methods:

public class SaveAsDialog {

    public SaveAsDialog( JFrame owningFrame,
            SortedSet<IkonName> existingNames ) {
        ...
    }

    /**
     * Show the dialog, blocking until ok or cancel is activated.
     */
    public void show() {
        ...
    }

    /**
     * The most recently entered name.
     */
    public IkonName name() {
        ...
    }

    /**
     * Returns true if the dialog was cancelled.
     */
    public boolean wasCancelled() {
        ...
    }
}

Note that SaveAsDialog does not extend JDialog or JFrame, but will use delegation. Also note that the constructor of SaveAsDialog does not have parameters that would couple it to the rest of the system. This means a handler interface is not required in order to make this simple class testable. The main class uses SaveAsDialog as follows:

private void saveAs() {
    SaveAsDialog sad = new SaveAsDialog( frame, store.storedIkonNames() );
    sad.show();
    if (!sad.wasCancelled()) {
        //Create a copy with the new name.
        IkonName newName = sad.name();
        Ikon saveAsIkon = ikon.copy( newName );
        //Save and then load the new ikon.
        store.saveNewIkon( saveAsIkon );
        loadIkon( newName );
    }
}

Outline of the Unit Test

The things we want to test are:

Initial settings:
- The text field is empty.
- The text field is a sensible size.
- The Ok button is disabled.
- The Cancel button is enabled.
- The dialog is a sensible size.

Usability:
- The Escape key cancels the dialog.
- The Enter key activates the Ok button.
- The mnemonics for Ok and Cancel work.

Correctness. The Ok button is disabled if the entered name:
- Contains characters such as '*', '\', or '/'.
- Is just white-space.
- Is one already being used.

API test: unit tests for each of the public methods.

As with most unit tests, our test class has an init() method for getting an object into a known state, and a cleanup() method called at the end of each test.
The instance variables are:

- A JFrame and a set of IkonNames from which the SaveAsDialog can be constructed.
- A SaveAsDialog, which is the object under test.
- A UserStrings and a UISaveAsDialog (listed later on) for manipulating the SaveAsDialog with keystrokes.
- A ShowerThread, which is a Thread for showing the SaveAsDialog. This is listed later on.

The outline of the unit test is:

public class SaveAsDialogTest {
    private JFrame frame;
    private SaveAsDialog sad;
    private IkonMakerUserStrings us = IkonMakerUserStrings.instance();
    private SortedSet<IkonName> names;
    private UISaveAsDialog ui;
    private ShowerThread shower;

    ...

    private void init() {
        ...
    }

    private void cleanup() {
        ...
    }

    private class ShowerThread extends Thread {
        ...
    }
}

UI Helper Methods

A lot of the work in this unit test will be done by the static methods in our helper class, UI. Some of these are isEnabled(), runInEventThread(), and findNamedComponent(). The new methods are listed now, according to their function.

Dialogs

If a dialog is showing, we can search for it by name, get its size, and read its title:

public final class UI {
    ...
    /**
     * Safely read the showing state of the given window.
     */
    public static boolean isShowing( final Window window ) {
        final boolean[] resultHolder = new boolean[]{false};
        runInEventThread( new Runnable() {
            public void run() {
                resultHolder[0] = window.isShowing();
            }
        } );
        return resultHolder[0];
    }

    /**
     * The first found dialog that has the given name and
     * is showing (though the owning frame need not be showing).
     */
    public static Dialog findNamedDialog( String name ) {
        Frame[] allFrames = Frame.getFrames();
        for (Frame allFrame : allFrames) {
            Window[] subWindows = allFrame.getOwnedWindows();
            for (Window subWindow : subWindows) {
                if (subWindow instanceof Dialog) {
                    Dialog d = (Dialog) subWindow;
                    if (name.equals( d.getName() ) && d.isShowing()) {
                        return (Dialog) subWindow;
                    }
                }
            }
        }
        return null;
    }

    /**
     * Safely read the size of the given component.
     */
    public static Dimension getSize( final Component component ) {
        final Dimension[] resultHolder = new Dimension[]{null};
        runInEventThread( new Runnable() {
            public void run() {
                resultHolder[0] = component.getSize();
            }
        } );
        return resultHolder[0];
    }

    /**
     * Safely read the title of the given dialog.
     */
    public static String getTitle( final Dialog dialog ) {
        final String[] resultHolder = new String[]{null};
        runInEventThread( new Runnable() {
            public void run() {
                resultHolder[0] = dialog.getTitle();
            }
        } );
        return resultHolder[0];
    }
    ...
}

Getting the Text of a Text Field

The method is getText(), and there is a variant to retrieve just the selected text:

//... from UI
/**
 * Safely read the text of the given text component.
 */
public static String getText( JTextComponent textComponent ) {
    return getTextImpl( textComponent, true );
}

/**
 * Safely read the selected text of the given text component.
 */
public static String getSelectedText( JTextComponent textComponent ) {
    return getTextImpl( textComponent, false );
}

private static String getTextImpl( final JTextComponent textComponent,
        final boolean allText ) {
    final String[] resultHolder = new String[]{null};
    runInEventThread( new Runnable() {
        public void run() {
            resultHolder[0] = allText ?
                    textComponent.getText() :
                    textComponent.getSelectedText();
        }
    } );
    return resultHolder[0];
}

Frame Disposal

In a lot of our unit tests, we will want to dispose of any dialogs or frames that are still showing at the end of a test. This method is brutal but effective:
//... from UI
public static void disposeOfAllFrames() {
    Runnable runnable = new Runnable() {
        public void run() {
            Frame[] allFrames = Frame.getFrames();
            for (Frame allFrame : allFrames) {
                allFrame.dispose();
            }
        }
    };
    runInEventThread( runnable );
}

Unit Test Infrastructure

Having seen the broad outline of the test class and the UI methods needed, we can look closely at the implementation of the test. We'll start with the UI wrapper class and the init() and cleanup() methods.

The UISaveAsDialog Class

UISaveAsDialog has methods for entering a name and for accessing the dialog, buttons, and text field. The data entry methods use a Cyborg, while the component accessor methods use UI:

public class UISaveAsDialog {
    Cyborg robot = new Cyborg();
    private IkonMakerUserStrings us = IkonMakerUserStrings.instance();
    protected Dialog namedDialog;

    public UISaveAsDialog() {
        namedDialog = UI.findNamedDialog( SaveAsDialog.DIALOG_NAME );
        Waiting.waitFor( new Waiting.ItHappened() {
            public boolean itHappened() {
                return nameField().hasFocus();
            }
        }, 1000 );
    }

    public JButton okButton() {
        return (JButton) UI.findNamedComponent( IkonMakerUserStrings.OK );
    }

    public Dialog dialog() {
        return namedDialog;
    }

    public JButton cancelButton() {
        return (JButton) UI.findNamedComponent( IkonMakerUserStrings.CANCEL );
    }

    public JTextField nameField() {
        return (JTextField) UI.findNamedComponent( IkonMakerUserStrings.NAME );
    }

    public void saveAs( String newName ) {
        enterName( newName );
        robot.enter();
    }

    public void enterName( String newName ) {
        robot.selectAllText();
        robot.type( newName );
    }

    public void ok() {
        robot.altChar( us.mnemonic( IkonMakerUserStrings.OK ) );
    }

    public void cancel() {
        robot.altChar( us.mnemonic( IkonMakerUserStrings.CANCEL ) );
    }
}

A point to note here is the code in the constructor that waits for the name text field to have focus. This is necessary because the inner workings of Swing set the focus within a shown modal dialog as a separate event. That is, we can't assume that showing the dialog and setting the focus within it happen as a single atomic event. Apart from this wrinkle, all of the methods of UISaveAsDialog are straightforward applications of UI methods.

The ShowerThread Class

Since SaveAsDialog.show() blocks, we cannot call it from our main thread; instead, we spawn a new thread. This thread could just be an anonymous inner class in the init() method:

private void init() {
    //Not really what we do...
    //setup...then launch a thread to show the dialog.
    //Start a thread to show the dialog (it is modal).
    new Thread( "SaveAsDialogShower" ) {
        public void run() {
            sad = new SaveAsDialog( frame, names );
            sad.show();
        }
    }.start();
    //Now wait for the dialog to show...
}

The problem with this approach is that it does not allow us to investigate the state of the thread that called the show() method. We want to write tests that check that this thread is blocked while the dialog is showing. Our solution is a simple inner class:

private class ShowerThread extends Thread {
    private boolean isAwakened;

    public ShowerThread() {
        super( "Shower" );
        setDaemon( true );
    }

    public void run() {
        Runnable runnable = new Runnable() {
            public void run() {
                sad.show();
            }
        };
        UI.runInEventThread( runnable );
        isAwakened = true;
    }

    public boolean isAwakened() {
        return Waiting.waitFor( new Waiting.ItHappened() {
            public boolean itHappened() {
                return isAwakened;
            }
        }, 1000 );
    }
}

The method of most interest here is isAwakened(), which waits for up to one second for the awake flag to have been set; this uses a helper class, Waiting.
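The Waiting class itself is not listed in this article. As a rough sketch of the polling behavior it needs to provide (the 50 ms poll interval is an assumption), it might look something like this:

public class Waiting {

    public interface ItHappened {
        boolean itHappened();
    }

    /**
     * Polls the given condition until it is true or the timeout
     * (in milliseconds) expires. Returns true if the condition
     * became true within the timeout.
     */
    public static boolean waitFor( ItHappened condition, long timeoutMillis ) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.itHappened()) {
                return true;
            }
            try {
                Thread.sleep( 50 ); // assumed poll interval
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
        return condition.itHappened();
    }
}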
Another point of interest is that we've given our new thread a name (by the call super( "Shower" ) in the constructor). It's really useful to give each thread we create a name.

The init() Method

The job of the init() method is to create and show the SaveAsDialog instance so that it can be tested:

private void init() {
    //Note 1
    names = new TreeSet<IkonName>();
    names.add( new IkonName( "Albus" ) );
    names.add( new IkonName( "Minerva" ) );
    names.add( new IkonName( "Severus" ) );
    names.add( new IkonName( "Alastair" ) );

    //Note 2
    Runnable creator = new Runnable() {
        public void run() {
            frame = new JFrame( "SaveAsDialogTest" );
            frame.setVisible( true );
            sad = new SaveAsDialog( frame, names );
        }
    };
    UI.runInEventThread( creator );

    //Note 3
    //Start a thread to show the dialog (it is modal).
    shower = new ShowerThread();
    shower.start();

    //Note 4
    //Wait for the dialog to be showing.
    Waiting.waitFor( new Waiting.ItHappened() {
        public boolean itHappened() {
            return UI.findNamedFrame( SaveAsDialog.DIALOG_NAME ) != null;
        }
    }, 1000 );

    //Note 5
    ui = new UISaveAsDialog();
}

Now let's look at some of the key points in this code.

Note 1: In this block of code, we create a set of IkonNames with which our SaveAsDialog can be created.

Note 2: It's convenient to create and show the owning frame and create the SaveAsDialog in a single Runnable. An alternative would be to create and show the frame with a UI call and use the Runnable just for creating the SaveAsDialog.

Note 3: Here we start our ShowerThread, which will call the blocking show() method of SaveAsDialog from the event thread.

Note 4: Having called show() via the event dispatch thread from our ShowerThread, we need to wait for the dialog to actually be showing on the screen. The way we do this is to search for a dialog that is on the screen and has the correct name.

Note 5: Once the SaveAsDialog is showing, we can create our UI wrapper for it.

The cleanup() Method

The cleanup() method closes all frames in a thread-safe manner:

private void cleanup() {
    UI.disposeOfAllFrames();
}
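Putting these pieces together, an actual test reads quite naturally. The following is a hedged sketch only: the escape() helper on Cyborg is an assumed convenience method for pressing the Esc key, and the assertion style is simplified:

private void escapeCancelsTest() {
    init();
    ui.robot.escape();              // assumed Cyborg helper for the Esc key
    // the blocked show() call should return, with the dialog cancelled
    assert shower.isAwakened();
    assert sad.wasCancelled();
    cleanup();
}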
Frontend SOA: Taming the beast of frontend web development

Wesley Cho
08 May 2015
6 min read
Frontend web development is a difficult domain for creating scalable applications. There are many challenges when it comes to architecture, such as how to best organize HTML, CSS, and JavaScript files, or how to create build tooling to allow an optimal development and production environment. In addition, complexity has increased measurably.

Templating and routing have been transplanted to the concern of frontend web engineers as a result of the push towards single page applications (SPAs). A wealth of frameworks can be found as listed on todomvc.com. AngularJS is one that rose to prominence almost two years ago on the back of declarative HTML, strong testability, and two-way data binding. Even now it is seeing some churn, due to Angular 2.0 breaking backwards compatibility completely and due to the rise of React, Facebook's new view layer that brought the idea of a virtual DOM for performance optimization not previously seen in frontend web architecture. Angular 2.0 itself is also looking like a juggernaut, with decoupled components that hew closer to pure JavaScript, and it is already boasting performance gains of roughly 5x compared to Angular 1.x.

With this much churn, frontend web apps have become difficult to architect for the long term. This requires us to take a step back and think about the direction of browsers.

The Future of Browsers

We know that ECMAScript 6 (ES6) is already making its headway into browsers - ES6 changes how JavaScript is structured greatly, with a proper module system, and adds a lot of syntactical sugar. Web Components are also going to change how we build our views. Instead of:

.home-view { ... }

We will be writing:

<template id="home-view">
  <style> ... </style>
  <my-navbar></my-navbar>
  <my-content></my-content>
  <script> ... </script>
</template>

<home-view></home-view>

<script>
  var proto = Object.create(HTMLElement.prototype);
  proto.createdCallback = function () {
    var root = this.createShadowRoot();
    var template = document.querySelector('#home-view');
    var clone = document.importNode(template.content, true);
    root.appendChild(clone);
  };
  document.registerElement('home-view', { prototype: proto });
</script>

This is drastically different from how we build components now. In addition, libraries and frameworks are already being built with this in mind. Angular 2 is using annotations provided by Traceur, Google's ES6 + ES7 to ES5 transpiler, to provide syntactical sugar for creating one-way bindings to the DOM and to DOM events. React and Ember also have plans to integrate Web Components into their workflows. Aurelia is already structured in a way to take advantage of it when it drops.

What can we do to future-proof ourselves for when these technologies drop?

Solution

For starters, it is important to realize that creating HTML and CSS is relatively cheap compared to managing a complex JavaScript codebase built on top of a framework or library. Frontend web development is seeing architecture pains that have already been solved in other domains, except it has the additional challenge of integrating UI into that structure. This suggests that the solution is to create a frontend service-oriented architecture (SOA) where most of the heavy logic is offloaded to pure JavaScript with only utility library additions (that is, Underscore/Lodash). This would allow us to choose view layers with relative ease, and to move fast in case a particular view library/framework turns out not to meet requirements.
It also prevents the endemic problem of having to rewrite whole codebases due to having to swap out libraries or frameworks. For example, consider this sample Angular controller (a similarly contrived example can be created using other pieces of tech as well):

angular.module('DemoApp')
  .controller('DemoCtrl', function ($scope, $http) {
    $scope.getItems = function () {
      $http.get('/items/')
        .then(function (response) {
          $scope.items = response.data.items;
          $scope.$emit('items:received', $scope.items);
        });
    };
  });

This sample controller has a method getItems that fetches items, updates the model, and then emits the information so that parent views have access to that change. This is ugly because it hardcodes the application's structural hierarchy and mixes it with server query logic, which is a separate concern. In addition, it also mixes the usage of Angular's internals into the application code, tying pure abstract logic heavily in with the framework's internals. It is not all that uncommon to see developers make these simple architecture mistakes.

With the proper module system that ES6 brings, this simplifies to (items.js):

import {fetch} from 'fetch';

export class items {
  static getAll() {
    return fetch('/items')
      .then(function (response) {
        return response.json();
      });
  }
};

And demoCtrl.js:

import {BaseCtrl} from './baseCtrl.js';
import {items} from './items';

export class DemoCtrl extends BaseCtrl {
  constructor() {
    super();
  }

  getItems() {
    let self = this;
    return items.getAll()
      .then(function (fetched) {
        self.items = fetched;
        return fetched;
      });
  }
};

And main.js:

import {items} from './items';
import {DemoCtrl} from './DemoCtrl';

angular.module('DemoApp', [])
  .factory('items', items)
  .controller('DemoCtrl', DemoCtrl);

If you want to use anything from $scope, you can modify the usage of DemoCtrl straight in the controller definition and just instantiate it inside the function. With promises, which are also available natively in ES6, you can chain upon them in the implementation of DemoCtrl in the Angular code base.

The kicker about this approach is that it can also be done today in ES5, and it is not limited to Angular - it applies equally well to any other library or framework, such as Backbone, Ember, and React! It also allows you to churn out very testable code.

I recommend this as a best practice for architecting complex frontend web apps - the only caveat is if other aspects of engineering prevent this from being a possibility, such as the business constraints of time and the people resources available. This approach allows us to tame the beast of maintaining and scaling frontend web apps while still being able to adapt quickly to the constantly changing landscape.

About this author

Wesley Cho is a senior frontend engineer at Jiff (http://www.jiff.com/). He has contributed features & bug fixes and reported numerous issues to numerous libraries in the Angular ecosystem, including AngularJS, Ionic, UI Bootstrap, and UI Router, as well as authored several libraries.
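To illustrate the testability claim, here is a minimal sketch of a framework-free unit test for the items service. It assumes a Jasmine-style runner and that the service resolves its fetch import to the global fetch function (as common polyfill shims do); the names and the stubbed payload are illustrative only:

describe('items service', function () {
  it('fetches and unwraps the JSON payload', function (done) {
    // stub the network layer; no Angular, DOM, or server required
    window.fetch = function () {
      return Promise.resolve({
        json: function () { return [{ id: 1 }]; }
      });
    };

    items.getAll().then(function (result) {
      expect(result.length).toBe(1);
      done();
    });
  });
});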
Customizing Your Vim Work Area

Packt
14 May 2010
12 min read
(Read more interesting articles on Hacking Vim 7.2 here.)

Work area personalization

In this article, we introduce a list of smaller, good-to-know modifications for the editor area in Vim. The idea with these recipes is that they all give you some sort of help or optimization when you use Vim for editing text or code.

Adding a more visual cursor

Sometimes, you have a lot of syntax coloring in the file you are editing. This can make the task of tracking the cursor really hard. If you could just mark the line the cursor is currently in, then it would be easier to track it. Many have tried to fix this with Vim scripts, but the results have been near useless (mainly due to slowness, which prevented scrolling longer texts at an acceptable speed). Not until version 7 did Vim have a solution for this. But then it came up with not just one, but two possible solutions for cursor tracking.

The first one is the cursorline command, which basically marks the entire line with, for example, another background color without breaking the syntax coloring. To turn it on, use the following command:

:set cursorline

The color it uses is the one defined in the CursorLine color group. You can change this to any color or styling you like, for example:

:highlight CursorLine guibg=lightblue ctermbg=lightgray

If you are working with a lot of aligned file content (such as tab-separated data), the next solution for cursor tracking comes in handy:

:set cursorcolumn

This command marks the current column (where the cursor is placed) by coloring the entire column through the entire file. As with the cursor line, you can change the settings for how the cursor column should be marked; the color group to change is named CursorColumn. Adding both the cursor line and column marking makes the cursor look like a crosshair, thus making it impossible to miss. Even though the cursorline and cursorcolumn functionalities are implemented natively in Vim, they can still cause quite a slowdown when scrolling through the file.

Adding line numbers

Often when compiling and debugging code, you will get error messages stating that the error is in some line. One could, of course, start counting lines from the top to find the line, but Vim has a solution to go directly to a given line number. Just execute :XXX, where XXX is the line number, and you will be taken to line XXX. Alternatively, you can go into normal mode (press the Esc key) and then simply use XXXgg or XXXG (again, XXX is the line number). Sometimes, however, it is nice to have an indication of the line number right there in the editor, and that's where the following command comes in handy:

:set number

Now, you get line numbers to the left of each line in the file. By default, the numbers take up four columns of space: three for numbers and one for spacing. This means that the width of the numbers will be the same until you have more than 999 lines. If you get above this number of lines, an extra column will be added and the content will be moved to the right. Of course, you can change the default number of columns used for the line numbers. This can be achieved by changing the following property:

:set numberwidth=XXX

Replace XXX with the number of columns that you want. Even though it would be nice to make the number of columns higher in order to get more spacing between code and line numbers, this is not achievable with the numberwidth property. This is because the line numbers will be right-aligned within the columns.
In the following figure, you can see how line numbers are shown as right-aligned when a higher number of columns is set in numberwidth. You can change the styling of the line numbers and the columns they are in by making changes to the LineNr color group.

Spell checking your language

We all know it! Even if we are really good spellers, it still happens from time to time that we misspell a word or hit the wrong keys. In the past, you had to run the texts that you had written in Vim through some sort of spell checker, such as Aspell or Ispell. This was a tiresome process that could only be performed as a final task, unless you wanted to do it over and over again. With version 7 of Vim, this troublesome way of spell checking is over. Now, Vim has a built-in spell checker with support for more than 50 languages from around the world. The new spell checker marks wrongly written words as you type them, so you know right away that there is an error. The command to turn on this helpful spell checker feature is:

:set spell

This turns on the spell checker with the default language (English). If you don't use English much and would prefer to use another language in the spell checker, then there is no problem changing this. Just add the code of the language you would like to use to the spelllang property. For example:

:set spelllang=de

Here, the language is set to German (Deutsch) as the spell checker language of choice. The language name can be written in several different formats. American English, for example, can be written as:

en_us
us
American

Names can even be an industry-related name, such as medical. If Vim does not recognize the language name, Vim will highlight it when you execute the property-setting command. If you change the spelllang setting to a language not already installed, then Vim will ask you if it should try to automatically retrieve it from the Vim homepage.

Personally, I tend to work in several different languages in Vim, and I really don't want to tell Vim each time which language I am using. Vim has a solution for this. By appending more language codes to the spelllang property (separated by commas), you can tell Vim to check the spelling in more than one language:

:set spelllang=en,da,de,it

Vim will then take the languages from the start to the end, and check if the words match any word in one of these languages. If they do, then they are not marked as a spelling error. Of course, this means that you can have a word spelled wrong in the language you are using but spelled correctly in another language, thereby introducing a hidden spelling error. You can find language packages for a lot of languages at the Vim FTP site: ftp://ftp.vim.org/pub/vim/runtime/spell.

Spelling errors are marked differently in Vim and Gvim. In regular Vim, the misspelled word is marked with the SpellBad color group (normally, white on red). In Gvim, the misspelled word is marked with a red curvy line underneath the word. This can, of course, be changed by changing the settings of the color group. Whenever you encounter a misspelled word, you can ask Vim to suggest better ways to spell the word. This is simply done by placing the cursor over the word, going into normal mode (press Esc), and then pressing z=. If possible, Vim will give you a list of good guesses for the word you were actually trying to write. In front of each suggestion is a number.
Press the number you find in front of the right spelling (of the word you wanted) or press Enter if the word is not there. Often Vim gives you a long list of alternatives for your misspelled word, but unless you have spelled the word completely wrong, chances are that the correct word is within the top five of the alternatives. If this is the case, and you don't want to look through the entire list of alternatives, then you can limit the output with the following command:

:set spellsuggest=X

Set X to the number of alternative ways of spelling you want Vim to suggest.

Adding helpful tool tips

In the Modifying tabs recipe, we learned how to use tool tips to store more information in less space in the tabs in Gvim. To build on top of that same idea with this recipe, we move on and use tool tips in other places in the editor. The editing area is the largest part of Vim, so why not try to add some extra information to the contents of this area by using tool tips?

In Vim, tool tips for the editing area are called balloons, and they are only shown when the cursor is hovering over one of the characters. The commands you will need to know in order to use the balloons are:

:set ballooneval

The first command is the one you will use to actually turn on this functionality in Vim.

:set balloondelay=400

The second command tells Vim how long it should wait before showing the tool tip/balloon (the delay is in milliseconds, and the default is 600).

:set balloonexpr="textstring"

The last command is the one that actually sets the string that Vim will show in the balloon. This can either be a static text string or the return value of some function. In order to have access to information about the place where you are hovering over a character in the editor, Vim provides access to a list of variables holding such information:

v:beval_bufnr: The number of the buffer over which the mouse pointer hovers
v:beval_winnr: The number of the window
v:beval_lnum: The line number
v:beval_col: The column number
v:beval_text: The word under or after the mouse pointer

So with these variables in hand, let's look at some examples.

Example 1: The first example is based on one from the Vim help system. It shows how to make a simple function that will show the information from all the available variables:

function! SimpleBalloon()
  return 'Cursor is at line/column: ' . v:beval_lnum . '/' . v:beval_col .
       \ ' in file ' . bufname(v:beval_bufnr) .
       \ '. Word under cursor is: "' . v:beval_text . '"'
endfunction
set balloonexpr=SimpleBalloon()
set ballooneval

The result will look similar to the following screenshot:

Example 2: Let's look at a more advanced example that explores the use of balloons for specific areas in editing. In this example, we will put together a function that gives us great information balloons for two areas at the same time:

Misspelled words: The balloon gives ideas for alternative words
Folded text: The balloon gives a preview of what's in the fold

So, let's take a look at what the function should look for, to detect if the cursor is hovering over either a misspelled word or a fold line (a single line representing multiple lines folded behind it). In order to detect if a word is misspelled, the spell checker would need to be turned on:

:set spell

If it is on, then calling the built-in spell checker function, spellsuggest(), will return alternative words if the hovered word is misspelled. So, to see if a word is misspelled, just check if spellsuggest() returns anything.
Example 2: Let's look at a more advanced example that explores the use of balloons for specific areas in editing. In this example, we will put together a function that gives us great information balloons for two areas at the same time:

Misspelled words: The balloon gives ideas for alternative words
Folded text: The balloon gives a preview of what's in the fold

So, let's take a look at what the function should check in order to detect whether the cursor is hovering over a misspelled word or over a fold line (a single line representing multiple lines folded behind it).

In order to detect whether a word is misspelled, the spell checker needs to be turned on:

    :set spell

If it is on, then calling the built-in spell checker function, spellsuggest(), returns alternative words if the hovered word is misspelled. So, to see whether a word is misspelled, just check whether spellsuggest() returns anything.

There is, however, a small catch. spellsuggest() also returns alternative, similar words if the word is not misspelled. To get around this, another function has to be applied to the input word before putting it into spellsuggest(). This extra function is spellbadword(). It basically moves the cursor to the first misspelled word in the sentence it gets as input, and then returns that word. We just give it a single word, and if that word is not misspelled, the function returns no word. Putting no word into spellsuggest() results in getting nothing back, so we can now check whether a word is misspelled or not.

It is even simpler to check whether a line is in a fold. You simply call the foldclosed() function on the number of the line over which the cursor is hovering (remember v:beval_lnum?), and it returns the number of the first line in the current fold; if the line is not in a fold, it returns -1. In other words, if foldclosed(v:beval_lnum) returns anything but -1, we are in a fold.

Putting all of this detection together and adding the functionality to construct the balloon text ends up as the following function:

    function! FoldSpellBalloon()
      let foldStart = foldclosed(v:beval_lnum)
      let foldEnd   = foldclosedend(v:beval_lnum)
      let lines = []
      " Detect if we are in a fold
      if foldStart < 0
        " Not in a fold - detect if we are on a misspelled word
        let lines = spellsuggest(spellbadword(v:beval_text)[0], 5, 0)
      else
        " We are in a fold
        let numLines = foldEnd - foldStart + 1
        " If the fold holds too many lines, show only the first 15
        " and the last 15 of them
        if (numLines > 31)
          let lines  = getline(foldStart, foldStart + 14)
          let lines += ['-- Snipped ' . (numLines - 30) . ' lines --']
          let lines += getline(foldEnd - 14, foldEnd)
        else
          " 31 lines or fewer, let's show all of them
          let lines = getline(foldStart, foldEnd)
        endif
      endif
      " Return the result as one string, multiline if supported
      return join(lines, has("balloon_multiline") ? "\n" : " ")
    endfunction
    set balloonexpr=FoldSpellBalloon()
    set ballooneval

The result is some really helpful balloons in the editing area of Vim that can improve your work cycle tremendously. The following screenshot shows how the information balloon could look when it is used to preview a folded range of lines from a file:

If the balloon is used on a misspelled word instead, it will look like the following screenshot:

In Production Boosters, you can learn more about how to use folding of lines to boost productivity in Vim.
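If you do not want the balloons turned on all the time, you can toggle them on demand. The following is a minimal sketch for a vimrc, assuming the FoldSpellBalloon() function from above is already defined; the ToggleBalloon name and the F8 mapping are arbitrary choices:

    " Toggle the information balloon on and off.
    function! ToggleBalloon()
      if &ballooneval
        set noballooneval
        echo 'Balloons off'
      else
        set balloonexpr=FoldSpellBalloon()
        set ballooneval
        echo 'Balloons on'
      endif
    endfunction

    " Press F8 in normal mode to flip the balloon state.
    nnoremap <F8> :call ToggleBalloon()<CR>

This keeps the editing area quiet while you type, and brings up the fold previews and spelling suggestions only when you ask for them.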


IBM FileNet P8 Content Manager: Administrative Tools and Tasks

Packt
10 Feb 2011
11 min read
Getting Started with IBM FileNet P8 Content Manager

Install, customize, and administer the powerful FileNet Enterprise Content Management platform:

Quickly get up to speed on all significant features and the major components of IBM FileNet P8 Content Manager
Provides technical details that are valuable both for beginners and experienced Content Management professionals alike, without repeating product reference documentation
Gives a big picture description of Enterprise Content Management and related IT areas to set the context for Content Manager
Written by an IBM employee, Bill Carpenter, who has extensive experience in Content Manager product development, this book gives practical tips and notes with a step-by-step approach to design real Enterprise Content Management solutions to solve your business needs

The following will be covered in the next article:

A discussion of an Object Store and what's in it
An example of creating a custom class and adding custom properties to it

FileNet Enterprise Manager (FEM) must run on a Microsoft Windows machine. Even if you are using virtual machine images or other isolated servers for your CM environment, you might wish to install FEM on a normal Windows desktop machine for your own convenience.

Domain and GCD

Here's a simple question: what is a P8 Domain? It's easy to give a simple answer: it's the top-level container of all P8 things in a given installation. That needs a little clarification, though, because it seems a little circular; things are in a Domain because a Domain knows about them. In a straightforward technical sense, things are in the same Domain if they share the same Global Configuration Database (GCD). The GCD is, literally, a database. If we were installing additional CE servers, they would share that GCD if we wanted them to be part of the same Domain.

When you first open FEM and look at the tree view in the left-hand panel, most of the things you are looking at are things at the Domain level. We'll be referring to the FEM tree view often; by that, we mean the left-hand part of the user interface, as seen in the following screenshot:

FEM remembers the state of the tree view from session to session. When you start FEM the next time, it will try to open the nodes you had open when you exited. That will often mean something of a delay as it reads extensive data for each open Object Store node. You might find it a useful habit to close up all of the nodes before you exit FEM.

Most things within a Domain know about and can connect directly to each other, and nothing in a given Domain knows about any other Domain. The GCD, and thus the Domain, contains:

Simple properties of the Domain object itself
Domain-level objects
Configuration objects for more complex aspects of the Domain environment
Pointers to other components, both as part of the CE environment and external to it

It's a little bit subjective as to which things are objects and which are pointers to other components. It's also a little bit subjective as to what is a configuration object for something and what is a set of properties of that something. Let's not dwell on those philosophical subtleties. Let's instead look at a more specific list:

Properties: These simple properties control the behavior of or describe characteristics of the Domain itself.

Name and ID: Like most P8 objects, a Domain has both a Name and an ID. It's counterintuitive, but you will rarely need to know these, and you might even sometimes forget the name of your own Domain.
The reason is that you will always be connecting to some particular CE server, and that CE server is a member of exactly one Domain. Therefore, all of the APIs related to a Domain object are able to use a defaulting mechanism that means "the current Domain".

Database schemas: There are properties containing the database schemas for an Object Store for each type of database supported by P8. CM uses this schema, which is an actual script of SQL statements, by default when first fleshing out a new Object Store to create tables and columns. Interestingly, you can customize the schema when you perform the Object Store creation task (either via FEM or via the API), but you should not do so on a whim.

Permissions: The Domain object itself is subject to access controls, and so it has a Permissions property. The actual set of access rights available is specific to Domain operations, but it is conceptually similar to access control on other objects.

Domain-level objects: A few types of objects are contained directly within the Domain itself. We'll talk about configuration objects in a minute, but there are a couple of non-configuration objects in the Domain.

AddOns: An AddOn is a bundle of metadata representing the needs of a discrete piece of functionality that is not built into the CE server. Some are provided with the product, and others are provided by third parties. An AddOn must first be created, and it is then available in the GCD for possible installation in one or more Object Stores.

Marking Sets: Marking Sets are a Mandatory Access Control mechanism, described in Security Features and Planning. Individual markings can be applied to objects in an Object Store, but the overall definition resides directly under the Domain so that they may be applied uniformly across all Object Stores.

Configuration objects:

Directories: All CM authentication and authorization ultimately comes down to data obtained from an LDAP directory. Some of those lookups are done by the application server, and some are done directly by the CE server. The directory configuration objects tell the CE server how to communicate with that directory or directories.

Subsystem configurations: There are several logical subsystems within the CE that are controlled by their own flavors of subsystem configuration objects. Examples include trace logging configuration and CSE configuration. These are typically configured at the Domain level and inherited by lower-level topology nodes. A description of topology nodes is coming up in the next section of this article.

Pointers to components:

Content Cache Areas: The Domain contains configuration information for content caches, which are handy for distributed deployments.

Rendition Engines: The Domain contains configuration and connectivity information for separately installed Rendition Engines (sometimes called publishing engines).

Fixed Content Devices: The Domain contains configuration and connectivity information for external devices and federation sources for content.

PE Connection Points and Isolated Regions: The Domain contains configuration and connectivity information for the Process Engine.

Object Stores: The heart of the CE ecosystem is the collection of Object Stores.

Text Search Engine: The Domain contains configuration and connectivity information for a separately installed Content Search Engine.
In addition to the items directly available in the tree view shown above, most of the remaining items contained directly within the Domain are available one way or another in the pop-up panel you get when you right-click on the Domain node in FEM and select Properties.

The pop-up panel's General tab contains FEM version information. The formatting may look a little strange because the CM release number, including any fix packs, and the build number are mapped into the Microsoft scheme for putting version info into DLL properties. In the previous figures, 4.51.0.100 represents CM 4.5.1.0, build 100. That's reinforced by the internal designation of the build number, dap451.100, in parentheses. Luckily, you don't really have to understand this scheme. You may occasionally be asked to report the numbers to IBM support, but faithful copying is all that is required.

Topology levels

There is an explicit hierarchical topology for a Domain. It shows up most frequently when configuring subsystems. For example, CE server trace logging can be configured at any of the topology levels, with the most specific configuration settings being used. What we mean by that should be clearer once we've explained how the topology levels are used. You can see these topology levels in the expanded tree view in the left-hand side of FEM in the following screenshot:

At the highest level of the hierarchy is the Domain, discussed in the previous section. It corresponds to all of the components in the CE part of the CM installation.

Within a Domain are one or more sites. The best way to think of a site is as a portion of a Domain located in a particular geographic area. That matters because networked communications differ in character between geographically separate areas when compared to communications within an area. The difference in character is primarily due to two factors: latency and bandwidth. Latency is a characterization of the amount of time it takes a packet to travel from one end of a connection to the other. It takes longer for a network packet to travel a long distance, both because of the laws of physics and because there will usually be more network switching and routing components in the path. Bandwidth is a characterization of how much information can be carried over a connection in some fixed period of time. Bandwidth is almost always more constrained over long distances due to budgetary or capacity limits. Managing network traffic traveling between geographic areas is an important planning factor for distributed deployments.

A site contains one or more virtual servers. A virtual server is a collection of CE servers that act functionally as if they were a single server (from the point of view of the applications). Most often, this situation comes about through the use of clustering or farming for high availability or load balancing reasons. A site might contain multiple virtual servers for any reason that makes sense to the enterprise. Perhaps, for example, the virtual servers are used to segment different application mixes or user populations.

A virtual server contains one or more servers. A server is a single, addressable CE server instance running in a J2EE application server. These are sometimes referred to as physical servers, but in the 21st century that is often not literally true. In terms of running software, the only things that tangibly exist are individual CE servers. There is no independently running piece of software that is the Domain or GCD.
There is no separate piece of software that is an Object Store (except in the sense that it's a database mediated by the RDBMS software). All CE activity happens in a CE server. There may be other servers running software in CM: Process Engine, Content Search Engine, Rendition Engine, and Application Engine. The previous paragraph is just trying to clarify that there is no piece of running software representing the topology levels other than the server. You don't have to worry about runtime requests being handed off to another level up or down the topological hierarchy.

Not every installation will have the need to distinguish all of those topology levels. In our all-in-one installation, the Domain contains a single site. That site was created automatically during installation and is conventionally called Initial Site, though we could change that if we wanted to. The site contains a single virtual server, and that virtual server contains a single server. This is typical for a development or demo installation, but you should be able to see how it could be expanded with the defined topology levels to any size of deployment, even one that is global in scope. You could use these topology levels for a scheme other than the one just described; the only downside would be that nobody else would understand your deployment terms.

Using topology levels

We mentioned previously that many subsystems can be configured at any of the levels. Although it's most common to do Domain-wide configuration, you might, for example, want to enable trace logging on a single CE server for some troubleshooting purpose. When interpreting subsystem configuration data, the CE server first looks for configuration data for the local CE server (that is, itself). If any is found, it is used. Otherwise, the CE server looks for configuration data for the containing virtual server, then the containing site, and then the Domain. Where present, the most specific configuration data is used. A set of configuration data, if used, is used as the complete configuration. That is, the configuration objects at different topology levels are not blended to create an "effective configuration".

CE has a feature called request forwarding. Because the conversation between the CE server and the database holding an Object Store is chattier than the conversation between CE clients and the CE server, there can be a performance benefit to having requests handled by a CE server that is closer, in networking terms, to that database. When a CE server forwards a request internally to another CE server, it uses a URL configured on a virtual server. The site object holds the configuration options for whether CE servers can forward requests and whether they can accept forwarded requests.

Sites are the containers for content cache areas, text index areas, Rendition Engine connections, storage areas, and Object Stores. That is, each of those things is associated with a specific site.