
How-To Tutorials - Web Development

1802 Articles

Getting Started with Facebook Application Development using ColdFusion/Railo

Packt
19 Jun 2010
5 min read
There are other CFML Facebook articles on the internet, such as Ray Camden's tutorial for ColdFusion 8; however, Facebook continues to innovate and change, and most of those resources are out of date for Facebook's 2010 updates. Things such as "profile boxes" are passé; now you have to work with "Application Tabs". In addition, I have found that there are some general concepts of how Facebook applications work that have not been covered well in other resources.

Why Facebook?
According to statistics, Facebook is currently the third-highest-traffic site in the US (statistics for the rest of the world weren't readily available). The nature of Facebook is that people socialize and look at what other people are doing, which means that if your friends post that they are using certain applications or visiting certain sites, you know about it, and for most of us that's a good enough reason to check it out. That's grassroots marketing, and it works. "The average U.S. Internet user spends more time on Facebook than on Google, Yahoo, YouTube, Microsoft, Wikipedia and Amazon combined." That should tell you something: there is a big market to tap into, and it should answer the question of why Facebook. Even if you don't think Facebook is a useful tool for you, you can't argue with the numbers when it comes to reaching potential customers.

Why CFML with Facebook?
Hopefully your interest in ColdFusion and/or Railo answers this. Since CFML is such an easy-to-learn, powerful, and extensible programming language, it only makes sense that we should be able to build Facebook applications with it. There are always some cautions in making websites talk to each other, and using CFML with Facebook is no different; however, most of these hurdles have already been overcome by others, and you can zip through them by reusing their work. The basic framework for my applications is always the same, and you can use it as your jumping-off point to work on your own applications.
Understanding Data Flow
Facebook is rather unique in how it is structured, and understanding this structure is critical to building applications properly. You will save yourself a lot of frustration by reviewing this section before you begin writing code.

On most websites or web applications, people type in a web address and connect directly to your web server, where your application handles the business logic, database interaction, and any other work, and then returns web content to the requesting user. This is not the case with Facebook. With Facebook applications, users open their web browsers to a Facebook web address (the "Canvas URL"), Facebook's servers make a behind-the-scenes request to your web server (the "Callback URL"), your application responds to Facebook's request, and then Facebook does the final markup and sends the web page content back to the user's browser. In other words, users always interact with Facebook, while Facebook's server is the one that talks to your application. You can also connect back to Facebook via their RESTful API to get information about users, friends, photos, posts, and more.

Here are the important concepts to understand. Your Facebook application code lives on your web server, separate from Facebook. You will get web requests from Facebook on behalf of Facebook users; users should always be interacting with Facebook's website and should never go directly to your web server. The Canvas URL is a Facebook address, which you will set up in the next section. The Callback URL is the root where you put your application files (*.cfc and *.cfm); it is also where you will put your CSS files, images, and anything else your application needs. The Callback URL can be a directory on any web hosting account, so there is no need to set up a separate web host for your Facebook application.
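The round trip described above can be sketched with plain JavaScript stand-ins. These functions are purely illustrative (they are not real Facebook or CFML APIs); they only model who talks to whom:

```javascript
// Your application, living at the Callback URL: it only ever answers
// requests coming from Facebook's servers, never from end users.
function callbackUrlHandler(request) {
  // Business logic runs here; the response is FBML, not finished HTML.
  return "<fb:name uid='" + request.userId + "' /> says hello";
}

// Facebook's side: the user hits the Canvas URL, Facebook makes a
// behind-the-scenes call to your Callback URL, then renders the final
// page before sending it back to the user's browser.
function canvasRequest(userId) {
  var fbml = callbackUrlHandler({ userId: userId }); // server-to-server call
  // Facebook expands FBML tags into finished markup for the browser.
  return fbml.replace(/<fb:name uid='(\d+)' \/>/, "User $1");
}

console.log(canvasRequest(42)); // "User 42 says hello"
```

The key point the sketch captures: the user's browser only ever receives output from `canvasRequest` (Facebook), never directly from `callbackUrlHandler` (your server).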
Setting up a new Facebook application
Generally speaking, setting up a new Facebook application is pretty easy, but there are a few things that can trip you up, and I will highlight them. The first thing to do is log into your Facebook account and authorize the Facebook Developer application by going to this URL: http://apps.facebook.com/developer/

Once you have authorized this application, you will see a link to create a new application. Create a new application and give it a name; fill in the description, icon, and logo if you wish. Click on the Canvas menu option. Enter the Canvas Page URL (this becomes the URL on Facebook's site that you and your users will go to: apps.facebook.com/yourapp). Enter the Callback URL (the full URL to YOUR web server directory where your CFML code will reside). Very important: set the Render Method to "FBML" (which stands for Facebook Markup Language). The other options you can leave at their default values. When you are done, save your changes.

The application summary page will show you some important information, specifically the API Key and Application Secret, which you will need in your application later. Consider using Facebook's "sandbox" mode, which makes your application invisible to the world while you are developing it. Likewise, when you are done with your application, consider using Facebook's application directory to promote it.
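It can help to collect the values from the application summary page in one place in your code. A hypothetical settings object (every value below is a placeholder, not a real key) might look like this:

```javascript
// Hypothetical settings recorded from the Facebook application summary
// page. Replace each placeholder with your own application's values.
var facebookAppSettings = {
  apiKey: "YOUR_API_KEY",                           // from the summary page
  secret: "YOUR_APPLICATION_SECRET",                // from the summary page
  canvasUrl: "http://apps.facebook.com/yourapp/",   // Facebook-side address
  callbackUrl: "http://www.example.com/yourapp/",   // your server directory
  renderMethod: "FBML"                              // must be FBML here
};
```

Keeping these together makes it easy to see at a glance which address is Facebook's (the Canvas URL) and which is yours (the Callback URL).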


Getting Started with jQuery

Packt
18 Jun 2010
3 min read
(For more resources on jQuery, see here.)

jQuery - How it works
To understand how jQuery can ease web client (JavaScript-based) development, one has to understand two aspects of jQuery: its functionalities and its modules. Understanding the functionalities/services provided by jQuery tells you what jQuery provides, and understanding the modules that constitute jQuery tells you how to access those services. Here are the details.

Functionalities
The functionalities provided by jQuery can be classified as follows: selection, attributes handling, element manipulation, Ajax, callbacks, and event handling. Among these, selection, element manipulation, and event handling make common tasks very easy, even trivial, to implement.

Selection
Using this functionality, one can select one or multiple HTML elements. The raw JavaScript equivalents of the selection functionality are document.getElementById('<element id>') and document.getElementsByTagName('<tag name>').

Attributes handling
One of the most common tasks in JavaScript is changing the value of an attribute of a tag. The conventional way is to use getElementById to get the element and then reach the required attribute. jQuery eases this by using the selection and attributes-handling functionality in conjunction.

Element manipulation
There are scenarios where the content of tags needs to be modified, such as rewriting the text of a <p> tag based on a selection from a combo box. That is where the element manipulation functionality of jQuery comes in handy. Using element manipulation, or DOM scripting as it is popularly known, one can not only access a tag but also perform manipulations such as appending child tags to multiple occurrences of a specific tag without using a for loop.

Ajax
Ajax is the concept and implementation that brought the usefulness of JavaScript to the fore. However, it also brought complexities and the boilerplate code required to use Ajax to its full potential.
The Ajax-related functionalities of jQuery encapsulate that boilerplate code and let you concentrate on the result of the Ajax call. The main point to keep in mind is that encapsulation of the setup code does not mean you cannot access the Ajax-related events; jQuery takes care of that too, and you can register for the Ajax events and handle them.

Callbacks
There are many scenarios in web development where you want to initiate a second task once a first task completes. A common example involves animation: if you want to execute a task after an animation completes, you need a callback function. The core of jQuery is implemented in such a way that most of the API supports callbacks.

Event handling
One of the main aspects of JavaScript and its relationship with HTML is that events triggered by form elements can be handled using JavaScript. However, when multiple elements and multiple events come into the picture, the code complexity becomes very hard to handle. The core of jQuery is geared towards handling events in such a way that complexity stays at manageable levels.

Now that we have discussed the main functionalities of jQuery, let us move on to the main modules of jQuery and how the functionalities map onto them.
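The callback pattern described above can be sketched in a few lines of plain JavaScript. Here, fadeOutThen is a made-up stand-in for an animation API such as jQuery's fadeOut(duration, callback); it is not a real library function, and the "animation" finishes instantly so the chaining is easy to see:

```javascript
// Run a task, then invoke a caller-supplied function when it completes.
// This is the essence of the callback support built into jQuery's API.
function fadeOutThen(element, callback) {
  element.opacity = 0;            // the "animation" finishes instantly here
  if (typeof callback === "function") {
    callback(element);            // second task starts only after the first
  }
}

var box = { id: "box", opacity: 1 };
fadeOutThen(box, function (el) {
  el.hidden = true;               // chained task: hide after fading out
});
console.log(box.opacity, box.hidden); // 0 true
```

With real jQuery the same idea looks like $(el).fadeOut(400, function () { ... }), where the function runs only after the fade has finished.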


Security and Disaster Recovery in PrestaShop 1.3

Packt
18 Jun 2010
6 min read
We will do everything possible to make sure our store is not the victim of a successful attack. Fortunately, the PrestaShop team takes security very seriously and issues updates and fixes as soon as possible after any problems are discovered. We just have to make sure we do everything we can, and implement the PrestaShop upgrades as soon as they are available. It is also vital that we always have a recent copy of our store, because one day our shop will probably die on us: it might be a hacker, or maybe we will accidentally muck it up ourselves. With a recent backup, this type of event is a minor inconvenience; without one, it is an expensive catastrophe. So let's get on with it.

Types of security attacks
There are different types of security attacks. Here is a very brief explanation of some of the most common ones. Hopefully, this will make it clear why security is an ongoing and evolving issue, not something that can ever be 100 percent solved out of the box.

Common sense issues
These are often overlooked: make sure your passwords are impossible to guess. Use number sequences that are memorable to you but unguessable and meaningless to everyone else, and combine them with regular letters in a variety of upper and lower case. Don't share your passwords with anyone. This applies to anyone who has access to your shop or hosting account.

Brute force
This is when an attacker uses software to repeatedly attempt to gain access or discover a password by guessing. Clearly, the simplest defence against this is a secure password. A good password is one with upper and lower case characters, apparently random numbers, and words that are not names or in the dictionary. Does your administrator password stand up to these criteria?

SQL injection attack
A malicious person amends, deletes, or retrieves information from your database by cleverly manipulating the forms or database requests contained in the code of PrestaShop.
By appending to legitimate PrestaShop database code, harm can be done or breaches of security can be achieved.

Cross-site scripting
Attackers add instructions to access code on another site. They do this by appending a URL pointing to malicious code to a PHP URL of a legitimate page on your site.

User error
This one is straightforward: it is likely that while developing or amending your website, you will mess up some, or perhaps all, of your PrestaShop. I did it once while writing this article, and I will give you the full details of my slightly embarrassing confession later. So, with so many ways that things can go wrong, we had better start looking at some solutions.

Employees and user security
If you plan to employ someone, or if you have a partner who is going to help in your new shop, it makes good sense to create a new user account so that they have their own login details. Even if only you will use the PrestaShop control panel, there is still a good argument for creating two or more accounts. Here is why. First we will consider a scenario, though a slightly exaggerated one.

Guns4u.com: Guns4u wants to offer articles about how to use its products. The management, probably correctly, believe that in-depth how-tos about all its products will boost sales and increase customer retention. The diverse nature of their products makes employing a single writer impossible: an expert on small arms is rarely an expert on ground-to-air ordnance, and a user of laser targeting equipment probably doesn't know the first thing about ship-based artillery. This is quite a problem. The management decides they need a way to allow a whole team of freelance writers to log in directly to the PrestaShop CMS. But bearing in mind the highly dubious backgrounds some of these writers will have, how can they be trusted in the PrestaShop control panel?

Users of Guns4u.com: Suppose you employ somebody to write articles for you.
You don't really want them to be able to play with product prices or payment modules; you would want to restrict them to the CMS area of the control panel. Similarly, your partner might be helping you wrap and pack your products; to avoid accidents, you might like to restrict them to the Customers and Orders tabs.

Now consider this: even you, after reading this article, can make a mistake. It is a really good idea to create at least one extra user account for yourself. I always make myself a wrapping-and-packing account. I use it all the time, and it is reassuring to know that I can't accidentally click anything that could cause a problem. This type of user security is common in large organisations: on a company intranet, employees will almost always be restricted to the areas of the company system they need, and nothing more.

Below is how to create a new user account; after that, we will look at profiles and permissions to enforce the restrictions and permissions suitable for us. Okay, let's create a new user.

Time for action – creating users
As you have come to expect, this is really easy. Click on the Employees tab and then click on the Add new link. Enter the Last name, First name, and E-mail address of your new employee or user. The status box enables you to allow or disallow access for the new employee; unless you have a reason for creating an account and not letting them use it, select the check mark (Allow). If you ever have reason to stop your new employee or user from accessing your control panel, simply come back here and click on the cross. In the Profile drop-down box, choose Administrator; this will give the new user full access. We will investigate when this is a good idea and when you might like to change it, for example if you would like to add our freelance writer next. Click the Save button to create the new user account.
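The password criteria described under "Brute force" can be sketched as a simple check. This is only an illustration of the rules from the article (mixed case, digits, no plain dictionary word); the tiny dictionary here is a made-up sample, not a real word list:

```javascript
// A rough strength check following the article's criteria: upper and
// lower case characters, digits, and not a plain dictionary word.
var sampleDictionary = ["password", "admin", "letmein", "shop"]; // illustrative only

function looksStrong(password) {
  var hasUpper = /[A-Z]/.test(password);
  var hasLower = /[a-z]/.test(password);
  var hasDigit = /[0-9]/.test(password);
  var isDictionaryWord = sampleDictionary.indexOf(password.toLowerCase()) !== -1;
  return hasUpper && hasLower && hasDigit && !isDictionaryWord;
}

console.log(looksStrong("password"));    // false: dictionary word, no digits
console.log(looksStrong("k9TrelliS42")); // true: mixed case, digits, not in the list
```

A real shop would go further (minimum length, rate-limiting login attempts), but even this level of checking rules out the passwords brute-force tools try first.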


Build iPhone, Android and iPad Applications using jQTouch

Packt
18 Jun 2010
12 min read
jQuery is a JavaScript framework that simplifies the JavaScript development life cycle for web applications. Its greatest strength comes from its ease of use and the huge number of plugins available; as a result, JavaScript developers have access to a large number of enterprise components such as sortable tables and editable tables with Ajax, as well as web application components for animation and data manipulation. One such plugin with very powerful effects is jQTouch. This plugin can be used by any web application developer with a little jQuery experience to build applications for iPhone, iPad, and Android devices. For now, just to get a feel, you can point your internet-enabled iPad, iPhone, or Android device to http://www.afrovisiongroup.com/twigle and test the application. Other examples of applications that could be developed using jQTouch include Gmail for the iPad or Facebook Touch.

Getting Started
Before we start using jQTouch, I would like to put across a few facts about it. jQTouch is a plugin for jQuery, which means it only enhances jQuery to build smartphone applications that support swiping and all the other touch gestures. Before you begin development with jQTouch, I would suggest you get comfortable with jQuery.
jQTouch applications are not developed like regular web applications, where an index page is loaded with links that lead to other pages and each page is loaded from the server every time a visitor clicks a link. With jQTouch, all the pages are loaded once inside index.html, and each page is represented as a separate div element in the index page. For example, the HTML snippet <div id='page_name'>content</div> represents a page in your jQTouch application, and a link to that page looks like this: <a href='#page_name'>link to page name</a>. You can have as many pages as you want, with all the pages having links to other pages inside the index.html file, but remember that all of this is stored in one single file, index.html. The link clicks and navigation actions are implemented using JavaScript built into jQTouch. You will get to understand this as we implement twigle. Let's first get to know more about twigle: it is a Twitter search application for smartphones, loaded from the web. We will use jQTouch for client-side development, jQuery Ajax for the server-side communication, and PHP in the backend to get the search results from the Twitter Search API.

jQTouch comes with JavaScript files and CSS files with themes, which define the look and feel of the application. You won't have to bother about the design, as the plugin already comes with predefined styles and graphics that you can use as a base and extend further to create your own unique look. There are two themes that come with the plugin: the apple theme and the jqt theme. Just as the name implies, the apple theme looks and feels like native iPhone OS apps. The plugin styles are predefined for the toolbar, rounded buttons, and so on; you will discover this as we move on. jQTouch applications are basically developed in a single file, usually index.html, which contains the HTML code, the JavaScript code, and the styling.
Everything in your application happens inside this file, which gets loaded into your smartphone once, like Gmail and the other Google applications. For example:

[code]
<html>
<head>
</head>
<body>
<div id='home'>
  <div class='toolbar'>
    <h1>Home Page</h1>
  </div>
  <div>
    this is the home page
  </div>
</div>
</body>
</html>
[/code]

The above HTML code produces a plain, unstyled page; after installing and initializing the jQTouch plugin with the apple theme, the same markup comes out styled. Notice how the <div class='toolbar'><h1>Home Page</h1></div> gets styled into the iPhone or iPad toolbar's look and feel. On the whole, the page now looks more or less like a native iPhone application.

Developing with jQTouch
To develop your iPhone OS or Android OS applications with jQTouch, you need the jQuery and jQTouch libraries, which you can download from http://www.jqtouch.com/. Next, get your favorite code editor (Dreamweaver, Notepad++, etc.) and we can get started. Remember, we are going to look at how to develop an application like twigle here; you can check out the demo at http://www.afrovisiongroup.com/twigle.

Let's get to work. Create a folder on your local web server directory called twigle. Download the jQTouch package and unzip it into the twigle folder. This will give you the following structure:

twigle/demos (sample applications; you can look at the source to learn more about these)
/extensions (jQTouch extensions, which are like its own plugins)
/jqtouch (the JavaScript and CSS files needed for jQTouch to work)
/themes (the theme files; you can create your own themes too)
/license.txt
/readme.txt
/sample.htaccess

Now we create two files in the twigle folder: index.html and twigle.php. The index.html will hold our application views (pages represented as HTML div tags), and twigle.php will be our business-logic backend that connects the Twitter API to our index.html front end. JavaScript and Ajax communication sits between index.html and twigle.php to load Twitter search results for any given search request. Paste the following code into the index.html file:

[code]
<!doctype html>
<html>
<head>
<meta charset="UTF-8" />
</head>
<body>
<div id="home">
  <div class="toolbar">
    <h1>TWIGLE</h1>
    <a href="#info" class="button leftButton flip">Info</a>
    <!-- <a href="#search_results" class="button add slideup">+</a> -->
  </div>
  <form id="search">
    <ul class="rounded">
      <li id="notice">Type your search term below and hit search twitter</li>
      <li><input type="text" id="keyword" name="keyword" placeholder="type your search term here"></li>
    </ul>
    <a href="#" class="whiteButton submit">SEARCH TWITTER</a>
  </form>
</div>
<div id="info">
  <div class="toolbar">
    <a href="#home" class="button leftButton flip">back</a>
    <h1>TWIGLE BY mambenanje</h1>
  </div>
  <div>
    <ul class='rounded'>
      <li>mambenanje is CEO of AfroVisioN Group - www.afrovisiongroup.com<br />
      And TWIGLE is a tutorial he did for packtpub.com</li>
      <li>TWIGLE runs on iPhone and Android because its powered by jqtouch and it helps users search twitter from their internet connected handhelds</li>
    </ul>
  </div>
</div>
<div id="search_results">
  <div class="toolbar">
    <a href="#home" class="button leftButton flip">back</a>
    <h1 id="search_title">Search results</h1>
  </div>
  <div>
    <ul class="rounded" id="results">
    </ul>
  </div>
</div>
</body>
</html>
[/code]

That's the DOM structure for our application. Taking a close look at it, you will see three main div siblings of the <body> tag. These divs represent the pages our application will have, and only one of these divs appears at a time in a jQTouch application. Note the toolbar class used inside each of those divs to represent the app view's toolbar (title bar plus menu) on every page. The <ul class='rounded'> is also needed to represent the rounded list items typical of iPhone applications. So, in summary, our application has three pages: home, info, and search_results.
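The one-page-at-a-time behaviour can be modelled with a toy sketch. This is not jQTouch's actual implementation (jQTouch adds animated transitions and history handling); it only models the bookkeeping idea of swapping which div is visible:

```javascript
// Toy model of jQTouch navigation: every page is a div inside
// index.html, and following a #link just changes which one is shown.
var pages = {
  home: { visible: true },
  info: { visible: false },
  search_results: { visible: false }
};

function goTo(pageName) {
  Object.keys(pages).forEach(function (name) {
    pages[name].visible = (name === pageName); // exactly one page visible
  });
}

goTo("info"); // like tapping <a href="#info">...</a>
console.log(pages.home.visible, pages.info.visible); // false true
```

No round trip to the server happens when "navigating"; that is why all three pages must already be present in index.html.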
Let's explain the DOM for every page.

Home:

[code]
<div id="home">
  <div class="toolbar">
    <h1>TWIGLE</h1>
    <a href="#info" class="button leftButton flip">Info</a>
    <!-- <a href="#search_results" class="button add slideup">+</a> -->
  </div>
  <form id="search">
    <ul class="rounded">
      <li id="notice">Type your search term below and hit search twitter</li>
      <li><input type="text" id="keyword" name="keyword" placeholder="type your search term here"></li>
    </ul>
    <a href="#" class="whiteButton submit">SEARCH TWITTER</a>
  </form>
</div>
[/code]

The home page has a toolbar that contains the TWIGLE heading, along with a jQTouch button that is left-aligned and, when clicked, flips to the next page, Info. The other button, which leads to the search_results page, is commented out using HTML comments; it is there to show that you can add more buttons to the toolbar. Next is the form with the id search. This is how jQTouch works with forms: there is no action or method attribute, because the form submission is done via JavaScript, which will be explained later. The rest is the instruction text and the keyword input field. Look closely at the search twitter button: it is not a typical input button, but an anchor tag styled with jQTouch theme classes that tell jQTouch this is a white button. It is responsible for initiating the form submission. The home page is the most important page in this application, as it contains the form, and like every home page it is also the welcome page of the application.
The Info page:

[code]
<div id="info">
  <div class="toolbar">
    <a href="#home" class="button leftButton flip">back</a>
    <h1>TWIGLE BY mambenanje</h1>
  </div>
  <div>
    <ul class='rounded'>
      <li>mambenanje is CEO of AfroVisioN Group - www.afrovisiongroup.com<br />
      And TWIGLE is a tutorial he did for packtpub.com</li>
      <li>TWIGLE runs on iPhone and Android because its powered by jqtouch and it helps users search twitter from their internet connected handhelds</li>
    </ul>
  </div>
</div>
[/code]

It is a tradition in software development to always have an about page for the software, and iPhone/Android apps are no exception. The info page was created to give users of the twigle application an idea of how this application came about. Look closely at the toolbar: it contains a link that leads to the home page, is styled to appear like a button, and flips to the home page when clicked. The rest is just text presented in rounded lists.


Distributed transaction using WCF

Packt
17 Jun 2010
12 min read
(Read more interesting articles on WCF 4.0 here.)

Creating the DistNorthwind solution
In this article, we will create a new solution based on the LINQNorthwind solution. We will copy all of the source code from the LINQNorthwind directory to a new directory and then customize it to suit our needs. The steps here are very similar to the steps in the previous chapter, when we created the LINQNorthwind solution; please refer to the previous chapter for diagrams. Follow these steps to create the new solution.

Create a new directory named DistNorthwind under the existing C:\SOAWithWCFandLINQ\Projects directory. Copy all of the files under the C:\SOAWithWCFandLINQ\Projects\LINQNorthwind directory to the C:\SOAWithWCFandLINQ\Projects\DistNorthwind directory. Remove the folder LINQNorthwindClient; we will create a new client for this solution. Change all the folder names under the new folder, DistNorthwind, from LINQNorthwindxxx to DistNorthwindxxx. Change the solution files' names from LINQNorthwind.sln to DistNorthwind.sln, and from LINQNorthwind.suo to DistNorthwind.suo.

Now we have the file structure ready for the new solution, but all the file contents and the solution structure are still for the old solution. Next we need to change them to work for the new solution. We will first change all the related WCF service files; once we have the service up and running, we will create a new client to test it.

Start Visual Studio 2010 and open this solution: C:\SOAWithWCFandLINQ\Projects\DistNorthwind\DistNorthwind.sln. Click on the OK button to close the "projects were not loaded correctly" warning dialog. From Solution Explorer, remove all five projects (they should all be unavailable). Right-click on the solution item and select Add | Existing Projects… to add these four projects to the solution.
Note that these are the projects under the DistNorthwind folder, not under the LINQNorthwind folder: LINQNorthwindEntities.csproj, LINQNorthwindDAL.csproj, LINQNorthwindLogic.csproj, and LINQNorthwindService.csproj. In Solution Explorer, change all four projects' names from LINQNorthwindxxx to DistNorthwindxxx. In Solution Explorer, right-click on each project, select Properties (or select the menu Project | DistNorthwindxxx Properties), then change the Assembly name from LINQNorthwindxxx to DistNorthwindxxx, and change the Default namespace from MyWCFServices.LINQNorthwindxxx to MyWCFServices.DistNorthwindxxx. Open the following files and change the word LINQNorthwind to DistNorthwind wherever it occurs: ProductEntity.cs, ProductDAO.cs, ProductLogic.cs, IProductService.cs, and ProductService.cs. Open the file app.config in the DistNorthwindService project and change the word LINQNorthwind to DistNorthwind in this file. The screenshot below shows the final structure of the new solution, DistNorthwind.

Now we have finished modifying the service projects. If you build the solution now, you should see no errors. You can set the service project as the startup project and run the program.

Hosting the WCF service in IIS
The WCF service is currently hosted within the WCF Service Host. We had to start the WCF Service Host before we ran our test client; not only do you have to start the WCF Service Host, you also have to start the WCF Test Client and leave it open. This is not that nice. In addition, we will add another service later in this article to test distributed transaction support with two databases, and it is not that easy to host two services with one WCF Service Host. So, in this article, we will first decouple our WCF service from Visual Studio and host it in IIS. You can follow these steps to host this WCF service in IIS. In Windows Explorer, go to the directory C:\SOAWithWCFandLINQ\Projects\DistNorthwind\DistNorthwindService.
Within this folder, create a new text file, ProductService.svc, containing the following single line of code:

<%@ServiceHost Service="MyWCFServices.DistNorthwindService.ProductService"%>

Again within this folder, copy the file App.config to Web.config and remove the following lines from the new Web.config file:

<host>
  <baseAddresses>
    <add baseAddress="http://localhost:8080/Design_Time_Addresses/MyWCFServices/DistNorthwindService/ProductService/" />
  </baseAddresses>
</host>

Now open IIS Manager, add a new application, DistNorthwindService, and set its physical path to C:\SOAWithWCFandLINQ\Projects\DistNorthwind\DistNorthwindService. If you choose to use the default application pool, DefaultAppPool, make sure it is a .NET 4.0 application pool. If you are using Windows XP, you can instead create a new virtual directory, DistNorthwindService, set its local path to the above directory, and make sure its ASP.NET version is 4.0.

From Visual Studio, in Solution Explorer, right-click on the project item DistNorthwindService, select Properties, click on the Build Events tab, and enter the following command in the Post-build event command line box:

copy .\*.* ..\

With this post-build event command line, whenever DistNorthwindService is rebuilt, the service binary files will be copied to the C:\SOAWithWCFandLINQ\Projects\DistNorthwind\DistNorthwindService\bin directory, so that the service hosted in IIS is always up to date. From Visual Studio, in Solution Explorer, right-click on the project item DistNorthwindService and select Rebuild. Now you have finished setting up the service to be hosted in IIS.
Open Internet Explorer and go to the following address; you should see the ProductService description in the browser: http://localhost/DistNorthwindService/ProductService.svc

Testing the transaction behavior of the WCF service

Before explaining how to enhance this WCF service to support distributed transactions, we will first confirm that the existing WCF service doesn't support them. In this article, we will test the following scenarios:

- Create a WPF client to call the service twice in one method. The first service call should succeed and the second service call should fail. Verify that the update in the first service call has been committed to the database, which means that the WCF service does not support distributed transactions.
- Wrap the two service calls in one TransactionScope and redo the test. Verify that the update in the first service call has still been committed to the database, which means the WCF service does not support distributed transactions even if both service calls are within one transaction scope.
- Add support for a second database to the WCF service. Modify the client to update both databases in one method. The first update should succeed and the second update should fail. Verify that the first update has been committed to the database, which means the WCF service does not support distributed transactions with multiple databases.

Creating a client to call the WCF service sequentially

The first scenario to test is that within one method of the client application two service calls will be made, and one of them will fail. We then verify whether the update in the successful service call has been committed to the database. If it has been, it will mean that the two service calls are not within a single atomic transaction, and will indicate that the WCF service doesn't support distributed transactions.
You can follow these steps to create a WPF client for this test case: In Solution Explorer, right-click on the solution item and select Add | New Project… from the context menu. Select Visual C# | WPF Application as the template, enter DistNorthwindWPF as the Name, and click on the OK button to create the new client project. Now the new test client should have been created and added to the solution. Let's follow these steps to customize this client so that we can call ProductService twice within one method and test the distributed transaction support of this WCF service: On the WPF MainWindow designer surface, add the following controls (you can double-click on the MainWindow.xaml item to open this window; make sure you are in design mode, not XAML mode):

- A label with Content Product ID
- Two textboxes named txtProductID1 and txtProductID2
- A button named btnGetProduct with Content Get Product Details
- A separator to separate the above controls from those below
- Two labels with Content Product1 Details and Product2 Details
- Two textboxes named txtProduct1Details and txtProduct2Details, with the following properties: AcceptsReturn: checked; Background: Beige; HorizontalScrollbarVisibility: Auto; VerticalScrollbarVisibility: Auto; IsReadOnly: checked
- Another separator
- A label with Content New Price
- Two textboxes named txtNewPrice1 and txtNewPrice2
- A button named btnUpdatePrice with Content Update Price
- Another separator
- Two labels with Content Update1 Results and Update2 Results
- Two textboxes named txtUpdate1Results and txtUpdate2Results, with the same properties as txtProduct1Details and txtProduct2Details

Your MainWindow design surface should look like the following screenshot. In Solution Explorer, right-click on the DistNorthwindWPF project item, select Add Service Reference… and add a service
reference of the product service to the project. The namespace of this service reference should be ProductServiceProxy, and the URL of the product service should be like this: http://localhost/DistNorthwindService/ProductService.svc

On the MainWindow.xaml designer surface, double-click on the Get Product Details button to create an event handler for this button. In the MainWindow.xaml.cs file, add the following using statement:

using DistNorthwindWPF.ProductServiceProxy;

Again in the MainWindow.xaml.cs file, add the following two class members:

Product product1, product2;

Now add the following method to the MainWindow.xaml.cs file:

private string GetProduct(TextBox txtProductID, ref Product product)
{
    string result = "";
    try
    {
        int productID = Int32.Parse(txtProductID.Text.ToString());
        ProductServiceClient client = new ProductServiceClient();
        product = client.GetProduct(productID);
        StringBuilder sb = new StringBuilder();
        sb.Append("ProductID:" + product.ProductID.ToString() + "\n");
        sb.Append("ProductName:" + product.ProductName + "\n");
        sb.Append("UnitPrice:" + product.UnitPrice.ToString() + "\n");
        sb.Append("RowVersion:");
        foreach (var x in product.RowVersion.AsEnumerable())
        {
            sb.Append(x.ToString());
            sb.Append(" ");
        }
        result = sb.ToString();
    }
    catch (Exception ex)
    {
        result = "Exception: " + ex.Message.ToString();
    }
    return result;
}

This method will call the product service to retrieve a product from the database, format the product details into a string, and return the string. This string will be displayed on the screen. The product object will also be returned so that later on we can reuse this object to update the price of the product.
Inside the event handler of the Get Product Details button, add the following two lines of code to get and display the product details:

txtProduct1Details.Text = GetProduct(txtProductID1, ref product1);
txtProduct2Details.Text = GetProduct(txtProductID2, ref product2);

Now we have finished adding code to retrieve products from the database through the Product WCF service. Set DistNorthwindWPF as the startup project, press Ctrl + F5 to start the WPF test client, enter 30 and 31 as the product IDs, and then click on the Get Product Details button. You should get a window like this image. To update the prices of these two products, follow these steps to add the code to the project: On the MainWindow.xaml design surface, double-click on the Update Price button to add an event handler for this button. Add the following method to the MainWindow.xaml.cs file:

private string UpdatePrice(
    TextBox txtNewPrice,
    ref Product product,
    ref bool updateResult)
{
    string result = "";
    try
    {
        product.UnitPrice = Decimal.Parse(txtNewPrice.Text.ToString());
        ProductServiceClient client = new ProductServiceClient();
        updateResult = client.UpdateProduct(ref product);
        StringBuilder sb = new StringBuilder();
        if (updateResult == true)
        {
            sb.Append("Price updated to ");
            sb.Append(txtNewPrice.Text.ToString());
            sb.Append("\n");
            sb.Append("Update result:");
            sb.Append(updateResult.ToString());
            sb.Append("\n");
            sb.Append("New RowVersion:");
        }
        else
        {
            sb.Append("Price not updated to ");
            sb.Append(txtNewPrice.Text.ToString());
            sb.Append("\n");
            sb.Append("Update result:");
            sb.Append(updateResult.ToString());
            sb.Append("\n");
            sb.Append("Old RowVersion:");
        }
        foreach (var x in product.RowVersion.AsEnumerable())
        {
            sb.Append(x.ToString());
            sb.Append(" ");
        }
        result = sb.ToString();
    }
    catch (Exception ex)
    {
        result = "Exception: " + ex.Message;
    }
    return result;
}

This method will call the product service to update the price of a product in the database.
The update result will be formatted and returned so that later on we can display it. The updated product object with the new RowVersion will also be returned so that later on we can update the price of the same product again and again. Inside the event handler of the Update Price button, add the following code to update the product prices:

if (product1 == null)
{
    txtUpdate1Results.Text = "Get product details first";
}
else if (product2 == null)
{
    txtUpdate2Results.Text = "Get product details first";
}
else
{
    bool update1Result = false, update2Result = false;
    txtUpdate1Results.Text = UpdatePrice(
        txtNewPrice1, ref product1, ref update1Result);
    txtUpdate2Results.Text = UpdatePrice(
        txtNewPrice2, ref product2, ref update2Result);
}

Testing the sequential calls to the WCF service

Let's run the program now to test the distributed transaction support of the WCF service. We will first update two products with two valid prices to make sure our code works with normal use cases. Then we will update one product with a valid price and another with an invalid price. We will verify that the update with the valid price has been committed to the database, regardless of the failure of the other update. Let's follow these steps for this test: Press Ctrl + F5 to start the program. Enter 30 and 31 as product IDs in the top two textboxes and click on the Get Product Details button to retrieve the two products. Note that the prices for these two products are 25.89 and 12.5 respectively. Enter 26.89 and 13.5 as new prices in the middle two textboxes and click on the Update Price button to update these two products. The update results are true for both updates, as shown in the following screenshot. Now enter 27.89 and -14.5 as new prices in the middle two textboxes and click on the Update Price button to update these two products. This time the update result for product 30 is still True, but for the second update the result is False.
Click on the Get Product Details button again to refresh the product prices so that we can verify the update results. We know that the second service call should fail so the second update should not be committed to the database. From the test result we know this is true (the second product price didn't change). However from the test result we also know that the first update in the first service call has been committed to the database (the first product price has been changed). This means that the first call to the service is not rolled back even when a subsequent service call has failed. Therefore each service call is in a separate standalone transaction. In other words, the two sequential service calls are not within one distributed transaction.
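The second scenario in the test plan above wraps the two service calls in one TransactionScope. As an illustrative sketch only (not the article's code; it reuses the UpdatePrice method shown earlier), the wrapped event handler could look like this:

```csharp
// Sketch: wrapping both sequential service calls in a single
// System.Transactions.TransactionScope. As the second test scenario
// described above shows, this alone is NOT enough for atomicity --
// the WCF binding and service must also be configured to flow
// transactions.
using System.Transactions;

bool update1Result = false, update2Result = false;
using (TransactionScope scope = new TransactionScope())
{
    txtUpdate1Results.Text = UpdatePrice(
        txtNewPrice1, ref product1, ref update1Result);
    txtUpdate2Results.Text = UpdatePrice(
        txtNewPrice2, ref product2, ref update2Result);

    // Complete the transaction only if both updates succeeded;
    // otherwise disposing the scope rolls it back.
    if (update1Result && update2Result)
    {
        scope.Complete();
    }
}
```

Running this version and verifying the database again lets you confirm whether the transaction scope alone changes the outcome.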

Checkbox Persistence in Tabular Forms (Reports)
Packt
17 Jun 2010
7 min read
One of the problems we face with Tabular Forms is that pagination doesn't submit the current view of the Tabular Form (Report) page, and if we are using Partial Page Refresh (PPR), it doesn't even reload the entire page. As such, Session State is not saved before we move to the next/previous view. Without saving Session State, all the changes we might have made to the current form view are lost when we paginate. This problematic behavior is most noticeable when we are using a checkboxes column in our Tabular Form (Report). We can mark specific checkboxes in the current Tabular Form (Report) view, but if we paginate to another view and then return, the marked checkboxes will be cleared (no Session State, no history to rely on). In some cases, it can be very useful to save the marked checkboxes while paginating through the Tabular Form (Report). Joel Kallman, from the APEX development team, blogged about this issue (http://joelkallman.blogspot.com/2008/03/preserving-checked-checkboxes-in-report.html) and offered a simple solution, which uses AJAX and APEX collections. Using APEX collections means that the marked checkboxes will be preserved for the duration of a specific user's current APEX session. If that's what you need, Joel's solution is very good, as it utilizes built-in APEX resources in an optimal way. However, sometimes the current APEX session is not persistent enough. In one of my applications I needed more lasting persistence, which could be shared across APEX users and sessions. So, I took Joel's idea and modified it a bit. Instead of using APEX collections, I decided to save the checked checkboxes into a database table. The database table, of course, can support unlimited persistence across users.

Report on CUSTOMERS

We are going to use a simple report on the CUSTOMERS table, where the first column is a checkboxes column.
The following is a screenshot of the report region. We are going to use AJAX to preserve the status of the checkboxes in the following scenarios:

- Using the checkbox in the header of the first column to check or clear all the checkboxes in the first column of the current report view
- Checking or clearing an individual row's checkbox

The first column—the checkboxes column—represents the CUST_ID column of the CUSTOMERS table, and we are going to implement persistence by saving the values of this column, for all the checked rows, in a table called CUSTOMERS_VIP. This table includes only one column:

CREATE TABLE "CUSTOMERS_VIP" (
  "CUST_ID" NUMBER(7,0) NOT NULL ENABLE,
  CONSTRAINT "CUSTOMERS_VIP_PK" PRIMARY KEY ("CUST_ID") ENABLE
)

Bear in mind: in this particular example we are talking about persistence across APEX users and sessions. If, however, you need to maintain specific user-level persistence, as happens natively when using APEX collections, you can add a second column to the table to hold the APP_USER of the user. In this case, you'll need to amend the appropriate WHERE clauses and the INSERT statements to include and reflect the second column.

The report SQL query

The following is the SQL code used for the report:

SELECT apex_item.checkbox(10, l.cust_id, 'onclick=updateCB(this);', r.cust_id) as cust_id,
       l.cust_name,
       l.cust_address1,
       l.cust_address2,
       l.cust_city,
       l.cust_zip_code,
       (select r1.sname from states r1 where l.cust_state = r1.code) state,
       (select r2.cname from countries r2 where l.cust_country = r2.code) country
FROM customers l, customers_vip r
WHERE r.cust_id (+) = l.cust_id
ORDER BY cust_name

The bold segments of the SELECT statement are the ones we are most interested in. The APEX_ITEM.CHECKBOX function creates a checkboxes column in the report. Its third parameter—p_attributes—allows us to define HTML attributes within the checkbox <input> tag. We are using this parameter to attach an onclick event to every checkbox in the column.
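To make the user-level variant mentioned above concrete, here is a hypothetical sketch of the two-column table; the APP_USER column name, its size, and the composite key are assumptions for illustration, not the article's code:

```sql
-- Hypothetical user-scoped version of CUSTOMERS_VIP:
-- one row per (customer, APEX user) pair.
CREATE TABLE "CUSTOMERS_VIP" (
  "CUST_ID"  NUMBER(7,0)   NOT NULL ENABLE,
  "APP_USER" VARCHAR2(255) NOT NULL ENABLE,
  CONSTRAINT "CUSTOMERS_VIP_PK"
    PRIMARY KEY ("CUST_ID", "APP_USER") ENABLE
);

-- The report's outer join would then also filter on the current user:
--   FROM customers l, customers_vip r
--   WHERE r.cust_id (+) = l.cust_id
--     AND r.app_user (+) = :APP_USER
```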
The event fires a JavaScript function—updateCB(this)—which takes the current checkbox object as a parameter and initiates an AJAX process. The fourth parameter of the APEX_ITEM.CHECKBOX function—p_checked_values—allows us to determine the initial status of the checkbox. If the value of this parameter is equal to the value of the checkbox (determined by the second parameter—p_value), the checkbox will be checked. This parameter is the heart of the solution. Its value is taken from the CUSTOMERS_VIP table, using an outer join with the value of the checkbox. The outcome is that every time the CUSTOMERS_VIP table contains a CUST_ID value equal to the current checkbox value, this checkbox will be checked.

The report headers

In the Report Attributes tab we can set the report headers using the Custom option. We are going to use this option to set friendlier report headers, but mostly to define the first column header—a checkbox that allows us to toggle the status of all the column checkboxes. The full HTML code we are using for the header of the first column is:

<input type="checkbox" id="CB" onclick="toggleAll(this,10);" title="Mark/Clear All">

We are actually creating a checkbox, with an ID of CB and an onclick event that fires the JavaScript function toggleAll(this,10). The first parameter of this function is a reference to the checkbox object, and the second one is the first parameter—p_idx—of the APEX_ITEM.CHECKBOX function we are using to create the checkbox column.

The AJAX client-side JavaScript functions

So far, we have mentioned two JavaScript functions that initiate an AJAX call. The first—updateCB()—initiates an AJAX call that updates the CUSTOMERS_VIP table according to the status of a single (row) checkbox. The second one—toggleAll()—initiates an AJAX call that updates the CUSTOMERS_VIP table according to the status of the entire checkboxes column. Let's review these functions.
The updateCB() JavaScript function

The following is the code of this function:

function updateCB(pItem){
  var get = new htmldb_Get(null, $v('pFlowId'),
                           'APPLICATION_PROCESS=update_CB', $v('pFlowStepId'));
  get.addParam('x01', pItem.value);
  get.addParam('x02', pItem.checked);
  get.GetAsync(function(){return;});
  get = null;
}

The function accepts, as a parameter, a reference to an object—this—that points to the checkbox we just clicked. We are using this reference to set the temporary item x01 to the value of the checkbox and x02 to its status (checked/unchecked). As we are using the AJAX-related temporary items, we use the addParam() method to do so. These items will be available to us in the on-demand PL/SQL process update_CB, which implements the server-side logic of this AJAX call. We stated this process in the third parameter of the htmldb_Get constructor function—'APPLICATION_PROCESS=update_CB'. In this example, we are using the name 'get' for the variable referencing the new instance of the htmldb_Get object. The use of this name is very common in many AJAX examples, especially on the OTN APEX forum and its related examples. As we'll see when we review the server-side logic of this AJAX call, all it does is update—insert into or delete from—the content of the CUSTOMERS_VIP table. As such, it doesn't have an immediate effect on the client side, and we don't need to wait for its result. This is a classic case for using an asynchronous AJAX call. We do so by using the GetAsync() method. In this specific case, as the client side doesn't need to process any server response, we can use an empty function as the GetAsync() parameter.
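The matching server-side process is not included in this excerpt. Based on the description above (insert or delete a CUST_ID according to the checkbox status carried in x01/x02), a minimal sketch of what the on-demand process update_CB could look like follows; treat the exact statements as assumptions rather than the article's actual code:

```sql
-- Hypothetical body for the on-demand PL/SQL process update_CB.
-- x01 carries the checkbox value (CUST_ID); x02 carries 'true'/'false'.
DECLARE
  v_cust_id NUMBER := TO_NUMBER(apex_application.g_x01);
BEGIN
  IF apex_application.g_x02 = 'true' THEN
    -- Checkbox checked: remember this CUST_ID (skip if already stored)
    INSERT INTO customers_vip (cust_id)
      SELECT v_cust_id FROM dual
      WHERE NOT EXISTS
        (SELECT 1 FROM customers_vip WHERE cust_id = v_cust_id);
  ELSE
    -- Checkbox cleared: forget this CUST_ID
    DELETE FROM customers_vip WHERE cust_id = v_cust_id;
  END IF;
END;
```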
Improving Plone 3 Product Performance
Packt
11 Jun 2010
7 min read
Introduction

CMS Plone provides: a means of adding, editing, and managing content; a database to store content; and a mechanism to serve content in HTML or other formats. Fortunately, it also supplies the tools to do all these things in an incredibly easy and powerful way. For example, content producers can create a new article without worrying how it will look or what other information will surround the main information. To do this, Plone must compose a single HTML output file (if we are talking from a web browser viewpoint) by joining and rendering several sources of data according to the place, importance, and target they are meant for. As it is built upon the Zope application server, all these jobs are easy for Plone. However, they have a tremendous impact as far as work and performance go. If enough care is not taken, a whole website could be stuck due to a couple of user requests. In this article, we'll look at various performance improvements and how to measure these enhancements. We are not going to make a comprehensive review of all the options to tweak or set up a Zope-based web application, like configuring a proxy cache or a load balancer. There are lots of places, maybe too many, where you can find information about these topics. We invite you to read these articles and tutorials and subscribe to or visit the Zope and Plone mailing lists:

- http://projects.zestsoftware.nl/guidelines/guidelines/caching/caching1_background.html
- http://plone.org/documentation/tutorial/buildout/a-deployment-configuration/
- http://plone.org/documentation/tutorial/optimizing-plone

Installing CacheFu with a policy product

When a user requests HTML pages from a website, many things can be expressed about the downloaded files by setting special headers in the HTTP response.
If managed cautiously, the server can save lots of time and, consequently, work by telling the browser how to store and reuse many of the resources it has got. CacheFu is the Plone add-on product that streamlines HTTP header handling in order to obtain the required performance. We could add a couple of lines to the buildout.cfg file to download and install CacheFu, and then add some code in our end-user content type products (pox.video and Products.poxContentTypes) to configure CacheFu properly to deliver them in an efficient way. However, if we did so, we would be forcing these products to automatically install CacheFu, even if we were testing them in a development environment. To prevent this, we are going to create a policy product and add some code to install and configure CacheFu. A policy product is a regular package that takes care of general customizations to meet customer requirements. For information on how to create a policy product, see Creating a policy product.

Getting ready

To achieve this we'll use pox.policy, the policy product created in Creating a policy product.

How to do it...

1. Automatically fetch dependencies of the policy product: open setup.py in the root pox.policy folder and modify the install_requires variable of the setup call:

setup(name='pox.policy',
      ...
      install_requires=['setuptools',
                        # -*- Extra requirements: -*-
                        'Products.CacheSetup',
                        ],

2. Install dependencies during policy product installation. In the profiles/default folder, modify the metadata.xml file:

<?xml version="1.0"?>
<metadata>
  <version>1</version>
  <dependencies>
    <dependency>profile-Products.CacheSetup:default</dependency>
  </dependencies>
</metadata>

You could also add here all the other products you plan to install as dependencies, instead of adding them individually in the buildout.cfg file.

3. Configure products during the policy product installation.
Our policy product already has a <genericsetup:importStep /> directive in its main component configuration file (configure.zcml). This import step tells GenericSetup to process a method in the setuphandlers module (we could have several steps, each of them with a matching method). Then modify the setupVarious method to do what we want, that is, to apply some settings to CacheFu:

from zope.app.component.hooks import getSite
from Products.CMFCore.utils import getToolByName
from config import *

def setupVarious(context):
    if context.readDataFile('pox.policy_various.txt') is None:
        return
    portal = getSite()
    # perform custom operations
    # Get portal_cache_settings (from CacheFu) and
    # update plone-content-types rule
    pcs = getToolByName(portal, 'portal_cache_settings')
    rules = pcs.getRules()
    rule = getattr(rules, 'plone-content-types')
    rule.setContentTypes(list(rule.getContentTypes()) + CACHED_CONTENT)

The above code has been shortened for clarity's sake; check the accompanying code bundle for the full version. Add or update a config.py file in your package with all configuration options:

# Content types that should be cached in plone-content-types
# rule of CacheFu
CACHED_CONTENT = ['XNewsItem',
                  'Video',
                  ]

Build your instance up again and launch it:

./bin/buildout
./bin/instance fg

After installing the pox.policy product (it's automatically installed during buildout, as explained in Creating a policy product), we should see our content types—Video and XNewsItem—listed within the cached content types. The next screenshot corresponds to the following URL: http://localhost:8080/plone/portal_cache_settings/with-caching-proxy/rules/plone-content-types. The with-caching-proxy part of the URL matches the Cache Policy field, and the plone-content-types part matches the Short Name field. As we added Python code, we must test it.
Create this doctest in the README.txt file in the pox.policy package folder:

Check that our content types are properly configured

    >>> pcs = getToolByName(self.portal, 'portal_cache_settings')
    >>> rules = pcs.getRules()
    >>> rule = getattr(rules, 'plone-content-types')
    >>> 'Video' in rule.getContentTypes()
    True
    >>> 'XNewsItem' in rule.getContentTypes()
    True

Modify the tests module by replacing the ptc.setupPloneSite() line with these ones:

# We first tell Zope there's a CacheSetup product available
ztc.installProduct('CacheSetup')

# And then we install pox.policy product in Plone.
# This should take care of installing CacheSetup in Plone also
ptc.setupPloneSite(products=['pox.policy'])

And then uncomment the ZopeDocFileSuite:

# Integration tests that use PloneTestCase
ztc.ZopeDocFileSuite(
    'README.txt',
    package='pox.policy',
    test_class=TestCase),

Run this test suite with the following command:

./bin/instance test -s pox.policy

How it works...

In the preceding steps, we have created a specific procedure to install and configure other products (CacheFu in our case). This will help us in the final production environment startup, as well as in the installation of other development environments we might need (when a new member joins the development team, for instance). In Step 1 of the How to do it... section, we modified setup.py to download and install a dependency package during the installation process, which is done on instance buildout. Getting dependencies in this way is possible when products are delivered in egg format, thanks to Python egg repositories and distribution services. If you need to get an old-style product, you'll have to add it to the [productdistros] part in buildout.cfg. Products.CacheSetup is the package name for CacheFu and contains these dependencies: CMFSquidTool, PageCacheManager, and PolicyHTTPCacheManager.

There's more...

For more information about CacheFu, visit the project home page at http://plone.org/products/cachefu.
You can also check for its latest version and release notes at the Python Package Index (PyPI, a.k.a. The Cheese Shop): http://pypi.python.org/pypi/Products.CacheSetup. The first link that we recommended in the Introduction is a great help in understanding how CacheFu works: http://projects.zestsoftware.nl/guidelines/guidelines/caching/caching1_background.html.

See also

- Creating a policy product
- Installing and configuring an egg repository

Find and Install Add-Ons that Expand Plone Functionality
Packt
10 Jun 2010
11 min read
Background

It seems like every application platform uses a different name for its add-ons: modules, components, libraries, packages, extensions, plug-ins, and more. Add-on packages for the Zope web application server are generally called Products. A Zope product is a bundle of Zope or Plone functionality contained in a Python module or modules. Like Plone, add-on products are distributed in source code, so that you may always read and examine them. Plone itself is actually a set of tightly connected Zope products and Python modules. Plone add-on products may be divided into three major categories:

- Skins or themes that change Plone's look and feel or add visual elements like portlets. These are typically the simplest of Plone products.
- Products that add new content types with specialized functionality. Some are simple extensions of built-in types; others have custom workflows and behaviours.
- Products that add to or change the behaviour of Plone itself.

Where to Find Products

Plone.org's Products section at http://plone.org/products is the place to look for Plone products. At the time of this writing, Plone.org contains listings for 765 products and 1,901 product releases. The Plone Products section is itself built with a Plone product, the Plone Software Center – often called the PSC – which adds content types for projects, software releases, project roadmaps, issue trackers, and project documentation.

Using the Plone Product Pages

Visiting the Plone product pages for the first time may be a bewildering experience due to the number of available products. However, by specifying a product category and target Plone version, you will quickly narrow the product selection to the point where it's worth reading descriptions and following the links to product pages.
Product pages typically contain product descriptions, software releases, and a list of available documentation, issue tracker, version-control repository, and contact resources. Each release will have release notes, a change log, and a list of Plone versions with which the release has been tested. If the release has a product package, it will be available here for download. Some releases do not have associated software packages. This may be because the release is still in a planning stage, and the listing is mainly meant to document the product's development roadmap; or because the development is still in an early stage, and the software is only available from a version-control repository. The release notes commonly include a list of dependencies, and you should make special note of that along with compatible Plone versions. Many products require the installation of other, supporting products. Some require that your server or test workstation have particular system libraries or utilities. Product pages may also have links to a variety of additional resources: product-specific documentation, other release pages, an issue tracker, a roadmap for future development, a contact form for the project, and a version-control repository.

Playing it Safe with Add-On Products

Plone 3 is probably one of the most rigorously tested open-source software packages in existence. While no software is defect free, Plone's core development team is on the leading edge of software development methodologies and works under a strong testing culture that requires the developers to prove their components work correctly before they ever become part of Plone. Plone's library of add-on products is a very different story. Add-on products are contributed by a diverse community of developers. Some add-on products follow the same development and maintenance methodologies as Plone itself; others are haphazard experiments.
To complicate matters, today's haphazard experiment may be – if it succeeds – next year's rigorously developed and reliable product. (Much of the Plone core codebase began as add-on products.) And this year's reliable standby may lose the devotion of its developers and not be upgraded to work with the next version of Plone. If you're new to the world of open source software, this may seem dismaying. Don't be discouraged. It is not hard to evaluate the status of a product, and the Plone community is happy to help. Be encouraged by evidence of continual, exciting innovation. Most importantly, stop thinking of yourself as a consumer. Take an interest in the community process that produces good products. Test some early releases and file bug reports and feature requests. Participate in, or help document, test, and fund the development of the products that are most important to you.

Product Choice Strategy

Trying out new Plone add-on products is great fun, but incorporating them into production websites requires planning and judgement if you're going to have good long-run results. New versions of Plone pose a particular challenge. Major new releases of Plone don't just add features: with every major version of Plone, the application programming interface (API) and presentation templates change. This is not done arbitrarily, and there is usually a good deal of warning before a major change, but it means that add-on products often need to be updated before they will work with a major new version of Plone. It is worth pointing out that major versions are released roughly every 18 months, and that minor version upgrades generally do not pose compatibility problems for the vast majority of add-on products. This means that when a new version of Plone appears on the scene, you won't be able to migrate your Plone site to use it until compatible product versions are available for all the add-on products in use on the site.
If you're using mainstream, well-supported products, this may happen very quickly. Many products are upgraded to work with new Plone versions during the beta and release-candidate stages of Plone development. Some products take longer, and some may not make the jump at all. The products least likely to be updated are often ones made obsolete by new functionality.

This creates a somewhat ironic situation when a new version of Plone arrives: the quickest adopters are often those with the least history with the platform. The slowest adopters are sometimes the sites that are most heavily invested in the new features. Consider, as a prime example, Plone.org, a very active, very large community site which must be conservatively managed and stick with proven versions of add-on products. Plone.org often does not migrate to a new Plone version until many months after release.

Is this a problem? Not really – unless you need both the newest features of the newest Plone version and the functionality of a more slowly developed add-on product. If that's the case, prepare to make an investment of time or money in supporting product development and possibly writing some custom migration scripts.

If you want to be more conservative, try the following strategy:

Enjoy testing many products and keeping up with new developments by trying them out on a test server.
Learn the built-in Plone functionality well, and use it in preference to add-on products whenever possible.
Make sure you have a good understanding of the maturity level and degree of developer support for add-on products.
Incorporate the smallest number of add-on products reasonably possible into your production sites.
Don't be just a consumer: when you commit to a product, help support it by filing bug reports and feature requests, contributing translations, documentation or code, and answering questions about it on the Plone mailing lists or #plone IRC channel.
Evaluating a Product

Judging the maturity of a Plone product is generally easy. Start with a product's project page on Plone.org. The product page may offer you a "Current release" and one or more "Experimental releases". Anything marked as a current release should be stable on its tested Plone versions. If you need a release to work with an earlier version of Plone than the ones supported by the current release, follow the "List all releases..." link.

Releases in the "Experimental" list will be marked as "alpha", "beta", or "Release Candidate." These terms are well-defined in practice:

Alpha releases are truly experimental, and are usually posted in order to get early feedback. Interfaces and implementations are likely still in flux. Download an alpha release only for testing in an experimental environment, and only for purposes of previewing new features and giving feedback to developers. Do not plan on keeping any content you develop using an alpha release, as there may be no upgrade path to later releases.

With a beta release, feature sets and programming interfaces should be stable or changing only incrementally. It's reasonable to start testing the integration of the product with the platform and with other products. There will typically be an upgrade path to future releases. Bug reports will be welcome and will help develop the product.

Release candidates have a fixed feature set and no known major issues. Templates and messages should be complete, so that translators may work on language files with some confidence that their work won't be lost. If you encounter a bug in a release-candidate product, please immediately file an issue report.

Products may be re-released repeatedly at any release state. For alpha, beta, and RC releases, each additional release changes the release count, but not the version number. So, "PloneFormGen 1.2" (Beta release 6) is the sixth beta release of version 1.2 of PloneFormGen.
Once a product release reaches current release status, new releases for maintenance will increment the version number by 0.0.1. "PloneFormGen 1.1.3" is thus the third maintenance release of version 1.1 of that product.

Don't make too much of version numbers or release counts. Release status is a better indicator of maturity. If your site is mission-critical, don't use beta releases on it. However, if you test carefully before deploying, you may find that some products are ready for live use late in their beta development on sites where an error or glitch wouldn't be intolerable.

Testing a Product

Conscientious Plone site administrators maintain an off-line mirror of their production sites on a secondary server – or even a desktop computer – that they may use for testing purposes. Always test a new product on a test server. Before deploying, test it on a server that has precisely the combination of products in use on your production server. Ideally, test with a copy of the database of your live server. Check the functionality of not only the new product, but also the products you're already using. The latter is particularly important if you're using products that alter the base functionality of Plone or Zope.

Looking to the Future

Evaluating product maturity and testing the product will help you judge its current status, but what about the future? What are the signs of a product that's likely to be well-maintained and available for future versions of Plone? There are no guarantees, but here are some signs that experienced Plone integrators look for:

Developing in public. This is open-source software. Look to see if the product is being developed with a public roadmap for the future, and with a public version-control repository. Plone.org provides product authors with great tools for indicating release plans, and makes a Subversion (SVN) version-control repository available to all product authors. Look to see if they're using these facilities.
Issue tracker status. Every released product should have a public issue (bug) tracker. Look for it. Look to see if it's being maintained, and if issues are actively responded to. No issue tracker, or lots of old, uncategorized issues, are bad signs.

Support for multiple Plone versions. If a product has been around a while, look to see if versions are available for at least a couple of Plone releases. This might be the previous and current releases, or the current and next releases.

Internationalization. Excellent products attract translations.

Good development methodologies. This is the hardest criterion for a non-developer to judge, but a forthcoming version of the Plone Software Center will ask developers to rate themselves on compliance with a set of community standards. My guess is that product developers will be pretty honest about these ratings.

Several of these criteria have something in common: they allow the Plone community to participate in product maintenance and development. The best projects belong to the community, and not any single author.

One of the best ways to get a quick read on the quality of an add-on product is to hop on the #plone IRC channel and ask. Chances are you'll run into someone who can share their experiences and offer insight. You may even run into the product author him/herself!
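The release-maturity ladder described in this article (alpha, beta, release candidate, current release) can be summed up in a small sketch. This is illustrative Python, not part of any Plone tool; the enum and function names are my own.

```python
from enum import IntEnum

# The maturity ladder from the article, ordered least to most stable.
class Maturity(IntEnum):
    ALPHA = 1
    BETA = 2
    RELEASE_CANDIDATE = 3
    CURRENT = 4

def safe_for_production(maturity: Maturity, mission_critical: bool) -> bool:
    """Rule of thumb from the article: mission-critical sites should run
    only current releases; other sites may accept a release candidate
    (or a well-tested late beta) where a glitch would be tolerable."""
    if mission_critical:
        return maturity == Maturity.CURRENT
    return maturity >= Maturity.RELEASE_CANDIDATE

print(safe_for_production(Maturity.BETA, mission_critical=True))  # False
```

The point of the ordering is simply that release status, not the version number, is what tells you how much trust to place in a release.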
Packt
10 Jun 2010
9 min read

Microsoft Dynamics NAV 2009: Using the journals and entries in a custom application

Designing a journal

Now it is time to start on the product part of the Squash Application. In this part we will no longer reverse engineer in detail. We will learn how to search the standard functionality and reuse parts of it in our own software. For this part we will look at resources in Microsoft Dynamics NAV. Resources are similar to products (Items), but far less complex, which makes them easier to study and learn from.

Squash Court master data

Our company has 12 courts that we want to register in Microsoft Dynamics NAV. This master data is comparable to resources, so we'll go ahead and copy that functionality. Resources are not attached to umbrella data like the vendor/squash player tables. We need the number series again, so we'll add a new number series to our squash setup table. The Squash Court table should look like this after creation:

Chapter objects

With this chapter some objects are required. A description of how to import these objects can be found in the Appendix. After the import process is completed, make sure that your current database is the default database for the RoleTailored client and run Page 123456701, Squash Setup. From this page, select the Action Initialise Squash Application. This will execute the C/AL code in the InitSquashApp function of this page, which will prepare demo data for us to play with. The objects are prepared and tested in a Microsoft Dynamics NAV 2009 SP1 W1 database.

Reservations

When running a squash court business, we want to be able to keep track of reservations. Looking at standard Dynamics NAV functionality, it might be a good idea to create a Squash Player Journal. The journal can create entries for reservations that can then be invoiced. A journal needs a supporting object structure, which is prepared in the objects delivered with this article. Creating a new journal from scratch is a lot of work and can easily lead to mistakes.
It is easier and safer to copy an existing journal structure from the standard application that is similar to the journal we need for our design. In our example we have copied the Resource Journals. You can export these objects to text format, and then rename and renumber them for reuse. The squash journal objects are renumbered and renamed from the resource journal.

All journals have the same structure. The template, batch, and register tables are almost always the same, whereas the journal line and ledger entry tables contain function-specific fields. Let's have a look at all of them one by one.

Journal Template

The Journal Template has several fields, as shown in the following screenshot. Let's discuss these fields in more detail:

Name: This is the unique name. It is possible to define as many templates as required, but usually one template per Form ID and one for recurring will do. If you want journals with different source codes, you need more templates.
Description: A readable and understandable description of its purpose.
Test Report ID: All templates have a test report that allows the user to check for posting errors.
Form ID: For some journals, more UI objects are required. For example, the General Journals have a special form for bank and cash.
Posting Report ID: This report is printed when a user selects Post and Print.
Force Posting Report: Use this option when a posting report is mandatory.
Source Code: Here you can enter a trail code for all the postings done via this journal.
Reason Code: This functionality is similar to source codes.
Recurring: Whenever you post lines from a recurring journal, new lines are automatically created with a posting date defined in the recurring date formula.
No. Series: When you use this feature, the Document No. in the journal line is automatically populated with a new number from this number series.
Posting No. Series: Use this feature for recurring journals.
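The recurring behavior described above (new lines created with a posting date derived from the recurring date formula) can be illustrated with a small sketch. This is illustrative Python, not C/AL, and it handles only a simplified subset of NAV's date-formula notation.

```python
from datetime import date, timedelta

# Illustrative sketch only: how a recurring journal could derive the next
# posting date from a simplified date formula. Real NAV date formulas
# support more notations (for example CM for "current month").
def next_posting_date(posting_date: date, formula: str) -> date:
    """Support '<n>D' (days), '<n>W' (weeks), and '<n>M' (months) only."""
    amount, unit = int(formula[:-1]), formula[-1]
    if unit == "D":
        return posting_date + timedelta(days=amount)
    if unit == "W":
        return posting_date + timedelta(weeks=amount)
    if unit == "M":
        # naive month arithmetic, good enough for a sketch
        month = posting_date.month - 1 + amount
        return posting_date.replace(year=posting_date.year + month // 12,
                                    month=month % 12 + 1)
    raise ValueError("unsupported date formula")

print(next_posting_date(date(2010, 6, 10), "1M"))  # 2010-07-10
```

After posting, each recurring line would get its posting date advanced by this amount, so the journal is ready for the next period.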
Journal Batch

The Journal Batch has various fields, as shown in the following screenshot. Let's discuss these fields in more detail:

Journal Template Name: The name of the journal template this batch refers to.
Name: Each batch should have a unique code.
Description: A readable description explaining the purpose of this batch.
Reason Code: When populated, this reason code will overrule the reason code from the journal template.
No. Series: When populated, this number series will overrule the number series from the journal template.
Posting No. Series: When populated, this posting number series will overrule the posting number series from the journal template.

Register

The Register table has various fields, as shown in the following screenshot. Let's discuss these fields in more detail:

No.: This field is automatically and incrementally populated for each transaction with this journal. There are no gaps between the numbers.
From Entry No.: A reference to the first ledger entry created with this transaction.
To Entry No.: A reference to the last ledger entry created with this transaction.
Creation Date: Always populated with the real date when the transaction was posted.
User ID: The ID of the end user who posted the transaction.

The Journal

The journal line has a number of mandatory fields that are required for all journals, and some fields that are required for its designed functionality. In our case the journal should create a reservation which can then be invoiced. This requires some information to be populated in the lines.

Reservation

The reservation process is a logistical process that requires us to know the number of the squash court, and the date and the time of the reservation. We also need to know how long the players want to play. To check the reservation, it might also be useful to store the number of the squash player.

Invoicing

For the invoicing part we need to know the price we need to invoice. It might also be useful to store the cost, to see our profit.
For the system to figure out the proper G/L account for the turnover, we also need to define a General Product Posting Group.

Journal Template Name: This is a reference to the current journal template.
Line No.: Each journal has a virtually unlimited number of lines; this number is automatically incremented by 10000, allowing lines to be created in between.
Entry Type: Reservation or invoice.
Document No.: This number can be given to the squash player as a reservation number. When the entry type is invoice, it is the invoice number.
Posting Date: The posting date is usually the reservation date, but when the entry type is invoice it might be the date of the invoice, which might differ from the posting date in the general ledger.
Squash Player No.: A reference to the squash player who has made the reservation.
Squash Court No.: A reference to the squash court.
Description: This is automatically updated with the number of the squash court, reservation date, and times, but can be changed by the user.
Reservation Date: The actual date of the reservation.
From Time: The starting time of the reservation. We allow only whole or half hours.
To Time: The ending time of the reservation. We allow only whole or half hours. This is automatically populated when people enter a quantity.
Quantity: The number of hours of playing time. We only allow units of 0.5 to be entered here. This is automatically calculated when the times are populated.
Unit Cost: The cost to run a squash court for one hour.
Total Cost: The cost for this reservation.
Unit Price: The invoice price for this reservation per hour. This depends on whether or not the squash player is a member.
Total Price: The total invoice price for this reservation.
Shortcut Dimension Code 1 & 2: A reference to the dimensions used for this transaction.
Applies-to Entry No.: When a reservation is invoiced, this is the reference to the squash entry no. of the reservation.
Source Code: Inherited from the journal batch or template, and used when posting the transaction.
Chargeable: When this option is not selected, there will not be an invoice for the reservation.
Journal Batch Name: A reference to the journal batch that is used for this transaction.
Reason Code: Inherited from the journal batch or template, and used when posting the transaction.
Recurring Method: When the journal is a recurring journal, you can use this field to determine whether the amount field is blanked after posting the lines.
Recurring Frequency: This field determines the new posting date after the recurring lines are posted.
Gen. Bus. Posting Group: The combination of general business and product posting groups determines the G/L account for turnover when we invoice the reservation. The Gen. Bus. Posting Group is inherited from the bill-to customer.
Gen. Prod. Posting Group: This will be inherited from the squash player.
External Document No.: When a squash player wants us to note a reference number, we can store it here.
Posting No. Series: When the journal template has a posting no. series, it is populated here to be used when posting.
Bill-to Customer No.: This determines who is paying for the reservation. We will inherit this from the squash player.

So now we have a place to enter reservations, but we have something to do before we can start doing this. Some fields were determined to be inherited and calculated:

The time fields need validation to avoid people entering wrong values.
The Unit Price should be calculated.
The Unit Cost, posting groups, and Bill-to Customer No. need to be inherited.

As a final cherry on top, we will look at implementing dimensions.
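The time and quantity rules described for the journal line (whole or half hours only, quantities in units of 0.5, and To Time derived from From Time plus Quantity) could be validated along these lines. This is an illustrative Python sketch, not the C/AL we would write in NAV, and the function name is my own.

```python
from datetime import datetime, timedelta

def validate_reservation(from_time: str, quantity: float) -> str:
    """Check the journal-line rules from the article and return To Time:
    reservations start on whole or half hours, quantities come in units
    of 0.5 hours, and To Time = From Time + Quantity."""
    start = datetime.strptime(from_time, "%H:%M")
    if start.minute not in (0, 30):
        raise ValueError("Reservations must start on the hour or half hour")
    if quantity <= 0 or quantity * 2 != int(quantity * 2):
        raise ValueError("Quantity must be entered in units of 0.5 hours")
    to_time = start + timedelta(hours=quantity)
    return to_time.strftime("%H:%M")

print(validate_reservation("19:00", 1.5))  # 20:30
```

In the real application these checks would live in the journal line's field validation triggers, so the user gets the error at data-entry time.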
Packt
09 Jun 2010
18 min read

Implementing a WCF Service in the Real World

WCF is the acronym for Windows Communication Foundation. It is Microsoft's latest technology that enables applications in a distributed environment to communicate with each other. In this article by Mike Liu, author of WCF 4.0 Multi-tier Services Development with LINQ to Entities, we will create and test a WCF service by following these steps:

Create the project using a WCF Service Library template
Create the project using a WCF Service Application template
Create the service operation contracts
Create the data contracts
Add a Product Entity project
Add a business logic layer project
Call the business logic layer from the service interface layer
Test the service

Along the way, we will learn how to separate the service interface layer from the business logic layer.

Why layer a service?

An important aspect of SOA design is that service boundaries should be explicit, which means hiding all the details of the implementation behind the service boundary. This includes not revealing or dictating what particular technology was used. Furthermore, inside the implementation of a service, the code responsible for the data manipulation should be separated from the code responsible for the business logic.

So in the real world, it is always good practice to implement a WCF service in three or more layers: the service interface layer, the business logic layer, and the data access layer.

Service interface layer: This layer will include the service contracts and operation contracts that are used to define the service interfaces that will be exposed at the service boundary. Data contracts are also defined to pass in and out of the service. If any exception is expected to be thrown outside of the service, then fault contracts will also be defined at this layer.
Business logic layer: This layer will apply the actual business logic to the service operations.
It will check the preconditions of each operation, perform business activities, and return any necessary results to the caller of the service.
Data access layer: This layer will take care of all of the tasks needed to access the underlying databases. It will use a specific data adapter to query and update the databases. This layer will handle connections to databases, transaction processing, and concurrency control. Neither the service interface layer nor the business logic layer needs to worry about these things.

Layering provides separation of concerns and better factoring of code, which gives you better maintainability and the ability to split out layers into separate physical tiers for scalability. The data access code should be separated into its own layer that focuses on performing translation services between the databases and the application domain. Services should be placed in a separate service layer that focuses on performing translation services between the service-oriented external world and the application domain.

The service interface layer will be compiled into a separate class assembly and hosted in a service host environment. The outside world will only know about and have access to this layer. Whenever a request is received by the service interface layer, the request will be dispatched to the business logic layer, and the business logic layer will get the actual work done. If any database support is needed by the business logic layer, it will always go through the data access layer.

Creating a new solution and project using WCF templates

We need to create a new solution for this example and add a new WCF project to it. This time we will use the built-in Visual Studio WCF templates for the new project.

Using the C# WCF service library template

There are a few built-in WCF service templates within Visual Studio 2010; two of them are the WCF Service Library template and the WCF Service Application template.
In this article, we will use the service library template. Follow these steps to create the RealNorthwind solution and the project using the service library template:

Start Visual Studio 2010, select the menu option File | New | Project…, and you will see the New Project dialog box. From this point onwards, we will create a completely new solution and save it in a different location.
In the New Project window, specify Visual C# | WCF | WCF Service Library as the project template, RealNorthwindService as the (project) name, and RealNorthwind as the solution name. Make sure that the checkbox Create directory for solution is selected.
Click on the OK button, and the solution is created with a WCF project inside it. The project already has an IService1.cs file to define a service interface and Service1.cs to implement the service. It also has an app.config file, which we will cover shortly.

Using the C# WCF service application template

Instead of using the Visual Studio WCF Service Library template to create our new WCF project, we can use the Visual Studio WCF Service Application template. Because we have already created the solution, we will add a new project using the WCF Service Application template:

Right-click on the solution item in Solution Explorer, select the menu option Add | New Project… from the context menu, and you will see the Add New Project dialog box.
In the Add New Project window, specify Visual C# | WCF Service Application as the project template, RealNorthwindService2 as the (project) name, and leave the default location of C:\SOAWithWCFandLINQ\Projects\RealNorthwind unchanged.
Click on the OK button and the new project will be added to the solution. The project already has an IService1.cs file to define a service interface, and Service1.svc.cs to implement the service. It also has a Service1.svc file and a web.config file, which are used to host the new WCF service.
It has also had the necessary references added to the project, such as System.ServiceModel. You can follow these steps to test this service:

Change this new project, RealNorthwindService2, to be the startup project (right-click on it in Solution Explorer and select Set as Startup Project). Then run it (Ctrl + F5 or F5).
You will see that ASP.NET Development Server has been started, and a browser is open listing all of the files under the RealNorthwindService2 project folder. Clicking on the Service1.svc file will open the metadata page of the WCF service in this project.

If you pressed F5 in the previous step to run this project, you might see a warning message box asking you if you want to enable debugging for the WCF service. As we said earlier, you can choose to enable debugging or just run in non-debugging mode.

You may also have noticed that the WCF Service Host is started together with ASP.NET Development Server. This is actually another way of hosting a WCF service in Visual Studio 2010. It has been started at this point because, within the same solution, there is a WCF service project (RealNorthwindService) created using the WCF Service Library template.

So far we have used two different Visual Studio WCF templates to create two projects. The first project, using the C# WCF Service Library template, is a more sophisticated one because this project is actually an application containing a WCF service, a hosting application (WcfSvcHost), and a WCF Test Client. This means that we don't need to write any other code to host it, and as soon as we have implemented a service, we can use the built-in WCF Test Client to invoke it. This makes it very convenient for WCF development.

The second project, using the C# WCF Service Application template, is actually a website. This is the hosting application of the WCF service, so you don't have to create a separate hosting application for the WCF service.
As we have already covered both templates and you now have a solid understanding of these styles, we will not discuss them further. But keep in mind that you have this option, although in most cases it is better to keep the WCF service as clean as possible, without any hosting functionality attached to it.

To focus on the WCF service using the WCF Service Library template, we now need to remove the project RealNorthwindService2 from the solution. In Solution Explorer, right-click on the RealNorthwindService2 project item and select Remove from the context menu. Then you will see a warning message box. Click on the OK button in this message box and the RealNorthwindService2 project will be removed from the solution. Note that all the files of this project are still on your hard drive. You will need to delete them using Windows Explorer.

Creating the service interface layer

In this section, we will create the service interface layer contracts. Because two sample files have already been created for us, we will try to reuse them as much as possible, customizing them to create the service contracts.

Creating the service interfaces

To create the service interfaces, we need to open the IService1.cs file and do the following:

Change its namespace from RealNorthwindService to MyWCFServices.RealNorthwindService.
Change the interface name from IService1 to IProductService. Don't be worried if you see the warning message before the interface definition line, as we will change the config file in one of the following steps.
Change the first operation contract definition from this line:
string GetData(int value);
to this line:
Product GetProduct(int id);
Change the second operation contract definition from this line:
CompositeType GetDataUsingDataContract(CompositeType composite);
to this line:
bool UpdateProduct(Product product);
Change the filename from IService1.cs to IProductService.cs.

With these changes, we have defined two service contracts.
The first one can be used to get the product details for a specific product ID, while the second one can be used to update a specific product. The product type, which we used to define these service contracts, is still not defined. The content of the service interface for RealNorthwindService.ProductService should look like this now:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Text;

namespace MyWCFServices.RealNorthwindService
{
    [ServiceContract]
    public interface IProductService
    {
        [OperationContract]
        Product GetProduct(int id);

        [OperationContract]
        bool UpdateProduct(Product product);

        // TODO: Add your service operations here
    }
}

This is not the whole content of the IProductService.cs file. The bottom part of this file should still have the class, CompositeType.

Creating the data contracts

Another important aspect of SOA design is that you shouldn't assume that the consuming application supports a complex object model. One part of the service boundary definition is the data contract definition for the complex types that will be passed as operation parameters or return values. For maximum interoperability and alignment with SOA principles, you should not pass any .NET-specific types, such as DataSet or Exception, across the service boundary. You should stick to fairly simple data structure objects, such as classes with properties and backing member fields. You can pass objects that have nested complex types, such as 'Customer with an Order collection'. However, you shouldn't make any assumption about the consumer being able to support object-oriented constructs such as inheritance or base classes for interoperable web services.

In our example, we will create a complex data type to represent a product object. This data contract will have five properties: ProductID, ProductName, QuantityPerUnit, UnitPrice, and Discontinued. These will be used to communicate with client applications.
For example, a supplier may call the web service to update the price of a particular product or to mark a product for discontinuation. It is preferable to put data contracts in separate files within a separate assembly but, to simplify our example, we will put the data contract in the same file as the service contract. We will modify the file, IProductService.cs, as follows:

Change the DataContract name from CompositeType to Product.
Change the fields from the following lines:
bool boolValue = true;
string stringValue = "Hello ";
to these five lines:
int productID;
string productName;
string quantityPerUnit;
decimal unitPrice;
bool discontinued;
Delete the old BoolValue and StringValue DataMember properties. Then, for each of the above fields, add a DataMember property. For example, for productID, we will have this DataMember property:
[DataMember]
public int ProductID
{
    get { return productID; }
    set { productID = value; }
}
A better way is to take advantage of the automatic property feature of C#, and add the following ProductID DataMember without defining the productID field:
[DataMember]
public int ProductID { get; set; }
To save some space, we will use the latter format. So, we need to delete all of those field definitions and add an automatic property for each field, with the first letter capitalized.

The data contract part of the finished service contract file, IProductService.cs, should now look like this:

[DataContract]
public class Product
{
    [DataMember]
    public int ProductID { get; set; }
    [DataMember]
    public string ProductName { get; set; }
    [DataMember]
    public string QuantityPerUnit { get; set; }
    [DataMember]
    public decimal UnitPrice { get; set; }
    [DataMember]
    public bool Discontinued { get; set; }
}

Implementing the service contracts

To implement the two service interfaces that we defined, open the Service1.cs file and do the following:

Change its namespace from RealNorthwindService to MyWCFServices.RealNorthwindService.
Change the class name from Service1 to ProductService. Make it inherit from the IProductService interface, instead of IService1. The class definition line should be like this:
public class ProductService : IProductService
Delete the GetData and GetDataUsingDataContract methods.
Add the following method, to get a product:

public Product GetProduct(int id)
{
    // TODO: call business logic layer to retrieve product
    Product product = new Product();
    product.ProductID = id;
    product.ProductName = "fake product name from service layer";
    product.UnitPrice = (decimal)10.0;
    return product;
}

In this method, we created a fake product and returned it to the client. Later, we will remove the hard-coded product from this method and call the business logic to get the real product.
Add the following method to update a product:

public bool UpdateProduct(Product product)
{
    // TODO: call business logic layer to update product
    if (product.UnitPrice <= 0)
        return false;
    else
        return true;
}

In this method, we don't actually update anything. Instead, we always return true if a valid price is passed in.
Change the filename from Service1.cs to ProductService.cs.
The content of the ProductService.cs file should be like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Text;

namespace MyWCFServices.RealNorthwindService
{
    public class ProductService : IProductService
    {
        public Product GetProduct(int id)
        {
            // TODO: call business logic layer to retrieve product
            Product product = new Product();
            product.ProductID = id;
            product.ProductName = "fake product name from service layer";
            product.UnitPrice = (decimal)10;
            return product;
        }

        public bool UpdateProduct(Product product)
        {
            // TODO: call business logic layer to update product
            if (product.UnitPrice <= 0)
                return false;
            else
                return true;
        }
    }
}

Modifying the app.config file

Because we have changed the service name, we have to make the appropriate changes to the configuration file. Note that when you rename the service, if you have used the refactor feature of Visual Studio, some of the following tasks may have been done by Visual Studio. Follow these steps to change the configuration file:

Open the app.config file from Solution Explorer.

Change all instances of the RealNorthwindService string, except the one in baseAddress, to MyWCFServices.RealNorthwindService. This is for the namespace change.

Change the RealNorthwindService string in baseAddress to MyWCFServices/RealNorthwindService.

Change all instances of the Service1 string to ProductService. This is for the actual service name change.

Change the service address port from 8731 to 8080. This is to prepare for the client application, which we will create soon.

You can also change Design_Time_Addresses to whatever address you want, or delete the baseAddress part from the service. This can be used to test your service locally. We will leave it unchanged for our example.
The content of the app.config file should now look like this:

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.web>
    <compilation debug="true" />
  </system.web>
  <!-- When deploying the service library project, the content of the
       config file must be added to the host's app.config file.
       System.Configuration does not support config files for libraries. -->
  <system.serviceModel>
    <services>
      <service name="MyWCFServices.RealNorthwindService.ProductService">
        <endpoint address="" binding="wsHttpBinding"
                  contract="MyWCFServices.RealNorthwindService.IProductService">
          <identity>
            <dns value="localhost" />
          </identity>
        </endpoint>
        <endpoint address="mex" binding="mexHttpBinding"
                  contract="IMetadataExchange" />
        <host>
          <baseAddresses>
            <add baseAddress="http://localhost:8080/Design_Time_Addresses/MyWCFServices/RealNorthwindService/ProductService/" />
          </baseAddresses>
        </host>
      </service>
    </services>
    <behaviors>
      <serviceBehaviors>
        <behavior>
          <!-- To avoid disclosing metadata information, set the value
               below to false and remove the metadata endpoint above
               before deployment -->
          <serviceMetadata httpGetEnabled="True" />
          <!-- To receive exception details in faults for debugging
               purposes, set the value below to true. Set to false before
               deployment to avoid disclosing exception information -->
          <serviceDebug includeExceptionDetailInFaults="False" />
        </behavior>
      </serviceBehaviors>
    </behaviors>
  </system.serviceModel>
</configuration>

Testing the service using WCF Test Client

Because we are using the WCF Service Library template in this example, we are now ready to test this web service. As we pointed out when creating this project, this service will be hosted in the Visual Studio 2010 WCF Service Host environment. To start the service, press F5 or Ctrl + F5. WcfSvcHost will be started and WCF Test Client is also started. This is a Visual Studio 2010 built-in test client for WCF Service Library projects.
In order to run the WCF Test Client, you have to log into your machine as a local administrator. You also have to start Visual Studio as an administrator, because we have changed the service port from 8731 to 8080 (port 8731 is pre-registered but 8080 is not). Again, if you get an Access is denied error, make sure you run Visual Studio as an administrator (under Windows XP you need to log on as an administrator). Now from this WCF Test Client we can double-click on an operation to test it. First, let us test the GetProduct operation. The message Invoking Service… will be displayed in the status bar as the client is trying to connect to the server. It may take a while for this initial connection to be made as several things need to be done in the background. Once the connection has been established, a channel will be created and the client will call the service to perform the requested operation. Once the operation has been completed on the server side, the response package will be sent back to the client, and the WCF Test Client will display this response in the bottom panel. If you started the test client in debugging mode (by pressing F5), you can set a breakpoint at a line inside the GetProduct method in the ProductService.cs file, and when the Invoke button is clicked, the breakpoint will be hit so that you can debug the service as we explained earlier. However, here you don't need to attach to the WCF Service Host. Note that the response is always the same, no matter what product ID you use to retrieve the product. Specifically, the product name is hard-coded, as shown in the diagram. Moreover, from the client response panel, we can see that several properties of the Product object have been assigned default values. Also, because the product ID is an integer value, in the WCF Test Client you can only enter an integer for it.
If a non-integer value is entered, when you click on the Invoke button, you will get an error message box to warn you that you have entered a value with the wrong type. Now let's test the operation, UpdateProduct. The Request/Response packages are displayed in grids by default but you have the option of displaying them in XML format. Just select the XML tab at the bottom of the right-side panel, and you will see the XML-formatted Request/Response packages. From these XML strings, you can see that they are SOAP messages. Besides testing operations, you can also look at the configuration settings of the web service. Just double-click on Config File from the left-side panel and the configuration file will be displayed in the right-side panel. This will show you the bindings for the service, the addresses of the service, and the contract for the service. What you see here for the configuration file is not an exact image of the actual configuration file. It hides some information such as debugging mode and service behavior, and includes some additional information on reliable sessions and compression mode. If you are satisfied with the test results, just close the WCF Test Client, and you will go back to Visual Studio IDE. Note that as soon as you close the client, the WCF Service Host is stopped. This is different from hosting a service inside ASP.NET Development Server, where ASP.NET Development Server still stays active even after you close the client.
Packt
04 Jun 2010
11 min read

Objects and Types in Documentum 6.5 Content Management Foundations - A Sequel

Content persistence

We have seen so far how metadata is persisted but it is not obvious how content is persisted and associated with its metadata. All sysobjects (objects of type dm_sysobject and its subtypes) other than folders (objects of type dm_folder and its subtypes) can have associated content. We saw that a document can have content in the form of renditions as well as in primary format. How are these content files associated with a sysobject? In other words, how does Content Server know what metadata is associated with a content file? How does it know that one content file is a rendition of another one? Content Server manages content files using content objects, which (indirectly) point to the physical locations of content files and associate them with sysobjects.

Locating content files

Recall that Documentum repositories can store content in various types of storage systems including a file system, a Relational Database Management System (RDBMS), a content-addressed storage (CAS), or external storage devices. Content Server decides to store each file in a location based on the configuration and the presence of products like Content Storage Services. In general, users are not concerned about where the file is stored since Content Server is able to retrieve the file from the location where it was stored. We will discuss the physical location of a content file without worrying about why Content Server chose to use that location.

Content object

Every content file in the repository has an associated content object, which stores information about the location of the file and identifies the sysobjects associated with it. These sysobjects are referred to as the parent objects of the content object.
A content object is an object of type dmr_content, whose key attributes are listed as follows:

parent_count: Number of parent objects.
parent_id: List of object IDs of the parent objects.
storage_id: Object ID of the store object representing the storage area holding the content.
data_ticket: A value used internally to retrieve the content. The value and its usage depend upon the type of storage used.
i_contents: When the content is stored in turbo storage, this property contains the actual content. If the content is larger than the size of this property (2000 characters for databases other than Sybase, 255 for Sybase), the content is stored in a dmi_subcontent object and this property is unused. If the content is stored in content-addressed storage, it contains the content address. If the content is stored in external storage, it contains the token used to retrieve the content.
rendition: Identifies whether the content is a rendition and its related behavior: 0 means original content; 1 means a rendition generated by the server; 2 means a rendition generated by a client; 3 means a rendition not to be removed when its primary content is updated or removed.
format: Object ID of the format object representing the format of the content.
full_content_size: Content file size in bytes, except when the content is stored in external storage.

Object-content relationship

Content Server manages content objects while performing content-related operations. Content associated with a sysobject is categorized as primary content or a rendition. A rendition is a content file associated with a sysobject that is not its primary content. Content in the first content file added to a sysobject is called its primary content and its format is referred to as the primary format for the parent object. Any other content added to the parent object in the same format is also called primary content, though it is rarely done by users manually.
This ability to add multiple primary content files is typically utilized programmatically by applications for their internal use. While a sysobject can have multiple primary content files, it is also possible for one content object to have multiple parent objects. This just means that a content file can be shared by multiple objects.

Putting it together

The details about content persistence can become confusing due to the number of objects involved and the relationships among various attributes. It becomes even more complicated when the full Content Server capabilities (such as multiple content files for one sysobject) are manifested. We will look at a simple scenario to visually grasp how content persistence works in common situations. Documentum provides multiple options for locating the content file. DFC provides the getPath() method and DQL provides the get_file_url administration method for this purpose. This section has been included to satisfy the reader's curiosity about content persistence and works through the information manually. This discussion can be treated as supplementary to technical fundamentals.

In our scenario, the sysobject is named paystub.jpg. The primary content file is in jpg format and the rendition is in pdf format, as shown in the following figure. The next figure shows the objects involved in the content persistence for this document. The central object is of type dm_document. The figure also includes two content objects and one format object. Let's try to understand the relationships by asking specific questions.

How many content files, primary or renditions, are there for the document paystub.jpg? This question can be answered by looking for the corresponding content objects. We look for dmr_content objects that have the document's object ID in one of their parent_id values. This figure shows that there are two such content objects.

Which of these content objects represents the primary content and which one is a rendition?
This can be determined by looking at the rendition attribute. The content object on the left shows rendition=0, which indicates primary content. The content object on the right shows rendition=2, which indicates a rendition generated by a client (recall that we manually imported this rendition).

What is the primary format for this document? This is easy to answer by looking at the a_content_type attribute on the document itself. If we need to know the format for a content object, we can look for the dm_format object which has the same object ID as the value present in the format property of the content object. In the figure above, the format object for the primary content object is shown, which represents a JPEG image. Thus, the format determined for the primary content of the object is expected to match the value of the a_content_type property of the object. The format object for the rendition is not shown but it would be PDF.

What is the exact physical location of the primary content file? As mentioned in the beginning of this section, there are DFC and DQL methods which can provide this information. For understanding content persistence, we will deduce this manually for a file store, which represents storage on a file system. For other types of storage, an exact location might not be evident since we need to rely on the storage interface to access the content file. Deducing the exact file path requires the ability to convert a decimal number to a hexadecimal (hex) number; this can be done with pen and paper or using one of the free tools available on the Web. Also remember that negative numbers are represented with what is known as a 2's-complement notation and many of these tools either don't handle 2's complement or don't support enough digits for our purposes. There are two parts of the file path: the root path for the file store and the path of the file relative to this root path. In order to figure out the root path, we identify the file store first.
Find the dm_filestore object whose object ID is the same as the value in the storage_id property of the content object. Then find the dm_location object whose object name is the same as the root property on the file store object. The file_system_path property on this location object has the root path for the file store, which is C:\Documentum\data\localdev\content_storage_01 in the figure above.

In order to find the relative path of the content file, we look at data_ticket (data type integer) on the content object. Find the 8-digit hex representation for this number. Treat the hex number as a string and split the string with path separators (slashes, / or \, depending on the operating system) after every two characters. Suffix the right-most two characters with the file extension (.jpg), which can be inferred from the format associated with the content object. Prefix the path with an 8-digit hex representation of the repository ID. This gives us the relative path of the content file, which is 00000010\80\09\be.jpg in the figure above. Prefix this path with the file store root path identified earlier to get the full path of the content file.

Content persistence in Documentum appears to be complicated at first sight. There are a number of separate objects involved here and that is somewhat similar to having several tables in a relational database when we normalize the schema. At a high level, this complexity in the content persistence model serves to provide scalability, flexibility by supporting multiple kinds of content stores, and ease of managing changes in such an environment.

Lightweight and shareable object types

So far we have primarily dealt with standard types. Lightweight and shareable object types work together to provide performance improvements, which are significant when a large number of lightweight objects share information.
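Before moving on to lightweight types, the manual path deduction walked through above can be sketched in code. The following Python helper is purely illustrative (it is not part of DFC or any Documentum API, and a real application should use getPath() or get_file_url instead); the repository ID and data_ticket values in the usage note are invented:

```python
def content_file_path(root_path, repo_id, data_ticket, extension):
    """Deduce a file-store content path from data_ticket, as described above."""
    # 8-digit hex of the signed 32-bit data_ticket; masking with 0xFFFFFFFF
    # handles the two's-complement negative values data_ticket often holds
    ticket_hex = format(data_ticket & 0xFFFFFFFF, "08x")
    # split into two-character components; the last pair becomes the filename
    pairs = [ticket_hex[i:i + 2] for i in range(0, 8, 2)]
    # prefix with the 8-digit hex repository ID, suffix with the extension
    relative = "/".join([format(repo_id, "08x")] + pairs[:-1] + [pairs[-1] + extension])
    return root_path.rstrip("/") + "/" + relative
```

For example, content_file_path("/filestore_root", 16, 0x12345678, ".jpg") yields /filestore_root/00000010/12/34/56/78.jpg; on Windows the separators would be backslashes instead.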
The key performance benefits are in terms of savings in storage and in the time it takes to import a large number of documents that share metadata. These types are suitable for use in transactional and archival applications but are not recommended for traditional content management. The term transactional content (as in business transactions) was coined by Forrester Research to describe content typically originating from external parties, such as customers and partners, and driving transactional back-office business processes. Transactional Content Management (TCM) unifies process, content, and compliance to support solutions involving transactional content. Our example scenario of mortgage loan approval process management is a perfect example of TCM. It involves numerous types of documents, several external parties, and sub-processes implementing parts of the overall process. Lightweight and shareable types play a central role in the High Volume Server, which enhances the performance of Content Server for TCM.

A lightweight object type (also known as LwSO, for Lightweight SysObject) is a subtype of a shareable type. When a lightweight object is created, it references an object of its shareable supertype called the parent object of the lightweight object. Conversely, the lightweight object is called the child object of the shareable object. Additional lightweight objects of the same type can share the same parent object. These lightweight objects share the information present in the common parent object rather than each carrying a copy of that information. In order to make the best use of lightweight objects we need to address a couple of questions.

When should we use lightweight objects? Lightweight objects are useful when there are a large number of attribute values that are identical for a group of objects. This redundant information can be pushed into one parent object and shared by the lightweight objects.
What kind of information is suitable for sharing in the parent object? System-managed metadata, such as policies for security, retention, storage, and so on, are usually applied to a group of objects based on certain criteria. For example, all the documents in one loan application packet could use a single ACL and retention information, which could be placed into the shareable parent object. The specific information about each document would reside in a separate lightweight object. Lightweight object persistence Persistence for lightweight objects works much the same way it works for objects of standard types, with one exception. A lightweight object is a subtype of a shareable type and these types have their separate tables as usual. For a standard type, each object has separate records in all of these tables, with each record identified by the object ID of the object. However, when multiple lightweight objects share one parent object there is only one object ID (of the parent object) in the tables of the shareable type. The lightweight objects need to refer to the object ID of the parent object, which is different from the object ID of any of the lightweight objects, in order to access the shared properties. This reference is made via an attribute named i_sharing_parent, as shown in the last figure.
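As a toy illustration of this sharing, the Python sketch below models one shareable parent and two lightweight children that resolve shared metadata through i_sharing_parent. All object IDs and attribute values here are invented for the example; real lightweight objects are managed by Content Server, not by application dictionaries:

```python
# one shareable parent object holding the common, system-managed metadata
shared_parents = {
    "0b0000018000a001": {"acl_name": "loan_packet_acl", "retention": "7 years"},
}

# lightweight children store only their own data plus a parent reference
lightweight_objects = [
    {"r_object_id": "090000018000b001", "object_name": "paystub.jpg",
     "i_sharing_parent": "0b0000018000a001"},
    {"r_object_id": "090000018000b002", "object_name": "W2.pdf",
     "i_sharing_parent": "0b0000018000a001"},
]

def shared_properties(lightweight):
    # shared attributes are resolved through the parent, not copied per child
    return shared_parents[lightweight["i_sharing_parent"]]
```

Both children resolve to the same ACL and retention values, and that record exists only once, which is where the storage savings for large document volumes come from.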
Packt
04 Jun 2010
11 min read

Objects and Types in Documentum 6.5 Content Management Foundations

Objects

Documentum uses an object-oriented model to store information within the repository. Everything stored in the repository participates in this object model in some way. For example, a user, a document, and a folder are all represented as objects. An object stores data in its properties (also known as attributes) and has methods that can be used to interact with the object.

Properties

A content item stored in the repository has an associated object to store its metadata. Since metadata is stored in object properties, the terms metadata and properties are used interchangeably. For example, a document stored in the repository may have its title, subject, and keywords stored in the associated object. However, note that objects can exist in the repository without an associated content item. Such objects are sometimes referred to as contentless objects. For example, user objects and permission set objects do not have any associated content.

Each object property has a data type, which can be one of boolean, integer, string, double, time, or ID. A boolean value is true or false. An integer value is a whole number. A string value consists of text. A double value is a floating point number. A time value represents a timestamp, including dates. An ID value represents an object ID that uniquely identifies an object in the repository. Object IDs are discussed in detail later in this article.

A property can be single-valued or repeating. Each single-valued property holds one value. For example, the object_name property of a document contains one value and it is of type string. This means that the document can only have one name. On the other hand, keywords is a repeating property and can have multiple string values. For example, a document may have object_name='LoanApp_1234567891.txt' and keywords='John Doe','application','1234567891'. The following figure shows a visual representation of this object. Typically, only properties are shown on the object while methods are shown when needed.
Furthermore, only the properties relevant to the discussion are shown. Objects will be illustrated in this manner throughout the article series:

Methods

Methods are operations that can be performed on an object. An operation often alters some properties of the object. For example, the checkout method can be used to check out an object. Checking out an object sets the r_lock_owner property with the name of the user performing the checkout. Methods are usually invoked programmatically using Documentum Foundation Classes (DFC), though they can be indirectly invoked using the API. In general, Documentum Query Language (DQL) cannot be used to invoke arbitrary methods on objects. DQL is discussed later in this article.

Note that the term method may be used in two different contexts within Documentum. A method as a defined operation on an object type is usually invoked programmatically through DFC. There is also the concept of a method representing code that can be invoked via a job, workflow activity, or a lifecycle operation. This qualification will be made explicit when the context is not clear.

Working with objects

We used Webtop for performing various operations on documents, where the term document referred to an object with content. Some of these operations are not specific to content and apply to objects in general. For example, checkout and checkin can be performed on contentless objects as well. On the other hand, import, export, and renditions deal specifically with content. Talking specifically about operations on metadata, we can view, modify, and export object properties using Webtop.

Viewing and editing properties

Using Webtop, object properties can be viewed using the View | Properties menu item, shortcut P, or the right-click context menu. The following screenshot shows the properties of the example object discussed earlier. Note that the same screen can be used to modify and save the properties as well.
Multiple objects can be selected before viewing properties. In this case, a special dialog shows the common properties for the selected objects, as shown in the following figure. Any changes made on this dialog are applied to all the selected objects. On the properties screen, single-valued properties can be edited directly while repeating properties provide a separate screen for editing through Edit links. Some properties cannot be modified by users at any time. Other properties may not be editable because object security prevents it or if the object is immutable. Object immutability Certain operations on an object mark it as immutable, which means that object properties cannot be changed. An object is marked immutable by setting r_immutable_flag to true. Content Server prevents changes to the content and metadata of an immutable object with the exception of a few special attributes that relate to the operations that are still allowed on immutable objects. For example, users can set a version label on the object, link the object to a folder, unlink it from a folder, delete it, change its lifecycle, and perform one of the lifecycle operations such as promote/demote/suspend/resume. The attributes affected by the allowed operations are allowed to be updated. An object is marked immutable in the following situations: When an object is versioned or branched, it becomes an old version and is marked immutable. An object can be frozen which makes it immutable and imposes some other restrictions. Some virtual document operations can freeze the involved objects. A retention policy can make the documents under its control immutable. Certain operations such as unfreezing a document can reset the immutability flag making the object changeable again. Exporting properties Metadata can be exported from repository lists, such as folder contents and search results. 
Property values of the objects are exported and saved as a .csv (comma-separated values) file, which can be opened in Microsoft Excel or in a text editor. Metadata export can be performed using Tools | Export to CSV menu item or the right-click context menu. Before exporting the properties, the user is able to choose the properties to export from the available ones. Object types Objects in a repository may represent different kinds of entities – one object may represent a workflow while another object may represent a document, for example. As a result, these objects may have different properties and methods. Every time Content Server creates an object, it needs to determine the properties and methods that the object is going to possess. This information comes from an object type (also referred to as type). The term attribute is synonymous with property and the two are used interchangeably. It is common to use the term attribute when talking about a property name and to use property when referring to its value. We will use a dot notation to indicate that an attribute belongs to an object or a type. For example, objectA.title or dm_sysobject. object_name. This notation is succinct and unambiguous and is consistent with many programming languages. An object type is a template for creating objects. In other words, an object is an instance of its type. A Documentum repository contains many predefined types and allows addition of new user-defined types (also known as custom types). The most commonly used predefined object type for storing documents in the repository is dm_document. We have already seen how folders are used to organize documents. Folders are stored as objects of type dm_folder. A cabinet is a special kind of folder that does not have a parent folder and is stored as an object of type dm_cabinet. Users are represented as objects of type dm_user and a group of users is represented as an object of dm_group. 
Workflows use a process definition object of type dm_process, while the definition of a lifecycle is stored in an object of type dm_policy. The following figure shows some of these types: Just like everything else in the repository, a type is also represented as an object, which holds structural information about the type. This object is of type dm_type and stores information such as the name of the type, name of its supertype, and details about the attributes in the type. The following figure shows an object of type dm_document and an object of type dm_type representing dm_document. It also indicates how the type hierarchy information is stored in the object of type dm_type. The types present in the repository can be viewed using Documentum Administrator (DA). The following screenshot shows some attributes for the type dm_sysobject. This screen provides controls to scroll through the attributes when there are a large number of attributes present. The Info tab provides information about the type other than the attributes. While the obvious use of a type is to define the structure and behavior of one kind of object, there is another very important utility of types. A type can be used to refer to all the objects of that type as a set. For example, queries restrict their scope by specifying a type where only the objects of that type are considered for matches. In our example scenario, the loan officer may want to search for all loan applications assigned to her. This query will be straightforward if there is an object type for loan applications. Queries are introduced later in this article. As another example, audit events can be restricted to a particular object type resulting in only the objects of this type being audited. Type names and property names Each object type uses an internal type name, such as dm_document, which is used for uniquely identifying the type within queries and application code. 
Each type also has a label, which is a user-friendly name often used by applications for displaying information to the end users. For example, the type dm_document has the label Document. Conventionally, internal names of predefined types (defined by Documentum for Content Server or other client products) start with dm, as described here:

dm_: (general) represents commonly used object types such as dm_document, which is generally used for storing documents.
dmr_: (read only) represents read-only object types such as dmr_content, which stores information about a content file.
dmi_: (internal) represents internal object types such as dmi_workitem, which stores information about a task.
dmc_: (client) represents object types supporting Documentum client applications. For example, dmc_calendar objects are used by Collaboration Services for holding calendar events.

Just like an object type, each property also has an internal name and a label. For example, the label for the property object_name is Name. There are some additional conventions for internal names for properties. These names may begin with the following prefixes:

r_: (read only) normally indicates that the property is controlled by the Content Server and cannot be modified by users or applications. For example, r_object_id represents the unique ID for the object. On the other hand, r_version_label is an interesting property. It is a repeating property and has at least one value supplied by the Content Server while others may be supplied by users or applications.
i_: (internal) is similar to r_ except that this property is used internally by the Content Server and normally not seen by users and applications. i_chronicle_id binds all the versions together into a version tree and is managed by the Content Server.
a_: (application) indicates that this property is intended to be used by applications and can be modified by applications and users. For example, the format of a document is stored in a_content_type.
This property helps Webtop launch an appropriate desktop application to open a document. The other three prefixes can also be considered to imply system or non-application attributes, in general. _: (computed) indicates that this property is not stored in the repository and is computed by Content Server as needed. These properties are also normally read-only for applications. For example, each object has a property called _changed, which indicates whether it has been changed since it was last saved. Many of the computed properties are related to security and most are used for caching information in user sessions.
Customize backend Component in Joomla! 1.5
Packt, 04 Jun 2010, 13 min read
Itemized data

Most components handle and display itemized data. Itemized data is data having many instances; most commonly this reflects rows in a database table. When dealing with itemized data there are three areas of functionality that users generally expect:

- Pagination
- Ordering
- Filtering and searching

In this section we will discuss each of these areas of functionality and how to implement them in the backend of a component.

Pagination

To make large amounts of itemized data easier to understand, we can split the data across multiple pages. Joomla! provides us with the JPagination class to help us handle pagination in our extensions. There are four important attributes associated with the JPagination class:

- limitstart: The item with which we begin a page; for example, the first page always begins with item 0.
- limit: The maximum number of items to display on a page.
- total: The total number of items across all the pages.
- _viewall: The option to ignore pagination and display all items.

Before we dive into piles of code, let's take the time to examine the listFooter, the footer that is used at the bottom of pagination lists. The box to the far left defines the maximum number of items to display per page (limit). The remaining buttons are used to navigate between pages. The final text defines the current page out of the total number of pages. The great thing about this footer is that we don't have to work very hard to create it! We can use a JPagination object to build it. This not only means that it is easy to implement, but also that the pagination footers are consistent throughout Joomla!. JPagination is used extensively by components in the backend when displaying lists of items. In order to add pagination to our revues list we must make some modifications to our backend revues model.
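The relationship between these four values can be sketched in plain PHP. This is standalone arithmetic for illustration only, not the actual JPagination implementation:

```php
<?php
// Standalone sketch of the pagination arithmetic JPagination works with.
// These are illustrative calculations, not Joomla! library code.
$total      = 23;  // total number of items across all pages
$limit      = 5;   // maximum items per page
$limitstart = 10;  // index of the first item on the current page

// Number of pages needed to show every item
$pagesTotal = (int) ceil($total / $limit);

// Which page limitstart falls on (pages are numbered from 1)
$pagesCurrent = (int) ($limitstart / $limit) + 1;

echo "Page $pagesCurrent of $pagesTotal\n"; // Page 3 of 5
```

The listFooter renders exactly this "Page X of Y" information, plus the navigation buttons that adjust limitstart.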
Our current model consists of one private property $_revues and two methods: getRevues() and delete(). We need to add two additional private properties for pagination purposes. Let's place them immediately following the existing $_revues property:

/** @var array of revue objects */
var $_revues = null;

/** @var int total number of revues */
var $_total = null;

/** @var JPagination object */
var $_pagination = null;

Next we must add a class constructor, as we will need to retrieve and initialize the global pagination variables $limit and $limitstart. JModel objects store a state object in order to record the state of the model. It is common to use the state variables limit and limitstart to record the number of items per page and the starting item for the page. We set the state variables in the constructor:

/**
 * Constructor
 */
function __construct()
{
    global $mainframe, $option;

    parent::__construct();

    // Get the pagination request variables
    $limit      = $mainframe->getUserStateFromRequest(
        'global.list.limit', 'limit', $mainframe->getCfg('list_limit'));
    $limitstart = $mainframe->getUserStateFromRequest(
        $option.'limitstart', 'limitstart', 0);

    // Set the state pagination variables
    $this->setState('limit', $limit);
    $this->setState('limitstart', $limitstart);
}

Remember that $mainframe references the global JApplication object (note that $option must also be declared global, since it is used to build the state variable name). We use the getUserStateFromRequest() method to get the limit and limitstart variables. We use the user state variable global.list.limit to determine the limit. This variable is used throughout Joomla! to determine the length of lists. For example, if we were to view the Article Manager and select a limit of five items per page, and then move to a different list, it would also be limited to five items. If a value is set in the request value limit (part of the listFooter), we use that value. Alternatively we use the previous value, and if that is not set we use the default value defined in the application configuration.
The limitstart variable is retrieved from the user state value $option plus .limitstart. The $option value holds the component name, for example com_content. If we build a component that has multiple lists we should add an extra level to this, normally named after the entity. If a value is set in the request value limitstart (part of the listFooter) we use that value. Alternatively we use the previous value, and if that is not set we use the default value 0, which will lead us to the first page. The reason we retrieve these values in the constructor and not in another method is that, in addition to using them for the JPagination object, we will also need them when getting data from the database.

In our existing component model we have a single method for retrieving data from the database, getRevues(). For reasons that will become apparent shortly, we need to create a private method that builds the query string, and modify our getRevues() method to use it.

/**
 * Builds a query to get data from #__boxoffice_revues
 * @return string SQL query
 */
function _buildQuery()
{
    $db =& $this->getDBO();
    $rtable = $db->nameQuote('#__boxoffice_revues');
    $ctable = $db->nameQuote('#__categories');
    $query = ' SELECT r.*, cc.title AS cat_title'
           . ' FROM ' . $rtable . ' AS r'
           . ' LEFT JOIN ' . $ctable . ' AS cc ON cc.id = r.catid';
    return $query;
}

We now must modify our getRevues() method:

/**
 * Get a list of revues
 *
 * @access public
 * @return array of objects
 */
function getRevues()
{
    if (empty($this->_revues)) {
        // Build the query and get the limits from the current state
        $query      = $this->_buildQuery();
        $limitstart = $this->getState('limitstart');
        $limit      = $this->getState('limit');
        $this->_revues = $this->_getList($query, $limitstart, $limit);
    }

    // Return the list of revues
    return $this->_revues;
}

We retrieve the object state variables limit and limitstart and pass them to the private JModel method _getList().
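What _getList() effectively does to the query can be shown in isolation. The helper name buildLimitedQuery() below is hypothetical, used only for this sketch of the underlying idea:

```php
<?php
// Hypothetical helper illustrating how a limit/limitstart pair
// translates into a SQL LIMIT clause, as _getList() arranges internally.
function buildLimitedQuery($query, $limitstart, $limit)
{
    if ($limit > 0) {
        $query .= ' LIMIT ' . (int) $limitstart . ', ' . (int) $limit;
    }
    return $query;
}

echo buildLimitedQuery('SELECT * FROM #__boxoffice_revues', 0, 5), "\n";
// SELECT * FROM #__boxoffice_revues LIMIT 0, 5
```

A limit of 0 leaves the query untouched, which corresponds to the "view all" case.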
The _getList() method is used to get an array of objects from the database based on a query and, optionally, limit and limitstart. The last two parameters modify the query in such a way that we only return the desired results. For example, if we requested page 1 and were displaying a maximum of five items per page, the following would be appended to the query: LIMIT 0, 5.

To handle pagination we need to add a method called getPagination() to our model. This method will handle the items we are trying to paginate using a JPagination object. Here is our code for the getPagination() method:

/**
 * Get a pagination object
 *
 * @access public
 * @return pagination object
 */
function getPagination()
{
    if (empty($this->_pagination)) {
        // Import the pagination library
        jimport('joomla.html.pagination');

        // Prepare the pagination values
        $total      = $this->getTotal();
        $limitstart = $this->getState('limitstart');
        $limit      = $this->getState('limit');

        // Create the pagination object
        $this->_pagination = new JPagination($total, $limitstart, $limit);
    }
    return $this->_pagination;
}

There are three important aspects to this method. We use the private property $_pagination to cache the object, we use the getTotal() method to determine the total number of items, and we use the getState() method to determine the number of results to display. The getTotal() method is a method that we must define ourselves; we don't have to use this name or this mechanism to determine the total number of items. Here is one way of implementing the getTotal() method:

/**
 * Get number of items
 *
 * @access public
 * @return integer
 */
function getTotal()
{
    if (empty($this->_total)) {
        $query = $this->_buildQuery();
        $this->_total = $this->_getListCount($query);
    }
    return $this->_total;
}

This method calls our model's private method _buildQuery() to build the query, the same query that we use to retrieve our list of revues.
We then use the private JModel method _getListCount() to count the number of results that will be returned from the query. We now have everything we need to add pagination to our revues list, except for actually adding it to the list page. We need to add a few lines of code to our revues/view.html.php file. We will need access to the global user state variables, so we must add a reference to the global application object (and to $option, which we use to build state variable names) as the first line in our display method:

global $mainframe, $option;

Next we need to create and populate an array that will contain user state information. We will add this code immediately after the code that builds the toolbar:

// Prepare list array
$lists = array();

// Get the user state
$filter_order     = $mainframe->getUserStateFromRequest(
    $option.'filter_order', 'filter_order', 'published');
$filter_order_Dir = $mainframe->getUserStateFromRequest(
    $option.'filter_order_Dir', 'filter_order_Dir', 'ASC');

// Build the list array for use in the layout
$lists['order']     = $filter_order;
$lists['order_Dir'] = $filter_order_Dir;

// Get revues and pagination from the model
$model  =& $this->getModel('revues');
$revues =& $model->getRevues();
$page   =& $model->getPagination();

// Assign references for the layout to use
$this->assignRef('lists', $lists);
$this->assignRef('revues', $revues);
$this->assignRef('page', $page);

After we create and populate the $lists array, we add a variable $page that receives a reference to a JPagination object by calling our model's getPagination() method. And finally, we assign references to the $lists and $page variables so that our layout can access them.

Within our layout default.php file we must make some minor changes toward the end of the existing code. Between the closing </tbody> tag and the </table> tag we must add the following:

<tfoot>
    <tr>
        <td colspan="10">
            <?php echo $this->page->getListFooter(); ?>
        </td>
    </tr>
</tfoot>

This creates the pagination footer using the JPagination method getListFooter().
The final change we need to make is to add two hidden fields to the form. Under the existing hidden fields we add the following code:

<input type="hidden" name="filter_order"
       value="<?php echo $this->lists['order']; ?>" />
<input type="hidden" name="filter_order_Dir" value="" />

The most important thing to notice is that we leave the value of the filter_order_Dir field empty. This is because the listFooter deals with this for us. That is it! We have now added pagination to our page.

Ordering

Another enhancement that we can add is the ability to sort or order our data by column, which we can accomplish easily using the JHTML grid.sort type. And, as an added bonus, we have already completed a significant amount of the necessary code when we added pagination. Most of the changes to revues/view.html.php that we made for pagination are reused for implementing column ordering; we don't have to make a single change. We also added two hidden fields, filter_order and filter_order_Dir, to our layout form, default.php. The first defines the column by which to order our data and the latter defines the direction, ascending or descending. Most of the column headings for our existing layout are currently composed of simple text wrapped in table heading tags (<th>Title</th> for example). We need to replace the text with the output of the grid.sort function for those columns that we wish to be orderable.
Here is our new code: <thead> <tr> <th width="20" nowrap="nowrap"> <?php echo JHTML::_('grid.sort', JText::_('ID'), 'id', $this->lists['order_Dir'], $this->lists['order'] ); ?> </th> <th width="20" nowrap="nowrap"> <input type="checkbox" name="toggle" value="" onclick="checkAll( <?php echo count($this->revues); ?>);" /> </th> <th width="40%"> <?php echo JHTML::_('grid.sort', JText::_('TITLE'), 'title', $this->lists['order_Dir'], $this->lists['order'] ); ?> </th> <th width="20%"> <?php echo JHTML::_('grid.sort', JText::_('REVUER'), 'revuer', $this->lists['order_Dir'], $this->lists['order'] ); ?> </th> <th width="80" nowrap="nowrap"> <?php echo JHTML::_('grid.sort', JText::_('REVUED'), 'revued', $this->lists['order_Dir'], $this->lists['order'] ); ?> </th> <th width="80" nowrap="nowrap" align="center"> <?php echo JHTML::_('grid.sort', 'ORDER', 'ordering', $this->lists['order_Dir'], $this->lists['order'] ); ?> </th> <th width="10" nowrap="nowrap"> <?php if($ordering) echo JHTML::_('grid.order', $this->revues); ?> </th> <th width="50" nowrap="nowrap"> <?php echo JText::_('HITS'); ?> </th> <th width="100" nowrap="nowrap" align="center"> <?php echo JHTML::_('grid.sort', JText::_('CATEGORY'), 'category', $this->lists['order_Dir'], $this->lists['order'] ); ?> </th> <th width="60" nowrap="nowrap" align="center"> <?php echo JHTML::_('grid.sort', JText::_('PUBLISHED'), 'published', $this->lists['order_Dir'], $this->lists['order'] ); ?> </th> </tr></thead> Let's look at the last column, Published, and dissect the call to grid.sort. Following grid.sort we have the name of the column, filtered through JText::_() passing it a key to our translation file. The next parameter is the sort value, the current order direction, and the current column by which the data is ordered. In order for us to be able to use these headings to order our data we must make a few additional modifications to our JModel class. We created the _buildQuery() method earlier when we were adding pagination. 
We now need to make a change to that method to handle ordering:

/**
 * Builds a query to get data from #__boxoffice_revues
 * @return string SQL query
 */
function _buildQuery()
{
    $db =& $this->getDBO();
    $rtable = $db->nameQuote('#__boxoffice_revues');
    $ctable = $db->nameQuote('#__categories');
    $query = ' SELECT r.*, cc.title AS cat_title'
           . ' FROM ' . $rtable . ' AS r'
           . ' LEFT JOIN ' . $ctable . ' AS cc ON cc.id = r.catid'
           . $this->_buildQueryOrderBy();
    return $query;
}

Our method now calls a method named _buildQueryOrderBy() that builds the ORDER BY clause for the query:

/**
 * Build the ORDER BY part of a query
 *
 * @return string part of an SQL query
 */
function _buildQueryOrderBy()
{
    global $mainframe, $option;

    // Array of allowable order fields
    $orders = array('title', 'revuer', 'revued', 'category',
                    'published', 'ordering', 'id');

    // Get the order field and direction; the default order field
    // is 'ordering', the default direction is ascending
    $filter_order = $mainframe->getUserStateFromRequest(
        $option.'filter_order', 'filter_order', 'ordering');
    $filter_order_Dir = strtoupper(
        $mainframe->getUserStateFromRequest(
            $option.'filter_order_Dir', 'filter_order_Dir', 'ASC'));

    // Validate the order direction; it must be ASC or DESC
    if ($filter_order_Dir != 'ASC' && $filter_order_Dir != 'DESC') {
        $filter_order_Dir = 'ASC';
    }

    // If the order column is unknown, use the default
    if (!in_array($filter_order, $orders)) {
        $filter_order = 'ordering';
    }

    $orderby = ' ORDER BY '.$filter_order.' '.$filter_order_Dir;
    if ($filter_order != 'ordering') {
        $orderby .= ', ordering';
    }

    // Return the ORDER BY clause
    return $orderby;
}

As with the view, we retrieve the order column name and direction using the application's getUserStateFromRequest() method. Since this data is going to be used to interact with the database, we perform some sanity checks to ensure that it is safe to use. Now that we have done this, we can use the table headings to order itemized data.
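The whitelist validation at the heart of _buildQueryOrderBy() can be demonstrated in isolation. The function name sanitizeOrderBy() is a hypothetical stand-in used only for this sketch:

```php
<?php
// Standalone sketch of the ORDER BY whitelist validation shown above.
// Column names cannot be bound as query parameters, so validating them
// against a fixed list is the standard defence against SQL injection.
function sanitizeOrderBy($column, $direction)
{
    $allowed = array('title', 'revuer', 'revued', 'category',
                     'published', 'ordering', 'id');

    // Unknown columns fall back to the default
    if (!in_array($column, $allowed, true)) {
        $column = 'ordering';
    }

    // Direction must be exactly ASC or DESC
    $direction = strtoupper($direction);
    if ($direction !== 'ASC' && $direction !== 'DESC') {
        $direction = 'ASC';
    }

    return ' ORDER BY ' . $column . ' ' . $direction;
}

echo sanitizeOrderBy('title', 'desc'), "\n";                   // ORDER BY title DESC
echo sanitizeOrderBy('id; DROP TABLE jos_users', 'ASC'), "\n"; // ORDER BY ordering ASC
```

Anything a user could have tampered with, such as a request variable that ends up in SQL, should pass through a check like this.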
This is a screenshot of such a table: Notice that the current ordering is title descending, as denoted by the small arrow to the right of Title.
Red5: A video-on-demand Flash Server
Packt, 04 Jun 2010, 6 min read
Plone does not provide a responsive user experience out of the box. This is not because the system is slow, but because it simply does (too) much. It does a lot of security checks and workflow operations, handles the content rules, does content validation, and so on. Still, there are some high-traffic sites running with the popular Content Management System. How do they manage?

"All Plone integrators are caching experts." This saying is commonly heard and read in the Plone community. And it is true. If we want a fast and responsive system, we have to use caching and load-balancing applications to spread the load.

This article discusses a practical example. We will set up a protected video-on-demand solution with Plone and a Red5 server, and see how to integrate the two for an effective and secure video-streaming solution. The Red5 server is an open source Flash server. It is written in Java and is very extensible via plugins. There are plugins for transcoding, different kinds of streaming, and several other manipulations we might want to do with video or audio content. What we want to investigate here is how to integrate video streams protected by Plone permissions.

Requirements for setting up a Red5 server

The requirement for running a Red5 Flash server is Java 6. We can check the Java version by running this:

$ java -version
java version "1.6.0_17"
Java(TM) SE Runtime Environment (build 1.6.0_17-b04-248-9M3125)
Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01-101, mixed mode)

The version needs to be at least 1.6. Earlier versions of the Red5 server run with 1.5, but the plugin for protecting the media files needs Java 6. If we do not have Java 6 already, we can download it from the Sun home page. There are packages available for Windows and Linux. Some Linux distributions ship different implementations of Java because of licensing issues.
You may check the corresponding documentation if this is the case for you. Mac OS X ships with its own Java bundled. To set the Java version to 1.6 on Mac OS X, we need to do the following:

$ cd /System/Library/Frameworks/JavaVM.framework/Versions
$ rm Current*
$ ln -s 1.6 Current
$ ln -s 1.6 CurrentJDK

After doing so, we should double-check the Java version with the command shown before. The Red5 server is available as a package for various operating systems. In the next section, we will see how we can integrate a Red5 server into a Plone buildout.

A Red5 buildout

Red5 can be downloaded in several different ways. As it is open source, even the sources are available as a tarball from the product home page. For the buildout, we use the bundle of ready-compiled Java libraries. This bundle comes with everything needed to run a standalone Red5 server. Startup scripts are provided for Windows and Bash (usable with Linux and Mac OS X). Let's see how to configure our buildout. The buildout needs the usual common elements for a Plone 3.3.3 installation. Apart from the application and the instance, the Red5-specific parts are also present: an fss storage part and a part for setting up the supervisor.

[buildout]
newest = false
parts =
    zope2
    instance
    fss
    red5
    red5-webapp
    red5-protectedVOD
    supervisor
extends =
    http://dist.plone.org/release/3.3.3/versions.cfg
versions = versions
find-links =
    http://dist.plone.org/release/3.3.3
    http://dist.plone.org/thirdparty
    http://pypi.python.org/simple/

There is nothing special in the zope2 application part:

[zope2]
recipe = plone.recipe.zope2install
fake-zope-eggs = true
url = ${versions:zope2-url}

On the Plone side we need, besides the fss eggs, a package called unimr.red5.protectedvod. This package with the rather complicated name creates rather complicated one-time URLs for the communication with Red5.
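The Java 6 check can also be scripted. The sketch below parses the classic Sun-style `java version "1.x.y_z"` output; the exact format varies by vendor, so treat that assumption, and the function name java_minor, as illustrative:

```shell
# Sketch: extract the Java minor version from `java -version` output and
# decide whether it satisfies the Java 6 requirement for protectedVOD.
# Assumes the classic Sun "java version "1.x.y_z"" output layout.
java_minor() {
    # $1 is the raw `java -version` output (it is printed to stderr,
    # hence the usual `java -version 2>&1` when capturing it)
    echo "$1" | awk -F '"' '/version/ {print $2}' | cut -d. -f2
}

sample='java version "1.6.0_17"'
minor=$(java_minor "$sample")

if [ "${minor:-0}" -ge 6 ]; then
    echo "Java 1.$minor satisfies the Red5 protectedVOD requirement"
fi
```

In practice you would feed it live output: `minor=$(java_minor "$(java -version 2>&1)")`.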
[instance]
recipe = plone.recipe.zope2instance
zope2-location = ${zope2:location}
user = admin:admin
http-address = 8080
eggs =
    Plone
    unimr.red5.protectedvod
    iw.fss
zcml =
    unimr.red5.protectedvod
    iw.fss
    iw.fss-meta

First, we need to configure FileSystemStorage. FileSystemStorage is used for sharing the videos between Plone and Red5. The videos are uploaded via the Plone UI and are put on the filesystem. The storage strategy needs to be either site1 or site2. These two strategies store the binary data with its original filename and file extension. The extension is needed for the Red5 server to recognize the file.

[fss]
recipe = iw.recipe.fss
zope-instances =
    ${instance:location}
storages = global /site /site site2

The red5 part downloads and extracts the Red5 application. We have to keep in mind that everything is placed into the parts directory. This includes configurations, plugins, logs, and even content. We need to be extra careful with changing the recipe in the buildout if running in production mode. The content we share with Plone is symlinked, so this is not a problem. For the logs, we might change the position to outside the parts directory and symlink them back.

[red5]
recipe = hexagonit.recipe.download
url = http://www.red5.org/downloads/0_8/red5-0.8.0.tar.gz

The next part adds our custom application, which handles the temporary links used for protection, to the Red5 application. The plugin is shipped together with the unimr.red5.protectedvod egg we use on the Plone side. It is easier to get it from the Subversion repository directly.

[red5-webapp]
recipe = infrae.subversion
urls = http://svn.plone.org/svn/collective/unimr.red5.protectedvod/trunk/unimr/red5/protectedvod/red5-webapp red5-webapp

The red5-protectedVOD part configures the protectedVOD plugin. Basically, the WAR archive we checked out in the previous step is extracted. If the location of the fss storage does not already exist, it is symlinked into the streams directory of the plugin.
The streams directory is the usual place for media files for Red5.

[red5-protectedVOD]
recipe = iw.recipe.cmd
on_install = true
on_update = false
cmds =
    mkdir -p ${red5:location}/webapps/protectedVOD
    cd ${red5:location}/webapps/protectedVOD
    jar xvf ${red5-webapp:location}/red5-webapp/protectedVOD_0.1-red5_0.8-java6.war
    cd streams
    if [ ! -L ${red5:location}/webapps/protectedVOD/streams/fss_storage_site ]; then ln -s ${buildout:directory}/var/fss_storage_site .; fi

The commands used above are Unix/Linux centric. Until Vista/Server 2008, Windows didn't understand symbolic links, which is why the whole idea of the recipe doesn't work there. The recipe might work with Windows Vista, Windows Server 2008, or Windows 7, but the commands would look different.

Finally, we add the Red5 server to our supervisor configuration. We need to set the RED5_HOME environment variable so that the startup script can find the necessary libraries of Red5.

[supervisor]
recipe = collective.recipe.supervisor
programs =
    30 instance2 ${instance2:location}/bin/runzope ${instance2:location} true
    40 red5 env [RED5_HOME=${red5:location} ${red5:location}/red5.sh] ${red5:location} true

After running the buildout, we can start the supervisor by issuing the following command:

$ bin/supervisord

The supervisor will take care of running all the subprocesses. To find out more about the supervisor, we may visit its website. To check that everything worked, we can request a status report by issuing this:

$ bin/supervisorctl status
instance    RUNNING    pid 2176, uptime 3:00:23
red5        RUNNING    pid 7563, uptime 0:51:25
Improving components with Joomla! 1.5
Packt, 03 Jun 2010, 16 min read
Improving components

We are going to be working almost exclusively on the backend component in this article, but most of what we will cover could easily be adapted for the frontend component if we wished to do so.

Component backend

When we build the backend of a component there are some very important things to consider. Most components will include at least two backend views or forms; one will display a list of items and another will provide a form for creating or editing a single item. There may be additional views depending on the component, but for now we will work with our com_boxoffice component, which consists of two views.

Toolbars

Although we have already built our component toolbars, we didn't spend much time discussing all the features and capabilities that are available to us, so let's start with a bit of a review and then add a few enhancements to our component. Our backend component has two toolbars. The first is displayed when we access our component from the Components | Box Office Revues menu. The second is displayed when we click on the New or Edit button, or click on a movie title link in the list that is displayed.

Administration toolbars consist of a title and a set of buttons that provide built-in functionality; it requires only a minimum amount of effort to add significant functionality to our administration page. We add buttons to our toolbar in our view classes using the static JToolBarHelper class. In our administration/components/com_boxoffice/views folder we have two views, revues and revue.
In the revues/view.html.php file we generated the toolbar with the following code:

JToolBarHelper::title(JText::_('Box Office Revues'), 'generic.png');
JToolBarHelper::deleteList();
JToolBarHelper::editListX();
JToolBarHelper::addNewX();
JToolBarHelper::preferences('com_boxoffice', '200');
JToolBarHelper::help('help', true);

In our example we set the title of our menu bar to Box Office Revues, passing it through JText::_(), which will translate it if we have installed a language file. Next we add Delete, Edit, New, Preferences, and Help buttons. Note that whenever we use JToolBarHelper we must set the title before we add any buttons. There are many different buttons that we can add to the menu bar; if we cannot find a suitable button we can define our own. Most of the buttons behave as form buttons for the form adminForm, which we will discuss shortly. Some buttons require certain input fields to be included with the adminForm in order to function correctly. The following list describes the available buttons that we can add to the menu bar:

- addNew: Adds an add new button to the menu bar.
- addNewX: Adds an extended version of the add new button, calling hideMainMenu() before submitbutton().
- apply: Adds an apply button to the menu bar.
- archiveList: Adds an archive button to the menu bar.
- assign: Adds an assign button to the menu bar.
- back: Adds a back button to the menu bar.
- cancel: Adds a cancel button to the menu bar.
- custom: Adds a custom button to the menu bar.
- customX: Adds an extended version of the custom button, calling hideMainMenu() before submitbutton().
- deleteList: Adds a delete button to the menu bar.
- deleteListX: Adds an extended version of the delete button, calling hideMainMenu() before submitbutton().
- divider: Adds a divider, a vertical line, to the menu bar.
- editCss: Adds an edit CSS button to the menu bar.
- editCssX: Adds an extended version of the edit CSS button, calling hideMainMenu() before submitbutton().
- editHtml: Adds an edit HTML button to the menu bar.
- editHtmlX: Adds an extended version of the edit HTML button, calling hideMainMenu() before submitbutton().
- editList: Adds an edit button to the menu bar.
- editListX: Adds an extended version of the edit button, calling hideMainMenu() before submitbutton().
- help: Adds a Help button to the menu bar.
- makeDefault: Adds a Default button to the menu bar.
- media_manager: Adds a Media Manager button to the menu bar.
- preferences: Adds a Preferences button to the menu bar.
- preview: Adds a Preview button to the menu bar.
- publish: Adds a Publish button to the menu bar.
- publishList: Adds a Publish button to the menu bar.
- save: Adds a Save button to the menu bar.
- spacer: Adds a sizable spacer to the menu bar.
- title: Sets the title and the icon class of the menu bar.
- trash: Adds a Trash button to the menu bar.
- unarchiveList: Adds an Unarchive button to the menu bar.
- unpublish: Adds an Unpublish button to the menu bar.
- unpublishList: Adds an Unpublish button to the menu bar.

Submenu

Directly below the main menu bar is an area reserved for the submenu. There are two methods available to populate the submenu. The submenu is automatically populated with items defined in the component XML manifest file. We can also modify the submenu, adding or removing menu items, using the JSubMenuHelper class. We will begin by adding a submenu using the component XML manifest file. When we last updated our component XML manifest file we placed a menu item in the Administration section:

<menu>Box Office Revues</menu>

This placed a menu item under the Components menu. Our component utilizes a single table, #__boxoffice_revues, which stores specific information related to movie revues. One thing that might make our component more useful is the ability to categorize movies by genre (for example: action, romance, science fiction, and so on). Joomla!'s built-in #__categories table will make this easy to implement.
We will need to make a few changes in several places, so let's get started. The first change we need to make is to modify our #__boxoffice_revues table, adding a foreign key field that will point to a record in the #__categories table. We will add one field to our table immediately after the primary key field id:

`catid` int(11) NOT NULL default '0',

If you have installed phpMyAdmin you can easily add this new field without losing any existing data. Be sure to update the install.sql file for future component installs. Next we will add our submenu items to the component XML manifest file, immediately after the existing menu declaration:

<submenu>
    <menu link="option=com_boxoffice">Revues</menu>
    <menu link="option=com_categories&amp;section=com_boxoffice">Categories</menu>
</submenu>

Note that we use &amp; rather than an ampersand (&) character to avoid problems with XML parsing. Since we modified our #__boxoffice_revues table, we must update our JTable subclass /tables/revue.php to match by adding the following lines immediately after the id field:

/** @var int */
var $catid = 0;

And finally, we need to modify our layout /views/revue/tmpl/default.php to allow us to select a category or genre for our movie (place this immediately after the </tr> tag of the first table row, the one that contains our movie title):

<tr>
    <td width="100" align="right" class="key">
        <label for="catid">
            <?php echo JText::_('Movie Genre'); ?>:
        </label>
    </td>
    <td>
        <?php echo JHTML::_('list.category', 'catid', 'com_boxoffice', $this->revue->catid); ?>
    </td>
</tr>

The call to JHTML::_() produces the HTML to display the selection drop-down list of component-specific categories. The static JHTML class is an integral part of the joomla.html library, which we will discuss in the next section. Creating submenu items through the component XML manifest file is not the only method at our disposal; we can modify the submenu using the static JSubMenuHelper class.
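If you prefer running SQL directly instead of using phpMyAdmin, the same change can be applied with an ALTER TABLE statement. The jos_ prefix below is an assumption (it is the default expansion of the #__ placeholder; substitute your installation's prefix):

```sql
-- Adds the category foreign key after the primary key `id`,
-- without touching existing rows. `jos_` is the assumed table prefix.
ALTER TABLE `jos_boxoffice_revues`
    ADD COLUMN `catid` INT(11) NOT NULL DEFAULT '0' AFTER `id`;
```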
Please note, however, that these methods differ in a number of ways. Submenu items added using the manifest file will appear as submenu items under the Components menu item as well as in the submenu area of the menu bar. For example, the Components menu will appear as it does in the following screenshot. The submenu items will appear on the component list page as shown in the following image, and they will also appear on the Category Manager page.

If we were to use the JSubMenuHelper class, the submenu items would only appear on our component submenu bar; they would not appear on Components | Box Office Revues or on the Category Manager submenu, which would eliminate the means of returning to our component menu. For these reasons it is generally better to create submenus that link to other components using the XML manifest file. There are, however, valid reasons for using JSubMenuHelper to create submenu items. If your component provides additional views of your data, adding submenu items using JSubMenuHelper is the more appropriate method for doing so. This example adds two options to the submenu using JSubMenuHelper:

// Get the current task
$task = JRequest::getCmd('task');

if ($task == 'item1' || $task == 'item2')
{
    // Determine the selected task
    $selected = ($task == 'item1');

    // Prepare links
    $item1 = 'index.php?option=com_myextension&task=item1';
    $item2 = 'index.php?option=com_myextension&task=item2';

    // Add submenu items
    JSubMenuHelper::addEntry(JText::_('Item 1'), $item1, $selected);
    JSubMenuHelper::addEntry(JText::_('Item 2'), $item2, !$selected);
}

The addEntry() method adds a new item to the submenu. Items are added in order of appearance. The first parameter is the name, the second is the link location, and the third is true if the item is the current menu item (note that each entry must receive its own flag, so Item 2 is marked current only when Item 1 is not). The next screenshot depicts the given example, in the component My Extension, when the selected task is Item1. There is one more thing that we can do with the submenu: we can remove it.
This is especially useful with views where, if a user navigates away without following the correct procedure, an item becomes locked. If we set the hidemainmenu request value to 1, the submenu will not be displayed. We normally do this in methods in our controllers; a common method in which this would be done is edit(). This example demonstrates how:

```php
JRequest::setVar('hidemainmenu', 1);
```

There is one other caveat when doing this: the main menu will be deactivated. This screenshot depicts the main menu across the top of the backend:

This screenshot depicts the main menu across the top of the backend when hidemainmenu is enabled; you will notice that all of the menu items are grayed out:

The joomla.html library

The joomla.html library provides a comprehensive set of classes for use in rendering XHTML. An integral part of the library is the static JHTML class. Within this class is the class loader method JHTML::_(), which we will use to generate and render XHTML elements and JavaScript behaviors. We generate an XHTML element or JavaScript behavior using the following method:

```php
echo JHTML::_('type', 'parameter_1', ..., 'parameter_N');
```

The JHTML class supports eight basic XHTML element types; there are also eight supporting classes that provide support for more complex XHTML element types and JavaScript behaviors. While we will not be using every available element type or behavior, we will make good use of a significant number of them throughout this article; enough for you to make use of others as the need arises.
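To make the loader syntax concrete before we survey the types, here is a hedged sketch of a few calls (assuming a Joomla! 1.5 environment; the date value, URL, image path, and parameter order are illustrative assumptions, not part of the original example):

```php
// sketch only: a formatted date string (format string is an assumption)
echo JHTML::_('date', '2010-06-19 12:00:00', '%d %B %Y');

// sketch only: an <a> element pointing at our component (URL is illustrative)
echo JHTML::_('link', 'index.php?option=com_boxoffice', 'Box Office Revues');
```

Each call names a type as its first parameter; the remaining parameters are passed through to the method that renders that type.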
The basic element types are:

- calendar: Generates a calendar control field and a clickable calendar image
- date: Returns a formatted date string
- iframe: Generates an XHTML <iframe></iframe> element
- image: Generates an XHTML <img /> element
- link: Generates an XHTML <a></a> element
- script: Generates an XHTML <script></script> element
- style: Generates a <link rel="stylesheet" type="text/css" /> element
- tooltip: Generates a popup tooltip using JavaScript

There are eight supporting classes that provide more complex elements and behaviors, which we generally refer to as grouped types. Grouped types are identified by a group name and a type name. The supporting classes and group names are:

| Class | Group | Description |
| --- | --- | --- |
| JHTMLBehavior | behavior | Creates JavaScript client-side behaviors |
| JHTMLEmail | email | Provides email address cloaking |
| JHTMLForm | form | Generates a hidden token field |
| JHTMLGrid | grid | Creates HTML form grids |
| JHTMLImage | image | Enables a type of image overriding in templates |
| JHTMLList | list | Generates common selection lists |
| JHTMLMenu | menu | Generates menus |
| JHTMLSelect | select | Generates drop-down selection boxes |

All group types are invoked using the JHTML::_('group.type', ...) syntax. The following section provides an overview of the available group types.

behavior

These types are special because they deal with JavaScript in order to create client-side behaviors. We'll use behavior.modal as an example. This behavior allows us to display an inline modal window that is populated from a specific URI. A modal window is a window that prevents a user from returning to the originating window until the modal window has been closed. A good example of this is the 'Pagebreak' button used in the article manager when editing an article.

The behavior.modal type does not return anything; it prepares the necessary JavaScript. In fact, none of the behavior types return data; they are designed solely to import functionality into the document.
This example demonstrates how we can use the behavior.modal type to open a modal window that uses www.example.org as the source:

```php
// prepare the JavaScript parameters
$params = array('size' => array('x' => 100, 'y' => 100));
// add the JavaScript
JHTML::_('behavior.modal', 'a.mymodal', $params);
// create the modal window link
echo '<a class="mymodal" title="example" href="http://www.example.org"'
    . ' rel="{handler: \'iframe\', size: {x: 400, y: 150}}">'
    . 'Example Modal Window</a>';
```

The a.mymodal parameter is used to identify the elements to which we want to attach the modal window. In this case, we want to use all <a> tags of class mymodal. This parameter is optional; the default selector is a.modal. We use $params to specify default settings for modal windows. This list details the keys that we can use in this array to define default values:

- ajaxOptions
- size
- onOpen
- onClose
- onUpdate
- onResize
- onMove
- onShow
- onHide

The link that we create is only special because of the JavaScript in the rel attribute. This JavaScript object is used to determine the exact behavior of the modal window for this link. We must always specify handler; this is used to determine how to parse the input from the link. In most cases this will be iframe, but we can also use image, adopt, url, and string. The size parameter is optional; here it is used to override the default specified when we used the behavior.modal type to import the JavaScript.
The settings have three layers of inheritance:

1. The default settings defined in the modal.js file
2. The settings we define when using the behavior.modal type
3. The settings we define when creating the link

This is a screenshot of the resultant modal window when the link is used:

Here are the behavior types:

- calendar: Adds JavaScript to use the showCalendar() function
- caption: Places the image title beneath an image
- combobox: Adds JavaScript to add combo selection to text fields
- formvalidation: Adds the generic JFormValidator JavaScript class to the document
- keepalive: Adds JavaScript to maintain a user's session
- modal: Adds JavaScript to implement modal windows
- mootools: Adds the MooTools JavaScript library to the document head
- switcher: Adds JavaScript to toggle between hidden and displayed elements
- tooltip: Adds the JavaScript required to enable tooltips
- tree: Instantiates the MooTools JavaScript class MooTree
- uploader: Adds a dynamic file uploading mechanism using JavaScript

email

There is only one email type:

- cloak: Adds JavaScript to encrypt e-mail addresses in the browser

form

There is only one form type:

- token: Generates a hidden token field to reduce the risk of CSRF exploits

grid

The grid types are used for displaying a dataset's item elements in a table of a backend form. There are seven grid types, each of which handles a commonly defined database field such as access, published, ordering, or checked_out. The grid types are used within a form named adminForm that must include a hidden field named boxchecked with a default value of 0, and another named task that will be used to determine which task a controller will execute.

To illustrate how the grid types are used, we will use grid.id and grid.published along with our component database table #__boxoffice_revues, which has a primary key field named id, a field named published that we use to determine whether an item should be displayed, and a field named name.
We can determine the published state of a record in our table by using grid.published. This example demonstrates how we might process each record in a view form layout and output the data into a grid or table ($this->revues is an array of objects representing records from the table):

```php
<?php
$i = 0;
foreach ($this->revues as $row) :
    $checkbox  = JHTML::_('grid.id', ++$i, $row->id);
    $published = JHTML::_('grid.published', $row, $i);
    ?>
    <tr class="row<?php echo $i % 2; ?>">
        <td><?php echo $checkbox; ?></td>
        <td><?php echo $row->name; ?></td>
        <td align="center"><?php echo $published; ?></td>
    </tr>
<?php endforeach; ?>
```

If $revues were to contain two objects named Item 1 and Item 2, of which only the first object is published, the resulting table would look like this:

Not all of the grid types are used for data item elements. The grid.sort and grid.order types are used to render table column headings. The grid.state type is used to display an item state selection box: All, Published, Unpublished and, optionally, Archived and Trashed.

The grid types include:

- access: Generates an access group text link
- checkedOut: Generates a selectable checkbox or a small padlock image
- id: Generates a selectable checkbox
- order: Outputs a clickable image for every orderable column
- published: Outputs a clickable image that toggles between published and unpublished
- sort: Outputs a sortable heading for a grid/table column
- state: Outputs a drop-down selection box called filter_state

image

We use the image types to perform a form of image overriding by determining whether a template image is present before using a system default image. We will use image.site to illustrate, using an image named edit.png:

```php
echo JHTML::_('image.site', 'edit.png');
```

This will output an image tag for the image named edit.png. The image will be located in the currently selected template's /images folder. If edit.png is not found in the /images folder, then the /images/M_images/edit.png file will be used.
We can change the default directories using the $directory and $param_directory parameters. There are two image types, image.administrator and image.site:

- administrator: Loads an image from the backend template's image directory, or the default image
- site: Loads an image from the frontend template's image directory, or the default image
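As a hedged sketch of overriding those defaults (the directory paths are illustrative assumptions, and the exact parameter order follows the $directory and $param_directory parameters described above rather than a verified signature):

```php
// sketch only: look for edit.png in a custom template image directory,
// falling back to a custom default directory if it is not found
echo JHTML::_('image.site', 'edit.png',
    '/templates/mytemplate/images/',  // $directory (assumption)
    '/images/M_images/'               // $param_directory (assumption)
);
```

Consult the JHTMLImage class source for the authoritative parameter list before relying on this in a component.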