
How-To Tutorials - Web Development


Creating a website with Artisteer

Packt
30 Apr 2013
5 min read
Layout

The first thing we should set up while designing a new website is its width. If you are interested in creating web pages, you probably have a large widescreen monitor with a good resolution, but we have to remember that not all of your visitors will have such good hardware. All the templates generated by Artisteer are centered, and almost all modern browsers let you freely zoom the page. It is far better to let some of your visitors enlarge the site than to make the rest of them use the horizontal scroll bar while reading.

The resolution you choose will depend on the target audience of your site. Private computers usually have better parameters than the typical PCs used for office work in companies, so if you design a site that you know will be viewed mostly by private individuals, you can choose a slightly wider layout than you might for a typical business site. But you cannot forget that many nonbusiness websites, such as community sites, are often accessed from offices.

So what is the answer? In my opinion, a layout with a width of 1,000 pixels is still a good choice in most cases. Such a width ensures that the site will display correctly on a fairly old, but still commonly used, non-widescreen 17'' monitor. (The typical resolution for this hardware is 1,024 x 768, and such a layout will fill the whole screen.) As more and more users now have computers equipped with far better screens, you can consider increasing the width slightly, to, for example, 1,150 pixels. Remember that not every user will visit your site from a desktop: many laptops, and especially netbooks and tablets, don't have wide screens either. Also remember that the width of the page must be a little lower than the total resolution of the screen, because you should reserve some space for the vertical scrollbar. We are going to set the width of our project, traditionally, to 1,000 pixels.
To do this, click on the Layout tab on the ribbon, and then on the Sheet Width button. Choose 1000 pixels from the available options on the list. The Sheet Options window is divided into two areas: on the left you can choose from values expressed in pixels, and on the right, as a percentage. A percentage value means that the page doesn't have a fixed width; it changes according to the parameters of the screen it is displayed on (according to the chosen percentage value). Designing layouts with the width defined as a percentage might seem to be a great idea, and indeed, this technique, when properly used, can lead to great results. But you have to remember that in such a case, all page elements have to be similarly prepared in order to adapt to the dynamically changing width of the site. It is far simpler to achieve good results with a layout of fixed width (expressed in pixels).

It is a common rule while working with Artisteer that after clicking on a button on the ribbon, you get a list containing the most commonly used standard values. If you need a custom value, however, you can click on the button located at the bottom of the list to open a window where you can freely choose the required value. For example, while choosing the width of a layout, clicking on the More Sheet Widths... button (located just under the list) leads to a window where you can set the required width with an accuracy of 1 pixel. We can set the required value in three ways:

- We can click on the up and down arrows located on the right side of the field.
- We can move the mouse cursor over the field and use the slider that appears.
- We can click on the field; the text cursor will appear, and we can type the required value using the keyboard. For me, this is the most comfortable way, especially since the slider's minimal step is more than 1.
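The difference between a fixed width and a percentage width can be sketched as a small function. This is a hypothetical illustration, not Artisteer code: it resolves a sheet-width setting against a visitor's viewport width.

```javascript
// Hypothetical sketch: how a fixed width differs from a percentage width
// when resolved against a visitor's viewport. Not part of Artisteer.
function resolveSheetWidth(setting, viewportWidth) {
  if (setting.endsWith('%')) {
    // Percentage widths adapt to the screen they are displayed on.
    var percent = parseFloat(setting) / 100;
    return Math.round(viewportWidth * percent);
  }
  // Fixed widths render the same everywhere; the page is simply centered.
  return parseInt(setting, 10);
}

// A 1,000-pixel sheet fills a 1024-wide screen almost completely,
// leaving a little room for the vertical scrollbar:
console.log(resolveSheetWidth('1000px', 1024)); // 1000
console.log(resolveSheetWidth('1000px', 1920)); // 1000
console.log(resolveSheetWidth('80%', 1920));    // 1536
```

This is why fixed layouts are simpler to get right: the same pixel value is used on every screen, while a percentage layout forces every inner element to cope with a changing width.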
Panel mode versus windows mode

If you look carefully at the displayed windows, in the bottom-right corner you will see a panel mode button. This button switches Artisteer's interface between panel mode and windows mode. In windows mode, the advanced settings are displayed in windows; in panel mode, they are displayed on a side panel located on the right side of Artisteer's window. If you are using a wide screen, you may find the panel mode more comfortable. Its advantage is that the side panel doesn't cover any part of your project, so you have a better view of the changes. The change is persistent: once you switch to panel mode, all the advanced settings will be displayed in the right panel until you decide to go back to windows mode. To switch back, click on the icon located in the top-right corner of the side panel (just next to the x button that closes the panel).

Summary

This article has covered some features exclusive to Artisteer. It has also briefly explained the process of creating templates for websites using Artisteer.

Further resources on this subject:

- Creating and Using Templates with Cacti 0.8
- Using Templates to Display Channel Content in ExpressionEngine
- Working with Templates in Apache Roller 4.0


Distributed transaction using WCF

Packt
17 Jun 2010
12 min read
Creating the DistNorthwind solution

In this article, we will create a new solution based on the LINQNorthwind solution. We will copy all of the source code from the LINQNorthwind directory to a new directory and then customize it to suit our needs. The steps here are very similar to those in the previous chapter, when we created the LINQNorthwind solution; please refer to the previous chapter for diagrams. Follow these steps to create the new solution:

1. Create a new directory named DistNorthwind under the existing C:\SOAwithWCFandLINQ\Projects directory.
2. Copy all of the files under the C:\SOAwithWCFandLINQ\Projects\LINQNorthwind directory to the C:\SOAwithWCFandLINQ\Projects\DistNorthwind directory.
3. Remove the folder LINQNorthwindClient. We will create a new client for this solution.
4. Change all the folder names under the new folder, DistNorthwind, from LINQNorthwindxxx to DistNorthwindxxx.
5. Rename the solution files from LINQNorthwind.sln to DistNorthwind.sln, and from LINQNorthwind.suo to DistNorthwind.suo.

Now we have the file structure ready for the new solution, but all the file contents and the solution structure are still for the old solution. Next we need to change them to work for the new solution. We will first change all the related WCF service files. Once we have the service up and running, we will create a new client to test this new service.

6. Start Visual Studio 2010 and open the solution C:\SOAWithWCFandLINQ\Projects\DistNorthwind\DistNorthwind.sln.
7. Click on the OK button to close the warning dialog saying that some projects were not loaded correctly.
8. From Solution Explorer, remove all five projects (they should all be unavailable).
9. Right-click on the solution item and select Add | Existing Projects… to add these four projects to the solution.
Note that these are the projects under the DistNorthwind folder, not under the LINQNorthwind folder: LINQNorthwindEntities.csproj, LINQNorthwindDAL.csproj, LINQNorthwindLogic.csproj, and LINQNorthwindService.csproj.

In Solution Explorer, change all four projects' names from LINQNorthwindxxx to DistNorthwindxxx. Then right-click on each project, select Properties (or select menu Project | DistNorthwindxxx Properties), change the Assembly name from LINQNorthwindxxx to DistNorthwindxxx, and change the Default namespace from MyWCFServices.LINQNorthwindxxx to MyWCFServices.DistNorthwindxxx.

Open the following files and change the word LINQNorthwind to DistNorthwind wherever it occurs: ProductEntity.cs, ProductDAO.cs, ProductLogic.cs, IProductService.cs, and ProductService.cs. Open the file app.config in the DistNorthwindService project and change the word LINQNorthwind to DistNorthwind in this file as well. The screenshot below shows the final structure of the new solution, DistNorthwind.

Now we have finished modifying the service projects. If you build the solution now, you should see no errors. You can set the service project as the startup project and run the program.

Hosting the WCF service in IIS

The WCF service is currently hosted within WCF Service Host. We had to start the WCF Service Host before we ran our test client; not only do you have to start the WCF Service Host, you also have to start the WCF Test Client and leave it open. This is not ideal. In addition, we will add another service later in this article to test distributed transaction support with two databases, and it is not easy to host two services with one WCF Service Host. So, in this article, we will first decouple our WCF service from Visual Studio and host it in IIS.

You can follow these steps to host this WCF service in IIS: In Windows Explorer, go to the directory C:\SOAWithWCFandLINQ\Projects\DistNorthwind\DistNorthwindService.
Within this folder, create a new text file, ProductService.svc, containing the following single line of code:

    <%@ ServiceHost Service="MyWCFServices.DistNorthwindService.ProductService" %>

Again within this folder, copy the file App.config to Web.config and remove the following lines from the new Web.config file:

    <host>
      <baseAddresses>
        <add baseAddress="http://localhost:8080/Design_Time_Addresses/MyWCFServices/DistNorthwindService/ProductService/" />
      </baseAddresses>
    </host>

Now open IIS Manager, add a new application, DistNorthwindService, and set its physical path to C:\SOAWithWCFandLINQ\Projects\DistNorthwind\DistNorthwindService. If you choose to use the default application pool, DefaultAppPool, make sure it is a .NET 4.0 application pool. If you are using Windows XP, you can create a new virtual directory, DistNorthwindService, set its local path to the above directory, and make sure its ASP.NET version is 4.0.

From Visual Studio, in Solution Explorer, right-click on the project item DistNorthwindService, select Properties, click on the Build Events tab, and enter the following command in the Post-build event command line box:

    copy .\*.* ..\

With this post-build event command line, whenever DistNorthwindService is rebuilt, the service binary files will be copied to the C:\SOAWithWCFandLINQ\Projects\DistNorthwind\DistNorthwindService\bin directory so that the service hosted in IIS is always up to date. From Visual Studio, in Solution Explorer, right-click on the project item DistNorthwindService and select Rebuild. Now you have finished setting up the service to be hosted in IIS.
Open Internet Explorer, go to the following address, and you should see the ProductService description in the browser: http://localhost/DistNorthwindService/ProductService.svc

Testing the transaction behavior of the WCF service

Before explaining how to enhance this WCF service to support distributed transactions, we will first confirm that the existing WCF service doesn't support them. In this article, we will test the following scenarios:

- Create a WPF client to call the service twice in one method. The first service call should succeed and the second should fail. Verify that the update made by the first service call has been committed to the database, which means that the WCF service does not support distributed transactions.
- Wrap the two service calls in one TransactionScope and redo the test. Verify that the update made by the first service call has still been committed to the database, which means that the WCF service does not support distributed transactions even if both service calls are within one transaction scope.
- Add support for a second database to the WCF service. Modify the client to update both databases in one method. The first update should succeed and the second should fail. Verify that the first update has been committed to the database, which means that the WCF service does not support distributed transactions with multiple databases.

Creating a client to call the WCF service sequentially

The first scenario to test is that, within one method of the client application, two service calls are made and one of them fails. We then verify whether the update made by the successful service call has been committed to the database. If it has, the two service calls are not within a single atomic transaction, which indicates that the WCF service doesn't support distributed transactions.
You can follow these steps to create a WPF client for this test case: In Solution Explorer, right-click on the solution item and select Add | New Project… from the context menu. Select Visual C# | WPF Application as the template, enter DistNorthwindWPF as the Name, and click on the OK button to create the new client project.

Now the new test client has been created and added to the solution. Let's follow these steps to customize it so that we can call ProductService twice within one method and test the distributed transaction support of this WCF service. On the WPF MainWindow designer surface, add the following controls (you can double-click on the MainWindow.xaml item to open this window; make sure you are in design mode, not XAML mode):

- A label with the Content Product ID
- Two textboxes named txtProductID1 and txtProductID2
- A button named btnGetProduct with the Content Get Product Details
- A separator to separate the above controls from the ones below
- Two labels with the Content Product1 Details and Product2 Details
- Two textboxes named txtProduct1Details and txtProduct2Details, with the following properties: AcceptsReturn checked, Background Beige, HorizontalScrollbarVisibility Auto, VerticalScrollbarVisibility Auto, IsReadOnly checked
- A separator
- A label with the Content New Price
- Two textboxes named txtNewPrice1 and txtNewPrice2
- A button named btnUpdatePrice with the Content Update Price
- A separator
- Two labels with the Content Update1 Results and Update2 Results
- Two textboxes named txtUpdate1Results and txtUpdate2Results, with the same properties as txtProduct1Details and txtProduct2Details

Your MainWindow design surface should look like the following screenshot. In Solution Explorer, right-click on the DistNorthwindWPF project item, select Add Service Reference… and add a service
reference of the product service to the project. The namespace of this service reference should be ProductServiceProxy, and the URL of the product service should be like this: http://localhost/DistNorthwindService/ProductService.svc

On the MainWindow.xaml designer surface, double-click on the Get Product Details button to create an event handler for this button. In the MainWindow.xaml.cs file, add the following using statement:

    using DistNorthwindWPF.ProductServiceProxy;

Again in the MainWindow.xaml.cs file, add the following two class members:

    Product product1, product2;

Now add the following method to the MainWindow.xaml.cs file:

    private string GetProduct(TextBox txtProductID, ref Product product)
    {
        string result = "";
        try
        {
            int productID = Int32.Parse(txtProductID.Text.ToString());
            ProductServiceClient client = new ProductServiceClient();
            product = client.GetProduct(productID);
            StringBuilder sb = new StringBuilder();
            sb.Append("ProductID:" + product.ProductID.ToString() + "\n");
            sb.Append("ProductName:" + product.ProductName + "\n");
            sb.Append("UnitPrice:" + product.UnitPrice.ToString() + "\n");
            sb.Append("RowVersion:");
            foreach (var x in product.RowVersion.AsEnumerable())
            {
                sb.Append(x.ToString());
                sb.Append(" ");
            }
            result = sb.ToString();
        }
        catch (Exception ex)
        {
            result = "Exception: " + ex.Message.ToString();
        }
        return result;
    }

This method calls the product service to retrieve a product from the database, formats the product details into a string, and returns the string, which will be displayed on the screen. The product object is also returned so that later on we can reuse it to update the price of the product.
Inside the event handler of the Get Product Details button, add the following two lines of code to get and display the product details:

    txtProduct1Details.Text = GetProduct(txtProductID1, ref product1);
    txtProduct2Details.Text = GetProduct(txtProductID2, ref product2);

Now we have finished adding the code to retrieve products from the database through the Product WCF service. Set DistNorthwindWPF as the startup project, press Ctrl + F5 to start the WPF test client, enter 30 and 31 as the product IDs, and then click on the Get Product Details button. You should get a window like the following image.

To update the prices of these two products, follow these steps to add the code to the project: On the MainWindow.xaml design surface, double-click on the Update Price button to add an event handler for this button, then add the following method to the MainWindow.xaml.cs file:

    private string UpdatePrice(TextBox txtNewPrice, ref Product product, ref bool updateResult)
    {
        string result = "";
        try
        {
            product.UnitPrice = Decimal.Parse(txtNewPrice.Text.ToString());
            ProductServiceClient client = new ProductServiceClient();
            updateResult = client.UpdateProduct(ref product);
            StringBuilder sb = new StringBuilder();
            if (updateResult == true)
            {
                sb.Append("Price updated to ");
                sb.Append(txtNewPrice.Text.ToString());
                sb.Append("\n");
                sb.Append("Update result:");
                sb.Append(updateResult.ToString());
                sb.Append("\n");
                sb.Append("New RowVersion:");
            }
            else
            {
                sb.Append("Price not updated to ");
                sb.Append(txtNewPrice.Text.ToString());
                sb.Append("\n");
                sb.Append("Update result:");
                sb.Append(updateResult.ToString());
                sb.Append("\n");
                sb.Append("Old RowVersion:");
            }
            foreach (var x in product.RowVersion.AsEnumerable())
            {
                sb.Append(x.ToString());
                sb.Append(" ");
            }
            result = sb.ToString();
        }
        catch (Exception ex)
        {
            result = "Exception: " + ex.Message;
        }
        return result;
    }

This method calls the product service to update the price of a product in the database.
The update result is formatted and returned so that later on we can display it. The updated product object, with its new RowVersion, is also returned so that later on we can update the price of the same product again and again. Inside the event handler of the Update Price button, add the following code to update the product prices:

    if (product1 == null)
    {
        txtUpdate1Results.Text = "Get product details first";
    }
    else if (product2 == null)
    {
        txtUpdate2Results.Text = "Get product details first";
    }
    else
    {
        bool update1Result = false, update2Result = false;
        txtUpdate1Results.Text = UpdatePrice(txtNewPrice1, ref product1, ref update1Result);
        txtUpdate2Results.Text = UpdatePrice(txtNewPrice2, ref product2, ref update2Result);
    }

Testing the sequential calls to the WCF service

Let's run the program now to test the distributed transaction support of the WCF service. We will first update two products with two valid prices to make sure our code works in the normal case. Then we will update one product with a valid price and another with an invalid price, and verify that the update with the valid price has been committed to the database regardless of the failure of the other update. Follow these steps for this test: Press Ctrl + F5 to start the program. Enter 30 and 31 as the product IDs in the top two textboxes and click on the Get Product Details button to retrieve the two products. Note that the prices for these two products are 25.89 and 12.5, respectively. Enter 26.89 and 13.5 as the new prices in the middle two textboxes and click on the Update Price button; the update results are True for both updates, as shown in the following screenshot. Now enter 27.89 and -14.5 as the new prices in the middle two textboxes and click on the Update Price button again. This time the update result for product 30 is still True, but the result of the second update is False.
Click on the Get Product Details button again to refresh the product prices so that we can verify the update results. We know that the second service call should fail, so the second update should not be committed to the database; from the test result we can see this is true (the second product's price didn't change). However, from the test result we can also see that the update made by the first service call has been committed to the database (the first product's price has changed). This means that the first call to the service is not rolled back even when a subsequent service call fails. Each service call is therefore in a separate, standalone transaction; in other words, the two sequential service calls are not within one distributed transaction.
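The behavior observed in this test can be sketched abstractly. The following is a conceptual illustration in plain JavaScript, not WCF code: each "service call" commits its change immediately, so a failure in the second call cannot roll back the first.

```javascript
// Conceptual sketch (plain JavaScript, not WCF): without a shared
// distributed transaction, each call commits independently, so an
// earlier success survives a later failure.
var database = { 30: 25.89, 31: 12.5 }; // product prices from the test

function updatePrice(productID, newPrice) {
  if (newPrice < 0) {
    throw new Error('Invalid price'); // the second call fails here
  }
  database[productID] = newPrice;     // committed immediately, no rollback
  return true;
}

try {
  updatePrice(30, 27.89); // succeeds and commits
  updatePrice(31, -14.5); // fails
} catch (e) {
  // No distributed transaction: product 30's update is NOT rolled back.
}

console.log(database[30]); // 27.89 -- first update persisted
console.log(database[31]); // 12.5  -- second update never applied
```

Wrapping both calls in a real atomic transaction would instead restore product 30's original price when the second call failed; that is exactly what the distributed transaction support discussed in this article is meant to provide.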


Activating the BuddyPress Default Theme and Setting up and Configuring BuddyPress

Packt
30 Jul 2010
4 min read
After installing and activating BuddyPress, an alert will appear at the top of your screen telling you that the functionality offered by BuddyPress isn't available on your website just yet. For these features to be made available to your members, you will need to install a BuddyPress-compatible theme.

Activating this theme is a two-step process. First, navigate to Appearance | Themes and click Activate for the BuddyPress Default theme. Next, click on SuperAdmin | Themes, tick the radio button labeled Yes for the BuddyPress Default theme, and click Apply Changes. Now the BuddyPress Default theme will be in use on your site and available to your users.

Setting up and configuring BuddyPress

After activating the plugin, you might have noticed that a new BuddyPress menu appeared. Click on BuddyPress | General Settings to access the BuddyPress Settings screen.

BuddyPress Settings

There are only two settings on this screen that you need to concern yourself with; the rest can be left at their default settings. The first option that you need to alter is located at the top of the screen and is labeled Base profile group name. As you can see, this is currently set to Base. This text appears in a couple of places. First, when your users go to My Account | Profile | Edit Profile, they will see Editing 'Base' Profile Group. The second place this text can be found is the BuddyPress Profile Field Setup screen, where it is used as the name of the default field group. In both instances, something less enigmatic would be beneficial, so think of a descriptive label that would be useful in both situations and enter it into the Base profile group name textbox.

You will find the other setting that you need to configure at the bottom of the screen, so scroll down until you see the Default User Avatar.
In this area, select the type of avatar that you would like to display for users without a custom avatar, and then click Save Settings.

Component Setup

Now, click on BuddyPress | Component Setup to be taken to the BuddyPress Component Setup screen. By default, all of the components found on this screen are enabled. How you choose to configure the majority of these settings will depend upon your preferences and the features that you would like to make available on your website. Note, however, that both the bbPress Forums and Groups components should remain enabled if you plan on integrating bbPress into your community portal. The Extended Profiles component should also be left Enabled, so that your members can have more detailed profiles attached to their accounts.

Profile Field Setup

Skip the Forums Setup screen for now, and instead click on Profile Field Setup. On this screen, there are three actions that you can take: you can add additional field groups, add new fields, and choose the location of each field within its group. At present, your installation of BuddyPress has one default field placed within one default field group, which now bears the name that you gave it when you changed it from Base on the BuddyPress Settings screen. Any fields located in this default field group will appear on the signup screen under the heading Profile Details. This field group also appears on the screen that your users see when they go to edit their profile. Any additional field groups that you add will only be visible to users when they edit their profile.

As things stand, your users will have a profile that consists of nothing more than their name. Since that doesn't make for much of a profile, you need to add some additional field groups and fields. With these new groups and fields in place, your members will be able to build a robust profile page.


Working with Events in MooTools: Part 1

Packt
07 Jan 2010
9 min read
We have a lot of things to cover in this article, so hold on to your seats and enjoy the ride!

What are Events, Exactly?

Events are simply things that happen in our web pages. MooTools supports all HTML 4.01 event attributes, like onclick and onmouseout, but the framework refers to them without the "on" prefix (click instead of onclick, mouseout instead of onmouseout). What's neat about MooTools is that it not only extends the HTML 4.01 event attributes with a few of its own, but also ensures that methods and functions dealing with web page events work across all web browsers, by providing us with a solid, built-in object called Event. Event is part of the Native component of MooTools, and is also referred to as the Event hash. You can read the official W3C specification on events in the HTML 4.01 Specification, section 18.2.3, under Intrinsic Events: http://www.w3.org/TR/html401/interact/scripts.html#h-18.2.3

We'll go over all of the available event attributes in MooTools so you can learn what we can listen to. There are several events that we can detect or "listen to". We can, for the sake of discussion, divide them into five groups: window events, form events, keyboard events, mouse events, and MooTools custom events.

Window Events

Window events refer to activities that occur in the background. There are only two window events.

onload / load: This event occurs when the window and images on the page have fully loaded and/or when all of the iframes in the page have loaded. It can be used to monitor when the web page has fully loaded (such as when you want to know whether all images have been downloaded).

onunload / unload: This event happens when a window or an iframe is removed from the web page. It has limited use.

Form events

There are events that occur within form elements (such as <input> elements), and we'll refer to these as form events.
For example, the onfocus event is triggered when the user clicks on an input field (you'll see this in action in this article), effectively focusing on that particular input field. Some of these events apply even to non-form elements.

onblur / blur: This event occurs when an element loses focus, either because the user has clicked out of it or because the user used the Tab key to move away from it. This is helpful for monitoring the instant the user leaves a particular element.

onchange / change: This event happens when an element loses focus after its value has changed. This is helpful for knowing when the user has typed in an input text field or textarea, or when a user selects a different option in a select drop-down element.

onfocus / focus: This event is the opposite of the blur event: it is triggered when the user focuses on an element. This is useful for watching when the user highlights a form field or navigates to it using the Tab key.

onreset / reset: This event only applies to form elements. It is triggered when the form has been reset to its default values.

onselect / select: This event happens when the user highlights (selects) text in a text field.

onsubmit / submit: This event is only for form elements. It occurs when the user submits a web form.

Keyboard events

There are events that happen when a user presses a key on a keyboard input device; let's call these keyboard events. For example, the onkeypress event is triggered when you press any key on your keyboard.

onkeydown / keydown: This event occurs when the user holds down a keyboard key.

onkeypress / keypress: This event is triggered whenever the user presses a keyboard key.

onkeyup / keyup: This event happens when the user releases a key.
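As the lists above show, the mapping from HTML attribute to MooTools event name is mechanical: drop the "on" prefix. A throwaway helper (not a MooTools API, just an illustration of the rule) makes this explicit:

```javascript
// Illustrative helper (not part of MooTools): derive the MooTools event
// name from an HTML 4.01 event attribute by dropping the "on" prefix.
function toMooToolsEventName(attribute) {
  return attribute.replace(/^on/, '');
}

console.log(toMooToolsEventName('onclick'));    // "click"
console.log(toMooToolsEventName('onkeypress')); // "keypress"
console.log(toMooToolsEventName('onblur'));     // "blur"
```

So wherever HTML uses onclick="…", the MooTools equivalent listens for 'click'.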
Mouse events

There are several HTML event properties that deal with activities related to the mouse. Clicking, moving, double-clicking, and hovering are all mouse events.

onclick / click: This event occurs whenever the user uses the mouse button to click on an element.

ondblclick / dblclick: This event occurs whenever the user double-clicks on an element.

onmousedown / mousedown: This event occurs when the mouse button is held down.

onmouseup / mouseup: This event occurs when the mouse button is released.

onmousemove / mousemove: This event occurs when the mouse is moved.

onmouseout / mouseout: This event occurs when the mouse pointer leaves the target element.

onmouseover / mouseover: This event occurs when the mouse pointer enters the target element.

MooTools Custom Mouse Events

MooTools supplies us with three custom events that extend the standard mouse events.

mouseenter: This event is triggered when the user's mouse pointer enters an element, but it does not fire again when the mouse moves over a child element (unlike mouseover). This is useful for detecting the mouseover event only once in nested element structures, such as <li><a>item</a></li>. If we were to use the mouseover event, it would be triggered twice: once for <li> and once again for <a>.

mouseleave: This event works similarly to mouseenter in that it is triggered only once when the mouse pointer exits the target element, unlike the mouseout event, which can be triggered more than once for nested element structures.

mousewheel: This event is triggered when the scroll wheel on a mouse is used (available on most modern mice, usually situated between the left and right mouse buttons).

Adding Event Listeners

We can attach event listeners to elements on a web page using the addEvent and addEvents methods.
By doing so, we're able to find out whenever that event happens, as well as execute a function to react to it. Adding event listeners is the basis of interactivity, and it is where JavaScript (and subsequently MooTools) gained its popularity. Imagine being able to perform an operation whenever a user hovers over an image, clicks on a link, or submits a form: the possibilities are endless.

Adding a single event listener

The addEvent method allows you to add one event listener to an element and follows the format:

    $(yourelement).addEvent(event, function(){});

For example, in the following code block, we attach a click event listener to all <a> elements. When the user clicks on any hyperlink on our web page, it runs a function that opens an alert dialog box that says You clicked on a hyperlink.

    $$('a').addEvent('click', function(){
      alert('You clicked on a hyperlink');
    });

In a moment, we'll create a simple web form that highlights the input field the user is focused on.

Time for action – Highlighting focused fields of web forms

Let's start with our web form's HTML. We'll use <input> and <textarea> tags that will hold our user's information, as well as provide a means to submit the web form (input type="button"). We use the <label> tag to indicate to the user what information to put inside each form field.

    <form action="" method="get">
      <p><label for="Name">Name: </label>
        <input name="Name" type="text"></p>
      <p><label for="Email">Email: </label>
        <input name="Email" type="text"></p>
      <p><label for="Comment">Comment: </label>
        <textarea name="Comment" cols="" rows=""></textarea></p>
      <p><input name="Submit" type="button" value="Submit"></p>
    </form>

With the above markup, this is what our form looks like:

Our web form is a bit dull, so how about we spruce it up a bit with some CSS? Read the comments to gain insight into some of the more important CSS properties to take note of.
/* Center the fields using text-align:center */
form {
  width:300px;
  border:1px solid #b1b8c2;
  background-color:#e3e9f2;
  text-align:center;
  padding:25px 0;
}
label {
  display:block;
  font:12px normal Verdana, Geneva, sans-serif;
}
/* Give the input and textarea fields a 1px border */
input, textarea {
  width:250px;
  border:1px solid #5c616a;
}
textarea {
  height:100px;
}
p {
  text-align:left;
  display:block;
  width:250px;
  overflow:auto;
  padding:0;
  margin:5px 0;
}
/* We will give the field that is currently focused on the .focused class,
which gives it a distinct thicker border and background color compared to
the other input fields */
.focused {
  border:3px solid #b1b8c2;
  background-color: #e8e3e3;
}

With just a few styles, our simple web form has transformed into a more attractive form. Let us move on to the JavaScript. We use the addEvent method to add an event listener for the form event, onfocus. When the user focuses on a particular field, we run a function that adds the .focused CSS class to that field, which we declared as a style rule in step 2. We'll also remove the .focused class from the other fields on the web page.

window.addEvent('domready', function(){
  var els = $$('input, textarea');
  els.addEvent('focus', function(){
    els.removeClass('focused');
    this.addClass('focused');
  });
});

Now, when you focus on a form field, it will be highlighted with a thick blue border and a lighter blue background. Users who use the Tab key to navigate between form fields will appreciate this feature since they'll clearly see which field is active.
What just happened?
In the previous example, we created a simple and interactive web form that highlights the field the user has active. We did this by using the addEvent method, adding an event listener for the focus form event. When the user focuses on a particular input field, we ran a function that adds the .focused CSS class, which highlights the focused <input> or <textarea> field with a thick blue border and a light blue background.
By highlighting active fields in a web form, we have just improved our form's usability by providing visual feedback about which field the user is currently on.
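The class bookkeeping at the heart of this example can be followed outside the browser. The sketch below mimics MooTools' addClass/removeClass behavior with plain objects standing in for DOM elements; the helper functions and field objects are invented for illustration and are not MooTools APIs.

```javascript
// Stand-ins for MooTools' addClass/removeClass on plain objects.
function addClass(el, cls)    { el.classes.add(cls); }
function removeClass(el, cls) { el.classes.delete(cls); }

function makeField(name) { return { name: name, classes: new Set() }; }

var fields = [makeField('Name'), makeField('Email'), makeField('Comment')];

// The focus handler's logic: clear .focused everywhere,
// then mark only the active field.
function onFocus(active) {
  fields.forEach(function (f) { removeClass(f, 'focused'); });
  addClass(active, 'focused');
}

onFocus(fields[1]);
console.log(fields.map(function (f) { return f.classes.has('focused'); }));
// → [ false, true, false ]
```

Focusing a second field simply repeats the same clear-then-mark cycle, so at most one field ever carries the .focused class.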

Creating Site Accesses with eZ Publish 4
Packt
19 Nov 2009
6 min read
What is the siteaccess system?
To override eZ Publish's default configuration, we need to create a collection of configuration settings called a siteaccess. The role of a siteaccess is to indicate to eZ Publish which database, design, and var directory should be used for a particular context. With siteaccesses it is possible to use the same content with different designs (for example, when creating a mobile version of our site) or do the opposite (for example, managing a multilingual site where the template doesn't change but the content does). It's also possible to create an administration siteaccess, where we can manage any kind of content, such as users, media files and, of course, the articles, or a frontend siteaccess, that is, the website, where we can only view the published public content. A typical eZ Publish site consists of two siteaccesses: a public interface for visitors and a restricted interface for editors. In this case, both siteaccesses use the same content, but different designs. Whereas the administration siteaccess would most likely use the built-in administration design, the public siteaccess would probably use a custom design. The following illustration, taken from the official eZ Publish documentation, shows this scenario:
Usually, in big projects it is also useful to have two additional siteaccesses: a staging siteaccess and a development siteaccess. The first is used in a staging environment to make frequent deployments of modifications that can be tested by the customer (in this case, the siteaccess uses a different database but the same design as the public and admin siteaccesses). The second one, instead, is used by developers on their local machines (this siteaccess uses a local database but, once again, the same design as the public and admin siteaccesses). A single eZ Publish installation can host a virtually unlimited number of sites by simply adding new siteaccesses, designs, and databases.
Siteaccess folder structure
The configuration settings for siteaccesses are located inside dedicated subfolders within the /settings/siteaccess folder. The name of the subfolder is the actual name of the siteaccess. It is very important to remember that a siteaccess name can only contain letters, numbers, and underscores. The following illustration shows a setup with two siteaccesses: admin and public.
When a siteaccess is in use, eZ Publish reads the configuration files in the following sequence:
1. Default configuration settings: /settings/*.ini
2. Siteaccess settings: /settings/siteaccess/[name_of_siteaccess]/*.ini.append.php
3. Global overrides: /settings/override/*.ini.append.php
eZ Publish will first read the default configuration settings. Then, it will determine which siteaccess to use based on the rules that are defined in the global override for site.ini, /settings/override/site.ini.append.php. When it knows which siteaccess has to be used, it will go into the corresponding siteaccess folder and read the configuration files that belong to that siteaccess. The settings of the siteaccess will override the default configuration settings. For example, if a siteaccess uses a database called packtmediaproject_test, the system will find this and automatically use the specified database when an incoming request is processed. Finally, eZ Publish reads the configuration files in the global override directory. The settings in the global override directory will override all other settings. So, if a database called packtmediaproject is specified in the global override for site.ini, then eZ Publish will attempt to use that database regardless of what is specified in the siteaccess settings. If a setting is not overridden either by the siteaccess or from within a global override, then the default setting will be used. The default settings are set by the .ini files located in the /settings directory.
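The three-layer override order can be sketched as a simple merge, where later layers win per setting. The function and the setting values below are illustrative, not eZ Publish code (real settings live in .ini files, not JavaScript objects):

```javascript
// Layered resolution: defaults < siteaccess < global override.
function resolveSettings(defaults, siteaccess, globalOverride) {
  return Object.assign({}, defaults, siteaccess, globalOverride);
}

var defaults       = { Database: 'ezdb', Design: 'standard' };
var siteaccess     = { Database: 'packtmediaproject_test' };
var globalOverride = { Database: 'packtmediaproject' };

// The global override wins for Database; Design falls through to the default.
console.log(resolveSettings(defaults, siteaccess, globalOverride));
// → { Database: 'packtmediaproject', Design: 'standard' }
```

Removing the global override layer (passing an empty object) would leave the siteaccess database, packtmediaproject_test, in effect.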
The following figure illustrates how the system reads the configuration files, using the site.ini file as an example.
Creating a siteaccess for dev, staging, and production environments
Once we have finished installing eZ Publish, we'll find a folder called settings/siteaccess, with the default siteaccesses automatically configured. In our case we'll find these folders:
- admin: This folder usually isn't used as a siteaccess, but it contains a standard configuration file that can be used to set up the administration panel
- setup: This folder contains all of the configuration files that are used during the installation process
- ezwebin_site: This is where the main design is imported directly from the eZ.no site for the eZ Webin package
- ita, eng, fre: Last but not least, the ita, eng, and fre folders hold the configuration files used by the site to enable internationalization and localization
The ezwebin_site_admin folder is created by the webmin site package, and contains all of the configuration files for the administration panel.
Enterprise siteaccess schema
In an enterprise development process, it is very important to have four more siteaccesses:
- dev
- dev_panel
- staging
- staging_panel
The siteaccesses dev and dev_panel will be used as a development playground installation, which can be used by the development team members with their own configuration parameters, such as the database connection, paths, and debug file. This will help them to test different configuration parameters or extensions without impacting the production site. The siteaccesses staging and staging_panel will be used as a staging arena that can be used by a customer to evaluate new functionality before it is released to production. Usually, the staging installation is installed on a clone of the production server, to make sure that everything works in the same way.
In our case, we will work on the same server to better understand how to create the different siteaccesses. All siteaccesses will have some configuration files in common, and sometimes these have to assign the same values to the parameters specified inside them. For example, if you need to create a new language siteaccess, you'll need to copy the same module configuration files to be sure that they will work in the same way for all of the languages. In this case, it is useful to create a symbolic link from one siteaccess to another. If you don't know what a Linux symbolic link is, you can think of it as a virtual pointer to a real file, like a Windows XP shortcut.
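As a sketch of that symbolic-link trick, the commands below share one module configuration file between the eng and ita siteaccesses; the file name is illustrative, and the commands assume you are in the eZ Publish root.

```shell
# Create the two language siteaccess folders (illustrative layout).
mkdir -p settings/siteaccess/eng settings/siteaccess/ita

# The real configuration lives in eng...
echo "# shared module settings" > settings/siteaccess/eng/module.ini.append.php

# ...and ita points at it, so both siteaccesses always read the same file.
ln -sf ../eng/module.ini.append.php settings/siteaccess/ita/module.ini.append.php

cat settings/siteaccess/ita/module.ini.append.php   # prints: # shared module settings
```

Editing the eng copy now updates both siteaccesses at once, which is exactly the behavior we want for shared module settings.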

Organizing Jade Projects
Packt
24 Mar 2014
9 min read
(For more resources related to this topic, see here.)
Now that you know how to use all the things that Jade can do, here's when you should use them. Jade is pretty flexible when it comes to organizing projects; the language itself doesn't impose much structure on your project. However, there are some conventions you should follow, as they will typically make your code easier to manage. This article will cover those conventions and best practices.
General best practices
Most of the good practices that are used when writing HTML carry over to Jade. Some of these include the following:
- Using a consistent naming convention for IDs, class names, and (in this case) mixin names and variables
- Adding alt text to images
- Choosing appropriate tags to describe content and page structure
The list goes on, but these are all things you should already be familiar with. So now we're going to discuss some practices that are more Jade-specific.
Keeping logic out of templates
When working with a templating language like Jade that allows you to use advanced logical operations, separation of concerns (SoC) becomes an important practice. In this context, SoC is the separation of business and presentational logic, allowing each part to be developed and updated independently. An easy point to draw the border between business and presentation is where data is passed to the template. Business logic is kept in the main code of your application and passes the data to be presented (as well-formed JSON objects) to your template engine. From there, the presentation layer takes the data and performs whatever logic is needed to make that data into a readable web page. An additional advantage of this separation is that the JSON data can be passed to a template over stdio (to the server-side Jade compiler), or it can be passed over TCP/IP (to be evaluated client-side). Since the template only formats the given data, it doesn't matter where it is rendered, and it can be used on both server and client.
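The shape of that hand-off can be sketched with a hand-rolled check; the schema object below is invented for illustration, and a real project would use a proper schema validator instead:

```javascript
// A minimal contract for the JSON the business layer hands to templates.
var postSchema = {
  required: ['title', 'body'],
  types: { title: 'string', body: 'string' }
};

// Returns true when every required key is present and typed as declared.
function validate(data, schema) {
  return schema.required.every(function (k) { return k in data; }) &&
    Object.keys(schema.types).every(function (k) {
      return !(k in data) || typeof data[k] === schema.types[k];
    });
}

console.log(validate({ title: 'Hello', body: 'First post' }, postSchema)); // → true
console.log(validate({ title: 42 }, postSchema));                          // → false
```

Running such a check in your test suite catches business-layer changes that would silently break the templates.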
For documenting the format of the JSON data, try JSON Schema (http://json-schema.org/). In addition to describing the interface that your presentation layer uses, it can be used in tests to validate the structure of the JSON that your business layer produces.
Inlining
When writing HTML, it is commonly advised that you don't use inline styles or scripts, because they are harder to maintain. This advice still applies to the way you write your Jade. For everything but the smallest one-page projects, tests, and mockups, you should separate your styles and scripts into different files. These files may then be compiled separately and linked to your HTML with style or link tags. Or, you could include them directly in the Jade. Either way, the important part is that you keep them separated from your markup in your source code. However, in your compiled HTML you don't need to worry about keeping inlined styles out. The advice about avoiding inline styles applies only to your source code and is purely for making your codebase easier to manage. In fact, according to Best Practices for Speeding Up Your Web Site (http://developer.yahoo.com/performance/rules.html), it is much better to combine your files to minimize HTTP requests, so inlining at compile time is a really good idea. It's also worth noting that, even though Jade can help you inline scripts and styles during compilation, there are better ways to perform these compile-time optimizations. For example, build tools like AssetGraph (https://github.com/assetgraph/assetgraph) can do all the inlining, minifying, and combining you need, without you needing to put code to do so in your templates.
Minification
We can pass arguments through filters to compilers for things like minifying. This feature is useful for small projects for which you might not want to set up a full build tool. Also, minification does reduce the size of your assets, making it a very easy way to speed up your site.
However, your markup shouldn't really concern itself with details like how the site is minified, so filter arguments aren't the best solution for minifying. Just like inlining, it is much better to do this with a tool like AssetGraph. That way your markup is free of "build instructions".
Removing style-induced redundancy
A lot of redundant markup is added just to make styling easier: we have wrappers for every conceivable part of the page, empty divs and spans, and plenty of other forms of useless markup. The best way to deal with this stuff is to improve your CSS so it isn't reliant on wrappers and the like. Failing that, we can still use mixins to take that redundancy out of the main part of our code and hide it away until we have better CSS to deal with it. For example, consider the following example that uses a repetitive navigation bar:

input#home_nav(type='radio', name='nav', value='home', checked)
label(for='home_nav')
  a(href='#home') home
input#blog_nav(type='radio', name='nav', value='blog')
label(for='blog_nav')
  a(href='#blog') blog
input#portfolio_nav(type='radio', name='nav', value='portfolio')
label(for='portfolio_nav')
  a(href='#portfolio') portfolio
//- ...and so on

Instead of using the preceding code, it can be refactored into a reusable mixin as shown in the following code snippet:

mixin navbar(pages)
  - checked = true
  for page in pages
    input(
      type='radio',
      name='nav',
      value=page,
      id="#{page}_nav",
      checked=checked)
    label(for="#{page}_nav")
      a(href="##{page}") #{page}
    - checked = false

The preceding mixin can then be called later in your markup using the following code:

+navbar(['home', 'blog', 'portfolio'])

Semantic divisions
Sometimes, even though there is no redundancy present, dividing templates into separate mixins and blocks can be a good idea. Not only does it provide encapsulation (which makes debugging easier), but the division represents a logical separation of the different parts of a page.
A common example of this would be dividing a page between the header, footer, sidebar, and main content. These could be combined into one monolithic file, but putting each in a separate block represents their separation, can make the project easier to navigate, and allows each to be extended individually.
Server-side versus client-side rendering
Since Jade can be used on both the client side and the server side, we can choose to move the rendering of templates off the server. However, there are costs and benefits associated with each approach, so the decision must be made depending on the project.
Client-side rendering
Using the Single Page Application (SPA) design, we can do everything but the compilation of the basic HTML structure on the client side. This allows for a static page that loads content from a dynamic backend and passes that content to Jade templates compiled for client-side usage. For example, we could have a simple webapp that, once loaded, fires off an AJAX request to a server running WordPress with a simple JSON API, and displays the posts it gets by passing the JSON to templates. The benefits of this design are that the page itself is static (and therefore easily cacheable), that with the SPA design navigation is much faster (especially if content is preloaded), and that significantly less data is transferred because of the terse JSON format the content arrives in (rather than it being already wrapped in HTML). Also, we get a very clean separation of content and presentation by actually forcing content to be moved into a CMS and out of the codebase. Finally, we avoid the risk of coupling the rendering too tightly with the CMS by forcing all content to be passed over HTTP in JSON; in fact, they are so separated that they don't even need to be on the same server.
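To make the flow concrete, here is a toy version of that hand-off. The postTemplate function stands in for a template compiled by Jade's client-side runtime, and the JSON object stands in for a response from the CMS; both are invented for illustration.

```javascript
// What a compiled client-side template boils down to:
// a function from JSON locals to an HTML string.
function postTemplate(locals) {
  return '<article><h1>' + locals.title + '</h1>' +
         '<p>' + locals.body + '</p></article>';
}

// The terse payload the AJAX call would return (no HTML wrapping).
var json = { title: 'Hello', body: 'First post' };

console.log(postTemplate(json));
// → <article><h1>Hello</h1><p>First post</p></article>
```

Notice that only the short JSON travels over the wire; the HTML wrapping is produced in the browser, which is where the bandwidth savings come from.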
But there are some issues too: the reliance on JavaScript for loading content means that users who don't have JS enabled will not be able to load content normally, and search engines will not be able to see your content without you implementing _escaped_fragment_ URLs. Thus, some fallback is needed; whether it is a full site that is able to function without JS or just simple HTML snapshots rendered using a headless browser, it is a source of additional work.
Server-side rendering
We can, of course, render everything on the server side and just send regular HTML to the browser. This is the most backward-compatible approach, since the site will behave just as any static HTML site would, but we don't get any of the benefits of client-side rendering either. We could still use some client-side Jade for enhancements, but the idea is the same: the majority gets rendered on the server side, and full HTML pages need to be sent when the user navigates to a new page.
Build systems
Although the Jade compiler is fully capable of compiling projects on its own, in practice it is often better to use a build system, because build systems can make interfacing with the compiler easier. In addition, they often help automate other tasks such as minification, compiling other languages, and even deployment. Some examples of these build systems are Roots (http://roots.cx/), Grunt (http://gruntjs.com/), and even GNU Make (http://www.gnu.org/software/make/). For example, Roots can recompile Jade automatically each time you save it and even refresh an in-browser preview of that page. Continuous recompilation helps you notice errors sooner, and Roots helps you avoid the hassle of manually running a command to recompile.
Summary
In this article, we took a look at some of the best practices to follow when organizing Jade projects. Also, we looked at the use of third-party tools to automate tasks.
Resources for Article:
Further resources on this subject: So, what is Node.js?
[Article] RSS Web Widget [Article] Cross-browser-distributed testing [Article]
Deep Customization of Bootstrap
Packt
19 Dec 2014
8 min read
This article is written by Aravind Shenoy and Ulrich Sossou, the authors of the book, Learning Bootstrap. It will introduce you to the concept of deep customization of Bootstrap. (For more resources related to this topic, see here.) Adding your own style sheet works when you are trying to do something quick or when the modifications are minimal. Customizing Bootstrap beyond small changes involves using the uncompiled Bootstrap source code. The Bootstrap CSS source code is written in LESS with variables and mixins to allow easy customization. LESS is an open source CSS preprocessor with cool features used to speed up your development time. LESS allows you to engage an efficient and modular style of working making it easier to maintain your CSS styling in your projects. The advantages of using variables in LESS are profound. You can reuse the same code many times thereby following the write once, use anywhere paradigm. Variables can be globally declared, which allows you to specify certain values in a single place. This needs to be updated only once if changes are required. LESS variables allow you to specify widely-used values such as colors, font family, and sizes in a single file. By modifying a single variable, the changes will be reflected in all the Bootstrap components that use it; for example, to change the background color of the body element to green (#00FF00 is the hexadecimal code for green), all you need to do is change the value of the variable called @body-bg in Bootstrap as shown in the following code: @body-bg: #00FF00; Mixins are similar to variables but for whole classes. Mixins enable you to embed the properties of a class into another. It allows you to group multiple code lines together so that it can be used numerous times across the style sheet. 
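As an illustrative sketch (the variable and mixin names below are made up, not part of Bootstrap's source), a parametric mixin combined with a variable looks like this:

```less
// Change @brand-color once and every rule using .rounded() follows.
@brand-color: #428bca;

.rounded(@radius: 4px) {
  border-radius: @radius;
  border: 1px solid darken(@brand-color, 10%);
}

.btn-custom { .rounded(); background-color: @brand-color; }
.panel-hero { .rounded(12px); }
```

Here .rounded() takes an optional @radius parameter with a default value, so callers can reuse the same declaration block with different values, and darken() is one of LESS's built-in color functions.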
Mixins can also be used alongside variables and functions, resulting in multiple inheritance; for example, to add clearfix to an article, you can use the .clearfix mixin as shown in the following example. It results in all of the clearfix declarations being included in the compiled CSS code:

Mixin:
article {
  .clearfix;
}

Compiled CSS code:
article:before,
article:after {
  content: " "; // 1
  display: table; // 2
}
article:after {
  clear: both;
}

A clearfix mixin is a way for an element to automatically clear after itself, so that you don't need to add additional markup. It's generally used in float layouts, where elements are floated to be stacked horizontally. Let's look at a pragmatic example to understand how this kind of customization is used in a real-time scenario:
Download and unzip the Bootstrap files into a folder. Create an HTML file called bootstrap_example and save it in the same folder where you saved the Bootstrap files. Add the following code to it:

<!DOCTYPE html>
<html>
<head>
  <title>BootStrap with Packt</title>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <!-- Downloaded Bootstrap CSS -->
  <link href="css/bootstrap.css" rel="stylesheet">
  <!-- JavaScript plugins (requires jQuery) -->
  <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
  <!-- Include all compiled plugins (below), or include individual files as needed -->
  <script src="js/bootstrap.min.js"></script>
</head>
<body>
  <h1>Welcome to Packt</h1>
  <button type="button" class="btn btn-default btn-lg" id="packt">PACKT LESSONS</button>
</body>
</html>

The output of this code upon execution will be as follows:
The Bootstrap folder includes the following folders and file:
- css
- fonts
- js
- bootstrap_example.html
This Bootstrap folder is shown in the following screenshot:
Since we are going to use the Bootstrap source code now, let's download the ZIP file and keep it at any
location. Unzip it, and we can see the contents of the folder as shown in the following screenshot:
Let's now create a new folder called bootstrap in the css folder. The contents of our css folder will appear as displayed in the following screenshot:
Copy the contents of the less folder from the source code and paste it into the newly created bootstrap folder inside the css folder. Thus, the contents of the same bootstrap folder within the css folder will appear as displayed in the following screenshot:
In the bootstrap folder, look for the variables.less file and open it using Notepad or Notepad++. In this example, we are using plain Notepad, and on opening the variables.less file with Notepad, we can see the contents of the file as shown in the following screenshot:
Currently, we can see that @body-bg is assigned the default value #fff as the color code. Change the background color of the body element to green by assigning it the value #00ff00. Save the file and then look for the bootstrap.less file in the bootstrap folder.
In the next step, we are going to use WinLess. Open WinLess and add the contents of the bootstrap folder to it. In the folder pane, you will see all the less files loaded as shown in the following screenshot:
Now, we need to uncheck all the files and select only the bootstrap.less file as shown in the following screenshot:
Click on Compile. This will compile your bootstrap.less file to bootstrap.css. Copy the newly compiled bootstrap.css file from the bootstrap folder and paste it into the css folder, thereby replacing the original bootstrap.css file.
Now that we have the updated bootstrap.css file, go back to bootstrap_example.html and execute it. Upon execution, the output of the code will be as follows:
Thus, we can see that the background color of the <body> element turns green, as we have altered it globally in the variables.less file that was linked to the bootstrap.less file, which was later compiled to bootstrap.css by WinLess.
We can also use LESS variables and mixins to customize Bootstrap. We can import the Bootstrap files and add our customizations. Let's now create our own less file called styles.less in the css folder. We will include the Bootstrap files by adding the following line of code in the styles.less file:

@import "./bootstrap/bootstrap.less";

We have given the path ./bootstrap/bootstrap.less as per the location of the bootstrap.less file. Remember to give the appropriate path if you have placed it at any other location. Now, let's try a few customizations and add the following code to styles.less:

@body-bg: #FFA500;
@padding-large-horizontal: 40px;
@font-size-base: 7px;
@line-height-base: 9px;
@border-radius-large: 75px;

The next step is to compile the styles.less file to styles.css. We will again use WinLess for this purpose. You have to uncheck all the options and select only styles.less to be compiled:
On compilation, the styles.css file will contain all the CSS declarations from Bootstrap.
The next step is to add the styles.css stylesheet to the bootstrap_example.html file, so your HTML code will look like this:

<!DOCTYPE html>
<html>
<head>
  <title>BootStrap with Packt</title>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <!-- Downloaded Bootstrap CSS -->
  <link href="css/bootstrap.css" rel="stylesheet">
  <!-- JavaScript plugins (requires jQuery) -->
  <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
  <!-- Include all compiled plugins (below), or include individual files as needed -->
  <script src="js/bootstrap.min.js"></script>
  <link href="css/styles.css" rel="stylesheet">
</head>
<body>
  <h1>Welcome to Packt</h1>
  <button type="button" class="btn btn-default btn-lg" id="packt">PACKT LESSONS</button>
</body>
</html>

The output of the code is as follows:
Since we changed the background color to orange (#ffa500), created a border radius, and defined font-size-base and line-height-base, the output on execution is as displayed in the preceding screenshot. The LESS variables should be added to the styles.less file after the Bootstrap import so that they override the variables defined in the Bootstrap files. In short, all the custom code you write should be added after the Bootstrap import.
Summary
In this article, we had a look at the procedure for implementing deep customization in Bootstrap. However, we are still at the start of the journey. The learning curve is always steep, as there is so much more to learn. Learning is an ongoing process that never ceases. Thus, there is still a long way to go, and in a pragmatic sense, the journey is the destination.
Resources for Article:
Further resources on this subject: Creating attention-grabbing pricing tables [article] Getting Started with Bootstrap [article] Bootstrap 3.0 is Mobile First [article]

Tips and Tricks for using Alfresco 3 Business Solutions
Packt
25 Feb 2011
4 min read
Alfresco 3 Business Solutions
Practical implementation techniques and guidance for delivering business solutions with Alfresco:
- Deep practical insights into the vast possibilities that exist with the Alfresco platform for designing business solutions.
- Each and every type of business solution is implemented through the eyes of a fictitious financial organization, giving you the right amount of practical exposure you need.
- Packed with numerous case studies which will enable you to learn in various real-world scenarios.
- Learn to use Alfresco's rich API arsenal with ease.
- Extend Alfresco's functionality and integrate it with external systems.
Read more about this book
(For more resources on Alfresco, see here.)
Node references are important
Tip: Node references are used to identify a specific node in one of the stores in the repository. You construct a node reference by combining a store reference, such as workspace://SpacesStore, with an identifier. The identifier is a Universally Unique Identifier (UUID), and it is generated automatically when a node is created. A UUID looks something like 986570b5-4a1b-11dd-823c-f5095e006c11 and represents a 128-bit value. A complete node reference looks like workspace://SpacesStore/986570b5-4a1b-11dd-823c-f5095e006c11. The node reference is one of the most important concepts when developing custom behavior for Alfresco, as it is required by a lot of the application interface methods.
Avoid CRUD operations directly against the database
Tip: One should not perform any CRUD operations directly against the database, bypassing the foundation services, when building a custom solution on top of Alfresco. This will cause the code to break in the future if the database design is ever changed. Alfresco is required to keep older APIs available for backward compatibility (if they ever change), so it is better to always use the published service APIs.
Query the database directly only when:
- The customization built with the available APIs is not providing acceptable performance and you need to come up with a solution that performs satisfactorily
- Reporting is necessary
- Information is needed during development for debugging purposes
- Bootstrapping tweaks are needed, such as when you want to run a patch again
Executing patches in a specific order
Tip: If we have several patches to execute and they should run in a specific order, we can control that with the targetSchema value. The fixesToSchema value is set to Alfresco's current schema version (that is, via the version.schema variable), which means that this patch will always be run no matter what version of Alfresco is being used.
It is a good idea to export complex folder structures into ACP packages
Tip: When we set up more complex folder structures with rules, permission settings, template documents, and so on, it is a good idea to export them into Alfresco Content Packages (ACP) and store them in the version control system. The same is true for any Space Templates that we create. These packages are also useful to include in releases.
Deploying the Share JAR extension
Tip: When working with Spring Surf extensions for Alfresco Share, it is not necessary to stop and start the Alfresco server between each deployment. We can set up Apache Tomcat to watch the JAR file we are working with and tell it to reload the JAR every time it changes. Update the tomcat/conf/context.xml configuration file to include the following line:

<WatchedResource>WEB-INF/lib/3340_03_Share_Code.jar</WatchedResource>

Now, every time we update this Share extension JAR, Tomcat will reload it for us, and this shortens the development cycle quite a bit.
The Tomcat console should print something like this when this happens:

INFO: Reloading context [/share]

To deploy a new version of the JAR, just run the deploy-share-jar ant target:

C:\3340_03_Code\bestmoney\alf_extensions\trunk>ant -q deploy-share-jar
[echo] Packaging extension JAR file for share.war
[echo] Copies extension JAR file to share.war WEB-INF lib
BUILD SUCCESSFUL
Total time: 0 seconds

Debugging AMP extensions
Tip: To debug AMP extensions, start the Alfresco server so that it listens for remote debugging connections; or, more correctly, start the JVM so that it listens for remote debugging connection attempts. This can be done by adding the following line to the operating system as an environment variable:

CATALINA_OPTS=-Dcom.sun.management.jmxremote -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005

This means that any Alfresco installation that we have installed locally on our development machine will be available for debugging as soon as we start it. Change the address as you see fit according to your development environment. With this setting, we can now debug both Alfresco's source code and our own source code at the same time.
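Returning to the node reference format from the first tip above, a node reference string can be taken apart into its three parts with a small helper; the parsing function below is an illustrative sketch, not an Alfresco API.

```javascript
// Split protocol://storeId/UUID into its components.
function parseNodeRef(ref) {
  var m = /^(\w+):\/\/([^\/]+)\/(.+)$/.exec(ref);
  if (!m) throw new Error('Not a node reference: ' + ref);
  return { protocol: m[1], storeId: m[2], id: m[3] };
}

console.log(parseNodeRef(
  'workspace://SpacesStore/986570b5-4a1b-11dd-823c-f5095e006c11'));
// → { protocol: 'workspace', storeId: 'SpacesStore',
//     id: '986570b5-4a1b-11dd-823c-f5095e006c11' }
```

The first two parts together form the store reference, and the trailing UUID identifies the node within that store.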

Packt
08 Oct 2009
4 min read

Slider for Dynamic Applications using script.aculo.us (part 2)

Code usage for sliders with options

We are now done with the most important part of the slider: its implementation in our applications. But wait, we need the slider to suit our applications, right? So let's customize our slider with options. We mentioned earlier that the track represents the range of values, so let's first define the range for our slider:

```javascript
window.onload = function() {
  new Control.Slider('handle1', 'track1', {
    axis: 'vertical',
    range: $R(1, 100)
  });
};
```

The range option takes a Prototype ObjectRange instance; hence, we declare it using $R(minimum, maximum). Everything looks neat so far. Let's add another option to our constructor: onSlide(). The onSlide() callback is invoked every time we drag the slider, and the default parameter passed to it is the current slider value:

```javascript
window.onload = function() {
  new Control.Slider('handle1', 'track1', {
    axis: 'vertical',
    range: $R(1, 100),
    onSlide: function(v) {
      $('value1').innerHTML = "New Slide Value=" + v;
    }
  });
};
```

We have added a div called value1 in our HTML code. On dragging the slider, we update value1 with the current slider value. OK, so let's see what has happened to our slider up to this point. Check out the following screenshot: Impressed? And we are not done yet. Let's add more options to the slider now. You may ask, what if the slider in the application needs to be at a particular value by default? And I will say: use the sliderValue option. Let's make our slider value 10 by default. Here is the snippet for the same:

```javascript
window.onload = function() {
  new Control.Slider('handle1', 'track1', {
    axis: 'vertical',
    range: $R(1, 100),
    sliderValue: 10,
    onSlide: function(v) {
      $('value1').innerHTML = "New Slide Value=" + v;
    }
  });
};
```

You should see the slider value at 10 when you run the code. Now your dear friend will ask, what if we don't want to give a range, but need to pass a fixed set of values?
And you proudly say: use the values option. Check out the usage of the values option in the constructor:

```javascript
window.onload = function() {
  new Control.Slider('handle1', 'track1', {
    range: $R(1, 25),
    values: [1, 5, 10, 15, 20, 25],
    onSlide: function(v) {
      $('value1').innerHTML = "New Slide Value=" + v;
    }
  });
};
```

We have passed a set of values, in array form, to our constructor. Let's see what it looks like.

Tips and tricks with the slider

After covering all the aspects of the slider feature, here is a list of simple tips and tricks that we can make use of in our applications with ease.

Reading the current value of the slider

The script.aculo.us "genie" provides us with two callbacks for reading the current value of the slider:

- onSlide
- onChange

Both of these callbacks are used as part of the slider's options. onSlide carries the current value while the drag is in progress. The callback syntax is as follows:

```javascript
onSlide: function(value) {
  // Do something with the current slider value while sliding,
  // for example write it to the page.
}
```

The onChange callback receives the value of the slider when the sliding or drag event ends. After the drag is completed, and if the value of the slider has changed, the onChange function will be called. For example, if the slider's current value is 10 and after sliding we change it to 15, the onChange callback will be fired. The callback syntax is as follows:

```javascript
onChange: function(value) {
  // Do anything with the "changed", current value.
}
```

Multiple handles in the slider

Now, a thought comes to mind at this point: is it possible for us to have two handles in one track? And the mighty script.aculo.us library says yes!
Check out the following code snippet and screenshot for a quick glance at having two handles in one track.

HTML code:

```html
<div id="track1">
  <div id="handle1"></div>
  <div id="handle2"></div>
</div>
```

JavaScript code for the same:

```javascript
window.onload = function() {
  new Control.Slider(['handle1', 'handle2'], 'track1');
};
```

Now, check out the resulting screenshot with two handles and one track: The same can also be applied to the vertical slider.
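As a language-neutral illustration of what the values option does, here is a small Java sketch of the underlying idea: a continuous track position is converted to a raw value in the range, and the raw value is then snapped to the nearest allowed entry. The class and method names are ours for illustration, not script.aculo.us code:

```java
// Hypothetical sketch of the snapping behavior behind the "values"
// option: a 0.0..1.0 track position maps onto the range, then snaps
// to the nearest entry of a fixed value list.
public class SliderSnap {
    // Convert a track position (0.0..1.0) into a value in [min, max].
    static double positionToValue(double pos, double min, double max) {
        return min + pos * (max - min);
    }

    // Snap a raw value to the closest entry in the allowed list.
    static double snap(double raw, double[] allowed) {
        double best = allowed[0];
        for (double v : allowed) {
            if (Math.abs(v - raw) < Math.abs(best - raw)) {
                best = v;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        double[] allowed = {1, 5, 10, 15, 20, 25};
        double raw = positionToValue(0.5, 1, 25); // 13.0
        System.out.println(snap(raw, allowed));   // prints: 15.0
    }
}
```

So a handle dragged to the middle of the track (raw value 13) reports 15, the nearest allowed value, which is exactly the behavior you observe with the values option.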

Packt
23 Jul 2010
4 min read

Creating a theme package with ZopeSkel

Now that we have examined someone else's theme, let us try creating our own. Remember, we will not cover theme creation in depth; this is only a sample for site administrators (who may or may not be required to develop themes, in addition to managing their site). For more information about creating themes, visit: http://plone.org/documentation/kb/how-to-create-a-plone-3-theme-product-on-thefilesystem.

To create a theme, we will use the ZopeSkel tool (http://pypi.python.org/pypi/ZopeSkel) to generate some of the boilerplate code. ZopeSkel uses PasteScript (http://pypi.python.org/pypi/PasteScript) to facilitate package generation using a set of templates. Other options include:

- Writing everything by hand from memory
- Copying the contents of another theme package
- Using another tool such as ArchGenXML to generate boilerplate code (http://plone.org/products/archgenxml)

Adding ZopeSkel to a buildout

Now let's add ZopeSkel to our buildout. In 03-appearance-zopeskel.cfg, we have this:

```ini
[buildout]
extends = 03-appearance-zopepy.cfg
parts += zopeskel

[zopeskel]
recipe = zc.recipe.egg
dependent-scripts = true
```

We extend the previous working configuration file and add a new section called zopeskel. This section uses the zc.recipe.egg recipe (http://pypi.python.org/pypi/zc.recipe.egg) to download ZopeSkel from the Python Package Index (zc.recipe.egg will search the Python Package Index for packages that match the section name, zopeskel). We set dependent-scripts to true to tell Buildout to generate Python scripts for ZopeSkel's dependencies, such as PasteScript, which includes the paster script. Now stop Plone (with Ctrl + C or Ctrl + Z/Enter) and run Buildout:

```shell
$ bin/buildout -c 03-appearance-zopeskel.cfg
```

You should see:

```
$ bin/buildout -c 03-appearance-zopeskel.cfg
Uninstalling plonesite.
Updating zope2.
Updating fake eggs
Updating instance.
Installing plonesite.
```
```
…
Updating zopepy.
Installing zopeskel.
Getting distribution for 'zopeskel'.
Got ZopeSkel 2.16.
Getting distribution for 'Cheetah>1.0,<=2.2.1'.
…
Got Cheetah 2.2.1.
Getting distribution for 'PasteScript'.
Got PasteScript 1.7.3.
Getting distribution for 'PasteDeploy'.
…
Got PasteDeploy 1.3.3.
Getting distribution for 'Paste>=1.3'.
…
Got Paste 1.7.3.1.
Generated script '/Users/aclark/Developer/plone-site-admin/buildout/bin/zopeskel'.
Generated script '/Users/aclark/Developer/plone-site-admin/buildout/bin/paster'.
Generated script '/Users/aclark/Developer/plone-site-admin/buildout/bin/easy_install'.
Generated script '/Users/aclark/Developer/plone-site-admin/buildout/bin/easy_install-2.4'.
```

You will notice that in addition to bin/zopeskel, Buildout also installed the "dependent scripts" bin/paster and bin/easy_install (the latter of which we do not really need in this case).

Running ZopeSkel

Now try running ZopeSkel with the command:

```shell
$ bin/zopeskel
```

You should see:

```
Usage: zopeskel <template> <output-name> [var1=value] ... [varN=value]
zopeskel --help                Full help
zopeskel --list                List templates verbosely, with details
zopeskel --make-config-file    Output .zopeskel prefs file
...
```

This tells us we need to pick a template and an output-name. ZopeSkel goes on to list the available templates.
They are:

- archetype: A Plone project that uses Archetypes content types
- kss_plugin: A project for a KSS plugin
- plone: A project for Plone products
- plone2_theme: A theme for Plone 2.1
- plone3_portlet: A Plone 3 portlet
- plone_app: A project for Plone products with a nested namespace
- plone_pas: A project for a Plone PAS plugin
- plone2.5_theme: A theme for Plone 2.5
- plone3_theme: A theme for Plone 3
- plone2.5_buildout: A buildout for Plone 2.5 projects
- plone3_buildout: A buildout for Plone 3 installation
- plone_hosting: Plone hosting: buildout with ZEO and Plone
- recipe: A recipe project for zc.buildout
- silva_buildout: A buildout for Silva projects
- basic_namespace: A basic Python project with a namespace package
- nested_namespace: A basic Python project with a nested namespace package
- basic_zope: A Zope project
Packt
04 Mar 2010
14 min read

Apache MyFaces Extensions Validator

Setting up ExtVal

As with all other libraries, we start by downloading ExtVal and installing it in our project. As with many other JSF libraries, the ExtVal project has different branches for JSF 1.1 and 1.2. The first two digits of ExtVal's version number indicate the JSF version they are made for: ExtVal 1.1.x is the xth version of ExtVal for JSF 1.1, whereas ExtVal 1.2.x is the xth version for JSF 1.2. Versions of ExtVal are not released very often. At the time of writing this article, only two official releases have been published for each branch. According to the lead developer of ExtVal, a third release (1.1.3 and 1.2.3) is in the works for both branches, as well as a first release from the new JSF 2.0 branch.

Apart from stable releases, ExtVal offers snapshot builds that are created on a regular basis. The snapshots are created manually, which gives some guarantees about the quality compared to automatically created daily releases: no snapshots with major bugs will be created. According to the lead developer of ExtVal, the snapshot builds have “milestone quality”. Because of some issues and limitations in ExtVal 1.2.2, a snapshot build of ExtVal 1.2.3 was used while writing this article. A stable release of ExtVal 1.2.3 is expected to be available soon after the publishing date of this article.

Stable releases can be downloaded from the ExtVal download site at http://myfaces.apache.org/extensions/validator/download.html. The downloaded ZIP file will contain all of the ExtVal modules, as listed next. Note that more modules may be added to ExtVal in future releases. It is also possible that additional support modules will be provided by others; for example, a JSF component project may create a support module to get the most out of its components with ExtVal.
Regarding component support modules, it is also worth mentioning the “Sandbox 890” project, which provides proof of concept implementations of support modules for some non-MyFaces component libraries. Currently, proofs of concept are available for IceFaces, PrimeFaces, RichFaces, and OpenFaces.

- myfaces-extval-core-1.2.x.jar: The core of ExtVal. This library should be added to the project in all cases.
- myfaces-extval-property-validation-1.2.x.jar: Extension module that adds several custom ExtVal annotations that we can use in our Model layer.
- myfaces-extval-generic-support-1.2.x.jar: Extension module for generic JSF component support. This library should be added to the project in almost all cases. There are two cases when we don't need this generic support library: if we're using a support library for a specific component library, such as the Trinidad support module described next; or if the component library we're using is 100% compliant with the JSF specification, which is almost never the case. If no specific support module is in use, and it is unclear whether the generic module is needed, it is safe to add it anyway. It is also a good idea to take a look at the Tested Compatibility section on the ExtVal wiki at http://wiki.apache.org/myfaces/Extensions/Validator/.
- myfaces-extval-trinidad-support-1.2.x.jar: Extension module that supports the MyFaces Trinidad JSF components. If we use this one, we don't need the "generic support" module. The Trinidad support module will make use of Trinidad's client-side validation options where possible, so we get client-side validation based on annotations in our Model with no extra effort.
- myfaces-extval-bean-validation-1.2.x.jar: Extension module that adds support for Bean Validation (JSR 303) annotations. This module will be available from the third release of ExtVal (*.*.3).
Snapshot builds of ExtVal can be downloaded from ExtVal's Maven snapshot repository, which can be found at http://people.apache.org/maven-snapshot-repository/org/apache/myfaces/extensions/validator/. In the case of snapshot builds, no single ZIP file is available, and each module has to be downloaded separately as a JAR file. Note that if Maven is used, there is no need to download the snapshots manually. In that case, we only have to change the version number in the pom.xml file to a snapshot version number, and Maven will automatically download the latest snapshot. The modules can be downloaded from the following locations within the Maven repository:

- Core: myfaces-extval-core/
- Property validation: validation-modules/myfaces-extval-property-validation/
- Generic support: component-support-modules/myfaces-extval-generic-support/
- Trinidad support: component-support-modules/myfaces-extval-trinidad-support/
- Bean validation (JSR 303): validation-modules/myfaces-extval-bean-validation/

These locations are relative to the URL of the Maven repository that we just saw. After each URL, 1.2.x-SNAPSHOT/ has to be appended, where 1.2.x is replaced by the appropriate version number.

Once we've finished downloading, we can start adding the JARs to our project. ExtVal differs in one thing from other libraries: it needs to access both our Model and View projects. So we have to add the ExtVal libraries to the lib directory of the EAR, instead of the WAR or the JAR with the entities. Some libraries that ExtVal uses have to be moved there as well. If we don't do this, we'll end up with all sorts of weird exceptions related to class-loading errors. Libraries that are added to the lib directory of an EAR are automatically available to all contained WAR and JAR files.
However, depending on the IDE and build system that we are using, we may have to take some additional steps to be able to build the WAR and JAR with dependencies on the libraries in the EAR's lib directory. This image shows a simplified structure of the EAR with ExtVal's libraries added to it. Note that the MyFaces ExtVal and dependencies node in the image actually represents multiple JAR files. It is important to verify that none of the libraries that are in the lib directory of the EAR are included in either the WAR or the entities JAR. Otherwise, we could still encounter class-loading conflicts. The following libraries have to be moved into the EAR to avoid these class-loading conflicts:

- myfaces-extval-*.jar: Of course, all of the needed ExtVal JARs should be in the EAR.
- asm-1.5.x.jar, cglib-2.x_y.jar: Libraries that ExtVal depends on. They are bundled with the ExtVal download, but not with snapshot releases.
- jsf-facelets.jar: We're using Facelets, so ExtVal has to use it to add validations within our Facelets pages. If we didn't use Facelets, this one would not be needed.
- myfaces-api-1.2.*, myfaces-impl-1.2.*: We're using MyFaces Core as the JSF implementation, so ExtVal will need these libraries too. Note that if we use the application server's default JSF implementation, we don't have to add these to either the EAR or the WAR.
- trinidad-api-1.2.*, trinidad-impl-1.2.*: We're using Trinidad, and ExtVal offers some Trinidad-specific features through the "Trinidad support" extension. In that case, the Trinidad libraries should be in the EAR too.
- commons-*.jar: Various libraries mentioned above depend on one or more libraries from the Apache Commons project. They should also be moved to the EAR file to be sure that no class-loading errors occur.

Note that the aforementioned changes in our project structure are necessary only because we chose to have our Model layer in a separate JAR file.
In smaller projects, it is often the case that the whole project is deployed as a single WAR file without enclosing it in an EAR. If we had chosen that strategy, no changes to the structure would have been necessary, and we could have added all of the libraries to the WAR file, as we would do with any other library. Other than including the necessary libraries as discussed before, no configuration is needed to get started with ExtVal. ExtVal uses the convention over configuration pattern extensively. That means a lot of sensible defaults are chosen, and as long as we're satisfied with the defaults, no configuration is needed. The next section will get us started with some basic ExtVal usage.

Bug with Trinidad tables

There's a bug in ExtVal that can cause some weird behavior in Trinidad's <tr:table> component: only the first row will be populated with data, and other rows will not show any data. This happens only when a Facelets composite component is used to add the columns to the table, which is exactly what we do in our example application. The bug can be found in the JIRA bug tracker for ExtVal at https://issues.apache.org/jira/browse/EXTVAL-77.

There's a workaround for the bug that we can use until it gets fixed. Be warned that this workaround may have other side effects. The workaround is shown in the following code snippet, in which we have created a class called DevStartupListener:

```java
public class DevStartupListener extends AbstractStartupListener {
    @Override
    protected void init() {
        ExtValContext.getContext().addGlobalProperty(
            ExtValRendererProxy.KEY, null, true);
    }
}
```

The required imports are in the org.apache.myfaces.extensions.validator.core package and its subpackages. Register this class as a phase listener in the faces-config.xml file:

```xml
<lifecycle>
  <phase-listener>inc.monsters.mias.workaround.DevStartupListener</phase-listener>
</lifecycle>
```

You're all set, and the <tr:table> will now act as expected.
Don’t forget to remove this workaround if the bug gets fixed in a future release of ExtVal.

Basic usage

After setting up ExtVal, the basic usage is very simple. Let's explore a simple example in our MIAS application. In our Kid.java entity, we have some JPA annotations that map the properties of the Kid bean to a database table. Let's take a closer look at the lastName property of our Kid bean:

```java
@Column(name = "LAST_NAME", nullable = false, length = 30)
private String lastName;
```

The @Column annotation maps the lastName property to the LAST_NAME column in the database. It also carries some information that is derived from the table definition in the database: nullable = false means the database won't accept an empty value in this field, and length = 30 means that no more than 30 characters can be stored in the corresponding database column. This information could be used for validation in our View layer. If we hadn't used ExtVal, we would have added a required="true" attribute to the input element in our EditKid.xhtml page. We also would have added a <tr:validateLength> component to the input component, or we could have set the maximumLength attribute. But all of these things would duplicate information and logic, and would thus break the DRY principle. With ExtVal, we don't have to duplicate this information anymore. Whenever ExtVal encounters a nullable = false setting, it will automatically add a required="true" attribute to the corresponding input element. In the same way, it will translate the length = 30 from the @Column annotation into a maximumLength attribute on the input component. The next screenshot shows ExtVal in action. (Note that all validators, and the required and maximumLength attributes, were removed from the JSF code before the screenshot was taken.) The really nice thing about this example is that the validations created by ExtVal make use of Trinidad's client-side validation capabilities.
In other words, the error message is created within the user's web browser before any input is sent to the server.

Complementing JPA annotations

It's nice that we can reuse our JPA annotations for validation, but chances are that not all of the validation we want can be expressed in JPA annotations. For that reason, ExtVal offers a set of extra annotations that we can add to our beans to complement the implicit validation constraints that ExtVal derives from JPA annotations. These annotations are part of the myfaces-extval-property-validation-1.2.x.jar library. For example, if we want to add a minimum length to the lastName field, we could use the @Length annotation as follows:

```java
@Length(minimum = 5)
@Column(name = "LAST_NAME", nullable = false, length = 30)
private String lastName;
```

Note that if, for some reason, we couldn't use the length = 30 setting on the @Column annotation, the @Length annotation also has a maximum property that can be set. The @Length annotation can be imported from the org.apache.myfaces.extensions.validator.baseval.annotation package, which is where the other annotations that ExtVal offers are also located. The following image shows the minimum length validation in action. As the example in the screenshot shows, setting a minimum input length of five characters for a name might not be a good idea. However, that's an entirely different discussion.

Using ExtVal annotations for standard JSF validators

For each of the validator components that is part of the JSF standard, there is an annotation in ExtVal's Property Validation library. These are covered briefly in the following subsections. Note that ExtVal will automatically use any overridden versions of the standard validators, if they are present in the project.

Defining length validation

For the length validation of input strings, the @Length annotation can be used, as shown in the previous example.
This annotation relies on javax.faces.validator.LengthValidator to implement the validation. The available properties are:

- minimum (int): The minimum length (inclusive), in characters, of the input string
- maximum (int): The maximum length (inclusive), in characters, of the input string

Defining double range validation

To validate whether a double value is within a certain range, the @DoubleRange annotation can be used, which delegates the implementation of the validation to the javax.faces.validator.DoubleRangeValidator validator. The available properties are:

- minimum (double): The minimum value (inclusive) of the double input
- maximum (double): The maximum value (inclusive) of the double input

Defining long range validation

What the @DoubleRange annotation does for double values, the @LongRange annotation does for long values. It uses javax.faces.validator.LongRangeValidator for the implementation:

- minimum (long): The minimum value (inclusive) of the long input
- maximum (long): The maximum value (inclusive) of the long input

Defining required fields

The example given at the beginning of this section showed how ExtVal can create a required="true" attribute based on an @Column annotation with the nullable = false setting. If it is not possible to use this setting, ExtVal also has an alternative @Required annotation. Just add this annotation to a field to make it required.

Using ExtVal's additional annotations

Apart from the annotations that correspond to the standard JSF validators, some additional annotations exist in the Property Validation module that perform other validations. These are listed in the following subsections. Whereas the validations based on the JSF standard validators use the error messages provided by the JSF validators, the additional validations cannot use standard messages from JSF. Therefore, standard messages are provided by ExtVal.
Should you want to use your own message, all additional annotations have a validationErrorMsgKey property that can be used to assign a message key for the error message. We'll discuss custom error messages in more detail later in this article.

Defining pattern-based validation

Validation with regular expressions is very powerful, and is very useful for validating phone numbers, postal codes, tax numbers, and any other data that has to fit a certain pattern. Regular expression-based validation can be added by using the @Pattern annotation. For example, to allow only letters and spaces in the firstName field, we could write:

```java
@Pattern(value = "[A-Za-z ]*")
@Column(name = "FIRST_NAME", nullable = false, length = 30)
private String firstName;
```

For completeness, the arguments of the @Pattern annotation are:

- value (String[], required): Array of regular expression patterns. Beware, if you add more than one pattern, that the input must match all of them. It is easy to make any input invalid by using two mutually exclusive patterns.
- validationErrorMsgKey (String, optional): Optional key for an alternative error message.
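The central idea of this section, deriving View-layer validation attributes from annotations on the Model, can be sketched with plain reflection. Note that the @Column annotation below is a simplified local stand-in so the example is self-contained; the real annotation lives in javax.persistence, and ExtVal's actual mechanism is far richer:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;

// Minimal sketch of annotation-driven validation: read a JPA-style
// @Column annotation from a model field and derive the JSF attributes
// (required, maximumLength) that ExtVal would add for us.
public class AnnotationDerivation {
    // Simplified stand-in for javax.persistence.Column.
    @Retention(RetentionPolicy.RUNTIME)
    @interface Column {
        boolean nullable() default true;
        int length() default 255;
    }

    static class Kid {
        @Column(nullable = false, length = 30)
        private String lastName;
    }

    // Derive UI constraints from the model annotation.
    static String deriveConstraints(Class<?> type, String fieldName) {
        try {
            Field field = type.getDeclaredField(fieldName);
            Column column = field.getAnnotation(Column.class);
            return "required=" + !column.nullable()
                 + ", maximumLength=" + column.length();
        } catch (NoSuchFieldException e) {
            return "";
        }
    }

    public static void main(String[] args) {
        System.out.println(deriveConstraints(Kid.class, "lastName"));
        // prints: required=true, maximumLength=30
    }
}
```

This is why no duplication is needed in the page: the constraints already live on the model field, and the framework only has to read them.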

Packt
08 Oct 2009
10 min read

Developing Seam Applications

Seam application architecture

As most enterprise Java developers are probably familiar with JSF and JSP, we will be using these as the view technology for our sample applications. Facelets is the recommended view technology for Seam-based applications once we have a solid understanding of Seam. In a standard Java EE application, Enterprise Application Resource (EAR) files contain one or more Web Application Resource (WAR) files and one or more sets of Java Archive (JAR) files containing Enterprise JavaBeans (EJB) functionality. Seam applications are generally deployed in exactly the same manner, as depicted in the following diagram. It is possible to deploy Seam onto a servlet-only container (for example, Tomcat) and use POJOs as the server-side Seam components. However, in this situation, we don't get any of the benefits that EJBs provide, such as security, transaction handling, management, or pooling.

Seam components

Within Seam, components are simple POJOs. There is no need to implement any interfaces or derive classes from a Seam-specific base class to make classes Seam components. For example, a Seam component could be:

- A simple POJO
- A stateless Session Bean
- A stateful Session Bean
- A JPA entity
- and so on

Seam components are defined by adding the @Name annotation to a class definition. The @Name annotation takes a single parameter to define the name of the Seam component. The following example shows how a stateless Session Bean is defined as a Seam component called calcAction:

```java
package com.davidsalter.seamcalculator;

@Stateless
@Name("calcAction")
public class CalcAction implements Calc {
    ...
}
```

When a Seam application is deployed to JBoss, the log output at startup lists which Seam components are deployed and what type they are. This can be useful for diagnostic purposes, to ensure that your components are deployed correctly.
Output similar to the following will be shown in the JBoss console log when the CalcAction class is deployed:

```
21:24:24,097 INFO [Initialization] Installing components...
21:24:24,121 INFO [Component] Component: calcAction, scope: STATELESS, type: STATELESS_SESSION_BEAN, class: com.davidsalter.seamcalculator.CalcAction, JNDI: SeamCalculator/CalcAction/local
```

Object Injection and Outjection

One of the benefits of using Seam is that it acts as the "glue" between the web technology and the server-side technology. By this we mean that the Seam Framework allows us to use enterprise beans (for example, Session Beans) directly within the Web tier, without having to use Data Transfer Object (DTO) patterns and without worrying about exposing server-side functionality on the client. Additionally, if we are using Session Beans for our server-side functionality, we don't have to develop an additional layer of JSF backing beans, which would essentially act as another layer between our web page and our application logic. In order to fully understand the benefits of Seam, we first need to describe what we mean by Injection and Outjection. Injection is the process of the framework setting component values before an object is created. With injection, the framework is responsible for setting (or injecting) components within other components. Typically, Injection is used to allow component values to be passed from the web page into Seam components. Outjection works in the opposite direction to Injection. With Outjection, components are responsible for setting component values back into the framework. Typically, Outjection is used for setting component values back into the Seam Framework, where these values can then be referenced via the JSF Expression Language (EL) within JSF pages. This means that Outjection is typically used to allow data values to be passed from Seam components into web pages.
Seam allows components to be injected into different Seam components by using the @In annotation, and allows us to outject them by using the @Out annotation. For example, if we have some JSF code that allows us to enter details on a web form, we may use an <h:inputText …/> tag such as this:

```html
<h:inputText value="#{calculator.value1}" required="true"/>
```

The Seam component calculator could then be injected into a Seam component using the @In annotation as follows:

```java
@In
private Calculator calculator;
```

With Seam, all of the default values on annotations are the most likely ones to be used. In the preceding example, therefore, Seam will look up a component called calculator and inject it into the calculator variable. If we want Seam to inject a component under a different name from the variable it is being injected into, we can adorn the @In annotation with the value parameter:

```java
@In(value = "myCalculator")
private Calculator calculator;
```

In this example, Seam will look up a component called myCalculator and inject it into the variable calculator. Similarly, if we want to outject a variable from a Seam component into a JSF page, we would use the @Out annotation:

```java
@Out
private Calculator calculator;
```

The outjected calculator object could then be used in a JSF page in the following manner:

```html
<h:outputText value="#{calculator.answer}"/>
```

Example application

To see these concepts in action, and to gain an understanding of how Seam components are used instead of JSF backing beans, let us look at a simple calculator web application. This simple application allows us to enter two numbers on a web page. Clicking the Add button on the web page will cause the sum of the numbers to be displayed. This basic application will give us an understanding of the layout of a Seam application and how we can inject and outject components between the business layer and the view. The application functionality is shown in the following screenshot.
The sample code for this application can be downloaded from the Packt web site, at http://www.packtpub.com/support. For this sample application, we have a single JSF page that is responsible for:

- Reading two numeric values from the user
- Invoking business logic to add the numbers together
- Displaying the result of adding the numbers together

```html
<%@ taglib uri="http://java.sun.com/jsf/html" prefix="h" %>
<%@ taglib uri="http://java.sun.com/jsf/core" prefix="f" %>
<html>
  <head>
    <title>Seam Calculator</title>
  </head>
  <body>
    <f:view>
      <h:form>
        <h:panelGrid columns="2">
          Value 1: <h:inputText value="#{calculator.value1}" />
          Value 2: <h:inputText value="#{calculator.value2}" />
          Add them together gives: <h:outputText value="#{calculator.answer}"/>
        </h:panelGrid>
        <h:commandButton value="Add" action="#{calcAction.calculate}"/>
      </h:form>
    </f:view>
  </body>
</html>
```

We can see that there is nothing Seam-specific in this JSF page. However, we are binding two inputText areas, one outputText area, and a button action to Seam components by using standard JSF Expression Language:

- calculator.value1: Bound to the member variable value1 on the Seam component called calculator. This value will be injected into the Seam component.
- calculator.value2: Bound to the member variable value2 on the Seam component called calculator. This value will be injected into the Seam component.
- calculator.answer: Bound to the member variable answer on the Seam component called calculator. This value will be outjected from the Seam component.
- calcAction.calculate: Invokes the method calculate() on the Seam component called calcAction.

Our business logic for this sample application is performed in a simple POJO class called Calculator.java.
package com.davidsalter.seamcalculator;

import java.io.Serializable;

import org.jboss.seam.annotations.Name;

@Name("calculator")
public class Calculator {

    private double value1;
    private double value2;
    private double answer;

    public double getValue1() {
        return value1;
    }

    public void setValue1(double value1) {
        this.value1 = value1;
    }

    public double getValue2() {
        return value2;
    }

    public void setValue2(double value2) {
        this.value2 = value2;
    }

    public double getAnswer() {
        return answer;
    }

    public void add() {
        this.answer = value1 + value2;
    }
}

This class is decorated with the @Name("calculator") annotation, which causes it to be registered with Seam under the name calculator. The @Name annotation causes this object to be registered as a Seam component that can subsequently be used within other Seam components via injection or outjection, using the @In and @Out annotations.

Finally, we need a class that acts as a backing bean for the JSF page and allows us to invoke our business logic. In this example, we are using a Stateless Session Bean; the Session Bean and its local interface are shown in the following code. In the Java EE 5 specification, a Stateless Session Bean is used to represent a single application client's communication with an application server. A Stateless Session Bean, as its name suggests, contains no state information, so they are typically used as transaction façades. A façade is a popular design pattern that defines how simplified access to a system can be provided. For more information about the Façade pattern, check out the following link:

http://java.sun.com/blueprints/corej2eepatterns/Patterns/SessionFacade.html

Defining a Stateless Session Bean using Java EE 5 technologies requires an interface and an implementation class to be defined. The interface defines all of the methods that are available to clients of the Session Bean, whereas the implementation class contains a concrete implementation of the interface.
In Java EE 5, a Session Bean interface is annotated with @Local, @Remote, or both. A @Local interface is used when a Session Bean is to be accessed locally, within the same JVM as its client (for example, a web page running within the application server). A @Remote interface is used when a Session Bean's clients are remote to the application server, that is, running within a different JVM from the application server.

There are many books that cover Stateless Session Beans and EJB 3 in depth, such as EJB 3 Developer's Guide by Michael Sikora, published by Packt Publishing. For more information on this book, check out the following link: http://www.packtpub.com/developer-guide-for-ejb3

In the following code, we are registering our CalcAction class with Seam under the name calcAction. We are also injecting and outjecting the calculator variable so that we can use it both to retrieve values from our JSF form and to pass them back to the form.

package com.davidsalter.seamcalculator;

import javax.ejb.Stateless;

import org.jboss.seam.annotations.In;
import org.jboss.seam.annotations.Name;
import org.jboss.seam.annotations.Out;

@Stateless
@Name("calcAction")
public class CalcAction implements Calc {

    @In
    @Out
    private Calculator calculator;

    public String calculate() {
        calculator.add();
        return "";
    }
}

package com.davidsalter.seamcalculator;

import javax.ejb.Local;

@Local
public interface Calc {
    public String calculate();
}

That's all the code we need to write for our sample application. If we review this code, we can see several key points where Seam has made our application development easier:

- All of the code that we have developed is written as POJOs, which will make unit testing a lot easier.
- We haven't extended or implemented any special Seam interfaces.
- We've not had to define any JSF backing beans explicitly in XML.
- We're using Java EE Session Beans to manage all of the business logic and the web-tier/business-tier integration.
- We've not used any DTO objects to transfer data between the web and business tiers.
- We're using a Seam component that contains both state and behavior.

If you are familiar with JSF, you can probably see that adding Seam to a fairly standard JSF application has already made our development simpler.

Finally, to enable Seam to find our Seam components and deploy them correctly to the application server, we need to create an empty file called seam.properties and place it in the root of the classpath of the EJB JAR file. Because this file is empty, we will not discuss it further here. To deploy the application as a WAR file embedded inside an EAR file, we also need to write some deployment descriptors.
Best Practices for Modern Web Applications

Packt
22 Apr 2014
9 min read
The importance of search engine optimization

Every day, web crawlers scrape the Internet for new content to update their associated search engines. Most people find web pages by entering a query into a search engine and selecting one of the first few results. Search engine optimization (SEO) is a set of practices used to maintain and improve search result rankings over time.

Item 1 – using keywords effectively

In order to provide information to web crawlers, websites provide keywords in their HTML meta tags and content. The optimal procedure for effective keyword usage is to:

- Come up with a set of keywords that are pertinent to your topic
- Research common search keywords related to your website
- Take the intersection of these two sets of keywords and preemptively use them across the website

Once this final set of keywords is determined, it is important to spread them across your website's content whenever possible. For instance, a ski resort in California should ensure that its website includes terms such as California, skiing, snowboarding, and rentals. These are all terms that individuals would look up via a search engine when they are interested in a weekend at a ski resort.

Contrary to popular belief, the keywords meta tag does not create any value for site owners, as many search engines consider it a deprecated index for search relevance. The reasoning behind this goes back many years, to when many websites would clutter their keywords meta tag with irrelevant filler words to bait users into visiting their sites. Today, many of the top search engines have decided that content is a much more powerful indicator of search relevance and concentrate on that instead. However, other meta tags, such as description, are still used for displaying website content on search rankings. These should be brief but powerful passages that pull users from the search page to your website.
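The intersection step described above can be sketched in a few lines. The keyword values below are purely illustrative, and the class name is hypothetical, not part of any SEO tool:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;
import java.util.TreeSet;

public class KeywordPlanner {
    // The intersection step of the procedure above: keep only the topic
    // keywords that people actually search for. A TreeSet keeps the
    // result sorted for readability.
    public static Set<String> finalKeywords(Set<String> topic, Set<String> searched) {
        Set<String> result = new TreeSet<>(topic);
        result.retainAll(searched);
        return result;
    }

    public static void main(String[] args) {
        // Keywords pertinent to the site's topic (step 1).
        Set<String> topic = new HashSet<>(Arrays.asList(
            "california", "skiing", "snowboarding", "rentals", "lodge"));

        // Commonly searched keywords from research (step 2).
        Set<String> searched = new HashSet<>(Arrays.asList(
            "skiing", "lift tickets", "california", "snowboarding"));

        // Step 3: the final set to spread across the website's content.
        System.out.println(finalKeywords(topic, searched));
        // [california, skiing, snowboarding]
    }
}
```

The resulting set is what you would then weave into page titles, headings, and body copy.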
Item 2 – header tags are powerful

Header tags (also known as h-tags) are often used by web crawlers to determine the main topic of a given web page or section. It is recommended to use only one set of h1 tags to identify the primary purpose of the web page, and any number of the other header tags (h2, h3, and so on) to identify section headings.

Item 3 – make sure to have alternative attributes for images

Despite recent advances in image recognition technology, web crawlers do not possess the resources necessary to parse images for content across the Internet today. As a result, it is advisable to provide an alt attribute for search engines to parse while they scrape your web page. For instance, suppose you were the webmaster of the Seattle Water Sanitation Plant and wished to upload a process flow chart image to your website. Since web crawlers make use of the alt attribute while sifting through images, you would ideally upload the image using the following code:

<img src="flow_chart.png" alt="Seattle Water Sanitation Process Flow Chart" />

This leaves the content in the form of a keyword or phrase that can help contribute to the relevancy of your web page in search results.

Item 4 – enforcing clean URLs

While creating web pages, you'll often find the need to identify them with a URL ID. The simplest way is often to use a number or symbol that maps to your data for simple information retrieval. The problem with this is that a number or symbol does not help identify the content for web crawlers or your end users. The solution is to use clean URLs. By adding a topic name or phrase to the URL, you give web crawlers more keywords to index. Additionally, end users who receive the link are given the opportunity to evaluate the content with more information, since they know the topic discussed in the web page.
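To make this concrete, here's a minimal Java sketch of both halves of the scheme: building a slug from a post title, and parsing the numeric identifier back out of a clean URL while ignoring whatever slug follows it. The class, method names, and /post/<id>/<slug> layout are illustrative, not from any framework:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CleanUrls {

    // Hypothetical URL layout: /post/<numeric id>/<slug>
    private static final Pattern POST_PATH =
        Pattern.compile("^/post/(\\d+)(?:/.*)?$");

    // Build a slug: lowercase, drop apostrophes (can't -> cant), turn runs
    // of whitespace into single dashes, and strip anything else.
    public static String toSlug(String title) {
        return title.toLowerCase()
                    .replace("'", "")
                    .replaceAll("\\s+", "-")
                    .replaceAll("[^a-z0-9-]", "");
    }

    // Recover the numeric identifier from a clean URL path, ignoring the
    // slug, so the server can look the post up regardless of how the slug
    // was spelled. Returns null when the path doesn't match.
    public static Integer parseId(String path) {
        Matcher m = POST_PATH.matcher(path);
        return m.matches() ? Integer.valueOf(m.group(1)) : null;
    }

    public static void main(String[] args) {
        System.out.println(toSlug("Golden Dragon Review"));           // golden-dragon-review
        System.out.println(parseId("/post/24/golden-dragon-review")); // 24
        System.out.println(parseId("/post/24/some-other-slug"));      // 24
    }
}
```

Because the identifier alone drives the lookup, a link with a stale or misspelled slug can still resolve to the right record.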
A simple way to integrate clean URLs while retaining the number or symbol identifier is to append a readable slug, which describes the topic, to the end of the URL, after the identifier. Then, apply a regular expression to parse out the identifier for your own use; for instance, take a look at the following sample URL:

http://www.example.com/post/24/golden-dragon-review

The number 24, when parsed out, helps your server easily identify the blog post in question. The slug, golden-dragon-review, communicates the topic at hand to both web crawlers and users. While creating the slug, the best practice is often to remove all non-alphanumeric characters and replace all spaces with dashes. Contractions such as can't, don't, or won't can be replaced by cant, dont, or wont, because search engines can easily infer their intended meaning. It is also important to realize that spaces should not be replaced by underscores, as underscores are not interpreted appropriately by web crawlers.

Item 5 – backlink whenever safe and possible

Search rankings are influenced by your website's clout on websites that search engines deem trustworthy. For instance, due to the restricted access to .edu and .gov domains, websites that use these domains are deemed trustworthy and given a higher level of authority in search rankings. This means that any website that is backlinked on a trustworthy website gains value as a result. Thus, it is worth considering backlinks on relevant websites where users would be actively interested in the content. If you choose to backlink irrelevantly, there are often consequences, as this practice can be caught automatically by web crawlers comparing the keywords between your link and the backlink host.

Item 6 – handling HTTP status codes properly

HTTP status codes help the client and server communicate the status of page requests in a clean and consistent manner.
The following chart reviews the most important status codes and their effect on SEO:

- 200 (Success): the page loads and its content contributes to SEO
- 301 (Permanent redirect): the page redirects, and the redirected content contributes to SEO
- 302 (Temporary redirect): the page redirects, but the redirected content does not contribute to SEO
- 404 (Client error, not found): the page loads, but its content does not contribute to SEO
- 500 (Server error): the page will not load, and there is no content to contribute to SEO

In an ideal world, all pages would return the 200 status code. Unfortunately, URLs get misspelled, servers throw exceptions, and old pages get moved, which leads to the need for the other status codes. Thus, it is important that each situation be handled so as to maximize communication to both web crawlers and users and to minimize damage to your search ranking.

When a URL gets misspelled, it is important to provide a 301 redirect to a close match or another popular web page. This can be accomplished by using a clean URL and parsing out its identifier, regardless of the slug that follows it. This way, there exists content that contributes directly to the search ranking instead of just a 404 page.

Server errors should be handled as soon as possible. When a page does not load, it harms the experience for both users and web crawlers and, over an extended period of time, can expire that page's rank. Lastly, 404 pages should be developed with your users in mind. When you choose not to redirect them to the most relevant link, it is important to either pass in suggested web pages or a search menu to keep them engaged with your content.

The connect-rest-test Grunt plugin can be a healthy addition to any software project to test the status codes and responses from a RESTful API. You can find it at https://www.npmjs.org/package/connect-rest-test.
Alternatively, while testing pages outside of your RESTful API, you may be interested in grunt-http-verify to ensure that status codes are returned properly. You can find it at https://www.npmjs.org/package/grunt-http-verify.

Item 7 – making use of your robots.txt and site map files

Often, there exist directories in a website that are available to the public but should not be indexed by a search engine. The robots.txt file, when placed in your website's root, helps define exclusion rules for web crawling and prevents a user-defined set of search engines from entering certain directories. For instance, the following example disallows all search engines that choose to parse your robots.txt file from visiting the music directory on a website:

User-agent: *
Disallow: /music/

While writing navigation tools with dynamic content, such as JavaScript libraries or Adobe Flash widgets, it's important to understand that web crawlers have limited capability to scrape these. Site maps help define the relational mapping between web pages when crawlers cannot heuristically infer it themselves. Where the robots.txt file defines a set of search engine exclusion rules, the sitemap.xml file, also located in the website's root, defines a set of search engine inclusion rules.
The following XML snippet is a brief example of a site map:

<?xml version="1.0" encoding="utf-8"?>
<urlset>
  <url>
    <loc>http://example.com/</loc>
    <lastmod>2014-11-24</lastmod>
    <changefreq>always</changefreq>
    <priority>0.8</priority>
  </url>
  <url>
    <loc>http://example.com/post/24/golden-dragon-review</loc>
    <lastmod>2014-07-13</lastmod>
    <changefreq>never</changefreq>
    <priority>0.5</priority>
  </url>
</urlset>

The attributes mentioned in the preceding code have the following meanings:

- loc: the URL location to be crawled
- lastmod: the date on which the web page was last modified
- changefreq: how frequently the page is modified and, as a result, how often the crawler should revisit it
- priority: the web page's priority in comparison to the other web pages on the site

Using Grunt to reinforce SEO practices

With the rising popularity of client-side web applications, SEO practices are often not met when page links do not exist without JavaScript. Certain Grunt plugins provide a workaround for this by loading the web pages, waiting for an amount of time to allow the dynamic content to load, and taking an HTML snapshot. These snapshots are then provided to web crawlers for search engine purposes, and the user-facing dynamic web applications are excluded from scraping completely. Some examples of Grunt plugins that accomplish this are:

- grunt-html-snapshots (https://www.npmjs.org/package/grunt-html-snapshots)
- grunt-ajax-seo (https://www.npmjs.org/package/grunt-ajax-seo)
Managing Menus in Joomla! 1.5: Part 2

Packt
12 Mar 2010
9 min read
Time for action – tweak the menu position and orientation

Placing a second menu in the left-hand side column makes it very prominent. You might notice that site visitors find this second menu distracting, and after all, the static links to information about SRUP aren't really that important. Why not move the menu somewhere down the page? We'll publish the SRUP links as a horizontal menu at the very bottom.

By default, at the bottom of the screen there's a copyright notice. We'll start by removing this to make room for the new menu. Removing the copyright notice involves deleting a few lines of code from the template HTML. If you want to move the menu to any other screen position, you can skip the first three steps:

1. Navigate to Extensions | Template Manager.
2. Select the rhuk_milkyway template and click on Edit HTML.
3. In the HTML editor screen, find and select the following code:

<p id="power_by">
  <?php echo JText::_('Powered by') ?> <a href="http://www.joomla.org">Joomla!</a>.
  <?php echo JText::_('Valid') ?> <a href="http://validator.w3.org/check/referer">XHTML</a>
  <?php echo JText::_('and') ?> <a href="http://jigsaw.w3.org/css-validator/check/referer">CSS</a>.
</p>

4. Press the Delete key to remove the selected code and click on Save. You can preview the frontend to check that the copyright notice has effectively disappeared.

To have the About SRUP menu occupy the free position, we'll edit the menu module properties:

1. Navigate to Extensions | Module Manager and click on the About SRUP menu module.
2. In the Position drop-down box, select syndicate. This is the bottommost position in this template.
3. In the Parameters section, click on Module Parameters and set the Menu Style to Legacy – Horizontal. This will make Joomla! display the links horizontally, side by side, in a table row.
4. Click on Other Parameters. In the Spacer text box, enter a character to display in between the horizontal links. In this example, we'll enter two dashes.
The effect is that the menu links will be displayed as follows:

Who are SRUP? -- Mission Statement -- Contact

Click on Apply and then on Preview. The menu has been moved to the bottom of the page.

What just happened?

We've just removed the copyright notice that, by default, appears at the bottom of the template. This creates room for a separate About SRUP menu. To get this menu to display at the bottom position, we've changed its module position and its menu style (the link orientation) from the default values. The result is that the menu is now displayed as a row of links at the bottom of the page. This makes them much less prominent; only visitors looking for these links will really notice them. This kind of presentation is a good choice for links that don't fit the main navigation menus. In this example, we've moved links about the organization behind the site to the bottom menu. In real life, it's common to publish static links there (such as About This Site, Disclaimer, and Sitemap).

Menu Manager or Module Manager?
To customize a menu, you'll sometimes use the Menu Manager and sometimes the Module Manager. What's the difference? The Menu Manager is used for everything that has to do with the contents of the menu. Anything to do with the display of the menu module you control through the module settings.

Option 3: Creating submenu items

There's still room for improvement in our Main Menu. Although there are now only five links left, the way they're organized might still confuse visitors. Having a News link and a separate News Archive link, both on the same menu level, is odd. Visitors will expect a News link in a main site menu, but News Archive shouldn't really be a top-level link. Joomla! allows you to add a secondary level to a menu, so let's change News Archive into a secondary link that will only display after the News link has been clicked.
Time for action – create a secondary menu item

Let's remove the News Archive link from the primary level in the Main Menu and show it as a sublevel link:

1. To edit the Main Menu contents, navigate to Menus | Main Menu.
2. Click on the title of the item you want to edit, News Archive.
3. In the Menu Item Details section, the parent item is set to top. Change the parent item to News.
4. Click on Save. In the list of menu items in the Menu Item Manager, the new sublevel menu item is shown indented.
5. Click on Preview to see the output on the frontend. The Main Menu now shows four primary links. When the visitor clicks on News, a secondary link, News Archive, is displayed.

What just happened?

By assigning a parent item to a menu link, you can create a submenu item. Of course, submenus aren't the only way to make secondary content visible. The main links can point to overview pages with links to content from sections or categories. Those "secondary home pages" can make secondary menu links superfluous. However, sometimes it's better to add sublevels in the menu itself. If you have items outside of the section or category structure (such as uncategorized pages), submenus can clarify the coherence of the site. You can have main ("parent") links and secondary ("child") links.

Creating split submenus

When you want to use submenus on your site, you can also choose an altogether different presentation from what you've just seen. You're not limited to having submenu items shown within the main menu; it's also possible to place main navigation links horizontally along the top of the page and display second-level links in a separate menu (for example, in a vertical menu in the left-hand side column). This creates a clear visual distinction between the main menu items and their submenu items. At the same time, the visitor can see that the two menus are related: the parent item can be displayed as "active" (using a different style or color) when the related submenu is displayed.
An example is shown in the following screenshot. The primary link, Activities, is shown in a (horizontal) main menu bar. When this link is clicked, a separate submenu shows secondary links (submenu links) to two category overview pages (Meetings and Lectures).

How do you build this kind of menu system in Joomla!? In short, you create a copy of the main menu, set the original main menu to show only the top-level links, and set the copy to show only the second-level links. Joomla! will automatically display the appropriate submenu when the parent item is chosen in the top menu. We won't add a split menu system to our example site, as it doesn't have the amount of content that would justify an elaborate multi-level navigation. However, feel free to experiment on your own website, as this is a very powerful technique. The following are the required steps in more detail:

1. Suppose you have created a Main Menu with two or more links to sublevel items. Navigate to Extensions | Module Manager.
2. Select the Main Menu module and click on Copy. The same list now contains an item called Copy of Main Menu.
3. Open that copy and enter another title (for example, Submenu). Select Position: left.
4. In the Module Parameters, set the Start Level to 1 and the End Level to 1. This will make the menu display only second-level menu items.
5. Now edit the Main Menu module to show only the top-level items: set Start Level to 0 and End Level to 0.

The menu is done! The submenus now only display when the corresponding main-level link is clicked.

Have a go hero – arrange menus any way you like

Joomla!'s split menu capabilities allow you to design exactly the type of navigation that's appropriate for your site. You can place a row of main menu links at the top of the page and position secondary (submenu) links in the left-hand side or right-hand side column. Try to figure out what arrangement of main and secondary links fits your site best and how you can realize this in Joomla!.
Here are a few suggestions (some common arrangements of site navigation) to get you going. By default, Joomla!'s main menu links are displayed as a vertical row in the left-hand side column. How can you achieve a horizontal row of main menu links, as shown in the first three images above? Have another look at Time for action – tweak the menu position and orientation, earlier in this article. It shows the easiest way to change the menu orientation in Joomla!: selecting the appropriate Menu Style in the menu module editing screen. However, there are templates that are specifically designed to support horizontal menus. These contain the appropriate module positions and module styling for a main horizontal menu (and do a much better job of this than the default Joomla! template).

Exploring menu module settings

When creating or editing a menu module, the module details and parameters allow you to control exactly where the menu is shown and how it displays. In many cases, the default settings will do, but if you really want control over your menus, it's a good idea to explore the wide range of additional possibilities these settings provide. In the Module Manager, click on the menu name (such as Main Menu or About SRUP). The Module: [Edit] screen appears. The three main sections of the Module: [Edit] screen are Details, Menu Assignment, and Parameters.
ServiceStack applications

Packt
21 Jan 2015
9 min read
In this article by Kyle Hodgson and Darren Reid, authors of the book ServiceStack 4 Cookbook, we'll learn about unit testing ServiceStack applications.

Unit testing ServiceStack applications

In this recipe, we'll focus on simple techniques to test individual units of code within a ServiceStack application. We will use the ServiceStack testing helper BasicAppHost as an application container, as it provides us with some useful helpers to inject a test double for our database. Our goal is small, fast tests that each exercise one unit of code within our application.

Getting ready

We are going to need some services to test, so we are going to use the PlacesToVisit application.

How to do it…

Create a new testing project. It's a common convention to name the testing project <ProjectName>.Tests, so in our case, we'll call it PlacesToVisit.Tests. Create a class within this project to contain the tests we'll write; let's name it PlaceServiceTests, as the tests within it will focus on the PlaceService class. Annotate this class with the [TestFixture] attribute, as follows:

[TestFixture]
public class PlaceServiceTests
{

We'll want one method that runs whenever this set of tests begins, to set up the environment, and another that runs afterwards to tear the environment down. These will be annotated with the NUnit attributes TestFixtureSetUp and TestFixtureTearDown, respectively. Let's name them FixtureInit and FixtureTearDown. In the FixtureInit method, we will use BasicAppHost to initialize our appHost test container.
We'll make it a field so that we can easily access it in each test, as follows:

ServiceStackHost appHost;

[TestFixtureSetUp]
public void FixtureInit()
{
    appHost = new BasicAppHost(typeof(PlaceService).Assembly)
    {
        ConfigureContainer = container =>
        {
            container.Register<IDbConnectionFactory>(c =>
                new OrmLiteConnectionFactory(
                    ":memory:", SqliteDialect.Provider));
            container.RegisterAutoWiredAs<PlacesToVisitRepository,
                IPlacesToVisitRepository>();
        }
    }.Init();
}

The ConfigureContainer property on BasicAppHost allows us to pass in a function that we want the AppHost to run inside of its Configure method. In this case, you can see that we're registering OrmLiteConnectionFactory with an in-memory SQLite instance. This allows us to test code that uses a database without that database actually running. This useful technique could be considered a classic unit testing approach; the mockist approach might have been to mock the database instead.

The FixtureTearDown method will dispose of appHost, as you might imagine. This is how the code will look:

[TestFixtureTearDown]
public void FixtureTearDown()
{
    appHost.Dispose();
}

We haven't created any data in our in-memory database yet. We'll want to ensure the data is the same prior to each test, so our TestInit method is a good place to do that. It will be run once before each and every test, as we'll annotate it with the [SetUp] attribute, as follows:

[SetUp]
public void TestInit()
{
    using (var db = appHost.Container
        .Resolve<IDbConnectionFactory>().Open())
    {
        db.DropAndCreateTable<Place>();
        db.InsertAll(PlaceSeedData.GetSeedPlaces());
    }
}

As our tests all focus on PlaceService, we'll make sure to create Place data. Next, we'll begin writing tests, starting with one that asserts that we can create new places.
The first step is to create the new method, name it appropriately, and annotate it with the [Test] attribute, as follows:

[Test]
public void ShouldAddNewPlaces()
{

Next, we'll create an instance of PlaceService that we can test against. We'll use the Funq IoC TryResolve method for this:

    var placeService = appHost.TryResolve<PlaceService>();

We'll want to create a new place, then query the database later to see whether the new one was added. So, it's useful to start by getting a count of how many places there are based on just the seed data. Here's how you can get that count:

    var startingCount = placeService
        .Get(new AllPlacesToVisitRequest())
        .Places
        .Count;

Since we're testing the ability to handle a CreatePlaceToVisit request, we'll need a test object that we can send to the service. Let's create one and then go ahead and post it:

    var melbourne = new CreatePlaceToVisit
    {
        Name = "Melbourne",
        Description = "A nice city to holiday"
    };

    placeService.Post(melbourne);

Having done that, we can get the updated count and then assert that there is one more item in the database than there was before:

    var newCount = placeService
        .Get(new AllPlacesToVisitRequest())
        .Places
        .Count;

    Assert.That(newCount == startingCount + 1);

Next, let's fetch the new record that was created and make an assertion that it's the one we want:

    var newPlace = placeService.Get(new PlaceToVisitRequest
    {
        Id = startingCount + 1
    });

    Assert.That(newPlace.Place.Name == melbourne.Name);
}

With this in place, if we run the test, we expect it to pass both assertions. This proves that we can add new places via the PlaceService registered with Funq, and that when we do, we can go and retrieve them later as expected. We can also build a similar test that asserts our ability to update an existing place. Adding the code is simple, following the pattern we set out previously.
We'll start with the arrange section of the test, creating the variables and objects we'll need:

[Test]
public void ShouldUpdateExistingPlaces()
{
    var placeService = appHost.TryResolve<PlaceService>();
    var startingPlaces = placeService
        .Get(new AllPlacesToVisitRequest())
        .Places;
    var startingCount = startingPlaces.Count;

    var canberra = startingPlaces
        .First(c => c.Name.Equals("Canberra"));

    const string canberrasNewName = "Canberra, ACT";
    canberra.Name = canberrasNewName;

Once they're in place, we'll act. In this case, the Put method on placeService has the responsibility for update operations:

    placeService.Put(canberra.ConvertTo<UpdatePlaceToVisit>());

Think of the ConvertTo helper method from ServiceStack as an auto-mapper that converts our Place object for us. Now that we've updated the record for Canberra, we'll proceed to the assert section of the test, as follows:

    var updatedPlaces = placeService
        .Get(new AllPlacesToVisitRequest())
        .Places;
    var updatedCanberra = updatedPlaces
        .First(p => p.Id.Equals(canberra.Id));
    var updatedCount = updatedPlaces.Count;

    Assert.That(updatedCanberra.Name == canberrasNewName);
    Assert.That(updatedCount == startingCount);
}

How it works…

These unit tests use a few different patterns that help us write concise tests, including the development of our own test helpers, along with helpers from the ServiceStack.Testing namespace. For instance, BasicAppHost allows us to set up an application host instance without actually hosting a web service.
It also lets us provide a custom ConfigureContainer action to mock any of our dependencies for our services and to seed our testing data, as follows:

appHost = new BasicAppHost(typeof(PlaceService).Assembly)
{
    ConfigureContainer = container =>
    {
        container.Register<IDbConnectionFactory>(c =>
            new OrmLiteConnectionFactory(
                ":memory:", SqliteDialect.Provider));
        container.RegisterAutoWiredAs<PlacesToVisitRepository,
            IPlacesToVisitRepository>();
    }
}.Init();

To test any ServiceStack service, you can resolve it through the application host via TryResolve<ServiceType>(). This will have the IoC container instantiate an object of the requested type, which gives us the ability to test the Get method independent of other aspects of our web service, such as validation:

var placeService = appHost.TryResolve<PlaceService>();

In this example, we are using an in-memory SQLite instance to mock our use of OrmLite for data access, which IPlacesToVisitRepository will use as well, and seeding our test data in the ConfigureContainer hook of BasicAppHost. The use of both in-memory SQLite and BasicAppHost provides fast unit tests that let us very quickly iterate on our application services while ensuring we are not breaking any functionality associated with this component. In the example provided, we are running three tests in less than 100 milliseconds.

If you are using the full version of Visual Studio, extensions such as NCrunch can run your unit tests continuously while you make changes to your code. The performance of ServiceStack components, combined with such extensions, results in a smooth developer experience, with high productivity and quality of code.

There's more…

In the examples in this article, we wrote tests that would pass, ran them, and saw that they passed (no surprise). While this makes explaining things a bit simpler, it's not really a best practice.
You generally want to make sure your tests fail when presented with wrong data at some point. The authors have seen many cases where subtle bugs in test code were causing a test to pass that should not have passed. One best practice is to write tests so that they fail first, and then make them pass; this guarantees that the test can actually detect the defect you're guarding against. This is commonly referred to as the red/green/refactor pattern.

Summary

In this article, we covered some techniques to unit test ServiceStack applications.

Further resources on this subject:

- Building a Web Application with PHP and MariaDB – Introduction to caching
- Web API and Client Integration
- WebSockets in Wildfly