
How-To Tutorials - Programming

1083 Articles

Animation in Silverlight 4

Packt
20 Apr 2010
8 min read
Silverlight sports a rich animation system that is surprisingly easy to use. The animation model in Silverlight is time based, meaning that movements occur along a set timeline. At the heart of every animation is a Storyboard, which contains all the animation data and its own independent timeline. Silverlight controls can contain any number of Storyboards. Storyboards contain one or more key frame elements, which are responsible for making objects on screen change position, color, or any number of other properties. There are four general types of key frames in Silverlight 4: Linear, Discrete, Spline, and Easing. Linear key frames interpolate values evenly over time, Discrete key frames jump straight to the new value with no interpolation, Spline key frames interpolate along a Bézier curve, and Easing key frames apply an easing function to the interpolation.

Very different from Flash

The animation model in Silverlight is markedly different from the one found in Adobe Flash. Animations in Flash are frame-based, whereas in Silverlight they are time-based. The term storyboard comes from the motion picture industry, where scenes are drawn out before they are filmed.

Time for action – animation time

The client would like to transform their text-only logo into something a little more elaborate. The designers have once again given us a XAML snippet of code exported from their graphic design tool. We will need to do the following:

Open up the CakeORama logo project in Blend. Blend should have automatically loaded the MainControl.xaml file and your screen should look like this:

In the Objects and Timeline tab, you'll see a list of objects that make up this vector drawing. There is a Path object for every character. Let's add an animation. On the Objects and Timeline tab, click the plus sign (+) to create a new Storyboard. In the Create Storyboard Resource dialog, type introAnimationStoryboard into the text box and click OK.

You'll notice a couple of changes to your screen. For one, the art board is surrounded by a red border and a notification that introAnimationStoryboard timeline recording is on, just like in this screenshot:

If you take a look at the Objects and Timeline tab, you'll see the timeline for our newly created introAnimationStoryboard. Let's add a key frame at the very beginning. The vertical yellow line is the play head, which marks where you currently are in the timeline. Select the canvas1 object. You can switch to the Animation Workspace in Blend by pressing F6. Click on the square icon with a green plus sign to create a new key frame here at position 0. A white oval appears representing the key frame that you just created. It should look similar to the following screenshot:

Move the play head to 0.7 seconds by clicking on the tick mark to the immediate left of the number 1. Click the same button you did in step 9 to create a new key frame here so that your timeline looks like this:

Move the play head back to zero. Make sure the canvas1 object is still selected, then click and drag the logo graphic up, so that all of it is in the grey area. This moves the logo "off stage". Hit the play button highlighted in the screenshot below to preview the animation and enjoy the show!

Now all we need to do is tell Silverlight to run the animation when our control loads, but first we need to get out of recording mode. To do this, click the x button on the Objects and Timeline tab. Click on [UserControl] in the Objects and Timeline tab. On the Properties tab, you'll see an icon with a lightning bolt on it.
Click on it to see the events associated with a UserControl object. To wire up an event handler for the Loaded event, type UserControl_Loaded in the text box next to Loaded, as shown in the next screenshot. Once you hit Enter, the code behind will immediately pop up with your cursor inside the event handler method. Add this line of code to the method:

introAnimationStoryboard.Begin();

Run the solution via the menu bar or by pressing F5. You should see the logo graphic smoothly and evenly animate into view. If for some reason the animation doesn't get displayed, refresh the page in your browser. You should see it now.

What just happened?

You just created your first animation in Silverlight. First you created a Storyboard and then added a couple of key frames. You changed the properties of the canvas on one key frame and Silverlight automatically interpolated the in-between values to create a nice, smooth animation.

If your animation didn't show up on the initial page load but did when you reloaded the page, then you've just experienced how seriously the Silverlight animation engine respects time. Since our animation length is relatively short (0.7 seconds), it's possible that more than that amount of time elapsed between the call to the Begin method and the moment your computer finished rendering. Silverlight noticed that and "jumped" ahead to that part of the timeline to keep everything on schedule.

Just like we did before, let's take a look at the XAML to get a better feel for what's really going on. You'll find the Storyboard XAML in the UserControl.Resources section towards the top of the document. Don't worry if the values are slightly different in your project:

<Storyboard x:Name="introAnimationStoryboard">
  <DoubleAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetName="canvas1" Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[3].(TranslateTransform.Y)">
    <EasingDoubleKeyFrame KeyTime="00:00:00" Value="-229"/>
    <EasingDoubleKeyFrame KeyTime="00:00:00.7000000" Value="0"/>
  </DoubleAnimationUsingKeyFrames>
  <DoubleAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetName="canvas1" Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[3].(TranslateTransform.X)">
    <EasingDoubleKeyFrame KeyTime="00:00:00" Value="1"/>
    <EasingDoubleKeyFrame KeyTime="00:00:00.7000000" Value="0"/>
  </DoubleAnimationUsingKeyFrames>
</Storyboard>

There are a couple of things going on here, so let's dissect the animation XAML, starting with the Storyboard declaration, which creates a Storyboard and assigns the name we gave it in the dialog box:

<Storyboard x:Name="introAnimationStoryboard">

That's easy enough, but what about the next node? This line tells the Storyboard that we will be modifying a Double value starting at 0 seconds. It also specifies a target for our animation, canvas1, and a property on that target:

<DoubleAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetName="canvas1" Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[3].(TranslateTransform.Y)">

Clear enough, but what does the TargetProperty value mean? Here is that value, highlighted below:

(UIElement.RenderTransform).(TransformGroup.Children)[3].(TranslateTransform.Y)

We know that the net effect of the animation is that the logo moves from above the visible area back to its original position.
If we're familiar with X, Y coordinates, where X represents a horizontal coordinate and Y a vertical coordinate, then the TranslateTransform.Y part makes sense. We are changing or, in Silverlight terms, transforming the Y property of the canvas. But what's all this TransformGroup about? Take a look at our canvas1 node further down in the XAML. You should see the following lines of XAML that weren't there earlier:

<Canvas.RenderTransform>
  <TransformGroup>
    <ScaleTransform/>
    <SkewTransform/>
    <RotateTransform/>
    <TranslateTransform/>
  </TransformGroup>
</Canvas.RenderTransform>

Blend automatically inserted them into the Canvas when we created the animation. They have no properties set; think of them as stubbed declarations of these objects. If you remove them, Silverlight will throw an exception at runtime, like the one below, complaining about not being able to resolve TargetProperty:

Clearly this code is important, but what's really going on here? The TranslateTransform object is a type of Transform object, which determines how an object can change in Silverlight. Transforms are packaged in a TransformGroup, which can be set as the RenderTransform property on any object descending from UIElement, the base class for any kind of visual element. With that bit of knowledge, we now see that (TransformGroup.Children)[3] refers to the fourth element in a zero-based collection. Not so coincidentally, the TranslateTransform node is the fourth item inside the TransformGroup in our XAML. Changing the order of the transforms in the XAML will also cause an exception at runtime.

That line of XAML tells the Silverlight runtime what we're going to animate; now we tell it how and when with our two EasingDoubleKeyFrame nodes:

<EasingDoubleKeyFrame KeyTime="00:00:00" Value="-229"/>
<EasingDoubleKeyFrame KeyTime="00:00:00.7000000" Value="0"/>

The first EasingDoubleKeyFrame node tells Silverlight that, at zero seconds, we want the value to be -229. This corresponds to when the logo was above the visible area. The second EasingDoubleKeyFrame node tells Silverlight that at 0.7 seconds, we want the value of the property to be 0. This corresponds to the initial state of the logo, where it was before any transformations were applied. Silverlight handles all changes to the value in between the start and the end point. Silverlight's default frame rate is 60 frames per second, but Silverlight will adjust its frame rate based on the hardware it is running on. Silverlight can adjust the amount by which it changes the values to keep the animation on schedule. If you had to reload the web page to see the animation run, then you've already experienced this. Once again, notice how few lines (technically only one line) of procedural code you had to write.
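That one line of procedural code lives in the Loaded event handler we wired up earlier. For reference, the complete handler in the code behind looks roughly like this; a minimal C# sketch assuming the default handler signature that Blend generates for a Loaded event:

private void UserControl_Loaded(object sender, RoutedEventArgs e)
{
    // Start the intro animation as soon as the control has loaded.
    introAnimationStoryboard.Begin();
}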

Understanding Expression Blend and How to Use it with Silverlight 4

Packt
16 Apr 2010
5 min read
Creating applications in Expression Blend

What we've done so far falls short of some of the things you may have already seen and done in Silverlight. Hand editing XAML, assisted by IntelliSense, works just fine to a point, but creating anything complex requires another tool to assist with turning our vision into code. IntelliSense is a feature of Visual Studio and Blend that auto-completes text when you start typing a keyword, method, or variable name.

Expression Blend may scare off developers at first with its radically different interface, but if you look more closely, you'll see that Blend has a lot in common with Visual Studio. For starters, both tools use the same Solution and Project file format. That means they are 100% compatible, which enables tighter integration between developers and designers. You could even have the same project open in both Visual Studio and Blend at the same time. Just be prepared to see the File Modified dialog box, like the one below, when switching between the two applications:

If you've worked with designers on a project before, they typically mock up an interface in a graphics program and ship it off to the development team. Many times, a simple graphic embellishment can cause us developers to develop heartburn. Anyone who's ever had to implement a rounded corner in HTML knows the special kind of frustration that it brings along. Here's the good news: those days are over with Silverlight.

A crash course in Expression Blend

In the following screenshot, our CakeNavigationButtons project is loaded into Expression Blend. Blend can be a bit daunting at first for developers who are used to Visual Studio, as Blend's interface is dense with a lot of subtle cues. Solutions and projects are opened in Blend in the same manner as in Visual Studio. Just like in Visual Studio, you can customize Expression Blend's interface to suit your preference. You can move tabs around, and dock and undock them to create a workspace that works best for you, as the following screenshot demonstrates:

Looking at the CakeNavigationButtons project, on the left hand side of the application window you have the toolbar, which is substantially different from the toolbox in Visual Studio. The toolbar in Blend more closely resembles the toolbar in graphics editing software such as Adobe Photoshop or Adobe Illustrator. If you move the mouse over each button, you will see a tooltip that tells you what that button does, as well as the button's keyboard shortcut.

In the upper-left corner, you'll notice a tab labeled Projects. This is functionally equivalent to the Solution Explorer in Visual Studio. The asterisk next to MainPage.xaml indicates that the file has unsaved changes. Examine the next screenshot to see Blend's equivalent to Visual Studio's Solution Explorer:

In the following screenshot, we find the Document tab control and the design surface, which Blend calls the art board. On the upper-right of the art board, there are three small buttons to switch between Design view, XAML view, and Split view. On the lower edge of the art board, there are controls to modify the view of the design surface. You can zoom in to take a closer look, turn on snap grid visibility, and turn snapping to snap lines on or off.

If we then move to the upper-right corner of the next screen, we will see the Properties tab, which is a much more evolved version of the Properties tab in Visual Studio. As you can see in this screenshot, the color picker has a lot more to offer.
There's also a search feature that narrows down the items in the tab based on the property name you type in.

At the lower-left side of the next screen is the Objects and Timeline view, which shows the object hierarchy of the open document. Since we have the MainPage.xaml of our CakeNavigationButtons project open, the view shows a StackPanel with six Buttons, all inside a Grid named LayoutRoot, inside a UserControl. Clicking on an item in this view selects the item on the art board and vice versa. Expression Blend is an intricate and rich application.

Time for action – styles revisited

Earlier in this chapter, we created and referenced a style directly in the XAML in Visual Studio. Let's modify the style we made in Blend to see how to do it graphically. To do this, we will need to:

Open up the CakeNavigationButtons solution in Expression Blend. In the upper-right corner, there are three tabs (Properties, Resources, and Data). On the Resources tab, expand the tree node marked [UserControl] and click on the button highlighted below to edit the [Button default] resource. Your art board should look something like this:

Control templates, Visual State Manager, and Event Handlers in Silverlight 4

Packt
16 Apr 2010
7 min read
Skinning a control

So far, you've seen that while styles can change the look of a control, they can only go so far. No matter how many changes we make, the buttons still look like old-fashioned buttons. Surely, there must be a way to customize a control further to match our creative vision. There is a way: it's called skinning.

Controls in Silverlight are extremely flexible and customizable. This flexibility stems from the fact that controls have both a VisualTree and a LogicalTree. The VisualTree deals with all the visual elements in a control, while the LogicalTree deals with all the logical elements. All controls in Silverlight come with a default template, which defines what a control should look like. You can easily override this default template by redefining a control's visual tree with a custom one.

Designers can either work directly with XAML in Blend or use a design tool that supports exporting to XAML. Expression Design is one such tool. You can also import artwork from Adobe Illustrator and Adobe Photoshop from within Blend. In our scenario, let us pretend that there is a team of graphic designers. From time to time, the graphic designers will provide us with visual elements and, if we're lucky, snippets of XAML. In this case, the designers have sent us the XAML for a rectangle and gradient for us to base our control on:

<Rectangle Stroke="#7F646464" Height="43" Width="150" StrokeThickness="2" RadiusX="15" RadiusY="15" VerticalAlignment="Top">
  <Rectangle.Fill>
    <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
      <GradientStop Color="#FFEE9D9D" Offset="0.197"/>
      <GradientStop Color="#FFFF7D7D" Offset="0.847"/>
      <GradientStop Color="#FFF2DADA" Offset="0.066"/>
      <GradientStop Color="#FF7E4F4F" Offset="1"/>
    </LinearGradientBrush>
  </Rectangle.Fill>
</Rectangle>

After inputting the above XAML, you will be presented with this image:

We need to make this rectangle the template for our buttons.

Time for action – skinning a control

We're going to take the XAML snippet above and skin our buttons with it. In order to achieve this, we will need to do the following:

Open up the CakeNavigationButtons project in Blend. In the MainPage.xaml file, switch to XAML view, either by clicking the XAML button on the upper-right corner of the art board or choosing View|Active Document View|XAML from the menu bar. Type in the following XAML after the closing tag for the StackPanel (</StackPanel>):

<Rectangle Stroke="#7F646464" Height="43" Width="150" StrokeThickness="2" RadiusX="15" RadiusY="15" VerticalAlignment="Top">
  <Rectangle.Fill>
    <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
      <GradientStop Color="#FFEE9D9D" Offset="0.197"/>
      <GradientStop Color="#FFFF7D7D" Offset="0.847"/>
      <GradientStop Color="#FFF2DADA" Offset="0.066"/>
      <GradientStop Color="#FF7E4F4F" Offset="1"/>
    </LinearGradientBrush>
  </Rectangle.Fill>
</Rectangle>

Switch back to Design view, either by clicking on the appropriate button on the upper-right corner of the art board or choosing View|Active Document View|Design View from the menu bar. Right-click on the rectangle and click on Make Into Control. In the dialog box, choose Button, change the Name (Key) field to navButtonStyle and click OK.

You are now in template editing mode. There are two on-screen indicators that you are in this mode: one is the Objects and Timeline tab, and the other is the MainControl.xaml breadcrumb at the top of the art board:

Click on the up button to exit template editing mode. Delete the button that our Rectangle was converted into.
Select all the buttons in the StackPanel by clicking on the first one and then Shift+clicking on the last one. With all the buttons selected, go to the Properties tab and type Style into the search box. Using the techniques you've learned in this chapter, change the style to navButtonStyle, so that your screen now looks like this:

The result is still not quite what we're looking for, but it's close. We need to increase the font size again; fortunately, we know how easy that is in Blend. Click on one of the buttons and choose Object|Edit Style|Edit Current from the menu bar to get into style editing mode. Make note of all the visual indicators. In the Properties tab, change the FontSize to 18, the Cursor to Hand, the Height to 45, and the Width to 200. You should see the changes immediately. The cursor change will only be noticeable at run time. Exit the style editing mode.

There is a slight problem with the last button; the font is a little too large. Click on the button and use the Properties tab to change the FontSize to 12. Run the project and your application will look something like this:

Run your mouse over the buttons. The buttons no longer react when you mouse over them; we'll fix that next.

What just happened?

We just took a plain old button and turned it into something a little more in line with the graphic designers' vision. But how did we do it?

When in doubt, look at the XAML

The nice thing about Silverlight is that you can always take a look at the XAML to get a better understanding of what's going on. There are many places where things can "hide" in a tool like Blend or even Visual Studio. The raw, naked XAML, however, bares all. For starters, we took a chunk of XAML and, using Blend, told Silverlight that we wanted to "take control" over how this button looks. This data was encapsulated into a Style and we told all our buttons to use our new style. When the new style was created, we lost some of our formatting data. We then inserted it back in and added a few more properties. If you're really curious to see what's going on, let's take a closer look at the XAML that Blend just generated for us:

<Style TargetType="Button">
  <Setter Property="FontSize" Value="18.667"/>
  <Setter Property="Background" Value="Red"/>
  <Setter Property="FontStyle" Value="Italic"/>
  <Setter Property="FontWeight" Value="Bold"/>
  <Setter Property="Cursor" Value="Hand"/>
  <Setter Property="Margin" Value="5"/>
</Style>
<Style x:Key="smallerTextStyle" TargetType="Button">
  <Setter Property="FontSize" Value="9"/>
</Style>
<Style x:Key="navButtonStyle" TargetType="Button">
  <Setter Property="Template">
    <Setter.Value>
      <ControlTemplate TargetType="Button">
        <Grid>
          <Rectangle RadiusY="15" RadiusX="15" Stroke="#7F646464" StrokeThickness="2">
            <Rectangle.Fill>
              <LinearGradientBrush EndPoint="0.5,1" StartPoint="0.5,0">
                <GradientStop Color="#FFEE9D9D" Offset="0.197"/>
                <GradientStop Color="#FFFF7D7D" Offset="0.847"/>
                <GradientStop Color="#FFF2DADA" Offset="0.066"/>
                <GradientStop Color="#FF7E4F4F" Offset="1"/>
              </LinearGradientBrush>
            </Rectangle.Fill>
          </Rectangle>
          <ContentPresenter HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}" VerticalAlignment="{TemplateBinding VerticalContentAlignment}"/>
        </Grid>
      </ControlTemplate>
    </Setter.Value>
  </Setter>
  <Setter Property="FontSize" Value="24"/>
  <Setter Property="Cursor" Value="Hand"/>
  <Setter Property="Height" Value="45"/>
  <Setter Property="Width" Value="200"/>
</Style>

You'll immediately notice how verbose XAML can be.
We've not done a great deal of work, yet we've generated a lot of XAML. This is where a tool like Blend really saves us all those keystrokes. The next thing you'll see is that we're actually setting the Template property inside of a Setter node of a Style definition. It's not until toward the end of the Style definition that we see the Rectangle we started with.

There's also a lot of code here devoted to something called the Visual State Manager. Prior to us changing the control's template, you'll remember that when you moved your mouse over any of the buttons, they reacted by changing color. This was nice, subtle feedback for the user. Now that it's gone, we really miss it, and so will our users. If you carefully study the XAML, it should come as no surprise that the button doesn't do anything other than just sit there: we've not defined anything for any of the states listed here. The nodes are blank. Let's do that now.
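To give a sense of where this is heading, here is one possible shape of a filled-in MouseOver state inside the template's root Grid. This is a hedged sketch rather than the exact markup Blend will generate for us: the element name backgroundRect and the highlight color are invented for illustration, and the Rectangle inside the template would need that x:Name added so the animation can find it.

<Grid>
  <VisualStateManager.VisualStateGroups>
    <VisualStateGroup x:Name="CommonStates">
      <VisualState x:Name="Normal"/>
      <VisualState x:Name="MouseOver">
        <Storyboard>
          <!-- Lighten one of the gradient stops while the mouse is over the button -->
          <ColorAnimation Duration="0:0:0.2"
                          Storyboard.TargetName="backgroundRect"
                          Storyboard.TargetProperty="(Shape.Fill).(GradientBrush.GradientStops)[1].(GradientStop.Color)"
                          To="#FFFFB3B3"/>
        </Storyboard>
      </VisualState>
    </VisualStateGroup>
  </VisualStateManager.VisualStateGroups>
  <Rectangle x:Name="backgroundRect" RadiusY="15" RadiusX="15" Stroke="#7F646464" StrokeThickness="2">
    <!-- Same gradient fill as before -->
  </Rectangle>
  <ContentPresenter HorizontalAlignment="{TemplateBinding HorizontalContentAlignment}"
                    VerticalAlignment="{TemplateBinding VerticalContentAlignment}"/>
</Grid>

The idea is the same in whatever markup Blend produces: each VisualState holds a Storyboard that runs when the control enters that state.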

Navigation Widgets and Styles in Microsoft Silverlight 4

Packt
09 Apr 2010
5 min read
Retrofitting a website

The first thing the client would like to do to their website is spice it up with a new navigation control and a playful, interactive logo for the top of the page. First, we'll work on a navigation control to replace the text links on the left hand side of the page of the current website. As you will notice in the following image, the current website navigation mechanism isn't fancy, but it's simple. However, the client would like the website to be more modern, while preserving ease of use.

Adding pizzazz with Silverlight

Cake-O-Rama would like to add a fancy navigation widget to their site. They've commissioned a graphic artist to create the following look for the widget.

A few words on search engine optimization

We could easily create a Silverlight application that would encompass all the content and functionality of a whole website. However, doing so would severely limit the website's visibility to search engines. Search engines have programs called spiders, or robots, that 'crawl' the internet scanning for content. Generally, these programs can only see text exposed in the HTML, and search results are ranked based on this text-only content. Placing all our content inside a rich internet application platform like Silverlight would effectively hide all of our content. The net result would be reduced visibility on search engines. All Rich Internet Application (RIA) platforms have this issue with search engine visibility. Until this problem is resolved, the best approach is to augment the page's HTML content on sites that you want to be found more easily by search engines.

Building a navigation control from the ground up

Silverlight 4 has the Grid, Canvas, StackPanel, Border, WrapPanel, Viewbox, and ScrollViewer layout panels. Why are there so many? Well, each one serves a unique purpose.

Picking the right kind of container

You wouldn't fill a cardboard box with water or drink milk out of a gasoline can, would you? The same could be said of the various layout containers in Silverlight: each one serves a unique purpose and some are better at certain tasks than others. For instance, when you want to create a toolbar, you would probably use a StackPanel or WrapPanel, and not a Canvas. Why? While you could manually code the layout logic to place all the child controls, there's no good reason to. After all, there are already controls to do the heavy lifting for you. Below are the most common layout containers in Silverlight 4 and their layout behavior:

Canvas: Manual positioning of items using X and Y coordinates
Grid: Lays out items using a defined grid of rows and columns
InkPresenter: A Canvas that can also handle digital ink
StackPanel: Stacks items on top of or next to one another
WrapPanel: Lines up items and wraps them around
Border: Draws a border around an item
Viewbox: Scales an item up to take up all the available space
ScrollViewer: Places a scroll bar around the control

Silverlight also provides the means to write your own layout code. While there may be situations where this is warranted, first think about how you can achieve the desired result with a combination of the existing containers.

Stack it up: Using the StackPanel

Based on the website's current navigation links, StackPanel seems like the best choice. As the name implies, it lays out child controls in a stack, which seems like a good fit for our list of links.

Time for action – building navigation buttons in Silverlight

Now, let's make a StackPanel of button controls to navigate around the site.
In order to do this, we will need to do the following:

Launch Visual Studio 2010 and click on File|New Project. Choose to create a new Silverlight Application as shown in the next screen. Name the project CakeNavigationButtons and click OK to accept the default settings. In the MainPage.xaml file, write the following lines of XAML inside the Grid tag:

<StackPanel>
  <Button Content="Home"/>
  <Button Content="Gallery"/>
  <Button Content="Order"/>
  <Button Content="Locations"/>
  <Button Content="Contact Us"/>
  <Button Content="Franchise Opportunities"/>
</StackPanel>

Return to Visual Studio 2010 and click on Debug|Start Debugging or press F5 to launch the application. On the following screen, click OK to enable debugging. Your application should look something like this:

We have now created a StackPanel of button controls to navigate around the website using Silverlight, but the application is not exactly visually appealing, not to mention that the buttons don't do anything. What we need them to do is reflect the design we've been provided with and navigate to a given page when the user clicks on them.

What just happened?

What we created here is the foundation for what will eventually become a dynamic navigation control. You have created a new Silverlight application, added a StackPanel, and then added button controls to it. Now, let's move on to make this little navigation bar sparkle.

Adding a little style with Styles

Many people refer to Silverlight controls as being "lookless", which may sound strange at first as they clearly have a "look". The term refers to the fact that the logic in a control defines its behavior rather than its appearance. That means that all the controls you've seen in Silverlight so far have no presentation logic in them. Their look comes from a default resource file. The good news is that we can create our own resources to customize the look of any control. You can re-style a control in Silverlight in much the same way as you can in Cascading Style Sheets (CSS).
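As a preview of what that looks like in markup, the sketch below defines a style as a resource and points a button at it with a StaticResource reference, much as a CSS class is defined once and applied to many elements. The key name and property values here are placeholders for illustration, not the style we build later in the chapter:

<UserControl.Resources>
  <!-- Hypothetical style resource; the key and values are placeholders -->
  <Style x:Key="navigationButtonStyle" TargetType="Button">
    <Setter Property="FontSize" Value="16"/>
    <Setter Property="Margin" Value="5"/>
  </Style>
</UserControl.Resources>

<!-- Applying the style to a button, similar to applying a CSS class -->
<Button Content="Home" Style="{StaticResource navigationButtonStyle}"/>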

Service Oriented Java Business Integration Proxy

Packt
07 Apr 2010
9 min read
Proxy—A Primer

Wikipedia defines Proxy as: "Proxy may refer to something which acts on behalf of something else." In software, a proxy is a substitute for a target instance; it is a general pattern which appears in many other patterns in different variants.

Proxy Design Pattern

A proxy is a surrogate class for the target object. If a method call has to be invoked on the target object, it happens indirectly through the proxy object. The feature which makes the proxy ideal for many situations is that the client or caller is not aware that it is dealing with a proxy object. The proxy class is shown in the following figure:

In the above figure, when a client invokes a method towards the Target service, the proxy intercepts the call in between. The proxy also exposes an interface similar to the target's, so the client is unaware that it is dealing with the proxy, and the proxy's method is invoked instead. The proxy then delegates the call to the actual target, since it cannot provide the actual functionality itself. When doing so, the proxy can provide call management around the actual method. The entire dynamic is shown in the following figure:

A proxy is usually implemented by using a common, shared interface or super class. Both the proxy and the target share this common interface. The proxy then delegates the calls to the target class.

JDK Proxy Class

The JDK provides both the class Proxy and the interface InvocationHandler in the java.lang.reflect package, since version 1.3. Using the JDK Proxy classes, you can create your own classes implementing multiple interfaces of your choice, at run time. Proxy is the super class for any dynamic proxy instances you create at run time. Moreover, the Proxy class also provides a host of static methods which will help you create your proxy instances; getProxyClass and newProxyInstance are two such utility methods. The Proxy API is listed below in brief:

package java.lang.reflect;

public class Proxy implements java.io.Serializable
{
    protected InvocationHandler h;
    protected Proxy(InvocationHandler h);
    public static InvocationHandler getInvocationHandler(Object proxy) throws IllegalArgumentException;
    public static Class<?> getProxyClass(ClassLoader loader, Class<?>... interfaces) throws IllegalArgumentException;
    public static boolean isProxyClass(Class<?> cl);
    public static Object newProxyInstance(ClassLoader loader, Class<?>[] interfaces, InvocationHandler h) throws IllegalArgumentException;
}

In the above code, you can invoke Proxy.getProxyClass with a class loader and an array of interfaces for which you need a proxy, to get a Class instance for the proxy. Proxy objects have one constructor, to which you pass an InvocationHandler object associated with that proxy. When you invoke a method on the proxy instance, the method invocation is encoded and dispatched to the invoke method of its invocation handler. Let us also look at the InvocationHandler API, reproduced as follows:

package java.lang.reflect;

public interface InvocationHandler
{
    Object invoke(Object proxy, Method method, Object[] args) throws Throwable;
}

We need to implement this interface and provide code for the invoke method. First, you get a Class instance for the proxy by invoking Proxy.getProxyClass with a class loader and an array of interfaces for which you need a proxy. You can then get a Constructor object for this proxy from the Class instance. On the constructor you can use newInstance (passing in an invocation handler instance) to create the proxy instance.
The created instance implements all the interfaces that were passed to getProxyClass. The steps are shown in the following code:

InvocationHandler handler = new SomeInvocationHandler(...);
Class proxyClazz = Proxy.getProxyClass(Blah.class.getClassLoader(), new Class[] { Blah.class });
Blah blah = (Blah) proxyClazz.getConstructor(new Class[] { InvocationHandler.class }).newInstance(new Object[] { handler });

There is also a shortcut to get a proxy object. You can invoke Proxy.newProxyInstance, which takes a class loader, an array of interface classes, and an invocation handler instance:

InvocationHandler handler = new SomeInvocationHandler(...);
Blah blah = (Blah) Proxy.newProxyInstance(Blah.class.getClassLoader(), new Class[] { Blah.class }, handler);

Now you can invoke methods on the proxy object, and these method invocations are turned into calls to the invocation handler's invoke method, as shown here:

blah.interfaceMethod();

Sample JDK Proxy Class

We will now write some simple code to demonstrate how you can write your own proxies at run time for your interface classes. As a first step, if you haven't done it before, edit examples.PROPERTIES and change the paths there to match your development environment. We will now look at the source code that can be found in the folder ch13/JdkProxy/src. The files are explained here:

ch13/JdkProxy/src/SimpleIntf.java

public interface SimpleIntf
{
    public void print();
}

SimpleIntf is a simple interface with a single method, print. print does not accept any parameters and does not return any value. Our aim is that when we invoke methods on the proxy object for SimpleIntf, the method invocation should be turned into calls to an invocation handler's invoke method. Let us now define an invocation handler in the following code:

ch13/JdkProxy/src/SimpleInvocationHandler.java

import java.lang.reflect.InvocationHandler;
import java.io.Serializable;
import java.lang.reflect.Method;

public class SimpleInvocationHandler implements InvocationHandler, Serializable
{
    public SimpleInvocationHandler() {}

    public Object invoke(final Object obj, Method method, Object[] args) throws Throwable
    {
        if (method.getName().equals("print") && (args == null || args.length == 0))
        {
            System.out.println("SimpleInvocationHandler.invoked");
        }
        else
        {
            throw new IllegalArgumentException("Interface method does not support param(s) : " + args);
        }
        return null;
    }
}

Since SimpleIntf.print() does not accept any parameters and does not return any value, in the invoke method of SimpleInvocationHandler we double-check the intention of the actual invoker. In other words, we check that no parameters are passed and we always return null. Now we have all the necessary classes to implement a proxy for the SimpleIntf interface. Let us execute it by writing a Test class:

ch13/JdkProxy/src/Test.java

import java.lang.reflect.Proxy;
import java.lang.reflect.InvocationHandler;

public class Test
{
    public static void main(String[] args)
    {
        InvocationHandler handler = new SimpleInvocationHandler();
        SimpleIntf simpleIntf = (SimpleIntf) Proxy.newProxyInstance(SimpleIntf.class.getClassLoader(), new Class[] { SimpleIntf.class }, handler);
        simpleIntf.print();
    }
}

The wiring of the above interfaces and classes is better represented in the UML class diagram in the following figure:

The above figure shows the relationship between the various classes and interfaces in the sample. The $Proxy0 class represents the actual proxy class generated on the fly, and as you can deduce from the class diagram,
$Proxy0 is a type of our interface (SimpleIntf). To build the sample, first change directory to ch13/JdkProxy and execute ant as shown here:

cd ch13/JdkProxy
ant

The command ant run will execute the Test class, which will print out the following in the console:

ServiceMix JBI Proxy

Java proxies for JBI endpoints can be created in ServiceMix using JSR181 components. For this, the requirement is that the JBI endpoints should expose a WSDL. A jsr181:endpoint takes a value for the serviceInterface attribute, and the JBI container will be able to generate the WSDL out of this serviceInterface. Thus, if we have a jsr181:endpoint exposing a service to the JBI bus, it is possible to provide a proxy for that service too. The basic configuration for defining a JBI proxy is shown as follows:

<jsr181:proxy id="proxyBean" container="#jbi" interfaceName="test:HelloPortType" type="test.Hello" />

Once a proxy is defined, it can then be referenced from your client bean or from one of your components. The proxied JBI endpoint can then be invoked just like a normal POJO. If you want to define a JBI proxy within a SU (service unit), you can follow the configuration given as follows:

<jsr181:endpoint annotations="none" service="test:echoService" serviceInterface="test.Echo">
  <jsr181:pojo>
    <bean class="test.EchoProxy">
      <property name="echo">
        <jsr181:proxy service="test:EchoService" context="#context" type="test.IService" />
      </property>
    </bean>
  </jsr181:pojo>
</jsr181:endpoint>

Let us now look into a few examples to make the concept clearer.

JBI Proxy Sample Implementing Compatible Interface

First, we will create a JBI proxy implementing an interface compatible with the target service. Then, in place of the target service, we will use the proxy instance, so that any calls intended for the target service will first be routed to the proxy. The proxy in turn will delegate the call to the target service. The structural relationship between the various classes participating in the interaction is shown in the following figure:

Here, EchoProxyService is the class which we later expose on the JBI bus as the service. This class implements the IEcho interface. In order to demonstrate the proxy, EchoProxyService doesn't implement the service as such; instead it depends on the JBI proxy derived from another class, TargetService. The TargetService contains the actual service code. As you can see, both the EchoProxyService and the TargetService implement the same interface.

Proxy Code Listing

The codebase for the sample is located in the folder ch13/JbiProxy/1_CompatibleInterface/1_JsrProxy/src. This folder contains an interface IEcho and two other classes implementing the IEcho interface, namely EchoProxyService and TargetService. These classes are explained here:

IEcho.java: The IEcho interface declares a single method echo, which takes a String parameter and returns a String.

public interface IEcho
{
    public String echo(String input);
}

EchoProxyService.java: EchoProxyService is a convenience class which acts as the mechanism for routing requests to the JBI proxy. EchoProxyService implements the above interface IEcho.

public class EchoProxyService implements IEcho
{
    private IEcho echo;

    public void setEcho(IEcho echo)
    {
        this.echo = echo;
    }

    public String echo(String input)
    {
        System.out.println("EchoProxyService.echo. this = " + this);
        return echo.echo(input);
    }
}

TargetService.java: TargetService also implements the interface IEcho. TargetService is our target service, and we will be generating a JBI proxy for the TargetService.
public class TargetService implements IEcho
{
    public String echo(String input)
    {
        System.out.println("TargetService.echo : String. this = " + this);
        return input;
    }
}
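The listing stops short of the xbean.xml that ties these classes together, but based on the generic jsr181 configuration shown earlier, the wiring would look roughly like the sketch below. The namespace prefix, service names, and the exact attribute values are assumptions made for illustration; they are not the exact configuration shipped with the sample.

<!-- Hypothetical SU configuration: expose TargetService, and expose EchoProxyService
     whose "echo" property is a JBI proxy pointing at the target endpoint -->
<jsr181:endpoint annotations="none" service="test:targetService" serviceInterface="IEcho">
  <jsr181:pojo>
    <bean class="TargetService"/>
  </jsr181:pojo>
</jsr181:endpoint>

<jsr181:endpoint annotations="none" service="test:echoProxyService" serviceInterface="IEcho">
  <jsr181:pojo>
    <bean class="EchoProxyService">
      <property name="echo">
        <jsr181:proxy service="test:targetService" context="#context" type="IEcho"/>
      </property>
    </bean>
  </jsr181:pojo>
</jsr181:endpoint>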

The core principles of a service-oriented architecture with BizTalk Server 2009

Packt
06 Apr 2010
11 min read
So what exactly is a service? A service is essentially a well-defined interface to an autonomous chunk of functionality, which usually corresponds to a specific business process. That might sound a lot like a regular old object-oriented component to you. While both services and components have commonality in that they expose discrete interfaces of functionality, a service is more focused on the capabilities offered than on the packaging. Services are meant to be higher-level, business-oriented offerings that provide technology abstraction and interoperability within a multipurpose "services" tier of your architecture.

What makes up a service? Typically you'll find:

Contract: Explains what operations the service exposes, the types of messages and exchange patterns supported by this service, and any policies that explain how this service is used.
Messages: The data payload exchanged between the service consumer and provider.
Implementation: The portion of the service which actually processes the requests, executes the expected business functionality, and optionally returns a response.
Service provider: The host of the service, which publishes the interface and manages the lifetime of the service.
Service consumer: Ideally, a service has someone using it. The service consumer is aware of the available service operations and knows how to discover the provider and determine what type of messages to transmit.
Facade: Optionally, a targeted facade may be offered to particular service consumers. This sort of interface may offer a more simplified perspective on the service, or provide a coarse-grained avenue for service invocation.

What is the point of building a service? I'd say it's to construct an asset capable of being reused, which means that it's a discrete, discoverable, self-describing entity that can be accessed regardless of platform or technology. Service-oriented architecture is defined as an architectural discipline based on loosely-coupled, autonomous chunks of business functionality which can be used to construct composite applications. Through the rest of this article we get a chance to flesh out many of the concepts that underlie that statement. Let's go ahead and take a look at a few of the principles and characteristics that I consider most important to a successful service-oriented BizTalk solution. As part of each one, I'll explain the thinking behind the principle and then call out how it can be applied to BizTalk Server solutions.

Loosely coupled

Many of the fundamental SOA principles actually stem from this particular one. In virtually all cases, some form of coupling between components is inevitable. The only way we can effectively build software is to have interrelations between the various components that make up the delivered product. However, when architecting solutions, we have distinct design decisions to make regarding the extent to which application components are coupled. Loose coupling is all about establishing relationships with minimal dependencies.

What would a tightly-coupled application look like? In such an application, we'd find components that maintained intimate knowledge of each other's working parts and engaged in frequent, chatty synchronous calls amongst themselves. Many components in the application would retain state and allow consumers to manipulate that state data.
Transactions that take place in a tightly-coupled application probably adhere to a two-phase commit strategy, where all components must succeed together in order for each data interaction to be finalized. The complete solution has its ensemble of components compiled together and singularly deployed to one technology platform. In order to run properly, these tightly-coupled components rely on the full availability of each component to fulfill the requests made of them.

On the other hand, a loosely-coupled application employs a wildly different set of characteristics. Components in this sort of application share only a contract and keep their implementation details hidden. Rarely preserving state data, these components rely on less frequent communication, where chunky input containing all the data the component needs to satisfy its requestors is shared. Any transactions in these types of applications often follow a compensation strategy, where we don't assume that all components can or will commit their changes at the same time. This class of solution can be incrementally deployed to a mix of host technologies. Asynchronous communication between components, often through a broker, enables a less stringent operational dependency between the components that comprise the solution.

What makes a solution loosely coupled then? Notably, the primary information shared by a component is its interface. The consuming component possesses no knowledge of the internal implementation details. The contract relationship suffices as a means of explaining how the target component is used. Another trait of loosely-coupled solutions is coarse-grained interfaces that encourage the transmission of full data entities, as opposed to fine-grained interfaces, which accept small subsets of data. Because loosely-coupled components do not share state information, a thicker input message containing a complete impression of the entity is best. Loosely-coupled applications also welcome the addition of a broker which proxies the (often asynchronous) communication between components. This mediator permits a rich decoupling where runtime binding between components can be dynamic and components can forgo an operational dependency on each other.

Let's take a look at an example of loose coupling that sits utterly outside the realm of technology.

Completely non-technical loose coupling example

When I go to a restaurant and place an order with my waiter, he captures the request on his pad and sends that request to the kitchen. The order pad (the contract) contains all the data needed by the kitchen chef to create my meal. The restaurant owner can bring in a new waiter or rotate his chefs and the restaurant shouldn't skip a beat, as both roles (services) serve distinct functions where the written order is the intersection point and highlight of their relationship.

Why does loose coupling matter? By designing a loosely-coupled solution, you provide a level of protection against the changes that the application will inevitably require over its life span. We have to reduce the impact of such changes while making it possible to deploy necessary updates in an efficient manner.

How does this apply to BizTalk Server solutions? A good portion of the BizTalk Server architecture was built with loose coupling in mind. Think about the BizTalk MessageBox, which acts as a broker facilitating communication between ports and orchestrations while limiting any tight coupling.
Receive ports and send ports are very loosely coupled and in many cases have absolutely no awareness of each other. The publish-and-subscribe bus thrives on the asynchronous transfer of self-describing messages between stateless endpoints. Let's look at a few recommendations for how to build loosely-coupled BizTalk applications.

Orchestrations are a prime place where you can go with either a tightly-coupled or a loosely-coupled design route. For instance, when sketching out your orchestration process, it's sure tempting to use that Transform shape to convert from one message type to another. However, a version change to that map will require a modification of the calling orchestration. When mapping to or from data structures associated with external systems, it's wiser to push those maps to the edges (receive/send ports) and not embed a direct link to the map within the orchestration.

BizTalk easily generates schemas for line-of-business (LOB) systems and consumed services. To interact with these schemas in a very loosely coupled fashion, consider defining stable entity schemas (i.e. "canonical schemas") that are used within an orchestration, and only map to the format of the LOB system in the send port. For example, if you need to send a piece of data into an Oracle database table, you can certainly include a map within an orchestration which instantiates the Oracle message. However, this will create a tight coupling between the orchestration and the database structure. To better insulate against future changes to the database schema, consider using a generic intermediate data format in the orchestration and only transforming to the Oracle-specific format in the send port.

How about those logical ports that we add to orchestrations to facilitate the transfer of messages in and out of the workflow process? When configuring those ports, the Port Configuration Wizard asks you if you want to associate the port with a physical endpoint via the Specify Now option. Once again, pretty tempting. If you know that the message will arrive at an orchestration via a FILE adapter, why not just go ahead and configure that now and let Visual Studio .NET create the corresponding physical ports during deployment? While you can independently control the auto-generated physical ports later on, it's a bad idea to embed transport details inside the orchestration file. On each subsequent deployment from Visual Studio .NET, the generated receive port will have any out-of-band changes overwritten by the deployment action.

Chaining orchestrations together is a tricky endeavor and one that can leave you in a messy state if you are too quick with a design decision. By "chaining orchestrations", I mean exploiting multiple orchestrations to implement a business process. There are a few options at your disposal, listed here and ordered from most coupled to least coupled:

Call Orchestration or Start Orchestration shape: An orchestration uses these shapes in order to kick off an additional workflow process. The Call Orchestration shape is used for a synchronous connection with the new orchestration, while the Start Orchestration shape is a fire-and-forget action. This is a useful tactic for sharing state data (for example variables, messages, ports) from the source orchestration to the target. However, both options require a tight coupling of the source orchestration to the target. Version changes to the target orchestration would likely require a redeployment of the source orchestration.
Partner direct bound ports: These provide you the capability to communicate between orchestrations using ports. In the forward partner direct binding scenario, the sender has a strong coupling to the receiver, while the receiver knows nothing about the sender. This works well in situations where there are numerous senders and only one receiver. Inverse partner direct binding means that there is a tight coupling between the receiver and the sender. The sender doesn't know who will receive the command, so this scenario is intended for cases where there are many receivers for a single sender. In both cases, you have tight coupling on one end, with loose coupling on the other.

MessageBox direct binding: This is the most loosely-coupled way to share data between orchestrations. When you send a message out of an orchestration through a port marked for MessageBox direct binding, you are simply placing a message onto the bus for anyone to consume. The source orchestration has no idea where the data is going, and the recipients have no idea where it's been. MessageBox direct binding provides a very loosely-coupled way to send messages between different orchestrations and endpoints.

Critical point

While MessageBox direct binding is great, you do lose the ability to send the additional state data that a Call Orchestration shape provides. So, as with all architectural decisions, you need to decide if the sacrifice (loose coupling, higher latency) is worth the additional capabilities.

Decisions can also be made during BizTalk messaging configuration that promote a loosely-coupled BizTalk landscape. For example, both receive ports and send ports allow for the application of maps to messages flying past. In each case, multiple maps can be added. This does NOT mean that all the maps will be applied to the message; rather, it allows for sending multiple different message types in and emitting a single type (or even multiple types) out the other side. By applying transformation at the earliest and latest moments of bus processing, you loosely couple external formats and systems from internal canonical formats. We should simply assume that all upstream and downstream systems will change over time, and configure our application accordingly.

Another means of loosely coupling BizTalk solutions involves exploiting the publish-subscribe architecture that makes up the BizTalk message bus. Instead of building solely point-to-point solutions and figuring that a SOAP interface makes you service oriented, you should also consider loosely coupling the relationship between the service input and where the data actually ends up. We can craft a series of routing decisions that take into account message content or context and direct the message to one or more relevant processes/endpoints. While point-to-point solutions may be appropriate for many cases, don't neglect a more distributed pattern where the data publisher does not need to explicitly know exactly how its data will be processed and routed by the message bus.

When identifying subscriptions for our send ports, we should avoid tight coupling to metadata attributes that might limit the reuse of the port. For instance, you should try to create subscriptions on either the message type or the message content instead of context attributes such as the inbound receive port name. Ports should be tightly coupled to the MessageBox and the messages it stores, not to attributes of the publisher.
That said, there are clearly cases where a subscriber is specifically looking for data that corresponds to a targeted piece of metadata, such as the subject line of an email received by BizTalk. As always, design your solution in a way that solves your business problem in an efficient manner.
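To make the subscription recommendation concrete, a send port filter keyed on the message type rather than on the publisher's receive port looks something like the first expression below; the namespace, root node, and port name are invented for illustration. The second expression is the kind of publisher-specific filter to avoid when reuse matters:

BTS.MessageType == http://example.org/schemas/order#OrderAcknowledgement

BTS.ReceivePortName == ReceiveOrdersFromPortal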

BizTalk Server: Standard Message Exchange Patterns and Types of Service

Packt
06 Apr 2010
4 min read
Identifying Standard Message Exchange Patterns

When we talk about Message Exchange Patterns, or MEPs, we're considering the direction and timing of data between the client and the service. How do I get into the bus, and what are the implications of those choices? Let's discuss the four primary options.

Request/Response services

This is probably the pattern that's most familiar to you. We're all comfortable making a function call to a component and waiting for a response. When a service uses this pattern, it's frequently performing a remote procedure call where the caller accesses functionality on the distant service and is blocked until either a timeout occurs or the receiver sends the response that the caller expects. As we'll see below, while this pattern may set developers at ease, it may encourage bad behavior. Nevertheless, the cases where request/response services make the most sense are fine-grained functions and mashup services. If you need a list of active contracts that a hospital has with your company, then a request/response operation fits best. The client application should wait until that response is received before moving on to the next portion of the application. Or, let's say my web portal is calling an aggregate service, which takes contact data from five different systems and mashes it up into a single data entity that is then returned to the caller. This data is being requested for immediate presentation to an end user, and thus it's logical to solicit information from a service and wait to draw the screen until the completed result is loaded.

BizTalk Server 2009 has full support for both consuming and publishing services adhering to a request/response pattern. When exposing request/response operations through BizTalk orchestrations, the orchestration port's Communication Pattern is set to Request-Response and the Port direction of communication is equal to I'll be receiving a request and sending a response. Once this orchestration port is bound to a physical request/response receive port, BizTalk takes care of correlating the response message with the appropriate thread that made the request. This is significant because, by default, BizTalk is a purely asynchronous messaging engine. Even when you configure BizTalk Server to behave in a request/response fashion, it's only putting a facade on the standard underlying plumbing. A synchronous BizTalk service interface actually sits on top of a sophisticated mechanism that correlates MessageBox communication to simulate a request/response pattern.

When consuming request/response services from a BizTalk orchestration, the orchestration port's Communication Pattern is set to Request-Response and the Port direction of communication is equal to I'll be sending a request and receiving a response. The corresponding physical send port uses a solicit-response pattern and allows the user to set up both pipelines and maps for the inbound and outbound messages.

One concern with either publishing or consuming request/response services is the issue of blocking and timeouts. From a BizTalk perspective, this means that whenever you publish an orchestration as a request/response service, you should always verify that the logic residing between the inbound and outbound transmissions will either complete or fail within a relatively brief amount of time. This dictates wrapping that logic inside an orchestration Scope shape with a preset timeout that is shorter than the standard web service timeout interval, so that a result or fault can be returned before the caller gives up.
For consuming services, a request/response pattern forces the orchestration to block and wait for the response to be returned. If the service response isn't necessary for processing to continue, consider using a Parallel shape that isolates the service interaction pattern on a dedicated branch. This way, the execution of unrelated workflow steps can proceed even though the downstream service is yet to respond.

Documentation with phpDocumentor: Part 2

Packt
31 Mar 2010
9 min read
Documentation without DocBlocks

You have probably already noticed that, short of some inline comments, the sample project has no DocBlocks, tags, or anything else added by the programmer for the purpose of documenting the code. Nevertheless, there is quite a bit that phpDocumentor can do with uncommented PHP code. If we are in the directory containing the project directory, we can run phpDocumentor and ask it to generate documentation for the project like this:

phpdoc --directory ./project/ --title 'Generated Documentation - No DocBlocks' --sourcecode on --target ./project/docs --defaultpackagename 'UserAuthentication'

The above command will recursively process all files in the project directory (--directory ./project/), create documentation with a custom title (--title 'Generated Documentation - No DocBlocks'), include a source code listing of each file processed (--sourcecode on), save all documentation to the docs directory (--target ./project/docs), and group everything under a specified package name (--defaultpackagename 'UserAuthentication').

Listing all documentation pages that phpDocumentor generated is impractical, but let's take a look at the outline and at least one of the classes. All we have to do to view the documentation is open the index.html file, in the docs directory where we told phpDocumentor to direct the output, with a web browser.

Looking at the generated outline, we see that phpDocumentor correctly found all the class files. Moreover, it identified Accountable as an interface and found index.php, even though it contains no class definitions. All classes and interfaces are grouped together under the UserAuthentication package name that was specified from the command line. At the same time, we see some of the shortcomings. There is no further classification or grouping, and all components are simply listed under the root level.

Before we move on, let's also take a look at what information phpDocumentor was able to extract from the Users.php file. It correctly identified the methods of the class, their visibility, and which parameters are required. I think that is a pretty useful start, albeit the descriptions are a bit sparse and we have no idea which methods were actually implemented using the magic __call() method. Another point to note here is that the class property $accounts does not appear in the documentation at all. That is intended behavior because the property has been declared private. If you want elements with private visibility to appear in your documentation, you will have to add the -pp / --parseprivate command line option or put this option in a config file.

Documentation with DocBlocks

Of course, this example wouldn't be complete if we didn't proceed to add proper DocBlocks to our code. The following is the exact same code as before, but this time it has been properly marked up with DocBlocks.

File project/classes/Accountable.php: <?php /** * @author Dirk Merkel <dirk@waferthin.com> * @package WebServices * @subpackage Authentication * @copyright Waferthin Web Works LLC * @license http://www.gnu.org/copyleft/gpl.html Freely available under GPL */ /** * <i>Accountable</i> interface for authentication * * Any class that handles user authentication <b>must</b> * implement this interface. It makes it almost * trivial to check whether a user is currently * logged in or not. * * @package WebServices * @subpackage Authentication * @author Dirk Merkel <dirk@waferthin.com> * @version 0.2 * @since r12 */ interface Accountable { const AUTHENTICATION_ERR_MSG = 'There is no user account associated with the current session. Try logging in first.'; /** * Did the current user log in?
* * This method simply answers the question * "Did the current user log in?" * * @access public * @return bool */ public function isLoggedIn(); /** * Returns user account info * * This method is used to retrieve the account corresponding * to a given login. <b>Note:</b> it is not required that * the user be currently logged in. * * @access public * @param string $user user name of the account * @return Account */ public function getAccount($user = ''); } ?> File project/classes/Authentication.php: <?php /** * @author Dirk Merkel <dirk@waferthin.com> * @package WebServices * @subpackage Authentication * @copyright Waferthin Web Works LLC * @license http://www.gnu.org/copyleft/gpl.html Freely available under GPL */ /** * <i>Authentication</i> handles user account info and login actions * * This is an abstract class that serves as a blueprint * for classes implementing authentication using * different account validation schemes. * * @see Authentication_HardcodedAccounts * @author Dirk Merkel <dirk@waferthin.com> * @package WebServices * @subpackage Authentication * @version 0.5 * @since r5 */ abstract class Authentication implements Accountable { /** * Reference to Account object of currently * logged in user. * * @access private * @var Account */ private $account = null; /** * Returns account object if valid. * * @see Accountable::getAccount() * @access public * @param string $user user account login * @return Account user account */ public function getAccount($user = '') { if ($this->account !== null) { return $this->account; } else { return AUTHENTICATION_ERR_MSG; } } /** * isLoggedIn method * * Says whether the current user has provided * valid login credentials. * * @see Accountable::isLoggedIn() * @access public * @return boolean */ public function isLoggedIn() { return ($this->account !== null); } /** * login method * * Abstract method that must be implemented when * sub-classing this class. * * @access public * @return boolean */ abstract public function login($user, $password); } ?> File project/classes/Authentication/HardcodedAccounts.php: <?php /** * @author Dirk Merkel <dirk@waferthin.com> * @package WebServices * @subpackage Authentication * @copyright Waferthin Web Works LLC * @license http://www.gnu.org/copyleft/gpl.html Freely available under GPL */ /** * <i>Authentication_HardcodedAccounts</i> class * * This class implements the login method needed to handle * actual user authentication. It extends <i>Authentication</i> * and implements the <i>Accountable</i> interface. * * @package WebServices * @subpackage Authentication * @see Authentication * @author Dirk Merkel <dirk@waferthin.com> * @version 0.6 * @since r14 */ class Authentication_HardcodedAccounts extends Authentication { /** * Referece to <i>Users</i> object * @access private * @var Users */ private $users; /** * Authentication_HardcodedAccounts constructor * * Instantiates a new {@link Users} object and stores a reference * in the {@link users} property. * * @see Users * @access public * @return void */ public function __construct() { $this->users = new Users(); } /** * login method * * Uses the reference {@link Users} class to handle * user validation. * * @see Users * @todo Decide which validate method to user instead of both * @access public * @param string $user account user name * @param string $password account password * @return boolean */ public function login($user, $password) { if (empty($user) || empty($password)) { return false; } else { // both validation methods should work ... 
// user static method to validate account $firstValidation = Users::validate($user, $password); // use magic method validate<username>($password) $userLoginFunction = 'validate' . $user; $secondValidation = $this->users- >$userLoginFunction($password); return ($firstValidation && $secondValidation); } } } ?> File project/classes/Users.php: <?php /** * @author Dirk Merkel <dirk@waferthin.com> * @package WebServices * @subpackage Accounts * @copyright Waferthin Web Works LLC * @license http://www.gnu.org/copyleft/gpl.html Freely available under GPL */ /** * <i>Users</i> class * * This class contains a hard-coded list of user accounts * and the corresponding passwords. This is merely a development * stub and should be implemented with some sort of permanent * storage and security. * * @package WebServices * @subpackage Accounts * @see Authentication * @see Authentication_HardcodedAccounts * @author Dirk Merkel <dirk@waferthin.com> * @version 0.6 * @since r15 */ class Users { /** * hard-coded user accounts * * @access private * @static * @var array $accounts user name => password mapping */ private static $accounts = array('dirk' => 'myPass', 'albert' => 'einstein'); /** * static validate method * * Given a user name and password, this method decides * whether the user has a valid account and whether * he/she supplied the correct password. * * @see Authentication_HardcodedAccounts::login() * @access public * @static * @param string $user account user name * @param string $password account password * @return boolean */ public static function validate($user, $password) { return self::$accounts[$user] == $password; } /** * magic __call method * * This method only implements a magic validate method * where the second part of the method name is the user's * account name. * * @see Authentication_HardcodedAccounts::login() * @see validate() * @access public * @method boolean validate<user>() validate<user>(string $password) validate a user * @staticvar array $accounts used to validate users & passwords */ public function __call($name, $arguments) { if (preg_match("/^validate(.*)$/", $name, $matches) && count($arguments) > 0) { return self::validate($matches[1], $arguments[0]); } } } ?> File project/index.php: <?php /** * Bootstrap file * * This is the form handler for the login application. * It expects a user name and password via _POST. If * * @author Dirk Merkel <dirk@waferthin.com> * @package WebServices * @copyright Waferthin Web Works LLC * @license http://www.gnu.org/copyleft/gpl.html Freely available under GPL * @version 0.7 * @since r2 */ /** * required class files and interfaces */ require_once('classes/Accountable.php'); require_once('classes/Authentication.php'); require_once('classes/Users.php'); require_once('classes/Authentication/HardcodedAccounts.php'); $authenticator = new Authentication_HardcodedAccounts(); // uncomment for testing $_POST['user'] = 'dirk'; $_POST['password'] = 'myPass'; if (isset($_POST['user']) && isset($_POST['password'])) { $loginSucceeded = $authenticator->login($_POST['user'], $_POST['password']); if ($loginSucceeded === true) { echo "Congrats - you're in!n"; } else { echo "Uh-uh - try again!n"; } } ?> Since none of the functionality of the code has changed, we can skip that discussion here. What has changed, however, is that we have added DocBlocks for each file, class, interface, method, and property. Whereas the version of the project without documentation had a total of 113 lines of code, the new version including DocBlocks has 327 lines. 
The number of lines almost tripled! But don't be intimidated. Creating DocBlocks doesn't take nearly as much time as coding. Once you are used to the syntax, it becomes second nature. My estimate is that documenting takes about 10 to 20 percent of the time it takes to code. Moreover, there are tools to speed things up and help you with the syntax, such as a properly configured code editor or IDE.

Now let's see how phpDocumentor fared with the revised version of the project. On the index page, the heading now shows that we are looking at the WebServices package. Furthermore, the classes and interfaces have been grouped by sub-packages in the left-hand index column.

Next, consider the documentation page for the Users class. As you can see, this documentation page is quite a bit more informative than the earlier version. For starters, it has a description of what the class does. Similarly, both methods have a description. All the tags and their content are listed, and there are helpful links to other parts of the documentation. And, from the method tag, we can actually tell that the magic method __call() was used to implement a method of the form validate<user>($password). That is quite an improvement, I would say!

To really appreciate how much more informative and practical the documentation has become by adding DocBlocks, you need to run through this example yourself and browse through the resulting documentation.
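If you want to regenerate the documentation for the commented version yourself, a command along the following lines should work. The title below is only an example, and --defaultpackagename should no longer be needed because the @package tags now supply the package names:

phpdoc --directory ./project/ --title 'Generated Documentation - With DocBlocks' --sourcecode on --target ./project/docs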

Documentation with phpDocumentor: Part 1

Packt
31 Mar 2010
11 min read
Code-level documentation

The documentation we will be creating describes the interface of the code more than the minute details of the actual implementation. For example, you might document an API that you have developed for the outside world to interact with some insanely important project on which you are working. Having an API is great, but for other developers to quickly get an overview of the capabilities of the API and be able to crank out working code within a short amount of time is even better. If you are following the proper conventions while writing the code, all you would have to do is run a utility to extract and format the documentation from the code.

Even if you're not inviting the whole world to interact with your software, developers within your own team will benefit from documentation describing some core classes that are being used throughout the project. Just imagine reading your co-worker's code and coming across some undecipherable object instance or method call. Wouldn't it be great to simply pull up the API documentation for that object and read about its uses, properties, and methods? Furthermore, it would be really convenient if the documentation for the whole project were assembled and logically organized in one location. That way, a developer can not only learn about a specific class, but also about its relationships with other classes. In a way, it would enable the programmer to form a high-level picture of how the different pieces fit together.

Another reason to consider code-level documentation is that source code is easily accessible due to PHP being a scripting language. Compiled languages have a much easier time hiding their code unless their authors choose to open source it. If you ever plan on making your project available for others to download and run on their own server, you are unwittingly inviting a potential critic or collaborator. Since it is rather hard (but not impossible) to hide the source code from a user who can download your project, there is the potential for people to start looking at and changing your code. Generally speaking, that is a good thing because they might be improving the quality and usefulness of the project, and hopefully they will be contributing their improvements back to the user community. In such a case, you will be glad that you stuck to a coding standard and added comments throughout the code. It will make understanding your code much easier, and anybody reading the code will come away with the impression that you are indeed a professional.

Great, you say, but how do I make sure I always generate such useful documentation when I program? The answer is simple. You need to invest a little time learning the right tool(s). That's the easy part for someone in the technology field, where skill sets are being expanded every couple of years anyway. The hard part is to consistently apply that knowledge. Like much else in this book, it is a matter of training yourself to have good habits. Writing API-level documentation at the same time as implementing a class or method should become as much second nature as following a coding standard or properly testing your code. Luckily, there are some tools that can take most of the tedium out of documenting your code. Foremost, modern IDEs (Integrated Development Environments) are very good at extracting some of the needed information automatically, and templates can help you generate documentation tags rather rapidly.
Levels of detail

As you create your documentation, you have to decide how detailed you want to get. I have seen projects where easily half the source code consisted of comments and documentation, which produced fantastic developer and end-user documentation. However, that may not be necessary or appropriate for your project. My suggestion is to figure out what level of effort you can reasonably expect of yourself in relation to what would be appropriate for your target audience. After all, it is unlikely that you will start documenting every other line of code if you are not used to adding any documentation at all.

On one hand, if your audience is relatively small and sophisticated, you might get away with less documentation. On the other hand, if you are documenting the web services API for a major online service as you are coding it, you probably want to be as precise and explicit as possible. Adding plenty of examples and tutorials might enable even novice developers to start using your API quickly. In that case, your employer's success in the marketplace is directly tied to the quality and accessibility of the documentation, and the documentation is very much part of the product rather than an afterthought or merely an add-on.

On one end of the spectrum, you can have documentation that pertains to the project as a whole, such as a "README" file. At the next level down, you might have a doc section at the beginning of each file. That way, you can cover the functionality of the file or class without going into too much detail.

Introducing phpDocumentor

phpDocumentor is an Open Source project that has established itself as the dominant tool for documenting PHP code. Although there are other solutions, phpDocumentor is by far the one you are most likely to encounter in your work, and for good reason. Taking a cue from similar documentation tools that came before it, such as JavaDoc, phpDocumentor offers many features in terms of user interface, formatting, and so on.

phpDocumentor provides you with a large library of tags and other markup, which you can use to embed comments, documentation, and tutorials in your source code. The phpDoc markup is treated as comments by PHP when it executes your source file and therefore doesn't interfere with the code's functionality. However, by running the phpDocumentor command-line executable or using the web-based interface, you can process all your source files, extract the phpDoc-related content, and compile it into functional documentation. There is no need to look through the source files because phpDocumentor assembles the documentation into nice-looking HTML pages, text files, PDFs, or CHMs.

Although phpDocumentor supports procedural programming and PHP4, the focus in this article will be on using it to document applications developed with object-oriented design in mind. Specifically, we will be looking at how to properly document interfaces, classes, properties, and methods. For details on how to document some of the PHP4 elements that don't typically occur in PHP5's object-oriented implementation, please consult the phpDocumentor online manual: http://manual.phpdoc.org/

Installing phpDocumentor

There are two ways of installing phpDocumentor. The preferred way is to use the PEAR repository. Typing pear install PhpDocumentor from the command line will take care of downloading, extracting, and installing phpDocumentor for you. The pear utility is typically included in any recent standard distribution of PHP.
However, if for some reason you need to install it first, you can download it from the PEAR site: http://pear.php.net/

Before we proceed with the installation, there is one important setting to consider. Traditionally, phpDocumentor has been run from the command line; however, more recent versions come with a rather functional web-based interface. If you want pear to install the web UI into a sub-directory of your web server's document root directory, you will first have to set pear's data_dir variable to the absolute path of that directory. In my case, I created a local site from which I can access various applications installed by pear. That directory is /Users/dirk/Sites/phpdoc. From the terminal, you tell pear where to install the web portion and then proceed to install phpDocumentor. As part of the installation, the pear utility creates a directory for phpDocumentor's web interface in that location.

The other option for installing phpDocumentor is to download an archive from the project's SourceForge.net space. After that, it is just a matter of extracting the archive and making sure that the main phpdoc executable is in your path so that you can launch it from anywhere without having to type the absolute path. You will also have to manually move the corresponding directory to your server's document root directory to take advantage of the web-based interface.

DocBlocks

Let's start by taking a look at the syntax and usage of phpDocumentor. The basic unit of phpDoc documentation is a DocBlock. All DocBlocks take the following format:

/**
 * Short description
 *
 * Long description that can span as many lines as you wish.
 * You can add as much detailed information and as many examples in this
 * section as you deem appropriate. You can even <i>markup</i>
 * this content or use inline tags like this:
 * {@tutorial Project/AboutInlineTags.proc}
 *
 * @tag1
 * @tag2 value2 more text
 * ... more tags ...
 */

A DocBlock is the basic container of phpDocumentor markup within PHP source code. It can contain three different element groups: short description, long description, and tags, all of which are optional. The first line of a DocBlock has only three characters, namely "/**". Similarly, the last line contains only " */ ". All lines in between start with " * ".

Short and long descriptions

An empty line or a period at the end of the line terminates a short description. In contrast, long descriptions can go on for as many lines as necessary. Both types of descriptions allow certain markup to be used: <b>, <br>, <code>, <i>, <kbd>, <li>, <ol>, <p>, <pre>, <samp>, <ul>, <var>. The effect of these markup tags is borrowed directly from HTML. Depending on the output converter being used, each tag can be rendered in different ways.

Tags

Tags are keywords known to phpDocumentor. Each tag can be followed by a number of optional arguments, such as a data type, description, or URL. For phpDocumentor to recognize a tag, it has to be preceded by the @ character. Some examples of common tags are:

/**
 * @package ForeignLanguageParser
 * @author Dirk Merkel dirk@waferthin.com
 * @link http://www.waferthin.com Check out my site
 */
class Translate
{
}

In addition to the above "standard" tags, phpDocumentor recognizes "inline" tags, which adhere to the same syntax, with the only notable difference that they are enclosed by curly brackets.
Inline tags occur inline with short and long descriptions like this:

/**
 * There is not enough space here to explain the value and usefulness
 * of this class, but luckily there is an extensive tutorial available
 * for you: {@tutorial ForeignLanguageParser/Translate.cls}
 */

DocBlock templates

It often happens that the same tags apply to multiple successive elements. For example, you might group all private property declarations at the beginning of a class. In that case, it would be quite repetitive to list the same, or nearly the same, DocBlocks over and over again. Luckily, we can take advantage of DocBlock templates, which allow us to define DocBlock sections that will be added to the DocBlock of any element between a designated start and end point. DocBlock templates look just like regular DocBlocks, with the difference that the first line consists of /**#@+ instead of /**. The tags in the template will be added to all subsequent DocBlocks until phpDocumentor encounters the ending character sequence /**#@-*/.

The following two code fragments will produce the same documentation. First, here is the version containing only standard DocBlocks:

<?php
class WisdomDispenser
{
    /**
     * @access protected
     * @var string
     */
    private $firstSaying = 'Obey the golden rule.';

    /**
     * @access protected
     * @var string
     */
    private $secondSaying = 'Get in or get out.';

    /**
     * @access protected
     * @var string
     * @author Albert Einstein <masterof@relativity.org>
     */
    private $thirdSaying = 'Everything is relative';
}
?>

And here is the fragment that will produce the same documentation using a more concise notation by taking advantage of DocBlock templates:

<?php
class WisdomDispenser
{
    /**#@+
     * @access protected
     * @var string
     */
    private $firstSaying = 'Obey the golden rule.';

    private $secondSaying = 'Get in or get out.';

    /**
     * @author Albert Einstein <masterof@relativity.org>
     */
    private $thirdSaying = 'Everything is relative';

    /**#@-*/
}
?>

Build your own Application to access Twitter using Java and NetBeans: Part 3

Packt
31 Mar 2010
7 min read
This is the third part of the Twitter Java client tutorial article series! In Build your own Application to access Twitter using Java and NetBeans: Part 2 we: Created a twitterLogin dialog to take care of the login process Added functionality to show your 20 most recent tweets right after logging in Added the functionality to update your Twitter status Showing your Twitter friends’ timeline Open your NetBeans IDE along with your SwingAndTweet project, and make sure you’re in the Design View. Select the Tabbed Pane component from the Palette panel and drag it into the SwingAndTweetUI JFrame component: A new JTabbedPane1 container will appear below the JScrollPane1 control in the Inspector panel. Now drag the JScrollPane1 control into the JTabbedPane1 container: The jScrollPane1 control will merge with the jTabbedPane1 and a tab will appear. Double-click on the tab, replace its default name –tab1– with Home, and press Enter: Resize the jTabbedPane1 control so it takes all the available space from the main window: Now drag a Scroll Pane container from the Palette panel and drop it into the white area of the jTabbedPane1 control:   A new tab will appear, containing the new jScrollPane2 object you’ve just dropped in. Now drag a Panel container from the Palette panel and drop it into the white area of the jTabbedPane1 control: A JPanel1 container will appear inside the jScrollPane2 container, as shown in the next screenshot: Change the name of the new tab to Friends and then click on the Source tab to change to the Source view. Once your app code shows up, locate the btnLoginActionPerformed method and type the following code at the end of this method, right below the jTextArea1.updateUI() line: //code for the Friends timeline try { java.util.List<Status> statusList = twitter.getFriendsTimeline(); jPanel1.setLayout(new GridLayout(statusList.size(),1)); for (int i=0; i<statusList.size(); i++) { statusText = new JLabel(String.valueOf(statusList.get(i).getText())); statusUser = new JLabel(statusList.get(i).getUser().getName()); JPanel individualStatus = new JPanel(new GridLayout(2,1)); individualStatus.add(statusUser); individualStatus.add(statusText); jPanel1.add(individualStatus); } } catch (TwitterException e) { JOptionPane.showMessageDialog (null, "A Twitter error ocurred!");} jPanel1.updateUI(); The next screenshot shows how the code in your btnLoginActionPerformed method should look like after adding the code: One important thing you should notice is that there will be 6 error icons due to the fact that we need to declare some variables and write some import statements. Scroll up the code window until you locate the import twitter4j.*; and the import javax.swing.JOptionPane; lines, and add the following lines right after them: import java.awt.GridLayout; import javax.swing.JLabel; import javax.swing.JPanel; Now scroll down the code until you locate the Twitter twitter; line you added in Swinging and Tweeting with Java and NetBeans: Part 2 of this tutorial series and add the following lines: JLabel statusText; JLabel statusUser; If you go back to the buttonUpdateStatusActionPerformed method, you’ll notice the errors have disappeared. Now everything is ready for you to test the new functionality in your Twitter client! Press F6 to run your SwingAndTweet application and log in with your Twitter credentials. 
The main window will show your last 20 tweets, and if you click on the Friends tab, you will see the last 20 tweets of the people you’re following, along with your own tweets: Close your SwingAndTweet application to return to NetBeans. Let’s examine what we did in the previous exercise. On steps 2-5 you added a JTabbedPane container and created a Home tab where the JScrollPane1 and JTextArea1 controls show your latest tweets, and then on steps 6-8 you added the JPanel1 container inside the JScrollPane2 container. On step 9 you changed the name of the new tab to Friends and then added some code to show your friends’ latest tweets. As in previous exercises, we need to add the code inside a try-catch block because we are going to call the Twitter4J API to get the last 20 tweets on your friends timeline. The first line inside the try block is: java.util.List<Status> statusList = twitter.getFriendsTimeline(); This line gets the 20 most recent tweets from your friends’ timeline, and assigns them to the statusList variable. The next line, jPanel1.setLayout(new GridLayout(statusList.size(),1)); sets your jPanel1 container to use a layout manager called GridLayout, so the components inside jPanel1 can be arranged into rows and columns. The GridLayout constructor requires two parameters; the first one defines the number of rows, so we use the statusList.size() function to retrieve the number of tweets obtained with the getFriendsTimeline() function in the previous line of code. The second parameter defines the number of columns, and in this case we only need 1 column. The next line, for (int i=0; i<statusList.size(); i++) { starts a for loop that iterates through all the tweets obtained from your friends’ timeline. The next 6 lines are executed inside the for loop. The next line in the execution path is statusText = new JLabel(String.valueOf(statusList.get(i).getText())); This line assigns the text of an individual tweet to a JLabel control called statusText. You can omit the String.valueOf function in this line because the getText() already returns a string value –I used it because at first I was having trouble getting NetBeans to compile this line, I still haven’t found out why, but as soon as I have an answer, I’ll let you know. As you can see, the statusText JLabel control was created programmatically; this means we didn’t use the NetBeans GUI interface. The next line, statusUser = new JLabel(statusList.get(i).getUser().getName()); creates a JLabel component called statusUser, gets the name of the user that wrote the tweet through the statusList.get(i).getUser().getName() method and assigns this value to the statusUser component. The next line, JPanel individualStatus = new JPanel(new GridLayout(2,1)); creates a JPanel container named individualStatus to contain the two JLabels we created in the last two lines of code. This panel has a GridLayout with 2 rows and one column. The first row will contain the name of the user that wrote the tweet, and the second row will contain the text of that particular tweet. The next two lines, individualStatus.add(statusUser); individualStatus.add(statusText); add the name of the user (statusUser) and the text of the individual tweet (statusText) to the individualStatus container, and the next line, jPanel1.add(individualStatus); adds the individualStatus JPanel component – which contains the username and text of one individual tweet –to the jPanel1 container. This is the last line of code inside the for loop. 
The catch block shows an error message in case an error occurs when executing the getFriendsTimeline() function, and the jPanel1.updateUI(); line updates the jPanel1 container so it shows the most recent information added to it. Now you can see your friends’ latest tweets along with your own tweets, but we need to improve the way tweets are displayed, don’t you think so? Improving the way your friends’ tweets are displayed For starters, let’s change some font attributes to show the user name in bold style and the text of the tweet in plain style. Then we’ll add a black border to separate each individual tweet. Add the following line below the other import statements in your code: import java.awt.Font; Scroll down until you locate the btnLoginActionPerformed method and add the following two lines below the statusUser = new JLabel(statusList.get(i).getUser().getName()) line: Font newLabelFont = new Font(statusUser.getFont().getName(),Font.PLAIN,statusUser.getFont().getSize()); statusText.setFont(newLabelFont); The following screenshot shows the btnLoginActionPerformed method after adding those two lines:   Press F6 to run your SwingAndTweet application. Now you will be able to differentiate the user name from the text of your friends’ tweets: And now let’s add a black border to each individual tweet. Scroll up the code until you locate the import declarations and add the following lines below the import statement you added on step 1 of this exercise: import javax.swing.BorderFactory; import java.awt.Color; Scroll down to the btnLoginActionPerformed method and add the following line right after the individualStatus.add(statusText) line: individualStatus.setBorder(BorderFactory.createLineBorder(Color.black)); The next screenshot shows the appearance of your friends’ timeline tab with a black border separating each individual tweet:
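Putting the pieces together, here is a rough consolidated sketch of the friends-timeline loop once the font and border tweaks are in place. It is only an outline assembled from the snippets above, and it assumes the twitter object and the jPanel1, statusText, and statusUser members declared earlier in this tutorial:

try {
    java.util.List<Status> statusList = twitter.getFriendsTimeline();
    jPanel1.setLayout(new GridLayout(statusList.size(), 1));
    for (int i = 0; i < statusList.size(); i++) {
        // one label for the user name (default label font) and one for the tweet text
        statusUser = new JLabel(statusList.get(i).getUser().getName());
        statusText = new JLabel(statusList.get(i).getText());
        // show the tweet text in a plain (non-bold) font
        Font newLabelFont = new Font(statusUser.getFont().getName(), Font.PLAIN, statusUser.getFont().getSize());
        statusText.setFont(newLabelFont);
        JPanel individualStatus = new JPanel(new GridLayout(2, 1));
        individualStatus.add(statusUser);
        individualStatus.add(statusText);
        // separate each tweet with a black border
        individualStatus.setBorder(BorderFactory.createLineBorder(Color.black));
        jPanel1.add(individualStatus);
    }
} catch (TwitterException e) {
    JOptionPane.showMessageDialog(null, "A Twitter error occurred!");
}
jPanel1.updateUI();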

Setting Up Python Development Environment on Mac OS X

Packt
19 Mar 2010
12 min read
Background

First, a bit of background. You run Mac OS X, one of the finest operating systems available to date. It's not just the looks; there is fine machinery under the hood that makes it so good. OS X is essentially a UNIX system, more specifically BSD UNIX. It contains large chunks of FreeBSD code, which is a good thing, because FreeBSD is a very robust, quality product. On top of that, there is a beautiful Aqua interface for you to enjoy. For some people, this combination is the best of both worlds.

Because of the UNIX background, you get many benefits as well. For instance, you have quite a collection of useful UNIX tools available to you. In fact, Apple even ships Python out of the box! You may as well ask yourself: why do I then need to install and set it up? Good question. It depends. OS X is a commercial product, it takes some time from release to release, and because of that the Python version shipped with the current OS X (10.4) is a bit old. Maybe that is not an issue for you, but for some people it is. We will focus on getting and installing the newest Python version, as that brings some other benefits we will mention later.

Remember that /usr/local thing from the beginning? And what's up with that "safe haven" talk? Well, there is a thing called the Filesystem Hierarchy Standard, or FHS. The FHS sets up some basic conventions for use by UNIX and UNIX-like systems when mapping out a filesystem. Mac OS X breaks it in some places (as do many other UNIX variants), but most systems respect it. The FHS defines /usr/local as the "tertiary hierarchy for local data installed by the system administrator", which basically means that it is the safe, standard place for you to put your own custom-compiled programs.

Using the /usr/local directory for this purpose is important for many reasons, but there is one that is most critical: System Updates. System Updates are automatic methods used by operating systems to deliver newer versions of software to their users. These new software pieces are then installed at their usual location, often with brute force, regardless of what was there before. So, for instance, if you had modified or installed some newer version of some important system software, the Software Update process will overwrite it, thus rendering your changes lost. To overcome this problem, we will install all of our custom software in this safe place, the /usr/local directory.

Getting to Work

At last, the fun part (or so I hope).

Requirements

First, some prerequisites.
You will need the following to get going:

- Mac OS X 10.4
- XCode 2.4 or newer (this contains the necessary compilers)

XCode is not installed by default on new Macs, but it can be obtained from the Mac OS X install DVD or from the Apple Developer Connection for free.

Strategy

As you might have guessed from the previous discussion, I decided to use the /usr/local directory as the destination and compile everything from source. Some people favor binaries. However, binary distributions are often pre-packaged and end up in some sort of installer; they could contain certain things that we dislike, and so on. This is also the case with Python on Mac OS X. You can download a binary distribution from the official Python website (and you are in fact encouraged to do so, if using OS X), which suffers exactly from these kinds of problems. It comes with an installer and installs some stuff out of the /usr/local directory, which we don't need. It may be useful to some Cocoa developers who also deal with Python code, as it eases the installation of PyObjC (a bridge between Python and Objective-C) later on. But we don't need that either. We will end up with a pure, lean, and mean installation of Python and some supportive applications.

An additional benefit of compiling from source is that we can look through the actual source code and audit or modify it before we actually install it. I will focus on a Python installation that is oriented toward web development. You will end up with a basic set of tools which you can use to build database-driven web sites powered by the Python scripting language. Let's begin, shall we?

Using /usr/local

In order for all this to work, we will have to make some slight adjustments. For the system to see our custom Python installation, we will have to set the path to include /usr/local first. Mac OS X, like other UNIX systems, uses a "path" to determine where it should look for UNIX applications. The path is just an environment variable that is set (if configured) each time you open a new Terminal window. To set up the path, either create or edit a file called .bash_login (notice the dot; it's a hidden file) in your home directory using a text editor. I recommend the following native OS X text editors: TextMate, BBEdit, or TextWrangler, and the following UNIX editors: Emacs or vi(m). To edit the file with TextMate, for example, fire up the Terminal and type:

mate ~/.bash_login

This will open the file with TextMate. Now, add the following line at the end of the file:

export PATH="/usr/local/bin:$PATH"

After you save and close the file, apply the changes (from the terminal) with the following command:

. ~/.bash_login

While we're at it, we could just as well (using the previous method) enter the following line to make the Terminal UTF-8 aware:

export LC_CTYPE=en_US.UTF-8

In general, you should be using UTF-8 anyway, so this is just a bonus. It is even required for some things to work; Subversion, for example, has problems if this isn't set.

Setting Up the Working Directory

It's nice to have a working directory where you will download all the source files, so you can possibly revert to it later. We'll create a directory called src in the /usr/local directory:

sudo mkdir -p /usr/local/src
sudo chgrp admin /usr/local/src
sudo chmod -R 775 /usr/local/src
cd /usr/local/src

Notice the sudo command. It means "superuser do" or "substitute user and do". It will ask you for your password; just enter it when asked. We are now in this new working directory and will download and compile everything here.
Python

Finally, we are all set up for the actual work. Just enter all the following commands correctly and you should be good to go. We are starting off with Python, but to compile Python properly we will first install some prerequisites, like readline and SQLite. Technically, SQLite isn't required, but it is necessary to compile it first so that, later on, Python picks up its libraries and makes use of them. One of the new things in the newest Python 2.5 is a native SQLite database driver, so we will kill two birds with one stone ;-).

curl -O ftp://ftp.cwru.edu/pub/bash/readline-5.2.tar.gz
tar -xzvf readline-5.2.tar.gz
cd readline-5.2
./configure --prefix=/usr/local
make
sudo make install
cd ..

If you get an error about no acceptable C compiler, then you haven't installed XCode. We can now proceed with the SQLite installation.

curl -O http://www.sqlite.org/sqlite-3.3.13.tar.gz
tar -xzvf sqlite-3.3.13.tar.gz
cd sqlite-3.3.13
./configure --prefix=/usr/local --with-readline-dir=/usr/local
make
sudo make install
cd ..

Finally, we can download and install Python itself.

curl -O http://www.python.org/ftp/python/2.5/Python-2.5.tgz
tar -xzvf Python-2.5.tgz
cd Python-2.5
./configure --prefix=/usr/local --with-readline-dir=/usr/local --with-sqlite3=/usr/local
make
sudo make install
cd ..

This should leave us with the core Python and SQLite installation. We can verify this by issuing the following commands:

python -V
sqlite3 -version

Those commands should report the new version numbers we just compiled (2.5 for Python and 3.3.13 for SQLite). Do the happy dance now! Before we get too excited, we should also verify that they are properly linked together by entering the interactive Python interpreter and issuing a few commands (don't type ">>>"; it is shown here for illustrative purposes because you also get it in the interpreter):

python
>>> import sqlite3
>>> sqlite3.sqlite_version
'3.3.13'

Press C-D (that's CTRL + D) to exit the interactive Python interpreter. If your session looks like the one above, we're all set. If you get an error about missing modules, that means something is not right. Did you follow all the steps as mentioned above?

We now have Python and SQLite installed. The rest is up to you. Do you want to program sites in Django, CherryPy, Pylons, TurboGears, web.py, etc.? Just install the web framework you are interested in. Need any additional modules, like Beautiful Soup for parsing HTML? Just go ahead and install them. For development needs, all the frameworks I tried come with a suitable development server, so you don't need to install any web server to get started. CherryPy even comes with a great production-ready WSGI web server. Also, for all your database needs, I find SQLite more than adequate while in development mode. I even find it more than enough for some live sites. It's a great little zero-configuration database. If you have bigger needs, it's easy to switch to some other database on the production server (you are planning to use some database abstraction layer, aren't you?).

For completeness' sake, let's pretend you're going to develop sites with CherryPy as the web framework, SQLite as the database, SQLAlchemy as the database abstraction layer (toolkit, ORM), and Mako for templates. So, we are missing CherryPy, SQLAlchemy, and Mako.
Let's get them while they're hot:

cd /usr/local/src
curl -O http://download.cherrypy.org/cherrypy/3.0.1/CherryPy-3.0.1.tar.gz
tar -xzvf CherryPy-3.0.1.tar.gz
cd CherryPy-3.0.1
sudo python setup.py install
cd ..
curl -O http://cheeseshop.python.org/packages/source/S/SQLAlchemy/SQLAlchemy-0.3.5.tar.gz
tar -xzvf SQLAlchemy-0.3.5.tar.gz
cd SQLAlchemy-0.3.5
sudo python setup.py install
cd ..
curl -O http://www.makotemplates.org/downloads/Mako-0.1.4.tar.gz
tar -xzvf Mako-0.1.4.tar.gz
cd Mako-0.1.4
sudo python setup.py install
cd ..

Do the happy dance again! This same pattern applies to many other Python web frameworks and modules. What have we just achieved? Well, we now have an "invisible" Python web development environment which is clean, fast, self-contained, and sitting in a safe place. Combine it with TextMate (or any other text editor you like) and you will have some serious good times.

Again, for completeness, we will also cover Subversion. Subversion is a version control system. Sounds exciting, eh? Actually, it's a very powerful and sane thing to learn and use. I'm not covering it for the version control itself, but because many software projects use it, so you will sometimes need to check out (download your own local copy of) some project's code. For example, the Django project uses it, and their development version is often better than the actual released "stable" version. So, the only way of having (and keeping up with) the development version is to use Subversion to obtain it and keep it updated. All you usually need to do in order to obtain the latest revision of some software is to issue the following command (example for Django):

svn co http://code.djangoproject.com/svn/django/trunk/ django_src

Here are the steps to download and compile Subversion:

curl -O http://subversion.tigris.org/downloads/subversion-1.4.3.tar.gz
curl -O http://subversion.tigris.org/downloads/subversion-deps-1.4.3.tar.gz
tar -xzvf subversion-1.4.3.tar.gz
tar -xzvf subversion-deps-1.4.3.tar.gz
cd subversion-1.4.3
./configure --prefix=/usr/local --with-openssl --with-ssl --with-zlib
make
sudo make install
cd ..

However, even on moderately recent hardware, Subversion can take a long time to compile. If you don't want to compile it, or you only use it from time to time to do some checkouts, you may prefer to download a pre-compiled binary. I know what I said about binaries before, but there is a very fine one over at Martin Ott's site. It's packaged as a standard Mac OS X installer, and it installs just where it should, in the /usr/local directory.

When speaking about version control, I'm more a decentralized version control person. I really like Mercurial: it's fast, small, and lightweight, but it also scales fairly well for more demanding scenarios. And guess what, it's also written in Python. So, go ahead, install it too, and start writing those nice Python-powered web sites!

That would be all from me today. While I provided the exact steps for you to follow, that doesn't mean that you should pick the same components. These days (coming from a Django background), I'm learning Pylons, Mako, SQLAlchemy, Elixir, and a couple of other components. It makes sense currently, as Pylons is strongly built around WSGI compliance and philosophy, which makes the components more reusable and should make it easier to switch to or from any other Python WSGI-centric framework in the future. Good luck!
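One last optional step: to confirm that the whole stack works end to end, a minimal CherryPy application along the following lines should run against the freshly built Python. The file name, class name, and greeting text below are my own examples rather than part of the original walkthrough:

# hello.py - a minimal CherryPy 3.x sanity check
import cherrypy

class Root:
    @cherrypy.expose
    def index(self):
        return "Hello from the /usr/local Python stack!"

if __name__ == '__main__':
    # start CherryPy's built-in development server on http://localhost:8080/
    cherrypy.quickstart(Root())

Save it as hello.py, run python hello.py from the terminal, and point your browser at http://localhost:8080/ to see the greeting.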

Python Data Persistence using MySQL Part III: Building Python Data Structures Upon the Underlying Database Data

Packt
17 Mar 2010
8 min read
Using Python Built-in Object Types to Hold Structured Data The most common way to hold structured data in Python is to use built-in object types such as lists, list comprehensions, tuples, and dictionaries. In particular, you may find the above Python types useful when dealing with database data. Dictionaries can be of particular interest to you when you need to represent data stored in a database table that has a primary key column. Turning back to the posts table created and populated with data as discussed in the first article of this series, you might create the following dictionary to represent the records stored in this table: posts = {} posts["Layouts in Ext JS"] = {"guid":"http://www.packtpub.com/article/layouts-in-ext- js","pubDate":"Fri, 28 Nov 2008 10:31:03 +0000"} posts["WordPress Plug-in Development (Beginner's Guide)"] = {"guid":"http://www.packtpub.com/ wordpress-plug-in-development","pubDate":"Fri, 28 Nov 2008 00:00:00 +0000"} For clarity, you manually set up the dictionary here. In reality, though, you would most likely populate it with the data obtained from the database or the Web. In the above example, the posts dictionary uses the values of the title column in the posts table as the dictionary’s keys. The dictionary’s values are also dictionaries each of which represents the rest of a record, containing the guid and pubDate fields. Since the keys within a dictionary cannot be repeated, the above approach guarantees uniqueness of the title field in the posts’ records represented in the dictionary. Now to obtain a certain record, you can use its key like this: rec = posts["Layouts in Ext JS"] print rec This should produce the following output: {'guid':'http://www.packtpub.com/article/layouts-in- ext-js','pubDate':'Fri, 28 Nov 2008 10:31:03 +0000'} If you need to get to a certain field in the obtained record, you could use the following syntax: guid = posts["Layouts in Ext JS"]["guid"] print guid The above should give you the following: http://www.packtpub.com/article/layouts-in-ext-js To iterate over all of the records in the posts dictionary, you could use a for loop. Here is how you could iterate over the guid field, for example: for post in posts.items(): print post[1]['guid'] Note that the value of the first index of the post variable representing a key/value pair of the dictionary is set to 1, meaning you’re interested in the value part of the pair. The above should generate the following output: http://www.packtpub.com/article/layouts-in-ext-js http://www.packtpub.com/wordpress-plug-in-development If you want to iterate over the dictionary keys, you could use the following code: for post in posts.items(): print post[0] This should give you the following lines: Layouts in Ext JS WordPress Plug-in Development (Beginner's Guide) Now that you have an idea of how database data can be represented in Python, let’s look at an example of how you might persist it to a database. Here is a quick example that illustrates how you might persist the posts dictionary to the posts database table. import MySQLdb db=MySQLdb.connect(host="localhost",user="usrsample",passwd="pswd",db="dbsample") c=db.cursor() for post in posts.items(): c.execute("""INSERT INTO posts (title, guid, pubDate) VALUES (%s,%s,%s)""", (post[0], post[1]['guid'], post[1]['pubDate'])) db.commit() db.close() Assuming you have populated the posts dictionary with data as discussed at the beginning of the article, the above code should insert two records into the posts table. 
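The scripts in this article assume that the posts table already exists; its definition comes from the first article in this series. If you need to recreate it, a minimal sketch along the following lines should be compatible with the INSERT statement used above. The column types and sizes here are illustrative guesses rather than the original schema:

import MySQLdb

db = MySQLdb.connect(host="localhost", user="usrsample", passwd="pswd", db="dbsample")
c = db.cursor()
# minimal posts table; title doubles as the primary key,
# matching its use as the dictionary key in the examples above
c.execute("""CREATE TABLE IF NOT EXISTS posts (
                 title   VARCHAR(255) NOT NULL PRIMARY KEY,
                 guid    VARCHAR(255),
                 pubDate VARCHAR(64)
             )""")
db.commit()
db.close()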
Fetching Database Records You typically persist data to the database in order to retrieve it from there later. How can you retrieve data from the posts table? The following script answers this question: import MySQLdb db=MySQLdb.connect(host="localhost",user="usrsample",passwd="pswd",db="dbsample") c=db.cursor() c.execute("SELECT * FROM posts") c.fetchall() The cursor’s fetchall method in the above code fetches all the rows retrieved by the query, making them available as a list of tuples. To iterate over this list, you could use the following loop: for row in c: print row db.close() This should produce the following output: ('Layouts in Ext JS','http://www.packtpub.com/article/layouts-in- ext-js','Fri, 28 Nov 2008 10:31:03 +0000') ("WordPress Plug-in Development (Beginner's Guide)",'http://www.packtpub.com/wordpress-plug- in-development','Fri, 28 Nov 2008 00:00:00 +0000') As you can see, each line in the above output represents a tuple rather than a dictionary. To have a dictionary instead, you will need to obtain the column names along with the rows being fetched. You can do this with the help of the cursor’s description read-only attribute, as illustrated in the updated script below: import MySQLdb db=MySQLdb.connect(host="localhost",user="usrsample",passwd="pswd",db="dbsample") c=db.cursor() c.execute("SELECT * FROM posts") heads = [d[0] for d in c.description] c.fetchall() for row in c: print dict(zip(heads,row)) db.close() The output should give you a set of dictionaries each of which represents a record in the posts table. But how can you get the data so that it is structured as it were in the posts dictionary discussed at the beginning of the article? To do this, you could revise the above script as follows: import MySQLdb db=MySQLdb.connect(host="localhost",user="usrsample",passwd="pswd",db="dbsample") c=db.cursor() c.execute("SELECT * FROM posts") heads = [d[0] for index, d in enumerate(c.description) if index > 0] c.fetchall() print heads posts={} for row in c: posts[row[0]]= dict(zip(heads,[r for index, r in enumerate(row) if index > 0])) print posts db.close() Notice the use of comprehension lists in the above code. First time, you use it to exclude the first column head from the heads list. Then, you use a similar technique to exclude the first field from each row when iterating fetched rows in the loop. As a result, you should have the same posts dictionary as you saw at the beginning of the article. Customizing Built-in Types to Simulate Trigger Functionality In the world of relational databases, triggers are programs stored inside the database, which run implicitly in response to a certain event. For example, you can define a BEFORE INSERT trigger on a certain table, so that it fires just before a new record is inserted into that table. It is interesting to note that triggers can be used in MySQL starting with version 5.0. If you have an older MySQL version, you won’t be able to take advantage of triggers. In that case, though, you still can simulate trigger functionality on the Python side of your application. So, you want to define triggers on the data structures implemented in Python, much like you would do that in the underlying database. To achieve this, you could for example subclass the Python’s dict built-in type, overriding the __setitem__ method so that it takes the appropriate action implicitly whenever a new item is added. Next, you could use this customized dict’s subclass instead of dict. Consider the following example. 
Suppose you want to implement the BEFORE INSERT trigger functionality on the posts dictionary, so that it restricts inserting new items to those that represent an article from the Packt Article Network. To achieve this, you will need to override the dict’s __setitem__ method so that it checks to see whether the value of the item’s guid includes the following fragment: http://www.packtpub.com/article/. Below, you create the dict’s subclass called artdict, and then use this subclass to create the posts dictionary, populating it with the same data you used at the beginning of the article: class artdict(dict): def __setitem__(self, key, value): x = 'http://www.packtpub.com/article/' if (x in value['guid']): super(artdict, self).__setitem__(key, value) posts=artdict() posts["Layouts in Ext JS"] = {"guid":"http://www.packtpub.com/article/layouts-in-ext- js","pubDate":"Fri, 28 Nov 2008 10:31:03 +0000"} posts["WordPress Plug-in Development (Beginner's Guide)"] = {"guid":"http://www.packtpub.com/ wordpress-plug-in-development","pubDate":"Fri, 28 Nov 2008 00:00:00 +0000"} print posts Although you have tried to insert two records into the posts dictionary, only the first insertion should have succeeded. So the print should generate the following output: {"Layouts in Ext JS":{"guid":"http://www.packtpub.com/article/layouts-in- ext-js","pubDate":"Fri, 28 Nov 2008 10:31:03 +0000"}} The second item, whose title is WordPress Plug-in Development (Beginner's Guide), was excluded because its guid does not include substring http://www.packtpub.com/article/. This is because this item is not associated with an article but the book. Summary As you learned in this article, Python language provides a wide variety of useful tools to deal with structured data. You can utilize lists, tuples, list comprehensions, and dictionaries when it comes accessing and manipulating data stored in the underlying database. You can even customize the above built-in types to meet the requirements of your application.
Building a Flex Type-Ahead Text Input

Packt
15 Mar 2010
7 min read
Here is an example of how google.com implements the type-ahead list using DHTML:

As you can see, once 'type-ahead' is typed into the text field, the user is given a selection of possible search phrases that Google is already aware of. My intention with this article is to build a type-ahead list in Flex. To start, let's narrow down the scope of the application and make it easy to expand on. We'll create an application that is used primarily for searching for fruits. Our basic Fruit Finder application will consist of a form with a TextInput field. The TextInput field will allow the user to type in a fruit name and will automatically suggest a fruit if one partially matches an entry in our list of fruits.

1. Building a Basic Form

To start, here is what our form looks like:

The XML which creates this user interface is quite simple:

<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">
    <mx:Panel title="Fruit Finder">
        <mx:Form>
            <mx:FormHeading label="Fruit Finder"/>
            <mx:FormItem label="Fruit Name">
                <mx:TextInput id="fruit"/>
            </mx:FormItem>
            <mx:FormItem>
                <mx:Button label="Search"/>
            </mx:FormItem>
        </mx:Form>
    </mx:Panel>
</mx:Application>

You'll notice the normal XML version declaration, the Application tag, a Panel tag, and finally the Form tag. Nothing too complicated so far. If you are unfamiliar with the basics of Flex or forms in Flex, you should take this opportunity to visit Adobe's website and explore them. This XML code gives us 90% of our GUI. In the coming steps we will define the elements that make up the fruit list, which appears as the user types. Next, we need to define our list of fruits.

2. Adding Data to Our Type-Ahead List

Now that we have the beginnings of our GUI, let's start building our fruit list. Thinking ahead a bit, I know that we will have to display a list of fruits to the user. The simplest Flex control to use for this job is the List control. We will be dynamically adding the List to the application's display list via ActionScript, but for now we just need to define the data which will be displayed in the list. We start by adding a Script tag and creating an ArrayCollection inside it. You will have to use an import statement to make the ArrayCollection class available. Our ArrayCollection constructor is passed an array of fruit names. Here is what the code looks like:

<mx:Script>
<![CDATA[
    import mx.collections.ArrayCollection;

    public var fruitList:ArrayCollection = new ArrayCollection(
        ['apple', 'orange', 'banana', 'kiwi', 'avocado', 'tomato', 'squash', 'cucumber']);
]]>
</mx:Script>

Hard-coding the list of items this way is not common in practice. In a real-world use, getting this list of items from an XML source is more likely (especially in web applications), but it will work for our demonstration; a sketch of the XML-based approach follows below. Now that our fruit list is defined, we just need to connect it to the type-ahead list, which we will create in the next step.

Links:
http://livedocs.adobe.com/flex/3/html/help.html?content=databinding_4.html
http://livedocs.adobe.com/flex/3/langref/mx/collections/ArrayCollection.html
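To illustrate that more realistic setup, the fruit names could be loaded over HTTP when the application starts. This is only a sketch under a few assumptions: the file name fruits.xml, its <fruits><fruit>apple</fruit>...</fruits> structure, and the fruitsLoaded handler are hypothetical and not part of the original article.

<mx:HTTPService id="fruitService" url="fruits.xml" resultFormat="e4x" result="fruitsLoaded(event)"/>

<mx:Script>
<![CDATA[
    import mx.collections.ArrayCollection;
    import mx.rpc.events.ResultEvent;

    // This would replace the hard-coded fruitList declaration from step 2.
    [Bindable]
    public var fruitList:ArrayCollection = new ArrayCollection();

    // Fills fruitList from the loaded XML once the (assumed) fruits.xml file arrives.
    public function fruitsLoaded(event:ResultEvent):void
    {
        fruitList.removeAll();
        for each (var fruitNode:XML in XML(event.result).fruit)
        {
            fruitList.addItem(fruitNode.toString());
        }
    }
]]>
</mx:Script>

The request itself would typically be kicked off from the Application tag, for example creationComplete="fruitService.send()". For this article, though, the hard-coded ArrayCollection above is all we need.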
3. Triggering the Appearance of Our Type-Ahead List

It is common in modern web applications for the type-ahead list to appear automatically as the user types. We will add this functionality to our application by using the keyUp event. Simply put, when the user begins typing into our TextInput field, we will determine whether the type-ahead list has already been created. For the first key press, there will be no type-ahead list. In this case we need to create the list, set its data provider to fruitList (step 2), and add it to the UI. We will also need to position the type-ahead list beneath the TextInput field so that the user is properly cued as to what is happening.

To start our implementation of the type-ahead text input, we use the keyUp event. We change our FormItem tag surrounding the TextInput field to look like this:

<mx:FormItem label="Fruit Name" keyUp="filterFruits(event)">

We then define a filterFruits function like so:

public function filterFruits(event:KeyboardEvent):void
{
    // if the type-ahead list is not present, create it
    if (typeAheadList == null)
    {
        // create the list and assign the dataProvider
        typeAheadList = new List();
        typeAheadList.dataProvider = fruitList;
        // add the list to the screen
        this.addChild(typeAheadList);
    }
}

In the above code we programmatically create a List control and immediately assign the data provider to it. Lastly, we add the child to the application. Our function does everything that we need for a type-ahead text input, with the exception of positioning the type-ahead list in the correct place. Here is what our app currently looks like:

We are making progress, but without correct positioning our type-ahead list creates a bad user experience. To move this list to the correct location we need to use the localToGlobal method to translate coordinate systems. This requires a short explanation. Flex has multiple coordinate systems on the Flash stage that you can use to position your controls and components properly. The first is called the global coordinate system. This system starts at the upper left-hand corner of the Flash stage and extends down and to the right. The second is called the local coordinate system, which starts at the upper left-hand corner of a component. There is also a content coordinate system, which encompasses a component's content. For our purposes we only need to focus on the local and global systems.

Link:
http://livedocs.adobe.com/flex/3/html/help.html?content=containers_intro_5.html

Our goal here is to place our list directly beneath the fruit TextInput field. To accomplish this, we must first grab the coordinates of the fruit TextInput field. Here is the code for retrieving them:

var p1:Point = new Point(fruit.x, fruit.y);

We use the Point type, which receives the x and y coordinates of the fruit control. p1 now holds the point in the local coordinate system. You may ask, "what is it local to?" In this case it is local to its parent container, which is the FormItem. In order to convert this point to the global system we need to use the localToGlobal method:

var p2:Point = fruit_form_item.localToGlobal(p1);

p2 now contains the converted coordinates. Note that we added the id fruit_form_item to the FormItem tag which is the parent of our fruit TextInput. From here we can place the fruit List in the correct place in our application:

typeAheadList.x = p2.x;
typeAheadList.y = p2.y + fruit.height;
// set the width
typeAheadList.width = fruit.width;

Notice above that we added fruit.height to the y value of the typeAheadList. This is necessary so that the list does not block the view of the TextInput field; we are moving it down by n pixels, where n is the height of the TextInput field. We also set the x coordinate and width of our list so that it lines up with the TextInput field. Here is what the final result for this step looks like:
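The excerpt above ends before showing how the list is actually narrowed down to match what the user has typed. One possible way to finish the filterFruits function, offered here as a sketch rather than as the original article's solution, is to apply a filterFunction to the fruitList collection after the creation and positioning steps:

public function filterFruits(event:KeyboardEvent):void
{
    // ... list creation and positioning code from the steps above ...

    // show only the fruits that match the current contents of the TextInput
    fruitList.filterFunction = matchesTypedText;
    fruitList.refresh();
}

private function matchesTypedText(item:Object):Boolean
{
    // keep a fruit name if it starts with the text typed into the fruit field
    return String(item).toLowerCase().indexOf(fruit.text.toLowerCase()) == 0;
}

A further refinement would be to clear the filter and remove the list again once the field is emptied.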
Flex Multi-List Selector using List Control, DataGrid, and the Accordion

Packt
02 Mar 2010
3 min read
Instead of files and directories, I'm going to use states, counties, and cities. Essentially, this application will be used to give the user an easy way to select a city. Flex offers many components that can help us build this application. The controls I immediately consider for the job are the List control, the DataGrid, and the Accordion (in combination with the List). The List is the obvious control to start with because it represents the data in the right way: a list of states, counties, and cities. The reason I also considered the DataGrid and the Accordion (with the List) is that they both have a header, and I want an easy way to label the three columns/lists 'States', 'Counties', and 'Cities'. With that said, I selected the Accordion-with-List option. Using this option also allows for future expansion of the tool; for instance, one could adapt the tool to add country, then state, county, and city. The Accordion naturally has this grouping capability.

Our first code block contains our basic UI. The structure is pretty simple. The layout of the application is vertical, and I've added an HBox which contains the main components of the application. The basic structure of each column is a List control inside a Canvas container, which is inside an Accordion control. The Canvas is there because an Accordion must have a container as a child, and a List is not part of the container package. We repeat this three times, once for each column, and give each the appropriate name.

<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" horizontalGap="0" layout="vertical">
    <mx:HBox width="100%" height="100%">

        <!-- States -->
        <mx:Accordion id="statesAccoridon" width="100%" height="100%">
            <mx:Canvas width="100%" height="100%" label="States">
                <mx:List id="statesList" width="100%" height="100%"
                    dataProvider="{locations.state.@name}" click="{selectCounties()}"/>
            </mx:Canvas>
        </mx:Accordion>

        <!-- Counties -->
        <mx:Accordion id="countiesAccoridon" width="100%" height="100%">
            <mx:Canvas width="100%" height="100%" label="Counties">
                <mx:List id="countiesList" width="100%" height="100%" click="selectCities()"/>
            </mx:Canvas>
        </mx:Accordion>

        <!-- Cities -->
        <mx:Accordion id="citiesAccoridon" width="100%" height="100%">
            <mx:Canvas width="100%" height="100%" label="Cities">
                <mx:List id="citiesList" width="100%" height="100%"/>
            </mx:Canvas>
        </mx:Accordion>

    </mx:HBox>

    <!-- Selected City -->
    <mx:Label text="{citiesList.selectedItem}"/>

    <mx:Script>
        <![CDATA[
            public function selectCounties():void
            {
                countiesList.dataProvider =
                    locations.state.(@name==statesList.selectedItem).counties.county.@name;
            }

            public function selectCities():void
            {
                citiesList.dataProvider =
                    locations.state.(@name==statesList.selectedItem).counties.county.(@name==countiesList.selectedItem).cities.city.@name;
            }
        ]]>
    </mx:Script>
</mx:Application>

I've set the width and height of all containers to 100%. This will make it easy to later embed this application into a web page or another Flex application as a module. Also notice that the dataProvider attribute is only set on the statesList; the countiesList and the citiesList are not populated until a state is selected. Those dataProviders are set in ActionScript and are triggered by the click event listeners on the two lists. Here is what the start of our selector looks like:
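The lists above bind to a locations object that this excerpt never defines. One possible shape for that data, given the E4X expressions used in selectCounties and selectCities, is an inline XML model like the following; the specific state, county, and city names are placeholders for illustration and are not taken from the original article:

<mx:XML id="locations" format="e4x">
    <locations>
        <state name="California">
            <counties>
                <county name="Los Angeles">
                    <cities>
                        <city name="Long Beach"/>
                        <city name="Pasadena"/>
                    </cities>
                </county>
                <county name="Orange">
                    <cities>
                        <city name="Anaheim"/>
                        <city name="Irvine"/>
                    </cities>
                </county>
            </counties>
        </state>
    </locations>
</mx:XML>

With data shaped this way, locations.state.@name yields the state names for the first list, and the E4X filters in selectCounties and selectCities drill down to the matching county and city names. In a real application the same structure would more likely be loaded from an external XML file or a web service rather than embedded in the MXML.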

JBoss RichFaces 3.3 Supplemental Installation

Packt
19 Feb 2010
6 min read
This installation guide is for the Windows platform.

JBoss Server Installation

In order to run any web application, an application server is needed. The JBoss server is an industry standard and is ideal for running Seam and RichFaces applications. Downloading the server is a very simple task. First go to the JBoss download page and download the 4.2.2.GA version of the JBoss server. Save it to a directory for downloads such as c:\downloads. Unzip the file to the c:\ directory. After unzipping, you should have a folder named c:\jboss-4.2.2.GA. Test the server installation by going into c:\jboss-4.2.2.GA\bin and running the file run.bat. A command window should open with server logs. At completion, the logs will indicate that the server has started.

Starting JBoss Server Within Eclipse

Although you can start the JBoss server from the run.bat file, for the purposes of development and learning RichFaces, it is more valuable to start the JBoss server within Eclipse. The Eclipse IDE provides support to run, shut down, and adjust settings for servers.

In Eclipse, we will work from the Java Perspective. Change Eclipse to the Java Perspective: go to Window->Open Perspective->Java. We also need a tab for servers: go to Window->Show View->Other->Servers->Servers. In the Servers tab, right-click and go to New->Server. Here we are defining a server to launch within Eclipse. Choose JBoss->JBoss v4.2->Next. Choose the JRE, which will typically be the path to where Java is installed. For the Application Server Directory, choose c:\jboss-4.2.2.GA, then click Next. Accept the defaults for Address, Port, JNDI Port, and Server Configuration. Click Next->Finish.

Next, the server settings need to be adjusted. Double-click on the JBOSS 4.2 entry in the Servers tab to bring up the settings menu in Eclipse. Click on the edit menu on the right-hand side.

Figure 1 - JBoss Server settings

Uncheck all check boxes and set the Server Timeout Delay to Unlimited. The server is now ready to run. In the Servers tab, right-click on the JBOSS 4.2 entry and choose Start. Go to the Console tab and you will see the server logs. At completion, the logs should indicate that the server has started.

MySql Installation

MySql is the database used to store information in the example applications. Once MySql is installed, the example applications can connect to a persistent store and the developer will be able to see data saved as the application is exercised. Go to the MySql download page and retrieve the installation file. Look for the MSI file labeled mysql-essential-5.1.42-winx64.msi (or a similar version). The MSI file is easiest to install as it gives you a wizard to guide you through the process. Once the file is saved, double-click on it to initiate installation. Choose all the default options. When the id and password are requested, choose root as both the id and password; this is easy to remember for development purposes. Verify the installation of MySql by looking for the shortcuts placed in the Windows Programs menu. Also verify that MySql has been installed as a Windows service. The easiest way to do this is to go to Start->Run in Windows and type services.msc. The services dialogue box should have a MySql entry. Make sure the MySql service is started.

Run MySql Command Client

In order to operate the MySql database, you can use the provided command-line client. The client enables the user to look up tables, execute operational commands, and run SQL statements. In the Windows Start menu, go to Start->Programs->MySql->MySql Server 5.x->MySql Command Line Client.
Type in root for the password. A mysql prompt will appear. The command-line tool is used for creating the database for the example applications. In order to import a SQL script, use the command source <path>. For example:

source c:\adv_contact_manager_create.sql

For a full list of commands for MySql, see the online manual: http://dev.mysql.com/doc/refman/5.1/en/index.html.

Download and Install MySql JDBC Connector

In order for Java applications to connect to MySql through JDBC, a connector jar is needed. MySql provides connectivity for client applications developed in the Java programming language via a JDBC driver called MySql Connector/J. Go to the connector download page and retrieve the zip file. Unzip the file to a directory and identify the file mysql-connector-java-5.1.10-bin.jar. Copy this file to the default server lib directory so that it is accessible by all applications:

C:\jboss-4.2.2.GA\server\default\lib

Build and Deploy Example Applications

In order to see the application that is being developed, it is necessary to build and deploy the application onto the server. Applications generated by the seam-gen tool come with a build script armed with many build tasks. Eclipse provides Ant support, so we can use it to operate the build file provided within the example application. In Eclipse, with the application loaded as a project, open the Ant view: go to Window->Show View->Ant. The Ant view will be displayed. Now load the build.xml in order to operate the Ant targets. Right-click in the Ant view and select Add Buildfiles, then choose the build.xml for the application. A list of Ant targets will be loaded. In order to execute a task, simply double-click on it. The Console window will display the executed statements.

Seam-gen offers several tasks, but a notable few are very useful:

deploy – builds and deploys the application to the server
undeploy – deletes the application from the server
purge – deletes temporary server files associated with the application
clean – deletes packaged application files from the local distribution directory

If the deploy task fails, simply go to the JBoss deployment directory and delete the installed application:

C:\jboss-4.2.2.GA\server\default\deploy

Applications can also be copied directly into this directory for deployment. Inversely, applications can be deleted directly from this directory for un-deployment. With these basic installations complete, running the example applications should be simple and you will be on your way to mastering RichFaces 3.3.

Summary

In this article, we discussed the following:

JBoss Server Installation
Starting JBoss Server within Eclipse
MySql Installation
Build and Deploy Example applications