
How-To Tutorials - Web Development


Using a LINQ query in LINQPad

Packt
05 Sep 2013
3 min read
(For more resources related to this topic, see here.)

The standard version

We are going to implement a simple scenario: given a deck of 52 cards, we want to pick a random number of cards and then take out all of the hearts. From this stack of hearts, we will discard the first two, take the next five cards (if possible), and order them by their face value for display. You can try it in a C# program query in LINQPad:

```csharp
public static Random random = new Random();

void Main()
{
    var deck = CreateDeck();
    var randomCount = random.Next(52);
    var hearts = new Card[randomCount];
    var j = 0;
    // take all of the hearts out
    for (var i = 0; i < randomCount; i++)
    {
        if (deck[i].Suit == "Hearts")
        {
            hearts[j++] = deck[i];
        }
    }
    // resize the array to avoid null references
    Array.Resize(ref hearts, j);
    // check that we have at least 2 cards; if not, stop
    if (hearts.Length <= 2) return;
    // check how many cards we can take
    var count = hearts.Length - 2;
    // the most we need to take is 5
    if (count > 5)
    {
        count = 5;
    }
    // take the cards
    var finalDeck = new Card[count];
    Array.Copy(hearts, 2, finalDeck, 0, count);
    // now order the cards
    Array.Sort(finalDeck, new CardComparer());
    // display the result
    finalDeck.Dump();
}

public class Card
{
    public string Suit { get; set; }
    public int FaceValue { get; set; }
}

// create the card deck
public Card[] CreateDeck()
{
    var suits = new[] { "Spades", "Clubs", "Hearts", "Diamonds" };
    var deck = new Card[52];
    for (var i = 0; i < 52; i++)
    {
        deck[i] = new Card
        {
            Suit = suits[i / 13],
            FaceValue = i - (13 * (i / 13)) + 1
        };
    }
    // randomly shuffle the deck
    for (var i = deck.Length - 1; i > 0; i--)
    {
        var j = random.Next(i + 1);
        var tmp = deck[j];
        deck[j] = deck[i];
        deck[i] = tmp;
    }
    return deck;
}

// CardComparer compares two cards by their face value
public class CardComparer : Comparer<Card>
{
    public override int Compare(Card x, Card y)
    {
        return x.FaceValue.CompareTo(y.FaceValue);
    }
}
```

Even if we didn't consider the CreateDeck() method, we had to do quite a few operations to produce the
expected result (your values might be different, as we are using random cards). The output is as follows:

Depending on the data, LINQPad will add contextual information. For example, in this sample it will add a bottom row with the sum of all the values (here, only FaceValue). Also, if you click on the horizontal graph button, you will get a visual representation of your data, as shown in the following screenshot. This information is not always relevant, but it can help you explore your data.

Summary

In this article we saw how LINQ queries can be used in LINQPad. The powerful query capabilities of LINQ have been utilized to the maximum in LINQPad.

Resources for Article:

Further resources on this subject:
Displaying SQL Server Data using a Linq Data Source [Article]
Binding MS Chart Control to LINQ Data Source Control [Article]
LINQ to Objects [Article]
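Since the article goes on to contrast this imperative version with a LINQ query, it may help to see the same filter/skip/take/sort pipeline in declarative form. The sketch below is illustrative only: it uses JavaScript array methods rather than C#/LINQ, and a small fixed hand instead of a random deck, so the result is deterministic.

```javascript
// Illustrative sketch only: the same filter/skip/take/sort shape as the
// imperative C# version, written with JavaScript array methods.
// The sample hand below is made up for the example.
function pickHearts(cards) {
  return cards
    .filter(card => card.suit === "Hearts")     // take all of the hearts out
    .slice(2, 2 + 5)                            // discard the first two, take up to five
    .sort((a, b) => a.faceValue - b.faceValue); // order by face value
}

const sampleCards = [
  { suit: "Hearts", faceValue: 9 },
  { suit: "Spades", faceValue: 2 },
  { suit: "Hearts", faceValue: 4 },
  { suit: "Hearts", faceValue: 12 },
  { suit: "Hearts", faceValue: 1 },
  { suit: "Clubs",  faceValue: 7 },
  { suit: "Hearts", faceValue: 6 },
];

console.log(pickHearts(sampleCards)); // three hearts, ordered 1, 6, 12
```

Each chained call mirrors one LINQ operator: filter is roughly Where, slice is Skip/Take, and sort is OrderBy.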


Facebook: Accessing Graph API

Packt
18 Jan 2011
8 min read
Facebook Graph API Development with Flash

- Build social Flash applications fully integrated with the Facebook Graph API
- Build your own interactive applications and games that integrate with Facebook
- Add social features to your AS3 projects without having to build a new social network from scratch
- Learn how to retrieve information from Facebook's database
- A hands-on guide with step-by-step instructions and clear explanations that encourage experimentation and play

Accessing the Graph API through a Browser

We'll dive right in by taking a look at how the Graph API represents the information from a public Page. When I talk about a Page with a capital P, I don't just mean any web page within the Facebook site; I'm referring to a specific type of page, also known as a public profile. Every Facebook user has their own personal profile; you can see yours by logging in to Facebook and clicking on the "Profile" link in the navigation bar at the top of the site. Public profiles look similar, but are designed to be used by businesses, bands, products, organizations, and public figures as a way of having a presence on Facebook. This means that many people have both a personal profile and a public profile. For example, Mark Zuckerberg, the CEO of Facebook, has a personal profile at http://www.facebook.com/zuck and a public profile (a Page) at http://www.facebook.com/markzuckerberg. This way, he can use his personal profile to keep in touch with his friends and family, while using his public profile to connect with his fans and supporters.

There is a second type of Page: a Community Page. Again, these look very similar to personal profiles; the difference is that they are based on topics, experiences, and causes, rather than entities. Also, they automatically retrieve information about the topic from Wikipedia, where relevant, and contain a live feed of wall posts talking about the topic. All this can feel a little confusing – don't worry about it!
Once you start using it, it all makes sense.

Time for action – loading a Page

Browse to http://www.facebook.com/PacktPub to load Packt Publishing's Facebook Page. You'll see a list of recent wall posts, an Info tab, some photo albums (mostly containing book covers), a profile picture, and a list of fans and links. That's how website users view the information. How will our code "see" it? Take a look at how the Graph API represents Packt Publishing's Page by pointing your web browser at https://graph.facebook.com/PacktPub. This is called a Graph URL – note that it's the same URL as the Page itself, but with a secure https connection and using the graph subdomain rather than www. What you'll see is as follows:

```json
{
  "id": "204603129458",
  "name": "Packt Publishing",
  "picture": "http://profile.ak.fbcdn.net/hprofile-ak-snc4/hs302.ash1/23274_204603129458_7460_s.jpg",
  "link": "http://www.facebook.com/PacktPub",
  "category": "Products_other",
  "username": "PacktPub",
  "company_overview": "Packt is a modern, IT focused book publisher, specializing in producing cutting-edge books for communities of developers, administrators, and newbies alike.\n\nPackt published its first book, Mastering phpMyAdmin for MySQL Management in April 2004.",
  "fan_count": 412
}
```

What just happened?

You just fetched the Graph API's representation of the Packt Publishing Page in your browser. The Graph API is designed to be easy to pick up – practically self-documenting – and you can see that it's a success in that respect. It's pretty clear that the previous data is a list of fields and their values. The one field that's perhaps not clear is id; this number is what Facebook uses internally to refer to the Page. This means Pages can have two IDs: the numeric one assigned automatically by Facebook, and an alphanumeric one chosen by the Page's owner.
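Because a Graph URL returns plain JSON, any environment with a JSON parser can consume it, not just a browser or Flash. As a quick sketch (in JavaScript, using a trimmed copy of the sample response shown above rather than a live network call):

```javascript
// Parse the Graph API response shown above and read a few fields.
// The JSON below is a trimmed copy of the sample Packt Publishing data
// from this article, not a live request.
const response = `{
  "id": "204603129458",
  "name": "Packt Publishing",
  "link": "http://www.facebook.com/PacktPub",
  "category": "Products_other",
  "username": "PacktPub",
  "fan_count": 412
}`;

const page = JSON.parse(response);
console.log(page.name);     // "Packt Publishing"
console.log(page.id);       // numeric ID assigned by Facebook: "204603129458"
console.log(page.username); // alphanumeric ID chosen by the owner: "PacktPub"
```

The two id fields illustrate the point made above: the numeric id is assigned by Facebook, while the username is the owner-chosen alias.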
The two IDs are equivalent: if you browse to https://graph.facebook.com/204603129458, you'll see exactly the same data as if you browse to https://graph.facebook.com/PacktPub.

Have a go hero – exploring other objects

Of course, the Packt Publishing Page is not the only Page you can explore with the Graph API in your browser. Find some other Pages through the Facebook website in your browser, then, using the https://graph.facebook.com/id format, take a look at their Graph API representations. Do they have more information, or less? Next, move on to other types of Facebook objects: personal profiles, events, groups. For personal profiles, the id may be alphanumeric (if the person has signed up for a custom Facebook Username at http://www.facebook.com/username/), but in general the id will be numeric and auto-assigned by Facebook when the user signed up. For certain types of objects (like photo albums), the value of id will not be obvious from the URL within the Facebook website. In some cases, you'll get an error message like:

```json
{
  "error": {
    "type": "OAuthAccessTokenException",
    "message": "An access token is required to request this resource."
  }
}
```

Accessing the Graph API through AS3

Now that you've got an idea of how easy it is to access and read Facebook data in a browser, we'll see how to fetch it in AS3.

Time for action – retrieving a Page's information in AS3

Set up the project. Check that the project compiles with no errors (there may be a few warnings, depending on your IDE). You should see a 640 x 480 px SWF, all white, with just three buttons in the top-left corner: Zoom In, Zoom Out, and Reset View. This project is the basis for a Rich Internet Application (RIA) that will be able to explore all of the information on Facebook using the Graph API. All the code for the UI is in place, just waiting for some Graph data to render. Our job is to write code to retrieve the data and pass it on to the renderers.
I'm not going to break down the entire project and explain what every class does. What you need to know at the moment is that a single instance of the controllers.CustomGraphContainerController class is created when the project is initialized, and that it is responsible for directing the flow of data to and from Facebook. It inherits some useful methods for this purpose from the controllers.GCController class; we'll make use of these later on. Open the CustomGraphContainerController class in your IDE. It can be found in src/controllers/CustomGraphContainerController.as, and should look like the listing below:

```actionscript
package controllers
{
  import ui.GraphControlContainer;

  public class CustomGraphContainerController extends GCController
  {
    public function CustomGraphContainerController(
        a_graphControlContainer:GraphControlContainer)
    {
      super(a_graphControlContainer);
    }
  }
}
```

The first thing we'll do is grab the Graph API's representation of Packt Publishing's Page via a Graph URL, like we did using the web browser. For this we can use a URLLoader. The URLLoader and URLRequest classes are used together to download data from a URL. The data can be text, binary data, or URL-encoded variables. The download is triggered by passing a URLRequest object, whose url property contains the requested URL, to the load() method of a URLLoader. Once the required data has finished downloading, the URLLoader dispatches a COMPLETE event. The data can then be retrieved from its data property.
Modify CustomGraphContainerController.as like so (the highlighted lines are new):

```actionscript
package controllers
{
  import flash.events.Event;
  import flash.net.URLLoader;
  import flash.net.URLRequest;
  import ui.GraphControlContainer;

  public class CustomGraphContainerController extends GCController
  {
    public function CustomGraphContainerController(
        a_graphControlContainer:GraphControlContainer)
    {
      super(a_graphControlContainer);

      var loader:URLLoader = new URLLoader();
      var request:URLRequest = new URLRequest();

      //Specify which Graph URL to load
      request.url = "https://graph.facebook.com/PacktPub";

      loader.addEventListener(Event.COMPLETE, onGraphDataLoadComplete);

      //Start the actual loading process
      loader.load(request);
    }

    private function onGraphDataLoadComplete(a_event:Event):void
    {
      var loader:URLLoader = a_event.target as URLLoader;

      //Obtain whatever data was loaded, and trace it
      var graphData:String = loader.data;
      trace(graphData);
    }
  }
}
```

All we're doing here is downloading whatever information is at https://graph.facebook.com/PacktPub and tracing it to the output window. Test your project, and take a look at your output window. You should see the following data:

{"id":"204603129458","name":"Packt Publishing","picture":"http://profile.ak.fbcdn.net/hprofile-ak-snc4/hs302.ash1/23274_204603129458_7460_s.jpg","link":"http://www.facebook.com/PacktPub","category":"Products_other","username":"PacktPub","company_overview":"Packt is a modern, IT focused book publisher, specializing in producing cutting-edge books for communities of developers, administrators, and newbies alike.\n\nPackt published its first book, Mastering phpMyAdmin for MySQL Management in April 2004.","fan_count":412}

If you get an error, check that your code matches the previously mentioned code. If you see nothing in your output window, make sure that you are connected to the Internet.
If you still don't see anything, it's possible that your security settings prevent you from accessing the Internet via Flash, so check those.  


Working with Events

Packt
08 Jan 2016
7 min read
In this article by Troy Miles, author of the book jQuery Essentials, we will learn that an event is the occurrence of anything that the system considers significant. It can originate in the browser, the form, the keyboard, or any other subsystem, and it can also be generated by the application via a trigger. An event can be as simple as a key press or as complex as the completion of an Ajax request.

(For more resources related to this topic, see here.)

While there are a myriad of potential events, events only matter when the application listens for them. This is also known as hooking an event. By hooking an event, you tell the browser that this occurrence is important to you and to let you know when it happens. When the event occurs, the browser calls your event handling code, passing the event object to it. The event object holds important event data, including which page element triggered it. Let's take a look at the first learned and possibly most important event, the ready event.

The ready event

The first event that programmers new to jQuery usually learn about is the ready event, sometimes referred to as the document ready event. This event signifies that the DOM is fully loaded and that jQuery is open for business. The ready event is similar to the document load event, except that it doesn't wait for all of the page's images and other assets to load; it only waits for the DOM to be ready. Also, unlike most events, if the ready event fires before it is hooked, the handler code will still be called. The .ready() event can only be attached to the document element. When you think about it, this makes sense, because it fires when the DOM, also known as the Document Object Model, is fully loaded. The .ready() event has a few different hooking styles. All of the styles do the same thing: hook the event. Which one you use is up to you.
In its most basic form, the hooking code looks similar to the following:

```javascript
$(document).ready(handler);
```

As it can only be attached to the document element, the selector can be omitted, in which case the event hook looks as follows:

```javascript
$().ready(handler);
```

However, the jQuery documentation does not recommend using the preceding form. There is an even terser version of this event's hook. This version omits nearly everything, passing only an event handler to the jQuery function. It looks similar to the following:

```javascript
$(handler);
```

While all of the different styles work, I only recommend the first form because it is the clearest. While the other forms work and save a few bytes' worth of characters, they do so at the expense of code clarity. If you are worried about the number of bytes an expression uses, you should use a JavaScript minimizer instead; it will do a much more thorough job of shrinking code than you could ever do by hand. The ready event can be hooked as many times as you'd like. When the event is triggered, the handlers are called in the order in which they were hooked. Let's take a look at an example in code:

```javascript
// ready event style no# 1
$(document).ready(function () {
  console.log("document ready event handler style no# 1");
  // we're in the event handler so the event has already fired.
  // let's hook it again and see what happens
  $(document).ready(function () {
    console.log("We get this handler even though the ready event has already fired");
  });
});

// ready event style no# 2
$().ready(function () {
  console.log("document ready event handler style no# 2");
});

// ready event style no# 3
$(function () {
  console.log("document ready event handler style no# 3");
});
```

In the preceding code, we hook the ready event three times, each one using a different hooking style. The handlers are called in the same order that they are hooked. In the first event handler, we hook the event again.
As the event has been triggered already, we might expect that the handler will never be called, but we would be wrong. jQuery treats the ready event differently than other events: its handler is always called, even if the event has already been triggered. This makes the ready event a great place for initialization and other code which must be run.

Hooking events

The ready event is different from all of the other events. Its handler will be called only once, unlike other events. It is also hooked differently than other events. All of the other events are hooked by chaining the .on() method to the set of elements that you wish to use to trigger the event. The first parameter passed to the hook is the name of the event, followed by the handling function, which can either be an anonymous function or the name of a function. This is the basic pattern for event hooking:

```javascript
$(selector).on('event name', handlingFunction);
```

The .on() method and its companion, the .off() method, were first added in version 1.7 of jQuery. For older versions of jQuery, the method that is used to hook the event is .bind(). Neither the .bind() method nor its companion, the .unbind() method, are deprecated, but .on() and .off() are preferred over them. If you are switching from .bind(), the call to .on() is identical at its simplest levels. The .on() method has capabilities beyond those of the .bind() method, which require different sets of parameters to be passed to it. If you would like more than one event to share the same handler, simply place the name of the next event after the previous one, with a space separating them:

```javascript
$("#clickA").on("mouseenter mouseleave", eventHandler);
```

Unhooking events

The main method that is used to unhook an event handler is .off(). Calling it is simple; it looks similar to the following:

```javascript
$(elements).off('event name', handlingFunction);
```

Both the event name and the handling function are optional.
If the event name is omitted, then all events that are attached to the elements are removed. If the event name is included, then all handlers for the specified event are removed. This can create problems. Think about the following scenario: you write a click event handler for a button. A bit later in the app's life cycle, someone else also needs to know when the button is clicked. Not wanting to interfere with already working code, they add a second handler. When their code is complete, they remove the handler as follows:

```javascript
$('#myButton').off('click');
```

As .off() was called using only the event name, it removed not only the handler that they added but also all of the handlers for the click event. This is not what was wanted. Don't despair, however; there are two fixes for this problem:

```javascript
function clickBHandler(event) {
  console.log('Button B has been clicked, external');
}
$('#clickB').on('click', clickBHandler);
$('#clickB').on('click', function (event) {
  console.log('Button B has been clicked, anonymous');
  // turn off the 1st handler without turning off the 2nd
  $('#clickB').off('click', clickBHandler);
});
```

The first fix is to pass the event handler to the .off() method. In the preceding code, we placed two click event handlers on the button named clickB. The first event handler is installed using a function declaration, and the second is installed using an anonymous function. When the button is clicked, both of the event handlers are called. The second one turns off the first one by calling the .off() method and passing its event handler as a parameter. By passing the event handler, the .off() method is able to match the signature of the handler that you'd like to turn off. If you are not using anonymous functions, this fix works well. But what if you want to pass an anonymous function as the event handler? Is there a way to turn off one handler without turning off the other? Yes, there is: the second fix is to use event namespacing.
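The pitfall is easy to reproduce without a browser. The following sketch uses a minimal hand-rolled emitter (a hypothetical helper, not jQuery) to show the difference between removing by event name alone and removing a specific handler by reference:

```javascript
// Minimal event emitter sketch (not jQuery) illustrating the .off() pitfall:
// removing by event name alone drops every handler, while removing by
// reference drops only the one you added.
function createEmitter() {
  const handlers = {};
  return {
    on(name, fn) { (handlers[name] = handlers[name] || []).push(fn); },
    off(name, fn) {
      if (!handlers[name]) return;
      // no handler given: remove all handlers for the event, like $(el).off('click')
      handlers[name] = fn ? handlers[name].filter(h => h !== fn) : [];
    },
    trigger(name) { (handlers[name] || []).forEach(h => h()); },
    count(name) { return (handlers[name] || []).length; },
  };
}

const button = createEmitter();
const first = () => console.log('first handler');
const second = () => console.log('second handler');
button.on('click', first);
button.on('click', second);

button.off('click', first);         // surgical: only the first handler is removed
console.log(button.count('click')); // 1

button.off('click');                // blunt: every remaining handler is removed
console.log(button.count('click')); // 0
```

The same contrast applies to jQuery's real .off(): passing the handler reference is the surgical option, while passing only the event name is the blunt one.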
Summary

In this article, we learned a lot about one of the most important constructs in modern web programming: events. They are the things that make a site interactive.

Resources for Article:

Further resources on this subject:
Preparing Your First jQuery Mobile Project [article]
Building a Custom Version of jQuery [article]
Learning jQuery [article]


Testing Exceptional Flow

Packt
03 Sep 2015
22 min read
In this article by Frank Appel, author of the book Testing with JUnit, we will learn that special care has to be taken when testing a component's functionality under exception-raising conditions. You'll also learn how to use the various capture and verification possibilities, and we'll discuss their pros and cons. As robust software design is one of the declared goals of the test-first approach, we're going to see how tests intertwine with the fail fast strategy on selected boundary conditions. Finally, we're going to conclude with an in-depth explanation of working with collaborators under exceptional flow and see how stubbing of exceptional behavior can be achieved. The topics covered in this article are as follows:

- Testing patterns
- Treating collaborators

(For more resources related to this topic, see here.)

Testing patterns

Testing exceptional flow is a bit trickier than verifying the outcome of normal execution paths. The following section will explain why and introduce the different techniques available to get this job done.

Using the fail statement

"Always expect the unexpected"
– adage based on Heraclitus

Testing corner cases often results in the necessity to verify that a functionality throws a particular exception. Think, for example, of a java.util.List implementation. It quits the retrieval attempt of a list's element for a non-existing index number with java.lang.ArrayIndexOutOfBoundsException. Working with exceptional flow is somewhat special: without any precautions, the exercise phase would terminate immediately. But this is not what we want, since it eventuates in a test failure. Indeed, the exception itself is the expected outcome of the behavior we want to check. From this it follows that we have to capture the exception before we can verify anything. As we all know, we do this in Java with a try-catch construct.
The try block contains the actual invocation of the functionality we are about to test. The catch block, again, allows us to get a grip on the expected outcome: the exception thrown during the exercise. Note that we usually keep our hands off Error, so we confine the angle of view in this article to exceptions. So far so good, but we have to bear in mind that in case no exception is thrown, this has to be classified as misbehavior. Consequently, the test has to fail. JUnit's built-in assertion capabilities provide the org.junit.Assert.fail method, which can be used to achieve this. The method unconditionally throws an instance of java.lang.AssertionError if called. The classical approach to testing exceptional flow with JUnit adds a fail statement straight after the functionality invocation within the try block. The idea behind this is that the statement should never be reached if the SUT behaves correctly. But if not, the assertion error marks the test as failed. It is self-evident that capturing should narrow down the expected exception as much as possible. Do not catch IOException if you expect FileNotFoundException, for example. Unintentionally thrown exceptions must pass the catch block unaffected, lead to a test failure, and therefore give you a good hint for troubleshooting with their stack trace. We suggested earlier that the fetch-count range check of our timeline example would probably be better off throwing IllegalArgumentException on boundary violations.
Let's have a look at how we can change the setFetchCountExceedsLowerBound test to verify the different behavior with the try-catch exception testing pattern (see the following listing):

```java
@Test
public void setFetchCountExceedsLowerBound() {
  int tooSmall = Timeline.FETCH_COUNT_LOWER_BOUND - 1;
  try {
    timeline.setFetchCount( tooSmall );
    fail();
  } catch( IllegalArgumentException actual ) {
    String message = actual.getMessage();
    String expected
      = format( Timeline.ERROR_EXCEEDS_LOWER_BOUND, tooSmall );
    assertEquals( expected, message );
    assertTrue( message.contains( valueOf( tooSmall ) ) );
  }
}
```

It can be clearly seen how setFetchCount, the functionality under test, is called within the try block, directly followed by a fail statement. The caught exception is narrowed down to the expected type. The test avails of the inline fixture setup to initialize the exceeds-lower-bound value in the tooSmall local variable, because it is used more than once. The verification checks that the thrown message matches an expected one. Our test calculates the expectation with the aid of java.lang.String.format (static import), based on the same pattern which is also used internally by the timeline to produce the text. Once again, we loosen encapsulation a bit to ensure that the malicious value gets mentioned correctly. Purists may prefer only the String.contains variant, which, on the other hand, would be less accurate. Although this works fine, it looks pretty ugly and is not very readable. Besides, it blurs the separation of the exercise and verification phases a bit, so it is no wonder that other techniques have been invented for exception testing.

Annotated expectations

After the arrival of annotations in the Java language, JUnit got a thorough overhauling. We already mentioned the @Test type used to mark a particular method as an executable test. To simplify exception testing, it has been given the expected attribute.
This defines that the anticipated outcome of a unit test should be an exception, and it accepts a subclass of Throwable to specify its type. Running a test of this kind captures exceptions automatically and checks whether the caught type matches the specified one. The following snippet shows how this can be used to validate that our timeline constructor doesn't accept null as the injection parameter:

```java
@Test( expected = IllegalArgumentException.class )
public void constructWithNullAsItemProvider() {
  new Timeline( null, mock( SessionStorage.class ) );
}
```

Here, we've got a test whose body statements merge setup and exercise in one line for compactness. Although the verification result is specified ahead of the method's signature definition, it is, of course, evaluated last. This means that the runtime test structure isn't twisted. But it is a bit of a downside from the readability point of view, as it breaks the usual test format. However, the approach bears a real risk when used in more complex scenarios. The next listing shows an alternative of setFetchCountExceedsLowerBound using the expected attribute:

```java
@Test( expected = IllegalArgumentException.class )
public void setFetchCountExceedsLowerBound() {
  Timeline timeline = new Timeline( null, null );
  timeline.setFetchCount( Timeline.FETCH_COUNT_LOWER_BOUND - 1 );
}
```

On the face of it, this might look fine, because the test run would apparently succeed with a green bar. But given that the timeline constructor already throws IllegalArgumentException due to the initialization with null, the virtual point of interest is never reached. So any setFetchCount implementation will pass this test. This renders it not only useless, but it even lulls you into a false sense of security! Certainly, the approach is most hazardous when checking for runtime exceptions, because they can be thrown undeclared. Thus, they can emerge practically everywhere and overshadow the original test intention unnoticed.
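The masking problem is not specific to JUnit: any "expect this exception type" helper is vulnerable when the setup can throw the same type as the exercise. A hedged sketch of the effect (in JavaScript, with invented names, not the book's Java example):

```javascript
// Sketch of the "expected exception" masking pitfall, with invented names.
// expectError passes as long as *any* part of the body throws the given type,
// so a failure in the setup can overshadow the call we meant to test.
function expectError(type, body) {
  try {
    body();
  } catch (error) {
    if (error instanceof type) return true; // looks green...
    throw error;
  }
  throw new Error('expected error was not thrown');
}

function createTimeline(itemProvider) {
  if (itemProvider == null) throw new RangeError('itemProvider must not be null');
  return { setFetchCount(n) { if (n < 1) throw new RangeError('too small'); } };
}

// The setup itself throws RangeError, so setFetchCount is never even called,
// yet the check still "passes": a false sense of security.
const passed = expectError(RangeError, () => {
  const timeline = createTimeline(null); // throws here
  timeline.setFetchCount(0);             // never reached
});
console.log(passed); // true
```

The fix, as the article goes on to show, is to narrow the capture down to the single statement under test rather than wrapping the whole body.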
Not being able to validate the state of the thrown exception narrows down the reasonable operational area of this concept to simple use cases, such as the constructor parameter verification mentioned previously. Finally, here are two more remarks on the initial example. First, it might be debatable whether IllegalArgumentException is appropriate for an argument-not-null check from a design point of view. But as this discussion is as old as the hills and will probably never be settled, we won't argue about it. IllegalArgumentException was favored over NullPointerException basically because it seemed to be an evident way to build up a comprehensible example. To specify a different behavior of the tested use case, one simply has to define another Throwable type as the expected value. Second, as a side effect, the test shows how a generated test double can make our life much easier. You've probably already noticed that the session storage stand-in created on the fly serves as a dummy. This is quite nice, as we don't have to implement one manually and it decouples the test from storage-related signatures, which might otherwise break the test in the future when changed. But keep in mind that such a created-on-the-fly dummy lacks the implicit no-operation check. Hence, this approach might be too fragile under some circumstances. With annotations being too brittle for most usage scenarios and the try-fail-catch pattern being too crabbed, JUnit provides a special test helper called ExpectedException, which we'll take a look at now.

Verification with the ExpectedException rule

The third possibility offered to verify exceptions is the ExpectedException class. This type belongs to a special category of test utilities called rules. For the moment, it is sufficient to know that rules allow us to embed a test method into custom pre- and post-operations at runtime. In doing so, the expected exception helper can catch the thrown instance and perform the appropriate verifications.
A rule has to be defined as a non-static public field, annotated with @Rule, as shown in the following TimelineTest excerpt. See how the rule object gets set up implicitly here with a factory method:

```java
public class TimelineTest {

  @Rule
  public ExpectedException thrown = ExpectedException.none();

  [...]

  @Test
  public void setFetchCountExceedsUpperBound() {
    int tooLargeValue = FETCH_COUNT_UPPER_BOUND + 1;
    thrown.expect( IllegalArgumentException.class );
    thrown.expectMessage( valueOf( tooLargeValue ) );
    timeline.setFetchCount( tooLargeValue );
  }

  [...]
}
```

Compared to the try-fail-catch approach, the code is easier to read and write. The helper instance supports several methods to specify the anticipated outcome. Apart from the static imports of constants used for compactness, this specification reproduces pretty much the same validations as the original test. ExpectedException#expectMessage expects a substring of the actual message, in case you wonder, and we omitted the exact formatting here for brevity. In case the exercise phase of setFetchCountExceedsUpperBound does not throw an exception, the rule ensures that the test fails. In this context, it is about time we mentioned the utility's factory method none. Its name indicates that, as long as no expectations are configured, the helper assumes that a test run should terminate normally. This means that no artificial fail has to be issued. This way, a mix of standard and exceptional flow tests can coexist in one and the same test case. Even so, the test helper has to be configured prior to the exercise phase, which still leaves room for improvement with respect to canonizing the test structure. As we'll see next, Java 8's ability to compact closures into lambda expressions enables us to write even leaner and cleaner structured exceptional flow tests.

Capturing exceptions with closures

When writing tests, we strive to end up with a clear representation of the separated test phases in the correct order.
All of the previous approaches for testing exceptional flow did a more or less poor job in this regard. Looking once more at the classical try-fail-catch pattern, we recognize, however, that it comes closest. It strikes us that if we put some work into it, we can extract exception capturing into a reusable utility method. This method would accept a functional interface—the representation of the exception-throwing functionality under test—and return the caught exception. The ThrowableCaptor test helper puts the idea into practice:

public class ThrowableCaptor {

  @FunctionalInterface
  public interface Actor {
    void act() throws Throwable;
  }

  public static Throwable thrownBy( Actor actor ) {
    try {
      actor.act();
    } catch( Throwable throwable ) {
      return throwable;
    }
    return null;
  }
}

We see the Actor interface that serves as a functional callback. It gets executed within a try block of the thrownBy method. If an exception is thrown, which should be the normal path of execution, it gets caught and returned as the result. Bear in mind that we have omitted the fail statement of the original try-fail-catch pattern. We consider the capturer as a helper for the exercise phase. Thus, we merely return null if no exception is thrown and leave it to the afterworld to deal correctly with the situation.
How capturing using this helper in combination with a lambda expression works is shown by the next variant of setFetchCountExceedsUpperBound, and this time, we've achieved the clear phase separation we're in search of:

@Test
public void setFetchCountExceedsUpperBound() {
  int tooLarge = FETCH_COUNT_UPPER_BOUND + 1;

  Throwable actual = thrownBy( () -> timeline.setFetchCount( tooLarge ) );

  assertNotNull( actual );
  assertTrue( actual instanceof IllegalArgumentException );
  String message = actual.getMessage();
  assertTrue( message.contains( valueOf( tooLarge ) ) );
  assertEquals( format( ERROR_EXCEEDS_UPPER_BOUND, tooLarge ), message );
}

Please note that we've added an additional not-null-check compared to the verifications of the previous version, and that it runs before the message is read. We do this as a replacement for the non-existing failure enforcement. Indeed, the following instanceof check would fail implicitly if actual was null. But this would also be misleading, since it overshadows the true failure reason. Stating that actual must not be null points out clearly the expected postcondition that has not been met.

One of the libraries presented later will be AssertJ. It is mainly intended to improve validation expressions, but it also provides a test helper that supports the closure pattern you've just learned to make use of. Another choice to avoid writing your own helper could be the library Fishbowl, [FISBOW]. Now that we understand the available testing patterns, let's discuss a few system-spanning aspects of dealing with exceptional flow in practice.

Treating collaborators

The considerations we've made about how a software system can be built upon collaborating components foreshadow that we have to take good care when modelling our overall strategy for exceptional flow. Because of this, we'll start this section with an introduction of the fail fast strategy, which is a perfect match for the test-first approach.
The second part of the section will show you how to deal with checked exceptions thrown by collaborators.

Fail fast

Until now, we've learned that exceptions can serve in corner cases as an expected outcome, which we need to verify with tests. As an example, we've changed the behavior of our timeline fetch-count setter. The new version throws IllegalArgumentException if the given parameter is out of range. While we've explained how to test this, you may have wondered whether throwing an exception is actually an improvement. On the contrary, you might think, doesn't the exception make our program more fragile, as it bears the risk of an ugly error popping up or even of crashing the entire application? Aren't those things we want to prevent by all means? So, wouldn't it be better to stick with the old version and silently ignore arguments that are out of range?

At first sight, this may sound reasonable, but doing so is ostrich-like head-in-the-sand behavior, according to the motto: if we can't see them, they aren't there, and so, they can't hurt us. Ignoring an input that is obviously wrong can lead to misbehavior of our software system later on, where the reason for the problem is probably much harder to track down compared to an immediate failure. Generally speaking, this practice disconnects the effects of a problem from its cause. As a consequence, you often have to deal with stack traces leading to dead ends or worse.

Consider, for example, that we'd initialize the timeline fetch-count as an invariant employed by a constructor argument. Moreover, the value we use would be negative and silently ignored by the component. In addition, our application would make some item position calculations based on this value. Sure enough, the calculation results would be faulty. If we're lucky, an exception would be thrown when, for example, trying to access a particular item based on these calculations.
However, the given stack trace would reveal nothing about the reason that originally led to the situation. And if we're unlucky, the misbehavior will not be detected until the software has been released to end users. On the other hand, with the new version of setFetchCount, this kind of translocated problem can never occur. A failure trace would point directly to the initial programming mistake, hence avoiding follow-up issues. This means failing immediately and visibly increases robustness, due to short feedback cycles and pertinent exceptions. Jim Shore has given this design strategy the name fail fast, [SHOR04].

Shore points out that the heart of fail fast are assertions. Similar to the JUnit assert statements, an assertion fails on a condition that isn't met. Typical assertions might be not-null-checks, in-range-checks, and so on. But how do we decide if it's necessary to fail fast? While assertions of input arguments are apparently a potential use case scenario, checking of return values or invariants may also be so. Sometimes, such conditions are described in code comments, such as // foo should never be null because..., which is a clear indication that suggests replacing the note with an appropriate assertion. See the next snippet demonstrating the principle:

public void doWithAssert() {
  [...]
  boolean condition = ...; // check some invariant
  if( !condition ) {
    throw new IllegalStateException( "Condition not met." );
  }
  [...]
}

But be careful not to overdo things, because in most cases, code will fail fast by default. So, you don't have to include a not-null-check after each and every variable assignment, for example. Such paranoid programming styles decrease readability for no value-add at all.

A last point to consider is your overall exception-handling strategy. The intention of assertions is to reveal programming or configuration mistakes as early as possible. Because of this, we strictly make use of runtime exception types only.
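Putting these ideas together, a fail fast variant of a fetch-count setter might look like the following sketch. Note that the class name, the bounds, and the message format here are our own assumptions for illustration; they are not the book's actual Timeline code:

```java
public class FetchCountExample {

  static final int FETCH_COUNT_LOWER_BOUND = 1;
  static final int FETCH_COUNT_UPPER_BOUND = 20;
  static final String ERROR_OUT_OF_RANGE = "Fetch count out of range: %d";

  private int fetchCount;

  public void setFetchCount( int fetchCount ) {
    // Fail fast: reject an obviously wrong argument right at the boundary,
    // instead of letting it corrupt later position calculations.
    if( fetchCount < FETCH_COUNT_LOWER_BOUND || fetchCount > FETCH_COUNT_UPPER_BOUND ) {
      throw new IllegalArgumentException(
        String.format( ERROR_OUT_OF_RANGE, fetchCount ) );
    }
    this.fetchCount = fetchCount;
  }

  public int getFetchCount() {
    return fetchCount;
  }

  public static void main( String[] args ) {
    FetchCountExample example = new FetchCountExample();
    example.setFetchCount( 5 );
    System.out.println( "fetchCount = " + example.getFetchCount() );
    try {
      example.setFetchCount( 21 ); // fails immediately with a pertinent message
    } catch( IllegalArgumentException failure ) {
      System.out.println( "Rejected: " + failure.getMessage() );
    }
  }
}
```

Compared to silently clamping or ignoring the value, the pertinent IllegalArgumentException points directly at the original programming mistake.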
Catching exceptions at random somewhere up the call stack of course thwarts the whole purpose of this approach. So, beware of the absurd try-catch-log pattern that you often see scattered all over the code of scrubs, and which is demonstrated in the next listing as a deterrent only:

private Data readSomeData() {
  try {
    return source.readData();
  } catch( Exception hardLuck ) {
    // NEVER DO THIS!
    hardLuck.printStackTrace();
  }
  return null;
}

The sample code projects exceptional flow to null return values and disguises the fact that something seriously went wrong. It surely does not get better using a logging framework or, even worse, by swallowing the exception completely. Analysis of an error by means of stack trace logs is cumbersome and often fault-prone. In particular, this approach usually leads to logs jammed with ignored traces, where one more or less does not attract attention. In such an environment, it's like looking for a needle in a haystack when trying to find out why a follow-up problem occurs.

Instead, use a central exception handling mechanism at reasonable boundaries. You can create a bottom level exception handler around a GUI's message loop, ensure that background threads report problems appropriately, or secure event notification mechanisms, for example. Otherwise, you shouldn't bother with exception handling in your code. As outlined in the next paragraph, securing resource management with try-finally should most of the time be sufficient.

The stubbing of exceptional behavior

Every now and then, we come across collaborators which declare checked exceptions in some or all of their method signatures. There is a debate going on for years now whether or not checked exceptions are evil, [HEVEEC]. However, in our daily work, we simply can't elude them, as they pop up in adapters around third-party code or are burnt into legacy code we aren't able to change. So, what are the options we have in these situations?
"It is funny how people think that the important thing about exceptions is handling them. That's not the important thing about exceptions. In a well-written application there's a ratio of ten to one, in my opinion, of try finally to try catch."                                                                            – Anders Hejlsberg, [HEVEEC] Cool. This means that we also declare the exception type in question on our own method signature and let someone else up on the call stack solve the tricky things, right? Although it makes life easier for us for at the moment, acting like this is probably not the brightest idea. If everybody follows that strategy, the higher we get on the stack, the more exception types will occur. This doesn't scale well and even worse, it exposes details from the depths of the call hierarchy. Because of this, people sometimes simplify things by declaring java.lang.Exception as thrown type. Indeed, this gets them rid of the throws declaration tail. But it's also a pauper's oath as it reduces the Java type concept to absurdity. Fair enough. So, we're presumably better off when dealing with checked exceptions as soon as they occur. But hey, wouldn't this contradict Hejlsberg's statement? And what shall we do with the gatecrasher, meaning is there always a reasonable handling approach? Fortunately there is, and it absolutely conforms with the quote and the preceding fail fast discussion. We envelope the caught checked exception into an appropriate runtime exception, which we afterwards throw instead. This way, every caller of our component's functionality can use it without worrying about exception handling. If necessary, it is sufficient to use a try-finally block to ensure the disposal or closure of open resources for example. As described previously, we leave exception handling to bottom line handlers around the message loop or the like. Now that we know what we have to do, the next question is how can we achieve this with tests? 
Luckily, with the knowledge about stubs, you're almost there. Normally, handling a checked exception represents a boundary condition. We can regard the thrown exception as an indirect input to our SUT. All we have to do is let the stub throw an expected exception (precondition) and check if the envelope gets delivered properly (postcondition). For better understanding, let's comprehend the steps in our timeline example. We consider for this section that our SessionStorage collaborator declares IOException on its methods, for any reason whatsoever. The storage interface is shown in the next listing:

public interface SessionStorage {
  void storeTop( Item top ) throws IOException;
  Item readTop() throws IOException;
}

Next, we'll have to write a test that reflects our thoughts. At first, we create an IOException instance that will serve as an indirect input. Looking at the next snippet, you can see how we configure our storage stub to throw this instance on a call to storeTop. As the method does not return anything, the Mockito stubbing pattern looks a bit different than earlier. This time, it starts with the expectation definition. In addition, we use Mockito's any matcher, which defines the exception that should be thrown for those calls to storeTop where the given argument is assignment-compatible with the specified type token. After this, we're ready to exercise the fetchItems method and capture the actual outcome. We expect it to be an instance of IllegalStateException, just to keep things simple.
See how we verify that the caught exception wraps the original cause and that the message matches a predefined constant on our component class:

@Test
public void fetchItemWithExceptionOnStoreTop() throws IOException {
  IOException cause = new IOException();
  doThrow( cause ).when( storage ).storeTop( any( Item.class ) );

  Throwable actual = thrownBy( () -> timeline.fetchItems() );

  assertNotNull( actual );
  assertTrue( actual instanceof IllegalStateException );
  assertSame( cause, actual.getCause() );
  assertEquals( Timeline.ERROR_STORE_TOP, actual.getMessage() );
}

With the test in place, the implementation is pretty easy. Let's assume that we have the item storage extracted to a private timeline method named storeTopItem. It gets called somewhere down the road of fetchItems and again calls a private method, getTopItem. Fixing the compile errors, we end up with a try-catch block, because we have to deal with the IOException thrown by storeTop. Our first error handling should be empty to ensure that our test case actually fails. The following snippet shows the ultimate version, which will make the test finally pass:

static final String ERROR_STORE_TOP = "Unable to save top item";

[...]

private void storeTopItem() {
  try {
    sessionStorage.storeTop( getTopItem() );
  } catch( IOException cause ) {
    throw new IllegalStateException( ERROR_STORE_TOP, cause );
  }
}

Of course, real-world situations can sometimes be more challenging, for example, when the collaborator throws a mix of checked and runtime exceptions. At times, this results in tedious work. But if the same type of wrapping exception can always be used, the implementation can often be simplified.
First, re-throw all runtime exceptions; second, catch exceptions by their common super type and re-throw them embedded within a wrapping runtime exception (the following listing shows the principle):

private void storeTopItem() {
  try {
    sessionStorage.storeTop( getTopItem() );
  } catch( RuntimeException rte ) {
    throw rte;
  } catch( Exception cause ) {
    throw new IllegalStateException( ERROR_STORE_TOP, cause );
  }
}

Summary

In this article, you learned how to validate the proper behavior of an SUT with respect to exceptional flow. You experienced how to apply the various capture and verification options, and we discussed their strengths and weaknesses. Supplementary to the test-first approach, you were taught the concepts of the fail fast design strategy and recognized how adopting it increases the overall robustness of applications. Last but not least, we explained how to handle collaborators that throw checked exceptions and how to stub their exceptional bearing.

Further resources on this subject:

Progressive Mockito [article]
Using Mock Objects to Test Interactions [article]
Ensuring Five-star Rating in the MarketPlace [article]
Drupal 7 Themes: Creating Dynamic CSS Styling

Packt
05 Jul 2011
7 min read
Drupal 7 Themes Create new themes for your Drupal 7 site with a clean layout and powerful CSS styling The reader would benefit by referring the previous article on Dynamic Theming In addition to creating templates that are displayed conditionally, the Drupal system also enables you to apply CSS selectively. Drupal creates unique identifiers for various elements of the system and you can use those identifiers to create specific CSS selectors. As a result, you can provide styling that responds to the presence (or absence) of specific conditions on any given page. Employing $classes for conditional styling One of the most useful dynamic styling tools is $classes. This variable is intended specifically as an aid to dynamic CSS styling. It allows for the easy creation of CSS selectors that are responsive to either the layout of the page or to the status of the person viewing the page. This technique is typically used to control the styling where there may be one, two, or three columns displayed, or to trigger display for authenticated users. Prior to Drupal 6, $layout was used to detect the page layout. With Drupal 6 we got instead, $body_classes. Now, in Drupal 7, it's $classes. While each was intended to serve a similar purpose, do not try to implement the previous incarnations with Drupal 7, as they are no longer supported! By default $classes is included with the body tag in the system's html.tpl.php file; this means it is available to all themes without the necessity of any additional steps on your part. With the variable in place, the class associated with the body tag will change automatically in response to the conditions on the page at that time. All you need to do to take advantage of this and create the CSS selectors that you wish to see applied in the various situations. 
The following chart shows the dynamic classes available to you by default in Drupal 7: If you are not certain what this looks like and how it can be used, simply view the homepage of your site with the Bartik theme active. Use the view source option in your browser to then examine the body tag of the page. You will see something like this: <body class="html front not-logged-in one-sidebar sidebar-first page-node">. The class definition you see there is the result of $classes. By way of comparison, log in to your site and repeat this test. The body class will now look something like this: <body class="html front logged-in one-sidebar sidebar-first page-node">. In this example, we see that the class has changed to reflect that the user viewing the page is now logged in. Additional statements may appear, depending on the status of the person viewing the page and the additional modules installed. While the system implements this technique in relation to the body tag, its usage is not limited to just that scenario; you can use $classes with any template and in a variety of situations. If you'd like to see a variation of this technique in action (without having to create it from scratch), take a look at the Bartik theme. Open the node.tpl.php file and you can see the $classes variable added to the div at the top of the page; this allows this template to also employ the conditional classes tool. Note that the placement of $classes is not critical; it does not have to be at the top of the file. You can call this at any point where it is needed. You could, for example, add it to a specific ordered list by printing out $classes in conjunction with the li tag, like this: <li class="<?php print $classes; ?>"> $classes is, in short, a tremendously useful aid to creating dynamic theming. It becomes even more attractive if you master adding your own variables to the function, as discussed in the next section. 
Adding new variables to $classes To make things even more interesting (and useful), you can add new variables to $classes through use of the variable process functions. This is implemented in similar fashion to other preprocess function. Let's look at an example, in this case, taken from Drupal.org. The purpose here is to add a striping class keyed to the zebra variable and to make it available through $classes. To set this up, follow these steps: Access your theme's template.php file. If you don't have one, create it. Add the following to the file: <?php function mythemename_preprocess_node(&$vars) { // Add a striping class. $vars['classes_array'][] = 'node-' . $vars['zebra']; } ?> Save the file. The variable will now be available in any template in which you implement $classes. Creating dynamic selectors for nodes Another handy resource you can tap into for CSS styling purposes is Drupal's node ID system. By default, Drupal generates a unique ID for each node of the website. Node IDs are assigned at the time of node creation and remain stable for the life of the node. You can use the unique node identifier as a means of activating a unique selector. To make use of this resource, simply create a selector as follows: #node-[nid] { } For example, assume you wish to add a border to the node with the ID of 2. Simply create a new selector in your theme's stylesheet, as shown: #node-2 { border: 1px solid #336600 } As a result, the node with the ID of 2 will now be displayed with a 1-pixel wide solid border. The styling will only affect that specific node. Creating browser-specific stylesheets A common solution for managing some of the difficulties attendant to achieving true cross-browser compatibility is to offer stylesheets that target specific browsers. Internet Explorer tends to be the biggest culprit in this area, with IE6 being particularly cringe-worthy. Ironically, Internet Explorer also provides us with one of the best tools for addressing this issue. 
Internet Explorer implements a proprietary technology known as Conditional Comments. It is possible to easily add conditional stylesheets to your Drupal system through the use of this technology, but it requires the addition of a contributed module to your system, called Conditional Stylesheets. While it is possible to set up conditional stylesheets without the use of the module, it is more work, requiring you to add multiple lines of code to your template.php. With the module installed, you just add the stylesheet declarations to your .info file and then, using a simple syntax, set the conditions for their use. Note also that the Conditional Stylesheets module is in the queue for inclusion in Drupal 8, so it is certainly worth looking at now. To learn more, visit the project site at http://drupal.org/project/conditional_styles. If, in contrast, you would like to do things manually by creating a preprocess function to add the stylesheet and target it by browser key, please see http://drupal.org/node/744328. Summary This article covers the basics needed to make your Drupal theme responsive to the contents and the users. By applying the techniques discussed in this article, you can control the theming of pages based on content, state of the pages, or the users viewing them. Taking the principles one step further, you can also make the theming of elements within a page conditional. The ability to control the templates used and the styling of the page and its elements is what we call dynamic theming. Further resources on this subject: Drupal 7: Customizing an Existing Theme [Article] Drupal 7 Themes: Dynamic Theming [Article] 25 Useful Extensions for Drupal 7 Themers [Article] Drupal and Ubercart 2.x: Install a Ready-made Drupal Theme [Article] Building an Admin Interface in Drupal 7 Module Development [Article] Content in Drupal: Frequently Asked Questions (FAQ) [Article] Drupal Web Services: Twitter and Drupal [Article]
How to Bridge the Client-Server Gap using AJAX (Part I)

Packt
28 Oct 2009
18 min read
Technically, AJAX is an acronym standing for Asynchronous JavaScript and XML. The technologies involved in an AJAX solution include:

JavaScript, to capture interactions with the user or other browser-related events
The XMLHttpRequest object, which allows requests to be made to the server without interrupting other browser tasks
XML files on the server, or often other similar data formats such as HTML or JSON
More JavaScript, to interpret the data from the server and present it on the page

Because of the inconsistencies in the browsers' implementations of the XMLHttpRequest object, many frameworks have sprung up to assist developers in taming it; jQuery is no exception. Let us see if AJAX can truly perform miracles.

Loading data on demand

Underneath all the hype and trappings, AJAX is just a means of loading data from the server to the web browser, or client, without a visible page refresh. This data can take many forms, and we have many options for what to do with it when it arrives. We'll see this by performing the same basic task in many ways. We are going to build a page that displays entries from a dictionary, grouped by the starting letter of the dictionary entry. The HTML defining the content area of the page will look like this:

<div id="dictionary"></div>

Yes, really! Our page will have no content to begin with. We are going to use jQuery's various AJAX methods to populate this <div> with dictionary entries. The links that will trigger the loading are defined as follows:

<div class="letters">
  <div class="letter" id="letter-a">
    <h3><a href="#">A</a></h3>
  </div>
  <div class="letter" id="letter-b">
    <h3><a href="#">B</a></h3>
  </div>
  <div class="letter" id="letter-c">
    <h3><a href="#">C</a></h3>
  </div>
  <div class="letter" id="letter-d">
    <h3><a href="#">D</a></h3>
  </div>
</div>

As always, a real-world implementation should use progressive enhancement to make the page function without requiring JavaScript. Here, to simplify our example, the links do nothing until we add behaviors to them with jQuery.
Adding a few CSS rules, we get a page that looks like this:

Now we can focus on getting content onto the page.

Appending HTML

AJAX applications are often no more than a request for a chunk of HTML. This technique, sometimes referred to as AHAH (Asynchronous HTTP and HTML), is almost trivial to implement with jQuery. First we need some HTML to insert, which we'll place in a file called a.html alongside our main document. This secondary HTML file begins:

<div class="entry">
  <h3 class="term">ABDICATION</h3>
  <div class="part">n.</div>
  <div class="definition">
    An act whereby a sovereign attests his sense of the high temperature of the throne.
    <div class="quote">
      <div class="quote-line">Poor Isabella's Dead, whose abdication</div>
      <div class="quote-line">Set all tongues wagging in the Spanish nation.</div>
      <div class="quote-line">For that performance 'twere unfair to scold her:</div>
      <div class="quote-line">She wisely left a throne too hot to hold her.</div>
      <div class="quote-line">To History she'll be no royal riddle &mdash;</div>
      <div class="quote-line">Merely a plain parched pea that jumped the griddle.</div>
      <div class="quote-author">G.J.</div>
    </div>
  </div>
</div>
<div class="entry">
  <h3 class="term">ABSOLUTE</h3>
  <div class="part">adj.</div>
  <div class="definition">
    Independent, irresponsible. An absolute monarchy is one in which the sovereign does as he pleases so long as he pleases the assassins. Not many absolute monarchies are left, most of them having been replaced by limited monarchies, where the sovereign's power for evil (and for good) is greatly curtailed, and by republics, which are governed by chance.
  </div>
</div>

The page continues with more entries in this HTML structure.
Rendered on its own, this page is quite plain:

Note that a.html is not a true HTML document; it contains no <html>, <head>, or <body>, all of which are normally required. We usually call such a file a snippet or fragment; its only purpose is to be inserted into another HTML document, which we'll accomplish now:

$(document).ready(function() {
  $('#letter-a a').click(function() {
    $('#dictionary').load('a.html');
    return false;
  });
});

The .load() method does all our heavy lifting for us! We specify the target location for the HTML snippet by using a normal jQuery selector, and then pass the URL of the file to be loaded as a parameter to the method. Now, when the first link is clicked, the file is loaded and placed inside <div id="dictionary">. The browser will render the new HTML as soon as it is inserted:

Note that the HTML is now styled, whereas before it was plain. This is due to the CSS rules in the main document; as soon as the new HTML snippet is inserted, the rules apply to its tags as well.

When testing this example, the dictionary definitions will probably appear instantaneously when the button is clicked. This is a hazard of working on our applications locally; it is hard to account for delays in transferring documents across the network. Suppose we added an alert box to display after the definitions are loaded:

$(document).ready(function() {
  $('#letter-a a').click(function() {
    $('#dictionary').load('a.html');
    alert('Loaded!');
    return false;
  });
});

However, when this particular code is tested on a production web server, the alert will quite possibly have come and gone before the load has completed, due to network lag. This happens because all AJAX calls are by default asynchronous. Otherwise, we'd have to call it SJAX, which hardly has the same ring to it! Asynchronous loading means that once the HTTP request to retrieve the HTML snippet is issued, script execution immediately resumes without waiting.
Sometime later, the browser receives the response from the server and handles it. This is generally desired behavior; it is unfriendly to lock up the whole web browser while waiting for data to be retrieved. If actions must be delayed until the load has been completed, jQuery provides a callback for this. An example will be provided below.

Working with JavaScript objects

Pulling in fully-formed HTML on demand is very convenient, but there are times when we want our script to be able to do some processing of the data before it is displayed. In this case, we need to retrieve the data in a structure that we can traverse with JavaScript. With jQuery's selectors, we could traverse the HTML we get back and manipulate it, but it must first be inserted into the document. A more native JavaScript data format can mean even less code.

Retrieving a JavaScript object

As we have often seen, JavaScript objects are just sets of key-value pairs, and can be defined succinctly using curly braces ({}). JavaScript arrays, on the other hand, are defined on the fly with square brackets ([]). Combining these two concepts, we can easily express some very complex and rich data structures. The term JavaScript Object Notation (JSON) was coined by Douglas Crockford to capitalize on this simple syntax. This notation can offer a concise alternative to the sometimes-bulky XML format:

{
  "key": "value",
  "key 2": [
    "array",
    "of",
    "items"
  ]
}

For information on some of the potential advantages of JSON, as well as implementations in many programming languages, visit http://json.org/. We can encode our data using this format in many ways. We'll place some dictionary entries in a JSON file we'll call b.json, which begins as follows:

[
  {
    "term": "BACCHUS",
    "part": "n.",
    "definition": "A convenient deity invented by the...",
    "quote": [
      "Is public worship, then, a sin,",
      "That for devotions paid to Bacchus",
      "The lictors dare to run us in,",
      "And resolutely thump and whack us?"
    ],
    "author": "Jorace"
  },
  {
    "term": "BACKBITE",
    "part": "v.t.",
    "definition": "To speak of a man as you find him when..."
  },
  {
    "term": "BEARD",
    "part": "n.",
    "definition": "The hair that is commonly cut off by..."
  },

To retrieve this data, we'll use the $.getJSON() method, which fetches the file and processes it, providing the calling code with the resulting JavaScript object.

Global jQuery functions

To this point, all jQuery methods that we've used have been attached to a jQuery object that we've built with the $() factory function. The selectors have allowed us to specify a set of DOM nodes to work with, and the methods have operated on them in some way. This $.getJSON() function, however, is different. There is no logical DOM element to which it could apply; the resulting object has to be provided to the script, not injected into the page. For this reason, getJSON() is defined as a method of the global jQuery object (a single object called jQuery or $ defined once by the jQuery library), rather than of an individual jQuery object instance (the objects we create with the $() function). If JavaScript had classes like other object-oriented languages, we'd call $.getJSON() a class method. For our purposes, we'll refer to this type of method as a global function; in effect, they are functions that use the jQuery namespace so as not to conflict with other function names. To use this function, we pass it the file name as before:

$(document).ready(function() {
  $('#letter-b a').click(function() {
    $.getJSON('b.json');
    return false;
  });
});

This code has no apparent effect when we click the link. The function call loads the file, but we have not told JavaScript what to do with the resulting data. For this, we need to use a callback function. The $.getJSON() function takes a second argument, which is a function to be called when the load is complete.
As mentioned before, AJAX calls are asynchronous, and the callback provides a way to wait for the data to be transmitted rather than executing code right away. The callback function also takes an argument, which is filled with the resulting data. So, we can write:

$(document).ready(function() {
  $('#letter-b a').click(function() {
    $.getJSON('b.json', function(data) {
    });
    return false;
  });
});

Here we are using an anonymous function as our callback, as has been common in our jQuery code, for brevity. A named function could equally be provided as the callback. Inside this function, we can use the data variable to traverse the data structure as necessary. We'll need to iterate over the top-level array, building the HTML for each item. We could do this with a standard for loop, but instead we'll introduce another of jQuery's useful global functions, $.each(). Instead of operating on a jQuery object, this function takes an array or map as its first parameter and a callback function as its second. Each time through the loop, the current iteration index and the current item in the array or map are passed as two parameters to the callback function.

$(document).ready(function() {
  $('#letter-b a').click(function() {
    $.getJSON('b.json', function(data) {
      $('#dictionary').empty();
      $.each(data, function(entryIndex, entry) {
        var html = '<div class="entry">';
        html += '<h3 class="term">' + entry['term'] + '</h3>';
        html += '<div class="part">' + entry['part'] + '</div>';
        html += '<div class="definition">';
        html += entry['definition'];
        html += '</div>';
        html += '</div>';
        $('#dictionary').append(html);
      });
    });
    return false;
  });
});

Before the loop, we empty out <div id="dictionary"> so that we can fill it with our newly-constructed HTML. Then we use $.each() to examine each item in turn, building an HTML structure using the contents of the entry map. Finally, we turn this HTML into a DOM tree by appending it to the <div>.
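The code above interpolates entry values straight into HTML strings. If the data could ever contain markup characters, a small escaping helper is one way to guard against that. This is a sketch, not part of the original example, and the escapeHtml name is our own:

```javascript
// Hypothetical helper: replace the characters that are significant in HTML.
// The order matters: '&' must be escaped first.
function escapeHtml(value) {
  return String(value)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

// Used in place of the raw concatenation:
var entry = { term: 'A <b>bold</b> term & more' };
var html = '<h3 class="term">' + escapeHtml(entry.term) + '</h3>';
console.log(html);
// <h3 class="term">A &lt;b&gt;bold&lt;/b&gt; term &amp; more</h3>
```

With trusted data, such as our own b.json file, the raw concatenation is fine; the helper matters when the source of the data is outside our control.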
This approach presumes that the data is safe for HTML consumption; it should not contain any stray < characters, for example. All that's left is to handle the entries with quotations, which takes another $.each() loop:

$(document).ready(function() {
  $('#letter-b a').click(function() {
    $.getJSON('b.json', function(data) {
      $('#dictionary').empty();
      $.each(data, function(entryIndex, entry) {
        var html = '<div class="entry">';
        html += '<h3 class="term">' + entry['term'] + '</h3>';
        html += '<div class="part">' + entry['part'] + '</div>';
        html += '<div class="definition">';
        html += entry['definition'];
        if (entry['quote']) {
          html += '<div class="quote">';
          $.each(entry['quote'], function(lineIndex, line) {
            html += '<div class="quote-line">' + line + '</div>';
          });
          if (entry['author']) {
            html += '<div class="quote-author">' + entry['author'] + '</div>';
          }
          html += '</div>';
        }
        html += '</div>';
        html += '</div>';
        $('#dictionary').append(html);
      });
    });
    return false;
  });
});

With this code in place, we can click the B link and confirm our results.

The JSON format is concise, but not forgiving. Every bracket, brace, quote, and comma must be present and accounted for, or the file will not load. In most browsers, we won't even get an error message; the script will just silently fail.

Executing a script

Occasionally we don't want to retrieve all the JavaScript we will need when the page is first loaded. We might not know what scripts will be necessary until some user interaction occurs. We could introduce <script> tags on the fly when they are needed, but a more elegant way to inject additional code is to have jQuery load the .js file directly. Pulling in a script is about as simple as loading an HTML fragment.
In this case, we use the global function $.getScript(), which, like its siblings, accepts a URL locating the script file:

$(document).ready(function() {
  $('#letter-c a').click(function() {
    $.getScript('c.js');
    return false;
  });
});

Scripts fetched in this way are run in the global context of the current page. This means they have access to all globally-defined functions and variables, notably including jQuery itself. We can therefore mimic the JSON example to prepare and insert HTML on the page when the script is executed, and place this code in c.js:

var entries = [
  { "term": "CALAMITY", "part": "n.", "definition": "A more than commonly plain and..." },
  { "term": "CANNIBAL", "part": "n.", "definition": "A gastronome of the old school who..." },
  { "term": "CHILDHOOD", "part": "n.", "definition": "The period of human life intermediate..." },
  { "term": "CLARIONET", "part": "n.", "definition": "An instrument of torture operated by..." },
  { "term": "COMFORT", "part": "n.", "definition": "A state of mind produced by..." },
  { "term": "CORSAIR", "part": "n.", "definition": "A politician of the seas." }
];
var html = '';
$.each(entries, function() {
  html += '<div class="entry">';
  html += '<h3 class="term">' + this['term'] + '</h3>';
  html += '<div class="part">' + this['part'] + '</div>';
  html += '<div class="definition">' + this['definition'] + '</div>';
  html += '</div>';
});
$('#dictionary').html(html);

Now clicking on the C link has the expected result.

Loading an XML document

XML is part of the acronym AJAX, but we haven't actually loaded any XML yet. Doing so is straightforward, and mirrors the JSON technique fairly closely. First we'll need an XML file d.xml containing some data we wish to display, excerpted here:

<?xml version="1.0" encoding="UTF-8"?>
<entries>
  <entry term="DEFAME" part="v.t.">
    <definition>
      To lie about another. To tell the truth about another.
    </definition>
  </entry>
  <entry term="DEFENCELESS" part="adj.">
    <definition>
      Unable to attack.
    </definition>
  </entry>
  <entry term="DELUSION" part="n.">
    <definition>
      The father of a most respectable family, comprising Enthusiasm,
      Affection, Self-denial, Faith, Hope, Charity and many other goodly
      sons and daughters.
    </definition>
    <quote author="Mumfrey Mappel">
      <line>All hail, Delusion! Were it not for thee</line>
      <line>The world turned topsy-turvy we should see; </line>
      <line>For Vice, respectable with cleanly fancies, </line>
      <line>Would fly abandoned Virtue's gross advances. </line>
    </quote>
  </entry>
  <entry term="DIE" part="n.">
    <definition>
      The singular of "dice." We seldom hear the word, because there is a
      prohibitory proverb, "Never say die." At long intervals, however,
      some one says: "The die is cast," which is not true, for it is cut.
      The word is found in an immortal couplet by that eminent poet and
      domestic economist, Senator Depew:
    </definition>
    <quote>
      <line>A cube of cheese no larger than a die</line>
      <line>May bait the trap to catch a nibbling mie.</line>
    </quote>
  </entry>
</entries>

This data could be expressed in many ways, of course, and some would more closely mimic the structure we established for the HTML or JSON used earlier. Here, however, we're illustrating some of the features of XML designed to make it more readable to humans, such as the use of attributes for term and part rather than tags.

$(document).ready(function() {
  $('#letter-d a').click(function() {
    $.get('d.xml', function(data) {
    });
    return false;
  });
});

This time it's the $.get() function that does our work. In general, this function simply fetches the file at the supplied URL and provides the plain text to the callback. However, if the response is known to be XML because of its server-supplied MIME type, the callback will be handed the XML DOM tree. Fortunately, as we have already seen, jQuery has substantial DOM traversing capabilities.
We can use the normal .find(), .filter(), and other traversal methods on the XML document just as we would on HTML:

$(document).ready(function() {
  $('#letter-d a').click(function() {
    $.get('d.xml', function(data) {
      $('#dictionary').empty();
      $(data).find('entry').each(function() {
        var $entry = $(this);
        var html = '<div class="entry">';
        html += '<h3 class="term">' + $entry.attr('term') + '</h3>';
        html += '<div class="part">' + $entry.attr('part') + '</div>';
        html += '<div class="definition">';
        html += $entry.find('definition').text();
        var $quote = $entry.find('quote');
        if ($quote.length) {
          html += '<div class="quote">';
          $quote.find('line').each(function() {
            html += '<div class="quote-line">' + $(this).text() + '</div>';
          });
          if ($quote.attr('author')) {
            html += '<div class="quote-author">' + $quote.attr('author') + '</div>';
          }
          html += '</div>';
        }
        html += '</div>';
        html += '</div>';
        $('#dictionary').append($(html));
      });
    });
    return false;
  });
});

This has the expected effect when the D link is clicked.

This is a new use for the DOM traversal methods we already know, shedding some light on the flexibility of jQuery's CSS selector support. CSS syntax is typically used to help beautify HTML pages, and thus selectors in standard .css files use HTML tag names such as div and body to locate content. However, jQuery can use arbitrary XML tag names, such as entry and definition here, just as readily as the standard HTML ones.

The advanced selector engine inside jQuery facilitates finding parts of the XML document in much more complicated situations, as well. For example, suppose we wanted to limit the displayed entries to those that have quotes that in turn have attributed authors. To do this, we can limit the entries to those with nested <quote> elements by changing entry to entry:has(quote). Then we can further restrict the entries to those with author attributes on the <quote> elements by writing entry:has(quote[author]).
The line with the initial selector now reads:

$(data).find('entry:has(quote[author])').each(function() {

This new selector expression restricts the returned entries correspondingly.
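Expressed against the JSON form of the data rather than the XML, the same restriction, keeping only entries whose quote carries an author, is an ordinary array filter. This is a sketch for comparison only, using a simplified data shape of our own, not part of the original example:

```javascript
// Simplified entries: 'quote' here is an object holding lines and an
// optional author, so the author can sit alongside the quoted text.
var entries = [
  { term: 'DEFAME' },
  { term: 'DELUSION',
    quote: { lines: ['All hail, Delusion! Were it not for thee'],
             author: 'Mumfrey Mappel' } },
  { term: 'DIE',
    quote: { lines: ['A cube of cheese no larger than a die'] } }
];

// Equivalent in spirit to the selector entry:has(quote[author]).
var withAuthors = entries.filter(function (entry) {
  return entry.quote && entry.quote.author;
});

console.log(withAuthors.length);   // 1
console.log(withAuthors[0].term);  // "DELUSION"
```

Whether the restriction lives in a selector or in a filter function, the logic is the same: test each entry and keep the ones that pass.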
Moodle 2.0 for Managing Compliance Training

Packt
26 Apr 2011
Moodle 2.0 for Business Beginner's Guide: Implement Moodle in your business to streamline your interview, training, and internal communication processes.

Using Moodle to manage compliance training

Compliance training is a very important part of an organization's risk management strategy. Moodle has a number of tools that you can use to deliver and manage required training for your employees. In this article we will explore the lesson module as well as some useful user management tools to ensure your employees receive the training they need.

Using the lesson module as a training tool

Every human resources manager must understand and comply with a multitude of regulations and establish policies and procedures to ensure their company is in compliance. Additionally, there are several internal risks to consider, including work-life discrimination, protection of confidential information, workplace privacy risks, and so on. Many companies develop training programs as a way to mitigate and avoid certain risks by making employees aware of the company policies and procedures. Moodle's lesson module is a useful tool for this type of training. Using Moodle's inbuilt lesson module is also a time- and cost-effective alternative to relying on third-party authoring tools that publish SCORM packages.

The lesson module uses two different page types, which can be implemented in several different ways to create an interactive learning experience for the user. First, the question page asks the user a question and then provides the user with feedback based on their response. The question pages are graded and contribute to a user's grade. The second page type, the content page, presents content and navigation options without grading the user's response.

Creating a lesson module

Let's take a business case study example. You are developing a lesson module on basic office safety. To train employees on basic fire safety, you decide to use a common active training method: the case study.
Before we dive into the lesson module, let's take a moment to decide how we're going to implement this. First, we are going to use a content page to present a realistic building fire scenario and have the learner choose their first action. Second, we will create a question page to present the learner with a scored choice regarding fire safety. Third, we need to come up with feedback based on their responses.

In reality, the fire safety plan would probably be part of a larger emergency action plan. However, for the purposes of this article we are going to keep things simple and address a scenario that may be used when training employees on fire safety.

Lesson modules can get quite complicated if you let them, depending on how many choices the reader has for a given scenario and how long the chain of reasoning is. Many experts suggest developing a flowchart to plan out your lesson module before creating it in Moodle. For our purposes, we will just take it through the first choice to show you how to use the content page and then a question page. Once you have that down, it will be easy to keep repeating the process to make your lesson module as simple or complicated as you'd like.

Time for action - creating a lesson module

Log in to your Moodle site as an administrator.

Create a course for your compliance training by following the instructions in Getting Started with Moodle 2.0 for Business.

Click the Turn editing on button in the top-right corner.

Go to the section you want to add the lesson to and, from the Add an activity... drop-down menu, select Lesson.

You should now be on the Adding a new Lesson page. The first section of the page is the General section. In the Name field, enter the title of your lesson, for example "Fire Safety".

If you want to enter a time limit for the lesson, click the checkbox to the left of Enable at the Time limit (minutes) field and enter the time limit you want implemented, in minutes, for the lesson.
For the purposes of this example, assume that if I do not give you a specific value to enter for a field, you should leave it set at the default.

If you want to restrict the availability of the lesson to certain dates, then click the checkboxes next to Enable and enter the Available from date and Deadline date.

Under Maximum number of answers, select the maximum number of answers that may be used in the lesson. For example, if it only consists of True/False questions, this would be 2.

There are a lot of settings in the lesson module. You are not expected to remember them all. I don't! Next to most of the settings is a ? icon. Select this icon for a description of the setting anytime you can't remember what its purpose is.

Grade is the next section on the Adding a new Lesson page. For Grade, select the maximum score that can be given for the lesson from the drop-down menu. If there is no grade, then select no grade from the drop-down menu. We are not going to use question pages for our case study example, so here select no grade.

The Grade category refers to the category in the grade book. We have not set the grade book up yet, so leave this as the default Uncategorised. There will be nothing else available yet in this drop-down menu if you have not set up categories in the grade book.

Next go to the Grade options section and select your settings. In the Practice lesson setting, select No from the drop-down menu if you want the grades for this lesson to be recorded.

The Custom scoring setting allows you to give a score (positive or negative) for each answer. For our example, select No. This could be a useful tool if there are different levels of right and wrong answers and you wish to capture this in the grade book.

If you want to allow re-takes, select Yes from the drop-down menu at the Re-takes allowed setting.

If you selected Yes in the previous setting and are allowing re-takes, then you need to select the method for grading in the next setting, Handling of re-takes.
Your two choices from the drop-down menu are Use mean or Use maximum.

The Display ongoing score setting, if Yes is selected from the drop-down menu, will allow the user to see their current score out of the total possible thus far in the lesson.

The preceding settings make up the General, Grade, and Grade options sections of the Create a lesson page.

Now go to the Flow control section and select your settings. Allow student review, if Yes is selected, gives the user the option of going through the lesson again from the start after they have completed the lesson.

For Provide option to try a question again, select Yes from the drop-down menu to give the user the option to take the question again for no credit, or continue with the lesson, if they answer a question incorrectly.

If you selected Yes for the previous setting, then in the next setting, Maximum number of attempts, you must select the number of attempts allowed from the drop-down menu. If the user answers the question incorrectly repeatedly, once the maximum number of attempts is reached the user will proceed to the next page in the lesson.

If you want the default feedback for correct and incorrect answers to be shown when no feedback has been specified for the question, then at the Display default feedback section, select Yes from the drop-down menu. Default feedback for a correct answer is "That's the correct answer" and for an incorrect answer is "That's the wrong answer".

If Yes is selected for the Progress bar setting, then a progress bar is displayed at the bottom of each page.

When set to Yes, the Display left menu setting provides a list of all the pages in the lesson on the left side of each lesson page.

The Pop-up to file or web page section allows you to choose a file to display in a pop-up window at the beginning of a lesson. It also displays a link to reopen the pop-up window in every subsequent page in the lesson.
The Dependent on section allows you to restrict access to the lesson based on performance in another lesson in the same course. Restrictions can be based on any combination of completion, time spent, or minimum grade required.

Under Dependent on, select the lesson required before access to this lesson from the drop-down menu.

Time spent (minutes): if time spent is one of the requirements, then enter the minimum number of minutes required.

Completed: if completion is a requirement, then check the box.

If a minimum grade is required, then for the Grade better than (%) setting, enter the minimum grade required.

Once you have entered all your settings on the lesson page, click on the Save and display button at the bottom of the page. You are now on the editing page for the lesson you just created.

What just happened?

We have just created the shell of a lesson activity for our learners. In the next steps we will add content to the lesson and learn how to ask the learner questions.

Time for action - creating a content page

Now that we have the shell of the lesson, we can begin to add content. We'll start with adding a simple content page.

From the editing page for the lesson you just created, select Add a content page.

Enter a descriptive title in the box next to Page title. For our example, we will enter Building is on fire.

Enter the content for this page in the Page contents text area.

If you want the user's choices for the lesson page to be displayed horizontally, then check the box to the left of Arrange content buttons horizontally?

You have now filled in the Add a content page section. In the Add a content page section you will find sections for Content 1, Content 2, and so on. This is where we will create the choices for the user.
For our example, we will enter "Grab the fire extinguisher and look for smoke" in the Description text area for Content 1. Leave the drop-down menu below the text area on the default Moodle auto-format. For now leave the Jump drop-down menu as is; we will come back to this later.

In the Content 2 section, enter your second choice in the Description text area. For our example, we will enter "Walk calmly to the exit and exit the building."

Now scroll down to the bottom of the page and select Add a question page. This will save the content page you just created. You will now be on the editing page for the lesson you are creating and you should be able to see the content you just added.

What just happened?

We have now added a content page to our lesson. Content pages can include a wide variety of media, including text, audio, video, and Flash. Next we will look at how to add a scored question page to test the learner's understanding.

Time for action - creating a question page

Now we are going to create a question page in our lesson. Question pages are scored and provide the user with feedback on their choices.

From the edit page, click the Expanded View option at the top of the page. Then select the Add a question page here link below the content page you just created.

From the Select a question type drop-down menu, select the question type you want to use. For our example, we are going to select Multichoice. Next click on the Add a question page button.

For Page title, enter a title for your question page. For our example, we will enter "Why is this a bad idea?".

In the Page contents text area, enter the question you want to ask the learner. For our example, we will enter "Why is grabbing the fire extinguisher and heading for smoke a bad idea?".

Below the Page contents text area you will see an Options field; check the box next to Multiple-answer if you are creating a question with more than one correct answer.
Our example is going to be single response; therefore we will not select this box.

Below the Add a question page section, you will see the Answer 1, Answer 2, and so on, sections where you will enter the possible list of answers the learner will have to choose from.

In the Answer 1 section, enter one of the possible answers to the question in the Answer text area. For our example, we will enter "It's dangerous. You should leave it to the professionals".

Next, in the Response text area, enter the response you want the learner to receive if they select this choice. For our example, we will enter "Firefighters are trained to put out the fire and have the necessary protective gear".

Then move to the Answer 2 section and put in your second choice and response. For this example, we will have "You can't put out an office fire with a fire extinguisher" for the Answer and "You might be able to put out the fire, but without a respirator you might be overcome by smoke" for the Response.

For the correct answer, enter a "1" in the Score field located at the bottom of the corresponding Answer section.

Once you have entered all your answers, scroll down to the bottom of the page and select Add a question page to save. Now you are back on the Lesson edit screen and will see the Content page and the Question page you just created.

What just happened?

We have now created a question page to test the learner's understanding of the lesson material. We now need to go back and begin to link our pages together with page jumps.

Time for action - creating page jumps

You have now added a content page and a question page, but you're not done yet. Now we need to link the question page to the content page using a page jump. The page jump is simply the link between pages. We need to go back to the Jump field we skipped previously:

Go back to the content page you created by selecting the edit icon to the right of the Building is on fire page.
The edit icon looks like a hand holding a pencil.

Scroll down to Content 1 and, from the Jump drop-down menu, select the question page you created. For our example, it was Why is this a bad idea?.

Set the jump for Content 2 to the End of the Lesson. If the user selects this option, they will end the lesson.

Scroll down to the bottom of the page and click on the Save page button. You will now be back at the edit lesson page, and the Jump 1: field will now read Why is this a bad idea?.

What just happened?

We have now linked our pages together using page jumps. In a lesson module, page jumps are used both for navigation and to provide feedback pages for questions. Now we need to go through and test our lesson to make sure everything works.
Navigating Your Site using CodeIgniter 1.7: Part 1
Packt
30 Nov 2009
MVC: Model-View-Controller

What's MVC all about? You are probably curious about this by now. In short, MVC is an architectural pattern, a way of structuring our application:

system
  application
    models
    views
    controllers

As you can see, there is a folder for each of the words in MVC; let's see what we can put into them:

Models: The models represent our application data, be it in databases, in XML files, or anywhere else. Also, interaction with databases is carried out here. For example, models will allow us to fetch, modify, insert, and remove data from our database. All actions that require our application to talk to our database must be put in a model.

Views: Files placed here are responsible for showing our data to the visitors to our site, or users of our application. No programming logic and no insert or update queries should be run here; views only display the data that has been fetched elsewhere. They are here only to show the results of the other two. So we fetch the data in the model, and show it in the view. Now, what if we need to process the data, for example, putting it into an array? Then we do it in the controller; let's see how.

Controllers: These act as a nexus between models and views, and programming logic occurs here.

Take a look at this little diagram. In the left column we can see a "classical" way of doing things (a little outdated right now): a PHP file with SQL queries and HTML code in the same file, with PHP logic embedded into the HTML. It may seem, at first glance, that this way it is easier and faster to program. But in this case the code gets messy quickly and becomes cluttered with SQL queries, HTML, and PHP logic. Look at the right-side column: we have SQL queries in the Model, HTML and other graphic elements in the View, and PHP logic in the Controller. Doesn't that seem organized? The Controller calls and fetches data from the Model. It then loads the data and passes it to the Views, and sends the results to the user.
Once we start working with this pattern we will feel how easy it is; it will keep our projects organized. If we need to come back to our project months after finishing it, we will appreciate having made it in a structured fashion. No more of "Oh my God, where did I put that query, where is that include file?"; they will be in the model and the controller respectively.

But what happens if we want to put our queries in the controller? Well, CodeIgniter allows us to do so, though it is not recommended; if you can avoid it, it is better to do so. Other frameworks force you to keep a particular structure, but with CI you can program in the way you want. Although it is recommended to keep to the structure, there will be times when we will need to do things the other way.

With this structure we can accomplish two important things:

Loose Coupling: Coupling is the degree by which the components of a system rely on each other. The less the components depend on each other, the more reusable and flexible the system becomes.

Component Singularity: Singularity is the degree by which components have a narrow focus. In CI, each class and its functions are highly autonomous in order to allow maximum usefulness.

But how does all this work?

Now that we have seen how CI is structured, maybe you are asking yourself: how are the files in those three folders (models, views, controllers) working together? To answer this question we have another diagram. As you can see, it's similar to the previous one, and a little summarized (but with a wider scope; this is how the MVC pattern works). This time we can see some new elements, and if you look at it closely you will be able to distinguish the flow of data. Let's explain it. First of all, there is a browser call to your site; then the index.php file in the root folder is called (because we removed it from the URL using the .htaccess file, we don't see it).
This file acts as a router and calls the controllers, as and when they are needed. The controllers, as they are called, come into action. Now, two things can happen:

There is no need to fetch data from the database. In this case only the View is called, and loaded by the Controller. Then it is returned to the Browser for you or your visitors to see.

There is the need to fetch some data from the database. In this case the Controller calls the Model, which in turn makes a query to the database. The database returns data to the Model, and the Model to the Controller. The Controller modifies the data in every necessary way. Then it loads the View, passing all necessary data to it, and the View is created and returned to the Browser again.

Do not get confused by the first case; there will be times when you will need to create static pages. CI doesn't differentiate between static and dynamic pages. On those occasions, simply don't create the Models.

Now, return to our sample site to see how all this applies to it. Remember when we put the URL as http://127.0.0.1/codeigniter, CI's welcome screen appeared in our browser. Now try this URL: http://127.0.0.1/codeigniter/welcome. You can also try using this URL: http://127.0.0.1/codeigniter/index.php/welcome. In both cases the welcome screen appears in the browser. You may be wondering how CI knows, if you put http://127.0.0.1/codeigniter/, that it has to load the welcome controller. Don't worry, we will see that in a moment; for now, we will go on with our example: http://127.0.0.1/codeigniter/index.php/welcome.

A request coming to your web site's root is intercepted by the index.php file, which acts as a router. That is, it calls a controller, the welcome controller here, which then returns a view, just as in the previous diagram. But how does the controller do that? We are going to see how in the welcome controller.
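The segment-based dispatch just described, where the first URL segment names the controller and the second names the method, with index as the default, can be sketched in plain JavaScript. This is an illustration of the routing idea only, not CodeIgniter's actual implementation, and the controller table is hypothetical:

```javascript
// Hypothetical controller table standing in for the controllers/ folder.
var controllers = {
  welcome: {
    index: function () { return 'welcome view'; }
  }
};

// Dispatch a URL to controller/method, defaulting to welcome/index.
function route(url) {
  // e.g. "/codeigniter/index.php/welcome" -> ["welcome"]
  var segments = url.split('/').filter(function (s) {
    return s && s !== 'codeigniter' && s !== 'index.php';
  });
  var controller = segments[0] || 'welcome'; // default controller
  var method = segments[1] || 'index';       // default method
  return controllers[controller][method]();
}

console.log(route('/codeigniter/index.php/welcome')); // "welcome view"
console.log(route('/codeigniter/'));                  // "welcome view"
```

Both URLs reach the same controller method, which is exactly the behavior we just observed in the browser.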
The welcome controller

As we know, the welcome controller is the default controller, configured in the routes.php file of the config directory, and the code is at ./application/controllers/welcome.php. Here's what it says:

<?php
class Welcome extends Controller {

    function Welcome()
    {
        parent::Controller();
    }

    function index()
    {
        $this->load->view('welcome_message');
    }
}
/* End of file welcome.php */
/* Location: ./system/application/controllers/welcome.php */

From the second line you'll learn that this file is a class. Every controller inherits from an original Controller class, hence extends Controller. The next three lines make the constructor function. Within the class there are two functions or methods: Welcome() and index().

Though it is not necessary, naming controllers the same way as their tables is a good practice. For example, if I have a projects table I will create a projects controller. You can name your controllers the way you want, but naming them like the tables they represent keeps things organized. Also, getting used to this won't harm you, as other frameworks are stricter about this.

Notice that CI uses the older PHP 4 convention for naming constructor functions, which is also acceptable to PHP 5; CI doesn't require you to use PHP 5 and is happy with either version of the language. The constructor function is used to set up the class each time you instantiate it. We can omit it and the controller will still work, and if we use it, it won't do any harm. Inside it we can put instructions to load other libraries or models, or definitions of class variables. So far the only thing inside the constructor is the parent::Controller(); statement. This is just a way of making sure that you inherit the functionality of the Controller class. If you want to understand the parent CI Controller class in detail, you can look at the file /www/CI_system/libraries/controller.php.
One of the reassuring things about CI is that all the code is there for you to inspect, though you don't often need to.

Working with views

Let's go back to the incoming request for a moment. The router needs to know which controller and which function within that controller should handle the request. By default, the index function is called if no function is specified. So, when we put in http://127.0.0.1/codeigniter/welcome/, the index function is called. If no error is shown, this function simply loads the view ('welcome_message') using CI's loader function ($this->load->view). At this stage, it doesn't do anything cool with the view, such as passing dynamic information to it. That comes later. The 'welcome_message' it wants to load is in the views folder that you have just installed, at /www/codeigniter/application/views/welcome_message.php. This particular view is only a simple HTML page, but it is saved as a PHP file because most views have PHP code in them (no point in doing all this if we're only going to serve up plain old static HTML). Here's the (slightly shortened) code for the view:

<html>
<head>
<title>Welcome to CodeIgniter</title>
<style type="text/css">
body {
    background-color: #fff;
    margin: 40px;
    font-family: Lucida Grande, Verdana, Sans-serif;
    font-size: 14px;
    color: #4F5155;
}
. . . . . more style information here . . . .
</style>
</head>
<body>
<h1>Welcome to CodeIgniter!</h1>
<p>The page you are looking at is being generated dynamically by CodeIgniter.</p>
<p>If you would like to edit this page you'll find it located at:</p>
<code>system/application/views/welcome_message.php</code>
<p>The corresponding controller for this page is found at:</p>
<code>system/application/controllers/welcome.php</code>
<p>If you are exploring CodeIgniter for the very first time, you should start by reading the <a href="user_guide/">User Guide</a>.</p>
<p><br />Page rendered in {elapsed_time} seconds</p>
</body>
</html>

As you can see, it consists entirely of HTML, with an embedded CSS stylesheet. In this simple example, the controller hasn't passed any variables to the view.

Curious about <p><br />Page rendered in {elapsed_time} seconds</p>? Take a look at http://codeigniter.com/user_guide/libraries/benchmark.html.

You can name your views any way you want, and you don't need to include the .php extension when loading them. If a view uses a different extension, however, you will have to specify it when loading, for example: $this->load->view('welcome_message.html');
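As noted above, the welcome controller doesn't yet pass any dynamic information to its view. As a preview of what's to come, here is a minimal sketch of how a controller typically hands data to a view in CI; the $title and $message variables are invented for illustration:

```php
<?php
class Welcome extends Controller {

    function index()
    {
        // Each key in $data becomes a variable inside the view,
        // so $data['title'] is available there simply as $title
        $data['title']   = 'My first dynamic page';
        $data['message'] = 'Hello from the controller!';
        $this->load->view('welcome_message', $data);
    }
}
```

Inside welcome_message.php, you could then write <?php echo $title; ?> wherever the value should appear.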

Packt
19 Nov 2009
6 min read

Installation and Configuration of Oracle SOA Suite 11g R1: Part 2

Additional actions

In the following section, you will be performing additional configuration that is optional but will greatly improve performance and usability in the context of the development work you are about to start.

Setting memory limits

Review the memory settings. This value depends on your machine resources and may need to be adjusted for your machine. Allocating less memory at startup will give you better performance on a machine with less memory available. The following value is appropriate for a machine with 3 GB of memory or less. Edit the SOA domain environment file found here (make sure you have the SOA domain environment file):

C:\Oracle\Middleware\home_11gR1\user_projects\domains\domain1\bin\setSOADomainEnv.cmd

Set the memory values:

set DEFAULT_MEM_ARGS=-Xms512m -Xmx512m

Starting and stopping

Now it's time to start your servers. You can start them using the provided script or you can start them separately. Instructions for both methods are included.

Starting

First set boot.properties, and then start the servers. Before you start, set the boot properties so you are not prompted to log in during server startup.

Copy C:\po\bin\boot.properties to C:\Oracle\Middleware\home_11gR1\user_projects\domains\domain1. Edit the copied file to reflect the password for your configuration (entered during domain configuration). The first time the server is started, this file is encrypted and copied to the server locations.

You can start the servers one at a time, or you can use the start_all script to start the admin and SOA managed servers (not BAM). To start them one at a time instead, skip to step 6.

Copy the startup script C:\po\bin\start_all.cmd to C:\Oracle\Middleware. Edit the copied file to reflect your environment. Open a command window and start your servers as shown. You must specify how many seconds to wait after starting the admin server before starting the managed server.
The admin server must be in the RUNNING state before the managed server starts (see the following screenshot). Try 180 seconds and adjust as necessary. You will need more time the first time you start after a machine reboot than for subsequent restarts:

cd C:\Oracle\Middleware
start_all.cmd 180

Your servers are now starting automatically, so you can skip steps 6-10. Jump to step 12 to continue. To start the servers manually, continue with the following steps.

Open three command windows: one for the WebLogic admin server, one for the SOA managed server, and one for the BAM managed server (only start BAM when you need it for a BAM lab). Start the admin server first:

cd C:\Oracle\Middleware\home_11gR1\user_projects\domains\domain1
startWebLogic.cmd

Wait for the admin server to finish starting up. It takes a few minutes; watch for the status RUNNING in the log console window.

Start the SOA managed server in the second command window. This start script is in the bin directory, and you can also run it directly from there:

cd C:\Oracle\Middleware\home_11gR1\user_projects\domains\domain1\bin
startManagedWebLogic.cmd soa_server1

When prompted, enter the username weblogic and password welcome1. If you did step 1 and set boot.properties, you will not be prompted. The server is started when you see the message INFO: FabricProviderServlet.stateChanged SOA Platform is running and accepting requests.

Start the BAM managed server in the third command window (do this only when needed for the BAM lab):

cd C:\Oracle\Middleware\home_11gR1\user_projects\domains\domain1\bin
startManagedWebLogic.cmd bam_server1

When prompted, enter the username weblogic and password welcome1. If you did step 1 and set boot.properties, you will not be prompted. Watch for the RUNNING status.
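For reference, a start_all.cmd of the kind described above can be sketched as the following Windows batch file. This is only an illustrative sketch, not the actual script from the course download, and it assumes the domain path used earlier:

```bat
@echo off
rem Sketch of a start_all.cmd: start the admin server, wait the number of
rem seconds given as the first argument, then start the SOA managed server.
set DOMAIN=C:\Oracle\Middleware\home_11gR1\user_projects\domains\domain1

start "AdminServer" cmd /c %DOMAIN%\startWebLogic.cmd

rem Crude delay: ping localhost once per second, %1 times
ping -n %1 127.0.0.1 > nul

start "soa_server1" cmd /c %DOMAIN%\bin\startManagedWebLogic.cmd soa_server1
```

The delay argument gives the admin server time to reach the RUNNING state before the managed server is launched.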
Console URLs

Log in with weblogic/welcome1 for all consoles:

WebLogic console: http://localhost:7001/console
Enterprise Manager console: http://localhost:7001/em
SOA worklist: http://localhost:8001/integration/worklistapp
B2B console: http://localhost:8001/b2b
BAM (must use IE browser): http://localhost:9001/OracleBAM

Stopping servers

Whenever you need to stop the servers, complete the following: stop the managed servers first by entering Ctrl+C in their command windows and wait until they have stopped; then stop the admin server by entering Ctrl+C in its command window.

WebLogic Server console settings

There are two suggested changes to make in the WebLogic Server console. First, you will be viewing application deployments often using the WebLogic Server console. This is a lot more convenient if you change the settings not to show libraries, as this makes the list a lot shorter and you can find what you need more quickly.

Start the WebLogic Admin Server (WLS) if it is not already running. Log in to the WLS console at http://localhost:7001/console. Click on Deployments in the left navigation bar, then on Customize this table at the top of the Deployments table. Change the number of rows per page to 100 (there are only about 30). Select the checkbox to exclude libraries and click on Apply.

Second, when the server is started, internal applications like the WLS console are not deployed completely, and you see a slight delay when you first access the console. You saw this delay just now when you first accessed the console URL. You can change this behavior to deploy internal applications at startup instead, and then you don't get the delay when you access the console. This is convenient for demos (if you want to show the console) and also if you tend to use the console each time you start up the server. Click on domain1 in the left navigation bar in the WLS console, click on the Configuration | General tab, and deselect the Enable on-demand deployment of internal applications checkbox.
Click on the Save button.

EM settings for development

Enterprise Manager can provide different levels of information about composite runtime instances, based on a property setting. During development, it is helpful to have a higher setting. These settings are not used on production machines except when specifically needed for debugging purposes, as there is a performance cost.

Start your servers if they are not already running and log in to the EM console at http://localhost:7001/em. Right-click on soa-infra (soa_server1) in the left navigation bar to open the SOA menu and select SOA Administration | Common Properties. Select Audit Level: Development and select the checkbox for Capture Composite Instance State. Click on Apply and then click on Yes.

If you need to uninstall JDeveloper and servers

If you need to uninstall everything, complete the following:

First save anything from C:\Oracle\Middleware\jdev_11gR1\jdeveloper\mywork that you want to keep, as this directory will be deleted. Run Uninstall from the program menu to completion for both JDeveloper and WLS. Delete C:\Oracle\Middleware\jdev_11gR1 and C:\Oracle\Middleware\home_11gR1. If you get an error message about not being able to delete because a name or path is too long, change the names of the composite directories within home_11gR1\user_projects\domains\domain1\deployed-composites to abcd and try deleting again. Delete these program groups from C:\Documents and Settings\All Users\Start Menu\Programs: Oracle Fusion Middleware 11.1.1.1.0, Oracle SOA 11g - Home1, and Oracle WebLogic. Finally, complete the Dropping existing schema section earlier in this article to clean up the database.

Packt
07 Dec 2012
7 min read

Meet Yii

(For more resources related to this topic, see here.)

Easy

To run a Yii version 1.x-powered web application, all you need are the core framework files and a web server supporting PHP 5.1.0 or higher. To develop with Yii, you only need to know PHP and object-oriented programming. You are not required to learn any new configuration or templating language. Building a Yii application mainly involves writing and maintaining your own custom PHP classes, some of which will extend from the core Yii framework component classes.

Yii incorporates many of the great ideas and work from other well-known web programming frameworks and applications. So if you are coming to Yii from other web development frameworks, it is likely that you will find it familiar and easy to navigate.

Yii also embraces a convention-over-configuration philosophy, which contributes to its ease of use. This means that Yii has sensible defaults for almost all the aspects that are used for configuring your application. Following the prescribed conventions, you can write less code and spend less time developing your application. However, Yii does not force your hand. It allows you to customize all of its defaults and makes it easy to override all of these conventions.

Efficient

Yii is a high-performance, component-based framework that can be used for developing web applications on any scale. It encourages maximum code reuse in web programming and can significantly accelerate the development process. As mentioned previously, if you stick with Yii's built-in conventions, you can get your application up and running with little or no manual configuration.

Yii is also designed to help you with DRY development. DRY stands for Don't Repeat Yourself, a key concept of agile application development. All Yii applications are built using the Model-View-Controller (MVC) architecture. Yii enforces this development pattern by providing a place to keep each piece of your MVC code.
This minimizes duplication and helps promote code reuse and ease of maintainability. The less code you need to write, the less time it takes to get your application to market. The easier it is to maintain your application, the longer it will stay on the market.

Of course, the framework is not just efficient to use; it is remarkably fast and performance optimized. Yii has been developed with performance optimization in mind from the very beginning, and the result is one of the most efficient PHP frameworks around. So any additional overhead that Yii adds to applications written on top of it is extremely negligible.

Extensible

Yii has been carefully designed to allow nearly every piece of its code to be extended and customized to meet any project requirement. In fact, it is difficult not to take advantage of Yii's ease of extensibility, since a primary activity when developing a Yii application is extending the core framework classes. And if you want to turn your extended code into useful tools for other developers, Yii provides easy-to-follow steps and guidelines to help you create such third-party extensions. This allows you to contribute to Yii's ever-growing list of features and actively participate in extending Yii itself.

Remarkably, this ease of use, superior performance, and depth of extensibility does not come at the cost of sacrificing features. Yii is packed with features to help you meet the high demands placed on today's web applications. AJAX-enabled widgets, RESTful and SOAP web services integration, enforcement of an MVC architecture, DAO and relational ActiveRecord database layers, sophisticated caching, hierarchical role-based access control, theming, internationalization (I18N), and localization (L10N) are just the tip of the Yii iceberg. As of version 1.1, the core framework is packaged with an official extension library called Zii.
These extensions are developed and maintained by the core framework team members, and continue to extend Yii's core feature set. And with a deep community of users who are also contributing by writing Yii extensions, the overall feature set available to a Yii-powered application is growing daily. A list of available, user-contributed extensions can be found on the Yii framework website at http://www.yiiframework.com/extensions. There is also an unofficial repository of great extensions at http://yiiext.github.com/, which really demonstrates the strength of the community and the extensibility of this framework.

MVC architecture

As mentioned earlier, Yii is an MVC framework and provides an explicit directory structure for each piece of model, view, and controller code. Before we get started with building our first Yii application, we need to define a few key terms and look at how Yii implements and enforces this MVC architecture.

Model

Typically in an MVC architecture, the model is responsible for maintaining state, and should encapsulate the business rules that apply to the data that defines this state. A model in Yii is any instance of the framework class CModel or a child class thereof. A model class is typically comprised of data attributes that can have separate labels (something user-friendly for the purpose of display), and can be validated against a set of rules defined in the model. The data that makes up the attributes in the model class could come from a row of a database table or from the fields in a user input form.

Yii implements two kinds of models, namely the form model (the CFormModel class) and active record (the CActiveRecord class). They both extend from the same base class, CModel. The class CFormModel represents a data model that collects HTML form inputs. It encapsulates all the logic for form field validation, and any other business logic that may need to be applied to the form field data.
It can then store this data in memory or, with the help of an active record model, store it in a database.

Active Record (AR) is a design pattern used to abstract database access in an object-oriented fashion. Each AR object in Yii is an instance of CActiveRecord or a child class thereof. It wraps a single row in a database table or view, encapsulates all the logic and details around database access, and houses much of the business logic that needs to be applied to that data. The data field values for each column in the table row are represented as properties of the active record object.

View

Typically the view is responsible for rendering the user interface, often based on the data in the model. A view in Yii is a PHP script that contains user interface-related elements, often built using HTML, but it can also contain PHP statements. Usually, any PHP statements within the view are very simple conditional or looping statements, or refer to other Yii UI-related elements such as HTML helper class methods or prebuilt widgets. More sophisticated logic should be separated from the view and placed appropriately in either the model, if dealing directly with the data, or the controller, for more general business logic.

Controller

The controller is the main director of a routed request, and is responsible for taking user input, interacting with the model, and instructing the view to update and display appropriately. A controller in Yii is an instance of CController or a child class thereof. When a controller runs, it performs the requested action, which then interacts with the necessary models and renders an appropriate view. An action, in its simplest form, is a controller class method whose name starts with the word action.
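To make the model and controller roles concrete, here is a minimal sketch of a Yii 1.x form model and a controller action that uses it. The ContactForm class, its attributes, and the contact view are invented for illustration; they are not part of the framework itself:

```php
<?php
// Hypothetical example — ContactForm, its attributes, and the 'contact'
// view are invented names used only for this sketch.
class ContactForm extends CFormModel
{
    public $name;
    public $message;

    // Validation rules applied when $model->validate() is called
    public function rules()
    {
        return array(
            array('name, message', 'required'),
            array('name', 'length', 'max' => 64),
        );
    }
}

class SiteController extends CController
{
    // Routed as site/contact; note the method name starts with "action"
    public function actionContact()
    {
        $model = new ContactForm;
        if (isset($_POST['ContactForm'])) {
            // Massive assignment: attributes are filtered by the rules above
            $model->attributes = $_POST['ContactForm'];
            if ($model->validate()) {
                // ...do something with the validated data...
            }
        }
        // Render the view, passing the model to it
        $this->render('contact', array('model' => $model));
    }
}
```

The controller collects input, the model validates it, and the view receives the model for display, which is the MVC separation described above.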
Packt
12 Dec 2011
19 min read

WordPress 3: Designing your Blog

(For more resources on WordPress, see here.)

Blog design principles

Blogs tend to have a fairly simple, minimalist layout and design. This has always been one of their key characteristics. Blogs are all about frequently updated content, so the main purpose of their design is to present that content as efficiently and conveniently as possible. The vast majority of blogs present their most recent content on the page that visitors arrive at; hence, the front page contains the latest posts. There's no home page with a verbose welcome message and a long navigation menu to click through to the important stuff. The visitor gets straight into the meat of the blog. By default, this is the structure that WordPress provides. It is possible to set a static page as your blog's front page but, in the vast majority of cases, I wouldn't recommend it.

So when considering the architecture of a blog, unlike other types of website, we don't have to worry too much about a complex navigation structure. There is a convention that we can follow. Yes, we may want to add some extra static pages, but probably only a few of these. What we are most concerned with in blog design is not a complicated navigation structure and how all the pages link together, but how the actual blog page should look and feel. This can be broken down into four key components, which we will examine one by one:

Layout
Color
Typography
Usability and accessibility

Layout

Good design is all about making something easy for people to use. Designers achieve this by employing standards and conventions. For example, cars have a standard design: four wheels, a chassis, a steering wheel, gas pedal, brake, gear shift, and so on. Car designers have stuck to this convention for many years: first, because it works well and second, because it enables us to drive any car we choose. When you sit down in any standard road car, you know how it works. You turn the key in the ignition, select a gear, hit the gas, and off you go.
It's certainly not beyond the ken of car designers to come up with new ways of getting a car in motion (a joystick maybe, or a hand-operated brake), but this would make it more difficult for people to drive. Cars work reasonably safely and efficiently because we are all familiar with these conventions.

The layout of blog pages also tends to follow certain conventions. As with cars, this helps people to use blogs efficiently. They know how they work, because they're familiar with the conventions. Most blogs have a header and a footer, with the main content arranged into columns. This columnar layout works very well for the type of chronological content presented in blogs. Because of these conventions, the decisions about our blog layout are fairly simple. It's basically a case of deciding where we want to place all our page elements and content within this standard columnar layout. The set of page elements we have to choose from is also based on fairly well-entrenched blogging conventions. The list of things we may want on our page includes:

Header
Posts
Comments
Static content (for example, the About page)
Links to static pages (simple navigation)
RSS feeds
Search
Categories
Archives
Blogroll
Widgets and plugins
Footer

If we look at this list in more detail, we can see that these page elements can be grouped in a sensible way. For example:

Group 1: Header; Links to static pages
Group 2: Posts; Comments; Static content
Group 3: RSS feeds; Search; Categories
Group 4: Archives; Blogroll; Widgets and plugins
Group 5: Footer

This isn't the only possible grouping scheme we might come up with. For example, we may place the items in Groups 3 and 4 into a single larger group, or we may have widgets and plugins in a group of their own. From this grouping, we can see that the items in Group 2 are likely to be the main content on our page, with Groups 3 and 4 being placed in sidebars. Sidebars are areas on the page where we place ancillary content.
Having considered the elements we want on the page and how they might be grouped, we can think about possible layouts. Within the conventional columnar structure of blogs there are quite a few possible layout variations. We'll look at four of the most common.

The first is a three-column layout. Here, we have two sidebars, one on either side of the main content. Using this type of layout, we would probably place the items in Groups 3 and 4 in the sidebars and Group 2 in the main content area.

A variation on the three-column layout is to have the two sidebars next to each other on one side of the page (usually the right), as shown in the following diagram. This is a popular layout for blogs, not just for aesthetics, but because the search engine spiders encounter the post column first, as that content is at the top of the template. Using two sidebars is useful if you anticipate having a lot of ancillary content on your blog. The list of page elements given earlier is really the bare minimum you would want on your page. However, if you decide to use lots of widgets or have a long blogroll, it's a good idea to spread them across two sidebars. This means that more of your content can be placed above the fold.

The concept of above the fold in web design applies to the content in a web page that is visible without having to scroll down, that is, the stuff in the top part of the page. It's a good idea to place the most important content above the fold so that readers can see it immediately. This is particularly true if you plan to monetize your blog by displaying adverts. Adverts that appear above the fold get the most exposure and, therefore, generate the most revenue.

Another popular layout amongst bloggers has just two columns. In this layout, we would place the items in Groups 3 and 4 together in the one sidebar. It doesn't really matter on which side of the page the sidebar is placed, but it seems more common to have it on the right.
Studies have shown that a web user's eyes are most often focused on the top-left region of a web page when they first open any page. So it makes sense to place your main content there, with your sidebar on the right. Also, remember that the search engine spiders will find the leftmost content first. You want them to find your main content quickly, which is a good reason for placing your sidebar on the right, out of their way.

An important benefit of a two-column layout is that it allows more room for your main content area. This may be important if you intend to use a lot of video or images within your blog posts. The extra room allows you to display this visual content at a larger size.

Many blogs place some of their ancillary content just above the footer, below the main content. This also has the advantage of leaving more space for the main content, as with the two-column layout. The following diagram shows this type of layout. Here, the content just above the footer isn't, strictly speaking, a sidebar, but I've labeled it this way because it's the terminology most often applied to this type of ancillary content.

Wireframing

The layout diagrams we've just seen are referred to as wireframes by web designers. They give a simple overview of where the elements of a page should be placed. It would be a good idea for you to create your own wireframe for your blog design. This can be done using most graphics software packages or something like Microsoft Visio, and a simple pen and paper does the job equally well!

Color

This is the next design principle we need to consider. It may be that you already have a corporate color scheme based on your company logo, stationery, or existing website. In this case, you'll probably want to continue that theme through your blog design. Even if you already have your corporate color scheme, this section may still be useful in case you decide to change your blog colors in the future. The subject of color in design is a large one.
Design students spend a great deal of time learning about the psychology and science of colors and techniques for achieving the best color schemes. Obviously, we don't have enough space to go into that kind of detail, but I will try to give you a few pointers.

The first thing to think about is the psychology of color, in particular, color associations. This is the idea that different colors evoke different feelings and emotions in the eye of the beholder. To a certain extent this can be rather subjective, and it can also depend on cultural influences, but there are some generalities that can be applied. For example, red is often perceived as being exciting, passionate, or dramatic. Yellow is an active and highly visible color, which is why it is used in warning signs; it is also associated with energy and happiness. Blue is sometimes thought of as being cold. It can also be a calming color and may sometimes be seen as corporate or conservative. White, for many people, gives the idea of cleanliness, purity, and perfection. Black can be seen as strong, elegant, and chic. Obviously, these color associations can vary from person to person, so designers don't rely on them solely in their color decisions, but they are certainly worth bearing in mind.

There are more practical considerations regarding color that are probably more relevant than color psychology. For example, we all know that some color combinations don't work well together. There is a great deal of design theory aimed at devising good color combinations, but unless you're a professional designer, it's not really worth going into. Probably the best method for working out good color combinations is trial and error. If you're trying to figure out a background color and text color for your blog, simply test a few out. You could use a graphics package such as Photoshop or Microsoft Paint, or one of the many online color tools such as http://colorschemedesigner.com/ or Adobe's Kuler at http://kuler.adobe.com.
When choosing background and text colors you need to think about contrast. For example, yellow text on a white background can be very difficult to read. Some people also find light text on a black background a strain on their eyes. It's also important not to use too many colors in your design. Try to limit your palette to a maximum of three or four; sometimes you may only need two colors to make an attractive design.

One method for devising color combinations is to look for examples all around you, particularly in nature. Maybe look at a photograph of a landscape and pick out color combinations you like. Also consider the work of professional designers. Think about websites and blogs you like, and examine the color schemes they have used. You will also find good examples in offline design: pick up a book and see how colors have been used in the cover design. If you would like to base your blog's color scheme on your company logo, you could use lighter and darker versions of one color from the logo. Use the most vivid color in the logo for emphasis or headings.

Web color theory

At this point, it's worth looking at the technical theory behind colors on the Web. Web browsers use the hexadecimal RGB color system to render colors in web pages. This is because computer monitors use an RGB color model, which means every pixel is colored using a combination of red, green, and blue light (hence RGB). There are 256 different levels of red light, 256 different levels of green light, and 256 different levels of blue light. These can be combined to create 16,777,216 different colors, all of which are available to your web browser. The hexadecimal system gives us a way of counting from 0 to 255 using numbers and letters, which covers all 256 levels of RGB light. In the hexadecimal scale, 0 is 00 and 255 is FF. A six-character hexadecimal color code specifies the levels of red, green, and blue that form a particular color.
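The arithmetic behind these codes is easy to check in code. Here is a short, standalone PHP sketch (written for this explanation, not part of WordPress) that converts decimal red, green, and blue levels into the six-character hexadecimal code a browser expects:

```php
<?php
// Convert decimal RGB levels (0-255) into a six-character
// hexadecimal color code, as used in web pages.
function rgb_to_hex($red, $green, $blue)
{
    // %02X prints each level as two uppercase hexadecimal digits
    return sprintf('%02X%02X%02X', $red, $green, $blue);
}

echo rgb_to_hex(255, 255, 255) . "\n"; // FFFFFF (white)
echo rgb_to_hex(255, 0, 0) . "\n";     // FF0000 (red)
echo rgb_to_hex(255, 255, 0) . "\n";   // FFFF00 (yellow)
```

Running it prints the same codes worked out by hand in the examples that follow.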
For example, the color white combines red, green, and blue at their highest possible level, that is, 255. Remember that in hexadecimal 255 is FF, so the color code for white is FFFFFF (Red: FF, Green: FF, and Blue: FF). The color code for black is 000000, as the levels of red, green, and blue are set to their lowest, or 00 in hexadecimal. The code for red is FF0000, blue is 0000FF, yellow is FFFF00, and so on. We can use six-character hexadecimal RGB codes to define all of the 16,777,216 web colors.

So how do we know the hexadecimal code for a particular color? Well, there are many tools available that give you the hexadecimal RGB codes for the colors you choose. Some are standalone applications for PC or Mac, and others are online. Take a look at https://www.webpagefx.com/web-design/color-picker/ or do a quick search in Google on color picker. For more information on web colors, read the article at http://en.wikipedia.org/wiki/Web_colors.

Typography

Another important consideration for your blog design is the fonts you use. Your choice of font will have a big impact on the readability of your blog. It's important to bear in mind that although there are literally thousands of fonts to choose from, only a small number of them are practical for web design. This is because a web browser can only display fonts that are already installed on the user's computer. If you choose an obscure font for your blog, the chances are that most users won't have it installed, in which case the web browser will automatically select a substitute font. This may be smaller or far less readable than the font you had in mind. It's always safest to stick to the fonts that are commonly used in web design, known as web safe fonts. These include the following:

Arial
Verdana
Times New Roman
Georgia

There are two types of font: serif and sans-serif.
Serif fonts have little flourishes at the ends of the strokes, whereas sans-serif fonts don't have this extra decoration. Arial and Verdana are sans-serif fonts, whereas Times New Roman and Georgia are serif fonts.

As you'll see later in the article, when we look at CSS, fonts are usually specified in groups or families, listed in the order of the designer's preference. For example, a designer may specify font-family: Georgia, "Times New Roman", serif;. This means that when the browser renders the page it will first look for the Georgia font; if it isn't installed, it will look for the Times New Roman font, and if that isn't installed either, it will fall back to the computer's default serif font. This method gives the designer more control over the font choices the browser will make.

The size of your font is also an important factor. Generally speaking, the bigger it is, the easier it is to read. Computer displays are getting bigger and bigger but the default screen resolutions are tending to get smaller. In other words, the individual pixels on users' screens are getting smaller. This is a good reason for web designers to choose larger font sizes. This trend can be seen on many Web 2.0 sites, which tend to use large and clear fonts as part of their design, for example http://www.37signals.com. But be careful not to go too big, as this can make your design look messy and a little childish.

Remember that you're not limited to using just one font in your blog design. For example, you may decide to use a different font for your headings. This can be an effective design feature, but don't go crazy by using too many fonts, as this will make your design look messy. Probably two, or at most three, fonts on a page are plenty.

Font replacement

Font replacement refers to a relatively new group of technologies that are pushing the envelope of web typography. In theory, they allow designers to use any font in their web page designs. In practice, things are a little more complicated.
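The font-family groups described earlier can be written in a stylesheet like this. This is only an illustrative sketch; the sizes and faces are arbitrary choices, not recommendations from this article:

```css
/* Body text: try Georgia first, then Times New Roman,
   then the browser's default serif face */
body {
  font-family: Georgia, "Times New Roman", serif;
  font-size: 16px;
}

/* Headings in a sans-serif face for contrast */
h1, h2, h3 {
  font-family: Verdana, Arial, sans-serif;
}
```

Note that multi-word family names such as Times New Roman are quoted, while generic fallbacks such as serif never are.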
Issues around browser compatibility and font licensing make font replacement technologies a bit of a minefield for anyone who is new to web design. It's true that, thanks to font replacement technologies, professional designers are no longer constrained by the notion of web safe fonts. But, if you are a web design novice, I recommend you stick to web safe fonts until your skills improve and you are ready to learn a whole new technology. A full discussion on font replacement is way beyond the scope of this article; I mention it only to give you a better overview of the current state of web typography. But if you are interested in knowing more, three popular font replacement technologies are Cufón (http://cufon.shoqolate.com), Font Squirrel (http://www.fontsquirrel.com), and Google Fonts API (http://code.google.com/apis/webfonts/). There is also something known as @font-face, which is part of CSS3, the latest specification of CSS. Again, it offers the tantalizing possibility of giving designers free rein in their choice of fonts. Sadly, @font-face is also hindered by browser compatibility and font licensing issues. The Font Squirrel technology, mentioned previously, resolves these issues to a certain extent, so this is something to be aware of as your web design skills develop. But for the time being, I recommend you concentrate on the basics of web typography and don't worry about @font-face until you feel ready. Usability and accessibility This is another very important area to consider when designing your blog. Many people, who live with various disabilities, use a range of 'assistive' software to access the Web. For example, people with visual impairments use screen readers, which translate the text in a web browser into audible speech. There are also people who are unable to use a mouse, and instead rely on their keyboard to access web pages. 
It's the responsibility of web designers to ensure that their websites are accessible for these different methods of browsing. There's no sense in alienating this group of web surfers just because your blog is inaccessible to them. There are also many other circumstances when your blog might be accessed by means other than a standard web browser, for example, via mobile phones, PDAs, or tablets. Again, a good designer will ensure that these modes of browsing are catered for. The web design industry has been well aware of these accessibility issues for many years and has come up with guidelines and technologies to help conscientious designers build websites that are standards compliant . These web standards help ensure best practice and maximize accessibility and usability. Luckily, WordPress takes care of a lot of the accessibility issues simply by the way it's been developed and built. The code behind WordPress is valid XHTML and CSS, which means that it complies with web standards and is easily accessible. It's important, then, that you don't break the system by allowing bad practice to creep in. Some of the things to bear in mind relate to a couple of design principles we've already discussed, for example, using a color scheme and font size that makes your text easy to read. Other issues include keeping the number of navigation links on your page to a minimum—a whole load of useless links can be annoying for people who have to tab through them to get to your main content. You should also ensure that any third-party plugins you install are standards-compliant and don't throw up any accessibility problems. The same is true if you decide to use a ready-made theme for your blog design. Just make sure it's accessible and satisfies web standards. For more background reading on web standards, you could take a look at http://www.alistapart.com or the World Wide Web Consortium (W3C) website at http://www.w3.org. 
Implementing your blog design We've now considered the principles involved in designing our blog. The next thing to decide is how we actually carry out the design work. There are three main options available, each of which involves varying degrees of work. However, they all require knowledge of the design principles we just covered. The first approach is probably the easiest; it simply involves finding a readymade theme and installing it in WordPress. By working through the earlier design principles, you should have an idea of what you want your blog to look like and then you simply need to find a theme that matches your vision as closely as possible. A good place to start looking is the official WordPress Free Themes Directory at http://wordpress.org/extend/themes/. You'll also find many more theme resources by running a search through Google. There are hundreds of very attractive WordPress themes available for free and many others which you can pay for. However, if you adopt this approach to your blog design, you won't have a unique or original blog. The chances are the theme you choose will also be seen on many other blogs. At the other end of the scale, in terms of workload, is designing your own theme from scratch. This is a fairly complicated and technical process, and is well beyond the scope of this article. In fact, it's a subject that requires its own book. If you intend to build your own theme, I recommend WordPress 2.8 Theme Design by Tessa Blakeley Silver ISBN 978-1-849510-08-0 published by Packt Publishing. The third approach is to modify a readymade theme. You could do this with any theme you choose, even the default Twenty Ten theme that ships with WordPress. However, if you edit a fully developed theme, you spend a lot of time unpicking someone else's design work and you may still be left with elements that are not completely your own. 
A better method is to start with a theme framework, which has been specially designed to be a blank canvas for your own design. Over the last few years many WordPress theme frameworks have appeared, some free, some paid-for. Two of the most popular paid-for theme frameworks are Thesis (http://diythemes.com/) and Genesis (http://www.studiopress.com/themes/genesis), while popular free frameworks include Hybrid (http://themehybrid.com/), Carrington (http://carringtontheme.com/), and Thematic.

Packt
12 May 2011
8 min read

SilverStripe 2.4: Creating our Own Theme

Time for action - files and folders for a new theme

We could start off from scratch, but let's not make it more complicated than necessary:

Simply copy the BlackCandy theme to a new folder in the same directory. We'll just call it bar, as the page will be for a bar business. Now reload the CMS backend and you should be able to switch to the new bar theme. As we won't use any of the available images, we can delete all the image files, but leave the folder intact—we'll add our own pictures later on.

Basic layout

Next, we'll create our page's general layout. Time to put our template skills into action! Here's the basic layout we want to create:

So far it's a pretty basic page. We'll take a good look at the templates and basic structure, but won't spend much time on the HTML and CSS specifics. Do, however, pay attention to how the different CSS files are set up, to make the most of them. We won't list every line of code involved. If you'd prefer, you can simply copy the missing parts from the code provided here (Ch:2)

File themes/bar/templates/Page.ss

This page is pretty empty. We'll later add an intro page with a different structure, so the code these templates will share is rather limited.

Time for action - the base page

Add the following code to the file themes/bar/templates/Page.ss:

<!doctype html>
<html lang="$ContentLocale">
<head>
  <meta charset="utf-8"/>
  <% base_tag %>
  <title>
    <% if MetaTitle %>
      $MetaTitle
    <% else %>
      $Title
    <% end_if %>
  </title>
  $MetaTags(false)
  <link rel="shortcut icon" href="favicon.ico"/>
  <!--[if lt IE 9]>
    <script src="http://html5shim.googlecode.com/svn/trunk/html5.js"></script>
  <![endif]-->
</head>
<body>
  $Layout
  <noscript>
    <br/>&nbsp;<br/>&nbsp;<br/>&nbsp;<br/>&nbsp;<br/>&nbsp;<br/>&nbsp;
    <div><p>
      <b>Please activate JavaScript.</b><br/>
      Otherwise you won't be able to use all available functions properly...
    </p></div>
  </noscript>
</body>
</html>

What just happened?
The highlighted parts are SilverStripe specific:

First, we set the language of our content. Depending on what you selected during the installation process, this can vary. By default, it will be: <html lang="en-US">

Setting the page's base; this can be useful when referencing images and external files. Depending on how you set up your system, this will vary.

Setting the title, just like in the BlackCandy theme.

Next, adding meta tags without the title element. These are only set if you actually enter them in the CMS on the Metadata tab of the desired page's Content tab. Empty fields are simply left out of the output. Only the SilverStripe note is set in any case.

Finally, including the page's specific layout.

The JavaScript we include for Internet Explorer versions before 9 fixes their inability to correctly render HTML5 tags. To learn more about the specific code, head over to https://code.google.com/p/html5shim/.

Why HTML5? The motivation behind it isn't to stay buzzword compliant. While HTML5 is not a finished standard yet, it already adds new features and makes development a little easier. Take a look at the doctype declaration, for example: it's much shorter than XHTML's. Now there's a real chance of actually remembering it and not needing to copy it every time you start from scratch. New features include some very handy tags, but we'll come to those in the next code sample.

The output on the final HTML page for the second and third elements (we've already taken a look at the first) in the header looks like the following listing. Which part in the template is responsible for which line or lines?
<title>home</title>
<meta name="generator" content="SilverStripe - http://silverstripe.org" />
<meta http-equiv="Content-type" content="text/html; charset=utf-8" />
<meta name="keywords" content="some keyword" />
<meta name="description" content="the description of this page" />

Did you notice that there is no reference to a CSS file and the styling is still applied? Remember that CSS files are automagically included if you stick to the naming convention we've already discussed.

File themes/bar/templates/Layout/Page.ss

Now let's move beyond the basics and do something a bit more interesting.

Time for action - the layout page

Add the following code to the file themes/bar/templates/Layout/Page.ss:

<div>
  <img src="$ThemeDir/images/background.jpg" alt="Background" id="background"/>
</div>
<section id="content" class="transparent rounded shadow">
  <aside id="top">
    <% include BasicInfo %>
  </aside>
  <% include Menu %>
  <section class="typography">
    $Content
    $Form
  </section>
</section>
<% include Footer %>

We rely on just three includes which are very easy to reuse on different pages. We then reference the page's content and possible forms (no comments will be required). The only template part we've not yet covered is the $ThemeDir. This is replaced by the path to our theme, in our case themes/bar/. So you don't need to worry about hard-coded paths when copying template files between different themes. You only need to take care of paths in the CSS files as they are not processed by the template engine.

The includes: BasicInfo.ss, Menu.ss, and Footer.ss

In the previous code segment, we've referenced three includes. Let's not forget to create them.

Time for action - the includes

The includes we're yet to explore are Menu.ss, BasicInfo.ss, and Footer.ss.
The menu is very simple; we only include the first level, highlighting the currently active item, as we won't have more subpages:

<nav id="menu">
  <ul>
    <% control Menu(1) %>
      <li class="$LinkingMode">
        <a href="$Link">$MenuTitle</a>
      </li>
    <% end_control %>
  </ul>
</nav>

BasicInfo.ss only contains the $ThemeDir placeholder besides good old HTML:

<a href="home">
  <img src="$ThemeDir/images/logo.png" alt="Logo" id="logo"/>
</a>
<ul id="details-first">
  <li>Phone: <b>01 23456789</b></li>
  <li>Contact: <a href="contact">contact@bar.com</a></li>
  <li>Address: <a href="location">Bar Street 123</a></li>
</ul>
<div id="details-second">
  <div class="left">Opening hours:</div>
  <div class="right"><p>
    <b>Mon - Thu 2pm to 2am</b><br/>
    <b>Fri - Sat 2pm to 4am</b>
  </p></div>
</div>
<a href="http://www.facebook.com/pages/">
  <img src="$ThemeDir/images/facebook.png" alt="Facebook" id="facebook"/>
</a>

Be sure to replace bar.com with your own domain. Using it here is quite fitting, though, because we're making a page for a bar and http://www.bar.com is only a placeholder itself. It's derived from foobar, or foo bar, a placeholder name often used in computer programming. Take a look at the site yourself for more information.

Finally, the footer makes use of the $Now placeholder and the date formatter .Year to retrieve the current year:

<footer>
  <span><a href="imprint">Imprint</a></span>
  <span>&copy; Bar $Now.Year</span>
</footer>

Have a go hero - create the layout and pages

Now that we've gone over all of the content, it's time to build the page. Feel free to copy some code, as we're not a typing class, but take careful note of every SilverStripe-specific part so you understand what you're doing. We use three includes, so create these files and remove the rest still available from BlackCandy, as well as the unused templates/Layout/Page_results.ss. And don't forget to flush the cache by appending ?flush=all to the page's URL! When working with includes this will save you a lot of trouble.
We've hardcoded some links to different pages in the template files. Make sure you add all necessary pages including the imprint, but don't add it to the menu.
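Before moving on, it can help to picture what the template engine does with placeholders like $ThemeDir and $MenuTitle. The following is a toy placeholder expander in Python, written purely as a sketch; SilverStripe's real parser is far more involved and also handles controls, includes, and formatters:

```python
import re

def render(template, context):
    # Replace each $Name placeholder with its value from the context
    # dict; unknown placeholders are left untouched.
    return re.sub(r"\$(\w+)",
                  lambda m: str(context.get(m.group(1), m.group(0))),
                  template)

html = render('<img src="$ThemeDir/images/logo.png" alt="Logo"/>',
              {"ThemeDir": "themes/bar"})
print(html)  # <img src="themes/bar/images/logo.png" alt="Logo"/>
```

This is why copying templates between themes "just works": the path is substituted at render time rather than being hard-coded in the markup.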

Packt
25 Nov 2010
4 min read

OpenCart FAQs

OpenCart 1.4 Beginner's Guide

Build and manage professional online shopping stores easily using OpenCart.

Develop a professional, easy-to-use, attractive online store and shopping cart solution using OpenCart that meets today's modern e-commerce standards
Easily integrate your online store with one of the more popular payment gateways like PayPal and shipping methods such as UPS and USPS
Provide coupon codes, discounts, and wholesale options for your customers to increase demand on your online store
With hands-on examples, step-by-step explanations, and illustrations

Q: What are the system requirements for OpenCart?
A: The following screenshot shows the minimum system requirements for installing and running OpenCart without problems. You should contact your hosting provider if you are not sure whether these settings are in place.

Q: What are the methods to upload files to a web host?
A: There are two common methods for uploading files to a web host: using the cPanel File Manager utility, or using an FTP client.

Q: Can we run more than one store on a single OpenCart installation?
A: Yes. We can run more than one store on a single OpenCart installation.

Q: What are Geo Zones?
A: Geo Zones represent groups of countries, or smaller geographic sections within those countries. A Geo Zone can include countries, states, cities, and regions, depending on the type of country. OpenCart uses Geo Zones to determine shipping and tax rates for a customer's order. Here is an example:

Q: What if we want to edit anything in Geo Zones?
A: If we want to edit any country and/or zone definition in Geo Zones, we should visit the System | Localisation | Zones menu in the administration panel.

Q: What is SEO?
A: SEO (Search Engine Optimization) is a group of processes applied to websites to increase their visibility in search engine results and attract more qualified traffic. For an online store, it is very important to apply at least the basic SEO techniques.
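The Geo Zone idea above can be pictured with a small sketch: map a customer's country to a zone, then look up the rate configured for that zone. The zones, country codes, and rates below are invented for illustration; OpenCart itself configures all of this through the admin panel, not through code:

```python
# Hypothetical zone definitions and shipping rates, for illustration only.
GEO_ZONES = {
    "UK Zone": {"GB"},
    "EU Zone": {"DE", "FR", "ES", "IT"},
}
SHIPPING_RATES = {"UK Zone": 3.99, "EU Zone": 7.99}

def shipping_rate(country_code, default_rate=12.99):
    # Find the zone containing the customer's country and return its rate.
    for zone, countries in GEO_ZONES.items():
        if country_code in countries:
            return SHIPPING_RATES[zone]
    return default_rate  # rest of the world

print(shipping_rate("DE"))  # 7.99
print(shipping_rate("US"))  # 12.99
```

Tax rates work the same way: the zone acts as the lookup key, so one rule can cover a whole group of countries or regions.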
Q: Where can we find new modules for OpenCart?
A: The contributions and forum pages at www.OpenCart.com are the essential sources for finding new modules and/or requesting new ones from developers.

Q: What do you mean by a payment gateway?
A: A payment gateway is an online analogue of the physical credit card processing terminal you can find in retail shops. Its function is to process credit card information and return the results to the store system. You can imagine the payment gateway as sitting between the online store and the credit card network. The software side of this service is included in OpenCart, but we will have to use one of the payment gateway services.

Q: What are the payment methods in OpenCart?
A: The current OpenCart version supports many established payment systems, including PayPal services, Authorize.net, Moneybookers, 2Checkout, and so on, as well as basic payment options such as Cash on Delivery, Bank Transfer, and Check/money order.

Q: In which currency is the total amount calculated?
A: PayPal automatically localizes the total amount according to the currency of the PayPal account owner.

Q: What's the difference between PayPal Website Payment Standard and PayPal Website Payment Pro?
A: PayPal Website Payment Standard is the easiest way to accept credit card payments on an online store. There are no monthly fees or setup costs charged by PayPal. PayPal Website Payment Pro is PayPal's paid solution, acting as both payment gateway and merchant account. The biggest difference from Website Payment Standard is that customers do not leave the website for credit card processing; the credit card information is processed entirely within the online store, which is the approach used by most established e-commerce websites.

Q: Which of the two PayPal products is recommended?
A: For a beginner OpenCart administrator who wants to use PayPal for the online store, it is recommended to get experience with the free Standard payment option and then upgrade to the Pro option.
Packt
20 Oct 2009
7 min read

Working with Drupal Audio in Flash (part 1)

Within the past five years, there has been a major change in the type of content found on the World Wide Web. In just a few short years, content has evolved from being primarily text and images into a multimedia experience! Drupal contributors have put much effort into making this integration with multimedia as easy as possible. However, one issue still remains: in order to present multimedia to your users, you cannot rely on Drupal alone. You must have another application layer to present that media. This is most typically a Flash application that allows the user to listen to or watch that media from within their web browser. This article explores how to use Drupal to manage a list of audio nodes, and also builds a Flash application to play that music.

When it comes to multimedia, Flash is the portal of choice for playing audio on a website. Integrating audio in Drupal is surprisingly easy, thanks to the contributed Audio module. This module allows you to upload audio tracks to your Drupal website (typically in MP3 format) by creating an Audio node. It also comes with a very basic audio player that will play those audio tracks in the node that was created.

To start, let's download and enable the Audio module along with the Token, Views, and getID3 modules, which are required for the Audio module. The modules that you will need to download and install are as follows:

Audio—http://www.drupal.org/project/audio
Views—http://www.drupal.org/project/views
Token—http://www.drupal.org/project/token
getID3—http://www.drupal.org/project/getid3

At the time of writing this article, the Audio module was still considered "unstable". Because of this, I would recommend downloading the development version until a stable release has been made. It is also recommended to use development or "unstable" versions for testing purposes only.
Once we have downloaded these modules and placed them in our site's modules folder, we can enable the Audio module by first navigating to the Administer | Modules section, and then enabling the checkboxes in the Audio group as follows: After you have enabled these modules, you will probably notice an error at the top of the Administrator section that says the following: This error is shown because we have not yet installed the necessary PHP library to extract the ID3 information from our audio files. The ID3 information is the track information that is embedded within each audio file, and can save us a lot of time from having to manually provide that information when attaching each audio file to our Audio nodes. So, our next step will be to install the getID3 library so that we can utilize this great feature. Installing the getID3 library The getID3 library is a very useful PHP library that will automatically extract audio information (called ID3) from any given audio track. We can install this useful utility by going to http://sourceforge.net/project/showfiles.php?group_id=55859, which is the getID3 library URL at SourceForge.net. Once we have done this, we should see the following: We can download this library by clicking on the Download link on the first row, which is the main release. This will then take us to a new page, where we can download the ZIP package for the latest release. We can download this package by clicking on the latest ZIP link, which at the time of writing this article was getid3-1.7.9.zip Once this package has finished downloading, we then need to make sure that we place the extracted library on the server where the getID3 module can use it. The default location for the getID3 module, for this library, is within our site's modules/getid3 directory. Within this directory, we will need to create another directory called getid3, and then place the getid3 directory from the downloaded package into this directory. 
To verify that we have installed the library correctly, we should have the getid3.php at the following location: Our next task is to remove the demos folder from within the getid3 library, so that we do not present any unnecessary security holes in our system. Once this library is in the correct spot, and the demos folder has been removed, we can refresh our Drupal Administrator section and see that the error has disappeared. If it hasn't, then verify that your getID3 library is in the correct location and try again. Now that we have the getID3 library installed, we are ready to set up the Audio content type. Setting up the Audio content type When we installed the Audio module, it automatically created an Audio content type that we can now use to add audio to our Drupal web site. But before we add any audio to our web site, let's take a few minutes to set up the Audio content type to the way we want it. We will do so by navigating to Administer | Content Types, and then clicking on the edit link, next to the Audio content type. Our goal here is to set up the Audio content type so that the default fields make sense to the Audio content type. Drupal adds the Body field to all new content types, which doesn't make much sense when creating an Audio content. We can easily change this by simply expanding the Submission form settings. We can then replace the Body label with Description, since it is easily understood when adding new Audio tracks to our system. We will save this content type by clicking on the Save content type button at the bottom of the page. Now, we are ready to start adding audio content to our Drupal web site. Creating an Audio node We will add audio content by going to Create Content, and then clicking on Audio, where we should then see the following on the page: You will probably notice that the Title of this form has already been filled out with some strange looking text (as shown in the previous screenshot). 
This text is a series of tags, which are used to represent track information that is extracted using the getID3 module that we installed earlier. Once this ID3 information is extracted, these tags will be replaced with the Title and Artist of that track, and then combined to form the title of this node. This will save a lot of time because we do not have to manually provide this information when submitting a new audio track to our site. We can now upload any audio track by clicking on the Browse button next to the Add a new audio file field. After it adds the file to the field, we can submit this audio track to Drupal by clicking on the Save button at the bottom of the page, which will then show you something like the following screenshot: After this node has been added, you will notice that there is a player already provided to play the audio track. Although this player is really cool, there are some key differences between the player provided by the Audio module and the player that we will create later in this article. How our player will be different (and better) The main difference between the player that is provided by the Audio module and the player that we are getting ready to build is how it determines which file to play. In the default player, it uses flash variables passed to the player to determine which file to play. This type of player-web site interaction places the burden on Drupal to provide the file that needs to be played. In a way, the default player is passive, where it does nothing unless someone tells it to do something. The player that we will be building is different because instead of Drupal telling our player what to play, we will take an active approach and query Drupal for the file we wish to play. This has several benefits, such as that the file path does not have to be exposed to the public in order for it to be played. So, let's create our custom player!
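As an aside on the ID3 extraction used above: getID3 is a PHP library, but the idea of track information embedded in the file itself is easy to demonstrate. The sketch below (Python, operating on a synthetic file) parses the simple fixed-layout ID3v1 tag that sits in the last 128 bytes of an MP3. Real-world tags (ID3v2) are considerably more complex, which is exactly what the getID3 library handles for us:

```python
def parse_id3v1(data):
    # An ID3v1 tag occupies the final 128 bytes of the file:
    # "TAG", then title (30 bytes), artist (30), album (30), year (4), ...
    tag = data[-128:]
    if tag[:3] != b"TAG":
        return None  # no ID3v1 tag present

    def field(start, length):
        # Fields are null-padded fixed-width strings.
        raw = tag[start:start + length].split(b"\x00")[0]
        return raw.decode("latin-1").strip()

    return {"title": field(3, 30), "artist": field(33, 30),
            "album": field(63, 30), "year": field(93, 4)}

# Build a fake MP3 ending in a valid ID3v1 tag to demonstrate.
fake_mp3 = (b"\xffaudio-data" +
            b"TAG" +
            b"My Song".ljust(30, b"\x00") +
            b"An Artist".ljust(30, b"\x00") +
            b"An Album".ljust(30, b"\x00") +
            b"2009" +
            b"\x00" * 31)
print(parse_id3v1(fake_mp3))
```

This is the information the Audio module uses to fill in the [audio:title] and [audio:artist] style tokens in the node title, saving us from typing the track details by hand.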

Packt
24 Oct 2009
6 min read

Miro: An Interview with Nicholas Reville

Kushal Sharma: What is the vision behind Miro? Nicholas Reville: There's an opportunity to build a new, open mass medium of online television. We're developing the Miro Internet TV platform so that watching Internet video channels will be as easy as watching TV and broadcasting a channel will be open to everyone. Unlike traditional TV, everyone will have a voice. KS: Does PCF finance the entire project or do you have any other contributors? NR: We have tons of help from volunteers – translating the software, coding, testing, and providing user support. We would not be able to do nearly enough without our community. KS: Are the developers full-time PCF employees or is it similar to other Open Source projects where people voluntarily contribute to the community in their spare-time? NR: We have 6 full-time developers and also volunteers. We're a hybrid model, like Mozilla. KS: Please highlight the most crucial features of Miro, and the idea behind having those features as part of this application. NR: The most crucial feature of Miro is the ability to download and play video RSS feeds. It's a truly open way to distribute video online. Using RSS means that creators can publish with any company they want and users can bring together video from multiple sources into one application. KS: How many languages is Miro translated into? NR: Miro is at least partially translated into more than 40 languages, but we always need helping revising and improving the translations. KS: How is Miro different from other players from the technological perspective? NR: Above all, Miro is open-source. That means that anyone can work on the code, improve, and customize it. Beyond this, Miro is unique in a number of ways. It runs on Mac, Windows, and Linux. It can play almost any video format. And it has more HD content available than any other Internet TV application. KS: Are the Torrent download capabilities as well developed as any other standalone Bit Torrent Client? 
Please tell us something more about its download capabilities (the share ratio, configurable upload limits while downloading torrents, and so on) compared to other software? NR: Our current version of Miro keeps the BitTorrent capabilities very simple and doesn't provide many options. We think that most users don't even notice when they are downloading a torrent feed; however, we want to expand our options and features for power users in future versions – expect much more BitTorrent control in 3 months or so. KS: What type of support do you have for Miro? NR: Most of our support is provided user-to-user in the discussion forums – they are very useful, actually. In addition, because we are an open project, anyone can file a bug or even fix a bug. In the long term this means we will have a more stable user experience than closed competitors. KS: With a host of TV channels and video content going online, viewers could really use a common solution for all their media needs rather than keeping tabs on multiple sources. How do you see Miro being instrumental in providing this? NR: We think that an open platform for video is the only way to create a unified experience for video. Miro is the only really open Internet video solution right now and we think we have a crucial role to play in unifying Internet TV in an open way. KS: Does Miro download Internet TV programs and store them on the hard disk, or does it stream live media like any other online service? NR: For now, Miro downloads everything that's watched. This is useful in two key ways: first, you can watch HD video with no skipping or buffering delays. Second, you can move those files onto any external device, with no DRM or other restrictions. In the future we may add a streaming option for some special cases, or the ability to start watching while the download is in progress. KS: Subscription-free content is a great gift that viewers get with Internet TV, however, bandwidth issues could be a concern for some users. 
What minimum bandwidth requirements would you suggest for satisfactory Internet TV performance on Miro? Furthermore, does Miro have an alternate solution for users having low Internet bandwidth? NR: Miro actually works better than most Internet video solutions for low bandwidth users. On a dial-up connection, streaming video solutions are unusable. Miro will download videos in those circumstances – it may happen slowly, but once you have the video you can watch it full speed with no skipping. KS: I remember a statement on the PCF site saying “Miro is designed to eliminate gatekeepers”. Could you please elaborate on this? NR: Miro works with open standards and is designed to be decentralized, like the web. That means that users can use Miro to connect directly to any video publisher – they don't need our permission and the files don't travel through our servers. Companies like Joost design their software to be highly centralized so that they can control both users and creators. It's time to leave that model behind. KS: So far, what has the response been like? NR: Response has been great. Last month alone, we had more than 200,000 downloads and we've been growing with each release. I expect that when we release version 1.0 in November we'll see even faster growth. KS: What are your future plans for Miro in terms of releases, updates, functionality, etc.? NR: After version 1.0 is released, we'll be making major changes to Miro to improve performance and add an extension system that will give people new ways of customizing the software to fit their needs. KS: To me, Miro’s agenda is a lot more than simply creating a good player. You’re attempting to change the face of Internet Video and the way it’s being hosted right now. How would you describe the future of Miro and Internet Video to our readers? NR: We want Miro to push online video in an open direction. 
We're hoping to build the best video experience possible, something that can be a true substitute for traditional television. But that doesn't mean we want to control the future of online video – we want other people to build open video distribution platforms as well. Openness is vital to the future of our mass media. KS: What other projects is the PCF involved in? NR: Right now, PCF is exclusively focused on making video more open and Miro is at the center of that. KS: Thank you for your time Nicholas, and good luck developing Miro!