
How-To Tutorials - Web Development

Playing with Max 6 Framework

Packt
06 Sep 2013
17 min read
Communicating easily with Max 6 – the [serial] object

The easiest way to exchange data between your computer running a Max 6 patch and your Arduino board is via the serial port. The USB connector of our Arduino boards includes the FTDI FT232 integrated circuit, which converts the plain old RS-232 serial standard to USB. We are again going to use our basic USB connection between Arduino and the computer to exchange data here.

The [serial] object

We have to remember the [serial] object's features. It provides a way to send and receive data from a serial port. To do this, there is a basic patch including basic blocks. We are going to improve it progressively throughout this article.

The [serial] object is like a buffer that we have to poll as often as we need. If messages are sent from Arduino to the serial port of the computer, we have to ask the [serial] object to pop them out. We are going to do this in the following pages. This article is also a pretext for me to give you some of my tips and tricks in Max 6 itself. Take them and use them; they will make your patching life easier.

Selecting the right serial port

Earlier, we used the (print) message sent to [serial] in order to list all the serial ports available on the computer, and then checked the Max window. That was not the smartest solution. Here, we are going to design a better one.

We have to remember the [loadbang] object. It fires a bang (in this patch, the (print) message) to the following object as soon as the patch is loaded. It is useful for setting things up and initializing values, much as we could do inside the setup() block in our Arduino board's firmware. Here, we do that in order to fill the serial port selector menu. When the [serial] object receives the (print) message, it pops out a list of all the serial ports available on the computer from its right outlet, prepended by the word port. We then process the result by using [route port], which only parses lists prepended with the word port.

The [t] object is an abbreviation of [trigger]. This object sends the incoming message to many locations, as described in the documentation, if you assume the use of the following arguments:

- b means bang
- f means float number
- i means integer
- s means symbol
- l means list (that is, at least one element)

We can also use constants as arguments; as soon as the input is received, the constant will be sent as it is. Lastly, [trigger] outputs its messages in a particular order: from the rightmost outlet to the leftmost one.

So here we take the list of serial ports received from the [route] object; we send the clear message to the [umenu] object (the list menu on the left side) in order to clear the whole list. Then the list of serial ports is sent as a list (because of the first argument) to [iter]. [iter] splits a list into its individual elements. [prepend] adds a message in front of the incoming input message. That means the global process sends messages to the [umenu] object similar to the following:

append xxxxxx
append yyyyyy

Here xxxxxx and yyyyyy are the serial ports that are available. This creates the serial port selector menu by filling the list with the names of the serial ports. This is one of the typical ways to create helpers, in this case the menu, in our patches using UI elements. As soon as you load this patch, the menu is filled, and you only have to choose the serial port you want to use.
As soon as you select one element in the menu, the number of that element in the list is fired to its leftmost outlet. We prepend this number with port and send that to [serial], setting it up to use the right serial port.

Polling system

One of the most used objects in Max 6 to send regular bangs in order to trigger things or count time is [metro]. We have to use at least one argument; this is the time between two bangs in milliseconds. Banging the [serial] object makes it pop out the values contained in its buffer. If we want to send data continuously from Arduino and process it with Max 6, activating the [metro] object is required. We then send a regular bang and can have an update of all the inputs read by Arduino inside our Max 6 patch. Choosing a value between 15 ms and 150 ms is good, but it depends on your own needs. Let's now see how we can read, parse, and select useful data being received from Arduino.

Parsing and selecting data coming from Arduino

First, I want to introduce you to a helper firmware inspired by the Arduino2Max page on the Arduino website, but updated and optimized a bit by me. It provides a way to read all the inputs on your Arduino, to pack all the data read, and to send it to our Max 6 patch through the [serial] object.

The readAll firmware

The following code is the firmware.

int val = 0;

void setup() {
  Serial.begin(9600);
  pinMode(13, INPUT);
}

void loop() {
  // Check serial buffer for characters incoming
  if (Serial.available() > 0) {
    // If an 'r' is received then read all the pins
    if (Serial.read() == 'r') {
      // Read and send analog pins 0-5 values
      for (int pin = 0; pin <= 5; pin++) {
        val = analogRead(pin);
        sendValue(val);
      }
      // Read and send digital pins 2-13 values
      for (int pin = 2; pin <= 13; pin++) {
        val = digitalRead(pin);
        sendValue(val);
      }
      Serial.println(); // Carriage return to mark end of data flow.
      delay(5);         // prevent buffer overload
    }
  }
}

void sendValue(int val) {
  Serial.print(val);
  Serial.write(32); // add a space character after each value sent
}

For starters, we begin the serial communication at 9600 baud in the setup() block. As usual with serial communication handling, we first check whether there is something in Arduino's serial buffer by using the Serial.available() function. If something is available, we check if it is the character r. Of course, we can use any other character; r here stands for read, which is basic. If an r is received, it triggers the read of both the analog and digital ports. Each value (the val variable) is passed to the sendValue() function; this basically prints the value to the serial port and adds a space character in order to format things a bit and provide easier parsing by Max 6.

We could easily adapt this code to only read some inputs and not all. We could also remove the sendValue() function and find another way of packing data. At the end, we push a carriage return to the serial port by using Serial.println(). This creates a separator between each pack of data that is sent. Now, let's improve our Max 6 patch to handle this pack of data being received from Arduino.

The ReadAll Max 6 patch

The following screenshot is the ReadAll Max patch that provides a way to communicate with our Arduino:

Requesting data from Arduino

First, we will see a [t b b] object. It is also a trigger, ordering the bangs provided by the [metro] object. Each bang received triggers another bang to another [trigger] object, then another one to the [serial] object itself. The [t 13 r] object can seem tricky.
It just triggers a character r and then the integer 13. The character r is sent to [spell], which converts it to ASCII code and then sends the result to [serial]. 13 is the ASCII code for a carriage return. This structure provides a way to fire the character r to the [serial] object, which means to Arduino, each time the metro bangs. As we already saw in the firmware, it triggers Arduino to read all its inputs, pack the data, and send the pack to the serial port for the Max 6 patch. To summarize what the metro triggers at each bang, we can write this sequence:

1. Send the character r to Arduino.
2. Send a carriage return to Arduino.
3. Bang the [serial] object.

This sequence triggers Arduino to send back all its data to the Max patch.

Parsing the received data

Under the [serial] object, we can see a new structure beginning with the [sel 10 13] object. This is an abbreviation for the [select] object. This object selects an incoming message and fires a bang to the specific output if the message equals the argument corresponding to the specific place of that output. Basically, here we select 10 or 13. The last output pops the incoming message out if it doesn't equal any argument.

Here, we don't want to consider a line feed (ASCII code 10). This is why we put it as an argument, but we don't do anything if that's the one that has been selected. It is a nice trick to avoid having this message trigger anything, and even to keep it from coming out of the rightmost outlet of [select]. Here, we send all the messages received from Arduino, except 10 or 13, to the [zl group 78] object. The latter is a powerful list-processing object with many features. The group argument makes it easy to group the messages received into a list. The last argument is there to make sure we don't have too many elements in the list. As soon as [zl group] is triggered by a bang, or the list length reaches the length argument value, it pops out the whole list from its left outlet. Here, we "accumulate" all the messages received from Arduino, and as soon as a carriage return is sent (remember we are doing that in the last rows of the loop() block in the firmware), a bang is sent and all the data is passed to the next object.

We currently have a big list with all the data inside it, with each value separated from the next by a space character (the famous ASCII code 32 we added in the last function of the firmware). This list is passed to the [itoa] object. itoa stands for integer to ASCII. This object converts integers to ASCII characters. The [fromsymbol] object converts a symbol to a list of messages. Finally, after this [fromsymbol] object we have our big list of values separated by spaces and totally readable.

We then have to unpack the list. [unpack] is a very useful object that provides a way to cut a list of messages into individual messages. We can notice here that we implemented exactly the opposite process in the Arduino firmware when we packed each value into a big message. [unpack] takes as many arguments as we want. It requires knowing the exact number of elements in the list sent to it. Here we send 12 values from Arduino, so we put 12 i arguments. i stands for integer. If we send a float, [unpack] would cast it as an integer. It is important to know this; too many students get stuck troubleshooting this in particular. We are only playing with integers here. Indeed, the ADC of Arduino provides data from 0 to 1023, and the digital inputs provide 0 or 1 only.
We attached a number box to each output of the [unpack] object in order to display each value. Then we used a [change] object. The latter is a nice object: when it receives a value, it passes it to its output only if it is different from the previous value received. It provides an effective way to avoid sending the same value each time when it isn't required. Here, I chose the argument -1 because this is not a value sent by the Arduino firmware, and I'm sure that the first element sent will be parsed.

So we now have all our values available. We can use them for different jobs. But I propose to use a smarter way, and this will also introduce a new concept.

Distributing received data and other tricks

Let's introduce here some other tricks to improve our patching style.

Cordless trick

We often have to use some data in our patches, and the same data has to feed more than one object. A good way to avoid messy patches with a lot of cords and wires everywhere is to use the [send] and [receive] objects. These objects can be abbreviated as [s] and [r]; they generate communication buses and provide a wireless way to communicate inside our patches. These three structures are equivalent. The first one is a basic cord: as soon as we send data from the upper number box, it is transmitted to the one at the other side of the cord. The second one generates a data bus named busA: as soon as you send data into [send busA], each [receive busA] object in your patch will pop out that data. The third example is the same as the second one, but it generates another bus named busB.

This is a good way to distribute data. I often use this for my master clock, for instance. I have one and only one master clock banging a clock to [send masterClock], and wherever I need that clock, I use [receive masterClock] and it provides me with the data I need.

If you check the global patch, you can see that we distribute data to the structures at the bottom of the patch. But these structures could also be located elsewhere. Indeed, one of the strengths of any visual programming framework such as Max 6 is the fact that you can visually organize every part of your code exactly as you want in your patcher. And please, do that as much as you can. This will help you to support and maintain your patch all through your long development months.

Check the previous screenshot. I could have linked the [r A1] object at the top left corner to the [p process03] object directly, but it is more readable if I keep the process chains separate. I often work this way with Max 6. This is one of the multiple tricks I teach in my Max 6 course. And of course, I introduced the [p] object, which is the [patcher] abbreviation. Let's check a couple of tips before we continue with some good examples involving Max 6 and Arduino.

Encapsulation and subpatching

When you open Max 6 and go to File | New Patcher, it opens a blank patcher. The latter, if you recall, is the place where you put all the objects. There is another good feature named subpatching. With this feature, you can create new patchers inside patchers, and embed patchers inside patchers as well. A patcher contained inside another one is also named a subpatcher.

Let's see how it works with the patch named ReadAllCutest.maxpat. There are four new objects replacing the whole structures we designed before. These objects are subpatchers.
If you double-click on them in patch lock mode, or hold the command key (Ctrl on Windows) and double-click on them in patch edit mode, you'll open them. Let's see what is inside them.

The [requester] subpatcher contains the same architecture that we designed before, but you can see the brown 1 and 2 objects and another blue 1 object. These are inlets and outlets. Indeed, they are required if you want your subpatcher to be able to communicate with the patcher that contains it. Of course, we could use the [send] and [receive] objects for this purpose too.

The position of these inlets and outlets in your subpatcher matters. Indeed, if you move the 1 object to the right of the 2 object, the numbers get swapped! And the different inlets in the upper patch get swapped too. You have to be careful about that. But again, you can organize them exactly as you want and need. Check the next screenshot, and then check the root patcher containing this subpatcher. It automatically inverts the inlets, keeping things consistent.

Let's now have a look at the other subpatchers:

- The [p portHandler] subpatcher
- The [p dataHandler] subpatcher
- The [p dataDispatcher] subpatcher

In the last figure, we can see only one inlet and no outlets. Indeed, we just encapsulated the global data dispatcher system inside the subpatcher, and the latter generates its data buses with [send] objects. This is an example where we don't need and even don't want to use outlets. Using outlets would be messy, because we would have to link each element requesting this or that value from Arduino with a lot of cords.

In order to create a subpatcher, you only have to type n to create a new object, and then type p, a space, and the name of your subpatcher. While I designed these examples, I used something that works faster than creating a subpatcher, copying and pasting the structure on the inside, removing the structure from the outside, and adding inlets and outlets. This feature is named encapsulate and is part of the Edit menu of Max 6. You have to select the part of the patch you want to encapsulate inside a subpatcher, then click on Encapsulate, and voilà! You have just created a subpatcher including your structures, connected to inlets and outlets in the correct order.

Encapsulate and de-encapsulate features

You can also de-encapsulate a subpatcher. It follows the opposite process: removing the subpatcher and popping the whole structure that was inside directly out into the parent patcher. Subpatching helps to keep things well organized and readable. We can imagine that we have to design a whole patch with a lot of wizardry and tricks inside it. This one is a processing unit, and as soon as we know what it does, after having finished it, we don't want to know how it does it but only to use it. This provides a nice abstraction level by keeping some processing units closed inside boxes and not messing up the main patch.

You can copy and paste the subpatchers. This is a powerful way to quickly duplicate processing units if you need to. But each subpatcher is totally independent of the others. This means that if you need to modify one because you want to update it, you'd have to do that individually in each subpatcher of your patch. This can be really hard. Let me introduce you to the last pure Max 6 concept, named abstractions, before I go further with Arduino.

Abstractions and reusability

Any patch created and saved can be used as a new object in another patch.
We can do this by creating a new object by typing n in a patcher; then we just have to type the name of our previously created and saved patch. A patch used in this way is called an abstraction. In order to call a patch as an abstraction in a patcher, the patch has to be in the Max 6 path so that it can be found. You can check the path known by Max 6 by going to Options | File Preferences. Usually, if you put the main patch in a folder and the other patches you want to use as abstractions in that same folder, Max 6 finds them.

The concept of abstraction in Max 6 itself is very powerful because it provides reusability. Indeed, imagine you need and have a lot of small (or big) patch structures that you use every day, every time, and in almost every project. You can put them into a specific folder on your disk included in your Max 6 path, and then you can call (we say instantiate) them in every patch you are designing. Since each patch keeps only a reference to the one abstraction that was instantiated, you just need to improve the abstraction itself; each time you load a patch that uses it, the patch will have the up-to-date abstraction loaded inside it. It is really easy to maintain all through the development months or years. Of course, if you totally change the abstraction to fit a dedicated project/patch, you'll have some problems using it with other patches. You have to be careful to maintain at least short documentation of your abstractions.

Let's now continue by describing some good examples with Arduino.

Aperture in Action

Packt
06 Sep 2013
14 min read
Controlling clipped highlights

Clipped highlights are a very common issue that a photographer will often have to deal with. Digital cameras only have limited dynamic range, so clipping becomes an issue, especially with high-contrast scenes. However, if you shoot RAW, your camera will often record more highlight information than is visible in the image. You may already be familiar with recovering highlights by using the recovery slider in Aperture, but there are actually a couple of other ways that you can bring this information back into range. The three main methods of controlling lost highlights in Aperture are:

- Using the recovery slider
- Using curves
- Using shadows and highlights

For many cases, using the recovery slider will be good enough, but the recovery slider has its limitations. Sometimes it still leaves your highlights looking too bright, or it doesn't give you the look you wish to achieve. The other two methods give you more control over the process of recovery. If you use a Curves adjustment, you can control the way the highlight rolls off, and you can reduce the artificial look that clipped highlights can give your image, even if technically the highlight is still clipped. A Highlights & Shadows adjustment is also useful because it produces a different look compared to the recovery slider. It works in a slightly different way, and includes more of the brighter tones of your image when making its calculations. The Highlights & Shadows adjustment has the added advantage of being able to be brushed in.

So, how do you know which one to use? Consider taking a three-step approach. If the first step doesn't work, move on to the second, and so on. Eventually, it will become second nature, and you'll know which way will be best just by looking at the photograph.

Step 1

Use the recovery slider. Drag the slider up until any clipped areas of the image start to reappear. Only drag the slider until the clipped areas have been recovered, and then stop. You may find that if your highlights are completely clipped, you need to drag the slider all the way to the right, as per the following screenshot:

For most clipped highlight issues, this will probably be enough. If you want to see what's going on, add a Curves adjustment and set the Range field to the Extended range. You don't have to make any adjustments at this point, but the histogram in the Curves adjustment will now show you how much image data is being clipped, and how much data you can actually recover.

Real world example

In the following screenshot, the highlights on the right-hand edge of the plant pot have been completely blown out:

If we zoom in, you will be able to see the problem in more detail. As you can see, all the image information has been lost from the intricate edge of this cast iron plant pot. Luckily, this image had been shot in RAW, and the highlights are easily recovered. In this case, all that was necessary was the use of the recovery slider. It was dragged upward until it reached a value of around 1.1, and this brought most of the detail back into the visible range. As you can see from the preceding image, the detail has been recovered nicely and there are no more clipped highlights. The following screenshot is the finished image after the use of the recovery slider:

Step 2

If the recovery slider brought the highlights back into range, but they are still too bright, then try the Highlights & Shadows adjustment.
This will allow you to bring the highlights down even further. If you find that it is affecting the rest of your image, you can use brushes to limit the highlight adjustment to just the area you want to recover. You may find that with the Highlights & Shadows adjustment, if you drag the sliders too far, the image will start to look flat and washed out. In this case, using the mid-contrast slider can add some contrast back into the image. You should use the mid-contrast slider carefully though, as too much can create an unnatural image with too much contrast.

Step 3

If the previous steps haven't addressed the problem to your satisfaction, or if the highlight areas are still clipped, you can add a roll off to your Curves adjustment. The following is a quick refresher on what to do:

1. Add a Curves adjustment, if you haven't already added one.
2. From the pop-up range menu at the bottom of the Curves adjustment, set the range to Extended.
3. Drag the white point of the Curves slider till it encompasses all the image information.
4. Create a roll off on the right-hand side of the curve, so it looks something like the following screenshot:

If you're comfortable with curves, you can skip directly to step 3 and just use a Curves adjustment, but for better results, you should combine the preceding methods to best suit your image.

Real world example

In the following screenshot (of yours truly), the photo was taken under poor lighting conditions, and there is a badly blown out highlight on the forehead:

Before we fix the highlights, however, the first thing that we need to do is fix the overall white balance, which is quite poor. In this case, the easiest way to fix this problem is to use Aperture's clever skin tone white-balance adjustment. On the White Balance adjustment brick, set the mode to Skin Tone from the pop-up menu. Now, select the color picker and pick an area of skin tone in the image. This will set the white balance to a more acceptable color. (You can tweak it more if it's not right, but this usually gives satisfactory results.)

The next step is to try and fix the clipped highlight. Let's use the three-step approach that we discussed earlier. We will start by using the recovery slider. In this case, the slider was brought all the way up, but the result wasn't enough and leaves an unsightly highlight, as you can see in the following screenshot:

The next step is to try the Highlights & Shadows adjustment. The highlights slider was brought up to the mid-point, and while this helped, it still didn't fix the overall problem. The highlights are still quite ugly, as you can see in the following screenshot:

Finally, a Curves adjustment was added and a gentle roll off was applied to the highlight portion of the curve. While the burned out highlight isn't completely gone, there is no longer a harsh edge to it. The result is a much better image than the original, with a more natural-looking highlight, as shown in the following screenshot:

Finishing touches

To take this image further, the face was brightened using another Curves adjustment, and the curve was brushed in over the facial area. A vignette was also added. Finally, a skin softening brush was used over the harsh shadow on the nose, and over the edges of the halo on the forehead, just to soften it even further. The result is a much better (and now useable) image than the one we started with.

Fixing blown out skies

Another common problem one often encounters with digital images is blown out skies.
Sometimes it is a result of the image being clipped beyond the dynamic range of the camera, whereas other times the day may simply have been overcast and there is no detail there to begin with. There are situations when the sky is too bright and you just need to bring the brightness down to better match the rest of the scene; that is easily fixed. But what if there is no detail there to recover in the first place? That scenario is what we are going to look at next: what to do when the sky is completely gone and there's nothing left to recover.

There are options open to you in this case. The first is pretty obvious: leave it as it is. However, you might have an image that is nicely lit otherwise, and all that's ruining it is a flat, washed-out sky. What would add a nice balance to an image in such a scenario is some subtle blue in the sky, even if it's just a small amount. Luckily, this is fairly easy to achieve in Aperture. Perform the following steps:

1. Try the steps outlined in the previous section to bring clipped highlights back into range. Sometimes simply using the recovery slider will bring clipped skies back into the visible range, depending on the capabilities of your camera. In order for the rest of this trick to work, your highlights must be in the visible range.
2. If you have already made any enhancements using the Enhance brick and you want to preserve those, add another Enhance brick by choosing Add New Enhance adjustment from the cog pop-up on the side of the interface.
3. If the Tint controls aren't visible on the Enhance brick, click on the little arrow beside the word Tint to reveal the Tint controls.
4. Using the right-hand Tint control (the one with the White eyedropper under it), adjust the control until it adds some blue back to the sky.
5. If this is adding too much blue to other areas of your image, then brush the enhance adjustment in by choosing Brush Enhance In from the cog pop-up menu.

Real world example

In this example, the sky has been completely blown out and has lost most of its color detail. The first thing to try is to see whether any detail can be recovered by using the recovery slider. In this case, some of the sky was recovered, but a lot of it was still burned out. There is simply no more information to recover. The next step is to use the tint adjustment as outlined in the instructions. This puts some color back in the sky and it looks more natural. A small adjustment of the Highlights & Shadows also helps bring the sky back into range.

Finishing touches

While the sky has now been recovered, there is still a bit of work to be done. To brighten up the rest of the image, a Curves adjustment was added, and the upper part of the curve was brought up, while the shadows were brought down to add some contrast. The following is the Curves adjustment that was used:

Finally, to reduce the large lens flare in the center of the image, I added a color adjustment and reduced the saturation and brightness of the various colors in the flare. I then painted the color adjustment in over the flare, and this reduced its impact on the image. This is the same technique that can be used for getting rid of color fringing, which will be discussed later in this article. The following screenshot is the final result:

Removing objects from a scene

One of the myths about photo workflow applications such as Aperture is that they're not good for pixel-level manipulations.
People will generally switch over to something such as Photoshop if they need to do more complex operations, such as cloning out an object. However, Aperture's retouch tool is surprisingly powerful. If you need to remove small distracting objects from a scene, it works really well. The following is an example of a shot that was entirely corrected in Aperture:

It is not really practical to give step-by-step instructions for using the tool because every situation is different, so instead, what follows is a series of tips on how best to use the retouch function:

- To remove complex objects, you will have to switch back and forth between the cloning and healing modes. Don't expect to do everything entirely in one mode or the other.
- To remove long lines, such as the telegraph wires in the preceding example, start with the healing tool. Use it until you get close to the edge of an object in the scene you want to keep, then switch to the cloning tool to fix the areas close to the kept object.
- The healing tool can go a bit haywire near the edges of the frame, or the edges of another object, so it's often best to use the clone tool near the edges.
- Remember when using the clone tool that you need to keep changing your clone source so as to avoid leaving repetitive patterns in the cloned area. To change your source area, hold down the option key and click on the image in the area that you want to clone from.
- Sometimes doing a few smaller strokes works better than one long, big stroke.
- You can only have one retouch adjustment, but each stroke is stored separately within it. You can delete individual strokes, but only in the reverse order in which they were created. You can't delete the first stroke and keep the following ones if, for example, you have 10 other strokes.

It is worth taking the time to experiment with the retouch tool. Once you get the hang of this feature, you will save yourself a lot of time by not having to jump to another piece of software to do basic (or even advanced) cloning and healing.

Fixing dust spots on multiple images

A common use for the retouch tool is removing sensor dust spots on an image. If your camera's sensor has become dirty, which is surprisingly common, you may find spots of dust creeping onto your images. These are typically found when shooting at higher f-stops (narrower apertures), such as f/11 or higher, and they manifest as round dark blobs. Dust spots are usually most visible in bright areas of solid color, such as skies. The big problem with dust spots is that once your sensor has dust on it, it will record that dust in the same place in every image. Luckily, Aperture's tools make it pretty easy to remove those dust spots, and once you've removed them from one image, it's pretty simple to remove them from all your images. To remove dust spots on multiple images, perform the following steps:

1. Start by locating the image in your batch where the dust spots are most visible.
2. Zoom in to 1:1 view (100 percent zoom), and press X on your keyboard to activate the retouch tool.
3. Switch the retouch tool to healing mode and decrease the size of your brush till it is just bigger than the dust spot. Make sure there is some softness on the brush.
4. Click once over the spot to get rid of it. You should try to click on it rather than paint when it comes to dust spots, as you want the least amount of area retouched as possible.
5. Scan through your image when viewing at 1:1, and repeat the preceding process until you have removed all the dust spots.
6. Close the retouch tool's HUD to drop the tool.
7. Zoom back out.
8. Select the lift tool from the Aperture interface (it's at the bottom of the main window).
9. In the Lift and Stamp HUD, delete everything except the Retouch adjustment in the Adjustments submenu. To do this, select all the items except the retouch entry, and press the delete (or backspace) key.
10. Select another image or group of images in your batch, and press the Stamp Selected Images button on the Lift and Stamp HUD.

Your retouched settings will be copied to all your images, and because the dust spots don't move between shots, the dust should be removed on all your images.

The Kendo MVVM Framework

Packt
06 Sep 2013
19 min read
Understanding MVVM – basics

MVVM stands for Model (M), View (V), and View-Model (VM). It is part of a family of design patterns related to system architecture that separate responsibilities into distinct units. Some other related patterns are Model-View-Controller (MVC) and Model-View-Presenter (MVP). These differ on what each portion of the framework is responsible for, but they all attempt to manage complexity through the same underlying design principles. Without going into unnecessary details here, suffice it to say that these patterns are good for developing reliable and reusable code, and they are something that you will undoubtedly benefit from if you have implemented them properly. Fortunately, the good JavaScript MVVM frameworks make it easy by wiring up the components for you and letting you focus on the code instead of the "plumbing".

In the MVVM pattern for JavaScript through Kendo UI, you will need to create a definition for the data that you want to display and manipulate (the Model), the HTML markup that structures your overall web page (the View), and the JavaScript code that handles user input, reacts to events, and transforms the static markup into dynamic elements (the View-Model). Another way to put it is that you will have data (Model), presentation (View), and logic (View-Model).

In practice, the Model is the most loosely-defined portion of the MVVM pattern and is not always even present as a unique entity in the implementation. The View-Model can assume the role of both Model and View-Model by directly containing the Model data properties within itself, instead of referencing them as a separate unit. This is acceptable, and is also seen within ASP.NET MVC when a View uses the ViewBag or the ViewData collections instead of referencing a strongly-typed Model class. Don't let it bother you if the Model isn't as well defined as the View-Model and the View. The implementation of any pattern should be filtered down to what actually makes sense for your application.

Simple data binding

As an introductory example, consider that you have a web page that needs to display a table of data, and also provide the users with the ability to interact with that data, by clicking specifically on a single row or element. The data is dynamic, so you do not know beforehand how many records will be displayed. Also, any change should be reflected immediately on the page instead of waiting for a full page refresh from the server. How do you make this happen?

A traditional approach would involve using special server-side controls that can dynamically create tables from a data source and can even wire up some JavaScript interactivity. The problem with this approach is that it usually requires some complicated extra communication between the server and the web browser, either through "view state", hidden fields, or long and ugly query strings. Also, the output from these special controls is rarely easy to customize or manipulate in significant ways, which reduces the options for how your site should look and behave.

Another choice would be to create special JavaScript functions to asynchronously retrieve data from an endpoint, generate HTML markup within a table, and then wire up events for buttons and links. This is a good solution, but requires a lot of coding and complexity, which means that it will likely take longer to debug and refine. It may also be beyond the skill set of a given developer without significant research.
The third option, available through a JavaScript MVVM like Kendo UI, strikes a balance between these two positions by reducing the complexity of the JavaScript but still providing powerful and simple data binding features inside of the page.

Creating the view

Here is a simple HTML page to show how a view basically works:

<!DOCTYPE html>
<html>
<head>
    <title>MVVM Demo 1</title>
    <script src="/Scripts/kendo/jquery.js"></script>
    <script src="/Scripts/kendo/kendo.all.js"></script>
    <link href="/Content/kendo/kendo.common.css" rel="stylesheet" />
    <link href="/Content/kendo/kendo.default.css" rel="stylesheet" />
    <style type="text/css">
        th { width: 135px; }
    </style>
</head>
<body>
    <table>
        <caption>People Data</caption>
        <thead>
            <tr>
                <th>Name</th>
                <th>Hair Color</th>
                <th>Favorite Food</th>
            </tr>
        </thead>
        <tbody data-template="row-template" data-bind="source: people"></tbody>
    </table>
</body>
</html>

Here we have a simple table element with three columns, but instead of the body containing any tr elements, there are some special HTML5 data-* attributes indicating that something special is going on here. These data-* attributes do nothing by themselves, but Kendo UI reads them (as you will see below) and interprets their values in order to link the View with the View-Model. The data-bind attribute indicates to Kendo UI that this element should be bound to a collection of objects called people. The data-template attribute tells Kendo UI that the people objects should be formatted using a Kendo UI template. Here is the code for the template:

<script id="row-template" type="text/x-kendo-template">
    <tr>
        <td data-bind="text: name"></td>
        <td data-bind="text: hairColor"></td>
        <td data-bind="text: favoriteFood"></td>
    </tr>
</script>

This is a simple template that defines a tr structure for each row within the table. The td elements also have a data-bind attribute on them so that Kendo UI knows to insert the value of a certain property as the "text" of the HTML element, which in this case means placing the value in between <td> and </td> as simple text on the page.

Creating the Model and View-Model

In order to wire this up, we need a View-Model that performs the data binding. Here is the View-Model code for this View:

<script type="text/javascript">
    var viewModel = kendo.observable({
        people: [
            { name: "John", hairColor: "Blonde", favoriteFood: "Burger" },
            { name: "Bryan", hairColor: "Brown", favoriteFood: "Steak" },
            { name: "Jennifer", hairColor: "Brown", favoriteFood: "Salad" }
        ]
    });
    kendo.bind($("body"), viewModel);
</script>

A Kendo UI View-Model is declared through a call to kendo.observable(), which creates an observable object that is then used for the data binding within the View. An observable object is a special object that wraps a normal JavaScript variable with events that fire any time the value of that variable changes. These events notify the MVVM framework to update any data bindings that are using that variable's value, so that they can update immediately and reflect the change. These data bindings also work both ways, so that if a field bound to an observable object variable is changed, the variable bound to that field is also changed in real time.

In this case, I created an array called people that contains three objects with properties about some people. This array, then, operates as the Model in this example, since it contains the data and the definition of how the data is structured.
At the end of this code sample, you can see the call to kendo.bind($("body"), viewModel), which is how Kendo UI actually performs its MVVM wiring. I passed a jQuery selector for the body tag to the first parameter since this viewModel object applies to the full body of my HTML page, not just a portion of it. With everything combined, here is the full source for this simplified example:

<!DOCTYPE html>
<html>
<head>
    <title>MVVM Demo 1</title>
    <script src="/Scripts/kendo/jquery.js"></script>
    <script src="/Scripts/kendo/kendo.all.js"></script>
    <link href="/Content/kendo/kendo.common.css" rel="stylesheet" />
    <link href="/Content/kendo/kendo.default.css" rel="stylesheet" />
    <style type="text/css">
        th { width: 135px; }
    </style>
</head>
<body>
    <table>
        <caption>People Data</caption>
        <thead>
            <tr>
                <th>Name</th>
                <th>Hair Color</th>
                <th>Favorite Food</th>
            </tr>
        </thead>
        <tbody data-template="row-template" data-bind="source: people"></tbody>
    </table>
    <script id="row-template" type="text/x-kendo-template">
        <tr>
            <td data-bind="text: name"></td>
            <td data-bind="text: hairColor"></td>
            <td data-bind="text: favoriteFood"></td>
        </tr>
    </script>
    <script type="text/javascript">
        var viewModel = kendo.observable({
            people: [
                { name: "John", hairColor: "Blonde", favoriteFood: "Burger" },
                { name: "Bryan", hairColor: "Brown", favoriteFood: "Steak" },
                { name: "Jennifer", hairColor: "Brown", favoriteFood: "Salad" }
            ]
        });
        kendo.bind($("body"), viewModel);
    </script>
</body>
</html>

Here is a screenshot of the page in action. Note how the data from the JavaScript people array is populated into the table automatically:

Even though this example contains a Model, a View, and a View-Model, all three units appear in the same HTML file. You could separate the JavaScript into other files, of course, but it is also acceptable to keep them together like this. Hopefully you are already seeing what sort of things this MVVM framework can do for you.

Observable data binding

Binding data into your HTML web page (View) using declarative attributes is great, and very useful, but the MVVM framework offers some much more significant functionality that we didn't see in the last example. Instead of simply attaching data to the View and leaving it at that, the MVVM framework maintains a running copy of all of the View-Model's properties, and keeps references to those properties up to date in real time. This is why the View-Model is created with a function called "observable". The properties inside, being observable, report changes back up the chain so that the data-bound fields always reflect the latest data. Let's see some examples.

Adding data dynamically

Building on the example we just saw, add this horizontal rule and form just below the table in the HTML page:

<hr />
<form>
    <header>Add a Person</header>
    <input type="text" name="personName" placeholder="Name" data-bind="value: personName" /><br />
    <input type="text" name="personHairColor" placeholder="Hair Color" data-bind="value: personHairColor" /><br />
    <input type="text" name="personFavFood" placeholder="Favorite Food" data-bind="value: personFavFood" /><br />
    <button type="button" data-bind="click: addPerson">Add</button>
</form>

This adds a form to the page so that a user can enter data for a new person that should appear in the table. Note that we have added some data-bind attributes, but this time we are binding the value of the input fields, not the text.
Note also that we have added a data-bind attribute to the button at the bottom of the form that binds the click event of that button with a function inside our View-Model. By binding the click event to the addPerson JavaScript method, the addPerson method will be fired every time this button is clicked. These bindings keep the value of those input fields linked with the View-Model object at all times. If the value in one of these input fields changes, such as when a user types something in the box, the View-Model object will immediately see that change and update its properties to match; it will also update any areas of the page that are bound to the value of that property so that they match the new data as well.

The binding for the button is special because it allows the View-Model object to attach its own event handler to the click event for this element. Binding an event handler to an event is nothing special by itself, but it is important to do it this way (through the data-bind attribute) so that the specific running View-Model instance inside of the page has attached one of its functions to this event, and the code inside the event handler has access to this specific View-Model's data properties and values. It also allows for a very specific context to be passed to the event that would be very hard to access otherwise.

Here is the code I added to the View-Model just below the people array. The first three properties in this example are what make up the Model. They contain the data that is observed and bound to the rest of the page:

personName: "",       // Model property
personHairColor: "",  // Model property
personFavFood: "",    // Model property
addPerson: function () {
    this.get("people").push({
        name: this.get("personName"),
        hairColor: this.get("personHairColor"),
        favoriteFood: this.get("personFavFood")
    });
    this.set("personName", "");
    this.set("personHairColor", "");
    this.set("personFavFood", "");
}

The first several properties you see are the same properties that we are binding to in the input form above. They start with an empty value because the form should not have any values when the page is first loaded. It is still important to declare these empty properties inside the View-Model so that their values can be tracked when they change.

The function after the data properties, addPerson, is what we have bound to the click event of the button in the input form. Here in this function we are accessing the people array and adding a new record to it based on what the user has supplied in the form fields. Notice that we have to use the this.get() and this.set() functions to access the data inside of our View-Model. This is important because the properties in this View-Model are special observable properties, so accessing their values directly may not give you the results you would expect.

The most significant thing that you should notice about the addPerson function is that it is interacting with the data on the page through the View-Model properties. It is not using jQuery, document.querySelector, or any other DOM interaction to read the value of the elements! Since we declared a data-bind attribute linking the values of the input elements to the properties of our View-Model, we can always get the value from those elements by accessing the View-Model itself. The values are tracked at all times. This allows us to both retrieve and then change those View-Model properties inside the addPerson function, and the HTML page will show the changes right as they happen.
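To see what "observable" buys you in isolation, here is a small standalone sketch (not part of the original example; it assumes only that jQuery and kendo.all.js are loaded, and the property name is hypothetical). It shows that get() and set() go through the observable wrapper, which is what raises the change events the bindings listen for:

<script type="text/javascript">
    // Minimal sketch of observable change tracking (no HTML bindings involved).
    var demo = kendo.observable({
        personName: ""
    });

    // Listen for the same change events that data-bind attributes rely on.
    demo.bind("change", function (e) {
        console.log("'" + e.field + "' changed to: " + demo.get(e.field));
    });

    demo.set("personName", "Ada");        // fires the change event; bound inputs would update
    console.log(demo.get("personName"));  // logs "Ada"
</script>

This is the same mechanism the article's addPerson function depends on; the only difference is that kendo.bind() wires the change handlers up for you through the data-bind attributes.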
By calling this.set() on the properties and changing their values to an empty string, the HTML page will clear the values that the user just typed and added to the table. Once again, we change the View-Model properties without needing access to the HTML ourselves. Here is the complete source of this example:

<!DOCTYPE html>
<html>
<head>
    <title>MVVM Demo 2</title>
    <script src="/Scripts/kendo/jquery.js"></script>
    <script src="/Scripts/kendo/kendo.all.js"></script>
    <link href="/Content/kendo/kendo.common.css" rel="stylesheet" />
    <link href="/Content/kendo/kendo.default.css" rel="stylesheet" />
    <style type="text/css">
        th { width: 135px; }
    </style>
</head>
<body>
    <table>
        <caption>People Data</caption>
        <thead>
            <tr>
                <th>Name</th>
                <th>Hair Color</th>
                <th>Favorite Food</th>
            </tr>
        </thead>
        <tbody data-template="row-template" data-bind="source: people"></tbody>
    </table>
    <hr />
    <form>
        <header>Add a Person</header>
        <input type="text" name="personName" placeholder="Name" data-bind="value: personName" /><br />
        <input type="text" name="personHairColor" placeholder="Hair Color" data-bind="value: personHairColor" /><br />
        <input type="text" name="personFavFood" placeholder="Favorite Food" data-bind="value: personFavFood" /><br />
        <button type="button" data-bind="click: addPerson">Add</button>
    </form>
    <script id="row-template" type="text/x-kendo-template">
        <tr>
            <td data-bind="text: name"></td>
            <td data-bind="text: hairColor"></td>
            <td data-bind="text: favoriteFood"></td>
        </tr>
    </script>
    <script type="text/javascript">
        var viewModel = kendo.observable({
            people: [
                { name: "John", hairColor: "Blonde", favoriteFood: "Burger" },
                { name: "Bryan", hairColor: "Brown", favoriteFood: "Steak" },
                { name: "Jennifer", hairColor: "Brown", favoriteFood: "Salad" }
            ],
            personName: "",
            personHairColor: "",
            personFavFood: "",
            addPerson: function () {
                this.get("people").push({
                    name: this.get("personName"),
                    hairColor: this.get("personHairColor"),
                    favoriteFood: this.get("personFavFood")
                });
                this.set("personName", "");
                this.set("personHairColor", "");
                this.set("personFavFood", "");
            }
        });
        kendo.bind($("body"), viewModel);
    </script>
</body>
</html>

And here is a screenshot of the page in action. You will see that one additional person has been added to the table by filling out the form. Try it out yourself to see the immediate interaction that you get with this code:

Using observable properties in the View

We just saw how simple it is to add new data to observable collections in the View-Model, and how this causes any data-bound elements to immediately show that new data. Let's add some more functionality to illustrate working with individual elements and see how their observable values can update content on the page. To demonstrate this new functionality, I have added some columns to the table:

<table>
    <caption>People Data</caption>
    <thead>
        <tr>
            <th>Name</th>
            <th>Hair Color</th>
            <th>Favorite Food</th>
            <th></th>
            <th>Live Data</th>
        </tr>
    </thead>
    <tbody data-template="row-template" data-bind="source: people"></tbody>
</table>

The first new column has no heading text but will contain a button on the page for each of the table rows. The second new column will be displaying the value of the "live data" in the View-Model for each of the objects displayed in the table.
Here is the updated row template:

<script id="row-template" type="text/x-kendo-template">
    <tr>
        <td><input type="text" data-bind="value: name" /></td>
        <td><input type="text" data-bind="value: hairColor" /></td>
        <td><input type="text" data-bind="value: favoriteFood" /></td>
        <td><button type="button" data-bind="click: deletePerson">Delete</button></td>
        <td><span data-bind="text: name"></span>&nbsp;-&nbsp;
            <span data-bind="text: hairColor"></span>&nbsp;-&nbsp;
            <span data-bind="text: favoriteFood"></span></td>
    </tr>
</script>

Notice that I have replaced all of the simple text data-bind attributes with input elements and value data-bind attributes. I also added a button with a click data-bind attribute, and a column that displays the text of the three properties so that you can see the observable behavior in real time. The View-Model gets a new method for the delete button:

deletePerson: function (e) {
    var person = e.data;
    var people = this.get("people");
    var index = people.indexOf(person);
    people.splice(index, 1);
}

When this function is called through the binding that Kendo UI has created, it passes an event argument, here called e, into the function; this argument contains a data property. The data property is a reference to the model object that was used to render the specific row of data. In this function, I created a person variable as a reference to the person in this row, and a people variable as a reference to the people array; we then use the index of this person to splice it out of the array. When you click on the Delete button, you can observe the table reacting immediately to the change.

Here is the full source code of the updated row template and View-Model:

<script id="row-template" type="text/x-kendo-template">
    <tr>
        <td><input type="text" data-bind="value: name" /></td>
        <td><input type="text" data-bind="value: hairColor" /></td>
        <td><input type="text" data-bind="value: favoriteFood" /></td>
        <td><button type="button" data-bind="click: deletePerson">Delete</button></td>
        <td><span data-bind="text: name"></span>&nbsp;-&nbsp;
            <span data-bind="text: hairColor"></span>&nbsp;-&nbsp;
            <span data-bind="text: favoriteFood"></span></td>
    </tr>
</script>
<script type="text/javascript">
    var viewModel = kendo.observable({
        people: [
            { name: "John", hairColor: "Blonde", favoriteFood: "Burger" },
            { name: "Bryan", hairColor: "Brown", favoriteFood: "Steak" },
            { name: "Jennifer", hairColor: "Brown", favoriteFood: "Salad" }
        ],
        personName: "",
        personHairColor: "",
        personFavFood: "",
        addPerson: function () {
            this.get("people").push({
                name: this.get("personName"),
                hairColor: this.get("personHairColor"),
                favoriteFood: this.get("personFavFood")
            });
            this.set("personName", "");
            this.set("personHairColor", "");
            this.set("personFavFood", "");
        },
        deletePerson: function (e) {
            var person = e.data;
            var people = this.get("people");
            var index = people.indexOf(person);
            people.splice(index, 1);
        }
    });
    kendo.bind($("body"), viewModel);
</script>

Here is a screenshot of the new page:

Click on the Delete button to see an entry disappear. You can also see that I have added a new person to the table, and that I have made changes in the input boxes of the table; those changes immediately show up on the right-hand side. This indicates that the View-Model is keeping track of the live data and updating its bindings accordingly.

Setting up a single-width column system (Simple)

Packt
05 Sep 2013
3 min read
Getting ready

To perform the steps listed in this article, we will need a text editor, a browser, and a copy of the Masonry plugin. Any text editor will do, but my browser of choice is Google Chrome, as the V8 JavaScript engine that ships with it generally performs better and supports CSS3 transitions, and as a result we see smoother animations when resizing the browser window.

We need to make sure we have a copy of the most recent version of Masonry, which was Version 2.1.08 at the time of writing this article. This version is compatible with the most recent version of jQuery, which is Version 1.9.1. A production copy of Masonry can be found on the GitHub repository at the following address: https://github.com/desandro/masonry/blob/master/jquery.masonry.min.js

For jQuery, we will be using a content delivery network (CDN) for ease of development. Open the basic single-column HTML file to follow along. You can download this file from the following location: http://www.packtpub.com/sites/default/files/downloads/1-single-column.zip

How to do it...

1. Set up the styling for the masonry-item class with the proper width, padding, and margins. We want our items to have a total width of 200 pixels, including the padding and margins.

<style>
.masonry-item {
    background: #FFA500;
    float: left;
    margin: 5px;
    padding: 5px;
    width: 180px;
}
</style>

2. Set up the HTML structure on which you are going to use Masonry. At a minimum, we need a tagged Masonry container with the elements inside tagged as Masonry items.

<div id='masonry-container'>
    <div class='masonry-item'>
        Maecenas faucibus mollis interdum.
    </div>
    <div class='masonry-item'>
        Maecenas faucibus mollis interdum. Donec sed odio dui. Nullam quis risus eget urna mollis ornare vel eu leo. Vestibulum id ligula porta felis euismod semper.
    </div>
    <div class='masonry-item'>
        Nullam quis risus eget urna mollis ornare vel eu leo. Cras justo odio, dapibus ac facilisis in, egestas eget quam. Aenean eu leo quam. Pellentesque ornare sem lacinia quam venenatis vestibulum.
    </div>
</div>

3. Not all Masonry options need to be included, but it is recommended (by David DeSandro, the creator of Masonry) to set itemSelector for single-column usage. We will be setting this every time we use Masonry.

<script>
$(function() {
    $('#masonry-container').masonry({
        // options
        itemSelector : '.masonry-item'
    });
});
</script>

How it works...

Using jQuery, we select our Masonry container and use the itemSelector option to select the elements that will be affected by Masonry. The size of the columns is determined by the CSS code. Using the box model, we set our Masonry items to a rendered width of 190 px (180-px wide, with a 5-px padding all around the item). The margin is our gutter between elements, which is also 5-px wide. With this setup, we can confirm that we have built the basic single-column grid system, with each column being 200-px wide. The end result should look like the following screenshot:

Summary

This article showed you how to set up the very basic single-width column system around which Masonry revolves.
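As an optional follow-on (not part of the original recipe), you can state the column width explicitly instead of letting Masonry infer it from the first item. This sketch assumes the Masonry v2.x columnWidth option and the same markup and CSS as above:

<script>
$(function() {
    $('#masonry-container').masonry({
        itemSelector : '.masonry-item',
        // 180px width + 2 x 5px padding + 2 x 5px margin = 200px per column
        columnWidth : 200
    });
});
</script>

Making the column width explicit is useful when the first item's size cannot be relied upon; for the single-width system described here, it should lay out the same way as the CSS-driven default.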

Chef Infrastructure

Packt
05 Sep 2013
10 min read
(For more resources related to this topic, see here.) First, let's talk about the terminology used in the Chef universe. A cookbook is a collection of recipes – codifying the actual resources, which should be installed and configured on your node – and the files and configuration templates needed. Once you've written your cookbooks, you need a way to deploy them to the nodes you want to provision. Chef offers multiple ways for this task. The most widely used way is to use a central Chef Server. You can either run your own or sign up for Opscode's Hosted Chef. The Chef Server is the central registry where each node needs to get registered. The Chef Server distributes the cookbooks to the nodes based on their configuration settings. Knife is Chef's command-line tool used to interact with the Chef Server. You use it for uploading cookbooks and managing other aspects of Chef. On your nodes, you need to install Chef Client – the part that retrieves the cookbooks from the Chef Server and executes them on the node. In this article, we'll see the basic infrastructure components of your Chef setup at work and learn how to use the basic tools. Let's get started with having a look at how to use Git as a version control system for your cookbooks. Using version control Do you manually back up every file before you change it? And do you invent creative filename extensions like _me and _you when you try to collaborate on a file? If you answer yes to any of the preceding questions, it's time to rethink your process. A version control system (VCS) helps you stay sane when dealing with important files and collaborating on them. Using version control is a fundamental part of any infrastructure automation. There are multiple solutions (some free, some paid) for managing source version control including Git, SVN, Mercurial, and Perforce. Due to its popularity among the Chef community, we will be using Git. However, you could easily use any other version control system with Chef. Getting ready You'll need Git installed on your box. Either use your operating system's package manager (such as Apt on Ubuntu or Homebrew on OS X), or simply download the installer from www.git-scm.org. Git is a distributed version control system. This means that you don't necessarily need a central host for storing your repositories. But in practice, using GitHub as your central repository has proven to be very helpful. In this article, I'll assume that you're using GitHub. Therefore, you need to go to github.com and create a (free) account to follow the instructions given in this article. Make sure that you upload your SSH key following the instructions at https://help.github.com/articles/generating-ssh-keys, so that you're able to use the SSH protocol to interact with your GitHub account. As soon as you've created your GitHub account, you should create your repository by visiting https://github.com/new and using chef-repo as the repository name. How to do it... Before you can write any cookbooks, you need to set up your initial Git repository on your development box. Opscode provides an empty Chef repository to get you started. Let's see how you can set up your own Chef repository with Git using Opscode's skeleton. Download Opscode's skeleton Chef repository as a tarball: mma@laptop $ wget http://github.com/opscode/chef-repo/tarball/master...TRUNCATED OUTPUT...2013-07-05 20:54:24 (125 MB/s) - 'master' saved [9302/9302] Extract the downloaded tarball: mma@laptop $ tar xzvf master Rename the directory.
Replace 2c42c6a with whatever your downloaded tarball contained in its name: mma@laptop $ mv opscode-chef-repo-2c42c6a/ chef-repo Change into your newly created Chef repository: mma@laptop $ cd chef-repo/ Initialize a fresh Git repository: mma@laptop:~/chef-repo $ git init .Initialized empty Git repository in /Users/mma/work/chef-repo/.git/ Connect your local repository to your remote repository on github.com. Make sure to replace mmarschall with your own GitHub username: mma@laptop:~/chef-repo $ git remote add origin git@github.com:mmarschall/chef-repo.git Add and commit Opscode's default directory structure: mma@laptop:~/chef-repo $ git add .mma@laptop:~/chef-repo $ git commit -m "initial commit"[master (root-commit) 6148b20] initial commit10 files changed, 339 insertions(+), 0 deletions(-)create mode 100644 .gitignore...TRUNCATED OUTPUT...create mode 100644 roles/README.md Push your initialized repository to GitHub. This makes it available to all your co-workers to collaborate on it. mma@laptop:~/chef-repo $ git push -u origin master...TRUNCATED OUTPUT...To git@github.com:mmarschall/chef-repo.git* [new branch] master -> master How it works... You've downloaded a tarball containing Opscode's skeleton repository. Then, you've initialized your chef-repo and connected it to your own repository on GitHub. After that, you've added all the files from the tarball to your repository and committed them. This makes Git track your files and the changes you make later. As a last step, you've pushed your repository to GitHub, so that your co-workers can use your code too. There's more... Let's assume you're working on the same chef-repo repository together with your co-workers. They cloned your repository, added a new cookbook called other_cookbook, committed their changes locally, and pushed their changes back to GitHub. Now it's time for you to get the new cookbook down to your own laptop. Pull your co-workers, changes from GitHub. This will merge their changes into your local copy of the repository. mma@laptop:~/chef-repo $ git pull From github.com:mmarschall/chef-repo * branch master -> FETCH_HEAD ...TRUNCATED OUTPUT... create mode 100644 cookbooks/other_cookbook/recipes/default.rb In the case of any conflicting changes, Git will help you merge and resolve them. Installing Chef on your workstation If you want to use Chef, you'll need to install it on your local workstation first. You'll have to develop your configurations locally and use Chef to distribute them to your Chef Server. Opscode provides a fully packaged version, which does not have any external prerequisites. This fully packaged Chef is called the Omnibus Installer. We'll see how to use it in this section. Getting ready Make sure you've curl installed on your box by following the instructions available at http://curl.haxx.se/download.html. How to do it... Let's see how to install Chef on your local workstation using Opscode's Omnibus Chef installer: In your local shell, run the following command: mma@laptop:~/chef-repo $ curl -L https://www.opscode.com/chef/install.sh | sudo bashDownloading Chef......TRUNCATED OUTPUT...Thank you for installing Chef! Add the newly installed Ruby to your path: mma@laptop:~ $ echo 'export PATH="/opt/chef/embedded/bin:$PATH"'>> ~/.bash_profile && source ~/.bash_profile How it works... The Omnibus Installer will download Ruby and all the required Ruby gems into /opt/chef/embedded. 
By adding the /opt/chef/embedded/bin directory to your .bash_profile, the Chef command-line tools will be available in your shell. There's more... If you already have Ruby installed in your box, you can simply install the Chef Ruby gem by running mma@laptop:~ $ gem install chef. Using the Hosted Chef platform If you want to get started with Chef right away (without the need to install your own Chef Server) or want a third party to give you an Service Level Agreement (SLA) for your Chef Server, you can sign up for Hosted Chef by Opscode. Opscode operates Chef as a cloud service. It's quick to set up and gives you full control, using users and groups to control the access permissions to your Chef setup. We'll configure Knife, Chef's command-line tool to interact with Hosted Chef, so that you can start managing your nodes. Getting ready Before being able to use Hosted Chef, you need to sign up for the service. There is a free account for up to five nodes. Visit http://www.opscode.com/hosted-chef and register for a free trial or the free account. I registered as the user webops with an organization short-name of awo. After registering your account, it is time to prepare your organization to be used with your chef-repo repository. How to do it... Carry out the following steps to interact with the Hosted Chef: Navigate to http://manage.opscode.com/organizations. After logging in, you can start downloading your validation keys and configuration file. Select your organization to be able to see its contents using the web UI. Regenerate the validation key for your organization and save it as <your-organization-short-name>.pem in the .chef directory inside your chef-repo repository. Generate the Knife config and put the downloaded knife.rb into the .chef directory inside your chef-repo directory as well. Make sure you replace webops with the username you chose for Hosted Chef and awo with the short-name you chose for your organization: current_dir = File.dirname(__FILE__)log_level :infolog_location STDOUTnode_name "webops"client_key "#{current_dir}/webops.pem"validation_client_name "awo-validator"validation_key "#{current_dir}/awo-validator.pem"chef_server_url "https://api.opscode.com/organizations/awo"cache_type 'BasicFile'cache_options( :path => "#{ENV['HOME']}/.chef/checksums" )cookbook_path ["#{current_dir}/../cookbooks"] Use Knife to verify that you can connect to your hosted Chef organization. It should only have your validator client so far. Instead of awo, you'll see your organization's short-name: mma@laptop:~/chef-repo $ knife client listawo-validator How it works... Hosted Chef uses two private keys (called validators): one for the organization and the other for every user. You need to tell Knife where it can find these two keys in your knife.rb file. The following two lines of code in your knife.rb file tells Knife about which organization to use and where to find its private key: validation_client_name "awo-validator"validation_key "#{current_dir}/awo-validator.pem" The following line of code in your knife.rb file tells Knife about where to find your users' private key: client_key "#{current_dir}/webops.pem" And the following line of code in your knife.rb file tells Knife that you're using Hosted Chef. You will find your organization name as the last part of the URL: chef_server_url "https://api.opscode.com/organizations/awo" Using the knife.rb file and your two validators Knife can now connect to your organization hosted by Opscode. 
You do not need your own, self-hosted Chef Server, nor do you need to use Chef Solo in this setup. There's more... This setup is good for you if you do not want to worry about running, scaling, and updating your own Chef Server and if you're happy with saving all your configuration data in the cloud (under Opscode's control). If you need to have all your configuration data within your own network boundaries, you might sign up for Private Chef, which is a fully supported and enterprise-ready version of Chef Server. If you don't need any advanced enterprise features like role-based access control or multi-tenancy, then the open source version of Chef Server might be just right for you. Summary In this article, we learned about key concepts such as cookbooks, roles, and environments and how to use some basic tools such as Git, Knife, Chef Shell, Vagrant, and Berkshelf. Resources for Article: Further resources on this subject: Automating the Audio Parameters – How it Works [Article] Skype automation [Article] Cross-browser-distributed testing [Article]
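To make the cookbook terminology from the start of this article concrete, here is a rough sketch of what a single recipe inside such a cookbook could look like. The cookbook name (ntp), the template file, and the resource attributes are invented for illustration; they are not part of the Opscode chef-repo skeleton you just set up. A recipe is plain Ruby using Chef's resource DSL, typically stored as cookbooks/ntp/recipes/default.rb:

# Install the NTP package on the node
package 'ntp'

# Render /etc/ntp.conf from an ERB template shipped with the cookbook
template '/etc/ntp.conf' do
  source 'ntp.conf.erb'
  owner 'root'
  group 'root'
  mode '0644'
  # restart the service whenever the rendered file changes
  notifies :restart, 'service[ntp]'
end

# Make sure the service starts at boot and is running now
service 'ntp' do
  action [:enable, :start]
end

Once a cookbook like this is uploaded with knife cookbook upload ntp, the Chef Client on each registered node pulls it from the Chef Server and converges the node on its next run.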

Using a LINQ query in LINQPad

Packt
05 Sep 2013
3 min read
(For more resources related to this topic, see here.) The standard version We are going to implement a simple scenario: given a deck of 52 cards, we want to pick a random number of cards, and then take out all of the hearts. From this stack of hearts, we will discard the first two and take the next five cards (if possible), and order them by their face value for display. You can try it in a C# program query in LINQPad: public static Random random = new Random();void Main(){ var deck = CreateDeck(); var randomCount = random.Next(52); var hearts = new Card[randomCount]; var j = 0; // take all hearts out for(var i=0;i<randomCount;i++) { if(deck[i].Suit == "Hearts") { hearts[j++] = deck[i]; } } // resize the array to avoid null references Array.Resize(ref hearts, j); // check that we have at least 2 cards. If not, stop if(hearts.Length <= 2) return; var count = 0; // check how many cards we can take count = hearts.Length - 2; // the most we need to take is 5 if(count > 5) { count = 5; } // take the cards var finalDeck = new Card[count]; Array.Copy(hearts, 2, finalDeck, 0, count); // now order the cards Array.Sort(finalDeck, new CardComparer()); // Display the result finalDeck.Dump();}public class Card{ public string Suit { get; set; } public int FaceValue { get; set; }}// Create the cards' deckpublic Card[] CreateDeck(){ var suits = new [] { "Spades", "Clubs", "Hearts", "Diamonds" }; var deck = new Card[52]; for(var i = 0; i < 52; i++) { deck[i] = new Card { Suit = suits[i / 13], FaceValue = i-(13*(i/13))+1 }; } // randomly shuffle the deck for (var i = deck.Length - 1; i > 0; i--) { var j = random.Next(i + 1); var tmp = deck[j]; deck[j] = deck[i]; deck[i] = tmp; } return deck;}// CardComparer compares 2 cards against their face valuepublic class CardComparer : Comparer<Card>{ public override int Compare(Card x, Card y) { return x.FaceValue.CompareTo(y.FaceValue); }} Even if we didn't consider the CreateDeck() method, we had to do quite a few operations to produce the expected result (your values might be different as we are using random cards). The output is as follows: Depending on the data, LINQPad will add contextual information. For example, in this sample it will add the bottom row with the sum of all the values (here, only FaceValue). Also, if you click on the horizontal graph button, you will get a visual representation of your data, as shown in the following screenshot: This information is not always relevant but it can help you explore your data. Summary In this article we saw how LINQ queries can be used in LINQPad. The powerful query capabilities of LINQ have been utilized to the maximum in LINQPad. Resources for Article: Further resources on this subject: Displaying SQL Server Data using a Linq Data Source [Article] Binding MS Chart Control to LINQ Data Source Control [Article] LINQ to Objects [Article]
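The listing above is deliberately the hand-rolled version of the scenario. Just as a sketch (this query is not taken from the original article), the same pick-the-hearts logic collapses into a single LINQ query in a LINQPad C# program, reusing the Card class, the CreateDeck() method, and the random field shown above:

void Main()
{
    var deck = CreateDeck();
    var randomCount = random.Next(52);

    var finalDeck = deck
        .Take(randomCount)               // the randomly sized stack we drew
        .Where(c => c.Suit == "Hearts")  // keep only the hearts
        .Skip(2)                         // discard the first two
        .Take(5)                         // take at most the next five
        .OrderBy(c => c.FaceValue)       // order them by face value
        .ToArray();

    finalDeck.Dump();                    // LINQPad renders the same result grid
}

The intermediate array sizing, Array.Resize, Array.Copy, and the CardComparer class all disappear, because the ordering is expressed directly with OrderBy on the card's FaceValue.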

So, what is Apache Wicket?

Packt
04 Sep 2013
7 min read
(For more resources related to this topic, see here.) Wicket is a component-based Java web framework that uses just Java and HTML. Here, you will see the main advantages of using Apache Wicket in your projects. Using Wicket, you will not have mutant HTML pages. Most of the Java web frameworks require the insertion of special syntax to the HTML code, making it more difficult for Web designers. On the other hand, Wicket adopts HTML templates by using a namespace that follows the XHTML standard. It consists of an id attribute in the Wicket namespace (wicket:id). You won't need scripts to generate messy HTML code. Using Wicket, the code will be clearer, and refactoring and navigating within the code will be easier. Moreover, you can utilize any HTML editor to edit the HTML files, and web designers can work with little knowledge of Wicket in the presentation layer without worrying about business rules and other developer concerns. The advantages for developers are as follows: All code is written in Java No XML configuration files POJO-centric programming No Back-button problems (that is, unexpected and undesirable results on clicking on the browser's Back button) Ease of creating bookmarkable pages Great compile-time and runtime problem diagnosis Easy testability of components Another interesting thing is that concepts such as generics and anonymous subclasses are widely used in Wicket, leveraging the Java programming language to the max. Wicket is based on components. A component is an object that interacts with other components and encapsulates a set of functionalities. Each component should be reusable, replaceable, extensible, encapsulated, and independent, and it does not have a specific context. Wicket provides all these principles to developers because it has been designed taking into account all of them. In particular, the most remarkable principle is reusability. Developers can create custom reusable components in a straightforward way. For instance, you could create a custom component called SearchPanel (by extending the Panel class, which is also a component) and use it in all your other Wicket projects. Wicket has many other interesting features. Wicket also aims to make the interaction of the stateful server-side Java programming language with the stateless HTTP protocol more natural. Wicket's code is safe by default. For instance, it does not encode state in URLs. Wicket is also efficient (for example, it is possible to do a tuning of page-state replication) and scalable (Wicket applications can easily work on a cluster). Last, but not least, Wicket has support for frameworks like EJB and Spring. Installation In seven easy steps, you can build a Wicket "Hello World" application. Step 1 – what do I need? Before you start to use Apache Wicket 6, you will need to check if you have all of the required elements, listed as follows: Wicket is a Java framework, so you need to have Java virtual machine (at least Version 6) installed on your machine. Apache Maven is required. Maven is a tool that can be used for building and managing Java projects. Its main purpose is to make the development process easier and more structured. More information on how to install and configure Maven can be found at http://maven.apache.org. The examples of this book use the Eclipse IDE Juno version, but you can also use other versions or other IDEs, such as NetBeans. In case you are using other versions, check the link for installing the plugins to the version you have; the remaining steps will be the same. 
In case of other IDEs, you will need to follow some tutorial to install other equivalent plugins or not use them at all. Step 2 – installing the m2eclipse plugin The steps for installing the m2eclipse plugin are as follows: Go to Help | Install New Software. Click on Add and type in m2eclipse in the Name field; copy and paste the link https://repository.sonatype.org/content/repositories/forge-sites/m2e/1.3.0/N/LATEST into the Location field. Check all options and click on Next. Conclude the installation of the m2eclipse plugin by accepting all agreements and clicking on Finish. Step 3 – creating a new Maven application The steps for creating a new Maven application are as follows: Go to File | New | Project. Then go to Maven | Maven Project. Click on Next and type wicket in the next form. Choose the wicket-archetype-quickstart maven Archetype and click on Next. Fill the next form according to the following screenshot and click on Finish: Step 4 – coding the "Hello World" program In this step, we will build the famous "Hello World" program. The separation of concerns will be clear between HTML and Java code. In this example, and in most cases, each HTML file has a corresponding Java class (with the same name). First, we will analyse the HTML template code. The content of the HomePage.html file must be replaced by the following code: <!DOCTYPE html> <html > <body> <span wicket:id="helloWorldMessage">Test</span> </body> </html> It is simple HTML code with the Wicket template wicket:id="helloWorldMessage". It indicates that in the Java code related to this page, a method will replace the message Test by another message. Now, let's edit the corresponding Java class; that is, HomePage. package com.packtpub.wicket.hello_world; import org.apache.wicket.markup.html.WebPage; import org.apache.wicket.markup.html.basic.Label; public class HomePage extends WebPage { public HomePage() { add(new Label("helloWorldMessage", "Hello world!!!")); } } The class HomePage extends WebPage; that is, it inherits some of the WebPage class's methods and attributes, and it becomes a WebPage subtype. One of these inherited methods is the method add(), where a Label object can be passed as a parameter. A Label object can be built by passing two parameters: an identifier and a string. The method add() is called in the HomePage class's constructor and will replace the message in wicket:id="helloWorldMessage" with Hello world!!!. The resulting HTML code will be as shown in the following code snippet: <!DOCTYPE html> <html > <body> <span>Hello world!!!</span> </body> </html> Step 5 – compile and run! The steps to compile and run the project are as follows: To compile, right-click on the project and go to Run As | Maven install. Verify if the compilation was successful. If not, Wicket provides good error messages, so you can try to fix what is wrong. To run the project, right-click on the class Start and go to Run As | Java application. The class Start will run an embedded Jetty instance that will run the application. Verify if the server has started without any problems. Open a web browser and enter this in the address field: http://localhost:8080. In case you have changed the port, enter http://localhost:<port>. The browser should show Hello world!!!. The most common problem that can occur is that port 8080 is already in use. In this case, you can go into the Java Start class (found at src/test/java) and set another port by replacing 8080 in connector.setPort(8080) (line 21) by another number (for example, 9999).
To stop the server, you can either click on Console and press any key or click on the red square on the console, which indicates termination. And that's it! By this point, you should have a working Wicket "Hello World" application and are free to play around and discover more about it. Summary This article describes how to create a simple "Hello World" application using Apache Wicket 6. Resources for Article : Further resources on this subject: Tips for Deploying Sakai [Article] OSGi life cycle [Article] Apache Wicket: Displaying Data Using DataTable [Article]
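Earlier, a custom SearchPanel was mentioned as an example of a reusable component built by extending Panel. The following sketch is not part of the quickstart archetype and the field names are invented, but it shows the shape such a component usually takes: a Java class plus an HTML template with the same name.

// SearchPanel.java
import org.apache.wicket.markup.html.form.Form;
import org.apache.wicket.markup.html.form.TextField;
import org.apache.wicket.markup.html.panel.Panel;
import org.apache.wicket.model.Model;

public class SearchPanel extends Panel {
    public SearchPanel(String id) {
        super(id);
        // Model.of("") gives the text field a place to store the query;
        // a real panel would override Form#onSubmit to act on the value
        Form<Void> form = new Form<Void>("searchForm");
        form.add(new TextField<String>("query", Model.of("")));
        add(form);
    }
}

<!-- SearchPanel.html -->
<wicket:panel>
  <form wicket:id="searchForm">
    <input type="text" wicket:id="query"/>
  </form>
</wicket:panel>

Any page can then embed the panel with add(new SearchPanel("search")) and a matching <div wicket:id="search"/> in its markup, which is what makes the component reusable across projects.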

Rapid Development

Packt
04 Sep 2013
7 min read
(For more resources related to this topic, see here.) Concept of reusability The concept of reusability has its roots in the production process. Typically, most of us go about creating e-learning using a process similar to what is shown in the following screenshot. It works well for large teams and the one man band, except in the latter case, you become a specialist for all the stages of production. That's a heavy load. It's hard to be good at all things and it demands that you constantly stretch and improve your skills, and find ways to increase the efficiency of what you do. Reusability in Storyline is about leveraging the formatting, look and feel and interactions you create so that you can re-purpose your work and speed-up production. Not every project will be an original one-off, in fact most won't, so the concept is to approach development with a plan to repurpose 80 percent of the media, quizzes, interactions, and designs you create. As you do this, you begin to establish processes, templates, and libraries that can be used to rapidly assemble base courses. With a little tweaking and some minor customization, you'll have a new, original course in no time. Your client doesn't need to know that 80 percent was made from reusable elements with just 20 percent created as original, unique components, but you'll know the difference in terms of time and effort. Leveraging existing assets So how can you leverage existing assets with Storyline? The first things you'll want to look at are the courses you've built with other authoring programs, such as PowerPoint, QuizMaker Engage, Captivate, Flash, and Camtasia. If there are design themes, elements, or interactions within these courses that you might want to use for future Storyline courses, you should focus your efforts on importing what you can, and further adjusting within Storyline to create a new version of the asset that can be reused for future Storyline courses. If re-working the asset is too complex or if you don't expect to reuse it in multiple courses, then using Storyline's web object feature to embed the interaction without re-working it in any way may be the better approach. In both cases, you'll save time by reusing content you've already put a lot of time in developing. Importing external content Here are the steps to bring external content into Storyline: From the Articulate Startup screen or by choosing the Insert tab, and then New Slide within a project, select the Import option. There are options to import PowerPoint, Quizmaker, and Storyline. All of these will display the slides within the file to be imported. You can pick and choose which slides to import into a new or the current scene in Storyline. The Engage option displays the entire interaction that can be imported into a single slide in the current or a new scene. Click on Import to complete the process. Considerations when importing Keep the following points in mind when importing: PowerPoint and Quizmaker files can be imported directly into Storyline. Once imported, you can edit the content like you would any other Storyline slide. Master slides come along with the import making it simple to reuse previous designs. Note that 64-bit PowerPoint is not supported and you must have an installed, activated version of Quizmaker for the import to work. The PowerPoint to Storyline conversion is not one-to-one. You can expect some alignment issues with slide objects due to the fact that PowerPoint uses points and Storyline uses pixels. 
There are 2.66 pixels for each point which is why you'll need to tweak the imported slides just a bit. Same with Quizmaker though the reason why is slightly different; Quizmaker is 686 x 424 in size, whereas Storyline is 720 x 540 by default. Engage files can be imported into Storyline and they are completely functional, but cannot be edited within Storyline. Though the option to import Engage appears on the Import screen, what Storyline is really doing is creating a web object to contain the Engage interaction. Once imported into a new scene, clicking on the Engage interaction will display an Options menu where you can make minor adjustments to the behavior of the interaction as well as Preview and Edit in it Engage. You can also resize and position the interaction just as you would any web object. Remember that though web objects work in iPad and HTML5 outputs, Engage content is Flash, so it will not playback on an iPad or in an HTML5 browser. Like Quizmaker, you'll need an installed, activated version of Engage for the import to work. Flash, Captivate, and Camtasia files cannot be imported in Storyline and cannot be edited within Storyline. You can however, use web objects to embed these projects into Storyline or the Insert Flash option. In both cases, the imported elements appear seamless to the learner while retaining full functionality.   Build once, and reuse many times Quizzing is at the heart of many e-learning courses where often the quiz questions need to be randomized or even reused in different sections of a single course (that is, the same questions for a pre and post-test). The concept of building once and reusing many times works well with several aspects of Storyline. We'll start with quizzing and a feature called Question Banks as follows: Question Banks Question Bank offers a way to pool, reuse, and randomize questions within a project. Slides in a question bank are housed within the project file but are not visible until placed into the story. Question Banks can include groups of quiz slides and regular slides (that is, you might include a regular slide if you need to provide instructions for the quiz or would like to include a post-quiz summary). When you want to include questions from a Question Bank, you just need to insert a new Quizzing slide, and then choose Draw from Bank . You can then select one or more questions to include and randomize them if desired. Follow along… In this exercise we will be removing three questions from a scene and moving them into a question bank. This will allow you to draw one or more of those questions at any point in the project where the quiz questions are needed, as follows: From the Home tab, choose Question Banks , and then Create Question bank . Title this Identity Theft Questions . Notice that a new tab has opened in Normal View . The Question Bank appears in this tab. Click on the Import link and navigate to question slides 2, 3, and 4. From the Import drop-down menu at the top, select move questions into question bank . Click on the Story View tab and notice the three slides containing the quiz questions are no longer in the story. Click back on the Identity Theft tab and notice that they are located here. The questions will not become a part of the story until the next step, when you draw them from the bank. In Story View, click once on slide 1 to select it, and then from the Home tab, choose Question Banks and New Draw from Question Bank . From the Question Bank drop-down menu, select Identity Theft Questions . 
All questions will be selected by default and will be randomized after being placed into the story. This means that the learner will need to answer three questions before continuing onto the next slide in the story. Click on Insert . The Question Bank draw has been inserted as slide 2. To see how this works, Preview the scene. Save as Exercise 11 – Identity Theft Quiz.   There are multiple ways to get back to the questions that are in a question bank. You can do this by selecting the tab the questions are located in (in this case, Identity Theft ), you can view the question bank slide in Normal View or choose Question Banks from the Home tab and navigate to the name of the question bank you'd like to edit.

Managing Adobe Connect Meeting Room

Packt
04 Sep 2013
6 min read
(For more resources related to this topic, see here.) The Meeting Information page In order to get to the Meeting Information page, you will first need to navigate to the Meeting List page by following these steps: Log in to the Connect application. Click on the Meetings tab in the Home Page main menu. When you access the Meetings page, the My Meetings link is opened by default and a view is set on the Meeting List tab. You will find the meeting that is listed on this page as shown in the following screenshot: By clicking on the Cookbook Meeting option in the Name column (marked with a red outline), you will be presented with the Meeting Information page. In the section titled Meeting Information, you can examine various pieces of information about the selected meeting. On this page, you can review Name, Summary, Start Time, Duration, Number of users in room(that are currently present in the meeting room), URL, Language(selected), and the Access rights of the meeting. The two most important fields are marked with a red outline in the previous screenshot. The first one is the link to the meeting URL and the second is the Enter Meeting Room button. You can join the selected meeting room by clicking on any of these two options. In the upper portion of this page, you will notice the navigation bar with the following links: Meeting Information Edit Information Edit Participants Invitations Uploaded Content Recordings Reports By selecting any of these links, you will open pages associated with them. Our main focus of this article will be on the functionalities of these pages. Since we have explained the Meeting Information page, we can proceed to the Edit Information page. The Edit Information page The Edit Information page is very similar to the Enter Meeting Information page. We will briefly inform you about the meeting settings, which you can edit on this page. These settings are: Name Summary Start time Duration Language Access Audio conference settings Any changes made on this page are preserved by clicking on the Save button that you will find at very bottom of this page. Changes will not affect participants who are already logged in to the room, except changes to the Audio Conference settings. Next to the Save button, you will find the Cancel button. Any changes made on the Edit Information page, which are not already saved will be reverted by clicking on the Cancel button. The Edit Participants page After the Edit Information page, it's time for us to access the next page by clicking on the Edit Participants link in the navigation bar. This link will take you to the Select Participants page. In addition to the already described features, we will introduce you to a couple more functionalities that will help you to add participants, change their roles, or remove them from the meeting. Example 1 – changing roles In this example, we will change the role of the administrators group from participant to presenter by using the Search button. This feature is of great help when there are a large number of Connect users that are already added as meeting participants. In order to do so, you will need to follow the steps listed: In the Current Participants For Cookbook Meeting table on the right-hand side, click on the Search button located in the lower-left corner of the table. When you click on the Search button, a text field for instant search will be displayed. 
In the text field, enter the name of the Administrators group or part of the group name (the auto-complete function should recognize the name of the present group). When the group is present in the table, select it. Click on the Set User Role button. Select new role for this group in the menu. For the purpose of this example, we will select the Presenter role. By completing this action, you will grant Presenter privileges in the Cookbook Meeting table to all the administrators as shown in the following screenshot: Example 2 – removing a user In this example, we will show you how to remove a specific user from the selected meeting. For the purpose of this exercise, we will remove the Administrators group from the Participants list. In order to complete this action, please follow the given steps: Select Administrators in the Current Participants For Cookbook Meeting table. Click on the Remove button. Now, all the members of this group will be excluded from the meeting, and Administrators should not be present in the list. Example 3 – adding a specific user This example will demonstrate how to add a specific user from any group. For example, we will add a user from the Authors group to the Current Participants list. In the Available users and Groups table, double-click on the Authors group. This action will change the user interface of this table and list all the users that belong to the Authors group. Please note that table header is now changed to Authors. Select a specific user and click on the Add button. This will add the selected user from the Authors group to the Current Participants For Cookbook Meeting table. One thing that we would like to mention here is the ability to perform multiple selections in both the Available Users and Groups and Current Participants For Cookbook Meeting tables. To enable multiple selection functionality, select a specific user and group by clicking and selecting Ctrl and Shift on the keyboard at the same time. By demonstrating these examples, we reviewed the Edit Participant link functionalities. Summary In this article, we learned how to master all functionalities on how to edit different settings for already existing meetings. We covered the following topics: The Meeting information page The Managing Edit information page The Managing Edit participants page Resources for Article: Further resources on this subject: Top features you'll want to know about [Article] Remotely Preview and test mobile web pages on actual devices with Adobe Edge Inspect [Article] Exporting SAP BusinessObjects Dashboards into Different Environments [Article]

So, what is Node.js?

Packt
04 Sep 2013
2 min read
(For more resources related to this topic, see here.) Node.js is an open source platform that allows you to build fast and scalable network applications using JavaScript. Node.js is built on top of V8, a modern JavaScript virtual machine that powers Google's Chrome web browser. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. Node.js can handle multiple concurrent network connections with little overhead, making it ideal for data-intensive, real-time applications. With Node.js, you can build many kinds of networked applications. For instance, you can use it to build a web application service, an HTTP proxy, a DNS server, an SMTP server, an IRC server, and basically any kind of process that is network intensive. You program Node.js using JavaScript, which is the language that powers the Web. JavaScript is a powerful language that, when mastered, makes writing networked, event-driven applications fun and easy. Node.js recognizes streams that are resistant to precarious network conditions and misbehaving clients. For instance, mobile clients are notoriously famous for having large latency network connections, which can put a big burden on servers by keeping around lots of connections and outstanding requests. By using streaming to handle data, you can use Node.js to control incoming and outgoing streams and enable your service to survive. Also, Node.js makes it easy for you to use third-party open source modules. By using Node Package Manager (NPM), you can easily install, manage, and use any of the several modules contained in a big and growing repository. NPM also allows you to manage the modules your application depends on in an isolated way, allowing different applications installed in the same machine to depend on different versions of the same module without originating a conflict, for instance. Given the way it's designed, NPM even allows different versions of the same module to coexist in the same application. Summary In this article, we learned that Node.js uses an event-driven, non-blocking I/O model and can handle multiple concurrent network connections with little overhead. Resources for Article : Further resources on this subject: So, what is KineticJS? [Article] Cross-browser-distributed testing [Article] Accessing and using the RDF data in Stanbol [Article]
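As a minimal illustration of the kind of networked program described above (a sketch, not code from the article; the port number is arbitrary), a complete HTTP server in Node.js fits in a few lines using only the built-in http module:

// server.js -- run with: node server.js
var http = require('http');

var server = http.createServer(function (req, res) {
  // this callback runs for every incoming request: the event-driven model
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from Node.js\n');
});

server.listen(8080, function () {
  console.log('Listening on http://localhost:8080');
});

Each connection is handled by callbacks on a single event loop, which is why the process stays lightweight even with many clients connected at once.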

Oracle ADF Essentials – Adding Business Logic

Packt
03 Sep 2013
19 min read
(For more resources related to this topic, see here.) Adding logic to business components by default, a business component does not have an explicit Java class. When you want to add Java logic, however, you generate the relevant Java class from the Java tab of the business component. On the Java tab, you also decide which of your methods are to be made available to other objects by choosing to implement a Client Interface . Methods that implement a client interface show up in the Data Control palette and can be called from outside the object. Logic in entity objects Remember that entity objects are closest to your database tables –– most often, you will have one entity object for every table in the database. This makes the entity object a good place to put data logic that must be always executed. If you place, for example, validation logic in an entity object, it will be applied no matter which view object attempts to change data. In the database or in an entity object? Much of the business logic you can place in an entity object can also be placed in the database using database triggers. If other systems are accessing your database tables, business logic should go into the database as much as possible. Overriding accessors To use Java in entity objects, you open an entity object and select the Java tab. When you click on the pencil icon, the Select Java Options dialog opens as shown in the following screenshot: In this dialog, you can select to generate Accessors (the setXxx() and getXxx() methods for all the attributes) as well as Data Manipulation Methods (the doDML() method; there is more on this later). When you click on OK , the entity object class is generated for you. You can open it by clicking on the hyperlink or you can find it in the Application Navigator panel as a new node under the entity object. If you look inside this file, you will find: Your class should start with an import section that contains a statement that imports your EntityImpl class. If you have set up your framework extension classes correctly this could be import com.adfessentials.adf.framework.EntityImpl. You will have to click on the plus sign in the left margin to expand the import section. The Structure panel in the bottom-left shows an overview of the class including all the methods it contains. You will see a lot of setter and getter methods like getFirstName() and setFirstName() as shown in the following screenshot: There is a doDML() method described later. If you were to decide, for example, that last name should always be stored in upper case, you could change the setLastName() method to: public void setLastName(String value) { setAttributeInternal(LASTNAME, value.toUpperCase()); } Working with database triggers If you decide to keep some of your business logic in database triggers, your triggers might change the values that get passed from the entity object. Because the entity object caches values to save database work, you need to make sure that the entity object stays in sync with the database even if a trigger changes a value. You do this by using the Refresh on Update property. To find this property, select the Attributes subtab on the left and then select the attribute that might get changed. At the bottom of the screen, you see various settings for the attribute with the Refresh settings in the top-right of the Details tab as shown in the following screenshot: Check the Refresh on Update property checkbox if a database trigger might change the attribute value. 
This makes the ADF framework requery the database after an update has been issued. Refresh on Insert doesn't work if you are using MySQL and your primary key is generated with AUTO_INCREMENT or set by a trigger. ADF doesn't know the primary key and therefore cannot find the newly inserted row after inserting it. It does work if you are running against an Oracle database, because Oracle SQL syntax has a special RETURNING construct that allows the entity object to get the newly created primary key back. Overriding doDML() Next, after the setters and getters, the doDML() method is the one that most often gets overridden. This method is called whenever an entity object wants to execute a Data Manipulation Language (DML) statement like INSERT, UPDATE, or DELETE. This offers you a way to add additional processing; for example, checking that the account balance is zero before allowing a customer to be deleted. In this case, you would add logic to check the account balance, and if the deletion is allowed, call super.doDML() to invoke normal processing. Another example would be to implement logical delete (records only change state and are not actually deleted from the table). In this case, you would override doDML() as follows: @Override protected void doDML(int operation, TransactionEvent e) { if (operation == DML_DELETE) { operation = DML_UPDATE; } super.doDML(operation, e); } As it is probably obvious from the code, this simply replaces a DELETE operation with an UPDATE before it calls the doDML() method of its superclass (your framework extension EntityImpl, which passes the task on to the Oracle-supplied EntityImpl class). Of course, you also need to change the state of the entity object row, for example, in the remove() method. You can find fully-functional examples of this approach on various blogs, for example at http://myadfnotebook.blogspot.dk/2012/02/updating-flag-when-deleting-entity-in.html. You also have the option of completely replacing normal doDML() method processing by simply not calling super.doDML(). This could be the case if you want all your data modifications to go via a database procedure –– for example, to insert an actor, you would have to call insertActor with first name and last name. In this case, you would write something like: @Override protected void doDML(int operation, TransactionEvent e) { CallableStatement cstmt = null; if (operation == DML_INSERT) { String insStmt = "{call insertActor (?,?)}"; cstmt = getDBTransaction().createCallableStatement(insStmt, 0); try { cstmt.setString(1, getFirstName()); cstmt.setString(2, getLastName()); cstmt.execute(); } catch (Exception ex) { … } finally { … } } } If the operation is insert, the above code uses the current transaction (via the getDBTransaction() method) to create a CallableStatement with the string insertActor(?,?). Next, it binds the two parameters (indicated by the question marks in the statement string) to the values for first name and last name (by calling the getter methods for these two attributes). Finally, the code block finishes with a normal catch clause to handle SQL errors and a finally clause to close open objects. Again, fully working examples are available in the documentation and on the Internet in various blog posts. Normally, you would implement this kind of override in the framework extension EntityImpl class, with additional logic to allow the framework extension class to recognize which specific entity object the operation applies to and which database procedure to call.
Data validation With the techniques you have just seen, you can implement every kind of business logic your requirements call for. One requirement, however, is so common that it has been built right into the ADF framework: data validation . Declarative validation The simplest kind of validation is where you compare one individual attribute to a limit, a range, or a number of fixed values. For this kind of validation, no code is necessary at all. You simply select the Business Rules subtab in the entity object, select an attribute, and click on the green plus sign to add a validation rule. The Add Validation Rule dialog appears as shown in the following screenshot: You have a number of options for Rule Type –– depending on your choice here, the Rule Definition tab changes to allow you to define the parameters for the rule. On the Failure Handling tab, you can define whether the validation is an error (that must be corrected) or a warning (that the user can override), and you define a message text as shown in the following screenshot: You can even define variable message tokens by using curly brackets { } in your message text. If you do so, a token will automatically be added to the Token Message Expressions section of the dialog, where you can assign it any value using Expression Language. Click on the Help button in the dialog for more information on this. If your application might ever conceivably be needed in a different language, use the looking glass icon to define a resource string stored in a separate resource bundle. This allows your application to have multiple resource bundles, one for each different user interface language. There is also a Validation Execution tab that allows you to specify under which condition your rule should be applied. This can be useful if your logic is complex and resource intensive. If you do not enter anything here, your rule is always executed. Regular expression validation One of the especially powerful declarative validations is the Regular Expression validation. A regular expression is a very compact notation that can define the format of a string –– this is very useful for checking e-mail addresses, phone numbers, and so on. To use this, set Rule Type to Regular Expression as shown in the following screenshot: JDeveloper offers you a few predefined regular expressions, for example, the validation for e-mails as shown in the preceding screenshot. Even though you can find lots of predefined regular expressions on the Internet, someone from your team should understand the basics of regular expression syntax so you can create the exact expression you need. Groovy scripts You can also set Rule Type to Script to get a free-format box where you can write a Groovy expression. Groovy is a scripting language for the Java platform that works well together with Java –– see http://groovy.codehaus.org/ for more information on Groovy. Oracle has published a white paper on Groovy in ADF (http://www.oracle.com/technetwork/developer-tools/jdev/introduction-to-groovy-128837.pdf), and there is also information on Groovy in the JDeveloper help. Method validation If none of these methods for data validation fit your need, you can of course always revert to writing code. To do this, set Rule Type to Method and provide an error message. If you leave the Create a Select Method checkbox checked when you click on OK , JDeveloper will automatically create a method with the right signature and add it to the Java class for the entity object. 
The autogenerated validation method for Length (in the Film entity object) would look as follows: /** * Validation method for Length. */ public boolean validateLength (Integer length) { return true; } It is your task to fill in the logic and return either true (if validation is OK) or false (if the data value does not meet the requirements). If validation fails, ADF will automatically display the message you defined for this validation rule. Logic in view objects View objects represent the dataset you need for a specific part of the application — typically a specific screen or part of a screen. You can create Java objects for either an entire view object (an XxxImpl.java class, where Xxx is the name of your view object) or for a specific row (an XxxRowImpl.java class). A view object class contains methods to work with the entire data-set that the view object represents –– for example, methods to apply view criteria or re-execute the underlying database query. The view row class contains methods to work with an individual record of data –– mainly methods to set and get attribute values for one specific record. Overriding accessors Like for entity objects, you can override the accessors (setters and getters) for view objects. To do this, you use the Java subtab in the view object and click on the pencil icon next to Java Classes to generate Java. You can select to generate a view row class including accessors to ask JDeveloper to create a view row implementation class as shown in the following screenshot: This will create an XxxRowImpl class (for example, RentalVORowImpl) with setter and getter methods for all attributes. The code will look something like the following code snippet: … public class RentalVORowImpl extends ViewRowImpl { … /** * This is the default constructor (do not remove). */ public RentalVORowImpl() { } … /** * Gets the attribute value for title using the alias name * Title. * @return the title */ public String getTitle() { return (String) getAttributeInternal(TITLE); } /** * Sets <code>value</code> as attribute value for title using * the alias name Title. * @param value value to set the title */ public void setTitle(String value) { setAttributeInternal(TITLE, value); } … } You can change all of these to manipulate data before it is delivered to the entity object or to return a processed version of an attribute value. To use such attributes, you can write code in the implementation class to determine which value to return. You can also use Groovy expressions to determine values for transient attributes. This is done on the Value subtab for the attribute by setting Value Type to Expression and filling in the Value field with a Groovy expression. See the Oracle white paper on Groovy in ADF (http://www.oracle.com/technetwork/developer-tools/jdev/introduction-to-groovy-128837.pdf) or the JDeveloper help. Change view criteria Another example of coding in a view object is to dynamically change which view criteria are applied to the view object.It is possible to define many view criteria on a view object –– when you add a view object instance to an application module, you decide which of the available view criteria to apply to that specific view object instance. However, you can also programmatically change which view criteria are applied to a view object. 
This can be useful if you want to have buttons to control which subset of data to display –– in the example application, you could imagine a button to "show only overdue rentals" that would apply an extra view criterion to a rental view object. Because the view criteria apply to the whole dataset, view criteria methods go into the view object, not the view row object. You generate a Java class for the view object from the Java Options dialog in the same way as you generate Java for the view row object. In the Java Options dialog, select the option to generate the view object class as shown in the following screenshot: A simple example of programmatically applying a view criteria would be a method to apply an already defined view criterion called called OverdueCriterion to a view object. This would look like this in the view object class: public void showOnlyOverdue() { ViewCriteria vc = getViewCriteria("OverdueCriterion"); applyViewCriteria(vc); executeQuery(); } View criteria often have bind variables –– for example, you could have a view criteria called OverdueByDaysCriterion that uses a bind variable OverdueDayLimit. When you generate Java for the view object, the default option of Include bind variable accessors (shown in the preceding screenshot) will create a setOverdueDayLimit() method if you have an OverdueDayLimit bind variable. A method in the view object to which we apply this criterion might look like the following code snippet: public void showOnlyOverdueByDays(int days) { ViewCriteria vc = getViewCriteria("OverdueByDaysCriterion"); setOverdueDayLimit(days); applyViewCriteria(vc); executeQuery(); } If you want to call these methods from the user interface, you must select create a client interface for them (on the Java subtab in the view object). This will make your method available in the Data Control palette, ready to be dragged onto a page and dropped as a button. When you change the view criteria and execute the query, only the content of the view object changes –– the screen does not automatically repaint itself. In order to ensure that the screen refreshes, you need to set the PartialTriggers property of the data table to point to the ID of the button that changes the view criteria. For more on partial page rendering, see the Oracle Fusion Middleware Web User Interface Developer's Guide for Oracle Application Development Framework (http://docs.oracle.com/cd/E37975_01/web.111240/e16181/af_ppr.htm). Logic in application modules You've now seen how to add logic to both entity objects and view objects. However, you can also add custom logic to application modules. An application module is the place where logic that does not belong to a specific view object goes –– for example, calls to stored procedures that involve data from multiple view objects. To generate a Java class for an application module, you navigate to the Java subtab in the application module and select the pencil icon next to the Java Classes heading. Typically, you create Java only for the application module class and not for the application module definition. You can also add your own logic here that gets called from the user interface or you can override the existing methods in the application module. 
A typical method to override is prepareSession(), which gets called before the application module establishes a connection to the database –– if you need to, for example, call stored procedures or do other kinds of initialization before accessing the database, an application module method is a good place to do so. Remember that you need to define your own methods as client methods on the Java tab of the application module for the method to be available to be called from elsewhere in the application. Because the application module handles the transaction, it also contains methods, such as beforeCommit(), beforeRollback(), afterCommit(), afterRollback(), and so on. The doDML() method on any entity object that is part of the transaction is executed before any of the application modules' methods. Adding logic to the user interface Logic in the user interface is implemented in the form of managed beans. These are Java classes that are registered with the task flow and automatically instantiated by the ADF framework. ADF operates with various memory scopes –– you have to decide on a scope when you define a managed bean. Adding a bean method to a button The simplest way to add logic to the user interface is to drop a button (af:commandButton) onto a page or page fragment and then double-click on it. This brings up the Bind Action Property dialog as shown in the following screenshot: If you leave Method Binding selected and click on New , the Create Managed Bean dialog appears as shown in the following screenshot: In this dialog, you can give your bean a name, provide a class name (typically the same as the bean name), and select a scope. The backingBean scope is a good scope for logic that is only used for one action when the user clicks on the button and which does not need to store any state for later. Leaving the Generate Class If It Does Not Exist checkbox checked asks JDeveloper to create the class for you. When you click on OK , JDeveloper will automatically suggest a method for you in the Method dropdown (based on the ID of the button you double-clicked on). In the Method field, provide a more useful name and click on OK to add the new class and open it in the editor. You will see a method with your chosen name, as shown in the following code snippet: public String rentDvd() { // Add event code here... return null; } Obviously, you place your code inside this method. If you accidentally left the default method name and ended up with something like cb5_action(), you can right-click on the method name and navigate to Refactor | Rename to give it a more descriptive name. Note that JDeveloper automatically sets the Action property for your button matching the scope, bean name, and method name. This might be something like #{backingBeanScope.RentalBean.rentDvd}. Adding a bean to a task flow Your beans should always be part of a task flow. If you're not adding logic to a button, or you just want more control over the process, you can also create a backing bean class first and then add it to the task flow. A bean class is a regular Java class created by navigating to File | New | Java Class . When you have created the class, you open the task flow where you want to use it and select the Overview tab. On the Managed Beans subtab, you can use the green plus to add your bean. Simply give it a name, point to the class you created, and select a memory scope. Accessing UI components from beans In a managed bean, you often want to refer to various user interface elements.
This is done by mapping each element to a property in the bean. For example, if you have an af:inputText component that you want to refer to in a bean, you create a private variable of type RichInputText in the bean (with setter and getter methods) and set the Binding property (under the Advanced heading) to point to that bean variable using Expression Language. When creating a page or page fragment, you have the option (on the Managed Bean tab) to automatically have JDeveloper create corresponding attributes for you. The Managed Bean tab is shown in the following screenshot: Leave it on the default setting of Do Not Automatically Expose UI Components in a Managed Bean . If you select one of the options to automatically expose UI elements, your bean will acquire a lot of attributes that you don't need, which will make your code unnecessarily complex and slow. However, while learning ADF, you might want to try this out to see how the bean attributes and the Binding property work together. If you do activate this setting, it applies to every page and fragment you create until you explicitly deselect this option. Summary In this article, you have seen some examples of how to add Java code to your application to implement the specific business logic your application needs. There are many, many more places and ways to add logic –– as you work with ADF, you will continually come across new business requirements that force you to figure out how to add code to your application in new ways. Fortunately, there are other books, websites, online tutorials and training that you can use to add to your ADF skill set –– refer to http://www.adfessentials.com for a starting point. Resources for Article : Further resources on this subject: Oracle Tools and Products [Article] Managing Oracle Business Intelligence [Article] Oracle Integration and Consolidation Products [Article]

Configuring payment models (Intermediate)

Packt
03 Sep 2013
4 min read
(For more resources related to this topic, see here.) How to do it... Let's learn how to integrate PayPal Website Payments Standard into our store in test mode. Integrating PayPal Website Payments Standard into our store (test mode) We will start by activating PayPal Payments Standard using the Payments section under the Extensions menu, and then we will edit the settings for it. The next step is to fill in the needed information for testing the PayPal system. For test purposes, we will choose the option for Sandbox Mode as Yes. We will now open a developer account to create test accounts on PayPal. Let's browse to http://developer.paypal.com and sign up for a developer account. Following is a screenshot of the screen that is displayed after we sign up and log in to the account. Click on the Create a preconfigured account link. The next screen will propose an account name with which your account will be created. Now we only need to add funds to the Account Balance field to create the account. Remember that it is a test account, so we can give any virtual amount of funds we want. Now we have a test PayPal account that can be used for our test purchases: Let's go to our shop's user interface, add a product to the shopping cart, and proceed to the Checkout page: Let's log in with the test account we have just created: The following screenshot shows a successful test order: Integrating PayPal Website Payments Pro into our store (live mode) We need to get the API information first. Let's log in to our PayPal account. Click on the User Profile link, and then click on the Update link next to API access in the My selling tools section. The next step is to click on the Request API Credentials link. Choose the option that says Request API signature. This will help us get the API information. The next step is to activate Payment Pro on the OpenCart administration interface using the Payments section under the Extensions menu. We need to edit the details and enter the API information. Let's not forget to select No for Test Mode, which means that this will be a live system. Choose Enabled for the Status field. How it works... Now let's learn how the PayPal Standard and Pro models work and how they differ from each other. PayPal Website Payments Standard PayPal Standard is the easiest payment model to integrate into our store. All we need is a PayPal account, and a bank account to withdraw the money from. PayPal Standard has no monthly costs or setup fee. However, the company charges a small percentage from each transaction. Please go to https://www.paypal.com/webapps/mpp/merchant for merchant service details. The activation of the Standard method is very straightforward. We only need to provide our e-mail address and then set Transaction Method to Sale, Sandbox Mode to No, and Status to Enabled on the administration panel. There is a difference in the test payments that we have made. Customers can also pay with their credit cards instantly, even without a PayPal account. This makes PayPal a very powerful and popular solution. If you are afraid of charging your real PayPal account, there is a good way to test your real payment environment. Create a dummy product with the price of $0.01 and complete the purchase with this tiny amount. PayPal Website Payments Pro This service can be used to charge credit cards using PayPal services in the background. The customers will not need to leave the store at all; the transaction will be completed at the shop itself. Many big e-commerce websites operate this way. 
Currently, the Pro service is only available for merchant accounts located in the US, UK, and Canada.

Summary

This article explains how to implement different PayPal integrations, discussing how to integrate the PayPal Website Payments Standard and Pro methods into a simple store.

Resources for Article: Further resources on this subject: Upgrading OpenCart [Article] Setting Payment Model in OpenCart [Article] OpenCart: Layout Structure [Article]

Getting on the IBus

Packt
03 Sep 2013
10 min read
(For more resources related to this topic, see here.)

Why NServiceBus?

Before diving in, we should take a moment to consider why NServiceBus might be a tool worth adding to your repertoire. If you're eager to get started, feel free to skip this section and come back later.

So what is NServiceBus? It's a powerful, extensible framework that will help you to leverage the principles of Service-oriented architecture (SOA) to create distributed systems that are more reliable, more extensible, more scalable, and easier to update.

That's all well and good, but if you're just picking up this book for the first time, why should you care? What problems does it solve? How will it make your life better? Ask yourself whether any of the following situations describe you:

- My code updates values in several tables in a transaction, which acquires locks on those tables, so it frequently runs into deadlocks under load. I've optimized all the queries that I can. The transaction keeps the database consistent but the user gets an ugly exception and has to retry what they were doing, which doesn't make them very happy.
- Our order processing system sometimes fails on the third of three database calls. The transaction rolls back and we log the error, but we're losing money because the end user doesn't know if their order went through or not, and they're not willing to retry for fear of being double charged, so we're losing business to our competitor.
- We built a system to process images for our clients. It worked fine for a while but now we've become a victim of our own success. We designed it to be multithreaded (which was no small feat!) but we already maxed out the original server it was running on, and at the rate we're adding clients it's only a matter of time until we max out this one too. We need to scale it out to run on multiple servers but have no idea how to do it.
- We have a solution that is integrating with a third-party web service, but when we call the web service we also need to update data in a local database. Sometimes the web service times out, so our database transaction rolls back, but sometimes the web service call does actually complete at the remote end, so now our local data and our third-party provider's data are out of sync.
- We're sending emails as part of a complex business process. It is designed to be retried in the event of a failure, but now customers are complaining that they're receiving duplicate emails, sometimes dozens of them. A failure occurs after the email is sent, the process is retried, and the email is sent over and over until the failure no longer occurs.
- I have a long-running process that gets kicked off from a web application. The website sits on an interstitial page while the backend process runs, similar to what you would see on a travel site when you search for plane tickets. This process is difficult to set up and fairly brittle. Sometimes the backend process fails to start and the web page just spins forever.
- We added latitude and longitude to our customer database, but now it is a nightmare to try to keep that information up-to-date. When a customer's address changes, there is nothing to make sure the location information is also recalculated. There are dozens of procedures that update the customer address, and not all of them are under our department's control.

If any of these situations has you nodding your head in agreement, I invite you to read on.
NServiceBus will help you to make multiple transactional updates utilizing the principle of eventual consistency so that you do not encounter deadlocks. It will ensure that valuable customer order data is not lost in the deep dark depths of a multi-megabyte log file. By the end of the book, you'll be able to build systems that can easily scale out, as well as up. You'll be able to reliably perform non-transactional tasks such as calling web services and sending emails. You will be able to easily start up long-running processes in an application server layer, leaving your web application free to process incoming requests, and you'll be able to unravel your spaghetti codebases into a logical system of commands, events, and handlers that will enable you to more easily add new features and version the existing ones.

You could try to do this all on your own by rolling your own messaging infrastructure and carefully applying the principles of service-oriented architecture, but that would be really time consuming. NServiceBus is the easiest solution to solve the aforementioned problems without having to expend too much effort to get it right, allowing you to put your focus on your business concerns, where it belongs. So if you're ready, let's get started creating an NServiceBus solution.

Getting the code

We will be covering a lot of information very quickly in this article, so if you see something that doesn't immediately make sense, don't panic! Once we have the basic example in place, we will loop back and explain some of the finer points more completely.

There are two main ways to get the NServiceBus code integrated with your project: by downloading the Windows Installer package, and via NuGet. I recommend you use Windows Installer the first time to ensure that your machine is set up properly to run NServiceBus, and then use NuGet to actually include the assemblies in your project. Windows Installer automates quite a bit of setup for you, all of which can be controlled through the advanced installation options:

- NServiceBus binaries, tools, and sample code are installed.
- The NServiceBus Management Service is installed to enable integration with ServiceInsight.
- Microsoft Message Queueing (MSMQ) is installed on your system if it isn't already. MSMQ provides the durable, transactional messaging that is at the core of NServiceBus.
- The Distributed Transaction Coordinator (DTC) is configured on your system. This will allow you to receive MSMQ messages and coordinate data access within a transactional context.
- RavenDB is installed, which provides the default persistence mechanism for NServiceBus subscriptions, timeouts, and saga data.
- NServiceBus performance counters are added to help you monitor NServiceBus performance.

Download the installer from http://particular.net/downloads and install it on your machine. After the install is complete, everything will be accessible from your Start Menu. Navigate to All Programs | Particular Software | NServiceBus as shown in the following screenshot:

The install package includes several samples that cover all the basics as well as several advanced features. The Video Store sample is a good starting point. Multiple versions of it are available for different message transports that are supported by NServiceBus. If you don't know which one to use, take a look at VideoStore.Msmq. I encourage you to work through all of the samples, but for now we are going to roll our own solution by pulling in the NServiceBus NuGet packages.
NServiceBus NuGet packages

Once your computer has been prepared for the first time, the most direct way to include NServiceBus within an application is to use the NuGet packages. There are four core NServiceBus NuGet packages:

- NServiceBus.Interfaces: This package contains only interfaces and abstractions, but not actual code or logic. This is the package that we will use for message assemblies.
- NServiceBus: This package contains the core assembly with most of the code that drives NServiceBus except for the hosting capability. This is the package we will reference when we host NServiceBus within our own process, such as in a web application.
- NServiceBus.Host: This package contains the service host executable. With the host we can run an NServiceBus service endpoint from the command line during development, and then install it as a Windows service for production use.
- NServiceBus.Testing: This package contains a framework for unit testing NServiceBus endpoints and sagas.

The NuGet packages will also attempt to verify that your system is properly prepared through PowerShell cmdlets that ship as part of the package. However, if you are not running Visual Studio as an Administrator, this can be problematic as the tasks they perform sometimes require elevated privileges. For this reason it's best to run Windows Installer before getting started.

Creating a message assembly

The first step to creating an NServiceBus system is to create a messages assembly. Messages in NServiceBus are simply plain old C# classes. Like the WSDL document of a web service, your message classes form a contract by which services communicate with each other.

For this example, let's pretend we're creating a website like many on the Internet, where users can join and become a member. We will construct our project so that the user is created in a backend service and not in the main code of the website. Follow these steps to create your solution:

1. In Visual Studio, create a new class library project. Name the project UserService.Messages and the solution simply Example. This first project will be your messages assembly.
2. Delete the Class1.cs file that came with the class project.
3. From the NuGet Package Manager Console, run this command to install the NServiceBus.Interfaces package, which will add the reference to NServiceBus.dll:
   PM> Install-Package NServiceBus.Interfaces -ProjectName UserService.Messages
4. Add a new folder to the project called Commands.
5. Add a new class to the Commands folder called CreateNewUserCmd.cs.
6. Add using NServiceBus; to the using block of the class file. It is very helpful to do this first so that you can see all of the options available with IntelliSense.
7. Mark the class as public and implement ICommand. This is a marker interface so there is nothing you need to implement.
8. Add the public properties for EmailAddress and Name. When you're done, your class should look like this:

   using System;
   using System.Collections.Generic;
   using System.Linq;
   using System.Text;
   using NServiceBus;

   namespace UserService.Messages.Commands
   {
       public class CreateNewUserCmd : ICommand
       {
           public string EmailAddress { get; set; }
           public string Name { get; set; }
       }
   }

Congratulations! You've created a message! This will form the communication contract between the message sender and receiver. Unfortunately, we don't have enough to run yet, so let's keep moving.

Creating a service endpoint

Now we're going to create a service endpoint that will handle our command message.

1. Add a new class library project to your solution.
2. Name the project UserService.
3. Delete the Class1.cs file that came with the class project.
4. From the NuGet Package Manager Console window, run this command to install the NServiceBus.Host package:
   PM> Install-Package NServiceBus.Host -ProjectName UserService
5. Take a look at what the host package has added to your class library. Don't worry; we'll cover this in more detail later.
   - References to NServiceBus.Host.exe, NServiceBus.Core.dll, and NServiceBus.dll
   - An App.config file
   - A class named EndpointConfig.cs
6. In the service project, add a reference to the UserService.Messages project you created before.
7. Right-click on the project file and click on Properties, then in the property pages, navigate to the Debug tab and enter NServiceBus.Lite under Command line arguments. This tells NServiceBus not to run the service in production mode while we're just testing. This may seem obvious, but this is part of the NServiceBus promise to be safe by default, meaning you won't be able to mess up when you go to install your service in production.
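The excerpt stops before the endpoint actually handles the command, but it may help to see where this is heading. The following is a minimal sketch, not taken from the article; it assumes the NServiceBus 4.x-era handler API (IHandleMessages&lt;T&gt; with a void Handle method), and the console logging is a stand-in for real user-creation logic.

```csharp
using System;
using NServiceBus;
using UserService.Messages.Commands;

namespace UserService
{
    // Handler classes are discovered automatically by the NServiceBus host;
    // this one runs whenever a CreateNewUserCmd message arrives at the endpoint.
    public class CreateNewUserHandler : IHandleMessages<CreateNewUserCmd>
    {
        public void Handle(CreateNewUserCmd message)
        {
            // Placeholder for the actual user-creation logic.
            Console.WriteLine("Creating user {0} <{1}>",
                message.Name, message.EmailAddress);
        }
    }
}
```

On the website side, the command would typically be dispatched through the bus instance (for example, bus.Send(new CreateNewUserCmd { Name = "...", EmailAddress = "..." })), which is the kind of wiring the article's later steps build toward.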

Introducing the Ember.JS framework

Packt
03 Sep 2013
5 min read
(For more resources related to this topic, see here.) Introduction to Ember.js Ember.js is a frontend MVC JavaScript framework that runs in the browser. It is for developers who are looking to build ambitious and large web applications that rival native applications. Ember.js was created from concepts introduced by native application frameworks, such as Cocoa. Ember.js helps you to create great experiences for the user. It will help you to organize all the direct interactions a user may perform on your website. A common use case for Ember.js is when you believe your JavaScript code will become complex; when the code base becomes complex, problems about maintaining and refactoring the code base will arise. MVC stands for model-view-controller. This kind of structure makes it easy to make modifications or refactor changes to any part of your code. It will also allow you to adhere to Don't Repeat Yourself (DRY) principles. The model is responsible for notifying associated views and controllers when there has been a change in the state of the application. The controller sends CRUD requests to the model to notify it of a change in state. It can also send requests to the view to change how the view is representing the current state of the model. The view will then receive information from the model to create a graphical rendering. If you are still unclear on how the three parts interact with each other, the following is a simple diagram illustrating this: Ember.js decouples the problematic areas of your frontend, enabling you to focus on one area at a time without worrying about affecting other parts of your application. To give you an example of some of these areas of Ember.js, take a look at the following list: Navigation : Ember's router takes care of your application's navigation Auto-updating templates : Ember view expressions are binding-aware, meaning they will update automatically if the underlying data ever changes Data handling : Each object you create will be an Ember object, thus inheriting all Ember.object methods Asynchronous behavior : Bindings and computed properties within Ember help manage asynchronous behavior Ember.js is more of a framework than a library. Ember.js expects you to build a good portion of your frontend around its methodologies and architecture, creating a solid application architecture once you are finished with it. This is the main difference between Ember and a framework like Angular.js. Angular allows itself to be incorporated into an existing application, whereas an Ember application would have had to have been planned out with its specific architecture in mind. Backbone.js would be another example of a library that can easily be inserted into existing JavaScript projects. Ember.js is a great framework for handling complex interactions performed by users in your application. You may have been led to believe that Ember.js is a difficult framework to learn, but this is false. The only difficulty for developers lies in understanding the concepts that Ember.js tries to implement. How to set up Ember.js The js folder contains a subfolder named libs and the app.js file. libs is for storing any external libraries that you will want to include into your application. app.js is the JavaScript file that contains your Ember application structure. index.html is a basic HTML index file that will display information in the user's browser. We will be using this file as the index page of the sample application that we will be creating. 
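Before walking through the individual pieces, it may help to see them in one place. The following app.js skeleton is a sketch rather than code from the article: it assumes the pre-1.0 Ember.js API of the time (Ember.Application.create(), an Ember.Router with nested root and index routes, and an explicit initialize() call), and the template name is illustrative. The explanation that follows walks through each of these pieces.

```javascript
// Minimal sketch of an app.js for the MovieTracker example (pre-1.0 Ember API assumed).
var MovieTracker = Ember.Application.create();

// Required rendering context for dynamic templates.
MovieTracker.ApplicationController = Ember.Controller.extend();

// The application view renders the template named in index.html.
MovieTracker.ApplicationView = Ember.View.extend({
  templateName: 'application'
});

// The router maps the browser URL to application state; root is the container set of routes.
MovieTracker.Router = Ember.Router.extend({
  root: Ember.Route.extend({
    index: Ember.Route.extend({
      route: '/'
    })
  })
});

// Instantiates the controllers and injects them onto the router.
MovieTracker.initialize();
```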
We create a namespace called MovieTracker where we can access any necessary Ember.js components. Initialize() will instantiate all the controllers currently available with the namespace. After that is done, it injects all the controllers onto a router. We then set ApplicationController as the rendering context of our views. Your application must have ApplicationController, otherwise your application will not be capable of rendering dynamic templates. Router in Ember is a subclass of the Ember StateManager. The Ember StateManager tracks the current active state and triggers callbacks when states have changed. This router will help you match the URL to an application state and detects the browser URL at application load time. The router is responsible for updating the URL as the application's state changes. When Ember parses the URL to determine the state, it attempts to find Ember.Route that matches this state. Our router must contain root and index. You can think of root as a general container for routes. It is a set of routes. An Ember view is responsible for structuring the page through the view's associated template. The view is also responsible for registering and responding to user events. ApplicationView we are creating is required for any Ember application. The view we created is associated with our ApplicationController as well. The templateName variable is the name we use in our index.html file. The templateName variable can be changed to anything you wish. Creating an Ember Object An object or a model is a way to manage data in a structured way. In other words, they are a way of representing persistent states in your application. In Ember.js, almost every object is derived from the Ember.Object class. Since most objects will be derived from the same base object, they will end up sharing properties with each other. This allows the observation and binding to properties of other objects.
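As a quick illustration of working with Ember.Object (not taken from the article; the Movie class and its properties are invented), note how get and set are used so that computed properties, observers, and bindings can react to changes:

```javascript
// A plain model-like object deriving from Ember.Object via extend().
MovieTracker.Movie = Ember.Object.extend({
  title: '',
  year: null,

  // A computed property that re-evaluates whenever title or year changes.
  displayTitle: function () {
    return this.get('title') + ' (' + this.get('year') + ')';
  }.property('title', 'year')
});

var movie = MovieTracker.Movie.create({ title: 'Metropolis', year: 1927 });
movie.get('displayTitle');   // "Metropolis (1927)"
movie.set('year', 2001);     // observers and bindings on year would now fire
```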

Quickstart – Creating an application

Packt
03 Sep 2013
7 min read
(For more resources related to this topic, see here.)

Step 1 – Planning the workflow

When you are writing a real application, you should start with the requirements for the application's functionality. For the blog example, these are described in the Getting Started: Requirements Analysis section at the very beginning of the tutorial; the direct URL is http://www.yiiframework.com/doc/blog/1.1/en/start.requirements. After you have written down all the desired features, you basically start implementing them one by one. Of course, in serious software development there are a lot of gotchas involved, but overall it's the same.

The blog example is a database-driven application, so we need to prepare a database schema beforehand. Here's what they came up with for the blog demo. This image is a verbatim copy from the blog example demo. Note that there are two links missing: the posts table has a tags field, which stores the tags in raw form and is not a foreign key to the tags table, and the author field in the comment table should really be a foreign key to the user table. Anyway, we will not cover the actual database generation here, and suggest you do it yourself. The blog tutorial at the Yii website has all the relevant instructions, addressed to total newbies. Next in this article we will see how easy it is with Yii to get a working user interface by which one will be able to manipulate our database.

Step 2 – Linking to the database from your app

Once you design and physically create the database in some database management system like MySQL or maybe SQLite, you are ready to configure your app to point to this database. The skeleton app generated by the ./yiic webapp command needs to be configured to point to this database. To do this, you need to set a db component in the main config file located at protected/config/main.php. There is a section that contains an array of components. Below is the setup for a MySQL database located on the same server as the web application itself. You will find a commented-out template for this already present when you generate your app.

/protected/config/main.php

'components'=>array(
    /* other components */
    'db'=>array(
        'connectionString' => 'mysql:host=localhost;dbname=DB_NAME',
        'emulatePrepare' => true,
        'username' => 'YOUR_USERNAME',
        'password' => 'YOUR_PASSWORD',
        'charset' => 'utf8',
    ),
    /* other components */
),

This is a default component having the class CDbConnection and is used by all of our ActiveRecord design patterns, which we will create later. As with all application components, all configuration parameters correspond to the public properties of the component's class, so you can check the API documentation for details. By the way, you really want to understand more about the main application config. Read about it in the Definitive Guide to Yii at the official website, under Fundamentals | Application | Application Configuration; the direct URL is http://www.yiiframework.com/doc/guide/1.1/en/basics.application#application-configuration. Just remember that all configuration parameters are just properties of the CWebApplication object, which you can read about in the API documentation; the direct URL is http://www.yiiframework.com/doc/api/1.1/CWebApplication.

Step 3 – Generating code automatically

Now that we have our app linked up to a fully built database, we can start using one of Yii's greatest features: automatic code generation.
To get started, there are two types of code generation that are necessary:

- Generate model classes based on the tables in your database
- Run the CRUD generator that takes a model and sets up a corresponding controller and set of views for basic listing, creating, viewing, updating, and deleting from the table

Console way

There are two ways to go about automatic code generation. Originally, there was only the yiic tool used earlier to create the skeleton app. For the automatic code generation features, you would use the yiic shell index.php command, which would bring up a command-line interface where you could run subcommands for modeling and scaffolding.

$ /usr/local/yii/framework/yiic shell index.php
Yii Interactive Tool v1.1 (based on Yii v1.1.13)
Please type 'help' for help. Type 'exit' to quit.
>> model Post tbl_post
   generate models/Post.php
   unchanged fixtures/tbl_post.php
   generate unit/PostTest.php
The following model classes are successfully generated:
    Post
If you have a 'db' database connection, you can test these models now with:
    $model=Post::model()->find(); print_r($model);
>> crud Post
   generate PostController.php
   generate PostTest.php
mkdir /var/www/app/protected/views/post
   generate create.php
   generate update.php
   generate index.php
   generate view.php

As you can see, this is a quick and easy way to perform the model and crud actions. The model command produces just two files: one for your actual model class and one for unit tests. The crud command creates your controller and view files.

Gii

Console tools may be the preferred option for some, but for developers who like to use graphical tools, there is now a solution for this, called Gii. To use Gii, it is necessary to turn it on in the main config file: protected/config/main.php. You will find the template for it already present, but it is commented out by default. Simply uncomment it, set your password, and decide from what hosts it may be accessed. The configuration looks like this:

'gii'=>array(
    'class'=>'system.gii.GiiModule',
    'password'=>'giiPassword',
    // If removed, Gii defaults to localhost only.
    // Edit carefully to taste.
    'ipFilters'=>array('127.0.0.1','::1'),
    // For development purposes,
    // a wildcard will allow access from anywhere.
    // 'ipFilters'=>array('*'),
),

Once Gii is configured, it can be accessed by navigating to the app URL with ?r=gii after it, for example, http://www.example.com/index.php?r=gii. It will begin with a prompt asking for the password set in the config file. Once entered, it will display a list of generators. If the database is not set in the config file, you will see an error when you attempt to use one.

The first and most basic generator in Gii is the model generator. It asks for a table name from the database and a name to be used for the PHP class. Note that we can specify a table name prefix, which will be ignored when generating the model class name. For instance, the blog demo's user table is tbl_user, where tbl_ is the prefix. This feature exists to support some setups, especially common in shared hosting environments, where a single database holds tables for several distinct applications. In such an environment, it's a common practice to prefix table names to avoid naming conflicts and to easily find the tables relevant to a specific application. As these prefixes don't mean anything in the application itself, Gii offers a way to ignore them automatically.
Model class names are constructed from the remaining table name by two obvious rules:

- An underscore is removed and the letter following it is uppercased
- The first letter of the class name is uppercased as well

The first step in getting your application off the ground is to generate models for all the entity tables in your database. Things like bridge tables will not need models, as they simply relate two entities to one another, rather than actually being a distinct thing. Bridge tables are used for generating relations between models, expressed in the relations() method of the model class. For the blog demo, the basic models are User, Post, Comment, Tag, and Lookup.

The second phase of scaffolding is to generate the CRUD code for each of these models. This will create a controller and a series of view templates. The controller (for example, PostController) will handle routing to actions related to the given model. The view files represent everything needed to list and view entities, as well as the forms needed to create and update individual entities.

Summary

In this article we created an application by following a series of steps such as planning the workflow, linking to the database from your app, and generating code automatically.

Resources for Article: Further resources on this subject: Database, Active Record, and Model Tricks [Article] Building multipage forms (Intermediate) [Article] Creating a Recent Comments Widget in Agile [Article]