
How-To Tutorials - Application Development


Asynchrony in Action

Packt
06 Mar 2013
17 min read
Asynchrony

When we talk about C# 5.0, the primary topic of conversation is the new asynchronous programming features. What does asynchrony mean? It can mean a few different things, but in our context it is simply the opposite of synchronous. When you break up the execution of a program into asynchronous blocks, you gain the ability to execute them side by side, in parallel, and parallel execution can bring performance improvements to a program.

The best way to put this into context is by way of an example, one that has been experienced all too often in the world of desktop software. Let's say you have an application that you are developing, and this software should fulfill the following requirements:

1. When the user clicks on a button, initiate a call to a web service.
2. Upon completion of the web service call, store the results into a database.
3. Finally, bind the results and display them to the user.

There are a number of problems with the naïve way of implementing this solution. The first is that many developers write code in such a way that the user interface is completely unresponsive while we are waiting to receive the results of the web service call. Then, once the results finally arrive, we continue to make the user wait while we store them in a database, an operation that the user does not care about in this case.

The primary vehicle for mitigating these kinds of problems in the past has been writing multithreaded code. This is of course nothing new, as multithreaded hardware has been around for many years, along with software capabilities to take advantage of it. However, most programming languages did not provide a very good abstraction layer on top of this hardware, often letting (or requiring) you to program directly against the hardware threads. Thankfully, Microsoft introduced a new library to simplify the task of writing highly concurrent programs, which is explained in the next section.

Task Parallel Library

The Task Parallel Library (TPL) was introduced in .NET 4.0 (along with C# 4.0). Two things are worth noting about it up front. First, it is a huge topic that cannot be examined properly in such a small space. Second, it is highly relevant to the new asynchrony features in C# 5.0, so much so that it is the literal foundation upon which they are built. So, in this section, we will cover the basics of the TPL, along with some background information about how and why it works.

The TPL introduces a new type, Task, which abstracts away the concept of "something that must be done" into an object. At first glance, you might think that this abstraction already exists in the form of the Thread class. While there are some similarities between Task and Thread, the implementations have quite different implications. With a Thread, you can program directly against the lowest level of parallelism supported by the operating system, as shown in the following code:

```csharp
Thread thread = new Thread(new ThreadStart(() =>
{
    Thread.Sleep(1000);
    Console.WriteLine("Hello, from the Thread");
}));
thread.Start();

Console.WriteLine("Hello, from the main thread");
thread.Join();
```

In the previous example, we create a new Thread, which when started will sleep for a second and then write out the text Hello, from the Thread.
After we call thread.Start(), the code on the main thread immediately continues and writes Hello, from the main thread. After a second, we see the text from the background thread printed to the screen.

In one sense, this example shows how easy it is to branch off execution to a background thread while allowing the main thread to continue, unimpeded. However, the problem with using the Thread class as your "concurrency primitive" is that the class itself is an indication of the implementation, which is to say, an operating system thread will be created. As far as abstractions go, it is not really an abstraction at all; your code must manage the lifecycle of the thread while at the same time dealing with the work the thread is executing. If you have multiple tasks to execute, spawning multiple threads can be disastrous, because the operating system can only spawn a finite number of them. For performance-intensive applications, a thread should be considered a heavyweight resource, which means you should avoid using too many of them and keep them alive for as long as possible.

As you might imagine, the designers of the .NET Framework did not simply leave you to program against this without any help. Early versions of the framework had a mechanism to deal with this in the form of the ThreadPool, which lets you queue up a unit of work and have the thread pool manage the lifecycle of a pool of threads. When a thread becomes available, your work item is executed. The following is a simple example of using the thread pool:

```csharp
int[] numbers = { 1, 2, 3, 4 };

foreach (var number in numbers)
{
    ThreadPool.QueueUserWorkItem(new WaitCallback(o =>
    {
        Thread.Sleep(500);
        string tabs = new String('\t', (int)o);
        Console.WriteLine("{0}processing #{1}", tabs, o);
    }), number);
}
```

This sample simulates multiple tasks that should be executed in parallel. We start with an array of numbers, and for each number we queue a work item that will sleep for half a second and then write to the console. This works much better than trying to manage multiple threads yourself, because the pool will take care of spawning more threads when there is more work. When the configured limit of concurrent threads is reached, it will hold work items until a thread becomes available to process them. This is all work that you would have done yourself if you were using threads directly.

However, the thread pool is not without its complications. First, it offers no way of synchronizing on completion of a work item. If you want to be notified when a job is completed, you have to code the notification yourself, whether by raising an event or by using a thread synchronization primitive, such as ManualResetEvent. You also have to be careful not to queue too many work items, or you may run into system limitations with the size of the thread pool.

With the TPL, we now have a concurrency primitive called Task. Consider the following code:

```csharp
Task task = Task.Factory.StartNew(() =>
{
    Thread.Sleep(1000);
    Console.WriteLine("Hello, from the Task");
});

Console.WriteLine("Hello, from the main thread");
task.Wait();
```

Upon first glance, the code looks very similar to the sample using Thread, but they are very different. One big difference is that with Task, you are not committing to an implementation.
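To make the contrast with the thread pool concrete, the following is a minimal sketch (not from the original article) of the hand-rolled completion notification mentioned above, using ManualResetEvent:

```csharp
using System;
using System.Threading;

class ThreadPoolCompletion
{
    static void Main()
    {
        // Signalled by the work item once it has finished.
        ManualResetEvent done = new ManualResetEvent(false);

        ThreadPool.QueueUserWorkItem(_ =>
        {
            Thread.Sleep(500);              // simulate some work
            Console.WriteLine("work item finished");
            done.Set();                     // the hand-rolled completion notification
        });

        done.WaitOne();                     // block until the work item signals
        Console.WriteLine("main thread can now continue");
    }
}
```

As we will see, Task gives us this kind of completion synchronization (and much more) for free.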
The TPL uses some very interesting algorithms behind the scenes to manage the workload and system resources, and in fact allows you to customize those algorithms through the use of custom schedulers and synchronization contexts. This gives you a high degree of control over the parallel execution of your programs.

Dealing with multiple tasks, as we did with the thread pool, is also easier because each task has synchronization features built in. To demonstrate how simple it is to quickly parallelize an arbitrary number of tasks, we start with the same array of integers shown in the previous thread pool example:

```csharp
int[] numbers = { 1, 2, 3, 4 };
```

Because Task can be thought of as a primitive type that represents an asynchronous task, we can think of it as data. This means that we can use things such as LINQ to project the numbers array to a list of tasks as follows:

```csharp
var tasks = numbers.Select(number =>
    Task.Factory.StartNew(() =>
    {
        Thread.Sleep(500);
        string tabs = new String('\t', number);
        Console.WriteLine("{0}processing #{1}", tabs, number);
    }));
```

And finally, if we wanted to wait until all of the tasks were done before continuing, we could easily do that by calling the following method:

```csharp
Task.WaitAll(tasks.ToArray());
```

Once the code reaches this method, it will wait until every task in the array completes before continuing. This level of control is very convenient, especially when you consider that, in the past, you would have had to depend on a number of different synchronization techniques to achieve what was accomplished here in just a few lines of TPL code.

With the usage patterns that we have discussed so far, there is still a big disconnect between the process that spawns a task and the child process. It is very easy to pass values into a background task, but the tricky part comes when you want to retrieve a value and then do something with it. Consider the following requirements:

1. Make a network call to retrieve some data.
2. Query the database for some configuration data.
3. Process the results of the network data, along with the configuration data.

Both the network call and the query to the database can be done in parallel. With what we have learned so far about tasks, this is not a problem. However, acting on the results of those tasks would be slightly more complex, if it were not for the fact that the TPL provides support for exactly this scenario.

There is an additional kind of task that is especially useful in cases like this, called Task<T>. This generic version of a task expects the running task to ultimately return a value whenever it finishes. Clients of the task can access the value through the task's .Result property. When you call that property, it will return immediately if the task is completed and the result is available. If the task is not done, however, it will block execution in the current thread until it is. Using this kind of task, which promises you a result, you can write your programs such that you can plan for and initiate the parallelism that is required, and handle the response in a very logical manner.
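As a quick illustration of that blocking behavior, here is a minimal sketch (a toy computation of our own, not from the article):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class ResultBlocking
{
    static void Main()
    {
        Task<int> answer = Task.Factory.StartNew(() =>
        {
            Thread.Sleep(1000);     // simulate slow work
            return 42;
        });

        Console.WriteLine("doing other work...");

        // Blocks the current thread until the task finishes,
        // then yields the computed value.
        Console.WriteLine("answer = {0}", answer.Result);
    }
}
```

Returning to the network-and-configuration scenario, we can now put two such tasks to work.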
Look at the following code:

```csharp
var webTask = Task.Factory.StartNew(() =>
{
    WebClient client = new WebClient();
    return client.DownloadString("http://bing.com");
});

var dbTask = Task.Factory.StartNew(() =>
{
    // do a lengthy database query
    return new { WriteToConsole = true };
});

if (dbTask.Result.WriteToConsole)
{
    Console.WriteLine(webTask.Result);
}
else
{
    ProcessWebResult(webTask.Result);
}
```

In the previous example, we have two tasks, webTask and dbTask, which will execute at the same time. webTask simply downloads the HTML from http://bing.com. Accessing resources over the Internet is notoriously flaky due to the dynamic nature of the network, so you never know how long the call is going to take. With dbTask, we are simulating access to a database to return some stored settings. Although in this simple example we are just returning a static anonymous type, database access will usually reach a different server over the network; again, this is an I/O-bound task, just like downloading something over the Internet.

Rather than waiting for both of them to finish, as we did with Task.WaitAll, we can simply access the .Result property of each task. If the task is done, the result is returned and execution continues; if not, the program simply waits until it is. This ability to write your code without manually dealing with task synchronization is great, because the fewer concepts a programmer has to keep in mind, the more attention can be devoted to the program itself.

If you are curious about where this concept of a task that returns a value comes from, look for resources pertaining to "futures" and "promises" at http://en.wikipedia.org/wiki/Promise_%28programming%29. At the simplest level, this is a construct that "promises" to give you a result in the "future", which is exactly what Task<T> does.

Task composability

Having a proper abstraction for asynchronous tasks makes it easier to coordinate multiple asynchronous activities. Once the first task has been initiated, the TPL allows you to compose a number of tasks together into a cohesive whole using what are called continuations. Look at the following code:

```csharp
Task<string> task = Task.Factory.StartNew(() =>
{
    WebClient client = new WebClient();
    return client.DownloadString("http://bing.com");
});

task.ContinueWith(webTask =>
{
    Console.WriteLine(webTask.Result);
});
```

Every task object has a .ContinueWith method, which lets you chain another task to it. This continuation task will begin execution once the first task is done. Unlike the previous example, where we relied on the .Result property to wait until the task was done, thus potentially holding up the main thread while it completed, the continuation runs asynchronously. This is a better approach for composing tasks, because you can write tasks that will not block the UI thread, which results in very responsive applications.

Task composability does not stop at continuations, though; the TPL also provides for scenarios where a task must launch a number of subtasks. You have the ability to control how completion of those child tasks affects the parent task.
In the following example, we will start a task, which will in turn launch a number of subtasks:

```csharp
int[] numbers = { 1, 2, 3, 4, 5, 6 };

var mainTask = Task.Factory.StartNew(() =>
{
    // create a new child task
    foreach (int num in numbers)
    {
        int n = num;
        Task.Factory.StartNew(() =>
        {
            Thread.SpinWait(1000);
            int multiplied = n * 2;
            Console.WriteLine("Child Task #{0}, result {1}", n, multiplied);
        });
    }
});

mainTask.Wait();
Console.WriteLine("done");
```

Each child task will write to the console, so that you can see how the child tasks behave along with the parent task. When you execute the previous program, it results in the following output:

```
Child Task #1, result 2
Child Task #2, result 4
done
Child Task #3, result 6
Child Task #6, result 12
Child Task #5, result 10
Child Task #4, result 8
```

Notice how, even though we called the .Wait() method on the outer task before writing done, the execution of the child tasks continues for a bit after the outer task has concluded. This is because, by default, child tasks are detached, which means their execution is not tied to the task that launched them.

An unrelated, but important, bit in the previous example code is that we assigned the loop variable to an intermediary variable before using it in the task:

```csharp
int n = num;
Task.Factory.StartNew(() =>
{
    int multiplied = n * 2;
```

This is related to the way closures work, and it addresses a common misconception about "passing in" values in a loop. Because the closure captures a reference to the variable rather than copying its value, the value seen by the task changes every time the loop iterates, and you will not get the behavior you expect. As you can see, an easy way to mitigate this is to copy the value into a local variable before passing it into the lambda expression; that way, the task does not hold a reference to an integer that changes before it is used.

You do, however, have the option of marking a child task as attached, as follows:

```csharp
Task.Factory.StartNew(
    () => DoSomething(),
    TaskCreationOptions.AttachedToParent);
```

The TaskCreationOptions enumeration has a number of different options. Specifically in this case, attaching a task to its parent means that the parent task will not complete until all child tasks are complete. Other options in TaskCreationOptions let you give hints and instructions to the task scheduler. From the documentation, the following are the descriptions of these options:

- None: This specifies that the default behavior should be used.
- PreferFairness: This is a hint to a TaskScheduler class to schedule a task in as fair a manner as possible, meaning that tasks scheduled sooner will be more likely to be run sooner, and tasks scheduled later will be more likely to be run later.
- LongRunning: This specifies that a task will be a long-running, coarse-grained operation. It provides a hint to the TaskScheduler class that oversubscription may be warranted.
- AttachedToParent: This specifies that a task is attached to a parent in the task hierarchy.
- DenyChildAttach: This specifies that an exception of the type InvalidOperationException will be thrown if an attempt is made to attach a child task to the created task.
- HideScheduler: This prevents the ambient scheduler from being seen as the current scheduler in the created task. This means that operations such as StartNew or ContinueWith that are performed in the created task will see Default as the current scheduler.

The best part about these options, and the way the TPL works, is that most of them are merely hints.
So you can suggest that a task you are starting is long-running, or that you would prefer tasks scheduled sooner to run first, but that does not guarantee it will be the case. The framework takes responsibility for completing the tasks in the most efficient manner, so if you prefer fairness but a task is taking too long, it will start executing other tasks to make sure it keeps using the available resources optimally.

Error handling with tasks

Error handling in the world of tasks needs special consideration. To review: when an exception is thrown, the CLR unwinds the stack frames looking for an appropriate try/catch handler that wants to handle the error. If the exception reaches the top of the stack, the application crashes. With asynchronous programs, though, there is not a single linear stack of execution, so when your code launches a task, it is not immediately obvious what will happen to an exception thrown inside of it. For example, look at the following code:

```csharp
Task t = Task.Factory.StartNew(() =>
{
    throw new Exception("fail");
});
```

This exception will not bubble up as an unhandled exception, and your application will not crash if you leave it unhandled in your code. It was in fact handled, but by the task machinery. However, if you call the .Wait() method, the exception will bubble up to the calling thread at that point. This is shown in the following example:

```csharp
try
{
    t.Wait();
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
}
```

When you execute this, it prints out the somewhat unhelpful message One or more errors occurred, rather than the fail message that is actually contained in the exception. This is because unhandled exceptions that occur in tasks are wrapped in an AggregateException, which you can handle specifically when dealing with task exceptions. Look at the following code:

```csharp
catch (AggregateException ex)
{
    foreach (var inner in ex.InnerExceptions)
    {
        Console.WriteLine(inner.Message);
    }
}
```

If you think about it, this makes sense: because tasks are composable with continuations and child tasks, a single aggregate exception is a great way to represent all of the errors raised by a task. If you would rather handle exceptions on a more granular level, you can also pass a special TaskContinuationOptions parameter, as follows:

```csharp
Task.Factory.StartNew(() =>
{
    throw new Exception("Fail");
}).ContinueWith(t =>
{
    // log the exception
    Console.WriteLine(t.Exception.ToString());
}, TaskContinuationOptions.OnlyOnFaulted);
```

This continuation task will only run if the task it was attached to faulted (for example, if there was an unhandled exception). Error handling is, of course, something that is often overlooked when developers write code, so it is important to be familiar with the various methods of handling exceptions in an asynchronous world.
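Because attached child tasks can themselves fault, the AggregateException you catch may contain nested AggregateException instances. Here is a minimal sketch (the failing work is invented for illustration) of using the Flatten method to walk every inner exception regardless of nesting:

```csharp
using System;
using System.Threading.Tasks;

class FlattenExample
{
    static void Main()
    {
        Task parent = Task.Factory.StartNew(() =>
        {
            // An attached child that faults; its exception is wrapped
            // into the parent task's AggregateException.
            Task.Factory.StartNew(
                () => { throw new Exception("child failed"); },
                TaskCreationOptions.AttachedToParent);
        });

        try
        {
            parent.Wait();
        }
        catch (AggregateException ex)
        {
            // Flatten() collapses nested AggregateExceptions into a single level.
            foreach (var inner in ex.Flatten().InnerExceptions)
            {
                Console.WriteLine(inner.Message);   // prints "child failed"
            }
        }
    }
}
```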

Classification and Regression Trees

Packt
02 Aug 2013
23 min read
Recursive partitions

The name of the rpart library package, shipped along with R, stands for Recursive Partitioning. The package was first created by Terry M Therneau and Beth Atkinson, and is currently maintained by Brian Ripley. We will first have a peek at what recursive partitions are.

A complex and contrived relationship is generally not identifiable by linear models. In the previous chapter, we saw the extensions of the linear models in piecewise, polynomial, and spline regression models. It is also well known that if the order of a model is larger than 4, then interpretation and usability of the model become more difficult.

We consider a hypothetical dataset, where we have two classes for the output Y and two explanatory variables in X1 and X2. The two classes are indicated by filled-in green circles and red squares. First, we will focus only on the left display of Figure 1: A complex classification dataset with partitions, as it is the actual depiction of the data. At the outset, it is clear that a linear model is not appropriate, as there is quite an overlap of the green and red indicators. However, there is a clear demarcation of the classification problem according to whether X1 is greater than 6 or not. In the area on the left side of X1 = 6, the mid-third region contains a majority of green circles and the rest are red squares. The red squares are predominantly found where the X2 values are either less than or equal to 3, or greater than 6. The green circles are the majority in the region where X2 is greater than 3 and less than 6. A similar story can be built for the points on the right side of X1 greater than 6. Here, we first partitioned the data according to X1 values, and then, within each partitioned region, we obtained partitions according to X2 values. This is the act of recursive partitioning.

Figure 1: A complex classification dataset with partitions

Let us obtain the preceding plot in R.

Time for action – partitioning the display plot

We first visualize the CART_Dummy dataset, and in the next subsection we look at how CART finds the patterns that are believed to exist in the data.

1. Obtain the dataset from the RSADBE package by using data(CART_Dummy).
2. Convert the binary output Y into a factor variable, and attach the data frame:

```r
CART_Dummy$Y <- as.factor(CART_Dummy$Y)
attach(CART_Dummy)
```

In Figure 1: A complex classification dataset with partitions, the red squares refer to 0 and the green circles to 1.

3. Initialize a two-panel graphics window by using par(mfrow=c(1,2)).
4. Create a blank scatter plot:

```r
plot(c(0,12), c(0,10), type = "n", xlab = "X1", ylab = "X2")
```

5. Plot the green circles and red squares:

```r
points(X1[Y==0], X2[Y==0], pch = 15, col = "red")
points(X1[Y==1], X2[Y==1], pch = 19, col = "green")
title(main = "A Difficult Classification Problem")
```

6. Repeat the previous two steps to obtain an identical plot on the right side of the graphics window.
7. First, partition according to X1 values by using abline(v=6, lwd=2).
8. Add segments on the graph with the segments function:

```r
segments(x0 = c(0,0,6,6), y0 = c(3.75,6.25,2.25,5),
         x1 = c(6,6,12,12), y1 = c(3.75,6.25,2.25,5), lwd = 2)
title(main = "Looks a Solvable Problem Under Partitions")
```

What just happened?

A complex problem is simplified through partitioning! A more generic function, segments, has nicely slipped into our program, which you may use for many other scenarios.

Now, this approach of recursive partitioning is not feasible all the time! Why?
We seldom deal with as few explanatory variables, or as few data points, as in the preceding hypothetical example. The question is how one creates a recursive partitioning of the dataset. Breiman et al. (1984) and Quinlan (1988) invented tree-building algorithms, and we will follow the Breiman et al. approach in the rest of the book. The CART discussion in this book is heavily influenced by Berk (2008).

Splitting the data

In the earlier discussion, we saw that partitioning the dataset can benefit a lot in reducing the noise in the data. The question is how one begins with it. The explanatory variables can be discrete or continuous. We will begin with the continuous (numeric objects in R) variables.

For a continuous variable, the task is a bit simpler. First, identify the unique distinct values of the numeric object. Let us say, for example, that the distinct values of a numeric object, say height in cms, are 160, 165, 170, 175, and 180. The data partitions are then obtained as follows:

```r
data[Height <= 160, ]; data[Height > 160, ]
data[Height <= 165, ]; data[Height > 165, ]
data[Height <= 170, ]; data[Height > 170, ]
data[Height <= 175, ]; data[Height > 175, ]
```

The reader should try to understand the rationale behind the code; certainly, this is just an indicative one.

Now, we consider the discrete variables. Here, we have two types of variables, namely categorical and ordinal. In the case of ordinal variables, we have an order among the distinct values. For example, in the case of the economic status variable, the order may be among the classes Very Poor, Poor, Average, Rich, and Very Rich. Here, the splits are similar to the case of a continuous variable, and if there are m distinct orders, we consider m − 1 distinct splits of the overall data. In the case of a categorical variable with m categories, the number of possible splits becomes 2^(m−1) − 1; for example, with the six departments A to F of the UCBAdmissions dataset, that is 2^5 − 1 = 31 candidate splits. However, the benefit of using software like R is that we do not have to worry about these issues.

The first tree

In the CART_Dummy dataset, we can easily visualize the partitions for Y as a function of the inputs X1 and X2. Obviously, we have a classification problem, and hence we will build the classification tree.

Time for action – building our first tree

The rpart function from the library rpart will be used to obtain the first classification tree. The tree will be visualized by using the plot options of rpart, and we will follow this up by extracting the rules of the tree with the asRules function from the rattle package.

1. Load the rpart package by using library(rpart).
2. Create the classification tree:

```r
CART_Dummy_rpart <- rpart(Y ~ X1 + X2, data = CART_Dummy)
```

3. Visualize the tree with appropriate text labels:

```r
plot(CART_Dummy_rpart); text(CART_Dummy_rpart)
```

Figure 2: A classification tree for the dummy dataset

Now, the classification tree flows as follows. Obviously, the tree produced by the rpart function does not partition as simply as we did in Figure 1: A complex classification dataset with partitions; its workings will be dealt with in the third section of this chapter. First, we check whether the value of the second variable X2 is less than 4.875. If the answer is yes, we move to the left side of the tree, and to the right side otherwise. Let us move to the right side. A second question is whether X1 is less than 4.5; if the answer is yes, the point is identified as a red square, and otherwise as a green circle.
You are now asked to interpret the left side of the first node. Let us look at the summary of CART_Dummy_rpart.

4. Apply summary, an S3 method, to the classification tree with summary(CART_Dummy_rpart).

That is a lot of output!

Figure 3: Summary of a classification tree

Our interest is in the nodes numbered 5 to 9. Why? The terminal nodes, of course! A terminal node is one in which we can't split the data any further; for the classification problem, we arrive at a class assignment as the class that has a majority count at the node. The summary shows that there are indeed some misclassifications too. Now, wouldn't it be great if R gave us the terminal nodes as rules? The asRules function from the rattle package extracts the rules from an rpart object. Let's do it!

5. Invoke the rattle package with library(rattle), and extract the rules from the terminal nodes with asRules(CART_Dummy_rpart).

The result is the following set of rules:

Figure 4: Extracting "rules" from a tree!

We can see that the classification tree is not according to our bird's-eye partitioning. However, as a final aspect of our initial understanding, let us plot the segments in the naïve way. That is, we will partition the data display according to the terminal nodes of the CART_Dummy_rpart tree. The R code is given right away, though you should make an effort to find the logic behind it. Of course, it is very likely that you will first need to re-run some of the code given previously.

```r
abline(h = 4.875, lwd = 2)
segments(x0 = 4.5, y0 = 4.875, x1 = 4.5, y1 = 10, lwd = 2)
abline(h = 1.75, lwd = 2)
segments(x0 = 3.5, y0 = 1.75, x1 = 3.5, y1 = 4.875, lwd = 2)
title(main = "Classification Tree on the Data Display")
```

It can be easily seen from the following that rpart works really well:

Figure 5: The terminal nodes on the original display of the data

What just happened?

We obtained our first classification tree, which is a good thing. Given the actual data display, the classification tree gives satisfactory answers. We have understood the "how" part of a classification tree. The "why" aspect is very vital in science, and the next section explains the science behind the construction of a regression tree; it will be followed later by a detailed explanation of the working of a classification tree.

The construction of a regression tree

In the CART_Dummy dataset, the output is a categorical variable, and we built a classification tree for it. The same distinction is required in CART: we build classification trees for binary random variables, whereas regression trees are for continuous random variables. Recall the rationale behind the estimation of regression coefficients for the linear regression model. The main goal was to find the estimates of the regression coefficients that minimize the error sum of squares between the actual regressand values and the fitted values. A similar approach is followed here, in the sense that we need to split the data at the points that keep the residual sum of squares to a minimum. That is, for each unique value of a predictor that is a candidate for the node value, we find the sum of squares of the y's within each partition of the data, and then add them up. This step is performed for each unique value of the predictor, and the value that leads to the least sum of squares among all the candidates is selected as the best split point for that predictor.
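Stated formally (the notation here is ours, not the book's): for a numeric predictor $x$ and a candidate split point $s$, the quality of the split is the total within-partition sum of squares

$$
\mathrm{SS}(s) = \sum_{i:\,x_i \le s} \left(y_i - \bar{y}_L\right)^2 + \sum_{i:\,x_i > s} \left(y_i - \bar{y}_R\right)^2,
$$

where $\bar{y}_L$ and $\bar{y}_R$ are the means of the regressand over the left and right partitions. The best split point for that predictor is the $s$ that minimizes $\mathrm{SS}(s)$.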
In the next step, we find the best split points for each of the predictors, and then the best split is selected from among the predictors' best split points. Easy! Now, the data is partitioned into two parts according to the best split. The process of finding the best split within each partition is repeated in the same spirit as for the first split. This process is carried out in a recursive fashion until the data can't be partitioned any further. What is happening here? The residual sum of squares at each child node will be lesser than that in the parent node.

At the outset, we note that the rpart function does exactly this. However, for a cleaner understanding of the regression tree, we will write raw R code and ensure that there is no ambiguity in the process of understanding CART. We will begin with a simple example of a regression tree, and use the rpart function to plot the regression tree. Then, we will define a function that extracts the best split given a covariate and the dependent variable. This action will be repeated for all the available covariates, and then we will find the best overall split. This will be verified against the regression tree. The data will then be partitioned by using the best overall split, and the best split will be identified for each of the partitioned datasets. The process will be repeated until we reach the end of the complete regression tree given by rpart. First, the experiment!

The cpus dataset, available in the MASS package, contains the relative performance measure of 209 CPUs in the perf variable. It is known that the performance of a CPU depends on factors such as the cycle time in nanoseconds (syct), the minimum and maximum main memory in kilobytes (mmin and mmax), the cache size in kilobytes (cach), and the minimum and maximum number of channels (chmin and chmax). The task in hand is to model perf as a function of syct, mmin, mmax, cach, chmin, and chmax. The histogram of perf (try hist(cpus$perf)) will show a highly skewed distribution, and hence we will build a regression tree for the logarithmic transformation log10(perf).

Time for action – the construction of a regression tree

A regression tree is first built by using the rpart function. The getNode function is then introduced, which helps in identifying the split node at each stage; using it, we build a regression tree and verify that we get the same tree as returned by the rpart function.

1. Load the MASS library by using library(MASS).
2. Create the regression tree for the logarithm (to the base 10) of perf as a function of the covariates explained earlier, and display the regression tree:

```r
cpus.ltrpart <- rpart(log10(perf) ~ syct + mmin + mmax + cach + chmin + chmax,
                      data = cpus)
plot(cpus.ltrpart); text(cpus.ltrpart)
```

The regression tree is as follows:

Figure 6: Regression tree for the "perf" of a CPU

We will now define the getNode function. Given the regressand and a covariate, we need to find the best split in the sense of the sum of squares criterion. The evaluation needs to be done for every distinct value of the covariate. If there are m distinct points, we need m − 1 evaluations. At each distinct point, the regressand needs to be partitioned accordingly, and the sum of squares should be obtained for each partition. The two sums of squares (one for each part) are then added to obtain the reduced sum of squares. Thus, we create the required function to meet all these requirements.
3. Create the getNode function in R by running the following code:

```r
getNode <- function(x, y) {
  xu <- sort(unique(x), decreasing = TRUE)
  ss <- numeric(length(xu) - 1)
  for (i in 1:length(ss)) {
    partR <- y[x > xu[i]]
    partL <- y[x <= xu[i]]
    partRSS <- sum((partR - mean(partR))^2)
    partLSS <- sum((partL - mean(partL))^2)
    ss[i] <- partRSS + partLSS
  }
  return(list(xnode = xu[which(ss == min(ss, na.rm = TRUE))],
              minss = min(ss, na.rm = TRUE), ss, xu))
}
```

The getNode function gives the best split for a given covariate. It returns a list consisting of four objects:

- xnode, a datum of the covariate x that gives the minimum residual sum of squares for the regressand y
- The value of the minimum residual sum of squares
- The vector of the residual sums of squares for the distinct points of the vector x
- The vector of the distinct x values

We will run this function for each of the six covariates, and find the best overall split. The argument na.rm=TRUE is required, as at the maximum value of x we won't get a numeric value.

4. We will first execute the getNode function on the syct covariate, and look at the output we get as a result:

```
> getNode(cpus$syct, log10(cpus$perf))$xnode
[1] 48
> getNode(cpus$syct, log10(cpus$perf))$minss
[1] 24.72
> getNode(cpus$syct, log10(cpus$perf))[[3]]
 [1] 43.12 42.42 41.23 39.93 39.44 37.54 37.23 36.87 36.51 36.52 35.92 34.91
[13] 34.96 35.10 35.03 33.65 33.28 33.49 33.23 32.75 32.96 31.59 31.26 30.86
[25] 30.83 30.62 29.85 30.90 31.15 31.51 31.40 31.50 31.23 30.41 30.55 28.98
[37] 27.68 27.55 27.44 26.80 25.98 27.45 28.05 28.11 28.66 29.11 29.81 30.67
[49] 28.22 28.50 24.72 25.22 26.37 28.28 29.10 33.02 34.39 39.05 39.29
> getNode(cpus$syct, log10(cpus$perf))[[4]]
 [1] 1500 1100  900  810  800  700  600  480  400  350  330  320  300  250  240
[16]  225  220  203  200  185  180  175  167  160  150  143  140  133  125  124
[31]  116  115  112  110  105  100   98   92   90   84   75   72   70   64   60
[46]   59   57   56   52   50   48   40   38   35   30   29   26   25   23   17
```

The least sum of squares at a split for the syct variable is 24.72, and it occurs at a syct value greater than 48. The third and fourth list objects given by getNode contain, respectively, the sums of squares for the potential candidates and the unique values of syct; the values of interest here are 24.72 and 48. We will look at the second object of the list output for all six covariates to find the best split among the best splits of each of the variables, by the residual sum of squares criterion.

5. Now, run the getNode function for the remaining five covariates:

```r
getNode(cpus$syct,  log10(cpus$perf))[[2]]
getNode(cpus$mmin,  log10(cpus$perf))[[2]]
getNode(cpus$mmax,  log10(cpus$perf))[[2]]
getNode(cpus$cach,  log10(cpus$perf))[[2]]
getNode(cpus$chmin, log10(cpus$perf))[[2]]
getNode(cpus$chmax, log10(cpus$perf))[[2]]
getNode(cpus$cach,  log10(cpus$perf))[[1]]
sort(getNode(cpus$cach, log10(cpus$perf))[[4]], decreasing = FALSE)
```

The output is as follows:

Figure 7: Obtaining the best "first split" of the regression tree

The sum of squares for cach is the lowest, and hence we need to find the best split associated with it, which is 24. However, the regression tree shows that the best split is at a cach value of 27. The getNode function says that the best split occurs at a point greater than 24, and hence we take the average of 24 and the next unique point, 30. Having obtained the best overall split, we next obtain the first partition of the dataset.
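As an aside, the six repeated calls above can be collapsed into a single line. A minimal sketch, assuming the getNode function and the cpus data frame defined earlier:

```r
covariates <- c("syct", "mmin", "mmax", "cach", "chmin", "chmax")
# Minimum residual sum of squares achievable by splitting on each covariate
sapply(covariates, function(v) getNode(cpus[[v]], log10(cpus$perf))$minss)
```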
6. Partition the data by using the best overall split point:

```r
cpus_FS_R <- cpus[cpus$cach >= 27, ]
cpus_FS_L <- cpus[cpus$cach < 27, ]
```

The new names of the data objects are clear, with _FS_R indicating the dataset obtained on the right side of the first split, and _FS_L indicating the left side. In the rest of the section, the nomenclature won't be explained further.

7. Identify the best split in each of the partitioned datasets:

```r
getNode(cpus_FS_R$syct,  log10(cpus_FS_R$perf))[[2]]
getNode(cpus_FS_R$mmin,  log10(cpus_FS_R$perf))[[2]]
getNode(cpus_FS_R$mmax,  log10(cpus_FS_R$perf))[[2]]
getNode(cpus_FS_R$cach,  log10(cpus_FS_R$perf))[[2]]
getNode(cpus_FS_R$chmin, log10(cpus_FS_R$perf))[[2]]
getNode(cpus_FS_R$chmax, log10(cpus_FS_R$perf))[[2]]
getNode(cpus_FS_R$mmax,  log10(cpus_FS_R$perf))[[1]]
sort(getNode(cpus_FS_R$mmax, log10(cpus_FS_R$perf))[[4]], decreasing = FALSE)

getNode(cpus_FS_L$syct,  log10(cpus_FS_L$perf))[[2]]
getNode(cpus_FS_L$mmin,  log10(cpus_FS_L$perf))[[2]]
getNode(cpus_FS_L$mmax,  log10(cpus_FS_L$perf))[[2]]
getNode(cpus_FS_L$cach,  log10(cpus_FS_L$perf))[[2]]
getNode(cpus_FS_L$chmin, log10(cpus_FS_L$perf))[[2]]
getNode(cpus_FS_L$chmax, log10(cpus_FS_L$perf))[[2]]
getNode(cpus_FS_L$mmax,  log10(cpus_FS_L$perf))[[1]]
sort(getNode(cpus_FS_L$mmax, log10(cpus_FS_L$perf))[[4]], decreasing = FALSE)
```

The following screenshot gives the results of running the preceding R code:

Figure 8: Obtaining the next two splits

Thus, for the first right-partitioned data, the best split is for the mmax value at the midpoint between 24000 and 32000, that is, at mmax = 28000. Similarly, for the first left-partitioned data, the best split is the average of 6000 and 6200, which is 6100, for the same mmax covariate. Note an important point here: even though we used cach as the criterion for the first partition, it is still considered as a candidate within the two partitioned datasets. The results are consistent with the display given by the regression tree, Figure 6: Regression tree for the "perf" of a CPU. The next R program will take care of all further partitions on the first split's right side.
8. Partition the first right part, cpus_FS_R, and obtain the best split for cpus_FS_R_SS_R and cpus_FS_R_SS_L by running the following code:

```r
cpus_FS_R_SS_R <- cpus_FS_R[cpus_FS_R$mmax >= 28000, ]
cpus_FS_R_SS_L <- cpus_FS_R[cpus_FS_R$mmax < 28000, ]

getNode(cpus_FS_R_SS_R$syct,  log10(cpus_FS_R_SS_R$perf))[[2]]
getNode(cpus_FS_R_SS_R$mmin,  log10(cpus_FS_R_SS_R$perf))[[2]]
getNode(cpus_FS_R_SS_R$mmax,  log10(cpus_FS_R_SS_R$perf))[[2]]
getNode(cpus_FS_R_SS_R$cach,  log10(cpus_FS_R_SS_R$perf))[[2]]
getNode(cpus_FS_R_SS_R$chmin, log10(cpus_FS_R_SS_R$perf))[[2]]
getNode(cpus_FS_R_SS_R$chmax, log10(cpus_FS_R_SS_R$perf))[[2]]
getNode(cpus_FS_R_SS_R$cach,  log10(cpus_FS_R_SS_R$perf))[[1]]
sort(getNode(cpus_FS_R_SS_R$cach, log10(cpus_FS_R_SS_R$perf))[[4]],
     decreasing = FALSE)

getNode(cpus_FS_R_SS_L$syct,  log10(cpus_FS_R_SS_L$perf))[[2]]
getNode(cpus_FS_R_SS_L$mmin,  log10(cpus_FS_R_SS_L$perf))[[2]]
getNode(cpus_FS_R_SS_L$mmax,  log10(cpus_FS_R_SS_L$perf))[[2]]
getNode(cpus_FS_R_SS_L$cach,  log10(cpus_FS_R_SS_L$perf))[[2]]
getNode(cpus_FS_R_SS_L$chmin, log10(cpus_FS_R_SS_L$perf))[[2]]
getNode(cpus_FS_R_SS_L$chmax, log10(cpus_FS_R_SS_L$perf))[[2]]
getNode(cpus_FS_R_SS_L$cach,  log10(cpus_FS_R_SS_L$perf))[[1]]
sort(getNode(cpus_FS_R_SS_L$cach, log10(cpus_FS_R_SS_L$perf))[[4]],
     decreasing = FALSE)
```

For the cpus_FS_R_SS_R part, the final division is according to whether cach is greater than 56 or not (the average of 48 and 64). If the cach value in this partition is greater than 56, then log10(perf) ends in terminal leaf 3, else in 2. However, for the region cpus_FS_R_SS_L, we partition the data further by whether the cach value is greater than 96.5 (the average of 65 and 128). On the right side of this region, log10(perf) is found to be 2, and a third-level split is required for cpus_FS_R_SS_L with cpus_FS_R_SS_L_TS_L. Note that, though the final terminal leaves of the cpus_FS_R_SS_L_TS_L region show the same final log10(perf) of 2, this may still result in a significant reduction of the variability between the predicted and the actual log10(perf) values. We will now focus on the first main split's left side.

Figure 9: Partitioning the right partition after the first main split

9. Partition cpus_FS_L according to whether the mmax value is greater than 6100 or not:

```r
cpus_FS_L_SS_R <- cpus_FS_L[cpus_FS_L$mmax >= 6100, ]
cpus_FS_L_SS_L <- cpus_FS_L[cpus_FS_L$mmax < 6100, ]
```

The rest of the partitioning for cpus_FS_L is given next in full.
10. The details will be skipped, and the R program is given right away:

```r
getNode(cpus_FS_L_SS_R$syct,  log10(cpus_FS_L_SS_R$perf))[[2]]
getNode(cpus_FS_L_SS_R$mmin,  log10(cpus_FS_L_SS_R$perf))[[2]]
getNode(cpus_FS_L_SS_R$mmax,  log10(cpus_FS_L_SS_R$perf))[[2]]
getNode(cpus_FS_L_SS_R$cach,  log10(cpus_FS_L_SS_R$perf))[[2]]
getNode(cpus_FS_L_SS_R$chmin, log10(cpus_FS_L_SS_R$perf))[[2]]
getNode(cpus_FS_L_SS_R$chmax, log10(cpus_FS_L_SS_R$perf))[[2]]
getNode(cpus_FS_L_SS_R$syct,  log10(cpus_FS_L_SS_R$perf))[[1]]
sort(getNode(cpus_FS_L_SS_R$syct, log10(cpus_FS_L_SS_R$perf))[[4]],
     decreasing = FALSE)

getNode(cpus_FS_L_SS_L$syct,  log10(cpus_FS_L_SS_L$perf))[[2]]
getNode(cpus_FS_L_SS_L$mmin,  log10(cpus_FS_L_SS_L$perf))[[2]]
getNode(cpus_FS_L_SS_L$mmax,  log10(cpus_FS_L_SS_L$perf))[[2]]
getNode(cpus_FS_L_SS_L$cach,  log10(cpus_FS_L_SS_L$perf))[[2]]
getNode(cpus_FS_L_SS_L$chmin, log10(cpus_FS_L_SS_L$perf))[[2]]
getNode(cpus_FS_L_SS_L$chmax, log10(cpus_FS_L_SS_L$perf))[[2]]
getNode(cpus_FS_L_SS_L$mmax,  log10(cpus_FS_L_SS_L$perf))[[1]]
sort(getNode(cpus_FS_L_SS_L$mmax, log10(cpus_FS_L_SS_L$perf))[[4]],
     decreasing = FALSE)

cpus_FS_L_SS_R_TS_R <- cpus_FS_L_SS_R[cpus_FS_L_SS_R$syct < 360, ]
getNode(cpus_FS_L_SS_R_TS_R$syct,  log10(cpus_FS_L_SS_R_TS_R$perf))[[2]]
getNode(cpus_FS_L_SS_R_TS_R$mmin,  log10(cpus_FS_L_SS_R_TS_R$perf))[[2]]
getNode(cpus_FS_L_SS_R_TS_R$mmax,  log10(cpus_FS_L_SS_R_TS_R$perf))[[2]]
getNode(cpus_FS_L_SS_R_TS_R$cach,  log10(cpus_FS_L_SS_R_TS_R$perf))[[2]]
getNode(cpus_FS_L_SS_R_TS_R$chmin, log10(cpus_FS_L_SS_R_TS_R$perf))[[2]]
getNode(cpus_FS_L_SS_R_TS_R$chmax, log10(cpus_FS_L_SS_R_TS_R$perf))[[2]]
getNode(cpus_FS_L_SS_R_TS_R$chmin, log10(cpus_FS_L_SS_R_TS_R$perf))[[1]]
sort(getNode(cpus_FS_L_SS_R_TS_R$chmin, log10(cpus_FS_L_SS_R_TS_R$perf))[[4]],
     decreasing = FALSE)
```

The output is shown in the following screenshot:

Figure 10: Partitioning the left partition after the first main split

We leave it to you to interpret the output arising from the previous action.

What just happened?

Using the rpart function from the rpart library, we first built the regression tree for log10(perf). Then, we explored the basic definitions underlying the construction of a regression tree and defined the getNode function to obtain the best split for a pair of a regressand and a covariate. This function was applied to all the covariates, and the best overall split was obtained; using this, we got our first partition of the data, which was in agreement with the tree given by the rpart function. We then recursively partitioned the data by using the getNode function and verified that all the best splits in each partitioned dataset were in agreement with those provided by the rpart function.

You may wonder whether the preceding tedious task was really essential. However, it has been the author's experience that readers seldom remember the rationale behind directly using code or functions for any software after some time. Moreover, CART is a difficult concept, and it is imperative that we clearly understand our first tree, returning to the preceding program whenever the understanding of the science behind CART fades.

Summary

We began with the idea of recursive partitioning and gave a legitimate reason as to why such an approach is practical. The CART technique is completely demystified by using the getNode function, which has been defined appropriately depending upon whether we require a regression or a classification tree.

The EventBus Class

Packt
23 Sep 2013
14 min read
When developing software, the idea of objects sharing information or collaborating with each other is a must. The difficulty lies in ensuring that communication between objects is done effectively, but not at the cost of having highly coupled components. Objects are considered highly coupled when they have too much detail about other components' responsibilities. When we have high coupling in an application, maintenance becomes very challenging, as any change can have a rippling effect. To help us cope with this software design issue, we have event-based programming. In event-based programming, objects can either subscribe/listen for specific events, or publish events to be consumed. In Java, we have had the idea of event listeners for some time. An event listener is an object whose purpose is to be notified when a specific event occurs.

In this article, we are going to discuss the Guava EventBus class and how it facilitates the publishing and subscribing of events. The EventBus class will allow us to achieve the level of collaboration we desire, while doing so in a manner that results in virtually no coupling between objects. It's worth noting that EventBus is a lightweight, in-process publish/subscribe style of communication, and is not meant for inter-process communication. We are going to cover several classes in this article that have an @Beta annotation, indicating that the functionality of the class may be subject to change in future releases of Guava.

EventBus

The EventBus class (found in the com.google.common.eventbus package) is the focal point for establishing the publish/subscribe programming paradigm with Guava. At a very high level, subscribers register with EventBus to be notified of particular events, and publishers send events to EventBus for distribution to interested subscribers. All the subscribers are notified serially, so it's important that any code performed in an event-handling method executes quickly.

Creating an EventBus instance

Creating an EventBus instance is accomplished by merely making a call to the EventBus constructor:

```java
EventBus eventBus = new EventBus();
```

We could also provide an optional string argument to create an identifier (for logging purposes) for the EventBus:

```java
EventBus eventBus = new EventBus(TradeAccountEvent.class.getName());
```

Subscribing to events

The following three steps are required for an object to receive notifications from EventBus:

1. The object needs to define a public method that accepts only one argument. The argument should be of the event type for which the object is interested in receiving notifications.
2. The method exposed for an event notification is annotated with the @Subscribe annotation.
3. Finally, the object registers with an instance of EventBus, passing itself as an argument to the EventBus.register method.

Posting the events

To post an event, we need to pass an event object to the EventBus.post method. EventBus will call the handler methods of any registered subscribers that take arguments assignable from the event object's type. This is a very powerful concept, because interfaces, superclasses, and interfaces implemented by superclasses are included, meaning we can easily make our event handlers as coarse- or fine-grained as we want, simply by changing the type accepted by the event-handling method.

Defining handler methods

Methods used as event handlers must accept only one argument, the event object.
As mentioned before, EventBus will call event-handling methods serially, so it's important that those methods complete quickly. If any extended processing needs to be done as a result of receiving an event, it's best to run that code in a separate thread.

Concurrency

EventBus will not call a handler method from multiple threads, unless the method is marked with the @AllowConcurrentEvents annotation. By marking a handler method with @AllowConcurrentEvents, we are asserting that the method is thread-safe. Note that annotating a handler method with @AllowConcurrentEvents by itself will not register the method with EventBus.

Now that we have defined how we can use EventBus, let's look at some examples.

Subscribe – an example

Let's assume we have defined the following TradeAccountEvent class:

```java
public class TradeAccountEvent {

    private double amount;
    private Date tradeExecutionTime;
    private TradeType tradeType;
    private TradeAccount tradeAccount;

    public TradeAccountEvent(TradeAccount account, double amount,
                             Date tradeExecutionTime, TradeType tradeType) {
        checkArgument(amount > 0.0, "Trade can't be less than zero");
        this.amount = amount;
        this.tradeExecutionTime = checkNotNull(tradeExecutionTime,
                "ExecutionTime can't be null");
        this.tradeAccount = checkNotNull(account, "Account can't be null");
        this.tradeType = checkNotNull(tradeType, "TradeType can't be null");
    }
    // Details left out for clarity
}
```

So, whenever a buy or sell transaction is executed, we will create an instance of the TradeAccountEvent class. Now let's assume we need to audit the trades as they are being executed, so we have the SimpleTradeAuditor class:

```java
public class SimpleTradeAuditor {

    private List<TradeAccountEvent> tradeEvents = Lists.newArrayList();

    public SimpleTradeAuditor(EventBus eventBus) {
        eventBus.register(this);
    }

    @Subscribe
    public void auditTrade(TradeAccountEvent tradeAccountEvent) {
        tradeEvents.add(tradeAccountEvent);
        System.out.println("Received trade " + tradeAccountEvent);
    }
}
```

Let's quickly walk through what is happening here. In the constructor, we receive an instance of an EventBus class and immediately register the SimpleTradeAuditor class with the EventBus instance to receive notifications of TradeAccountEvents. We have designated auditTrade as the event-handling method by placing the @Subscribe annotation on the method. In this case, we simply add the TradeAccountEvent object to a list and print an acknowledgement to the console that we received the trade.

Event publishing – an example

Now let's take a look at a simple event publishing example.
For executing our trades, we have the following class:

```java
public class SimpleTradeExecutor {

    private EventBus eventBus;

    public SimpleTradeExecutor(EventBus eventBus) {
        this.eventBus = eventBus;
    }

    public void executeTrade(TradeAccount tradeAccount, double amount,
                             TradeType tradeType) {
        TradeAccountEvent tradeAccountEvent =
                processTrade(tradeAccount, amount, tradeType);
        eventBus.post(tradeAccountEvent);
    }

    private TradeAccountEvent processTrade(TradeAccount tradeAccount,
                                           double amount, TradeType tradeType) {
        Date executionTime = new Date();
        String message = String.format(
                "Processed trade for %s of amount %n type %s @ %s",
                tradeAccount, amount, tradeType, executionTime);
        TradeAccountEvent tradeAccountEvent = new TradeAccountEvent(
                tradeAccount, amount, executionTime, tradeType);
        System.out.println(message);
        return tradeAccountEvent;
    }
}
```

Like the SimpleTradeAuditor class, we take an instance of the EventBus class in the SimpleTradeExecutor constructor. But unlike the SimpleTradeAuditor class, we store a reference to the EventBus for later use. While this may seem obvious to most, it is critical that the same instance be passed to both classes. We will see in future examples how to use multiple EventBus instances, but in this case we are using a single instance.

Our SimpleTradeExecutor class has one public method, executeTrade, which accepts all of the information required to process a trade in our simple example. In this case, we call the processTrade method, passing along the required information, print to the console that our trade was executed, and return a TradeAccountEvent instance. Once the processTrade method completes, we make a call to EventBus.post with the returned TradeAccountEvent instance, which will notify any subscribers of the TradeAccountEvent object. If we take a quick view of both our publishing and subscribing examples, we see that although both classes participate in the sharing of required information, neither has any knowledge of the other.

Finer-grained subscribing

We have just seen examples of publishing and subscribing using the EventBus class. If we recall, EventBus publishes events based on the type accepted by the subscribed method. This gives us some flexibility to send events to different subscribers by type. For example, let's say we want to audit the buy and sell trades separately. First, let's create two separate types of events:

```java
public class SellEvent extends TradeAccountEvent {

    public SellEvent(TradeAccount tradeAccount, double amount,
                     Date tradeExecutionTime) {
        super(tradeAccount, amount, tradeExecutionTime, TradeType.SELL);
    }
}

public class BuyEvent extends TradeAccountEvent {

    public BuyEvent(TradeAccount tradeAccount, double amount,
                    Date tradeExecutionTime) {
        super(tradeAccount, amount, tradeExecutionTime, TradeType.BUY);
    }
}
```

Now we have created two discrete event classes, SellEvent and BuyEvent, both of which extend the TradeAccountEvent class. To enable separate auditing, we will first create a class for auditing SellEvent instances:

```java
public class TradeSellAuditor {

    private List<SellEvent> sellEvents = Lists.newArrayList();

    public TradeSellAuditor(EventBus eventBus) {
        eventBus.register(this);
    }

    @Subscribe
    public void auditSell(SellEvent sellEvent) {
        sellEvents.add(sellEvent);
        System.out.println("Received SellEvent " + sellEvent);
    }

    public List<SellEvent> getSellEvents() {
        return sellEvents;
    }
}
```

Here we see functionality that is very similar to the SimpleTradeAuditor class, with the exception that this class will only receive SellEvent instances.
Then we will create a class for auditing only the BuyEvent instances:

```java
public class TradeBuyAuditor {

    private List<BuyEvent> buyEvents = Lists.newArrayList();

    public TradeBuyAuditor(EventBus eventBus) {
        eventBus.register(this);
    }

    @Subscribe
    public void auditBuy(BuyEvent buyEvent) {
        buyEvents.add(buyEvent);
        System.out.println("Received TradeBuyEvent " + buyEvent);
    }

    public List<BuyEvent> getBuyEvents() {
        return buyEvents;
    }
}
```

Now we just need to refactor our SimpleTradeExecutor class to create the correct TradeAccountEvent subclass based on whether it's a buy or sell transaction:

```java
public class BuySellTradeExecutor {

    // ... details left out for clarity, same as SimpleTradeExecutor.
    // The executeTrade() method is unchanged from SimpleTradeExecutor.

    private TradeAccountEvent processTrade(TradeAccount tradeAccount,
                                           double amount, TradeType tradeType) {
        Date executionTime = new Date();
        String message = String.format(
                "Processed trade for %s of amount %n type %s @ %s",
                tradeAccount, amount, tradeType, executionTime);
        TradeAccountEvent tradeAccountEvent;
        if (tradeType.equals(TradeType.BUY)) {
            tradeAccountEvent = new BuyEvent(tradeAccount, amount, executionTime);
        } else {
            tradeAccountEvent = new SellEvent(tradeAccount, amount, executionTime);
        }
        System.out.println(message);
        return tradeAccountEvent;
    }
}
```

Here we've created a new BuySellTradeExecutor class that behaves in exactly the same manner as our SimpleTradeExecutor class, with the exception that, depending on the type of transaction, we create either a BuyEvent or a SellEvent instance. The EventBus class, however, is completely unaware of any of these changes. We have registered different subscribers and we are posting different events, but these changes are transparent to the EventBus instance.

Also, take note that we did not have to create separate classes for the notification of events. Our SimpleTradeAuditor class would have continued to receive the events as they occurred; if we wanted to do separate processing depending on the type of event, we could simply add a check for the type of event. Finally, if needed, we could also have a class that defines multiple subscribe methods:

```java
public class AllTradesAuditor {

    private List<BuyEvent> buyEvents = Lists.newArrayList();
    private List<SellEvent> sellEvents = Lists.newArrayList();

    public AllTradesAuditor(EventBus eventBus) {
        eventBus.register(this);
    }

    @Subscribe
    public void auditSell(SellEvent sellEvent) {
        sellEvents.add(sellEvent);
        System.out.println("Received TradeSellEvent " + sellEvent);
    }

    @Subscribe
    public void auditBuy(BuyEvent buyEvent) {
        buyEvents.add(buyEvent);
        System.out.println("Received TradeBuyEvent " + buyEvent);
    }
}
```

Here we've created a class with two event-handling methods. The AllTradesAuditor object will receive notifications about all trade events; it's just a matter of which method gets called by EventBus depending on the type of event. Taken to an extreme, we could create an event-handling method that accepts a type of Object, as Object is an actual class (the base class for all other objects in Java), and we could receive notifications on any and all events processed by EventBus. Finally, there is nothing preventing us from having more than one EventBus instance. If we were to refactor the BuySellTradeExecutor class into two separate classes, we could inject a separate EventBus instance into each class. Then it would be a matter of injecting the correct EventBus instance into the auditing classes, and we could have complete event publishing-subscribing independence.
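To see the pieces working together end to end, here is a minimal driver sketch. The TradeAccount constructor and the TradeType enum values are assumptions on our part, since the source does not show those classes, and we assume BuySellTradeExecutor keeps the EventBus constructor of SimpleTradeExecutor:

```java
import com.google.common.eventbus.EventBus;

public class TradeEventDemo {

    public static void main(String[] args) {
        // A single EventBus instance shared by the publisher and all subscribers.
        EventBus eventBus = new EventBus("trade-bus");

        // Subscribers register themselves with the bus in their constructors.
        TradeBuyAuditor buyAuditor = new TradeBuyAuditor(eventBus);
        TradeSellAuditor sellAuditor = new TradeSellAuditor(eventBus);

        BuySellTradeExecutor executor = new BuySellTradeExecutor(eventBus);

        // Hypothetical account; the real constructor is not shown in the source.
        TradeAccount account = new TradeAccount("ACC-1");

        executor.executeTrade(account, 100.0, TradeType.BUY);  // routed to buyAuditor
        executor.executeTrade(account, 50.0, TradeType.SELL);  // routed to sellAuditor

        System.out.println("Buys audited: " + buyAuditor.getBuyEvents().size());
        System.out.println("Sells audited: " + sellAuditor.getSellEvents().size());
    }
}
```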
Unsubscribing from events

Just as we want to subscribe to events, it may be desirable at some point to turn off the receiving of events. This is accomplished by passing the subscribed object to the eventBus.unregister method. For example, if we know at some point that we would want to stop processing events, we could add the following method to our subscribing class:

public void unregister() {
    this.eventBus.unregister(this);
}

Once this method is called, that particular instance will stop receiving events for whatever it had previously registered for. Other instances that are registered for the same event will continue to receive notifications.

AsyncEventBus

We stated earlier the importance of keeping the processing in our event-handling methods light, due to the fact that the EventBus processes all events in a serial fashion. However, we have another option with the AsyncEventBus class. The AsyncEventBus class offers the exact same functionality as the EventBus, but uses a provided java.util.concurrent.Executor instance to execute handler methods asynchronously.

Creating an AsyncEventBus instance

We create an AsyncEventBus instance in a manner similar to the EventBus instance:

AsyncEventBus asyncEventBus = new AsyncEventBus(executorService);

Here we are creating an AsyncEventBus instance by providing a previously created ExecutorService instance. We also have the option of providing a String identifier in addition to the ExecutorService instance. AsyncEventBus is very helpful in situations where we suspect the subscribers are performing heavy processing when events are received.

DeadEvents

When EventBus receives a notification of an event through the post method and there are no registered subscribers, the event is wrapped in an instance of a DeadEvent class. Having a class that subscribes for DeadEvent instances can be very helpful when trying to ensure that all events have registered subscribers. The DeadEvent class exposes a getEvent method that can be used to inspect the original event that was undelivered. For example, we could provide a very simple class, which is shown as follows:

public class DeadEventSubscriber {

    private static final Logger logger = Logger.getLogger(DeadEventSubscriber.class);

    public DeadEventSubscriber(EventBus eventBus) {
        eventBus.register(this);
    }

    @Subscribe
    public void handleUnsubscribedEvent(DeadEvent deadEvent) {
        logger.warn("No subscribers for " + deadEvent.getEvent());
    }
}

Here we are simply registering for any DeadEvent instances and logging a warning for the original unclaimed event.
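A quick way to see DeadEvent in action is to post an event type that nothing subscribes to. This is a minimal sketch, assuming the DeadEventSubscriber class shown above:

EventBus eventBus = new EventBus();
new DeadEventSubscriber(eventBus);
// No subscriber accepts String, so EventBus wraps it in a DeadEvent,
// and handleUnsubscribedEvent logs: "No subscribers for orphan event"
eventBus.post("orphan event");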
Dependency injection

To ensure we have registered our subscribers and publishers with the same instance of an EventBus class, using a dependency injection framework (Spring or Guice) makes a lot of sense. In the following example, we will show how to use Spring Framework Java configuration with the SimpleTradeAuditor and SimpleTradeExecutor classes. First, we need to make the following changes to the SimpleTradeAuditor and SimpleTradeExecutor classes:

@Component
public class SimpleTradeExecutor {

    private EventBus eventBus;

    @Autowired
    public SimpleTradeExecutor(EventBus eventBus) {
        this.eventBus = checkNotNull(eventBus, "EventBus can't be null");
    }
    // rest of the class is unchanged
}

@Component
public class SimpleTradeAuditor {

    private List<TradeAccountEvent> tradeEvents = Lists.newArrayList();

    @Autowired
    public SimpleTradeAuditor(EventBus eventBus) {
        checkNotNull(eventBus, "EventBus can't be null");
        eventBus.register(this);
    }
    // rest of the class is unchanged
}

Here we've simply added an @Component annotation at the class level for both classes. This is done to enable Spring to pick up these classes as beans, which we want to inject. In this case, we want to use constructor injection, so we added an @Autowired annotation to the constructor of each class. Having the @Autowired annotation tells Spring to inject an instance of an EventBus class into the constructor for both objects. Finally, we have our configuration class that instructs the Spring Framework where to look for components to wire up with the beans defined in the configuration class:

@Configuration
@ComponentScan(basePackages = {"bbejeck.guava.article7.publisher", "bbejeck.guava.article7.subscriber"})
public class EventBusConfig {

    @Bean
    public EventBus eventBus() {
        return new EventBus();
    }
}

Here we have the @Configuration annotation, which identifies this class to Spring as a Context that contains beans to be created and injected if need be. We defined the eventBus method that constructs and returns an instance of an EventBus class, which is injected into other objects. In this case, since we placed the @Autowired annotation on the constructors of the SimpleTradeAuditor and SimpleTradeExecutor classes, Spring will inject the same EventBus instance into both classes, which is exactly what we want to do. While a full discussion of how the Spring Framework functions is beyond the scope of this book, it is worth noting that Spring creates singletons by default, which is exactly what we want here. As we can see, using a dependency injection framework can go a long way in ensuring that our event-based system is configured properly.

Summary

In this article, we have covered how to use event-based programming to reduce coupling in our code by using the Guava EventBus class. We covered how to create an EventBus instance and register subscribers and publishers. We also explored the powerful concept of using types to register what events we are interested in receiving. We learned about the AsyncEventBus class, which allows us to dispatch events asynchronously. We saw how we can use the DeadEvent class to ensure we have subscribers for all of our events. Finally, we saw how we can use dependency injection to ease the setup of our event-based system. In the next article, we will take a look at working with files in Guava.
Microsoft SharePoint : Creating Various Content Types

Packt
02 Dec 2011
7 min read
SharePoint content types are used to make it simpler for site managers to standardize what content and associated metadata gets uploaded to lists and libraries on the site. In this article, we'll take a look at how you can create various content types and assign them to be used in site containers.

As a subset of more complex content types, a document set will allow your users to store related items in libraries as a set of documents sharing common metadata. This approach will allow your users to run business processes on a batch of items in the document set as well as the whole set. In this article, we'll take a look at how you can define a document set to be used on your site.

Since users mostly interact with your SharePoint site through pages and views, the ability to modify SharePoint pages to accommodate business user requirements becomes an important part of site management. In this article, we'll take a look at how you can create and modify pages and content related to them. We will also take a look at how you can provision simple out-of-the-box web parts to your SharePoint publishing pages and configure their properties.

Creating basic and complex content types

SharePoint lists and libraries can store a variety of content on the site. SharePoint also has a user interface to customize what information you can collect from users to be attached as item metadata. In a scenario where the entire intranet or a department site within your organization requires a standard set of metadata to be collected with list and library items, content types are the easiest approach to implement the requirement. With content types, you can define the type of business content your users will be interacting with. Once defined, you can also add metadata fields and any applicable validation to them. Once defined, you can attach the newly created content type to the library or list of your choice so that newly uploaded or modified content can conform to the rules you defined on the site.

Getting ready

Considering you have already set up your virtual development environment, we'll get right into authoring our script. It's assumed you are familiar with how to interact with SharePoint lists and libraries using PowerShell. In this recipe, we'll be using PowerGUI to author the script, which means you will be required to be logged in with an administrator's role on the target Virtual Machine.

How to do it...

Let's take a look at how we can provision site content types using PowerShell as follows:

1. Click Start | All Programs | PowerGUI | PowerGUI Script Editor.
2. In the main script editing window of PowerGUI, add the following script:

# Defining script variables
$SiteUrl = "http://intranet.contoso.com"
$ListName = "Shared Documents"
# Loading Microsoft.SharePoint.PowerShell
$snapin = Get-PSSnapin | Where-Object {$_.Name -eq 'Microsoft.SharePoint.Powershell'}
if ($snapin -eq $null) {
    Write-Host "Loading SharePoint Powershell Snapin"
    Add-PSSnapin "Microsoft.SharePoint.Powershell"
}
$SPSite = Get-SPSite | Where-Object {$_.Url -eq $SiteUrl}
if ($SPSite -ne $null) {
    Write-Host "Connecting to the site" $SiteUrl ", list" $ListName
    $RootWeb = $SPSite.RootWeb
    $SPList = $RootWeb.Lists[$ListName]
    Write-Host "Creating new content type from base type"
    $DocumentContentType = $RootWeb.AvailableContentTypes["Document"]
    $ContentType = New-Object Microsoft.SharePoint.SPContentType -ArgumentList @($DocumentContentType, $RootWeb.ContentTypes, "Org Document")
    Write-Host "Adding content type to site"
    $ct = $RootWeb.ContentTypes.Add($ContentType)
    Write-Host "Creating new fields"
    $OrgDocumentContentType = $RootWeb.ContentTypes[$ContentType.Id]
    $OrgFields = $RootWeb.Fields
    $choices = New-Object System.Collections.Specialized.StringCollection
    $choices.Add("East")
    $choices.Add("West")
    $OrgDivision = $OrgFields.Add("Division", [Microsoft.SharePoint.SPFieldType]::Choice, $false, $false, $choices)
    $OrgBranch = $OrgFields.Add("Branch", [Microsoft.SharePoint.SPFieldType]::Text, $false)
    Write-Host "Adding fields to content type"
    $OrgDivisionObject = $OrgFields.GetField($OrgDivision)
    $OrgBranchObject = $OrgFields.GetField($OrgBranch)
    $OrgDocumentContentType.FieldLinks.Add($OrgDivisionObject)
    $OrgDocumentContentType.FieldLinks.Add($OrgBranchObject)
    $OrgDocumentContentType.Update()
    Write-Host "Associating content type to list" $ListName
    $association = $SPList.ContentTypes.Add($OrgDocumentContentType)
    $SPList.ContentTypesEnabled = $true
    $SPList.Update()
    Write-Host "Content type provisioning complete"
}
$SPSite.Dispose()

3. Click File | Save to save the script to your development machine's desktop. Set the filename of the script to CreateAssociateContentType.ps1.
4. Open the PowerShell console window and call CreateAssociateContentType.ps1 using the following command:

PS C:\Users\Administrator\Desktop> .\CreateAssociateContentType.ps1

As a result, your PowerShell script will create a site structure as shown in the following screenshot:

5. Now, from your browser, let's switch to our SharePoint intranet: http://intranet.contoso.com.
6. From the home page's Quick launch, click the Shared Documents link.
7. On the ribbon, click the Library tab and select Settings | Library Settings.
8. Take note of the newly associated content type added to the Content Types area of the library settings, as shown in the following screenshot:
9. Navigate back to the Shared Documents library from the Quick launch menu on your site and select any of the existing documents in the library.
10. From the ribbon's Documents tab, click Manage | Edit Properties.
11. Take note of how the item now has the Content Type option available, where you can pick the newly provisioned Org Document content type.
12. Pick the Org Document content type and take note of the associated metadata showing up for the new content type, as shown in the following screenshot:

How it works...

First, we defined the script variables.
In this recipe, the variables include the URL of the site where the content types are provisioned, http://intranet.contoso.com, and the document library to which the content type is associated:

$ListName = "Shared Documents"

Once the PowerShell snap-in has been loaded, we get hold of the instance of the current site and its root web. Since we want our content type to inherit from a parent rather than being defined from scratch, we first get hold of the existing parent content type, using the following command:

$DocumentContentType = $RootWeb.AvailableContentTypes["Document"]

Next, we created an instance of a new content type inheriting from our parent content type and provisioned it to the root site using the following command:

$ContentType = New-Object Microsoft.SharePoint.SPContentType -ArgumentList @($DocumentContentType, $RootWeb.ContentTypes, "Org Document")

Here, the new object takes the following parameters: the content type representing the parent, the web to which the new content type will be provisioned, and the display name for the content type. Once our content type object has been created, we add it to the list of existing content types on the site:

$ct = $RootWeb.ContentTypes.Add($ContentType)

Since most content types are unique in the fields they use, we will add some business-specific fields to our content type. First, we get hold of the collection of all of the available fields on the site:

$OrgFields = $RootWeb.Fields

Next, we create a string collection to hold the values for the choice field we are going to add to our content type:

$choices = New-Object System.Collections.Specialized.StringCollection

The field with the list of choices was called Division, representing a company division. We provision the field to the site using the following command:

$OrgDivision = $OrgFields.Add("Division", [Microsoft.SharePoint.SPFieldType]::Choice, $false, $false, $choices)

In the preceding command, the first parameter is the name of the field, followed by the type of the field, which in our case is a choice field. We then specify whether the field will be a required field, followed by a parameter indicating whether the field name will be truncated to eight characters. The last parameter specifies the list of choices for the choice field. The other field we add, representing a company branch, is simpler since it's a text field. We define the text field using the following command:

$OrgBranch = $OrgFields.Add("Branch", [Microsoft.SharePoint.SPFieldType]::Text, $false)

We add both fields to the content type using the following commands:

$OrgDocumentContentType.FieldLinks.Add($OrgDivisionObject)
$OrgDocumentContentType.FieldLinks.Add($OrgBranchObject)

The last part is to associate the newly created content type to a library, in our case Shared Documents. We use the following command to associate the content type to the library:

$association = $SPList.ContentTypes.Add($OrgDocumentContentType)

To ensure the content types on the list are enabled, we set the ContentTypesEnabled property of the list to $true.
NHibernate 3.0: Using LINQ Specifications in the data access layer

Packt
21 Oct 2010
4 min read
NHibernate 3.0 Cookbook

Get solutions to common NHibernate problems to develop high-quality performance-critical data access applications:

  • Master the full range of NHibernate features
  • Reduce hours of application development time and get better application architecture and performance
  • Create, maintain, and update your database structure automatically with the help of NHibernate
  • Written and tested for NHibernate 3.0 with input from the development team, distilled into easily accessible concepts and examples
  • Part of Packt's Cookbook series: each recipe is a carefully organized sequence of instructions to complete the task as efficiently as possible

Getting ready

1. Download the LinqSpecs library from http://linqspecs.codeplex.com.
2. Copy LinqSpecs.dll from the Downloads folder to your solution's libs folder.
3. Complete the Setting up an NHibernate Repository recipe.

How to do it...

1. In Eg.Core.Data and Eg.Core.Data.Impl, add a reference to LinqSpecs.dll.
2. Add these two methods to the IRepository interface:

IEnumerable<T> FindAll(Specification<T> specification);
T FindOne(Specification<T> specification);

3. Add the following three methods to NHibernateRepository:

public IEnumerable<T> FindAll(Specification<T> specification)
{
  var query = GetQuery(specification);
  return Transact(() => query.ToList());
}

public T FindOne(Specification<T> specification)
{
  var query = GetQuery(specification);
  return Transact(() => query.SingleOrDefault());
}

private IQueryable<T> GetQuery(
  Specification<T> specification)
{
  return session.Query<T>()
    .Where(specification.IsSatisfiedBy());
}

4. Add the following specification to Eg.Core.Data.Queries:

public class MoviesDirectedBy : Specification<Movie>
{

  private readonly string _director;

  public MoviesDirectedBy(string director)
  {
    _director = director;
  }

  public override Expression<Func<Movie, bool>> IsSatisfiedBy()
  {
    return m => m.Director == _director;
  }
}

5. Add another specification to Eg.Core.Data.Queries, using the following code:

public class MoviesStarring : Specification<Movie>
{

  private readonly string _actor;

  public MoviesStarring(string actor)
  {
    _actor = actor;
  }

  public override Expression<Func<Movie, bool>> IsSatisfiedBy()
  {
    return m => m.Actors.Any(a => a.Actor == _actor);
  }
}

How it works...

The specification pattern allows us to separate the process of selecting objects from the concern of which objects to select. The repository handles selecting objects, while the specification objects are concerned only with the objects that satisfy their requirements. In our specification objects, the IsSatisfiedBy method returns a LINQ expression to determine which objects to select. In the repository, we get an IQueryable from the session, pass this LINQ expression to the Where method, and execute the LINQ query. Only the objects that satisfy the specification will be returned. For a detailed explanation of the specification pattern, check out http://martinfowler.com/apsupp/spec.pdf.

There's more...

To use our new specifications with the repository, use the following code:

var movies = repository.FindAll(
  new MoviesDirectedBy("Steven Spielberg"));

Specification composition

We can also combine specifications to build more complex queries.
For example, the following code will find all movies directed by Steven Spielberg starring Harrison Ford:

var movies = repository.FindAll(
  new MoviesDirectedBy("Steven Spielberg") &
  new MoviesStarring("Harrison Ford"));

This may result in expression trees that NHibernate is unable to parse. Be sure to test each combination.

Summary

In this article we covered using LINQ Specifications in the data access layer.

Further resources on this subject:

  • NHibernate 3.0: Working with the Data Access Layer
  • NHibernate 3.0: Using Named Queries in the Data Access Layer
  • NHibernate 3.0: Using ICriteria and Paged Queries in the data access layer
  • NHibernate 3.0: Testing Using NHibernate Profiler and SQLite
  • Using the Fluent NHibernate Persistence Tester and the Ghostbusters Test
Playback Audio with Video and Create a Media Playback Component Using JavaFX

Packt
26 Aug 2010
5 min read
Playing audio with MediaPlayer

Playing audio is an important aspect of any rich client platform. One of the celebrated features of JavaFX is its ability to easily play back audio content. This recipe shows you how to create code that plays back audio resources using the MediaPlayer class.

Getting ready

This recipe uses classes from the Media API located in the javafx.scene.media package. As you will see in our example, using this API you are able to load, configure, and play back audio using the classes Media and MediaPlayer. For this recipe, we will build a simple audio player to illustrate the concepts presented here. Instead of using standard GUI controls, we will use button icons loaded as images. If you are not familiar with the concept of loading images, review the recipe Loading and displaying images with ImageView in the previous article.

In this example we will use a JavaFX podcast from the Oracle Technology Network TechCast series where Nandini Ramani discusses JavaFX. The stream can be found at http://streaming.oracle.com/ebn/podcasts/media/8576726_Nandini_Ramani_030210.mp3.

How to do it...

The code given next has been shortened to illustrate the essential portions involved in loading and playing an audio stream. You can get the full listing of the code in this recipe from ch05/source-code/src/media/AudioPlayerDemo.fx.

def w = 400;
def h = 200;
var scene:Scene;
def mediaSource = "http://streaming.oracle.com/ebn/podcasts/media/8576726_Nandini_Ramani_030210.mp3";
def player = MediaPlayer {media:Media{source:mediaSource}}
def controls = Group {
    layoutX:(w-110)/2
    layoutY:(h-50)/2
    effect:Reflection{
        fraction:0.4
        bottomOpacity:0.1
        topOffset:3
    }
    content:[
        HBox{spacing:10
            content:[
                ImageView{id:"playCtrl"
                    image:Image{url:"{__DIR__}play-large.png"}
                    onMouseClicked:function(e:MouseEvent){
                        def playCtrl = e.source as ImageView;
                        if(not(player.status == player.PLAYING)){
                            playCtrl.image = Image{url:"{__DIR__}pause-large.png"}
                            player.play();
                        }else if(player.status == player.PLAYING){
                            playCtrl.image = Image{url:"{__DIR__}play-large.png"}
                            player.pause();
                        }
                    }
                }
                ImageView{id:"stopCtrl"
                    image:Image{url:"{__DIR__}stop-large.png"}
                    onMouseClicked:function(e){
                        def playCtrl = e.source as ImageView;
                        if(player.status == player.PLAYING){
                            playCtrl.image = Image{url:"{__DIR__}play-large.png"}
                            player.stop();
                        }
                    }
                }
            ]}
    ]}

When the variable controls is added to a scene object and the application is executed, it produces the screen shown in the following screenshot:

How it works...

The Media API is comprised of several components which, when put together, provide the mechanism to stream and play back the audio source. Playing back audio requires two classes: Media and MediaPlayer. Let's take a look at how these classes are used to play back audio in the previous example.

The MediaPlayer: the first significant item in the code is the declaration and initialization of a MediaPlayer instance assigned to the variable player. To load the audio file, we assign an instance of Media to player.media. The Media class is used to specify the location of the audio. In our example, it is a URL that points to an MP3 file.

The controls: the play, pause, and stop buttons are grouped in the Group object called controls. They are made of three separate image files, play-large.png, pause-large.png, and stop-large.png, loaded by two instances of the ImageView class.
The ImageView objects serve to display the control icons and to control the playback of the audio:

  • When the application starts, the first ImageView instance (playCtrl) displays the image play-large.png. When the user clicks on the image, it invokes its action-handler function, which first detects the status of the MediaPlayer instance. If it is not playing, it starts playback of the audio source by calling player.play() and replaces play-large.png with the image pause-large.png. If, however, audio is currently playing, then the audio is paused and the image is replaced back with play-large.png.
  • The other ImageView instance loads the stop-large.png icon. When the user clicks on it, it calls its action-handler to first stop the audio playback by calling player.stop(). Then it toggles the image for the "play" button back to the icon play-large.png.

As mentioned in the introduction, JavaFX will play the MP3 file format on any platform where JavaFX is supported. Anything other than MP3 must be supported natively by the OS's media engine where the file is played back. For instance, on my Mac OS, I can play MPEG-4, because it is a playback format supported by the OS's QuickTime engine.

There's more...

The Media class models the audio stream. It exposes properties to configure the location, resolves dimensions of the medium (if available; in the case of audio, that information is not available), and provides tracks and metadata about the resource to be played. The MediaPlayer class itself is a controller class responsible for controlling playback of the medium by offering control functions such as play(), pause(), and stop(). It also exposes valuable playback data including current position, volume level, and status. We will use these additional functions and properties to extend our playback capabilities in the recipe Controlling media playback in this article.

See also

  • Accessing media assets
  • Loading and displaying images with ImageView
An Introduction to JSF: Part 2

Packt
30 Dec 2009
7 min read
Standard JSF Validators

The JSF Core tag library also includes a number of built-in validators. These validator tags can be registered with UI components to verify that required fields are completed by the user, that numeric values are within an acceptable range, and that text values are of a certain length. For more specific validation scenarios, we can also write our own custom validators. User input validation happens immediately after data conversion during the JSF request lifecycle.

Validating the Length of a Text Value

JSF includes a built-in validator that can be used to ensure a text value entered by the user is between an expected minimum and maximum length. The following example demonstrates using the <f:validateLength> tag's minimum and maximum attributes to check that the password entered by the user in the password field is exactly 8 characters long. It also demonstrates how to use the label attribute of certain JSF input components (introduced in JSF 1.2) to render a localizable validation message.

JSF Validation Messages

The JSF framework includes predefined validation messages for different input components and validation scenarios. These messages are defined in a message bundle (properties file) included in the JSF implementation JAR file. Many of these messages are parameterized, meaning that since JSF 1.2 a UI component's label can be inserted into these messages to provide more detailed information to the user. The default JSF validation messages can be overridden by specifying the same message bundle keys in the application's message bundle. We will see an example of customizing JSF validation messages below.

Notice that we also set the maxlength attribute of the <h:inputSecret> tag to limit the input to 8 characters. This does not, however, ensure that the user enters a minimum of 8 characters. Therefore, the <f:validateLength> validator tag is required.

<f:view>
  <h:form>
    <h:outputLabel value="Please enter a password (must be 8 characters): " />
    <h:inputSecret maxlength="8" id="password" value="#{backingBean.password}" label="Password">
      <f:validateLength minimum="8" maximum="8" />
    </h:inputSecret>
    <h:commandButton value="Submit" /><br />
    <h:message for="password" errorStyle="color:red" />
  </h:form>
</f:view>

Validating a Required Field

The following example demonstrates how to use the built-in JSF validators to ensure that a text field is filled out before the form is processed:

<f:view>
  <h:form>
    <h:outputLabel value="Please enter a number: " />
    <h:inputText id="number" label="Number" value="#{backingBean.price}" required="#{true}" />
    <h:commandButton value="Submit" /><br />
    <h:message for="number" errorClass="error" />
  </h:form>
</f:view>

The following screenshot demonstrates the result of submitting a JSF form containing a required field that was not filled out. We render the validation error message using an <h:message> tag with a for attribute set to the ID of the text field component. We have also overridden the default JSF validation message for required fields by specifying the following message keys in our message bundle. We will discuss message bundles and internationalization (I18N) shortly.

javax.faces.component.UIInput.REQUIRED=Required field.
javax.faces.component.UIInput.REQUIRED_detail=Please fill in this field.

Validating a numeric range

The JSF Core <f:validateLongRange> and <f:validateDoubleRange> tags can be used to validate numeric user input.
The following example demonstrates how to use the <f:validateLongRange> tag to ensure an integer value entered by the user is between 1 and 10:

<f:view>
  <h:form>
    <h:outputLabel value="Please enter a number between 1 and 10: " />
    <h:inputText id="number" value="#{backingBean.number}" label="Number">
      <f:validateLongRange minimum="1" maximum="10" />
    </h:inputText>
    <h:commandButton value="Submit" /><br />
    <h:message for="number" errorStyle="color:red" />
    <h:outputText value="You entered: #{backingBean.number}" rendered="#{backingBean.number ne null}" />
  </h:form>
</f:view>

The following screenshot shows the result of entering an invalid value into the text field. Notice that the value of the text field's label attribute is interpolated with the standard JSF validation message.

Validating a floating point number is similar to validating an integer. The following example demonstrates how to use the <f:validateDoubleRange> tag to ensure that a floating point number is between 0.0 and 1.0:

<f:view>
  <h:form>
    <h:outputLabel value="Please enter a floating point number between 0 and 1: " />
    <h:inputText id="number" value="#{backingBean.percentage}" label="Percent">
      <f:validateDoubleRange minimum="0.0" maximum="1.0" />
    </h:inputText>
    <h:commandButton value="Submit" /><br />
    <h:message for="number" errorStyle="color:red" />
    <h:outputText value="You entered: " rendered="#{backingBean.percentage ne null}" />
    <h:outputText value="#{backingBean.percentage}" rendered="#{backingBean.percentage ne null}">
      <f:convertNumber type="percent" maxFractionDigits="2" />
    </h:outputText>
  </h:form>
</f:view>
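The examples above bind their input values to #{backingBean.number} and #{backingBean.percentage}. A minimal sketch of such a managed bean follows; the property names are taken from the EL expressions, while the class name and types are assumptions:

public class BackingBean {

    private Integer number;      // bound by the integer range example
    private Double percentage;   // bound by the floating point example

    public Integer getNumber() { return number; }
    public void setNumber(Integer number) { this.number = number; }

    public Double getPercentage() { return percentage; }
    public void setPercentage(Double percentage) { this.percentage = percentage; }
}

Wrapper types (Integer, Double) rather than primitives keep the "ne null" rendered checks meaningful before the first submission.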
Registering a custom validator

JSF also supports defining custom validation classes to provide more specialized user input validation. To create a custom validator, first we need to implement the javax.faces.validator.Validator interface. Implementing a custom validator in JSF is straightforward. In this example, we check if a date supplied by the user represents a valid birthdate. As most humans do not live more than 120 years, we reject any date that is more than 120 years ago. The important thing to note from this code example is not the validation logic itself, but what to do when the validation fails. Note that we construct a FacesMessage object with an error message and then throw a ValidatorException.

package chapter1.validator;

import java.util.Calendar;
import java.util.Date;
import javax.faces.application.FacesMessage;
import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;
import javax.faces.validator.Validator;
import javax.faces.validator.ValidatorException;

public class CustomDateValidator implements Validator {

    public void validate(FacesContext context, UIComponent component, Object object)
            throws ValidatorException {
        if (object instanceof Date) {
            Date date = (Date) object;
            Calendar calendar = Calendar.getInstance();
            calendar.roll(Calendar.YEAR, -120);
            if (date.before(calendar.getTime())) {
                FacesMessage msg = new FacesMessage();
                msg.setSummary("Invalid birthdate: " + date);
                msg.setDetail("The date entered is more than 120 years ago.");
                msg.setSeverity(FacesMessage.SEVERITY_ERROR);
                throw new ValidatorException(msg);
            }
        }
    }
}

We have to declare our custom validators in faces-config.xml as follows, giving the validator an ID of customDateValidator:

<validator>
  <description>This birthdate validator checks a date to make sure it is within the last 120 years.</description>
  <display-name>Custom Date Validator</display-name>
  <validator-id>customDateValidator</validator-id>
  <validator-class>
    chapter1.validator.CustomDateValidator
  </validator-class>
</validator>

Next, we would register our custom validator on a JSF UI component using the <f:validator> tag. This tag has a validatorId attribute that expects the ID of a custom validator declared in faces-config.xml. Notice in the following example that we are also registering the standard JSF <f:convertDateTime> converter on the component. This is to ensure that the value entered by the user is first converted to a java.util.Date object before it is passed to our custom validator.

<h:inputText id="name" value="#{backingBean.date}">
  <f:convertDateTime type="date" pattern="M/d/yyyy" />
  <f:validator validatorId="customDateValidator" />
</h:inputText>

Many JSF UI component tags have both a converter and a validator attribute that accept EL method expressions. These attributes provide another way to register custom converters and validators implemented in managed beans on UI components.
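To illustrate the managed bean approach mentioned in the last paragraph, here is a minimal sketch. The validator method signature (FacesContext, UIComponent, Object) is the standard one JSF expects for validator method expressions; the bean and method names are assumptions:

import java.util.Calendar;
import java.util.Date;
import javax.faces.application.FacesMessage;
import javax.faces.component.UIComponent;
import javax.faces.context.FacesContext;
import javax.faces.validator.ValidatorException;

public class BackingBean {

    // Same birthdate rule as CustomDateValidator, but as a bean method
    public void validateBirthdate(FacesContext context, UIComponent component, Object value) {
        if (value instanceof Date) {
            Calendar calendar = Calendar.getInstance();
            calendar.roll(Calendar.YEAR, -120);
            if (((Date) value).before(calendar.getTime())) {
                FacesMessage msg = new FacesMessage("Invalid birthdate: " + value);
                msg.setSeverity(FacesMessage.SEVERITY_ERROR);
                throw new ValidatorException(msg);
            }
        }
    }
}

It would then be registered directly on the component, with no faces-config.xml entry needed:

<h:inputText value="#{backingBean.date}" validator="#{backingBean.validateBirthdate}">
  <f:convertDateTime type="date" pattern="M/d/yyyy" />
</h:inputText>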
Map/Reduce API

Packt
02 Jun 2015
10 min read
In this article by Wagner Roberto dos Santos, author of the book Infinispan Data Grid Platform Definitive Guide, we will see the usage of the Map/Reduce API and its introduction in Infinispan.

Using the Map/Reduce API

According to Gartner, from now on in-memory data grids and in-memory computing will be racing towards mainstream adoption, and the market for this kind of technology is going to reach $1 billion by 2016. Thinking along these lines, Infinispan already provides a MapReduce API for distributed computing, which means that we can use the Infinispan cache to process all the data stored in heap memory across all Infinispan instances in parallel. If you're new to MapReduce, don't worry; we're going to describe it in the next section in a way that gets you up to speed quickly.

An introduction to Map/Reduce

MapReduce is a programming model introduced by Google, which allows for massive scalability across hundreds or thousands of servers in a data grid. It's a simple concept to understand for those who are familiar with distributed computing and clustered environments for data processing solutions. You can find the paper about MapReduce at the following link: http://research.google.com/archive/mapreduce.html

MapReduce has two distinct computational phases; as the name states, the phases are map and reduce:

  • In the map phase, a function called Map is executed, which is designed to take a set of data in a given cache, simultaneously perform filtering and sorting operations, and output another set of data on all nodes.
  • In the reduce phase, a function called Reduce is executed, which is designed to reduce the final form of the results of the map phase into one output. The reduce function is always performed after the map phase.

Map/Reduce in the Infinispan platform

The Infinispan MapReduce model is an adaptation of the original Google MapReduce model. There are four main components in each map reduce task:

  • MapReduceTask: This is a distributed task allowing a large-scale computation to be transparently parallelized across Infinispan cluster nodes. This class provides a constructor that takes a cache whose data will be used as the input for this task. The MapReduceTask orchestrates the execution of the Mapper and Reducer seamlessly across Infinispan nodes.
  • Mapper: A Mapper is used to process each input cache entry K,V. A Mapper is invoked by MapReduceTask and is migrated to an Infinispan node, to transform the K,V input pair into intermediate keys before emitting them to a Collector.
  • Reducer: A Reducer is used to process a set of intermediate key results from the map phase. Each execution node will invoke one instance of Reducer, and each instance of the Reducer only reduces intermediate key results that are locally stored on the execution node.
  • Collator: This collates results from reducers executed on the Infinispan cluster and assembles a final result returned to the invoker of MapReduceTask.

The following image shows that in a distributed environment, an Infinispan MapReduceTask is responsible for starting the process for a given cache; unless you specify an onKeys(Object...) filter, all available key/value pairs of the cache will be used as input data for the map reduce task.
In the preceding image, the Map/Reduce processes are performing the following steps:

1. The MapReduceTask in the Master Task Node will start the Map Phase by hashing the task input keys and grouping them by the execution node they belong to, and then the Infinispan master node will send a map function and input keys to each node.
2. In each destination, the map will be locally loaded with the corresponding value using the given keys.
3. The map function is executed on each node, resulting in a map<KOut, VOut> object on each node.
4. The Combine Phase is initiated when all results are collected. If a combiner is specified (via the combineWith(Reducer<KOut, VOut> combiner) method), the combiner will extract the KOut keys and invoke the reduce phase on keys.
5. Before starting the Reduce Phase, Infinispan will execute an intermediate migration phase, where all intermediate keys and values are grouped.
6. At the end of the Combine Phase, a list of KOut keys is returned to the initial Master Task Node. At this stage, values (VOut) are not returned, because they are not needed in the master node.
7. At this point, Infinispan is ready to start the Reduce Phase; the Master Task Node will group KOut keys by execution node and send a reduce command to each node where keys are hashed.
8. The reducer is invoked and, for each KOut key, the reducer will grab a list of VOut values from a temporary cache belonging to MapReduceTask, wrap it with an iterator, and invoke the reduce method on it. Each reducer will return one map with the KOut/VOut result values.

The reduce command will return to the Master Task Node, which in turn will combine all resulting maps into one single map and return it as the result of MapReduceTask.
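Before moving on to the sample application, here is a minimal sketch of the onKeys filter mentioned above, which restricts the input to a subset of cache entries instead of the whole cache. The mapper and reducer classes are the ones developed in the sample that follows:

MapReduceTask<String, WeatherInfo, DestinationTypeEnum, WeatherInfo> task =
    new MapReduceTask<String, WeatherInfo, DestinationTypeEnum, WeatherInfo>(weatherCache);
// Only these three cache entries become input for the map phase
task.onKeys("1", "5", "9");
Map<DestinationTypeEnum, WeatherInfo> result =
    task.mappedWith(new DestinationMapper())
        .reducedWith(new DestinationReducer())
        .execute();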
Sample application – find a destination

Now that we have seen what map and reduce are, and how the Infinispan model works, let's create a Find Destination application that illustrates the concepts we have discussed. To demonstrate how CDI works, in the last section, we created a web service that provides weather information. Now, based on this same weather information service, let's create a map/reduce engine for the best destination based on simple business rules, such as destination type (sun destination, golf, skiing, and so on).

So, the first step is to create the WeatherInfo cache object that will hold information about the weather:

public class WeatherInfo implements Serializable {

    private static final long serialVersionUID = -3479816816724167384L;

    private String country;
    private String city;
    private Date day;
    private Double temperature;
    private Double temperatureMax;
    private Double temperatureMin;

    public WeatherInfo(String country, String city, Date day, Double temp) {
        this(country, city, day, temp, temp + 5, temp - 5);
    }

    public WeatherInfo(String country, String city, Date day, Double temp,
            Double tempMax, Double tempMin) {
        super();
        this.country = country;
        this.city = city;
        this.day = day;
        this.temperature = temp;
        this.temperatureMax = tempMax;
        this.temperatureMin = tempMin;
    }

    // Getters and setters omitted

    @Override
    public String toString() {
        return "{WeatherInfo:{ country:" + country + ", city:" + city + ", day:" + day
            + ", temperature:" + temperature + ", temperatureMax:" + temperatureMax
            + ", temperatureMin:" + temperatureMin + "}";
    }
}

Now, let's create an enum object to define the type of destination a user can select and the rules associated with each destination. To keep it simple, we are going to have only two destinations, sun and skiing. The temperature value will be used to evaluate whether a destination can be considered to be of the corresponding type:

public enum DestinationTypeEnum {

    SUN(18d, "Sun Destination"), SKIING(-5d, "Skiing Destination");

    private Double temperature;
    private String description;

    DestinationTypeEnum(Double temperature, String description) {
        this.temperature = temperature;
        this.description = description;
    }

    public Double getTemperature() {
        return temperature;
    }

    public String getDescription() {
        return description;
    }
}

Now it's time to create the Mapper class. This class is going to be responsible for validating whether each cache entry fits the destination requirements. To define the DestinationMapper class, just implement the Mapper<KIn, VIn, KOut, VOut> interface and implement your algorithm in the map method:

public class DestinationMapper implements
        Mapper<String, WeatherInfo, DestinationTypeEnum, WeatherInfo> {

    private static final long serialVersionUID = -3418976303227050166L;

    public void map(String key, WeatherInfo weather,
            Collector<DestinationTypeEnum, WeatherInfo> c) {
        if (weather.getTemperature() >= SUN.getTemperature()) {
            c.emit(SUN, weather);
        } else if (weather.getTemperature() <= SKIING.getTemperature()) {
            c.emit(SKIING, weather);
        }
    }
}

The role of the Reducer class in our application is to return the best destination among all destinations, based on the highest temperature for sun destinations and the lowest temperature for skiing destinations, returned by the map phase. To implement the Reducer class, you'll need to implement the Reducer<KOut, VOut> interface:

public class DestinationReducer implements
        Reducer<DestinationTypeEnum, WeatherInfo> {

    private static final long serialVersionUID = 7711240429951976280L;

    public WeatherInfo reduce(DestinationTypeEnum key, Iterator<WeatherInfo> it) {
        WeatherInfo bestPlace = null;
        if (key.equals(SUN)) {
            while (it.hasNext()) {
                WeatherInfo w = it.next();
                if (bestPlace == null || w.getTemperature() > bestPlace.getTemperature()) {
                    bestPlace = w;
                }
            }
        } else { // Best for skiing
            while (it.hasNext()) {
                WeatherInfo w = it.next();
                if (bestPlace == null || w.getTemperature() < bestPlace.getTemperature()) {
                    bestPlace = w;
                }
            }
        }
        return bestPlace;
    }
}
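The component list earlier also mentioned the Collator, which the sample below does not use (it returns the reduced map directly). As a rough sketch, and assuming the execute(Collator) overload of MapReduceTask, the reduced results could be collated into a single summary value like this:

String summary = new MapReduceTask<String, WeatherInfo, DestinationTypeEnum, WeatherInfo>(weatherCache)
    .mappedWith(new DestinationMapper())
    .reducedWith(new DestinationReducer())
    .execute(new Collator<DestinationTypeEnum, WeatherInfo, String>() {
        public String collate(Map<DestinationTypeEnum, WeatherInfo> reducedResults) {
            // Fold the per-destination winners into one result string
            StringBuilder sb = new StringBuilder();
            for (Map.Entry<DestinationTypeEnum, WeatherInfo> e : reducedResults.entrySet()) {
                sb.append(e.getKey().getDescription()).append(" -> ").append(e.getValue()).append("; ");
            }
            return sb.toString();
        }
    });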
Finally, to execute our sample application, we can create a JUnit test case with the MapReduceTask. But first, we have to create a couple of cache entries before executing the task, which we are doing in the setUp() method:

public class WeatherInfoReduceTest {

    private static final Log logger = LogFactory.getLog(WeatherInfoReduceTest.class);

    private Cache<String, WeatherInfo> weatherCache;

    @Before
    public void setUp() throws Exception {
        Date today = new Date();
        EmbeddedCacheManager manager = new DefaultCacheManager();
        Configuration config = new ConfigurationBuilder().clustering()
            .cacheMode(CacheMode.LOCAL).build();
        manager.defineConfiguration("weatherCache", config);
        weatherCache = manager.getCache("weatherCache");
        weatherCache.put("1", new WeatherInfo("Germany", "Berlin", today, 12d));
        weatherCache.put("2", new WeatherInfo("Germany", "Stuttgart", today, 11d));
        weatherCache.put("3", new WeatherInfo("England", "London", today, 8d));
        weatherCache.put("4", new WeatherInfo("England", "Manchester", today, 6d));
        weatherCache.put("5", new WeatherInfo("Italy", "Rome", today, 17d));
        weatherCache.put("6", new WeatherInfo("Italy", "Napoli", today, 18d));
        weatherCache.put("7", new WeatherInfo("Ireland", "Belfast", today, 9d));
        weatherCache.put("8", new WeatherInfo("Ireland", "Dublin", today, 7d));
        weatherCache.put("9", new WeatherInfo("Spain", "Madrid", today, 19d));
        weatherCache.put("10", new WeatherInfo("Spain", "Barcelona", today, 21d));
        weatherCache.put("11", new WeatherInfo("France", "Paris", today, 11d));
        weatherCache.put("12", new WeatherInfo("France", "Marseille", today, -8d));
        weatherCache.put("13", new WeatherInfo("Netherlands", "Amsterdam", today, 11d));
        weatherCache.put("14", new WeatherInfo("Portugal", "Lisbon", today, 13d));
        weatherCache.put("15", new WeatherInfo("Switzerland", "Zurich", today, -12d));
    }

    @Test
    public void execute() {
        MapReduceTask<String, WeatherInfo, DestinationTypeEnum, WeatherInfo> task =
            new MapReduceTask<String, WeatherInfo, DestinationTypeEnum, WeatherInfo>(weatherCache);
        task.mappedWith(new DestinationMapper()).reducedWith(new DestinationReducer());
        Map<DestinationTypeEnum, WeatherInfo> destination = task.execute();
        assertNotNull(destination);
        assertEquals(destination.keySet().size(), 2);
        logger.info("********** PRINTING RESULTS FOR WEATHER CACHE *************");
        for (DestinationTypeEnum destinationType : destination.keySet()) {
            logger.infof("%s - Best Place: %s\n", destinationType.getDescription(),
                destination.get(destinationType));
        }
    }
}

When we execute the application, you should expect to see the following output:

INFO: Skiing Destination - Best Place: {WeatherInfo:{ country:Switzerland, city:Zurich, day:Mon Jun 02 19:42:22 IST 2014, temperature:-12.0, temperatureMax:-7.0, temperatureMin:-17.0}
INFO: Sun Destination - Best Place: {WeatherInfo:{ country:Spain, city:Barcelona, day:Mon Jun 02 19:42:22 IST 2014, temperature:21.0, temperatureMax:26.0, temperatureMin:16.0}

Summary

In this article, you learned how to work with applications in a modern distributed server architecture using the Map/Reduce API, and how it can abstract parallel programming into two simple primitives, the map and reduce methods. We have seen a sample use case, Find Destination, that demonstrated how to use map reduce almost in real time.

Resources for Article:

Further resources on this subject:

  • MapReduce functions [Article]
  • Hadoop and MapReduce [Article]
  • Introduction to MapReduce [Article]
Create a Quick Application in CakePHP: Part 1

Packt
17 Nov 2009
9 min read
The ingredients are fresh, sliced up, and in place. The oven is switched on, heated, and burning red. It is time for us to put on the cooking hat, and start making some delicious cake recipes. So, are you ready, baker?

In this article, we are going to develop a small application that we'll call "CakeTooDoo". It will be a simple to-do-list application, which will keep a record of the things that we need to do. A shopping list, chapters to study for an exam, a list of people you hate, and a list of girls you had a crush on are all examples of lists. CakeTooDoo will allow us to keep an updated list. We will be able to view all the tasks, add new tasks, tick the tasks that are done, and much more. Here's another example of a to-do list: the things that we are going to cover in this article:

  • Make sure Cake is properly installed for CakeTooDoo
  • Understand the features of CakeTooDoo
  • Create and configure the CakeTooDoo database
  • Write our first Cake model
  • Write our first Cake controller
  • Build a list that shows all the tasks in CakeTooDoo
  • Create a form to add new tasks to CakeTooDoo
  • Create another form to edit tasks in the to-do list
  • Have a data validation rule to make sure users do not enter an empty task title
  • Add functionality to delete a task from the list
  • Make separate lists for completed and pending tasks
  • Make the creation and modification time of a task look nicer
  • Create a homepage for CakeTooDoo

Making Sure the Oven is Ready

Before we start with CakeTooDoo, let's make sure that our oven is ready. Just to make sure that we do not run into any problems later, here is a checklist of things that should already be in place:

  • Apache is properly installed and running on the local machine.
  • MySQL database server is installed and running on the local machine.
  • PHP, version 4.3.2 or higher, is installed and working with Apache.
  • The latest 1.2 version of CakePHP is being used.
  • The Apache mod_rewrite module is switched on.
  • AllowOverride is set to all for the web root directory in the Apache configuration file httpd.conf.
  • CakePHP is extracted and placed in the web root directory of Apache.
  • Apache has write access for the tmp directory of CakePHP.
  • In this case, we are going to rename the Cake directory to CakeTooDoo.

CakeTooDoo: a Simple To-do List Application

As we already know, CakeTooDoo will be a simple to-do list. The list will consist of many tasks that we want to do. Each task will consist of a title and a status. The title will indicate the thing that we need to do, and the status will keep a record of whether the task has been completed or not. Along with the title and the status, each task will also record the time when the task was created and last modified. Using CakeTooDoo, we will be able to add new tasks, change the status of a task, delete a task, and view all the tasks. Specifically, CakeTooDoo will allow us to do the following things:

  • View all tasks in the list
  • Add a new task to the list
  • Edit a task to change its status
  • View all completed tasks
  • View all pending tasks
  • Delete a task
  • Access all of these features from a homepage

You may think that there is a huge gap between knowing what to make and actually making it. But wait! With Cake, that's not true at all! We are just 10 minutes away from a fully functional and working CakeTooDoo. Don't believe me? Just keep reading and you will find it out yourself.

Configuring Cake to Work with a Database

The first thing we need to do is to create the database that our application will use.
Creating databases for Cake applications is no different from creating any other database that you may have created before; we just need to follow a few simple naming rules, or conventions, while creating tables for our database. Once the database is in place, the next step is to tell Cake to use the database.

Time for Action: Creating and Configuring the Database

1. Create a database named caketoodoo on the local machine's MySQL server. In your favourite MySQL client, execute the following code:

CREATE DATABASE caketoodoo;

2. In our newly created database, create a table named tasks, by running the following code in your MySQL client:

USE caketoodoo;
CREATE TABLE tasks (
  id int(10) unsigned NOT NULL auto_increment,
  title varchar(255) NOT NULL,
  done tinyint(1) default NULL,
  created datetime default NULL,
  modified datetime default NULL,
  PRIMARY KEY (id)
);

3. Rename the main cake directory to CakeTooDoo, if you haven't done that yet.
4. Move inside the directory CakeTooDoo/app/config. In the config directory, there is a file named database.php.default. Rename this file to database.php.
5. Open the database.php file with your favourite editor, and move to line number 73, where we will find an array named $default. This array contains database connection options. Assign login to the database user you will be using and password to the password of that user. Assign database to caketoodoo. If we are using the database user ahsan with password sims, the configuration will look like this:

var $default = array(
  'driver' => 'mysql',
  'persistent' => false,
  'host' => 'localhost',
  'port' => '',
  'login' => 'ahsan',
  'password' => 'sims',
  'database' => 'caketoodoo',
  'schema' => '',
  'prefix' => '',
  'encoding' => ''
);

6. Now, let us check whether Cake is able to connect to the database. Fire up a browser, and point it to http://localhost/CakeTooDoo/. We should get the default Cake page with the following two lines: Your database configuration file is present and Cake is able to connect to the database, as shown in the following screenshot. If you get those lines, we have successfully configured Cake to use the caketoodoo database.

What Just Happened?

We just created our first database, following Cake conventions, and configured Cake to use that database. Our database, which we named caketoodoo, has only one table named tasks. It is a convention in Cake to have plural words for table names. Tasks, users, posts, and comments are all valid names for database tables in Cake. Our tasks table has a primary key named id. All tables in Cake applications' databases must have id as the primary key for the table.

Conventions in CakePHP: Database tables used with CakePHP should have plural names. All database tables should have a field named id as the primary key of the table.

We then configured Cake to use the caketoodoo database. This was achieved by having a file named database.php in the configuration directory of the application. In database.php, we set the default database to caketoodoo. We also set the database username and password that Cake will use to connect to the database server. Lastly, we made sure that Cake was able to connect to our database, by checking the default Cake page.

Conventions in Cake are what make the magic happen. By favoring convention over configuration, Cake makes productivity increase to a scary level without any loss of flexibility. We do not need to spend hours setting configuration values just to make the application run.
Setting the database name is the only configuration that we will need; everything else will be figured out "automagically" by Cake. Throughout this article, we will get to know more conventions that Cake follows.

Writing our First Model

Now that Cake is configured to work with the caketoodoo database, it's time to write our first model. In Cake, each database table should have a corresponding model. The model will be responsible for accessing and modifying data in the table. As we know, our database has only one table named tasks. So, we will need to define only one model. Here is how we will be doing it:

Time for Action: Creating the Task Model

1. Move into the directory CakeTooDoo/app/models.
2. Here, create a file named task.php.
3. In the file task.php, write the following code:

<?php
class Task extends AppModel {
    var $name = 'Task';
}
?>

4. Make sure there are no white spaces or tabs before the <?php tag and after the ?> tag. Then save the file.

What Just Happened?

We just created our first Cake model for the database table tasks. All the models in a CakePHP application are placed in the directory named models in the app directory.

Conventions in CakePHP: All model files are kept in the directory named models under the app directory. Normally, each database table will have a corresponding file (model) in this directory.

The file name for a model has to be the singular of the corresponding database table name followed by the .php extension. The model file for the tasks database table is therefore named task.php.

Conventions in CakePHP: The model filename should be the singular of the corresponding database table name.

Models basically contain a PHP class. The name of the class is also the singular of the database table name, but this time it is CamelCased. The name of our model is therefore Task.

Conventions in CakePHP: A model class name is also the singular of the name of the database table that it represents.

You will notice that this class inherits another class named AppModel. All models in CakePHP must inherit this class. The AppModel class inherits another class called Model. Model is a core CakePHP class that has all the basic functions to add, modify, delete, and access data from the database. By inheriting this class, all our models will also be able to call these functions, thus we do not need to define them separately each time we have a new model. All we need to do is to inherit the AppModel class in all our models. We then defined a variable named $name in the Task model, and assigned the name of the model to it. This is not mandatory, as Cake can figure out the name of the model automatically, but it is good practice to name it manually.
DWR Java AJAX User Interface: Basic Elements (Part 1)

Packt
20 Oct 2009
16 min read
Creating a Dynamic User Interface

The idea behind a dynamic user interface is to have a common "framework" for all samples. We will create a new web application and then add new features to the application as we go on. The user interface will look something like the following figure:

The user interface has three main areas: the title/logo that is static, the tabs that are dynamic, and the content area that shows the actual content. The idea behind this implementation is to use DWR functionality to generate tabs and to get content for the tab pages. The tabbed user interface is created using a CSS template from the Dynamic Drive CSS Library (http://dynamicdrive.com/style/csslibrary/item/css-tabs-menu). Tabs are read from a properties file, so it is possible to dynamically add new tabs to the web page. The following screenshot shows the user interface.

The following sequence diagram shows the application flow from the logical perspective. Because of the built-in DWR features, we don't need to worry very much about how asynchronous AJAX "stuff" works. This is, of course, a Good Thing. Now we will develop the application using the Eclipse IDE and the Geronimo test environment.

Creating a New Web Project

First, we will create a new web project. Using the Eclipse IDE we do the following: select the menu File | New | Dynamic Web Project. This opens the New Dynamic Web Project dialog; enter the project name DWREasyAjax and click Next, and accept the defaults on all the pages till the last page, where the Geronimo Deployment Plan is created as shown in the following screenshot:

Enter easyajax as Group Id and DWREasyAjax as Artifact Id. On clicking Finish, Eclipse creates a new web project. The following screenshot shows the generated project and the directory hierarchy. Before starting to do anything else, we need to copy DWR to our web application. All DWR functionality is present in the dwr.jar file, and we just copy that to the WEB-INF | lib directory. A couple of files are noteworthy: web.xml and geronimo-web.xml. The latter is generated for the Geronimo application server, and we can leave it as it is. Eclipse has an editor to show the contents of geronimo-web.xml when we double-click the file.

Configuring the Web Application

The context root is worth noting (visible in the screenshot above). We will need it when we test the application. The other XML file, web.xml, is very important as we all know. This XML will hold the DWR servlet definition and other possible initialization parameters. The following code shows the full contents of the web.xml file that we will use:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         id="WebApp_ID" version="2.5">
  <display-name>DWREasyAjax</display-name>
  <servlet>
    <display-name>DWR Servlet</display-name>
    <servlet-name>dwr-invoker</servlet-name>
    <servlet-class>
      org.directwebremoting.servlet.DwrServlet
    </servlet-class>
    <init-param>
      <param-name>debug</param-name>
      <param-value>true</param-value>
    </init-param>
  </servlet>
  <servlet-mapping>
    <servlet-name>dwr-invoker</servlet-name>
    <url-pattern>/dwr/*</url-pattern>
  </servlet-mapping>
  <welcome-file-list>
    <welcome-file>index.html</welcome-file>
    <welcome-file>index.htm</welcome-file>
    <welcome-file>index.jsp</welcome-file>
    <welcome-file>default.html</welcome-file>
    <welcome-file>default.htm</welcome-file>
    <welcome-file>default.jsp</welcome-file>
  </welcome-file-list>
</web-app>

DWR cannot function without the dwr.xml configuration file, so we need to create the configuration file. We use Eclipse to create a new XML file in the WEB-INF directory. The following is required for the user interface skeleton. It already includes the allow element for our DWR-based menu:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE dwr PUBLIC "-//GetAhead Limited//DTD Direct Web Remoting 2.0//EN" "http://getahead.org/dwr/dwr20.dtd">
<dwr>
  <allow>
    <create creator="new" javascript="HorizontalMenu">
      <param name="class" value="samples.HorizontalMenu" />
    </create>
  </allow>
</dwr>

In the allow element, there is a creator for the horizontal menu Java class that we are going to implement here. The creator that we use here is the new creator, which means that DWR will use an empty constructor to create Java objects for clients. The parameter named class holds the fully qualified class name.
The following code shows the full contents of the web.xml file that we will use:

<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
             http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         id="WebApp_ID" version="2.5">
  <display-name>DWREasyAjax</display-name>
  <servlet>
    <display-name>DWR Servlet</display-name>
    <servlet-name>dwr-invoker</servlet-name>
    <servlet-class>org.directwebremoting.servlet.DwrServlet</servlet-class>
    <init-param>
      <param-name>debug</param-name>
      <param-value>true</param-value>
    </init-param>
  </servlet>
  <servlet-mapping>
    <servlet-name>dwr-invoker</servlet-name>
    <url-pattern>/dwr/*</url-pattern>
  </servlet-mapping>
  <welcome-file-list>
    <welcome-file>index.html</welcome-file>
    <welcome-file>index.htm</welcome-file>
    <welcome-file>index.jsp</welcome-file>
    <welcome-file>default.html</welcome-file>
    <welcome-file>default.htm</welcome-file>
    <welcome-file>default.jsp</welcome-file>
  </welcome-file-list>
</web-app>

DWR cannot function without the dwr.xml configuration file, so we need to create it. We use Eclipse to create a new XML file in the WEB-INF directory. The following is required for the user interface skeleton. It already includes the allow element for our DWR-based menu:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE dwr PUBLIC
    "-//GetAhead Limited//DTD Direct Web Remoting 2.0//EN"
    "http://getahead.org/dwr/dwr20.dtd">
<dwr>
  <allow>
    <create creator="new" javascript="HorizontalMenu">
      <param name="class" value="samples.HorizontalMenu" />
    </create>
  </allow>
</dwr>

In the allow element, there is a creator for the horizontal menu Java class that we are going to implement here. The creator that we use here is the new creator, which means that DWR will use an empty constructor to create Java objects for clients. The parameter named class holds the fully qualified class name.

Developing the Web Application

Since we have already defined the name of the Java class that will be used for creating the menu, the next thing we do is implement it. The idea behind the HorizontalMenu class is that it is used to read a properties file that holds the menus that are going to be on the web page. We add properties to a file named dwrapplication.properties, and we create it in the same samples package as the HorizontalMenu class. The properties file for the menu items is as follows:

menu.1=Tables and lists,TablesAndLists
menu.2=Field completion,FieldCompletion

The syntax for the menu property is that it contains two elements separated by a comma. The first element is the name of the menu item; this is visible to the user. The second is the name of the HTML template file that will hold the page content of the menu item. The class contains just one method, which is used from JavaScript, via DWR, to retrieve the menu items. The full class implementation is shown here:

package samples;

import java.io.IOException;
import java.io.InputStream;
import java.util.List;
import java.util.Properties;
import java.util.Vector;

public class HorizontalMenu {

    public HorizontalMenu() {
    }

    public List<String> getMenuItems() throws IOException {
        List<String> menuItems = new Vector<String>();
        InputStream is = this.getClass().getClassLoader()
                .getResourceAsStream("samples/dwrapplication.properties");
        Properties appProps = new Properties();
        appProps.load(is);
        is.close();
        for (int menuCount = 1; true; menuCount++) {
            String menuItem = appProps.getProperty("menu." + menuCount);
            if (menuItem == null) {
                break;
            }
            menuItems.add(menuItem);
        }
        return menuItems;
    }
}

The implementation is straightforward. The getMenuItems() method loads the properties using the ClassLoader.getResourceAsStream() method, which searches the class path for the specified resource. Then, after loading the properties, a for loop is used to loop through the menu items, and a List of String objects is returned to the client. The client is the JavaScript callback function that we will see later. DWR automatically converts the List of String objects to a JavaScript array, so we don't have to worry about that.

Testing the Web Application

We haven't completed any client-side code yet, but let's test the code anyway. Testing uses the Geronimo test environment. The project context menu has the Run As menu that we use to test the application, as shown in the following screenshot:

Run on Server opens a wizard to define a new server runtime. The following screenshot shows that the Geronimo test environment has already been set up, and we just click Finish to run the application. If the test environment is not set up, we can manually define a new one in this dialog:

After we click Finish, Eclipse starts the Geronimo test environment and our application with it. When the server starts, the Console tab in Eclipse informs us that it has started. The Servers tab shows that the server is started and all the code has been synchronized, that is, the code is the most recent (synchronization happens whenever we save changes to a deployed file). The Servers tab also has a list of deployed applications under the server. Just the one application that we are testing here is visible in the Servers tab.

Now comes the interesting part—what are we going to test if we haven't really implemented anything? If we take a look at the web.xml file, we will find that we have defined one initialization parameter. The debug parameter is true, which means that DWR generates test pages for our remoted Java classes. We just point the browser (Firefox in our case) to the URL http://127.0.0.1:8080/DWREasyAjax/dwr and the following page opens up:

This page will show a list of all the classes that we allow to be remoted. When we click the class name, a test page opens as in the following screenshot:

This is an interesting page. We see all the allowed methods, in this case, all public class methods, since we didn't specifically include or exclude anything. The most important parts are the script elements, which we need to include in our HTML pages. DWR does not automatically know what we want in our web pages, so we must add the script includes in each page where we are using DWR and a remoted functionality. Then there is the possibility of testing remoted methods. When we test our own method, getMenuItems(), we see a response in an alert box:

The array in the alert box in the screenshot is the JavaScript array that DWR returns from our method.

Developing Web Pages

The next step is to add the web pages. Note that we can leave the test environment running. Whenever we change the application code, it is automatically published to the test environment, so we don't need to stop and start the server each time we make some changes and want to test the application. The CSS style sheet is from the Dynamic Drive CSS Library. The file is named styles.css, and it is in the WebContent directory in the Eclipse IDE.
The CSS code is as shown: /*URL: http://www.dynamicdrive.com/style/ */ .basictab{ padding: 3px 0; margin-left: 0; font: bold 12px Verdana; border-bottom: 1px solid gray; list-style-type: none; text-align: left; /*set to left, center, or right to align the menu as desired*/ } .basictab li{ display: inline; margin: 0; } .basictab li a{ text-decoration: none; padding: 3px 7px; margin-right: 3px; border: 1px solid gray; border-bottom: none; background-color: #f6ffd5; color: #2d2b2b; } .basictab li a:visited{ color: #2d2b2b; } .basictab li a:hover{ background-color: #DBFF6C; color: black; } .basictab li a:active{ color: black; } .basictab li.selected a{ /*selected tab effect*/ position: relative; top: 1px; padding-top: 4px; background-color: #DBFF6C; color: black; } This CSS is shown for the sake of completion, and we will not go into details of CSS style sheets. It is sufficient to say that CSS provides an excellent method to create websites with good presentation. The next step is the actual web page. We create an index.jsp page, in the WebContent directory, which will have the menu and also the JavaScript functions for our samples. It should be noted that although all JavaScript code is added to a single JSP page here in this sample, in "real" projects it would probably be more useful to create a separate file for JavaScript functions and include the JavaScript file in the HTML/JSP page using a code snippet such as this: <script type="text/javascript" src="myjavascriptcode/HorizontalMenu.js"/>. We will add JavaScript functions later for each sample. The following is the JSP code that shows the menu using the remoted HorizontalMenu class. <%@ page language="java" contentType="text/html; charset=ISO-8859-1" pageEncoding="ISO-8859-1"%> <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"> <link href="styles.css" rel="stylesheet" type="text/css"/> <script type='text/javascript' src='/DWREasyAjax/dwr/engine.js'></script> <script type='text/javascript' src='/DWREasyAjax/dwr/util.js'></script> <script type='text/javascript' src='/DWREasyAjax/dwr/interface/HorizontalMenu.js'></script> <title>DWR samples</title> <script type="text/javascript"> function loadMenuItems() { HorizontalMenu.getMenuItems(setMenuItems); } function getContent(contentId) { AppContent.getContent(contentId,setContent); } function menuItemFormatter(item) { elements=item.split(','); return '<li><a href="#" onclick="getContent(''+elements[1]+'');return false;">'+elements[0]+'</a></li>'; } function setMenuItems(menuItems) { menu=dwr.util.byId("dwrMenu"); menuItemsHtml=''; for(var i=0;i<menuItems.length;i++) { menuItemsHtml=menuItemsHtml+menuItemFormatter(menuItems[i]); } menu.innerHTML=menuItemsHtml; } function setContent(htmlArray) { var contentFunctions=''; var scriptToBeEvaled=''; var contentHtml=''; for(var i=0;i<htmlArray.length;i++) { var html=htmlArray[i]; if(html.toLowerCase().indexOf('<script')>-1) { if(html.indexOf('TO BE EVALED')>-1) { scriptToBeEvaled=html.substring(html.indexOf('>')+1,html.indexOf('</')); } else { eval(html.substring(html.indexOf('>')+1,html.indexOf('</'))); contentFunctions+=html; } } else { contentHtml+=html; } } contentScriptArea=dwr.util.byId("contentAreaFunctions"); contentScriptArea.innerHTML=contentFunctions; contentArea=dwr.util.byId("contentArea"); contentArea.innerHTML=contentHtml; if(scriptToBeEvaled!='') { eval(scriptToBeEvaled); } } </script> </head> <body 
onload="loadMenuItems()">
<h1>DWR Easy Java Ajax Applications</h1>
<ul class="basictab" id="dwrMenu">
</ul>
<div id="contentAreaFunctions">
</div>
<div id="contentArea">
</div>
</body>
</html>

This JSP is our user interface. The HTML is just normal HTML with a head element and a body element. The head includes references to a style sheet and to the DWR JavaScript files: engine.js, util.js, and our own HorizontalMenu.js. The util.js file is optional, but as it contains very useful functions, it could be included in all the web pages where we use the functions in util.js. The body element has a contentArea placeholder for the content pages just below the menu. It also contains a content area for the JavaScript functions of a particular content page. The body element's onload event executes the loadMenuItems() function when the page is loaded. The loadMenuItems() function calls the remoted method of the HorizontalMenu Java class. The parameter of the HorizontalMenu.getMenuItems() JavaScript function is the callback function that is called by DWR when the Java method has been executed and the menu items are returned.

The setMenuItems() function is the callback function for the loadMenuItems() function mentioned in the previous paragraph. While loading menu items, the HorizontalMenu.getMenuItems() remoted method returns the menu items as a List of Strings, which is passed as a parameter to the setMenuItems() function. The menu items are formatted using the menuItemFormatter() helper function. The menuItemFormatter() function creates li elements from the menu texts. Menus are formatted as links (a href), and they have an onclick event that calls the getContent() function, which in turn calls the AppContent.getContent() function. AppContent is a remoted Java class, which we haven't implemented yet, and its purpose is to read the HTML from a file based on the menu item that the user clicked. Implementation of AppContent and the content pages is described in the next section. The setContent() function sets the HTML content to the content area and also evaluates any JavaScript that is within the content to be inserted in the content area (this is not used very much, but it is there for those who need it). Our dynamic user interface looks like this:

Note the Firebug window at the bottom of the browser screen. The Firebug console in the screenshot shows one POST request to our HorizontalMenu.getMenuItems() method. Other Firebug features are extremely useful during development work, and we find it useful to keep Firebug enabled throughout the development work.

Callback Functions

We saw our first callback function as a parameter in the HorizontalMenu.getMenuItems(setMenuItems) function, and since callbacks are an important concept in DWR, it would be good to discuss them a little more now that we have seen their first usage. Callbacks are used to operate on the data that was returned from a remoted method. As DWR and AJAX are asynchronous, typical return values in RPCs (Remote Procedure Calls), as in Java calls, do not work. DWR hides the details of calling the callback functions and handles everything internally, from the moment we return a value from the remoted Java method to the point where the returned value reaches the callback function. Two methods are recommended when using callback functions. We have already seen the first method in the HorizontalMenu.getMenuItems(setMenuItems) function call.

Remember that there are no parameters in the getMenuItems() Java method, but in the JavaScript call, we added the callback function name at the end of the parameter list. If the Java method has parameters, then the JavaScript call is similar to CountryDB.getCountries(selectedLetters, setCountryRows), where selectedLetters is the input parameter for the Java method and setCountryRows is the name of the callback function (we will see the implementation later on). The second method of using callbacks is a meta-data object in the remote JavaScript call. An example (a full implementation is shown later in this article) is shown here:

CountryDB.saveCountryNotes(ccode, newNotes, {
    callback:function(newNotes) {
        //function body here
    }
});

Here, the function is anonymous and its implementation is included in the JavaScript call to the remoted Java method. One advantage here is that the code is easy to read, and the code is executed immediately after we get the return value from the Java method. The other advantage is that we can add extra options to the call. Extra options include a timeout and an error handler, as shown in the following example:

CountryDB.saveCountryNotes(ccode, newNotes, {
    callback:function(newNotes) {
        //function body here
    },
    timeout:10000,
    errorHandler:function(errorMsg) { alert(errorMsg); }
});

It is also possible to add a callback function to those Java methods that do not return a value. Adding a callback to methods with no return value is useful for getting a notification when a remote call has completed.

Afterword

Our first sample is ready, and it is also the basis for the following samples. We also looked at how applications are tested in the Eclipse environment. Using DWR, we can look at JavaScript code on the browser and Java code on the server as one. It may take a while to get used to it, but it will change the way we develop web applications. Logically, there is no longer a client and a server but just a single runtime platform that happens to be physically separate. In practice, of course, applications using DWR, with JavaScript on the client and Java on the server, still use the typical client-server interaction. This should be remembered when writing applications on the logically single runtime platform.
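One loose end from this part is worth sketching before we move on: the JSP above already calls AppContent.getContent() from JavaScript, but the AppContent class itself is only implemented in the next part of this article. Purely as an illustration of what such a remoted class could look like, here is a minimal, hypothetical sketch; the template location (samples/<contentId>.html on the classpath) and the line-by-line return format are assumptions for illustration, not the article's actual code:

package samples;

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.util.List;
import java.util.Vector;

// Hypothetical sketch of the AppContent class referred to above; the real
// implementation appears in the next part of the article and may differ.
public class AppContent {

    public AppContent() {
    }

    // Returns the lines of the HTML template named by the menu item, for example
    // "TablesAndLists" -> samples/TablesAndLists.html on the classpath (assumed location).
    public List<String> getContent(String contentId) throws IOException {
        List<String> html = new Vector<String>();
        InputStream is = this.getClass().getClassLoader()
                .getResourceAsStream("samples/" + contentId + ".html");
        if (is == null) {
            html.add("<p>No content found for " + contentId + "</p>");
            return html;
        }
        BufferedReader reader = new BufferedReader(new InputStreamReader(is));
        try {
            String line;
            while ((line = reader.readLine()) != null) {
                html.add(line);
            }
        } finally {
            reader.close();
        }
        return html;
    }
}

For DWR to remote such a class, dwr.xml would also need a create element for samples.AppContent, just like the one we added for HorizontalMenu, and the page would include /dwr/interface/AppContent.js.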

Setting Up a Development Environment

Packt
13 Jan 2012
14 min read
Selecting your virtual environment

Prisoners serving life sentences (in Canada) have what is known as a faint hope clause, where you have a glimmer of a chance of getting parole after 15 years. However, those waiting for Microsoft to provide a version of Virtual PC that can run Virtual Hard Disks (VHDs) hosting 64-bit operating systems (such as Windows Server 2008 R2) have no such hope of ever seeing that piece of software. But miracles do happen, and I hope that the release of a 64-bit capable Virtual PC renders this section of the article obsolete. If this has in fact happened, go with it and proceed to the following section.

Getting ready

Head into your computer's BIOS settings and enable the virtualization setting. The exact setting you are looking for varies widely, so please consult your manufacturer's documentation. This setting seems universally defaulted to off, so I am very sure you will need to perform this action.

How to do it...

Since you are still reading, however, it is safe to say that a miracle has not yet happened. Your first task is to select a suitable virtualization technology that can support a 64-bit guest operating system. The recipe here is to consider the choices in this order, with the outcome being your selected virtual environment:

Microsoft Virtualization: Hyper-V certainly has the ability to create and run Virtual Hard Disks (VHDs) with 64-bit operating systems. It's free—that is, you can install the Hyper-V role—but it requires the base operating system to be Windows Server 2008 R2, and it can be brutal to get that running properly on something like a laptop, primarily because of driver issues. I recommend that if your laptop is running Windows 7, you look at creating a dual boot, or a boot-to-VHD setup where the other boot option/partition is Windows Server 2008 R2. The main disadvantage is coming up with a (preferably licensed) installation of Windows Server 2008 R2 as the main computer operating system (or as a dual boot option). Or perhaps your company runs Hyper-V on their server farm and would be willing to host your development environment for you? Either way, if you have managed to get access to a Hyper-V server, you are good to go!

VMware Workstation: Go to http://www.vmware.com and download my absolute favorite virtualization technology—VMware Workstation—fully featured, powerful, and able to run on Windows 7. I have used it for years and love it. You must of course pay for a license, but please believe me, it is a worthwhile investment. You can sign up for a 30-day trial to explore the benefits. Note that you only need one copy of VMware Workstation to create a virtual machine. Once you have created it, you can run it anywhere using the freely available VMware Player.

Oracle VirtualBox: Go to http://www.virtualbox.org/ and download this free software that runs on Windows 7 and can create and host 64-bit guest operating systems. The reason that this is at the bottom of the list is that I personally do not have experience using this software. However, I have colleagues who have used it and have had no problems with it. Give it a try and see if it works as well as the paid VMware option.

With your selected virtualization technology in hand, head to the next section to install and configure Windows Server 2008 R2, which is the base operating system required for an installation of SharePoint Server 2010.
Installing and configuring Windows Server 2008 R2 SharePoint 2010 requires the Windows Server 2008 R2 operating system in order to run. In this recipe, we will confi gure the components of Windows Server 2008 necessary in order to get ready to install SQL Server 2008 and SharePoint 2010. Getting ready Download Windows Server 2008 R2 from your MSDN subscription, or type in windows server 2008 R2 trial download into your favorite search engine to download the 180-day trial from the Microsoft site. This article does not cover actually installing the base operating system. The specific instructions to do so will be dependent upon the virtualization software. Generally, it will be provided as an ISO image (the file extension will be .iso). ISO means a compressed disk image, and all virtualization software that I am aware of will let you mount (attach) an ISO image to the virtual machine as a CD Drive. This means that when you elect to create a new virtual machine, you will normally be prompted for the ISO image, and the installation of the operating system should proceed in a familiar and relatively automated fashion. So for this recipe, ready means that you have your virtualization software up and running, the Windows Server 2008 R2 base operating system is installed, and you are able to log in as the Administrator (and that you are effectively logging in for the first time). How to do it... Log in as the Administrator. You will be prompted to change the password the first time—I suggest choosing a very commonly used Microsoft password—Password1. However, feel free to select a password of your choice, but use it consistently throughout. The Initial configuration tasks screen will come up automatically. On this screen: Activate windows using your 180 day trial key or using your MSDN key. Select Provide computer name and domain. Change the computer name to a simpler one of your choice. In my case, I named the machine OPENHIGHWAY. Leave the Member of option as Workgroup. The computer will require a reboot. In the Update this server section, choose Download and install updates. Click on the Change settings link and select the option Never check for updates and click OK. Click the Check for updates link. The important updates will be selected. Click on Install Updates. Now is a good time for a coffee break! You will need to reboot the server when the updates complete. In the Customize this server section, click on Add Features. Select the Desktop Experience, Windows, PowerShell, Integrated, Scripting, and Environment options. Choose Add Required Features when prompted to do so. Reboot the server when prompted to do so. If the Initial configuration tasks screen appears now, or in the future, you may now select the checkbox for Do not show this window at logon. We will continue configuration from the Server Manager, which should be displayed on your screen. If not, launch the Server Manager using the icon on the taskbar. We return to Server Manager to continue the confi guration: OPTIONAL: Click on Configure Remote Desktop if you have a preference for accessing your virtual machine using Remote Desktop (RDP) instead of using the virtual machine's console software. In the Security Information section, click Go to Windows Firewall. Click on the Windows Firewall Properties link. From the dialog, go to each of the tabs, namely, Domain Profi le, Private Profi le, and Public Profi le and set the Firewall State to Off on each tab and click OK. 
Click on the Server Manager node, and from the main screen, click on the Configure IE ESC link. Set both options to Off and click OK. From the Server Manager, expand the Configuration node and then expand Local Users and Groups node, and then click on the Users folder. Right-click on the Administrator account and select Properties. Select the option for Password never expires and click OK. From the Server Manager, click the Roles node . Click the Add Roles link. Now, click on the Introductory screen and select the checkbox for Active Directory Domain Services. Click Next, again click on Next, and then click Install. After completion, click the Close this wizard and launch the Active Directory Domain Services Installation Wizard (dcpromo.exe) link. Now, carry out the following steps: From the new wizard that pops up, from the welcome screen, select the checkbox Use advanced mode installation, click Next, and again click on Next on the Operating System Compatibility screen. Select the option Create a new domain in a new forest and click Next. Choose your domain (FQDN)! This is completely internal to your development server and does not have to be real. For article purposes, I am using theopenhighway.net, as shown in the following screenshot. Then click Next: From the Set Forest Functional Level drop-down, choose Windows Server 2008 R2 and click Next. Click Next on the Additional Domain Controller Option screen. Select Yes on the Static IP assignment screen. Click Yes on the Dns Delegation Warning screen. Click Next on the Location for Database, Log Files, and SYSVOL screen. On the Directory Services Restore Mode Administrator Password screen, enter the same password that you used for the Administrator account, in my case, Password1. Click Next. Click Next on the Summary screen. Click on the Reboot On Completion screen. Otherwise reboot the server after the installation completes You will now confi gure a user account that will run the application pools for the SharePoint web applications in IIS. From the Server Manager, expand the Roles node. Keep expanding the Active Directory Domain Services until you see the Users folder. Click on the Users folder. Now carry out the following: Right-click on the Users folder and select New | User Enter SP_AppPool in the full name field and also enter SP_AppPool in the user logon field and click Next. Enter the password as Password1 (or the same as you had selected for the Administrator account). Deselect the option for User must change password at next logon and select the option for Password never expires. Click Next and then click Finish. A loopback check is a security feature to mitigate against reflection attacks, introduced in Windows Server 2003 SP1. You will likely encounter connection issues with your local websites and it is therefore universally recommended that you disable the loopback check on a development server. This is done from the registry editor: Click the Start menu button, choose Run…, enter Regedit, and click OK to bring up the registry editor. Navigate to HKEY_LOCAL_MACHINE | SYSTEM | CurrentControlSet | Control | Lsa Right-click the Lsa node and select New | DWORD (32-bit) Value In the place of New Value #1 type DisableLoopbackCheck. Right-click DisableLoopbackCheck, select Modify, change the value to 1, and click OK Congratulations! You have successfully confi gured Windows Server 2008 R2. There's more... The Windows Shutdown Event Tracker is simply annoying on a development machine. 
To turn this feature off, click the Start button, select Run…, enter gpedit.msc, and click OK. Scroll down, right-click on Display Shutdown Event Tracker, and select Edit. Select the Disabled option and click OK, as shown in the following screenshot: Installing and configuring SQL Server 2008 R2 SharePoint 2010 requires Microsoft SQL Server as a fundamental component of the overall SharePoint architecture. The content that you plan to manage in SharePoint, including web content and documents, literally is stored within and served from SQL Server databases. The SharePoint 2010 architecture itself relies on information stored in SQL Server databases, such as confi guration and the many service applications. In this recipe, we will install and configure the components of SQL Server 2008 necessary to install SharePoint 2010. Getting ready I do not recommend SQL Server Express for your development environment, although this is a possible, free, and valid choice for the installation of SharePoint 2010. In my personal experience, I have valued the full power and fl exibility of the full version of SQL Server as well as not having to live with the constraints and limitations of SQL Express. Besides, there is another little reason too! The Enterprise edition of SQL Server is either readily available with your MSDN subscription or downloadable as a trial from the Microsoft site. Download SQL Server 2008 R2 Enterprise from your MSDN subscription, or type in sql server 2008 enterprise R2 trial download into your favorite search engine to download the 180-day trial from the Microsoft site. For SQL Server 2008 R2 Enterprise, if you have MSDN software, then you will be provided with an ISO image that you can attach to the virtual machine. If you download your SQL Server from the Microsoft site as a trial, extract the software (it is a self-extracting EXE) on your local machine, and then share the folder with your virtual machine. Finallly, run the Setup.exe fi le. How to do it... Here is your recipe for installing SQL Server 2008 R2 Enterprise. Carry out the following steps to complete this recipe: You will be presented with the SQL Server Installation Center; on the left side of the screen, select Installation, as shown in the following screenshot: For the choices presented on the Installation screen, select New installation or add features to an existing installation. The Setup Support Rules (shown in the following screenshot) will run to identify any possible problems that might occur when installing SQL Server. All rules should pass. Click OK to continue: You will be presented with the SQL Server 2008 R2 Setup screen. On the fi rst screen, you can select an evaluation or use your product key (from, for example, MSDN) and then click Next. Accept the terms in the license, but do not check the Send feature usage data to Microsoft checkbox, and click Next. On the Setup Support Files screen, click Install. All tests will pass except for a warning that you can safely ignore (the one noting we are installing on a domain controller), and click Next, as shown in the following screenshot: On the Setup Role screen, select SQL Server Feature Installation and click Next. 
On the Feature Selection, as shown in the following screenshot, carry out the following tasks: In Instance Features, select Database Engine Services (and both SQL Server Replication and Full Text Search), Analysis Services, and Reporting Services In Shared Features, select Business Intelligence Development Studio, Management Tools Basic (and Management Tools Complete), and Microsoft Sync Framework Finally, click Next. On the Installation Rules screen, click Next On the Instance Confi guration screen, click Next. On the Disk Space Requirements screen, click Next On the Server Confi guration screen: Set the Startup Type for SQL Server Agent to be Automatic Click on the button Use the same account for all SQL Server services. Select the account NT AUTHORITYSYSTEM and click OK. Finally, click Next. On the Database Configuration Engine screen: Look for the Account Provisioning tab and click the Add Current User button under Specify SQL Server administrators. Finally, click Next On the Analysis Services Confi guration screen: Look for the Account Provisioning tab and click the Add Current User button under Specify which users have administrative permissions for Analysis Services. Finally, click Next. On the Reporting Services Configuration screen, select the option to Install but do not configure the report server. Now, click Next. On the Error Reporting Screen, click Next. On the Installation Confi guration Rules screen, click Next. On the Ready to Install screen, click Install. Your patience will be rewarded with the Complete screen! Finally, click Close. The Complete screen is shown in the following screenshot: You can close the SQL Server Installation Center. Confi gure SQL Server security for the SP_AppPool account: Click Start | All Programs | SQL Server 2008 R2 | SQL Server Management Studio. On Connect to server, type a period (.) in the Server Name field and click Connect. Expand the Security node. Right-click Logins and select New Login. Use the Search function and enter SP_AppPool in the box Enter object name to select. Click the check names button and then click OK. In my case, you see the properly formatted THEOPENHIGHWAYSP_AppPool in the login name text box. On the Server Roles tab, ensure that the dbcreator and securityadmin roles are selected (in addition to the already selected public role). Finally, click OK. Congratulations! You have successfully installed and confi gured SQL Server 2008 R2 Enterprise.

JBoss AS plug-in and the Eclipse Web Tools Platform

Packt
23 Oct 2009
4 min read
In this article, we recommend that you use the JBoss AS (version 4.2), which is a free J2EE Application Server that can be downloaded from http://www.jboss.org/jbossas/downloads/ (complete documentation can be downloaded from http://www.jboss.org/jbossas/docs/). JBoss AS plug-in and the Eclipse WTP JBoss AS plug-in can be treated as an elegant method of connecting a J2EE Application Server to the Eclipse IDE. It's important to know that JBoss AS plug-in does this by using the WTP support, which is a project included by default in the Eclipse IDE. WTP is a major project that extends the Eclipse platform with a strong support for Web and J2EE applications. In this case, WTP will sustain important operations, like starting the server in run/debug mode, stopping the server, and delegating WTP projects to their runtimes. For now, keep in mind that Eclipse supports a set of WTP servers and for every WTP server you may have one WTP runtime. Now, we will see how to install and configure the JBoss 4.2.2 runtime and server. Adding a WTP runtime in Eclipse In case of JBoss Tools, the main scope of Server Runtimes is to point to a server installation somewhere on your machine. By runtimes, we can use different configurations of the same server installed in different physical locations. Now, we will create a JBoss AS Runtime (you can extrapolate the steps shown below for any supported server): From the Window menu, select Preferences. In the Preferences window, expand the Server node and select the Runtime Environments child-node. On the right side of the window, you can see a list of currently installed runtimes, as shown in the following screenshot, where you can see that an Apache Tomcat runtime is reported (this is just an example, the Apache Tomcat runtime is not a default one). Now, if you want to install a new runtime, you should click the Add button from the top-right corner. This will bring in front the New Server Runtime Environment window as you can see in the following screenshot. Because we want to add a JBoss 4.2.2 runtime, we will select the JBoss 4.2 Runtime option (for other adapters proceed accordingly). After that, click Next for setting the runtime parameters. In the runtimes list, we have runtimes provided by WTP and runtimes provided by JBoss Tools (see the section marked in red on the previous screenshot). Because this article is about JBoss Tools, we will further discuss only the runtimes from this category. Here, we have five types of runtimes with the mention that the JBoss Deploy-Only Runtime type is for developers who start/stop/debug applications outside Eclipse. In this step, you will configure the JBoss runtime by indicating the runtime's name (in the Name field), the runtime's home directory (in the Home Directory field), the Java Runtime Environment associated with this runtime (in the JRE field), and the configuration type (in the Configuration field).In the following screenshot, we have done all these settings for our JBoss 4.2 Runtime. The official documentation of JBoss AS 4.2.2 recommends using JDK version 5. If you don't have this version in the JRE list, you can add it like this: Display the Preferences window by clicking the JRE button. In this window, click the Add button to display the Add JRE window. Continue by selecting the Standard VM option and click on the Next button. On the next page, use the Browse button to navigate to the JRE 5 home directory. 
Click on the Finish button and you should see a new entry in the Installed JREs field of the Preferences window (as shown in the following screenshot). Just check the checkbox of this new entry and click OK. Now, JRE 5 should be available in the JRE list of the New Server Runtime Environment window. After this, just click on the Finish button and the new runtime will be added, as shown in the following screenshot: From this window, you can also edit or remove a runtime by using the Edit and Remove buttons. These are automatically activated when you select a runtime from the list. As a final step, it is recommended to restart the Eclipse IDE.

BizTalk Server: Standard Message Exchange Patterns and Types of Service

Packt
06 Apr 2010
4 min read
Identifying Standard Message Exchange Patterns When we talk about Message Exchange Patterns, or MEPs, we're considering the direction and timing of data between the client and service. How do I get into the bus and what are the implications of those choices? Let's discuss the four primary options. Request/Response services This is probably the pattern that's most familiar to you. We're all comfortable making a function call to a component and waiting for a response. When a service uses this pattern, it's frequently performing a remote procedure call where the caller accesses functionality on the distant service and is blocked until either a timeout occurs or until the receiver sends a response that is expected by the caller. As we'll see below, while this pattern may set developers at ease, it may encourage bad behavior. Nevertheless, the cases where request/response services make the most sense are fine-grained functions and mashup services. If you need a list of active contracts that a hospital has with your company, then a request/response operation fits best. The client application should wait until that response is received before moving on to the next portion of the application. Or, let's say my web portal is calling an aggregate service, which takes contact data from five different systems and mashes them up into a single data entity that is then returned to the caller. This data is being requested for immediate presentation to an end user, and thus it's logical to solicit information from a service and wait to draw the screen until the completed result is loaded. BizTalk Server 2009 has full support for both consuming and publishing services adhering to a request/response pattern. When exposing request/response operations through BizTalk orchestrations, the orchestration port's Communication Pattern is set to Request-Response and the Port direction of communication is equal to I'll be receiving a request and sending a response. Once this orchestration port is bound to a physical request/response receive port, BizTalk takes care of correlating the response message with the appropriate thread that made the request. This is significant because by default, BizTalk is a purely asynchronous messaging engine. Even when you configure BizTalk Server to behave in a request/response fashion, it's only putting a facade on the standard underlying plumbing. A synchronous BizTalk service interface actually sits on top of a sophisticated mechanism of correlating MessageBox communication to simulate a request/response pattern. When consuming request/response services from BizTalk from an orchestration, the orchestration port's Communication Pattern is set to Request-Response and the Port direction of communication is equal to I'll be sending a request and receiving a response. The corresponding physical send port uses a solicit-response pattern and allows the user to set up both pipelines and maps for the inbound and outbound messages. One concern with either publishing or consuming request/response services is the issue of blocking and timeouts. From a BizTalk perspective, this means that whenever you publish an orchestration as a request/response service, you should always verify that the logic residing between inbound and outbound transmissions will either complete or fail within a relatively brief amount of time. This dictates wrapping this logic inside an orchestration Scope shape with a preset timeout that is longer than the standard web service timeout interval. 
For consuming services, a request/response pattern forces the orchestration to block and wait for the response to be returned. If the service response isn't necessary for processing to continue, consider using a Parallel shape that isolates the service interaction pattern on a dedicated branch. This way, the execution of unrelated workflow steps can proceed even though the downstream service is yet to respond.

Getting Started with Scratch 1.4 (Part 1)

Packt
16 Oct 2009
6 min read
Before we create any code, let's make sure we speak the same language. The interface at a glance When we encounter software that's unfamiliar to us, we often wonder, "Where do I begin?" Together, we'll answer that question and click through some important sections of the Scratch interface so that we can quickly start creating our own projects. Now, open Scratch and let's begin. Time for action – first step When we open Scratch, we notice that the development environment roughly divides into three distinct sections, as seen in the following screenshot. Moving from left to right, we have the following sections in sequential order: Blocks palette Script editor Stage Let's see if we can get our cat moving: In the blocks palette, click on the Looks button. Drag the switch to costume block onto the scripts area. Now, in the blocks palette, click on the Control button. Drag the when flag clicked block to the scripts area and snap it on top of the switch to costume block, as illustrated in the following screenshot. How to snap two blocks together?As you drag a block onto another block, a white line displays to indicate that the block you are dragging can be added to the script. When you see the white line, release your mouse to snap the block in place. In the scripts area, click on the Costumes tab to display the sprite's costumes. Click on costume2 to change the sprite on the stage. Now, click back on costume1 to change how the sprite displays on the stage. Directly beneath the stage is a sprites list. The current list displays Sprite1 and Stage. Click on the sprite named Stage and notice that the scripts area changes. Click back on Sprite1 in the sprites list and again note the change to the scripts area. Click on the flag above the stage to set our first Scratch program in motion. Watch closely, or you might miss it. What just happened? Congratulations! You created your first Scratch project. Let's take a closer look at what we did just now. As we clicked through the blocks palette, we saw that the available blocks changed depending on whether we chose Motion, Looks, or Control. Each set of blocks is color-coded to help us easily identify them in our scripts. The first block we added to the script instructed the sprite to display costume2. The second block provided a way to control our script by clicking on the flag. Blocks with a smooth top are called hats in Scratch terminology because they can be placed only at the top of a stack of blocks. Did you look closely at the blocks as you snapped the control block into the looks block? The bottom of the when flag clicked block had a protrusion like a puzzle piece that fits the indent on the top of the switch to costume block. As children, most of us probably have played a game where we needed to put the round peg into the round hole. Building a Scratch program is just that simple. We see instantly how one block may or may not fit into another block. Stack blocks have indents on top and bumps on the bottom that allow blocks to lock together to form a sequence of actions that we call a script. A block depicting its indent and bump can be seen in the following screenshot: When we clicked on the Costumes tab, we learned that our cat had two costumes or appearances. Clicking on the costume caused the cat on the stage to change its appearance. As we clicked around the sprites list, we discovered our project had two sprites: a cat and a stage. And the script we created for the cat didn't transfer to the stage. 
We finished the exercise by clicking on the flag. The change was subtle, but our cat appeared to take its first step when it switched to costume2. Basics of a Scratch project Inside every Scratch project, we find the following ingredients: sprites, costumes, blocks, scripts, and a stage. It's how we mix the ingredients with our imagination that creates captivating stories, animations, and games. Sprites bring our program to life, and every project has at least one. Throughout the book, we'll learn how to add and customize sprites. A sprite wears a costume. Change the costume and you change the way the sprite looks. If the sprite happens to be the stage, the costume is known as a background. Blocks are just categories of instructions that include motion, looks, sound, pen, control, sensing, operators, and variables. Scripts define a set of blocks that tell a sprite exactly what to do. Each block represents an instruction or piece of information that affects the sprite in some way. We're all actors on Scratch's stage Think of each sprite in a Scratch program as an actor. Each actor walks onto the stage and recites a set of lines from the script. How each actor interacts with another actor depends on the words the director chooses. On Scratch's stage, every object, even the stone in the corner, is a sprite capable of contributing to the story. As directors, we have full creative control. Time for action – save your work It's a good practice to get in the habit of saving your work. Save your work early, and save it often: To save your new project, click the disk icon at the top of the Scratch window or click File | Save As. A Save Project dialog box opens and asks you for a location and a New Filename. Enter some descriptive information for your project by supplying the Project author and notes About this project in the fields provided. Set the cat in motion Even though our script contains only two blocks, we have a problem. When we click on the flag, the sprite switches to a different costume and stops. If we try to click on the flag again, nothing appears to happen, and we can't get back to the first costume unless we go to the Costumes tab and select costume1. That's not fun. In our next exercise, we're going to switch between both costumes and create a lively animation.

Using OSGi Services

Packt
26 Aug 2014
14 min read
This article, created by Dr Alex Blewitt, the author of Mastering Eclipse Plug-in Development, will present OSGi services as a means to communicate with and connect applications. Unlike the Eclipse extension point mechanism, OSGi services can have multiple versions available at runtime and can work in other OSGi environments, such as Felix or other commercial OSGi runtimes. (For more resources related to this topic, see here.)

Overview of services

In an Eclipse or OSGi runtime, each individual bundle is its own separate module, which has explicit dependencies on library code via Import-Package, Require-Bundle, or Require-Capability. These express static relationships and provide a way of configuring the bundle's classpath. However, this presents a problem. If services are independent, how can they use contributions provided by other bundles? In Eclipse's case, the extension registry provides a means for code to look up providers. In a standalone OSGi environment, OSGi services provide a similar mechanism.

A service is an instance of a class that implements a service interface. When a service is created, it is registered with the services framework under one (or more) interfaces, along with a set of properties. Consumers can then get the service by asking the framework for implementers of that specific interface. Services can also be registered under an abstract class, but this is not recommended. Providing a service interface exposed as an abstract class can lead to unnecessary coupling of client to implementation. The following diagram gives an overview of services:

This separation allows the consumer and producer to depend on a common API bundle, but otherwise be completely decoupled from one another. This allows both the consumer and the producer to be mocked out or exchanged for different implementations in the future.

Registering a service programmatically

To register a service, an instance of the implementation class needs to be created and registered with the framework. Interactions with the framework are performed with an instance of BundleContext—typically provided in the BundleActivator.start method and stored for later use. The *FeedParser classes will be extended to support registration as a service instead of the Equinox extension registry.

Creating an activator

A bundle's activator is a class that is instantiated and coupled to the lifetime of the bundle. When a bundle is started, if a manifest entry Bundle-Activator exists, then the corresponding class is instantiated. As long as it implements the BundleActivator interface, the start method will be called. This method is passed an instance of BundleContext, which is the bundle's connection to the hosting OSGi framework. Create a class in the com.packtpub.e4.advanced.feeds project called com.packtpub.e4.advanced.feeds.internal.FeedsActivator, which implements the org.osgi.framework.BundleActivator interface. The quick fix may suggest adding org.osgi.framework as an imported package. Accept this, and modify the META-INF/MANIFEST.MF file as follows:

Import-Package: org.osgi.framework
Bundle-Activator: com.packtpub.e4.advanced.feeds.internal.FeedsActivator

The framework will automatically invoke the start method of the FeedsActivator when the bundle is started, and correspondingly, the stop method when the bundle is stopped.
Test this by inserting a pair of println calls: public class FeedsActivator implements BundleActivator { public void start(BundleContext context) throws Exception { System.out.println("Bundle started"); } public void stop(BundleContext context) throws Exception { System.out.println("Bundle stopped"); } } Now run the project as an OSGi framework with the feeds bundle, the Equinox console, and the Gogo shell. The required dependencies can be added by clicking on Add Required Bundles, although the Include optional dependencies checkbox does not need to be selected. Ensure that the other workspace and target bundles are deselected with the Deselect all button, as shown in the following screenshot: The required bundles are as follows: com.packtpub.e4.advanced.feeds org.apache.felix.gogo.command org.apache.felix.gogo.runtime org.apache.felix.gogo.shell org.eclipse.equinox.console org.eclipse.osgi On the console, when the bundle is started (which happens automatically if the Default Auto-Start is set to true), the Bundle started message should be seen. If the bundle does not start, ss in the console will print a list of bundles and start 2 will start the bundle with the ID 2. Afterwards, stop 2 can be used to stop bundle 2. Bundles can be stopped/started dynamically in an OSGi framework. Registering the service Once the FeedsActivator instance is created, a BundleContext instance will be available for interaction with the framework. This can be persisted for subsequent use in an instance field and can also be used directly to register a service. The BundleContext class provides a registerService method, which takes an interface, an instance, and an optional Dictionary instance of key/value pairs. This can be used to register instances of the feed parser at runtime. Modify the start method as follows: public void start(BundleContext context) throws Exception { context.registerService(IFeedParser.class, new RSSFeedParser(), null); context.registerService(IFeedParser.class, new AtomFeedParser(), null); context.registerService(IFeedParser.class, new MockFeedParser(), null); } Now start the framework again. In the console that is launched, look for the bundle corresponding to the feeds bundle: osgi> bundles | grep feeds com.packtpub.e4.advanced.feeds_1.0.0.qualifier [4] {com.packtpub.e4.advanced.feeds.IFeedParser}={service.id=56} {com.packtpub.e4.advanced.feeds.IFeedParser}={service.id=57} {com.packtpub.e4.advanced.feeds.IFeedParser}={service.id=58} This shows that bundle 4 has started three services, using the interface com.packtpub.e4.advanced.feeds.IFeedParser, and with service IDs 56, 57, and 58. It is also possible to query the runtime framework for services of a known interface type directly using the services command and an LDAP style filter: osgi> services (objectClass=com.packtpub.e4.advanced.feeds.IFeedParser) {com.packtpub.e4.advanced.feeds.IFeedParser}={service.id=56} "Registered by bundle:" com.packtpub.e4.advanced.feeds_1.0.0.qualifier [4] "No bundles using service." {com.packtpub.e4.advanced.feeds.IFeedParser}={service.id=57} "Registered by bundle:" com.packtpub.e4.advanced.feeds_1.0.0.qualifier [4] "No bundles using service." {com.packtpub.e4.advanced.feeds.IFeedParser}={service.id=58} "Registered by bundle:" com.packtpub.e4.advanced.feeds_1.0.0.qualifier [4] "No bundles using service." The results displayed represent the three services instantiated. 
They can be introspected using the service command passing the service.id: osgi> service 56 com.packtpub.e4.advanced.feeds.internal.RSSFeedParser@52ba638e osgi> service 57 com.packtpub.e4.advanced.feeds.internal.AtomFeedParser@3e64c3a osgi> service 58 com.packtpub.e4.advanced.feeds.internal.MockFeedParser@49d5e6da Priority of services Services have an implicit order, based on the order in which they were instantiated. Each time a service is registered, a global service.id is incremented. It is possible to define an explicit service ranking with an integer property. This is used to ensure relative priority between services, regardless of the order in which they are registered. For services with equal service.ranking values, the service.id values are compared. OSGi R6 adds an additional property, service.bundleid, which is used to denote the ID of the bundle that provides the service. This is not used to order services, and is for informational purposes only. Eclipse Luna uses OSGi R6. To pass a priority into the service registration, create a helper method called priority, which takes an int value and stores it in a Hashtable with the key service.ranking. This can be used to pass a priority to the service registration methods. The following code illustrates this: private Dictionary<String,Object> priority(int priority) { Hashtable<String, Object> dict = new Hashtable<String,Object>(); dict.put("service.ranking", new Integer(priority)); return dict; } public void start(BundleContext context) throws Exception { context.registerService(IFeedParser.class, new RSSFeedParser(), priority(1)); context.registerService(IFeedParser.class, new MockFeedParser(), priority(-1)); context.registerService(IFeedParser.class, new AtomFeedParser(), priority(2)); } Now when the framework starts, the services are displayed in order of priority: osgi> services | (objectClass=com.packtpub.e4.advanced.feeds.IFeedParser) {com.packtpub.e4.advanced.feeds.IFeedParser}={service.ranking=2, service.id=58} "Registered by bundle:" com.packtpub.e4.advanced.feeds_1.0.0.qualifier [4] "No bundles using service." {com.packtpub.e4.advanced.feeds.IFeedParser}={service.ranking=1, service.id=56} "Registered by bundle:" com.packtpub.e4.advanced.feeds_1.0.0.qualifier [4] "No bundles using service." {com.packtpub.e4.advanced.feeds.IFeedParser}={service.ranking=-1, service.id=57} "Registered by bundle:" com.packtpub.e4.advanced.feeds_1.0.0.qualifier [4] "No bundles using service." Dictionary was the original Java Map interface, and Hashtable the original HashMap implementation. They fell out of favor in Java 1.2 when Map and HashMap were introduced (mainly because they weren't synchronized by default) but OSGi was developed to run on early releases of Java (JSR 8 proposed adding OSGi as a standard for the Java platform). Not only that, early low-powered Java mobile devices didn't support the full Java platform, instead exposing the original Java 1.1 data structures. Because of this history, many APIs in OSGi refer to only Java 1.1 data structures so that low-powered devices can still run OSGi systems. Using the services The BundleContext instance can be used to acquire services as well as register them. FeedParserFactory, which originally used the extension registry, can be upgraded to refer to services instead. To obtain an instance of BundleContext, store it in the FeedsActivator.start method as a static variable. That way, classes elsewhere in the bundle will be able to acquire the context. 
Using the services

The BundleContext instance can be used to acquire services as well as to register them. FeedParserFactory, which originally used the extension registry, can be upgraded to refer to services instead. To obtain an instance of BundleContext, store it in a static variable in the FeedsActivator.start method. That way, classes elsewhere in the bundle will be able to acquire the context. An accessor method provides an easy way to do this:

public class FeedsActivator implements BundleActivator {
  private static BundleContext bundleContext;
  public static BundleContext getContext() {
    return bundleContext;
  }
  public void start(BundleContext context) throws Exception {
    // register services, as before
    bundleContext = context;
  }
  public void stop(BundleContext context) throws Exception {
    bundleContext = null;
  }
}

Now the FeedParserFactory class can be updated to acquire the services. An OSGi service is represented via a ServiceReference instance (a sharable object representing a handle to the service), which can be used to acquire the service instance:

import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import org.osgi.framework.BundleContext;
import org.osgi.framework.InvalidSyntaxException;
import org.osgi.framework.ServiceReference;

public class FeedParserFactory {
  public List<IFeedParser> getFeedParsers() {
    List<IFeedParser> parsers = new ArrayList<IFeedParser>();
    BundleContext context = FeedsActivator.getContext();
    try {
      Collection<ServiceReference<IFeedParser>> references =
        context.getServiceReferences(IFeedParser.class, null);
      for (ServiceReference<IFeedParser> reference : references) {
        parsers.add(context.getService(reference));
        context.ungetService(reference);
      }
    } catch (InvalidSyntaxException e) {
      // cannot occur: the null filter above is always valid
    }
    return parsers;
  }
}

In this case, the service references are obtained from the bundle context with a call to context.getServiceReferences(IFeedParser.class, null). The service references can be used to access the service's properties, and to acquire the service instance with the context.getService(ServiceReference) call. The contract is that the caller "borrows" the service and, when finished, returns it with an ungetService(ServiceReference) call. Strictly speaking, the service should only be used between the getService and ungetService calls, as its lifetime may be invalid afterwards; instead of returning a list of acquired services, the common pattern is to pass in a unit of work that accepts the service and to call ungetService afterwards, as sketched below. However, to fit in with the existing API, the service is acquired, added to the list, and then released immediately.
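The following is a minimal sketch of that unit-of-work pattern; it is not part of the book's code, and the Consumer-based signature is an assumption for illustration (a custom callback interface would serve equally well on pre-Java 8 runtimes):

import java.util.function.Consumer;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import com.packtpub.e4.advanced.feeds.IFeedParser;

public class FeedParserRunner {
  // Borrows the highest-ranked IFeedParser, runs the given unit of
  // work against it, and guarantees the service is released afterwards.
  public static void withParser(BundleContext context,
      Consumer<IFeedParser> work) {
    ServiceReference<IFeedParser> reference =
        context.getServiceReference(IFeedParser.class);
    if (reference == null) {
      return; // no parser currently registered
    }
    IFeedParser parser = context.getService(reference);
    try {
      if (parser != null) {
        work.accept(parser);
      }
    } finally {
      context.ungetService(reference);
    }
  }
}

Centralizing the borrow/release pair in this way prevents callers from holding on to a service instance after its providing bundle has been stopped.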
Lazy activation of bundles

Now run the project as an Eclipse application, with the feeds and feeds.ui bundles installed. When a new feed is created by navigating to File | New | Other | Feeds | Feed, and a feed such as http://alblue.bandlem.com/atom.xml is entered, the feeds will be shown in the navigator view. When drilling down, a NullPointerException may be seen in the logs, as shown in the following:

!MESSAGE An exception occurred invoking extension:
com.packtpub.e4.advanced.feeds.ui.feedNavigatorContent
for object com.packtpub.e4.advanced.feeds.Feed@770def59
!STACK 0
java.lang.NullPointerException
  at com.packtpub.e4.advanced.feeds.FeedParserFactory.getFeedParsers(FeedParserFactory.java:31)
  at com.packtpub.e4.advanced.feeds.ui.FeedContentProvider.getChildren(FeedContentProvider.java:80)
  at org.eclipse.ui.internal.navigator.extensions.SafeDelegateTreeContentProvider.getChildren(SafeDelegateTreeContentProvider.java:96)

Tracing through the code indicates that bundleContext is null, which implies that the feeds bundle has not yet been started. This can be confirmed in the console of the running Eclipse application by executing the following command:

osgi> ss | grep feeds
866 ACTIVE com.packtpub.e4.advanced.feeds.ui_1.0.0.qualifier
992 RESOLVED com.packtpub.e4.advanced.feeds_1.0.0.qualifier

While the feeds.ui bundle is active, the feeds bundle is not. Therefore, the services have not been instantiated, and bundleContext has not been cached.

By default, bundles are not started when their classes are accessed for the first time. If a bundle needs its activator to be called before any of its classes are used, it must be marked with an activation policy of lazy. This is done by adding the following entry to the MANIFEST.MF file:

Bundle-ActivationPolicy: lazy

The manifest editor can be used to add this configuration line by selecting Activate this plug-in when one of its classes is loaded, as shown in the following screenshot:

Now, when the application is run, the feeds will resolve appropriately.

Comparison of services and extension points

Both mechanisms (the extension registry and services) allow a list of feed parsers to be contributed and used by the application. What are the differences between them, and does either hold an advantage?

Both the registry and services approaches can be used outside of an Eclipse runtime. They work the same way in other OSGi implementations (such as Felix) and can be used interchangeably. The registry approach can also be used outside of OSGi, although that is far less common.

The registry encodes its information in the plugin.xml file by default, which means that it is typically edited as part of a bundle's install (it is possible to create registry entries from alternative implementations if desired, but this rarely happens). The registry has a notification system that can listen for contributions being added and removed.

The services approach uses the OSGi framework to store and maintain the list of services. These services have no explicit configuration file and, in fact, can be contributed by code (such as the registerService calls shown earlier) or by declarative representations. The separation between how a service is created and how it is registered is a key difference between the services and registry approaches. Like the registry, the OSGi services system can generate notifications when services come and go.

One key difference in an OSGi runtime is that bundles contributing to the Eclipse registry must be declared as singletons; that is, they have to use the ;singleton:=true directive on Bundle-SymbolicName. This means that there can only be one version of a bundle that exposes registry entries in a runtime, as opposed to multiple versions in the case of general services.

While the registry does provide mechanisms to instantiate extensions from factories, these typically involve simple configurations and/or properties hard-coded in the plugin.xml files themselves; they would not be appropriate for storing sensitive details such as passwords. A service, on the other hand, can be instantiated from whatever external configuration information is necessary and then registered, such as a JDBC connection for a database.

Finally, extensions in the registry are declarative by default and are activated on demand. This allows Eclipse to start quickly, because it does not need to build the full set of class loader objects or run code, bringing up services on demand. Although the approach shown previously did not use Declarative Services, it is possible to do so, as sketched below.
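As an illustration only (this is not part of the book's example code), the existing AtomFeedParser could be registered declaratively with OSGi Declarative Services annotations instead of through the activator. The @Component annotation comes from org.osgi.service.component.annotations, and the service.ranking property mirrors the priority(2) value used earlier; this sketch assumes the build tooling generates the accompanying component XML and that a Declarative Services runtime (such as Equinox DS) is present:

import org.osgi.service.component.annotations.Component;
import com.packtpub.e4.advanced.feeds.IFeedParser;

// Sketch only: DS, rather than the activator, registers this class as
// an IFeedParser service with ranking 2 when the bundle is activated.
@Component(service = IFeedParser.class,
    property = "service.ranking:Integer=2")
public class AtomFeedParser implements IFeedParser {
  // existing parsing implementation, unchanged
}

With this approach, no registerService call, activator, or cached BundleContext is needed, and the component is only activated when first used.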
Summary

This article introduced OSGi services as a means to extend an application's functionality. It also showed how to register and consume a service programmatically.