
How-To Tutorials


Uploading multiple files

Packt
24 Jun 2014
8 min read
Regarding the first task, multiple selection can be activated using the HTML5 `multiple` attribute of the file input element together with the JSF 2.2 pass-through attribute feature. When this attribute is present and its value is set to multiple, the file chooser can select multiple files. So, this task requires only minimal adjustments (the namespace declarations shown are the standard JSF 2.2 ones):

```xml
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:h="http://xmlns.jcp.org/jsf/html"
      xmlns:f5="http://xmlns.jcp.org/jsf/passthrough">
...
<h:form id="uploadFormId" enctype="multipart/form-data">
  <h:inputFile id="fileToUpload" required="true"
               f5:multiple="multiple"
               requiredMessage="No file selected ..."
               value="#{uploadBean.file}"/>
  <h:commandButton value="Upload" action="#{uploadBean.upload()}"/>
</h:form>
```

The second task is a little tricky, because when multiple files are selected, JSF will overwrite the previous Part instance with each file in the uploaded set. This is normal, since the bean property is of type Part, but we need a collection of Part instances. Fixing this issue requires us to focus on the renderer of the file component. This renderer is named FileRenderer (an extension of TextRenderer), and its decode method implementation is the key to our issue (the Part-handling loop at the end is the important part), as shown in the following code:

```java
@Override
public void decode(FacesContext context, UIComponent component) {
    rendererParamsNotNull(context, component);
    if (!shouldDecode(component)) {
        return;
    }
    String clientId = decodeBehaviors(context, component);
    if (clientId == null) {
        clientId = component.getClientId(context);
    }
    assert (clientId != null);
    ExternalContext externalContext = context.getExternalContext();
    Map<String, String> requestMap = externalContext.getRequestParameterMap();
    if (requestMap.containsKey(clientId)) {
        setSubmittedValue(component, requestMap.get(clientId));
    }
    HttpServletRequest request = (HttpServletRequest) externalContext.getRequest();
    try {
        Collection<Part> parts = request.getParts();
        for (Part cur : parts) {
            if (clientId.equals(cur.getName())) {
                component.setTransient(true);
                setSubmittedValue(component, cur);
            }
        }
    } catch (IOException ioe) {
        throw new FacesException(ioe);
    } catch (ServletException se) {
        throw new FacesException(se);
    }
}
```

This loop causes the Part override issue, but you can easily modify it to submit a list of Part instances instead of a single Part, as follows:

```java
try {
    Collection<Part> parts = request.getParts();
    List<Part> multiple = new ArrayList<>();
    for (Part cur : parts) {
        if (clientId.equals(cur.getName())) {
            component.setTransient(true);
            multiple.add(cur);
        }
    }
    this.setSubmittedValue(component, multiple);
} catch (IOException | ServletException e) {
    throw new FacesException(e);
}
```

Of course, in order to apply this modification, you need to create a custom file renderer and configure it properly in faces-config.xml. Afterwards, you can define a list of Part instances in your bean using the following code:

```java
...
private List<Part> files;

public List<Part> getFile() {
    return files;
}

public void setFile(List<Part> files) {
    this.files = files;
}
...
```

Each entry in the list is a file; therefore, you can write them to disk by iterating over the list using the following code:

```java
...
for (Part file : files) {
    try (InputStream inputStream = file.getInputStream();
         FileOutputStream outputStream = new FileOutputStream(
                 "D:" + File.separator + "files" + File.separator
                 + file.getSubmittedFileName())) {
        int bytesRead = 0;
        final byte[] chunk = new byte[1024];
        while ((bytesRead = inputStream.read(chunk)) != -1) {
            outputStream.write(chunk, 0, bytesRead);
        }
        FacesContext.getCurrentInstance().addMessage(null,
                new FacesMessage("Upload successfully ended: "
                        + file.getSubmittedFileName()));
    } catch (IOException e) {
        FacesContext.getCurrentInstance().addMessage(null,
                new FacesMessage("Upload failed!"));
    }
}
...
```

Upload and the indeterminate progress bar

When users upload small files, the process happens pretty fast; however, when large files are involved, it may take several seconds, or even minutes, to end.
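For reference, registering the custom file renderer mentioned earlier in faces-config.xml looks roughly like the following sketch. The renderer class name is illustrative; the component family and renderer type are the standard values used by h:inputFile:

```xml
<render-kit>
    <renderer>
        <component-family>javax.faces.Input</component-family>
        <renderer-type>javax.faces.File</renderer-type>
        <renderer-class>com.example.CustomFileRenderer</renderer-class>
    </renderer>
</render-kit>
```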
In this case, it is good practice to implement a progress bar that indicates the upload status. The simplest kind is known as an indeterminate progress bar, because it shows that the process is running but provides no information for estimating the time left or the number of bytes processed. In order to implement a progress bar, you need to develop an AJAX-based upload. The JSF AJAX mechanism allows us to determine when the AJAX request begins and when it completes. This can be achieved on the client side; therefore, an indeterminate progress bar can easily be implemented using the following code:

```javascript
<script type="text/javascript">
function progressBar(data) {
    if (data.status === "begin") {
        document.getElementById("uploadMsgId").innerHTML = "";
        document.getElementById("progressBarId")
                .setAttribute("src", "./resources/progress_bar.gif");
    }
    if (data.status === "complete") {
        document.getElementById("progressBarId").removeAttribute("src");
    }
}
</script>
```

```xml
...
<h:body>
  <h:messages id="uploadMsgId" globalOnly="true" showDetail="false"
              showSummary="true" style="color:red"/>
  <h:form id="uploadFormId" enctype="multipart/form-data">
    <h:inputFile id="fileToUpload" required="true"
                 requiredMessage="No file selected ..."
                 value="#{uploadBean.file}"/>
    <h:message showDetail="false" showSummary="true"
               for="fileToUpload" style="color:red"/>
    <h:commandButton value="Upload" action="#{uploadBean.upload()}">
      <f:ajax execute="fileToUpload" onevent="progressBar"
              render=":uploadMsgId @form"/>
    </h:commandButton>
  </h:form>
  <div>
    <img id="progressBarId" width="250" height="23"/>
  </div>
</h:body>
```

Upload and the determinate progress bar

A determinate progress bar is much more complicated. Usually, such a progress bar is based on a listener capable of monitoring the transferred bytes (if you have worked with Apache Commons FileUpload, you may have had the chance to implement such a listener).
In JSF 2.2, FacesServlet is annotated with @MultipartConfig for dealing with multipart data (file uploads), but there is no progress listener interface for it. Moreover, FacesServlet is declared final; therefore, we cannot extend it. These aspects leave us with pretty limited options. In order to implement a server-side progress bar, we need to implement the upload in a separate class (a servlet) and provide a listener. Alternatively, on the client side, we need a custom POST request that tricks FacesServlet into treating the request as if it had been formatted by jsf.js. In this article, you will see a workaround based on HTML5 XMLHttpRequest Level 2 (which can upload/download streams as Blob, File, and FormData), HTML5 progress events (which, for uploads, report the total and transferred byte counts), the HTML5 progress bar, and a custom Servlet 3.0 servlet. If you are not familiar with these HTML5 features, you should check out some dedicated documentation. Once you are familiar with them, it will be very easy to understand the following client-side code.
First, we have the following JavaScript code (note that the element id passed to getElementById must be exactly fileToUploadForm:fileToUpload, with no space after the colon):

```javascript
<script type="text/javascript">
function fileSelected() {
    hideProgressBar();
    updateProgress(0);
    document.getElementById("uploadStatus").innerHTML = "";
    var file = document.getElementById('fileToUploadForm:fileToUpload').files[0];
    if (file) {
        var fileSize = 0;
        if (file.size > 1048576)
            fileSize = (Math.round(file.size * 100 / 1048576) / 100).toString() + 'MB';
        else
            fileSize = (Math.round(file.size * 100 / 1024) / 100).toString() + 'KB';
        document.getElementById('fileName').innerHTML = 'Name: ' + file.name;
        document.getElementById('fileSize').innerHTML = 'Size: ' + fileSize;
        document.getElementById('fileType').innerHTML = 'Type: ' + file.type;
    }
}

function uploadFile() {
    showProgressBar();
    var fd = new FormData();
    fd.append("fileToUpload",
              document.getElementById('fileToUploadForm:fileToUpload').files[0]);
    var xhr = new XMLHttpRequest();
    xhr.upload.addEventListener("progress", uploadProgress, false);
    xhr.addEventListener("load", uploadComplete, false);
    xhr.addEventListener("error", uploadFailed, false);
    xhr.addEventListener("abort", uploadCanceled, false);
    xhr.open("POST", "UploadServlet");
    xhr.send(fd);
}

function uploadProgress(evt) {
    if (evt.lengthComputable) {
        var percentComplete = Math.round(evt.loaded * 100 / evt.total);
        updateProgress(percentComplete);
    }
}

function uploadComplete(evt) {
    document.getElementById("uploadStatus").innerHTML = "Upload successfully completed!";
}

function uploadFailed(evt) {
    hideProgressBar();
    document.getElementById("uploadStatus").innerHTML = "The upload could not be completed!";
}

function uploadCanceled(evt) {
    hideProgressBar();
    document.getElementById("uploadStatus").innerHTML = "The upload was canceled!";
}

var updateProgress = function(value) {
    var pBar = document.getElementById("progressBar");
    document.getElementById("progressNumber").innerHTML = value + "%";
    pBar.value = value;
}

function hideProgressBar() {
    document.getElementById("progressBar").style.visibility = "hidden";
    document.getElementById("progressNumber").style.visibility = "hidden";
}

function showProgressBar() {
    document.getElementById("progressBar").style.visibility = "visible";
    document.getElementById("progressNumber").style.visibility = "visible";
}
</script>
```

Further, we have the upload component that uses the preceding JavaScript code:

```xml
<h:body>
  <hr/>
  <div id="fileName"></div>
  <div id="fileSize"></div>
  <div id="fileType"></div>
  <hr/>
  <h:form id="fileToUploadForm" enctype="multipart/form-data">
    <h:inputFile id="fileToUpload" onchange="fileSelected();"/>
    <h:commandButton type="button" onclick="uploadFile()" value="Upload"/>
  </h:form>
  <hr/>
  <div id="uploadStatus"></div>
  <table>
    <tr>
      <td>
        <progress id="progressBar" style="visibility: hidden;"
                  value="0" max="100"></progress>
      </td>
      <td>
        <div id="progressNumber" style="visibility: hidden;">0 %</div>
      </td>
    </tr>
  </table>
  <hr/>
</h:body>
```

The servlet behind this solution is the UploadServlet presented earlier. For multiple file uploads and progress bars, you can extend this example, or choose a built-in solution, such as PrimeFaces Upload, RichFaces Upload, or the jQuery Upload plugin.

Summary

In this article, we saw how to upload multiple files using JSF 2.2 and the concepts of indeterminate and determinate progress bars.

Resources for Article:

Further resources on this subject:
- The Business Layer (Java EE 7 First Look) [article]
- The architecture of JavaScriptMVC [article]
- Differences in style between Java and Scala code [article]


Introducing variables

Packt
24 Jun 2014
6 min read
To store data, you have to put it in the right kind of variable. We can think of variables as boxes, and what you put in these boxes depends on what type of box it is. In most native programming languages, you have to declare a variable and its type.

Number variables

Let's go over some of the major types of variables. The first type is number variables. These variables store numbers, not letters. That means that if you tried to put a name in, let's say "John Bura", the app simply won't work.

Integer variables

There are numerous different types of number variables. Integer variables, called Int variables, can be positive or negative whole numbers; you cannot have a decimal at all. So, you could put -1 into an integer variable, but not 1.2.

Real variables

Real variables can be positive or negative, and they can be decimal numbers. A real variable can be 1.0, -40.4, or 100.1, for instance. There are other kinds of number variables as well, used in more specific situations. For the most part, integer and real variables are the ones you need to know; make sure you don't get them mixed up. If you were to run an app with this kind of mismatch, chances are it won't work.

String variables

There is another kind of variable that is really important: the string variable. String variables are variables that comprise letters or words. This means that if you want to record a character's name, you will have to use a string variable. In most programming languages, string variables have to be in quotes, for example, "John Bura". The quote marks tell the computer that the characters within are actually strings that the computer can use. When you put the number 1 into a string, is it a real number 1 or just a fake number? It's a fake number, because strings are not numbers; they are strings. Even though the string shows the number 1, it isn't actually the number 1.
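A quick sketch of this fake-number idea (shown in JavaScript; the variable names are illustrative):

```javascript
var fake = "1"; // a string holding the character '1'
var real = 1;   // an actual number 1

// Using + with a string operand produces concatenation, not addition:
var concatenated = fake + 1;    // "11", not 2

// Converting the string to a number first gives real arithmetic:
var added = Number(fake) + 1;   // 2
```

So the string "1" and the number 1 behave completely differently the moment you try to do math with them.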
Strings are meant to display characters, and numbers are meant to do math. Strings are not meant to do math; they just hold characters. If you tried to do math with a string, it wouldn't work (except in JavaScript, which we will talk about shortly). Strings shouldn't be used for calculations; they are meant to hold and display characters. If we have the string "1", it will be recorded as a character rather than an integer that can be used for calculations.

Boolean variables

The last main type of variable we need to talk about is the Boolean variable. Boolean variables are either true or false, and they are very important when it comes to games. They are used where there can only be two options. The following are some examples of Boolean variables:

- isAlive
- isShooting
- isInAppPurchaseCompleted
- isConnectedToInternet

Most of these variables start with the word is. This is usually done to signify that the variable is a Boolean. When you make games, you tend to use a lot of Boolean variables because there are so many states that game objects can be in. Often, these states have only two options, and the best thing to do is use a Boolean. Sometimes, you need to use an integer instead of a Boolean; usually, 0 equals false and 1 equals true.

Other variables

When it comes to game production, there are a lot of specific variables that differ from environment to environment. Sometimes, there are GameObject variables, and there can also be a whole bunch of more specific variables.

Declaring variables

If you want to store any kind of data in variables, you have to declare them first. In the backend of Construct 2, there are a lot of variables that are already declared for you. This means that Construct 2 takes the work out of declaring variables.
The variables that are taken care of for you include the following:

- Keyboard
- Mouse position
- Mouse angle
- Type of web browser

Writing variables in code

When we use Construct 2, a lot of the backend busywork has already been done for us. So, how do we declare variables in code? Usually, variables are declared at the top of the coding document, as shown in the following code:

```
Int score;
Real timescale = 1.2;
Bool isDead;
Bool isShooting = false;
String name = "John Bura";
```

Let's take a look at all of them. The type of the variable is listed first. In this case, we have the Int, Real, Bool (Boolean), and String variables. Next, we have the name of the variable. If you look carefully, you can see that certain variables have an = (equals sign) and some do not. When a variable has an equals sign, we initialize it. This means we set the information in the variable right away. Sometimes you need to do this, and at other times you do not. For example, a score does not need to be initialized, because we are going to change the score as the game progresses. As you already know, you can initialize a Boolean variable to either true or false; these are the only two states a Boolean variable can be in. You will also notice that there are quotes around the string variable. Let's take a look at some examples that won't work:

```
Int score = -1.2;
Bool isDead = "false";
String name = John Bura;
```

There is something wrong with all of these examples. First, the Int variable cannot hold a decimal. Second, the Bool variable has quotes around it. Lastly, the String variable has no quotes. In most environments, this will cause the program not to work. However, in HTML5 or JavaScript, the variable is changed to fit the situation.

Summary

In this article, we learned about the different types of variables and even looked at a few correct and incorrect variable declarations. If you are making a game, get used to making and setting lots of variables.
The best part is that Construct 2 makes handling variables really easy.

Resources for Article:

Further resources on this subject:
- 2D game development with Monkey [article]
- Microsoft XNA 4.0 Game Development: Receiving Player Input [article]
- Flash Game Development: Making of Astro-PANIC! [article]


Serving and processing forms

Packt
24 Jun 2014
13 min read
Spring supports different view technologies, but if we are using JSP-based views, we can make use of the Spring tag library tags to make up our JSP pages. These tags provide many useful, common functionalities, such as form binding, evaluating errors, outputting internationalized messages, and so on. In order to use these tags, we must add references to this tag library in our JSP pages as follows:

```jsp
<%@taglib prefix="form" uri="http://www.springframework.org/tags/form" %>
<%@taglib prefix="spring" uri="http://www.springframework.org/tags" %>
```

Data transfer takes place from the model to the view via the controller. The following line is a typical example of how we put data into the model from a controller:

```java
model.addAttribute("greeting", "Welcome");
```

Similarly, the next line shows how we retrieve that data in the view using a JSTL expression:

```jsp
<p> ${greeting} </p>
```

JavaServer Pages Standard Tag Library (JSTL) is also a tag library, provided by Oracle. It is a collection of useful JSP tags that encapsulate core functionality common to many JSP pages. We can add a reference to the JSTL tag library in our JSP pages as follows:

```jsp
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%>
```

However, what if we want to put data into the model from the view? How do we retrieve that data in the controller? For example, consider a scenario where an admin of our store wants to add new product information to our store by filling in and submitting an HTML form. How can we collect the values filled into the HTML form elements and process them in the controller? This is where the Spring tag library tags help us bind the HTML tag elements' values to a form-backing bean in the model. Later, the controller can retrieve the form-backing bean from the model using the @ModelAttribute annotation (org.springframework.web.bind.annotation.ModelAttribute). Form-backing beans (sometimes called form beans) are used to store form data.
We can even use our domain objects as form beans; this works well when there's a close match between the fields on the form and the properties of our domain object. Another approach is to create separate classes for form beans, which are sometimes called Data Transfer Objects (DTOs).

Time for action – serving and processing forms

The Spring tag library provides some special <form> and <input> tags that are more or less similar to the HTML form and input tags, but have some special attributes to bind the form element data to the form-backing bean. Let's create a Spring web form in our application to add new products to our product list by performing the following steps:

1. We open our ProductRepository interface and add one more method declaration in it as follows:

```java
void addProduct(Product product);
```

2. We then add an implementation for this method in the InMemoryProductRepository class as follows:

```java
public void addProduct(Product product) {
    listOfProducts.add(product);
}
```

3. We open our ProductService interface and add one more method declaration in it as follows:

```java
void addProduct(Product product);
```

4. And, we add an implementation for this method in the ProductServiceImpl class as follows:

```java
public void addProduct(Product product) {
    productRepository.addProduct(product);
}
```

5. We open our ProductController class and add two more request mapping methods as follows:

```java
@RequestMapping(value = "/add", method = RequestMethod.GET)
public String getAddNewProductForm(Model model) {
    Product newProduct = new Product();
    model.addAttribute("newProduct", newProduct);
    return "addProduct";
}

@RequestMapping(value = "/add", method = RequestMethod.POST)
public String processAddNewProductForm(@ModelAttribute("newProduct") Product newProduct) {
    productService.addProduct(newProduct);
    return "redirect:/products";
}
```

6. Finally, we add one more JSP view file called addProduct.jsp under src/main/webapp/WEB-INF/views/ and add the following tag reference declarations in it as the very first lines:

```jsp
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core"%>
<%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %>
```

Now, we add the following code snippet under the tag declaration lines and save addProduct.jsp (note that I have skipped the <form:input> binding tags for some of the fields of the product domain object, but I strongly encourage you to add binding tags for the skipped fields when you try out this exercise):

```jsp
<html>
  <head>
    <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
    <link rel="stylesheet" href="//netdna.bootstrapcdn.com/bootstrap/3.0.0/css/bootstrap.min.css">
    <title>Products</title>
  </head>
  <body>
    <section>
      <div class="jumbotron">
        <div class="container">
          <h1>Products</h1>
          <p>Add products</p>
        </div>
      </div>
    </section>
    <section class="container">
      <form:form modelAttribute="newProduct" class="form-horizontal">
        <fieldset>
          <legend>Add new product</legend>
          <div class="form-group">
            <label class="control-label col-lg-2" for="productId">Product Id</label>
            <div class="col-lg-10">
              <form:input id="productId" path="productId" type="text" class="form:input-large"/>
            </div>
          </div>
          <!-- Similarly bind <form:input> tags for the name, unitPrice, manufacturer,
               category, unitsInStock, and unitsInOrder fields -->
          <div class="form-group">
            <label class="control-label col-lg-2" for="description">Description</label>
            <div class="col-lg-10">
              <form:textarea id="description" path="description" rows="2"/>
            </div>
          </div>
          <div class="form-group">
            <label class="control-label col-lg-2" for="discontinued">Discontinued</label>
            <div class="col-lg-10">
              <form:checkbox id="discontinued" path="discontinued"/>
            </div>
          </div>
          <div class="form-group">
            <label class="control-label col-lg-2" for="condition">Condition</label>
            <div class="col-lg-10">
              <form:radiobutton path="condition" value="New"/>New
              <form:radiobutton path="condition" value="Old"/>Old
              <form:radiobutton path="condition" value="Refurbished"/>Refurbished
            </div>
          </div>
          <div class="form-group">
            <div class="col-lg-offset-2 col-lg-10">
              <input type="submit" id="btnAdd" class="btn btn-primary" value="Add"/>
            </div>
          </div>
        </fieldset>
      </form:form>
    </section>
  </body>
</html>
```

Now, we run our application and enter the URL http://localhost:8080/webstore/products/add. We will be able to see a web page that displays a web form where we can add the product information (screenshot: the add product web form). Now, we enter all the information related to the new product that we want to add and click on the Add button; we will see the new product added on the product listing page under the URL http://localhost:8080/webstore/products.

What just happened?

In the whole sequence, steps 5 and 6 are the important ones that need to be observed carefully. First, a brief note on what we have done in steps 1 to 4. In step 1, we created a method declaration addProduct in our ProductRepository interface to add new products. In step 2, we implemented the addProduct method in our InMemoryProductRepository class; the implementation just updates the existing listOfProducts by adding a new product to the list. Steps 3 and 4 are just a service layer extension for ProductRepository. In step 3, we declared a similar method, addProduct, in our ProductService interface and implemented it in step 4 to add products to the repository via the productRepository reference.
Okay, coming back to the important steps: in step 5, we added two request mapping methods, namely getAddNewProductForm and processAddNewProductForm, as follows:

```java
@RequestMapping(value = "/add", method = RequestMethod.GET)
public String getAddNewProductForm(Model model) {
    Product newProduct = new Product();
    model.addAttribute("newProduct", newProduct);
    return "addProduct";
}

@RequestMapping(value = "/add", method = RequestMethod.POST)
public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded) {
    productService.addProduct(productToBeAdded);
    return "redirect:/products";
}
```

If you observe these methods carefully, you will notice a peculiar thing: both methods have the same URL mapping value in their @RequestMapping annotation (value = "/add"). So, if we enter the URL http://localhost:8080/webstore/products/add in the browser, which method will Spring MVC map the request to? The answer lies in the second attribute of the @RequestMapping annotation (method = RequestMethod.GET and method = RequestMethod.POST). Even though both methods have the same URL mapping, they differ in the request method. What happens behind the screen is that when we enter the URL http://localhost:8080/webstore/products/add in the browser, it is considered a GET request, so Spring MVC maps it to the getAddNewProductForm method. Within this method, we simply attach a new, empty Product domain object to the model under the attribute name newProduct:

```java
Product newProduct = new Product();
model.addAttribute("newProduct", newProduct);
```

So, in the view addProduct.jsp, we can access this model object, newProduct. Before jumping into the processAddNewProductForm method, let's review the addProduct.jsp view file for a moment, so that we can understand the form processing flow without confusion.
In addProduct.jsp, we added a <form:form> tag from the Spring tag library using the following line of code:

```jsp
<form:form modelAttribute="newProduct" class="form-horizontal">
```

Since this special <form:form> tag comes from the Spring tag library, we need to add a reference to that tag library in our JSP file. That's why we added the following line at the top of the addProduct.jsp file in step 6:

```jsp
<%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %>
```

One of the important attributes of the Spring <form:form> tag is modelAttribute. In our case, we assigned newProduct as the value of modelAttribute in the <form:form> tag. If you recall, this value of modelAttribute and the attribute name we used to store the newProduct object in the model from our getAddNewProductForm method are the same. So, the newProduct object that we attached to the model in the controller method (getAddNewProductForm) is now bound to the form. This object is called the form-backing bean in Spring MVC. Now, notice each <form:input> tag inside the <form:form> tag, shown in the following code. You will observe a common attribute in every tag; this attribute is path:

```jsp
<form:input id="productId" path="productId" type="text" class="form:input-large"/>
```

The path attribute just indicates the field name, relative to the form-backing bean. So, the value entered in this input box at runtime will be bound to the corresponding field of the form bean. Okay, now is the time to come back and review our processAddNewProductForm method. When will this method be invoked? It will be invoked once we press the submit button of our form. Since every form submission is considered a POST request, this time the browser will send a POST request to the same URL, that is, http://localhost:8080/webstore/products/add.
So, this time, the processAddNewProductForm method gets invoked, since it is a POST request. Inside this method, we simply call the addProduct service method to add the new product to the repository, as follows:

```java
productService.addProduct(productToBeAdded);
```

However, the interesting question here is: how is the productToBeAdded object populated with the data that we entered in the form? The answer lies in the @ModelAttribute annotation (org.springframework.web.bind.annotation.ModelAttribute). Note the method signature of the processAddNewProductForm method, shown in the following line of code:

```java
public String processAddNewProductForm(@ModelAttribute("newProduct") Product productToBeAdded)
```

Here, if you look at the value attribute of the @ModelAttribute annotation, you will observe a pattern: the value of the @ModelAttribute annotation and the value of modelAttribute in the <form:form> tag are the same. So, Spring MVC knows that it should assign the form-bound newProduct object to the productToBeAdded parameter of the processAddNewProductForm method. The @ModelAttribute annotation is not only used to retrieve an object from the model; if we want to, we can even use it to add objects to the model. For instance, we can rewrite our getAddNewProductForm method to something like the following code with the use of the @ModelAttribute annotation:

```java
@RequestMapping(value = "/add", method = RequestMethod.GET)
public String getAddNewProductForm(@ModelAttribute("newProduct") Product newProduct) {
    return "addProduct";
}
```

Notice that we haven't created a new, empty Product domain object and attached it to the model. All we have done is add a parameter of type Product and annotate it with the @ModelAttribute annotation, so that Spring MVC knows it should create an object of Product and attach it to the model under the name newProduct.
One more thing that needs to be observed in the processAddNewProductForm method is the logical view name it returns: redirect:/products. So, what are we trying to tell Spring MVC by returning the string redirect:/products? To get the answer, observe the logical view name string carefully. If we split this string around the : (colon) symbol, we get two parts: the first part is the prefix redirect, and the second part is something that looks like a request path, /products. So, instead of returning a view name, we simply instruct Spring to issue a redirect request to the request path /products, which is the request path for the list method of our ProductController class. So, after submitting the form, we list the products using the list method of ProductController. As a matter of fact, when we return any request path with the redirect: prefix from a request mapping method, Spring uses a special view object, RedirectView (org.springframework.web.servlet.view.RedirectView), to issue the redirect command behind the screen. Instead of landing on a web page after the successful submission of a web form, we spawn a new request to the request path /products with the help of RedirectView. This pattern is called redirect-after-post, and it is a common pattern to use with web-based forms. We use this pattern to avoid double submission of the same form; if we press the browser's refresh or back button after submitting the form, there is a chance that the same form will be resubmitted.

Summary

This article introduced you to the Spring and Spring form tag libraries in web form handling. You also learned how to bind domain objects with views and how to use message bundles to externalize label caption texts.

Resources for Article:

Further resources on this subject:
- Spring MVC - Configuring and Deploying the Application [article]
- Getting Started With Spring MVC - Developing the MVC components [article]
- So, what is Spring for Android? [article]

Andre Antonio
23 Jun 2014
14 min read

Implementing Particles with melonJS

With the popularity of games on smartphones and browsers, 2D gaming is back on the scene, recalling the classic age of the late '80s and early '90s. At that time, most games used animation techniques with images (sprite sheets) to display special effects such as explosions, fire, or magic. In current games we can use particle systems instead, generating random and interesting visual effects. In this post I will briefly describe what a particle system is and how to implement one using the HTML5 engine melonJS. Note: Although the examples are specific to the melonJS game engine, using JavaScript as the programming language, the concepts and ideas can be adapted to other languages or game engines as well.

Particle systems

A particle system is a technique used to simulate a variety of special visual effects like sparks, dust, smoke, fire, explosions, rain, stars, falling snow, glowing trails, and many others. Basically, a particle system consists of emitters and the particles themselves. The particles are small objects or textures that have different properties, such as:

- Position (x, y)
- Velocity (vx, vy)
- Scale
- Angle
- Transparency (alpha)
- Texture (sprite or color)
- Lifetime

These particles perform some functions associated with their properties (movement, rotation, scaling, change in transparency, and so on) for a period of time, and are removed from the game after this interval (or returned to a pool of particles for reuse, ensuring better performance). Depending on how the particle system is implemented, some characteristics may be affected by the particle's lifetime, such as its size or transparency: for example, the older the particle, the smaller its size and transparency, until it is eliminated from the game. The emitter is responsible for launching the particles, acting as a point of origin and working with a group of dozens or hundreds of particles simultaneously.
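The lifetime-driven behavior described above can be sketched in a few lines of framework-free JavaScript. This is an illustrative sketch only, not melonJS API code; the function and property names (createParticle, updateParticle, maxLife, and so on) are invented for the example:

```javascript
// A particle carries position, velocity, and a remaining lifetime.
function createParticle(x, y, vx, vy, lifetime) {
  return { x: x, y: y, vx: vx, vy: vy, life: lifetime, maxLife: lifetime, alpha: 1, scale: 1 };
}

// Advance one particle by dt milliseconds.
// Returns false when the particle has expired and should be removed (or pooled).
function updateParticle(p, dt) {
  p.life -= dt;
  if (p.life <= 0) {
    return false; // lifetime over: eliminate or return to the pool
  }
  p.x += p.vx;
  p.y += p.vy;
  var age = p.life / p.maxLife; // 1 = just born, 0 = about to expire
  p.alpha = age;                // fade out as the particle ages
  p.scale = 0.5 + 0.5 * age;    // shrink toward half size over the lifetime
  return true;
}
```

A game loop would call updateParticle for every live particle each frame, dropping (or recycling) the ones that return false.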
Each emitter has several parameters that determine the characteristics of the particles generated by it. These parameters may include, among others:

- Emission rate of new particles
- Initial position of the particles
- Initial speed of the particles
- Gravity applied to the particles
- Lifetime of the particles

When activated, the emitter manages the number of generated particles and can have different behaviors: it can emit a certain number of particles simultaneously and stop (used in an explosion), or emit particles constantly (used in smoke). It is common for an emitter to allow random value ranges (like the initial position or the lifetime of the particles), creating different and interesting effects each time it is activated, unlike an animation sequence that always exhibits the same effects.

melonJS - a lightweight HTML5 game engine

melonJS (http://melonjs.org/) is an open source engine that uses JavaScript and the HTML5 Canvas element to develop games that can be accessed online via a compatible browser. It has native integration with the Tiled Map Editor (http://www.mapeditor.org/), facilitating the creation of game levels. It also has many features such as a basic physics engine and collision management, animations, support for sprite sheets and texture atlases, tween effects, audio support, a built-in particle system, object pooling, and mouse and touch support, among others. All of these features assist in the development and prototyping of games. The engine implements an inheritance mechanism (based on John Resig's Simple Inheritance), allowing any object to be extended. Several functions and classes are provided to the developer for direct use or for extending with new features.
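The way an emitter turns configured parameter ranges into varied particles can be illustrated with a generic, framework-free sketch. The parameter names here (minSpeedX, maxLife, and so on) are invented for the example and are not the melonJS me.ParticleEmitter API:

```javascript
// Pick a random value in [min, max).
function randomBetween(min, max) {
  return min + Math.random() * (max - min);
}

// Burst behavior: launch all particles at once, each one drawing its
// initial values from the emitter's configured random ranges, so no two
// activations look exactly alike.
function burst(emitter) {
  var particles = [];
  for (var i = 0; i < emitter.totalParticles; i++) {
    particles.push({
      x: emitter.x,                                                // origin point
      y: emitter.y,
      vx: randomBetween(emitter.minSpeedX, emitter.maxSpeedX),     // initial speed
      vy: randomBetween(emitter.minSpeedY, emitter.maxSpeedY),
      life: randomBetween(emitter.minLife, emitter.maxLife)        // lifetime in ms
    });
  }
  return particles;
}
```

A stream-style emitter would instead call the same per-particle logic on a timer, a few particles at a time, rather than all at once.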
The following is a simple example of engine use with an object of type me.Renderable (the engine's base class for drawing on the canvas):

game.myRenderable = me.Renderable.extend({
    // constructor
    init : function() {
        // set the object screen position (x, y) = (100, 200); width = 150 and height = 50
        this.parent(new me.Vector2d(100, 200), 150, 50);
        // set the z-order
        this.z = 1;
    },

    // logic - collision, movement, etc
    update : function(dt) {
        return false;
    },

    // draw - canvas
    draw : function(context) {
        // change the canvas context color
        context.fillStyle = '#000';
        // draw a simple rectangle in the canvas
        context.fillRect(this.pos.x, this.pos.y, this.width, this.height);
    }
});

// create and add the renderable in the game world
me.game.world.addChild(new game.myRenderable());

This example draws a rectangle of dimensions (150, 50) at position (100, 200) of the canvas, using the color black (#000) as filler.

Single particles with melonJS

Using the classes and functions of melonJS, we can assemble a simple particle system, simulating, for example, drops of water. The following example will emit some particles with an initial vertical velocity (Y axis); they will fall until they leave the screen area of the game, through the gravity mechanism implemented by the physics engine. The example can be accessed online (http://aaschmitz.github.io/melonjs-simple-particles) and the code is available on GitHub (https://github.com/aaschmitz/melonjs-simple-particles).
Simple particle example

The particles will be implemented through a renderable object (me.SpriteObject) that draws an image on the canvas, with attributes initialized randomly so that each particle is distinct from the others:

game.dropletParticle = me.SpriteObject.extend({
    init: function(x, y) {
        // class constructor
        this.parent(x, y, me.loader.getImage("droplet"));

        // the particle updates even off screen
        this.alwaysUpdate = true;

        // calculate random launch angle - convert degrees to radians
        var launch = Number.prototype.degToRad(Number.prototype.random(10, 80));

        // calculate random distance from the original x point
        var distance = Number.prototype.random(3, 5);

        // calculate random altitude from the original y point
        var altitude = Number.prototype.random(10, 12);

        // particle screen side (negative is left, positive is right, and zero is center)
        var screenSide = Number.prototype.random(-1, 1);

        // create new vector and set initial particle velocity
        this.vel = new me.Vector2d(Math.sin(launch) * distance * screenSide, -Math.cos(launch) * altitude);

        // set the default engine gravity
        me.sys.gravity = 0.3;
    },

    update: function(dt) {
        // check the particle position against the screen limits
        if ((this.pos.y > 0) && (this.pos.y < me.game.viewport.getHeight()) &&
            (this.pos.x > 0) && (this.pos.x < me.game.viewport.getWidth())) {
            // set particle position
            this.vel.y += me.sys.gravity;
            this.pos.x += this.vel.x;
            this.pos.y += this.vel.y;
            this.parent(dt);
            return true;
        } else {
            // particle off screen - remove it!
            me.game.world.removeChild(this, true);
            return false;
        }
    }
});

This code snippet is used as a particle; the game.dropletParticle class must be initialized with the position (x, y) of the particle on the screen. When the particle is not in the visible area of the screen, it will be deleted.
For various particles, we will use a basic emitter, which will be implemented as a normal JavaScript function:

game.startEmitter = function(x, y, count) {
    // add count particles in the game, all at once!
    for (var i = 0; i < count; i++)
        // add the particle in the game, using the mouse coordinates and z-order = 5
        // use the object pool for better performance!
        me.game.world.addChild(me.pool.pull("droplet", x, y), 5);
};

This emitter will be called to emit count particles at once, using the initial position (x, y). Note that, for best performance, the engine's pooling mechanism is used when generating the particles. Similar to the example described previously, you'll find drop and star effects implemented in an educational game, Vibrant Recycling. It is available online (http://vibrantrecycling.ciangames.com) and was developed by the author using the melonJS engine:

Single particles in the game Vibrant Recycling

Improved particles through the embedded particle system

For particles with more advanced or complex effects, the previous example proves to be poor, requiring the addition of more attributes to the particles (such as rotation, lifetime, and so on) and the creation of a more robust emitter with many different behaviors. Starting from version 1.0.0 of the melonJS engine, a particle system was added to it, facilitating the creation of advanced particles and emitters. The particle system consists of the following classes:

- me.Particle: This is the base class of particles, responsible for movement (using physics), updating the properties, and drawing each individual particle. Its properties are set and adjusted directly by the associated emitter.
- me.ParticleEmitter: This is the class responsible for generating the particles according to the parameters configured on each emitter.
It can emit particles with a stream behavior (launches particles sequentially, indefinitely or for a defined time) or a burst behavior (launches all particles at the same time). For more information, see the official documentation of the engine, available online at http://melonjs.github.io/docs/me.ParticleEmitter.html.

- me.ParticleContainer: This is the class associated with each emitter, which keeps track of all the particles generated by it, updating their logic and being responsible for the removal of particles that are outside the viewport or whose lifetime has expired.
- me.ParticleEmitterSettings: This is the object containing the default settings to be used in emitters, allowing you to create many reusable emitter models. Check the allowed parameters by consulting the official documentation of the engine, available online at http://melonjs.github.io/docs/me.ParticleEmitterSettings.html.

To use the particle system with melonJS, do the following:

1. Instantiate an emitter.
2. Adjust the properties of the emitter, or assign a previously created me.ParticleEmitterSettings object to it.
3. Add the emitter and its container to the game.
4. Enable the emitter through the burstParticles() or streamParticles() functions as many times as necessary.
5. At the end of the process, remove the emitter and its container from the game.

The following is a basic example of an emitter that, when enabled, will simulate an explosion, launching the particles upwards.
Note that the example uses a normal JavaScript function, and each time the function is called, the emitter is created, activated, and destroyed, which is not a good option if the function is executed several times:

game.makeExplosion = function(x, y) {
    // create a basic emitter at position (x, y) using sprite "explosion"
    var emitter = new me.ParticleEmitter(x, y, me.loader.getImage("explosion"));

    // adjust the emitter properties
    // launch 50 particles
    emitter.totalParticles = 50;
    // particles lifetime between 1s and 3s
    emitter.minLife = 1000;
    emitter.maxLife = 3000;
    // particles have initial velocity between 7 and 13
    emitter.speed = 10;
    emitter.speedVariation = 3;
    // initial launch angle between 70 and 110 degrees
    emitter.angle = Number.prototype.degToRad(90);
    emitter.angleVariation = Number.prototype.degToRad(20);
    // gravity 0.3 and z-order 10
    emitter.gravity = 0.3;
    emitter.z = 10;

    // add the emitter to the game world
    me.game.world.addChild(emitter);
    me.game.world.addChild(emitter.container);

    // launch all particles one time and stop
    emitter.burstParticles();

    // remove emitter from the game world
    me.game.world.removeChild(emitter);
    me.game.world.removeChild(emitter.container);
};

The example with simple particles described in the previous section will now be implemented using the built-in particle system of the engine. This can be accessed online (http://aaschmitz.github.io/melonjs-improved-particles) and the code is available on GitHub (https://github.com/aaschmitz/melonjs-improved-particles).
game.explosionManager = Object.extend({
    init: function(x, y) {
        // create a new emitter
        this.emitter = new me.ParticleEmitter(x, y);
        this.emitter.z = 10;

        // start the emitter with pre-defined params
        this.start(x, y);

        // add the emitter to game
        me.game.world.addChild(this.emitter);
        me.game.world.addChild(this.emitter.container);
    },

    start: function(x, y) {
        // set the emitter params
        this.emitter.image = me.loader.getImage("droplet");
        this.emitter.totalParticles = 20;
        this.emitter.minLife = 2000;
        this.emitter.maxLife = 5000;
        this.emitter.speed = 10;
        this.emitter.speedVariation = 3;
        this.emitter.angle = Number.prototype.degToRad(90);
        this.emitter.angleVariation = Number.prototype.degToRad(20);
        this.emitter.minStartScale = 0.6;
        this.emitter.maxStartScale = 1.0;
        this.emitter.gravity = 0.3;

        // move the emitter
        this.emitter.pos.set(x, y);
    },

    launch: function(x, y) {
        // move the emitter
        this.emitter.pos.set(x, y);
        // launch the particles!
        this.emitter.burstParticles();
    },

    remove: function() {
        // remove the emitter from game
        me.game.world.removeChild(this.emitter.container);
        me.game.world.removeChild(this.emitter);
    }
});

Comparing the example using simple particles with the one using the built-in particle system of the engine, we note that the second option is more robust, customizable, and fluid. It provides more realism and refinement in the effects created by the particles.

Improved particle example using the built-in particle system

Visual particles editor

The melonJS engine has a visual particles editor, available online (http://melonjs.github.io/examples/particles/), for creating emitters faster or for fine-tuning emitters already created.
melonJS particles editor

The particles editor has a selection menu at the top of the screen, where you can choose between several preset emitters such as fire, smoke, and rain. The right pane of the screen is used to configure (and tune) emitters. The left pane displays the configuration parameters of the emitter currently running, so that they can be copied into the code of the emitter to be used in the game. You can drag the indicators (colored circles) in the center of the screen with the mouse, directly affecting some properties of the emitters in a visual way.

Conclusion

The use of a particle system allows the creation of more enjoyable and interesting visual effects through the customization and randomization of the several parameters configured in the emitters, thus creating a greater diversity of generated particles. The melonJS engine proves to be a robust and viable alternative for creating games in HTML5. Being an open source project with a very active team and community, melonJS receives several enhancements and features with each new version, making it easier for game developers to use.

About the author

Andre Antonio Schmitz is a game developer focusing on HTML5 at Cian Games (http://www.ciangames.com). Living in Caxias do Sul, Brazil, he graduated with a Bachelor's degree in Computer Science and an MBA specialization in IT Management. You can find him on Twitter (https://twitter.com/aaaschmitz), Google+ (https://plus.google.com/+AndreAntonioSchmitz/), or GitHub (https://github.com/aaschmitz).

Packt
23 Jun 2014
9 min read

Using the Playlist

(For more resources related to this topic, see here.)

You must use patterns that were made in the FL Studio step sequencer in order to build an original musical production from scratch. In this way, you build, overdub, and continually layer in order to come up with an original work. You may also insert full-length WAV or MP3 files and function exclusively as a recording studio, where you would then record vocals on top of the completed instrumental. You can chop any type of material (patterns or audio) in the playlist, so, in the end, it may be a true fusion, mixture, and mash-up of digital music. All files exist as channels.

Using patterns to build a song

If you were a writer working on a novel, you would start with a blank document and build your story with words and paragraphs. If you were a painter, you would start with a blank canvas and use markers or a paintbrush to build your creative vision. When working with FL Studio, you build your song by pasting the various patterns from the step sequencer into the FL Studio playlist.

Getting ready…

In order to start using the playlist, you need to have some data entered in the steps within the channels and patterns of the step sequencer. You can press F5 or use the VIEW menu to bring up the playlist. Press Tab to toggle between the various windows that are open in FL Studio.

How to do it…

Let's look at two quick-start methods to paste/paint patterns into the playlist:

- Quick start: Hover your mouse over the PAT box and drag it up/down to select a pattern number, then click in the playlist (F5) to paint it in. In Fig 5.1, the PAT box is shown to the right of the TEMPO button, with a value of 162.000.
- Alternative method: You may also click where it says Kick, next to the word Playlist in Fig 5.1. Not only will this bring up a list of patterns, but it will also show a list of all the automation clips and audio files created in the current project.
Right-click on the same area to open up a really cool PROJECT PICKER window, where you can select your patterns or channels laid out in an awesome interface.

Let's take a look at arranging multiple patterns using the following steps:

1. Select a pattern in the step sequencer.
2. Make sure you select the SONG mode, next to the transport controls in Fig 5.1. This will make your music project enter the SONG mode, and the Play Position Marker (a small, orange-colored triangle) will be engaged when you press the Space bar to stop or play your project.
3. Click with the Paint tool in the desired area in order to paste the pattern you have selected. You may also click-and-drag to the right with the Paint tool in order to smoothly paste the same pattern over and over again. Use the right-click button to erase patterns.

The following screenshot shows that anything pasted in the Playlist section is a small graphic of the given step sequencer pattern and the information inside it. You can see that the Hat and 303ish 2 patterns come in at bar 5. The 303ish 2 pattern shows Piano roll data because we have entered Piano roll data using a virtual instrument on that particular pattern. When you enter notes in the Piano roll on a given channel or pattern, the Playlist window will reflect this in order to help you see and organize your arrangement. The following screenshot shows the Playlist window:

Fig 5.1

Click on the small square button next to the name of your pattern, and then select the source pattern to populate and choose from the available patterns, as per Fig 5.2. Once your patterns are pasted into the Playlist window, you will have the option to toggle between the patterns present in the list, as shown in the following screenshot:

Fig 5.2

In the following screenshot, we have opened the pattern selector under the step sequencer by hovering our mouse on Kick.
Your cursor will turn into a hand when you hover over Kick or any generic pattern name, indicating that the drop-down menu will get populated.

Fig 5.3

How it works…

By pasting your patterns (which hold your channel steps or notes) in the playlist, you can form a full music production project. This happens when you click on the SONG mode, and the small play position marker is engaged at the very start of the playlist on bar 1. Now, when you press the Space bar key, any pattern you paste into the playlist will be in time from left to right, and you can see the position pointer move along while highlighting the grid in time. A regular pattern operating in the PAT mode will simply play your data and loop back to the beginning. You can stretch patterns to extremely long lengths by using the Piano roll, but you still have to paint your given pattern into the playlist. The area where you paste your patterns in the playlist will also be the area that is used when exporting your project to rendered audio.

There's more…

You may also use the Slice tool directly (hover your mouse over the various tools at the upper left-hand corner of the playlist and look at the hint bar) in order to slice particular patterns and then edit or move them. Slicing is also exceedingly useful in the FL Studio Piano roll. If you select the Make unique option (shown in Fig 5.2), it will automatically create a new, unique pattern in the step sequencer. This can be handy if you want to tweak a pattern without navigating back to the step sequencer. This is also a great tool to remove part of your drums or incorporate a short silence to bring variety to your arrangement. Many times, you may want to slice off a certain part and then make it unique so that it becomes its own separate entity that you can mix later on. You can do this by pressing the Pattern clip drop-down box in Fig 5.2. This can prove handy because your song arrangement is affected and changed immediately.
This method can be used on any type of audio or samples. When working with vocals, you may want to make some of your pieces unique in order to change the volume of a specific word or add a different effect in the mixer. You may also paste any pattern you want, even if it has no steps or notes yet. This means that you can paste a blank pattern in the playlist, then go to this pattern while your arrangement is playing and add your notes later. The longer you stretch your notes in a channel/pattern, the longer your pattern will automatically stretch in the playlist. If your pattern is currently playing or being triggered in the playlist, you will clearly see the LEDs of the playing step (orange slits) at the bottom of your step sequencer.

A tremendous option when working with your arrangement is to double-click and drag where you see the numbers that represent the measures in your playlist (where the play position marker is located). When you do this, your selection will be highlighted in red, marking the measures/sections that will be played and looped back around. This is handy to specify a particular area that you want to play back, which can then be edited and fine-tuned without listening to the whole song. If you only want to edit the intro, you can click-and-drag your mouse to highlight this area only. This feature allows you to specify which part of your song will play back and is helpful when adding/changing/revising/experimenting with pieces and parts, recording vocals, and drawing automation curves.

Comparing patterns and audio

The patterns that you paste into the FL Studio playlist reflect how you manage/arrange your project and decide which parts will or will not play during your song. This is easy to see on the FL Studio Playlist, where you have a graphical readout of your patterns, audio, and automation clips. Audio clips will be shown on the playlist in a waveform readout, as will any type of MP3 or WAV file.
Patterns in the playlist can stretch very far if your Piano roll is extended.

Getting ready…

In order to start using the Playlist section and patterns, you need to have some data entered in the steps within the channels and patterns of the step sequencer. You may press F5 or use the VIEW menu to bring up the playlist. We will be working with a recorded vocal file. In some cases, your recording can show up directly on the FL Studio Playlist.

How to do it…

We will review the different types of patterns that are seen as specific images on the playlist using the following steps:

1. Press the Space bar key, or click on play with your mouse on the FL Studio transport controls, while in the SONG mode.
2. Paste your patterns as discussed in the previous recipe of this article, Using patterns to build a song.
3. If you have data in your steps and have not used the Piano roll on that channel, the data will be read out like the Hat pattern shown in Fig 5.4.
4. If you have a channel/pattern that uses a Piano roll, it will look like the 303ish 2 pattern in Fig 5.4.
5. If you have vocals, an MP3 file, or a WAV file, the audio data will be shown as in Fig 5.4, next to Track 5, titled Vocals track 1 in the header.
6. When working with automation clips, you will see automation curves and shapes, as seen in the example in Fig 5.4.

Fig 5.4

How it works…

The readouts on the various channels, patterns, audio data, and automation curves are a friendly way to keep your project organized and see exactly what is happening at specific moments (bars and beats) in time. You can use all of the features described in the previous recipe of this article, Using patterns to build a song. If you need to paste the same pattern multiple times in a row, you can use the Paint tool and click-and-drag the pattern to the right. You can also use the Maximize / restore button to resize FL Studio's main window and drag audio files directly into the playlist.
This restore button is shown in the following screenshot, directly to the left of the exit icon. You may click-and-drag an audio file from your desktop (or anywhere on your computer) into the FL Studio Playlist. The following screenshot shows the Maximize / restore button: Fig 5.5

Packt
23 Jun 2014
8 min read

Upgrading from Previous Versions

(For more resources related to this topic, see here.)

This article guides you through the requirements and steps necessary to upgrade your VMM 2008 R2 SP1 to VMM 2012 R2. There is no direct upgrade path from VMM 2008 R2 SP1 to VMM 2012 R2; you must first upgrade to VMM 2012, then to VMM 2012 SP1, and finally to VMM 2012 R2. VMM 2008 R2 SP1 -> VMM 2012 -> VMM 2012 SP1 -> VMM 2012 R2 is the correct upgrade path.

Upgrade notes:

- VMM 2012 cannot be upgraded directly to VMM 2012 R2; upgrading it to VMM 2012 SP1 first is required
- VMM 2012 can be installed on Windows Server 2008
- VMM 2012 SP1 requires Windows Server 2012
- VMM 2012 R2 requires Windows Server 2012 at a minimum (Windows Server 2012 R2 is recommended)
- Windows 2012 hosts can be managed by VMM 2012 SP1
- Windows 2012 R2 hosts require VMM 2012 R2
- System Center App Controller versions must match the VMM version

To debug a VMM installation, the logs are located in %ProgramData%\VMMLogs, and you can use the CMTrace.exe tool to monitor the content of the files in real time, including SetupWizard.log and vmmServer.log.

VMM 2012 is a huge product upgrade, and there have been many improvements. This article only covers the VMM upgrade. If you have previous versions of other System Center family components installed in your environment, make sure you follow the correct upgrade and installation order. System Center 2012 R2 has some new components, for which the installation order is also critical.
It is critical that you take the steps documented by Microsoft in Upgrade Sequencing for System Center 2012 R2 at http://go.microsoft.com/fwlink/?LinkId=328675 and use the following upgrade order:

1. Service Management Automation
2. Orchestrator
3. Service Manager
4. Data Protection Manager (DPM)
5. Operations Manager
6. Configuration Manager
7. Virtual Machine Manager (VMM)
8. App Controller
9. Service Provider Foundation
10. Windows Azure Pack for Windows Server
11. Service Bus Clouds
12. Windows Azure Pack
13. Service Reporting

Reviewing the upgrade options

This recipe will guide you through the upgrade options for VMM 2012 R2. Keep in mind that there is no direct upgrade path from VMM 2008 R2 to VMM 2012 R2.

How to do it...

Read through the following recommendations in order to upgrade your current VMM installation.

In-place upgrade from VMM 2008 R2 SP1 to VMM 2012

Use this method if your system meets the requirements for a VMM 2012 upgrade and you want to deploy it on the same server. The supported VMM version to upgrade from is VMM 2008 R2 SP1. If you need to upgrade VMM 2008 R2 to VMM 2008 R2 SP1, refer to http://go.microsoft.com/fwlink/?LinkID=197099. In addition, keep in mind that if you are running the SQL Server Express version, you will need to upgrade SQL Server to a fully supported version beforehand, as the Express version is not supported in VMM 2012. Once the system requirements are met and all of the prerequisites are installed, the upgrade process is straightforward. To follow the detailed recipe, refer to the Upgrading to VMM 2012 R2 recipe.

Upgrading from 2008 R2 SP1 to VMM 2012 on a different computer

Sometimes, you may not be able to do an in-place upgrade to VMM 2012 or even to VMM 2012 SP1. In this case, it is recommended that you use the following instructions: Uninstall the current VMM, retaining the database, and then restore the database on a supported version of SQL Server.
Next, install the VMM 2012 prerequisites on a new server (or on the same server, as long as it meets the hardware and OS requirements). Finally, install VMM 2012, providing the retained database information in the Database configuration dialog, and the VMM setup will upgrade the database. When the install process is finished, upgrade the Hyper-V hosts with the latest VMM agents. The following figure illustrates the upgrade process from VMM 2008 R2 SP1 to VMM 2012:

When performing an upgrade from VMM 2008 R2 SP1 with a local VMM database to a different server, the encrypted data will not be preserved, as the encryption keys are stored locally. The same rule applies when upgrading from VMM 2012 to VMM 2012 SP1, and from VMM 2012 SP1 to VMM 2012 R2, without using Distributed Key Management (DKM) in VMM 2012.

Upgrading from VMM 2012 to VMM 2012 SP1

To upgrade to VMM 2012 SP1, you should already have VMM 2012 up and running. VMM 2012 SP1 requires Windows Server 2012 and Windows ADK 8.0. If planning an in-place upgrade: back up the VMM database; uninstall VMM 2012 and App Controller (if applicable), retaining the database; perform an OS upgrade; and then install VMM 2012 SP1 and App Controller.

Upgrading from VMM 2012 SP1 to VMM 2012 R2

To upgrade to VMM 2012 R2, you should already have VMM 2012 SP1 up and running. VMM 2012 R2 requires Windows Server 2012 as the minimum OS (Windows Server 2012 R2 is recommended) and Windows ADK 8.1. If planning an in-place upgrade: back up the VMM database; uninstall VMM 2012 SP1 and App Controller (if applicable), retaining the database; perform an OS upgrade; and then install VMM 2012 R2 and App Controller.

Some more planning considerations are as follows:

- Virtual Server 2005 R2: VMM 2012 no longer supports Microsoft Virtual Server 2005 R2. If you have Virtual Server 2005 R2 or an unsupported ESXi version running and have not removed these hosts before the upgrade, they will be removed automatically during the upgrade process.
- VMware ESX and vCenter: For VMM 2012, the supported versions of VMware are ESXi 3.5 to ESXi 4.1 and vCenter 4.1. For VMM 2012 SP1/R2, the supported VMware versions are ESXi 4.1 to ESXi 5.1, and vCenter 4.1 to 5.0.
- SQL Server Express: This is not supported since VMM 2012. A full version is required.
- Performance and Resource Optimization (PRO): The PRO configurations are not retained during an upgrade to VMM 2012. If you have an Operations Manager (SCOM) integration configured, it will be removed during the upgrade process. Once the upgrade process is finished, you can integrate SCOM with VMM again.
- Library server: Since VMM 2012, VMM does not support a library server on Windows Server 2003. If you have one running and continue with the upgrade, you will not be able to use it. To use the same library server in VMM 2012, move it to a server running a supported OS before starting the upgrade.
- Choosing a service account and DKM settings during an upgrade: During an upgrade to VMM 2012, on the Configure service account and distributed key management page of the setup, you are required to create a VMM service account (preferably a domain account) and choose whether you want to use DKM to store the encryption keys in Active Directory (AD). Make sure to log on with the same account that was used during the VMM 2008 R2 installation; in some situations after the upgrade, the encrypted data (for example, the passwords in the templates) may not be available, depending on the selected VMM service account, and you will be required to re-enter it manually. For the service account, you can use either the Local System account or a domain account; a domain account is the recommended option, and when deploying a highly available VMM management server, it is the only option available. Note that DKM is not available in versions prior to VMM 2012.
Upgrading to a highly available VMM 2012: If you're thinking of upgrading to a highly available (HA) VMM, consider the following:

- Failover cluster: You must deploy the failover cluster before starting the upgrade.
- VMM database: You cannot deploy the SQL Server for the VMM database on highly available VMM management servers. If you plan on upgrading the current VMM server to an HA VMM, you need to first move the database to another server. As a best practice, it is recommended that you keep the SQL Server cluster separate from the VMM cluster.
- Library server: In a production or highly available environment, you need all of the VMM components to be highly available as well, not only the VMM management server. After upgrading to an HA VMM management server, it is recommended, as a best practice, that you relocate the VMM library to a clustered file server. In order to keep the custom fields and properties of the saved VMs, deploy those VMs to a host and save them to a new VMM 2012 library.
- VMM Self-Service Portal: This is not supported since VMM 2012 SP1. It is recommended that you install System Center App Controller instead.

How it works...

There are two methods to upgrade to VMM 2012 from VMM 2008 R2 SP1: an in-place upgrade and upgrading to another server. Before starting, review the initial steps and the VMM 2012 prerequisites, and perform a full backup of the VMM database. Uninstall VMM 2008 R2 SP1 (retaining the data) and restore the VMM database to another SQL Server running a supported version. During the installation, point to that database in order to have it upgraded. After the upgrade is finished, upgrade the host agents. VMM will be rolled back automatically in the event of a failure during the upgrade process and reverted to its original installation/configuration.

There's more...

The names of the VMM services have been changed in VMM 2012.
If you have any applications or scripts that refer to these service names, update them accordingly, as shown in the following table:

VMM version | VMM service display name | Service name
2008 R2 SP1 | Virtual Machine Manager | vmmservice
2008 R2 SP1 | Virtual Machine Manager Agent | vmmagent
2012 / 2012 SP1 / 2012 R2 | System Center Virtual Machine Manager | scvmmservice
2012 / 2012 SP1 / 2012 R2 | System Center Virtual Machine Manager Agent | scvmmagent

See also

- To move the file-based resources (for example, ISO images, scripts, and VHD/VHDX), refer to http://technet.microsoft.com/en-us/library/hh406929
- To move the virtual machine templates, refer to Exporting and Importing Service Templates in VMM at http://go.microsoft.com/fwlink/p/?LinkID=212431
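The table above is a simple one-to-one rename, so scripts that reference the old service names can be updated mechanically. The following sketch (JavaScript, purely illustrative; only the name mapping comes from the table, while the helper function and its name are our own) shows the idea:

```javascript
// Mapping of VMM 2008 R2 SP1 service names to their VMM 2012+ equivalents,
// as listed in the table above.
const serviceNameMap = {
  vmmservice: "scvmmservice",
  vmmagent: "scvmmagent"
};

// Replace old service-name references in a script's text with the new names.
// Hypothetical helper for illustration only; word boundaries keep already
// renamed "scvmmservice"/"scvmmagent" occurrences untouched.
function migrateServiceNames(scriptText) {
  return scriptText.replace(/\b(vmmservice|vmmagent)\b/g,
    (match) => serviceNameMap[match]);
}
```

Running it over a line such as `net stop vmmservice` would yield `net stop scvmmservice`, and it is safe to run twice because the new names no longer match the pattern.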
End User Transactions

Packt
23 Jun 2014
13 min read
(For more resources related to this topic, see here.)

An end user transaction code, or simply T-code, is a functionality provided by SAP that calls a new screen to carry out day-to-day operational activities. A transaction code is a four-character command entered in SAP by the end user to perform routine tasks. It can also be a combination of characters and numbers, for example, FS01. Each module has its own uniquely named T-codes. For instance, a FICO module T-code is FI01, while a Project Systems T-code is CJ20. The T-code, as we will call it throughout the article, is a technical name that is entered in the command field to initiate a new GUI window. In this article, we will cover all the important T-codes that end users or administrators use on a daily basis. Further, you will also learn more about the standard reports that SAP has delivered to ease daily activities.

Daily transactional codes

On a daily basis, an end user needs to access T-codes to perform daily transactions. All T-codes are entered in a command field. A command field is a space designed by SAP for entering T-codes. There are multiple ways to enter a T-code; we will gradually learn about the different approaches. The first approach is to enter the T-code in the command field, as shown in the following screenshot:

Second, T-codes can be accessed via SAP Easy Access. By double-clicking on a node, the associated application is called and the "start of application" message is populated at the bottom of the screen. SAP Easy Access is the first screen you see when you log on. The following screenshot shows the SAP Easy Access window:

We don't have to remember any T-codes. SAP provides a functionality to store T-codes by adding them under Favorites.
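The command field accepts a small, regular grammar: an optional session prefix followed by the T-code itself. As a purely illustrative sketch (JavaScript; the function and its names are ours, not part of SAP), the entry can be broken down like this:

```javascript
// Parse a SAP command-field entry such as "/nPA20", "/oFS01", or "PA30".
// "/n" starts the transaction in the same session, "/o" in a new session,
// and a bare code simply starts that transaction.
function parseCommand(input) {
  const match = /^(\/n|\/o)?([A-Za-z0-9_]+)$/.exec(input.trim());
  if (match === null) return null; // not a valid command-field entry
  const prefix = match[1] || "";
  const mode = prefix === "/o" ? "new-session"
             : prefix === "/n" ? "same-session"
             : "default";
  return { mode: mode, tcode: match[2].toUpperCase() };
}
```

For example, `parseCommand("/oFS01")` identifies the new-session prefix and the T-code FS01, matching the prefix table discussed below.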
To add a T-code to Favorites, navigate to Favorites | Insert transaction, as shown in the following screenshot, or simply press Ctrl + Shift + F4, and then enter the T-code that we wish to add as a favorite.

There are different ways to call a technical screen using a T-code. They are shown in the following table:

- /n + T-code (for example, /nPA20): If we wish to call the technical screen in the same session, we may use the /n prefix.
- /o + T-code (for example, /oFS01): If we wish to call the screen in a new session, we may use the /o prefix.

Frequently used T-codes

Let's look closely at the important or frequently used T-codes for administration or transactional purposes.

The Recruitment submodule

The following are the essential T-codes in the Recruitment submodule:

- PB10: This T-code is used for initial data entry. It performs actions similar to the PB40 T-code. The mandatory fields ought to be filled by the user to proceed to the next infotype.
- PB20: This T-code is used for display purposes only.
- PB30: This T-code is used to make changes to an applicant's data, for example, changing a wrongly entered date of birth or an incorrect address.
- PBA1: This T-code provides the functionality to bulk process applicants' data. Multiple applicants can be processed at the same time, unlike with the PB30 T-code, which processes every applicant's data individually. Applicants' IDs along with their names are fetched using this T-code for easy processing.
- PBA2: This T-code is useful when listing applicants based on their advertising medium for bulk processing. It helps to filter applicants based on a particular advertising channel such as a portal.
- PBAW: This is used to maintain the advertisements used by the client to process applicants' data.
- PBAY: All the vacant positions can be listed using this T-code. If positions are not flagged as vacant in the Organizational Management (OM) submodule, they can be maintained via this T-code.
- PBAA: A recruitment medium, such as a job portal site, that is linked with an advertisement medium is evaluated using this T-code.
- PBA7: This is an important T-code used to transfer an applicant to an employee. The applicant gets converted into an employee using this T-code. Here, the integration between the Recruitment and Personnel Administration submodules comes into the picture.
- PBA8: To confirm whether an applicant has been transferred to an employee, PBA8 needs to be executed. The system throws a message that processing has been carried out successfully for the applicants.

After the PBA8 T-code is executed, we will see a message similar to the one shown in the following screenshot:

The Organization Management submodule

We will cover some of the important T-codes used to design and develop the organization structure in the following table:

- PPOCE: This T-code is used to create an organizational structure. It is a graphically supported interface with icons to easily differentiate between object types such as org unit and position.
- PPOC_OLD: SAP provides multiple interfaces to create a structure. This T-code is one such interface that is pretty simple and easy to use.
- PP01: This is also referred to as Expert Mode, because to work in this interface one needs in-depth knowledge of the object types, like SPOCK, where S represents position and O represents organization unit, and of the relationships A/B, where A is the bottom-up approach and B is the top-down approach.
- PO10: This T-code is used to build structures using object types individually, based on SPOCK. It is used to create an org unit; this T-code creates the object type O, organization unit.
- PO13: This is used to create the position object type.
- PO03: This T-code is used to create the job object type.
- PP03: This is an action-based T-code that helps infotypes get populated one after another. All of the infotypes, such as 1000 (object), 1001 (relationships), and 1002 (description), can be created using this interface.
- PO14: Tasks, which are the day-to-day activities performed by the personnel, can be maintained using this T-code.

The Personnel Administration submodule

The Personnel Administration submodule deals with everything related to the master data of employees. Some of the frequently used T-codes are listed as follows:

- PA20: The master data of an employee is displayed using this T-code.
- PA30: The master data is maintained via this T-code. Employee details such as address and date of birth can be edited using this T-code.
- PA40: Personnel actions are performed using this T-code. Personnel actions such as hiring and promotions, known as action types, are executed for employees.
- PA42: This T-code, known as the fast entry for action solution, helps a company maintain large amounts of data. The information captured using this solution is highly accurate.
- PA70: This T-code, known as the fast entry functionality, allows the maintenance of master data for multiple employees at the same time. For example, the recurring payments and deductions (0014) infotype can be maintained for multiple employees.

The usage of the PA70 T-code is shown in the following screenshot. Multiple employees can be entered, and the corresponding wage type, amount, currency, and so on can be provided for these employees. Using this functionality saves the administrator's time.

The Time Management submodule

The Time Management submodule is used to capture the time an employee has spent at their workplace or make a note of their absenteeism. The important T-codes that maintain time data are covered in the following table:

- PT01: The work schedule of the employee is created using this T-code. The work schedule is simply the duration of work, say, for instance, 9 a.m. to 6 p.m.
- PTMW: The time manager's workplace action allows us to have multiple views, such as the one-day view and the multiday view. It is used to administer and manage time.
- PP61: This T-code is used to change a shift plan for the employee.
- PA61: This T-code, known as maintain time data, is used to maintain time data for the employees. Only time-related infotypes, such as Absences, Attendances, and Overtime, are maintained via this T-code.
- PA71: This T-code, known as the fast entry time data action, is used to capture multiple employees' time-related data.
- PT50: This T-code, known as quota overview, is used to display the quota entitlements and leave balances of an employee.
- PT62: The attendance check T-code is used to create a list of employees who are absent, along with their reasons and the attendance time.
- PT60: This T-code is used for time evaluation. It is a program that evaluates the time data of an employee. Also, the wage types are processed using this program.
- PT_ERL00: Time evaluation messages are displayed using this T-code.
- PT_CLSTB2: Time evaluation results can be accessed via this T-code.
- CAC1: Using this T-code, a data entry profile is created. Data entry profiles are maintained for employees to capture their daily working hours, absences, and so on.
- CATA: This T-code is used to transfer data to target components such as PS, HR, and CO.

The Payroll Accounting submodule

The gross and net calculations of wages are performed using this submodule. We will cover all the important T-codes that are used on a daily basis in the following table:

- PU03: This T-code can be used to change the payroll status of an employee if necessary. It lets us change the master data that already exists, for example, locking a personnel number. One must exercise caution when working with this T-code; it's a sensitive T-code because it is related to an employee's pay. Also, time data for the employees is controlled using this T-code.
- PA03: The control record is accessed via this T-code. The control record has key characteristics of how a payroll is processed. This T-code is normally not authorized by administrators.
- PC00_MXX_SIMU: This T-code is used for the simulation run of a payroll. The test flag is automatically set when this T-code is executed.
- PC00_MXX_CALC: A live payroll run can be performed using this T-code. The test flag is still available to be used if required.
- PC00_MXX_PA03_RELEA: This T-code is normally used by end users to release the control record. Master data and time data are locked when this T-code is executed, and changes cannot be made once it has run.
- PC00_MXX_PA03_CORR: This T-code is used to make any changes to the master data or time data. The status has to be reverted to "release" to run a payroll for the payroll period.
- PC00_MXX_PA03_END: Once all the activities are performed for the payroll period, the control record must be exited in order to proceed to the subsequent periods.
- PC00_MXX_CEDT: The remuneration statement, or payslip, can be displayed using this T-code.
- PE51: The payslip is designed using this T-code. The payments, deductions, and net pay can be designed using this T-code.
- PC00_MXX_CDTA: The data medium exchange for banks can be achieved using this tool.
- PUOC_99: The off-cycle payroll, or on-demand payroll as it's called in SAP, is used to make payments or deductions in a nonregular pay period, such as in the middle of the payroll period.
- PC00_M99_CIPE: The payroll results are posted to finance using this T-code.
- PCP0: The payroll posting runs are displayed using this T-code. The release of posting documents is controlled using this T-code.
- PC00_M99_CIPC: The completeness check is performed using this T-code. We can find the pay results that are not posted using this T-code.
- OH11/PU30: The wage type maintenance tool is useful when creating wage types or pay components such as housing and dearness allowance.
- PE01: The schema, which is the warehouse of payroll logic, is accessed and/or maintained via this T-code.
- PE02: The Personnel Calculation Rule (PCR) is accessed via this T-code. The PCR is used to perform small calculations.
- PE04: The functions and operations used can be accessed via this T-code. The documentation of most of these functions and operations can also be accessed via this T-code.
- PC00_M99_DLGA20: This shows the wage types used and their processing class and cumulation class assignments. The wage types used in a payroll are analyzed using this T-code.
- PC00_M99_DKON: The wage types mapped to general ledgers for FICO integration can be analyzed using this T-code.
- PCXX: A country-specific payroll can be accessed via this T-code.
- PC00: The payroll of all countries, such as those in Europe, the Americas, and so on, can be accessed via this T-code.
- PC_Payresult: The payroll results of the employee can be analyzed via this T-code. The following screenshot shows how the payroll results are displayed when the T-code is executed.

The "XX" part in PCXX denotes the country grouping. For example, it's 10 for the USA, 01 for Germany, and so on. SAP has a localized country-specific payroll solution, and hence, each country has a specific number. The country-specific settings are enabled using MOLGA, which is a technical name for the country, and it needs to be activated. It is the foundation of the SAP HCM solution. For an off-cycle run it is always 99, regardless of the country grouping; the same applies to posting.

The following screenshot shows the output of the PC_Payresult T-code:

The Talent Management submodule

The Talent Management module deals with assessing the performance of the employees, such as feedback from supervisors, peers, and so on. We will explore all the T-codes used in this submodule. They are described in the following table:

- PHAP_CATALOG: This is used to create an appraisal template that can be filled in by the respective persons, based on Key Result Areas (KRAs) such as attendance, certification, and performance.
- PPEM: Career and succession planning for an entire org unit can be performed via this T-code.
- PPCP: Career planning for a person can be performed via this T-code.
The qualifications and preferences can be checked, based on which suitable persons can be shortlisted.

- PPSP: Succession planning can be performed via this T-code. The successor for a particular position can be determined using this T-code. Different object types, such as position and job, can be used to plan the successor.
- OOB1: The appraisal forms are accessed via this T-code. The possible combinations of appraiser and appraisee are determined based on the evaluation path.
- APPSEARCH: This T-code is used to evaluate appraisal templates based on different statuses, such as "in preparation" and "completed".
- PHAP_CATALOG_PA: This is used to create an appraisal template that can be filled in by the respective persons, based on KRAs such as attendance, certification, and performance. The allowed appraisers and appraisees can be defined.
- OOHAP_SETTINGS_PA: The integration-check-related switches can be accessed via this T-code.
- APPCREATE: Once the created appraisal template is released, we will be able to find the template in this T-code.
Working with Pentaho Mobile BI

Packt
23 Jun 2014
14 min read
(For more resources related to this topic, see here.) We've always talked about using the Pentaho platform from a mobile device, trying to understand what there really is to it. On the Internet, there are some videos on the topic, but nothing that gives you a clear idea of what it is and what we can do with it. We are proud to talk about it (maybe this is the first article that touches this topic), and we hope to clear any doubts regarding this platform. Pentaho Mobile is a web app (see the previous screenshot for the web application's main screen) available only with the Enterprise Edition (EE) version of Pentaho, to let iPad users (and only the users on that device) have a wonderful experience with Pentaho on their mobile device. At the time of writing this article, no other mobile platforms or devices were considered. It lets us interact with the Pentaho system more or less in the same way as we do with the Pentaho User Console. These examples show, in a clear and detailed way, what we can and cannot do with Pentaho Mobile, to help understand whether accessing Pentaho from a mobile platform could be helpful for our users. Only for this article, because we are on a mobile device, we will talk about touching (touch) instead of clicking as the action that activates something in the application. With this term, touch, we refer to the user's finger gesture instead of the normal mouse click. Different environments have different ways to interact! The examples in this article are based on the assumption that you have an iPad device available to try each example and that you are able to successfully log in to Pentaho Mobile. In case we want to use demo users, remember that we can use the following logins to access our system: admin/password: This is the new Pentaho demo administrator, introduced after the famous user joe (the recognized Pentaho administrator until Pentaho 4.8) was dismissed in this new version. suzy/password: This is another simple user we can use to access the system.
Because suzy is not a member of the administrator role, it is useful to see the difference in case a user who is not an administrator tries to use the system. Accessing BA server from a mobile device Accessing Pentaho Mobile is as easy as accessing it from a Pentaho User Console. Just open our iPad browser (either Safari or Chrome) and point your browser to the Pentaho server. This example shows the basics of accessing and logging in to Pentaho from an iPad device through Pentaho Mobile. Remember that this example makes use of Pentaho Mobile, a web app that is available only for iPad and only in the EE Version of Pentaho. Getting ready To get ready for this example, the only thing we need is an iPad to connect to our Pentaho system. How to do it… The following steps detail how simply we access our Pentaho Mobile application: To connect to Pentaho Mobile, open either Safari or Chrome on the iPad device. As soon as the browser window is ready, type the complete URL to the Pentaho server in the following format: http://<ip_address>:<port>/pentaho Pentaho immediately detects that we are connecting from an iPad device, and the Pentaho Mobile login screen appears. Touch the Login button; the login dialog box appears as shown in the following screenshot. Enter your login credentials and press Login. The Login dialog box closes and we will be taken to Pentaho Mobile's home page. How it works… Pentaho Mobile has a slightly different look and feel with respect to Pentaho User Console in order to facilitate a mobile user's experience. The following screenshot shows the landing page we get after we have successfully logged in to Pentaho Mobile. To the left-hand side of the Pentaho Mobile's home page, we have the following set of buttons: Browse Files: This lets us start our navigation in the Pentaho Solution Repository. Create New Content: This lets us start the Pentaho Analyzer to create a new Analyser report from the mobile device. 
Analyser report content is the only kind of content we can create from our iPad. Dashboards and interactive reports can be created only from the Pentaho User Console. Startup Screen: This lets us change what we display as the default startup screen as soon as we log in to Pentaho Mobile. Settings: This changes the configuration settings for our Pentaho Mobile application. To the right-hand side of the button list (see the previous screenshot for details), we have three list boxes that display the Recent files we opened so far, the Favorites files, and the set of Open Files. The Open Files list box is more or less the same as the Opened perspective in Pentaho User Console—it collects all of the opened content in one single place for easy access. Look at the upper-right corner of Pentaho Mobile's user interface (see the previous screenshot for details); we have two icons: The folder icon gives access, from a different path, to the Pentaho Solution's folders The gear icon opens the Settings dialog box There's more… Now, let's see which settings we can either set or change from the mobile user interface by going to the Settings options. Changing the Settings configuration in Pentaho Mobile We can easily access the Settings dialog box either by pressing the Settings button in the left-hand side area of the Pentaho Mobile's home page or by pressing the gear icon in the upper-right corner of Pentaho Mobile. The Settings dialog box allows us to easily change the following configuration items (see the following screenshot for details): We can set Startup Screen by changing the referenced landing home page for our Pentaho Mobile application. In the Recent Files section of the Settings dialog, we can set the maximum number of items allowable in the Recent Files list. The default setting's value is 10, but we can alter this value by pressing the related icon buttons. Another button situated immediately below Recent Files, lets us easily empty the Recent Files list box. 
The next two buttons let us clear the Favorites items' list (Clear All Favorites) and reset the settings to the default values (Reset All Settings). Finally, we have a button to take us to a help guide and the application's Logout button. See also Look at the Accessing folders and files section to obtain details about how to browse the Pentaho Solution and start new content In the Changing the default startup Screen section, we will find details about how to change the default Pentaho Mobile session startup screen Accessing folders and files From our Pentaho Mobile, we can easily access and navigate the Pentaho Solution folders. This example will show how we can navigate the Pentaho Solution folders and start our content on the mobile device. Remember that this example makes use of Pentaho Mobile, a web app available only for iPad and only in the EE Version of Pentaho. How to do it… The following steps detail how simply we can access the Pentaho Solution folders and start an existing BI content: From the Pentaho Mobile home page, either touch on the Browse Files button located on the left-hand side of page, or touch on the Folder icon button located in the upper-right side of the home page. The Browse Files dialog opens to the right of the Pentaho Mobile user interface as shown in the following screenshot. Navigate the solution to the folder containing the content we want to start. As soon as we get to the content to start, touch on the content's icon to launch it. The content will be displayed in the entire Pentaho Mobile user interface screen. How it works… Accessing Pentaho objects from the Pentaho Mobile application is really intuitive. After you have successfully logged in, open the Browse Files dialog and navigate freely through the Pentaho Solution folder's structure to get to your content. To start the content, just touch the content icon and the report or the dashboard will display on your iPad. 
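The Recent Files behavior described in the Settings section amounts to a most-recent-first list, de-duplicated and trimmed to a configurable cap (10 by default). A minimal illustrative sketch (JavaScript; ours, not Pentaho's actual implementation):

```javascript
// Sketch of a Recent Files list: most-recent-first, de-duplicated, and
// trimmed to a configurable maximum (10 by default, adjustable in the
// Settings dialog). clear() mirrors the "empty Recent Files" button.
function makeRecentFiles(maxItems = 10) {
  let items = [];
  return {
    open(path) {
      // Move (or add) the opened path to the front, then enforce the cap.
      items = [path].concat(items.filter((p) => p !== path)).slice(0, maxItems);
    },
    list() { return items.slice(); },
    clear() { items = []; }
  };
}
```

For example, opening the same report twice keeps a single entry at the front of the list, and opening an eleventh item (with the default cap) silently drops the oldest one.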
As we can see, at the time of writing this article, we cannot perform any administrative tasks (sharing content, moving content, scheduling, and other tasks) from the Pentaho Mobile application. We can only navigate to the content, get it, and start it. There's more… As soon as we have some content items open, they are shown in the Opened list box. At some point, we may want to close them and free unused memory resources. Let's see how to do this. Closing opened content Pentaho Mobile continuously monitors the resource usage of our iPad and warns us as soon as we have a lot of items open. When that happens, a warning dialog box informs you about this, and it is a good opportunity to close some unused (and eventually forgotten) content items. To do this, go to Pentaho Mobile's home page, look for the items to close, and touch the rounded x icon to the right of the content item's label (see the following screenshot for details). The content item will immediately close. Adding files to favorites As we saw in the Pentaho User Console, even in the Pentaho Mobile application we can set our favorites and start accessing content from the favorites list. This section will show how we can do this. Remember that this example makes use of Pentaho Mobile, a web app available only for iPad and only in the EE Version of Pentaho. How to do it… The following steps detail how simply we can make a content item a favorite: From the Pentaho Mobile home page, either touch the Browse Files button located on the left-hand side of the page or touch the folder icon button located in the upper-right side of the home page. The Browse Files dialog opens to the right of the Pentaho Mobile user interface. Navigate the solution to the folder containing the content we want as a favorite. Touch the star located on the right-hand side of the content item's label to mark that item as a favorite. How it works… Usually, it is useful to define some Pentaho objects as favorites.
Favorite items help the user to quickly find the report or dashboard to start with. After we have successfully logged in, open the Browse Files dialog and navigate freely through the Pentaho Solution folders' structure to get to your content. To mark the content a favorite, just touch the star in the right-hand side of the content label and our report or dashboard will be marked as favorite (see the following screenshot for details). The favorite status of an item is identified by the following elements: The content item's star located to the right-hand side of the item's label becomes bold on the boundary to put in evidence that the content has been marked as a favorite The content will appear in the Favorites list box on the Pentaho Mobile home page There's more… What should we do if we want to remove the favorite status from our content items? Let's see how we can do this. Removing an item from the Favorites items list To remove an item from the Favorites list, we can follow two different approaches: Go to the Favorites items list on the Pentaho Mobile home page. Look for the item we want to un-favorite and touch on the star icon with the bold boundaries located on the right-hand side of the content item's label. The content item will be immediately removed from the Favorites items list. Navigate to the Pentaho Solution's folders to the location containing the item we want to un-favorite and touch on the star icon with the bold boundaries located to the right-hand side of the content item's label. The content item will be immediately removed from the Favorites items list. See also Take a look at the Accessing folders and files section in case you want to review how to access content in the Pentaho Solution to mark it as a favorite. Changing the default startup screen Imagine that we want to change the default startup screen with a specific content item we have in our Pentaho Solution. 
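The star gesture described above is a simple toggle: one touch stars an item, a second touch unstars it. A small illustrative sketch (JavaScript; ours, not Pentaho's actual implementation):

```javascript
// Sketch of the favourite-star toggle: touching the star adds the item to
// the Favorites list, touching it again removes it.
function makeFavorites() {
  const starred = new Set();
  return {
    // Returns the new state: true if the item is now a favourite.
    toggle(path) {
      if (starred.has(path)) {
        starred.delete(path);
        return false;
      }
      starred.add(path);
      return true;
    },
    isFavorite(path) { return starred.has(path); },
    list() { return Array.from(starred); }
  };
}
```

This also mirrors the two removal approaches described above: whether the star is touched from the Favorites list or from the solution browser, the same toggle applies to the same underlying item.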
After the new startup screen has been set, after the login, the user will be able to immediately access this new content item opened as the startup screen for Pentaho Mobile instead of the default home page. It would be fine to let our CEO immediately have in front of them the company's main KPI dashboard and immediately react to it. This article will show you how to make a specific content item the default startup screen in Pentaho Mobile. Remember that this example makes use of Pentaho Mobile, a web app available only for iPad and only in the EE Version of Pentaho. How to do it… The following steps detail how simply we can define a new startup screen with an existing BI content: From the Pentaho Mobile home page, touch on the Startup Screen button located on the left-hand side of the home page. The Browse Files dialog opens to the right of the Pentaho Mobile user interface. Navigate the solution to the folder containing the content we want to use. Touch the content item we want to show as the default startup screen. The Browse Files dialog box immediately closes and the Settings dialog box opens. A reference to the new, selected item is shown as the default Startup Screen content item (see the following screenshot for details): Touch outside the Settings dialog to close this dialog. How it works… Changing the startup screen could be interesting to give your user access to important content any time immediately after a successful login. From the Pentaho Mobile's home page, touch on the Startup Screen button located on the left-hand side of the home page and open the Browse Files dialog. Navigate the solution to the folder containing the content we want and then touch the content item to show as the default startup screen. The Browse Files dialog box immediately closes and the Settings dialog box opens. The new selected item is shown as the default startup screen content item, referenced by Name, and the complete path to the Pentaho Solution folder is seen. 
We can change the startup screen at any time, and we can also reset it to the default Pentaho Mobile home page by touching on the Pentaho Default Home radio button. There's more… We have always showed pictures from Pentaho Mobile in landscape orientation, but the user interface has a responsive behavior, showing things organized differently depending on the orientation of the device. Pentaho Mobile's responsive behavior We always show pictures of Pentaho Mobile with a landscape orientation, but Pentaho Mobile has a responsive layout and changes the display of some of the items in the page we are looking at depending on the device's orientation. The following screenshot gives an idea about displaying a dashboard on Pentaho Mobile in portrait orientation: If we look at the home page with a device in the portrait mode, the Recent, Favorites, and Opened lists covers the available page's width, equally divided by each list; and all of the buttons we always saw on the left side of the user interface are now relocated to the bottom, below the three lists we talked about so far. This is another interesting layout; it is up to our taste or viewing needs to decide which of the two could be the best option for us. Summary In this article, we learned about accessing BA server from a mobile device, accessing files and folders, adding files to favorites, and changing the default startup screen from a mobile device. Resources for Article: Further resources on this subject: Getting Started with Pentaho Data Integration [article] Integrating Kettle and the Pentaho Suite [article] Installing Pentaho Data Integration with MySQL [article]

Kendo UI DataViz – Advance Charting

Packt
23 Jun 2014
10 min read
(For more resources related to this topic, see here.)

Creating a chart to show stock history

The Kendo UI library provides a specialized chart widget that can be used to display the stock price data for a particular stock over a period of time. In this recipe, we will take a look at creating a Stock chart and customizing it.

Getting started

Include the CSS files, kendo.dataviz.min.css and kendo.dataviz.default.min.css, in the head section. These files are used in styling some of the parts of a stock history chart.

How to do it…

A Stock chart is made up of two charts: a pane that shows you the stock history and another pane that is used to navigate through the chart by changing the date range. The stock price for a particular stock on a day can be denoted by the following five attributes:

- Open: This shows you the value of the stock when trading starts for the day
- Close: This shows you the value of the stock when trading closes for the day
- High: This shows you the highest value the stock was able to attain on the day
- Low: This shows you the lowest value the stock reached on the day
- Volume: This shows you the total number of shares of that stock traded on the day

Let's assume that a service returns this data in the following format:

[
  {
    "Date": "2013/01/01",
    "Open": 40.11,
    "Close": 42.34,
    "High": 42.5,
    "Low": 39.5,
    "Volume": 10000
  }
  . . .
]

We will use the preceding data to create a Stock chart. The kendoStockChart function is used to create a Stock chart, and it is configured with a set of options similar to the area chart or Column chart.
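The OHLCV attributes listed above are simple aggregates of one day's trades. As a side illustration of what each field means (written in Python rather than the recipe's JavaScript, and with a made-up tick format of `(price, size)` pairs), a daily row like the JSON above could be computed as follows:

```python
# Hypothetical helper: aggregate one day's trade ticks into the
# Open/High/Low/Close/Volume row consumed by the Stock chart.
# The (price, size) tick format is an assumption for illustration.

def ohlcv(ticks):
    """ticks: list of (price, size) tuples in time order for one day."""
    if not ticks:
        raise ValueError("no ticks for this day")
    prices = [price for price, _ in ticks]
    return {
        "Open": prices[0],       # first trade of the day
        "Close": prices[-1],     # last trade of the day
        "High": max(prices),
        "Low": min(prices),
        "Volume": sum(size for _, size in ticks),
    }

# Example: four trades on one day reproduce the sample row above
row = ohlcv([(40.11, 3000), (42.5, 2000), (39.5, 1000), (42.34, 4000)])
print(row)
# {'Open': 40.11, 'Close': 42.34, 'High': 42.5, 'Low': 39.5, 'Volume': 10000}
```

In the recipe itself this aggregation happens server-side; the chart only consumes the finished rows.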
In addition to the series data, you can specify the navigator option to show a navigation pane below the chart that contains the entire stock history:

$("#chart").kendoStockChart({
  title: {
    text: 'Stock history'
  },
  dataSource: {
    transport: {
      read: '/services/stock?q=ADBE'
    }
  },
  dateField: "Date",
  series: [{
    type: "candlestick",
    openField: "Open",
    closeField: "Close",
    highField: "High",
    lowField: "Low"
  }],
  navigator: {
    series: {
      type: 'area',
      field: 'Volume'
    }
  }
});

In the preceding code snippet, the DataSource object refers to the remote service that returns the stock data for a set of days. The series option specifies the series type as candlestick; a candlestick chart is used here to indicate the stock price for a particular day. The mappings for openField, closeField, highField, and lowField are specified; they will be used in plotting the chart and also in the tooltip shown when the user hovers over it. The navigator option is specified to create an area chart, which uses the volume data to plot the chart. The dateField option is used to specify the mapping between the date fields in the chart and the ones in the response.

How it works…

When you load the page, you will see two panes; the navigator is below the main chart. By default, the chart displays data for all the dates in the DataSource object, as shown in the following screenshot:

In the preceding screenshot, a candlestick chart is created and it shows you the stock price over a period of time. Also, notice that in the navigator pane, all date ranges are selected by default, and hence they are reflected in the (candlestick) chart as well. When you hover over the series, you will notice that the stock quote for the selected date is shown. This includes the date and other fields such as Open, High, Low, and Close. The area of the chart is adjusted to show you the stock price for the various dates such that the dates are evenly distributed.
In the previous case, the dates range from January 1, 2013 to January 31, 2013. However, when you hover over the series, you will notice that some of the dates are omitted. To overcome this, you can either increase the width of the chart area or use the navigator to reduce the date range. The former option is not advisable if the date range spans several months or years. To reduce the date range in the navigator, move the two date range selectors towards each other to narrow down the dates, as shown in the following screenshot:

When you try to narrow down the dates, you will see a tooltip in the chart, indicating the date range that you are trying to select. The candlestick chart is adjusted to show you the stock price for the selected date range. Also, notice that the opacity of the selected date range in the navigator remains the same while the rest of the area's opacity is reduced. Once the date range is selected, the selected pane can be moved in the navigator.

There's more…

There are several options available to customize the behavior and the look and feel of the Stock Chart widget.

Specifying the date range in the navigator when initializing the chart

By default, all date ranges in the chart are selected, and the user has to narrow them down in the navigator pane. When you work with a large dataset, you may want to show the stock data for a specific date range when the chart is rendered. To do this, specify the select option in navigator:

navigator: {
  series: {
    type: 'area',
    field: 'Volume'
  },
  select: {
    from: '2013/01/07',
    to: '2013/01/14'
  }
}

In the previous code snippet, the from and to date ranges are specified. Now, when you render the page, you will see that the same dates are selected in the navigator pane.

Customizing the look and feel of the Stock Chart widget

There are various options available to customize the navigator pane in the Stock Chart widget.
Let's increase the height of the pane and also include a title text for it:

navigator: {
  . .
  pane: {
    height: '50px',
    title: {
      text: 'Stock Volume'
    }
  }
}

Now, when you render the page, you will see that the title has been added and the height of the navigator pane has been increased.

Using the Radial Gauge widget

The Radial Gauge widget allows you to build a dashboard-like application wherein you want to indicate a value that lies in a specific range. For example, a car's dashboard can contain a couple of Radial Gauge widgets that can be used to indicate the current speed and RPM.

How to do it…

To create a Radial Gauge widget, invoke the kendoRadialGauge function on the selected DOM element. A Radial Gauge widget contains some components, and it can be configured by providing options, as shown in the following code snippet:

$("#chart").kendoRadialGauge({
  scale: {
    startAngle: 0,
    endAngle: 180,
    min: 0,
    max: 180
  },
  pointer: {
    value: 20
  }
});

Here, the scale option is used to configure the range for the Radial Gauge widget. The startAngle and endAngle options are used to indicate the angles at which the Radial Gauge widget's range should start and end. By default, their values are 30 and 210, respectively. The other two options, min and max, are used to indicate the range of values over which the value can be plotted. The pointer option is used to indicate the current value in the Radial Gauge widget. There are several options available to configure the Radial Gauge widget; these include positioning the labels and configuring the look and feel of the widget.

How it works…

When you render the page, you will see a Radial Gauge widget that shows you the scale from 0 to 180 and the pointer pointing to the value 20. Here, the values from 0 to 180 are evenly distributed, that is, the major ticks are in steps of 20. There are 10 minor ticks, that is, ticks between two major ticks. The widget shows values in the clockwise direction. Also, the pointer value 20 is indicated on the scale.
There's more…

The Radial Gauge widget can be customized to a great extent by including various options when initializing the widget.

Changing the major and minor unit values

Specify the majorUnit and minorUnit options in the scale:

scale: {
  startAngle: 0,
  endAngle: 180,
  min: 0,
  max: 180,
  majorUnit: 30,
  minorUnit: 10
}

The scale option specifies the majorUnit value as 30 (instead of the default 20) and minorUnit as 10. This will now add labels at every 30 units and show two minor ticks between two major ticks, each at a distance of 10 units, as shown in the following screenshot:

The ticks shown in the preceding screenshot can also be customized:

scale: {
  . .
  minorTicks: {
    size: 30,
    width: 1,
    color: 'green'
  },
  majorTicks: {
    size: 100,
    width: 2,
    color: 'red'
  }
}

Here, the size option is used to specify the length of the tick marker, width is used to specify the thickness of the tick, and the color option is used to change the color of the tick. Now, when you render the page, you will see the changes for the major and minor ticks.

Changing the color of the radial using the ranges option

The scale attribute can include the ranges option to specify a radial color for the various ranges on the Radial Gauge widget:

scale: {
  . .
  ranges: [
    { from: 0, to: 60, color: '#00F' },
    { from: 60, to: 130, color: '#0F0' },
    { from: 130, to: 200, color: '#F00' }
  ]
}

In the preceding code snippet, the ranges array contains three objects that specify the colors to be applied along the circumference of the widget. The from and to values are used to specify the range of tick values for which each color should be applied. Now, when you render the page, you will see the Radial Gauge widget showing the colors for the various ranges along the circumference of the widget, as shown in the following screenshot:

In the preceding screenshot, the startAngle and endAngle fields are changed to 10 and 250, respectively. The widget can be further customized by moving the labels outside.
This can be done by specifying the labels attribute with position set to outside. In the preceding screenshot, the labels are positioned outside; hence, the radial appears inside.

Updating the pointer value using a Slider widget

The pointer value is set when the Radial Gauge widget is initialized. It is possible to change the pointer value of the widget at runtime using a Slider widget. The changes in the Slider widget can be observed, and the pointer value of the Radial Gauge can be updated accordingly. Let's reuse the Radial Gauge widget. A Slider widget is created using an input element:

<input id="slider" value="0" />

The next step is to initialize the previously mentioned input element as a Slider widget:

$('#slider').kendoSlider({
  min: 0,
  max: 200,
  showButtons: false,
  smallStep: 10,
  tickPlacement: 'none',
  change: updateRadialGauge
});

The min and max values specify the range of values that can be set for the slider. The smallStep attribute specifies the minimum increment value of the slider. The change attribute specifies the function that should be invoked when the slider value changes. The updateRadialGauge function should then update the value of the pointer in the Radial Gauge widget:

function updateRadialGauge() {
  $('#chart').data('kendoRadialGauge')
    .value($('#slider').val());
}

The function gets the instance of the widget and then sets its value to the value obtained from the Slider widget. Here, when the slider value is changed to 100, you will notice that the change is reflected in the Radial Gauge widget.

Machine Learning in Bioinformatics

Packt
20 Jun 2014
8 min read
(For more resources related to this topic, see here.)

Supervised learning for classification

Like clustering, classification is also about categorizing data instances, but in this case, the categories are known and are termed class labels. Thus, it aims at identifying the category that a new data point belongs to. It uses a dataset where the class labels are known to find the pattern. Classification is an instance of supervised learning, where the learning algorithm takes a known set of input data and corresponding responses called class labels, and builds a predictor model that generates reasonable predictions of the class labels for unknown data.

To illustrate, let's imagine that we have gene expression data from cancer patients as well as healthy patients. The gene expression pattern in these samples can define whether the patient has cancer or not. In this case, if we have a set of samples for which we know the type of tumor, the data can be used to learn a model that can identify the type of tumor. In simple terms, it is a predictive function used to determine the tumor type. Later, this model can be applied to predict the type of tumor in unknown cases.

There are some do's and don'ts to keep in mind while learning a classifier. You need to make sure that you have enough data to learn the model; learning with smaller datasets will not allow the model to learn the pattern in an unbiased manner, and you will end up with an inaccurate classification. Furthermore, the preprocessing steps (such as normalization) for the training and test data should be the same. Another important thing to take care of is to keep the training and test data distinct; learning on the entire dataset and then using a part of this data for testing will lead to a phenomenon called overfitting. It is always recommended that you look at the data manually and understand the question that you need to answer via your classifier.
There are several methods of classification. In this recipe, we will discuss some of them: linear discriminant analysis (LDA), decision trees (DT), and support vector machines (SVM).

Getting ready

To perform the classification task, we need two preparations: first, a dataset with known class labels (the training set), and second, the data that the classifier has to be tested on (the test set). Besides this, we will use some R packages, which will be discussed when required. As a dataset, we will use expression data for approximately 2,300 genes from tumor cells. The data has ~83 data points with four different types of tumors, which will be used as our class labels. We will use 60 of the data points for training and the remaining 23 for testing. To find out more about the dataset, refer to the Classification and diagnostic prediction of cancers using gene expression profiling and artificial neural networks article by Khan and others (http://research.nhgri.nih.gov/microarray/Supplement/). The set has been precompiled in a format that is readily usable in R and is available on the book's web page (code files) under the name cancer.rda.

How to do it…

To classify data points based on their features, perform the following steps. First, load the MASS library, as it has some of the classification functions:

> library(MASS)

Now, you need your data to learn and test the classifiers.
Load the data from the code files available on the book's web page (cancer.rda) as follows:

> load("path/to/code/directory/cancer.rda") # located in the code file directory for the chapter; assign the path accordingly

Randomly sample 60 data points for the training set and use the remaining 23 for the test set, ensuring that the two datasets do not overlap and are not biased towards any specific tumor type (random sampling):

> train_row <- sample(1:nrow(mldata), 60) # sample 60 row indexes; this line is implied by the text but was missing from the extract
> train <- mldata[train_row,] # use sampled indexes to extract training data
> test <- mldata[-train_row,] # the test set is selected by taking all the other data points

For the training data, retain the class labels (the tumor column here), and remove this information from the test data. However, store it for comparison purposes:

> testClass <- test$tumor
> test$tumor <- NULL

Now, try the linear discriminant analysis classifier, as follows, to get the classifier model:

> myLD <- lda(tumor ~ ., train) # might issue a warning

Test this classifier by predicting the labels for your test set, as follows:

> testRes_lda <- predict(myLD, test)

To check the number of correct and incorrect predictions, simply compare the predicted classes with the testClass object created earlier, as follows:

> sum(testRes_lda$class == testClass) # correct predictions
[1] 19
> sum(testRes_lda$class != testClass) # incorrect predictions
[1] 4

Now, try another simple classifier called DT.
For this, you need the rpart package:

> library(rpart)

Create the decision tree based on your training data, as follows:

> myDT <- rpart(tumor ~ ., data = train, control = rpart.control(minsplit = 10))

Plot your tree by typing the following commands, as shown in the next diagram:

> plot(myDT)
> text(myDT, use.n = T)

The following screenshot shows the cutoff for each feature (represented by the branches) to differentiate between the classes:

The tree for DT-based learning

Now, test the decision tree classifier on your test data using the following prediction function:

> testRes_dt <- predict(myDT, newdata = test)

Take a look at the class that each data instance is put in by the predicted classifier, as follows (1 if predicted in the class, else 0):

> classes <- round(testRes_dt)
> head(classes)
   BL EW NB RM
4   0  0  0  1
10  0  0  0  1
15  1  0  0  0
16  0  0  1  0
18  0  1  0  0
21  0  1  0  0

Finally, you'll work with SVMs. To be able to use them, you need another R package named e1071:

> library(e1071)

Create the svm classifier from the training data as follows:

> mySVM <- svm(tumor ~ ., data = train)

Then, use the model learned (the mySVM object) to predict for the test data. You will see the predicted labels for each instance as follows:

> testRes_svm <- predict(mySVM, test)
> testRes_svm

How it works…

We started our recipe by loading the input data on tumors. The supervised learning methods shown in this recipe use two datasets: the training set and the test set. The training set carries the class label information. In the first part of most of the learning methods shown here, the training set is used to identify a pattern, and the pattern is modeled to find a distinction between the classes. This model is then applied to the test set, which does not have the class label data, to predict the class labels. To identify the training and test sets, we first randomly sample 60 indexes out of the entire data and use the remaining 23 for testing purposes.
Each of the supervised learning methods explained in this recipe follows a different principle. LDA attempts to model the difference between classes based on a linear combination of their features. This combination function forms the model based on the training set and is used to predict the classes in the test set. The LDA model trained on 60 samples is thus used to predict for the remaining 23 cases. DT is a different method: it forms regression trees that make up a set of rules used to distinguish one class from the other. The tree learned on a training set is applied to predict classes in test sets or other similar datasets. SVM is a relatively complex technique of classification. It aims to create a hyperplane (or hyperplanes) in the feature space, making the data points separable along these planes. This is done on a training set and is then used to assign classes to new data points. In general, LDA uses a linear combination of features, whereas SVM uses hyperplanes in multiple dimensions for data distinction. In this recipe, we used the svm functionality from the e1071 package, which has many other utilities for learning. We can compare the results obtained by the models we used in this recipe (they can be computed using the code provided on the book's web page).

There's more...

One of the most popular classifier tools in the machine learning community is WEKA. It is a Java-based tool that implements many libraries to perform classification tasks using DT, LDA, Random Forest, and so on. R supports an interface to WEKA with a library named RWeka, which is available in the CRAN repository at http://cran.r-project.org/web/packages/RWeka/. It uses RWekajars, a separate package, to access the Java libraries that implement the different classifiers.

See also

- The Elements of Statistical Learning book by Hastie, Tibshirani, and Friedman (http://statweb.stanford.edu/~tibs/ElemStatLearn/printings/ESLII_print10.pdf), which provides more information on LDA, DT, and SVM
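The workflow of the recipe — fit a model on a labeled training set, predict on a held-out test set, and count correct predictions — is the same in any language. As a dependency-free sketch (a toy nearest-centroid classifier in Python standing in for R's lda/rpart/svm; the two-class toy data and the "BL"/"EW" labels are made up for illustration):

```python
import random

# Minimal sketch of the train/predict/evaluate loop from the recipe,
# using a nearest-centroid classifier on toy 2-D "expression" data.
# This is NOT one of the recipe's methods -- only the workflow matches.

def fit_centroids(X, y):
    """Learn one mean vector (centroid) per class label."""
    centroids = {}
    for label in set(y):
        rows = [x for x, lab in zip(X, y) if lab == label]
        centroids[label] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def predict(centroids, x):
    """Assign x to the class whose centroid is closest."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda lab: dist2(centroids[lab], x))

# Toy data: two well-separated "tumor types"
random.seed(1)
X = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(30)] + \
    [[random.gauss(5, 1), random.gauss(5, 1)] for _ in range(30)]
y = ["BL"] * 30 + ["EW"] * 30

# Keep the training and test sets distinct, as the recipe insists
idx = list(range(60))
random.shuffle(idx)
train_idx, test_idx = idx[:40], idx[40:]

model = fit_centroids([X[i] for i in train_idx], [y[i] for i in train_idx])
correct = sum(predict(model, X[i]) == y[i] for i in test_idx)
print(correct, "of", len(test_idx), "test points classified correctly")
```

The same three-step shape (fit, predict, compare against the held-back labels) is what `lda`/`predict`, `rpart`/`predict`, and `svm`/`predict` perform in the R steps above.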

What is Quantitative Finance?

Packt
20 Jun 2014
11 min read
(For more resources related to this topic, see here.)

Discipline 1 – finance (financial derivatives)

In general, a financial derivative is a contract between two parties who agree to exchange one or more cash flows in the future. The value of these cash flows depends on some future event, for example, the value of some stock index or interest rate being above or below some predefined level. The activation or triggering of this future event thus depends on the behavior of a variable quantity known as the underlying. Financial derivatives receive their name because they derive their value from the behavior of another financial instrument. As such, financial derivatives do not have an intrinsic value in themselves (in contrast to bonds or stocks); their price depends entirely on the underlying.

A critical feature of derivative contracts is thus that their future cash flows are probabilistic and not deterministic. The future cash flows in a derivative contract are contingent on some future event. That is why derivatives are also known as contingent claims. This feature makes these types of contracts difficult to price. The following are the most common types of financial derivatives:

- Futures
- Forwards
- Options
- Swaps

Futures and forwards are financial contracts between two parties. One party agrees to buy the underlying from the other party at some predetermined date (the maturity date) for some predetermined price (the delivery price). An example could be a one-month forward contract on one ounce of silver. The underlying is the price of one ounce of silver. No exchange of cash flows occurs at inception (today, t=0); it occurs only at maturity (t=T). Here, t represents the variable time. Forwards are contracts negotiated privately between two parties (in other words, Over The Counter (OTC)), while futures are negotiated at an exchange.

Options are financial contracts between two parties.
One party (called the holder of the option) pays a premium to the other party (called the writer of the option) in order to have the right, but not the obligation, to buy some particular asset (the underlying) for some particular price (the strike price) at some particular date in the future (the maturity date). This type of contract is called a European Call contract.

Example 1

Consider a one-month call contract on the S&P 500 index. The underlying in this case will be the value of the S&P 500 index. There are cash flows both at inception (today, t=0) and at maturity (t=T). At inception (t=0), the premium is paid, while at maturity (t=T), the holder of the option will choose between the following two possible scenarios, depending on the value of the underlying at maturity, S(T):

- Scenario A: Exercise his/her right and buy the underlying asset for K
- Scenario B: Do nothing

The option holder will choose Scenario A if the value of the underlying at maturity is above the value of the strike, that is, S(T)>K. This guarantees him/her a profit of S(T)-K. The option holder will choose Scenario B if the value of the underlying at maturity is below the value of the strike, that is, S(T)<K. This guarantees that his/her losses are limited to zero.

Example 2

An Interest Rate Swap (IRS) is a financial contract between two parties A and B who agree to exchange cash flows at regular intervals during a given period of time (the life of the contract). Typically, the cash flows from A to B are indexed to a fixed rate of interest, while the cash flows from B to A are indexed to a floating interest rate. The set of fixed cash flows is known as the fixed leg, while the set of floating cash flows is known as the floating leg. The cash flows occur at regular intervals during the life of the contract between inception (t=0) and maturity (t=T).
An example could be a fixed-for-floating IRS that pays a fixed rate of 5 percent on the agreed notional N every three months and receives EURIBOR3M on the agreed notional N every three months.

Example 3

A futures contract on a stock index also involves a single future cash flow (the delivery price) to be paid at the maturity of the contract. However, the payoff in this case is uncertain, because how much profit I will get from this operation depends on the value of the underlying at maturity. If the price of the underlying is above the delivery price, then the payoff I get (denoted by the function H) is positive (indicating a profit) and corresponds to the difference between the value of the underlying at maturity S(T) and the delivery price K. If the price of the underlying is below the delivery price, then the payoff I get is negative (indicating a loss) and corresponds to the difference between the delivery price K and the value of the underlying at maturity S(T). This characteristic can be summarized in the following payoff formula:

H(S(T)) = S(T) - K (Equation 1)

Here, H(S(T)) is the payoff at maturity, which is a function of S(T).

Financial derivatives are very important to the modern financial markets. According to the Bank of International Settlements (BIS), as of December 2012, the amounts outstanding for OTC derivative contracts worldwide were as follows: foreign exchange derivatives, 67,358 billion USD; interest rate derivatives, 489,703 billion USD; equity-linked derivatives, 6,251 billion USD; commodity derivatives, 2,587 billion USD; and credit default swaps, 25,069 billion USD. For more information, see http://www.bis.org/statistics/dt1920a.pdf.

Discipline 2 – mathematics

We need mathematical models to capture both the future evolution of the underlying and the probabilistic nature of the contingent cash flows we encounter in financial derivatives.
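To make the contrast between Equation 1 (the futures payoff, which can go negative) and the option payoff of Example 1 (floored at zero because the holder can simply not exercise) concrete, here is a small illustrative sketch in Python (the numbers are made up, and the option premium paid at inception is deliberately left out):

```python
# Payoff-at-maturity sketches for Equation 1 and Example 1.
# S_T is the underlying's value at maturity, K the delivery/strike price.

def futures_payoff(S_T, K):
    # Equation 1: linear and symmetric -- losses are unbounded below
    return S_T - K

def call_payoff(S_T, K):
    # Example 1: the holder exercises only in Scenario A (S_T > K),
    # so the payoff at maturity is floored at zero (Scenario B)
    return max(S_T - K, 0.0)

K = 100.0
for S_T in (80.0, 100.0, 120.0):
    print(S_T, futures_payoff(S_T, K), call_payoff(S_T, K))
# At S_T = 80 the futures holder loses 20 while the option holder loses
# nothing at maturity; at S_T = 120 both gain 20.
```

This asymmetry is exactly why the option holder pays a premium at inception while the futures contract costs nothing to enter.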
Regarding the contingent cash flows, these can be represented in terms of the payoff function H(S(T)) for the specific derivative we are considering. Because S(T) is a stochastic variable, the value of H(S(T)) ought to be computed as an expectation, E[H(S(T))]. In order to compute this expectation, we need techniques that allow us to predict or simulate the behavior of the underlying into the future, so as to be able to compute the value of S(T) and, finally, the mean value of the payoff E[H(S(T))].

Regarding the behavior of the underlying, this is typically formalized using Stochastic Differential Equations (SDEs), such as the Geometric Brownian Motion (GBM), as follows:

dS = μS dt + σS dW (Equation 2)

The previous equation fundamentally says that the change in a stock price (dS) can be understood as the sum of two effects: a deterministic effect (the first term on the right-hand side) and a stochastic term (the second term on the right-hand side). The parameter μ is called the drift, and the parameter σ is called the volatility. S is the stock price, dt is a small time interval, and dW is an increment in the Wiener process. This is the most common model used to describe the behavior of stocks, commodities, and foreign exchange. Other models exist, such as jump, local volatility, and stochastic volatility models, that enhance the description of the dynamics of the underlying.

Regarding the numerical methods, these correspond to ways in which the formal expression described in the mathematical model (usually in continuous time) is transformed into an approximate representation that can be used for calculation (usually in discrete time). This means that the SDE that describes the evolution of the price of some stock index into the future, such as the FTSE 100, is changed to describe the evolution at discrete intervals.
An approximate representation of an SDE can be calculated using the Euler approximation as follows:

S(t+Δt) = S(t) + μS(t)Δt + σS(t)√Δt Z (Equation 3)

Here, Z is a standard normal random number, so that √Δt Z plays the role of the Wiener increment over the interval Δt. The preceding equation needs to be solved in an iterative way for each time interval between now and the maturity of the contract. If these time intervals are days and the contract has a maturity of 30 days from now, then we compute tomorrow's price in terms of today's. Then we compute the price of the day after tomorrow as a function of tomorrow's price, and so on. In order to price the derivative, we need to compute the expected payoff E[H(S(T))] at maturity and then discount it to the present. In this way, we are able to compute what should be the fair premium associated with a European option contract with the help of the following equation:

premium = e^(-rT) E[H(S(T))] (Equation 4)

Here, r is the constant risk-free interest rate used for discounting.

Discipline 3 – informatics (C++ programming)

What is the role of C++ in pricing derivatives? Its role is fundamental. It allows us to implement the actual calculations that are required in order to solve the pricing problem. Using the preceding techniques to describe the dynamics of the underlying, we need to simulate many potential future scenarios describing its evolution. Say we want to price a futures contract on the EUR/USD exchange rate with a one-year maturity. We have to simulate the future evolution of EUR/USD for each day of the next year (using Equation 3). We can then compute the payoff at maturity (using Equation 1). However, in order to compute the expected payoff (using Equation 4), we need to simulate thousands of such possible evolutions via a technique known as Monte Carlo simulation. The set of steps required to complete this process is known as an algorithm. To price a derivative, we have to construct such an algorithm and then implement it in an advanced programming language such as C++. Of course, C++ is not the only possible choice; other languages include Java, VBA, C#, Mathworks Matlab, and Wolfram Mathematica.
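The full pipeline of Equations 2–4 — Euler-discretized GBM paths, payoff at maturity, discounted mean over many simulated paths — can be sketched in a few lines. The article itself implements this in C++; the Python version below is only an illustrative stand-in, all parameter values are made up, and the drift μ is set equal to the risk-free rate r (the usual risk-neutral-pricing assumption, not something stated above):

```python
import math
import random

def price_european_call(S0, K, r, sigma, T, steps=60, n_paths=8000, seed=42):
    """Monte Carlo sketch: simulate GBM paths with the Euler scheme
    (Equation 3), evaluate the call payoff at maturity, then discount
    the mean payoff (Equation 4)."""
    rng = random.Random(seed)
    dt = T / steps
    total_payoff = 0.0
    for _ in range(n_paths):
        S = S0
        for _ in range(steps):
            dW = math.sqrt(dt) * rng.gauss(0.0, 1.0)  # Wiener increment
            S += r * S * dt + sigma * S * dW          # Euler step (Equation 3)
        total_payoff += max(S - K, 0.0)               # payoff H(S(T))
    return math.exp(-r * T) * total_payoff / n_paths  # Equation 4

est = price_european_call(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0)
print(round(est, 2))  # should land near the Black-Scholes value of about 10.45
```

With more paths the estimate tightens around the analytical value, at the cost of runtime — which is precisely the article's argument for a fast compiled language such as C++.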
However, C++ is an industry standard because it is flexible, fast, and portable. Also, through the years, several numerical libraries have been created to conduct complex numerical calculations in C++. Finally, C++ is a powerful, modern, object-oriented language. It is always difficult to strike a balance between clarity and efficiency. We have aimed at making computer programs that are self-contained (not too object oriented) and self-explanatory. More advanced implementations are certainly possible, particularly in the context of larger financial pricing libraries in a corporate setting. In this article, all the programs are implemented with the newest standard, C++11, using Code::Blocks (http://www.codeblocks.org) and MinGW (http://www.mingw.org).

The Bento Box template

A Bento Box is a single-portion takeaway meal common in Japanese cuisine. Usually, it has a rectangular form that is internally divided into compartments to accommodate the various portions that constitute a meal. In this article, we use the metaphor of the Bento Box to describe a visual template that facilitates, organizes, and structures the solution of derivative pricing problems. The Bento Box template is simply a form that we fill sequentially with the different elements that we require to price derivatives in a logical, structured manner. When used to price a particular derivative, the Bento Box template is divided into four areas or boxes, each containing information critical for the solution of the problem. The following figure illustrates a generic template applicable to all derivatives:

The Bento Box template – general case

The following figure shows an example of the Bento Box template as applied to a simple European Call option:

The Bento Box template – European Call option

In the preceding figure, we have filled the various compartments, starting in the top-left box and proceeding clockwise.
Each compartment contains the details of our specific problem, taking us in sequence from the conceptual (box 1: derivative contract) to the practical (box 4: algorithm), passing through the quantitative aspects required for the solution (box 2: mathematical model and box 3: numerical method).

Summary

This article gave an overview of the main elements of Quantitative Finance as applied to pricing financial derivatives. The Bento Box template technique will be used to organize our approach to solving problems in pricing financial derivatives. We will assume that we are in possession of enough information to fill box 1 (derivative contract).

Resources for Article:

Further resources on this subject:
- Application Development in Visual C++ - The Tetris Application [article]
- Getting Started with Code::Blocks [article]
- Creating and Utilizing Custom Entities [article]

Packt
20 Jun 2014
3 min read

Discovering Python's parallel programming tools

The Python threading module

The Python threading module offers a layer of abstraction over the lower-level _thread module. It provides functions that help the programmer with the hard task of developing thread-based parallel systems. The threading module's official documentation can be found at http://docs.python.org/3/library/threading.html?highlight=threading#module-threadin.

The Python multiprocessing module

The multiprocessing module aims at providing a simple API for process-based parallelism. This module is similar to the threading module, and it simplifies switching between processes without major difficulty. The process-based approach is very popular within the Python users' community, as it is an answer to questions about the use of CPU-bound threads and the GIL present in Python. The multiprocessing module's official documentation can be found at http://docs.python.org/3/library/multiprocessing.html?highlight=multiprocessing#multiprocessing.

The parallel Python module

The parallel Python module is external and offers a rich API for the creation of parallel and distributed systems using the process-based approach. This module promises to be light and easy to install, and it integrates with other Python programs. The parallel Python module can be found at http://parallelpython.com. Among its features, we may highlight the following:

Automatic detection of the optimal configuration
The fact that the number of worker processes can be changed during runtime
Dynamic load balancing
Fault tolerance
Auto-discovery of computational resources

Celery – a distributed task queue

Celery is an excellent Python module that's used to create distributed systems and has excellent documentation. It makes use of at least three different approaches to run tasks in concurrent form—multiprocessing, Eventlet, and Gevent.
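Before settling on one of these approaches, it helps to see the two standard-library modules side by side. The following minimal sketch (the task and function names are invented for illustration) contrasts the thread-based and process-based APIs described above:

```python
import multiprocessing
import threading

def count_down(n):
    # A CPU-bound task: just burn cycles.
    while n > 0:
        n -= 1
    return 0

def run_with_threads(task, args_list):
    # threading: shared-memory threads, subject to the GIL for CPU-bound work.
    threads = [threading.Thread(target=task, args=(a,)) for a in args_list]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return len(threads)

def run_with_processes(task, args_list):
    # multiprocessing: a similar API, but GIL-free process-based parallelism.
    with multiprocessing.Pool(processes=2) as pool:
        return pool.map(task, args_list)

if __name__ == "__main__":
    work = [200_000] * 4
    print(run_with_threads(count_down, work))    # 4
    print(run_with_processes(count_down, work))  # [0, 0, 0, 0]
```

With CPU-bound work like this, only the process-based variant can use more than one core at a time, which is exactly the trade-off the multiprocessing module addresses.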
This work will, however, concentrate on the multiprocessing approach. Switching from one approach to another is a configuration issue, and it is left as a study exercise so that readers can make comparisons using their own experiments. The Celery module can be obtained from the official project page at http://celeryproject.org.

Summary

In this article, we had a short introduction to some Python modules, built-in and external, which make a developer's life easier when building up parallel systems.

Resources for Article:

Further resources on this subject:

Getting Started with Spring Python [Article]
Python Testing: Installing the Robot Framework [Article]
Getting Up and Running with MySQL for Python [Article]

Packt
20 Jun 2014
4 min read

Shaping a model with Meshmixer and printing it

Shaping with Meshmixer

Meshmixer was designed to provide a modeling interface that frees the user from working directly with the geometry of the mesh. In most cases, the intent of the program succeeds, but in some cases, it's good to see how the underlying mesh works. We'll use some brush tools to make our model better, thereby taking a look at how this affects the mesh structure.

Getting ready

We'll use a toy block scanned with 123D Catch.

How to do it...

We will proceed as follows:

1. Let's take a look at the model's mesh by positioning the model with a large surface visible. Go to the menu and select View. Scroll down and select Toggle Wireframe (W).
2. Choose Sculpt. From the pop-up toolbox, choose Brushes. Go to the menu and select ShrinkSmooth. Adjust your settings in the Properties section. Keep the size as 60 and its strength as 25.
3. Use the smooth tool slowly across the model, watching the change it makes to the mesh. In the following example, the keyboard shortcut W is used to toggle between mesh views.
4. Repeat using the RobustSmooth and Flatten brushes. Use these combinations of brushes to flatten one side of the toy block.
5. Rotate your model to an area where there's heavy distortion. Make sure your view is in the wireframe mode.
6. Go back to Brushes and select Pinch. Adjust the Strength to 85, Size to 39, Depth to -17, and Lazyness to 95. Keep everything else at default values. If you are uncertain of the default values, left-click on the small cogwheel icon next to the Properties heading and choose Reset to Defaults.
7. We're going to draw a line across a distorted area of the toy block to see how it affects the mesh. Using the pinch brush, draw a line across the model.
8. Save your work and then select Undo/back from the Actions menu (Ctrl + Z).
9. Now, select your entire model. Go to the toolbox and select Edit. Scroll down and select Remesh (R). You'll see an even distribution of polygons in the mesh.
10. Keep the defaults in the pop up and click on Accept. Now, go back and choose Clear Selection.
11. Select the pinch brush again and draw a line across the model as you did before. Compare it to the other model with the unrefined mesh.
12. Let's finish cleaning up the toy block. Click on Undo/back (Ctrl + Z) to undo the pinch brush line that you drew.
13. Now, use the pinch tool to refine the edges of the model. Work around it and sharpen all the edges. Finish smoothing the planes on the block and click on Save.

We can see the results clearly as we compare the original toy block model to our modified model in the preceding image.

How it works...

Meshmixer works by using a mesh with a high definition of polygons. When a sculpting brush such as pinch is used to manipulate the surface, it rapidly increases the polygon count in the surrounding area. When the pinch tool crosses an area that has fewer and larger polygons, the interpolation of the area becomes distorted. We can see this in the following example when we compare the original and remeshed models in the wireframe view:

In the following image, when we hide the wireframe, we can see how the distortion in the mesh has given the model on the left some undesirable texture along the pinch line:

It may be a good idea to examine a model's mesh before sculpting it. Meshmixer works better with a dense polygon count that is consistent in size. By using the Remesh edit, a variety of mesh densities can be achieved by making changes in Properties. Experiment with the various settings and the sculpting brushes while in the wireframe stage. This will help you gain a better understanding of how mesh surface modeling works.

Let's print!

When we 3D print a model, we have the option of controlling how solid the interior will be and what kind of structure will fill it. How we choose between the options is easily determined by answering the following questions:

Will it need to be structurally strong? If it's going to be used as a mechanical part or an item that will be heavily handled, then it does.
Will it be a prototype? If it's a temporary object for examination purposes or strictly for display, then a fragile form may suffice.

Depending on the use of a model, you'll have to decide how the object falls between these two extremes.
Packt
19 Jun 2014
16 min read

Common performance issues

Threading performance issues

Threading performance issues are issues related to concurrency, as follows:

Lack of threading or excessive threading
Threads blocking up to starvation (usually from competing for shared resources)
Deadlock until the complete application hangs (threads waiting for each other)

Memory performance issues

Memory performance issues are issues related to application memory management, as follows:

Memory leakage: This issue is an explicit leakage or an implicit leakage, as seen in improper hashing
Improper caching: This issue is due to over-caching, an inadequate cache size, or missing essential caching
Insufficient memory allocation: This issue is due to missing JVM memory tuning

Algorithmic performance issues

Implementing the application logic requires two important parameters that are related to each other: correctness and optimization. If the logic is not optimized, we have algorithmic issues, as follows:

Costly algorithmic logic
Unnecessary logic

Work as designed performance issues

The work as designed performance issue is a group of issues related to the application design. The application behaves exactly as designed, but if the design has issues, it will lead to performance issues.
Some examples of these performance issues are as follows:

Using synchronous processing where asynchronous processing should be used
Neglecting remoteness, that is, using remote calls as if they were local calls
An improper loading technique, that is, eager versus lazy loading
A poor selection of object sizes
Excessive serialization layers
Improper web services granularity
Too much synchronization
A non-scalable architecture, especially in the integration layer or middleware
Saturated hardware on a shared infrastructure

Interfacing performance issues

Whenever the application deals with external resources, we may face the following interfacing issues that can impact application performance:

Using an old driver/library
Missing frequent database housekeeping
Database issues, such as missing database indexes
A low-performing JMS or integration service bus
Logging issues (excessive logging or not following best practices while logging)
Network component issues, that is, load balancer, proxy, firewall, and so on

Miscellaneous performance issues

Miscellaneous performance issues include the following:

Inconsistent performance of application components; for example, a single slow component can cause the whole application to slow down
Introduced performance issues that delay the processing speed
Improper configuration tuning of different components, for example, the JVM, the application server, and so on
Application-specific performance issues, such as excessive validations, applying many business rules, and so on

Fake performance issues

Fake performance issues could be temporary issues or not even issues at all. Famous examples are as follows:

Temporary networking issues
Scheduled running jobs (detected from the associated pattern)
Automatic software updates (these must be disabled in production)
Non-reproducible issues

In the following sections, we will go through some of the listed issues.
Threading performance issues

Multithreading has the advantage of maximizing hardware utilization. In particular, it maximizes the processing power by executing multiple tasks concurrently. But it has different side effects, especially if not used wisely inside the application. For example, in order to distribute tasks among different concurrent threads, there should be no (or minimal) data dependency, so each thread can complete its task without waiting for other threads to finish. Also, threads shouldn't compete over different shared resources, or they will be blocked waiting for each other. We will discuss some of the common threading issues in the next section.

Blocking threads

A common issue is threads being blocked while waiting to obtain the monitor(s) of certain shared resources (objects) that are held by other threads. If most of the application server threads are consumed in a certain blocked status, the application gradually becomes unresponsive to user requests. In the WebLogic application server, if a thread keeps executing for more than a configurable period of time (not idle), it gets promoted to a stuck thread. The more threads that are in the stuck status, the more critical the server status becomes. Configuring the stuck thread parameters is part of WebLogic performance tuning.

Performance symptoms

The following symptoms usually appear in cases of thread blocking:

Slow application response (increased single-request latency and pending user requests)
Application server logs might show some stuck threads
The server's health status becomes critical on monitoring tools (the application server console or different monitoring tools)
Frequent application server restarts, either manual or automatic
A thread dump shows a lot of threads in the blocked status, waiting for different resources
Application profiling shows a lot of thread blocking

An example of thread blocking

To understand the effect of thread blocking on application execution, open the HighCPU project and measure its execution time by adding the following additional lines:

long start = new Date().getTime();
..
..
long duration = new Date().getTime() - start;
System.err.println("total time = " + duration);

Now, try to execute the code with different thread pool sizes. We can try using thread pool sizes of 50 and 5, and compare the results. In our results, the execution of the application with 5 threads is much faster than with 50 threads! Let's now compare the NetBeans profiling results of both executions to understand the reason behind this unexpected difference. The following screenshot shows the profiling of 50 threads; we can see a lot of blocking for the monitor in the column, and the Monitor waiting percentage to the left is around 75 percent:

To get the preceding profiling screen, click on the Profile menu inside NetBeans, and then click on Profile Project (HighCPU). From the pop-up options, select Monitor, check all the available options, and then click on Run. The following screenshot shows the profiling of 5 threads, where there is almost no blocking, that is, fewer threads compete for these resources:

Try to remove the System.out statement from inside the run() method, re-execute the tests, and compare the results. Another factor that also affects the selection of the pool size, especially when the thread execution takes a long time, is the context switching overhead.
This overhead requires the selection of an optimal pool size, usually related to the number of processors available to our application. Context switching is the CPU switching from one process (or thread) to another, which requires restoration of the execution data (different CPU registers and program counters). Context switching includes suspension of the currently executing process, storing its current data, picking the next process for execution according to its priority, and restoring that process's data. Although context switching is supported at the hardware level and is faster there, most operating systems perform it at the software level to improve performance. The main reason behind this is the ability of software context switching to selectively choose the required registers to save.

Thread deadlock

When many threads hold the monitors of objects that other threads need, this will result in a deadlock, unless the implementation uses the new explicit Lock interface. In the example, we had a deadlock caused by two different threads each waiting to obtain the monitor that the other thread held. Thread profiling will show these threads in a continuous blocking status, waiting for the monitors. All threads that go into the deadlock status become out of service for users' requests, as shown in the following screenshot:

Usually, this happens if the order of obtaining the locks is not planned. For example, if we need a quick and easy fix for a bidirectional thread deadlock, we can always lock the smallest or the largest bank account first, regardless of the transfer direction. This will prevent any deadlock from happening in our simple two-threaded mode. But if we have more threads, we need a much more mature way to handle this, by using the Lock interface or some other technique.
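The lock-ordering fix described above (always acquiring the account locks in a fixed global order, regardless of the transfer direction) is language-agnostic. The following sketch, written in Python for brevity and with an invented account class, shows the idea:

```python
import threading

class Account:
    def __init__(self, account_id, balance):
        self.account_id = account_id
        self.balance = balance
        self.lock = threading.Lock()

def transfer(source, target, amount):
    # Deadlock avoidance: always lock the account with the smallest id
    # first, no matter which direction the money flows.
    first, second = sorted((source, target), key=lambda a: a.account_id)
    with first.lock:
        with second.lock:
            source.balance -= amount
            target.balance += amount

a = Account(1, 100)
b = Account(2, 100)
# Opposite-direction transfers from two threads cannot deadlock here,
# because both threads lock account 1 before account 2.
t1 = threading.Thread(target=transfer, args=(a, b, 30))
t2 = threading.Thread(target=transfer, args=(b, a, 10))
t1.start(); t2.start(); t1.join(); t2.join()
print(a.balance, b.balance)  # 80 120
```

Because both threads always take the locks in the same global order, the circular wait condition required for a deadlock can never arise.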
Memory performance issues

In spite of all the great effort put into allocating and freeing memory in an optimized way, we still see memory issues in Java Enterprise applications, mainly due to the way these applications deal with memory. We will discuss three types of memory issues: memory leakage, memory allocation, and application data caching.

Memory leakage

Memory leakage is a common performance issue where the garbage collector is not at fault; it is mainly a design/coding issue where an object is no longer required but remains referenced in the heap, so the garbage collector can't reclaim its space. If this is repeated with different objects over a long period (depending on object sizes and the scenarios involved), it may lead to an out-of-memory error. The most common example of memory leakage is adding objects to static collections (or instance collections of long-living objects, such as a servlet) and forgetting to clean the collections totally or partially.

Performance symptoms

The following are some of the expected symptoms of a memory leakage in our application:

The application's heap memory usage increases over time
The response slows down gradually due to memory congestion
OutOfMemoryError occurs frequently in the logs, and sometimes an application server restart is required
Aggressive execution of garbage collection activities
A heap dump shows a lot of retained objects (of the leaking types)
A sudden increase in memory paging, as reported by the operating system's monitoring tools

An example of memory leakage

We have a sample application, ExampleTwo; this is a product catalog where users can select products and add them to the basket. The application is written in spaghetti code, so it has a lot of issues, including bad design, improper object scopes, bad caching, and memory leakage.
The following screenshot shows the product catalog browser page:

One of the bad practices is the usage of servlet instance members (or static members), as it causes a lot of issues across multiple threads and is a common location for unnoticed memory leakages. We have added the following instance variable as a leakage location:

private final HashMap<String, HashMap> cachingAllUsersCollection = new HashMap();

We will add some collections to the preceding code to cause memory leakage. We also used caching in the session scope, which causes implicit leakage. Session scope leakage is difficult to diagnose, as it follows the session life cycle. Once the session is destroyed, the leakage stops, so we can say it is less severe, but it is more difficult to catch. Adding global elements, such as a catalog or stock levels, to the session scope has no meaning. The session scope should be restricted to user-specific data only. Also, forgetting to remove data that is no longer required from a session makes the memory utilization worse. Refer to the following code:

@Stateful
public class CacheSessionBean

Instead of using a singleton class here, or a stateless bean with a static member, we used a Stateful bean, so it is instantiated per user session. We used JPA beans in the application layers instead of View Objects. We also used loops over collections instead of querying or retrieving the required object directly, and so on. It would be good to troubleshoot this application with different profiling aspects to fix all these issues. All these factors are enough to describe such a project as spaghetti. We can use our knowledge of Apache JMeter to develop simple testing scenarios. As shown in the following screenshot, the scenario consists of catalog navigation and adding some products to the basket:

Executing the test plan using many concurrent users over many iterations will show the bad behavior of our application, where the used memory increases over time.
There is no justification for this, as the catalog is the same for all users and there is no user-specific data except for the IDs of the selected products. Only those actually need to be saved inside the user session, and they won't take any remarkable memory space. In our example, we intentionally save a lot of objects in the session, implement a wrong session-level cache, and implement meaningless servlet-level caching. All of this contributes to memory leakage. This gradual increase in memory consumption is what we need to spot in our environment as early as possible (as we can see in the following screenshot, the memory consumption in our application is approaching 200 MB!):

Improper data caching

Caching is one of the critical components in an enterprise application architecture. It increases the application performance by decreasing the time required to query an object again from its data store, but it also complicates the application design and causes a lot of secondary issues. The main concerns in a cache implementation are the caching refresh rate, the caching invalidation policy, data inconsistency in a distributed environment, locking issues while waiting to obtain a cached object's lock, and so on.

Improper caching issue types

The improper caching issue can take a lot of different forms. We will pick some of them and discuss them in the following sections.

No caching (disabled caching)

Disabled caching will definitely cause a big load on the interfacing resources (for example, the database) by hitting them with almost every interaction. This should be avoided while designing an enterprise application; otherwise, the application won't be usable. Fortunately, this has less impact than using a wrong caching implementation! Most application components, such as the database, JPA, and application servers, already have out-of-the-box caching support.
Too small caching size

A too-small cache size is a common performance issue, where the cache size is determined initially but doesn't get reviewed as the application data grows. Cache sizing is affected by many factors, such as the available memory and the type of data: lookup data should be cached entirely when possible, while transactional data shouldn't be cached unless required, and then only under a very strict locking mechanism. Also, the cache replacement policy and invalidation play an important role and should be tailored according to the application's needs, for example, least frequently used, least recently used, most frequently used, and so on.

As a general rule, the bigger the cache size, the higher the cache hit rate and the lower the cache miss ratio. The proper replacement policy contributes here as well; if we are working—as in our example—on an online product catalog, we may use the least recently used policy, so all the old products will be removed, which makes sense as users usually look for new products. Periodic monitoring of cache utilization is an essential proactive measure to catch any deviations early and adjust the cache size according to the monitoring results. For example, if the cache saturation is more than 90 percent and the cache miss ratio is high, resizing the cache is required. Missed cache hits are very costly, as they hit the cache first and then the resource itself (for example, the database) to get the required object, and then add this loaded object into the cache again by evicting another object (if the cache is 100 percent full), according to the cache replacement policy in use.

Too big caching size

A too-big cache size might cause memory issues. If there is no control over the cache size and it keeps growing, and if it is a Java cache, the garbage collector will consume a lot of time trying to garbage collect that huge memory, aiming to free some space.
This will increase the garbage collection pause time and decrease the cache throughput. If the cache throughput decreases, the latency of getting objects from the cache increases, making the cache retrieval cost so high that it might be slower than hitting the actual resources (for example, the database).

Using the wrong caching policy

Each application's cache implementation should be tailored according to the application's needs and data types (transactional versus lookup data). If the selection of the caching policy is wrong, the cache will hurt the application's performance rather than improve it.

Performance symptoms

According to the cache issue type and the different cache configurations, we will see the following symptoms:

A decreased cache hit rate (and an increased cache miss ratio)
Increased cache loading because of an improper size
Increased cache latency with a huge cache size
A spiky pattern in the performance-testing response time; if the cache size is not correct, it causes continuous invalidation and reloading of the cached objects

An example of improper caching techniques

In our example, ExampleTwo, we have demonstrated many caching issues: no policy is defined, the global cache is wrong, the local cache is improper, and no cache invalidation is implemented, so we can have stale objects inside the cache.

Cache invalidation is the process of refreshing or updating an existing object inside the cache, or simply removing it from the cache, so that the next load reflects its most recent values. This keeps the cached objects up to date.

Cache hit rate is the rate or ratio at which cache hits match (find) the required cached object. It is the main measure of cache effectiveness, together with the retrieval cost.

Cache miss rate is the rate or ratio at which cache hits fail to find the required object in the cache.

Last access time is the timestamp of the last access (successful hit) to a cached object.
Caching replacement policies (or algorithms) are the algorithms a cache implements to replace existing cached objects with new ones when there is no room for additional objects. Replacement follows missed cache hits for these objects. Some examples of these policies are as follows:

First-in-first-out (FIFO): In this policy, the cached objects are aged and the oldest object is removed in favor of the newly added ones.
Least frequently used (LFU): In this policy, the cache evicts the least frequently used objects to free memory, which means the cache records usage statistics for each cached object.
Least recently used (LRU): In this policy, the cache replaces the least recently accessed or used items; this means the cache keeps information such as the last access time of all cached objects.
Most recently used (MRU): This policy is the opposite of the previous one; it removes the most recently used items. This policy fits applications where items are no longer needed after access, such as used exam vouchers.
Aging policy: Every object in the cache has an age limit, and once it exceeds this limit, it is removed from the cache (in the simple variant). In the advanced variant, the policy also considers invalidating the cache according to predefined configuration rules, for example, every three hours, and so on.

It is important for us to understand that caching is not a magic bullet; it has a lot of related issues and drawbacks. Sometimes, it causes overhead if it is not correctly tailored to real application needs.
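Although the article's examples are in Java, the least recently used policy described above is compact enough to sketch directly. This is a hedged illustration of the policy itself, not of any particular caching library:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal least-recently-used cache: when full, evict the entry
    whose last access is the oldest."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None              # a missed cache hit
        self.items.move_to_end(key)  # record the access (last access time)
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict the least recently used

cache = LRUCache(2)
cache.put("old-product", 1)
cache.put("new-product", 2)
cache.get("old-product")        # touch it, so it is now most recent
cache.put("newest-product", 3)  # evicts "new-product", not "old-product"
print(sorted(cache.items))      # ['newest-product', 'old-product']
```

The popitem(last=False) call is what realizes the replacement policy; swapping the bookkeeping in get would turn the same skeleton into an LFU or MRU cache instead.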

Packt
19 Jun 2014
14 min read

Getting Started with Mockito

Mockito is an open source mocking framework for Java that allows you to easily create test doubles (mocks). What makes Mockito so special is that it eliminates the common expect-run-verify pattern (which was present, for example, in EasyMock—please refer to http://monkeyisland.pl/2008/02/24/can-i-test-what-i-want-please for more details), which in effect leads to a lower coupling of the test code to the production code. In other words, one does not have to define the expectations of how the mock should behave in order to verify its behavior. That way, the code is clearer and more readable for the user. Mockito has a very active group of contributors and is actively maintained; by the time this article was written, the latest Mockito release was Version 1.9.5, from October 2012.

You may ask yourself, "Why should I even bother to use Mockito in the first place?" Among many features, Mockito offers the following key ones:

There is no expectation phase for Mockito—you can either stub or verify the mock's behavior
You are able to mock both interfaces and classes
You can produce little boilerplate code while working with Mockito by means of annotations
You can easily verify or stub with intuitive argument matchers

Before diving into Mockito as such, one has to understand the concepts of System Under Test (SUT) and test doubles. We will base these on what Gerard Meszaros has defined in xUnit Patterns (http://xunitpatterns.com/Mocks,%20Fakes,%20Stubs%20and%20Dummies.html). SUT (http://xunitpatterns.com/SUT.html) describes the system that we are testing. It doesn't necessarily have to be a class; it can be any part of the application that we are testing, or even the whole application as such. As for a test double (http://www.martinfowler.com/bliki/TestDouble.html), it's an object that is used only for testing purposes, in place of a real object.
Let's take a look at the different types of test doubles:

Dummy: This is an object that is used only so that the code compiles—it doesn't have any business logic (for example, an object passed as a parameter to a method)
Fake: This is an object that has an implementation, but it's not production ready (for example, using an in-memory database instead of communicating with a standalone one)
Stub: This is an object that has predefined answers to the method executions made during the test
Mock: This is an object that has predefined answers to the method executions made during the test, and has recorded expectations of these executions
Spy: This is an object that is similar to a stub, but it additionally records how it was executed (for example, a service that holds a record of the number of sent messages)

An additional remark relates to testing the output of our application: the more decoupled your test code is from your production code, the better, since you will have to spend less time (or even none) on modifying your tests after you change the implementation of the code.

Coming back to the article's content—this article is all about getting started with Mockito. We will begin with how to add Mockito to your classpath. Then, we'll see a simple setup of tests for both the JUnit and TestNG test frameworks. Next, we will check why it is crucial to assert the behavior of the system under test instead of verifying its implementation details. Finally, we will check out some of Mockito's experimental features, such as adding hints and warnings to the exception messages. The very idea of the following recipes is to prepare your test classes to work with Mockito and to show you how to do this with as little boilerplate code as possible.
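The taxonomy above is framework-independent. As a quick, hedged sketch using Python's standard unittest.mock instead of Mockito (with invented service names), a stub only returns canned answers, while a mock additionally records expectations that we can verify:

```python
from unittest.mock import Mock

# Stub behaviour: predefined answers to method executions, nothing more.
repository = Mock()
repository.find_user.return_value = {"name": "Jan", "active": True}
assert repository.find_user("id-1")["name"] == "Jan"

# Mock behaviour: predefined answers plus recorded expectations of the
# executions, which we can verify after exercising the code.
mailer = Mock()
mailer.send("jan@example.com", "welcome")
mailer.send.assert_called_once_with("jan@example.com", "welcome")

# Spy-like behaviour: the double records how it was executed.
print(mailer.send.call_count)  # 1
```

The same conceptual split (stubbing answers versus verifying interactions) is what Mockito's when/verify API expresses in Java.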
Due to my fondness for behavior-driven development (http://dannorth.net/introducing-bdd/, first introduced by Dan North), I'm using Mockito's BDDMockito and AssertJ's BDDAssertions static methods to make the code even more readable and intuitive in all the test cases. Also, please read Szczepan Faber's blog (he is the author of Mockito) about the given/when/then separation in your test methods—http://monkeyisland.pl/2009/12/07/given-when-then-forever/—since these are omnipresent throughout the article. I don't want the article to become a duplication of the Mockito documentation, which is of high quality—I would like you to take a look at good tests and get acquainted with the Mockito syntax from the beginning. What's more, I've used static imports in the code to make it even more readable, so if you get confused by any of the pieces of code, it would be best to consult the repository and the code as such.

Adding Mockito to a project's classpath

Adding Mockito to a project's classpath is as simple as adding one of the following two jars to your project's classpath:

mockito-all: This is a single jar with all dependencies (with the hamcrest and objenesis libraries—as of June 2011)
mockito-core: This is only the Mockito core (without hamcrest or objenesis). Use this if you want to control which version of hamcrest or objenesis is used.

How to do it...
If you are using a dependency manager that connects to the Maven Central Repository, then you can get your dependencies as follows (examples of how to add mockito-all to your classpath for Maven and Gradle):

For Maven, use the following code:

<dependency>
    <groupId>org.mockito</groupId>
    <artifactId>mockito-all</artifactId>
    <version>1.9.5</version>
    <scope>test</scope>
</dependency>

For Gradle, use the following code:

testCompile "org.mockito:mockito-all:1.9.5"

If you are not using any of the dependency managers, you have to download either mockito-all.jar or mockito-core.jar and add it to your classpath manually (you can download the jars from https://code.google.com/p/mockito/downloads/list).

Getting started with Mockito for JUnit

Before going into details regarding Mockito and JUnit integration, it is worth mentioning a few words about JUnit. JUnit is a testing framework (an implementation of the xUnit framework) that allows you to create repeatable tests in a very readable manner. In fact, JUnit is a port of Smalltalk's SUnit (both frameworks were originally implemented by Kent Beck).

What is important in terms of JUnit and Mockito integration is that, under the hood, JUnit uses a test runner to run its tests (in xUnit terms, a test runner is a program that executes the test logic and reports the test results). Mockito has its own test runner implementation that allows you to reduce boilerplate when creating test doubles (mocks and spies) and injecting them (via constructors, setters, or reflection) into the defined object. What's more, you can easily create argument captors.
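To appreciate what an argument captor saves you, here is what capturing arguments looks like when written by hand—a plain-Java sketch with no Mockito involved, and with purely illustrative names:

```java
import java.util.ArrayList;
import java.util.List;

// A collaborator that a system under test might call
interface AuditLog {
    void record(String entry);
}

// Hand-rolled "captor": a test double that stores every argument it receives
class CapturingAuditLog implements AuditLog {
    final List<String> captured = new ArrayList<>();

    public void record(String entry) {
        captured.add(entry);
    }
}

class AuditCaptorDemo {
    public static void main(String[] args) {
        CapturingAuditLog log = new CapturingAuditLog();
        log.record("user logged in");
        log.record("user logged out");
        System.out.println(log.captured.get(1)); // prints "user logged out"
    }
}
```

With Mockito, a @Captor annotated ArgumentCaptor field replaces this hand-written recording class entirely.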
All of this is feasible by means of the proper annotations, as follows:

@Mock: This is used for mock creation

@Spy: This is used to create a spy instance

@InjectMocks: This is used to instantiate the @InjectMocks annotated field and inject all the @Mock or @Spy annotated fields into it (if applicable)

@Captor: This is used to create an argument captor

By default, you should profit from Mockito's annotations to make your code look neat and to reduce the boilerplate code in your application.

Getting ready

In order to add JUnit to your classpath, if you are using a dependency manager that connects to the Maven Central Repository, you can get your dependencies as follows (examples for Maven and Gradle):

To add JUnit in Maven, use the following code:

<dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.11</version>
    <scope>test</scope>
</dependency>

To add JUnit in Gradle, use the following code:

testCompile('junit:junit:4.11')

If you are not using any of the dependency managers, you have to download the following jars:

junit.jar
hamcrest-core.jar

Add the downloaded files to your classpath manually (you can download the jars from https://github.com/junit-team/junit/wiki/Download-and-Install).

For this recipe, our system under test will be a MeanTaxFactorCalculator class that will call an external service, TaxService, to get the current tax factor for the current user.
It's a tax factor and not tax as such since, for simplicity, we will not be using BigDecimals but doubles, and I'd never suggest using doubles for anything related to money. The code is as follows:

public class MeanTaxFactorCalculator {

    private final TaxService taxService;

    public MeanTaxFactorCalculator(TaxService taxService) {
        this.taxService = taxService;
    }

    public double calculateMeanTaxFactorFor(Person person) {
        double currentTaxFactor = taxService.getCurrentTaxFactorFor(person);
        double anotherTaxFactor = taxService.getCurrentTaxFactorFor(person);
        return (currentTaxFactor + anotherTaxFactor) / 2;
    }
}

How to do it...

To use Mockito's annotations, you have to perform the following steps:

1. Annotate your test class with @RunWith(MockitoJUnitRunner.class).
2. Annotate the test fields with the @Mock or @Spy annotation to have either a mock or a spy object instantiated.
3. Annotate the test fields with the @InjectMocks annotation to first instantiate the @InjectMocks annotated field and then inject all the @Mock or @Spy annotated fields into it (if applicable).

The following snippet shows the JUnit and Mockito integration in a test class that verifies the SUT's behavior (remember that I'm using the BDDMockito.given(...) and AssertJ's BDDAssertions.then(...) static methods):

@RunWith(MockitoJUnitRunner.class)
public class MeanTaxFactorCalculatorTest {

    static final double TAX_FACTOR = 10;

    @Mock TaxService taxService;

    @InjectMocks MeanTaxFactorCalculator systemUnderTest;

    @Test
    public void should_calculate_mean_tax_factor() {
        // given
        given(taxService.getCurrentTaxFactorFor(any(Person.class))).willReturn(TAX_FACTOR);

        // when
        double meanTaxFactor = systemUnderTest.calculateMeanTaxFactorFor(new Person());

        // then
        then(meanTaxFactor).isEqualTo(TAX_FACTOR);
    }
}

To profit from Mockito's annotations using JUnit, you just have to annotate your test class with @RunWith(MockitoJUnitRunner.class).

How it works...

The Mockito test runner will adapt its strategy depending on the version of JUnit.
If there exists an org.junit.runners.BlockJUnit4ClassRunner class, it means that the codebase is using at least JUnit in Version 4.5. What eventually happens is that the MockitoAnnotations.initMocks(...) method is executed for the given test, which initializes all the Mockito annotations (for more information, check the subsequent There's more… section).

There's more...

You may have a situation where your test class has already been annotated with a @RunWith annotation and, seemingly, you cannot profit from Mockito's annotations. In order to still use them, you have to call the MockitoAnnotations.initMocks method manually in the @Before annotated method of your test, as shown in the following code:

public class MeanTaxFactorCalculatorTest {

    static final double TAX_FACTOR = 10;

    @Mock TaxService taxService;

    @InjectMocks MeanTaxFactorCalculator systemUnderTest;

    @Before
    public void setup() {
        MockitoAnnotations.initMocks(this);
    }

    @Test
    public void should_calculate_mean_tax_factor() {
        // given
        given(taxService.getCurrentTaxFactorFor(Mockito.any(Person.class))).willReturn(TAX_FACTOR);

        // when
        double meanTaxFactor = systemUnderTest.calculateMeanTaxFactorFor(new Person());

        // then
        then(meanTaxFactor).isEqualTo(TAX_FACTOR);
    }
}

To use Mockito's annotations without the JUnit test runner, you have to call the MockitoAnnotations.initMocks method and pass the test class as its parameter. Mockito checks whether the user has overridden the global configuration of AnnotationEngine, and if this is not the case, the InjectingAnnotationEngine implementation is used to process the annotations in tests. What is done internally is that the test class fields are scanned for annotations, and proper test doubles are initialized and injected into the @InjectMocks annotated object (either by a constructor, property setter, or field injection, in that precise order).
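As a rough illustration of the field-injection fallback, the following plain-Java sketch injects a value into a field by assignable type using reflection. This is a toy approximation of the idea, not Mockito's actual InjectingAnnotationEngine logic, and all names are invented for the example:

```java
import java.lang.reflect.Field;

class Collaborator { }

class Target {
    private Collaborator collaborator; // no setter, no constructor argument

    Collaborator getCollaborator() {
        return collaborator;
    }
}

class FieldInjectionDemo {
    // Inject `value` into the first declared field of `target`
    // whose type can hold it (a simplified type-matching strategy)
    static void injectByType(Object target, Object value) throws Exception {
        for (Field field : target.getClass().getDeclaredFields()) {
            if (field.getType().isAssignableFrom(value.getClass())) {
                field.setAccessible(true);
                field.set(target, value);
                return;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        Target target = new Target();
        injectByType(target, new Collaborator());
        System.out.println(target.getCollaborator() != null); // prints true
    }
}
```

Mockito's real engine is considerably smarter (it also matches mock names against field names, and tries constructors and setters first), but the reflective principle is the same.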
You have to remember several factors related to the automatic injection of test doubles, as follows:

If Mockito is not able to inject test doubles into the @InjectMocks annotated fields through any of the strategies, it won't report failure—the test will continue as if nothing happened (and most likely, you will get a NullPointerException).

For constructor injection, if arguments cannot be found, then null is passed.

For constructor injection, if nonmockable types are required in the constructor, then the constructor injection won't take place.

For other injection strategies, if you have properties with the same type (or same erasure) and Mockito matches mock names with a field/property name, it will inject that mock properly. Otherwise, the injection won't take place.

For other injection strategies, if the @InjectMocks annotated object wasn't previously initialized, then Mockito will instantiate the aforementioned object using a no-arg constructor, if applicable.

See also

JUnit documentation at https://github.com/junit-team/junit/wiki

Martin Fowler's article on xUnit at http://www.martinfowler.com/bliki/Xunit.html

Gerard Meszaros's xUnit Test Patterns at http://xunitpatterns.com/

The @InjectMocks Mockito documentation (with a description of the injection strategies) at http://docs.mockito.googlecode.com/hg/1.9.5/org/mockito/InjectMocks.html

Getting started with Mockito for TestNG

Before going into details regarding Mockito and TestNG integration, it is worth mentioning a few words about TestNG. TestNG is a unit testing framework for Java that was created, as the author defines it on the tool's website (refer to the See also section for the link), out of frustration with some JUnit deficiencies. TestNG was inspired by both JUnit and NUnit, and aims at covering the whole scope of testing—from unit, through functional and integration, to end-to-end tests, and so on. The JUnit library, however, was initially created for unit testing only.
The main differences between JUnit and TestNG are as follows:

The TestNG author disliked JUnit's approach of having to define some methods as static in order to be executed before the test class logic gets executed (for example, the @BeforeClass annotated methods)—that's why in TestNG you don't have to define these methods as static.

TestNG has more annotations related to method execution before single tests, suites, and test groups.

TestNG annotations are more descriptive in terms of what they do; compare, for example, JUnit's @Before with TestNG's @BeforeMethod.

Mockito in Version 1.9.5 doesn't provide any out-of-the-box solution to integrate with TestNG in a simple way, but there is a special Mockito subproject for TestNG (refer to the See also section for the URL) that should be part of one of the subsequent Mockito releases. In the following recipe, we will take a look at how to profit from that code and that very elegant solution.

Getting ready

When you take a look at Mockito's TestNG subproject on the Mockito GitHub repository, you will find that there are three classes in the org.mockito.testng package, as follows:

MockitoAfterTestNGMethod
MockitoBeforeTestNGMethod
MockitoTestNGListener

Unfortunately, until this project eventually gets released, you have to copy and paste those classes to your codebase.

How to do it...

To integrate TestNG and Mockito, perform the following steps:

1. Copy the MockitoAfterTestNGMethod, MockitoBeforeTestNGMethod, and MockitoTestNGListener classes to your codebase from Mockito's TestNG subproject.
2. Annotate your test class with @Listeners(MockitoTestNGListener.class).
3. Annotate the test fields with the @Mock or @Spy annotation to have either a mock or a spy object instantiated.
4. Annotate the test fields with the @InjectMocks annotation to first instantiate the @InjectMocks annotated field and inject all the @Mock or @Spy annotated fields into it (if applicable).
5. Annotate the test fields with the @Captor annotation to make Mockito instantiate an argument captor.

Now let's take a look at this snippet that, using TestNG, checks whether the mean tax factor value has been calculated properly (remember that I'm using the BDDMockito.given(...) and AssertJ's BDDAssertions.then(...) static methods):

@Listeners(MockitoTestNGListener.class)
public class MeanTaxFactorCalculatorTestNgTest {

    static final double TAX_FACTOR = 10;

    @Mock TaxService taxService;

    @InjectMocks MeanTaxFactorCalculator systemUnderTest;

    @Test
    public void should_calculate_mean_tax_factor() {
        // given
        given(taxService.getCurrentTaxFactorFor(any(Person.class))).willReturn(TAX_FACTOR);

        // when
        double meanTaxFactor = systemUnderTest.calculateMeanTaxFactorFor(new Person());

        // then
        then(meanTaxFactor).isEqualTo(TAX_FACTOR);
    }
}

How it works...

TestNG allows you to register custom listeners (your listener class has to implement the IInvokedMethodListener interface). Once you do this, the logic inside the implemented methods will be executed before and after every configuration and test method gets called. Mockito provides you with a listener whose responsibilities are as follows:

Initialize mocks annotated with the @Mock annotation (this is done only once)

Validate the usage of Mockito after each test method

Remember that with TestNG, all mocks are reset (or initialized, if that hasn't already been done) before every TestNG method!

See also

The TestNG homepage at http://testng.org/doc/index.html

The Mockito TestNG subproject at https://github.com/mockito/mockito/tree/master/subprojects/testng

The Getting started with Mockito for JUnit recipe, for an analysis of @InjectMocks