
How-To Tutorials - Programming


8 recipes to master Promises in ECMAScript 2018

Richa Tripathi
08 May 2018
17 min read
What are Promises in ECMAScript?

In earlier versions of JavaScript, the callback pattern was the most common way to organize asynchronous code. It got the job done, but it didn't scale well. With callbacks, as more asynchronous functions are added, the code becomes more deeply nested, and it becomes more difficult to add to, refactor, and understand. This situation is commonly known as callback hell. Promises were introduced to improve on it: they allow the relationships of asynchronous operations to be rearranged and organized with more freedom and flexibility. In this article, we will learn about Promises and how to use them to create and organize asynchronous functions. We will also explore how to handle error conditions.

Creating and waiting for Promises

Promises provide a way to compose and combine asynchronous functions in an organized and easier-to-read way. This recipe demonstrates a very basic usage of promises. It assumes that you already have a workspace that allows you to create and run ES modules in your browser; the same applies to all the recipes below.

How to do it...

1. Open your command-line application and navigate to your workspace.
2. Create a new folder named 03-01-creating-and-waiting-for-promises.
3. Copy or create an index.html that loads and runs a main function from main.js.
4. Create a main.js file that creates a promise and logs messages before and after the promise is created, as well as while the promise is executing and after it has been resolved:

    // main.js
    export function main () {
      console.log('Before promise created');
      new Promise(function (resolve) {
        console.log('Executing promise');
        resolve();
      }).then(function () {
        console.log('Finished promise');
      });
      console.log('After promise created');
    }

5. Start your Python web server and open the following link in your browser: http://localhost:8000/. You will see the log messages in the browser console.

How it works...

By looking at the order of the log messages, you can clearly see the order of operations. First, the initial log is executed. Next, the promise is created with an executor method. The executor method takes resolve as an argument. The resolve function fulfills the promise. Promises adhere to an interface named thenable, which means that we can chain then callbacks. The callback we attached with this method is executed after the resolve function is called, and it executes asynchronously rather than immediately after the promise has been resolved. Finally, there is a log after the promise has been created.

The order in which the log messages appear reveals the asynchronous nature of the code. All of the logs appear in the order they occur in the code, except the Finished promise message: that callback is executed asynchronously, after the main function has exited!
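To see this deferral in isolation, here is a minimal sketch (not part of the recipe) comparing a then callback with a timer callback. Promise callbacks run as microtasks, after the current synchronous code finishes but before timer callbacks:

    console.log('start');
    setTimeout(function () { console.log('timeout callback'); }, 0);
    Promise.resolve().then(function () { console.log('then callback'); });
    console.log('end');
    // Logs: start, end, then callback, timeout callback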
Resolving Promise results

In the previous recipe, we saw how to use promises to execute asynchronous code. However, that code is pretty basic: it just logs a message and then calls resolve. Often, we want to use asynchronous code to perform some long-running operation and then return its value. This recipe demonstrates how to use resolve in order to return the result of a long-running operation.

How to do it...

1. Open your command-line application and navigate to your workspace.
2. Create a new folder named 3-02-resolving-promise-results.
3. Copy or create an index.html that loads and runs a main function from main.js.
4. Create a main.js file that creates a promise and logs messages before and after the promise is created:

    // main.js
    export function main () {
      console.log('Before promise created');
      new Promise(function (resolve) {
      });
      console.log('After promise created');
    }

5. Within the promise, resolve a random number after a 5-second timeout:

    new Promise(function (resolve) {
      setTimeout(function () {
        resolve(Math.random());
      }, 5000);
    })

6. Chain a then call off the promise. Pass a function that logs out the value of its only argument:

    new Promise(function (resolve) {
      setTimeout(function () {
        resolve(Math.random());
      }, 5000);
    }).then(function (result) {
      console.log('Long running job returned: %s', result);
    });

7. Start your Python web server and open the following link in your browser: http://localhost:8000/. After about 5 seconds, you should see the Long running job returned message.

How it works...

Just as in the previous recipe, the promise is not fulfilled until resolve is executed (this time, after 5 seconds). This time, however, we called resolve with a random number as its argument. When this happens, the argument is provided to the callback of the subsequent then function. We'll see in future recipes how this can be continued to create promise chains.

Rejecting Promise errors

In the previous recipe, we saw how to use resolve to provide a result from a successfully fulfilled promise. Unfortunately, code doesn't always run as expected. Network connections can be down, data can be corrupted, and countless other errors can occur. We need to be able to handle those situations as well. This recipe demonstrates how to use reject when errors arise.

How to do it...

1. Open your command-line application and navigate to your workspace.
2. Create a new folder named 3-03-rejecting-promise-errors.
3. Copy or create an index.html that loads and runs a main function from main.js.
4. Create a main.js file that creates a promise, and logs messages before and after the promise is created and when the promise is fulfilled:

    new Promise(function (resolve) {
      resolve();
    }).then(function (result) {
      console.log('Promise Completed');
    });

5. Add a second argument to the promise callback named reject, and call reject with a new error:

    new Promise(function (resolve, reject) {
      reject(new Error('Something went wrong'));
    }).then(function (result) {
      console.log('Promise Completed');
    });

6. Chain a catch call off the promise. Pass a function that logs out its only argument:

    new Promise(function (resolve, reject) {
      reject(new Error('Something went wrong'));
    }).then(function (result) {
      console.log('Promise Completed');
    }).catch(function (error) {
      console.error(error);
    });

7. Start your Python web server and open the following link in your browser: http://localhost:8000/. You should see the error logged to the console.

How it works...

Previously, we saw how to use resolve to return a value in the case of a successful fulfillment of a promise. In this case, we called reject before resolve, so the promise finished in an error state before it could resolve. When a promise completes in an error state, the then callbacks are not executed. Instead, we have to use catch in order to receive the error with which the promise was rejected. You'll also notice that the catch callback is only executed after the main function has returned. Like successful fulfillments, listeners to unsuccessful ones execute asynchronously.
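As a variation not shown in the recipe, throwing inside the executor rejects the promise just as calling reject does, so both styles of error reporting end up in the same catch callback:

    new Promise(function (resolve, reject) {
      throw new Error('Something went wrong');
    }).then(function () {
      console.log('Promise Completed'); // never runs
    }).catch(function (error) {
      console.error(error); // Error: Something went wrong
    });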
See also

- Handling errors with Promise.catch
- Simulating finally with Promise.then

Chaining Promises

So far in this article, we've seen how to use promises to run single asynchronous tasks. This is helpful, but it doesn't provide a significant improvement over the callback pattern. The real advantage that promises offer comes when they are composed. In this recipe, we'll use promises to combine asynchronous functions in series.

How to do it...

1. Open your command-line application and navigate to your workspace.
2. Create a new folder named 3-04-chaining-promises.
3. Copy or create an index.html that loads and runs a main function from main.js.
4. Create a main.js file that creates a promise. Resolve a random number from the promise:

    new Promise(function (resolve) {
      resolve(Math.random());
    });

5. Chain a then call off of the promise. Return true from the callback if the random value is greater than or equal to 0.5:

    new Promise(function (resolve, reject) {
      resolve(Math.random());
    }).then(function (value) {
      return value >= 0.5;
    });

6. Chain a final then call after the previous one. Log out a different message depending on whether the argument is true or false:

    new Promise(function (resolve, reject) {
      resolve(Math.random());
    }).then(function (value) {
      return value >= 0.5;
    }).then(function (isReadyForLaunch) {
      if (isReadyForLaunch) {
        console.log('Start the countdown! ');
      } else {
        console.log('Abort the mission. ');
      }
    });

7. Start your Python web server and open the following link in your browser: http://localhost:8000/. If you are lucky, you'll see the Start the countdown! message; if you are unlucky, you'll see Abort the mission. instead.

How it works...

We've already seen how to use then to wait for the result of a promise. Here, we are doing the same thing multiple times in a row. This is called a promise chain. After the promise chain is started with the new promise, all of the subsequent links in the promise chain return promises as well. That is, the return value of each then callback is resolved as though it were a promise itself.
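The same mechanism applies when a then callback returns a promise rather than a plain value: the next link waits for that promise to settle before running. Here is an illustrative sketch beyond the recipe (the delay helper below is hypothetical):

    function delay (ms, value) {
      return new Promise(function (resolve) {
        setTimeout(function () { resolve(value); }, ms);
      });
    }

    delay(500, 2)
      .then(function (n) { return delay(500, n * 2); }) // the chain waits again
      .then(function (n) { console.log('Result: %s', n); }); // Result: 4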
See also

- Using Promise.all to resolve multiple Promises
- Handling errors with Promise.catch
- Simulating finally with a final Promise.then call

Starting a Promise chain with Promise.resolve

In this article's preceding recipes, we've been creating new promise objects with the constructor. This gets the job done, but it creates a problem: the first callback in the promise chain has a different shape than the subsequent callbacks. In the first callback, the arguments are the resolve and reject functions that trigger the subsequent then or catch callbacks. In subsequent callbacks, the returned value is propagated down the chain, and thrown errors are captured by catch callbacks. This difference adds mental overhead. It would be nice to have all of the functions in the chain behave in the same way. In this recipe, we'll see how to use Promise.resolve to start a promise chain.

How to do it...

1. Open your command-line application and navigate to your workspace.
2. Create a new folder named 3-05-starting-with-resolve.
3. Copy or create an index.html that loads and runs a main function from main.js.
4. Create a main.js file that calls Promise.resolve with an empty object as the first argument:

    export function main () {
      Promise.resolve({})
    }

5. Chain a then call off of resolve, and attach rocket boosters to the passed object:

    export function main () {
      Promise.resolve({}).then(function (rocket) {
        console.log('attaching boosters');
        rocket.boosters = [{
          count: 2,
          fuelType: 'solid'
        }, {
          count: 1,
          fuelType: 'liquid'
        }];
        return rocket;
      })
    }

6. Add a final then call to the chain that lets you know when the boosters have been added:

    export function main () {
      Promise.resolve({})
        .then(function (rocket) {
          console.log('attaching boosters');
          rocket.boosters = [{
            count: 2,
            fuelType: 'solid'
          }, {
            count: 1,
            fuelType: 'liquid'
          }];
          return rocket;
        })
        .then(function (rocket) {
          console.log('boosters attached');
          console.log(rocket);
        })
    }

7. Start your Python web server and open the following link in your browser: http://localhost:8000/. You should see the boosters attached and the rocket object logged.

How it works...

Promise.resolve creates a new promise that resolves to the value passed to it. The subsequent then method receives that resolved value as its argument. This method can seem a little roundabout, but it is very helpful for composing asynchronous functions. In effect, the constituents of the promise chain don't need to be aware that they are in the chain (including the first step). This makes transitioning from code that doesn't use promises to code that does much easier.
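A small sketch (an addition, not from the book) makes this normalizing property concrete: Promise.resolve also accepts promises and thenables, so a chain's entry point doesn't care whether its input is synchronous or asynchronous:

    function startChain (input) {
      return Promise.resolve(input); // input may be a plain value or a promise
    }

    startChain(42).then(function (v) { console.log(v); });                   // 42
    startChain(Promise.resolve(43)).then(function (v) { console.log(v); }); // 43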
Using Promise.all to resolve multiple promises

So far, we've seen how to use promises to perform asynchronous operations in sequence. This is useful when the individual steps are long-running operations. However, it might not always be the most efficient configuration; quite often, we can perform multiple asynchronous operations at the same time. In this recipe, we'll see how to use Promise.all to start multiple asynchronous operations without waiting for the previous one to complete.

How to do it...

1. Open your command-line application and navigate to your workspace.
2. Create a new folder named 3-06-using-promise-all.
3. Copy or create an index.html that loads and runs a main function from main.js.
4. Create a main.js file that creates an object named rocket, and calls Promise.all with an empty array as the first argument:

    export function main () {
      console.log('Before promise created');
      const rocket = {};
      Promise.all([])
      console.log('After promise created');
    }

5. Create a function named addBoosters that adds boosters to an object:

    function addBoosters (rocket) {
      console.log('attaching boosters');
      rocket.boosters = [{
        count: 2,
        fuelType: 'solid'
      }, {
        count: 1,
        fuelType: 'liquid'
      }];
      return rocket;
    }

6. Create a function named performGuidanceDiagnostic that returns a promise of a successfully completed task:

    function performGuidanceDiagnostic (rocket) {
      console.log('performing guidance diagnostic');
      return new Promise(function (resolve) {
        setTimeout(function () {
          console.log('guidance diagnostic complete');
          rocket.guidanceDiagnostic = 'Completed';
          resolve(rocket);
        }, 2000);
      });
    }

7. Create a function named loadCargo that adds a payload to the cargoBay:

    function loadCargo (rocket) {
      console.log('loading satellite');
      rocket.cargoBay = [{ name: 'Communication Satellite' }]
      return rocket;
    }

8. Use Promise.resolve to pass the rocket object to these functions within Promise.all:

    export function main () {
      console.log('Before promise created');
      const rocket = {};
      Promise.all([
        Promise.resolve(rocket).then(addBoosters),
        Promise.resolve(rocket).then(performGuidanceDiagnostic),
        Promise.resolve(rocket).then(loadCargo)
      ]);
      console.log('After promise created');
    }

9. Attach a then call to the chain and log that the rocket is ready for launch:

    const rocket = {};
    Promise.all([
      Promise.resolve(rocket).then(addBoosters),
      Promise.resolve(rocket).then(performGuidanceDiagnostic),
      Promise.resolve(rocket).then(loadCargo)
    ]).then(function (results) {
      console.log('Rocket ready for launch');
      console.log(results);
    });

10. Start your Python web server and open the following link in your browser: http://localhost:8000/. You should see the logs from all three operations, followed by Rocket ready for launch.

How it works...

Promise.all is similar to Promise.resolve in that its arguments are resolved as promises. The difference is that instead of a single value, Promise.all accepts an iterable argument, each member of which is resolved individually. In the preceding example, you can see that each of the promises is initiated immediately; two of them are able to complete while performGuidanceDiagnostic continues. The promise returned by Promise.all is fulfilled when all the constituent promises have been resolved. The results of the promises are combined into an array and propagated down the chain. You can see that three references to rocket are packed into the results argument, and that the operations of each promise have been performed on the resulting object.

There's more

As you may have guessed, the constituent promises don't have to resolve to the same value. This can be useful, for example, when performing multiple independent network requests. The index of the result for each promise corresponds to the index of the operation within the argument to Promise.all. In these cases, it can be useful to use array destructuring to name the arguments of the then callback (assuming each of these functions returns a promise):

    Promise.all([
      findAstronomers(),
      findAvailableTechnicians(),
      findAvailableEquipment()
    ]).then(function ([astronomers, technicians, equipment]) {
      // use results for astronomers, technicians, and equipment
    });
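One behavior worth noting, which goes beyond this excerpt: Promise.all rejects as soon as any constituent promise rejects, and the first error is what the catch callback receives. A minimal sketch:

    Promise.all([
      Promise.resolve('ok'),
      Promise.reject(new Error('telemetry offline')),
      new Promise(function (resolve) { setTimeout(resolve, 1000); })
    ]).then(function (results) {
      console.log(results); // never runs
    }).catch(function (error) {
      console.error(error.message); // 'telemetry offline'
    });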
Handling errors with Promise.catch

In a previous recipe, we saw how to fulfill a promise with an error state using reject, and we saw that this triggers the next catch callback in the promise chain. Because promises are relatively easy to compose, we need to be able to handle errors that are reported in different ways. Luckily, promises handle this seamlessly. In this recipe, we'll see how Promise.catch can handle errors that are reported either by being thrown or through rejection.

How to do it...

1. Open your command-line application and navigate to your workspace.
2. Create a new folder named 3-07-handle-errors-promise-catch.
3. Copy or create an index.html that loads and runs a main function from main.js.
4. Create a main.js file with a main function that creates an object named rocket:

    export function main () {
      console.log('Before promise created');
      const rocket = {};
      console.log('After promise created');
    }

5. Create a function addBoosters that throws an error:

    function addBoosters (rocket) {
      throw new Error('Unable to add Boosters');
    }

6. Create a function performGuidanceDiagnostic that returns a promise that rejects with an error:

    function performGuidanceDiagnostic (rocket) {
      return new Promise(function (resolve, reject) {
        reject(new Error('Unable to finish guidance diagnostic'));
      });
    }

7. Use Promise.resolve to pass the rocket object to these functions, and chain a catch off each of them:

    export function main () {
      console.log('Before promise created');
      const rocket = {};
      Promise.resolve(rocket).then(addBoosters)
        .catch(console.error);
      Promise.resolve(rocket).then(performGuidanceDiagnostic)
        .catch(console.error);
      console.log('After promise created');
    }

8. Start your Python web server and open the following link in your browser: http://localhost:8000/. You should see both errors logged to the console.

How it works...

As we saw before, when a promise is fulfilled in a rejected state, the callbacks of the catch functions are triggered. In the preceding recipe, we saw that this can happen when the reject method is called (as with performGuidanceDiagnostic). It also happens when a function in the chain throws an error (as with addBoosters). This has a similar benefit to how Promise.resolve can normalize asynchronous functions: it allows asynchronous functions to be unaware of the promise chain and to announce error states in a way that is familiar to developers who are new to promises. This makes expanding the use of promises much easier.
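Although the excerpt chains a single catch at the end, a catch placed in the middle of a chain can also recover from an error and let later links continue with a fallback value. A sketch building on the recipe's rocket and addBoosters:

    Promise.resolve(rocket)
      .then(addBoosters)                               // throws
      .catch(function () { return { boosters: [] }; }) // recover with a fallback
      .then(function (r) { console.log('boosters attached: %s', r.boosters.length); }); // 0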
Simulating finally with the promise API

In a previous recipe, we saw how catch can be used to handle errors, whether a promise has rejected or a callback has thrown an error. Sometimes, it is desirable to execute code whether or not an error state has been detected. In the context of try/catch blocks, the finally block can be used for this purpose. We have to do a little more work to get the same behavior when working with promises. In this recipe, we'll see how to use a final then call to execute code in both successful and failing fulfillment states.

How to do it...

1. Open your command-line application and navigate to your workspace.
2. Create a new folder named 3-08-simulating-finally.
3. Copy or create an index.html that loads and runs a main function from main.js.
4. Create a main.js file with a main function that logs out messages before and after promise creation:

    export function main () {
      console.log('Before promise created');
      console.log('After promise created');
    }

5. Create a function named addBoosters that throws an error if its first parameter is true:

    function addBoosters (shouldFail) {
      if (shouldFail) {
        throw new Error('Unable to add Boosters');
      }
      return {
        boosters: [{
          count: 2,
          fuelType: 'solid'
        }, {
          count: 1,
          fuelType: 'liquid'
        }]
      };
    }

6. Use Promise.resolve to pass a Boolean value to addBoosters that is true if a random number is greater than 0.5:

    export function main () {
      console.log('Before promise created');
      Promise.resolve(Math.random() > 0.5)
        .then(addBoosters)
      console.log('After promise created');
    }

7. Add a then function to the chain that logs a success message:

    export function main () {
      console.log('Before promise created');
      Promise.resolve(Math.random() > 0.5)
        .then(addBoosters)
        .then(() => console.log('Ready for launch: '))
      console.log('After promise created');
    }

8. Add a catch to the chain and log out the error if thrown:

    export function main () {
      console.log('Before promise created');
      Promise.resolve(Math.random() > 0.5)
        .then(addBoosters)
        .then(() => console.log('Ready for launch: '))
        .catch(console.error)
      console.log('After promise created');
    }

9. Add a then after the catch, and log out that we need to make an announcement:

    export function main () {
      console.log('Before promise created');
      Promise.resolve(Math.random() > 0.5)
        .then(addBoosters)
        .then(() => console.log('Ready for launch: '))
        .catch(console.error)
        .then(() => console.log('Time to inform the press.'));
      console.log('After promise created');
    }

10. Start your Python web server and open the following link in your browser: http://localhost:8000/. If you are lucky and the boosters are added successfully, you'll see the success message; if you are unlucky, you'll see the error message instead. In both cases, the final message appears.

How it works...

Whether or not the asynchronous function completes in an error state, the last then callback is executed. This is possible because the catch method doesn't stop the promise chain: it simply catches any error states from the previous links in the chain, and then propagates a new value forward. The final then is thereby protected from being bypassed by an error state. So, regardless of the fulfillment state of prior links in the chain, we can be sure that the callback of this final then will be executed.

To summarize, we learned how to use the Promise API to organize asynchronous programs. We also looked at how to propagate results through promise chains and handle errors.
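One closing note that goes beyond this excerpt: ECMAScript 2018 also standardized a native Promise.prototype.finally, which, where available, expresses the same intent as the catch-then idiom more directly. Applied to the recipe's chain:

    Promise.resolve(Math.random() > 0.5)
      .then(addBoosters)
      .then(() => console.log('Ready for launch: '))
      .catch(console.error)
      .finally(() => console.log('Time to inform the press.'));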
You read an excerpt from a book written by Ross Harrison, titled ECMAScript Cookbook. It's a complete guide on how to become a better web programmer by writing efficient and modular code using ES6 and ES8.

Further resources on this subject:

- What's new in ECMAScript 2018 (ES9)?
- ECMAScript 7 – What to expect?
- Modular Programming in ECMAScript 6

GLSL – How to Set up the Shaders from the Host Application Side

Packt
23 Dec 2013
5 min read
(For more resources related to this topic, see here.)

Setting up geometry

Let's say that our mesh (a quad formed by two triangles) has the following information: vertex positions and texture coordinates. Also, we will arrange the data interleaved in the array.

    struct MyVertex {
      float x, y, z;
      float s, t;
    };
    MyVertex geometry[] = {{0,1,0,0,1}, {0,0,0,0,0}, {1,1,0,1,1}, {1,0,0,1,0}};

    // Let's create the objects that will encapsulate our geometry data
    GLuint vaoID, vboID;
    glGenVertexArrays(1, &vaoID);
    glBindVertexArray(vaoID);
    glGenBuffers(1, &vboID);
    glBindBuffer(GL_ARRAY_BUFFER, vboID);

    // Attach our data to the OpenGL objects
    glBufferData(GL_ARRAY_BUFFER, 4 * sizeof(MyVertex), &geometry[0].x, GL_DYNAMIC_DRAW);

    // Specify the format of each vertex attribute
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(MyVertex), NULL);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, sizeof(MyVertex), (void*)(sizeof(float) * 3));

At this point, we have created the OpenGL objects, correctly set up each vertex attribute's format, and uploaded the data to the GPU.

Setting up textures

Setting up textures follows the same pattern: first create the OpenGL objects, then fill in the data and the format in which it is provided.

    const int width = 512;
    const int height = 512;
    const int bpp = 32;
    struct RGBColor {
      unsigned char R, G, B, A;
    };
    RGBColor textureData[width * height];
    for (size_t y = 0; y < height; ++y)
      for (size_t x = 0; x < width; ++x)
        textureData[y * width + x] = …; // fill your texture here

    // Create GL object
    GLuint texID;
    glGenTextures(1, &texID);
    glBindTexture(GL_TEXTURE_2D, texID);

    // Fill up the data and set the texel format
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, &textureData[0].R);

    // Set texture format data: interpolation and clamping modes
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

And that's all about textures. Later, we will use the texture ID to place the texture into a slot, and that slot number is the information we will pass to the shader so that it can locate the texture.

Setting up shaders

In order to set up the shaders, we have to carry out some steps: load the source code, compile it and associate it with a shader object, and link all shaders together into a program object.
    char* vs[1];
    vs[0] = "#version 430\n"
      "layout (location = 0) in vec3 PosIn;"
      "layout (location = 1) in vec2 TexCoordIn;"
      "smooth out vec2 TexCoordOut;"
      "uniform mat4 MVP;"
      "void main() {"
      "  TexCoordOut = TexCoordIn;"
      "  gl_Position = MVP * vec4(PosIn, 1.0);"
      "}";

    char* fs[1];
    fs[0] = "#version 430\n"
      "uniform sampler2D Image;"
      "smooth in vec2 TexCoordOut;"
      "out vec4 FBColor;"
      "void main() {"
      "  FBColor = texture(Image, TexCoordOut);"
      "}";

    // Upload source code and compile it
    GLuint pId, vsId, fsId;
    vsId = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vsId, 1, (const char**)&vs, NULL);
    glCompileShader(vsId);

    // Check for compilation errors
    GLint status = 0, bufferLength = 0;
    glGetShaderiv(vsId, GL_COMPILE_STATUS, &status);
    if (!status) {
      // Query the log length before allocating the buffer for it
      glGetShaderiv(vsId, GL_INFO_LOG_LENGTH, &bufferLength);
      char* infolog = new char[bufferLength + 1];
      glGetShaderInfoLog(vsId, bufferLength, NULL, infolog);
      infolog[bufferLength] = 0;
      printf("Shader compile errors / warnings: %s\n", infolog);
      delete [] infolog;
    }

The process for the fragment shader is exactly the same. The only change is that the shader object must be created as fsId = glCreateShader(GL_FRAGMENT_SHADER);

    // Now let's proceed to link the shaders into the program object
    pId = glCreateProgram();
    glAttachShader(pId, vsId);
    glAttachShader(pId, fsId);
    glLinkProgram(pId);
    glGetProgramiv(pId, GL_LINK_STATUS, &status);
    if (!status) {
      glGetProgramiv(pId, GL_INFO_LOG_LENGTH, &bufferLength);
      char* infolog = new char[bufferLength + 1];
      glGetProgramInfoLog(pId, bufferLength, NULL, infolog);
      infolog[bufferLength] = 0;
      printf("Shader linking errors / warnings: %s\n", infolog);
      delete [] infolog;
    }

    // We do not need the vs and fs anymore, so it is safe to mark them for deletion
    glDeleteShader(vsId);
    glDeleteShader(fsId);

The last things to upload to the shader are two uniform variables: one that corresponds to the view-projection matrix and one that represents the texture. Those are uniform variables and are set in the following way:

    // First bind the program object where we want to upload the variables
    glUseProgram(pId);

    // Obtain the "slot number" where the uniform is located
    GLint location = glGetUniformLocation(pId, "MVP");
    float mvp[16] = {1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1};

    // Set the data into the uniform's location
    glUniformMatrix4fv(location, 1, GL_FALSE, mvp);

    // Activate texture slot 0
    glActiveTexture(GL_TEXTURE0);

    // Bind the texture to the active slot
    glBindTexture(GL_TEXTURE_2D, texID);
    location = glGetUniformLocation(pId, "Image");

    // Upload the texture's slot number to the uniform variable
    int imageSlot = 0;
    glUniform1i(location, imageSlot);

And that's all. For the other types of shaders, the process is the same: create the shader object, upload the source code, compile, and link, but using the proper OpenGL types such as GL_GEOMETRY_SHADER or GL_COMPUTE_SHADER. As a last step, to draw all these things, we establish them as active and issue the draw call:

    glBindVertexArray(vaoID);
    glBindTexture(GL_TEXTURE_2D, texID);
    glUseProgram(pId);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

Resources for Article:

Further resources on this subject:

- The Basics of GLSL 4.0 Shaders [Article]
- GLSL 4.0: Using Subroutines to Select Shader Functionality [Article]
- Getting Started with GLSL [Article]

Asynchronous Control Flow Patterns with ES2015 and beyond

Packt
07 Jun 2016
6 min read
In this article, by Luciano Mammino, the author of the book Node.js Design Patterns, Second Edition, we will explore async await, an innovative syntax that will be available in JavaScript as part of the release of ECMAScript 2017. (For more resources related to this topic, see here.)

Async await using Babel

Callbacks, promises, and generators turn out to be the weapons at our disposal to deal with asynchronous code in JavaScript and in Node.js. As we have seen, generators are very interesting because they offer a way to actually suspend the execution of a function and resume it at a later stage. We can adopt this feature to write asynchronous code that allows developers to write functions that "appear" to block at each asynchronous operation, waiting for the results before continuing with the following statement. The problem is that generator functions are designed to deal mostly with iterators, and their usage with asynchronous code feels a bit cumbersome. It might be hard to understand, leading to code that is hard to read and maintain.

But there is hope for a cleaner syntax in the near future. In fact, there is an interesting proposal that will be introduced with the ECMAScript 2017 specification that defines the async function syntax. You can read more about the current status of the async await proposal at https://tc39.github.io/ecmascript-asyncawait/.

The async function specification aims to dramatically improve the language-level model for writing asynchronous code by introducing two new keywords into the language: async and await. To clarify how these keywords are meant to be used and why they are useful, let's see a very quick example:

    const request = require('request');

    function getPageHtml(url) {
      return new Promise(function(resolve, reject) {
        request(url, function(error, response, body) {
          resolve(body);
        });
      });
    }

    async function main() {
      const html = await getPageHtml('http://google.com');
      console.log(html);
    }

    main();
    console.log('Loading...');

In this code, there are two functions: getPageHtml and main. The first one is a very simple function that fetches the HTML code of a remote web page given its URL. It's worth noticing that this function returns a promise. The main function is the most interesting one because it's where the new async and await keywords are used. The first thing to notice is that the function is prefixed with the async keyword. This means that the function executes asynchronous code and allows it to use the await keyword within its body. The await keyword before the call to getPageHtml tells the JavaScript interpreter to "await" the resolution of the promise returned by getPageHtml before continuing to the next instruction. This way, the main function is internally suspended until the asynchronous code completes, without blocking the normal execution of the rest of the program. In fact, we will see the string Loading… in the console and, after a moment, the HTML code of the Google landing page.

Isn't this approach much more readable and easy to understand? Unfortunately, this proposal is not yet final, and even if it is approved, we will need to wait for the next version of the ECMAScript specification to come out and be integrated in Node.js before we can use this new syntax natively. So what do we do today? Just wait? No, of course not! We can already leverage async await in our code thanks to transpilers such as Babel.
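One detail worth adding here (a sketch of ours, not from the excerpt): because await surfaces promise rejections as thrown exceptions, an ordinary try/catch block is enough to handle asynchronous failures inside an async function:

    async function main() {
      try {
        const html = await getPageHtml('http://google.com');
        console.log(html);
      } catch (error) {
        // a rejected promise from getPageHtml lands here as an exception
        console.error('Failed to fetch the page:', error);
      }
    }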
Installing and running Babel

Babel is a JavaScript compiler (or transpiler) that is able to convert JavaScript code into other JavaScript code using syntax transformers. Syntax transformers allow the use of new syntax such as ES2015, ES2016, JSX, and others to produce backward-compatible equivalent code that can be executed in modern JavaScript runtimes, such as browsers or Node.js.

You can install Babel in your project using NPM with the following command:

    npm install --save-dev babel-cli

We also need to install the extensions to support async await parsing and transformation:

    npm install --save-dev babel-plugin-syntax-async-functions babel-plugin-transform-async-to-generator

Now let's assume we want to run our previous example (called index.js). We need to launch the following command:

    node_modules/.bin/babel-node --plugins "syntax-async-functions,transform-async-to-generator" index.js

This way, we are transforming the source code in index.js on the fly, applying the transformers that support async await. This new backward-compatible code is stored in memory and then executed on the fly on the Node.js runtime. Babel can also be configured to act as a build processor that stores the generated code into files so that you can easily deploy and run the generated code. You can read more about how to install and configure Babel on the official website at https://babeljs.io.

Comparison

At this point, we should have a better understanding of the options we have to tame the asynchronous nature of JavaScript. Each of the solutions presented has its own pros and cons. Let's summarize them:

Plain JavaScript
- Pros: does not require any additional libraries or technology; offers the best performance; provides the best level of compatibility with third-party libraries; allows the creation of ad hoc and more advanced algorithms.
- Cons: might require extra code and relatively complex algorithms.

Async (library)
- Pros: simplifies the most common control flow patterns; good performance.
- Cons: is still a callback-based solution; introduces an external dependency; might still not be enough for advanced flows.

Promises
- Pros: greatly simplify the most common control flow patterns; robust error handling; part of the ES2015 specification; guarantee deferred invocation of onFulfilled and onRejected.
- Cons: require promisifying callback-based APIs; introduce a small performance hit.

Generators
- Pros: make non-blocking APIs look like blocking ones; simplify error handling; part of the ES2015 specification.
- Cons: require a complementary control flow library; still require callbacks or promises to implement non-sequential flows; require thunkifying or promisifying non-generator-based APIs.

Async await
- Pros: makes non-blocking APIs look like blocking ones; clean and intuitive syntax.
- Cons: not yet available in JavaScript and Node.js natively; requires Babel or other transpilers and some configuration to be used today.

It is worth mentioning that we chose to present only the most popular solutions to handle asynchronous control flow, or the ones receiving a lot of momentum, but it's good to know that there are a few more options you might want to look at, for example, Fibers (https://npmjs.org/package/fibers) and Streamline (https://npmjs.org/package/streamline).
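To make the connection between the last two rows of the comparison concrete, here is a rough sketch (ours, not from the book) of the idea behind transform-async-to-generator: an async function can be rewritten as a generator that yields promises, driven by a small runner (error propagation is omitted for brevity):

    function run (generatorFn) {
      const gen = generatorFn();
      function step (value) {
        const result = gen.next(value);
        if (result.done) return Promise.resolve(result.value);
        return Promise.resolve(result.value).then(step);
      }
      return step();
    }

    // Roughly what the earlier async main() becomes conceptually:
    run(function* () {
      const html = yield getPageHtml('http://google.com');
      console.log(html);
    });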
Summary

In this article, we analyzed how Babel lets us use async await today, and how to install and run it.

Introduction to Creational Patterns using Go Programming

Packt
02 Jan 2017
12 min read
This article by Mario Castro Contreras, author of the book Go Design Patterns, introduces you to the Creational design patterns that are explained in the book. As the title implies, this article groups common practices for creating objects. Creational patterns try to give ready-to-use objects to users instead of asking for their input, which, in some cases, could be complex and would couple your code with the concrete implementations of functionality that should be defined in an interface. (For more resources related to this topic, see here.)

Singleton design pattern – Having a unique instance of an object in the entire program

Have you ever conducted interviews for software engineers? It's interesting that when you ask them about design patterns, more than 80% will start with the Singleton design pattern. Why is that? Maybe it's because it is one of the most used design patterns out there, or one of the easiest to grasp. We will start our journey on creational design patterns because of the latter reason.

Description

The Singleton pattern is easy to remember. As the name implies, it provides you with a single instance of an object and guarantees that there are no duplicates. At the first call to use the instance, it is created and then reused by all the parts of the application that need that particular behavior.

Objective of the Singleton pattern

You'll use the Singleton pattern in many different situations. For example:

- When you want to use the same connection to a database to make every query
- When you open a Secure Shell (SSH) connection to a server to do a few tasks, and don't want to reopen the connection for each task
- If you need to limit the access to some variable or space, you use a Singleton as the door to this variable
- If you need to limit the number of calls to some places, you create a Singleton instance to make the calls in the accepted window

The possibilities are endless, and we have just mentioned some of them.

Implementation

Finally, we have to implement the Singleton pattern. You'll usually write a static method and instance to retrieve the Singleton instance. In Go, we don't have the keyword static, but we can achieve the same result by using the scope of the package. First, we create a structure that contains the object which we want to guarantee to be a Singleton during the execution of the program:

    package creational

    type singleton struct {
      count int
    }

    var instance *singleton

    func GetInstance() *singleton {
      if instance == nil {
        instance = new(singleton)
      }
      return instance
    }

    func (s *singleton) AddOne() int {
      s.count++
      return s.count
    }

We must pay close attention to this piece of code. In languages like Java or C++, the variable instance would be initialized to NULL at the beginning of the program. In Go, you can initialize a pointer to a structure as nil, but you cannot initialize a structure to nil (the equivalent of NULL). So the var instance *singleton line defines a pointer to a structure of type singleton, called instance, with the value nil. We created a GetInstance method that checks whether the instance has been initialized already (instance == nil), and creates an instance in the space already allocated in the line instance = new(singleton). Remember, when we use the keyword new, we are creating a pointer to the type between the parentheses. The AddOne method takes the count of the variable instance, raises it by one, and returns the current value of the counter.
Let's now run our unit tests again:

    $ go test -v -run=GetInstance
    === RUN TestGetInstance
    --- PASS: TestGetInstance (0.00s)
    PASS
    ok

Factory method – Delegating the creation of different types of payments

The Factory method pattern (or simply, Factory) is probably the second-best known and most used design pattern in the industry. Its purpose is to abstract the user from the knowledge of the structure it needs to achieve a specific purpose. By delegating this decision to a Factory, the Factory can provide the object that best fits the user's needs or the most updated version. It can also ease the process of downgrading or upgrading the implementation of an object if needed.

Description

When using the Factory method design pattern, we gain an extra layer of encapsulation so that our program can grow in a controlled environment. With the Factory method, we delegate the creation of families of objects to a different package or object, to abstract us from the knowledge of the pool of possible objects we could use. Imagine that you have two ways to access some specific resource: by HTTP or FTP. For us, the specific implementation of this access should be invisible. Maybe we just know that the resource is available over HTTP or FTP, and we just want a connection that uses one of these protocols. Instead of implementing the connection ourselves, we can use the Factory method to ask for the specific connection. With this approach, we can grow easily in the future if we need to add, say, an HTTPS object.

Objective of the Factory method

After the previous description, the following objectives of the Factory method design pattern must be clear to you:

- Delegating the creation of new instances of structures to a different part of the program
- Working at the interface level instead of with concrete implementations
- Grouping families of objects to obtain a family object creator

Implementation

We will start with the GetPaymentMethod method. It must receive an integer that matches one of the defined constants of the same file to know which implementation it should return.

    package creational

    import (
      "errors"
      "fmt"
    )

    type PaymentMethod interface {
      Pay(amount float32) string
    }

    const (
      Cash      = 1
      DebitCard = 2
    )

    func GetPaymentMethod(m int) (PaymentMethod, error) {
      switch m {
      case Cash:
        return new(CashPM), nil
      case DebitCard:
        return new(DebitCardPM), nil
      default:
        return nil, errors.New(fmt.Sprintf("Payment method %d not recognized\n", m))
      }
    }

We use a plain switch to check the contents of the argument m (method). If it matches any of the known methods (cash or debit card), it returns a new instance of the corresponding type. Otherwise, it returns nil and an error indicating that the payment method has not been recognized. Now we can run our tests again to check the second part of the unit tests:

    $ go test -v -run=GetPaymentMethod .
    === RUN TestGetPaymentMethodCash
    --- FAIL: TestGetPaymentMethodCash (0.00s)
        factory_test.go:16: The cash payment method message wasn't correct
        factory_test.go:18: LOG:
    === RUN TestGetPaymentMethodDebitCard
    --- FAIL: TestGetPaymentMethodDebitCard (0.00s)
        factory_test.go:28: The debit card payment method message wasn't correct
        factory_test.go:30: LOG:
    === RUN TestGetPaymentMethodNonExistent
    --- PASS: TestGetPaymentMethodNonExistent (0.00s)
        factory_test.go:38: LOG: Payment method 20 not recognized
    FAIL
    exit status 1
    FAIL

Now we no longer get errors saying it couldn't find the type of payment method. Instead, we receive a "message wasn't correct" error when it tries to use any of the methods that it covers.
We also got rid of the Not implemented message that was being returned when we asked for an unknown payment method. Let's implement the structures now:

    type CashPM struct{}
    type DebitCardPM struct{}

    func (c *CashPM) Pay(amount float32) string {
      return fmt.Sprintf("%0.2f paid using cash\n", amount)
    }

    func (c *DebitCardPM) Pay(amount float32) string {
      return fmt.Sprintf("%#0.2f paid using debit card\n", amount)
    }

We just take the amount and print it in a nicely formatted message. With this implementation, the tests will all pass now:

    $ go test -v -run=GetPaymentMethod .
    === RUN TestGetPaymentMethodCash
    --- PASS: TestGetPaymentMethodCash (0.00s)
        factory_test.go:18: LOG: 10.30 paid using cash
    === RUN TestGetPaymentMethodDebitCard
    --- PASS: TestGetPaymentMethodDebitCard (0.00s)
        factory_test.go:30: LOG: 22.30 paid using debit card
    === RUN TestGetPaymentMethodNonExistent
    --- PASS: TestGetPaymentMethodNonExistent (0.00s)
        factory_test.go:38: LOG: Payment method 20 not recognized
    PASS
    ok

Do you see the LOG: messages? They aren't errors; we just print some information that we receive when using the package under test. These messages can be omitted unless you pass the -v flag to the test command:

    $ go test -run=GetPaymentMethod .
    ok

Abstract Factory – A factory of factories

After learning about the Factory method pattern, where we grouped a family of related objects (in our case, payment methods), one can be quick to think: what if I group families of objects in a more structured hierarchy of families?

Description

The Abstract Factory design pattern is a new layer of grouping used to build a bigger (and more complex) composite object, which is used through its interfaces. The idea behind grouping objects in families, and grouping families, is to have big factories that are interchangeable and can grow more easily. In the early stages of development, it is also easier to work with factories and abstract factories than to wait until all concrete implementations are done before starting your code. Also, you won't write an Abstract Factory from the beginning unless you know that your object's inventory for a particular field is going to be very large and could easily be grouped into families.

The objective

Grouping related families of objects is very convenient when your object count grows so much that creating a unique point to get them all seems the only way to gain flexibility in runtime object creation. The following objectives of the Abstract Factory method must be clear to you:

- Providing a new layer of encapsulation for Factory methods that return a common interface for all factories
- Grouping common factories into a super factory (also called a factory of factories)

Implementation

The implementation of every factory is already done for the sake of brevity. They are very similar to the Factory method, with the only difference being that in the Factory method we don't use an instance of the factory, because we use the package functions directly. The implementation of the vehicle factory is as follows:

    func GetVehicleFactory(f int) (VehicleFactory, error) {
      switch f {
      case CarFactoryType:
        return new(CarFactory), nil
      case MotorbikeFactoryType:
        return new(MotorbikeFactory), nil
      default:
        return nil, errors.New(fmt.Sprintf("Factory with id %d not recognized\n", f))
      }
    }

Like in any factory, we switch between the factory possibilities to return the one that was demanded. As we have already implemented all concrete vehicles, the tests can be run too:

    go test -v -run=Factory -cover .
    === RUN TestMotorbikeFactory
    --- PASS: TestMotorbikeFactory (0.00s)
        vehicle_factory_test.go:16: Motorbike vehicle has 2 wheels
        vehicle_factory_test.go:22: Sport motorbike has type 1
    === RUN TestCarFactory
    --- PASS: TestCarFactory (0.00s)
        vehicle_factory_test.go:36: Car vehicle has 4 seats
        vehicle_factory_test.go:42: Luxury car has 4 doors.
    PASS
    coverage: 45.8% of statements
    ok

All of them passed. Take a close look and note that we have used the -cover flag when running the tests to return a coverage percentage of the package: 45.8%. What this tells us is that 45.8% of the lines are covered by the tests we have written, but 54.2% is still not under test. This is because we haven't covered the cruise motorbike and the family car with tests. If you write those tests, the result should rise to around 70.8%.

Prototype design pattern

The last pattern we will see in this article is the Prototype pattern. Like all creational patterns, this too comes in handy when creating objects, and it is very common to see the Prototype pattern surrounded by more patterns.

Description

The aim of the Prototype pattern is to have an object or a set of objects that are already created at compilation time, but which you can clone as many times as you want at runtime. This is useful, for example, as a default template for a user who has just registered with your webpage, or as a default pricing plan in some service. The key difference between this and the Builder pattern is that objects are cloned for the user instead of being built at runtime. You can also build a cache-like solution, storing information using a prototype.

Objective

- Maintain a set of objects that will be cloned to create new instances
- Free the CPU from complex object initialization, taking more memory resources in return

We will start with the GetClone method. This method should return an item of the specified type:

    type ShirtsCache struct{}

    func (s *ShirtsCache) GetClone(m int) (ItemInfoGetter, error) {
      switch m {
      case White:
        newItem := *whitePrototype
        return &newItem, nil
      case Black:
        newItem := *blackPrototype
        return &newItem, nil
      case Blue:
        newItem := *bluePrototype
        return &newItem, nil
      default:
        return nil, errors.New("Shirt model not recognized")
      }
    }

The Shirt structure also needs a GetInfo implementation to print the contents of the instances:

    type ShirtColor byte

    type Shirt struct {
      Price float32
      SKU   string
      Color ShirtColor
    }

    func (s *Shirt) GetInfo() string {
      return fmt.Sprintf("Shirt with SKU '%s' and Color id %d that costs %f\n", s.SKU, s.Color, s.Price)
    }

Finally, let's run the tests to see that everything is now working:

    go test -run=TestClone -v .
    === RUN TestClone
    --- PASS: TestClone (0.00s)
        prototype_test.go:41: LOG: Shirt with SKU 'abbcc' and Color id 1 that costs 15.000000
        prototype_test.go:42: LOG: Shirt with SKU 'empty' and Color id 1 that costs 15.000000
        prototype_test.go:44: LOG: The memory positions of the shirts are different 0xc42002c038 != 0xc42002c040
    PASS
    ok

In the log (remember to set the -v flag when running the tests), you can check that shirt1 and shirt2 have different SKUs. Also, we can see the memory positions of both objects. Take into account that the positions shown on your computer will probably be different.

Summary

We have seen the creational design patterns commonly used in the software industry. Their purpose is to abstract the user from the creation of objects for handling complexity or maintainability purposes.
Design patterns have been the foundation of thousands of applications and libraries since the nineties, and most of the software we use today has many of these creational patterns under the hood.

Resources for Article:

Further resources on this subject:

- Getting Started [article]
- Thinking Functionally [article]
- Auditing and E-discovery [article]

New functionality in OpenCV 3.0

Packt
25 Aug 2014
5 min read
In this article by Oscar Deniz Suarez, coauthor of the book OpenCV Essentials, we will cover the forthcoming Version 3.0, which represents a major evolution of the OpenCV library for Computer Vision. OpenCV already includes several new techniques that are not available in the latest official release (2.4.9). The new functionality can already be used by downloading and compiling the latest development version from the official repository. This article provides an overview of some of the new techniques implemented. Other numerous lower-level changes in the forthcoming Version 3.0 (updated module structure, C++ API changes, transparent API for GPU acceleration, and so on) are not discussed. (For more resources related to this topic, see here.)

Line Segment Detector

OpenCV users have had the Hough transform-based straight line detector available in the previous versions. An improved method called Line Segment Detector (LSD) is now available. LSD is based on the algorithm described at http://dx.doi.org/10.5201/ipol.2012.gjmr-lsd. This method has been shown to be more robust and faster than the best previous Hough-based detector (the Progressive Probabilistic Hough Transform). The detector is now part of the imgproc module. OpenCV provides a short sample code ([opencv_source_code]/samples/cpp/lsd_lines.cpp), which shows how to use the LineSegmentDetector class. The main components of the class are:

- <constructor>: allows entering the parameters of the algorithm; particularly, the level of refinement we want in the result
- detect: detects line segments in the image
- drawSegments: draws the segments in a given image
- compareSegments: draws two sets of segments in a given image; the two sets are drawn with blue and red lines

Connected components

The previous versions of OpenCV have included functions for working with image contours. Contours are the external limits of connected components (that is, regions of connected pixels in a binary image). The new functions, connectedComponents and connectedComponentsWithStats, retrieve connected components as such. The connected components are retrieved as a labeled image with the same dimensions as the input image. This allows drawing the components on the original image easily. The connectedComponentsWithStats function retrieves useful statistics about each component:

- CC_STAT_LEFT: the leftmost (x) coordinate, which is the inclusive start of the bounding box in the horizontal direction
- CC_STAT_TOP: the topmost (y) coordinate, which is the inclusive start of the bounding box in the vertical direction
- CC_STAT_WIDTH: the horizontal size of the bounding box
- CC_STAT_HEIGHT: the vertical size of the bounding box
- CC_STAT_AREA: the total area (in pixels) of the connected component

Scene text detection

Text recognition is a classic problem in Computer Vision, and Optical Character Recognition (OCR) is now routinely used in our society. In OCR, the input image is expected to depict typewriter black text over a white background. In recent years, researchers have aimed at the more challenging problem of recognizing text "in the wild": on street signs, indoor signs, with diverse backgrounds, fonts, colors, and so on. (The original article includes a figure contrasting the two scenarios.) In the "wild" scenario, classic OCR cannot be applied directly to the input images. Consequently, text recognition is actually accomplished in two steps.
The text is first localized in the image, and then character or word recognition is performed on the cropped region. OpenCV now provides a scene text detector based on the algorithm described in Neumann L., Matas J.: Real-Time Scene Text Localization and Recognition, CVPR 2012 (Providence, Rhode Island, USA). The OpenCV implementation makes use of additional improvements found at http://158.109.8.37/files/GoK2013.pdf. OpenCV includes an example ([opencv_source_code]/samples/cpp/textdetection.cpp) that detects and draws text regions in an input image.

The KAZE and AKAZE features

Several 2D features have been proposed in the computer vision literature. Generally, the two most important aspects of feature extraction algorithms are computational efficiency and robustness. One of the latest contenders is the KAZE (a Japanese word meaning "wind") and Accelerated-KAZE (AKAZE) detector. There are reports showing that KAZE features are both robust and efficient compared with other widely known features (BRISK, FREAK, and so on). The underlying algorithm is described in KAZE Features, Pablo F. Alcantarilla, Adrien Bartoli, and Andrew J. Davison, in European Conference on Computer Vision (ECCV), Florence, Italy, October 2012. As with other keypoint detectors in OpenCV, the KAZE implementation allows retrieving both keypoints and descriptors (that is, a feature vector computed around the keypoint neighborhood). The detector follows the same framework used in OpenCV for other detectors, so drawing methods are also available.

Computational photography

One of the modules with the most improvements in the forthcoming Version 3.0 is the computational photography module (photo). The new techniques include:

- HDR imaging: functions for handling High-Dynamic Range images (tonemapping, exposure alignment, camera calibration with multiple exposures, and exposure fusion)
- Seamless cloning: functions for realistically inserting one image into another image with an arbitrarily shaped region of interest
- Non-photorealistic rendering: non-photorealistic filters (such as a pencil-like drawing effect) and edge-preserving smoothing filters (similar to the bilateral filter)

New modules

Finally, here is a list of the new modules in development for Version 3.0:

- videostab: global motion estimation, Fast Marching method
- softcascade: implements a stageless variant of the cascade detector, which is considered more accurate
- shape: shape matching and retrieval; shape context descriptor and matching algorithm, Hausdorff distance, and Thin-Plate Splines
- cuda<X>: several modules with CUDA-accelerated implementations of other functions in the library

Summary

In this article, we learned about the different functionalities in OpenCV 3.0 and its different components.

Resources for Article:

Further resources on this subject:

- Wrapping OpenCV [article]
- A quick start – OpenCV fundamentals [article]
- Linking OpenCV to an iOS project [article]

Basics of Programming in Julia

Packt
03 Mar 2015
17 min read
In this article by Ivo Balbaert, author of the book Getting Started with Julia Programming, we will explore how Julia interacts with the outside world: reading from standard input and writing to standard output, files, networks, and databases. Julia provides asynchronous networking I/O using the libuv library. We will see how to handle data in Julia, and we will also discover the parallel processing model of Julia. In this article, the following topics are covered:

- Working with files (including CSV files)
- Using DataFrames

(For more resources related to this topic, see here.)

Working with files

To work with files, we need the IOStream type. IOStream is a type with the supertype IO and has the following characteristics:

- The fields are given by names(IOStream): 4-element Array{Symbol,1}: :handle :ios :name :mark
- The types are given by IOStream.types: (Ptr{None}, Array{Uint8,1}, String, Int64)

The file handle is a pointer of the type Ptr, which is a reference to the file object. Opening and reading a line-oriented file with the name example.dat is very easy:

    # code in Chapter 8\io.jl
    fname = "example.dat"
    f1 = open(fname)

fname is a string that contains the path to the file, escaping special characters with \ when necessary; for example, in Windows, when the file is in the test folder on the D: drive, this would become d:\test\example.dat. The f1 variable is now an IOStream(<file example.dat>) object.

To read all lines one after the other into an array, use data = readlines(f1), which returns:

    3-element Array{Union(ASCIIString,UTF8String),1}:
     "this is line 1.\r\n"
     "this is line 2.\r\n"
     "this is line 3."

For processing line by line, only a simple loop is needed:

    for line in data
      println(line) # or process line
    end
    close(f1)

Always close the IOStream object to clean up and save resources. If you want to read the file into one string, use readall. Use this only for relatively small files, because of the memory consumption; this can also be a potential problem when using readlines.

There is a convenient shorthand with the do syntax for opening a file, applying a function process, and closing it automatically. This goes as follows (file is the IOStream object in this code):

    open(fname) do file
      process(file)
    end

The do command creates an anonymous function and passes it to open. Thus, the previous code example would have been equivalent to open(process, fname). Use the same syntax for processing a file fname line by line without the memory overhead of the previous methods, for example:

    open(fname) do file
      for line in eachline(file)
        print(line) # or process line
      end
    end

Writing a file requires first opening it with a "w" flag, then writing strings to it with write, print, or println, and then closing the file handle, which flushes the IOStream object to the disk:

    fname = "example2.dat"
    f2 = open(fname, "w")
    write(f2, "I write myself to a file\n") # returns 24 (bytes written)
    println(f2, "even with println!")
    close(f2)

Opening a file with the "w" option will clear the file if it exists. To append to an existing file, use "a". To process all the files in the current folder (or a given folder passed as an argument to readdir()), use this for loop:

    for file in readdir()
      # process file
    end

Reading and writing CSV files

A CSV file is a comma-separated file. The data fields in each line are separated by commas "," or another delimiter, such as semicolons ";". These files are the de facto standard for exchanging small and medium amounts of tabular data.
Reading and writing CSV files

A CSV file is a comma-separated file. The data fields in each line are separated by commas "," or another delimiter such as semicolons ";". These files are the de-facto standard for exchanging small and medium amounts of tabular data. Such files are structured so that one line contains data about one data object, so we need a way to read and process the file line by line. As an example, we will use the data file Chapter 8\winequality.csv that contains 1,599 sample measurements across 12 data columns, such as pH and alcohol per sample, separated by a semicolon. In the following screenshot, you can see the top 20 rows:

In general, the readdlm function is used to read in the data from the CSV files:

# code in Chapter 8\csv_files.jl
fname = "winequality.csv"
data = readdlm(fname, ';')

The second argument is the delimiter character (here, it is ;). The resulting data is a 1600x12 Array{Any,2} array of the type Any because no common type could be found:

"fixed acidity"  "volatile acidity"  ...  "alcohol"  "quality"
7.4              0.7                 ...  9.4        5.0
7.8              0.88                ...  9.8        5.0
7.8              0.76                ...  9.8        5.0
...

If the data file is comma separated, reading it is even simpler with the following command:

data2 = readcsv(fname)

The problem with what we have done until now is that the headers (the column titles) were read as part of the data. Fortunately, we can pass the argument header=true to let Julia put the first line in a separate array. It then naturally gets the correct datatype, Float64, for the data array. We can also specify the type explicitly, such as this:

data3 = readdlm(fname, ';', Float64, '\n', header=true)

The third argument here is the type of data, which is a numeric type, String, or Any. The next argument is the line separator character, and the fifth indicates whether or not there is a header line with the field (column) names. If so, then data3 is a tuple with the data as the first element and the header as the second; in our case, (1599x12 Array{Float64,2}, 1x12 Array{String,2}) (there are other optional arguments to readdlm, see the help option). In this case, the actual data is given by data3[1] and the header by data3[2]. Let's continue working with the variable data. The data forms a matrix, and we can get the rows and columns of data using the normal array-matrix syntax. For example, the third row is given by row3 = data[3, :] with data 7.8  0.88  0.0  2.6  0.098  25.0  67.0  0.9968  3.2  0.68  9.8  5.0, representing the measurements for all the characteristics of a certain wine. The measurements of a certain characteristic for all wines are given by a data column; for example, col3 = data[:, 3] represents the measurements of citric acid and returns a column vector 1600-element Array{Any,1}: "citric acid" 0.0 0.0 0.04 0.56 0.0 0.0 ... 0.08 0.08 0.1 0.13 0.12 0.47. If we need columns 2-4 (volatile acidity to residual sugar) for all wines, extract the data with x = data[:, 2:4]. If we need these measurements only for the wines on rows 70-75, get these with y = data[70:75, 2:4], returning a 6 x 3 Array{Any,2} output as follows:

0.32   0.57  2.0
0.705  0.05  1.9
...
0.675  0.26  2.1

To get a matrix with the data from columns 3, 6, and 11, execute the following command:

z = [data[:,3] data[:,6] data[:,11]]
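As a quick sanity check on the header=true variant, the following sketch pulls one column out of the typed array and computes a simple statistic. The column number follows the layout of winequality.csv described above (column 11 is alcohol), but treat the exact index as an assumption about your copy of the file:

# a small sketch using the (data, header) tuple data3 from above
raw, hdr = data3            # 1599x12 Float64 matrix and 1x12 header row
alcohol = raw[:, 11]        # the alcohol column as a Float64 vector
println(hdr[11], ": mean = ", mean(alcohol))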
It would be useful to create a type Wine in the code. For example, if the data is to be passed around functions, it will improve the code quality to encapsulate all the data in a single data type, like this:

type Wine
    fixed_acidity::Array{Float64}
    volatile_acidity::Array{Float64}
    citric_acid::Array{Float64}
    # other fields
    quality::Array{Float64}
end

Then, we can create objects of this type to work with them, like in any other object-oriented language, for example, wine1 = Wine(data[1, :]...), where the elements of the row are splatted with the ... operator into the Wine constructor. To write to a CSV file, the simplest way is to use the writecsv function for a comma separator, or the writedlm function if you want to specify another separator. For example, to write an array data to a file partial.dat, you need to execute the following command:

writedlm("partial.dat", data, ';')

If more control is necessary, you can easily combine the more basic functions from the previous section. For example, the following code snippet writes 10 tuples of three numbers each to a file:

# code in Chapter 8\tuple_csv.jl
fname = "savetuple.csv"
csvfile = open(fname, "w")
# writing headers:
write(csvfile, "ColName A, ColName B, ColName C\n")
for i = 1:10
    tup(i) = tuple(rand(Float64, 3)...)
    write(csvfile, join(tup(i), ","), "\n")
end
close(csvfile)

Using DataFrames

If you measure n variables (each of a different type) on a single object of observation, then you get a table with n columns for each object row. If there are m observations, then we have m rows of data. For example, given student grades as data, you might want to answer a question such as "compute the average grade for each socioeconomic group", where grade and socioeconomic group are both columns in the table, and there is one row per student. A DataFrame is the most natural representation to work with such an (m x n) table of data. DataFrames are similar to pandas DataFrames in Python or data.frame in R. A DataFrame is a more specialized tool than a normal array for working with tabular and statistical data, and it is defined in the DataFrames package, a popular Julia library for statistical work. Install it in your environment by typing Pkg.add("DataFrames") in the REPL. Then, import it into your current workspace with using DataFrames. Do the same for the packages DataArrays and RDatasets (which contains a collection of example datasets mostly used in the R literature). A common case in statistical data is that data values can be missing (the information is not known). The DataArrays package provides us with the unique value NA, which represents a missing value and has the type NAtype. The result of computations that contain NA values mostly cannot be determined; for example, 42 + NA returns NA. (Julia v0.4 also has a new Nullable{T} type, which allows you to specify the type of a missing value). A DataArray{T} array is a data structure that can be n-dimensional, behaves like a standard Julia array, and can contain values of the type T, but it can also contain the missing (Not Available) values NA and can work efficiently with them. To construct them, use the @data macro:

# code in Chapter 8\dataarrays.jl
using DataArrays
using DataFrames
dv = @data([7, 3, NA, 5, 42])

This returns 5-element DataArray{Int64,1}: 7 3 NA 5 42. The sum of these numbers is given by sum(dv) and returns NA. One can also assign NA values to the array with dv[5] = NA; then, dv becomes [7, 3, NA, 5, NA]. Converting this data structure to a normal array fails: convert(Array, dv) returns ERROR: NAException.
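Before removing or replacing missing values, it often helps to locate them first. Here is a small sketch using the isna function from DataArrays, applied to the dv array from above:

# locate the missing values in dv
mask = isna(dv)                      # Bool array: true where the value is NA
println("number of NAs: ", sum(mask))
println("positions: ", find(mask))   # indices of the NA entries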
How do we get rid of these NA values, supposing we can do so safely? We can use the dropna function; for example, sum(dropna(dv)) returns 15. If you know that you can replace them with a value v, use the array function:

repl = -1
sum(array(dv, repl)) # returns 13

A DataFrame is a kind of in-memory database, versatile in the ways you can work with the data. It consists of columns with names such as Col1, Col2, Col3, and so on. Each of these columns is a DataArray that has its own type, and the data they contain can be referred to by the column names as well, so we have substantially more forms of indexing. Unlike two-dimensional arrays, columns in a DataFrame can be of different types. One column might, for instance, contain the names of students and should therefore be a string. Another column could contain their age and should be an integer. We construct a DataFrame from the program data as follows:

# code in Chapter 8\dataframes.jl
using DataFrames
# constructing a DataFrame:
df = DataFrame()
df[:Col1] = 1:4
df[:Col2] = [e, pi, sqrt(2), 42]
df[:Col3] = [true, false, true, false]
show(df)

Notice that the column headers are used as symbols. This returns a 4 x 3 DataFrame object. We could also have used the full constructor as follows:

df = DataFrame(Col1 = 1:4, Col2 = [e, pi, sqrt(2), 42], Col3 = [true, false, true, false])

You can refer to the columns either by an index (the column number) or by a name; both of the following expressions return the same output:

show(df[2])
show(df[:Col2])

This gives the following output:

[2.718281828459045, 3.141592653589793, 1.4142135623730951, 42.0]

To show the rows or subsets of rows and columns, use the familiar splice (:) syntax, for example:

To get the first row, execute df[1, :]. This returns:
1x3 DataFrame
| Row | Col1 | Col2    | Col3 |
|-----|------|---------|------|
| 1   | 1    | 2.71828 | true |
To get the second and third rows, execute df[2:3, :].
To get only the second column from the previous result, execute df[2:3, :Col2]. This returns [3.141592653589793, 1.4142135623730951].
To get the second and third columns from the second and third rows, execute df[2:3, [:Col2, :Col3]], which returns the following output:
2x2 DataFrame
| Row | Col2    | Col3  |
|-----|---------|-------|
| 1   | 3.14159 | false |
| 2   | 1.41421 | true  |

The following functions are very useful when working with DataFrames:

The head(df) and tail(df) functions show you the first six and the last six lines of data respectively.
The names function gives the names of the columns: names(df) returns 3-element Array{Symbol,1}: :Col1 :Col2 :Col3.
The eltypes function gives the data types of the columns: eltypes(df) gives the output 3-element Array{Type{T<:Top},1}: Int64 Float64 Bool.
The describe function tries to give some useful summary information about the data in the columns, depending on the type; for example, describe(df) gives for column 2 (which is numeric) the min, max, median, mean, number, and percentage of NAs:
Col2
Min      1.4142135623730951
1st Qu.  2.392264761937558
Median   2.929937241024419
Mean     12.318522011105483
3rd Qu.  12.856194490192344
Max      42.0
NAs      0
NA%      0.0%
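These indexing forms combine naturally. As a sketch, using the df object built above, we can select rows with a Boolean condition and keep only some of the columns, mirroring the row and column selections shown earlier:

# rows of df where Col3 is true, keeping two columns
subset = df[df[:Col3] .== true, [:Col1, :Col2]]
show(subset)   # expected: the rows with Col1 equal to 1 and 3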
To load in data from a local CSV file, use the readtable method. The returned object is of type DataFrame:

# code in Chapter 8\dataframes.jl
using DataFrames
fname = "winequality.csv"
data = readtable(fname, separator = ';')
typeof(data) # DataFrame
size(data) # (1599,12)

The readtable method also supports reading in gzipped CSV files. Writing a DataFrame to a file can be done with the writetable function, which takes the filename and the DataFrame as arguments, for example, writetable("dataframe1.csv", df). By default, writetable will use the delimiter specified by the filename extension and write the column names as headers. Both readtable and writetable support numerous options for special cases. Refer to the docs for more information (refer to http://dataframesjl.readthedocs.org/en/latest/). To demonstrate some of the power of DataFrames, here are some queries you can do:

Make a vector with only the quality information: data[:quality]
Give the wines with an alcohol percentage equal to 9.5, for example, data[data[:alcohol] .== 9.5, :]

Here, we use the .== operator, which does element-wise comparison. data[:alcohol] .== 9.5 returns an array of Boolean values (true for datapoints where :alcohol is 9.5, and false otherwise). data[boolean_array, :] selects those rows where boolean_array is true. Count the number of wines grouped by quality with by(data, :quality, data -> size(data, 1)), which returns the following:

6x2 DataFrame
| Row | quality | x1  |
|-----|---------|-----|
| 1   | 3       | 10  |
| 2   | 4       | 53  |
| 3   | 5       | 681 |
| 4   | 6       | 638 |
| 5   | 7       | 199 |
| 6   | 8       | 18  |

The DataFrames package contains the by function, which takes in three arguments:

A DataFrame, here it takes data
A column to split the DataFrame on, here it takes quality
A function or an expression to apply to each subset of the DataFrame, here data -> size(data, 1), which gives us the number of wines for each quality value

Another easy way to get the distribution among quality is to execute the hist function: hist(data[:quality]) gives the counts over the range of quality, (2.0:1.0:8.0, [10,53,681,638,199,18]). More precisely, this is a tuple with the first element corresponding to the edges of the histogram bins, and the second denoting the number of items in each bin. So there are, for example, 10 wines with quality between 2 and 3, and so on. To extract the counts as a variable count of type Vector, we can execute _, count = hist(data[:quality]); the _ means that we neglect the first element of the tuple. To obtain the quality classes as a DataArray class, we will execute the following:

class = sort(unique(data[:quality]))

We can now construct a df_quality DataFrame with the class and count columns as df_quality = DataFrame(qual=class, no=count). This gives the following output:

6x2 DataFrame
| Row | qual | no  |
|-----|------|-----|
| 1   | 3    | 10  |
| 2   | 4    | 53  |
| 3   | 5    | 681 |
| 4   | 6    | 638 |
| 5   | 7    | 199 |
| 6   | 8    | 18  |

To deepen your understanding and learn about the other features of Julia DataFrames (such as joining, reshaping, and sorting), refer to the documentation available at http://dataframesjl.readthedocs.org/en/latest/.
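As one last sketch before turning to other file formats, the same by mechanism accepts any summary function. For example, assuming the winequality.csv columns used above, the mean alcohol content per quality class could be computed as:

# mean alcohol content per quality class, following the by(...) pattern above
by(data, :quality, d -> mean(d[:alcohol]))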
Other file formats

Julia can work with other human-readable file formats through specialized packages:

For JSON, use the JSON package. The parse method converts JSON strings into Dictionaries, and the json method turns any Julia object into a JSON string.
For XML, use the LightXML package
For YAML, use the YAML package
For HDF5 (a common format for scientific data), use the HDF5 package
For working with Windows INI files, use the IniFile package

Summary

In this article, we discussed how to work with files in Julia, how to read and write CSV data, and how to handle tabular data with DataFrames. Resources for Article: Further resources on this subject: Getting Started with Electronic Projects [article] Getting Started with Selenium Webdriver and Python [article] Handling The Dom In Dart [article]
Just Object Oriented Programming (Object Oriented Programming, explained)

Packt
18 Feb 2016
In software development, design is often considered as the step done before programming. This isn't true; in reality, analysis, programming, and design tend to overlap, combine, and interweave.

Introducing object-oriented

Everyone knows what an object is—a tangible thing that we can sense, feel, and manipulate. The earliest objects we interact with are typically baby toys. Wooden blocks, plastic shapes, and over-sized puzzle pieces are common first objects. Babies learn quickly that certain objects do certain things: bells ring, buttons press, and levers pull. The definition of an object in software development is not terribly different. Software objects are not typically tangible things that you can pick up, sense, or feel, but they are models of something that can do certain things and have certain things done to them. Formally, an object is a collection of data and associated behaviors. So, knowing what an object is, what does it mean to be object-oriented? Oriented simply means directed toward. So object-oriented means functionally directed towards modeling objects. This is one of the many techniques used for modeling complex systems by describing a collection of interacting objects via their data and behavior. If you've read any hype, you've probably come across the terms object-oriented analysis, object-oriented design, object-oriented analysis and design, and object-oriented programming. These are all highly related concepts under the general object-oriented umbrella. In fact, analysis, design, and programming are all stages of software development. Calling them object-oriented simply specifies what style of software development is being pursued. Object-oriented analysis (OOA) is the process of looking at a problem, system, or task (that somebody wants to turn into an application) and identifying the objects and interactions between those objects. The analysis stage is all about what needs to be done. The output of the analysis stage is a set of requirements. If we were to complete the analysis stage in one step, we would have turned a task, such as, I need a website, into a set of requirements. For example: Website visitors need to be able to (italic represents actions, bold represents objects):

review our history
apply for jobs
browse, compare, and order products

In some ways, analysis is a misnomer. The baby we discussed earlier doesn't analyze the blocks and puzzle pieces. Rather, it will explore its environment, manipulate shapes, and see where they might fit. A better turn of phrase might be object-oriented exploration. In software development, the initial stages of analysis include interviewing customers, studying their processes, and eliminating possibilities. Object-oriented design (OOD) is the process of converting such requirements into an implementation specification. The designer must name the objects, define the behaviors, and formally specify which objects can activate specific behaviors on other objects. The design stage is all about how things should be done. The output of the design stage is an implementation specification. If we were to complete the design stage in a single step, we would have turned the requirements defined during object-oriented analysis into a set of classes and interfaces that could be implemented in (ideally) any object-oriented programming language.
Object-oriented programming (OOP) is the process of converting this perfectly defined design into a working program that does exactly what the CEO originally requested. Yeah, right! It would be lovely if the world met this ideal and we could follow these stages one by one, in perfect order, like all the old textbooks told us to. As usual, the real world is much murkier. No matter how hard we try to separate these stages, we'll always find things that need further analysis while we're designing. When we're programming, we find features that need clarification in the design. Most twenty-first century development happens in an iterative development model. In iterative development, a small part of the task is modeled, designed, and programmed, then the program is reviewed and expanded to improve each feature and include new features in a series of short development cycles.

Objects and classes

So, an object is a collection of data with associated behaviors. How do we differentiate between types of objects? Apples and oranges are both objects, but it is a common adage that they cannot be compared. Apples and oranges aren't modeled very often in computer programming, but let's pretend we're doing an inventory application for a fruit farm. To facilitate the example, we can assume that apples go in barrels and oranges go in baskets. Now, we have four kinds of objects: apples, oranges, baskets, and barrels. In object-oriented modeling, the term used for a kind of object is class. So, in technical terms, we now have four classes of objects. What's the difference between an object and a class? Classes describe objects. They are like blueprints for creating an object. You might have three oranges sitting on the table in front of you. Each orange is a distinct object, but all three have the attributes and behaviors associated with one class: the general class of oranges. The relationship between the four classes of objects in our inventory system can be described using a Unified Modeling Language (invariably referred to as UML, because three-letter acronyms never go out of style) class diagram. Here is our first class diagram:

This diagram shows that an Orange is somehow associated with a Basket and that an Apple is also somehow associated with a Barrel. Association is the most basic way for two classes to be related. UML is very popular among managers, and occasionally disparaged by programmers. The syntax of a UML diagram is generally pretty obvious; you don't have to read a tutorial to (mostly) understand what is going on when you see one. UML is also fairly easy to draw, and quite intuitive. After all, many people, when describing classes and their relationships, will naturally draw boxes with lines between them. Having a standard based on these intuitive diagrams makes it easy for programmers to communicate with designers, managers, and each other. However, some programmers think UML is a waste of time. Citing iterative development, they will argue that formal specifications done up in fancy UML diagrams are going to be redundant before they're implemented, and that maintaining these formal diagrams will only waste time and not benefit anyone. Depending on the corporate structure involved, this may or may not be true. However, every programming team consisting of more than one person will occasionally have to sit down and hash out the details of the subsystem it is currently working on. UML is extremely useful in these brainstorming sessions for quick and easy communication.
Even those organizations that scoff at formal class diagrams tend to use some informal version of UML in their design meetings or team discussions. Further, the most important person you will ever have to communicate with is yourself. We all think we can remember the design decisions we've made, but there will always be the Why did I do that? moments hiding in our future. If we keep the scraps of paper we did our initial diagramming on when we started a design, we'll eventually find them a useful reference. This article, however, is not meant to be a tutorial in UML. There are many of these available on the Internet, as well as numerous books available on the topic. UML covers far more than class and object diagrams; it also has a syntax for use cases, deployment, state changes, and activities. We'll be dealing with some common class diagram syntax in this discussion of object-oriented design. You'll find that you can pick up the structure by example, and you'll subconsciously choose the UML-inspired syntax in your own team or personal design sessions. Our initial diagram, while correct, does not remind us that apples go in barrels or how many barrels a single apple can go in. It only tells us that apples are somehow associated with barrels. The association between classes is often obvious and needs no further explanation, but we have the option to add further clarification as needed. The beauty of UML is that most things are optional. We only need to specify as much information in a diagram as makes sense for the current situation. In a quick whiteboard session, we might just quickly draw lines between boxes. In a formal document, we might go into more detail. In the case of apples and barrels, we can be fairly confident that the association is, many apples go in one barrel, but just to make sure nobody confuses it with, one apple spoils one barrel, we can enhance the diagram as shown:

This diagram tells us that oranges go in baskets, with a little arrow showing what goes in what. It also tells us the number of that object that can be used in the association on both sides of the relationship. One Basket can hold many (represented by a *) Orange objects. Any one Orange can go in exactly one Basket. This number is referred to as the multiplicity of the object. You may also hear it described as the cardinality. These are actually slightly distinct terms. Cardinality refers to the actual number of items in the set, whereas multiplicity specifies how small or how large this number could be. I frequently forget which side of a relationship the multiplicity goes on. The multiplicity nearest to a class is the number of objects of that class that can be associated with any one object at the other end of the association. For the apple goes in barrel association, reading from left to right, many instances of the Apple class (that is, many Apple objects) can go in any one Barrel. Reading from right to left, exactly one Barrel can be associated with any one Apple.

Specifying attributes and behaviors

We now have a grasp of some basic object-oriented terminology. Objects are instances of classes that can be associated with each other. An object instance is a specific object with its own set of data and behaviors; a specific orange on the table in front of us is said to be an instance of the general class of oranges. That's simple enough, but what are these data and behaviors that are associated with each object?

Data describes objects

Let's start with data.
Data typically represents the individual characteristics of a certain object. A class can define specific sets of characteristics that are shared by all objects of that class. Any specific object can have different data values for the given characteristics. For example, our three oranges on the table (if we haven't eaten any) could each weigh a different amount. The orange class could then have a weight attribute. All instances of the orange class have a weight attribute, but each orange has a different value for this attribute. Attributes don't have to be unique, though; any two oranges may weigh the same amount. As a more realistic example, two objects representing different customers might have the same value for a first name attribute. Attributes are frequently referred to as members or properties. Some authors suggest that the terms have different meanings, usually that attributes are settable, while properties are read-only. In Python, the concept of "read-only" is rather pointless, so throughout this article, we'll see the two terms used interchangeably. In our fruit inventory application, the fruit farmer may want to know what orchard the orange came from, when it was picked, and how much it weighs. They might also want to keep track of where each basket is stored. Apples might have a color attribute, and barrels might come in different sizes. Some of these properties may also belong to multiple classes (we may want to know when apples are picked, too), but for this first example, let's just add a few different attributes to our class diagram:

Depending on how detailed our design needs to be, we can also specify the type for each attribute. Attribute types are often primitives that are standard to most programming languages, such as integer, floating-point number, string, byte, or Boolean. However, they can also represent data structures such as lists, trees, or graphs, or most notably, other classes. This is one area where the design stage can overlap with the programming stage. The various primitives or objects available in one programming language may be somewhat different from what is available in other languages. Usually, we don't need to be overly concerned with data types at the design stage, as implementation-specific details are chosen during the programming stage. Generic names are normally sufficient for design. If our design calls for a list container type, the Java programmers can choose to use a LinkedList or an ArrayList when implementing it, while the Python programmers (that's us!) can choose between the list built-in and a tuple. In our fruit-farming example so far, our attributes are all basic primitives. However, there are some implicit attributes that we can make explicit—the associations. For a given orange, we might have an attribute containing the basket that holds that orange.

Behaviors are actions

Now, we know what data is, but what are behaviors? Behaviors are actions that can occur on an object. The behaviors that can be performed on a specific class of objects are called methods. At the programming level, methods are like functions in structured programming, but they magically have access to all the data associated with this object. Like functions, methods can also accept parameters and return values. Parameters to a method are a list of objects that need to be passed into the method that is being called (the objects that are passed in from the calling object are usually referred to as arguments).
These objects are used by the method to perform whatever behavior or task it is meant to do. Returned values are the results of that task. We've stretched our "comparing apples and oranges" example into a basic (if far-fetched) inventory application. Let's stretch it a little further and see if it breaks. One action that can be associated with oranges is the pick action. If you think about implementation, pick would place the orange in a basket by updating the basket attribute of the orange, and by adding the orange to the oranges list on the Basket. So, pick needs to know what basket it is dealing with. We do this by giving the pick method a basket parameter. Since our fruit farmer also sells juice, we can add a squeeze method to Orange. When squeezed, squeeze might return the amount of juice retrieved, while also removing the Orange from the basket it was in. Basket can have a sell action. When a basket is sold, our inventory system might update some data on as-yet unspecified objects for accounting and profit calculations. Alternatively, our basket of oranges might go bad before we can sell them, so we add a discard method. Let's add these methods to our diagram:

Adding models and methods to individual objects allows us to create a system of interacting objects. Each object in the system is a member of a certain class. These classes specify what types of data the object can hold and what methods can be invoked on it. The data in each object can be in a different state from other objects of the same class, and each object may react to method calls differently because of the differences in state. Object-oriented analysis and design is all about figuring out what those objects are and how they should interact. The next section describes principles that can be used to make those interactions as simple and intuitive as possible.
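Although this article stays at the design level, it can help to see roughly how such a model could eventually be rendered in code. Here is a minimal Python sketch of the Orange and Basket classes; the attribute and method names follow the diagrams discussed above, while everything else (the signatures, the juice ratio, the print call) is an illustrative assumption:

class Basket:
    def __init__(self, location):
        self.location = location
        self.oranges = []

    def sell(self, customer):
        # real accounting logic would go here
        print("Sold {} oranges to {}".format(len(self.oranges), customer))


class Orange:
    def __init__(self, weight, orchard, date_picked):
        self.weight = weight
        self.orchard = orchard
        self.date_picked = date_picked
        self.basket = None

    def pick(self, basket):
        # update both sides of the association
        self.basket = basket
        basket.oranges.append(self)

    def squeeze(self):
        # assume 40% of the weight is recovered as juice
        if self.basket is not None:
            self.basket.oranges.remove(self)
            self.basket = None
        return self.weight * 0.4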
Hiding details and creating the public interface

The key purpose of modeling an object in object-oriented design is to determine what the public interface of that object will be. The interface is the collection of attributes and methods that other objects can use to interact with that object. They do not need, and are often not allowed, to access the internal workings of the object. A common real-world example is the television. Our interface to the television is the remote control. Each button on the remote control represents a method that can be called on the television object. When we, as the calling object, access these methods, we do not know or care if the television is getting its signal from an antenna, a cable connection, or a satellite dish. We don't care what electronic signals are being sent to adjust the volume, or whether the sound is destined to speakers or headphones. If we open the television to access the internal workings, for example, to split the output signal to both external speakers and a set of headphones, we will void the warranty. This process of hiding the implementation, or functional details, of an object is suitably called information hiding. It is also sometimes referred to as encapsulation, but encapsulation is actually a more all-encompassing term. Encapsulated data is not necessarily hidden. Encapsulation is, literally, creating a capsule, and so think of creating a time capsule. If you put a bunch of information into a time capsule, lock and bury it, it is both encapsulated and the information is hidden. On the other hand, if the time capsule has not been buried and is unlocked or made of clear plastic, the items inside it are still encapsulated, but there is no information hiding. The distinction between encapsulation and information hiding is largely irrelevant, especially at the design level. Many practical references use these terms interchangeably. As Python programmers, we don't actually have or need true information hiding, so the more encompassing definition for encapsulation is suitable. The public interface, however, is very important. It needs to be carefully designed as it is difficult to change it in the future. Changing the interface will break any client objects that are calling it. We can change the internals all we like, for example, to make it more efficient, or to access data over the network as well as locally, and the client objects will still be able to talk to it, unmodified, using the public interface. On the other hand, if we change the interface by changing attribute names that are publicly accessed, or by altering the order or types of arguments that a method can accept, all client objects will also have to be modified. While on the topic of public interfaces, keep it simple. Always design the interface of an object based on how easy it is to use, not how hard it is to code (this advice applies to user interfaces as well). Remember, program objects may represent real objects, but that does not make them real objects. They are models. One of the greatest gifts of modeling is the ability to ignore irrelevant details. The model car I built as a child may look like a real 1956 Thunderbird on the outside, but it doesn't run and the driveshaft doesn't turn. These details were overly complex and irrelevant before I started driving. The model is an abstraction of a real concept. Abstraction is another object-oriented concept related to encapsulation and information hiding. Simply put, abstraction means dealing with the level of detail that is most appropriate to a given task. It is the process of extracting a public interface from the inner details. A driver of a car needs to interact with steering, gas pedal, and brakes. The workings of the motor, drive train, and brake subsystem don't matter to the driver. A mechanic, on the other hand, works at a different level of abstraction, tuning the engine and bleeding the brakes. Here's an example of two abstraction levels for a car:

Now, we have several new terms that refer to similar concepts. Condensing all this jargon into a couple of sentences: abstraction is the process of encapsulating information with separate public and private interfaces. The private interfaces can be subject to information hiding. The important lesson to take from all these definitions is to make our models understandable to other objects that have to interact with them. This means paying careful attention to small details. Ensure methods and properties have sensible names. When analyzing a system, objects typically represent nouns in the original problem, while methods are normally verbs. Attributes can often be picked up as adjectives, although if the attribute refers to another object that is part of the current object, it will still likely be a noun. Name classes, attributes, and methods accordingly. Don't try to model objects or actions that might be useful in the future. Model exactly those tasks that the system needs to perform, and the design will naturally gravitate towards the one that has an appropriate level of abstraction.
This is not to say we should not think about possible future design modifications. Our designs should be open ended so that future requirements can be satisfied. However, when abstracting interfaces, try to model exactly what needs to be modeled and nothing more. When designing the interface, try placing yourself in the object's shoes and imagine that the object has a strong preference for privacy. Don't let other objects have access to data about you unless you feel it is in your best interest for them to have it. Don't give them an interface to force you to perform a specific task unless you are certain you want them to be able to do that to you.

Composition

So far, we learned to design systems as a group of interacting objects, where each interaction involves viewing objects at an appropriate level of abstraction. But we don't yet know how to create these levels of abstraction; even most design patterns rely on two basic object-oriented principles known as composition and inheritance. Composition is simpler, so let's start with it. Composition is the act of collecting several objects together to create a new one. Composition is usually a good choice when one object is part of another object. We've already seen a first hint of composition in the mechanic example. A car is composed of an engine, transmission, starter, headlights, and windshield, among numerous other parts. The engine, in turn, is composed of pistons, a crankshaft, and valves. In this example, composition is a good way to provide levels of abstraction. The car object can provide the interface required by a driver, while also providing access to its component parts, which offers the deeper level of abstraction suitable for a mechanic. Those component parts can, of course, be further broken down if the mechanic needs more information to diagnose a problem or tune the engine. This is a common introductory example of composition, but it's not overly useful when it comes to designing computer systems. Physical objects are easy to break into component objects. People have been doing this at least since the ancient Greeks originally postulated that atoms were the smallest units of matter (they, of course, didn't have access to particle accelerators). Computer systems are generally less complicated than physical objects, yet identifying the component objects in such systems does not happen as naturally. The objects in an object-oriented system occasionally represent physical objects such as people, books, or telephones. More often, however, they represent abstract ideas. People have names, books have titles, and telephones are used to make calls. Calls, titles, accounts, names, appointments, and payments are not usually considered objects in the physical world, but they are all frequently-modeled components in computer systems. Let's try modeling a more computer-oriented example to see composition in action. We'll be looking at the design of a computerized chess game. This was a very popular pastime among academics in the 80s and 90s. People were predicting that computers would one day be able to defeat a human chess master. When this happened in 1997 (IBM's Deep Blue defeated world chess champion, Gary Kasparov), interest in the problem waned, although there are still contests between computer and human chess players. (The computers usually win.) As a basic, high-level analysis, a game of chess is played between two players, using a chess set featuring a board containing sixty-four positions in an 8 x 8 grid.
The board can have two sets of sixteen pieces that can be moved, in alternating turns, by the two players in different ways. Each piece can take other pieces. The board will be required to draw itself on the computer screen after each turn. I've identified some of the possible objects in the description using italics, and a few key methods using bold. This is a common first step in turning an object-oriented analysis into a design. At this point, to emphasize composition, we'll focus on the board, without worrying too much about the players or the different types of pieces. Let's start at the highest level of abstraction possible. We have two players interacting with a chess set by taking turns making moves:

What is this? It doesn't quite look like our earlier class diagrams. That's because it isn't a class diagram! This is an object diagram, also called an instance diagram. It describes the system at a specific state in time, and is describing specific instances of objects, not the interaction between classes. Remember, both players are members of the same class, so the class diagram looks a little different:

The diagram shows that exactly two players can interact with one chess set. This also indicates that any one player can be playing with only one chess set at a time. However, we're discussing composition, not UML, so let's think about what the Chess Set is composed of. We don't care what the player is composed of at this time. We can assume that the player has a heart and brain, among other organs, but these are irrelevant to our model. Indeed, there is nothing stopping said player from being Deep Blue itself, which has neither a heart nor a brain. The chess set, then, is composed of a board and 32 pieces. The board further comprises 64 positions. You could argue that pieces are not part of the chess set because you could replace the pieces in a chess set with a different set of pieces. While this is unlikely or impossible in a computerized version of chess, it introduces us to aggregation. Aggregation is almost exactly like composition. The difference is that aggregate objects can exist independently. It would be impossible for a position to be associated with a different chess board, so we say the board is composed of positions. But the pieces, which might exist independently of the chess set, are said to be in an aggregate relationship with that set. Another way to differentiate between aggregation and composition is to think about the lifespan of the object. If the composite (outside) object controls when the related (inside) objects are created and destroyed, composition is most suitable. If the related object is created independently of the composite object, or can outlast that object, an aggregate relationship makes more sense. Also, keep in mind that composition is aggregation; aggregation is simply a more general form of composition. Any composite relationship is also an aggregate relationship, but not vice versa. Let's describe our current chess set composition and add some attributes to the objects to hold the composite relationships:

The composition relationship is represented in UML as a solid diamond. The hollow diamond represents the aggregate relationship. You'll notice that the board and pieces are stored as part of the chess set in exactly the same way that a reference to them is stored as an attribute on the chess set. This shows that, once again, in practice, the distinction between aggregation and composition is often irrelevant once you get past the design stage. When implemented, they behave in much the same way. However, it can help to differentiate between the two when your team is discussing how the different objects interact. Often, you can treat them as the same thing, but when you need to distinguish between them, it's great to know the difference (this is abstraction at work).
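A short Python sketch makes the distinction tangible: the chess set creates its own board (composition, so their lifespans are tied together), while the pieces are created elsewhere and handed in (aggregation). The class and attribute names mirror the diagram; the rest is assumed for illustration:

class Board:
    def __init__(self):
        # the board creates and owns its 64 positions
        self.positions = [[None] * 8 for _ in range(8)]


class ChessSet:
    def __init__(self, pieces):
        self.board = Board()   # composition: created and destroyed with the set
        self.pieces = pieces   # aggregation: the pieces came from outside


pieces = ["white pawn", "black queen"]  # stand-ins for real Piece objects
chess_set = ChessSet(pieces)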
Inheritance

We discussed three types of relationships between objects: association, composition, and aggregation. However, we have not fully specified our chess set, and these tools don't seem to give us all the power we need. We discussed the possibility that a player might be a human or it might be a piece of software featuring artificial intelligence. It doesn't seem right to say that a player is associated with a human, or that the artificial intelligence implementation is part of the player object. What we really need is the ability to say that "Deep Blue is a player" or that "Gary Kasparov is a player". The is a relationship is formed by inheritance. Inheritance is the most famous, well-known, and over-used relationship in object-oriented programming. Inheritance is sort of like a family tree. My grandfather's last name was Phillips and my father inherited that name. I inherited it from him (along with blue eyes and a penchant for writing). In object-oriented programming, instead of inheriting features and behaviors from a person, one class can inherit attributes and methods from another class. For example, there are 32 chess pieces in our chess set, but there are only six different types of pieces (pawns, rooks, bishops, knights, king, and queen), each of which behaves differently when it is moved. All of these classes of piece have properties, such as color and the chess set they are part of, but they also have unique shapes when drawn on the chess board, and make different moves. Let's see how the six types of pieces can inherit from a Piece class:

The hollow arrows indicate that the individual classes of pieces inherit from the Piece class. All the subtypes automatically have a chess_set and color attribute inherited from the base class. Each piece provides a different shape property (to be drawn on the screen when rendering the board), and a different move method to move the piece to a new position on the board at each turn. We actually know that all subclasses of the Piece class need to have a move method; otherwise, when the board tries to move the piece, it will get confused. It is possible that we would want to create a new version of the game of chess that has one additional piece (the wizard). Our current design will allow us to design this piece without giving it a move method. The board would then choke when it asked the piece to move itself. We can implement this by creating a dummy move method on the Piece class. The subclasses can then override this method with a more specific implementation. The default implementation might, for example, pop up an error message that says: That piece cannot be moved. Overriding methods in subtypes allows very powerful object-oriented systems to be developed. For example, if we wanted to implement a player class with artificial intelligence, we might provide a calculate_move method that takes a Board object and decides which piece to move where. A very basic class might randomly choose a piece and direction and move it accordingly. We could then override this method in a subclass with the Deep Blue implementation.
The first class would be suitable for play against a raw beginner, the latter would challenge a grand master. The important thing is that other methods in the class, such as the ones that inform the board as to which move was chosen, need not be changed; this implementation can be shared between the two classes. In the case of chess pieces, it doesn't really make sense to provide a default implementation of the move method. All we need to do is specify that the move method is required in any subclasses. This can be done by making Piece an abstract class with the move method declared abstract. Abstract methods basically say, "We demand this method exist in any non-abstract subclass, but we are declining to specify an implementation in this class." Indeed, it is possible to make a class that does not implement any methods at all. Such a class would simply tell us what the class should do, but provides absolutely no advice on how to do it. In object-oriented parlance, such classes are called interfaces.

Inheritance provides abstraction

Let's explore the longest word in object-oriented argot. Polymorphism is the ability to treat a class differently depending on which subclass is implemented. We've already seen it in action with the pieces system we've described. If we took the design a bit further, we'd probably see that the Board object can accept a move from the player and call the move function on the piece. The board need not ever know what type of piece it is dealing with. All it has to do is call the move method, and the proper subclass will take care of moving it as a Knight or a Pawn. Polymorphism is pretty cool, but it is a word that is rarely used in Python programming. Python goes an extra step past allowing a subclass of an object to be treated like a parent class. A board implemented in Python could take any object that has a move method, whether it is a bishop piece, a car, or a duck. When move is called, the Bishop will move diagonally on the board, the car will drive someplace, and the duck will swim or fly, depending on its mood. This sort of polymorphism in Python is typically referred to as duck typing: "If it walks like a duck or swims like a duck, it's a duck". We don't care if it really is a duck (inheritance), only that it swims or walks. Geese and swans might easily be able to provide the duck-like behavior we are looking for. This allows future designers to create new types of birds without actually specifying an inheritance hierarchy for aquatic birds. It also allows them to create completely different drop-in behaviors that the original designers never planned for. For example, future designers might be able to make a walking, swimming penguin that works with the same interface without ever suggesting that penguins are ducks.
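The following Python sketch condenses both ideas: Piece declares a move method that Knight and Pawn override, and the calling code happily accepts anything that has a move method, duck included. The movement rules are stubbed out with print calls; only the shape of the design is real:

class Piece:
    def __init__(self, chess_set, color):
        self.chess_set = chess_set
        self.color = color

    def move(self, position):
        raise NotImplementedError("subclasses must implement move")


class Knight(Piece):
    def move(self, position):
        print("jumping in an L-shape to", position)


class Pawn(Piece):
    def move(self, position):
        print("stepping forward to", position)


class Duck:  # completely unrelated to Piece
    def move(self, position):
        print("waddling over to", position)


for mover in (Knight(None, "white"), Pawn(None, "black"), Duck()):
    mover.move("e4")   # duck typing: only the move method matters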
Multiple inheritance

When we think of inheritance in our own family tree, we can see that we inherit features from more than just one parent. When strangers tell a proud mother that her son has "his father's eyes", she will typically respond along the lines of, "yes, but he got my nose." Object-oriented design can also feature such multiple inheritance, which allows a subclass to inherit functionality from multiple parent classes. In practice, multiple inheritance can be a tricky business, and some programming languages (most notably, Java) strictly prohibit it. However, multiple inheritance can have its uses. Most often, it can be used to create objects that have two distinct sets of behaviors. For example, an object designed to connect to a scanner and send a fax of the scanned document might be created by inheriting from two separate scanner and faxer objects. As long as two classes have distinct interfaces, it is not normally harmful for a subclass to inherit from both of them. However, it gets messy if we inherit from two classes that provide overlapping interfaces. For example, if we have a motorcycle class that has a move method, and a boat class also featuring a move method, and we want to merge them into the ultimate amphibious vehicle, how does the resulting class know what to do when we call move? At the design level, this needs to be explained, and at the implementation level, each programming language has different ways of deciding which parent class's method is called, or in what order. Often, the best way to deal with it is to avoid it. If you have a design showing up like this, you're probably doing it wrong. Take a step back, analyze the system again, and see if you can remove the multiple inheritance relationship in favor of some other association or composite design. Inheritance is a very powerful tool for extending behavior. It is also one of the most marketable advancements of object-oriented design over earlier paradigms. Therefore, it is often the first tool that object-oriented programmers reach for. However, it is important to recognize that owning a hammer does not turn screws into nails. Inheritance is the perfect solution for obvious is a relationships, but it can be abused. Programmers often use inheritance to share code between two kinds of objects that are only distantly related, with no is a relationship in sight. While this is not necessarily a bad design, it is a terrific opportunity to ask just why they decided to design it that way, and whether a different relationship or design pattern would have been more suitable.

Summary

To learn more about object oriented programming, the following books published by Packt Publishing (https://www.packtpub.com/) are recommended: Python 3 Object Oriented Programming (https://www.packtpub.com/application-development/python-3-object-oriented-programming) Learning Object-Oriented Programming (https://www.packtpub.com/application-development/learning-object-oriented-programming) Resources for Article: Further resources on this subject: Putting the Fun in Functional Python [article] Scraping the Web with Python - Quick Start [article] Introduction to Object-Oriented Programming using Python, JavaScript, and C# [article]

Manipulating functions in functional programming

Packt
20 Jun 2017
In this article by Wisnu Anggoro, author of the book Learning C++ Functional Programming, you will learn to apply functional programming techniques to C++ to build highly modular, testable, and reusable code. In this article, you will learn the following topics:

Applying a first-class function in all functions
Passing a function as another function's parameter
Assigning a function to a variable
Storing a function in a container

Applying a first-class function in all functions

There's nothing special about the first-class function; we can treat it like any other data type. However, in a language that supports first-class functions, we can perform the following tasks without invoking the compiler recursively:

Passing a function as another function's parameter
Assigning functions to a variable
Storing functions in collections

Fortunately, C++ can be used to solve the preceding tasks. We will discuss them in depth in the following topics.

Passing a function as another function's parameter

Let's start by passing a function as a function parameter. We will choose one of four functions and invoke the function from its main function. The code will look as follows:

/* first-class-1.cpp */
#include <functional>
#include <iostream>
using namespace std;

typedef function<int(int, int)> FuncType;

int addition(int x, int y) { return x + y; }
int subtraction(int x, int y) { return x - y; }
int multiplication(int x, int y) { return x * y; }
int division(int x, int y) { return x / y; }

void PassingFunc(FuncType fn, int x, int y) {
    cout << "Result = " << fn(x, y) << endl;
}

int main() {
    int i, a, b;
    FuncType func;
    cout << "Select mode:" << endl;
    cout << "1. Addition" << endl;
    cout << "2. Subtraction" << endl;
    cout << "3. Multiplication" << endl;
    cout << "4. Division" << endl;
    cout << "Choice: ";
    cin >> i;
    cout << "a = ";
    cin >> a;
    cout << "b = ";
    cin >> b;
    switch (i) {
        case 1: PassingFunc(addition, a, b); break;
        case 2: PassingFunc(subtraction, a, b); break;
        case 3: PassingFunc(multiplication, a, b); break;
        case 4: PassingFunc(division, a, b); break;
    }
    return 0;
}

From the preceding code, we can see that we have four functions, and we want the user to choose one of them and then run it. In the switch statement, we will invoke one of the four functions based on the choice of the user. We will pass the selected function to PassingFunc(), as we can see in the following code snippet:

case 1: PassingFunc(addition, a, b); break;
case 2: PassingFunc(subtraction, a, b); break;
case 3: PassingFunc(multiplication, a, b); break;
case 4: PassingFunc(division, a, b); break;

The result we get on the screen should look like the following screenshot: The preceding screenshot shows that we selected the Subtraction mode and gave 8 to a and 3 to b. As we expected, the code gives us 5 as a result.
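Since PassingFunc takes a std::function parameter, it is not limited to named functions. As a small sketch on top of the code above, a lambda can be passed in the same way; the modulo operation here is a made-up fifth mode, purely for illustration:

// a hypothetical extra mode: modulo, passed as a lambda
PassingFunc([](int x, int y) { return x % y; }, 8, 3); // prints Result = 2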
Assigning a function to a variable

We can also assign a function to a variable so that we can call the function by calling the variable. We will refactor first-class-1.cpp, and it will be as follows:

/* first-class-2.cpp */
#include <functional>
#include <iostream>
using namespace std;

// Adding the addition, subtraction, multiplication, and
// division functions as we've got in first-class-1.cpp

int main() {
    int i, a, b;
    function<int(int, int)> func;
    cout << "Select mode:" << endl;
    cout << "1. Addition" << endl;
    cout << "2. Subtraction" << endl;
    cout << "3. Multiplication" << endl;
    cout << "4. Division" << endl;
    cout << "Choice: ";
    cin >> i;
    cout << "a = ";
    cin >> a;
    cout << "b = ";
    cin >> b;
    switch (i) {
        case 1: func = addition; break;
        case 2: func = subtraction; break;
        case 3: func = multiplication; break;
        case 4: func = division; break;
    }
    cout << "Result = " << func(a, b) << endl;
    return 0;
}

We will now assign one of the four functions based on the user's choice. We will store the selected function in the func variable inside the switch statement, as follows:

case 1: func = addition; break;
case 2: func = subtraction; break;
case 3: func = multiplication; break;
case 4: func = division; break;

After the func variable is assigned with the user's choice, the code will just call the variable as if calling the function, as follows:

cout << "Result = " << func(a, b) << endl;

Moreover, we will obtain the same output on the console if we run the code.

Storing a function in the container

Now, let's save the function to a container. Here, we will use the vector as the container. The code is as follows:

/* first-class-3.cpp */
#include <vector>
#include <functional>
#include <iostream>
using namespace std;

typedef function<int(int, int)> FuncType;

// Adding the addition, subtraction, multiplication, and
// division functions as we've got in first-class-1.cpp

int main() {
    vector<FuncType> functions;
    functions.push_back(addition);
    functions.push_back(subtraction);
    functions.push_back(multiplication);
    functions.push_back(division);
    int i, a, b;
    function<int(int, int)> func;
    cout << "Select mode:" << endl;
    cout << "1. Addition" << endl;
    cout << "2. Subtraction" << endl;
    cout << "3. Multiplication" << endl;
    cout << "4. Division" << endl;
    cout << "Choice: ";
    cin >> i;
    cout << "a = ";
    cin >> a;
    cout << "b = ";
    cin >> b;
    cout << "Result = " << functions.at(i - 1)(a, b) << endl;
    return 0;
}

From the preceding code, we can see that we created a new vector named functions, then stored four different functions in it. As with our two previous code samples, we ask the user to select the mode. However, now the code becomes simpler, since we don't need the switch statement; we can select the function directly by its vector index, as we can see in the following line of code:

cout << "Result = " << functions.at(i - 1)(a, b) << endl;

However, since the vector index is zero-based, we have to adjust it to the menu choice. The result will be the same as with our two previous code samples.

Summary

In this article, we discussed several techniques for manipulating functions. Since we can implement first-class functions in the C++ language, we can pass a function as another function's parameter. We can treat a function as a data object, so we can assign it to a variable and store it in a container. Resources for Article: Further resources on this subject: Introduction to the Functional Programming [article] Functional Programming in C# [article] Putting the Function in Functional Programming [article]

Unit testing with Java frameworks: JUnit and TestNG [Tutorial]

Fatema Patrawala
10 Jul 2018
10 min read
Frequent manual testing is too impractical for any but the smallest systems. The only way around this is the use of automated tests. Automated tests are an effective method to reduce the time and cost of building, deploying, and maintaining applications. In order to effectively manage applications, it is of the utmost importance that both the implementation and test code are as simple as possible. In this tutorial, we will look at two simple and easy-to-learn Java unit testing frameworks for writing and running tests: JUnit and TestNG. You are reading a Java unit testing tutorial from the book Test Driven Java Development - Second Edition, written by Alex Garcia and Viktor Farcic.

Simplicity of code is one of the core extreme programming (XP) values and the key to Test Driven Development (TDD) and programming in general. It is most often accomplished through division into small units. In Java, units are methods. Being the smallest, the feedback loop they provide is the fastest, so we spend most of our time thinking about and working on them. As a counterpart to implementation methods, unit tests should constitute by far the biggest percentage of all tests. So let us look at what unit testing is, and then get into the usage of the frameworks.

What is Unit testing?

Unit testing is a practice that forces us to test small, individual, and isolated units of code. They are usually methods, even though in some cases classes or even whole applications can be considered to be units as well. In order to write unit tests, the code under test needs to be isolated from the rest of the application. Preferably, that isolation is already ingrained in the code, or it can be accomplished with the use of mocks. If unit tests of a particular method cross the boundaries of that unit, they become integration tests. As such, it becomes less clear what is under test. In case of a failure, the scope of the problem suddenly increases, and finding the cause becomes more tedious.

Java Unit testing frameworks

In this section, two of the most used Java frameworks for unit testing are shown and briefly commented on. We will focus on their syntax and main features by comparing a test class written using both JUnit and TestNG. Although there are slight differences, both frameworks offer the most commonly used functionalities, and the main difference is how tests are executed and organized.

Let's start with a question. What is a test? How can we define it? A test is a repeatable process or method that verifies the correct behavior of a tested target in a determined situation, with a determined input, expecting a predefined output or interactions. In the programming approach, there are several types of tests depending on their scope: functional tests, acceptance tests, and unit tests. Further on, we will explore each of those types of tests in more detail. Let's see how to test a single Java class.
The class is quite simple, but enough for our interest:

public class Friendships {

    private final Map<String, List<String>> friendships = new HashMap<>();

    public void makeFriends(String person1, String person2) {
        addFriend(person1, person2);
        addFriend(person2, person1);
    }

    public List<String> getFriendsList(String person) {
        if (!friendships.containsKey(person)) {
            return Collections.emptyList();
        }
        return friendships.get(person);
    }

    public boolean areFriends(String person1, String person2) {
        return friendships.containsKey(person1)
                && friendships.get(person1).contains(person2);
    }

    private void addFriend(String person, String friend) {
        if (!friendships.containsKey(person)) {
            friendships.put(person, new ArrayList<String>());
        }
        List<String> friends = friendships.get(person);
        if (!friends.contains(friend)) {
            friends.add(friend);
        }
    }
}

Testing with JUnit

JUnit is a simple and easy-to-learn framework for writing and running tests. Each test is mapped as a method, and each method should represent a specific known scenario in which a part of our code will be executed. The code verification is made by comparing the expected output or behavior with the actual output.

The following is the test class written with JUnit. There are some scenarios missing, but for now we are interested in showing what tests look like. We will focus on better ways to test our code and on best practices later in this book. Test classes usually consist of three stages: set up, test, and tear down. Let's start with methods that set up the data needed for tests. A setup can be performed on a class or method level:

Friendships friendships;

@BeforeClass
public static void beforeClass() {
    // This method will be executed once on initialization time
}

@Before
public void before() {
    friendships = new Friendships();
    friendships.makeFriends("Joe", "Audrey");
    friendships.makeFriends("Joe", "Peter");
    friendships.makeFriends("Joe", "Michael");
    friendships.makeFriends("Joe", "Britney");
    friendships.makeFriends("Joe", "Paul");
}

The @BeforeClass annotation specifies a method that will be run once before any of the test methods in the class. It is a useful way to do some general set up that will be used by most (if not all) tests. The @Before annotation specifies a method that will be run before each test method. We can use it to set up test data without worrying that the tests that are run afterwards will change the state of that data. In the preceding example, we're instantiating the Friendships class and adding five sample entries to the friendships list. No matter what changes are performed by each individual test, this data will be recreated over and over until all the tests are performed. Common examples of usage of those two annotations are the setting up of database data, the creation of files needed for tests, and so on. Later on, we'll see how external dependencies can and should be avoided using mocks. Nevertheless, functional or integration tests might still need those dependencies, and the @Before and @BeforeClass annotations are a good way to set them up.
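As a quick taste of the mocks mentioned above, here is a minimal sketch using Mockito. The FriendshipRepository interface is hypothetical, introduced only for this example to stand in for an external dependency such as a database:

import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.Arrays;
import java.util.List;
import org.junit.Assert;
import org.junit.Test;

public class FriendshipRepositoryTest {

    // Hypothetical external dependency that would normally hit a database
    interface FriendshipRepository {
        List<String> findFriends(String person);
    }

    @Test
    public void friendsAreReadFromTheMockInsteadOfADatabase() {
        FriendshipRepository repository = mock(FriendshipRepository.class);
        when(repository.findFriends("Joe"))
                .thenReturn(Arrays.asList("Audrey", "Peter"));

        // The code under test would receive the mock instead of a real repository
        Assert.assertEquals(2, repository.findFriends("Joe").size());
        verify(repository).findFriends("Joe");
    }
}

Because the mock answers instantly and deterministically, the test stays a unit test even though the production code would normally depend on slow, external I/O.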
Once the data is set up, we can proceed with the actual tests:

@Test
public void alexDoesNotHaveFriends() {
    Assert.assertTrue("Alex does not have friends",
            friendships.getFriendsList("Alex").isEmpty());
}

@Test
public void joeHas5Friends() {
    Assert.assertEquals("Joe has 5 friends", 5,
            friendships.getFriendsList("Joe").size());
}

@Test
public void joeIsFriendWithEveryone() {
    List<String> friendsOfJoe =
            Arrays.asList("Audrey", "Peter", "Michael", "Britney", "Paul");
    Assert.assertTrue(friendships.getFriendsList("Joe")
            .containsAll(friendsOfJoe));
}

In this example, we are using a few of the many different types of asserts. We're confirming that Alex does not have any friends, while Joe is a very popular guy with five friends (Audrey, Peter, Michael, Britney, and Paul).

Finally, once the tests are finished, we might need to perform some cleanup:

@AfterClass
public static void afterClass() {
    // This method will be executed once when all tests are executed
}

@After
public void after() {
    // This method will be executed once after each test execution
}

In our example, in the Friendships class, we have no need to clean up anything. If there were such a need, those two annotations would provide that feature. They work in a similar fashion to the @Before and @BeforeClass annotations: @AfterClass is run once all tests are finished, and the @After annotation is executed after each test. JUnit runs each test method in a separate test class instance, so as long as we are avoiding global variables and external resources, such as databases and APIs, each test is isolated from the others. Whatever was done in one does not affect the rest. The complete source code can be found in the FriendshipsTest class at https://bitbucket.org/vfarcic/tdd-java-ch02-example-junit.

Testing with TestNG

In TestNG, tests are organized in classes, just as in the case of JUnit. The following Gradle configuration (build.gradle) is required in order to run TestNG tests:

dependencies {
    testCompile group: 'org.testng', name: 'testng', version: '6.8.21'
}

test.useTestNG() {
    // Optionally you can filter which tests are executed using
    // exclude/include filters
    // excludeGroups 'complex'
}

Unlike JUnit, TestNG requires additional Gradle configuration that tells it to use TestNG to run tests. The following test class is written with TestNG and is a reflection of what we did earlier with JUnit. Repeated imports and other boring parts are omitted with the intention of focusing on the relevant parts:

@BeforeClass
public static void beforeClass() {
    // This method will be executed once on initialization time
}

@BeforeMethod
public void before() {
    friendships = new Friendships();
    friendships.makeFriends("Joe", "Audrey");
    friendships.makeFriends("Joe", "Peter");
    friendships.makeFriends("Joe", "Michael");
    friendships.makeFriends("Joe", "Britney");
    friendships.makeFriends("Joe", "Paul");
}

You probably already noticed the similarities between JUnit and TestNG. Both use annotations to specify the purposes of certain methods. Besides the different names (@BeforeClass versus @BeforeMethod), there is no difference between the two. However, unlike JUnit, TestNG reuses the same test class instance for all test methods. This means that the test methods are not isolated by default, so more care is needed in the before and after methods.
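As a minimal sketch of that extra care (the counter field is hypothetical, added only for this example), an @AfterMethod can reset any mutable state that the shared instance accumulates:

import org.testng.annotations.AfterMethod;

public class SharedStateTest {

    // Hypothetical mutable field shared by every test method of this instance
    private int operationsPerformed;

    @AfterMethod
    public void resetSharedState() {
        // Without this reset, state would leak from one test method to the next
        operationsPerformed = 0;
    }
}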
Asserts are very similar as well:

public void alexDoesNotHaveFriends() {
    Assert.assertTrue(friendships.getFriendsList("Alex").isEmpty(),
            "Alex does not have friends");
}

public void joeHas5Friends() {
    Assert.assertEquals(friendships.getFriendsList("Joe").size(), 5,
            "Joe has 5 friends");
}

public void joeIsFriendWithEveryone() {
    List<String> friendsOfJoe =
            Arrays.asList("Audrey", "Peter", "Michael", "Britney", "Paul");
    Assert.assertTrue(friendships.getFriendsList("Joe")
            .containsAll(friendsOfJoe));
}

The only notable difference when compared with JUnit is the order of the assert arguments. While the order of JUnit's assert arguments is optional message, expected value, and actual value, TestNG's order is actual value, expected value, and optional message. Besides this difference in the order of the arguments we pass to the assert methods, there are almost no differences between JUnit and TestNG.

You might have noticed that @Test is missing. TestNG allows us to set it on the class level and thus convert all public methods into tests.

The @After annotations are also very similar. The only notable difference is the TestNG @AfterMethod annotation, which acts in the same way as the JUnit @After annotation.

As you can see, the syntax is pretty similar. Tests are organized into classes, and test verifications are made using assertions. That is not to say that there are no other important differences between those two frameworks; we'll see some of them throughout this book. I invite you to explore JUnit and TestNG by yourself. The complete source code with the preceding examples can be found here.

The assertions we have written until now have used only the testing frameworks. However, there are some test utilities that can help us make them nicer and more readable.

To summarize, we learned about JUnit and TestNG, the Java unit testing frameworks. We also ran tests on both frameworks to understand their usage and the differences between the two. To learn concepts of test-driven development in Java that help you build clean, maintainable, and robust code, check out the book Test Driven Java Development - Second Edition.
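As a taste of the test utilities mentioned above, here is a sketch using Hamcrest matchers against the same friendships fixture from the earlier examples. It assumes the hamcrest-library dependency is on the classpath:

import static org.hamcrest.MatcherAssert.assertThat;
import static org.hamcrest.Matchers.hasItems;
import static org.hamcrest.Matchers.hasSize;

@Test
public void joeHasTheExpectedFriends() {
    // Reads almost like a sentence: the list has size 5 and contains these items
    assertThat(friendships.getFriendsList("Joe"), hasSize(5));
    assertThat(friendships.getFriendsList("Joe"),
            hasItems("Audrey", "Peter", "Michael"));
}

A failed matcher prints both the expectation and the actual value, which tends to make diagnosing failures faster than with bare assertTrue calls.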

Hands on with Service Fabric

Packt
06 Apr 2017
12 min read
In this article by Rahul Rai and Namit Tanasseri, authors of the book Microservices with Azure, we explain that Service Fabric as a platform supports multiple programming models, each of which is best suited for specific scenarios. Each programming model offers a different level of integration with the underlying management framework. Better integration leads to more automation and less overhead. Picking the right programming model for your application or services is the key to efficiently utilizing the capabilities of Service Fabric as a hosting platform. Let's take a deeper look at these programming models.

To start with, let's look at the least integrated hosting option: Guest Executables. Native Windows applications or application code using Node.js or Java can be hosted on Service Fabric as guest executables. These executables can be packaged and pushed to a Service Fabric cluster like any other service. As the cluster manager has minimal knowledge about the executable, features like custom health monitoring, load reporting, state stores, and endpoint registration cannot be leveraged by the hosted application. However, from a deployment standpoint, a guest executable is treated like any other service. This means that for a guest executable, the Service Fabric cluster manager takes care of high availability, application lifecycle management, rolling updates, automatic failover, high-density deployment, and load balancing.

As an orchestration service, Service Fabric is responsible for deploying and activating an application or application services within a cluster. It is also capable of deploying services within a container image. This programming model is addressed as Guest Containers. The concept of containers is best explained as an implementation of operating-system-level virtualization: containers are encapsulated deployable components running on isolated process boundaries and sharing the same kernel. Deployed applications and their runtime dependencies are bundled within the container with an isolated view of all operating system constructs. This makes containers highly portable and secure. The guest container programming model is usually chosen when this level of isolation is required for the application. As containers don't have to boot an operating system, they have a fast boot-up time and are comparatively small in size.

A prime benefit of using Service Fabric as a platform is the fact that it supports heterogeneous operating environments. Service Fabric supports two types of containers to be deployed as guest containers: Docker containers on Linux, and Windows Server containers. Container images for Docker containers are stored in Docker Hub, and Docker APIs are used to create and manage the containers deployed on the Linux kernel.

Service Fabric supports two different types of containers in Windows Server 2016 with different levels of isolation: Windows Server containers and Windows Hyper-V containers. Windows Server containers are similar to Docker containers in terms of the isolation they provide. Windows Hyper-V containers offer a higher degree of isolation and security by not sharing the operating system kernel across instances. They are ideally used when a higher level of security isolation is required, such as on hosts that must run potentially hostile multi-tenant workloads. The following figure illustrates the different isolation levels achieved by using these containers.
Container isolation levels

The Service Fabric application model treats containers as application hosts, which can in turn host service replicas. There are three ways of utilizing containers within a Service Fabric application model. First, existing applications, such as Node.js or JavaScript applications or other executables, can be hosted within a container and deployed on Service Fabric as a guest container; a guest container is treated like a guest executable by the Service Fabric runtime. The second scenario supports deploying stateless services inside a container hosted on Service Fabric; stateless services using Reliable Services and Reliable Actors can be deployed within a container. The third option is to deploy stateful services in containers hosted on Service Fabric; this model also supports Reliable Services and Reliable Actors.

Service Fabric offers several features to manage containerized Microservices: container deployment and activation, resource governance, repository authentication, port mapping, container discovery and communication, and the ability to set environment variables.

While containers offer a good level of isolation, they are still heavy in terms of deployment footprint. Service Fabric offers a simpler, powerful programming model to develop your services, called Reliable Services. Reliable Services let you develop stateful and stateless services which can be directly deployed on Service Fabric clusters. For stateful services, the state can be stored close to the compute by using Reliable Collections. High availability of the state store and replication of the state are taken care of by the Service Fabric cluster management services. This contributes substantially to the performance of the system by improving the latency of data access. Reliable Services come with a built-in pluggable communication model which supports HTTP with Web API, WebSockets, and custom TCP protocols out of the box.

A Reliable Service is addressed as stateless if it does not maintain any state within it, or if the scope of the state stored is limited to a service call and is entirely disposable. This means that a stateless service does not need to persist, synchronize, or replicate state. A good example of this kind of service is a weather service like the MSN weather service. A weather service can be queried to retrieve the weather conditions associated with a specific geographical location. The response is based entirely on the parameters supplied to the service; this service does not store any state. Although stateless services are simpler to implement, most services in real life are not stateless. They either store state in an external state store or an internal one. Web front ends hosting APIs or web applications are good use cases for stateless services.

A stateful service persists state. The outcome of a service call made to a stateful service is usually influenced by the state persisted by the service. A service exposed by a bank to return the balance of an account is a good example of a stateful service. The state may be stored in an external data store, such as Azure SQL Database, Azure Blobs, or Azure Table storage. Most services prefer to store the state externally, considering the challenges around reliability, availability, scalability, and consistency of the data store. With Service Fabric, state can be stored close to the compute by using Reliable Collections.

To make things more lightweight, Service Fabric also offers a programming model based on the Virtual Actor pattern.
This programming model is called Reliable Actors. The Reliable Actors programming model is built on top of Reliable Services, which guarantees the scalability and reliability of the services. An actor can be defined as an isolated, independent unit of compute and state with single-threaded execution. Actors can be created, managed, and disposed of independently of each other, and a large number of actors can coexist and execute at the same time. Service Fabric Reliable Actors are a good fit for systems that are highly distributed and dynamic by nature. Every actor is defined as an instance of an actor type, the same way an object is an instance of a class. Each actor is uniquely identified by an actor ID. The lifetime of Service Fabric actors is not tied to their in-memory state. As a result, actors are automatically created the first time a request for them is made. The Reliable Actors garbage collector takes care of disposing of unused actors in memory.

Now that we understand the programming models, let's take a look at how the services deployed on Service Fabric are discovered and how communication between services takes place.

Service Fabric discovery and communication

An application built on top of Microservices is usually composed of multiple services, each of which runs multiple replicas. Each service is specialized in a specific task. To achieve an end-to-end business use case, multiple services need to be stitched together. This requires services to communicate with each other. A simple example would be a web front-end service communicating with middle-tier services, which in turn connect to back-end services to handle a single user request. Some of these middle-tier services can also be invoked by external applications. Services deployed on Service Fabric are distributed across multiple nodes in a cluster of virtual machines, and they can move across nodes dynamically. This distribution of services can either be triggered by a manual action or be the result of the Service Fabric cluster manager rebalancing services to achieve optimal resource utilization. This makes communication a challenge, as services are not tied to a particular machine. Let's understand how Service Fabric solves this challenge for its consumers.

Service protocols

Service Fabric, as a hosting platform for Microservices, does not interfere in the implementation of the service. On top of this, it also lets services decide on the communication channels they want to open. These channels are addressed as service endpoints. During service initiation, Service Fabric provides the opportunity for the services to set up the endpoints for incoming requests on any protocol or communication stack. The endpoints are defined according to common industry standards, that is, IP:Port. It is possible for multiple service instances to share a single host process, in which case they either have to use different ports or use a port-sharing mechanism. This ensures that every service instance is uniquely addressable.

Service endpoints

Service discovery

Service Fabric can rebalance services deployed on a cluster as part of its orchestration activities. This can be caused by resource-balancing activities, failovers, upgrades, scale-outs, or scale-ins, and it results in changes to the service endpoint addresses as the services move across different virtual machines.

Service distribution

The Service Fabric Naming Service is responsible for abstracting this complexity from the consuming service or application.
The Naming Service takes care of service discovery and resolution. All service instances in Service Fabric are identified by a unique URL, such as fabric:/MyMicroServiceApp/AppService1. This name stays constant across the lifetime of the service, although the endpoint addresses which physically host the service may change. Internally, Service Fabric manages a map between the service names and the physical locations where the services are hosted. This is similar to the DNS service, which is used to resolve website URLs to IP addresses. The following figure illustrates the name resolution process for a service hosted on Service Fabric:

Name resolution

Connections from applications external to Service Fabric

Service communication to or between services hosted in Service Fabric can be categorized as internal or external. Internal communication among services hosted on Service Fabric is easily achieved using the Naming Service. External communication, originated from an application or a user outside the boundaries of Service Fabric, needs some extra work. To understand how this works, let's dive deeper into the logical network layout of a typical Service Fabric cluster.

A Service Fabric cluster is always placed behind an Azure Load Balancer, which acts as a gateway for all traffic that needs to pass to the cluster. The Load Balancer is aware of every port open on every node of the cluster. When a request hits the Load Balancer, it identifies the port the request is addressed to and randomly routes the request to one of the nodes which has that port open. The Load Balancer is not aware of the services running on the nodes or the ports associated with the services. The following figure illustrates request routing in action:

Request routing

Configuring ports and protocols

The protocol and the ports to be opened by a Service Fabric cluster can be easily configured through the portal. Let's take an example to understand the configuration in detail. If we need a web application hosted on a Service Fabric cluster to have port 80 opened on HTTP to accept incoming traffic, the following steps should be performed.

Configuring the service manifest

Once a service listening on port 80 is authored, we need to configure port 80 in the service manifest to open a listener in the service. This can be done by editing ServiceManifest.xml:

<Resources>
  <Endpoints>
    <Endpoint Name="WebEndpoint" Protocol="http" Port="80" />
  </Endpoints>
</Resources>

Configuring a custom endpoint

On the Service Fabric cluster, configure port 80 as a custom endpoint. This can easily be done through the Azure Management portal.

Configuring custom port

Configuring the Azure Load Balancer

Once the cluster is configured and created, the Azure Load Balancer can be instructed to forward the traffic to port 80. If the Service Fabric cluster is created through the portal, this step is automatically taken care of for every port which is configured in the cluster configuration.

Configuring Azure Load Balancer

Configuring the health check

The Azure Load Balancer probes the ports on the nodes for their availability to ensure reliability of the service. The probes can be configured on the Azure portal. This is an optional step, as a default probe configuration is applied for each endpoint when a cluster is created.

Configuring probe

Built-in communication API

Service Fabric offers many built-in communication options to support inter-service communication. Service Remoting is one of them.
This option allows strongly typed remote procedure calls between Reliable Services and Reliable Actors. It is very easy to set up and operate with, as Service Remoting handles the resolution of service addresses, connections, retries, and error handling. Service Fabric also supports HTTP for language-agnostic communication. The Service Fabric SDK exposes the ICommunicationClient and ServicePartitionClient classes for service resolution, HTTP connections, and retry loops. WCF is also supported by Service Fabric as a communication channel to enable legacy workloads to be hosted on it. The SDK exposes the WcfCommunicationListener class for the server side, and the WcfCommunicationClient and ServicePartitionClient classes for the client side, to ease programming hurdles.

What can you do with SageMath?

Packt
02 May 2011
5 min read
Getting started with the basics of SageMath

You don't have to install Sage to try it out! In this article, we will use the notebook interface to showcase some of the basics of Sage so that you can follow along using a public notebook server. These examples can also be run from an interactive session if you have installed Sage.

Go to http://www.sagenb.org and sign up for a free account. You can also browse worksheets created and shared by others. Create a new worksheet by clicking on the link called New Worksheet, type in a name when prompted, and click Rename. Enter an expression by clicking in an input cell and typing or pasting in an expression. Click the evaluate link or press Shift-Enter to evaluate the contents of the cell. A new input cell will automatically open below the results of the calculation. You can also create a new input cell by clicking in the blank space just above an existing input cell.

Using Sage as a powerful calculator

Sage has all the features of a scientific calculator, and more. If you have been trying to perform mathematical calculations with a spreadsheet or the built-in calculator in your operating system, it's time to upgrade. Sage offers all the built-in functions you would expect.

If you have to make a calculation repeatedly, you can define a function and variables to make your life easier. For example, let's say that you need to calculate the Reynolds number, which is used in fluid mechanics:

    Re = v * L / nu

where v is the flow velocity, L is the characteristic length, and nu is the kinematic viscosity. You can define a function and variables like this:

Re(velocity, length, kinematic_viscosity) = velocity * length / kinematic_viscosity
v = 0.01
L = 1e-3
nu = 1e-6
Re(v, L, nu)

Type the code into an input cell and evaluate the cell. Now, you can change the value of one or more variables and re-run the calculation.

Sage can also perform exact calculations with integers and rational numbers. Using the pre-defined constant pi will result in exact values from trigonometric operations. Sage will even utilize complex numbers when needed.

Symbolic mathematics

Much of the difficulty of higher mathematics actually lies in the extensive algebraic manipulations that are required to obtain a result. Sage can save you many hours, and many sheets of paper, by automating some tedious tasks in mathematics. We'll start with basic calculus. For example, let's compute the derivative of the following function:

    f(x) = (x^2 - 1) / (x^4 + 1)

The following code defines the function and computes the derivative:

var('x')
f(x) = (x^2 - 1) / (x^4 + 1)
show(f)
show(derivative(f, x))

The result is equivalent to:

    (-2*x^5 + 4*x^3 + 2*x) / (x^4 + 1)^2

The first line defines a symbolic variable x (Sage automatically assumes that x is always a symbolic variable, but we will define it in each example for clarity). We then defined a function as a quotient of polynomials. Taking the derivative of f(x) by hand would require the use of the quotient rule, which can be very tedious to apply. Sage computes the derivative effortlessly.

Now, we'll move on to integration, which can be one of the most daunting tasks in calculus. Let's compute the indefinite integral of e^x * cos(x) symbolically. The code to compute the integral is very simple:

f(x) = e^x * cos(x)
f_int(x) = integrate(f, x)
show(f_int)

The result is:

    1/2*(cos(x) + sin(x))*e^x

To perform this integration by hand, integration by parts would have to be done twice, which could be quite time-consuming.
If we want to better understand the function we just defined, we can graph it with the following code:

f(x) = e^x * cos(x)
plot(f, (x, -2, 8))

Sage can also compute definite integrals symbolically, such as the integral of sqrt(1 - x^2) from 0 to 1. To compute a definite integral, we simply have to tell Sage the limits of integration:

f(x) = sqrt(1 - x^2)
f_integral = integrate(f, (x, 0, 1))
show(f_integral)

The result is:

    1/4*pi

This would have required the use of a substitution if computed by hand.

Have a go hero: There is actually a clever way to evaluate the integral from the previous problem without doing any calculus. If it isn't immediately apparent, plot the function f(x) from 0 to 1 and see if you recognize it. Note that the aspect ratio of the plot may not be square. (A short solution note follows at the end of this article.)

The partial fraction decomposition is another technique that Sage can do a lot faster than you. The solution to the following example covers two full pages in a calculus textbook, assuming that you don't make any mistakes in the algebra!

f(x) = (3 * x^4 + 4 * x^3 + 16 * x^2 + 20 * x + 9) / ((x + 2) * (x^2 + 3)^2)
g(x) = f.partial_fraction(x)
show(g)

The result is:

    1/(x + 2) + 2*x/(x^2 + 3) + 4*x/(x^2 + 3)^2

We'll use partial fractions again when we talk about solving ordinary differential equations symbolically.

Linear algebra

Linear algebra is one of the most fundamental tasks in numerical computing. Sage has many facilities for performing linear algebra, both numerical and symbolic. One fundamental operation is solving a system of linear equations A*x = B. Although this is a tedious problem to solve by hand, it only requires a few lines of code in Sage:

A = Matrix(QQ, [[0, -1, -1, 1], [1, 1, 1, 1], [2, 4, 1, -2], [3, 1, -2, 2]])
B = vector([0, 6, -1, 3])
A.solve_right(B)

The answer is:

    (2, -1, 3, 2)

Notice that Sage provided an exact answer with integer values. When we created matrix A, the argument QQ specified that the matrix was to contain rational values. Therefore, the result contains only rational values (which all happen to be integers for this problem).
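Here is the promised solution note for the Have a go hero exercise, written as a short derivation in LaTeX; it uses only geometry, no calculus. The graph of f(x) = sqrt(1 - x^2) on the interval [0, 1] is a quarter of the unit circle, so the integral is simply the area of a quarter disc of radius 1:

\[
\int_{0}^{1} \sqrt{1 - x^{2}}\, dx \;=\; \frac{1}{4}\,\pi r^{2}\Big|_{r=1} \;=\; \frac{\pi}{4}
\]

which matches the value 1/4*pi that Sage computed symbolically.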

Design with Spring AOP

Packt
14 Oct 2009
12 min read
Designing and implementing an enterprise Java application means dealing not only with the application's core business and architecture, but also with some typical enterprise requirements. We have to define how the application manages concurrency, so that it is robust and does not suffer too badly from an increase in the number of requests. We have to define the caching strategies for the application, because we don't want CPU- or data-intensive operations to be executed over and over. We have to define roles and profiles, applying security policies and restricting access to parts of the application, because different kinds of users will probably have different rights and permissions. All these issues require writing additional code that clutters our application's business code and reduces its modularity and maintainability. But we have a choice: we can design our enterprise Java application keeping AOP in mind. This will help us concentrate on our actual business code, taking away all the infrastructure issues that can otherwise be expressed as crosscutting concerns. This article will introduce such issues, and will show how to design and implement them with Spring 2.5 AOP support.

Concurrency with AOP

For many developers, concurrency remains a mystery. Concurrency is the system's ability to serve several requests simultaneously without threads corrupting the state of shared objects when they access them at the same time. A number of good books have been written on this subject, such as Concurrent Programming in Java and Java Concurrency in Practice. They deserve much attention, since concurrency is hard to understand and not immediately visible to developers: problems in the area of concurrency are hard to reproduce. However, it's important to keep concurrency in mind to assure that the application is robust regardless of the number of users it will serve. If we don't take concurrency into account, and don't document when and how the problems of concurrency are considered, we build an application that gambles on the CPU never simultaneously scheduling threads on parts of our application that are not thread-safe.

To build robust and scalable systems, we use proper patterns. The JDK ships packages just for concurrency, under java.util.concurrent, the result of JSR-166. One of these patterns is the read-write lock pattern, represented by the interface java.util.concurrent.locks.ReadWriteLock and its implementations, one of which is ReentrantReadWriteLock. The goal of ReadWriteLock is to allow an object to be read by a virtually endless number of threads, while only one thread at a time can modify it. In this way, the state of the object can never be corrupted: threads reading the object's state will always read up-to-date data, and the thread modifying the state can act without the object's state being corrupted. Another necessary property is visibility: the result of a thread's action must be visible to the other threads. The behavior is the same as we could have achieved using synchronized, but with a read-write lock we synchronize the actions explicitly, whereas with synchronized the synchronization is implicit.

Now let's see an example of ReadWriteLock on a BankAccountThreadSafe object. Before a read operation that needs to be safe, we acquire the read lock, and after the read operation we release it. Before a write operation that needs to be safe, we acquire the write lock, and after the state modification we release it.

package org.springaop.chapter.five.concurrent;

import java.util.Date;
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public final class BankAccountThreadSafe {

    public BankAccountThreadSafe(Integer id) {
        this.id = id;
        balance = new Float(0);
        startDate = new Date();
    }

    public BankAccountThreadSafe(Integer id, Float balance) {
        this.id = id;
        this.balance = balance;
        startDate = new Date();
    }

    public BankAccountThreadSafe(Integer id, Float balance, Date start) {
        this.id = id;
        this.balance = balance;
        this.startDate = start;
    }

    public boolean debitOperation(Float debit) {
        wLock.lock();
        try {
            float balance = getBalance();
            if (balance < debit) {
                return false;
            } else {
                setBalance(balance - debit);
                return true;
            }
        } finally {
            wLock.unlock();
        }
    }

    public void creditOperation(Float credit) {
        wLock.lock();
        try {
            setBalance(getBalance() + credit);
        } finally {
            wLock.unlock();
        }
    }

    private void setBalance(Float balance) {
        wLock.lock();
        try {
            this.balance = balance;
        } finally {
            wLock.unlock();
        }
    }

    public Float getBalance() {
        rLock.lock();
        try {
            return balance;
        } finally {
            rLock.unlock();
        }
    }

    public Integer getId() {
        return id;
    }

    public Date getStartDate() {
        return (Date) startDate.clone();
    }

    ...

    private Float balance;
    private final Integer id;
    private final Date startDate;
    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private final Lock rLock = lock.readLock();
    private final Lock wLock = lock.writeLock();
}

BankAccountThreadSafe is a class that doesn't allow a bank account to be overdrawn (that is, have a negative balance), and it's an example of a thread-safe class. The final fields are set in the constructors, and hence are implicitly thread-safe. The balance field, on the other hand, is managed in a thread-safe way by the setBalance, getBalance, creditOperation, and debitOperation methods. In other words, this class is correctly programmed, concurrency-wise. The problem is that wherever we would like to have those characteristics, we have to write the same code (especially the finally blocks containing the locks' release). We can solve that by writing an aspect that carries out that task for us.
A state modification is matched by execution(void com.mycompany.BankAccount.set*(*)), and a safe read is matched by execution(* com.mycompany.BankAccount.getBalance()).

package org.springaop.chapter.five.concurrent;

import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import org.aspectj.lang.annotation.After;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.aspectj.lang.annotation.Pointcut;

@Aspect
public class BankAccountAspect {

    /* pointcuts */

    @Pointcut("execution(* org.springaop.chapter.five.concurrent.BankAccount.getBalance())")
    public void safeRead() {}

    @Pointcut("execution(* org.springaop.chapter.five.concurrent.BankAccount.set*(*))")
    public void stateModification() {}

    @Pointcut("execution(* org.springaop.chapter.five.concurrent.BankAccount.getId())")
    public void getId() {}

    @Pointcut("execution(* org.springaop.chapter.five.concurrent.BankAccount.getStartDate())")
    public void getStartDate() {}

    /* advices */

    @Before("safeRead()")
    public void beforeSafeRead() {
        rLock.lock();
    }

    @After("safeRead()")
    public void afterSafeRead() {
        rLock.unlock();
    }

    @Before("stateModification()")
    public void beforeSafeWrite() {
        wLock.lock();
    }

    @After("stateModification()")
    public void afterSafeWrite() {
        wLock.unlock();
    }

    private final ReadWriteLock lock = new ReentrantReadWriteLock();
    private final Lock rLock = lock.readLock();
    private final Lock wLock = lock.writeLock();
}

The BankAccountAspect class applies the crosscutting functionality: calling the lock and unlock methods on the ReadLock and the WriteLock. The before methods acquire the locks with the @Before annotation, while the after methods release the locks as if they were in a finally block, using the @After annotation that is always executed (after-finally advice). In this way, the BankAccount class becomes much simpler, clearer, and briefer. It has no awareness that it is being executed in a thread-safe manner.

package org.springaop.chapter.five.concurrent;

import java.util.Date;

public class BankAccount {

    public BankAccount(Integer id) {
        this.id = id;
        this.balance = new Float(0);
        this.startDate = new Date();
    }

    public BankAccount(Integer id, Float balance) {
        this.id = id;
        this.balance = balance;
        this.startDate = new Date();
    }

    public BankAccount(Integer id, Float balance, Date start) {
        this.id = id;
        this.balance = balance;
        this.startDate = start;
    }

    public boolean debitOperation(Float debit) {
        float balance = getBalance();
        if (balance < debit) {
            return false;
        } else {
            setBalance(balance - debit);
            return true;
        }
    }

    public void creditOperation(Float credit) {
        setBalance(getBalance() + credit);
    }

    private void setBalance(Float balance) {
        this.balance = balance;
    }

    public Float getBalance() {
        return balance;
    }

    public Integer getId() {
        return id;
    }

    public Date getStartDate() {
        return (Date) startDate.clone();
    }

    private Float balance;
    private final Integer id;
    private final Date startDate;
}

Another good design choice, together with the use of ReadWriteLock when necessary, is using objects that are immutable once built, and therefore cannot be corrupted and can be freely shared between threads; a minimal sketch of such a class follows.
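This sketch illustrates the immutability suggestion above; the class name and the withBalance method are illustrative, not part of the book's code base:

public final class ImmutableBankAccount {

    private final Integer id;
    private final Float balance;

    public ImmutableBankAccount(Integer id, Float balance) {
        this.id = id;
        this.balance = balance;
    }

    public Integer getId() {
        return id;
    }

    public Float getBalance() {
        return balance;
    }

    // A "modification" returns a new instance, so existing references
    // never observe a change and no locking is required
    public ImmutableBankAccount withBalance(Float newBalance) {
        return new ImmutableBankAccount(id, newBalance);
    }
}

Because every field is final and set once in the constructor, instances can be passed between threads without any synchronization at all.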
Transparent caching with AOP

Often, the objects that compose applications perform the same operations with the same arguments and obtain the same results. Sometimes, these operations are costly in terms of CPU usage, or there may be a lot of I/O going on while executing them. To get better results in terms of speed and resources used, it's suggested to use a cache. We can store in it the results corresponding to method invocations as key-value pairs: method and arguments as the key, and the return object as the value.

Once you decide to use a cache, you're just halfway there. In fact, you must decide which part of the application is going to use it. Let's think about a web application backed by a database. Such a web application usually involves Data Access Objects (DAOs), which access the relational database. Such objects are usually a bottleneck in the application, as there is a lot of I/O going on; in other words, a cache can be used there. The cache can also be used by the business layer, which has already aggregated and elaborated data retrieved from repositories; or by the presentation layer, putting formatted presentation templates in the cache; or even by the authentication system, keeping roles associated with an authenticated username. There are almost no limits to how you can optimize an application and make it faster. The only price you pay is the RAM dedicated to the objects that are kept in memory, besides paying attention to the rules governing the life of the objects in the cache.

After these preliminary remarks, using a cache may seem common and obvious. A cache essentially acts as a hash into which key-value pairs are put; the keys are used to retrieve objects from the cache. Caching usually has configuration parameters that allow you to change its behavior.

Now let's have a look at an example with ehcache (http://ehcache.sourceforge.net). First of all, let's configure it with the name methodCache so that we have at most 1000 objects. The objects can be inactive for a maximum of five minutes, with a maximum life of 10 minutes. If the object count goes over 1000, ehcache saves the overflow on the filesystem, in java.io.tmpdir.

<ehcache>
  ...
  <diskStore path="java.io.tmpdir"/>
  ...
  <defaultCache
      maxElementsInMemory="10000"
      eternal="false"
      timeToIdleSeconds="120"
      timeToLiveSeconds="120"
      overflowToDisk="true"
      diskPersistent="false"
      diskExpiryThreadIntervalSeconds="120" />
  ...
  <cache name="methodCache"
      maxElementsInMemory="1000"
      eternal="false"
      overflowToDisk="false"
      timeToIdleSeconds="300"
      timeToLiveSeconds="600" />
</ehcache>

Now let's create a CacheAspect. We define the cacheObject method, to which the ProceedingJoinPoint is passed, and recover an unambiguous key from the ProceedingJoinPoint with the getCacheKey method. We use this key to put objects into the cache and to recover them. Once we have obtained the key, we ask the cache for the Element with the instruction cache.get(cacheKey). The Element has to be evaluated because it may be null if the cache didn't find an Element with the passed cacheKey. If the Element is null, the advice invokes the proceed() method and puts the resulting Element, with the key corresponding to the invocation, into the cache. Otherwise, if the Element recovered from the cache is not null, the method isn't invoked on the target class, and the value taken from the cache is given back to the caller.
package org.springaop.chapter.five.cache;

import it.springaop.utils.Constants;
import net.sf.ehcache.Cache;
import net.sf.ehcache.Element;
import org.apache.log4j.Logger;
import org.aspectj.lang.ProceedingJoinPoint;

public class CacheAspect {

    public Object cacheObject(ProceedingJoinPoint pjp) throws Throwable {
        Object result;
        String cacheKey = getCacheKey(pjp);
        Element element = (Element) cache.get(cacheKey);
        logger.info(new StringBuilder("CacheAspect invoke:").append("\n get:")
                .append(cacheKey).append(" value:").append(element).toString());
        if (element == null) {
            result = pjp.proceed();
            element = new Element(cacheKey, result);
            cache.put(element);
            logger.info(new StringBuilder("\n put:").append(cacheKey)
                    .append(" value:").append(result).toString());
        }
        return element.getValue();
    }

    public void flush() {
        cache.flush();
    }

    private String getCacheKey(ProceedingJoinPoint pjp) {
        String targetName = pjp.getTarget().getClass().getSimpleName();
        String methodName = pjp.getSignature().getName();
        Object[] arguments = pjp.getArgs();
        StringBuilder sb = new StringBuilder();
        sb.append(targetName).append(".").append(methodName);
        if ((arguments != null) && (arguments.length != 0)) {
            for (int i = 0; i < arguments.length; i++) {
                sb.append(".").append(arguments[i]);
            }
        }
        return sb.toString();
    }

    public void setCache(Cache cache) {
        this.cache = cache;
    }

    private Cache cache;
    private Logger logger = Logger.getLogger(Constants.LOG_NAME);
}

Here is applicationContext.xml:

<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:aop="http://www.springframework.org/schema/aop"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-2.5.xsd
        http://www.springframework.org/schema/aop
        http://www.springframework.org/schema/aop/spring-aop-2.5.xsd">
  ...
  <bean id="rockerCacheAspect" class="org.springaop.chapter.five.cache.CacheAspect">
    <property name="cache">
      <bean id="bandCache" parent="cache">
        <property name="cacheName" value="methodCache" />
      </bean>
    </property>
  </bean>

  <!-- CACHE config -->
  <bean id="cache" abstract="true"
      class="org.springframework.cache.ehcache.EhCacheFactoryBean">
    <property name="cacheManager" ref="cacheManager" />
  </bean>

  <bean id="cacheManager"
      class="org.springframework.cache.ehcache.EhCacheManagerFactoryBean">
    <property name="configLocation"
        value="classpath:org/springaop/chapter/five/cache/ehcache.xml" />
  </bean>
  ...
</beans>

The idea behind the caching aspect is to avoid repetition in our code base and to have a consistent strategy for identifying objects (for example, using the hash code of an object), so that objects don't end up in the cache twice. Employing around advice, we can use the cache to make method invocations return the cached result of a previous invocation of the same method in a totally transparent way. In fact, the methods matched by the interception rules in the pointcuts return values drawn from the cache; if the values are not present, the methods are invoked and their results are inserted into the cache. In this way, the classes and methods have no knowledge that their return values are being retrieved from a cache.
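If you prefer annotations over XML advice wiring, the same around advice can be declared with @Aspect and @Around, in the style of the BankAccountAspect shown earlier. This is only a sketch: the subclass name and the pointcut expression (the com.mycompany.dao package) are assumptions to be adapted to your application, and aspect auto-proxying must be enabled (for example with <aop:aspectj-autoproxy /> in the XML configuration):

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class AnnotatedCacheAspect extends CacheAspect {

    // Assumed pointcut: intercept every method of the DAO layer
    @Around("execution(* com.mycompany.dao.*.*(..))")
    public Object aroundDao(ProceedingJoinPoint pjp) throws Throwable {
        // Delegate to the inherited advice logic shown above
        return cacheObject(pjp);
    }
}

The aspect still needs its cache property injected, exactly as the rockerCacheAspect bean is configured in applicationContext.xml.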

Mailing with Spring Mail

Packt
04 Jun 2015
19 min read
In this article by Anjana Mankale, author of the book Mastering Spring Application Development, we shall see how we can use the Spring mail template to e-mail recipients. We shall also demonstrate Spring mailing template configurations using different scenarios.

Spring mail message handling process

The following diagram depicts the flow of a Spring mail message process. With this, we can clearly understand the process of sending mail using a Spring mailing template. A message is created and sent to the transport protocol, which interacts with internet protocols; the message is then received by the recipients. The Spring mail framework requires a mail (SMTP) configuration and the message to be sent as its input. The mail API interacts with internet protocols to send messages. In the next section, we shall look at the classes and interfaces in the Spring mail framework.

Interfaces and classes used for sending mails with Spring

The package org.springframework.mail is used for mail configuration in the Spring application. The following are the three main interfaces that are used for sending mail:

- MailSender: This interface is used to send simple mail messages.
- JavaMailSender: This interface is a subinterface of the MailSender interface and adds support for sending MIME messages.
- MimeMessagePreparator: This is a callback interface that supports the JavaMailSender interface in the preparation of mail messages.

The following classes are used for sending mails using Spring:

- SimpleMailMessage: This class has properties such as to, from, cc, bcc, and sentDate, among others. It is used together with MailSender implementations to send plain-text mail.
- JavaMailSenderImpl: This class is an implementation of the JavaMailSender interface.
- MimeMessageHelper: This class helps with preparing MIME messages.

Sending mail using the @Configuration annotation

We shall demonstrate here how we can send mail using the Spring mail API. First, we provide all the SMTP details in a .properties file and read them into a class annotated with @Configuration. The name of the class is MailConfiguration. The contents of the mail.properties file are shown below:

mail.protocol=smtp
mail.host=localhost
mail.port=25
mail.smtp.auth=false
mail.smtp.starttls.enable=false
mail.from=me@localhost
mail.username=
mail.password=

@Configuration
@PropertySource("classpath:mail.properties")
public class MailConfiguration {

    @Value("${mail.protocol}")
    private String protocol;
    @Value("${mail.host}")
    private String host;
    @Value("${mail.port}")
    private int port;
    @Value("${mail.smtp.auth}")
    private boolean auth;
    @Value("${mail.smtp.starttls.enable}")
    private boolean starttls;
    @Value("${mail.from}")
    private String from;
    @Value("${mail.username}")
    private String username;
    @Value("${mail.password}")
    private String password;

    @Bean
    public JavaMailSender javaMailSender() {
        JavaMailSenderImpl mailSender = new JavaMailSenderImpl();
        Properties mailProperties = new Properties();
        mailProperties.put("mail.smtp.auth", auth);
        mailProperties.put("mail.smtp.starttls.enable", starttls);
        mailSender.setJavaMailProperties(mailProperties);
        mailSender.setHost(host);
        mailSender.setPort(port);
        mailSender.setProtocol(protocol);
        mailSender.setUsername(username);
        mailSender.setPassword(password);
        return mailSender;
    }
}

The next step is to create a REST controller that sends the mail when the user clicks Submit.
We shall use the SimpleMailMessage interface since we don't have any attachment:

@RestController
class MailSendingController {

    private final JavaMailSender javaMailSender;

    @Autowired
    MailSendingController(JavaMailSender javaMailSender) {
        this.javaMailSender = javaMailSender;
    }

    @RequestMapping("/mail")
    @ResponseStatus(HttpStatus.CREATED)
    SimpleMailMessage send() {
        SimpleMailMessage mailMessage = new SimpleMailMessage();
        mailMessage.setTo("packt@localhost");
        mailMessage.setReplyTo("anjana@localhost");
        mailMessage.setFrom("Sonali@localhost");
        mailMessage.setSubject("Vani veena Pani");
        mailMessage.setText("MuthuLakshmi how are you? Call Me Please [...]");
        javaMailSender.send(mailMessage);
        return mailMessage;
    }
}

Sending mail using MailSender and SimpleMailMessage with XML configuration

"Simple mail message" means the e-mail sent will only be text-based, with no HTML formatting, no images, and no attachments. In this section, consider a scenario where we send a welcome mail to the user as soon as their order is placed in the application. In this scenario, the mail will be sent after the database insertion operation is successful.

The following are the steps for sending mail using the MailSender interface and the SimpleMailMessage class:

1. Create a new Maven web project with the name Spring4MongoDB_MailChapter3. We have used the same Eshop db database with MongoDB for CRUD operations on Customer, Order, and Product, along with the same MVC configurations and source files.
2. Use the same dependencies as used previously, and add the mail dependencies to the pom.xml file:

<dependency>
  <groupId>org.springframework.integration</groupId>
  <artifactId>spring-integration-mail</artifactId>
  <version>3.0.2.RELEASE</version>
  <scope>runtime</scope>
</dependency>
<dependency>
  <groupId>javax.activation</groupId>
  <artifactId>activation</artifactId>
  <version>1.1-rev-1</version>
  <scope>runtime</scope>
</dependency>
<dependency>
  <groupId>javax.mail</groupId>
  <artifactId>mail</artifactId>
  <version>1.4.3</version>
</dependency>

3. Compile the Maven project.
4. Create a separate folder called com.packt.mailService for the mail service.
5. Create a simple class named MailSenderService and autowire the MailSender and SimpleMailMessage classes. The basic skeleton is shown here:

public class MailSenderService {

    @Autowired
    private MailSender mailSender;

    @Autowired
    private SimpleMailMessage simplemailmessage;

    public void sendmail(String from, String to, String subject, String body) {
        /* Code */
    }
}

6. Next, create an object of SimpleMailMessage and set the mail properties, such as from, to, and subject, on it:

public void sendmail(String from, String to, String subject, String body) {
    SimpleMailMessage message = new SimpleMailMessage();
    message.setFrom(from);
    message.setTo(to);
    message.setSubject(subject);
    message.setText(body);
    mailSender.send(message);
}

7. We need to configure the SMTP details. Spring mail support provides the flexibility of configuring the SMTP details in the XML file:
JavaMailSenderImpl"> <property name="host" value="smtp.gmail.com" /> <property name="port" value="587" /> <property name="username" value="username" /> <property name="password" value="password" />   <property name="javaMailProperties"> <props>    <prop key="mail.smtp.auth">true</prop>    <prop key="mail.smtp.starttls.enable">true</prop> </props> </property> </bean>   <bean id="mailSenderService" class=" com.packt.mailserviceMailSenderService "> <property name="mailSender" ref="mailSender" /> </bean>   </beans> We need to send mail to the customer after the order has been placed successfully in the MongoDB database. Update the addorder() method as follows: @RequestMapping(value = "/order/save", method = RequestMethod.POST) // request insert order recordh public String addorder(@ModelAttribute("Order")    Order order,Map<String, Object> model) {    Customer cust=new Customer();    cust=customer_respository.getObject      (order.getCustomer().getCust_id());      order.setCustomer(cust);    order.setProduct(product_respository.getObject      (order.getProduct().getProdid()));    respository.saveObject(order);    mailSenderService.sendmail      ("anjana.mprasad@gmail.com",cust.getEmail(),      "Dear"+cust.getName()+"Your order      details",order.getProduct().getName()+"-price-"+order      .getProduct().getPrice());    model.put("customerList", customerList);    model.put("productList", productList);    return "order"; } Sending mail to multiple recipients If you want to intimate the user regarding the latest products or promotions in the application, you can create a mail sending group and send mail to multiple recipients using Spring mail sending support. We have created an overloaded method in the same class, MailSenderService, which will accept string arrays. The code snippet in the class will look like this: public class MailSenderService { @Autowired private MailSender mailSender; @AutoWired private SimpleMailMessage simplemailmessage; public void sendmail(String from, String to, String subject,    String body){    /*Code */ }   public void sendmail(String from, String []to, String subject,    String body){    /*Code */ }   } The following is the code snippet for listing the set of users from MongoDB who have subscribed to promotional e-mails: public List<Customer> getAllObjectsby_emailsubscription(String    status) {    return mongoTemplate.find(query(      where("email_subscribe").is("yes")), Customer.class); } Sending MIME messages Multipurpose Internet Mail Extension (MIME) allows attachments to be sent over the Internet. This class just demonstrates how we can send mail with MIME messages. Using a MIME message sender type class is not advisible if you are not sending any attachments with the mail message. In the next section, we will look at the details of how we can send mail with attachments. Update the MailSenderService class with another method. We have used the MIME message preparator and have overridden the prepare method() to set properties for the mail. 
Sending MIME messages

Multipurpose Internet Mail Extensions (MIME) allows attachments to be sent over the Internet. This section just demonstrates how we can send a MIME message. Using a MIME message sender type class is not advisable if you are not sending any attachments with the mail message. In the next section, we will look at the details of how we can send mail with attachments. Update the MailSenderService class with another method, using MimeMessagePreparator and overriding its prepare() method to set the properties of the mail. Note that the field is typed as JavaMailSender here; the plain MailSender interface does not declare a send() method that accepts a MimeMessagePreparator.

public class MailSenderService {

  @Autowired
  private JavaMailSender mailSender;
  @Autowired
  private SimpleMailMessage simplemailmessage;

  public void sendmail(String from, String to, String subject, String body) {
    /*Code */
  }

  public void sendmail(String from, String[] to, String subject, String body) {
    /*Code */
  }

  public void sendmime_mail(final String from, final String to,
      final String subject, final String body) throws MailException {
    MimeMessagePreparator message = new MimeMessagePreparator() {
      public void prepare(MimeMessage mimeMessage) throws Exception {
        mimeMessage.setRecipient(Message.RecipientType.TO, new InternetAddress(to));
        mimeMessage.setFrom(new InternetAddress(from));
        mimeMessage.setSubject(subject);
        mimeMessage.setText(body);
      }
    };
    mailSender.send(message);
  }
}

Sending attachments with mail

We can also attach various kinds of files to the mail. This functionality is supported by the MimeMessageHelper class. If you just want to send a MIME message without an attachment, you can opt for MimeMessagePreparator. If the requirement is to send an attachment with the mail, we can go for the MimeMessageHelper class with the file APIs. Spring provides a resource class named org.springframework.core.io.FileSystemResource, which has a parameterized constructor that accepts file objects.

public class SendMailwithAttachment {
  public static void main(String[] args) throws MessagingException {
    AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext();
    ctx.register(AppConfig.class);
    ctx.refresh();
    JavaMailSenderImpl mailSender = ctx.getBean(JavaMailSenderImpl.class);
    MimeMessage mimeMessage = mailSender.createMimeMessage();
    // Pass the true flag for a multipart message
    MimeMessageHelper mailMsg = new MimeMessageHelper(mimeMessage, true);
    mailMsg.setFrom("ANJUANJU02@gmail.com");
    mailMsg.setTo("RAGHY03@gmail.com");
    mailMsg.setSubject("Test mail with Attachment");
    mailMsg.setText("Please find Attachment.");
    // FileSystemResource object for the attachment
    FileSystemResource file = new FileSystemResource(new File("D:/cp/GODGOD.jpg"));
    mailMsg.addAttachment("GODGOD.jpg", file);
    mailSender.send(mimeMessage);
    System.out.println("---Done---");
  }
}
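MimeMessageHelper can also embed an image inline in an HTML body rather than attaching it. The following is a minimal sketch, assuming the same JavaMailSender as in the listings above; the file path, addresses, and content id are placeholders, not values from the original text. Note that setText() must be called before addInline():

public void sendInlineImageMail(String from, String to, String subject)
    throws MessagingException {
  MimeMessage mimeMessage = mailSender.createMimeMessage();
  // Pass the true flag for a multipart message
  MimeMessageHelper helper = new MimeMessageHelper(mimeMessage, true);
  helper.setFrom(from);
  helper.setTo(to);
  helper.setSubject(subject);
  // The second argument marks the body as HTML; call setText() before addInline()
  helper.setText("<html><body><img src='cid:logo'/></body></html>", true);
  FileSystemResource image = new FileSystemResource(new File("D:/cp/logo.png"));
  // The content id ("logo") must match the cid used in the HTML body
  helper.addInline("logo", image);
  mailSender.send(mimeMessage);
}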
Sending preconfigured mail

In this example, we shall provide a message to be sent in the mail and configure it in an XML file. Sometimes, when it comes to web applications, you may have to send messages on maintenance. Think of a scenario where the content of the mail changes, but the sender and receiver are preconfigured. In such a case, you can add another overloaded method to the mailer class. We have fixed the subject of the mail, and the content can be supplied by the caller. Think of it as an application that sends mail to users whenever a build fails.

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns:context="http://www.springframework.org/schema/context"
  xsi:schemaLocation="http://www.springframework.org/schema/beans
    http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
    http://www.springframework.org/schema/context
    http://www.springframework.org/schema/context/spring-context-3.0.xsd">

  <context:component-scan base-package="com.packt" />

  <!-- Set default mail properties -->
  <bean id="mailSender" class="org.springframework.mail.javamail.JavaMailSenderImpl">
    <property name="host" value="smtp.gmail.com"/>
    <property name="port" value="25"/>
    <property name="username" value="anju@gmail.com"/>
    <property name="password" value="password"/>
    <property name="javaMailProperties">
      <props>
        <prop key="mail.transport.protocol">smtp</prop>
        <prop key="mail.smtp.auth">true</prop>
        <prop key="mail.smtp.starttls.enable">true</prop>
        <prop key="mail.debug">true</prop>
      </props>
    </property>
  </bean>

  <!-- You can also have preconfigured messages that are ready to send -->
  <bean id="preConfiguredMessage" class="org.springframework.mail.SimpleMailMessage">
    <property name="to" value="packt@gmail.com"/>
    <property name="from" value="anju@gmail.com"/>
    <property name="subject" value="FATAL ERROR- APPLICATION AUTO MAINTENANCE STARTED-BUILD FAILED!!"/>
  </bean>
</beans>

Now we shall send two different message bodies: one composed at call time and one for the preconfigured mail.

public class MyMailer {
  public static void main(String[] args) {
    ApplicationMailer mailer = null;
    try {
      // Create the application context
      ApplicationContext context = new FileSystemXmlApplicationContext("application-context.xml");
      // Get the mailer instance
      mailer = (ApplicationMailer) context.getBean("mailService");
      // Send a composed mail
      mailer.sendMail("nikhil@gmail.com", "Test Subject", "Testing body");
    } catch (Exception e) {
      // Send a preconfigured mail
      mailer.sendPreConfiguredMail("build failed, exception occurred, check console or logs: " + e.getMessage());
    }
  }
}
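The listing above looks up a mailService bean of type ApplicationMailer, which is not shown in the original text. A minimal sketch consistent with the calling code and the XML configuration might look like this - the class shape and method names are inferred from the calls above, not taken from the book:

public class ApplicationMailer {

  @Autowired
  private MailSender mailSender;
  @Autowired
  private SimpleMailMessage preConfiguredMessage; // the bean defined in the XML above

  // Compose and send an ad hoc message
  public void sendMail(String to, String subject, String body) {
    SimpleMailMessage message = new SimpleMailMessage();
    message.setTo(to);
    message.setSubject(subject);
    message.setText(body);
    mailSender.send(message);
  }

  // Reuse the preconfigured to/from/subject; only the body varies
  public void sendPreConfiguredMail(String body) {
    SimpleMailMessage message = new SimpleMailMessage(preConfiguredMessage);
    message.setText(body);
    mailSender.send(message);
  }
}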
Using Spring templates with Velocity to send HTML mails

Velocity is a templating language from Apache. It can be integrated into the Spring view layer easily. The latest Velocity version used in this book is 1.7. Earlier, we demonstrated using Velocity to send e-mails with the @Bean and @Configuration annotations; in this section, we shall see how we can configure Velocity using XML configuration. All that needs to be done is to add the following bean definition to the XML file. In the case of mvc, you can add it to the dispatcher-servlet.xml file:

<bean id="velocityEngine" class="org.springframework.ui.velocity.VelocityEngineFactoryBean">
  <property name="velocityProperties">
    <value>
      resource.loader=class
      class.resource.loader.class=org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader
    </value>
  </property>
</bean>

Create a new Maven web project with the name Spring4MongoDB_Mail_VelocityChapter3. Create a package and name it com.packt.velocity.templates. Create a file with the name orderconfirmation.vm:

<html>
<body>
<h3>Dear Customer,</h3>
<p>${customer.firstName} ${customer.lastName}</p>
<p>We have dispatched your order to the following address.</p>
${customer.address}
</body>
</html>

Use all the dependencies that we have added in the previous sections. To the existing Maven project, add this dependency:

<dependency>
  <groupId>org.apache.velocity</groupId>
  <artifactId>velocity</artifactId>
  <version>1.7</version>
</dependency>

To ensure that Velocity gets loaded on application startup, we shall create a class. Let's name the class VelocityConfiguration.java. We have used the annotations @Configuration and @Bean with the class:

import java.io.IOException;
import java.util.Properties;

import org.apache.velocity.app.VelocityEngine;
import org.apache.velocity.exception.VelocityException;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.ui.velocity.VelocityEngineFactory;

@Configuration
public class VelocityConfiguration {
  @Bean
  public VelocityEngine getVelocityEngine() throws VelocityException, IOException {
    VelocityEngineFactory velocityEngineFactory = new VelocityEngineFactory();
    Properties props = new Properties();
    props.put("resource.loader", "class");
    props.put("class.resource.loader.class",
        "org.apache.velocity.runtime.resource.loader.ClasspathResourceLoader");
    velocityEngineFactory.setVelocityProperties(props);
    return velocityEngineFactory.createVelocityEngine();
  }
}

Use the same MailSenderService class and add another overloaded sendmail() method to it:

public void sendmail(final Customer customer) {
  MimeMessagePreparator preparator = new MimeMessagePreparator() {
    public void prepare(MimeMessage mimeMessage) throws Exception {
      MimeMessageHelper message = new MimeMessageHelper(mimeMessage);
      message.setTo(customer.getEmail());
      message.setFrom("webmaster@packt.com"); // could be parameterized
      Map<String, Object> model = new HashMap<String, Object>();
      model.put("customer", customer);
      String text = VelocityEngineUtils.mergeTemplateIntoString(
          velocityEngine, "com/packt/velocity/templates/orderconfirmation.vm", model);
      message.setText(text, true);
    }
  };
  this.mailSender.send(preparator);
}

Update the controller class to send mail using the Velocity template:

@RequestMapping(value = "/order/save", method = RequestMethod.POST)
// request to insert an order record
public String addorder(@ModelAttribute("Order") Order order, Map<String, Object> model) {
  Customer cust = customer_respository.getObject(order.getCustomer().getCust_id());
  order.setCustomer(cust);
  order.setProduct(product_respository.getObject(order.getProduct().getProdid()));
  respository.saveObject(order);
  // send mail using the Velocity template
  mailSenderService.sendmail(cust);
  return "order";
}
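To sanity-check the template outside of the mail flow, you can merge it directly with a model. This is a small sketch using the same VelocityEngineUtils helper and template path as the listing above; the customer variable is assumed to be a populated Customer instance:

Map<String, Object> model = new HashMap<String, Object>();
model.put("customer", customer);
String html = VelocityEngineUtils.mergeTemplateIntoString(
    velocityEngine,
    "com/packt/velocity/templates/orderconfirmation.vm",
    model);
// The output should contain the customer's name and address
System.out.println(html);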
Sending Spring mail over a different thread

There are other options for sending Spring mail asynchronously. One way is to hand the mail sending job to a separate thread. Spring comes with the taskExecutor package, which offers thread pooling functionality. Create a class called MailSenderAsyncService that implements the MailSender interface, import the org.springframework.core.task.TaskExecutor package, and create private Runnable classes for single and multiple messages. Here is the complete code for MailSenderAsyncService:

public class MailSenderAsyncService implements MailSender {

  @Resource(name = "mailSender")
  private MailSender mailSender;

  private TaskExecutor taskExecutor;

  @Autowired
  public MailSenderAsyncService(TaskExecutor taskExecutor) {
    this.taskExecutor = taskExecutor;
  }

  public void send(SimpleMailMessage simpleMessage) throws MailException {
    taskExecutor.execute(new SimpleMailMessageRunnable(simpleMessage));
  }

  public void send(SimpleMailMessage[] simpleMessages) throws MailException {
    taskExecutor.execute(new SimpleMailMessagesRunnable(simpleMessages));
  }

  private class SimpleMailMessageRunnable implements Runnable {
    private SimpleMailMessage simpleMailMessage;

    private SimpleMailMessageRunnable(SimpleMailMessage simpleMailMessage) {
      this.simpleMailMessage = simpleMailMessage;
    }

    public void run() {
      mailSender.send(simpleMailMessage);
    }
  }

  private class SimpleMailMessagesRunnable implements Runnable {
    private SimpleMailMessage[] simpleMessages;

    private SimpleMailMessagesRunnable(SimpleMailMessage[] simpleMessages) {
      this.simpleMessages = simpleMessages;
    }

    public void run() {
      mailSender.send(simpleMessages);
    }
  }
}

Configure the ThreadPool executor in the XML file. This uses the p namespace, which must be declared as xmlns:p="http://www.springframework.org/schema/p":

<bean id="taskExecutor"
  class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor"
  p:corePoolSize="5" p:maxPoolSize="10" p:queueCapacity="100"
  p:waitForTasksToCompleteOnShutdown="true"/>

Test the source code:

import javax.annotation.Resource;

import org.springframework.mail.MailSender;
import org.springframework.mail.SimpleMailMessage;
import org.springframework.test.context.ContextConfiguration;

@ContextConfiguration
public class MailSenderAsyncServiceTest {

  @Resource(name = "mailSender")
  private MailSender mailSender;

  public void testSendMails() throws Exception {
    SimpleMailMessage[] mailMessages = new SimpleMailMessage[5];
    for (int i = 0; i < mailMessages.length; i++) {
      SimpleMailMessage message = new SimpleMailMessage();
      message.setSubject(String.valueOf(i));
      mailMessages[i] = message;
    }
    mailSender.send(mailMessages);
  }

  public static void main(String[] args) throws Exception {
    MailSenderAsyncServiceTest test = new MailSenderAsyncServiceTest();
    test.testSendMails();
  }
}

Sending Spring mail with AOP

We can also send mail by integrating the mailing functionality with Aspect-Oriented Programming (AOP). This can be used to send mail after a user registers with an application. Think of a scenario where the user receives an activation mail after registration. This can also be used to send information about an order placed on an application. Use the following steps to create a MailAdvice class using AOP.

Create a package called com.packt.aop. Create a class called MailAdvice:

public class MailAdvice {
  public void advice(final ProceedingJoinPoint proceedingJoinPoint) {
    new Thread(new Runnable() {
      public void run() {
        System.out.println("proceedingJoinPoint:" + proceedingJoinPoint);
        try {
          proceedingJoinPoint.proceed();
        } catch (Throwable t) {
          // All we can do is log the error.
          System.out.println(t);
        }
      }
    }).start();
  }
}

This class creates a new thread and starts it. In the run method, the proceedingJoinPoint.proceed() method is called. ProceedingJoinPoint is a class available in the AspectJ library.
Update the dispatcher-servlet.xml file with the AOP configuration. Declare the aop namespace (xmlns:aop="http://www.springframework.org/schema/aop", with the matching schemaLocation entry) and add the following configuration, which wraps JavaMailSenderImpl.send() with the advice method defined above:

<bean id="mailAdvice" class="com.packt.aop.MailAdvice"/>

<aop:config>
  <aop:aspect ref="mailAdvice">
    <aop:around method="advice"
      pointcut="execution(* org.springframework.mail.javamail.JavaMailSenderImpl.send(..))"/>
  </aop:aspect>
</aop:config>

Summary

In this article, we demonstrated how to create a mailing service and configure it using the Spring API. We also demonstrated how to send mail with attachments using MIME messages, and how to dedicate a separate thread to sending mail using Spring's TaskExecutor. We saw an example in which mail is sent to multiple recipients, and an implementation that uses the Velocity engine to create templated mail. In the last section, we demonstrated how Spring-supported mail can be sent using Spring AOP and threads.

Resources for Article:

Further resources on this subject:

Time Travelling with Spring [article]
Welcome to the Spring Framework [article]
Creating a Spring Application [article]

Command-Line Tools

Packt
06 Jul 2017
9 min read
In this article by Aaron Torres, author of the book Go Cookbook, we will cover the following recipes:

Using command-line arguments
Working with Unix pipes
An ANSI coloring application

(For more resources related to this topic, see here.)

Using command-line arguments

This article will expand on other uses for these arguments by constructing a command that supports nested subcommands. This will demonstrate FlagSets and the use of positional arguments passed into your application. This recipe requires a main function to run. There are a number of third-party packages for dealing with complex nested arguments and flags, but we'll again investigate doing so using only the standard library.

Getting ready

You need to perform the following steps for the installation:

Download and install Go on your operating system at https://golang.org/doc/install and configure your GOPATH.
Open a terminal/console application.
Navigate to your GOPATH/src and create a project directory, for example, $GOPATH/src/github.com/yourusername/customrepo. All code will be run and modified from this directory.
Optionally, install the latest tested version of the code using the go get github.com/agtorre/go-cookbook/ command.

How to do it...

From your terminal/console application, create and navigate to the chapter2/cmdargs directory. Copy tests from https://github.com/agtorre/go-cookbook/tree/master/chapter2/cmdargs or use this as an exercise to write some of your own. Create a file called cmdargs.go with the following content:

package main

import (
	"flag"
	"fmt"
	"os"
)

const version = "1.0.0"
const usage = `Usage: %s [command]

Commands:
	Greet
	Version
`
const greetUsage = `Usage: %s greet name [flag]

Positional Arguments:
	name
		the name to greet

Flags:
`

// MenuConf holds all the levels
// for a nested cmd line argument
type MenuConf struct {
	Goodbye bool
}

// SetupMenu initializes the base flags
func (m *MenuConf) SetupMenu() *flag.FlagSet {
	menu := flag.NewFlagSet("menu", flag.ExitOnError)
	menu.Usage = func() {
		fmt.Printf(usage, os.Args[0])
		menu.PrintDefaults()
	}
	return menu
}

// GetSubMenu returns a flag set for a submenu
func (m *MenuConf) GetSubMenu() *flag.FlagSet {
	submenu := flag.NewFlagSet("submenu", flag.ExitOnError)
	submenu.BoolVar(&m.Goodbye, "goodbye", false, "Say goodbye instead of hello")
	submenu.Usage = func() {
		fmt.Printf(greetUsage, os.Args[0])
		submenu.PrintDefaults()
	}
	return submenu
}

// Greet will be invoked by the greet command
func (m *MenuConf) Greet(name string) {
	if m.Goodbye {
		fmt.Println("Goodbye " + name + "!")
	} else {
		fmt.Println("Hello " + name + "!")
	}
}

// Version prints the current version that is
// stored as a const
func (m *MenuConf) Version() {
	fmt.Println("Version: " + version)
}

Create a file called main.go with the following content:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	c := MenuConf{}
	menu := c.SetupMenu()
	menu.Parse(os.Args[1:])

	// we use arguments to switch between commands
	// flags are also an argument
	if len(os.Args) > 1 {
		// we don't care about case
		switch strings.ToLower(os.Args[1]) {
		case "version":
			c.Version()
		case "greet":
			f := c.GetSubMenu()
			if len(os.Args) < 3 {
				f.Usage()
				return
			}
			if len(os.Args) > 3 {
				f.Parse(os.Args[3:])
			}
			c.Greet(os.Args[2])
		default:
			fmt.Println("Invalid command")
			menu.Usage()
			return
		}
	} else {
		menu.Usage()
		return
	}
}

Run the go build command.
Run the following commands and try a few other combinations of arguments:

$ ./cmdargs -h
Usage: ./cmdargs [command]

Commands:
	Greet
	Version

$ ./cmdargs version
Version: 1.0.0

$ ./cmdargs greet
Usage: ./cmdargs greet name [flag]

Positional Arguments:
	name
		the name to greet

Flags:
	-goodbye
		Say goodbye instead of hello

$ ./cmdargs greet reader
Hello reader!

$ ./cmdargs greet reader -goodbye
Goodbye reader!

If you copied or wrote your own tests, go up one directory and run go test, and ensure all tests pass.

How it works...

FlagSets can be used to set up independent lists of expected arguments, usage strings, and more. The developer is required to do validation on a number of arguments, parse the right subset of arguments into commands, and define usage strings. This can be error-prone and requires a lot of iteration to get completely correct. The flag package makes parsing arguments much easier and includes convenience methods to get the number of flags, arguments, and more. This recipe demonstrates basic ways to construct a complex command-line application using arguments, including a package-level config, required positional arguments, multi-leveled command usage, and how to split these things into multiple files or packages if needed.

Working with Unix pipes

Unix pipes are useful when passing the output of one program to the input of another. Consider the following example:

$ echo "test case" | wc -l
1

In a Go application, the left-hand side of the pipe can be read in using os.Stdin, which acts like a file descriptor. To demonstrate this, this recipe will take an input on the left-hand side of a pipe and return a list of words and their number of occurrences. These words will be tokenized on white space.

Getting ready

Refer to the Getting ready section of the Using command-line arguments recipe.

How to do it...

From your terminal/console application, create a new directory, chapter2/pipes. Navigate to that directory and copy tests from https://github.com/agtorre/go-cookbook/tree/master/chapter2/pipes or use this as an exercise to write some of your own. Create a file called pipes.go with the following content:

package main

import (
	"bufio"
	"fmt"
	"os"
)

// WordCount takes a file and returns a map
// with each word as a key and its number of
// appearances as a value
func WordCount(f *os.File) map[string]int {
	result := make(map[string]int)

	// make a scanner to work on the file
	// io.Reader interface
	scanner := bufio.NewScanner(f)
	scanner.Split(bufio.ScanWords)

	for scanner.Scan() {
		result[scanner.Text()]++
	}

	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "reading input:", err)
	}

	return result
}

func main() {
	fmt.Printf("string: number_of_occurrences\n\n")
	for key, value := range WordCount(os.Stdin) {
		fmt.Printf("%s: %d\n", key, value)
	}
}

Run echo "some string" | go run pipes.go. You may also run:

go build
echo "some string" | ./pipes

You should see the following output:

$ echo "test case" | go run pipes.go
string: number_of_occurrences

test: 1
case: 1

$ echo "test case test" | go run pipes.go
string: number_of_occurrences

test: 2
case: 1

If you copied or wrote your own tests, go up one directory and run go test, and ensure that all tests pass.

How it works...

Working with pipes in Go is pretty simple, especially if you're familiar with working with files. This recipe uses a scanner to tokenize the io.Reader interface of the os.Stdin file object. You can see how you must check for errors after completing all of the reads.
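Because bufio.NewScanner accepts any io.Reader, not just *os.File, the same logic is easy to exercise without a real pipe. The following is a small variant under that assumption - the function name is our own, not from the book:

package main

import (
	"bufio"
	"fmt"
	"io"
	"strings"
)

// WordCountReader is a variant of WordCount that accepts any io.Reader,
// so it can be tested with an in-memory string instead of os.Stdin.
func WordCountReader(r io.Reader) map[string]int {
	result := make(map[string]int)
	scanner := bufio.NewScanner(r)
	scanner.Split(bufio.ScanWords)
	for scanner.Scan() {
		result[scanner.Text()]++
	}
	return result
}

func main() {
	counts := WordCountReader(strings.NewReader("test case test"))
	fmt.Println(counts) // map[case:1 test:2]
}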
An ANSI coloring application

Coloring text in an ANSI terminal application is handled by escape sequences emitted before and after the section of text that you want colored. This recipe will explore a basic coloring mechanism to color text red or keep it plain. For a more complete application, take a look at https://github.com/agtorre/gocolorize, which supports many more colors and text types, and implements the fmt.Formatter interface for ease of printing.

Getting ready

Refer to the Getting ready section of the Using command-line arguments recipe.

How to do it...

From your terminal/console application, create and navigate to the chapter2/ansicolor directory. Copy tests from https://github.com/agtorre/go-cookbook/tree/master/chapter2/ansicolor or use this as an exercise to write some of your own. Create a file called color.go with the following content:

package ansicolor

import "fmt"

// Color of text
type Color int

const (
	// ColorNone is default
	ColorNone Color = iota
	// Red colored text
	Red
	// Green colored text
	Green
	// Yellow colored text
	Yellow
	// Blue colored text
	Blue
	// Magenta colored text
	Magenta
	// Cyan colored text
	Cyan
	// White colored text
	White
	// Black colored text
	Black Color = -1
)

// ColorText holds a string and its color
type ColorText struct {
	TextColor Color
	Text      string
}

func (r *ColorText) String() string {
	if r.TextColor == ColorNone {
		return r.Text
	}

	value := 30
	if r.TextColor != Black {
		value += int(r.TextColor)
	}
	return fmt.Sprintf("\033[0;%dm%s\033[0m", value, r.Text)
}

Create a new directory named example. Navigate to example and then create a file named main.go with the following content. Ensure that you modify the ansicolor import to use the path you set up in step 1:

package main

import (
	"fmt"

	"github.com/agtorre/go-cookbook/chapter2/ansicolor"
)

func main() {
	r := ansicolor.ColorText{ansicolor.Red, "I'm red!"}
	fmt.Println(r.String())

	r.TextColor = ansicolor.Green
	r.Text = "Now I'm green!"
	fmt.Println(r.String())

	r.TextColor = ansicolor.ColorNone
	r.Text = "Back to normal..."
	fmt.Println(r.String())
}

Run go run main.go. Alternatively, you may also run the following:

go build
./example

You should see the following output, with the text colored if your terminal supports the ANSI coloring format:

$ go run main.go
I'm red!
Now I'm green!
Back to normal...

If you copied or wrote your own tests, go up one directory and run go test, and ensure that all the tests pass.

How it works...

This application makes use of a struct to maintain the state of the colored text: it stores the color of the text and the value of the text. The final string is rendered when you call the String() method, which will return either colored text or plain text, depending on the values stored in the struct. By default, the text will be plain.

Summary

In this article, we demonstrated basic ways to construct a complex command-line application using arguments, including a package-level config, required positional arguments, multi-leveled command usage, and how to split these things into multiple files or packages if needed. We saw how to work with Unix pipes, and explored a basic coloring mechanism to color text red or keep it plain.

Resources for Article:

Further resources on this subject:

Building a Command-line Tool [article]
A Command-line Companion Called Artisan [article]
Scaffolding with the command-line tool [article]
Don't call us ninjas or rockstars, say developers

Richard Gall
30 May 2018
5 min read
Words like 'ninja' and 'rockstar' have been flying around the tech world for some time now. Data released by the recruitment website Indeed at the end of 2017 showed that use of the term 'rockstar' in job postings had increased 19% since 2015. We seem to live in a world where 'sexing up' job roles has become the norm. When top talent is hard to come by, it makes sense: the words offer some degree of status to candidates, and imply that the organizations behind them are forward-thinking.

But it's starting to get boring. In this year's Skill Up survey, 57% of respondents said they didn't like creative terms like 'rockstar', 'ninja', and 'wizard'; only 26% said they actually liked them. And while words like these might be boring, they can be harmful too. In an age of spin and fake news, using language to dress up and redefine things can have an impact we might not expect.

Sign up to the Packt Hub weekly newsletter and receive a free PDF of this year's Skill Up report.

Using words like rockstar and ninja pits developers against each other

The industry's insistence on using these words in everything from recruitment to conferences cultivates a bizarre class system within tech. When we start calling people rockstars, it suggests something about the role they play within a company or engineering team. It says 'these people are doing something really exciting' while everyone else, presumably, isn't. While it's true that hierarchies are part and parcel of any modern organization, this superficial labeling isn't helpful. 'Collaboration' and 'agile' are buzzwords that are as overused as ninja and rockstar, but at least they offer something practical and positive. And let's be honest - collaborating is important if we're to build better software and have better working lives.

The unforeseen impact on developer mental health

An unforeseen effect of these words could be a negative impact on mental health in the tech industry. We already know that burnout is becoming a common occurrence, as tech professionals are overworked and pushed to breaking point. This is particularly true of startup culture, where engineers are driven to develop products on incredibly tight schedules as owners seek investment and investors look for signs of growth, but the trend is spreading. Arguably, the gap between management and engineers is playing into this. The results of innovation look shiny and exciting, but the actual work - which can, as we know, be repetitive, boring, and hard as much as it can be enjoyable - isn't properly understood.

Ninjas, rockstars and the commodification of technical skill

It has become a truism that communication is important when it comes to tech. Words like ninja and rockstar make communication harder - they conceal the work that engineers actually do. It's great that they shine a spotlight on technical professionals and their skills, but they do so in a way that is quite euphemistic. They hint at the value of the skills, but fail to engage with why those skills are important, how you develop them, and how they can be properly leveraged.

More specifically, words like 'ninja' and 'rockstar' turn technical expertise into a product. They make skills marketable; they turn knowledge into a commodity. Good for you if that helps you earn a little more money in your next job; even better if it lands you a book deal or a spot speaking at a conference. But all you're really doing is taking advantage of the technical skills bubble.
It won't last forever, and in the long run it probably won't be good news for the industry: some people will be overvalued, while others will be undervalued.

Ninjas, rockstars and open source culture

These irritating euphemisms come from a number of different sources. Tech recruitment has played a big part, as companies try to attract top tech talent. So has modern corporate culture, which has been trying to loosen its proverbial tie for the last decade. But it's also worth noting that rockstars and ninjas come out of open source culture - a culture celebrated for its lack of central authority. As that authority recedes, a space opens up for 'experts' to take the lead. In the past, we might have quaintly referred to these people as 'community figures'. As open source has moved mainstream, the commodification of technical expertise has found form in the tech ninja and rockstar.

But while rockstars and ninjas appear to be the shining lights of open source culture, they might be damaging to it as well. If open source culture is 'led' by a number of people the world begins referring to as rockstars, the very foundations of it begin to move. It's no longer quite as open as it used to be.

True, perhaps we do need rockstars and ninjas in open source. These are people who evangelize about certain projects. These are people who can offer unique and useful perspectives on important debates and issues in their respective fields, right? Well, sort of. It is important for people to discuss new ideas and pioneer new ways of doing things, but this doesn't mean we need to sex things up. After all, certain people aren't more entitled to an opinion just because they have a book deal. Yes, they have experience, but it's important that communities don't get left behind as the tech industry chases the dream of relentless innovation. Of course it's great to talk, but we've all got to do the work too.