
How-To Tutorials - Programming

1083 Articles

Learning Essential Linux Commands for Navigating the Shell Effectively 

Expert Network
16 Aug 2021
9 min read
Once we learn how to deploy an Ubuntu server, how to manage users, and how to manage software packages, we should take a moment to learn some important concepts and commands that will allow us to build more of the foundational knowledge that will serve us well as we move on to advanced concepts and tread the path to expertise. These foundational concepts include core Linux commands for navigating the shell.

This article is an excerpt from the book Mastering Ubuntu Server, Third Edition by Jeremy “Jay” La Croix, a hands-on book that will teach you how to deploy, maintain, and troubleshoot Ubuntu Server.

Learning essential Linux commands

Building solid competency on the command line is essential and effectively gives any system administrator or engineer superpowers. Our new abilities won’t allow us to leap tall buildings in a single bound, but they will definitely enable us to execute terminal commands as if we’re ninjas. While we won’t master the art of using the command line in this section (that can only come with years of experience), we will definitely become more confident.

First, let’s talk about moving from one place to another within the Linux filesystem. Specifically, by “Linux filesystem”, I’m referring to the default structure of the various folders (also referred to as “directories”) contained within your Ubuntu installation. The Linux filesystem contains many important directories, each with its own designated purpose, which we’ll talk about in more detail in the book. Before we can explore that further, we’ll need to learn how to navigate from one directory to another. The first command we’ll cover in this section, relative to navigating the filesystem, will clarify the directory you’re currently working from. For that, we have the pwd command.

The pwd command

pwd stands for print working directory, and it shows you where you currently are in the filesystem. If you run it, you may see output such as this:

Figure 4.1: Viewing the current working directory

In this example, when I ran pwd, the output informed me that my current working directory is /home/jay. This is known as your home directory and, by default, every user has one. This is where all the files for your user account will reside by default. Sure, you can create files anywhere you’d like, even outside your home directory, if you have permission to do so or you use sudo. But just because you can doesn’t mean you should. As you’ll learn in this article, the Linux filesystem has a designated place for just about everything. But your home directory, located at /home/<username>, is yours. You own it, you control it; it’s your home on the server. In the early 2000s, Linux installations with a graphical user interface even depicted your home directory with an icon of a house.

Typically, files that you create in your home directory will have a permission string similar to this:

-rw-rw-r-- 1 jay  jay      0 Jul  5 14:10 testfile.txt

As you can see, by default, files you create in your home directory are owned by your user and your group, and are readable by all three categories (user, group, and other).

The cd command

To change our current directory and navigate to another, we can use the cd command along with a path we’d like to move to:

cd /etc

Now, I haven’t gone over the file and directory layout yet, so I just randomly picked the /etc directory. The forward slash at the beginning designates the beginning of the filesystem. More on that later.
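Putting pwd and cd together, a typical session might look like this (the jay user and the output paths are illustrative; yours will differ):

pwd
/home/jay
cd /etc
pwd
/etc
cd /home/jay
pwd
/home/jay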
Now, we’re in the /etc directory, and our command prompt has changed as well:

Figure 4.2: Command prompt and pwd command after changing a directory

As you can probably guess, the cd command stands for change directory, and it’s how you move your working directory from one place to another while navigating around. You can use the following command, for example, to return to the home directory:

cd /home/<user>

In fact, there are several ways to return home, a few of which are demonstrated in the following screenshot:

Figure 4.3: Other ways of navigating to the home directory

The first command, cd -, doesn’t actually have anything to do with your home directory specifically. It’s a neat trick that returns you to whatever directory you were in most recently. For me, the cd - command took me to the previous directory I was just in, which just so happened to be /home/jay. The second command, cd /home/jay, took me directly to my home directory, since I called out the entire path. The last command, cd ~, also took me to my home directory. This is because ~ is shorthand for the full path to your home directory, so you don’t ever really have to type out the entire path to /home/<user>. You can simply refer to that path as ~.

The ls command

Another essential command is ls. The ls command lists the contents of the current working directory. We probably don’t have any contents in our home directory yet. But if we navigate to /etc by running cd /etc, as we did earlier, and then execute ls, we’ll see that the /etc directory has a number of files in it. Go ahead and try it yourself:

cd /etc
ls

We didn’t actually have to change our working directory to /etc just to list its contents. We could’ve just executed the following command:

ls /etc

Even better, we can run:

ls -l /etc

This gives us the contents in a long list, which I think is much easier to understand. It will show each directory or file entry on its own line, along with the permission string. You’re probably already familiar with ls as well as ls -l, so I won’t go into too much more detail here. The -l portion of the ls command in that example is known as an argument. I’m not referring to an argument such as the ever-ensuing debate in the Linux community over which command-line text editor is the best between vim and emacs (it’s clearly vim). Instead, I’m referring to the concept of an argument in shell commands, which allows you to override the defaults or feed options to the command in some way, such as in this example, where we format the output of ls as a long list.

The rm command

The rm command is another one that we touched on earlier, when we were discussing how to manually remove the home directory of a user that was removed from the system. So, at this point, you’re probably well aware of that command and what it does (it removes files and directories). It’s a potentially dangerous command, as you could use it to accidentally remove something that you shouldn’t have. We used the following command to remove the home directory of user dscully:

rm -r /home/dscully

As you can see, we’re using the -r argument to alter the behavior of the rm command, which, by default, doesn’t remove directories but only files. The -r argument instructs rm to remove everything recursively, even if it’s a directory. It will remove subdirectories of the path as well, so you’ll definitely want to be careful with this command.
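One cautious habit worth adopting here (my suggestion, not from the book) is to inspect the target first, or ask rm to prompt you, before deleting recursively:

ls -R /home/dscully     # list everything the recursive delete would remove
rm -ri /home/dscully    # -i makes rm ask for confirmation at each step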
As I’ve mentioned earlier in the book, if you use sudo with rm, you can hypothetically delete your entire Ubuntu installation! Another option offered by rm is the -f argument, which is short for force, and it tells rm not to prompt before removing things. This argument won’t be needed as often, and use cases for it are outside the scope of this article. But keep in mind that it exists, should you need it.

The touch command

Another foundational command that’s good to know is touch, which actually serves two purposes. First, assuming you have permission to do so in your current working directory, the touch command will create an empty file if it doesn’t already exist. Second, the touch command will update the modification time of a file or directory if it does already exist:

Figure 4.4: Experimenting with the touch command

To illustrate this, in the related screenshot, I ran several commands. First, I ran the following command to create an empty file:

touch testfile.txt

That file didn’t exist before, so when I ran ls -l afterward, it showed the newly created file with a size of 0 bytes. Next, I ran the touch testfile.txt command again a minute later, and you can see in the screenshot that the modification time went from 15:12 to 15:13.

When it comes to viewing the contents of a file, we’ll get to that later on in the book, Mastering Ubuntu Server, Third Edition. And there are definitely more commands that we’ll need to learn to build the basis of our foundation. But for now, let’s take a break from the foundational concepts to understand the Linux filesystem layout better.

Summary

There are more Linux commands than you will ever be able to memorize. Most of us just memorize our favorite commands and variations of commands. You’ll develop your own menu of these commands as you learn and expand your knowledge. In this article, we covered many of the foundational commands that are, for the most part, essential. Commands such as pwd, cd, ls, rm, and touch were explored this time around.

About the author

Jeremy “Jay” La Croix is a technologist and open-source enthusiast, specializing in Linux. Jay is currently the Director of Cloud Services at Adaptavist. He has 20 years of field experience across different firms as a Solutions Architect and holds a master’s degree in Information Systems Technology Management from Capella University.

Jay also has an active Linux-focused YouTube channel with over 186K followers and 15.9M views, available at LearnLinux.tv, where he posts instructional tutorial videos and other Linux-related content.


Clean Up Your Code

Packt
19 Dec 2016
23 min read
In this article by Michele Bertoli, the author of the book React Design Patterns and Best Practices, we will learn to use JSX without any problems or unexpected behaviors. To do that, it is important to understand how it works under the hood and the reasons why it is a useful tool for building UIs. Our goal is to write clean and maintainable JSX code, and to achieve that we have to know where it comes from, how it gets translated to JavaScript, and which features it provides. In the first section, we will take a little step back, but please bear with me, because it is crucial to master the basics before applying the best practices.

In this article, we will see:

- What JSX is and why we should use it
- What Babel is and how we can use it to write modern JavaScript code
- The main features of JSX and the differences between HTML and JSX
- The best practices for writing JSX in an elegant and maintainable way

JSX

Let's see how we can declare our elements inside our components. React gives us two ways to define our elements: the first one is by using JavaScript functions, and the second one is by using JSX, an optional XML-like syntax. In the beginning, JSX is one of the main reasons why people fail to approach React, because looking at the examples on the homepage and seeing JavaScript mixed with HTML for the first time does not seem right to most of us. As soon as we get used to it, we realize that it is very convenient, exactly because it is similar to HTML and looks very familiar to anyone who has already created user interfaces on the web. The opening and closing tags make it easier to represent nested trees of elements, something that would have been unreadable and hard to maintain using plain JavaScript.

Babel

In order to use JSX (and ES2015) in our code, we have to install Babel. First of all, it is important to clearly understand the problems it can solve for us and why we need to add a step to our process. The reason is that we want to use features of the language that have not been implemented yet in the browser, our target environment. Those advanced features make our code cleaner for developers, but the browser cannot understand and execute them. So the solution is to write our scripts in JSX and ES2015, and when we are ready to ship, we compile the sources into ES5, the standard specification that is implemented in the major browsers today. Babel is a popular JavaScript compiler widely adopted within the React community: it can compile ES2015 code into ES5 JavaScript, as well as compile JSX into JavaScript functions. The process is called transpilation, because it compiles the source into a new source rather than into an executable. Using it is pretty straightforward; we just install it:

npm install --global babel-cli

If you do not like to install it globally (developers usually tend to avoid that), you can install Babel locally to a project and run it through an npm script, but for the purpose of this article a global instance is fine. When the installation is complete, we can run the following command to compile our JavaScript files:

babel source.js -o output.js

One of the reasons why Babel is so powerful is that it is highly configurable. Babel is just a tool to transpile a source file into an output file, but to apply some transformations we need to configure it.
Luckily, there are some very useful presets of configurations which we can easily install and use:

npm install --global babel-preset-es2015 babel-preset-react

Once the installation is done, we create a configuration file called .babelrc and put the following lines into it to tell Babel to use those presets:

{
  "presets": [
    "es2015",
    "react"
  ]
}

From this point on, we can write ES2015 and JSX in our source files and execute the output files in the browser.

Hello, World!

Now that our environment has been set up to support JSX, we can dive into the most basic example: generating a div element. This is how you would create a div with React's createElement function:

React.createElement('div')

React has some shortcut methods for DOM elements, and the following line is equivalent to the one above:

React.DOM.div()

This is the JSX for creating a div element:

<div />

It looks identical to the way we always used to create the markup of our HTML pages. The big difference is that we are writing the markup inside a .js file, but it is important to notice that JSX is only syntactic sugar, and it gets transpiled into JavaScript before being executed in the browser. In fact, our <div /> is translated into React.createElement('div') when we run Babel, and that is something we should always keep in mind when we write our templates.

DOM elements and React components

With JSX we can obviously create both HTML elements and React components; the only difference is whether or not they start with a capital letter. For example, to render an HTML button we use <button />, while to render our Button component we use <Button />. The first button gets transpiled into:

React.createElement('button')

While the second one into:

React.createElement(Button)

The difference here is that in the first call we are passing the type of the DOM element as a string, while in the second one we are passing the component itself, which means that it should exist in the scope to work. As you may have noticed, JSX supports self-closing tags, which are pretty good for keeping the code terse and do not require us to repeat unnecessary tags.

Props

JSX is very convenient when your DOM elements or React components have props; in fact, following XML, it is pretty easy to set attributes on elements:

<img src="https://facebook.github.io/react/img/logo.svg" alt="React.js" />

The equivalent in JavaScript would be:

React.createElement("img", {
  src: "https://facebook.github.io/react/img/logo.svg",
  alt: "React.js"
});

Which is way less readable, and even with only a couple of attributes it starts getting hard to read without a bit of reasoning.

Children

JSX allows you to define children to describe the tree of elements and compose complex UIs. A basic example could be a link with text inside it:

<a href="https://facebook.github.io/react/">Click me!</a>

Which would be transpiled into:

React.createElement(
  "a",
  { href: "https://facebook.github.io/react/" },
  "Click me!"
);

Our link can be enclosed inside a div for some layout requirements, and the JSX snippet to achieve that is the following:

<div>
  <a href="https://facebook.github.io/react/">Click me!</a>
</div>

With the JSX equivalent being:

React.createElement(
  "div",
  null,
  React.createElement(
    "a",
    { href: "https://facebook.github.io/react/" },
    "Click me!"
  )
);

It now becomes clear how the XML-like syntax of JSX makes everything more readable and maintainable, but it is always important to know the JavaScript parallel of our JSX, to keep control over the creation of elements.
The good part is that we are not limited to having elements as children of elements; we can use JavaScript expressions, such as functions or variables. To do that, we just have to put the expression inside curly braces:

<div>
  Hello, {variable}.
  I'm a {function()}.
</div>

The same applies to non-string attributes:

<a href={this.makeHref()}>Click me!</a>

Differences with HTML

So far we have seen how JSX is similar to HTML; let's now see the little differences between them and the reasons why they exist.

Attributes

We always have to keep in mind that JSX is not a standard language and that it gets transpiled into JavaScript. Because of that, some attributes cannot be used. For example, instead of class we have to use className, and instead of for we have to use htmlFor:

<label className="awesome-label" htmlFor="name" />

The reason is that class and for are reserved words in JavaScript.

Style

A pretty significant difference is the way the style attribute works. The style attribute does not accept a CSS string as its HTML parallel does; it expects a JavaScript object where the style names are camelCased:

<div style={{ backgroundColor: 'red' }} />

Root

One important difference with HTML worth mentioning is that, since JSX elements get translated into JavaScript function calls and you cannot return two function calls in JavaScript, whenever you have multiple elements at the same level you are forced to wrap them in a parent. Let's see a simple example:

<div />
<div />

Gives us the following error:

Adjacent JSX elements must be wrapped in an enclosing tag

While this works:

<div>
  <div />
  <div />
</div>

It is pretty annoying having to add unnecessary div tags just to make JSX work, but the React developers are trying to find a solution: https://github.com/reactjs/core-notes/blob/master/2016-07/july-07.md

Spaces

There's one thing that can be a little bit tricky at the beginning, and again it regards the fact that we should always keep in mind that JSX is not HTML, even if it has an XML-like syntax. JSX, in fact, handles the spaces between text and elements differently from HTML, in a way that's counter-intuitive. Consider the following snippet:

<div>
  <span>foo</span>
  bar
  <span>baz</span>
</div>

In the browser, which interprets HTML, this code would give you foo bar baz, which is exactly what we expect it to be. In JSX, instead, the same code would be rendered as foobarbaz, because the three nested lines get transpiled as individual children of the div element, without taking the spaces into account. A common solution is to put a space explicitly between the elements:

<div>
  <span>foo</span>
  {' '}
  bar
  {' '}
  <span>baz</span>
</div>

As you may have noticed, we are using a string containing a single space, wrapped inside a JavaScript expression, to force the compiler to apply a space between the elements.

Boolean attributes

A couple more things worth mentioning, before starting for real, regard the way you define Boolean attributes in JSX. If you set an attribute without a value, JSX assumes that its value is true, following the same behavior as the HTML disabled attribute, for example. That means that if we want to set an attribute to false, we have to declare it explicitly as false:

<button disabled />
React.createElement("button", { disabled: true });

And:

<button disabled={false} />
React.createElement("button", { disabled: false });

This can be confusing in the beginning, because we may think that omitting an attribute would mean false, but it is not like that: with React we should always be explicit to avoid confusion.
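Pulling these differences together, here is a small sketch of a single form field that exercises className, htmlFor, a style object, an explicit space, and an explicit Boolean in one place (the class and field names are illustrative, not from the book):

<div style={{ marginTop: '10px' }}>
  <label className="awesome-label" htmlFor="name">Name</label>
  {' '}
  <input id="name" name="name" disabled={false} />
</div>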
Spread attributes

An important feature is the spread attributes operator, which comes from the Rest/Spread Properties for ECMAScript proposal, and is very convenient whenever we want to pass all the attributes of a JavaScript object to an element. A common practice that leads to fewer bugs is not to pass entire JavaScript objects down to children by reference, but to use their primitive values, which can be easily validated, making components more robust and error-proof. Let's see how it works:

const foo = { bar: 'baz' }
return <div {...foo} />

That gets transpiled into this:

var foo = { bar: 'baz' };
return React.createElement('div', foo);

JavaScript templating

Last but not least, we started from the point that one of the advantages of moving the templates inside our components, instead of using an external template library, is that we can use the full power of JavaScript, so let's start looking at what that means. The spread attributes operator is obviously an example of that, and another common one is that JavaScript expressions can be used as attribute values by wrapping them in curly braces:

<button disabled={errors.length} />

Now that we know how JSX works and have mastered it, we are ready to see how to use it in the right way, following some useful conventions and techniques.

Common patterns

Multi-line

Let's start with a very simple one: as we said, one of the main reasons why we should prefer JSX over React's createElement is its XML-like syntax and the way balanced opening/closing tags are perfect for representing a tree of nodes. Therefore, we should try to use it in the right way and get the most out of it. One example is that, whenever we have nested elements, we should always go multi-line:

<div>
  <Header />
  <div>
    <Main content={...} />
  </div>
</div>

Instead of:

<div><Header /><div><Main content={...} /></div></div>

The exception is if the children are not elements, but text or variables. In that case, it can make sense to remain on the same line and avoid adding noise to the markup, like:

<div>
  <Alert>{message}</Alert>
  <Button>Close</Button>
</div>

Always remember to wrap your elements inside parentheses when you write them on multiple lines. In fact, JSX always gets replaced by functions, and functions written on a new line can give you an unexpected result. Suppose, for example, that you are returning JSX from your render method, which is how you create UIs in React. The following example works fine, because the div is on the same line as the return:

return <div />

While this is not right:

return
  <div />

Because you would have:

return;
React.createElement("div", null);

That is why you have to wrap the statement in parentheses:

return (
  <div />
)

Multi-properties

A common problem in writing JSX arises when an element has multiple attributes. One solution would be to write all the attributes on the same line, but this would lead to very long lines, which we do not want in our code (see the next section for how to enforce coding style guides). A common solution is to write each attribute on a new line, with one level of indentation, and then put the closing bracket aligned with the opening tag:

<button
  foo="bar"
  veryLongPropertyName="baz"
  onSomething={this.handleSomething}
/>

Conditionals

Things get more interesting when we start working with conditionals, for example if we want to render some components only when certain conditions are matched.
The fact that we can use JavaScript is obviously a plus, but there are many different ways to express conditions in JSX, and it is important to understand the benefits and problems of each one of them in order to write code that is readable and maintainable at the same time. Suppose we want to show a logout button only if the user is currently logged in to our application. A simple snippet to start with is the following:

let button
if (isLoggedIn) {
  button = <LogoutButton />
}
return <div>{button}</div>

It works, but it is not very readable, especially if there are multiple components and multiple conditions. What we can do in JSX is use an inline condition:

<div>
  {isLoggedIn && <LoginButton />}
</div>

This works because if the condition is false, nothing gets rendered, but if the condition is true, the createElement function of the LoginButton gets called and the element is returned to compose the resulting tree. If the condition has an alternative (the classic if...else statement), and we want, for example, to show a logout button if the user is logged in and a login button otherwise, we can use JavaScript's if...else:

let button
if (isLoggedIn) {
  button = <LogoutButton />
} else {
  button = <LoginButton />
}
return <div>{button}</div>

Alternatively, and better, we can use a ternary condition, which makes our code more compact:

<div>
  {isLoggedIn ? <LogoutButton /> : <LoginButton />}
</div>

You can find the ternary condition used in popular repositories, like the Redux real-world example (https://github.com/reactjs/redux/blob/master/examples/real-world/src/components/List.js), where the ternary is used to show a loading label if the component is fetching the data, or "load more" inside a button, according to the value of the isFetching variable:

<button [...]>
  {isFetching ? 'Loading...' : 'Load More'}
</button>

Let's now see the best solution for when things get more complicated and, for example, we have to check more than one variable to determine whether to render a component or not:

<div>
  {dataIsReady && (isAdmin || userHasPermissions) && <SecretData />}
</div>

In this case, it is clear that using the inline condition is a good solution, but the readability is strongly impacted. What we can do instead is create a helper function inside our component and use it in JSX to verify the condition:

canShowSecretData() {
  const { dataIsReady, isAdmin, userHasPermissions } = this.props
  return dataIsReady && (isAdmin || userHasPermissions)
}

<div>
  {this.canShowSecretData() && <SecretData />}
</div>

As you can see, this change makes the code more readable and the condition more explicit. If you look at this code in six months' time, you will still find it clear just by reading the name of the function. If we do not like using functions, we can use an object's getters, which make the code more elegant. For example, instead of declaring a function, we define a getter:

get canShowSecretData() {
  const { dataIsReady, isAdmin, userHasPermissions } = this.props
  return dataIsReady && (isAdmin || userHasPermissions)
}

<div>
  {this.canShowSecretData && <SecretData />}
</div>

The same applies to computed properties: suppose you have two single properties for currency and value. Instead of creating the price string inside your render method, you can create a class function for that:

getPrice() {
  return `${this.props.currency}${this.props.value}`
}

<div>{this.getPrice()}</div>

Which is better, because it is isolated and you can easily test it in case it contains logic.
Alternatively, we can go a step further and, as we have just seen, use getters:

get price() {
  return `${this.props.currency}${this.props.value}`
}

<div>{this.price}</div>

Going back to conditional statements, there are other solutions that require using external dependencies. A good practice is to avoid external dependencies as much as we can, to keep our bundle smaller, but it may be worth it in this particular case, because improving the readability of our templates is a big win. The first solution is renderIf, which we can install with:

npm install --save render-if

And easily use in our projects like this:

const { dataIsReady, isAdmin, userHasPermissions } = this.props
const canShowSecretData = renderIf(dataIsReady && (isAdmin || userHasPermissions))

<div>
  {canShowSecretData(<SecretData />)}
</div>

We wrap our condition inside the renderIf function. The utility function that gets returned can be used as a function that receives the JSX markup to be shown when the condition is true. One goal that we should always keep in mind is never to add too much logic inside our components. Some of them will obviously require a bit of it, but we should try to keep them as simple and dumb as possible, so that we can spot and fix errors easily. We should at least try to keep the renderIf method as clean as possible, and to do that we could use another utility library called React Only If, which lets us write our components as if the condition were always true, by setting the conditional function using a higher-order component. To use the library, we just need to install it:

npm install --save react-only-if

Once it is installed, we can use it in our apps in the following way:

const SecretDataOnlyIf = onlyIf(
  SecretData,
  ({ dataIsReady, isAdmin, userHasPermissions }) => {
    return dataIsReady && (isAdmin || userHasPermissions)
  }
)

<div>
  <SecretDataOnlyIf
    dataIsReady={...}
    isAdmin={...}
    userHasPermissions={...}
  />
</div>

As you can see here, there is no logic at all inside the component itself. We pass the condition as the second parameter of the onlyIf function; when the condition is matched, the component gets rendered. The function that is used to validate the condition receives the props, the state, and the context of the component. In this way we avoid polluting our component with conditionals, so that it is easier to understand and reason about.

Loops

A very common operation in UI development is displaying lists of items. When it comes to showing lists, we realize that using JavaScript as a template language is a very good idea. If we write a function that returns an array inside our JSX template, each element of the array gets compiled into an element. As we have seen before, we can use any JavaScript expression inside curly braces, and the most obvious way to generate an array of elements, given an array of objects, is using map. Let's dive into a real-world example: suppose you have a list of users, each one with a name property attached to it. To create an unordered list to show the users, you can do:

<ul>
  {users.map(user => <li>{user.name}</li>)}
</ul>

This snippet is incredibly simple and incredibly powerful at the same time; it is where the power of HTML and JavaScript converge.

Control statements

Conditionals and loops are very common operations in UI templates, and you may feel wrong using the JavaScript ternary or the map function to do them.
JSX has been built in a way that it only abstracts the creation of the elements, leaving the logic parts to real JavaScript, which is great, but sometimes the code can become less clear. In general, we aim to remove all the logic from our components, and especially from our render methods, but sometimes we have to show and hide elements according to the state of the application, and very often we have to loop through collections and arrays. If you feel that using JSX for that kind of operation would make your code more readable, there is a Babel plugin for that: jsx-control-statements. It follows the same philosophy as JSX, and it does not add any real functionality to the language; it is just syntactic sugar that gets compiled into JavaScript. Let's see how it works. First of all, we have to install it:

npm install --save jsx-control-statements

Once it is installed, we have to add it to the list of Babel plugins in our .babelrc file:

"plugins": ["jsx-control-statements"]

From now on we can use the syntax provided by the plugin, and Babel will transpile it together with the common JSX syntax. A conditional statement written using the plugin looks like the following snippet:

<If condition={this.canShowSecretData}>
  <SecretData />
</If>

Which gets transpiled into a ternary expression:

{canShowSecretData ? <SecretData /> : null}

The If component is great, but if for some reason you have nested conditions in your render method, it can easily become messy and hard to follow. Here is where the Choose component comes to help:

<Choose>
  <When condition={...}>
    <span>if</span>
  </When>
  <When condition={...}>
    <span>else if</span>
  </When>
  <Otherwise>
    <span>else</span>
  </Otherwise>
</Choose>

Please notice that the code above gets transpiled into multiple ternaries. Last but not least, there is a "component" (always remember that we are not talking about real components, but just syntactic sugar) to manage loops, which is very convenient as well:

<ul>
  <For each="user" of={this.props.users}>
    <li>{user.name}</li>
  </For>
</ul>

The code above gets transpiled into a map function, no magic there. If you are used to using linters, you might wonder why the linter is not complaining about that code. In fact, the user variable doesn't exist before the transpilation, nor is it wrapped inside a function. To avoid those linting errors, there's another plugin to install: eslint-plugin-jsx-control-statements. If you did not understand the previous sentence, don't worry: in the next section we will talk about linting.

Sub-render

It is worth stressing that we always want to keep our components very small and our render methods very clean and simple. However, that is not an easy goal, especially when you are creating an application iteratively, and in the first iteration you are not sure exactly how to split the components into smaller ones. So, what should we be doing when the render method becomes too big to keep maintainable? One solution is splitting it into smaller functions in a way that lets us keep all the logic in the same component. Let's see an example:

renderUserMenu() {
  // JSX for user menu
}

renderAdminMenu() {
  // JSX for admin menu
}

render() {
  return (
    <div>
      <h1>Welcome back!</h1>
      {this.userExists && this.renderUserMenu()}
      {this.userIsAdmin && this.renderAdminMenu()}
    </div>
  )
}

This is not always considered a best practice, because it seems more obvious to split the component into smaller ones, but sometimes it helps just to keep the render method cleaner.
For example, in the Redux real-world examples, a sub-render method is used to render the load more button. Now that we are JSX power users, it is time to move on and see how to follow a style guide within our code to make it consistent.

Summary

In this article we developed a deep understanding of how JSX works and how to use it in the right way in our components. We started from the basics of the syntax to create solid knowledge that will let us master JSX and its features.


A really basic guide to batch file programming

Richard Gall
31 May 2018
3 min read
Batch file programming is a way of making a computer do things simply by creating, yes, you guessed it, a batch file. It's a way of doing things you might ordinarily do in the command prompt, but it automates some tasks, which means you don't have to write so much code. If it sounds straightforward, that's because it is, generally. Which is why it's worth learning...

Batch file programming is a good place to start learning how computers work

Of course, if you already know your way around batch files, I'm sure you'll agree it's a good way for someone relatively experienced in software to get to know their machine a little better. If you know someone who you think would get a lot from learning batch file programming, share this short guide with them!

Why would I write a batch script?

There are a number of reasons you might write batch scripts. It's particularly useful for resolving network issues, installing a number of programs on different machines, and even organizing files and folders on your computer. Imagine you have a recurring issue; with a batch file you can solve it quickly and easily, wherever you are, without having to write copious lines of code in the command line. Or maybe your desktop simply looks like a mess; with a little knowledge of batch file programming you can clean things up without too much effort.

How to write a batch file

Clearly, batch file programming can make your life a lot easier. Let's take a look at the key steps to begin writing batch scripts.

Step 1: Open your text editor

Batch file programming is really about writing commands, so you'll need your text editor open to begin. Notepad, WordPad, it doesn't matter!

Step 2: Begin writing code

As we've already seen, batch file programming is really about writing commands for your computer. The code is essentially the same as what you would write in the command prompt. Here are a few batch file commands you might want to know to get started:

- ipconfig - presents network information, like your IP and MAC address.
- start "" [website] - opens a specified website in your browser.
- rem - used if you want to make a comment or remark in your code (that is, for documentation purposes).
- pause - as you'd expect, pauses the script so it can be read before it continues.
- echo - displays text in the command prompt.
- %%a - refers to every file in a given folder.
- if - a conditional command.

The list of batch file commands is pretty long. There are plenty of other resources with an exhaustive list of commands you can use, but a good place to begin is this page on Wikipedia. A small example script is sketched at the end of this article.

Step 3: Save your batch file

Once you've written your commands in the text editor, you'll then need to save your document as a batch file. Title it, and suffix it with the .bat extension. You'll also need to make sure 'Save as type' is set to 'All files'.

That's basically it when it comes to batch file programming. Of course, there are some complex things you can do, but once you know the basics, getting into the code is where you can start to experiment.
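To tie those steps together, here is a minimal sketch of a complete batch file using a few of the commands above (the URL and the messages are placeholders; adapt them to your own task):

@echo off
rem @echo off stops the commands themselves being printed as they run
rem Show network information, open a website, then wait for a keypress
echo Displaying network information...
ipconfig
echo Opening a website...
start "" https://www.packtpub.com
pause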


How to call an Azure function from an ASP.NET Core MVC application

Aaron Lazar
03 May 2018
10 min read
In this tutorial, we'll learn how to call an Azure Function from an ASP.NET Core MVC application.

This article is an extract from the book C# 7 and .NET Core Blueprints, authored by Dirk Strauss and Jas Rademeyer. The book is a step-by-step guide that will teach you essential .NET Core and C# concepts with the help of real-world projects.

We will get started by creating an ASP.NET Core MVC application that will call our Azure Function to validate an email address entered into a login screen of the application. This application does no authentication at all; all it does is validate the email address entered. ASP.NET Core MVC authentication is a totally different topic and not the focus of this post.

1. In Visual Studio 2017, create a new project and select ASP.NET Core Web Application from the project templates. Click on the OK button to create the project. This is shown in the following screenshot:
2. On the next screen, ensure that .NET Core and ASP.NET Core 2.0 are selected from the drop-down options on the form. Select Web Application (Model-View-Controller) as the type of application to create. Don't bother with any kind of authentication or enabling Docker support. Just click on the OK button to create your project:
3. After your project is created, you will see the familiar project structure in the Solution Explorer of Visual Studio:

Creating the login form

For this next part, we can create a plain and simple vanilla login form. For a little bit of fun, let's spice things up a bit. Have a look on the internet for some free login form templates:

1. I decided to use a site called colorlib that provided 50 free HTML5 and CSS3 login forms in one of their recent blog posts. The URL to the article is: https://colorlib.com/wp/html5-and-css3-login-forms/. I decided to use Login Form 1 by Colorlib from their site. Download the template to your computer and extract the ZIP file. Inside the extracted ZIP file, you will see that we have several folders. Copy all the folders in this extracted ZIP file (leave the index.html file, as we will use it in a minute):
2. Next, go to the solution for your Visual Studio application. In the wwwroot folder, move or delete the contents, and paste the folders from the extracted ZIP file into the wwwroot folder of your ASP.NET Core MVC application. Your wwwroot folder should now look as follows:
3. Back in Visual Studio, you will see the folders when you expand the wwwroot node in the CoreMailValidation project.
4. I also want to focus your attention on the Index.cshtml and _Layout.cshtml files. We will be modifying these files next:
5. Open the Index.cshtml file and remove all the markup (except the section in the curly brackets) from this file. Paste the HTML markup from the index.html file of the ZIP file we extracted earlier. Do not copy all the markup from the index.html file; only copy the markup inside the <body></body> tags.
Your Index.cshtml file should now look as follows:

@{
  ViewData["Title"] = "Login Page";
}
<div class="limiter">
  <div class="container-login100">
    <div class="wrap-login100">
      <div class="login100-pic js-tilt" data-tilt>
        <img src="images/img-01.png" alt="IMG">
      </div>
      <form class="login100-form validate-form">
        <span class="login100-form-title">
          Member Login
        </span>
        <div class="wrap-input100 validate-input" data-validate="Valid email is required: ex@abc.xyz">
          <input class="input100" type="text" name="email" placeholder="Email">
          <span class="focus-input100"></span>
          <span class="symbol-input100">
            <i class="fa fa-envelope" aria-hidden="true"></i>
          </span>
        </div>
        <div class="wrap-input100 validate-input" data-validate="Password is required">
          <input class="input100" type="password" name="pass" placeholder="Password">
          <span class="focus-input100"></span>
          <span class="symbol-input100">
            <i class="fa fa-lock" aria-hidden="true"></i>
          </span>
        </div>
        <div class="container-login100-form-btn">
          <button class="login100-form-btn">
            Login
          </button>
        </div>
        <div class="text-center p-t-12">
          <span class="txt1">
            Forgot
          </span>
          <a class="txt2" href="#">
            Username / Password?
          </a>
        </div>
        <div class="text-center p-t-136">
          <a class="txt2" href="#">
            Create your Account
            <i class="fa fa-long-arrow-right m-l-5" aria-hidden="true"></i>
          </a>
        </div>
      </form>
    </div>
  </div>
</div>

The code for this chapter is available on GitHub here:

Next, open the _Layout.cshtml file and add all the links to the folders and files we copied into the wwwroot folder earlier. Use the index.html file for reference. You will notice that the _Layout.cshtml file contains the following piece of code: @RenderBody(). This is a placeholder that specifies where the Index.cshtml file content should be injected. If you are coming from ASP.NET Web Forms, think of the _Layout.cshtml page as a master page. Your _Layout.cshtml markup should look as follows:

<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <title>@ViewData["Title"] - CoreMailValidation</title>
  <link rel="icon" type="image/png" href="~/images/icons/favicon.ico" />
  <link rel="stylesheet" type="text/css" href="~/vendor/bootstrap/css/bootstrap.min.css">
  <link rel="stylesheet" type="text/css" href="~/fonts/font-awesome-4.7.0/css/font-awesome.min.css">
  <link rel="stylesheet" type="text/css" href="~/vendor/animate/animate.css">
  <link rel="stylesheet" type="text/css" href="~/vendor/css-hamburgers/hamburgers.min.css">
  <link rel="stylesheet" type="text/css" href="~/vendor/select2/select2.min.css">
  <link rel="stylesheet" type="text/css" href="~/css/util.css">
  <link rel="stylesheet" type="text/css" href="~/css/main.css">
</head>
<body>
  <div class="container body-content">
    @RenderBody()
    <hr />
    <footer>
      <p>© 2018 - CoreMailValidation</p>
    </footer>
  </div>
  <script src="~/vendor/jquery/jquery-3.2.1.min.js"></script>
  <script src="~/vendor/bootstrap/js/popper.js"></script>
  <script src="~/vendor/bootstrap/js/bootstrap.min.js"></script>
  <script src="~/vendor/select2/select2.min.js"></script>
  <script src="~/vendor/tilt/tilt.jquery.min.js"></script>
  <script>
    $('.js-tilt').tilt({
      scale: 1.1
    })
  </script>
  <script src="~/js/main.js"></script>
  @RenderSection("Scripts", required: false)
</body>
</html>

If everything worked out right, you will see the following page when you run your ASP.NET Core MVC application. The login form is obviously totally non-functional. However, the login form is totally responsive.
If you reduce the size of your browser window, you will see the form scale with it. This is what you want. If you want to explore the responsive design offered by Bootstrap, head on over to https://getbootstrap.com/ and go through the examples in the documentation.

The next thing we want to do is hook this login form up to our controller and call the Azure Function we created to validate the email address we entered. Let's look at doing that next.

Hooking it all up

To simplify things, we will be creating a model to pass to our controller:

1. Create a new class in the Models folder of your application called LoginModel and click on the Add button.
2. Your project should now look as follows. You will see the model added to the Models folder:
3. The next thing we want to do is add some code to our model to represent the fields on our login form. Add two properties called Email and Password:

namespace CoreMailValidation.Models
{
    public class LoginModel
    {
        public string Email { get; set; }
        public string Password { get; set; }
    }
}

4. Back in the Index.cshtml view, add the model declaration to the top of the page. This makes the model available for use in our view. Take care to specify the correct namespace where the model exists:

@model CoreMailValidation.Models.LoginModel
@{
  ViewData["Title"] = "Login Page";
}

5. The next portion of code needs to be written in the HomeController.cs file. Currently, it should only have an action called Index():

public IActionResult Index()
{
    return View();
}

6. Add a new async function called ValidateEmail that will use the base URL and parameter string of the Azure Function URL we copied earlier, and call it using an HTTP request. I will not go into much detail here, as I believe the code is pretty straightforward. All we are doing is calling the Azure Function using the URL we copied earlier and reading the return data:

private async Task<string> ValidateEmail(string emailToValidate)
{
    string azureBaseUrl = "https://core-mail-validation.azurewebsites.net/api/HttpTriggerCSharp1";
    string urlQueryStringParams = $"?code=/IS4OJ3T46quiRzUJTxaGFenTeIVXyyOdtBFGasW9dUZ0snmoQfWoQ==&email={emailToValidate}";

    using (HttpClient client = new HttpClient())
    {
        using (HttpResponseMessage res = await client.GetAsync($"{azureBaseUrl}{urlQueryStringParams}"))
        {
            using (HttpContent content = res.Content)
            {
                string data = await content.ReadAsStringAsync();
                if (data != null)
                {
                    return data;
                }
                else
                {
                    return "";
                }
            }
        }
    }
}

7. Create another public async action called ValidateLogin. Inside the action, check whether the ModelState is valid before continuing. For a nice explanation of what ModelState is, have a look at the following article: https://www.exceptionnotfound.net/asp-net-mvc-demystified-modelstate/. We then await the ValidateEmail function, and if the return data contains the word false, we know that the email validation failed. A failure message is then passed to the TempData property on the controller.

The TempData property is a place to store data until it is read. It is exposed on the controller by ASP.NET Core MVC. The TempData property uses a cookie-based provider by default in ASP.NET Core 2.0 to store the data. To examine data inside the TempData property without deleting it, you can use the Keep and Peek methods. To read more on TempData, see the Microsoft documentation here: https://docs.microsoft.com/en-us/aspnet/core/fundamentals/app-state?tabs=aspnetcore2x.
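The book walks through this ValidateLogin action step by step; as a rough sketch of how the pieces described above might fit together (the TempData key, the message text, and the redirect target are my assumptions, not the book's exact code), it could look something like this:

public async Task<IActionResult> ValidateLogin(LoginModel model)
{
    if (ModelState.IsValid)
    {
        // Call the Azure Function via our helper and inspect the returned payload
        string data = await ValidateEmail(model.Email);
        if (data.Contains("false"))
        {
            // Validation failed: surface a failure message to the view via TempData
            TempData["Message"] = "The email address entered is not valid.";
        }
        else
        {
            // The email is valid; a real application would authenticate here
            TempData["Message"] = "User logged in.";
        }
    }
    return RedirectToAction("Index");
}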
If the email validation passed, then we know that the email address is valid and we can do something else. Here, we are simply saying that the user is logged in. In reality, you would perform some sort of authentication here and then route to the correct controller.

So now you know how to call an Azure Function from an ASP.NET Core application. If you found this tutorial helpful and you'd like to learn more, go ahead and pick up the book C# 7 and .NET Core Blueprints.


Implement an API Design-first approach for building APIs [Tutorial]

Packt Editorial Staff
15 Jun 2018
9 min read
The Monster Records & Associates (MRA), a fictional music records company, having realised that its biggest asset is in fact its data, embarked on a digital transformation with the aim of offering its products and offerings completely online and via APIs. This article is an excerpt taken from the book Implementing Oracle API Platform Cloud Service, written by Andy Bell, Sander Rensen, Luis Weir, and Phil Wilkins. In this post we are going to take you through an interesting MRA case study, in which the company adopted an API design-first approach for building its APIs. We will go through the process and steps performed by MRA for this implementation.

The problem scenario

MRA had embarked on a digital transformation journey with the objective of becoming a digital organisation capable of offering tailored (à la carte) offerings to artists, such as handling an artist's online presence and on-demand distribution of an artist's digital media to music streaming services such as Spotify, Apple Music, Google Play Music, Amazon Prime Music, Pandora, and Deezer, to name a few. Having fully acknowledged that its most valuable asset is in fact its media data, MRA wanted to capitalise on such assets and determined that the quickest and most effective way to achieve this was by exposing a public API capable of providing access, on demand, to MRA's Media Catalogue assets, such as artists, songs, and albums.

Figure 1: MRA's Media Catalogue API

The idea being, once such assets became accessible via an API, streaming services could, on demand and 24x7, explore MRA's repertoire, purchase rights-to-use, and start streaming. In addition, the API could also open the door to a brand new global audience: millions of app developers constantly innovating. If only a fraction of such a huge audience leveraged MRA's Media Catalogue API, it would still represent considerable success for MRA. However, as with everything, there is a challenge in realising such a vision.

MRA, like many other organisations, had a level of experience with systems integrations and Service-Oriented Architectures (SOA). One of the lessons learnt from SOA, however, was that the cycles for designing, building, prototyping, and testing SOAP-based web services could be quite lengthy and expensive.

An API differs from a service in that the former represents the RESTful interface a consumer application interacts with, whereas the latter is the actual implementation (the code) behind an API. An HTTP endpoint exposed by a service is defined as an unmanaged API. When a service endpoint is accessed via an API gateway, where policies such as app-key validation, authentication/authorization, and other policies are enforced, it becomes a managed API. The book Implementing Oracle API Platform Cloud Service refers to managed APIs simply as APIs, and to unmanaged APIs as service endpoints.

Especially when it came to capturing and accommodating feedback from client application developers (API consumers), MRA had very bad experiences, as on the majority of occasions they came to realise very late in the software lifecycle that the web service developed did not meet the expectations of its consumers.

Figure 2: Feedback loops in traditional web service design

Refactoring web services in this approach wasn't just time-consuming but also an expensive exercise, as both the service design (WSDL) and the code had to be refactored and re-tested in order to accommodate the feedback received, before application developers could try the service again.
Naturally, service designers and developers avoided making changes as much as possible, challenging the feedback received from application developers, which in turn created friction between the two teams. On some occasions it even meant application developers finding alternative routes to solve their needs rather than using the web service. This was the worst possible scenario, as it meant that the investment made in implementing a web service could have been wasted.

API design-first process

Learning from experience and acknowledging the challenges that such a waterfall-like process imposed on a digital transformation initiative, MRA was quite keen to adopt a more agile, interactive, and quicker way to deliver modern RESTful APIs. The idea was clear: by engaging application developers (API consumers) in the initial stages of the design process, feedback would be captured and reflected back in the interface design (the API) early as well. Not only would this shorten feedback loops, but it would ensure that once the underlying services were implemented, they would expose an interface already endorsed and tested by its consumers, as opposed to risking building a service that wouldn't satisfy client expectations and needs late in the process.

Figure 3: API design-first approach vs traditional service design

The implication of this approach, though, is that the tooling and notation used to define the API had to be both simple and rich in capability, so that the task of designing and mocking API endpoints is quick and easy; if the process becomes cumbersome, it defeats its own purpose.

We elaborate on the different steps undertaken by MRA when designing its Media Catalogue API using Apiary and related tools in the book, Implementing Oracle API Platform Cloud Service. Here are the steps:

1. Defining the API type
2. Defining the API's domain semantics
3. Creating the API definition with its main resources
4. Trying the API mock
5. Defining MSON data structures
6. Pushing the API Blueprint to GitHub
7. Publishing the API mock in Oracle API Platform CS
8. Setting up Dredd for continuous testing of API endpoints against the API Blueprint

Defining the API type: A fundamental step when designing any API is to first define its type. This is important, as it will determine the guiding principles to consider during the design. We have the following types of APIs:

- Single-Purpose APIs: These are APIs that serve a unique and specific purpose, typically derived from an unambiguous need associated with a user journey or use case.
- Multi-Purpose APIs: These APIs are more generic in nature and are meant to satisfy not just one but multiple use cases and scenarios. They are not bound (coupled) to a specific user journey or system of engagement (for example, a mobile app), and are therefore ideal for reuse enterprise-wide.

MRA's Media Catalogue API was specifically targeted at two main audiences: music streaming services and application developers in general. Therefore, the API had to be both public and multi-purpose.

Defining the API's domain semantics: This step elaborates a proper understanding of the API's bounded context, the Media Catalogue. To do so, entities, key attributes, and relationships within the bounded context itself were identified and defined, using semantics appropriate for the purpose of the API.
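To make the idea of domain semantics concrete: the book later captures these entities in MSON (covered in the steps that follow), and a minimal sketch of an Artist data structure in that notation might look like the following. The attribute names and sample values here are my own illustrations, not the book's actual definitions:

# Data Structures

## Artist (object)
+ id: 101 (number, required) - Unique identifier for the artist
+ name: The Monsters (string, required) - The artist's name
+ albums (array[Album], optional) - Albums by this artist in the Media Catalogue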
Creating the API definition with its main resources: This step shows how to create an API and define its main resources, parameters, and sample payloads. It involves the steps followed by MRA when creating the Media Catalogue API definition and its associated API mock.

Trying the API mock: This part describes how Apiary's automatically generated API mocks can be used to satisfy one of the most important steps in API design-first: trying an API early in the lifecycle, before the API is actually implemented. This is a critical step, as collecting feedback from API consumers early can potentially save numerous hours of code refactoring later in the project.

Defining MSON data structures: The Markdown Syntax for Object Notation (MSON) is a plain-text syntax for the description and validation of data structures in API Blueprint. It provides a way to represent objects (for example, an artist) in a human-readable plain-text form. This part involves the steps to define the Artist, Album, and Song objects using the MSON notation.

Pushing the API Blueprint to GitHub: API Blueprints can be pushed into GitHub repositories, so they can be version controlled, but most importantly so they can follow a similar GitHub cycle as any other code asset. This step is also required in order to configure Dredd to validate API endpoints against API Blueprint definitions.

Publishing the API mock in Oracle API Platform CS: Although Apiary provides an API mock URL that can be accessed directly, it is recommended that the API mock instead be published and accessed via the Oracle API Platform Cloud Service.

Setting up Dredd for continuous testing: The last step of the API design-first process is to configure Dredd to continuously validate that an API endpoint exposed through the API gateway is always compliant with its corresponding API Blueprint definition. The idea is to ensure that client application code is not broken once an API policy is changed to point to a backend service after it has been built and deployed.

We discussed the API design-first approach for building APIs. MRA's business scenario demanded a more efficient and leaner process for implementing APIs. We saw how an API design-first process can effectively help organizations such as MRA gain greater speed, agility, and efficiency. Here's a summary of the basic steps to realise such a process:

- Choose your API type: We introduced conceptual distinctions such as Single-Purpose and Multi-Purpose APIs to decide what type of API to adopt.
- Define your APIs: The need for creating an API definition and an API mock in Apiary, based on API Blueprints and the Markdown Syntax for Object Notation (MSON).
- Create and publish the API: Creation and publication of an API using the Oracle API Platform Cloud Service.
- Continuously test: Finally, the configuration of Dredd to verify API endpoint compliance with the API definition.

You just enjoyed an excerpt from the book Implementing Oracle API Platform Cloud Service. Grab the latest edition of this book to work with the newest Oracle APIs, and interface with an increasingly complex array of services your clients want.
3 programming languages some people think are dead but definitely aren’t

Richard Gall
24 Oct 2019
11 min read
Recently I looked closely at what it really means when a certain programming language, tool, or trend is declared to be 'dead'. It seems, I argued, that talking about death in respect of different aspects of the tech industry is as much a signal about one's identity and values as a developer as it is an accurate description of a particular 'thing's' reality. To focus on how these debates and conversations play out in practice, I decided to take a look at 3 programming languages, each of which has been described as dead or dying at some point. What I found might not surprise you, but it nevertheless highlights that the different opinions a certain person or community has about a language reflect their needs and challenges as software engineers.

Is Java dead?

One of the biggest areas of debate in terms of living, thriving or dying is Java. There are a number of reasons for this. The biggest is the simple fact that it's so widely used. With so many developers using the language for a huge range of reasons, it's not surprising to find such a diversity of opinion across its developer community. Another reason is that Java is so well-established as a programming language. Although it's a matter of debate whether it's declining or dying, it certainly can't be said to be emerging or growing at any significant pace. Java is part of the industry mainstream now. You'd think that might mean it's holding up. But when you consider that this is an industry that doesn't just embrace change and innovation, but one that depends on it for its value, you can begin to see that Java has occupied a slightly odd space for some time.

Why do people think Java is dead?

Java has been on the decline for a number of years. If you look at the TIOBE index from the mid to late part of this decade, it has been losing percentage points. From May 2016 to May 2017, for example, the language declined 6% - an indication that it's losing mindshare to other languages. A further reason for its decline is the rise of Kotlin. Although Java has for a long time been the defining language of Android development, in recent years its reputation has taken a hit as Kotlin has become more widely adopted. As this Medium article from 2018 argues, it's not necessarily a great idea to start a new Android project with Java. The threat to Java isn't only coming from Kotlin - it's coming from Scala too. Scala is another language based on the JVM (Java Virtual Machine). It supports both object-oriented and functional programming, offering many performance advantages over Java, and is being used for a wide range of use cases - from machine learning to application development.

Reasons why Java isn't dead

Although the TIOBE index has shown Java to be a language in decline, it nevertheless remains comfortably at the top of the table. It might have dropped significantly between 2016 and 2017, but more recently its decline has slowed: it dropped only 0.92% between October 2018 and October 2019. From this perspective, it's simply bizarre to suggest that Java is 'dead' or 'dying': it's de facto the most widely used programming language on the planet. Factor in everything else that entails - a massive community that means more support, and an extensive ecosystem of frameworks, libraries, and other tools (note Spring Boot's growth as a response to the microservice revolution) - and you can see why, although Java's age might seem like a mark against it, it's also a reason why there's still a lot of life in it.
At a more basic level, Java is ubiquitous; it's used inside a massive range of applications. Insofar as it's inside live apps, it's alive. That means Java developers will be in demand for a long time yet.

The verdict: is Java dead or alive?

Java is very much alive and well. But there are caveats: ultimately, it's not a language that's going to help you solve problems in creative or innovative ways. It will allow you to build things and get projects off the ground, but it's arguably a solid foundation on which you will need to build more niche expertise and specialisation to be a really successful engineer.

Is JavaScript dead?

Although Java might be the most widely used programming language in the world, JavaScript is another ubiquitous language that incites a diverse range of opinions and debate. One of the reasons for this is that some people seriously hate JavaScript. The consensus on Java is a low-level murmur of 'it's fine', but with JavaScript things are far more erratic. This is largely because of JavaScript's evolution. For a long time it was playing second fiddle to PHP in the web development arena because it was so unstable - it was treated with a kind of stigma, as if it weren't a 'real language.' Over time that changed, thanks largely to HTML5 and improved ES6 standards, but there are still many quirks that developers don't like. In particular, JavaScript isn't a nice thing to grapple with if you're used to, say, Java or C. Unlike those languages, it's an interpreted, not a compiled, programming language. So, why do people think it's dead?

Why do people think JavaScript is dead?

There are a number of very different reasons why people argue that JavaScript is dead. On the one hand, the rise of templates and out-of-the-box CMS and eCommerce solutions means the use of JavaScript for 'traditional' web development will become less important. Essentially, the thinking goes, the barrier to entry is lower, which means there will be fewer people using JavaScript for web development. On the other hand, people look at the emergence of Web Assembly as the death knell for JavaScript. Web Assembly (or Wasm) is "a binary instruction format for a stack-based virtual machine" (that's from the project's website), which means that code can be compiled into a binary format that can be read by a browser. This means you can bring high-level languages such as Rust to the browser. To a certain extent, then, you'd think that Web Assembly would lead to the growth of languages that at the moment feel quite niche.

Read next: Introducing Woz, a Progressive WebAssembly Application (PWA + Web Assembly) generator written entirely in Rust

Reasons why JavaScript isn't dead

First, let's counter the arguments above. In the first instance, out-of-the-box solutions are never going to replace web developers. Someone needs to build those products, and even if organizations choose to use them, JavaScript is still a valuable language for customizing and reshaping purpose-built solutions. While the barrier to entry to getting a web project up and running might be getting lower, that's certainly not going to kill JavaScript. Indeed, you could even argue that the pool is growing, as more people start to pick up the basic elements of the web. On the Web Assembly issue: this is a slightly more serious threat to JavaScript, but it's important to remember that Web Assembly was never designed to simply ape the existing JavaScript use case.
As this useful article explains: "...They solve two different issues: JavaScript adds basic interactivity to the web and DOM while WebAssembly adds the ability to have a robust graphical engine on the web. WebAssembly doesn't solve the same issues that JavaScript does because it has no knowledge of the DOM. Until it does, there's no way it could replace JavaScript." Web Assembly might even renew faith in JavaScript. By tackling some of the problems that many developers complain about, it means the language can be used for the problems it is better suited to solve. But aside from all that, there is a wealth of other reasons that JavaScript is far from dead. React continues to grow in popularity, as does Node.js - the latter in particular is influential in how it has expanded what's possible with the language, moving it from the browser to the server.

The verdict: is JavaScript dead or alive?

JavaScript is very much alive and well, however much people hate it. With such a wide ecosystem of tools surrounding it, the way that it's used might change, but the language is here to stay and has a bright future.

Is C dead?

C is one of the oldest programming languages around (it's approaching its 50th birthday). It's a language that has helped build the foundations of the software world as we know it today, including just about every operating system. But although it's a fundamental part of the technology landscape, there are murmurs that it's just not up to the job any more...

Why do people think that C is dead?

If you want to get a sense of the division of opinion around C, you could do a lot worse than this article on TechCrunch. "C is no longer suitable for this world which C has built," explains engineer Jon Evans. "C has become a monster. It gives its users far too much artillery with which to shoot their feet off. Copious experience has taught us all, the hard way, that it is very difficult, verging on 'basically impossible,' to write extensive amounts of C code that is not riddled with security holes." The security concerns are reflected elsewhere, with one writer arguing that "no one is creating new unsafe languages. It's not plausible to say that this is because C and C++ are perfect; even the staunchest proponent knows that they have many flaws. The reason that people are not creating new unsafe languages is that there is no demand. The future is safe languages." Added to these concerns is the rise of Rust - it could, some argue, be an alternative to C (and C++) for lower-level systems programming that is more modern, safer, and easier to use.

Reasons why C isn't dead

Perhaps the most obvious reason why C isn't dead is the fact that it's so integral to so much of the software we use today. We're not just talking about your standard legacy systems; C is inside the operating systems that allow us to interface with software and machines. One of the arguments often made against C is that 'the web is taking over', as if software in general is moving up levels of abstraction that make languages at the machine level all but redundant. Aside from that argument being plain stupid (i.e. what's the web built on?), with IoT and embedded computing growing at a rapid rate, the trend is only going to make C more important. To return to our good friend the TIOBE index: C is in second place, the same position it held in October 2018. Like Java, then, it's holding its own in spite of the rumors. Unlike Java, moreover, C's rating has actually increased over the course of a year.
Not a massive amount, admittedly - 0.82% - but a solid performance that suggests it's a long way from dead.

Read next: Why does the C programming language refuse to die?

The verdict: is C dead or alive?

C is very much alive and well. It's old, sure, but it's buried inside too much of our existing software infrastructure for it to simply be cast aside. This isn't to say it is without flaws. From a security and accessibility perspective, we're likely to see languages like Rust gradually grow in popularity to tackle some of the challenges that C poses. But an equally important point to consider is just how fundamental C is for people who want to really understand programming in depth. Even if it doesn't necessarily have a wide range of use cases, the fact that it can give developers and engineers an insight into how code works at various levels of the software stack means it will always remain a language that demands attention.

Conclusion: listen to multiple perspectives on programming languages before making a judgement

The obvious conclusion to draw from all this is that people should just stop being so damn opinionated. But I don't actually think that's correct: people should keep being opinionated and argumentative. There's no place for snobbery or exclusion, but if anyone has a view on something's value, they should certainly express it. It helps other people understand the language in a way that's not possible through documentation or more typical learning content. What's important is that we read opinions with a critical eye: what's this person's agenda? What's their background? What are they trying to do? After all, there are things far more important than whether something is dead or alive: building great software we can be proud of being one of them.
8 Reasons why architects love API driven architecture

Aaron Lazar
07 Jun 2018
6 min read
Every day, we see a new architecture popping up and being labeled as a modern architecture for application development. That's what happened with microservices in the beginning - and then it all went for a toss when they were termed a design pattern rather than an architecture as a whole. APIs are growing in popularity and are even being used as a basis to draw out the architecture of applications. We're going to try and understand some of the top factors that make architects (and developers) appreciate API driven architectures over the other "modern" and upcoming architectures.

Before we get to the reasons, let's understand where I'm coming from in the first place. We recently published our findings from the Skill Up survey that we conducted with 8,000-odd IT pros. We asked them various questions, ranging from what their favourite tools were to whether they felt they knew more than their managers did. One of the questions was directed at finding out which of the modern architectures interested them the most. The choices were Chaos Engineering, API Driven Architecture, and Evolutionary Architecture.

Source: Skill Up 2018

From the results, it's evident that they're more inclined towards API driven architecture. Or maybe those who didn't really find the architecture of their choice among the lot simply chose API driven as the best of the bunch. But why do architects love API driven development? I've been thinking about it a bit and thought I would come up with a few reasons as to why this might be so. So here goes...

Reason #1: The big split between the backend and frontend

Also known as Split Stack Development, API driven architecture allows the backend and frontend of the application to be decoupled. This lets developers and architects mitigate any dependencies that each end might have, or rather impose on, the other. Instead of having those dependencies, each end communicates with the other via APIs. This is extremely beneficial in the sense that each end can be built in completely different tools and technologies. For example, the backend could be in Python or Java, while the frontend is built in JavaScript.

Reason #2: Sensibility in scalability

When APIs are the foundation of an architecture, the organisation can scale the app by simply plugging in services as and when needed, instead of having to modify the app itself. This is a great way to plug in and plug out functionality as and when needed, without disrupting the original architecture.

Reason #3: Parallel development aka Agile

When different teams work on the front and back end of the application, there's no reason for them to be working together. That doesn't mean they don't work together at all; rather, the only factor they have to agree upon is the API structure and nothing else. This is because of Reason #1, where both layers of the architecture are disconnected or decoupled. This enables teams to be more flexible and agile when developing the application. It is only at the testing and deployment stages that the teams will collaborate more.

Reason #4: API as a product

This is more of a business case than a developer-centric one, but I thought I should add it in anyway. Something new popped up on the ThoughtWorks Radar a few months ago: API-as-a-product. As a matter of fact, you could consider this similar to API-as-a-Service. Organisations like Salesforce have been offering their services in the form of APIs.
For example, suppose you're using Salesforce CRM and you want to extend the functionality; all you need to do is use the APIs to extend the system. Google is another good example of a company that offers APIs as products. This is a great way to provide extensibility instead of having a separate application altogether. Individual APIs or groups of them can be priced with subscription plans. These plans contain not only access to the APIs themselves, but also a defined number of calls or amount of data that is allowed.

Reason #5: Hiding underlying complexity

In an API driven architecture, all components that are connected to the API are modular, exist on their own, and communicate via the API. The modular nature of the application makes it easier to test and maintain. Moreover, if you're using or consuming someone else's API, you needn't learn or decipher the entire code's workings; rather, you can just plug in the API and use it. That reduces complexity to a great extent.

Reason #6: Business logic comes first

API driven architecture allows developers to focus on the business logic, rather than having to worry about structuring the application. The initial API structure is all that needs to be planned out, after which each team goes forth and develops the individual APIs. This greatly reduces development time as well.

Reason #7: IoT loves APIs

API architecture makes for a great way to build IoT applications, as IoT needs a great deal of scalability. An application that is built on a foundation of APIs is a dream for IoT developers, as devices can be easily connected to the mother app. I expect everything to be connected via APIs in the next 5 years. If it doesn't happen, you can always get back at me in the comments section! ;)

Reason #8: APIs and DevOps are a match made in heaven

APIs allow for a more streamlined deployment pipeline, while also eliminating the production of duplicate assets by development teams. Moreover, deployments can reach production a lot faster through these slick pipelines, thus increasing efficiency and reducing costs by a great deal. The merger of DevOps and API driven architecture, however, is not a walk in the park, as it requires a change in mindset. Teams need to change culturally, to become enablers of reusable, self-service consumption.

The other side of the coin

Well, there are always two sides to the coin, and there are some drawbacks to API driven architecture. For starters, you'll have APIs all over the place! While that was the point in the first place, it becomes really tedious to manage all those APIs. Secondly, when you have things running in parallel, you require a lot of processing power - more cores, more infrastructure. Another important issue is security. With so many cyber attacks and privacy breaches, an API driven architecture only invites trouble, with more doors for hackers to open.

So, apart from the above flipside, those were some of the reasons I could think of as to why architects would be interested in an API driven architecture. APIs give customers, i.e. both internal and external stakeholders, the freedom to leverage the enterprise's assets, while customizing as required. In a way, APIs aren't just ways to offer integration and connectivity for large enterprise apps. Rather, they should be looked at as a way to drive a faster and more modern software architecture and delivery.

What are web developers favorite front-end tools?

The best backend tools in web development

The 10 most common types of DoS attacks you need to know
Introducing LLVM Intermediate Representation

Packt
26 Aug 2014
18 min read
In this article by Bruno Cardoso Lopes and Rafael Auler, the authors of Getting Started with LLVM Core Libraries, we will look into some basic concepts of the LLVM intermediate representation (IR). LLVM IR is the backbone that connects frontends and backends, allowing LLVM to parse multiple source languages and generate code for multiple targets. Frontends produce the IR, while backends consume it. The IR is also the point where the majority of LLVM's target-independent optimizations take place.

Overview

The choice of the compiler IR is a very important decision. It determines how much information the optimizations will have to make the code run faster. On one hand, a very high-level IR allows optimizers to extract the original source code intent with ease. On the other hand, a low-level IR allows the compiler to generate code tuned for a particular hardware more easily. The more information you have about the target machine, the more opportunities you have to explore machine idiosyncrasies. Moreover, the task at lower levels must be done with care. As the compiler translates the program to a representation that is closer to machine instructions, it becomes increasingly difficult to map program fragments back to the original source code. Furthermore, if the compiler design leans too heavily on a representation that models a specific target machine very closely, it becomes awkward to generate code for other machines that have different constructs.

This design trade-off has led to different choices among compilers. Some compilers, for instance, do not support code generation for multiple targets and focus on only one machine architecture. This enables them to use specialized IRs throughout their entire pipeline that make the compiler efficient with respect to a single architecture, which is the case with the Intel C++ Compiler (icc). However, writing compilers that generate code for a single architecture is an expensive solution if you aim to support multiple targets. In these cases, it is unfeasible to write a different compiler for each architecture, and it is best to design a single compiler that performs well on a variety of targets, which is the goal of compilers such as GCC and LLVM. For these projects, called retargetable compilers, there are substantially more challenges in coordinating the code generation for multiple targets.

The key to minimizing the effort of building a retargetable compiler lies in using a common IR, the point where different backends share the same understanding of the source program in order to translate it to a divergent set of machines. Using a common IR, it is possible to share a set of target-independent optimizations among multiple backends, but this puts pressure on the designer to raise the level of the common IR so that it does not overrepresent a single machine. Since working at higher levels precludes the compiler from exploring target-specific trickery, a good retargetable compiler also employs other IRs to perform optimizations at different, lower levels.

The LLVM project started with an IR that operated at a lower level than the Java bytecode; thus, the initial acronym was Low Level Virtual Machine. The idea was to explore low-level optimization opportunities and employ link-time optimizations. The link-time optimizations were made possible by writing the IR to disk, as in a bytecode. The bytecode allows the user to amalgamate multiple modules in the same file and then apply interprocedural optimizations.
In this way, the optimizations will act on multiple compilation units as if they were in the same module. LLVM, nowadays, is neither a Java competitor nor a virtual machine, and it has other intermediate representations to achieve efficiency. For example, besides the LLVM IR, which is the common IR where target-independent optimizations work, each backend may apply target-dependent optimizations when the program is represented with the MachineFunction and MachineInstr classes. These classes represent the program using target-machine instructions.

On the other hand, the Function and Instruction classes are, by far, the most important ones because they represent the common IR that is shared across multiple targets. This intermediate representation is mostly target-independent (but not entirely) and is the official LLVM intermediate representation. To avoid confusion, while LLVM has other levels at which to represent a program, which technically makes them IRs as well, we do not refer to them as LLVM IRs; we reserve this name for the official, common intermediate representation embodied by the Instruction class, among others. This terminology is also adopted by the LLVM documentation.

The LLVM project started as a set of tools that orbit around the LLVM IR, which explains the maturity of the optimizers and the number of optimizers that act at this level. This IR has three equivalent forms:

An in-memory representation (the Instruction class, among others)
An on-disk representation encoded in a space-efficient form (the bitcode files)
An on-disk representation in a human-readable text form (the LLVM assembly files)

LLVM provides tools and libraries that allow you to manipulate and handle the IR in all of these forms. Hence, these tools can transform the IR back and forth, from memory to disk, as well as apply optimizations.

Understanding the LLVM IR target dependency

The LLVM IR is designed to be as target-independent as possible, but it still conveys some target-specific aspects. Most people blame the C/C++ language for its inherent, target-dependent nature. To understand this, consider that when you use standard C headers on a Linux system, for instance, your program implicitly imports some header files from the bits Linux headers folder. This folder contains target-dependent header files, including macro definitions that constrain some entities to have a particular type matching what the syscalls of this kernel-machine expect. Afterwards, when the frontend parses your source code, it also needs to use different sizes for int, for example, depending on the intended target machine where this code will run.

Therefore, both library headers and C types are already target-dependent, which makes it challenging to generate an IR that can later be translated to a different target. If you consider only the target-dependent C standard library headers, the parsed AST for a given compilation unit is already target-dependent, even before the translation to the LLVM IR. Furthermore, the frontend generates IR code using type sizes, calling conventions, and special library calls that match the ones defined by each target ABI. Still, the LLVM IR is quite versatile and is able to cope with distinct targets in an abstract way.

Exercising basic tools to manipulate the IR formats

We mentioned that the LLVM IR can be stored on disk in two formats: bitcode and assembly text. We will now learn how to use them.
Consider the sum.c source code:

int sum(int a, int b) {
  return a+b;
}

To make Clang generate the bitcode, you can use the following command:

$ clang sum.c -emit-llvm -c -o sum.bc

To generate the assembly representation, you can use the following command:

$ clang sum.c -emit-llvm -S -c -o sum.ll

You can also assemble the LLVM IR assembly text, which will create a bitcode:

$ llvm-as sum.ll -o sum.bc

To convert from bitcode to IR assembly, which is the opposite, you can use the disassembler:

$ llvm-dis sum.bc -o sum.ll

The llvm-extract tool allows the extraction of IR functions, globals, and also the deletion of globals from the IR module. For instance, extract the sum function from sum.bc with the following command:

$ llvm-extract -func=sum sum.bc -o sum-fn.bc

Nothing changes between sum.bc and sum-fn.bc in this particular example, since sum is already the sole function in this module.

Introducing the LLVM IR language syntax

Observe the LLVM IR assembly file, sum.ll:

target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"
target triple = "x86_64-apple-macosx10.7.0"

define i32 @sum(i32 %a, i32 %b) #0 {
entry:
  %a.addr = alloca i32, align 4
  %b.addr = alloca i32, align 4
  store i32 %a, i32* %a.addr, align 4
  store i32 %b, i32* %b.addr, align 4
  %0 = load i32* %a.addr, align 4
  %1 = load i32* %b.addr, align 4
  %add = add nsw i32 %0, %1
  ret i32 %add
}

attributes #0 = { nounwind ssp uwtable ... }

The contents of an entire LLVM file, either assembly or bitcode, are said to define an LLVM module. The module is the LLVM IR top-level data structure. Each module contains a sequence of functions, each of which contains a sequence of basic blocks, which in turn contain a sequence of instructions. The module also contains peripheral entities to support this model, such as global variables, the target data layout, and external function prototypes, as well as data structure declarations.

LLVM local values are the analogs of registers in assembly language and can have any name that starts with the % symbol. Thus, %add = add nsw i32 %0, %1 will add the local value %0 to %1 and put the result in the new local value, %add. You are free to give any name to the values, but if you are short on creativity, you can just use numbers. In this short example, we can already see how LLVM expresses its fundamental properties:

It uses the Static Single Assignment (SSA) form. Note that there is no value that is reassigned; each value has only a single assignment that defines it. Each use of a value can immediately be traced back to the sole instruction responsible for its definition. This has immense value in simplifying optimizations, owing to the trivial use-def chains that the SSA form creates, that is, the list of definitions that reaches a user. If LLVM had not used the SSA form, we would need to run a separate data flow analysis to compute the use-def chains, which are mandatory for classical optimizations such as constant propagation and common subexpression elimination.

Code is organized as three-address instructions. Data processing instructions have two source operands and place the result in a distinct destination operand.

It has an infinite number of registers. Note how LLVM local values can have any name that starts with the % symbol, including numbers that start at zero, such as %0, %1, and so on, with no restriction on the maximum number of distinct values.
The target datalayout construct contains information about endianness and type sizes for the target described in target triple. Some optimizations depend on knowing the specific data layout of the target to transform the code correctly. Observe how the layout declaration is done:

target datalayout = "e-p:64:64:64-i1:8:8-i8:8:8-i16:16:16-i32:32:32-i64:64:64-f32:32:32-f64:64:64-v64:64:64-v128:128:128-a0:0:64-s0:64:64-f80:128:128-n8:16:32:64-S128"
target triple = "x86_64-apple-macosx10.7.0"

We can extract the following facts from this string:

The target is an x86_64 processor with Mac OS X 10.7.0.
It is a little-endian target, which is denoted by the first letter in the layout (a lowercase e). Big-endian targets need to use an uppercase E.
The information provided about types is in the format type:<size>:<abi>:<preferred>. In the preceding example, p:64:64:64 represents a pointer that is 64 bits wide in size, with the abi and preferred alignments set to the 64-bit boundary. The ABI alignment specifies the minimum required alignment for a type, while the preferred alignment specifies a potentially larger value if this would be beneficial. The 32-bit integer types, i32:32:32, are 32 bits wide in size with 32-bit abi and preferred alignment, and so on.

The function declaration closely follows the C syntax:

define i32 @sum(i32 %a, i32 %b) #0 {

This function returns a value of the type i32 and has two i32 arguments, %a and %b. Local identifiers always need the % prefix, whereas global identifiers use @. LLVM supports a wide range of types, but the most important ones are the following:

Arbitrary-sized integers in the iN form; common examples are i32, i64, and i128.
Floating-point types, such as the 32-bit single precision float and 64-bit double precision double.
Vector types in the format <<# elements> x <elementtype>>. A vector with four i32 elements is written as <4 x i32>.

The #0 tag in the function declaration maps to a set of function attributes, very similar to the ones used in C/C++ functions and methods. The set of attributes is defined at the end of the file:

attributes #0 = { nounwind ssp uwtable "less-precise-fpmad"="false" "no-frame-pointer-elim"="true" "no-frame-pointer-elim-non-leaf"="true" "no-infs-fp-math"="false" "no-nans-fp-math"="false" "unsafe-fp-math"="false" "use-soft-float"="false" }

For instance, nounwind marks a function or method as not throwing exceptions, and ssp tells the code generator to use a stack smash protector in an attempt to increase the security of this code against attacks.

The function body is explicitly divided into basic blocks (BBs), and a label is used to start a new BB. A label relates to a basic block in the same way that a value identifier relates to an instruction. If a label declaration is omitted, the LLVM assembler automatically generates one using its own naming scheme. A basic block is a sequence of instructions with a single entry point at its first instruction and a single exit point at its last instruction. In this way, when the code jumps to the label that corresponds to a basic block, we know that it will execute all of the instructions in this basic block until the last instruction, which will change the control flow by jumping to another basic block.
Basic blocks and their associated labels need to adhere to the following conditions:

Each BB needs to end with a terminator instruction, one that jumps to other BBs or returns from the function.
The first BB, called the entry BB, is special in an LLVM function and must not be the target of any branch instruction.

Our LLVM file, sum.ll, has only one BB because it has no jumps, loops, or calls. The function start is marked with the entry label, and it ends with the return instruction, ret:

entry:
  %a.addr = alloca i32, align 4
  %b.addr = alloca i32, align 4
  store i32 %a, i32* %a.addr, align 4
  store i32 %b, i32* %b.addr, align 4
  %0 = load i32* %a.addr, align 4
  %1 = load i32* %b.addr, align 4
  %add = add nsw i32 %0, %1
  ret i32 %add

The alloca instruction reserves space on the stack frame of the current function. The amount of space is determined by the element type size, and it respects a specified alignment. The first instruction, %a.addr = alloca i32, align 4, allocates a 4-byte stack element, which respects a 4-byte alignment. A pointer to the stack element is stored in the local identifier, %a.addr. The alloca instruction is commonly used to represent local (automatic) variables.

The %a and %b arguments are stored in the stack locations %a.addr and %b.addr by means of store instructions. The values are loaded back from the same memory locations by load instructions, and they are used in the addition, %add = add nsw i32 %0, %1. Finally, the addition result, %add, is returned by the function. The nsw flag specifies that this add operation has "no signed wrap", which indicates instructions that are known to have no overflow, allowing for some optimizations. If you are interested in the history behind the nsw flag, a worthwhile read is the LLVMdev post at http://lists.cs.uiuc.edu/pipermail/llvmdev/2011-November/045730.html by Dan Gohman.

In fact, the load and store instructions are redundant, and the function arguments can be used directly in the add instruction. Clang uses -O0 (no optimizations) by default, and the unnecessary loads and stores are not removed. If we compile with -O1 instead, the outcome is a much simpler code, which is reproduced here:

define i32 @sum(i32 %a, i32 %b) ... {
entry:
  %add = add nsw i32 %b, %a
  ret i32 %add
}
...

Using the LLVM assembly directly is very handy when writing small examples to test target backends and as a means to learn basic LLVM concepts. However, a library is the recommended interface for frontend writers to build the LLVM IR, which is the subject of our next section. You can find a complete reference to the LLVM IR assembly syntax at http://llvm.org/docs/LangRef.html.

Introducing the LLVM IR in-memory model

The in-memory representation closely models the LLVM language syntax that we just presented. The header files for the C++ classes that represent the IR are located at include/llvm/IR.
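Before walking through those classes, here is a minimal sketch - not taken from the book - of what building our sum function through the C++ library interface might look like using IRBuilder. Header locations and some signatures vary across LLVM versions, so treat this as an illustration of the approach rather than version-exact code:

#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"

// Build a module containing: define i32 @sum(i32 %a, i32 %b)
llvm::Module *buildSumModule(llvm::LLVMContext &Ctx) {
  llvm::Module *M = new llvm::Module("sum", Ctx);
  llvm::IRBuilder<> Builder(Ctx);

  // Create the function type and the function: int sum(int a, int b).
  llvm::Type *I32 = Builder.getInt32Ty();
  llvm::FunctionType *FT =
      llvm::FunctionType::get(I32, {I32, I32}, /*isVarArg=*/false);
  llvm::Function *F =
      llvm::Function::Create(FT, llvm::Function::ExternalLinkage, "sum", M);

  // Name the arguments %a and %b for readability.
  llvm::Function::arg_iterator Args = F->arg_begin();
  llvm::Value *A = &*Args++;
  A->setName("a");
  llvm::Value *B = &*Args;
  B->setName("b");

  // entry: %add = add nsw i32 %a, %b; ret i32 %add
  llvm::BasicBlock *Entry = llvm::BasicBlock::Create(Ctx, "entry", F);
  Builder.SetInsertPoint(Entry);
  llvm::Value *Add = Builder.CreateNSWAdd(A, B, "add");
  Builder.CreateRet(Add);
  return M;
}

Printing the resulting module (for example, with the module's print or dump facilities) yields essentially the same sum.ll we wrote by hand above, minus the redundant allocas, loads, and stores that Clang emits at -O0.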
The following is a list of the most important classes:

The Module class aggregates all of the data used in the entire translation unit, which is a synonym for "module" in LLVM terminology. It declares the Module::iterator typedef as an easy way to iterate across the functions inside this module. You can obtain these iterators via the begin() and end() methods. View its full interface at http://llvm.org/docs/doxygen/html/classllvm_1_1Module.html.

The Function class contains all objects related to a function definition or declaration. In the case of a declaration (use the isDeclaration() method to check whether it is a declaration), it contains only the function prototype. In both cases, it contains a list of the function parameters, accessible via the getArgumentList() method or the pair of arg_begin() and arg_end(). You can iterate through them using the Function::arg_iterator typedef. If your Function object represents a function definition, and you iterate through its contents via the for (Function::iterator i = function.begin(), e = function.end(); i != e; ++i) idiom, you will iterate across its basic blocks. View its full interface at http://llvm.org/docs/doxygen/html/classllvm_1_1Function.html.

The BasicBlock class encapsulates a sequence of LLVM instructions, accessible via the begin()/end() idiom. You can directly access its last instruction using the getTerminator() method, and you also have a few helper methods to navigate the CFG, such as accessing predecessor basic blocks via getSinglePredecessor() when the basic block has a single predecessor. However, if it does not have a single predecessor, you need to work out the list of predecessors yourself, which is not difficult if you iterate through basic blocks and check the targets of their terminator instructions. View its full interface at http://llvm.org/docs/doxygen/html/classllvm_1_1BasicBlock.html.

The Instruction class represents an atom of computation in the LLVM IR, a single instruction. It has some methods to access high-level predicates, such as isAssociative(), isCommutative(), isIdempotent(), or isTerminator(), but its exact functionality can be retrieved with getOpcode(), which returns a member of the llvm::Instruction enumeration that represents the LLVM IR opcodes. You can access its operands via the op_begin() and op_end() pair of methods, which are inherited from the User superclass that we will present shortly. View its full interface at http://llvm.org/docs/doxygen/html/classllvm_1_1Instruction.html.

We have still not presented the most powerful aspect of the LLVM IR (enabled by the SSA form): the Value and User interfaces; these allow you to easily navigate the use-def and def-use chains. In the LLVM in-memory IR, a class that inherits from Value means that it defines a result that can be used by others, whereas a subclass of User means that this entity uses one or more Value interfaces. Function and Instruction are subclasses of both Value and User, while BasicBlock is a subclass of just Value. To understand this, let's analyze these two classes in depth:

The Value class defines the use_begin() and use_end() methods to allow you to iterate through Users, offering an easy way to access its def-use chain. For every Value class, you can also access its name through the getName() method. This models the fact that any LLVM value can have a distinct identifier associated with it. For example, %add1 can identify the result of an add instruction, BB1 can identify a basic block, and myfunc can identify a function. Value also has a powerful method called replaceAllUsesWith(Value *), which navigates through all of the users of this value and replaces it with some other value. This is a good example of how the SSA form allows you to easily substitute instructions and write fast optimizations. You can view the full interface at http://llvm.org/docs/doxygen/html/classllvm_1_1Value.html.

The User class has the op_begin() and op_end() methods that allow you to quickly access all of the Value interfaces that it uses. Note that this represents the use-def chain. You can also use a helper method called replaceUsesOfWith(Value *From, Value *To) to replace any of its used values. You can view the full interface at http://llvm.org/docs/doxygen/html/classllvm_1_1User.html.
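Putting these classes together, the following minimal sketch - again an illustration rather than code from the book - walks an already-populated module down to each instruction, using the iterator idiom described above:

#include "llvm/IR/Module.h"
#include "llvm/Support/raw_ostream.h"

// Walk the Module -> Function -> BasicBlock -> Instruction hierarchy.
void inspectModule(llvm::Module &M) {
  for (llvm::Module::iterator F = M.begin(), FE = M.end(); F != FE; ++F) {
    if (F->isDeclaration())
      continue; // Declarations carry only the prototype, no basic blocks.
    llvm::errs() << "function: " << F->getName() << "\n";
    for (llvm::Function::iterator BB = F->begin(), BBE = F->end();
         BB != BBE; ++BB) {
      llvm::errs() << "  basic block: " << BB->getName() << "\n";
      for (llvm::BasicBlock::iterator I = BB->begin(), IE = BB->end();
           I != IE; ++I) {
        llvm::errs() << "    instruction: " << I->getOpcodeName() << "\n";
      }
    }
  }
}

The same pattern underpins most analyses and transformations: pick the level of the hierarchy you care about, iterate down to it, and use the Value and User interfaces when you need to follow use-def or def-use chains from there.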
Summary

In this article, we acquainted ourselves with the concepts and components related to the LLVM intermediate representation.

Further resources on this subject:

Creating and Utilizing Custom Entities [Article]
Getting Started with Code::Blocks [Article]
Program structure, execution flow, and runtime objects [Article]
6 signs you need containers

Richard Gall
05 Feb 2019
9 min read
I'm not about to tell you containers are a hot new trend - clearly, they aren't. Today, they are an important part of the mainstream software development industry that probably won't be disappearing any time soon. But while containers certainly can't be described as a niche or marginal way of deploying applications, they aren't necessarily ubiquitous. There are still developers or development teams yet to fully appreciate the usefulness of containers. You might know them - you might even be one of them. Joking aside, there are often many reasons why people aren't using containers. Sometimes these are good reasons: maybe you just don't need them. Often, however, you do need them, but the mere thought of changing your systems and workflow can feel like more trouble than it's worth. If everything seems to be (just about) working, why shake things up? Well, I'm here to tell you that more often than not it is worthwhile. But to know that you're not wasting your time and energy, there are a few important signs that can tell you if you should be using containers.

Download Containerize Your Apps with Docker and Kubernetes for free, courtesy of Microsoft.

Your codebase is too complex

There are few developers in the world who would tell you that their codebase couldn't do with a little pruning and simplification. But if your code has grown into a beast that everyone fears and doesn't really understand, containers could probably help you a lot.

Why do containers help simplify your codebase?

Let's think about how spaghetti code actually happens. Yes, it always happens by accident, but usually it's something that evolves out of years of solving intractable problems with knock-on effects and consequences that only need to be solved later. By using containers you can begin to think differently about your code. Instead of everything being tied up together, like a complex concrete network of road junctions, containers allow you to isolate specific parts of it. When you can better isolate your code, you can also isolate different problems and domains. This is one of the reasons that containers are so closely aligned with microservices.

Software testing is nightmarish

The efficiency benefits of containers are well documented, but the way containers can help the software testing process is often underplayed - this probably says more about a general inability to treat testing with the respect and time it deserves than anything else.

How do containers make testing easier?

There are a number of reasons containers make software testing easier. On the one hand, by using containers you're reducing the gap between the development environment and production, which means you shouldn't be faced with as many surprises once your code hits production as you sometimes might. Containers also make the testing process faster - you only need to test against a container image, you don't need a fully-fledged testing environment for every application you run tests on. What this all boils down to is that testing becomes much quicker and easier. In theory, then, this means the testing process fits much more neatly within the development workflow. Code quality should never be seen as a bottleneck; with containers it becomes much easier to embed the principle in your workflow.

Read next: How to build 12 factor microservices on Docker

Your software isn't secure - you've had breaches that could have been prevented

Spaghetti code and a lack of effective testing can lead to major security risks.
If no one really knows what's going on inside your applications and inside your code, it's inevitable that you'll have vulnerabilities. And, in turn, it's highly likely these vulnerabilities will be exploited.

How can containers make software security easier?

Because containers allow you to make changes to parts of your software infrastructure (rather than requiring wholesale changes), security patches become much easier to apply. Essentially, you can isolate the problem and tackle it. Without containers, it becomes harder to isolate specific pieces of your infrastructure, which means any changes could have a knock-on effect on other parts of your code that you can't predict. That all being said, it is worth mentioning that containers do still pose a significant set of security challenges. While simplicity in your codebase can make testing easier, you are replacing simplicity at that level with increased architectural complexity. To really feel the benefits of container security, you need a strong sense of how your container deployments are working together and how they might interact.

Your software infrastructure is expensive (you feel the despair of vendor lock-in)

Running multiple virtual machines can quickly get expensive. In terms of both storage and memory, if you want to scale up, you're going to be running through resources at a rapid rate. While you might end up spending big on more traditional compute resources, the tools around container management and automation are getting cheaper. One of the costs of many organizations' software infrastructure is lock-in. This isn't just about price; it's about the restrictions that come with sticking with a certain software vendor - you're spending money on software systems that are almost literally restricting your capacity for growth and change.

How do containers solve the software infrastructure problem and reduce vendor lock-in?

Traditional software infrastructure - whether that's on-premise servers or virtual ones - is a fixed cost: you invest in the resources you need, and then either use them or you don't. With containers running on, say, cloud, it becomes a lot easier to manage your software spend alongside strategic decisions about scalability. Fundamentally, it means you can avoid vendor lock-in. Yes, you might still be paying a lot of money for AWS or Azure, but because containers are much more portable, moving your applications between providers is much less hassle and risk.

Read next: CNCF releases 9 security best practices for Kubernetes, to protect a customer's infrastructure

DevOps is a war, not a way of working

Like containers, DevOps could hardly be considered a hot new trend any more. But this doesn't mean it's now part of the norm. There are plenty of organizations that simply don't get DevOps, or, at the very least, seem to be stumbling their way through sprint meetings with little real alignment between development and operations. There could be multiple causes for this conflict (maybe people just don't get on), but DevOps often fails where the code that's being written and deployed is too complicated for anyone to properly take accountability for. This takes us back to the issue of the complex codebase. Think of it this way: if code is a gigantic behemoth that can't be easily broken up, the unintended effects and consequences of every new release and update can cause some big problems - both personally and technically.

How do containers solve DevOps challenges?
Containers can help solve the problems that DevOps aims to tackle by breaking up software into different pieces. This means that developers and operations teams have much more clarity on what code is being written and why, as well as what it should do. Indeed, containers arguably facilitate DevOps practices much more effectively than DevOps proponents were able to in pre-container years.

Adding new product features is a pain

The issue of adding features or improving applications is a complaint that reaches far beyond the development team. Product management, marketing - these departments will all bemoan the inability to make necessary changes or add new features that they will argue are business critical. Often, developers will take the heat. But traditional monolithic applications make life difficult for developers - you simply can't make changes or updates. It's like wanting to replace a radiator and having to redo your house's plumbing. This actually returns us to the earlier point about DevOps: containers make DevOps easier because they enable faster delivery cycles. You can make changes to an application at the level of a container or set of containers. Indeed, you might even simply kill one container and replace it with a new one. In turn, this means you can change and build things much more quickly.

How do containers make it easier to update or build new features?

To continue with the radiator analogy: containers would allow you to replace or change an individual radiator without having to gut your home. Essentially, if you want to add a new feature or change an element, you wouldn't need to go into your application and make wholesale changes - which may have unintended consequences - instead, you can simply make the change by running the resources you need inside a new container (or set of containers).

Watch for the warning signs

As with any technology decision, it's well worth paying careful attention to your own needs and demands. So, before fully committing to containers, or containerizing an application, keep a close eye on the signs that they could be a valuable option. Containers may well force you to come face to face with the reality of technical debt - and if they do, so be it. There's no time like the present, after all. Of course, all of the problems listed above are ultimately symptoms of broader issues or challenges you face as a development team or wider organization. Containers shouldn't be seen as a sure-fire corrective, but they can be an important element in changing your culture and processes.

Learn how to containerize your apps with a new eBook, free courtesy of Microsoft. Download it here.
Creators of Python, Java, C#, and Perl discuss the evolution and future of programming language design at PuPPy

Bhagyashree R
08 Apr 2019
11 min read
At the first annual charity event conducted by Puget Sound Programming Python (PuPPy) last Tuesday, four legendary language creators came together to discuss the past and future of language design. The event was organized to raise funds for Computer Science for All (CSforALL), an organization which aims to make CS an integral part of the educational experience. The panelists were the creators of some of the most popular programming languages:

Guido van Rossum, the creator of Python
James Gosling, the founder and lead designer behind the Java programming language
Anders Hejlsberg, the original author of Turbo Pascal, who has also worked on the development of C# and TypeScript
Larry Wall, the creator of Perl

The discussion was moderated by Carol Willing, who is currently a Steering Council member and developer for Project Jupyter. She is also a member of the inaugural Python Steering Council, a Python Software Foundation Fellow, and a former Director.

Key principles of language design

The first question thrown at the panelists was, "What are the principles of language design?" Guido van Rossum believes:

"Designing a programming language is very similar to the way JK Rowling writes her books, the Harry Potter series."

When asked how, he said that JK Rowling is a genius in the way that some details she mentioned in her first Harry Potter book ended up playing important plot points in parts six and seven. Explaining how this relates to language design, he added, "In language design often that's exactly how things go." When designing a language, we start by committing to certain details, like the keywords we want to use, the style of coding we want to follow, and so on. But whatever we decide on, we are stuck with it, and in the future we need to find new ways to use those details, just like Rowling. "The craft of designing a language is, on one hand, picking your initial set of choices so that there are a lot of possible continuations of the story. The other half of the art of language design is going back to your story and inventing creative ways of continuing it in a way that you had not thought of," he adds.

When James Gosling was asked how Java came into existence and what design principles he abided by, he simply said, "it didn't come out of like a personal passion project or something. It was actually from trying to build a prototype." Gosling and his team were working on a project that involved understanding the domain of embedded systems. For this, they spoke to a lot of developers who built software for embedded systems to learn how their process worked. The project had about a dozen people on it, and Gosling was responsible for making things much easier from a programming language point of view. "It started out as kind of doing better C and then it got out of control that the rest of the project really ended up just providing the context," he adds. In the end, the only thing that survived that project was "Java". It was basically designed to solve the problems of people who are living outside of data centers - people who are getting shredded by problems with networking, security, and reliability.

Larry Wall calls himself a "linguist" rather than a computer scientist. He wanted to create a language that was more like a natural language.
Explaining through an example, he said, "Instead of putting people in a university campus and deciding where they go, we're just gonna see where people want to walk and then put shortcuts in all those places." A basic principle behind creating Perl was to provide APIs for everything. It was intended to be both a good text-processing language, linguistically, and also a glue language. Wall further shared that in the 90s the language was stabilizing, but it did have some issues. So, in the year 2000, the Perl team basically decided to break everything and came up with a whole new set of design principles. Based on these principles, Perl was redesigned into Perl 6. Some of these principles were: pick the right defaults; conserve your brackets, because even Unicode does not have enough brackets; don't reinvent object orientation poorly; and so on. He adds:

"A great deal of the redesign was to say okay what is the right peg to hang everything on? Is it object-oriented? Is it something in the lexical scope or in the larger scope? What is the right peg to hang each piece of information on and if we don't have that peg how do we create it?"

Anders Hejlsberg shared that he follows a common principle in all the languages he has worked on, and that is "there's only one way to do a particular thing." He believes that if a developer is provided with four different ways, he may end up choosing the wrong path and realizing it later in development. According to Hejlsberg, this is why developers often end up creating something called "simplexity", which means taking something complex and wrapping a single wrapper on top of it so that the complexity goes away. Similar to the views of Guido van Rossum, he further added that you have to live with any decision you make when designing a language. When designing a language, you need to be very careful about reasoning over what "not" to introduce in the language. Often, people will come to you with suggestions for updates, but you cannot really change the nature of the programming language. Though you cannot change the basic nature of a language, you can definitely extend it through extensions. You essentially have two options: either stay true to the nature of the language, or develop a new one.

The type system of programming languages

Guido van Rossum, when asked about the typing approach in Python, shared how it was when Python was first introduced. Earlier, int was not a class; it was actually a little conversion function. If you wanted to convert a string to an integer, you could do that with a built-in function. Later on, Guido realized that this was a mistake. "We had a bunch of those functions and we realized that we had made a mistake, we have given users classes that were different from the built-in object types." That's where the Python team decided to reinvent the whole approach to types in Python and did a bunch of cleanups. So, they changed the function int into a designator for the class int. Now, calling the class means constructing an instance of the class.

James Gosling shared that his focus has always been performance, and one factor in improving performance is the type system. It is really useful for things like building optimizing compilers and doing ahead-of-time correctness checking. Having the type system also helps in cases where you are targeting small-footprint devices.
"To do that kind of compaction you need every kind of help that it gives you, every last drop of information and, the earlier you know it, the better job you do," he adds.

Anders Hejlsberg looks at type systems as a tooling feature. Developers love their IDEs; they are accustomed to things like statement completion, refactoring, and code navigation. These features are enabled by semantic knowledge of your code, and this semantic knowledge is provided by a compiler with a type system. Hejlsberg believes that adding types can dramatically increase the productivity of developers, which is a counterintuitive thought. "We think that dynamic languages were easier to approach because you got rid of the types which were a bother all the time. It turns out that you can actually be more productive by adding types if you do it in a non-intrusive manner and if you work hard on doing good type inference and so forth," he adds.

Talking about the type system in Perl, Wall started off by saying that Perl 5 and Perl 6 have very different type systems. In Perl 5, everything was treated as a string, even if it was a number or a floating point. The team wanted to keep this feature in Perl 6 as part of the redesign, but they realized that "it's fine if the new user is confused about the interchangeability but it's not so good if the computer is confused about things." For Perl 6, Wall and his team envisioned making it both a better object-oriented and a better functional programming language. To achieve this goal, it is important to have a very sound type system and a sound meta-object model underneath, and to take slogans like "everything is an object, everything is a closure" very seriously.

What makes a programming language maintainable

Guido van Rossum believes that to make a programming language maintainable, it is important to hit the right balance between the flexible and the disciplined approach. While dynamic typing is great for small programs, large programs require a much more disciplined approach. And it is better if the language itself enables that discipline, rather than giving you the full freedom to do whatever you want. This is why Guido is planning to add a technology very similar to TypeScript to Python. He adds:

"TypeScript is actually incredibly useful and so we're adding a very similar idea to Python. We are adding it in a slightly different way because we have a different context."

Along with a type system, refactoring engines can also prove to be very helpful. They make it easier to perform large-scale refactorings, across millions of lines of code at once. Often, people do not rename methods because it is really hard to go over a piece of code and rename exactly the right variable. With a refactoring engine, you just need to press a couple of buttons and type in the new name, and it will be refactored in maybe just 30 seconds.

The origin of the TypeScript project was these enormous JavaScript codebases. As these codebases became bigger and bigger, they became quite difficult to maintain; they basically became "write-only code", shared Anders Hejlsberg. He adds that this is why we need a semantic understanding of the code, which makes refactoring much easier. "This semantic understanding requires a type system to be in place and once you start adding that you add documentation to the code," added Hejlsberg.
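To make Hejlsberg's point about non-intrusive types and inference concrete, here is a minimal TypeScript sketch (our illustration, not an example from the talk). The only annotations are on the function boundary; everything else is inferred, and the compiler, and therefore the IDE, immediately catches misuse:

// Annotate only the function boundary; the rest is inferred.
function totalPrice(prices: number[], taxRate: number): number {
  // `sum` needs no annotation; it is inferred as number.
  const sum = prices.reduce((acc, p) => acc + p, 0);
  return sum * (1 + taxRate);
}

const basket = [9.99, 4.5, 12.0];        // inferred as number[]
const total = totalPrice(basket, 0.08);  // inferred as number
console.log(total.toFixed(2));           // completion works on `total`

// totalPrice("9.99", 0.08); // rejected: a string is not a number[]

This is the trade Hejlsberg describes: a handful of annotations buys statement completion, refactoring support, and early error detection.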
Wall also supports the same thought: "good lexical scoping helps with refactoring."

The future of programming language design

When asked about the future of programming language design, James Gosling shared that a very underexplored area in programming is writing code for GPUs. He highlighted the fact that we currently do not have any programming language that works like a charm with GPUs, and that much work needs to be done in that area.

Anders Hejlsberg rightly mentioned that programming languages do not move at the same speed as hardware or the other technologies. In terms of evolution, programming languages are more like maths and the human brain. He said, "We're still programming in languages that were invented 50 years ago, all of the principles of functional programming were thought of more than 50 years ago." But he does believe that instead of segregating into separate categories like object-oriented or functional programming, languages are now becoming multi-paradigm:

"Languages are becoming more multi-paradigm. I think it is wrong to talk about oh I only like object-oriented programming, or imperative programming, or functional programming language."

It is now important to be aware of recent research, new thinking, and new paradigms, and to incorporate them into our programming style, but tastefully.

Watch the full talk conducted by PuPPy to learn more.

- Python 3.8 alpha 2 is now available for testing
- ISO C++ Committee announces that C++20 design is now feature complete
- Using lambda expressions in Java 11 [Tutorial]

Mozilla proposes WebAssembly Interface Types to enable language interoperability

Bhagyashree R
23 Aug 2019
4 min read
WebAssembly will soon be able to use the same high-level types in Python, Rust, and Node, says Lin Clark, a Principal Research Engineer at Mozilla, with the help of a new proposal: WebAssembly Interface Types. This proposal aims to add a new set of interface types to WebAssembly that describe high-level values like strings, sequences, records, and variants.

https://twitter.com/linclark/status/1164206550010884096

Why WebAssembly Interface Types matter

Mozilla and many other companies have been putting their efforts into bringing WebAssembly outside the browser, with projects like WASI and Fastly's Lucet. Developers also want to run WebAssembly from different source languages, like Python, Ruby, and Rust. Clark believes there are three reasons why developers want to do that. First, it allows them to easily use native modules and deliver better speed to their application users. Second, they can use WebAssembly to sandbox native code for better security. Third, they can save time and maintenance costs by sharing native code across platforms.

However, this "cross-language integration" is currently very complicated. The problem is that WebAssembly today only supports numbers, so things become difficult in cases like passing a string between JS and WebAssembly: you first have to convert the string into an array of numbers, and then convert those numbers back into a string on the other side. "This means the two languages can call each other's functions. But if a function takes or returns anything besides numbers, things get complicated," Clark explains. To get past this hurdle, you either need to write "a really hard-to-use API that only speaks in numbers" or "add glue code for every single environment you want this module to run in."

This is why Clark and her team have come up with WebAssembly Interface Types. They will allow WebAssembly modules to interoperate with modules running in their own native runtimes and with other WebAssembly modules written in different source languages, as well as talk directly with host systems. All of this will be achieved using rich APIs and complex types.

(Diagram omitted. Source: Mozilla)

WebAssembly Interface Types are different from the types we have in WebAssembly today, and no new operations will be added to WebAssembly because of them. All the operations will be performed on the concrete types on both communicating sides. Explaining how this will work, Clark wrote, "There's one key point that makes this possible: with interface types, the two sides aren't trying to share a representation. Instead, the default is to copy values between one side and the other."
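To make the conversion problem Clark describes concrete, here is a minimal TypeScript sketch of the kind of glue code currently needed to pass a string into a Wasm module that only understands numbers. The module's alloc and greet exports and their signatures are hypothetical, made up purely for illustration; WebAssembly.instantiate, TextEncoder, and linear-memory access are the real JS APIs involved:

// Hypothetical module exports (alloc, greet), for illustration only.
async function callWasmWithString(wasmBytes: BufferSource, text: string) {
  const { instance } = await WebAssembly.instantiate(wasmBytes);
  const exports = instance.exports as {
    memory: WebAssembly.Memory;
    alloc: (len: number) => number;            // assumed allocator export
    greet: (ptr: number, len: number) => void; // assumed string consumer
  };
  // A string cannot cross the boundary directly, so encode it to bytes,
  const bytes = new TextEncoder().encode(text);
  // reserve space in the module's linear memory,
  const ptr = exports.alloc(bytes.length);
  // copy the bytes in, and pass (pointer, length) as plain numbers.
  new Uint8Array(exports.memory.buffer, ptr, bytes.length).set(bytes);
  exports.greet(ptr, bytes.length);
}

With interface types, both sides could instead declare that the value is a string and let the values be copied across automatically, making this per-environment glue unnecessary.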
What WebAssembly developers think about this proposal

The news sparked a discussion on Hacker News. One user commented that this could prevent a lot of rewrites and duplication in the future: "I'm very happy to see the WebIDL proposal replaced with something generalized. The article brings up an interesting point: WebAssembly really could enable seamless cross-language integration in the future. Writing a project in Rust, but really want to use that popular face detector written in Python? And maybe the niche language tokenizer written in PHP? And sprinkle ffmpeg on top, without the hassle of target-compatible compilation and worrying about use after free vulnerabilities? No problem: use one of the many WASM runtimes popping up and combine all those libraries by using their pre-compiled WASM packages distributed on a package repo like WAPM, with auto-generated bindings that provide a decent API from your host language."

Another user added, "Of course, cross-language interfaces will always have tradeoffs. But we see Interface Types extending the space where the tradeoffs are worthwhile, especially in combination with wasm's sandboxing."

Some users are also unsure that this will actually work in practice. Here's what one Reddit user said: "I wonder how well this will work in practice. Effectively, this is attempting to be universal language interop, and that is a bold goal. I suspect this will never work for complicated object graphs; maybe this is for numbers and strings only. I wonder if something like protobuf wouldn't actually be better. It looked from the graphics that memory is still copied anyway (which makes sense, e.g. going from a cstring to a java string), but this is still marshalling. Maybe you can skip this in some cases, but is that important enough to hinge the design on?"

To get a deeper understanding of WebAssembly Interface Types, watch the explainer video by Mozilla: https://www.youtube.com/watch?time_continue=17&v=Qn_4F3foB3Q. Also, check out Lin Clark's article, WebAssembly Interface Types: Interoperate with All the Things.

- Wasmer introduces WebAssembly Interfaces for validating the imports and exports of a Wasm module
- Fastly CTO Tyler McMullen on Lucet and the future of WebAssembly and Rust [Interview]
- LLVM WebAssembly backend will soon become Emscripten's default backend, V8 announces

Parallel Programming Patterns

Packt
25 Nov 2013
22 min read
Patterns in programming are concrete, standard solutions to given problems. Usually, programming patterns are the result of people gathering experience, analyzing common problems, and providing solutions to them. Since parallel programming has existed for quite a long time, there are many different patterns for programming parallel applications. There are even special programming languages that make programming of specific parallel algorithms easier. However, this is where things start to become increasingly complicated. In this article, I will provide a starting point from which you will be able to study parallel programming further. We will review very basic, yet very useful, patterns that are quite helpful for many common situations in parallel programming.

The first recipe is about using a shared-state object from multiple threads. I would like to emphasize that you should avoid shared state as much as possible. Shared state is really bad when you write parallel algorithms, but on many occasions it is inevitable. We will find out how to delay the actual computation of an object until it is needed, and how to implement different scenarios to achieve thread safety. The next two recipes show how to create a structured parallel data flow. We will review a concrete case of the producer/consumer pattern, which is called Parallel Pipeline. We are going to implement it using a blocking collection first, and then see how helpful TPL DataFlow, another library from Microsoft for parallel programming, can be. The last pattern we will study is the Map/Reduce pattern. In the modern world, this name can mean very different things. Some people consider map/reduce not as a common approach to any problem but as a concrete implementation for large, distributed cluster computations. We will find out the meaning behind the name of this pattern and review some examples of how it might work in the case of small parallel applications.

Implementing Lazy-evaluated shared states

This recipe shows how to program a lazy-evaluated, thread-safe shared state object.

Getting ready

To start with this recipe, you will need a running Visual Studio 2012. There are no other prerequisites. The source code for this recipe can be found on the Packt site.

How to do it...

For implementing lazy-evaluated shared states, perform the following steps:

1. Start Visual Studio 2012.
2. Create a new C# Console Application project.
3. In the Program.cs file, add the following using directives:

using System;
using System.Threading;
using System.Threading.Tasks;

4. Add the following code snippet below the Main method:

static async Task ProcessAsynchronously()
{
    var unsafeState = new UnsafeState();
    Task[] tasks = new Task[4];

    for (int i = 0; i < 4; i++)
    {
        tasks[i] = Task.Run(() => Worker(unsafeState));
    }
    await Task.WhenAll(tasks);
    Console.WriteLine(" --------------------------- ");

    var firstState = new DoubleCheckedLocking();
    for (int i = 0; i < 4; i++)
    {
        tasks[i] = Task.Run(() => Worker(firstState));
    }
    await Task.WhenAll(tasks);
    Console.WriteLine(" --------------------------- ");

    var secondState = new BCLDoubleChecked();
    for (int i = 0; i < 4; i++)
    {
        tasks[i] = Task.Run(() => Worker(secondState));
    }
    await Task.WhenAll(tasks);
    Console.WriteLine(" --------------------------- ");

    var thirdState = new Lazy<ValueToAccess>(Compute);
    for (int i = 0; i < 4; i++)
    {
        tasks[i] = Task.Run(() => Worker(thirdState));
    }
    await Task.WhenAll(tasks);
    Console.WriteLine(" --------------------------- ");

    var fourthState = new BCLThreadSafeFactory();
    for (int i = 0; i < 4; i++)
    {
        tasks[i] = Task.Run(() => Worker(fourthState));
    }
    await Task.WhenAll(tasks);
    Console.WriteLine(" --------------------------- ");
}

static void Worker(IHasValue state)
{
    Console.WriteLine("Worker runs on thread id {0}", Thread.CurrentThread.ManagedThreadId);
    Console.WriteLine("State value: {0}", state.Value.Text);
}

static void Worker(Lazy<ValueToAccess> state)
{
    Console.WriteLine("Worker runs on thread id {0}", Thread.CurrentThread.ManagedThreadId);
    Console.WriteLine("State value: {0}", state.Value.Text);
}

static ValueToAccess Compute()
{
    Console.WriteLine("The value is being constructed on a thread id {0}", Thread.CurrentThread.ManagedThreadId);
    Thread.Sleep(TimeSpan.FromSeconds(1));
    return new ValueToAccess(string.Format("Constructed on thread id {0}", Thread.CurrentThread.ManagedThreadId));
}

class ValueToAccess
{
    private readonly string _text;

    public ValueToAccess(string text)
    {
        _text = text;
    }

    public string Text { get { return _text; } }
}

class UnsafeState : IHasValue
{
    private ValueToAccess _value;

    public ValueToAccess Value
    {
        get
        {
            if (_value == null)
            {
                _value = Compute();
            }
            return _value;
        }
    }
}

class DoubleCheckedLocking : IHasValue
{
    private object _syncRoot = new object();
    private volatile ValueToAccess _value;

    public ValueToAccess Value
    {
        get
        {
            if (_value == null)
            {
                lock (_syncRoot)
                {
                    if (_value == null) _value = Compute();
                }
            }
            return _value;
        }
    }
}

class BCLDoubleChecked : IHasValue
{
    private object _syncRoot = new object();
    private ValueToAccess _value;
    private bool _initialized = false;

    public ValueToAccess Value
    {
        get
        {
            return LazyInitializer.EnsureInitialized(
                ref _value, ref _initialized, ref _syncRoot, Compute);
        }
    }
}

class BCLThreadSafeFactory : IHasValue
{
    private ValueToAccess _value;

    public ValueToAccess Value
    {
        get
        {
            return LazyInitializer.EnsureInitialized(ref _value, Compute);
        }
    }
}

interface IHasValue
{
    ValueToAccess Value { get; }
}

5. Add the following code snippet inside the Main method:

var t = ProcessAsynchronously();
t.GetAwaiter().GetResult();
Console.WriteLine("Press ENTER to exit");
Console.ReadLine();

6. Run the program.

How it works...

The first example shows why it is not safe to use the UnsafeState object with multiple accessing threads. We see that the Compute method was called several times, and different threads use different values, which is obviously not right.
To fix this, we can use a lock when reading the value and, if it is not initialized, create it first. This will work, but using a lock with every read operation is not efficient. To avoid locking every time, there is a traditional approach called the double-checked locking pattern. We check the value for the first time and, if it is not null, we avoid unnecessary locking and just use the shared object. However, if it has not been constructed yet, we take the lock and then check the value a second time, because it could have been initialized between our first check and the lock operation. If it is still not initialized, only then do we compute the value. We can clearly see that this approach works with the second example: there is only one call to the Compute method, and the first-called thread defines the shared object's state.

Please note that if the lazy-evaluated object implementation is thread-safe, it does not automatically mean that all its properties are thread-safe as well. If you add, for example, an int public property to the ValueToAccess object, it will not be thread-safe; you still have to use interlocked constructs or locking to ensure thread safety.

This pattern is very common, which is why there are several classes in the Base Class Library to help us. First, we can use the LazyInitializer.EnsureInitialized method, which implements the double-checked locking pattern inside. However, the most convenient option is to use the Lazy<T> class, which gives us a thread-safe, lazy-evaluated shared state out of the box. The next two examples show that they are equivalent to the second one, and the program behaves the same. The only difference is that, since LazyInitializer is a static class, we do not have to create a new instance of a class as we do in the case of Lazy<T>, and therefore the performance of the former will be better in some scenarios.

The last option is to avoid locking at all, if we do not care how many times the Compute method runs. If it is thread-safe and has no side effects and/or serious performance impacts, we can just run it several times but use only the first constructed value. The last example shows this behavior, and we can achieve this result by using another LazyInitializer.EnsureInitialized method overload.

Implementing Parallel Pipeline with BlockingCollection

This recipe describes how to implement a specific scenario of the producer/consumer pattern, called Parallel Pipeline, using the standard BlockingCollection data structure.

Getting ready

To begin this recipe, you will need a running Visual Studio 2012. There are no other prerequisites. The source code for this recipe can be found on the Packt site.

How to do it...

To understand how to implement Parallel Pipeline using BlockingCollection, perform the following steps:

1. Start Visual Studio 2012.
2. Create a new C# Console Application project.
3. In the Program.cs file, add the following using directives:

using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

4. Add the following code snippet below the Main method:

private const int CollectionsNumber = 4;
private const int Count = 10;

class PipelineWorker<TInput, TOutput>
{
    Func<TInput, TOutput> _processor = null;
    Action<TInput> _outputProcessor = null;
    BlockingCollection<TInput>[] _input;
    CancellationToken _token;

    public PipelineWorker(
        BlockingCollection<TInput>[] input,
        Func<TInput, TOutput> processor,
        CancellationToken token,
        string name)
    {
        _input = input;
        Output = new BlockingCollection<TOutput>[_input.Length];
        for (int i = 0; i < Output.Length; i++)
            Output[i] = null == input[i] ? null : new BlockingCollection<TOutput>(Count);
        _processor = processor;
        _token = token;
        Name = name;
    }

    public PipelineWorker(
        BlockingCollection<TInput>[] input,
        Action<TInput> renderer,
        CancellationToken token,
        string name)
    {
        _input = input;
        _outputProcessor = renderer;
        _token = token;
        Name = name;
        Output = null;
    }

    public BlockingCollection<TOutput>[] Output { get; private set; }
    public string Name { get; private set; }

    public void Run()
    {
        Console.WriteLine("{0} is running", this.Name);
        while (!_input.All(bc => bc.IsCompleted) && !_token.IsCancellationRequested)
        {
            TInput receivedItem;
            int i = BlockingCollection<TInput>.TryTakeFromAny(_input, out receivedItem, 50, _token);
            if (i >= 0)
            {
                if (Output != null)
                {
                    TOutput outputItem = _processor(receivedItem);
                    BlockingCollection<TOutput>.AddToAny(Output, outputItem);
                    Console.WriteLine("{0} sent {1} to next, on thread id {2}", Name, outputItem, Thread.CurrentThread.ManagedThreadId);
                    Thread.Sleep(TimeSpan.FromMilliseconds(100));
                }
                else
                {
                    _outputProcessor(receivedItem);
                }
            }
            else
            {
                Thread.Sleep(TimeSpan.FromMilliseconds(50));
            }
        }
        if (Output != null)
        {
            foreach (var bc in Output) bc.CompleteAdding();
        }
    }
}

5. Add the following code snippet inside the Main method:

var cts = new CancellationTokenSource();

Task.Run(() =>
{
    if (Console.ReadKey().KeyChar == 'c') cts.Cancel();
});

var sourceArrays = new BlockingCollection<int>[CollectionsNumber];
for (int i = 0; i < sourceArrays.Length; i++)
{
    sourceArrays[i] = new BlockingCollection<int>(Count);
}

var filter1 = new PipelineWorker<int, decimal>(
    sourceArrays,
    (n) => Convert.ToDecimal(n * 0.97),
    cts.Token,
    "filter1");

var filter2 = new PipelineWorker<decimal, string>(
    filter1.Output,
    (s) => String.Format("--{0}--", s),
    cts.Token,
    "filter2");

var filter3 = new PipelineWorker<string, string>(
    filter2.Output,
    (s) => Console.WriteLine("The final result is {0} on thread id {1}", s, Thread.CurrentThread.ManagedThreadId),
    cts.Token,
    "filter3");

try
{
    Parallel.Invoke(
        () =>
        {
            Parallel.For(0, sourceArrays.Length * Count, (j, state) =>
            {
                if (cts.Token.IsCancellationRequested)
                {
                    state.Stop();
                }
                int k = BlockingCollection<int>.TryAddToAny(sourceArrays, j);
                if (k >= 0)
                {
                    Console.WriteLine("added {0} to source data on thread id {1}", j, Thread.CurrentThread.ManagedThreadId);
                    Thread.Sleep(TimeSpan.FromMilliseconds(100));
                }
            });
            foreach (var arr in sourceArrays)
            {
                arr.CompleteAdding();
            }
        },
        () => filter1.Run(),
        () => filter2.Run(),
        () => filter3.Run());
}
catch (AggregateException ae)
{
    foreach (var ex in ae.InnerExceptions)
        Console.WriteLine(ex.Message + ex.StackTrace);
}

if (cts.Token.IsCancellationRequested)
{
    Console.WriteLine("Operation has been canceled! Press ENTER to exit.");
}
else
{
    Console.WriteLine("Press ENTER to exit.");
}
Console.ReadLine();

6. Run the program.
How it works...

In the preceding example, we implemented one of the most common parallel programming scenarios. Imagine that we have some data that has to pass through several computation stages, each of which takes a significant amount of time. The latter computation requires the results of the former, so we cannot run them in parallel. If we had only one item to process, there would not be many possibilities for enhancing performance. However, if we run many items through the same set of computation stages, we can use the Parallel Pipeline technique. This means that we do not have to wait until all items pass through the first computation stage before going to the next one. As soon as one item finishes a stage, we move it to the next stage; meanwhile, the next item is processed by the previous stage, and so on. As a result, we almost have parallel processing, shifted by the time required for the first item to pass through the first computation stage. Here, we use four collections for each processing stage, illustrating that we can process every stage in parallel as well.

The first thing we do is provide the possibility to cancel the whole process by pressing the C key. We create a cancellation token and run a separate task to monitor the C key. Then, we define our pipeline. It consists of three main stages. The first stage is where we put the initial numbers on the first four collections that serve as the item source for the rest of the pipeline. This code is inside the Parallel.For loop, which in turn is inside the Parallel.Invoke statement; as we run all the stages in parallel, the initial stage runs in parallel as well.

The next stage is defining our pipeline elements. The logic is defined inside the PipelineWorker class. We initialize the worker with the input collection, provide a transformation function, and then run the worker in parallel with the other workers. This way we define two workers, or filters, because they filter the initial sequence. One of them turns an integer into a decimal value, and the second one turns a decimal into a string. Finally, the last worker just prints every incoming string to the console. Everywhere, we print the running thread ID to see how everything works. Besides this, we added artificial delays, so that processing the items looks more natural, as if we were really using heavy computations.

As a result, we see exactly the expected behavior. First, some items are created in the initial collections. Then, we see that the first filter starts to process them and, as they are processed, the second filter starts to work; finally, the item goes to the last worker, which prints it to the console.

Implementing Parallel Pipeline with TPL DataFlow

This recipe shows how to implement the Parallel Pipeline pattern with the help of the TPL DataFlow library.

Getting ready

To start with this recipe, you will need a running Visual Studio 2012. There are no other prerequisites. The source code for this recipe can be found on the Packt site.

How to do it...

To understand how to implement Parallel Pipeline with TPL DataFlow, perform the following steps:

1. Start Visual Studio 2012.
2. Create a new C# Console Application project.
3. Add a reference to the Microsoft TPL DataFlow NuGet package: right-click on the References folder in the project, select the Manage NuGet Packages... menu option, and add the Microsoft TPL DataFlow NuGet package.
You can use the search option in the Manage NuGet Packages dialog to find it.

4. In the Program.cs file, add the following using directives:

using System;
using System.Threading;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

5. Add the following code snippet below the Main method:

async static Task ProcessAsynchronously()
{
    var cts = new CancellationTokenSource();

    Task.Run(() =>
    {
        if (Console.ReadKey().KeyChar == 'c') cts.Cancel();
    });

    var inputBlock = new BufferBlock<int>(
        new DataflowBlockOptions { BoundedCapacity = 5, CancellationToken = cts.Token });

    var filter1Block = new TransformBlock<int, decimal>(
        n =>
        {
            decimal result = Convert.ToDecimal(n * 0.97);
            Console.WriteLine("Filter 1 sent {0} to the next stage on thread id {1}", result, Thread.CurrentThread.ManagedThreadId);
            Thread.Sleep(TimeSpan.FromMilliseconds(100));
            return result;
        },
        new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4, CancellationToken = cts.Token });

    var filter2Block = new TransformBlock<decimal, string>(
        n =>
        {
            string result = string.Format("--{0}--", n);
            Console.WriteLine("Filter 2 sent {0} to the next stage on thread id {1}", result, Thread.CurrentThread.ManagedThreadId);
            Thread.Sleep(TimeSpan.FromMilliseconds(100));
            return result;
        },
        new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4, CancellationToken = cts.Token });

    var outputBlock = new ActionBlock<string>(
        s =>
        {
            Console.WriteLine("The final result is {0} on thread id {1}", s, Thread.CurrentThread.ManagedThreadId);
        },
        new ExecutionDataflowBlockOptions { MaxDegreeOfParallelism = 4, CancellationToken = cts.Token });

    inputBlock.LinkTo(filter1Block, new DataflowLinkOptions { PropagateCompletion = true });
    filter1Block.LinkTo(filter2Block, new DataflowLinkOptions { PropagateCompletion = true });
    filter2Block.LinkTo(outputBlock, new DataflowLinkOptions { PropagateCompletion = true });

    try
    {
        Parallel.For(0, 20, new ParallelOptions { MaxDegreeOfParallelism = 4, CancellationToken = cts.Token },
            i =>
            {
                Console.WriteLine("added {0} to source data on thread id {1}", i, Thread.CurrentThread.ManagedThreadId);
                inputBlock.SendAsync(i).GetAwaiter().GetResult();
            });
        inputBlock.Complete();
        await outputBlock.Completion;
        Console.WriteLine("Press ENTER to exit.");
    }
    catch (OperationCanceledException)
    {
        Console.WriteLine("Operation has been canceled! Press ENTER to exit.");
    }
    Console.ReadLine();
}

6. Add the following code snippet inside the Main method:

var t = ProcessAsynchronously();
t.GetAwaiter().GetResult();

7. Run the program.

How it works...

In the previous recipe, we implemented a Parallel Pipeline pattern to process items through sequential stages. It is quite a common problem, and one of the proposed ways to program such algorithms is using the TPL DataFlow library from Microsoft. It is distributed via NuGet, and is easy to install and use in your application. The TPL DataFlow library contains different types of blocks that can be connected with each other in different ways, forming complicated processes that can be partially parallel and sequential where needed. To see some of the available infrastructure, let's implement the previous scenario with the help of the TPL DataFlow library. First, we define the different blocks that will process our data. Please note that these blocks have different options that can be specified during their construction, and they can be very important. For example, we pass the cancellation token into every block we define, and when we signal cancellation, all of them will stop working.
We start our process with BufferBlock. This block holds items to pass to the next blocks in the flow. We restrict it to a capacity of five items by specifying the BoundedCapacity option value. This means that when there are five items in this block, it will stop accepting new items until one of the existing items passes on to the next blocks. The next block type is TransformBlock. This block is intended for a data transformation step. Here we define two transformation blocks: one of them creates decimals from integers, and the second one creates a string from a decimal value. There is a MaxDegreeOfParallelism option for this block, specifying the maximum number of simultaneous worker threads. The last block is the ActionBlock type. This block runs a specified action on every incoming item. We use this block to print our items to the console.

Now, we link these blocks together with the help of the LinkTo methods. Here we have a simple sequential data flow, but it is possible to create schemes that are more complicated. We also provide DataflowLinkOptions with the PropagateCompletion property set to true. This means that when a step completes, it will automatically propagate its results and exceptions to the next stage. Then we start adding items to the buffer block in parallel, calling the block's Complete method when we finish adding new items. Then we wait for the last block to complete. In case of cancellation, we handle OperationCanceledException and cancel the whole process.

Implementing Map/Reduce with PLINQ

This recipe describes how to implement the Map/Reduce pattern using PLINQ.

Getting ready

To begin with this recipe, you will need a running Visual Studio 2012. There are no other prerequisites. The source code for this recipe can be found on the Packt site.

How to do it...

To understand how to implement Map/Reduce with PLINQ, perform the following steps:

1. Start Visual Studio 2012.
2. Create a new C# Console Application project.
3. In the Program.cs file, add the following using directives:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

4. Add the following code snippet below the Main method:

private static readonly char[] delimiters = Enumerable.Range(0, 256)
    .Select(i => (char)i)
    .Where(c => !char.IsLetterOrDigit(c))
    .ToArray();

private const string textToParse = @"
Call me Ishmael. Some years ago - never mind how long precisely - having
little or no money in my purse, and nothing particular to interest me on
shore, I thought I would sail about a little and see the watery part of
the world. It is a way I have of driving off the spleen, and regulating
the circulation. Whenever I find myself growing grim about the mouth;
whenever it is a damp, drizzly November in my soul; whenever I find myself
involuntarily pausing before coffin warehouses, and bringing up the rear
of every funeral I meet; and especially whenever my hypos get such an
upper hand of me, that it requires a strong moral principle to prevent me
from deliberately stepping into the street, and methodically knocking
people's hats off - then, I account it high time to get to sea as soon
as I can.

― Herman Melville, Moby Dick.
"; Add the following code snippet inside the Main method: var q = textToParse.Split(delimiters) .AsParallel() .MapReduce( s => s.ToLower().ToCharArray() , c => c , g => new[] {new {Char = g.Key, Count = g.Count()}}) .Where(c => char.IsLetterOrDigit(c.Char)) .OrderByDescending( c => c.Count); foreach (var info in q) { Console.WriteLine("Character {0} occured in the text {1}{2}", info.Char, info.Count, info.Count == 1 ? "time" : "times"); } Console.WriteLine(" -------------------------------------------"); const string searchPattern = "en"; var q2 = textToParse.Split(delimiters) .AsParallel() .Where(s => s.Contains(searchPattern)) .MapReduce( s => new [] {s} , s => s , g => new[] {new {Word = g.Key, Count = g.Count()}}) .OrderByDescending(s => s.Count); Console.WriteLine("Words with search pattern '{0}':",searchPattern); foreach (var info in q2) { Console.WriteLine("{0} occured in the text {1} {2}",info.Word, info.Count, info.Count == 1 ? "time" : "times"); } int halfLengthWordIndex = textToParse.IndexOf(' ',textToParse.Length/2); using(var sw = File.CreateText("1.txt")) { sw.Write(textToParse.Substring(0, halfLengthWordIndex)); } using(var sw = File.CreateText("2.txt")) { sw.Write(textToParse.Substring(halfLengthWordIndex)); } string[] paths = new[] { ".\" }; Console.WriteLine(" ------------------------------------------------"); var q3 = paths .SelectMany(p => Directory.EnumerateFiles(p, "*.txt")) .AsParallel() .MapReduce( path => File.ReadLines(path).SelectMany(line =>line.Trim(delimiters).Split (delimiters)),word => string.IsNullOrWhiteSpace(word) ? 't' :word.ToLower()[0], g => new [] { new {FirstLetter = g.Key, Count = g.Count()}}) .Where(s => char.IsLetterOrDigit(s.FirstLetter)) .OrderByDescending(s => s.Count); Console.WriteLine("Words from text files"); foreach (var info in q3) { Console.WriteLine("Words starting with letter '{0}'occured in the text {1} {2}", info.FirstLetter,info.Count, info.Count == 1 ? "time" : "times"); } Add the following code snippet after the Program class definition: static class PLINQExtensions { public static ParallelQuery<TResult> MapReduce<TSource,TMapped, TKey, TResult>( this ParallelQuery<TSource> source, Func<TSource, IEnumerable<TMapped>> map, Func<TMapped, TKey> keySelector, Func<IGrouping<TKey, TMapped>, IEnumerable<TResult>> reduce) { return source.SelectMany(map) .GroupBy(keySelector) .SelectMany(reduce); } } Run the program. How it works... The Map/Reduce functions are another important parallel programming pattern. It is suitable for a small program and large multi-server computations. The meaning of this pattern is that you have two special functions to apply to your data. The first of them is the Map function. It takes a set of initial data in a key/value list form and produces another key/value sequence, transforming the data to the comfortable format for further processing. Then we use another function called Reduce. The Reduce function takes the result of the Map function and transforms it to a smallest possible set of data that we actually need. To understand how this algorithm works, let's look through the recipe. First, we define a relatively large text in the string variable: textToParse. We need this text to run our queries on. Then we define our Map/Reduce implementation as a PLINQ extension method in the PLINQExtensions class. We use SelectMany to transform the initial sequence to the sequence we need by applying the Map function. This function produces several new elements from one sequence element. 
Then we choose how we group the new sequence with the keySelector function, and we use GroupBy with this key to produce an intermediate key/value sequence. The last thing we do is apply Reduce to the resulting grouped sequence to get the result.

In our first example, we split the text into separate words, chop each word into a character sequence with the help of the Map function, and group the result by the character value. The Reduce function finally transforms the sequence into key/value pairs, where we have a character and the number of times it was used in the text, ordered by usage. Therefore, we are able to count each character's appearances in the text in parallel (since we use PLINQ to query the initial data). The next example is quite similar, but now we use PLINQ to filter the sequence, leaving only the words containing our search pattern, and we then get all those words sorted by their usage in the text.

Finally, the last example uses file I/O. We save the sample text on disk, splitting it into two files. Then we define the Map function as producing a number of strings from the directory name: all the words from all the lines in all the text files in the initial directory. Then we group those words by their first letter (filtering out the empty strings) and use Reduce to see which letter is most often used as the first letter of a word in the text. What is nice is that we can easily change this program to be distributed by just using other implementations of the map and reduce functions, and we would still be able to use PLINQ with them to keep our program easy to read and maintain.

Summary

In this article we covered implementing lazy-evaluated shared states, implementing Parallel Pipeline using BlockingCollection and TPL DataFlow, and finally the implementation of Map/Reduce with PLINQ.

Further resources on this subject:

- Simplifying Parallelism Complexity in C# [Article]
- Watching Multiple Threads in C# [Article]
- Low-level C# Practices [Article]

Implementing 5 Common Design Patterns in JavaScript (ES8)

Richa Tripathi
01 May 2018
14 min read
In this tutorial, we'll see how common design patterns can be used as blueprints for organizing larger structures.

Defining steps with template functions

A template is a design pattern that details the order in which a given set of operations is to be executed; however, a template does not outline the steps themselves. This pattern is useful when behavior is divided into phases that have some conceptual or side-effect dependency requiring them to be executed in a specific order. Here, we'll see how to use the template function design pattern. We assume you already have a workspace that allows you to create and run ES modules in your browser for all the recipes given below.

How to do it...

1. Open your command-line application and navigate to your workspace.
2. Create a new folder named 09-01-defining-steps-with-template-functions.
3. Copy or create an index.html file that loads and runs a main function from main.js.
4. Create a main.js file that defines a new abstract class named Mission:

// main.js
class Mission {
  constructor () {
    if (this.constructor === Mission) {
      throw new Error('Mission is an abstract class, must extend');
    }
  }
}

5. Add a method named execute that calls three instance methods: determineDestination, determinePayload, and launch:

// main.js
class Mission {
  execute () {
    this.determineDestination();
    this.determinePayload();
    this.launch();
  }
}

6. Create a LunarRover class that extends the Mission class, with a constructor that assigns name to an instance property:

// main.js
class LunarRover extends Mission {
  constructor (name) {
    super();
    this.name = name;
  }
}

7. Implement the three methods called by Mission.execute:

// main.js
class LunarRover extends Mission {
  determineDestination () {
    this.destination = 'Oceanus Procellarum';
  }
  determinePayload () {
    this.payload = 'Rover with camera and mass spectrometer.';
  }
  launch () {
    console.log(`
      Destination: ${this.destination}
      Payload: ${this.payload}
      Launched! Rover will arrive in a week.
    `);
  }
}

8. Create a JovianOrbiter class that also extends the Mission class:

// main.js
class JovianOrbiter extends Mission {
  constructor (name) {
    super();
    this.name = name;
  }
  determineDestination () {
    this.destination = 'Jovian Orbit';
  }
  determinePayload () {
    this.payload = 'Orbiter with descent module.';
  }
  launch () {
    console.log(`
      Destination: ${this.destination}
      Payload: ${this.payload}
      Launched! Orbiter will arrive in 7 years.
    `);
  }
}

9. Create a main function that creates both concrete mission types and executes them:

// main.js
export function main() {
  const jadeRabbit = new LunarRover('Jade Rabbit');
  jadeRabbit.execute();
  const galileo = new JovianOrbiter('Galileo');
  galileo.execute();
}

10. Start your Python web server and open the following link in your browser: http://localhost:8000/.

How it works...

The Mission abstract class defines the execute method, which calls the other instance methods in a particular order. You'll notice that the methods it calls are not defined by the Mission class itself. This implementation detail is the responsibility of the extending classes. This use of abstract classes allows child classes to be used by code that takes advantage of the interface defined by the abstract class. In the template function pattern, it is the responsibility of the child classes to define the steps. When they are instantiated and the execute method is called, those steps are then performed in the specified order.
Ideally, we'd be able to ensure that Mission.execute was not overridden by any inheriting classes. Overriding this method works against the pattern and breaks the contract associated with it. This pattern is useful for organizing data-processing pipelines: the guarantee that these steps will occur in a given order means that, if side effects are eliminated, the instances can be organized more flexibly, and the implementing class can then organize these steps in the best possible way.

Assembling customized instances with builders

The previous recipe showed how to organize the operations of a class. Sometimes, object initialization can also be complicated. In these situations, it can be useful to take advantage of another design pattern: builders. Now, we'll see how to use builders to organize the initialization of more complicated objects.

How to do it...

1. Open your command-line application and navigate to your workspace.
2. Create a new folder named 09-02-assembling-instances-with-builders.
3. Create a main.js file that defines a new class named Mission, which takes a name constructor argument and assigns it to an instance property. Also, create a describe method that prints out some details:

// main.js
class Mission {
  constructor (name) {
    this.name = name;
  }
  describe () {
    console.log(`
      The ${this.name} mission will be launched by a ${this.rocket.name} rocket,
      and deliver a ${this.payload.name} to ${this.destination.name}.
    `);
  }
}

4. Create classes named Destination, Payload, and Rocket, which receive a name property as a constructor parameter and assign it to an instance property:

// main.js
class Destination {
  constructor (name) {
    this.name = name;
  }
}

class Payload {
  constructor (name) {
    this.name = name;
  }
}

class Rocket {
  constructor (name) {
    this.name = name;
  }
}

5. Create a MissionBuilder class that defines the setMissionName, setDestination, setPayload, and setRocket methods:

// main.js
class MissionBuilder {
  setMissionName (name) {
    this.missionName = name;
    return this;
  }
  setDestination (destination) {
    this.destination = destination;
    return this;
  }
  setPayload (payload) {
    this.payload = payload;
    return this;
  }
  setRocket (rocket) {
    this.rocket = rocket;
    return this;
  }
}

6. Add a build method that creates a new Mission instance with the appropriate properties:

// main.js
class MissionBuilder {
  build () {
    const mission = new Mission(this.missionName);
    mission.rocket = this.rocket;
    mission.destination = this.destination;
    mission.payload = this.payload;
    return mission;
  }
}

7. Create a main function that uses MissionBuilder to create a new mission instance:

// main.js
export function main() {
  // build and describe a mission
  new MissionBuilder()
    .setMissionName('Jade Rabbit')
    .setDestination(new Destination('Oceanus Procellarum'))
    .setPayload(new Payload('Lunar Rover'))
    .setRocket(new Rocket('Long March 3B Y-23'))
    .build()
    .describe();
}

8. Start your Python web server and open the following link in your browser: http://localhost:8000/.

How it works...

The builder defines methods for assigning all the relevant properties, and defines a build method that ensures that each is called and assigned appropriately. Builders are like template functions, but instead of ensuring that a set of operations is executed in the correct order, they ensure that an instance is properly configured before it is returned. Because each instance method of MissionBuilder returns the this reference, the methods can be chained.
The last line of the main function calls describe on the new Mission instance that is returned from the build method.

Replicating instances with factories

Like builders, factories are a way of organizing object construction. They differ from builders in how they are organized. Often, the interface of a factory is a single function call. This makes factories easier to use, if less customizable, than builders. Now, we'll see how to use factories to easily replicate instances.

How to do it...

1. Open your command-line application and navigate to your workspace.
2. Create a new folder named 09-03-replicating-instances-with-factories.
3. Copy or create an index.html that loads and runs a main function from main.js.
4. Create a main.js file that defines a new class named Mission. Add a constructor that takes a name constructor argument and assigns it to an instance property. Also, define a simple describe method:

// main.js
class Mission {
  constructor (name) {
    this.name = name;
  }
  describe () {
    console.log(`
      The ${this.name} mission will be launched by a ${this.rocket.name} rocket,
      and deliver a ${this.payload.name} to ${this.destination.name}.
    `);
  }
}

5. Create three classes named Destination, Payload, and Rocket, which take name as a constructor argument and assign it to an instance property:

// main.js
class Destination {
  constructor (name) {
    this.name = name;
  }
}

class Payload {
  constructor (name) {
    this.name = name;
  }
}

class Rocket {
  constructor (name) {
    this.name = name;
  }
}

6. Create a MarsMissionFactory object with a single create method that takes two arguments: name and rocket. This method should create a new Mission using those arguments:

// main.js
const MarsMissionFactory = {
  create (name, rocket) {
    const mission = new Mission(name);
    mission.destination = new Destination('Martian surface');
    mission.payload = new Payload('Mars rover');
    mission.rocket = rocket;
    return mission;
  }
}

7. Create a main method that creates and describes two similar missions:

// main.js
export function main() {
  // build and describe two missions
  MarsMissionFactory
    .create('Curiosity', new Rocket('Atlas V'))
    .describe();
  MarsMissionFactory
    .create('Spirit', new Rocket('Delta II'))
    .describe();
}

8. Start your Python web server and open the following link in your browser: http://localhost:8000/.

How it works...

The create method takes a subset of the properties needed to create a new mission. The remaining values are provided by the method itself. This allows factories to simplify the process of creating similar instances. In the main function, you can see that two Mars missions have been created, differing only in name and Rocket instance. We've halved the number of values needed to create an instance. This pattern can help reduce instantiation logic. In this recipe, we simplified the creation of different kinds of missions by identifying the common attributes, encapsulating those in the body of the factory function, and using arguments to supply the remaining properties. In this way, commonly used instance shapes can be created without additional boilerplate code.

Processing a structure with the visitor pattern

The patterns we've seen thus far organize the construction of objects and the execution of operations. The next pattern we'll look at is specially made to traverse and perform operations on hierarchical structures. Here, we'll be looking at the visitor pattern.

How to do it...

1. Open your command-line application and navigate to your workspace.
2. Copy the 09-02-assembling-instances-with-builders folder to a new 09-04-processing-a-structure-with-the-visitor-pattern directory.
3. Add a class named MissionInspector to main.js. Create a visit method that calls a corresponding method for each of the following types: Mission, Destination, Rocket, and Payload:

// main.js
/* visitor that inspects a mission */
class MissionInspector {
  visit (element) {
    if (element instanceof Mission) {
      this.visitMission(element);
    } else if (element instanceof Destination) {
      this.visitDestination(element);
    } else if (element instanceof Rocket) {
      this.visitRocket(element);
    } else if (element instanceof Payload) {
      this.visitPayload(element);
    }
  }
}

4. Create a visitMission method that logs out an ok message:

// main.js
class MissionInspector {
  visitMission (mission) {
    console.log('Mission ok');
    mission.describe();
  }
}

5. Create a visitDestination method that throws an error if the destination is not in an approved list:

// main.js
class MissionInspector {
  visitDestination (destination) {
    const name = destination.name.toLowerCase();
    if (
      name === 'mercury' ||
      name === 'venus' ||
      name === 'earth' ||
      name === 'moon' ||
      name === 'mars'
    ) {
      console.log('Destination: ', name, ' approved');
    } else {
      throw new Error('Destination: "' + name + '" not approved at this time');
    }
  }
}

6. Create a visitPayload method that throws an error if the payload isn't valid:

// main.js
class MissionInspector {
  visitPayload (payload) {
    const name = payload.name.toLowerCase();
    const payloadExpr = /(orbiter)|(rover)/;
    if (payloadExpr.test(name)) {
      console.log('Payload: ', name, ' approved');
    } else {
      throw new Error('Payload: "' + name + '" not approved at this time');
    }
  }
}

7. Create a visitRocket method that logs out an ok message:

// main.js
class MissionInspector {
  visitRocket (rocket) {
    console.log('Rocket: ', rocket.name, ' approved');
  }
}

8. Add an accept method to the Mission class that calls accept on its constituents, then tells the visitor to visit the current instance:

// main.js
class Mission {
  // other mission code ...
  accept (visitor) {
    this.rocket.accept(visitor);
    this.payload.accept(visitor);
    this.destination.accept(visitor);
    visitor.visit(this);
  }
}

9. Add an accept method to the Destination class that tells the visitor to visit the current instance:

// main.js
class Destination {
  // other destination code ...
  accept (visitor) {
    visitor.visit(this);
  }
}

10. Add an accept method to the Payload class that tells the visitor to visit the current instance:

// main.js
class Payload {
  // other payload code ...
  accept (visitor) {
    visitor.visit(this);
  }
}

11. Add an accept method to the Rocket class that tells the visitor to visit the current instance:

// main.js
class Rocket {
  // other rocket code ...
  accept (visitor) {
    visitor.visit(this);
  }
}

12. Create a main function that creates different instances with the builder, visits them with a MissionInspector instance, and logs out any thrown errors:

// main.js
export function main() {
  const jadeRabbit = new MissionBuilder()
    .setMissionName('Jade Rabbit')
    .setDestination(new Destination('Moon'))
    .setPayload(new Payload('Lunar Rover'))
    .setRocket(new Rocket('Long March 3B Y-23'))
    .build();

  const curiosity = new MissionBuilder()
    .setMissionName('Curiosity')
    .setDestination(new Destination('Mars'))
    .setPayload(new Payload('Mars Rover'))
    .setRocket(new Rocket('Delta II'))
    .build();

  // expect error from Destination
  const buzz = new MissionBuilder()
    .setMissionName('Buzz Lightyear')
    .setDestination(new Destination('Too Infinity And Beyond'))
    .setPayload(new Payload('Interstellar Orbiter'))
    .setRocket(new Rocket('Self Propelled'))
    .build();

  // expect error from Payload
  const terraformer = new MissionBuilder()
    .setMissionName('Mars Terraformer')
    .setDestination(new Destination('Mars'))
    .setPayload(new Payload('Terraformer'))
    .setRocket(new Rocket('Light Sail'))
    .build();

  const inspector = new MissionInspector();

  [jadeRabbit, curiosity, buzz, terraformer].forEach((mission) => {
    try {
      mission.accept(inspector);
    } catch (e) {
      console.error(e);
    }
  });
}

13. Start your Python web server and open the following link in your browser: http://localhost:8000/.

How it works...

The visitor pattern has two components. The visitor processes the subject objects, and the subjects tell other related subjects about the visitor and decide when the current subject should be visited. The accept method is required for each subject so that it can receive a notification that there is a visitor. That method then makes two types of method call. The first is the accept method on its related subjects. The second is the visit method on the visitor. In this way, the visitor traverses a structure by being passed around by the subjects. The visit methods are used to process different types of node. In some languages, this is handled by language-level polymorphism; in JavaScript, we can use runtime type checks to do this. The visitor pattern is a good option for processing hierarchical structures of objects, where the structure is not known ahead of time, but the types of subjects are known.

Using a singleton to manage instances

Sometimes, there are objects that are resource intensive. They may require time, memory, battery power, or network usage that is unavailable or inconvenient. It is often useful to manage the creation and sharing of instances. Here, we'll see how to use singletons to manage instances.

How to do it...

1. Open your command-line application and navigate to your workspace.
2. Create a new folder named 09-05-singleton-to-manage-instances.
3. Copy or create an index.html that loads and runs a main function from main.js.
4. Create a main.js file that defines a new class named Rocket. Add a constructor that takes a name constructor argument and assigns it to an instance property:

// main.js
class Rocket {
  constructor (name) {
    this.name = name;
  }
}

5. Create a RocketManager object that has a rockets property. Add a findOrCreate method that indexes Rocket instances by the name property:

// main.js
const RocketManager = {
  rockets: {},
  findOrCreate (name) {
    const rocket = this.rockets[name] || new Rocket(name);
    this.rockets[name] = rocket;
    return rocket;
  }
}

6. Create a main function that creates instances with and without the manager.
Compare the instances and see whether they are identical:

// main.js
export function main() {
  const atlas = RocketManager.findOrCreate('Atlas V');
  const atlasCopy = RocketManager.findOrCreate('Atlas V');
  const atlasClone = new Rocket('Atlas V');
  console.log('Copy is the same: ', atlas === atlasCopy);
  console.log('Clone is the same: ', atlas === atlasClone);
}

7. Start your Python web server and open the following link in your browser: http://localhost:8000/.

How it works...

The object stores references to the instances, indexed by the string value given as name. This map is created when the module loads, so it persists through the life of the program. The singleton is then able to look up the object, and findOrCreate returns the same instance for calls made with the same name. Conserving resources and simplifying communication are the primary motivations for using singletons. Creating a single object for multiple uses is more efficient, in terms of the space and time needed, than creating several. Plus, having single instances through which messages are communicated makes communication between different parts of a program easier. Singletons may require more sophisticated indexing if they rely on more complicated data.

You read an excerpt from a book written by Ross Harrison, titled ECMAScript Cookbook. This book contains over 70 recipes to help you improve your coding skills and solve practical JavaScript problems.

- 6 JavaScript micro optimizations you need to know
- Mozilla is building a bridge between Rust and JavaScript
- Behavior Scripting in C# and Javascript for game developers
Why do we need Design Patterns?

Packt
10 Nov 2016
16 min read
In this article by Praseed Pai and Shine Xavier, authors of the book .NET Design Patterns, we will try to understand the necessity of choosing a pattern-based approach to software development. We start with some principles of software development, which one might find useful while undertaking large projects. The working example in the article starts with a requirements specification and progresses towards a preliminary implementation. We will then try to iteratively improve the solution using patterns and idioms, and come up with a good design that supports a well-defined programming interface. In this process, we will learn about some software development principles one can adhere to, including the following:

- SOLID principles for OOP
- Three key uses of design patterns
- Arlow/Nuestadt archetype patterns
- Entity, value, and data transfer objects
- Leveraging the .NET Reflection API for plug-in architecture

Some principles of software development

Writing quality production code consistently is not easy without some foundational principles under your belt. The purpose of this section is to whet the developer's appetite; towards the end, some references are given for detailed study. Detailed coverage of these principles warrants a separate book of its own. The authors have tried to assimilate the following key principles of software development, which help one write quality code:

- KISS: Keep it simple, Stupid
- DRY: Don't repeat yourself
- YAGNI: You aren't gonna need it
- Low coupling: Minimize coupling between classes
- SOLID principles: Principles for better OOP

The maxim Keep it simple, Stupid (KISS) echoes William of Ockham's law of parsimony. In programming terms, it can be translated as "writing code in a straightforward manner, focusing on a particular solution that solves the problem at hand". This maxim is important because, most often, developers fall into the trap of writing code in a generic manner for unwarranted extensibility. Even though it initially looks attractive, things slowly go out of bounds. The accidental complexity introduced in the code base for catering to improbable scenarios often reduces readability and maintainability. The KISS principle can be applied to every human endeavor. Learn more about the KISS principle by consulting the Web.

Don't repeat yourself (DRY) is a maxim which programmers often forget while implementing their domain logic. Most often, in a collaborative development scenario, code gets duplicated inadvertently due to lack of communication and proper design specifications. This bloats the code base, induces subtle bugs, and makes things really difficult to change. By following the DRY maxim at all stages of development, we can avoid additional effort and make the code consistent. The opposite of DRY is write everything twice (WET).

You aren't gonna need it (YAGNI) is a principle that complements the KISS axiom. It serves as a warning for people who try to write code in the most general manner, anticipating changes right from the word go. Too often, in practice, most of this code is never used and merely creates potential code smells.

While writing code, one should try to make sure that there are no hard-coded references to concrete classes. It is advisable to program to an interface as opposed to an implementation. This is a key principle which many patterns use to provide behavior acquisition at runtime.
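To make this concrete, here is a minimal, hypothetical C# sketch of our own (ILogger, ConsoleLogger, FileLogger, and ReportGenerator are illustrative names, not types from the book's case study). The client class depends only on an abstraction, so the concrete behavior can be swapped at runtime without touching it:

using System;

// Hypothetical illustration of programming to an interface
public interface ILogger
{
    void Log(string message);
}

public class ConsoleLogger : ILogger
{
    public void Log(string message) => Console.WriteLine(message);
}

public class FileLogger : ILogger
{
    public void Log(string message) =>
        System.IO.File.AppendAllText("app.log", message + Environment.NewLine);
}

public class ReportGenerator
{
    // No hard-coded reference to a concrete logger class
    private readonly ILogger logger;

    public ReportGenerator(ILogger logger)
    {
        this.logger = logger;
    }

    public void Generate()
    {
        logger.Log("Report generated");
    }
}

public class Program
{
    public static void Main()
    {
        // Swapping in FileLogger requires no change to ReportGenerator;
        // the behavior is acquired at runtime through the interface.
        var generator = new ReportGenerator(new ConsoleLogger());
        generator.Generate();
    }
}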
A dependency injection framework could be used to reduce coupling between classes. SOLID principles are a set of guidelines for writing better object-oriented software. It is a mnemonic acronym that embodies the following five principles:

1. Single Responsibility Principle (SRP): A class should have only one responsibility. If it is doing more than one unrelated thing, we need to split the class.
2. Open/Closed Principle (OCP): A class should be open for extension, closed for modification.
3. Liskov Substitution Principle (LSP): Named after Barbara Liskov, a Turing Award laureate, who postulated that a sub-class (derived class) could substitute any super class (base class) references without affecting the functionality. Even though it looks like stating the obvious, most implementations have quirks which violate this principle.
4. Interface Segregation Principle (ISP): It is more desirable to have multiple interfaces for a class (such classes can also be called components) than one uber interface that forces implementation of all methods (both relevant and non-relevant to the solution context).
5. Dependency Inversion (DI): This is a principle which is very useful for framework design. In the case of frameworks, the client code will be invoked by server code, as opposed to the usual process of the client invoking the server. The main principle here is that abstraction should not depend upon details; rather, details should depend upon abstraction. This is also called the "Hollywood Principle" (do not call us, we will call you back).

The authors consider the preceding five principles primarily as a verification mechanism. This will be demonstrated by verifying the ensuing case study implementations for violation of these principles.

Karl Seguin has written an e-book titled Foundations of Programming – Building Better Software, which covers most of what has been outlined here. Read his book to gain an in-depth understanding of most of these topics. The SOLID principles are well covered in the Wikipedia page on the subject, which can be retrieved from https://en.wikipedia.org/wiki/SOLID_(object-oriented_design). Robert Martin's Agile Principles, Patterns, and Practices in C# is a definitive book on learning about SOLID, as Robert Martin himself is the creator of these principles, even though Michael Feathers coined the acronym.
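To see how one of these principles reads in code, here is a small, hypothetical sketch of the Open/Closed Principle (the IDiscountRule, SeasonalDiscount, LoyaltyDiscount, and PriceCalculator names are ours, not from the book). New pricing rules are added by writing new classes against the abstraction; the calculator itself is never modified:

using System.Collections.Generic;

// Hypothetical illustration of the Open/Closed Principle
public interface IDiscountRule
{
    decimal Apply(decimal amount);
}

public class SeasonalDiscount : IDiscountRule
{
    // 10 percent off
    public decimal Apply(decimal amount) => amount * 0.9m;
}

public class LoyaltyDiscount : IDiscountRule
{
    // flat reduction of 50
    public decimal Apply(decimal amount) => amount - 50m;
}

public class PriceCalculator
{
    // Open for extension (new IDiscountRule implementations),
    // closed for modification (this class never changes).
    public decimal Total(decimal amount, IEnumerable<IDiscountRule> rules)
    {
        foreach (var rule in rules)
        {
            amount = rule.Apply(amount);
        }
        return amount;
    }
}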
Why are patterns required?

According to the authors, the three key advantages of pattern-oriented software development that stand out are as follows:

- A language/platform-agnostic way to communicate about software artifacts
- A tool for refactoring initiatives (targets for refactoring)
- Better API design

With the advent of the pattern movement, the software development community got a canonical language to communicate about software design, architecture, and implementation. Software development is a craft with trade-offs attached to each strategy, and there are multiple ways to develop software. The various pattern catalogs brought some conceptual unification to this cacophony in software development. Most developers around the world today who are worth their salt can understand and speak this language. We believe you will be able to do the same at the end of the article. Fancy yourself stating the following about your recent implementation:

For our tax computation example, we have used the command pattern to handle the computation logic. The commands (handlers) are configured using an XML file, and a factory method takes care of the instantiation of classes on the fly using lazy loading. We cache the commands, and avoid instantiation of more objects by imposing singleton constraints on the invocation. We support the prototype pattern, where command objects can be cloned. The command objects have a base implementation, where concrete command objects use the template method pattern to override methods which are necessary. The command objects are implemented using the design by contracts idiom. The whole mechanism is encapsulated using a Façade class, which acts as an API layer for the application logic. The application logic uses entity objects (reference) to store the taxable entities; attributes like tax parameters are stored as value objects. We use data transfer objects (DTO) to transfer the data from the application layer to the computational layer. The Arlow/Nuestadt-based archetype pattern is the unit of structuring the tax computation logic.

For some developers, the preceding language/platform-independent description of the software being developed is enough to understand the approach taken. This will boost developer productivity (during all phases of the SDLC, including development, maintenance, and support), as the developers will be able to get a good mental model of the code base. Without pattern catalogs, such succinct descriptions of the design or implementation would have been impossible.

In an Agile software development scenario, we develop software in an iterative fashion. Once we reach a certain maturity in a module, developers refactor their code. While refactoring a module, patterns do help in organizing the logic. The case study given next will help you to understand the rationale behind "patterns as refactoring targets".

APIs based on well-defined patterns are easy to use and impose less cognitive load on programmers. The success of the ASP.NET MVC framework, NHibernate, and the APIs for writing HTTP modules and handlers in the ASP.NET pipeline are a few testimonies to this.

Personal income tax computation - A case study

Rather than explaining the advantages of patterns further, the following example will help us see things in action. Computation of annual income tax is a well-known problem domain across the globe. We have chosen an application domain which is well known so that we can focus on the software development issues. The application should receive inputs regarding the demographic profile (UID, Name, Age, Sex, Location) of a citizen and the income details (Basic, DA, HRA, CESS, Deductions) to compute his tax liability. The system should have discriminants based on the demographic profile, and separate logic for senior citizens, juveniles, disabled people, old females, and others. By discriminant we mean that demographic parameters like age, sex, and location should determine the category to which a person belongs, so that category-specific computation can be applied to that individual. As a first iteration, we will implement logic for the senior citizen and ordinary citizen categories. After preliminary discussion, our developer created a prototype screen as shown in the following image:

Archetypes and the business archetype pattern

The legendary Swiss psychologist Carl Gustav Jung created the concept of archetypes to explain fundamental entities which arise from a common repository of human experiences. The concept of archetypes percolated to the software industry from psychology.
The Arlow/Nuestadt patterns describe business archetype patterns like Party, Customer Call, Product, Money, Unit, Inventory, and so on. An example is the Apache Maven archetype, which helps us to generate projects of different natures like J2EE apps, Eclipse plugins, OSGI projects, and so on. The Microsoft patterns & practices group describes archetypes for target builds like Web applications, rich client applications, mobile applications, and service applications. Various domain-specific archetypes can exist in respective contexts as organizing and structuring mechanisms. In our case, we will define some archetypes which are common in the taxation domain. Some of the key archetypes in this domain are:

1. SeniorCitizenFemale: Taxpayers who are female and above the age of 60 years
2. SeniorCitizen: Taxpayers who are male and above the age of 60 years
3. OrdinaryCitizen: Taxpayers who are male or female and above 18 years of age
4. DisabledCitizen: Taxpayers who have any disability
5. MilitaryPersonnel: Taxpayers who are military personnel
6. Juveniles: Taxpayers whose age is less than 18 years

We will use demographic parameters as discriminants to find the archetype which corresponds to the entity. The whole idea of introducing archetypes is to organize the tax computation logic around them. Once we are able to resolve the archetypes, it is easy to locate and delegate the computations corresponding to the archetypes.

Entity, value, and data transfer objects

We are going to create a class which represents a citizen. Since a citizen needs to be uniquely identified, we are going to create an entity object, which is also called a reference object (from the DDD catalog). The universal identifier (UID) of an entity object is the handle by which an application refers to it. Entity objects are not identified by their attributes, as there can be two people with the same name. The ID uniquely identifies an entity object. The definition of an entity object is given as follows:

public class TaxableEntity
{
    public int Id { get; set; }
    public string Name { get; set; }
    public int Age { get; set; }
    public char Sex { get; set; }
    public string Location { get; set; }
    public TaxParamVO taxparams { get; set; }
}

In the preceding class definition, Id uniquely identifies the entity object. TaxParams is a value object (from the DDD catalog) associated with the entity object. Value objects do not have a conceptual identity. They describe some attributes of things (entities). The definition of TaxParamVO is given as follows:

public class TaxParamVO
{
    public double Basic { get; set; }
    public double DA { get; set; }
    public double HRA { get; set; }
    public double Allowance { get; set; }
    public double Deductions { get; set; }
    public double Cess { get; set; }
    public double TaxLiability { get; set; }
    public bool Computed { get; set; }
}

Ever since Smalltalk, Model-View-Controller (MVC) has been the most dominant paradigm for structuring applications. The application is split into a model layer (which mostly deals with data), a view layer (which acts as a display layer), and a controller (to mediate between the two). In the Web development scenario, they are physically partitioned across machines. To transfer data between layers, the J2EE pattern catalog identified the data transfer object (DTO). The DTO object is defined as follows:

public class TaxDTO
{
    public int id { get; set; }
    public TaxParamVO taxparams { get; set; }
}

If the layering exists within the same process, we can transfer these objects as-is. If layers are partitioned across processes or systems, we can use XML or JSON serialization to transfer objects between the layers.
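The ViewHandler routine shown later calls a ComputeArchetype function that the article does not list, so the following is our hypothetical sketch of how the discriminants could resolve an archetype. It covers only the age- and sex-based archetypes from the table; disability and military status would need additional inputs on TaxableEntity:

// Hypothetical sketch; ComputeArchetype is assumed, not shown, in the article
public static string ComputeArchetype(TaxableEntity te)
{
    // Demographic parameters act as discriminants
    if (te.Age > 60)
    {
        return (te.Sex == 'F') ? "SeniorCitizenFemale" : "SeniorCitizen";
    }
    if (te.Age < 18)
    {
        return "Juveniles";
    }
    return "OrdinaryCitizen";
}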
A computation engine

We need to separate UI processing, input validation, and computation to create a solution which can be extended to handle additional requirements. The computation engine will execute different logic depending upon the command received. The GoF command pattern is leveraged for executing the logic based on the command received. The command pattern consists of four constituents:

- Command object
- Parameters
- Command Dispatcher
- Client

The command object's interface has an Execute method. The parameters to the command object are passed through a bag. The client invokes the command object by passing the parameters through a bag to be consumed by the Command Dispatcher. The parameters are passed to the command object through the following data structure:

public class COMPUTATION_CONTEXT
{
    private Dictionary<string, object> symbols = new Dictionary<string, object>();

    public void Put(string k, object value)
    {
        symbols.Add(k, value);
    }

    public object Get(string k)
    {
        return symbols[k];
    }
}

The ComputationCommand interface, which all the command objects implement, has only one Execute method, which is shown next. The Execute method takes a bag as a parameter. The COMPUTATION_CONTEXT data structure acts as the bag here.

interface ComputationCommand
{
    bool Execute(COMPUTATION_CONTEXT ctx);
}

Since we have already implemented a command interface and a bag to transfer the parameters, it is time that we implement a command object. For the sake of simplicity, we will implement two commands where we hardcode the tax liability:

public class SeniorCitizenCommand : ComputationCommand
{
    public bool Execute(COMPUTATION_CONTEXT ctx)
    {
        TaxDTO td = (TaxDTO)ctx.Get("tax_cargo");
        //---- Instead of computation, we are assigning
        //---- a constant tax for each archetype
        td.taxparams.TaxLiability = 1000;
        td.taxparams.Computed = true;
        return true;
    }
}

public class OrdinaryCitizenCommand : ComputationCommand
{
    public bool Execute(COMPUTATION_CONTEXT ctx)
    {
        TaxDTO td = (TaxDTO)ctx.Get("tax_cargo");
        //---- Instead of computation, we are assigning
        //---- a constant tax for each archetype
        td.taxparams.TaxLiability = 1500;
        td.taxparams.Computed = true;
        return true;
    }
}

The commands will be invoked by a CommandDispatcher object, which takes an archetype string and a COMPUTATION_CONTEXT object. The CommandDispatcher acts as an API layer for the application:

class CommandDispatcher
{
    public static bool Dispatch(string archetype, COMPUTATION_CONTEXT ctx)
    {
        if (archetype == "SeniorCitizen")
        {
            SeniorCitizenCommand cmd = new SeniorCitizenCommand();
            return cmd.Execute(ctx);
        }
        else if (archetype == "OrdinaryCitizen")
        {
            OrdinaryCitizenCommand cmd = new OrdinaryCitizenCommand();
            return cmd.Execute(ctx);
        }
        else
        {
            return false;
        }
    }
}

The application to engine communication

The data from the application UI, be it Web or desktop, has to flow to the computation engine. The following ViewHandler routine shows how data retrieved from the application UI is passed to the engine, via the CommandDispatcher, by a client:

public static void ViewHandler(TaxCalcForm tf)
{
    TaxableEntity te = GetEntityFromUI(tf);
    if (te == null)
    {
        ShowError();
        return;
    }
    string archetype = ComputeArchetype(te);
    COMPUTATION_CONTEXT ctx = new COMPUTATION_CONTEXT();
    TaxDTO td = new TaxDTO { id = te.Id, taxparams = te.taxparams };
    ctx.Put("tax_cargo", td);
    bool rs = CommandDispatcher.Dispatch(archetype, ctx);
    if (rs)
    {
        TaxDTO temp = (TaxDTO)ctx.Get("tax_cargo");
        tf.Liabilitytxt.Text = Convert.ToString(temp.taxparams.TaxLiability);
        tf.Refresh();
    }
}

At this point, imagine that a change in requirements has been received from the stakeholders. Now, we need to support tax computation for new categories. Initially, we had different computations for senior citizens and ordinary citizens. Now we need to add new archetypes. At the same time, to make the software extensible (loosely coupled) and maintainable, it would be ideal if we could support new archetypes in a configurable manner, as opposed to recompiling the application for every new archetype owing to concrete references. The CommandDispatcher object does not scale well to handle additional archetypes. We need to change the assembly whenever a new archetype is included, as the tax computation logic varies for each archetype. We need to create a pluggable architecture to add or remove archetypes at will.

The plugin system to make the system extensible

Writing system logic without impacting the application warrants a mechanism for loading a class on the fly. Luckily, the .NET Reflection API provides a mechanism for one to load a class during runtime and invoke methods within it. A developer worth his salt should learn the Reflection API to write systems which change dynamically. In fact, most technologies like ASP.NET, Entity Framework, .NET Remoting, and WCF work because of the availability of the Reflection API in the .NET stack. Henceforth, we will be using an XML configuration file to specify our tax computation logic. A sample XML file is given next:

<?xml version="1.0"?>
<plugins>
    <plugin archetype="OrdinaryCitizen" command="TaxEngine.OrdinaryCitizenCommand"/>
    <plugin archetype="SeniorCitizen" command="TaxEngine.SeniorCitizenCommand"/>
</plugins>

The contents of the XML file can be read very easily using LINQ to XML. We will generate a Dictionary object with the following code snippet:

private Dictionary<string, string> LoadData(string xmlfile)
{
    return XDocument.Load(xmlfile)
        .Descendants("plugins")
        .Descendants("plugin")
        .ToDictionary(p => p.Attribute("archetype").Value,
                      p => p.Attribute("command").Value);
}
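Putting the dictionary and the Reflection API together, a pluggable dispatcher might look like the following sketch. This is our illustration of the approach described above, not code from the book; PluginCommandDispatcher is a hypothetical name, and Type.GetType is assumed to resolve the configured class names (types in other assemblies would need assembly-qualified names):

using System;
using System.Collections.Generic;

// Hypothetical sketch of a reflection-driven dispatcher
class PluginCommandDispatcher
{
    // archetype -> fully qualified command class name, as loaded from the XML file
    private readonly Dictionary<string, string> plugins;

    public PluginCommandDispatcher(Dictionary<string, string> plugins)
    {
        this.plugins = plugins;
    }

    public bool Dispatch(string archetype, COMPUTATION_CONTEXT ctx)
    {
        if (!plugins.TryGetValue(archetype, out string typeName))
        {
            return false;
        }
        Type type = Type.GetType(typeName);
        if (type == null)
        {
            return false;
        }
        // Instantiate the class on the fly and invoke it through the common interface
        var command = (ComputationCommand)Activator.CreateInstance(type);
        return command.Execute(ctx);
    }
}

With this in place, supporting a new archetype means writing a new command class and adding one line to the XML file; the dispatcher itself never needs to be recompiled.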
Summary

In this article, we have covered quite a lot of ground in understanding why pattern-oriented software development is a good way to develop modern software. We started the article by citing some key principles. We then demonstrated the applicability of these principles by iteratively evolving an application which is extensible and resilient to change.

Resources for Article:

Further resources on this subject:
- Debugging Your .NET Application [article]
- JSON with JSON.Net [article]
- Using ASP.NET Controls in SharePoint [article]

OpenJDK Project Valhalla’s head shares how they plan to enhance the Java language and JVM with value types, and more

Bhagyashree R
10 Dec 2019
4 min read
Announced in 2014, Project Valhalla is an experimental OpenJDK project to bring major new language features to Java 10 and beyond. It primarily focuses on enabling developers to create and utilize value types, or non-reference values. Last week, the project's head, Brian Goetz, shared the goal, motivation, current status, and other details about the project in a set of documents called "State of Valhalla".

Goetz shared that in the span of five years, the team has come up with five distinct prototypes of the project. Sharing the current state of the project, he wrote, "We believe we are now at the point where we have a clear and coherent path to enhance the Java language and virtual machine with value types, have them interoperate cleanly with existing generics, and have a compatible path for migrating our existing value-based classes to inline classes and our existing generic classes to specialized generics."

The motivation behind Project Valhalla

One of the main motivations behind Project Valhalla was adapting the Java language and runtime to modern hardware. It has been almost 25 years since Java was introduced, and a lot has changed since then. At that time, the cost of a memory fetch and an arithmetic operation was roughly the same, but this is not the case now. Today, memory fetch operations are 200 to 1,000 times more expensive than arithmetic operations.

Java is often considered a pointer-heavy language, as most Java data structures in an application are objects or reference types. This is why Project Valhalla aims to introduce value types to get rid of this overhead, both in memory and in computation. Goetz wrote, "We aim to give developers the control to match data layouts with the performance model of today's hardware, providing Java developers with an easier path to flat (cache-efficient) and dense (memory-efficient) data layouts without compromising abstraction or type safety."

The language model for incorporating inline types

Goetz then talked about how the team is accommodating inline classes in the language type system. He wrote, "The motto for inline classes is: codes like a class, works like an int; the latter part of this motto means that inline types should align with the behaviors of primitive types outlined so far."

This means that inline classes will enable developers to write types that behave more like Java's built-in primitive types. Inline classes are similar to current classes in the sense that they can have properties, methods, constructors, and so on. However, the difference that Project Valhalla brings is that instances of inline classes, or inline objects, do not have identity, the property that distinguishes them from other objects. This is why identity-sensitive operations like synchronization are not possible with inline objects. There are a bunch of other differences between inline and identity classes as well.

Goetz wrote, "Object identity serves, among other things, to enable mutability and layout polymorphism; by giving up identity, inline classes must give up these things. Accordingly, inline classes are implicitly final, cannot extend any other class besides Object...and their fields are implicitly final." In Project Valhalla, types are divided into inline types and reference types: inline types include primitives, while reference types are those that are not inline types, such as declared identity classes, declared interfaces, and array types.
He further listed a few migration scenarios, including value-based classes, primitives, and specialized generics. Check out Goetz's post to learn more about Project Valhalla.

OpenJDK Project Valhalla is ready for developers working in building data structures or compiler runtime libraries
OpenJDK Project Valhalla's LW2 early access builds are now available for you to test
OpenJDK Project Valhalla is now in Phase III