How-To Tutorials - Programming

1083 Articles

Running Firefox OS Simulators with WebIDE

Packt
12 Oct 2015
9 min read
In this article by Tanay Pant, the author of the book Learning Firefox OS Application Development, you will learn how to use WebIDE and its features. We will start by installing Firefox OS simulators in the WebIDE so that we can run and test Firefox OS applications in it. Then, we will study how to install and create new applications with WebIDE. Finally, we will cover topics such as using developer tools for applications that run in WebIDE, and uninstalling applications in Firefox OS. In brief, we will go through the following topics:

- Getting to know about WebIDE
- Installing the Firefox OS simulator
- Installing and creating new apps with WebIDE
- Using developer tools inside WebIDE
- Uninstalling applications in Firefox OS

(For more resources related to this topic, see here.)

Introducing WebIDE

It is now time to have a peek at Firefox OS. You can test your applications in two ways: either by running them on a real device or by running them in the Firefox OS Simulator. Let's go ahead with the latter option, since you might not have a Firefox OS device yet. We will use WebIDE, which comes preinstalled with Firefox, to accomplish this task. If you haven't installed Firefox yet, you can do so from https://www.mozilla.org/en-US/firefox/new/. WebIDE allows you to install one or several runtimes (different versions) together. You can use WebIDE to install different types of applications, debug them using Firefox's Developer Tools Suite, and edit the applications and their manifests using the built-in source editor.

After you install Firefox, open WebIDE. You can open it by navigating to Tools | Web Developer | WebIDE. Let's now take a look at the following screenshot of WebIDE: You will notice that on the top-right side of your window, there is a Select Runtime option. When you click on it, you will see the Install Simulator option. Select that option, and you will see a page titled Extra Components, which presents a list of Firefox OS simulators. We will install the latest stable and unstable versions of Firefox OS, because we will need both to test our applications in the future. After you successfully install both simulators, click on Select Runtime. This will now show both OS versions listed, as shown in the following screenshot: Let's open Firefox OS 3.0. This will open up a new window titled B2G. You should now explore Firefox OS, take a look at its applications, and interact with them. It's all HTML, CSS, and JavaScript. Wonderful, isn't it? Very soon, you will develop applications like these.

Installing and creating new apps using WebIDE

To install or create a new application, click on Open App in the top-left corner of the WebIDE window. You will notice that there are three options: New App, Open Packaged App, and Open Hosted App. For now, think of hosted apps as websites that are served from a web server and stored online on the server itself, but that can still use appcache and IndexedDB to store all their assets and data offline, if desired. Packaged apps are distributed in a .zip format and can be thought of as the source code of a website bundled and distributed in a ZIP file. Let's now head to the first option in the Open App menu, which is New App. Select the HelloWorld template, enter a Project Name, and click on OK. After completing this, WebIDE will ask you about the directory where you want to store the application. I have made a new folder named Hello World for this purpose on the desktop.
Now, click on Open button and finally, click again on the OK button. This will prepare your app and show details, such as Title, Icon, Description, Location and App ID of your application. Note that beneath the app title, it says Packaged Web. Can you figure out why? As we discussed, it is because of the fact that we are not serving the application online, but from a packaged directory that holds its source code. This covers the right-hand side panel. In the left-hand side panel, we have the directory listing of the application. It contains an icon folder that holds different-sized icons for different screen resolutions It also contains the app.js file, which is the engine of the application and will contain the functionality of the application; index.html, which will contain the markup data for the application; and finally, the manifest.webapp file, which contains crucial information and various permissions about the application. If you click on any filename, you will notice that the file opens in an in-browser editor where you can edit the files to make changes to your application and save them from here itself. Let's make some edits in the application— in app.js and index.html. I have replaced World with Firefox everywhere to make it Hello Firefox. Let's make the same changes in the manifest file. The manifest file contains details of your application, such as its name, description, launch path, icons, developer information, and permissions. These details are used to display information about your application in the WebIDE and Firefox Marketplace. The manifest file is in JSON format. I went ahead and edited developer information in the application as well, to include my name and my website. After saving all the files, you will notice that the information of the app in the WebIDE has changed! It's now time to run the application in Firefox OS. Click on Select Runtime and fire up Firefox OS 3.0. After it is launched, click on the Play button in the WebIDE hovering on which is the prompt that says Install and Run. Doing this will install and launch the application on your simulator! Congratulations, you installed your first Firefox OS application! Using developer tools inside WebIDE WebIDE allows you to use Firefox's awesome developer tools for applications that run in the Simulator via WebIDE as well. To use them, simply click on the Settings icon (which looks like a wrench) beside the Install and Run icon that you had used to get the app installed and running. The icon says Debug App on hovering the cursor over it. Click on this to reveal developer tools for the app that is running via WebIDE. Click on Console, and you will see the message Hello Firefox, which we gave as the input in console.log() in the app.js file. Note that it also specifies the App ID of our application while displaying Hello Firefox. You may have noticed in the preceding illustration that I sent a command via the console alert('Hello Firefox'); and it simultaneously executed the instruction in the app running in the simulator. As you may have noticed, Firefox OS customizes the look and feel of components, such as the alert box (this is browser based). Our application is running in an iframe in Gaia. Every app, including the keyboard application, runs in an iframe for security reasons. You should go through these tools to get a hang of the debugging capabilities if you haven't done so already! 
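As a point of reference for the manifest edits described above, a minimal manifest.webapp for a packaged app of this kind might look like the following sketch; the field values are illustrative and not taken from the book's example:

```json
{
  "name": "Hello Firefox",
  "description": "A minimal packaged app used for experimenting with WebIDE",
  "launch_path": "/index.html",
  "icons": {
    "128": "/icons/icon128x128.png"
  },
  "developer": {
    "name": "Your Name",
    "url": "https://example.com"
  },
  "default_locale": "en"
}
```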
One more important thing that you should keep in mind is that inline scripts (for example, <a href="#" onclick="alert(this)">Click Me</a>) are forbidden in Firefox OS apps, due to Content Security Policy (CSP) restrictions. CSP restrictions include the remote scripts, inline scripts, javascript URIs, function constructor, dynamic code execution, and plugins, such as Flash or Shockwave. Remote styles are also banned. Remote Web Workers and eval() operators are not allowed for security reasons and they show 400 error and security errors respectively upon usage. You are warned about CSP violations when submitting your application to the Firefox OS Marketplace. CSP warnings in the validator will not impact whether your app is accepted into the Marketplace. However, if your app is privileged and violates the CSP, you will be asked to fix this issue in order to get your application accepted. Browsing other runtime applications You can also take a look at the source code of the preinstalled/runtime apps that are present in Firefox OS or Gaia, to be precise. For example, the following is an illustration that shows how to open them: You can click on the Hello World button (in the same place where Open App used to exist), and this will show you the whole list of Runtime Apps as shown in the preceding illustration. I clicked on the Camera application and it showed me the source code of its main.js file. It's completely okay if you are daunted by the huge file. If you find these runtime applications interesting and want to contribute to them, then you can refer to Mozilla Developer Network's articles on developing Gaia, which you can find at https://developer.mozilla.org/en-US/Firefox_OS/Developing_Gaia. Our application looks as follows in the App Launcher of the operating system: Uninstalling applications in Firefox OS You can remove the project from WebIDE by clicking on the Remove Project button in the home page of the application. However, this will not uninstall the application from Firefox OS Simulator. The uninstallation system of the operating system is quite similar to iOS. You just have to double tap in OS X to get the Edit screen, from where you can click on the cross button on the top-left of the app icon to uninstall the app. You will then get a confirmation screen that warns you that all the data of the application will also be deleted along with the app. This will take you back to the Edit screen where you can click on Done to get back to the home screen. Summary In this article, you learned about WebIDE, how to install Firefox OS simulator in WebIDE, using Firefox OS and installing applications in it, and creating a skeleton application using WebIDE. You then learned how to use developer tools for applications that run in the simulator, browsing other preinstalled runtime applications present in Firefox OS. Finally, you learned about removing a project from WebIDE and uninstalling an application from the operating system. Resources for Article: Further resources on this subject: Learning Node.js for Mobile Application Development [Article] Introducing Web Application Development in Rails [Article] One-page Application Development [Article]
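Since inline handlers such as onclick are blocked by the CSP rules covered above, the usual pattern is to attach event listeners from a separate script such as app.js. A minimal sketch (the element id and handler body are made up for illustration):

```js
// index.html contains: <a href="#" id="greet">Click Me</a>
// app.js attaches the handler after the DOM is ready, instead of using an inline onclick
document.addEventListener('DOMContentLoaded', function () {
  document.getElementById('greet').addEventListener('click', function (event) {
    event.preventDefault();
    alert('Hello Firefox');
  });
});
```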


Collaboration Using the GitHub Workflow

Packt
30 Sep 2015
12 min read
In this article by Achilleas Pipinellis, the author of the book GitHub Essentials, has come up with a workflow based on the features it provides and the power of Git. It has named it the GitHub workflow (https://guides.github.com/introduction/flow). In this article, we will learn how to work with branches and pull requests, which is the most powerful feature of GitHub. (For more resources related to this topic, see here.) Learn about pull requests Pull request is the number one feature in GitHub that made it what it is today. It was introduced in early 2008 and is being used extensively among projects since then. While everything else can be pretty much disabled in a project's settings (such as issues and the wiki), pull requests are always enabled. Why pull requests are a powerful asset to work with Whether you are working on a personal project where you are the sole contributor or on a big open source one with contributors from all over the globe, working with pull requests will certainly make your life easier. I like to think of pull requests as chunks of commits, and the GitHub UI helps you visualize clearer what is about to be merged in the default branch or the branch of your choice. Pull requests are reviewable with an enhanced diff view. You can easily revert them with a simple button on GitHub and they can be tested before merging, if a CI service is enabled in the project. The connection between branches and pull requests There is a special connection between branches and pull requests. In this connection, GitHub will automatically show you a button to create a new pull request if you push a new branch in your repository. As we will explore in the following sections, this is tightly coupled to the GitHub workflow, and GitHub uses some special words to describe the from and to branches. As per GitHub's documentation: The base branch is where you think changes should be applied, the head branch is what you would like to be applied. So, in GitHub terms, head is your branch, and base the branch you would like to merge into. Create branches directly in a project – the shared repository model The shared repository model, as GitHub aptly calls it, is when you push new branches directly to the source repository. From there, you can create a new pull request by comparing between branches, as we will see in the following sections. Of course, in order to be able to push to a repository you either have to be the owner or a collaborator; in other words you must have write access. Create branches in your fork – the fork and pull model Forked repositories are related to their parent in a way that GitHub uses in order to compare their branches. The fork and pull model is usually used in projects when one does not have write access but is willing to contribute. After forking a repository, you push a branch to your fork and then create a pull request in the source repository asking its maintainer to merge the changes. This is common practice to contribute to open source projects hosted on GitHub. You will not have access to their repository, but being open source, you can fork the public repository and work on your own copy. How to create and submit a pull request There are quite a few ways to initiate the creation of a pull request, as we you will see in the following sections. The most common one is to push a branch to your repository and let GitHub's UI guide you. Let's explore this option first. 
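Before that, to make the fork and pull model described above concrete, here is a typical command sequence; the repository, user, and branch names are placeholders rather than values from the article:

```sh
# Clone your fork, not the upstream repository
git clone git@github.com:youruser/project.git
cd project

# Keep a reference to the original (parent) repository
git remote add upstream https://github.com/owner/project.git

# Work on a topic branch and push it to your fork
git checkout -b fix-typo
git commit -am "Fix typo in README"
git push origin fix-typo
# Then open a pull request from youruser:fix-typo (head) into owner:master (base)
```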
Use the Compare & pull request button

Whenever a new branch is pushed to a repository, GitHub shows a quick button to create a pull request. In reality, you are taken to the compare page, as we will explore in the next section, but some values are already filled out for you. Let's create, for example, a new branch named add_gitignore where we will add a .gitignore file with the following contents:

    git checkout -b add_gitignore
    echo -e '.bundle\n.sass-cache\n.vendor\n_site' > .gitignore
    git add .gitignore
    git commit -m 'Add .gitignore'
    git push origin add_gitignore

Next, head over to your repository's main page and you will notice the Compare & pull request button, as shown in the following screenshot: From here on, if you hit this button you will be taken to the compare page. Note that I am pushing to my repository following the shared repository model, so here is how GitHub greets me: What would happen if I used the fork and pull repository model? For this purpose, I created another user to fork my repository and followed the same instructions to add a new branch named add_gitignore with the same changes. From here on, when you push the branch to your fork, the Compare & pull request button appears whether you are on your fork's page or on the parent repository. Here is how it looks if you visit your fork: The following screenshot will appear if you visit the parent repository: In the last case (captured in red), you can see which user this branch came from (axil43:add_gitignore). In either case, when using the fork and pull model, hitting the Compare & pull request button will take you to the compare page with slightly different options: Since you are comparing across forks, there are more details. In particular, you can see the base fork and branch as well as the head fork and branch, which are the ones you own. GitHub considers the default branch set in your repository to be the one you want to merge into (base) when the Create Pull Request button appears. Before submitting it, let's explore the other two options that you can use to create a pull request. You can jump to the Submit a pull request section if you like.

Use the compare function directly

As mentioned in the previous section, the Compare & pull request button gets you to the compare page with some predefined values. The button appears right after you push a new branch and is there only for a few moments. In this section, we will see how to use the compare function directly in order to create a pull request. You can access the compare function by clicking on the green button next to the branch drop-down list on a repository's main page: This is pretty powerful, as you can compare across forks or, in the same repository, pretty much anything: branches, tags, single commits, and time ranges. The default page when you land on the compare page is like the following one; you start by comparing your default branch, with GitHub proposing a list of recently created branches to choose from and compare: In order to have something to compare to, the base branch must be older than what you are comparing to. From here, if I choose the add_gitignore branch, GitHub compares it to master and shows the diff along with a message that it can be merged into the base branch without any conflicts. Finally, you can create the pull request: Notice that I am using the compare function while at my own repository.
When comparing in a repository that is a fork of another, the compare function slightly changes and automatically includes more options as we have seen in the previous section. As you may have noticed the Compare & pull request quick button is just a shortcut for using compare manually. If you want to have more fine-grained control on the repositories and the branches compared, use the compare feature directly. Use the GitHub web editor So far, we have seen the two most well-known types of initiating a pull request. There is a third way as well: using entirely the web editor that GitHub provides. This can prove useful for people who are not too familiar with Git and the terminal, and can also be used by more advanced Git users who want to propose a quick change. As always, according to the model you are using (shared repository or fork and pull), the process is a little different. Let's first explore the shared repository model flow using the web editor, which means editing files in a repository that you own. The shared repository model Firstly, make sure you are on the branch that you wish to branch off; then, head over a file you wish to change and press the edit button with the pencil icon: Make the change you want in that file, add a proper commit message, and choose Create a new branch giving the name of the branch you wish to create. By default, the branch name is username-patch-i, where username is your username and i is an increasing integer starting from 1. Consecutive edits on files will create branches such as username-patch-1, username-patch-2, and so on. In our example, I decided to give the branch a name of my own: When ready, press the Propose file change button. From this moment on, the branch is created with the file edits you made. Even if you close the next page, your changes will not be lost. Let's skip the pull request submission for the time being and see how the fork and pull model works. The fork and pull model In the fork and pull model, you fork a repository and submit a pull request from the changes you make in your fork. In the case of using the web editor, there is a caveat. In order to get GitHub automatically recognize that you wish to perform a pull request in the parent repository, you have to start the web editor from the parent repository and not your fork. In the following screenshot, you can see what happens in this case: GitHub informs you that a new branch will be created in your repository (fork) with the new changes in order to submit a pull request. Hitting the Propose file change button will take you to the form to submit the pull request: Contrary to the shared repository model, you can now see the base/head repositories and branches that are compared. Also, notice that the default name for the new branch is patch-i, where i is an increasing integer number. In our case, this was the first branch created that way, so it was named patch-1. If you would like to have the ability to name the branch the way you like, you should follow the shared repository model instructions as explained in preceding section. Following that route, edit the file in your fork where you have write access, add your own branch name, hit the Propose file change button for the branch to be created, and then abort when asked to create the pull request. You can then use the Compare & pull request quick button or use the compare function directly to propose a pull request to the parent repository. 
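If you prefer to continue working on such a branch locally rather than in the web editor, you can fetch it and add commits from the command line; the branch name patch-1 below is just the default naming discussed above:

```sh
git fetch origin
git checkout patch-1        # the branch GitHub created from the web editor
# make further edits, then:
git commit -am "Refine the change"
git push origin patch-1     # an open pull request from this branch picks up the new commits
```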
One last thing to consider when using the web editor, is the limitation of editing one file at a time. If you wish to include more changes in the same branch that GitHub created for you when you first edited a file, you must first change to that branch and then make any subsequent changes. How to change the branch? Simply choose it from the drop-down menu as shown in the following screenshot: Submit a pull request So far, we have explored the various ways to initiate a pull request. In this section, we will finally continue to submit it as well. The pull request form is identical to the form when creating a new issue. If you have write access to the repository that you are making the pull request to, then you are able to set labels, milestone, and assignee. The title of the pull request is automatically filled by the last commit message that the branch has, or if there are multiple commits, it will just fill in the branch name. In either case, you can change it to your liking. In the following image, you can see the title is taken from the branch name after GitHub has stripped the special characters. In a sense, the title gets humanized: You can add an optional description and images if you deem proper. Whenever ready, hit the Create pull request button. In the following sections, we will explore how the peer review works. Peer review and inline comments The nice thing about pull requests is that you have a nice and clear view of what is about to get merged. You can see only the changes that matter, and the best part is that you can fire up a discussion concerning those changes. In the previous section, we submitted the pull request so that it can be reviewed and eventually get merged. Suppose that we are collaborating with a team and they chime in to discuss the changes. Let's first check the layout of a pull request. Summary In this article, we explored the GitHub workflow and the various ways to perform a pull request, as well as the many features GitHub provides to make that workflow even smoother. This is how the majority of open source projects work when there are dozens of contributors involved. Resources for Article: Further resources on this subject: Git Teaches – Great Tools Don't Make Great Craftsmen[article] Maintaining Your GitLab Instance[article] Configuration [article]


Patterns of Traversing

Packt
25 Sep 2015
15 min read
 In this article by Ryan Lemmer, author of the book Haskell Design Patterns, we will focus on two fundamental patterns of recursion: fold and map. The more primitive forms of these patterns are to be found in the Prelude, the "old part" of Haskell. With the introduction of Applicative, came more powerful mapping (traversal), which opened the door to type-level folding and mapping in Haskell. First, we will look at how Prelude's list fold is generalized to all Foldable containers. Then, we will follow the generalization of list map to all Traversable containers. Our exploration of fold and map culminates with the Lens library, which raises Foldable and Traversable to an even higher level of abstraction and power. In this article, we will cover the following: Traversable Modernizing Haskell Lenses (For more resources related to this topic, see here.) Traversable As with Prelude.foldM, mapM fails us beyond lists, for example, we cannot mapM over the Tree from earlier: main = mapM doF aTree >>= print -- INVALID The Traversable type-class is to map in the same way as Foldable is to fold: -- required: traverse or sequenceA class (Functor t, Foldable t) => Traversable (t :: * -> *) where -- APPLICATIVE form traverse :: Applicative f => (a -> f b) -> t a -> f (t b) sequenceA :: Applicative f => t (f a) -> f (t a) -- MONADIC form (redundant) mapM :: Monad m => (a -> m b) -> t a -> m (t b) sequence :: Monad m => t (m a) -> m (t a) The traverse fuction generalizes our mapA function, which was written for lists, to all Traversable containers. Similarly, Traversable.mapM is a more general version of Prelude.mapM for lists: mapM :: Monad m => (a -> m b) -> [a] -> m [b] mapM :: Monad m => (a -> m b) -> t a -> m (t b) The Traversable type-class was introduced along with Applicative: "we introduce the type class Traversable, capturing functorial data structures through which we can thread an applicative computation"                         Applicative Programming with Effects - McBride and Paterson A Traversable Tree Let's make our Traversable Tree. 
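Before doing so, here is a small illustration (not from the book) of traverse threading an Applicative effect through a plain list, using Maybe as the Applicative:

```haskell
import Data.Traversable (traverse)
import Text.Read (readMaybe)

-- Parse every element, or fail as a whole if any single element fails to parse
parseAll :: [String] -> Maybe [Int]
parseAll = traverse readMaybe

main :: IO ()
main = do
  print (parseAll ["1", "2", "3"])   -- Just [1,2,3]
  print (parseAll ["1", "x", "3"])   -- Nothing
```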
First, we'll do it the hard way: – a Traversable must also be a Functor and Foldable: instance Functor Tree where fmap f (Leaf x) = Leaf (f x) fmap f (Node x lTree rTree) = Node (f x) (fmap f lTree) (fmap f rTree) instance Foldable Tree where foldMap f (Leaf x) = f x foldMap f (Node x lTree rTree) = (foldMap f lTree) `mappend` (f x) `mappend` (foldMap f rTree) --traverse :: Applicative ma => (a -> ma b) -> mt a -> ma (mt b) instance Traversable Tree where traverse g (Leaf x) = Leaf <$> (g x) traverse g (Node x ltree rtree) = Node <$> (g x) <*> (traverse g ltree) <*> (traverse g rtree) data Tree a = Node a (Tree a) (Tree a) | Leaf a deriving (Show) aTree = Node 2 (Leaf 3) (Node 5 (Leaf 7) (Leaf 11)) -- import Data.Traversable main = traverse doF aTree where doF n = do print n; return (n * 2) The easier way to do this is to auto-implement Functor, Foldable, and Traversable: {-# LANGUAGE DeriveFunctor #-} {-# LANGUAGE DeriveFoldable #-} {-# LANGUAGE DeriveTraversable #-} import Data.Traversable data Tree a = Node a (Tree a) (Tree a)| Leaf a deriving (Show, Functor, Foldable, Traversable) aTree = Node 2 (Leaf 3) (Node 5 (Leaf 7) (Leaf 11)) main = traverse doF aTree where doF n = do print n; return (n * 2) Traversal and the Iterator pattern The Gang of Four Iterator pattern is concerned with providing a way "...to access the elements of an aggregate object sequentially without exposing its underlying representation"                                       "Gang of Four" Design Patterns, Gamma et al, 1995 In The Essence of the Iterator Pattern, Jeremy Gibbons shows precisely how the Applicative traversal captures the Iterator pattern. The Traversable.traverse class is the Applicative version of Traversable.mapM, which means it is more general than mapM (because Applicative is more general than Monad). Moreover, because mapM does not rely on the Monadic bind chain to communicate between iteration steps, Monad is a superfluous type for mapping with effects (Applicative is sufficient). In other words, Applicative traverse is superior to Monadic traversal (mapM): "In addition to being parametrically polymorphic in the collection elements, the generic traverse operation is parametrised along two further dimensions: the datatype being tra- versed, and the applicative functor in which the traversal is interpreted" "The improved compositionality of applicative functors over monads provides better glue for fusion of traversals, and hence better support for modular programming of iterations"                                        The Essence of the Iterator Pattern - Jeremy Gibbons Modernizing Haskell 98 The introduction of Applicative, along with Foldable and Traversable, had a big impact on Haskell. Foldable and Traversable lift Prelude fold and map to a much higher level of abstraction. Moreover, Foldable and Traversable also bring a clean separation between processes that preserve or discard the shape of the structure that is being processed. Traversable describes processes that preserve that shape of the data structure being traversed over. Foldable processes, in turn, discard or transform the shape of the structure being folded over. Since Traversable is a specialization of Foldable, we can say that shape preservation is a special case of shape transformation. 
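A few usage examples of these instances, assuming the Tree type, its instances, and the aTree value defined above (expected results shown in the comments):

```haskell
import Data.Foldable (foldMap, toList)
import Data.Monoid (Sum(..))
import Data.Traversable (traverse)

main :: IO ()
main = do
  print (fmap (*10) aTree)            -- Functor: every node value is scaled
  print (getSum (foldMap Sum aTree))  -- Foldable: 2+3+5+7+11 = 28
  print (toList aTree)                -- Foldable: flatten the tree to a list
  print (traverse Just aTree)         -- Traversable: Just aTree, shape preserved
```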
This line between shape preservation and transformation is clearly visible from the fact that functions that discard their results (for example, mapM_, forM_, sequence_, and so on) are in Foldable, while their shape-preserving counterparts are in Traversable. Due to the relatively late introduction of Applicative, the benefits of Applicative, Foldable, and Traversable have not found their way into the core of the language. This is due to the change with the Foldable Traversable In Prelude proposal (planned for inclusion in the core libraries from GHC 7.10). For more information, visit https://wiki.haskell.org/Foldable_Traversable_In_Prelude. This will involve replacing less generic functions in Prelude, Control.Monad, and Data.List with their more polymorphic counterparts in Foldable and Traversable. There have been objections to the movement to modernize, the main concern being that more generic types are harder to understand, which may compromise Haskell as a learning language. These valid concerns will indeed have to be addressed, but it seems certain that the Haskell community will not resist climbing to new abstract heights. Lenses A Lens is a type that provides access to a particular part of a data structure. Lenses express a high-level pattern for composition. However, Lens is also deeply entwined with Traversable, and so we describe it in this article instead. Lenses relate to the getter and setter functions, which also describe access to parts of data structures. To find our way to the Lens abstraction (as per Edward Kmett's Lens library), we'll start by writing a getter and setter to access the root node of a Tree. Deriving Lens Returning to our Tree from earlier: data Tree a = Node a (Tree a) (Tree a) | Leaf a deriving (Show) intTree = Node 2 (Leaf 3) (Node 5 (Leaf 7) (Leaf 11)) listTree = Node [1,1] (Leaf [2,1]) (Node [3,2] (Leaf [5,2]) (Leaf [7,4])) tupleTree = Node (1,1) (Leaf (2,1)) (Node (3,2) (Leaf (5,2)) (Leaf (7,4))) Let's start by writing generic getter and setter functions: getRoot :: Tree a -> a getRoot (Leaf z) = z getRoot (Node z _ _) = z setRoot :: Tree a -> a -> Tree a setRoot (Leaf z) x = Leaf x setRoot (Node z l r) x = Node x l r main = do print $ getRoot intTree print $ setRoot intTree 11 print $ getRoot (setRoot intTree 11) If we want to pass in a setter function instead of setting a value, we use the following: fmapRoot :: (a -> a) -> Tree a -> Tree a fmapRoot f tree = setRoot tree newRoot where newRoot = f (getRoot tree) We have to do a get, apply the function, and then set the result. This double work is akin to the double traversal we saw when writing traverse in terms of sequenceA. In that case we resolved the issue by defining traverse first (and then sequenceA i.t.o. traverse): We can do the same thing here by writing fmapRoot to work in a single step (and then rewriting setRoot' i.t.o. 
fmapRoot'):

    fmapRoot' :: (a -> a) -> Tree a -> Tree a
    fmapRoot' f (Leaf z) = Leaf (f z)
    fmapRoot' f (Node z l r) = Node (f z) l r

    setRoot' :: Tree a -> a -> Tree a
    setRoot' tree x = fmapRoot' (\_ -> x) tree

    main = do
      print $ setRoot' intTree 11
      print $ fmapRoot' (*2) intTree

The fmapRoot' function delivers a function to a particular part of the structure and returns the same structure:

    fmapRoot' :: (a -> a) -> Tree a -> Tree a

To allow for I/O, we need a new function:

    fmapRootIO :: (a -> IO a) -> Tree a -> IO (Tree a)

We can generalize this beyond I/O to all Monads:

    fmapM :: (a -> m a) -> Tree a -> m (Tree a)

It turns out that if we relax the requirement for Monad, and generalize f' to all Functor container types, then we get a simple van Laarhoven Lens!

    type Lens' s a = Functor f' => (a -> f' a) -> s -> f' s

The remarkable thing about a van Laarhoven Lens is that, given the preceding function type, we also gain "get", "set", "fmap", "mapM", and many other functions and operators. The Lens function type signature is all it takes to make something a Lens that can be used with the Lens library. It is unusual to use a type signature as the "primary interface" for a library. The immediate benefit is that we can define a lens without referring to the Lens library. We'll explore more benefits and costs of this approach, but first let's write a few lenses for our Tree. The derivation of the Lens abstraction used here is based on Jakub Arnold's Lens tutorial, which is available at http://blog.jakubarnold.cz/2014/07/14/lens-tutorial-introduction-part-1.html.

Writing a Lens

A Lens is said to provide focus on an element in a data structure. Our first lens will focus on the root node of a Tree. Using the lens type signature as our guide, we arrive at:

    lens' :: Functor f' => (a -> f' a) -> s -> f' s
    root  :: Functor f' => (a -> f' a) -> Tree a -> f' (Tree a)

Still, this is not very tangible; fmapRootIO is easier to understand, with the Functor f' being IO:

    fmapRootIO :: (a -> IO a) -> Tree a -> IO (Tree a)
    fmapRootIO g (Leaf z) = (g z) >>= return . Leaf
    fmapRootIO g (Node z l r) = (g z) >>= return . (\x -> Node x l r)

    displayM x = print x >> return x

    main = fmapRootIO displayM intTree

If we drop down from Monad into Functor, we have a Lens for the root of a Tree:

    root :: Functor f' => (a -> f' a) -> Tree a -> f' (Tree a)
    root g (Node z l r) = fmap (\x -> Node x l r) (g z)
    root g (Leaf z) = fmap Leaf (g z)

As Monad is a Functor, this function also works with Monadic functions:

    main = root displayM intTree

As root is a lens, the Lens library gives us the following:

    -- import Control.Lens
    main = do
      -- GET
      print $ view root listTree
      print $ view root intTree
      -- SET
      print $ set root [42] listTree
      print $ set root 42 intTree
      -- FMAP
      print $ over root (+11) intTree

The over function is the lens way of fmap'ing a function into a Functor.

Composable getters and setters

Another Lens on Tree might be to focus on the rightmost leaf:

    rightMost :: Functor f' => (a -> f' a) -> Tree a -> f' (Tree a)
    rightMost g (Node z l r) = fmap (\r' -> Node z l r') (rightMost g r)
    rightMost g (Leaf z) = fmap (\x -> Leaf x) (g z)

The Lens library provides several lenses for Tuple (for example, _1 which brings focus to the first Tuple element).
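As a quick illustration of those Tuple lenses on their own, before composing them with our Tree lenses (this example is not from the book; results shown in the comments):

```haskell
import Control.Lens

main :: IO ()
main = do
  let pair = (1 :: Int, "one")
  print (view _1 pair)             -- 1
  print (set _1 (42 :: Int) pair)  -- (42,"one")
  print (over _2 reverse pair)     -- (1,"eno")
```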
We can compose our rightMost lens with the Tuple lenses: main = do print $ view rightMost tupleTree print $ set rightMost (0,0) tupleTree -- Compose Getters and Setters print $ view (rightMost._1) tupleTree print $ set (rightMost._1) 0 tupleTree print $ over (rightMost._1) (*100) tupleTree A Lens can serve as a getter, setter, or "function setter". We are composing lenses using regular function composition (.)! Note that the order of composition is reversed in (rightMost._1) the rightMost lens is applied before the _1 lens. Lens Traversal A Lens focuses on one part of a data structure, not several, for example, a lens cannot focus on all the leaves of a Tree: set leaves 0 intTree over leaves (+1) intTree To focus on more than one part of a structure, we need a Traversal class, the Lens generalization of Traversable). Whereas Lens relies on Functor, Traversal relies on Applicative. Other than this, the signatures are exactly the same: traversal :: Applicative f' => (a -> f' a) -> Tree a -> f' (Tree a) lens :: Functor f'=> (a -> f' a) -> Tree a -> f' (Tree a) A leaves Traversal delivers the setter function to all the leaves of the Tree: leaves :: Applicative f' => (a -> f' a) -> Tree a -> f' (Tree a) leaves g (Node z l r) = Node z <$> leaves g l <*> leaves g r leaves g (Leaf z) = Leaf <$> (g z) We can use set and over functions with our new Traversal class: set leaves 0 intTree over leaves (+1) intTree The Traversals class compose seamlessly with Lenses: main = do -- Compose Traversal + Lens print $ over (leaves._1) (*100) tupleTree -- Compose Traversal + Traversal print $ over (leaves.both) (*100) tupleTree -- map over each elem in target container (e.g. list) print $ over (leaves.mapped) (*(-1)) listTree -- Traversal with effects mapMOf leaves displayM tupleTree (The both is a Tuple Traversal that focuses on both elements). Lens.Fold The Lens.Traversal lifts Traversable into the realm of lenses: main = do print $ sumOf leaves intTree print $ anyOf leaves (>0) intTree The Lens Library We used only "simple" Lenses so far. A fully parametrized Lens allows for replacing parts of a data structure with different types: type Lens s t a b = Functor f' => (a -> f' b) -> s -> f' t –- vs simple Lens type Lens' s a = Lens s s a a Lens library function names do their best to not clash with existing names, for example, postfixing of idiomatic function names with "Of" (sumOf, mapMOf, and so on), or using different verb forms such as "droppingWhile" instead of "dropWhile". While this creates a burden as i.t.o has to learn new variations, it does have a big plus point—it allows for easy unqualified import of the Lens library. By leaving the Lens function type transparent (and not obfuscating it with a new type), we get Traversals by simply swapping out Functor for Applicative. We also get to define lenses without having to reference the Lens library. On the downside, Lens type signatures can be bewildering at first sight. They form a language of their own that requires effort to get used to, for example: mapMOf :: Profunctor p => Over p (WrappedMonad m) s t a b -> p a (m b) -> s -> m t foldMapOf :: Profunctor p => Accessing p r s a -> p a r -> s -> r On the surface, the Lens library gives us composable getters and setters, but there is much more to Lenses than that. By generalizing Foldable and Traversable into Lens abstractions, the Lens library lifts Getters, Setters, Lenses, and Traversals into a unified framework in which they are all compose together. 
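A small example (not from the book) of the type-changing ability that the fully parametrized Lens s t a b provides; note how a focused element is replaced by a value of a different type:

```haskell
import Control.Lens

main :: IO ()
main = do
  -- (Int, Bool) becomes (String, Bool): the focused Int is replaced by a String
  print (over _1 show ((1, True) :: (Int, Bool)))   -- ("1",True)
  -- (Int, Bool) becomes (Int, String)
  print (set _2 "done" ((1, True) :: (Int, Bool)))  -- (1,"done")
```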
Edward Kmett's Lens library is a sprawling masterpiece that is sure to leave a lasting impact on idiomatic Haskell. Summary We started with Lists (Haskel 98), then generalizing for all Traversable containers (Introduced in the mid-2000s). Following that, we saw how the Lens library (2012) places traversing in an even broader context. Lenses give us a unified vocabulary to navigate data structures, which explains why it has been described as a "query language for data structures". Resources for Article: Further resources on this subject: Plotting in Haskell[article] The Hunt for Data[article] Getting started with Haskell [article]


Creating a JEE Application with EJB

Packt
24 Sep 2015
11 min read
In this article by Ram Kulkarni, author of Java EE Development with Eclipse (e2), we will be using EJBs (Enterprise Java Beans) to implement business logic. This is ideal in scenarios where you want components that process business logic to be distributed across different servers. But that is just one of the advantages of EJB. Even if you use EJBs on the same server as the web application, you may gain from a number of services that the EJB container provides to the applications through EJBs. You can specify security constraints for calling EJB methods declaratively (using annotations), and you can also easily specify transaction boundaries (specify which method calls from a part of one transaction) using annotations. In addition to this, the container handles the life cycle of EJBs, including pooling of certain types of EJB objects so that more objects can be created when the load on the application increases. (For more resources related to this topic, see here.) In this article, we will create the same application using EJBs and deploy it in a Glassfish 4 server. But before that, you need to understand some basic concepts of EJBs. Types of EJB EJBs can be of following types as per the EJB 3 specifications: Session bean: Stateful session bean Stateless session bean Singleton session bean Message-driven bean In this article, we will focus on session beans. Session beans In general, session beans are meant for containing methods used to execute the main business logic of enterprise applications. Any Plain Old Java Object (POJO) can be annotated with the appropriate EJB-3-specific annotations to make it a session bean. Session beans come in three types, as follows. Stateful session bean One stateful session bean serves requests for one client only. There is a one-to-one mapping between the stateful session bean and the client. Therefore, stateful beans can hold state data for the client between multiple method calls. In our CourseManagement application, we can use a stateful bean to hold the Student data (student profile and the courses taken by him/her) after a student logs-in. The state maintained by the Stateful bean is lost when the server restarts or when the session times out. Since there is one stateful bean per client, using a stateful bean might impact the scalability of the application. We use the @Stateful annotation to create a stateful session bean. Stateless session bean A stateless session bean does not hold any state information for any client. Therefore, one session bean can be shared across multiple clients. The EJB container maintains pools of stateless beans, and when a client request comes, it takes out a bean from the pool, executes methods, and returns the bean to the pool. Stateless session beans provide excellent scalability because they can be shared and need not be created for each client. We use the @Stateless annotation to create a stateless session bean. Singleton session bean As the name suggests, there is only one instance of a singleton bean class in the EJB container (this is true in the clustered environment too; each EJB container will have an instance of a singleton bean). This means that they are shared by multiple clients, and they are not pooled by EJB containers (because there can be only one instance). Since a singleton session bean is a shared resource, we need to manage concurrency in it. Java EE provides two concurrency management options for singleton session beans: container-managed concurrency and bean-managed concurrency. 
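Of these two options, container-managed concurrency is declared with annotations, as noted just below. A minimal sketch of what that can look like (the bean and method names here are hypothetical, not from the book's example):

```java
import javax.ejb.ConcurrencyManagement;
import javax.ejb.ConcurrencyManagementType;
import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.Singleton;

@Singleton
@ConcurrencyManagement(ConcurrencyManagementType.CONTAINER) // the default for singletons
public class CourseCatalog {

    private int version;

    @Lock(LockType.READ)   // many clients may call this concurrently
    public int getVersion() {
        return version;
    }

    @Lock(LockType.WRITE)  // writers get exclusive access
    public void refresh() {
        version++;
    }
}
```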
Container-managed concurrency can easily be specified by annotations. See https://docs.oracle.com/javaee/7/tutorial/ejb-basicexamples002.htm#GIPSZ for more information on managing concurrency in a singleton session bean. Using a singleton bean could have an impact on the scalability of the application if there are resource contentions in the code. We use the @Singleton annotation to create a singleton session bean Accessing a session bean from the client Session beans can be designed to be accessed locally (within the same application as a session bean) or remotely (from a client running in a different application or JVM) or both. In the case of remote access, session beans are required to implement a remote interface. For local access, session beans can implement a local interface or no interface (the no-interface view of a session bean). Remote and local interfaces that session beans implement are sometimes also called business interfaces, because they typically expose the primary business functionality. Creating a no-interface session bean To create a session bean with a no-interface view, create a POJO and annotate it with the appropriate EJB annotation type and @LocalBean. For example, we can create a local stateful Student bean as follows: import javax.ejb.LocalBean; import javax.ejb.Singleton; @Singleton @LocalBean public class Student { ... } Accessing a session bean using dependency injection You can access session beans by either using the @EJBannotation (for dependency injection) or performing a Java Naming and Directory Interface (JNDI) lookup. EJB containers are required to make the JNDI URLs of EJBs available to clients. Dependency injection of session beans using @EJB work only for managed components, that is, components of the application whose life cycle is managed by the EJB container. When a component is managed by the container, it is created (instantiated) by the container and also destroyed by the container. You do not create managed components using the new operator. JEE-managed components that support direct injection of EJBs are servlets, managed beans of JSF pages and EJBs themselves (one EJB can have other EJBs injected into it). Unfortunately, you cannot have a web container injecting EJBs into JSPs or JSP beans. Also, you cannot have EJBs injected into any custom classes that you create and are instantiated using the new operator. We can use the Student bean (created previously) from a managed bean of JSF, as follows: import javax.ejb.EJB; import javax.faces.bean.ManagedBean; @ManagedBean public class StudentJSFBean { @EJB private Student studentEJB; } Note that if you create an EJB with a no-interface view, then all the public methods in that EJB will be exposed to the clients. If you want to control which methods can be called by clients, then you should implement the business interface. Creating a session bean using a local business interface A business interface for EJB is a simple Java interface with either the @Remote or @Local annotation. 
So we can create a local interface for the Student bean as follows: import java.util.List; import javax.ejb.Local; @Local public interface StudentLocal { public List<Course> getCourses(); } We implement a session bean like this: import java.util.List; import javax.ejb.Local; import javax.ejb.Stateful; @Stateful @Local public class Student implements StudentLocal { @Override public List<CourseDTO> getCourses() { //get courses are return … } } Clients can access the Student EJB only through the local interface: import javax.ejb.EJB; import javax.faces.bean.ManagedBean; @ManagedBean public class StudentJSFBean { @EJB private StudentLocal student; } The session bean can implement multiple business interfaces. Accessing a session bean using a JNDI lookup Though accessing EJB using dependency injection is the easiest way, it works only if the container manages the class that accesses the EJB. If you want to access EJB from a POJO that is not a managed bean, then dependency injection will not work. Another scenario where dependency injection does not work is when EJB is deployed in a separate JVM (this could be on a remote server). In such cases, you will have to access EJB using a JNDI lookup (visit https://docs.oracle.com/javase/tutorial/jndi/ for more information on JNDI). JEE applications can be packaged in an Enterprise Application Archive (EAR), which contains a .jar file for EJBs and a WAR file for web applications (and the lib folder contains the libraries required for both). If, for example, the name of an EAR file is CourseManagement.ear and the name of an EJB JAR file in it is CourseManagementEJBs.jar, then the name of the application is CourseManagement (the name of the EAR file) and the module name is CourseManagementEJBs. The EJB container uses these names to create a JNDI URL for lookup EJBs. A global JNDI URL for EJB is created as follows: "java:global/<application_name>/<module_name>/<bean_name>![<bean_interface>]" java:global: Indicates that it is a global JNDI URL. <application_name>: The application name is typically the name of the EAR file. <module_name>: This is the name of the EJB JAR. <bean_name>: This is the name of the EJB bean class. <bean_interface>: This is optional if EJB has a no-interface view, or if it implements only one business interface. Otherwise, it is a fully qualified name of a business interface. EJB containers are also required to publish two more variations of JNDI URLs for each EJB. These are not global URLs, which means that they can't be used to access EJBs from clients that are not in the same JEE application (in the same EAR): "java:app/[<module_name>]/<bean_name>![<bean_interface>]" "java:module/<bean_name>![<bean_interface>]" The first URL can be used if the EJB client is in the same application, and the second URL can be used if the client is in the same module (the same JAR file as the EJB). Before you look up any URL in a JNDI server, you need to create an InitialContext that includes information, among other things such as the hostname of JNDI server and the port on which it is running. 
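When the client runs in a different JVM, the InitialContext typically needs the JNDI provider's host and port. The exact property names are vendor-specific; the following sketch uses the GlassFish/CORBA properties and an assumed host name:

```java
import java.util.Properties;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class RemoteEjbClient {
    public static void main(String[] args) throws NamingException {
        Properties props = new Properties();
        // GlassFish-specific ORB properties; other servers use different keys
        props.setProperty("org.omg.CORBA.ORBInitialHost", "ejbserver.example.com");
        props.setProperty("org.omg.CORBA.ORBInitialPort", "3700"); // GlassFish's default IIOP port

        InitialContext ctx = new InitialContext(props);
        Object studentRef =
            ctx.lookup("java:global/CourseManagement/CourseManagementEJBs/Student");
        System.out.println("Looked up: " + studentRef);
    }
}
```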
If you are creating InitialContext in the same server, then there is no need to specify these attributes: InitialContext initCtx = new InitialContext(); Object obj = initCtx.lookup("jndi_url"); We can use the following JNDI URLs to access a no-interface (LocalBean) Student EJB (assuming that the name of the EAR file is CourseManagement and the name of the JAR file for EJBs is CourseManagementEJBs): URL When to use java:global/CourseManagement/ CourseManagementEJBs/Student The client can be anywhere in the EAR file, because we are using a global URL. Note that we haven't specified the interface name because we are assuming that the Student bean provides a no-interface view in this example. java:app/CourseManagementEJBs/Student The client can be anywhere in the EAR. We skipped the application name because the client is expected to be in the same application. This is because the namespace of the URL is java:app. java:module/Student The client must be in the same JAR file as EJB. We can use the following JNDI URLs to access the Student EJB that implemented a local interface, StudentLocal: URL When to use java:global/CourseManagement/ CourseManagementEJBs/Student!packt.jee.book.ch6.StudentLocal The client can be anywhere in the EAR file, because we are using a global URL. java:global/CourseManagement/ CourseManagementEJBs/Student The client can be anywhere in the EAR. We skipped the interface name because the bean implements only one business interface. Note that the object returned from this call will be of the StudentLocal type, and not Student. java:app/CourseManagementEJBs/Student Or java:app/CourseManagementEJBs/Student!packt.jee.book.ch6.StudentLocal   The client can be anywhere in the EAR. We skipped the application name because the JNDI namespace is java:app. java:module/Student Or java:module/Student!packt.jee.book.ch6.StudentLocal The client must be in the same EAR as the EJB. Here is an example of how we can call the Student bean with the local business interface from one of the objects (that is not managed by the web container) in our web application: InitialContext ctx = new InitialContext(); StudentLocal student = (StudentLocal) ctx.loopup ("java:app/CourseManagementEJBs/Student"); return student.getCourses(id) ; //get courses from Student EJB Creating EAR for Deployment outside Eclipse. Summary EJBs are ideal for writing business logic in web applications. They can act as the perfect bridge between web interface components, such as a JSF, servlet, or JSP, and data access objects, such as JDO. EJBs can be distributed across multiple JEE application servers (this could improve application scalability) and their life cycle is managed by the container. EJBs can easily be injected into managed objects or can be looked up using JNDI. The Eclipse JEE makes creating and consuming EJBs very easy. The JEE application server Glassfish can also be managed and applications can be deployed from within Eclipse. Resources for Article: Further resources on this subject: Contexts and Dependency Injection in NetBeans[article] WebSockets in Wildfly[article] Creating Java EE Applications [article]


Embedded Linux and Its Elements

Packt
22 Sep 2015
6 min read
 In this article by Chris Simmonds, author of the book, Mastering Embedded Linux Programming, we'll cover the introduction of embedded Linux and its elements. (For more resources related to this topic, see here.) Why is embedded Linux popular? Linux first became a viable choice for embedded devices around 1999. That was when Axis (www.axis.com) released their first Linux-powered network camera and Tivo (www.tivo.com) their first DVR (Digital video recorder). Since 1999, Linux has become ever more popular, to the point that today it is the operating system of choice for many classes of product. As of this writing, in 2015, there are about 2 billion devices running Linux. That includes a large number of smart phones running Android, set top boxes and smart TVs and WiFi routers. Not to mention a very diverse range of devices such as vehicle diagnostics, weighing scales, industrial devices and medical monitoring units that ship in smaller volumes. So, why does your TV run Linux? At first glance, the function of a TV is simple: it has to display a stream of video on a screen. Why is a complex Unix-based operating system like Linux necessary? The simple answer is Moore's Law: Gordon Moore, co-founder of Intel stated in 1965 that the density of components on a chip will double every 2 years. That applies to the devices that we design and use in our everyday lives just as much as it does to desktops, laptops and servers. A typical SoC (System on Chip) at the heart of current devices contains many function block and has a technical reference manual that stretches to thousands of pages. Your TV is not simply displaying a video stream as the old analog sets used to. The stream is digital, possibly encrypted, and it needs processing to create an image. Your TV is (or soon will be) connected to the Internet. It can receive content from smart phones, tablets and home media servers. It can be (or soon will) used to play games. And so on and so on. You need a full operating system to manage all that hardware. Here are some points that drive the adoption of Linux: Linux has the functionality required. It has a good scheduler, a good network stack, support for many kinds of storage media, good support for multimedia devices, and so on. It ticks all the boxes. Linux has been ported to a wide range of processor architectures, including those important for embedded use: ARM, MIPS, x86 and PowerPC. Linux is open source. So you have the freedom to get the source code and modify it to meet your needs. You, or someone in the community, can create a board support package for your particular SoC, board or device. You can add protocols, features, technologies that may be missing from the mainline source code. Or, you can remove features that you don't need in order to reduce memory and storage requirements. Linux is flexible. Linux has an active community. In the case of the Linux kernel, very active. There is a new release of the kernel every 10 to 12 weeks, and each release contains code from around 1000 developers. An active community means that Linux is up to date and supports current hardware, protocols and standards. Open source licenses guarantee that you have access to the source code. There is no vendor tie-in. There is no vendor, no license fees, no restrictive NDAs, EULAs, and so on. Open source software is free in both senses: it gives you the freedom to adapt it for our own use and there is nothing to pay. For these reasons, Linux is an ideal choice for complex devices. 
But there are a few caveats I should mention here. Complexity makes it harder to understand. Coupled with the fast moving development process and the decentralized structures of open source, you have to put some effort into learning how to use it and to keep on re-learning as it changes. I hope that this article will help in the process. Elements of embedded Linux Every project begins by obtaining, customizing and deploying these four elements: Toolchain, Bootloader, Kernel, and Root filesystem. Toolchain The toolchain is the first element of embedded Linux and the starting point of your project. It should be constant throughout the project, in other words, once you have chosen your toolchain it is important to stick with it. Changing compilers and development libraries in an inconsistent way during a project will lead to subtle bugs. Obtaining a toolchain can be as simple as downloading and installing a package. But, the toolchain itself is a complex thing. Linux toolchains are almost always based on components from the GNU project (http://www.gnu.org). It is becoming possible to create toolchains based on LLVM/Clang (http://llvm.org). Bootloader The bootloader is the second element of Embedded Linux. It is the part that starts the system up and loads the operating system kernel. When considering which bootloader to focus on, there is one that stands out: U-Boot. In an embedded Linux system the bootloader has two main jobs: to start the system running and to load a kernel. In fact the first job is in somewhat subsidiary to the second in that it is only necessary to get as much of the system working as is necessary to load the kernel. Kernel The kernel is the third element of Embedded Linux. It is the component that is responsible for managing resources and interfacing with hardware, and so affects almost every aspect of your final software build. Usually it is tailored to your particular hardware configuration. The kernel has three main jobs to do: to manage resources, to interface to hardware, and to provide an API that offers a useful level of abstraction to user space programs, as summarized in the following diagram: Root filesystem The root filesystem is the fourth and final element of embedded Linux. The first objective is to create a minimal root filesystem that can give us a shell prompt. Then using that as a base we will add scripts to start other programs up, and to configure a network interface and user permissions. Knowing how to build the root filesystem from scratch is a useful skill. Summary In this article we briefly saw the introduction for embedded Linux and its elements. Resources for Article: Further resources on this subject: Virtualization[article] An Introduction to WEP [article] Raspberry Pi LED Blueprints [article]
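To make the toolchain element above a little more concrete, here is a minimal cross-compilation sketch; the arm-linux-gnueabihf- prefix is only an example and depends on the toolchain you actually install:

```sh
# Cross-compile a hello-world for an ARM target using a GNU toolchain
arm-linux-gnueabihf-gcc -o hello hello.c

# Confirm the result is an ARM binary rather than one for the build machine
file hello
```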

Cassandra Design Patterns

Packt
22 Sep 2015
18 min read
In this article by Rajanarayanan Thottuvaikkatumana, author of the book Cassandra Design Patterns, Second Edition, the author discusses how Apache Cassandra became one of the most popular NoSQL data stores. Cassandra draws on the research papers Dynamo: Amazon's Highly Available Key-Value Store and Bigtable: A Distributed Storage System for Structured Data, and is implemented with the best features of both. In general, NoSQL data stores can be classified into the following groups: key-value data stores, column family data stores, document data stores, and graph data stores. Cassandra belongs to the column family data store group. Cassandra's peer-to-peer architecture avoids single points of failure in the cluster of Cassandra nodes and gives the ability to distribute the nodes across racks or data centres. This makes Cassandra a linearly scalable data store; in other words, the more processing you need, the more Cassandra nodes you can add to your cluster. Cassandra's multi-data-centre support makes it a perfect choice for replicating the data stores across data centres for disaster recovery, high availability, separating transaction processing from analytical environments, and building resiliency into the data store infrastructure.

Design patterns in Cassandra

The term "design patterns" is a highly misinterpreted term in the software development community. In an extremely general sense, it is a set of solutions for some known problems in quite a specific context. It is used in this book to describe a pattern of using certain features of Cassandra to solve some real-world problems. This book is a collection of such design patterns with real-world examples.

Coexistence patterns

Cassandra is one of the most successful NoSQL data stores, and it is superficially similar to a traditional RDBMS. Cassandra column families (also known as Cassandra tables) look, from a logical perspective, much like RDBMS tables to the users, even though the underlying structure of these tables is totally different. Because of this, Cassandra is a good fit to be deployed alongside a traditional RDBMS to solve some of the problems that the RDBMS is not able to handle. The caveat is that, because of this surface similarity, many users and data modelers try to use Cassandra in exactly the same way they model and use an RDBMS schema, and they get into serious deployment issues. How do you prevent such pitfalls? The key is to understand the differences from both a theoretical and a practical perspective, and to follow the best practices prescribed by the creators of Cassandra. Where do you start with Cassandra? The best place to look is at new application development requirements and take it from there. Look at the cases where there is a need to de-normalize the RDBMS tables and keep all the data items together, which would have been distributed if you were to design the same solution in an RDBMS. Instead of thinking from a pure data model perspective, start thinking in terms of the application's perspective: how the data is generated by the application, what the read requirements are, what the write requirements are, what response time is expected for some of the use cases, and so on. Depending on these aspects, design the data model; a minimal sketch of such a query-driven table follows.
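The excerpt stops short of showing what such an application-driven, query-first table might look like. The following is a minimal, hypothetical sketch rather than anything from the book: the keyspace, table, and column names are invented, and it assumes the DataStax Java driver (the com.datastax.driver.core API of that era) on the classpath and a Cassandra node listening on 127.0.0.1.

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class QueryFirstModelSketch {
    public static void main(String[] args) {
        // Assumes a local Cassandra node; Cluster and Session are closed automatically.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {

            session.execute("CREATE KEYSPACE IF NOT EXISTS shop WITH replication = "
                    + "{'class': 'SimpleStrategy', 'replication_factor': 1}");

            // The table is shaped by the read path ("show all orders for a customer"),
            // so customer_id is the partition key and the order details live with it.
            session.execute("CREATE TABLE IF NOT EXISTS shop.orders_by_customer ("
                    + "customer_id text, order_id timeuuid, order_total decimal, "
                    + "PRIMARY KEY (customer_id, order_id))");

            session.execute("INSERT INTO shop.orders_by_customer "
                    + "(customer_id, order_id, order_total) VALUES ('C42', now(), 19.99)");

            // The query that drove the design: one partition read, no joins.
            ResultSet rs = session.execute("SELECT order_id, order_total "
                    + "FROM shop.orders_by_customer WHERE customer_id = 'C42'");
            for (Row row : rs) {
                System.out.println(row.getUUID("order_id") + " -> " + row.getDecimal("order_total"));
            }
        }
    }
}
```

The point of the sketch is the shape of the table rather than the driver calls: the read and write paths were decided first, and the schema simply follows them.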
In the big data world, the application becomes the first-class citizen and the data model gives up the driving seat in the application design. Design the data model to serve the needs of the applications. In any organization, new reporting requirements come up all the time, and the major challenge in generating reports is the underlying data store. In the RDBMS world, reporting is always a challenge: you may have to join multiple tables to generate even simple reports. Even though RDBMS objects such as views, stored procedures, and indexes may be used to get the desired data for the reports, the query plan is going to be very complex most of the time when the report is being generated. The consumption of processing power is another concern when generating such reports on the fly. Because of these complexities, it is common to keep separate tables containing data exported from the transactional tables for reporting requirements. This is a great opportunity to start with NoSQL stores like Cassandra as a reporting data store. Data aggregation and summarization are common requirements in any organization. They help to control data growth by storing only the summary statistics and moving the transactional data into archives. Many times, this aggregated and summarized data is used for statistical analysis. Making the summary accurate and easily accessible is a big challenge. Most of the time, data aggregation and reporting go hand in hand: the aggregated data is heavily used in reports, and the aggregation process speeds up the queries to a great extent. This is another place where you can start with NoSQL stores like Cassandra. The coexistence of RDBMS and NoSQL data stores like Cassandra is very much possible, feasible, and sensible; and this is the only way to get started with the NoSQL movement, unless you embark on a totally new product development from scratch. In summary, this section of the book discusses some design patterns related to de-normalization, reporting, and aggregation of data using Cassandra as the preferred NoSQL data store.

RDBMS migration patterns

A big bang approach to any kind of technology migration is not advisable; a series of deliberations has to happen before the eventual and complete changeover. Migration from RDBMS to Cassandra is no different. Any new technology replacing an old one must coexist harmoniously, at least for a short period of time, because this gives the stakeholders a lot of confidence in the new technology. Many technology pundits offer various approaches to RDBMS-to-NoSQL migration. Many such guidelines are specific to particular NoSQL data stores and give attention to specific areas, and most of the time they end up focusing on the process rather than the technology. The migration from RDBMS to Cassandra is not an easy task, mainly because RDBMS-based systems are time tested and trustworthy in most organizations. So, migrating from such a robust RDBMS-based system to Cassandra is not going to be easy for anyone. One of the best approaches to achieve this goal is to exploit some of the new or unique features of Cassandra that many traditional RDBMSs don't have. This also prevents Cassandra from being used just like any other RDBMS. Cassandra is unique; Cassandra is not an RDBMS. The approach of banking on the unique features is not only applicable to the RDBMS to Cassandra migration, but also to any migration from one paradigm to another. A brief sketch of a few of these unique features appears below, ahead of the detailed discussion.
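The following paragraphs describe some of these unique features, collections, counters, and time-to-live, in detail. As a quick preview, here is a hedged sketch of what they look like from a Java client; none of this code comes from the book, the table and column names are invented, and it assumes the same DataStax Java driver, a local node, and the shop keyspace created in the earlier sketch.

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class UniqueFeaturesPreview {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("shop")) {

            // Collection column: a set of phone numbers kept with the row, no join table needed.
            session.execute("CREATE TABLE IF NOT EXISTS customer_profile ("
                    + "customer_id text PRIMARY KEY, phone_numbers set<text>)");
            session.execute("UPDATE customer_profile SET phone_numbers = phone_numbers + {'555-0100'} "
                    + "WHERE customer_id = 'C42'");

            // Counter column: server-side increments instead of read-modify-write in the application.
            session.execute("CREATE TABLE IF NOT EXISTS page_hits ("
                    + "page_name text PRIMARY KEY, hits counter)");
            session.execute("UPDATE page_hits SET hits = hits + 1 WHERE page_name = 'home'");

            // TTL: the token expires on its own after one hour; no cleanup job is required.
            session.execute("CREATE TABLE IF NOT EXISTS login_tokens ("
                    + "token text PRIMARY KEY, customer_id text)");
            session.execute("INSERT INTO login_tokens (token, customer_id) "
                    + "VALUES ('abc123', 'C42') USING TTL 3600");
        }
    }
}
```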
Some of the design patterns discussed in this section of the book revolve around very simple but important features of Cassandra that have profound application potential when designing the next generation of NoSQL data stores using Cassandra. A wise use of these unique features in Cassandra will give a head start on the eventual and complete migration from RDBMS. The modeling of collection objects in an RDBMS is a real pain, because multiple tables have to be defined and a join is required to access the data. Some RDBMSs address this by providing the capability to define user-defined data types, but there is absolutely no standardization in this space. Collection objects are very commonly seen in real-world applications: a list of actions, a tuple of related values, a set of objects, dictionaries, and the like come up quite often. Cassandra has elegant ways to model these because they are supported as data types in column families. Counting is a very commonly required process in many business processes and applications. In an RDBMS, this has to be modeled as integers or long numbers, but many times applications make big mistakes in using them in wrong ways. Cassandra has a counter data type in the column family that alleviates this problem. Getting rid of unwanted records from an RDBMS table is not an automatic process; when some application event occurs, they have to be removed by application programs or through some other means. But in many situations, many data items have a preallocated time to live, and they should go away without the intervention of any external events. Cassandra has a way to assign a time-to-live (TTL) attribute to data items. By making use of TTL, data items get removed without any other external event's intervention. All the design patterns covered in this section of the book revolve around some of the new features of Cassandra that will make the migration from RDBMS to Cassandra an easy task.

Cache migration pattern

Database access, whether it is from an RDBMS or other highly distributed NoSQL data stores, is always an input/output (I/O) intensive operation. It makes perfect sense to cache the frequently used but reasonably static data for fast access by the applications consuming it. In such situations, an in-memory cache is preferred to repeated database access for each request. Using a cache is not always a pleasant experience, though: running into really weird problems such as data loss, data getting out of sync with its source, and other data integrity issues is very common. It is very common to see the wrong components coming into the enterprise solution stack for various reasons, and overlooking some of the features and adopting a technology without much background work is a common pitfall. Many times, a cache comes into the solution stack to reduce the latency of responses. Once the initial results are favorable, more and more data gets tossed into the cache, and slowly this becomes the norm. Now is the time when problems start popping up one by one. Pure in-memory cache solutions are favored by everybody by virtue of their ability to serve data quickly, until you start losing data because of faults in the system, along with application and node crashes. A cache serves data much faster than other data stores.
But if the caching solution in use is causing data integrity problems, it is better to migrate to NoSQL data stores like Cassandra. Is Cassandra faster than in-memory caching solutions? The obvious answer is no, but it is not as bad as many think. Cassandra can be configured to serve fast reads, and a bonus comes in the form of high data integrity with strong replication capabilities. A cache is good as long as it serves its purpose without any data loss or other data integrity issues. The use case of a key/value type cache and various methods of cache-to-NoSQL migration are discussed in this section of the book. Cassandra cannot be used as a replacement for a cache in terms of the speed of data access, but when it comes to data integrity, Cassandra shines all the time with its tuneable consistency feature. With continual tuning, and by manipulating data with clean and well-written application code, data access can be improved to a great level, and it will be much better than many other data stores. The design pattern covered in this section of the book gives some guidance on migrating from caching solutions to Cassandra, if this is a must.

CAP patterns

When it comes to large-scale Internet applications or web services, popularly known as Internet of Things (IoT) applications, the number of components is huge and the way they are distributed is beyond imagination. There will be hundreds of application servers, hundreds of data store nodes, and many other components in the whole ecosystem. In such a scenario, performing an atomic transaction by getting agreement from all the components involved is, for all practical purposes, impossible. Consistency, availability, and partition tolerance are three important guarantees, popularly known as the CAP guarantees, that any distributed computing system should offer, even though all three are not possible simultaneously. In IoT applications, the distribution of the application nodes is unavoidable, which means that the possibility of a network partition is very real. So, it is mandatory to give the P guarantee. Now, the question is whether to forfeit the C guarantee or the A guarantee. At this stage, the situation is not as grave as portrayed in the CAP theorem conjectured by Eric Brewer. For all the use cases in a given IoT application, there is no need to have a 100% C guarantee and a 100% A guarantee at the same time. So, depending on the level of A guarantee needed, the C guarantee can be tuned; this is called tunable consistency. Depending on the way data is ingested into Cassandra, and the way it is consumed from Cassandra, tuning is possible to give the best results for the appropriate read and write requirements of the applications. In some applications, the speed at which data is written will be very high; in other words, the velocity of data ingestion into Cassandra is very high. These fall into the write-heavy applications. In some applications, the need to read data quickly will be an important requirement. This is mainly needed in applications where a lot of data processing is required; data analytics applications, batch processing applications, and so on fall under this category. These fall into the read-heavy applications. Now, there is a third category of applications where there is equal importance for fast writes as well as fast reads.
These are the kind of applications where there is a constant inflow of data and, at the same time, a need for clients to read the data for various purposes. These fall into the read-write balanced applications. The consistency level requirements for these three types of applications are totally different, and there is no single way to tune so that it is optimal for all of them. The consistency levels for all three application types have to be tuned differently from use case to use case. In this section of the book, various design patterns related to applications with the need for fast writes, fast reads, and balanced writes and reads are discussed. All these design patterns revolve around using the tuneable consistency parameters of Cassandra. Whether for writes or reads, if the consistency levels are set high, the availability levels will be low, and vice versa. So, by making use of the consistency level knob, the Cassandra data store can be used for various types of writing and reading use cases.

Temporal patterns

In any application, data that varies over a period of time is called temporal data, and it is very important. Temporal data is needed wherever there is a need to maintain chronology. There are many applications in which there is a huge need for storage, retrieval, and processing of data that is tied to time. The biggest challenge in dealing with temporal data in a data store is that it is heavily used for analytical purposes and has to be retrieved in various time-based sort orders. So, the data stores that are used to capture temporal data should be capable of storing the data strictly adhering to the chronology. Many usage patterns seen in the real world show temporal behavior. For classification purposes in this book, they are bucketed into three categories. The first is the general time series category. The second is the log category, such as an audit log, a transaction log, and so on. The third is the conversation category, such as the conversation messages of a chat application. This classification is relevant because these categories are commonly seen across many applications. In many applications these are really cross-cutting concerns, designers underestimate this aspect, and, in the end, many applications end up with different data stores capturing this temporal data. There is a need for a common strategy for dealing with temporal data that falls into these three commonly seen categories in an enterprise-wide solution architecture. In other words, there should be a uniform way of capturing temporal data, a uniform way of processing temporal data, and a commonly used set of tools and libraries to manage the temporal data. Out of the three design patterns that are discussed in this section of the book, the first, the Time Series pattern, is a general design pattern that covers the most general behavior of any kind of temporal data. The next two design patterns, namely the Log pattern and the Conversation pattern, are special cases of the first. This section of the book covers the general nature of temporal data, some specific instances of such data items in real-world applications, and why Cassandra is the best fit as a NoSQL data store to persist temporal data. Temporal data comes up quite often in many use cases across lots of applications; a small sketch of a time-series table appears below.
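To make the discussion that follows concrete, here is a hedged sketch of a time-series table; the names are invented rather than taken from the book, and the same DataStax Java driver, local node, and shop keyspace are assumed.

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class TimeSeriesSketch {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("shop")) {

            // One partition per sensor; readings are clustered by time, newest first,
            // so "the latest N readings for a sensor" is a single sequential read.
            session.execute("CREATE TABLE IF NOT EXISTS sensor_readings ("
                    + "sensor_id text, reading_time timestamp, reading_value double, "
                    + "PRIMARY KEY (sensor_id, reading_time)) "
                    + "WITH CLUSTERING ORDER BY (reading_time DESC)");

            // USING TTL lets old readings expire on their own (30 days here).
            session.execute("INSERT INTO sensor_readings (sensor_id, reading_time, reading_value) "
                    + "VALUES ('thermostat-1', '2015-09-22 10:00:00+0000', 21.5) USING TTL 2592000");

            for (Row row : session.execute("SELECT reading_time, reading_value FROM sensor_readings "
                    + "WHERE sensor_id = 'thermostat-1' LIMIT 10")) {
                System.out.println(row);
            }
        }
    }
}
```

The partition key, the clustering column, and the clustering order are exactly the knobs the next paragraph singles out as the ones that make or break a temporal model.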
Data modeling of temporal data is very important from the Cassandra perspective for optimal storage and quick access of the data. Some common design patterns to model temporal data have been covered in this section of the book. By focusing on a few key aspects, such as the partition key, the primary key, the clustering columns, and the number of records that get stored in a wide row of Cassandra, very effective and high-performing temporal data models can be built.

Analytical patterns

The 3Vs of big data, namely Volume, Variety, and Velocity, pose another big challenge, which is the analysis of the data stored in NoSQL data stores such as Cassandra. What are the analytics use cases? How can the distributed data be processed? What are the data transformations that are typically seen in applications? These are the topics covered in this section of the book. Unlike the other sections of this book, the focus shifts from Cassandra to other technologies, such as Apache Hadoop, Hadoop MapReduce, and Apache Spark, to introduce the big data analytics tool space. Design patterns such as the Map/Reduce pattern and the Transformation pattern are very commonly seen in the data analytics world. Cassandra has good compatibility with Apache Spark, and together they make an ideal tool set for data analysis use cases. This section of the book covers some data analysis aspects and mainly discusses data processing. Data transformation is one of the major activities in data processing. Out of the many data processing patterns, the Map/Reduce pattern deserves a special mention, because it is used in so many batch processing and analysis use cases dealing with big data. Spark has been chosen as the tool of choice to explain the data processing activities. This section explains how a Map/Reduce kind of data processing task can be done using Cassandra. Spark, which is very powerful for performing online data analysis, is also discussed. This section of the book also covers some of the commonly seen data transformations that are used in data processing applications.

Summary

Many Cassandra design patterns have been covered in this book. If a design pattern is not used in any real-world application, it has only theoretical value. To give a practical approach to the applicability of these design patterns, an end-to-end application is taken as a case in point and described in the last chapter of the book, which is used as a vehicle to explain the applicability of the Cassandra design patterns discussed in the earlier sections. Users love Cassandra because of its SQL-like interface, CQL, and because its features are closely related to the RDBMS even though the paradigm is totally new. Application developers love Cassandra because of the plethora of drivers available, so they can write applications in their preferred programming language. Architects love Cassandra because they can store structured, semi-structured, and unstructured data in it. Database administrators love Cassandra because it comes with almost no maintenance overhead. Service managers love Cassandra because of the wonderful monitoring tools available in the market. CIOs love Cassandra because it gives value for their money. And Cassandra works! An application based on Cassandra will be perfect only if its features are used in the right way, and this book is an attempt to guide the Cassandra community in this direction.
Resources for Article: Further resources on this subject: Cassandra Architecture [article] Getting Up and Running with Cassandra [article] Getting Started with Apache Cassandra [article]

Putting the Function in Functional Programming

Packt
22 Sep 2015
27 min read
 In this article by Richard Reese, the author of the book Learning Java Functional Programming, we will cover lambda expressions in more depth. We will explain how they satisfy the mathematical definition of a function and how we can use them in supporting Java applications. In this article, you will cover several topics, including: Lambda expression syntax and type inference High-order, pure, and first-class functions Referential transparency Closure and currying (For more resources related to this topic, see here.) Our discussions cover high-order functions, first-class functions, and pure functions. Also examined are the concepts of referential transparency, closure, and currying. Examples of nonfunctional approaches are followed by their functional equivalent where practical. Lambda expressions usage A lambda expression can be used in many different situations, including: Assigned to a variable Passed as a parameter Returned from a function or method We will demonstrate how each of these are accomplished and then elaborate on the use of functional interfaces. Consider the forEach method supported by several classes and interfaces, including the List interface. In the following example, a List interface is created and the forEach method is executed against it. The forEach method expects an object that implements the Consumer interface. This will display the three cartoon character names: List<String> list = Arrays.asList("Huey", "Duey", "Luey"); list.forEach(/* Implementation of Consumer Interface*/); More specifically, the forEach method expects an object that implements the accept method, the interface's single abstract method. This method's signature is as follows: void accept(T t) The interface also has a default method, andThen, which is passed and returns an instance of the Consumer interface. We can use any of three different approaches for implementing the functionality of the accept method: Use an instance of a class that implements the Consumer interface Use an anonymous inner class Use a lambda expression We will demonstrate each method so that it will be clear how each technique works and why lambda expressions will often result in a better solution. We will start with the declaration of a class that implements the Consumer interface as shown next: public class ConsumerImpl<T> implements Consumer<T> { @Override public void accept(T t) { System.out.println(t); } } We can then use it as the argument of the forEach method: list.forEach(new ConsumerImpl<>()); Using an explicit class allows us to reuse the class or its objects whenever an instance is needed. The second approach uses an anonymous inner function as shown here: list.forEach(new Consumer<String>() { @Override public void accept(String t) { System.out.println(t); } }); This was a fairly common approach used prior to Java 8. It avoids having to explicitly declare and instantiate a class, which implements the Consumer interface. A simple statement that uses a lambda expression is shown next: list.forEach(t->System.out.println(t)); The lambda expression accepts a single argument and returns void. This matches the signature of the Consumer interface. Java 8 is able to automatically perform this matching process. This latter technique obviously uses less code, making it more succinct than the other solutions. 
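Although the excerpt does not show it at this point, a method reference is a fourth, even terser way to supply the Consumer implementation for the same forEach call (method references are used later in the article). A minimal, self-contained sketch:

```java
import java.util.Arrays;
import java.util.List;

public class MethodReferenceExample {
    public static void main(String[] args) {
        List<String> list = Arrays.asList("Huey", "Duey", "Luey");

        // Lambda expression, as in the article.
        list.forEach(t -> System.out.println(t));

        // Method reference: same behavior, delegating directly to println.
        list.forEach(System.out::println);
    }
}
```

In both cases the compiler infers that the argument matches the Consumer interface's accept method.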
If we desire to reuse this lambda expression elsewhere, we could have assigned it to a variable first and then used it in the forEach method as shown here: Consumer consumer = t->System.out.println(t); list.forEach(consumer); Anywhere a functional interface is expected, we can use a lambda expression. Thus, the availability of a large number of functional interfaces will enable the frequent use of lambda expressions and programs that exhibit a functional style of programming. While developers can define their own functional interfaces, which we will do shortly, Java 8 has added a large number of functional interfaces designed to support common operations. Most of these are found in the java.util.function package. We will use several of these throughout the book and will elaborate on their purpose, definition, and use as we encounter them. Functional programming concepts in Java In this section, we will examine the underlying concept of functions and how they are implemented in Java 8. This includes high-order, first-class, and pure functions. A first-class function is a function that can be used where other first-class entities can be used. These types of entities include primitive data types and objects. Typically, they can be passed to and returned from functions and methods. In addition, they can be assigned to variables. A high-order function either takes another function as an argument or returns a function as the return value. Languages that support this type of function are more flexible. They allow a more natural flow and composition of operations. Pure functions have no side effects. The function does not modify nonlocal variables and does not perform I/O. High-order functions We will demonstrate the creation and use of the high-order function using an imperative and a functional approach to convert letters of a string to lowercase. The next code sequence reuses the list variable, developed in the previous section, to illustrate the imperative approach. The for-each statement iterates through each element of the list using the String class' toLowerCase method to perform the conversion: for(String element : list) { System.out.println(element.toLowerCase()); } The output will be each name in the list displayed in lowercase, each on a separate line. To demonstrate the use of a high-order function, we will create a function called, processString, which is passed a function as the first parameter and then apply this function to the second parameter as shown next:   public String processString(Function<String,String> operation,String target) { return operation.apply(target); } The function passed will be an instance of the java.util.function package's Function interface. This interface possesses an accept method that is passed one data type and returns a potentially different data type. With our definition, it is passed String and returns String. In the next code sequence, a lambda expression using the toLowerCase method is passed to the processString method. As you may remember, the forEach method accepts a lambda expression, which matches the Consumer interface's accept method. The lambda expression passed to the processString method matches the Function interface's accept method. The output is the same as produced by the equivalent imperative implementation. 
list.forEach(s ->System.out.println( processString(t->t.toLowerCase(), s))); We could have also used a method reference as show next: list.forEach(s ->System.out.println( processString(String::toLowerCase, s))); The use of the high-order function may initially seem to be a bit convoluted. We needed to create the processString function and then pass either a lambda expression or a method reference to perform the conversion. While this is true, the benefit of this approach is flexibility. If we needed to perform a different string operation other than converting the target string to lowercase, we will need to essentially duplicate the imperative code and replace toLowerCase with a new method such as toUpperCase. However, with the functional approach, all we need to do is replace the method used as shown next: list.forEach(s ->System.out.println(processString(t- >t.toUpperCase(), s))); This is simpler and more flexible. A lambda expression can also be passed to another lambda expression. Let's consider another example where high-order functions can be useful. Suppose we need to convert a list of one type into a list of a different type. We might have a list of strings that we wish to convert to their integer equivalents. We might want to perform a simple conversion or perhaps we might want to double the integer value. We will use the following lists:   List<String> numberString = Arrays.asList("12", "34", "82"); List<Integer> numbers = new ArrayList<>(); List<Integer> doubleNumbers = new ArrayList<>(); The following code sequence uses an iterative approach to convert the string list into an integer list:   for (String num : numberString) { numbers.add(Integer.parseInt(num)); } The next sequence uses a stream to perform the same conversion: numbers.clear(); numberString .stream() .forEach(s -> numbers.add(Integer.parseInt(s))); There is not a lot of difference between these two approaches, at least from a number of lines perspective. However, the iterative solution will only work for the two lists: numberString and numbers. To avoid this, we could have written the conversion routine as a method. We could also use lambda expression to perform the same conversion. The following two lambda expression will convert a string list to an integer list and from a string list to an integer list where the integer has been doubled:   Function<List<String>, List<Integer>> singleFunction = s -> { s.stream() .forEach(t -> numbers.add(Integer.parseInt(t))); return numbers; }; Function<List<String>, List<Integer>> doubleFunction = s -> { s.stream() .forEach(t -> doubleNumbers.add( Integer.parseInt(t) * 2)); return doubleNumbers; }; We can apply these two functions as shown here: numbers.clear(); System.out.println(singleFunction.apply(numberString)); System.out.println(doubleFunction.apply(numberString)); The output follows: [12, 34, 82] [24, 68, 164] However, the real power comes from passing these functions to other functions. In the next code sequence, a stream is created consisting of a single element, a list. This list contains a single element, the numberString list. The map method expects a Function interface instance. Here, we use the doubleFunction function. The list of strings is converted to integers and then doubled. The resulting list is displayed: Arrays.asList(numberString).stream() .map(doubleFunction) .forEach(s -> System.out.println(s)); The output follows: [24, 68, 164] We passed a function to a method. We could easily pass other functions to achieve different outputs. 
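To back up the claim that other functions can be plugged into the same pipeline, here is a minimal, self-contained sketch; tripleFunction is an invented name, not from the book, and it simply mirrors the shape of the doubleFunction defined above.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.function.Function;

public class PassOtherFunctions {
    public static void main(String[] args) {
        List<String> numberString = Arrays.asList("12", "34", "82");
        List<Integer> tripleNumbers = new ArrayList<>();

        // A hypothetical variation on doubleFunction: it triples each parsed value.
        Function<List<String>, List<Integer>> tripleFunction = s -> {
            s.stream().forEach(t -> tripleNumbers.add(Integer.parseInt(t) * 3));
            return tripleNumbers;
        };

        // The same pipeline as in the text, with a different function plugged in.
        Arrays.asList(numberString).stream()
                .map(tripleFunction)
                .forEach(s -> System.out.println(s));   // prints [36, 102, 246]
    }
}
```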
Returning a function When a value is returned from a function or method, it is intended to be used elsewhere in the application. Sometimes, the return value is used to determine how subsequent computations should proceed. To illustrate how returning a function can be useful, let's consider a problem where we need to calculate the pay of an employee based on the numbers of hours worked, the pay rate, and the employee type. To facilitate the example, start with an enumeration representing the employee type: enum EmployeeType {Hourly, Salary, Sales}; The next method illustrates one way of calculating the pay using an imperative approach. A more complex set of computation could be used, but these will suffice for our needs: public float calculatePay(int hoursWorked, float payRate, EmployeeType type) { switch (type) { case Hourly: return hoursWorked * payRate; case Salary: return 40 * payRate; case Sales: return 500.0f + 0.15f * payRate; default: return 0.0f; } } If we assume a 7 day workweek, then the next code sequence shows an imperative way of calculating the total number of hours worked: int hoursWorked[] = {8, 12, 8, 6, 6, 5, 6, 0}; int totalHoursWorked = 0; for (int hour : hoursWorked) { totalHoursWorked += hour; } Alternatively, we could have used a stream to perform the same operation as shown next. The Arrays class's stream method accepts an array of integers and converts it into a Stream object. The sum method is applied fluently, returning the number of hours worked: totalHoursWorked = Arrays.stream(hoursWorked).sum(); The latter approach is simpler and easier to read. To calculate and display the pay, we can use the following statement which, when executed, will return 803.25.    System.out.println( calculatePay(totalHoursWorked, 15.75f, EmployeeType.Hourly)); The functional approach is shown next. A calculatePayFunction method is created that is passed by the employee type and returns a lambda expression. This will compute the pay based on the number of hours worked and the pay rate. This lambda expression is based on the BiFunction interface. It has an accept method that takes two arguments and returns a value. Each of the parameters and the return type can be of different data types. It is similar to the Function interface's accept method, except that it is passed two arguments instead of one. The calculatePayFunction method is shown next. It is similar to the imperative's calculatePay method, but returns a lambda expression: public BiFunction<Integer, Float, Float> calculatePayFunction( EmployeeType type) { switch (type) { case Hourly: return (hours, payRate) -> hours * payRate; case Salary: return (hours, payRate) -> 40 * payRate; case Sales: return (hours, payRate) -> 500f + 0.15f * payRate; default: return null; } } It can be invoked as shown next: System.out.println( calculatePayFunction(EmployeeType.Hourly) .apply(totalHoursWorked, 15.75f)); When executed, it will produce the same output as the imperative solution. The advantage of this approach is that the lambda expression can be passed around and executed in different contexts. First-class functions To demonstrate first-class functions, we use lambda expressions. Assigning a lambda expression, or method reference, to a variable can be done in Java 8. Simply declare a variable of the appropriate function type and use the assignment operator to do the assignment. 
In the following statement, a reference variable to the previously defined BiFunction-based lambda expression is declared along with the number of hours worked: BiFunction<Integer, Float, Float> calculateFunction; int hoursWorked = 51; We can easily assign a lambda expression to this variable. Here, we use the lambda expression returned from the calculatePayFunction method: calculateFunction = calculatePayFunction(EmployeeType.Hourly); The reference variable can then be used as shown in this statement: System.out.println( calculateFunction.apply(hoursWorked, 15.75f)); It produces the same output as before. One shortcoming of the way an hourly employee's pay is computed is that overtime pay is not handled. We can add this functionality to the calculatePayFunction method. However, to further illustrate the use of reference variables, we will assign one of two lambda expressions to the calculateFunction variable based on the number of hours worked as shown here: if(hoursWorked<=40) { calculateFunction = (hours, payRate) -> 40 * payRate; } else { calculateFunction = (hours, payRate) -> hours*payRate + (hours-40)*1.5f*payRate; } When the expression is evaluated as shown next, it returns a value of 1063.125: System.out.println( calculateFunction.apply(hoursWorked, 15.75f)); Let's rework the example developed in the High-order functions section, where we used lambda expressions to display the lowercase values of an array of string. Part of the code has been duplicated here for your convenience: list.forEach(s ->System.out.println( processString(t->t.toLowerCase(), s))); Instead, we will use variables to hold the lambda expressions for the Consumer and Function interfaces as shown here: Consumer<String> consumer; consumer = s -> System.out.println(toLowerFunction.apply(s)); Function<String,String> toLowerFunction; toLowerFunction= t -> t.toLowerCase(); The declaration and initialization could have been done with one statement for each variable. To display all of the names, we simply use the consumer variable as the argument of the forEach method: list.forEach(consumer); This will display the names as before. However, this is much easier to read and follow. The ability to use lambda expressions as first-class entities makes this possible. We can also assign method references to variables. Here, we replaced the initialization of the function variable with a method reference: function = String::toLowerCase; The output of the code will not change. The pure function The pure function is a function that has no side effects. By side effects, we mean that the function does not modify nonlocal variables and does not perform I/O. A method that squares a number is an example of a pure method with no side effects as shown here: public class SimpleMath { public static int square(int x) { return x * x; } } Its use is shown here and will display the result, 25: System.out.println(SimpleMath.square(5)); An equivalent lambda expression is shown here: Function<Integer,Integer> squareFunction = x -> x*x; System.out.println(squareFunction.apply(5)); The advantages of pure functions include the following: They can be invoked repeatedly producing the same results There are no dependencies between functions that impact the order they can be executed They support lazy evaluation They support referential transparency We will examine each of these advantages in more depth. Support repeated execution Using the same arguments will produce the same results. The previous square operation is an example of this. 
Since the operation does not depend on other external values, re-executing the code with the same arguments will return the same results. This supports the optimization technique called memoization. This is the process of caching the results of an expensive execution sequence and retrieving them when they are needed again. An imperative technique for implementing this approach involves using a hash map to store values that have already been computed. Let's demonstrate this using the square function. The technique should be used for functions that are compute intensive; however, using the square function will allow us to focus on the technique. Declare a cache to hold the previously computed values as shown here: private final Map<Integer, Integer> memoizationCache = new HashMap<>(); We need to declare two methods. The first method, called doComputeExpensiveSquare, does the actual computation as shown here. A display statement is included only to verify the correct operation of the technique; otherwise, it is not needed. The method should only be called once for each unique value passed to it. private Integer doComputeExpensiveSquare(Integer input) { System.out.println("Computing square"); return input * input; } A second method is used to detect when a value is used a subsequent time and to return the previously computed value instead of calling the square method. This is shown next. The containsKey method checks to see if the input value has already been used. If it hasn't, then the doComputeExpensiveSquare method is called. Otherwise, the cached value is returned. public Integer computeExpensiveSquare(Integer input) { if (!memoizationCache.containsKey(input)) { memoizationCache.put(input, doComputeExpensiveSquare(input)); } return memoizationCache.get(input); } The use of the technique is demonstrated with the next code sequence: System.out.println(computeExpensiveSquare(4)); System.out.println(computeExpensiveSquare(4)); The output follows, which demonstrates that the square method was only called once: Computing square 16 16 The problem with this approach is the declaration of a hash map. This object may be inadvertently used by other elements of the program and will require the explicit declaration of new hash maps for each memoization usage. In addition, it does not offer flexibility in handling multiple memoizations. A better approach is available in Java 8. This new approach wraps the hash map in a class and allows easier creation and use of memoization. Let's examine a memoization class as adapted from http://java.dzone.com/articles/java-8-automatic-memoization. It is called Memoizer. It uses ConcurrentHashMap to cache values and supports concurrent access from multiple threads. Two methods are defined. The doMemoize method returns a lambda expression that does all of the work. The memoize method creates an instance of the Memoizer class and passes the lambda expression implementing the expensive operation to the doMemoize method. The doMemoize method uses the ConcurrentHashMap class's computeIfAbsent method to determine if the computation has already been performed.
If the value has not been computed, it executes the Function interface's apply method against the function argument: public class Memoizer<T, U> { private final Map<T, U> memoizationCache = new ConcurrentHashMap<>(); private Function<T, U> doMemoize(final Function<T, U> function) { return input -> memoizationCache.computeIfAbsent(input, function::apply); } public static <T, U> Function<T, U> memoize(final Function<T, U> function) { return new Memoizer<T, U>().doMemoize(function); } } A lambda expression is created for the square operation: Function<Integer, Integer> squareFunction = x -> { System.out.println("In function"); return x * x; }; The memoizationFunction variable will hold the lambda expression that is subsequently used to invoke the square operations: Function<Integer, Integer> memoizationFunction = Memoizer.memoize(squareFunction); System.out.println(memoizationFunction.apply(2)); System.out.println(memoizationFunction.apply(2)); System.out.println(memoizationFunction.apply(2)); The output of this sequence follows where the square operation is performed only once: In function 4 4 4 We can easily use the Memoizer class for a different function as shown here: Function<Double, Double> memoizationFunction2 = Memoizer.memoize(x -> x * x); System.out.println(memoizationFunction2.apply(4.0)); This will square the number as expected. Functions that are recursive present additional problems. Eliminating dependencies between functions When dependencies between functions are eliminated, then more flexibility in the order of execution is possible. Consider these Function and BiFunction declarations, which define simple expressions for computing hourly, salaried, and sales type pay, respectively: BiFunction<Integer, Double, Double> computeHourly = (hours, rate) -> hours * rate; Function<Double, Double> computeSalary = rate -> rate * 40.0; BiFunction<Double, Double, Double> computeSales = (rate, commission) -> rate * 40.0 + commission; These functions can be executed, and their results are assigned to variables as shown here: double hourlyPay = computeHourly.apply(35, 12.75); double salaryPay = computeSalary.apply(25.35); double salesPay = computeSales.apply(8.75, 2500.0); These are pure functions as they do not use external values to perform their computations. In the following code sequence, the sum of all three pays are totaled and displayed: System.out.println(computeHourly.apply(35, 12.75) + computeSalary.apply(25.35) + computeSales.apply(8.75, 2500.0)); We can easily reorder their execution sequence or even execute them concurrently, and the results will be the same. There are no dependencies between the functions that restrict them to a specific execution ordering. Supporting lazy evaluation Continuing with this example, let's add an additional sequence, which computes the total pay based on the type of employee. The variable, hourly, is set to true if we want to know the total of the hourly employee pay type. It will be set to false if we are interested in salary and sales-type employees: double total = 0.0; boolean hourly = ...; if(hourly) { total = hourlyPay; } else { total = salaryPay + salesPay; } System.out.println(total); When this code sequence is executed with an hourly value of false, there is no need to execute the computeHourly function since it is not used. The runtime system could conceivably choose not to execute any of the lambda expressions until it knows which one is actually used. 
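The text notes that the three pay functions could even be executed concurrently. The book does not show that code, so the following is a hedged sketch that uses CompletableFuture from the standard library to run the same pure functions on separate threads and then combine the results; because the functions have no side effects, the total is the same as in the sequential version.

```java
import java.util.concurrent.CompletableFuture;
import java.util.function.BiFunction;
import java.util.function.Function;

public class ConcurrentPayExample {
    public static void main(String[] args) {
        BiFunction<Integer, Double, Double> computeHourly = (hours, rate) -> hours * rate;
        Function<Double, Double> computeSalary = rate -> rate * 40.0;
        BiFunction<Double, Double, Double> computeSales = (rate, commission) -> rate * 40.0 + commission;

        // Pure functions can run on any thread, in any order, with the same result.
        CompletableFuture<Double> hourly = CompletableFuture.supplyAsync(() -> computeHourly.apply(35, 12.75));
        CompletableFuture<Double> salary = CompletableFuture.supplyAsync(() -> computeSalary.apply(25.35));
        CompletableFuture<Double> sales  = CompletableFuture.supplyAsync(() -> computeSales.apply(8.75, 2500.0));

        double total = hourly.join() + salary.join() + sales.join();
        System.out.println(total);   // same value as the sequential sum
    }
}
```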
While all three functions are actually executed in this example, it illustrates the potential for lazy evaluation. Functions are not executed until needed. Referential transparency Referential transparency is the idea that a given expression is made up of subexpressions. The value of the subexpression is important. We are not concerned about how it is written or other details. We can replace the subexpression with its value and be perfectly happy. With regards to pure functions, they are said to be referentially transparent since they have same effect. In the next declaration, we declare a pure function called pureFunction: Function<Double,Double> pureFunction = t -> 3*t; It supports referential transparency. Consider if we declare a variable as shown here: int num = 5; Later, in a method we can assign a different value to the variable: num = 6; If we define a lambda expression that uses this variable, the function is no longer pure: Function<Double,Double> impureFunction = t -> 3*t+num; The function no longer supports referential transparency. Closure in Java The use of external variables in a lambda expression raises several interesting questions. One of these involves the concept of closures. A closure is a function that uses the context within which it was defined. By context, we mean the variables within its scope. This sometimes is referred to as variable capture. We will use a class called ClosureExample to illustrate closures in Java. The class possesses a getStringOperation method that returns a Function lambda expression. This expression takes a string argument and returns an augmented version of it. The argument is converted to lowercase, and then its length is appended to it twice. In the process, both an instance variable and a local variable are used. In the implementation that follows, the instance variable and two local variables are used. One local variable is a member of the getStringOperation method and the second one is a member of the lambda expression. They are used to hold the length of the target string and for a separator string: public class ClosureExample { int instanceLength; public Function<String,String> getStringOperation() { final String seperator = ":"; return target -> { int localLength = target.length(); instanceLength = target.length(); return target.toLowerCase() + seperator + instanceLength + seperator + localLength; }; } } The lambda expression is created and used as shown here: ClosureExample ce = new ClosureExample(); final Function<String,String> function = ce.getStringOperation(); System.out.println(function.apply("Closure")); Its output follows: closure:7:7 Variables used by the lambda expression are restricted in their use. Local variables or parameters cannot be redefined or modified. These variables need to be effectively final. That is, they must be declared as final or not be modified. If the local variable and separator, had not been declared as final, the program would still be executed properly. 
However, if we tried to modify the variable later, then the following syntax error would be generated, indicating such variable was not permitted within a lambda expression: local variables referenced from a lambda expression must be final or effectively final If we add the following statements to the previous example and remove the final keyword, we will get the same syntax error message: function = String::toLowerCase; Consumer<String> consumer = s -> System.out.println(function.apply(s)); This is because the function variable is used in the Consumer lambda expression. It also needs to be effectively final, but we tried to assign a second value to it, the method reference for the toLowerCase method. Closure refers to functions that enclose variable external to the function. This permits the function to be passed around and used in different contexts. Currying Some functions can have multiple arguments. It is possible to evaluate these arguments one-by-one. This process is called currying and normally involves creating new functions, which have one fewer arguments than the previous one. The advantage of this process is the ability to subdivide the execution sequence and work with intermediate results. This means that it can be used in a more flexible manner. Consider a simple function such as: f(x,y) = x + y The evaluation of f(2,3) will produce a 5. We could use the following, where the 2 is "hardcoded": f(2,y) = 2 + y If we define: g(y) = 2 + y Then the following are equivalent: f(2,y) = g(y) = 2 + y Substituting 3 for y we get: f(2,3) = g(3) = 2 + 3 = 5 This is the process of currying. An intermediate function, g(y), was introduced which we can pass around. Let's see, how something similar to this can be done in Java 8. Start with a BiFunction method designed for concatenation of strings. A BiFunction method takes two parameters and returns a single value: BiFunction<String, String, String> biFunctionConcat = (a, b) -> a + b; The use of the function is demonstrated with the following statement: System.out.println(biFunctionConcat.apply("Cat", "Dog")); The output will be the CatDog string. Next, let's define a reference variable called curryConcat. This variable is a Function interface variable. This interface is based on two data types. The first one is String and represents the value passed to the Function interface's accept method. The second data type represents the accept method's return type. This return type is defined as a Function instance that is passed a string and returns a string. In other words, the curryConcat function is passed a string and returns an instance of a function that is passed and returns a string. Function<String, Function<String, String>> curryConcat; We then assign an appropriate lambda expression to the variable: curryConcat = (a) -> (b) -> biFunctionConcat.apply(a, b); This may seem to be a bit confusing initially, so let's take it one piece at a time. First of all, the lambda expression needs to return a function. The lambda expression assigned to curryConcat follows where the ellipses represent the body of the function. The parameter, a, is passed to the body: (a) ->...; The actual body follows: (b) -> biFunctionConcat.apply(a, b); This is the lambda expression or function that is returned. This function takes two parameters, a and b. When this function is created, the a parameter will be known and specified. This function can be evaluated later when the value for b is specified. 
The function returned is an instance of a Function interface, which is passed two parameters and returns a single value. To illustrate this, define an intermediate variable to hold this returned function: Function<String,String> intermediateFunction; We can assign the result of executing the curryConcat lambda expression using it's apply method as shown here where a value of Cat is specified for the a parameter: intermediateFunction = curryConcat.apply("Cat"); The next two statements will display the returned function: System.out.println(intermediateFunction); System.out.println(curryConcat.apply("Cat")); The output will look something similar to the following: packt.Chapter2$$Lambda$3/798154996@5305068a packt.Chapter2$$Lambda$3/798154996@1f32e575 Note that these are the values representing this functions as returned by the implied toString method. They are both different, indicating that two different functions were returned and can be passed around. Now that we have confirmed a function has been returned, we can supply a value for the b parameter as shown here: System.out.println(intermediateFunction.apply("Dog")); The output will be CatDog. This illustrates how we can split a two parameter function into two distinct functions, which can be evaluated when desired. They can be used together as shown with these statements: System.out.println(curryConcat.apply("Cat").apply("Dog")); System.out.println(curryConcat.apply("Flying ").apply("Monkeys")); The output of these statements is as follows: CatDog Flying Monkeys We can define a similar operation for doubles as shown here: Function<Double, Function<Double, Double>> curryAdd = (a) -> (b) -> a * b; System.out.println(curryAdd.apply(3.0).apply(4.0)); This will display 12.0 as the returned value. Currying is a valuable approach useful when the arguments of a function need to be evaluated at different times. Summary In this article, we investigated the use of lambda expressions and how they support the functional style of programming in Java 8. When possible, we used examples to contrast the use of classes and methods against the use of functions. This frequently led to simpler and more maintainable functional implementations. We illustrated how lambda expressions support the functional concepts of high-order, first-class, and pure functions. Examples were used to help clarify the concept of referential transparency. The concepts of closure and currying are found in most functional programming languages. We provide examples of how they are supported in Java 8. Lambda expressions have a specific syntax, which we examined in more detail. Also, there are several variations of the function that can be used to support the expression in the form, which we illustrated. Lambda expressions are based on functional interfaces using type inference. It is important to understand how to create functional interfaces and to know what standard functional interfaces are available in Java 8. Resources for Article: Further resources on this subject: An Introduction to Mastering JavaScript Promises and Its Implementation in Angular.js[article] Finding Peace in REST[article] Introducing JAX-RS API [article]

Groovy Closures

Packt
16 Sep 2015
9 min read
In this article by Fergal Dearle, the author of the book Groovy for Domain-Specific Languages - Second Edition, we will focus exclusively on closures. We will take a close look at them from every angle. Closures are the single most important feature of the Groovy language. Closures are the special seasoning that helps Groovy stand out from Java. They are also the single most powerful feature that we will use when implementing DSLs. In the article, we will discuss the following topics: We will start by explaining just what a closure is and how we can define some simple closures in our Groovy code We will look at how many of the built-in collection methods make use of closures for applying iteration logic, and see how this is implemented by passing a closure as a method parameter We will look at the various mechanisms for calling closures A handy reference that you might want to consider having at hand while you read this article is GDK Javadocs, which will give you full class descriptions of all of the Groovy built-in classes, but of particular interest here is groovy.lang.Closure. (For more resources related to this topic, see here.) What is a closure Closures are such an unfamiliar concept to begin with that it can be hard to grasp initially. Closures have characteristics that make them look like a method in so far as we can pass parameters to them and they can return a value. However, unlike methods, closures are anonymous. A closure is just a snippet of code that can be assigned to a variable and executed later: def flintstones = ["Fred","Barney"] def greeter = { println "Hello, ${it}" } flintstones.each( greeter ) greeter "Wilma" greeter = { } flintstones.each( greeter ) greeter "Wilma" Because closures are anonymous, they can easily be lost or overwritten. In the preceding example, we defined a variable greeter to contain a closure that prints a greeting. After greeter is overwritten with an empty closure, any reference to the original closure is lost. It's important to remember that greeter is not the closure. It is a variable that contains a closure, so it can be supplanted at any time. Because greeter has a dynamic type, we could have assigned any other object to it. All closures are a subclass of the type groovy.lang.Closure. Because groovy.lang is automatically imported, we can refer to Closure as a type within our code. By declaring our closures explicitly as Closure, we cannot accidentally assign a non-closure to them: Closure greeter = { println it } For each closure that is declared in our code, Groovy generates a Closure class for us, which is a subclass of groovy.lang.Closure. Our closure object is an instance of this class. Although we cannot predict what exact type of closure is generated, we can rely on it being a subtype of groovy.lang.Closure. Closures and collection methods We will encounter Groovy lists and see some of the iteration functions, such as the each method: def flintstones = ["Fred","Barney"] flintstones.each { println "Hello, ${it}" } This looks like it could be a specialized control loop similar to a while loop. In fact, it is a call to the each method of Object. The each method takes a closure as one of its parameters, and everything between the curly braces {} defines another anonymous closure. Closures defined in this way can look quite similar to code blocks, but they are not the same. Code defined in a regular Java or Groovy style code block is executed as soon as it is encountered. 
With closures, the block of code defined in the curly braces is not executed until the call() method of the closure is made: println "one" def two = { println "two" } println "three" two.call() println "four" Will print the following: one three two four Let's dig a bit deeper into the structure of the each of the calls shown in the preceding code. I refer to each as a call because that's what it is—a method call. Groovy augments the standard JDK with numerous helper methods. This new and improved JDK is referred to as the Groovy JDK, or GDK for short. In the GDK, Groovy adds the each method to the java.lang.Object class. The signature of the each method is as follows: Object each(Closure closure) The java.lang.Object class has a number of similar methods such as each, find, every, any, and so on. Because these methods are defined as part of Object, you can call them on any Groovy or Java object. They make little sense on most objects, but they do something sensible if not very useful: given: "an Integer" def number = 1 when: "we call the each method on it" number.each { println it } then: "just the object itself gets passed into the Closure" "1" == output() These methods all have specific implementations for all of the collection types, including arrays, lists, ranges, and maps. So, what is actually happening when we see the call to flintstones.each is that we are calling the list's implementation of the each method. Because each takes a Closure as its last and only parameter, the following code block is interpreted by Groovy as an anonymous Closure object to be passed to the method. The actual call to the closure passed to each is deferred until the body of the each method itself is called. The closure may be called multiple times—once for every element in the collection. Closures as method parameters We already know that parentheses around method parameters are optional, so the previous call to each can also be considered equivalent to: flintstones.each ({ println "Hello, ${it}") Groovy has a special handling for methods whose last parameter is a closure. When invoking these methods, the closure can be defined anonymously after the method call parenthesis. So, yet another legitimate way to call the preceding line is: flintstones.each() { println "hello, ${it}" } The general convention is not to use parentheses unless there are parameters in addition to the closure: given: def flintstones = ["Fred", "Barney", "Wilma"] when: "we call findIndexOf passing int and a Closure" def result = flintstones.findIndexOf(0) { it == 'Wilma'} then: result == 2 The signature of the GDK findIndexOf method is: int findIndexOf(int, Closure) We can define our own methods that accept closures as parameters. The simplest case is a method that accepts only a single closure as a parameter: def closureMethod(Closure c) { c.call() } when: "we invoke a method that accepts a closure" closureMethod { println "Closure called" } then: "the Closure passed in was executed" "Closure called" == output() Method parameters as DSL This is an extremely useful construct when we want to wrap a closure in some other code. Suppose we have some locking and unlocking that needs to occur around the execution of a closure. 
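To make the problem concrete, here is what every caller would have to write by hand without such a wrapper (this snippet is not from the original text; callToLockingMethod and callToUnLockingMethod are the same hypothetical calls used in the locked method that follows):

// Every caller repeats this ceremony and must remember to release the lock.
callToLockingMethod()
try {
    println "Closure called"
} finally {
    callToUnLockingMethod()
}

Pushing this boilerplate into a single method that accepts a closure removes the repetition and the risk of a forgotten unlock.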
Rather than requiring the writer of the code to call the locking API directly, we can implement the locking within a locked method that accepts the closure:

def locked(Closure c) {
  callToLockingMethod()
  c.call()
  callToUnLockingMethod()
}

The effect of this is that whenever we need to execute a locked segment of code, we simply wrap the segment in a locked closure block, as follows:

locked {
  println "Closure called"
}

In a small way, we are already writing a mini DSL when we use these types of constructs. This call to the locked method looks, to all intents and purposes, like a new language construct, that is, a block of code defining the scope of a locking operation. When writing methods that take other parameters in addition to a closure, we generally leave the Closure argument to last. As already mentioned in the previous section, Groovy has special syntax handling for these methods, and allows the closure to be defined as a block after the parameter list when calling the method:

def closureMethodInteger(Integer i, Closure c) {
  println "Line $i"
  c.call()
}

when: "we invoke a method that accepts an Integer and a Closure"
closureMethodInteger(1) {
  println "Line 2"
}

then: "the Closure passed in was executed with the parameter"
"""Line 1
Line 2""" == output()

Forwarding parameters

Parameters passed to the method may have no impact on the closure itself, or they may be passed to the closure as a parameter. Methods can accept multiple parameters in addition to the closure. Some may be passed to the closure, while others may not:

def closureMethodString(String s, Closure c) {
  println "Greet someone"
  c.call(s)
}

when: "we invoke a method that accepts a String and a Closure"
closureMethodString("Dolly") { name ->
  println "Hello, $name"
}

then: "the Closure passed in was executed with the parameter"
"""Greet someone
Hello, Dolly""" == output()

This construct can be used in circumstances where we have look-up code that needs to be executed before we have access to an object. Say we have customer records that need to be retrieved from a database before we can use them:

def withCustomer(id, Closure c) {
  def cust = getCustomerRecord(id)
  c.call(cust)
}

withCustomer(12345) { customer ->
  println "Found customer ${customer.name}"
}

We can write an update method that saves the customer record after the closure is invoked, and amend our locked method to implement transaction isolation on the database. The simplified versions below just print what they would do:

class Customer {
  String name
}

def locked(Closure c) {
  println "Transaction lock"
  c.call()
  println "Transaction release"
}

def update(customer, Closure c) {
  println "Customer name was ${customer.name}"
  c.call(customer)
  println "Customer name is now ${customer.name}"
}

def customer = new Customer(name: "Fred")

At this point, we can write code that nests the two method calls by calling update as follows:

locked {
  update(customer) { cust ->
    cust.name = "Barney"
  }
}

This outputs the following result, showing how the change made in the closure is wrapped by the update method, which reports the customer name before and after the call. The whole operation is wrapped by locked, which brackets everything within a transaction:

Transaction lock
Customer name was Fred
Customer name is now Barney
Transaction release

Summary

In this article, we covered closures in some depth. We explored the various ways to call a closure and the means of passing parameters. We saw how we can pass closures as parameters to methods, and how this construct can allow us to appear to add mini DSL syntax to our code.
Closures are the real "power" feature of Groovy, and they form the basis of most of the DSLs. Resources for Article: Further resources on this subject: Using Groovy Closures Instead of Template Method [article] Metaprogramming and the Groovy MOP [article] Clojure for Domain-specific Languages - Design Concepts with Clojure [article]

Identifying the Best Places

Packt
16 Sep 2015
9 min read
In this article by Ben Mearns, author of the book QGIS Blueprints, we will take a look at how raster data can be analyzed, enhanced, and used for map production. Specifically, you will learn to produce a grid of suitable locations based on the criteria values in other grids using raster analysis and map algebra. Then, using the grid, we will produce a simple click-based map. The end result will be a site suitability web application with click-based discovery capabilities. We'll be looking at the suitability for farmland preservation selection. In this article, we will cover the following topics:

Vector data ETL for raster analysis
Batch processing
Leaflet map application publication with QGIS2Leaf

(For more resources related to this topic, see here.)

Vector data Extract, Transform, and Load

Our suitability analysis uses map algebra and criteria grids to give us a single value for the suitability for some activity in every place. This requires that the data be expressed in the raster (grid) format. So, let's perform the other necessary ETL steps and then convert our vector data to raster. We will perform the following actions:

Ensure that our data has identical spatial reference systems. For example, we may be using a layer of the roads maintained by the state department of transportation and a layer of land use maintained by the department of natural resources. These layers must have identical spatial reference systems or be transformed to have identical systems.
Extract geographic objects according to their classes as defined in some attribute table field if we want to operate on them while they're still in the vector form.
If no further analysis is necessary, convert to raster.

Loading data and establishing CRS conformity

It is important for the layers in this project to be transformed or projected into the same geographic or projected coordinate system. This is necessary for an accurate analysis and for publication to web formats. Perform the following steps for this:

Disable 'on the fly' projection if it is turned on. Otherwise, 'on the fly' will automatically project your data again to display it with the layers that are already in the Canvas. Navigate to Settings | Options and apply the settings shown in the following screenshot:
Add the project layers: Navigate to Layer | Add Layer | Vector Layer. Add the following layers from within c2/data: Applicants, County, Easements, Landuse, and Roads. You can select multiple layers to add by pressing Shift and clicking on the contiguous files or pressing Ctrl and clicking on the noncontiguous files.
Import the Digital Elevation Model from c2/data/dem/dem.tif. Navigate to Layer | Add Layer | Raster Layer. From the dem directory, select dem.tif and then click on Open.

Even though the layers are in a different CRS, QGIS does not warn us in this case. You must discover the issue by checking each layer individually. Check the CRS of the county layer and one other layer:

Highlight the county layer in the Layers panel.
Navigate to Layer | Properties. The CRS is displayed under the General tab in the Coordinate reference system section.

Note that the county layer is in EPSG:26957, while the others are in EPSG:2776. We will transform the county layer from EPSG:26957 to EPSG:2776. Navigate to Layer | Save As | Select CRS. We will save all the output from this article in c2/output.

To prepare the layers for conversion to raster, we will add a new generic column to all the layers populated with the number 1.
This will be translated to a Boolean type raster, where the presence of the object that the raster represents (for example, roads) is indicated by a cell of 1 and all others with a zero. Follow these steps for the applicants, easements, and roads: Navigate to Layer | Toggle Editing. Then, navigate to Layer | Open Attribute Table. Add a column with the button at the top of the Attribute table dialog. Use value as the name for the new column and the following data format options: Select the new column from the dropdown in the Attribute table and enter 1 into the value box: Click on Update All. Navigate to Layer | Toggle Editing. Finally, save. The extracting (filtering) features Let's suppose that our criteria includes only a subset of the features in our roads layer—major unlimited access roads (but not freeways), a subset of the features as determined by a classification code (CFCC). To temporarily extract this subset, we will do a layer query by performing the following steps: Filter the major roads from the roads layer. Highlight the roads layer. Navigate to Layer | Query. Double-click on CFCC to add it to the expression. Click on the = operator to add to the expression Under the Values section, click on All to view all the unique values in the CFCC field. Double-click on A21 to add this to the expression. Do this for all the codes less than A36. Include A63 for highway on-ramps. You selection code will look similar to this: "CFCC" = 'A21' OR "CFCC" = 'A25' OR "CFCC" = 'A31' OR "CFCC" = 'A35' OR "CFCC" = 'A63' Click on OK, as shown in the following screenshot: Create a new c2/output directory. Save the roads layer as a new layer with only the selected features (major_roads) in this directory. To clear a layer filter, return to the query dialog on the applied layer (highlight it in the Layers pane; navigate to Layer | Query and click on Clear). Repeat these steps for the developed (LULC1 = 1) and agriculture (LULC1 = 2) landuses (separately) from the landuse layer. Converting to raster In this section, we will convert all the needed vector layers to raster. We will be doing this in batch, which will allow us to repeat the same operation many times over multiple layers. Doing more at once—working in batch The QGIS Processing Framework provides capabilities to run the same operation many times on different data. This is called batch processing. A batch process is invoked from an operation's context menu in the Processing Toolbox. The batch dialog requires that the parameters for each layer be populated for every iteration. Convert the vector layers to raster. Navigate to Processing Toolbox. Select Advanced Interface from the dropdown at the bottom of Processing Toolbox (if it is not selected, it will show as Simple Interface). Type rasterize to search for the Rasterize tool. Right-click on the Rasterize tool and select Execute as batch process: Fill in the Batch Processing dialog, making sure to specify the parameters as follows: Parameter Value Input layer (For example, roads) Attribute field value Output raster size Output resolution in map units per pixel Horizontal 30 Vertical 30 Raster type Int16 Output layer (For example, roads) The following images show how this will look in QGIS: Scroll to the right to complete the entry of parameter values.   Organize the new layers (optional step).    Batch sometimes gives unfriendly names based on some bug in the dialog box.    Change the layer names by doing the following for each layer created by batch:    Highlight the layer.    
Navigate to Layer | Properties.    Change the layer name to the name of the vector layer from which this was created (for example, applicants). You should be able to find a hint for this value in the layer properties in the layer source (name of the .tif file).    Group the layers.    Press Shift + click on all the layers created by batch and the previous roads raster.    Navigate to Right click | Group selected. Publishing the results as a web application Now that we have completed our modeling for the site selection of a farmland for conservation, let's take steps to publish this for the Web. QGIS2leaf QGIS2leaf allows us to export our QGIS map to web map formats (JavaScript, HTML, and CSS) using the Leaflet map API. Leaflet is a very lightweight, extensible, and responsive (and trendy) web mapping interface. QGIS2Leaf converts all our vector layers to GeoJSON, which is the most common textual way to express the geographic JavaScript objects. As our operational layer is in GeoJSON, Leaflet's click interaction is supported, and we can access the information in the layers by clicking. It is a fully editable HTML and JavaScript file. You can customize and upload it to an accessible web location. QGIS2leaf is very simple to use as long as the layers are prepared properly (for example, with respect to CRS) up to this point. It is also very powerful in creating a good starting application including GeoJSON, HTML, and JavaScript for our Leaflet web map. Make sure to install the QGIS2Leaf plugin if you haven't already. Navigate to Web | QGIS2leaf | Exports a QGIS Project to a working Leaflet webmap. Click on the Get Layers button to add the currently displayed layers to the set that QGIS2leaf will export. Choose a basemap and enter the additional details if so desired. Select Encode to JSON. These steps will produce a map application similar to the following one. We'll take a look at how to restore the labels: Summary In this article, using the site selection example, we covered basic vector data ETL, raster analysis, and web map creation. We started with vector data, and after unifying CRS, we prepared the attribute tables. We then filtered and converted it to raster grids using batch processing. Finally, we published the prepared vector output with QGIS2Leaf as a simple Leaflet web map application with a strong foundation for extension. Resources for Article:   Further resources on this subject: Style Management in QGIS [article] Preparing to Build Your Own GIS Application [article] Geocoding Address-based Data [article]

Configuring and Securing a Virtual Private Cloud

Packt
16 Sep 2015
7 min read
In this article by Aurobindo Sarkar and Sekhar Reddy, authors of the book Amazon EC2 Cookbook, we will cover recipes for:

Configuring VPC DHCP options
Configuring networking connections between two VPCs (VPC peering)

(For more resources related to this topic, see here.)

In this article, we will focus on recipes to configure AWS VPC (Virtual Private Cloud) against typical network infrastructure requirements. VPCs help you isolate AWS EC2 resources, and this feature is available in all AWS regions. A VPC can span multiple availability zones in a region. AWS VPC also helps you run hybrid applications on AWS by extending your existing data center into the public cloud. Disaster recovery is another common use case for using AWS VPC.

You can create subnets, routing tables, and internet gateways in a VPC. By creating public and private subnets, you can put your web and frontend services in a public subnet, and your application databases and backend services in a private subnet. Using a VPN, you can extend your on-premise data center. Another option to extend your on-premise data center is AWS Direct Connect, which is a private network connection between AWS and your on-premise data center.

In a VPC, EC2 resources get static private IP addresses that persist across reboots, which works in the same way as a DHCP reservation. You can also assign multiple IP addresses and Elastic Network Interfaces. You can have a private ELB accessible only within your VPC. You can use CloudFormation to automate the VPC creation process. Defining appropriate tags can help you manage your VPC resources more efficiently.

Configuring VPC DHCP options

DHCP options sets are associated with your AWS account, so they can be used across all your VPCs. You can assign your own domain name to your instances by specifying a set of DHCP options for your VPC. However, only one DHCP option set can be associated with a VPC. Also, you can't modify the DHCP option set after it is created. In case you want to use a different set of DHCP options, you will need to create a new DHCP option set and associate it with your VPC. There is no need to restart or relaunch the instances in the VPC after associating the new DHCP option set, as they can automatically pick up the changes.

How to Do It…

In this section, we will create a DHCP option set and then associate it with your VPC.

Create a DHCP option set with a specific domain name and domain name servers. In our example, we execute commands to create a DHCP options set and associate it with our VPC. We specify the domain name testdomain.com and the DNS servers (10.2.5.1 and 10.2.5.2) as our DHCP options.

$ aws ec2 create-dhcp-options --dhcp-configuration Key=domain-name,Values=testdomain.com Key=domain-name-servers,Values=10.2.5.1,10.2.5.2

Associate the DHCP option set with your VPC (vpc-bb936ede).

$ aws ec2 associate-dhcp-options --dhcp-options-id dopt-dc7d65be --vpc-id vpc-bb936ede
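As a quick check (not part of the original recipe, but using the same IDs from the commands above), you can confirm the options set that was created and verify which set the VPC is now using with the corresponding describe commands:

$ aws ec2 describe-dhcp-options --dhcp-options-ids dopt-dc7d65be

$ aws ec2 describe-vpcs --vpc-ids vpc-bb936ede --query 'Vpcs[0].DhcpOptionsId'

The second command should return dopt-dc7d65be once the association has been applied.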
How it works…

DHCP provides a standard for passing configuration information to hosts in a network. The DHCP message contains an options field in which parameters such as the domain name and the domain name servers can be specified. By default, instances in AWS are assigned an unresolvable host name, hence we need to assign our own domain name and use our own DNS servers. The DHCP options sets are associated with the AWS account and can be used across our VPCs.

First, we create a DHCP option set. In this step, we specify the DHCP configuration parameters as key value pairs, where commas separate the values and multiple pairs are separated by spaces. In our example, we specify two domain name servers and a domain name. We can use up to four DNS servers. Next, we associate the DHCP option set with our VPC to ensure that all existing and new instances launched in our VPC will use this DHCP options set.

Note that if you want to use a different set of DHCP options, then you will need to create a new set and again associate it with your VPC, as modifications to a set of DHCP options are not allowed. In addition, you can let the instances pick up the changes automatically or explicitly renew the DHCP lease. However, in all cases, only one set of DHCP options can be associated with a VPC at any given time. As a best practice, delete the DHCP options set when none of your VPCs are using it and you don't need it any longer.

Configuring networking connections between two VPCs (VPC peering)

In this recipe, we will configure VPC peering. VPC peering helps you connect instances in two different VPCs using their private IP addresses. VPC peering is limited to within a region. However, you can create a VPC peering connection between VPCs that belong to different AWS accounts. The two VPCs that participate in VPC peering must not have matching or overlapping CIDR addresses. To create a VPC peering connection, the owner of the local VPC has to send the request to the owner of the peer VPC located in the same account or a different account. Once the owner of the peer VPC accepts the request, the VPC peering connection is activated. You will need to update the routes in your route table to send traffic to the peer VPC and vice versa. You will also need to update your instance security groups to allow traffic from and to the peer VPC.

How to Do It…

Here, we present the commands for creating a VPC peering connection, accepting a peering request, and adding the appropriate route in your routing table.

Create a VPC peering connection between two VPCs with IDs vpc-9c19a3f4 and vpc-0214e967. Record the VpcPeeringConnectionId for further use.

$ aws ec2 create-vpc-peering-connection --vpc-id vpc-9c19a3f4 --peer-vpc-id vpc-0214e967

Accept the VPC peering connection. Here, we will accept the VPC peering connection request with ID pcx-cf6aa4a6.

$ aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-cf6aa4a6

Add a route in the route table for the VPC peering connection. The following command creates a route with the destination CIDR (172.31.16.0/20) and the VPC peering connection ID (pcx-0e6ba567) in the route table rtb-7f1bda1a.

$ aws ec2 create-route --route-table-id rtb-7f1bda1a --destination-cidr-block 172.31.16.0/20 --vpc-peering-connection-id pcx-0e6ba567

How it works…

First, we request a VPC peering connection between two VPCs: a requester VPC that we own (that is, vpc-9c19a3f4) and a peer VPC with which we want to create a connection (vpc-0214e967). Note that the peering connection request expires after 7 days. In order to activate the VPC peering connection, the owner of the peer VPC must accept the request. In our recipe, as the owner of the peer VPC, we accept the VPC peering connection request. However, note that the owner of the peer VPC may be a person other than you. You can use the describe-vpc-peering-connections command to view your outstanding peering connection requests. The VPC peering connection should be in the pending-acceptance state for you to accept the request.
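For example, listing only the requests that are still waiting to be accepted could look like this (a sketch, not part of the original recipe):

$ aws ec2 describe-vpc-peering-connections --filters Name=status-code,Values=pending-acceptance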
After creating the VPC peering connection, we created a route in our local VPC subnet's route table to direct traffic to the peer VPC. You can also create peering connections between two or more VPCs to provide full access to resources or peer one VPC to access centralized resources. In addition, peering can be implemented between a VPC and specific subnets or instances in one VPC with instances in another VPC. Refer to Amazon VPC documentation to set up the most appropriate peering connections for your specific requirements. Summary In this article, you learned configuring VPC DHCP options as well as configuring networking connections between two VPCs. The book Amazon EC2 Cookbook will cover recipes that relate to designing, developing, and deploying scalable, highly available, and secure applications on the AWS platform. By following the steps in our recipes, you will be able to effectively and systematically resolve issues related to development, deployment, and infrastructure for enterprise-grade cloud applications or products. Resources for Article: Further resources on this subject: Hands-on Tutorial for Getting Started with Amazon SimpleDB [article] Amazon SimpleDB versus RDBMS [article] Amazon DynamoDB - Modelling relationships, Error handling [article]

Java Hibernate Collections, Associations, and Advanced Concepts

Packt
15 Sep 2015
16 min read
In this article, Yogesh Prajapati and Vishal Ranapariya, the authors of the book Java Hibernate Cookbook, provide a complete guide to the following recipes:

Working with a first-level cache
One-to-one mapping using a common join table
Persisting Map

(For more resources related to this topic, see here.)

Working with a first-level cache

Once we execute a particular query using hibernate, it always hits the database. As this process may be very expensive, hibernate provides the facility to cache objects within a certain boundary. The basic actions performed in each database transaction are as follows:

The request reaches the database server via the network.
The database server processes the query in the query plan.
Now the database server executes the processed query.
Again, the database server returns the result to the querying application through the network.
At last, the application processes the results.

This process is repeated every time we request a database operation, even if it is for a simple or small query. It is always a costly transaction to hit the database for the same records multiple times. Sometimes, we also face some delay in receiving the results because of network routing issues. There may be some other parameters that affect and contribute to the delay, but network routing issues play a major role in this cycle.

To overcome this issue, the database uses a mechanism that stores the result of a query, which is executed repeatedly, and uses this result again when the data is requested using the same query. These operations are done on the database side. Hibernate provides an in-built caching mechanism known as the first-level cache (L1 cache). Following are some properties of the first-level cache:

It is enabled by default. We cannot disable it even if we want to.
The scope of the first-level cache is limited to a particular Session object only; the other Session objects cannot access it.
All cached objects are destroyed once the session is closed.
If we request an object, hibernate returns the object from the cache only if the requested object is found in the cache; otherwise, a database call is initiated.
We can use Session.evict(Object object) to remove single objects from the session cache.
The Session.clear() method is used to clear all the cached objects from the session.

Getting ready

Let's take a look at how the L1 cache works.

Creating the classes

For this recipe, we will create an Employee class and also insert some records into the table:

Source file: Employee.java

@Entity
@Table
public class Employee {

  @Id
  @GeneratedValue
  private long id;

  @Column(name = "name")
  private String name;

  // getters and setters

  @Override
  public String toString() {
    return "Employee: " + "\n\t Id: " + this.id + "\n\t Name: " + this.name;
  }
}

Creating the tables

Use the following table script if the hibernate.hbm2ddl.auto configuration property is not set to create:

Use the following script to create the employee table:

CREATE TABLE `employee` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `name` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`)
);

We will assume that two records are already inserted, as shown in the following employee table:

id    name
1     Yogesh
2     Aarush

Now, let's take a look at some scenarios that show how the first-level cache works.

How to do it…

Here is the code to see how caching works.
In the code, we will load employee#1 and employee#2 once; after that, we will try to load the same employees again and see what happens: Code System.out.println("nLoading employee#1..."); /* Line 2 */ Employee employee1 = (Employee) session.load(Employee.class, new Long(1)); System.out.println(employee1.toString()); System.out.println("nLoading employee#2..."); /* Line 6 */ Employee employee2 = (Employee) session.load(Employee.class, new Long(2)); System.out.println(employee2.toString()); System.out.println("nLoading employee#1 again..."); /* Line 10 */ Employee employee1_dummy = (Employee) session.load(Employee.class, new Long(1)); System.out.println(employee1_dummy.toString()); System.out.println("nLoading employee#2 again..."); /* Line 15 */ Employee employee2_dummy = (Employee) session.load(Employee.class, new Long(2)); System.out.println(employee2_dummy.toString()); Output Loading employee#1... Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=? Employee: Id: 1 Name: Yogesh Loading employee#2... Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=? Employee: Id: 2 Name: Aarush Loading employee#1 again... Employee: Id: 1 Name: Yogesh Loading employee#2 again... Employee: Id: 2 Name: Aarush How it works… Here, we loaded Employee#1 and Employee#2 as shown in Line 2 and 6 respectively and also the print output for both. It's clear from the output that hibernate will hit the database to load Employee#1 and Employee#2 because at startup, no object is cached in hibernate. Now, in Line 10, we tried to load Employee#1 again. At this time, hibernate did not hit the database but simply use the cached object because Employee#1 is already loaded and this object is still in the session. The same thing happened with Employee#2. Hibernate stores an object in the cache only if one of the following operations is completed: Save Update Get Load List There's more… In the previous section, we took a look at how caching works. Now, we will discuss some other methods used to remove a cached object from the session. There are two more methods that are used to remove a cached object: evict(Object object): This method removes a particular object from the session clear(): This method removes all the objects from the session evict (Object object) This method is used to remove a particular object from the session. It is very useful. The object is no longer available in the session once this method is invoked and the request for the object hits the database: Code System.out.println("nLoading employee#1..."); /* Line 2 */ Employee employee1 = (Employee) session.load(Employee.class, new Long(1)); System.out.println(employee1.toString()); /* Line 5 */ session.evict(employee1); System.out.println("nEmployee#1 removed using evict(…)..."); System.out.println("nLoading employee#1 again..."); /* Line 9*/ Employee employee1_dummy = (Employee) session.load(Employee.class, new Long(1)); System.out.println(employee1_dummy.toString()); Output Loading employee#1... Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=? Employee: Id: 1 Name: Yogesh Employee#1 removed using evict(…)... Loading employee#1 again... Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=? Employee: Id: 1 Name: Yogesh Here, we loaded an Employee#1, as shown in Line 2. 
This object was then cached in the session, but we explicitly removed it from the session cache in Line 5. So, the loading of Employee#1 will again hit the database. clear() This method is used to remove all the cached objects from the session cache. They will no longer be available in the session once this method is invoked and the request for the objects hits the database: Code System.out.println("nLoading employee#1..."); /* Line 2 */ Employee employee1 = (Employee) session.load(Employee.class, new Long(1)); System.out.println(employee1.toString()); System.out.println("nLoading employee#2..."); /* Line 6 */ Employee employee2 = (Employee) session.load(Employee.class, new Long(2)); System.out.println(employee2.toString()); /* Line 9 */ session.clear(); System.out.println("nAll objects removed from session cache using clear()..."); System.out.println("nLoading employee#1 again..."); /* Line 13 */ Employee employee1_dummy = (Employee) session.load(Employee.class, new Long(1)); System.out.println(employee1_dummy.toString()); System.out.println("nLoading employee#2 again..."); /* Line 17 */ Employee employee2_dummy = (Employee) session.load(Employee.class, new Long(2)); System.out.println(employee2_dummy.toString()); Output Loading employee#1... Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=? Employee: Id: 1 Name: Yogesh Loading employee#2... Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=? Employee: Id: 2 Name: Aarush All objects removed from session cache using clear()... Loading employee#1 again... Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=? Employee: Id: 1 Name: Yogesh Loading employee#2 again... Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=? Employee: Id: 2 Name: Aarush Here, Line 2 and 6 show how to load Employee#1 and Employee#2 respectively. Now, we removed all the objects from the session cache using the clear() method. As a result, the loading of both Employee#1 and Employee#2 will again result in a database hit, as shown in Line 13 and 17. One-to-one mapping using a common join table In this method, we will use a third table that contains the relationship between the employee and detail tables. In other words, the third table will hold a primary key value of both tables to represent a relationship between them. Getting ready Use the following script to create the tables and classes. 
Here, we use Employee and EmployeeDetail to show a one-to-one mapping using a common join table: Creating the tables Use the following script to create the tables if you are not using hbm2dll=create|update: Use the following script to create the detail table: CREATE TABLE `detail` ( `detail_id` bigint(20) NOT NULL AUTO_INCREMENT, `city` varchar(255) DEFAULT NULL, PRIMARY KEY (`detail_id`) ); Use the following script to create the employee table: CREATE TABLE `employee` ( `employee_id` BIGINT(20) NOT NULL AUTO_INCREMENT, `name` VARCHAR(255) DEFAULT NULL, PRIMARY KEY (`employee_id`) ); Use the following script to create the employee_detail table: CREATE TABLE `employee_detail` ( `detail_id` BIGINT(20) DEFAULT NULL, `employee_id` BIGINT(20) NOT NULL, PRIMARY KEY (`employee_id`), KEY `FK_DETAIL_ID` (`detail_id`), KEY `FK_EMPLOYEE_ID` (`employee_id`), CONSTRAINT `FK_EMPLOYEE_ID` FOREIGN KEY (`employee_id`) REFERENCES `employee` (`employee_id`), CONSTRAINT `FK_DETAIL_ID` FOREIGN KEY (`detail_id`) REFERENCES `detail` (`detail_id`) ); Creating the classes Use the following code to create the classes: Source file: Employee.java @Entity @Table(name = "employee") public class Employee { @Id @GeneratedValue @Column(name = "employee_id") private long id; @Column(name = "name") private String name; @OneToOne(cascade = CascadeType.ALL) @JoinTable( name="employee_detail" , joinColumns=@JoinColumn(name="employee_id") , inverseJoinColumns=@JoinColumn(name="detail_id") ) private Detail employeeDetail; public long getId() { return id; } public void setId(long id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } public Detail getEmployeeDetail() { return employeeDetail; } public void setEmployeeDetail(Detail employeeDetail) { this.employeeDetail = employeeDetail; } @Override public String toString() { return "Employee" +"n Id: " + this.id +"n Name: " + this.name +"n Employee Detail " + "nt Id: " + this.employeeDetail.getId() + "nt City: " + this.employeeDetail.getCity(); } } Source file: Detail.java @Entity @Table(name = "detail") public class Detail { @Id @GeneratedValue @Column(name = "detail_id") private long id; @Column(name = "city") private String city; @OneToOne(cascade = CascadeType.ALL) @JoinTable( name="employee_detail" , joinColumns=@JoinColumn(name="detail_id") , inverseJoinColumns=@JoinColumn(name="employee_id") ) private Employee employee; public Employee getEmployee() { return employee; } public void setEmployee(Employee employee) { this.employee = employee; } public String getCity() { return city; } public void setCity(String city) { this.city = city; } public long getId() { return id; } public void setId(long id) { this.id = id; } @Override public String toString() { return "Employee Detail" +"n Id: " + this.id +"n City: " + this.city +"n Employee " + "nt Id: " + this.employee.getId() + "nt Name: " + this.employee.getName(); } } How to do it… In this section, we will take a look at how to insert a record step by step. Inserting a record Using the following code, we will insert an Employee record with a Detail object: Code Detail detail = new Detail(); detail.setCity("AHM"); Employee employee = new Employee(); employee.setName("vishal"); employee.setEmployeeDetail(detail); Transaction transaction = session.getTransaction(); transaction.begin(); session.save(employee); transaction.commit(); Output Hibernate: insert into detail (city) values (?) Hibernate: insert into employee (name) values (?) 
Hibernate: insert into employee_detail (detail_id, employee_id) values (?,?) Hibernate saves one record in the detail table and one in the employee table and then inserts a record in to the third table, employee_detail, using the primary key column value of the detail and employee tables. How it works… From the output, it's clear how this method works. The code is the same as in the other methods of configuring a one-to-one relationship, but here, hibernate reacts differently. Here, the first two statements of output insert the records in to the detail and employee tables respectively, and the third statement inserts the mapping record in to the third table, employee_detail, using the primary key column value of both the tables. Let's take a look at an option used in the previous code in detail: @JoinTable: This annotation, written on the Employee class, contains the name="employee_detail" attribute and shows that a new intermediate table is created with the name "employee_detail" joinColumns=@JoinColumn(name="employee_id"): This shows that a reference column is created in employee_detail with the name "employee_id", which is the primary key of the employee table inverseJoinColumns=@JoinColumn(name="detail_id"): This shows that a reference column is created in the employee_detail table with the name "detail_id", which is the primary key of the detail table Ultimately, the third table, employee_detail, is created with two columns: one is "employee_id" and the other is "detail_id". Persisting Map Map is used when we want to persist a collection of key/value pairs where the key is always unique. Some common implementations of java.util.Map are java.util.HashMap, java.util.LinkedHashMap, and so on. For this recipe, we will use java.util.HashMap. Getting ready Now, let's assume that we have a scenario where we are going to implement Map<String, String>; here, the String key is the e-mail address label, and the value String is the e-mail address. For example, we will try to construct a data structure similar to <"Personal e-mail", "emailaddress2@provider2.com">, <"Business e-mail", "emailaddress1@provider1.com">. This means that we will create an alias of the actual e-mail address so that we can easily get the e-mail address using the alias and can document it in a more readable form. This type of implementation depends on the custom requirement; here, we can easily get a business e-mail using the Business email key. Use the following code to create the required tables and classes. Creating tables Use the following script to create the tables if you are not using hbm2dll=create|update. 
This script is for the tables that are generated by hibernate: Use the following code to create the email table: CREATE TABLE `email` ( `Employee_id` BIGINT(20) NOT NULL, `emails` VARCHAR(255) DEFAULT NULL, `emails_KEY` VARCHAR(255) NOT NULL DEFAULT '', PRIMARY KEY (`Employee_id`,`emails_KEY`), KEY `FK5C24B9C38F47B40` (`Employee_id`), CONSTRAINT `FK5C24B9C38F47B40` FOREIGN KEY (`Employee_id`) REFERENCES `employee` (`id`) ); Use the following code to create the employee table: CREATE TABLE `employee` ( `id` BIGINT(20) NOT NULL AUTO_INCREMENT, `name` VARCHAR(255) DEFAULT NULL, PRIMARY KEY (`id`) ); Creating a class Source file: Employee.java @Entity @Table(name = "employee") public class Employee { @Id @GeneratedValue @Column(name = "id") private long id; @Column(name = "name") private String name; @ElementCollection @CollectionTable(name = "email") private Map<String, String> emails; public long getId() { return id; } public void setId(long id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } public Map<String, String> getEmails() { return emails; } public void setEmails(Map<String, String> emails) { this.emails = emails; } @Override public String toString() { return "Employee" + "ntId: " + this.id + "ntName: " + this.name + "ntEmails: " + this.emails; } } How to do it… Here, we will consider how to work with Map and its manipulation operations, such as inserting, retrieving, deleting, and updating. Inserting a record Here, we will create one employee record with two e-mail addresses: Code Employee employee = new Employee(); employee.setName("yogesh"); Map<String, String> emails = new HashMap<String, String>(); emails.put("Business email", "emailaddress1@provider1.com"); emails.put("Personal email", "emailaddress2@provider2.com"); employee.setEmails(emails); session.getTransaction().begin(); session.save(employee); session.getTransaction().commit(); Output Hibernate: insert into employee (name) values (?) Hibernate: insert into email (Employee_id, emails_KEY, emails) values (?,?,?) Hibernate: insert into email (Employee_id, emails_KEY, emails) values (?,?,?) When the code is executed, it inserts one record into the employee table and two records into the email table and also sets a primary key value for the employee record in each record of the email table as a reference. Retrieving a record Here, we know that our record is inserted with id 1. So, we will try to get only that record and understand how Map works in our case. Code Employee employee = (Employee) session.get(Employee.class, 1l); System.out.println(employee.toString()); System.out.println("Business email: " + employee.getEmails().get("Business email")); Output Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from employee employee0_ where employee0_.id=? Hibernate: select emails0_.Employee_id as Employee1_0_0_, emails0_.emails as emails0_, emails0_.emails_KEY as emails3_0_ from email emails0_ where emails0_.Employee_id=? Employee Id: 1 Name: yogesh Emails: {Personal email=emailaddress2@provider2.com, Business email=emailaddress1@provider1.com} Business email: emailaddress1@provider1.com Here, we can easily get a business e-mail address using the Business email key from the map of e-mail addresses. This is just a simple scenario created to demonstrate how to persist Map in hibernate. 
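To round off the retrieval example, here is a small sketch (not part of the original recipe, but using the same Employee mapping and open session) that loops over every stored alias instead of looking up a single key:

Employee employee = (Employee) session.get(Employee.class, 1l);

for (Map.Entry<String, String> entry : employee.getEmails().entrySet()) {
  // Prints, for example: Business email -> emailaddress1@provider1.com
  System.out.println(entry.getKey() + " -> " + entry.getValue());
}

Because the emails map is an @ElementCollection, accessing getEmails() while the session is still open is enough for hibernate to fetch the key/value rows from the email table.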
Updating a record Here, we will try to add one more e-mail address to Employee#1: Code Employee employee = (Employee) session.get(Employee.class, 1l); Map<String, String> emails = employee.getEmails(); emails.put("Personal email 1", "emailaddress3@provider3.com"); session.getTransaction().begin(); session.saveOrUpdate(employee); session.getTransaction().commit(); System.out.println(employee.toString()); Output Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from employee employee0_ where employee0_.id=? Hibernate: select emails0_.Employee_id as Employee1_0_0_, emails0_.emails as emails0_, emails0_.emails_KEY as emails3_0_ from email emails0_ where emails0_.Employee_id=? Hibernate: insert into email (Employee_id, emails_KEY, emails) values (?, ?, ?) Employee Id: 2 Name: yogesh Emails: {Personal email 1= emailaddress3@provider3.com, Personal email=emailaddress2@provider2.com, Business email=emailaddress1@provider1.com} Here, we added a new e-mail address with the Personal email 1 key and the value is emailaddress3@provider3.com. Deleting a record Here again, we will try to delete the records of Employee#1 using the following code: Code Employee employee = new Employee(); employee.setId(1); session.getTransaction().begin(); session.delete(employee); session.getTransaction().commit(); Output Hibernate: delete from email where Employee_id=? Hibernate: delete from employee where id=? While deleting the object, hibernate will delete the child records (here, e-mail addresses) as well. How it works… Here again, we need to understand the table structures created by hibernate: Hibernate creates a composite primary key in the email table using two fields: employee_id and emails_KEY. Summary In this article you familiarized yourself with recipes such as working with a first-level cache, one-to-one mapping using a common join table, and persisting map. Resources for Article: Further resources on this subject: PostgreSQL in Action[article] OpenShift for Java Developers[article] Oracle 12c SQL and PL/SQL New Features [article]

Slideshow Presentations

Packt
15 Sep 2015
24 min read
 In this article by David Mitchell, author of the book Dart By Example you will be introduced to the basics of how to build a presentation application using Dart. It usually takes me more than three weeks to prepare a good impromptu speech. Mark Twain Presentations make some people shudder with fear, yet they are an undeniably useful tool for information sharing when used properly. The content has to be great and some visual flourish can make it stand out from the crowd. Too many slides can make the most receptive audience yawn, so focusing the presenter on the content and automatically taking care of the visuals (saving the creator from fiddling with different animations and fonts sizes!) can help improve presentations. Compelling content still requires the human touch. (For more resources related to this topic, see here.) Building a presentation application Web browsers are already a type of multimedia presentation application so it is feasible to write a quality presentation program as we explore more of the Dart language. Hopefully it will help us pitch another Dart application to our next customer. Building on our first application, we will use a text based editor for creating the presentation content. I was very surprised how much faster a text based editor is for producing a presentation, and more enjoyable. I hope you experience such a productivity boost! Laying out the application The application will have two modes, editing and presentation. In the editing mode, the screen will be split into two panes. The top pane will display the slides and the lower will contain the editor, and other interface elements. This article will focus on the core creation side of the presentation. The application will be a single Dart project. Defining the presentation format The presentations will be written in a tiny subset of the Markdown format which is a powerful yet simple to read text file based format (much easier to read, type and understand than HTML). In 2004, John Gruber and the late Aaron Swartz created the Markdown language in 2004 with the goal of enabling people to write using an easy-to-read, easy-to-write plain text format. It is used on major websites, such as GitHub.com and StackOverflow.com. Being plain text, Markdown files can be kept and compared in version control. For more detail and background on Markdown see https://en.wikipedia.org/wiki/Markdown A simple titled slide with bullet points would be defined as: #Dart Language +Created By Google +Modern language with a familiar syntax +Structured Web Applications +It is Awesomely productive! I am positive you only had to read that once! This will translate into the following HTML. <h1>Dart Language</h1> <li>Created By Google</li>s <li>Modern language with a familiar syntax</li> <li>Structured Web Applications</li> <li>It is Awesomely productive!</li> Markdown is very easy and fast to parse, which probably explains its growing popularity on the web. It can be transformed into many other formats. Parsing the presentation The content of the TextAreaHtml element is split into a list of individual lines, and processed in a similar manner to some of the features in the Text Editor application using forEach to iterate over the list. Any lines that are blank once any whitespace has been removed via the trim method are ignored. #A New Slide Title +The first bullet point +The second bullet point #The Second Slide Title +More bullet points !http://localhost/img/logo.png #Final Slide +Any questions? 
For each line starting with a # symbol, a new Slide object is created. For each line starting with a + symbol, they are added to this slides bullet point list. For each line is discovered using a ! symbol the slide's image is set (a limit of one per slide). This continues until the end of the presentation source is reached. A sample presentation To get a new user going quickly, there will be an example presentation which can be used as a demonstration and testing the various areas of the application. I chose the last topic that came up round the family dinner table—the coconut! #Coconut +Member of Arecaceae family. +A drupe - not a nut. +Part of daily diets. #Tree +Fibrous root system. +Mostly surface level. +A few deep roots for stability. #Yield +75 fruits on fertile land +30 typically +Fibre has traditional uses #Finally !coconut.png #Any Questions? Presenter project structures The project is a standard Dart web application with index.html as the entry point. The application is kicked off by main.dart which is linked to in index.html, and the application functionality is stored in the lib folder. Source File Description sampleshows.dart    The text for the slideshow application.  lifecyclemixin.dart  The class for the mixin.  slideshow.dart  Data structures for storing the presentation.  slideshowapp.dart  The application object. Launching the application The main function has a very short implementation. void main() { new SlideShowApp(); } Note that the new class instance does not need to be stored in a variable and that the object does not disappear after that line is executed. As we will see later, the object will attach itself to events and streams, keeping the object alive for the lifetime that the page is loaded. Building bullet point slides The presentation is build up using two classes—Slide and SlideShow. The Slide object creates the DivElement used to display the content and the SlideShow contains a list of Slide objects. The SlideShow object is updated as the text source is updated. It also keeps track of which slide is currently being displayed in the preview pane. Once the number of Dart files grows in a project, the DartAnalyzer will recommend naming the library. It is good habit to name every .dart file in a regular project with its own library name. The slideshow.dart file has the keyword library and a name next to it. In Dart, every file is a library, whether it is explicitly declared or not. If you are looking at Dart code online you may stumble across projects with imports that look a bit strange. #import("dart:html"); This is the old syntax for Dart's import mechanism. If you see this it is a sign that other aspects of the code may be out of date too. If you are writing an application in a single project, source files can be arranged in a folder structure appropriate for the project, though keeping the relatives paths manageable is advisable. Creating too many folders is probably means it is time to create a package! Accessing private fields In Dart, as discussed when we covered packages, the privacy is at the library level but it is still possible to have private fields in a class even though Dart does not have the keywords public, protected, and private. A simple return of a private field's value can be performed with a one line function. String getFirstName() => _name; To retrieve this value, a function call is required, for example, Person.getFirstName() however it may be preferred to have a property syntax such as Person.firstName. 
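From the caller's side the difference is only cosmetic, but it adds up across a code base. A tiny sketch (assuming a person instance of the Person class referred to above):

// Method style - works, but reads like Java:
print(person.getFirstName());

// Property style - what we would rather write:
print(person.firstName);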
Having private fields and retaining the property syntax in this manner is possible using the get and set keywords.

Using true getters and setters

The syntax of Dart also supports get and set via keywords (here, _score is the private backing field):

int get score => _score + bonus;

set score(int increase) => _score += increase * level;

Using either get/set or simple fields is down to preference. It is perfectly possible to start with simple fields and scale up to getters and setters if more validation or processing is required. The advantage of the get and set keywords in a library is that the intended interface for consumers of the package is very clear. Further, it clarifies which methods may change the state of the object and which merely report current values.

Mixin it up

In object oriented languages, it is useful to build on one class to create a more specialized related class. For example, in the text editor the base dialog class was extended to create alert and confirm pop ups. What if we want to share some functionality but do not want inheritance occurring between the classes? Aggregation can solve this problem to some extent:

class A {
  ClassB usefulObject;
}

The downside is that this requires a longer reference to use:

new A().usefulObject.handyMethod();

This problem has been solved in Dart (and other languages) by a mixin class to do this job, allowing the sharing of functionality without forced inheritance or clunky aggregation. In Dart, a mixin must meet the following requirements:

No constructors in the class declaration.
The base class of the mixin must be Object.
No calls to a super class are made.

Mixins are really just classes that are malleable enough to fit into the class hierarchy at any point. A use case for a mixin may be serialization fields and methods that may be required on several classes in an application that are not part of any inheritance chain.

abstract class Serialisation {
  void save() {
    //Implementation here.
  }
  void load(String filename) {
    //Implementation here.
  }
}

The with keyword is used to declare that a class is using a mixin.

class ImageRecord extends Record with Serialisation

If the class does not have an explicit base class, it is required to specify Object.

class StorageReports extends Object with Serialisation

In Dart, everything is an object; even basic types such as num are objects and not primitive types. The classes int and double are subtypes of num. This is important to know, as other languages have different behaviors. Let's consider a real example of this.

main() {
  int i;
  print("$i");
}

In a language such as Java, the expected output would be 0; however, the output in Dart is null. If a value is expected from a variable, it is always good practice to initialize it!

For the classes Slide and SlideShow, we will use a mixin from the source file lifecyclemixin.dart to record a creation and an editing timestamp.

abstract class LifecycleTracker {
  DateTime _created;
  DateTime _edited;

  recordCreateTimestamp() => _created = new DateTime.now();

  updateEditTimestamp() => _edited = new DateTime.now();

  DateTime get created => _created;
  DateTime get lastEdited => _edited;
}

To use the mixin, the recordCreateTimestamp method can be called from the constructor and the updateEditTimestamp from the main edit method. For slides, it makes sense just to record the creation. For the SlideShow class, both the creation and update will be tracked.

Defining the core classes

The SlideShow class is largely a container object for a list of Slide objects and uses the mixin LifecycleTracker.
class SlideShow extends Object with LifecycleTracker {
  List<Slide> _slides;
  List<Slide> get slides => _slides;
  ...

The Slide class stores the string for the title and a list of strings for the bullet points. The URL for any image is also stored as a string:

class Slide extends Object with LifecycleTracker {
  String titleText = "";
  List<String> bulletPoints;
  String imageUrl = "";
  ...

A simple constructor takes the titleText as a parameter and initializes the bulletPoints list.

If you want to focus on just the code when in WebStorm, double-click on the filename title of the tab to expand the source code to the entire window. Double-click again to return to the original layout. For even more focus on the code, go to the View menu and click on Enter Distraction Free Mode.

Transforming data into HTML

To add a Slide object instance into an HTML document, the strings need to be converted into instances of HTML elements to be added to the DOM (Document Object Model). The getSlideContents() method constructs and returns the entire slide as a single object.

DivElement getSlideContents() {
  DivElement slide = new DivElement();
  DivElement title = new DivElement();
  DivElement bullets = new DivElement();

  title.appendHtml("<h1>$titleText</h1>");
  slide.append(title);

  if (imageUrl.length > 0) {
    slide.appendHtml('<img src="$imageUrl" /><br/>');
  }

  bulletPoints.forEach((bp) {
    if (bp.trim().length > 0) {
      bullets.appendHtml("<li>$bp</li>");
    }
  });

  slide.append(bullets);
  return slide;
}

The Div elements are constructed as objects (instances of DivElement), while the content is added as literal HTML statements. The method appendHtml is used for this particular task as it renders HTML tags in the text. The regular method appendText puts the entire literal text string (including the plain unformatted text of the HTML tags) into the element.

So what exactly is the difference? The method appendHtml evaluates the supplied HTML and adds the resultant object node to the nodes of the parent element, which is rendered in the browser as usual. The method appendText is useful, for example, to prevent user-supplied content from affecting the format of the page and to prevent malicious code from being injected into a web page.

Editing the presentation

When the source is updated, the presentation is updated via the onKeyUp event. This was used in the text editor project to trigger a save to local storage. The update is carried out in the build method of the SlideShow class, and follows the parsing pattern we discussed earlier.

build(String src) {
  updateEditTimestamp();
  _slides = new List<Slide>();
  Slide nextSlide;

  src.split("\n").forEach((String line) {
    if (line.trim().length > 0) {
      // Title - also marks start of the next slide.
      if (line.startsWith("#")) {
        nextSlide = new Slide(line.substring(1));
        _slides.add(nextSlide);
      }

      if (nextSlide != null) {
        if (line.startsWith("+")) {
          nextSlide.bulletPoints.add(line.substring(1));
        } else if (line.startsWith("!")) {
          nextSlide.imageUrl = line.substring(1);
        }
      }
    }
  });
}

As an alternative to the startsWith method, the square bracket [] operator could be used, for example line[0], to retrieve the first character. The startsWith method can also take a regular expression or a string to match, plus a starting index; refer to the dart:core documentation for more information. For the purposes of parsing the presentation, the startsWith method is more readable.

Displaying the current slide

The slide is displayed via the showSlide method in slideshowapp.dart.
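The method follows a hide, rebuild, and show pattern so that the user never sees a half-built slide. Here is a rough sketch of that idea only; the replaceContents function, the panel parameter, and the main function are hypothetical names for illustration, not part of the project:

import 'dart:html';

void replaceContents(DivElement panel, Element newContent) {
  panel.style.visibility = "hidden"; // Hide while the DOM is rebuilt.
  panel
    ..nodes.clear()          // Remove the previous content's nodes.
    ..nodes.add(newContent); // Attach the freshly built content.
  panel.style.visibility = "visible";
}

void main() {
  var panel = new DivElement();
  document.body.append(panel);
  replaceContents(panel, new DivElement()..text = "Slide 1");
}

The real method, shown next, adds the bookkeeping for the range slider as well.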
To preview the current slide, the current index, stored in the field currentSlideIndex, is used to retrieve the desired Slide object, and its Div rendering method is called.

showSlide(int slideNumber) {
  if (currentSlideShow.slides.length == 0) return;

  slideScreen.style.visibility = "hidden";
  slideScreen
    ..nodes.clear()
    ..nodes.add(currentSlideShow.slides[slideNumber].getSlideContents());
  rangeSlidePos.value = slideNumber.toString();
  slideScreen.style.visibility = "visible";
}

The slideScreen is a DivElement, which is updated off screen by setting the visibility style property to hidden. The existing content of the DivElement is emptied out by calling nodes.clear(), and the slide content is added with nodes.add. The range slider position is set and, finally, the DivElement is set to visible again.

Navigating the presentation

A set of buttons with the familiar first, previous, next, and last slide actions allows the user to jump around the preview of the presentation. This is carried out by keeping an index into the list of slides, stored in the currentSlideIndex field of the SlideShowApp class.

Handling the button key presses

The navigation buttons need to be set up in an identical pattern in the constructor of the SlideShowApp object: first, get an object reference using the id attribute of the element, and then attach a handler to the click event. Rather than repeat this code, a simple function can handle the process.

setButton(String id, Function clickHandler) {
  ButtonInputElement btn = querySelector(id);
  btn.onClick.listen(clickHandler);
}

As Function is a type in Dart, functions can be passed around easily as a parameter. Let us take a look at the button that takes us to the first slide.

setButton("#btnFirst", startSlideShow);

void startSlideShow(MouseEvent event) {
  showFirstSlide();
}

void showFirstSlide() {
  showSlide(0);
}

The event handlers do not directly change the slide; the changes are carried out by other methods, which may be triggered by other inputs such as the keyboard.

Using the function type

The SlideShowApp constructor makes use of this feature.

Function qs = querySelector;
var controls = qs("#controls");

I find the querySelector method a little long to type (though it is a good description of what it does). With Function being a type, we can easily create a shorthand version. The constructor spends much of its time selecting and assigning the HTML elements to member fields of the class. One of the advantages of this approach is that the DOM of the page is queried only once, and the reference is stored and reused. This is good for the performance of the application because, once the application is running, querying the DOM repeatedly may take much longer.

Staying within the bounds

Using the min and max functions from the dart:math package, the index can be kept within the range of the current list.

void showLastSlide() {
  currentSlideIndex = max(0, currentSlideShow.slides.length - 1);
  showSlide(currentSlideIndex);
}

void showNextSlide() {
  currentSlideIndex = min(currentSlideShow.slides.length - 1, ++currentSlideIndex);
  showSlide(currentSlideIndex);
}

These convenience functions can save a great deal of if and else if comparisons and help make the code a good degree more readable.

Using the slider control

The slider control is another new control in the HTML5 standard. It will allow the user to scroll through the slides in the presentation. This control is a personal favorite of mine, as it is so visual and can be used to give very interactive feedback to the user.
It seemed to be a huge omission from the original form controls in the early generation of web browsers. Even with clear, widely accepted features, HTML specifications can take a long time to clear committees and make it into everyday browsers!

<input type="range" id="rngSlides" value="0"/>

The control has an onChange event, which is given a listener in the SlideShowApp constructor.

rangeSlidePos.onChange.listen(moveToSlide);

The control provides its data via a simple string value, which can be converted to an integer via the int.parse method to be used as an index into the presentation's slide list.

void moveToSlide(Event event) {
  currentSlideIndex = int.parse(rangeSlidePos.value);
  showSlide(currentSlideIndex);
}

The slider control must be kept in synchronization with any other change in the slide display, use of navigation, or change in the number of slides. For example, the user may use the slider to reach the general area of the presentation, and then adjust with the previous and next buttons.

void updateRangeControl() {
  rangeSlidePos
    ..min = "0"
    ..max = (currentSlideShow.slides.length - 1).toString();
}

This method is called when the number of slides is changed, and, as with most HTML elements, the values to be set need to be converted to strings.

Responding to keyboard events

Using the keyboard, particularly the arrow (cursor) keys, is a natural way to look through the slides in a presentation, even in the preview mode. This is set up in the SlideShowApp constructor. In Dart web applications, the dart:html package allows direct access to the global window object from any class or function. The Textarea used to input the presentation source will also respond to the arrow keys, so there will need to be a check to see whether it is currently being used. The activeElement property on the document will give a reference to the control with focus. This reference can be compared to the Textarea, which is stored in the presEditor field, so a decision can be taken on whether to act on the keypress or not.

Left Arrow - key code 37 - Go back a slide.
Up Arrow - key code 38 - Go to the first slide.
Right Arrow - key code 39 - Go to the next slide.
Down Arrow - key code 40 - Go to the last slide.

Keyboard events, like other events, can be listened to by using a stream event listener. The listener function is an anonymous function (the definition omits a name) that takes the KeyboardEvent as its only parameter.

window.onKeyUp.listen((KeyboardEvent e) {
  if (presEditor != document.activeElement) {
    if (e.keyCode == 39)
      showNextSlide();
    else if (e.keyCode == 37)
      showPrevSlide();
    else if (e.keyCode == 38)
      showFirstSlide();
    else if (e.keyCode == 40)
      showLastSlide();
  }
});

It is a reasonable question to ask how to get the keyboard key codes required to write the switching code. One good tool to help with this is the W3C's Key and Character Codes page at http://www.w3.org/2002/09/tests/keys.html, but it can often be faster to write the handler and print out the event that is passed in!

Showing the key help

Rather than testing the user's memory, there will be a handy reference to the keyboard shortcuts. This is a simple Div element, which is shown, and then hidden when the key (remember to press Shift too!) is pressed again, by toggling the visibility style between visible and hidden.

Listening twice to event streams

The event system in Dart is implemented as a stream. One of the advantages of this is that an event can easily have more than one entity listening to it.
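As a minimal sketch of that idea, two independent listeners can be attached to the same key-up stream; the print messages below are purely illustrative and not taken from the project:

import 'dart:html';

void main() {
  // First listener: could handle the slide navigation keys.
  window.onKeyUp.listen((KeyboardEvent e) {
    print("navigation listener saw keyCode ${e.keyCode}");
  });

  // Second listener, registered separately: could toggle the help panel.
  window.onKeyUp.listen((KeyboardEvent e) {
    print("help listener saw keyCode ${e.keyCode}");
  });
}

Both listeners receive every key-up event, and neither registration disturbs the other.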
This is useful, for example, in a web application where some keyboard presses are valid in one context but not in another. The listen method is an add operation (accumulative), so the key press for help can be implemented separately. This allows a modular approach, which helps reuse, as the handlers can be specialized and added as required.

window.onKeyUp.listen((KeyboardEvent e) {
  print(e);
  //Check the editor does not have focus.
  if (presEditor != document.activeElement) {
    DivElement helpBox = qs("#helpKeyboardShortcuts");
    if (e.keyCode == 191) {
      if (helpBox.style.visibility == "visible") {
        helpBox.style.visibility = "hidden";
      } else {
        helpBox.style.visibility = "visible";
      }
    }
  }
});

In a game, for example, a common set of event handling may apply to the title and introduction screens, while the actual in-game screen contains additional event handling as a superset. This could be implemented by adding and removing handlers on the relevant event stream.

Changing the colors

HTML5 provides browsers with a full-featured color picker (typically, browsers use the native OS's color chooser). This will be used to allow the user to set the background color of the editor application itself. The color picker is added to the index.html page with the following HTML:

<input id="pckBackColor" type="color">

The implementation is straightforward, as the color picker control provides its chosen color as a simple value:

InputElement cp = qs("#pckBackColor");
cp.onChange.listen(
  (e) => document.body.style.backgroundColor = cp.value);

As the event and property used (onChange and value) are common to the input controls, the basic InputElement class can be used.

Adding a date

Most presentations are usually dated, or at least some of the jokes are! We will add a convenient button for the user to add a date to the presentation using the HTML5 input type date, which provides a graphical date picker. The control and its default value are set in the index.html page as follows:

<input type="date" id="selDate" value="2000-01-01"/>

The valueAsDate property of the DateInputElement class provides the Date object, which can be added to the text area:

void insertDate(Event event) {
  DateInputElement datePicker = querySelector("#selDate");
  if (datePicker.valueAsDate != null)
    presEditor.value = presEditor.value +
        datePicker.valueAsDate.toLocal().toString();
}

In this case, the toLocal method is used to obtain the date as a string in the local time zone.

Timing the presentation

The presenter will want to keep to their allotted time slot. We will include a timer in the editor to aid in rehearsal.

Introducing the stopwatch class

The Stopwatch class (from dart:core) provides much of the functionality needed for this feature, as shown in this small command line application:

main() {
  Stopwatch sw = new Stopwatch();
  sw.start();
  print(sw.elapsed);
  sw.stop();
  print(sw.elapsed);
}

The elapsed property can be checked at any time to give the current duration. This is a very useful class; for example, it can be used to compare different functions to see which is the fastest.

Implementing the presentation timer

The clock will be stopped and started with a single button handled by the toggleTimer method, and a recurring timer will update the duration text on the screen. If the timer is running, the updateTimer and the Stopwatch stored in the slidesTime field are stopped.
No update to the display is required, as the user will need to see the final time:

void toggleTimer(Event event) {
  if (slidesTime.isRunning) {
    slidesTime.stop();
    updateTimer.cancel();
  } else {
    updateTimer = new Timer.periodic(new Duration(seconds: 1), (timer) {
      String seconds = (slidesTime.elapsed.inSeconds % 60).toString();
      seconds = seconds.padLeft(2, "0");
      timerDisplay.text = "${slidesTime.elapsed.inMinutes}:$seconds";
    });

    slidesTime
      ..reset()
      ..start();
  }
}

The Stopwatch class provides properties for retrieving the elapsed time in minutes and seconds. To format this as minutes and seconds, the seconds portion is determined with the modular division operator % and padded with the string function padLeft. Dart's string interpolation feature is used to build the final string, and as the elapsed and inMinutes properties are being accessed, the curly brackets are required so that the whole expression is evaluated to a single value.

Overview of slides

This provides the user with a visual overview of the slides, as shown in the following screenshot:

The presentation slides will be recreated in a new full screen Div element. This is styled using the fullScreen class from the CSS stylesheet, applied in the SlideShowApp constructor:

overviewScreen = new DivElement();
overviewScreen.classes.toggle("fullScreen");
overviewScreen.onClick.listen((e) => overviewScreen.remove());

The HTML for the slides will be identical. To shrink the slides, the list of slides is iterated over, the HTML element object is obtained, and the CSS classes for the slide are set:

currentSlideShow.slides.forEach((s) {
  aSlide = s.getSlideContents();
  aSlide.classes.toggle("slideOverview");
  aSlide.classes.toggle("shrink");
  ...

The CSS hover class is set to scale the slide when the mouse enters, so a slide can be focused on for review. The classes are set with the toggle method, which adds a class if it is not present or removes it if it is. The method has an optional parameter:

aSlide.classes.toggle('className', condition);

The second parameter, named shouldAdd, is true if the class is always to be added and false if the class is always to be removed.

Handout notes

There is nothing like a tangible handout to give to attendees of your presentation. This can be achieved with a variation of the overview display. Instead of duplicating the overview code, the function can be parameterized with an optional parameter in the method declaration. This is declared with square brackets [] around the declaration and a default value that is used if no parameter is specified.

void buildOverview([bool addNotes = false])

This is called by the presentation overview display without requiring any parameters.

buildOverview();

This is called by the handouts display with the parameter supplied.

buildOverview(true);

If this parameter is set, an additional Div element is added for the Notes area and the CSS is adjusted for the benefit of the print layout.

Comparing optional positional and named parameters

The addNotes parameter is declared as an optional positional parameter, so an optional value can be specified without naming the parameter; the first parameter is matched to the supplied value. To give more flexibility, Dart also allows optional parameters to be named. Consider two functions: the first takes named optional parameters and the second takes positional optional parameters.
getRecords1(String query, {int limit: 25, int timeOut: 30}) {
}

getRecords2(String query, [int limit = 80, int timeOut = 99]) {
}

The first function can be called in more ways:

getRecords1("");
getRecords1("", limit: 50, timeOut: 40);
getRecords1("", timeOut: 40, limit: 65);
getRecords1("", limit: 50);
getRecords1("", timeOut: 40);

The second can only omit parameters from the right-hand side:

getRecords2("");
getRecords2("", 90);
getRecords2("", 90, 50);

With named optional parameters, the order in which they are supplied is not important, and the calling code is clearer as to the use that will be made of the parameters being passed. With positional optional parameters, we can omit the later parameters, but it works in a strict left to right order, so to set the timeOut parameter to a non-default value, limit must also be supplied. It is also easier to confuse which parameter is for which particular purpose.

Summary

The presentation editor is looking rather powerful, with a range of advanced HTML controls moving far beyond text boxes to date pickers and color selectors. The preview and overview help the presenter visualize the entire presentation as they work, thanks to the strong class structure built using Dart mixins and data structures using generics. We have spent time looking at the object basis of Dart, how to pass parameters in different ways and, closer to the end user, how to handle keyboard input. This will assist in the creation of many different types of application, and we have seen how optional parameters and true properties can help document code for ourselves and other developers. Hopefully you learned a little about coconuts too.

The next step for this application is to improve the output with full screen display, animation, and a little sound to capture the audience's attention. The presentation editor could be improved as well—currently it is only available in English. Dart's internationalization features can help with this.

Resources for Article:

Further resources on this subject:
Practical Dart [article]
Handling the DOM in Dart [article]
Dart with JavaScript [article]

Performance by Design

Packt
15 Sep 2015
9 min read
In this article by Shantanu Kumar, author of the book Clojure High Performance Programming - Second Edition, we learn how Clojure is a safe, functional programming language that brings great power and simplicity to the user. Clojure is also dynamically and strongly typed, and has very good performance characteristics. Naturally, every activity performed on a computer has an associated cost. What constitutes acceptable performance varies from one use case and workload to another. In today's world, performance is even the determining factor for several kinds of applications. We will discuss Clojure (which runs on the JVM (Java Virtual Machine)) and its runtime environment in the light of performance, which is the goal of the book. In this article, we will study the basics of performance analysis, including the following:

A whirlwind tour of how the application stack impacts performance
Classifying the performance anticipations by the types of use cases

(For more resources related to this topic, see here.)

Use case classification

The performance requirements and priorities vary across different kinds of use cases. We need to determine what constitutes acceptable performance for the various kinds of use cases. Hence, we classify them to identify their performance model. When it comes to details, there is no sure-shot performance recipe for any kind of use case, but it certainly helps to study their general nature. Note that, in real life, the use cases listed in this section may overlap with each other.

The user-facing software

The performance of user-facing applications is strongly linked to the user's anticipation. A difference of a good number of milliseconds may not be perceptible to the user but, at the same time, a wait of more than a few seconds may not be taken kindly. One important element to normalize the anticipation is to engage the user by providing duration-based feedback. A good idea to deal with such a scenario would be to start the task asynchronously in the background, and poll it from the UI layer to generate duration-based feedback for the user. Another way could be to incrementally render the results to the user to even out the anticipation.

Anticipation is not the only factor in user-facing performance. Common techniques like staging or precomputation of data, and other general optimization techniques, can go a long way to improve the user experience with respect to performance. Bear in mind that all kinds of user-facing interfaces fall into this use case category—the Web, mobile web, GUI, command line, touch, voice-operated, gesture...you name it.

Computational and data-processing tasks

Non-trivial compute-intensive tasks demand a proportional amount of computational resources. All of the CPU, cache, memory, efficiency, and the parallelizability of the computation algorithms would be involved in determining the performance. When the computation is combined with distribution over a network or reading from/staging to disk, I/O bound factors come into play. This class of workloads can be further subclassified into more specific use cases.

A CPU bound computation

A CPU bound computation is limited by the CPU cycles spent on executing it. Arithmetic processing in a loop, small matrix multiplication, determining whether a number is a Mersenne prime, and so on, would be considered CPU bound jobs.
If the algorithm complexity is linked to the number of iterations/operations N, such as O(N), O(N²), and so on, then the performance depends on how big N is, and how many CPU cycles each step takes. For parallelizable algorithms, the performance of such tasks may be enhanced by assigning multiple CPU cores to the task. On virtual hardware, the performance may be impacted if the CPU cycles are available in bursts.

A memory bound task

A memory bound task is limited by the availability and bandwidth of the memory. Examples include large text processing, list processing, and more. For example, specifically in Clojure, the (reduce f (pmap g coll)) operation would be memory bound if coll is a large sequence of big maps, even though we parallelize the operation using pmap here. Note that higher CPU resources cannot help when memory is the bottleneck, and vice versa. Lack of availability of memory may force you to process smaller chunks of data at a time, even if you have enough CPU resources at your disposal. If the maximum speed of your memory is X and your algorithm on a single core accesses the memory at speed X/3, the multicore performance of your algorithm cannot exceed three times the current performance, no matter how many CPU cores you assign to it. The memory architecture (for example, SMP and NUMA) contributes to the memory bandwidth in multicore computers. Performance with respect to memory is also subject to page faults.

A cache bound task

A task is cache bound when its speed is constrained by the amount of cache available. When a task retrieves values from a small number of repeated memory locations, for example, a small matrix multiplication, the values may be cached and fetched from there. Note that CPUs (typically) have multiple layers of cache, and the performance will be at its best when the processed data fits in the cache, but the processing will still happen, more slowly, when the data does not fit into the cache. It is possible to make the most of the cache using cache-oblivious algorithms. A higher number of concurrent cache/memory bound threads than CPU cores is likely to flush the instruction pipeline, as well as the cache, at the time of context switch, likely leading to severely degraded performance.

An input/output bound task

An input/output (I/O) bound task would go faster if the I/O subsystem that it depends on went faster. Disk/storage and network are the most commonly used I/O subsystems in data processing, but it can be a serial port, a USB-connected card reader, or any I/O device. An I/O bound task may consume very few CPU cycles. Depending on the speed of the device, connection pooling, data compression, asynchronous handling, application caching, and more may help in performance. One notable aspect of I/O bound tasks is that performance is usually dependent on the time spent waiting for connection/seek and the amount of serialization that we do, and hardly on the other resources.

In practice, many data processing workloads are usually a combination of CPU bound, memory bound, cache bound, and I/O bound tasks. The performance of such mixed workloads effectively depends on the even distribution of CPU, cache, memory, and I/O resources over the duration of the operation. A bottleneck situation arises only when one resource gets too busy to make way for another.

Online transaction processing

The online transaction processing (OLTP) systems process the business transactions on demand.
They can sit behind systems such as a user-facing ATM machine, a point-of-sale terminal, a network-connected ticket counter, ERP systems, and more. The OLTP systems are characterized by low latency, availability, and data integrity. They run day-to-day business transactions. Any interruption or outage is likely to have a direct and immediate impact on sales or service. Such systems are expected to be designed for resiliency rather than delayed recovery from failures. When the performance objective is unspecified, you may like to consider graceful degradation as a strategy.

It is a common mistake to ask the OLTP systems to answer analytical queries, something that they are not optimized for. It is desirable for an informed programmer to know the capability of the system and suggest design changes as per the requirements.

Online analytical processing

The online analytical processing (OLAP) systems are designed to answer analytical queries in a short time. They typically get data from the OLTP operations, and their data model is optimized for querying. They basically provide for consolidation (roll-up), drill-down, and slicing and dicing of data for analytical purposes. They often use specialized data stores that can optimize ad-hoc analytical queries on the fly. It is important for such databases to provide pivot-table-like capability. Often, the OLAP cube is used to get fast access to the analytical data.

Feeding the OLTP data into the OLAP systems may entail workflows and multistage batch processing. The performance concern of such systems is to efficiently deal with large quantities of data, while also dealing with inevitable failures and recovery.

Batch processing

Batch processing is the automated execution of predefined jobs. These are typically bulk jobs that are executed during off-peak hours. Batch processing may involve one or more stages of job processing. Often, batch processing is clubbed with workflow automation, where some workflow steps are executed offline. Many of the batch processing jobs work on staging of data, and on preparing data for the next stage of processing to pick up.

Batch jobs are generally optimized for the best utilization of the computing resources. Since there is little to moderate demand to lower the latencies of particular subtasks, these systems tend to optimize for throughput. A lot of batch jobs involve largely I/O processing and are often distributed over a cluster. Due to distribution, data locality is preferred when processing the jobs; that is, the data and processing should be local in order to avoid network latency in reading/writing data.

Summary

We learned about the basics of what it is like to think more deeply about performance. The performance of Clojure applications depends on various factors. For a given application, understanding its use cases, design and implementation, algorithms, resource requirements and alignment with the hardware, and the underlying software capabilities is essential.

Resources for Article:

Further resources on this subject:
Big Data [article]
The Observer Pattern [article]
Working with Incanter Datasets [article]

Getting Started with Meteor

Packt
14 Sep 2015
6 min read
In this article, based on Marcelo Reyna's book Meteor Design Patterns, we will see that when you want to develop an application of any kind, you want to develop it fast. Why? Because the faster you develop, the better your return on investment will be (your investment is time, and the real cost is the money you could have produced with that time). There are two key ingredients of rapid web development: compilers and patterns. Compilers will help you so that you don't have to type much, while patterns will increase the pace at which you solve common programming issues. Here, we will quick-start compilers and explain how they relate to Meteor, a vast but simple topic. The compiler we will be looking at is CoffeeScript.

(For more resources related to this topic, see here.)

CoffeeScript for Meteor

CoffeeScript effectively replaces JavaScript. It is much faster to develop in CoffeeScript, because it simplifies the way you write functions, objects, arrays, logical statements, binding, and much more. All CoffeeScript files are saved with a .coffee extension. We will cover functions, objects, logical statements, and binding, since this is what we will use the most.

Objects and arrays

CoffeeScript gets rid of curly braces ({}), semicolons (;), and commas (,). This alone saves your fingers from repeating unnecessary strokes on the keyboard. CoffeeScript instead emphasizes the proper use of tabbing. Tabbing will not only make your code more readable (you are probably doing it already), but also be a key factor in making it work. Let's look at some examples:

#COFFEESCRIPT
toolbox =
  hammer: true
  flashlight: false

Here, we are creating an object named toolbox that contains two keys: hammer and flashlight. The equivalent in JavaScript would be this:

//JAVASCRIPT - OUTPUT
var toolbox = {
  hammer: true,
  flashlight: false
};

Much easier! As you can see, we have to tab to express that both the hammer and the flashlight properties are a part of toolbox. The word var is not allowed in CoffeeScript because CoffeeScript automatically applies it for you. Let's take a look at how we would create an array:

#COFFEESCRIPT
drill_bits = [
  "1/16 in"
  "5/64 in"
  "3/32 in"
  "7/64 in"
]

//JAVASCRIPT - OUTPUT
var drill_bits;
drill_bits = ["1/16 in", "5/64 in", "3/32 in", "7/64 in"];

Here, we can see we don't need any commas, but we do need brackets to determine that this is an array.

Logical statements and operators

CoffeeScript also removes a lot of parentheses (()) in logical statements and functions. This makes the logic of the code much easier to understand at first glance. Let's look at an example:

#COFFEESCRIPT
rating = "excellent" if five_star_rating

//JAVASCRIPT - OUTPUT
var rating;
if (five_star_rating) {
  rating = "excellent";
}

In this example, we can clearly see that CoffeeScript is easier to read and write. It effectively replaces all implied parentheses in any logical statement. Operators such as &&, ||, and !== are replaced by words. Here is a list of the operators that you will be using the most:

CoffeeScript - JavaScript
is - ===
isnt - !==
not - !
and - &&
or - ||
true, yes, on - true
false, no, off - false
@, this - this

Let's look at a slightly more complex logical statement and see how it compiles:

#COFFEESCRIPT
# Suppose that "this" is an object that represents a person and their physical properties
if @eye_color is "green"
  retina_scan = "passed"
else
  retina_scan = "failed"

//JAVASCRIPT - OUTPUT
if (this.eye_color === "green") {
  retina_scan = "passed";
} else {
  retina_scan = "failed";
}

When using @eye_color to substitute for this.eye_color, notice that we do not need the dot (.).

Functions

JavaScript has a couple of ways of creating functions. They look like this:

//JAVASCRIPT
//Save an anonymous function onto a variable
var hello_world = function(){
  console.log("Hello World!");
}

//Declare a function
function hello_world(){
  console.log("Hello World!");
}

CoffeeScript uses -> instead of the function() keyword. The following example outputs a hello_world function:

#COFFEESCRIPT
#Create a function
hello_world = ->
  console.log "Hello World!"

//JAVASCRIPT - OUTPUT
var hello_world;
hello_world = function(){
  return console.log("Hello World!");
}

Once again, we use a tab to express the content of the function, so there is no need for curly braces ({}). This means that you have to make sure you have all of the logic of the function tabbed under its namespace. But what about our parameters? We can use (p1, p2) -> instead, where p1 and p2 are parameters. Let's make our hello_world function say our name:

#COFFEESCRIPT
hello_world = (name) ->
  console.log "Hello #{name}"

//JAVASCRIPT - OUTPUT
var hello_world;
hello_world = function(name) {
  return console.log("Hello " + name);
}

In this example, we see how the special word function disappears, and we see string interpolation. CoffeeScript allows the programmer to easily add logic to a string by escaping the string with #{}. Unlike JavaScript, you can also add returns and reshape the way a string looks without breaking the code.

Binding

In Meteor, we will often find ourselves using the properties of binding within nested functions and callbacks. Function binding is very useful for these types of cases and helps avoid having to save data in additional variables. Let's look at an example:

#COFFEESCRIPT
# Let's make the context of this equal to our toolbox object
# this =
#   hammer: true
#   flashlight: false

# Run a method with a callback
Meteor.call "use_hammer", ->
  console.log this

In this case, the this object will return a top-level object, such as the browser window. That's not useful at all. Let's bind it now:

#COFFEESCRIPT
# Let's make the context of this equal to our toolbox object
# this =
#   hammer: true
#   flashlight: false

# Run a method with a callback
Meteor.call "use_hammer", =>
  console.log this

The key difference is the use of => instead of the expected -> sign for defining the function. This will ensure that the callback's this object contains the context of the executing function. The resulting compiled script is as follows:

//JAVASCRIPT
Meteor.call("use_hammer", (function(_this) {
  return function() {
    return console.log(_this);
  };
})(this));

CoffeeScript will improve your code and help you write code faster. Still, it is not flawless. When you start combining functions with nested arrays, things can get complex and difficult to read, especially when functions are constructed with multiple parameters.
Let's look at an ugly query:

#COFFEESCRIPT
People.update
  sibling:
    $in: ["bob", "bill"]
 ,
  limit: 1
 ,
  -> console.log "success!"

There are a few ways of expressing the difference between two different parameters of a function, but this is by far the easiest to understand: we place a comma one indentation before the next object.

Go to coffeescript.org and play around with the language by clicking on the try coffeescript link.

Summary

We can now program faster because we have tools such as CoffeeScript, Jade, and Stylus to help us. We also see how to use templates, helpers, and events to make our frontend work with Meteor.

Resources for Article:

Further resources on this subject:
Why Meteor Rocks! [article]
Function passing [article]
Meteor.js JavaScript Framework: Why Meteor Rocks! [article]

Continuous Delivery and Continuous Deployment

Packt
14 Sep 2015
13 min read
In this article by Jonathan McAllister, author of the book Mastering Jenkins, we will discover all things continuous, namely Continuous Delivery and Continuous Deployment practices. Continuous Delivery represents a logical extension to the continuous integration practices. It expands the automation defined in continuous integration beyond simply building a software project and executing unit tests. Continuous Delivery adds automated deployments and acceptance test verification automation to the solution. To better describe this process, let's take a look at some basic characteristics of Continuous Delivery:

The development resources commit changes to the mainline of the source control solution multiple times per day, and the automation system initiates a complete build, deploy, and test validation of the software project
Automated tests execute against every change deployed, and help ensure that the software remains in an always-releasable state
Every committed change is treated as potentially releasable, and extra care is taken to ensure that incomplete development work is hidden and does not impact the readiness of the software
Feedback loops are developed to facilitate notifications of failures; this includes build results, test execution reports, delivery status, and user acceptance verification
Iterations are short, and feedback is rapid, allowing the business interests to weigh in on software development efforts and propose alterations along the way
Business interests, instead of engineering, will decide when to physically release the software project, and as such, the software automation should facilitate this goal

(For more resources related to this topic, see here.)

As described previously, Continuous Delivery (CD) represents the expansion of Continuous Integration practices. At the time of writing of this book, Continuous Delivery approaches have been successfully implemented at scale in organizations like Amazon, Wells Fargo, and others. The value of CD derives from the ability to tie software releases to business interests, collect feedback rapidly, and course correct efficiently. The following diagram illustrates the basic automation flow for Continuous Delivery:

Figure 8-10: Continuous Delivery workflow

As we can see in the preceding diagram, this practice allows businesses to rapidly develop, strategically market, and release software based on pivoting market demands instead of engineering time frames. When implementing a continuous delivery solution, there are a few key points that we should keep in mind:

Keep the build fast
Illuminate the failures, and recover immediately
Make deployments push-button, for any version to any environment
Automate the testing and validation operations with defined buckets for each logical test group (unit, smoke, functional, and regression)
Use feature toggles to avoid branching
Get feedback early and often (automation feedback, test feedback, build feedback, UAT feedback)

Principles of Continuous Delivery

Continuous Delivery was founded on the premise of standardized and defined release processes, automation-based build pipelines, and logical quality gates with rapid feedback loops. In a continuous delivery paradigm, builds flow from development to QA and beyond like water in a pipe. Builds can be promoted from one logical group to another, and the risk of the proposed change is exposed incrementally to a wider audience.
The practical application of the Continuous Delivery principles lies in frequent commits to the mainline, which, in turn, execute the build pipeline automation suite, pass through automated quality gates for verification, and are individually signed off by business interests (in a best case scenario). The idea of incrementally exposing the risk can be better illustrated through a circle of trust diagram, as follows:

Figure 8-11: Circle of Trust for code changes

As illustrated in the preceding trust diagram, the number of people exposed to a build expands incrementally as the build passes from one logical development and business group to another. This model places emphasis on verification and attempts to remove waste (time) by exposing the build output only to groups that have a vested interest in the build at that phase.

Continuous Delivery in Jenkins

Applying the Continuous Delivery principles in Jenkins can be accomplished in a number of ways. That said, there are some definite tips and tricks which can be leveraged to make the implementation a bit less painful. In this section, we will discuss and illustrate some of the more advanced Continuous Delivery tactics and learn how to apply them in Jenkins. Your specific implementation of Continuous Delivery will most definitely be unique to your organization, so take what is useful, research anything that is missing, and disregard what is useless. Let's get started.

Rapid feedback loops

Rapid feedback loops are the baseline implementation requirement for Continuous Delivery. Applying this with Jenkins can be accomplished in a pretty slick manner using a combination of the Email-Ext plugin and some HTML template magic. In large-scale Jenkins implementations, it is not wise to manage many e-mail templates, and creating a single transformable one will help save time and effort. Let's take a look at how to do this in Jenkins.

The Email-Ext plugin provides Jenkins with the capabilities of completely customizable e-mail notifications. It allows the Jenkins system to customize just about every aspect of notifications and can be leveraged as an easy-to-implement, template-based e-mail solution. To begin with, we will need to install the plugin into our Jenkins system. The details for this plugin can be found at the following web address:

https://wiki.jenkins-ci.org/display/JENKINS/Email-ext+plugin

Once the plugin has been installed into our Jenkins system, we will need to configure the basic connection details and optional settings. To begin, navigate to the Jenkins administration area and locate the Extended Email Notification section.

Jenkins->Manage Jenkins->Configure System

On this page, we will need to specify, at a minimum, the following details:

SMTP Server
SMTP Authentication details (User Name + Password)
Reply-to List (nobody@domain.com)
System Admin Email Address (located further up on the page)

The completed form may look something like the following screenshot:

Figure 8-12: Completed form

Once the basic SMTP configuration details have been specified, we can then add the Editable Email Notification post-build step to our jobs and configure the e-mail contents appropriately. The following screenshot illustrates the basic configuration options required for the build step to operate:

Figure 8-13: Basic configuration options

As we can see from the preceding screenshot, environment variables are piped into the plugin via the job's automation to define the e-mail contents, recipient list, and other related details.
This solution makes for a highly effective feedback loop implementation.

Quality gates and approvals

Two of the key aspects of Continuous Delivery are the adoption of quality gates and stakeholder approvals. This requires individuals to sign off on a given change or release as it flows through the pipeline. Back in the day, this used to be managed through a release signoff sheet, which would oftentimes be maintained manually on paper. In the modern digital age, this is managed through the Promoted Builds plugin in Jenkins, whereby we can add LDAP or Active Directory integration to ensure that only authenticated users have the access rights required to promote builds. However, there is room to expand this concept and learn some additional tips and tricks, which will ensure that we have a solid and secure implementation.

Integrating Jenkins with Lightweight Directory Access Protocol (LDAP) is generally a straightforward exercise. This solution allows a corporate authentication system to be tied directly into Jenkins. This means that once the security integration is configured in Jenkins, we will be able to log in to the Jenkins system (UI) by using our corporate account credentials. To connect Jenkins to a corporate authentication engine, we will first need to configure Jenkins to talk to the corporate security servers. This is configured in the Global Security administration area of the Jenkins user interface, as shown in the following screenshot:

Jenkins->Manage Jenkins->Configure Global Security

Figure 8-14: Global Security configuration options

The global security area of Jenkins allows us to specify the type of authentication that Jenkins will use for users who wish to access the Jenkins system. By default, Jenkins provides a built-in internal database for managing users; we will have to alter this to support LDAP. To configure this system to utilize LDAP, click the LDAP radio button, and enter your LDAP server details as illustrated in the following screenshot:

Figure 8-15: LDAP server details

Fill out the form with your company's LDAP specifics, and click save. If you happen to get stuck on this configuration, the Jenkins community has graciously provided additional in-depth documentation. This documentation can be found at the following URL:

https://wiki.jenkins-ci.org/display/JENKINS/LDAP+Plugin

For users who wish to leverage Active Directory, there is a Jenkins plugin which can facilitate this type of integrated security solution. For more details on this plugin, please consult the plugin page at the following URL:

https://wiki.jenkins-ci.org/display/JENKINS/Active+Directory+plugin

Once the authentication solution has been successfully configured, we can utilize it to set approvers in the Promoted Builds plugin. To configure a promotion approver, we will need to edit the desired Jenkins project and specify the users who should have the promote permissions. The following screenshot shows an example of this configuration:

Figure 8-16: Configuration example

As we can see, the Promoted Builds plugin provides an excellent signoff sheet solution. It is complete with access security controls, promotion criteria, and a robust build step implementation solution.

Build pipeline(s) workflow and visualization

When build pipelines are created initially, the most common practice is to simply daisy-chain the jobs together. This is a perfectly reasonable initial-implementation approach, but in the long term it may get confusing, and it may become difficult to track the workflow of daisy-chained jobs.
To assist with this issue, Jenkins offers a plugin to help visualize the build pipelines, appropriately named the Build Pipeline plugin. The details surrounding this plugin can be found at the following web URL:

https://wiki.jenkins-ci.org/display/JENKINS/Build+Pipeline+Plugin

This plugin provides an additional view option, which is populated by specifying an entry point to the pipeline, detecting upstream and downstream jobs, and creating a visual representation of the pipeline. Upon the initial installation of the plugin, we can see an additional option available to us when we create a new dashboard view. This is illustrated in the following screenshot:

Figure 8-17: Dashboard view

Upon creating a pipeline view using the Build Pipeline plugin, Jenkins will present us with a number of configuration options. The most important configuration options are the name of the view and the initial job dropdown selection option, as seen in the following screenshot:

Figure 8-18: Pipeline view configuration options

Once the basic configuration has been defined, click the OK button to save the view. This will trigger the plugin to perform an initial scan of the linked jobs and generate the pipeline view. An example of a completely developed pipeline is illustrated in the following image:

Figure 8-19: Completely developed pipeline

This completes the basic configuration of a build pipeline view, which gives us a good visual representation of our build pipelines. There are a number of features and customizations that we could apply to the pipeline view, but we will let you explore those and tweak the solution to your own specific needs.

Continuous Deployment

Just as Continuous Delivery represents a logical extension of Continuous Integration, Continuous Deployment represents a logical expansion upon the Continuous Delivery practices. Continuous Deployment is very similar to Continuous Delivery in a lot of ways, but it has one key fundamental variance: there are no approval gates. Without approval gates, code commits to the mainline have the potential to end up in the production environment in short order. This type of automation solution requires a high level of discipline, strict standards, and reliable automation. It is a practice that has proven valuable for the likes of Etsy, Flickr, and many others. This is because Continuous Deployment dramatically increases the deployment velocity. The following diagram describes both Continuous Delivery and Continuous Deployment, to better showcase the fundamental difference between them:

Figure 8-20: Differentiation between Continuous Delivery and Continuous Deployment

It is important to understand that Continuous Deployment is not for everyone, and is a solution that may not be feasible for some organizations or product types. For example, in embedded software or desktop application software, Continuous Deployment may not be a wise solution without properly architected background upgrade mechanisms, as it will most likely alienate the users due to the frequency of the upgrades. On the other hand, it is something that could be applied fairly easily to a simple API web service or a SaaS-based web application. If the business unit indeed desires to migrate towards a continuous deployment solution, tight controls on quality will be required to facilitate stability and avoid outages.
These controls may include any of the following:

Required unit testing with code coverage metrics
Required A/B testing or experiment-driven development
Paired programming
Automated rollbacks
Code reviews and static code analysis implementations
Behavior-driven development (BDD)
Test-driven development (TDD)
Automated smoke tests in production

Additionally, it is important to note that since a Continuous Deployment solution is a significant leap forward, the implementation of the Continuous Delivery practices would most likely be a prerequisite. This solution would need to be proven stable and trusted prior to the removal of the approval gates. Once removed, though, the deployment velocity should significantly increase as a result. The quantifiable value of continuous deployment is well advertised by companies such as Amazon, which realized a 78 percent reduction in production outages and a 60 percent reduction in downtime minutes due to catastrophic defects. That said, implementing continuous deployment will require buy-in from the stakeholders and business interests alike.

Continuous Deployment in Jenkins

Applying the Continuous Deployment practices in Jenkins is actually a simple exercise once Continuous Integration and Continuous Delivery have been completed. It is simply a matter of removing the approval criteria and allowing builds to flow freely through the environments, and eventually to production. The following screenshot shows how to implement this using the Promoted Builds plugin:

Figure 8-21: Promoted builds plugin implementation

Once the gates are removed, the build automation solution will continuously deploy every commit to the mainline (given that all the automated tests have passed).

Summary

With this article of Mastering Jenkins, you should now have a solid understanding of how to advocate for and develop Continuous Delivery and Continuous Deployment practices at an organization.

Resources for Article:

Further resources on this subject:
Exploring Jenkins [article]
Jenkins Continuous Integration [article]
What is continuous delivery and DevOps? [article]