
How-To Tutorials - Programming


Parallelize It

Packt
18 Jul 2017
15 min read
In this article, Elliot Forbes, the author of the book Learning Concurrency in Python, explains concurrency and parallelism thoroughly and covers the CPU knowledge needed to understand them. Concurrency and parallelism are two concepts that are commonly confused. In reality, they are quite different, and if you design software to be concurrent when you actually need parallel execution, you could seriously limit your software's true performance potential. It is therefore vital to know exactly what the two concepts mean so that you can understand the differences. Knowing these differences puts you at a distinct advantage when it comes to designing your own high-performance software in Python. In this article, we'll cover the following topics:

What is concurrency, and what are the major bottlenecks that impact our applications?
What is parallelism, and how does it differ from concurrency?

Understanding concurrency

Concurrency is essentially the practice of doing multiple things at the same time, but not necessarily in parallel. It can help us improve the perceived performance of our applications, and it can also improve the speed at which our applications run. The best way to think about how concurrency works is to imagine one person working on multiple tasks and quickly switching between them. Imagine this person is writing a program and, at the same time, dealing with support requests. They would focus primarily on writing the program and quickly context switch to fixing a bug or dealing with a support issue should one arise. Once they complete the support task, they can context switch back to writing their program.

However, in computing there are typically two performance bottlenecks that we have to watch out for and guard against when writing our programs. It is important to know the difference between them: if you apply concurrency to a CPU-bound bottleneck, you could find that performance actually decreases rather than increases, and if you apply parallelism to a task that really requires a concurrent solution, you could see the same performance hit.

Properties of concurrent systems

All concurrent systems share a similar set of properties, which can be defined as follows:

Multiple actors: These represent the different processes and threads all trying to actively make progress on their own tasks. We could have multiple processes that contain multiple threads, all trying to run at the same time.
Shared resources: These represent the memory, the disk, and the other resources that the actors must utilize in order to perform what they need to do.
Rules: All concurrent systems must follow a strict set of rules that define when actors can and cannot acquire locks, access memory, modify state, and so on. These rules are vital for concurrent systems to work; without them, our programs would tear themselves apart.

Input/Output bottlenecks

Input/Output bottlenecks, or I/O bottlenecks for short, occur when your computer spends more time waiting on various inputs and outputs than it does processing the information. You will typically find this type of bottleneck when you are working with an I/O-heavy application. We could take your standard web browser as an example of an I/O-heavy application.
In a browser, we typically spend significantly more time waiting for network requests to finish, for things like style sheets, scripts, or HTML pages to load, than we do rendering the result on the screen. If the rate at which data is requested is slower than the rate at which it is consumed, you have an I/O bottleneck. The main ways to improve the speed of these applications are typically either to improve the speed of the underlying I/O by buying more expensive and faster hardware, or to improve the way in which we handle these I/O requests.

A great example of a program bound by an I/O bottleneck is a web crawler. The main purpose of a web crawler is to traverse the web and index web pages so that they can be taken into consideration when Google runs its search ranking algorithm to decide the top 10 results for a given keyword. We'll start by creating a very simple script that just requests a page and times how long it takes to request said web page:

import urllib.request
import time

t0 = time.time()
req = urllib.request.urlopen('http://www.example.com')
pageHtml = req.read()
t1 = time.time()
print("Total Time To Fetch Page: {} Seconds".format(t1-t0))

If we break down this code: first we import the two necessary modules, urllib.request and time. We then record the starting time, request the web page example.com, record the ending time, and print out the time difference.

Now say we wanted to add a bit of complexity and follow any links to other pages so that we could index them in the future. We could use a library such as BeautifulSoup to make our lives a little easier:

import urllib.request
import time
from bs4 import BeautifulSoup

t0 = time.time()
req = urllib.request.urlopen('http://www.example.com')
t1 = time.time()
print("Total Time To Fetch Page: {} Seconds".format(t1-t0))

soup = BeautifulSoup(req.read(), "html.parser")
for link in soup.find_all('a'):
    print(link.get('href'))

t2 = time.time()
print("Total Execution Time: {} Seconds".format(t2-t0))

When I execute the above program, the output in my terminal shows that the time to fetch the page is over a quarter of a second. Now imagine we wanted to run our web crawler over a million different web pages; our total execution time would be roughly a million times longer. The real cause of this enormous execution time is purely the I/O bottleneck we face in our program: we spend a massive amount of time waiting on our network requests, and only a fraction of that time parsing the retrieved page for further links to crawl.

Understanding parallelism

Parallelism is the art of executing two or more actions simultaneously, as opposed to concurrency, in which you make progress on two or more things at the same time. This is an important distinction: in order to achieve true parallelism, we need multiple processors on which to run our code at the same time. A good analogy for parallel processing is a queue for coffee. If you had two queues of 20 people, all waiting to use one coffee machine so that they can get through the rest of the day, that would be an example of concurrency. Now, if you were to introduce a second coffee machine into the mix, that would be an example of something happening in parallel.
This is exactly how parallel processing works: each coffee machine in that analogy represents one processing core, able to make progress on tasks simultaneously. A real-life example that highlights the true power of parallel processing is your computer's graphics card. These cards tend to have hundreds, if not thousands, of individual processing cores that operate independently and can compute things at the same time. The reason we are able to run high-end PC games at such smooth frame rates is that we have been able to put so many parallel cores onto these cards.

CPU-bound bottlenecks

A CPU-bound bottleneck is typically the inverse of an I/O-bound bottleneck. It is typically found in applications that do a lot of heavy number crunching or any other computationally expensive task. These are programs whose rate of execution is bound by the speed of the CPU: if you put a faster CPU in your machine, you should see a direct increase in the speed of these programs. If the rate at which you are processing data far outweighs the rate at which you are requesting data, you have a CPU-bound bottleneck.

How do they work on a CPU?

Understanding the differences outlined in the previous section between concurrency and parallelism is essential, but it is also very important to understand more about the systems your software will be running on. An appreciation of the different architecture styles, as well as the low-level mechanics, helps you make the most informed decisions in your software design.

Single-core CPUs

Single-core processors only ever execute one thread at any given time, as that is all they are capable of. However, to ensure that our applications don't hang and become unresponsive, these processors rapidly switch between multiple threads of execution many thousands of times per second. This switching between threads is called a "context switch" and involves storing all the necessary information for a thread at a specific point in time and then restoring it at a different point further down the line. This mechanism of constantly saving and restoring threads allows us to make progress on quite a number of threads within a given second, and it appears as if the computer is doing multiple things at once. It is in fact doing only one thing at any given time, but at such speed that it is imperceptible to the users of the machine.

When writing multithreaded applications in Python, it is important to note that these context switches are computationally quite expensive. There is, unfortunately, no way to get around this, and much of the design of modern operating systems is about optimizing these context switches so that we don't feel the pain quite as much.

Advantages of single-core CPUs:

They do not require any complex communication protocols between multiple cores
Single-core CPUs require less power, which typically makes them better suited for IoT devices

Disadvantages:

They are limited in speed, and larger applications will cause them to struggle and potentially freeze
Heat dissipation issues place a hard limit on how fast a single-core CPU can go

Clock rate

One of the key limitations of a single-core application running on a machine is the clock speed of the CPU. When we talk about clock rate, we are essentially talking about how many clock cycles a CPU can execute every second.
For years, we have watched as manufacturers kept pace with Moore's law, which was essentially an observation that the number of transistors one was able to place on a piece of silicon doubled roughly every two years. This doubling of transistors paved the way for exponential gains in single-CPU clock rates, and CPUs went from the low MHz range to the 4-5 GHz clock speeds we see on Intel's i7 6700K processor. But with transistors getting as small as a few nanometers across, this is inevitably coming to an end. We have started to hit the boundaries of physics, and unfortunately, if we go any smaller, we will start to be hit by the effects of quantum tunneling. Due to these physical limitations, we need to start looking at other methods to improve the speeds at which we are able to compute things. This is where Martelli's model of scalability comes into play.

The Martelli model of scalability

Alex Martelli, the author of Python Cookbook, came up with a model of scalability, which Raymond Hettinger discussed in his brilliant hour-long talk "Thinking about Concurrency", given at PyCon Russia 2016. This model describes three different types of problems and programs:

1 core: single-threaded and single-process programs
2-8 cores: multithreaded and multiprocess programs
9+ cores: distributed computing

The first category, single-core and single-threaded, is able to handle a growing number of problems due to the constant improvements in the speed of single-core CPUs; as a result, the second category is being rendered more and more obsolete. We will eventually hit a limit on the speed at which a 2-8 core system can run, and at that point we will have to look at other methods, such as multi-CPU systems or even distributed computing. If your problem is worth solving quickly and requires a lot of power, the sensible approach is to go with the distributed computing category and spin up multiple machines and multiple instances of your program in order to tackle your problem in a truly parallel manner. Large enterprise systems that handle hundreds of millions of requests are the main inhabitants of this category. You will typically find that these enterprise systems are deployed across tens, if not hundreds, of high-performance, incredibly powerful servers in various locations around the world.

Time-sharing: the task scheduler

One of the most important parts of the operating system is the task scheduler. It acts as the maestro of the orchestra, directing everything with impeccable precision and incredible timing and discipline. This maestro has only one real goal: to ensure that every task has a chance to run through to completion. The when and where of a task's execution, however, are non-deterministic. That is to say, if we gave a task scheduler two identical competing processes one after the other, there is no guarantee that the first process would complete first. This non-deterministic nature is what makes concurrent programming so challenging.
An excellent example that highlights this non-deterministic behavior is the following code:

import threading
import time
import random

counter = 1

def workerA():
    global counter
    while counter < 1000:
        counter += 1
        print("Worker A is incrementing counter to {}".format(counter))
        sleepTime = random.randint(0,1)
        time.sleep(sleepTime)

def workerB():
    global counter
    while counter > -1000:
        counter -= 1
        print("Worker B is decrementing counter to {}".format(counter))
        sleepTime = random.randint(0,1)
        time.sleep(sleepTime)

def main():
    t0 = time.time()
    thread1 = threading.Thread(target=workerA)
    thread2 = threading.Thread(target=workerB)
    thread1.start()
    thread2.start()
    thread1.join()
    thread2.join()
    t1 = time.time()
    print("Execution Time {}".format(t1-t0))

if __name__ == '__main__':
    main()

Here we have two competing threads in Python, each trying to accomplish its own goal: decrementing the counter to -1,000 or, conversely, incrementing it to 1,000. On a single-core processor, there is the possibility that worker A manages to complete its task before worker B has a chance to execute, and the same can be said for worker B. However, there is a third possibility: the task scheduler keeps switching between worker A and worker B indefinitely, and neither ever completes. The above code incidentally also shows one of the dangers of multiple threads accessing shared resources without any form of synchronization, such as a lock guarding the counter. There is no accurate way to determine what will happen to our counter, and as such, our program could be considered unreliable.

Multicore processors

We now have some idea of how single-core processors work, so it is time to take a look at multicore processors. Multicore processors contain multiple independent processing units, or "cores". Each core contains everything it needs in order to execute a sequence of stored instructions, and each core follows its own cycle:

Fetch: This step involves fetching instructions from program memory. It is dictated by a program counter (PC), which identifies the location of the next step to execute.
Decode: The core converts the instruction it has just fetched into a series of signals that trigger various other parts of the CPU.
Execute: Finally, the core runs the instruction that has just been fetched and decoded; the results of this execution are typically stored in a CPU register.

Having multiple cores gives us the advantage of being able to work independently on multiple Fetch -> Decode -> Execute cycles. This style of architecture enables us to create higher-performance programs that leverage parallel execution.

Advantages of multicore processors:

We are no longer bound by the same performance limitations that a single-core processor is bound by
Applications that are able to take advantage of multiple cores will tend to run faster if well designed

Disadvantages of multicore processors:

They require more power than your typical single-core processor
Cross-core communication is no simple feat; there are multiple different ways of doing it

Summary

In this article, we covered a multitude of topics, including the differences between concurrency and parallelism. We also looked at how they both leverage the CPU in different ways.


Selenium Testing Tools

Packt
23 Feb 2015
8 min read
In this article by Raghavendra Prasad MG, author of the book Learning Selenium Testing Tools, Third Edition, you will be introduced to Selenium IDE and WebDriver through a demonstration of their installation, basic features, and advanced features, along with the implementation of an automation framework and the programming language basics required for automation. With automation being a key success factor for any software organization, everybody is looking at freeware, community-supported tools like Selenium. Anybody who is willing to learn and work on automation with Selenium has an opportunity to learn the tool from the basic to the advanced stage with this book, and the book can become a lifetime reference for the reader.

Key features of the book

The following are the key features of the book:

The book contains information from the basic level to advanced levels, and the reader does not need any prerequisites.
The book contains real-world examples, which help the reader relate them to real scenarios.
The book covers the basics of Java that are required for Selenium automation, so the reader need not go through other books filled with information that is nowhere required for Selenium automation.
The book covers automation framework design and implementation, which will definitely help the reader build and implement his or her own automation framework.

What you will learn from this book

There are a lot of things you will learn from the book. A few of them are mentioned as follows:

History of Selenium and its evolution
Working with previous versions of Selenium, that is, Selenium IDE
Selenium WebDriver, from basic to advanced usage
Basics of Java (only what is required for Selenium automation)
WebElement handling with Selenium WebDriver
Page Object Factory implementation
Automation framework types, design, and implementation
Building utilities for automation

Who this book is for

The book is for manual testers. Any software professional, as well as anyone who wants to make a career in Selenium automation testing, can use this book. It also helps automation testers and automation architects who want to build or implement automation frameworks on the Selenium automation tool.

What this book covers

The book covers the following major topics:

Selenium IDE

Selenium IDE is a Firefox add-on developed originally by Shinya Kasatani as a way to use the original Selenium Core code without having to copy Selenium Core onto the server. Selenium Core is the set of key JavaScript modules that allows Selenium to drive the browser. It has been developed using JavaScript so that it can interact with the DOM (Document Object Model) using native JavaScript calls. Selenium IDE has been developed to allow testers and developers to record their actions as they follow the workflow that they need to test.

Locators

Locators show how we can find elements on the page to be used in our tests. We will use XPath, CSS, link text, and ID to find elements on the page so that we can interact with them. Locators allow us to find elements on a page that can be used in our tests. In the last chapter, we managed to work against a page that had decent locators. In HTML, it is seen as good practice to make sure that every element you need to interact with has an ID attribute and a name attribute.
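For example, markup along these lines (a hypothetical sketch, reusing the id and name values that appear in the locator examples below) is easy to locate by either attribute:

<input type="button" id="inputButton" name="buttonFind" value="Find" />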
Unfortunately, following best practices can be extremely difficult, especially when the HTML is built dynamically on the server before being sent back to the browser. The following are the locators used in Selenium IDE:

ID: Identifies an element by its ID attribute on the page. Example: id=inputButton
name: Identifies an element by its name attribute on the page. Example: name=buttonFind
link: Identifies links by their text. Example: link=index
XPath: Identifies an element by XPath. Example: xpath=//div[@class='classname']
CSS: Identifies an element by CSS selector. Example: css=#divinthecenter
DOM: Identifies an element through the DOM. Example: dom=document.getElementById("inputButton")

Selenium WebDriver

The primary feature of Selenium WebDriver is the integration of the WebDriver API, designed to provide a simpler, more concise programming interface while addressing some limitations of the Selenium-RC API. Selenium WebDriver was developed to better support dynamic web pages where elements of a page may change without the page itself being reloaded. WebDriver's goal is to supply a well-designed, object-oriented API that provides improved support for modern advanced web application testing problems.

Finding elements

When working with WebDriver on a web application, we need to find elements on the page. This is the core of being able to work with the application at all: all the methods for performing actions on the web application, such as typing and clicking, require that we find the element first.

Finding an element on the page by its ID

The first thing we are going to look at is finding an element by ID. Searching for elements by ID is one of the easiest ways to find an element. We start with findElementById(). This method is a helper method that sets an argument for a more generic findElement call. We will now see how we can use it in action. The method's signature looks like the following line of code:

findElementById(String using);

The using variable takes the ID of the element that you wish to look for. It will return a WebElement object that we can then work with.

Using findElementById()

We find an element on the page by using the findElementById() method that is on each of the browser driver classes. findElement calls return a WebElement object that we can perform actions on. Follow these steps to see how it works:

1. Open your Java IDE (IntelliJ and Eclipse are the ones most commonly used).
2. We are going to use the following command:

WebElement element = ((FindsById)driver).findElementById("verifybutton");

3. Run the test from the IDE.

Page Objects

In this section of the article, we are going to have a look at how we can apply some best practices to tests. You will learn how to make maintainable test suites that will allow you to update tests in seconds. We will have a look at creating your own DSL so that people can see intent. We will create tests using the Page Object pattern.

Working with FirefoxDriver

FirefoxDriver is the easiest driver to use, since everything that we need is bundled with the Java client bindings. We do the basic task of loading the browser and typing into the page as follows:

Update the setUp() method to load the FirefoxDriver:

driver = new FirefoxDriver();

Now we need to find an element.
We will find the one with the ID nextBid:

WebElement element = driver.findElement(By.id("nextBid"));

Now we need to type into that element as follows:

element.sendKeys("100");

Run your test; it should look like the following:

import org.openqa.selenium.*;
import org.openqa.selenium.firefox.*;
import org.testng.annotations.*;

public class TestChapter6 {

  WebDriver driver;

  @BeforeTest
  public void setUp(){
    driver = new FirefoxDriver();
    driver.get("http://book.theautomatedtester.co.uk/chapter4");
  }

  @AfterTest
  public void tearDown(){
    driver.quit();
  }

  @Test
  public void testExamples(){
    WebElement element = driver.findElement(By.id("nextBid"));
    element.sendKeys("100");
  }
}

We are currently witnessing an explosion of mobile devices in the market. A lot of them are more powerful than your average computer was just over a decade ago. This means that, in addition to having nice, clean, responsive, and functional desktop applications, we are starting to have to make sure the same basic functionality is available to mobile devices. We are going to look at how we can set up mobile devices to be used with Selenium WebDriver. We will learn the following topics:

How to use the stock browser on Android
How to test with Opera Mobile
How to test on iOS

Understanding Selenium Grid

Selenium Grid is a version of Selenium that allows teams to set up a number of Selenium instances and then have one central point to send Selenium commands to. This differs from Selenium Remote WebDriver, where we always had to explicitly say where the Selenium Server was, as well as know which browsers that server could handle. With Selenium Grid, we just ask for a specific browser, and the hub that is part of Selenium Grid routes all the Selenium commands through to the Remote Control we want.

Summary

We have understood and learned what Selenium is and its evolution from IDE to WebDriver and Grid. In addition, we have learned how to identify WebElements using WebDriver, its design pattern, and locators through WebDriver. We have also learned about automation framework design and implementation, and mobile application automation on Android and iOS. Finally, we understood the concept of Selenium Grid.


Creating a Budget for your Business with GnuCash

Packt
16 Feb 2011
4 min read
Why do you need to create a budget?

There are two main reasons why you may want to create budgets. The first is that you want a trip planner for your business: something you will use on a day-to-day basis to run your business and make decisions. The second is that you are seeking outside finance for your business from a bank, investor, or other lender, who will require you to submit your business plan along with projected financials.

Time for action – creating a budget for your business

You are going to create a budget for the next three months to serve as a guide for your operations. Typically, investors, banks, and other lenders will need financial projections for a longer period: at a minimum, one-year projections, which may go up to 3 to 5 years in many cases.

1. From the menu, select Actions | Budget | New Budget. A new budget screen will open.
2. Click on the Options toolbar button. The Budget Options dialog will open. For this tutorial, we are going to select a beginning date three months back; this is only for the purposes of this tutorial and will allow us to quickly run Budget vs. Actual reports. In the Budget Period pane, change the beginning date to a date three months ago.
3. Change the Number of Periods to 3.
4. Type in the budget name MACS Jun-Aug Budget and click on OK. The screen will show a list of accounts with a column for each month. The date shown in the title of each column is the beginning of that period.
5. Now enter the values by simply clicking on a cell and entering the amount.

Using the Tab key while entering budget amounts: don't use the Tab key. The value entered in the previous field seems to vanish into thin air if you use the Tab key. Instead, use the Enter key and the mouse.

When you are done entering all the values, don't forget to save your changes. Now that the budget has been created, you are ready to run the reports:

1. From the menu, select Reports | Budget | Budget Report.
2. In the Options dialog, select all the Income and Expenses accounts in the Accounts tab.
3. Check Show Difference in the Display tab and click on OK to see the report.

We are going to create the Cash Flow Budget in a spreadsheet. Go ahead and copy the data from the preceding report to the spreadsheet of your choice, and put in additional rows and formulas that compute the running cash balance. Looking at the cash flow over a six-month period makes it easier to see some of the trends and business challenges clearly (a sketch of this calculation appears at the end of this article).

What just happened?

What if you had tomorrow's news... TODAY? His name is Gary Hobson. He gets tomorrow's newspaper today. He doesn't know how. He doesn't know why. All he knows is that when the early edition hits his doorstep, he has twenty-four hours to set things right. You may recall that in the TV series Early Edition, Kyle Chandler, who plays the role of Gary Hobson, uses this knowledge to prevent terrible events each day.

What if we told you that you can get tomorrow's news for your business today? You can prevent terrible events from happening to your business. You can get tomorrow's sales, expenses, and cash flow in the form of a budget. Mistakes are far less costly when made on paper than with actual dollars.

Sometimes budgets are referred to as projections. For example, banks, investors, and lenders will ask for a business plan with profit and loss, balance sheet, and cash flow projections.
Other times, these are called forecasts, especially when referring to sales forecasts. Regardless of whether we call them budgets, projections, or forecasts, we are referring to the future. Unlike the rest of bookkeeping, which is concerned with the past, budgeting is the one area that tries to look into the crystal ball and attempts to see what the future might look like, or what you are committing to make it look like. If you are running a business without a budget, I am sure there are times when the thought flashes through your mind, "I wish I had known that earlier." Your budget is the crystal ball that enables you to see the future and do something about it. Generally, when you complete a budget, you will have a number of revelations. For example, you might find that your cash flow is going into negative territory in the third month. The budget allows you to perceive problems before they occur and alter your plans to prevent those problems.
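To make that kind of revelation concrete, here is a minimal sketch of the cumulative cash-flow calculation behind such a budget, written in JavaScript; the opening balance and monthly figures are made-up illustrative numbers, not data from the report above:

// Hypothetical projections: an opening balance plus six months of income and expenses.
var openingBalance = 1000;
var income   = [4000, 3500, 2000, 2500, 4500, 5000];
var expenses = [3500, 4000, 4500, 3000, 3500, 3500];

var balance = openingBalance;
income.forEach(function (inc, month) {
  balance += inc - expenses[month]; // net cash flow for the month
  console.log("Month " + (month + 1) + " closing balance: " + balance);
  if (balance < 0) {
    console.log("  Warning: cash flow is negative in month " + (month + 1));
  }
});

With these sample numbers, the balance first dips below zero in the third month: exactly the kind of problem a budget lets you catch on paper and plan around before it happens.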


An Introduction to Hibernate and Spring: Part 1

Packt
29 Dec 2009
4 min read
This article by Ahmad Seddighi introduces Spring and Hibernate, explaining what persistence is, why it is important, and how it is implemented in Java applications. It provides a theoretical discussion of Hibernate and how Hibernate solves problems related to persistence. Finally, we take a look at Spring and the role of Spring in persistence.

Hibernate and Spring are open-source Java frameworks that simplify developing Java/JEE applications, from simple stand-alone applications running on a single JVM to complex enterprise applications running on full-blown application servers. Hibernate and Spring allow developers to produce scalable, reliable, and effective code. Both frameworks support declarative configuration and work with a POJO (Plain Old Java Object) programming model (discussed later in this article), minimizing the dependence of application code on the frameworks and making development more productive and portable.

Although the aims of these frameworks partially overlap, for the most part each is used for a different purpose. The Hibernate framework aims to solve the problems of managing data in Java: those problems which are not fully solved by the Java persistence API, JDBC (Java Database Connectivity), persistence providers, DBMSes (Database Management Systems), and their mediator language, SQL (Structured Query Language). In contrast, Spring is a multitier framework that is not dedicated to a particular area of application architecture. However, Spring does not provide its own solution for issues such as persistence, for which there are already good solutions. Rather, Spring unifies preexisting solutions under its consistent API and makes them easier to use. As mentioned, one of these areas is persistence. Spring can be integrated with a persistence solution, such as Hibernate, to provide an abstraction layer over the persistence technology and produce more portable, manageable, and effective code. Furthermore, Spring provides other services spread over the application architecture, such as inversion of control and aspect-oriented programming (explained later in this article), decoupling the application's components and modularizing common behaviors.

This article looks at the motivation and goals of Hibernate and Spring. It begins with an explanation of why Hibernate is needed, where it can be used, and what it can do. We'll take a quick look at Hibernate's alternatives, exploring their advantages and disadvantages. I'll outline the valuable features that Hibernate offers and explain how it can solve the problems of the traditional approach to Java persistence. The discussion continues with Spring. I'll explain what Spring is, what services it offers, and how it can help to develop a high-quality data-access layer with Hibernate.

Persistence management in Java

Persistence has long been a challenge in the enterprise community. Many persistence solutions, from primitive file-based approaches to modern object-oriented databases, have been presented. For any of these approaches, the goal is to provide reliable, efficient, flexible, and scalable persistence. Among these competing solutions, relational databases (because of certain advantages) have been most widely accepted in the IT world. Today, almost all enterprise applications use relational databases. A relational database is an application that provides the persistence service.
It provides many persistence features, such as indexing data to provide speedy searches; solves relevant problems, such as protecting data from unauthorized access; and handles many complications, such as preserving relationships among data. Creating, modifying, and accessing relational databases is fairly simple. All such databases present data in two-dimensional tables and support SQL, which is relatively easy to learn and understand. Moreover, they provide other services, such as transactions and replication. These advantages are enough to ensure the popularity of relational databases.

To provide support for relational databases in Java, the JDBC API was developed. JDBC allows Java applications to connect to relational databases, express their persistence needs as SQL expressions, and transmit data to and from databases. Using this API, SQL statements can be passed to the database, and the results can be returned to the application, all through a driver.

The mismatch problem

JDBC handles many persistence issues and problems in communicating with relational databases, and it provides the functionality needed for this purpose. However, there remains an unsolved problem in Java applications: Java applications are essentially object-oriented programs, whereas relational databases store data in relational form. While applications use object-oriented forms of data, databases represent data in two-dimensional table form. This situation leads to the so-called object-relational paradigm mismatch, which (as we will see later) causes many problems in communication between object-oriented and relational environments. For many reasons, including ease of understanding, simplicity of use, efficiency, robustness, and even popularity, we cannot discard relational databases. However, the mismatch cannot be eliminated in an effortless and straightforward manner.


Presenting Data Using ADF Faces

Packt
20 Mar 2014
7 min read
In this article, you will learn how to present a single record, multiple records, and master-detail records on your page using different components and methodologies. You will also learn how to enable internationalization and localization in your application by using a resource bundle and the different bundle options you can have. From this article onward, we will not use the HR schema. We will instead use the FacerHR schema in the Git repository under the BookDatabaseSchema folder; read the README.txt file for information on how to create the database schema. This schema will be used for the whole book, so you need to do this only once. Make sure you validate your database connection information for the recipes to work without problems.

Presenting single records on your page

In this recipe, we will address the need to present a single record on a page, which is useful when you want to focus on a specific record in a table of your database; for example, a user's profile can be represented by a single record in an employees table. The application and its model have been created for you; you can see it by cloning the PresentingSingleRecord application from the Git repository.

How to do it...

In order to present a single record on a page, follow these steps:

1. Open the PresentingSingleRecord application.
2. Create a bounded task flow by right-clicking on ViewController and navigating to New | ADF Task Flow. Name the task flow single-employee-info and uncheck the Create with Page Fragments option. You can create a task flow with page fragments, but you will need a page to host it in the end; alternatively, you can create a whole page if the task flow holds only one activity and is not reusable. In this case, however, I prefer to create a page-based task flow for fast deployment cycles, and to train you to always start from a task flow.
3. Add a View activity inside the task flow and name it singleEmployee.
4. Double-click on the newly created activity to create the page; this page will be based on the Oracle Three Column layout. Close the dialog by pressing the OK button.
5. Navigate to Data Controls pane | HrAppModuleDataControl, drag-and-drop EmployeesView1 into the white area of the page template, and select ADF Form from the drop-down list that appears as you drop the view object.
6. Check the Row Navigation option so that the form has first, previous, next, and last buttons for navigating through the records.
7. Group the attributes based on their category: the Personal Information group should include the EmployeeId, FirstName, LastName, Email, and PhoneNumber attributes; the Job Information group should include HireDate, Job, Salary, and CommissionPct; and the last group, Department Information, includes both the ManagerId and DepartmentId attributes. Select multiple components by holding the Ctrl key and clicking on the Group button at the top-right corner.
8. Change the Display Label values of the three groups to eInfo, jInfo, and dInfo respectively. The Display Label option is a little misleading when it comes to groups in a form, as groups don't have titles; Display Label is assigned to the Id attribute of the af:group component that wraps the components, which can't contain spaces and should be reasonably small. For Input Text w/Label or Output Text w/Label components, however, it ends up in the Label attribute of the panelLabelAndMessage component.
9. Change the Component to Use option of all attributes from ADF Input Text w/Label to ADF Output Text w/Label. You might think that checking the Read-Only Form option would have the same effect, but it won't: it changes the readOnly attribute of the input text to true, which makes the input text non-updateable, but it doesn't change the component type.
10. Change the Display Label option of the attributes to have more human-readable labels for the end user.
11. Finish by pressing the OK button.

You can save yourself the trouble of editing the Display Label option every time you create a component based on a view object by changing the Label attribute in UI Hints on the entity object or view object. More information can be found in the documentation at http://docs.oracle.com/middleware/1212/adf/ADFFD/bcentities.htm#sm0140.

Examine the page structure in the Structure pane in the bottom-left corner. A panel form layout can be found inside the center facet of the page template. This panel form layout represents an ADF form, and inside it there are three group components; each group has a panel label and message for each field of the view object. At the bottom of the panel form layout, you can locate a footer facet; expand it to see a panel group layout that holds all the navigation buttons. The footer facet identifies the location of the buttons, which will be at the bottom of this panel form layout even if some components appear inside the page markup after this facet.

Examine the panel form layout properties by clicking on the Properties pane, which is usually located in the bottom-right corner. It allows you to change attributes such as Max Columns, Rows, Field Width, or Label Width. Change these attributes to change the form and to have more than one column. If you can't see the Structure or Properties pane, you can show them again by navigating to Window menu | Structure or Window menu | Properties.

Save everything and run the page, placing it inside the adf-config task flow.

How it works...

The best component for representing a single record is a panel form layout, which presents the user with an organized form layout for different input/output components. If you examine the page source code, you can see an expression like #{bindings.FirstName.inputValue}, where the FirstName binding inside the Bindings section of the page definition points to EmployeesView1Iterator. An iterator implies multiple records, so why is FirstName presenting only a single record? It's because the iterator is aware of the current row, which represents the row in focus; this row will always point to the first row of the view object's select statement when you render the page. Pressing the different buttons on the form changes the Current Row value, and thus the point of focus changes to reflect a different row, based on the button you pressed.

When you are dealing with a single record, you can show it as input text or any of the user input components; alternatively, you can present it as output text if you are just viewing it. In this recipe, you can see that the Group component is represented as a line in the user interface when you run the page. If you were to change the panel form layout's attributes, such as Max Columns or Rows, you would see a different view.
Max Columns represents the maximum number of columns to show in a form, which defaults to 3 for desktops and 2 for PDAs; however, if this panel form layout is inside another panel form layout, the Max Columns value will always be 1. The Rows attribute represents the number of rows after which a new column starts; it has a default value of 2^31 - 1. You can learn more about each attribute by clicking on the gear icon that appears when you hover over an attribute and reading the information on the property's Help page. The benefit of a panel form layout is that all labels are aligned properly; it organizes everything for you, similar to the HTML table element.

See also

Check the following reference for more information about arranging content in forms: http://docs.oracle.com/middleware/1212/adf/ADFUI/af_orgpage.htm#CDEHDJEA


Using JavaScript with HTML

Packt
12 Jan 2016
13 min read
In this article by Syed Omar Faruk Towaha, author of the book JavaScript Projects for Kids, we will discuss HTML, the HTML canvas, implementing JavaScript code in our HTML pages, and a few JavaScript operations.

HTML

HTML is a markup language. What does that mean? Well, a markup language processes and presents text using specific codes for formatting, styling, and layout design. There are lots of markup languages; for example, Business Narrative Markup Language (BNML), ColdFusion Markup Language (CFML), Opera Binary Markup Language (OBML), Systems Biology Markup Language (SBML), Virtual Human Markup Language (VHML), and so on. However, on the modern web, we use HTML. HTML is based on Standard Generalized Markup Language (SGML). SGML was basically used to design document papers. There are a number of versions of HTML; HTML 5 is the latest version, and we will use it throughout this book.

Before you start learning HTML, think about your favorite website. What does the website contain? A few web pages? You may see some text, a few images, one or two text fields, buttons, and some more elements on each of the web pages. Each of these elements is formatted by HTML.

Let me introduce you to a web page. On your Internet browser, go to https://www.google.com. The first thing that you will see at the top of your browser is the title of the web page we loaded. Below that are some links and text, and the word Google in the middle of the page is an image. Below the image are two buttons, and Sign in at the top right of the page is a button as well.

Let's demonstrate the basic structure of HTML. The term tag will be used frequently to demonstrate the structure. An HTML tag is nothing but a few predefined words between the less-than sign (<) and the greater-than sign (>). Therefore, the structure of a tag is <WORD>, where WORD is the predefined text that is recognized by Internet browsers. This type of tag is called an open tag. There is another type of tag, known as a close tag, whose structure is </WORD>; you just have to put a forward slash after the less-than sign. After this section, you will be able to make your own web page with a few lines of text using HTML.

The structure of an HTML page is built from the following tags:

1. The <html> tag, an open tag, which is closed at the end of the document by the </html> tag. These tags tell your Internet browser that all the text and scripts between them are an HTML document.
2. The <head> tag, an open tag, which is closed by the </head> tag. These tags contain the title, script, style, and metadata of a web page.
3. The <title> tag, which is closed by the </title> tag. This tag contains the title of a web page. The page we visited earlier had the title Google. To produce that title, you would type:

<title> Google </title>

4. The closing </title> tag.
5. The closing </head> tag.
6. The <body> tag, which is closed by the </body> tag. Everything you can see on a web page is written between these two tags; every element, image, link, and so on is formatted here. To see This is a web page on your browser, you would type the following:
<body> This is a web page </body>

7. The closing </body> tag.
8. The closing </html> tag.

Your first webpage

You have just learned the eight basic tags of an HTML page. You can now make your own web page. How? Why not try it with me?

1. Open your text editor and press Ctrl + N, which will open a new untitled file.
2. Type the following HTML code on the blank page:

<html>
  <head>
    <title>
      My Webpage!
    </title>
  </head>
  <body>
    This is my webpage :)
  </body>
</html>

3. Press Ctrl + Shift + S, which will prompt you to save your code somewhere on your computer.
4. Type a suitable name in the File Name field. I would like to name my HTML file webpage, therefore I typed webpage.html. You may be wondering why I added an .html extension: as this is an HTML document, you need to add .html or .htm after the name that you give your web page.
5. Press the Save button. This will create an HTML document on your computer.
6. Go to the directory where you saved your HTML file.

Remember that you can give your web page any name. However, this name will not be visible on your browser, and it is not the title of your web page. It is good practice not to keep a blank space in your web page's name. Say you want to name your HTML file This is my first webpage.html. Your computer will have no trouble showing the result in an Internet browser, but when your website is on a server, this name may cause a problem. Therefore, I suggest you keep an underscore (_) wherever you need a space, similar to the following: This_is_my_first_webpage.html.

Now, double-click on the file. You will see your first web page in your Internet browser! You typed My Webpage! between the <title> and </title> tags, which is why your browser shows this as the page title, and you typed This is my webpage :) between the <body> and </body> tags, which is why you can see that text on the page. Congratulations! You created your first web page!

You can edit the code and other text of the webpage.html file by right-clicking on the file and selecting Open with Atom. You must save (Ctrl + S) your code and text before reopening the file with your browser.

Implementing Canvas

To add a canvas to your HTML page, you need to define the height and width of your canvas in the <canvas> and </canvas> tags as shown in the following:

<html>
  <head>
    <title>Canvas</title>
  </head>
  <body>
    <canvas id="canvasTest" width="200" height="100"
      style="border:2px solid #000;">
    </canvas>
  </body>
</html>

We have defined the canvas id as canvasTest, which will be used to play with the canvas. We used inline CSS on our canvas; a 2-pixel solid border is used to have a better view of the canvas.

Adding JavaScript

Now, we are going to add a few lines of JavaScript to our canvas. We need to add our JavaScript just after the <canvas>…</canvas> tags, in <script> and </script> tags.
Drawing a rectangle

To test our canvas, let's draw a rectangle in the canvas by typing the following code:

<script type="text/javascript">
  var canvas = document.getElementById("canvasTest"); // called our canvas by id
  var canvasElement = canvas.getContext("2d"); // made our canvas 2D
  canvasElement.fillStyle = "black"; // filled the canvas black
  canvasElement.fillRect(10, 10, 50, 50); // created a rectangle
</script>

In the script, we declared two JavaScript variables. The canvas variable is used to hold the content of our canvas, using the canvas ID that we used in our <canvas>…</canvas> tags. The canvasElement variable is used to hold the context of the canvas. We set fillStyle to black so that the rectangle we want to draw becomes black when filled. We used canvasElement.fillRect(x, y, w, h); for the shape of the rectangle, where x is the distance of the rectangle from the x axis, y is the distance of the rectangle from the y axis, and the w and h parameters are the width and height of the rectangle, respectively. The full code is similar to the following, and its output is a black square drawn on the canvas:

<html>
  <head>
    <title>Canvas</title>
  </head>
  <body>
    <canvas id="canvasTest" width="200" height="100"
      style="border:2px solid #000;">
    </canvas>
    <script type="text/javascript">
      var canvas = document.getElementById("canvasTest"); // called our canvas by id
      var canvasElement = canvas.getContext("2d"); // made our canvas 2D
      canvasElement.fillStyle = "black"; // filled the canvas black
      canvasElement.fillRect(10, 10, 50, 50); // created a rectangle
    </script>
  </body>
</html>

Drawing a line

To draw a line in the canvas, you need to insert the following code in your <script> and </script> tags:

<script type="text/javascript">
  var c = document.getElementById("canvasTest");
  var canvasElement = c.getContext("2d");
  canvasElement.moveTo(0,0);
  canvasElement.lineTo(100,100);
  canvasElement.stroke();
</script>

Here, canvasElement.moveTo(0,0); makes our line start from the (0,0) coordinate of the canvas, canvasElement.lineTo(100,100); makes the line diagonal, and canvasElement.stroke(); makes the line visible. The result is a diagonal line drawn from the top-left corner of the canvas.

A quick exercise

Draw a line using canvas and JavaScript that is parallel to the y axis of the canvas.
Draw a rectangle 300 px high and 200 px wide, and draw a line on the same canvas touching the rectangle.

Assignment operators

An assignment operator assigns a value to an operand. I believe you already know about assignment operators, don't you? Well, you have used an equal sign (=) between a variable and its value; by doing this, you assigned the value to the variable. Let's look at the following example:

var name = "Sherlock Holmes";

The Sherlock Holmes string is assigned to the name variable. You have already learned about the increment and decrement operators. Can you tell me what the output of the following code will be?

var x = 3;
x *= 2;
document.write(x);

The output will be 6. Do you remember why this happened? The x *= 2; statement is equivalent to x = x * 2;: x is equal to 3 and is then multiplied by 2, and the final number (3 x 2 = 6) is assigned back to the same x variable.
Let's perform the following exercise. What is the output of the following code?

var w = 32;
var x = 12;
var y = 9;
var z = 5;
w++;
w--;
x*2;
y = x;
y--;
z%2;
document.write("w = "+w+", x = "+x+", y = "+y+", z = "+z);

The output we get is w = 32, x = 12, y = 11, z = 5. Note that x*2 and z%2 compute values but never assign them to anything, so x and z remain unchanged.

JavaScript comparison and logical operators

If you want to do something logical and compare two numbers or variables in JavaScript, you need to use a few comparison operators. The following are the basic comparison operators:

== : Equal to
!= : Not equal to
> : Greater than
< : Less than
>= : Greater than or equal to
<= : Less than or equal to

According to mozilla.org, "Object-oriented programming (OOP) is a programming paradigm that uses abstraction to create models based on the real world. OOP uses several techniques from previously established paradigms, including modularity, polymorphism, and encapsulation." Nicholas C. Zakas states that "OOP languages typically are identified through their use of classes to create multiple objects that have the same properties and methods." You have probably assumed that JavaScript is an object-oriented programming language, and yes, you are absolutely right. Let's see why it is an OOP language. We call a computer programming language object-oriented if it has the following features:

Inheritance
Polymorphism
Encapsulation
Abstraction

Before going any further, let's discuss objects. We create objects in JavaScript in the following manner:

var person = new Object();
person.name = "Harry Potter";
person.age = 22;
person.job = "Magician";

We created an object for person and added a few properties to it. If we want to access any property of the object, we need to call the property. Consider that you want a pop up of the name property of the preceding person object. You can do this with the following method:

person.callName = function(){
  alert(this.name);
};

We can also write the preceding code as follows:

var person = {
  name: "Harry Potter",
  age: 22,
  job: "Magician",
  callName: function(){
    alert(this.name);
  }
};

Inheritance in JavaScript

To inherit means to derive something (characteristics, quality, and so on) from one's parents or ancestors. In programming languages, when a class or an object is based on another class or object in order to maintain the same behavior as its parent, this is known as inheritance. We can also say that this is a concept of gaining properties or behaviors from something else. Suppose X inherits something from Y; it is as if X is a type of Y. JavaScript has this inheritance capability. Let's look at an example. A bird inherits from animal, as a bird is a type of animal, and therefore a bird can do the same things as an animal. This kind of relationship in JavaScript is a little complex and needs a syntax. We need to use a special object called prototype, which assigns properties to a type. We need to remember that only functions have prototypes. Our Animal function should look similar to the following:

function Animal(){
  // We can code here.
};

To add a few properties to the function, we need to add a prototype, as shown in the following:

Animal.prototype.eat = function(){
  alert("Animal can eat.");
};
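To see this inheritance in action, here is a minimal sketch; the Bird function and the use of Object.create are illustrative additions for this article, not code taken from the book:

function Bird(){
  // Birds can have their own code here.
};

// Make Bird inherit the properties of Animal through the prototype chain.
Bird.prototype = Object.create(Animal.prototype);

// Add a behavior that only Bird has.
Bird.prototype.fly = function(){
  alert("Bird can fly.");
};

var sparrow = new Bird();
sparrow.eat(); // alerts "Animal can eat.", inherited from Animal
sparrow.fly(); // alerts "Bird can fly."

Because a bird is a type of animal, calling sparrow.eat() walks up the prototype chain and finds the eat method defined on Animal.prototype.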
Summary
In this article, you have learned how to write HTML code and implement JavaScript code with the HTML file and HTML canvas. You have also learned a few arithmetic operations with JavaScript. The sections in this article come from different chapters of the book, so the flow may feel a little disjointed. I hope you read the original book and practice the code that we discussed there.
Resources for Article:
Further resources on this subject:
Walking You Through Classes [article]
JavaScript Execution with Selenium [article]
Object-Oriented JavaScript with Backbone Classes [article]

Deployment and DevOps

Packt
14 Oct 2016
16 min read
In this article by Makoto Hashimoto and Nicolas Modrzyk, the authors of the book Clojure Programming Cookbook, we will cover the recipe Clojure on Amazon Web Services. (For more resources related to this topic, see here.)
Clojure on Amazon Web Services
This recipe is a standalone dish where you can learn how to combine the elegance of Clojure with Amazon Web Services (AWS). AWS launched in 2006 and is used by many businesses for its easy-to-use web services. This style of on-demand service is becoming more and more popular: you can use compute resources and software services on demand, without having to prepare hardware or install software yourself.
You will mostly make use of the amazonica library, which is a comprehensive Clojure client for the entire Amazon AWS set of APIs. This library wraps the Amazon AWS APIs and supports most AWS services, including EC2, S3, Lambda, Kinesis, Elastic Beanstalk, Elastic MapReduce, and RedShift. This recipe has received a lot of its content and love from Robin Birtle, a leading member of the Clojure Community in Japan.
Getting ready
You need an AWS account and credentials to use AWS, so this recipe starts by showing you how to do the setup and acquire the necessary keys to get started.
Signing up on AWS
You need to sign up for AWS if you don't have an AWS account yet. In this case, go to https://aws.amazon.com, click on Sign In to the Console, and follow the instructions for creating your account:
To complete the sign up, enter the number of a valid credit card and a phone number.
Getting the access key and secret access key
To call the API, you now need your AWS access key and secret access key. Go to the AWS console, click on your name in the top right corner of the screen, and select Security Credentials, as shown in the following screenshot:
Select Access Keys (Access Key ID and Secret Access Key), as shown in the following screenshot:
Then, the following screen appears; click on New Access Key:
You can see your access key and secret access key, as shown in the following screenshot:
Copy and save these strings for later use.
Setting up dependencies in your project.clj
Let's add the amazonica library to your project.clj and restart your REPL:
:dependencies [[org.clojure/clojure "1.8.0"] [amazonica "0.3.67"]]
How to do it…
From here on, we will go through some sample usage of the core Amazon services, accessed with Clojure and the amazonica library. The three main ones we will review are as follows:
EC2, Amazon's Elastic Compute Cloud, which allows you to run virtual machines on Amazon's cloud
S3, the Simple Storage Service, which gives you cloud-based storage
SQS, the Simple Queue Service, which gives you cloud-based message queuing and processing
Let's go through each of these one by one.
Using EC2
Let's assume you have an EC2 micro instance in the Tokyo region:
First of all, we will declare the amazonica core and ec2 namespaces we are going to use:
(ns aws-examples.ec2-example
  (:require [amazonica.aws.ec2 :as ec2]
            [amazonica.core :as core]))
We will set the access key and secret access key to enable the AWS client API to access AWS; core/defcredential does this as follows:
(core/defcredential "Your Access Key" "Your Secret Access Key" "your region")
;;=> {:access-key "Your Access Key", :secret-key "Your Secret Access Key", :endpoint "your region"}
The region you specify is a region identifier such as ap-northeast-1, ap-south-1, or us-west-2.
To get the full list of regions, use ec2/describe-regions:
(ec2/describe-regions)
;;=> {:regions [{:region-name "ap-south-1", :endpoint "ec2.ap-south-1.amazonaws.com"}
;;=> .....
;;=> {:region-name "ap-northeast-2", :endpoint "ec2.ap-northeast-2.amazonaws.com"}
;;=> {:region-name "ap-northeast-1", :endpoint "ec2.ap-northeast-1.amazonaws.com"}
;;=> .....
;;=> {:region-name "us-west-2", :endpoint "ec2.us-west-2.amazonaws.com"}]}
ec2/describe-instances returns a very long data structure, as the following shows:
(ec2/describe-instances)
;;=> {:reservations [{:reservation-id "r-8efe3c2b", :requester-id "226008221399",
;;=> :owner-id "182672843130", :group-names [], :groups [], ....
To get only the necessary information about instances, we define the following get-instances-info function:
(defn get-instances-info []
  (let [inst (ec2/describe-instances)]
    (->> (mapcat :instances (inst :reservations))
         (map #(vector [:node-name (->> (filter (fn [x] (= (:key x) "Name")) (:tags %)) first :value)]
                       [:status (get-in % [:state :name])]
                       [:instance-id (:instance-id %)]
                       [:private-dns-name (:private-dns-name %)]
                       [:global-ip (-> % :network-interfaces first :private-ip-addresses first :association :public-ip)]
                       [:private-ip (-> % :network-interfaces first :private-ip-addresses first :private-ip-address)]))
         (map #(into {} %))
         (sort-by :node-name))))
;;=> #'aws-examples.ec2-example/get-instances-info
Let's try the function:
(get-instances-info)
;;=> ({:node-name "ECS Instance - amazon-ecs-cli-setup-my-cluster",
;;=> :status "running",
;;=> :instance-id "i-a1257a3e",
;;=> :private-dns-name "ip-10-0-0-212.ap-northeast-1.compute.internal",
;;=> :global-ip "54.199.234.18",
;;=> :private-ip "10.0.0.212"}
;;=> {:node-name "EcsInstanceAsg",
;;=> :status "terminated",
;;=> :instance-id "i-c5bbef5a",
;;=> :private-dns-name "",
;;=> :global-ip nil,
;;=> :private-ip nil})
As the preceding example shows, we can obtain the list of instance IDs. So, we can start and stop instances using ec2/start-instances and ec2/stop-instances accordingly:
(ec2/start-instances :instance-ids '("i-c5bbef5a"))
;;=> {:starting-instances
;;=> [{:previous-state {:code 80, :name "stopped"},
;;=> :current-state {:code 0, :name "pending"},
;;=> :instance-id "i-c5bbef5a"}]}
(ec2/stop-instances :instance-ids '("i-c5bbef5a"))
;;=> {:stopping-instances
;;=> [{:previous-state {:code 16, :name "running"},
;;=> :current-state {:code 64, :name "stopping"},
;;=> :instance-id "i-c5bbef5a"}]}
Using S3
Amazon S3 is secure, durable, and scalable storage in the AWS cloud. It's easy to use for developers and other users. S3 also provides high durability and availability at low cost. The durability is 99.999999999% and the availability is 99.99%.
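The S3 examples that follow assume that the amazonica S3 namespace has been required; a minimal sketch mirroring the EC2 setup (the namespace name is our own choice):
(ns aws-examples.s3-example
  (:require [amazonica.aws.s3 :as s3]
            [amazonica.core :as core]))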
Let's create S3 buckets named makoto-bucket-1, makoto-bucket-2, and makoto-bucket-3 as follows:
(s3/create-bucket "makoto-bucket-1")
;;=> {:name "makoto-bucket-1"}
(s3/create-bucket "makoto-bucket-2")
;;=> {:name "makoto-bucket-2"}
(s3/create-bucket "makoto-bucket-3")
;;=> {:name "makoto-bucket-3"}
s3/list-buckets returns bucket information:
(s3/list-buckets)
;;=> [{:creation-date #object[org.joda.time.DateTime 0x6a09e119 "2016-08-01T07:01:05.000+09:00"],
;;=> :owner
;;=> {:id "3d6e87f691897059c23bcfb88b17da55f0c9aa02cc2a44e461f1594337059d27",
;;=> :display-name "tokoma1"},
;;=> :name "makoto-bucket-1"}
;;=> {:creation-date #object[org.joda.time.DateTime 0x7392252c "2016-08-01T17:35:30.000+09:00"],
;;=> :owner
;;=> {:id "3d6e87f691897059c23bcfb88b17da55f0c9aa02cc2a44e461f1594337059d27",
;;=> :display-name "tokoma1"},
;;=> :name "makoto-bucket-2"}
;;=> {:creation-date #object[org.joda.time.DateTime 0x4d59b4cb "2016-08-01T17:38:59.000+09:00"],
;;=> :owner
;;=> {:id "3d6e87f691897059c23bcfb88b17da55f0c9aa02cc2a44e461f1594337059d27",
;;=> :display-name "tokoma1"},
;;=> :name "makoto-bucket-3"}]
We can see that there are three buckets in your AWS console, as shown in the following screenshot:
Let's delete two of the three buckets using s3/delete-bucket (which wraps the SDK's deleteBucket method), and then list the remaining buckets:
(s3/delete-bucket "makoto-bucket-2")
;;=> nil
(s3/delete-bucket "makoto-bucket-3")
;;=> nil
(s3/list-buckets)
;;=> [{:creation-date #object[org.joda.time.DateTime 0x56387509 "2016-08-01T07:01:05.000+09:00"],
;;=> :owner {:id "3d6e87f691897059c23bcfb88b17da55f0c9aa02cc2a44e461f1594337059d27", :display-name "tokoma1"}, :name "makoto-bucket-1"}]
We can see only one bucket now, as shown in the following screenshot:
Now we will demonstrate how to send your local data to S3. s3/put-object uploads file content to the specified bucket and key. The following code uploads /etc/hosts to the key test/hosts in makoto-bucket-1:
(s3/put-object :bucket-name "makoto-bucket-1" :key "test/hosts" :file (java.io.File. "/etc/hosts"))
;;=> {:requester-charged? false, :content-md5 "HkBljfktNTl06yScnMRsjA==",
;;=> :etag "1e40658df92d353974eb249c9cc46c8c", :metadata {:content-disposition nil,
;;=> :expiration-time-rule-id nil, :user-metadata nil, :instance-length 0, :version-id nil,
;;=> :server-side-encryption nil, :etag "1e40658df92d353974eb249c9cc46c8c", :last-modified nil,
;;=> :cache-control nil, :http-expires-date nil, :content-length 0, :content-type nil,
;;=> :restore-expiration-time nil, :content-encoding nil, :expiration-time nil, :content-md5 nil,
;;=> :ongoing-restore nil}}
s3/list-objects lists the objects in a bucket as follows:
(s3/list-objects :bucket-name "makoto-bucket-1")
;;=> {:truncated? false, :bucket-name "makoto-bucket-1", :max-keys 1000, :common-prefixes [],
;;=> :object-summaries [{:storage-class "STANDARD", :bucket-name "makoto-bucket-1",
;;=> :etag "1e40658df92d353974eb249c9cc46c8c",
;;=> :last-modified #object[org.joda.time.DateTime 0x1b76029c "2016-08-01T07:01:16.000+09:00"],
;;=> :owner {:id "3d6e87f691897059c23bcfb88b17da55f0c9aa02cc2a44e461f1594337059d27",
;;=> :display-name "tokoma1"}, :key "test/hosts", :size 380}]}
To obtain the contents of objects in buckets, use s3/get-object:
(s3/get-object :bucket-name "makoto-bucket-1" :key "test/hosts")
;;=> {:bucket-name "makoto-bucket-1", :key "test/hosts",
;;=> :input-stream #object[com.amazonaws.services.s3.model.S3ObjectInputStream 0x24f810e9
;;=> ......
;;=> :last-modified #object[org.joda.time.DateTime 0x79ad1ca9 "2016-08-01T07:01:16.000+09:00"],
;;=> :cache-control nil, :http-expires-date nil, :content-length 380, :content-type "application/octet-stream",
;;=> :restore-expiration-time nil, :content-encoding nil, :expiration-time nil, :content-md5 nil,
;;=> :ongoing-restore nil}}
The result is a map, and the content is stream data, the value of :object-content. To get the result as a string, we use slurp as follows:
(slurp (:object-content (s3/get-object :bucket-name "makoto-bucket-1" :key "test/hosts")))
;;=> "127.0.0.1tlocalhostn127.0.1.1tphenixnn# The following lines are desirable for IPv6 capable hostsn::1 ip6-localhost ip6-loopbacknfe00::0 ip6-localnetnff00::0 ip6-mcastprefixnff02::1 ip6-allnodesnff02::2 ip6-allroutersnn52.8.30.189 my-cluster01-proxy1 n52.8.169.10 my-cluster01-master1 n52.8.198.115 my-cluster01-slave01 n52.9.12.12 my-cluster01-slave02nn52.8.197.100 my-node01n"
Using Amazon SQS
Amazon SQS is a high-performance, high-availability, and scalable queue service. We will demonstrate how easy it is to handle messages on SQS queues using Clojure:
(ns aws-examples.sqs-example
  (:require [amazonica.core :as core]
            [amazonica.aws.sqs :as sqs]))
To create a queue, you can use sqs/create-queue as follows:
(sqs/create-queue :queue-name "makoto-queue"
                  :attributes {:VisibilityTimeout 3000
                               :MaximumMessageSize 65536
                               :MessageRetentionPeriod 1209600
                               :ReceiveMessageWaitTimeSeconds 15})
;;=> {:queue-url "https://sqs.ap-northeast-1.amazonaws.com/864062283993/makoto-queue"}
To get information about a queue, use sqs/get-queue-attributes as follows:
(sqs/get-queue-attributes "makoto-queue")
;;=> {:QueueArn "arn:aws:sqs:ap-northeast-1:864062283993:makoto-queue", ...
You can configure a dead letter queue using sqs/assign-dead-letter-queue as follows:
(sqs/create-queue "DLQ")
;;=> {:queue-url "https://sqs.ap-northeast-1.amazonaws.com/864062283993/DLQ"}
(sqs/assign-dead-letter-queue (sqs/find-queue "makoto-queue") (sqs/find-queue "DLQ") 10)
;;=> nil
Let's list the queues we have defined:
(sqs/list-queues)
;;=> {:queue-urls
;;=> ["https://sqs.ap-northeast-1.amazonaws.com/864062283993/DLQ"
;;=> "https://sqs.ap-northeast-1.amazonaws.com/864062283993/makoto-queue"]}
The following image shows the SQS console:
Let's examine the URLs of our queues:
(sqs/find-queue "makoto-queue")
;;=> "https://sqs.ap-northeast-1.amazonaws.com/864062283993/makoto-queue"
(sqs/find-queue "DLQ")
;;=> "https://sqs.ap-northeast-1.amazonaws.com/864062283993/DLQ"
To send messages, we use sqs/send-message:
(sqs/send-message (sqs/find-queue "makoto-queue") "hello sqs from Clojure")
;;=> {:md5of-message-body "00129c8cc3c7081893765352a2f71f97", :message-id "690ddd68-a2f6-45de-b6f1-164eb3c9370d"}
To receive messages, we use sqs/receive-message:
(sqs/receive-message "makoto-queue")
;;=> {:messages [
;;=> {:md5of-body "00129c8cc3c7081893765352a2f71f97",
;;=> :receipt-handle "AQEB.....", :message-id "bd56fea8-4c9f-4946-9521-1d97057f1a06",
;;=> :body "hello sqs from Clojure"}]}
To remove all messages in your queues, we use sqs/purge-queue:
(sqs/purge-queue :queue-url (sqs/find-queue "makoto-queue"))
;;=> nil
To delete queues, we use sqs/delete-queue:
(sqs/delete-queue "makoto-queue")
;;=> nil
(sqs/delete-queue "DLQ")
;;=> nil
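One step worth adding before we move on: in a real consumer, you normally delete each message after successfully processing it; otherwise it reappears on the queue once its visibility timeout expires. A sketch of our own using the message map returned by sqs/receive-message (queue name as above):
(let [queue-url (sqs/find-queue "makoto-queue")
      received  (sqs/receive-message :queue-url queue-url
                                     :max-number-of-messages 1)
      message   (first (:messages received))]
  (println "processing:" (:body message))
  ;; acknowledge the message by deleting it via its receipt handle
  (sqs/delete-message (assoc message :queue-url queue-url)))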
Serverless Clojure with AWS Lambda
Lambda is an AWS product that allows you to run Clojure code without the hassle and expense of setting up and maintaining a server environment. Behind the scenes, there are still servers involved, but as far as you are concerned, it is a serverless environment. Upload a JAR and you are good to go. Code running on Lambda is invoked in response to an event, such as a file being uploaded to S3, or according to a specified schedule.
In production environments, Lambda is normally used in a wider AWS deployment that includes standard server environments, handling discrete computational tasks, particularly those that benefit from Lambda's horizontal scaling, which just happens with hardly any configuration required. For Clojurians working on personal projects, Lambda is a wonderful combination of power and limitation. Just how far can you hack Lambda given the constraints imposed by AWS?
Clojure namespace helloworld
Start off with a clean, empty project generated using lein new. From there, in your IDE of choice, create a package and a new Clojure source file. In the following example, the package is com.sakkam and the source file uses the Clojure namespace helloworld. The entry point to your Lambda code is a Clojure function that is exposed as a method of a Java class using Clojure's gen-class. Similar to use and require, the gen-class function can be included in the Clojure ns definition, as in the following, or specified separately. You can use any name you want for the handler function, but its prefix must be a hyphen unless an alternate prefix is specified as part of the :methods definition:
(ns com.sakkam.lambda.helloworld
  (:gen-class :methods [^:static [handler [String] String]]))
(defn -handler [s]
  (println (str "Hello," s)))
From the command line, use lein uberjar to create a JAR that can be uploaded to AWS Lambda.
Hello World – the AWS part
Getting your Hello World to work is now a matter of creating a new Lambda within AWS, uploading your JAR, and configuring your handler.
Hello Stream
The handler method we used in our Hello World Lambda function was coded directly and could be extended to accept custom Java classes as part of the method signature. However, for more complex Java integrations, implementing one of AWS's standard interfaces for Lambda is both straightforward and feels more like idiomatic Clojure. The following example replaces our own definition of a handler method with an implementation of a standard interface that is provided as part of the aws-lambda-java-core library.
First of all, add the dependency [com.amazonaws/aws-lambda-java-core "1.0.0"] into your project.clj. While you are modifying your project.clj, also add in the dependency for [org.clojure/data.json "0.2.6"], since we will be manipulating JSON-formatted objects as part of this exercise. Then, either create a new Clojure namespace or modify your existing one so that it looks like the following (the handler function must be named -handleRequest, since handleRequest is specified as part of the interface):
(ns aws-examples.lambda-example
  (:gen-class :implements [com.amazonaws.services.lambda.runtime.RequestStreamHandler])
  (:require [clojure.java.io :as io]
            [clojure.data.json :as json]
            [clojure.string :as str]))
(defn -handleRequest [this is os context]
  (let [w (io/writer os)
        parameters (json/read (io/reader is) :key-fn keyword)]
    (println "Lambda Hello Stream Output ")
    (println "this class: " (class this))
    (println "is class:" (class is))
    (println "os class:" (class os))
    (println "context class:" (class context))
    (println "Parameters are " parameters)
    (.flush w)))
Use lein uberjar again to create a JAR file.
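Because -handleRequest is an ordinary Clojure function underneath the generated class method, you can smoke-test it from the REPL (in the aws-examples.lambda-example namespace) before uploading anything; a sketch of our own, with a made-up JSON payload:
(require '[clojure.java.io :as io])
(def fake-output (java.io.ByteArrayOutputStream.))
;; nil stands in for the 'this' and 'context' arguments we do not use
(-handleRequest nil
                (io/input-stream (.getBytes "{\"name\": \"test\"}"))
                fake-output
                nil)
;; prints the class and parameter diagnostics to the REPL's stdout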
Since we have an existing Lambda function in AWS, we can overwrite the JAR used in the Hello World example. Since the handler function name has changed, we must modify our Lambda configuration to match. This time, the default test that provides parameters in JSON format should work as is, and the result will look something like the following:
We can very easily get a more interesting test of Hello Stream by configuring this Lambda to run whenever a file is uploaded to S3. At the Lambda management page, choose the Event Sources tab, click on Add Event, and choose an S3 bucket to which you can easily add a file. Now, upload a file to the specified S3 bucket and then navigate to the logs of the Hello World Lambda function. You will find that Hello World has been automatically invoked, and a fairly complicated object that represents the uploaded file is supplied as a parameter to our Lambda function.
Real-world Lambdas
To graduate from a Hello World Lambda to real-world Lambdas, the chances are you are going to need richer integration with other AWS facilities. As a minimum, you will probably want to write a file to an S3 bucket or publish a notification to an SNS topic. Amazon provides an SDK that makes this integration straightforward for developers using standard Java. For Clojurians, using the Amazon Clojure wrapper Amazonica is a very fast and easy way to achieve the same.
How it works…
Here, we will explain how AWS works.
What is Amazon EC2?
Using EC2, we don't need to buy hardware or install operating systems. Amazon provides various types of instances for customers' use cases. Each instance type has various combinations of CPU, memory, storage, and networking capacity. Some instance types are given in the following table. You can select appropriate instances according to the characteristics of your application.
Instance type  Description
M4             Designed for general-purpose computing; this family provides a balance of CPU, memory, and network bandwidth
C4             Designed for applications that consume CPU resources; the highest CPU performance at the lowest cost
R3             For memory-intensive applications
G2             Has NVIDIA GPUs and is used for graphics applications and GPU computing applications such as deep learning
The following table shows the model variations of the M4 instance type. You can choose the best one among these models.
Model        vCPU  RAM (GiB)  EBS bandwidth (Mbps)
m4.large     2     8          450
m4.xlarge    4     16         750
m4.2xlarge   8     32         1,000
m4.4xlarge   16    64         2,000
m4.10xlarge  40    160        4,000
Amazon S3
Amazon S3 is storage for the cloud. It provides a simple web interface that allows you to store and retrieve data. The S3 API is easy to use while ensuring security. S3 provides cloud storage services that are scalable, reliable, fast, and inexpensive.
Buckets and Keys
Buckets are containers for objects stored in Amazon S3. Objects are stored in buckets. A bucket name is unique across all regions in the world, so bucket names are the top-level identities in S3 and the units of charging and access control. Keys are the unique identifiers for an object within a bucket. Every object in a bucket has exactly one key. Keys are the second-level identifiers and must be unique within a bucket. To identify an object, you use the combination of bucket name and key name.
Objects
Objects are accessed by bucket name and key. Objects consist of data and metadata. Metadata is a set of name-value pairs that describe the characteristics of an object.
Examples of metadata are the date last modified and the content type. Objects can have multiple versions of data.
There's more…
It is clearly impossible to review all the different APIs for all the different services offered via the Amazonica library, but you have probably got the feeling of having tremendous powers in your hands right now. (Don't forget to give that credit card back to your boss now…) Some other examples of Amazon services are as follows:
Amazon IoT: This offers a way to have connected devices easily and securely interact with cloud applications and other devices.
Amazon Kinesis: This gives you ways of easily loading massive volumes of streaming data into AWS and analyzing it through streaming techniques.
Summary
We hope you enjoyed this appetizer to the book Clojure Programming Cookbook, which presents a set of progressive readings to improve your Clojure skills and make Clojure your de facto everyday language for professional and efficient work. This book covers different general programming topics, always to the point, with some fun, so that each recipe feels less like a classroom and more like an entertaining read, with challenging exercises left to the reader to gradually build up skills. See you in the book!
Resources for Article:
Further resources on this subject:
Customizing Xtext Components [article]
Reactive Programming and the Flux Architecture [article]
Setup Routine for an Enterprise Spring Application [article]


New SOA Capabilities in BizTalk Server 2009: WCF SQL Server Adapter

Packt
26 Oct 2009
3 min read
Do not go where the path may lead; go instead where there is no path and leave a trail.
- Ralph Waldo Emerson
Many of the patterns and capabilities shown in this article are compatible with the last few versions of the BizTalk Server product. So what's new in BizTalk Server 2009? BizTalk Server 2009 is the sixth formal release of the BizTalk Server product. This release has a heavy focus on platform modernization through new support for Windows Server 2008, Visual Studio .NET 2008, SQL Server 2008, and the .NET Framework 3.5. This will surely help developers who have already moved to these platforms in their day-to-day activities but have been forced to maintain separate environments solely for BizTalk development efforts. Let's get started.
What is the WCF SQL Adapter?
The BizTalk Adapter Pack 2.0 now contains five system and data adapters, including SAP, Siebel, Oracle databases, Oracle applications, and SQL Server. What are these adapters, and how are they different from the adapters available for previous versions of BizTalk?
Up until recently, BizTalk adapters were built using a commonly defined BizTalk Adapter Framework. This framework prescribed interfaces and APIs for adapter developers in order to elicit a common look and feel for the users of the adapters. Moving forward, adapter developers are encouraged by Microsoft to use the new WCF LOB Adapter SDK. As you can guess from the name, this new adapter framework, which can be considered an evolution of the BizTalk Adapter Framework, is based on WCF technologies.
All of the adapters in the BizTalk Adapter Pack 2.0 are built upon the WCF LOB Adapter SDK. What this means is that all of the adapters are built as reusable, metadata-rich components that are surfaced to users as WCF bindings. So, much like you have a wsHttp or netTcp binding, now you have a sqlBinding or sapBinding. As you would expect from a WCF binding, there is a rich set of configuration attributes for these adapters, and they are no longer tightly coupled to BizTalk itself. Microsoft has made connectivity a commodity, and no longer do organizations have to spend tens of thousands of dollars to connect to line-of-business systems like SAP through expensive, BizTalk-only adapters.
This latest version of the BizTalk Adapter Pack now includes a SQL Server adapter, which replaces the legacy BizTalk-only SQL Server adapter. What do we get from this SQL Server adapter that makes it so much better than the old one? The following comparison lists each feature with the support offered by the classic SQL adapter and the WCF SQL adapter, in that order:
Execute create-read-update-delete statements on tables and views; execute stored procedures and generic T-SQL statements: Partial (send operations only support stored procedures and updategrams) / Yes
Database polling via FOR XML: Yes / Yes
Database polling via traditional tabular results: No / Yes
Proactive database push via SQL Query Notification: No / Yes
Expansive adapter configuration which impacts connection management and transaction behavior: No / Yes
Support for composite transactions which allow aggregation of operations across tables or procedures into a single atomic transaction: No / Yes
Rich metadata browsing and retrieval for finding and selecting database operations: No / Yes
Support for the latest data types (for example, XML) and the SQL Server 2008 platform: No / Yes
Reusable outside of BizTalk applications by WCF or basic HTTP clients: No / Yes
Adapter extension and configuration through out-of-the-box WCF components or custom WCF behaviors: No / Yes
Dynamic WSDL generation which always reflects the current state of the system, instead of a fixed contract which always requires explicit updates: No / Yes
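To give a flavor of what "surfaced as a WCF binding" means in practice, here is a sketch of a client-side configuration using the sqlBinding; the server, database, and contract names are placeholders of our own, and the exact URI format and binding properties should be verified against the adapter documentation for your version:
<system.serviceModel>
  <client>
    <!-- mssql://SERVER/INSTANCE/DATABASE? is the WCF-SQL URI scheme;
         an empty instance segment denotes the default instance -->
    <endpoint name="SqlAdapterEndpoint"
              address="mssql://localhost//AdventureWorks?"
              binding="sqlBinding"
              bindingConfiguration="SqlAdapterBinding"
              contract="TypedProcedures_dbo" />
  </client>
  <bindings>
    <sqlBinding>
      <binding name="SqlAdapterBinding" />
    </sqlBinding>
  </bindings>
</system.serviceModel>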


Apache OFBiz Entity Engine

Packt
08 Nov 2010
8 min read
Apache OFBiz Cookbook: Over 60 simple but incredibly effective recipes for taking control of OFBiz
Optimize your OFBiz experience and save hours of frustration with this timesaving collection of practical recipes covering a wide range of OFBiz topics.
Get answers to the most commonly asked OFBiz questions in an easy-to-digest reference style of presentation.
Discover insights into OFBiz design, implementation, and best practices by exploring real-life solutions.
Each recipe in this Cookbook is crafted to describe not only "how" to accomplish a specific task, but also "why" the technique works, to ensure you get the most out of your OFBiz implementation.
Read more about this book
(For more resources on Apache, see here.)
Introduction
Secure and reliable data storage is the key business driver behind any data management strategy. That OFBiz takes data management seriously, and does not leave all the tedious and error-prone data management tasks to the application developer or the integrator, is evident from the visionary design and implementation of the Entity Engine.
The Entity Engine is a database-agnostic application development and deployment framework seamlessly integrated into the OFBiz project code. It handles all the day-to-day data management tasks necessary to securely and reliably operate an enterprise. These tasks include, but are not limited to, support for:
Simultaneously connecting to an unlimited number of databases
Managing an unlimited number of database connection pools
Overseeing database transactions
Handling database error conditions
The true power of the Entity Engine is that it provides OFBiz applications with all the tools, utilities, and an Application Programming Interface (API) necessary to easily read and write data to all configured data sources in a consistent and predictable manner, without concern for database connections, the physical location of the data, or the underlying data type. To best understand how to effectively use the Entity Engine to meet all your data storage needs, a quick review of Relational Database Management Systems (RDBMS) is in order:
RDBMS tables are the basic organizational structure of a relational database. An OFBiz entity is a model of a database table. As a model, entities describe a table's structure, content format, and any applicable associations a table may have with other tables.
Database tables are further broken down into one or more columns. Table columns have data type and format characteristics constrained by the underlying RDBMS and assigned to them as part of a table's definition. The entity model describes a mapping of table columns to entity fields.
Physically, data is stored in tables as one or more rows. A record is a unique instance of the content within a table's row. Users access table records by reading and writing one or more rows as mapped by an entity's model. In OFBiz, records are called entity values.
Keys are a special type of field. Although there are several types of keys, OFBiz is primarily concerned with primary keys and foreign keys. A table's primary key is a column or group of columns that uniquely identifies a row within a table. The value of the primary key uniquely identifies a table's row throughout the entire database. A foreign key is a key used in one table to represent the value of a primary key in a related table. Foreign keys are used to establish unique and referentially correct relationships between one or more tables.
Relationships are any associations that tables may have with one another.
Views are "virtual" tables composed of columns from one or more tables in the database. OFBiz has a similar construct (although it differs from the traditional RDBMS definition of a "view") in the view-entity.
Note: while this discussion has focused on RDBMS, there is nothing to preclude you from using the Entity Engine in conjunction with any other types of data source(s).
The Entity Engine provides all the tools and utilities necessary to effectively and securely access an unlimited number of databases, regardless of the physical location of the data source, as shown in the following figure:
Changing the default database
Out of the box, OFBiz is integrated with the Apache Derby database system (http://db.apache.org/derby). While Derby is sufficient to handle OFBiz during software development, evaluation, and functional testing, it is not recommended for environments that experience high transaction volumes. In particular, it is not recommended for use in production environments.
Getting ready
Before configuring an external database, the following steps have to be taken care of:
Before changing the OFBiz Entity Engine configuration to use a remote data source, you must first create the remote database; the remote database must exist. Note: if you are not going to install the OFBiz schema and/or seed data on the remote database, but rather intend to use it as is, you will not need to create a database. You will need, however, to define entities for each remote database table you wish to access, and assign those entities to one or more entity groups.
Add a user/owner for the remote database. OFBiz will access the database as this user. Make sure the user has all the necessary privileges to create and remove database tables.
Add a user/owner password (if desired or necessary) to the remote database.
Ensure that the IP port the database is listening on for remote connections is open and clear of any firewall obstructions (for example, by default, PostgreSQL listens for connections on port 5432).
Add the appropriate database driver to the ~framework/entity/lib/jdbc directory. For example, if you are using PostgreSQL version 8.3, download the postgresql-8.3-605.jdbc2.jar driver from the PostgreSQL website (http://jdbc.postgresql.org/download.html).
How to do it...
To configure an external database, follow these steps:
Open the Entity Engine's configuration file located at ~framework/entity/config/entityengine.xml.
Within the entityengine.xml file, configure the remote database's usage settings. A suggested method for doing this is to take an existing datasource element entry and modify that to reflect the necessary settings for a remote database.
There are examples provided for most of the commonly used databases. For example, to configure a remote PostgreSQL database with the name myofbiz_db, a username of ofbiz, and a password of ofbiz, edit the localpostnew configuration entry as shown here:
<datasource name="localpostnew" helper-class="org.ofbiz.entity.datasource.GenericHelperDAO"
    schema-name="public"
    field-type-name="postnew"
    check-on-start="true"
    add-missing-on-start="true"
    use-fk-initially-deferred="false"
    alias-view-columns="false"
    join-style="ansi"
    result-fetch-size="50"
    use-binary-type-for-blob="true">
  <read-data reader-name="seed"/>
  <read-data reader-name="seed-initial"/>
  <read-data reader-name="demo"/>
  <read-data reader-name="ext"/>
  <inline-jdbc jdbc-driver="org.postgresql.Driver"
      jdbc-uri="jdbc:postgresql://127.0.0.1/myofbiz_db"
      jdbc-username="ofbiz"
      jdbc-password="ofbiz"
      isolation-level="ReadCommitted"
      pool-minsize="2"
      pool-maxsize="250"/>
</datasource>
Configure the default delegator for this data source:
<delegator name="default" entity-model-reader="main" entity-group-reader="main" entity-eca-reader="main" distributed-cache-clear-enabled="false">
  <group-map group-name="org.ofbiz" datasource-name="localpostnew"/>
  <group-map group-name="org.ofbiz.olap" datasource-name="localderbyolap"/>
</delegator>
Save and close the entityengine.xml file.
From the OFBiz install directory, rebuild OFBiz by running the ant run-install command.
Start OFBiz.
Test by observing that the database was created and populated. You may use the WebTools entity reference page (https://localhost:8443/webtools/control/entityref) to search for your newly created entities, or a third-party tool designed to work with your specific database.
How it works...
The Entity Engine is configured using the entityengine.xml file. Whenever OFBiz is restarted, the Entity Engine initializes itself by first referencing this file, and then building and testing all the designated database connections. In this way, an unlimited number of data source connections, database types, and even low-level driver combinations may be applied at runtime without affecting the higher-level database access logic. By abstracting the connection using one or more delegators, OFBiz further offloads low-level database connection management from the developer, and handles all connection maintenance, data mappings, and the default transaction configuration for an unlimited number of target databases.
To configure one or more database connections, add datasource element declarations with settings like those shown above. To specify that the Entity Engine should connect to a database using a JDBC driver, and to configure the specific connection parameters to pass, set the inline-jdbc element attributes accordingly.
Connecting to a remote database
A "remote" database is any data source that is not the default Derby database. A remote database may be network connected and/or installed on the local server. The Entity Engine supports simultaneous connections to an unlimited number of remote databases in addition to, or as a replacement for, the default instance of Derby. Each remote database connection requires a datasource element entry in the entityengine.xml file. Adding and removing database connections may be performed at any time; however, entityengine.xml file changes are only effective upon OFBiz restart. As an illustration, a sketch of a MySQL datasource entry follows.
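The following sketch is modeled on the PostgreSQL example above; the database name, credentials, and pool sizes are placeholders to adapt, and the attribute values should be cross-checked against the localmysql sample entry that ships with entityengine.xml:
<datasource name="localmysql" helper-class="org.ofbiz.entity.datasource.GenericHelperDAO"
    field-type-name="mysql"
    check-on-start="true"
    add-missing-on-start="true"
    alias-view-columns="false"
    join-style="ansi-no-parenthesis">
  <read-data reader-name="seed"/>
  <read-data reader-name="seed-initial"/>
  <read-data reader-name="demo"/>
  <read-data reader-name="ext"/>
  <inline-jdbc jdbc-driver="com.mysql.jdbc.Driver"
      jdbc-uri="jdbc:mysql://127.0.0.1/myofbiz_db?autoReconnect=true"
      jdbc-username="ofbiz"
      jdbc-password="ofbiz"
      isolation-level="ReadCommitted"
      pool-minsize="2"
      pool-maxsize="250"/>
</datasource>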


Diving into OOP Principles

Packt
17 May 2016
21 min read
In this article by Andrea Chiarelli, the author of the book Mastering JavaScript Object-Oriented Programming, we will discuss the OOP nature of JavaScript by showing that it complies with the OOP principles. We will also explain the main differences with classical OOP. The following topics will be addressed in the article:
What are the principles of the OOP paradigm?
Support of abstraction and modeling
How JavaScript implements Aggregation, Association, and Composition
The Encapsulation principle in JavaScript
How JavaScript supports the inheritance principle
Support of the polymorphism principle
What are the differences between classical OOP and JavaScript's OOP
(For more resources related to this topic, see here.)
Object-Oriented Programming principles
Object-Oriented Programming (OOP) is one of the most popular programming paradigms. Many developers use languages based on this programming model, such as C++, Java, C#, Smalltalk, Objective-C, and many others. One of the keys to the success of this programming approach is that it promotes a modular design and code reuse: two important features when developing complex software.
However, the Object-Oriented Programming paradigm is not based on a formal standard specification. There is no technical document that defines what OOP is and what it is not. The OOP definition is mainly based on a common sense taken from the papers published by early researchers such as Kristen Nygaard, Alan Kay, William Cook, and others. An interesting discussion about various attempts to define Object-Oriented Programming can be found online at the following URL:
http://c2.com/cgi/wiki?DefinitionsForOo
Anyway, a widely accepted definition to classify a programming language as Object Oriented is based on two requirements: its capability to model a problem through objects and its support of a few principles that grant modularity and code reuse.
In order to satisfy the first requirement, a language must enable a developer to describe reality using objects and to define relationships among objects, such as the following:
Association: This is the object's capability to refer to another independent object
Aggregation: This is the object's capability to embed one or more independent objects
Composition: This is the object's capability to embed one or more dependent objects
Commonly, the second requirement is satisfied if a language supports the following principles:
Encapsulation: This is the capability to concentrate into a single entity data and the code that manipulates it, hiding its internal details
Inheritance: This is the mechanism by which an object acquires some or all features from one or more other objects
Polymorphism: This is the capability to process objects differently based on their data type or structure
Meeting these requirements is what usually allows us to classify a language as Object Oriented.
Is JavaScript Object Oriented?
Once we have established the principles commonly accepted for defining a language as Object Oriented, can we affirm that JavaScript is an OOP language? Many developers do not consider JavaScript a true Object-Oriented language, due to its lack of a class concept and because it does not enforce compliance with OOP principles. However, we can see that our informal definition makes no explicit reference to classes. Features and principles are required for objects. Classes are not a real requirement, but they are sometimes a convenient way to abstract sets of objects with common properties.
So, a language can be Object Oriented if it supports objects even without classes, as in JavaScript. Moreover, the OOP principles required for a language are intended to be supported. They should not be mandatory in order to do programming in a language. The developer can choose whether or not to use constructs that allow him to create Object-Oriented code. Many criticize JavaScript because developers can write code that breaches the OOP principles. But this is just a choice of the programmer, not a language constraint. It also happens with other programming languages, such as C++.
We can conclude that the lack of abstract classes, and leaving the developer free to use or not use features that support OOP principles, are not a real obstacle to considering JavaScript an OOP language. So, let's analyze in the following sections how JavaScript supports abstraction and the OOP principles.
Abstraction and modeling support
The first requirement for us to consider a language as Object Oriented is its support for modeling a problem through objects. We already know that JavaScript supports objects, but here we should determine whether they are supported in a way that lets us model reality. In fact, in Object-Oriented Programming we try to model real-world entities and processes and represent them in our software. We need a model because it is a simplification of reality: it allows us to reduce complexity, offering a vision from a particular perspective, and helps us to reason about the relationships among entities.
This simplification feature is usually known as abstraction, and it is sometimes considered one of the principles of OOP. Abstraction is the concept of moving the focus from the details and concrete implementation of things to the features that are relevant for a specific purpose, with a more general and abstract approach. In other words, abstraction is the capability to define which properties and actions of a real-world entity have to be represented by means of objects in a program in order to solve a specific problem. For example, thanks to abstraction, we can decide that to solve a specific problem we can represent a person just as an object with name, surname, and age, since other information such as address, height, hair color, and so on is not relevant for our purpose.
More than a language feature, this seems a human capability. For this reason, we prefer not to consider it an OOP principle but a (human) capability that supports modeling. Modeling reality not only involves defining objects with relevant features for a specific purpose. It also includes the definition of relationships between objects, such as Association, Aggregation, and Composition.
Association
Association is a relationship between two or more objects where each object is independent of the others. This means that an object can exist without the other and no object owns the other. Let us clarify with an example. In order to define a parent–child relationship between persons, we can do so as follows:
function Person(name, surname) {
  this.name = name;
  this.surname = surname;
  this.parent = null;
}
var johnSmith = new Person("John", "Smith");
var fredSmith = new Person("Fred", "Smith");
fredSmith.parent = johnSmith;
The assignment of the object johnSmith to the parent property of the object fredSmith establishes an association between the two objects. Of course, the object johnSmith lives independently from the object fredSmith and vice versa. Both can be created and deleted independently of each other.
As we can see from the example, JavaScript allows us to define an association between objects using a simple object reference through a property.
Aggregation
Aggregation is a special form of association relationship where one object has a more prominent role than the other. Usually, this major role determines a sort of ownership of one object in relation to the other. The owner object is often called the aggregate and the owned object is called the component. However, each object has an independent life. An example of an aggregation relationship is the one between a company and its employees, as in the following example:
var company = {
  name: "ACME Inc.",
  employees: []
};
var johnSmith = new Person("John", "Smith");
var marioRossi = new Person("Mario", "Rossi");
company.employees.push(johnSmith);
company.employees.push(marioRossi);
The person objects added to the employees collection help to define the company object, but they are independent from it. If the company object is deleted, each single person still lives. However, the real meaning of a company is bound to the presence of its employees. Again, the code shows us that the aggregation relationship is supported by JavaScript by means of object references.
It is important not to confuse Association with Aggregation. Even if the support for the two relationships is syntactically identical, that is, the assignment or attachment of an object to a property, from a conceptual point of view they represent different situations. Aggregation is the mechanism that allows you to create an object consisting of several objects, while association relates autonomous objects. In any case, JavaScript makes no control over the way in which we associate or aggregate objects. Association and Aggregation impose a constraint that is more conceptual than technical.
Composition
Composition is a strong type of Aggregation, where each component object has no independent life without its owner, the aggregate. Consider the following example:
var person = {name: "John", surname: "Smith",
  address: {
    street: "123 Duncannon Street",
    city: "London",
    country: "United Kingdom"
  }};
This code defines a person with his address represented as an object. The address property is strictly bound to the person object. Its life depends on the life of the person, and it cannot have an independent life without the person. If the person object is deleted, the address object is also deleted. In this case, the strict relation between the person and his address is expressed in JavaScript by directly assigning the literal representing the address to the address property.
OOP principles support
The second requirement that allows us to consider JavaScript an Object-Oriented language involves the support of at least three principles: encapsulation, inheritance, and polymorphism. Let's analyze how JavaScript supports each of these principles.
Encapsulation
Objects are central to the Object-Oriented Programming model, and they represent the typical expression of encapsulation, that is, the ability to concentrate in one entity both data (properties) and functions (methods), hiding the internal details. In other words, the encapsulation principle allows an object to expose just what is needed to use it, hiding the complexity of its implementation. This is a very powerful principle, often found in the real world, that allows us to use an object without knowing how it internally works. Consider for instance how we drive cars. We just need to know how to speed up, brake, and change direction.
We do not need to know how the car works in detail, how its motor burns fuel or transmits movement to the wheels. To understand the importance of this principle in software development as well, consider the following code:
var company = {
  name: "ACME Inc.",
  employees: [],
  sortEmployeesByName: function() {...}
};
It creates a company object with a name, a list of employees, and a method to sort the list of employees using their name property. If we need to get a sorted list of employees of the company, we simply need to know that the sortEmployeesByName() method accomplishes this task. We do not need to know how this method works or which algorithm it implements. That is an implementation detail that encapsulation hides from us. Hiding internal details and complexity has two main reasons:
The first reason is to provide a simplified and understandable way to use an object without the need to understand the complexity inside. In our example, we just need to know that to sort employees, we have to call a specific method.
The second reason is to simplify change management. Changes to the internal sort algorithm do not affect our way of ordering employees by name. We always continue to call the same method. Maybe we will get a more efficient execution, but the expected result will not change.
We said that encapsulation hides internal details in order to simplify both the use of an object and the change of its internal implementation. However, when the internal implementation depends on publicly accessible properties, we risk frustrating the effort of hiding internal behavior. For example, what happens if you assign a string to the employees property of the company object?
company.employees = "this is a joke!";
company.sortEmployeesByName();
The assignment of a string to a property whose value is an array is perfectly legal in JavaScript, since it is a language with dynamic typing. But most probably, we will get an exception when calling the sort method after this assignment, since the sort algorithm expects an array. In this case, the encapsulation principle has not been completely implemented. A general approach to prevent direct access to relevant properties is to replace them with methods. For example, we can redefine our company object as in the following:
function Company(name) {
  var employees = [];
  this.name = name;
  this.getEmployees = function() {
    return employees;
  };
  this.addEmployee = function(employee) {
    employees.push(employee);
  };
  this.sortEmployeesByName = function() { ... };
}
var company = new Company("ACME Inc.");
With this approach, we cannot access the employees property directly, but we need to use the getEmployees() method to obtain the list of employees of the company and addEmployee() to add an employee to the list. This guarantees that the internal state remains really hidden and consistent. The way we created methods for the Company() constructor is not the best one. This is just one possible approach to enforce encapsulation by protecting the internal state of an object.
This kind of data protection is usually called information hiding and, although often linked to encapsulation, it should be considered an autonomous principle. Information hiding deals with the accessibility of an object's members, in particular its properties. While encapsulation concerns hiding details, the information hiding principle usually allows different access levels to the members of an object.
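To see the difference in practice, here is a small sketch of our own (not from the article) that grants read-only access to an internal property using a closure:
function Person(name) {
  var _name = name;              // internal state, reachable only via the closure
  this.getName = function() {    // public, read-only access
    return _name;
  };
}
var person = new Person("John");
console.log(person.getName());   // "John"
console.log(person._name);       // undefined: the internal variable is hidden
person._name = "Fred";           // this creates an unrelated property...
console.log(person.getName());   // ...so the internal state is still "John"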
Inheritance
In Object-Oriented Programming, inheritance enables new objects to acquire the properties of existing objects. This relationship between two objects is very common and can be found in many situations in real life. It usually refers to creating a specialized object starting from a more general one. Let's consider, for example, a person: he has some features such as name, surname, height, weight, and so on. The set of features describes a generic entity that represents a person. Using abstraction, we can select the features needed for our purpose and represent a person as an object:
If we need a special person who is able to program computers, that is, a programmer, we need to create an object that has all the properties of a generic person plus some new properties that characterize the programmer object. For instance, the new programmer object can have a property describing which programming language he knows. Suppose we choose to create the new programmer object by duplicating the properties of the person object and adding to it the programming language knowledge as follows:
This approach is in contrast with the Object-Oriented Programming goals. In particular, it does not reuse existing code, since we are duplicating the properties of the person object. A more appropriate approach should reuse the code created to define the person object. This is where the inheritance principle can help us. It allows us to share common features between objects, avoiding code duplication.
Inheritance is also called subclassing in languages that support classes. A class that inherits from another class is called a subclass, while the class from which it is derived is called the superclass. Apart from the naming, the inheritance concept is the same, although the class-based terminology does not really suit JavaScript.
We can implement inheritance in JavaScript in various ways. Consider, for example, the following constructor of person objects:
function Person() {
  this.name = "";
  this.surname = "";
}
In order to define a programmer as a person specialized in computer programming, we will add a new property describing his knowledge of a programming language: knownLanguage. A simple approach to creating the programmer object that inherits properties from person is based on prototypes. Here is a possible implementation:
function Programmer() {
  this.knownLanguage = "";
}
Programmer.prototype = new Person();
We will create a programmer with the following code:
var programmer = new Programmer();
We will obtain an object that has the properties of the person object (name and surname) and the specific property of the programmer (knownLanguage); that is, the programmer object inherits the person properties. This is a simple example to demonstrate that JavaScript supports the inheritance principle of Object-Oriented Programming at its basic level. Inheritance is a complex concept that has many facets and several variants in programming, many of them dependent on the language used.
Polymorphism
In Object-Oriented Programming, polymorphism is understood in different ways, even if the basis is a common notion: the ability to handle multiple data types uniformly. Support for polymorphism brings benefits in programming that go toward the overall goal of OOP. Mainly, it reduces coupling in our application and, in some cases, allows us to create more compact code.
The most common ways for a programming language to support polymorphism include:
Methods that take parameters with different data types (overloading)
Management of generic types, not known in advance (parametric polymorphism)
Expressions whose type can be represented by a class and classes derived from it (subtype polymorphism or inclusion polymorphism)
In most languages, overloading is what happens when you have two methods with the same name but different signatures. At compile time, the compiler works out which method to call based on matching the types of the invocation arguments against the methods' parameters. The following is an example of method overloading in C#:
public int CountItems(int x) {
  return x.ToString().Length;
}
public int CountItems(string x) {
  return x.Length;
}
The CountItems() method has two signatures: one for integers and one for strings. This allows us to count the number of digits in a number or the number of characters in a string in a uniform manner, just by calling the same method. Overloading can also be expressed through methods with a different number of arguments, as shown in the following C# example:
public int Sum(int x, int y) {
  return Sum(x, y, 0);
}
public int Sum(int x, int y, int z) {
  return x + y + z;
}
Here, we have the Sum() method that is able to sum two or three integers. The correct method definition will be detected on the basis of the number of arguments passed.
As JavaScript developers, we are able to replicate this behavior in our scripts. For example, the C# CountItems() method becomes the following in JavaScript:
function countItems(x) {
  return x.toString().length;
}
While the Sum() example will be as follows:
function sum(x, y, z) {
  x = x?x:0;
  y = y?y:0;
  z = z?z:0;
  return x + y + z;
}
Or, using the more convenient ES6 syntax:
function sum(x = 0, y = 0, z = 0) {
  return x + y + z;
}
These examples demonstrate that JavaScript supports overloading in a more immediate way than strongly-typed languages. In strongly-typed languages, overloading is sometimes called static polymorphism, since the correct method to invoke is detected statically by the compiler at compile time.
Parametric polymorphism allows a method to work on parameters of any type. It is often also called generics, and many languages support it in built-in methods. For example, in C#, we can define a list of items whose type is not defined in advance using the List<T> generic type. This allows us to create lists of integers, strings, or any other type. We can also create our own generic class, as shown by the following C# code:
public class Stack<T> {
  private T[] items;
  private int count;
  public void Push(T item) { ... }
  public T Pop() { ... }
}
This code defines a typical stack implementation whose item type is not defined. We will be able to create, for example, a stack of strings with the following code:
var stack = new Stack<String>();
Due to its dynamic data typing, JavaScript supports parametric polymorphism implicitly. In fact, the type of a function's parameters is inherently generic, since a parameter's type is set when a value is assigned to it. The following is a possible implementation of a stack constructor in JavaScript:
function Stack() {
  this.stack = [];
  this.pop = function(){
    return this.stack.pop();
  }
  this.push = function(item){
    this.stack.push(item);
  }
}
Subtype polymorphism allows us to handle objects of different types, but with an inheritance relationship, in a consistent way. This means that wherever I can use an object of a specific type, I can also use an object of a type derived from it.
Let's now see a C# example of the same concept:

public class Person {
  public string Name {get; set;}
  public string SurName {get; set;}
}

public class Programmer : Person {
  public string KnownLanguage {get; set;}
}

public void WriteFullName(Person p) {
  Console.WriteLine(p.Name + " " + p.SurName);
}

var a = new Person();
a.Name = "John";
a.SurName = "Smith";

var b = new Programmer();
b.Name = "Mario";
b.SurName = "Rossi";
b.KnownLanguage = "C#";

WriteFullName(a); //result: John Smith
WriteFullName(b); //result: Mario Rossi

In this code, we again present the definition of the Person class and its derived class Programmer, and we define the WriteFullName() method, which accepts an argument of type Person. Thanks to subtype polymorphism, we can also pass objects of type Programmer to WriteFullName(), since Programmer is derived from Person. In fact, from a conceptual point of view, a programmer is also a person, so subtype polymorphism fits a concrete representation of reality. Of course, the C# example can be easily reproduced in JavaScript, since we have no type constraints. Let's see the corresponding code:

function Person() {
  this.name = "";
  this.surname = "";
}

function Programmer() {
  this.knownLanguage = "";
}
Programmer.prototype = new Person();

function writeFullName(p) {
  console.log(p.name + " " + p.surname);
}

var a = new Person();
a.name = "John";
a.surname = "Smith";

var b = new Programmer();
b.name = "Mario";
b.surname = "Rossi";
b.knownLanguage = "JavaScript";

writeFullName(a); //result: John Smith
writeFullName(b); //result: Mario Rossi

As we can see, the JavaScript code is quite similar to the C# code, and the result is the same.

JavaScript OOP versus classical OOP

The discussion conducted so far shows how JavaScript supports the fundamental Object-Oriented Programming principles and can be considered a true OOP language, as many others are. However, JavaScript differs from most other languages in certain specific features, which can create some concern for developers used to working with programming languages that implement the classical OOP. The first of these features is the dynamic nature of the language, both in data type management and in object creation. Since data types are dynamically evaluated, some features of OOP, such as polymorphism, are implicitly supported. Moreover, the ability to change an object's structure at runtime breaks the common assumption that binds an object to a more abstract entity such as a class. The lack of the concept of a class is another big difference from the classical OOP. Of course, we are talking about classes as a generalization mechanism; this has nothing to do with the class construct introduced by ES6, which is just a syntactic convenience over standard JavaScript constructors. Classes in most Object-Oriented languages represent a generalization of objects, that is, an extra level of abstraction upon the objects. So, classical Object-Oriented Programming has two types of abstractions: classes and objects. An object is an abstraction of a real-world entity, while a class is an abstraction of an object or another class (in other words, it is a generalization). Objects in classical OOP languages can only be created by instantiating classes. JavaScript has a different approach to object management. It has just one type of abstraction: the objects. Unlike the classical OOP approach, an object can be created directly as an abstraction of a real-world entity or as an abstraction of another object. In the latter case, the abstracted object is called a prototype.
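As a side illustration of this difference (a Python sketch with a JavaScript counterpart in comments, not from the original text): in a classical language such as Python, objects can only be produced by instantiating classes, while JavaScript can derive one object directly from another.

# Classical OOP: the class is the abstraction that produces objects.
class Person:
    def __init__(self):
        self.name = ""
        self.surname = ""

john = Person()  # objects only come from classes

# Prototypal OOP (JavaScript), for comparison:
#   var person = { name: "", surname: "" };
#   var programmer = Object.create(person);  // object derived from an object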
As opposed to the classical OOP approach, the JavaScript approach is sometimes called Prototypal Object-Oriented Programming. Of course, the lack of a notion of class in JavaScript affects the inheritance mechanism. In fact, while in classical OOP inheritance is an operation allowed on classes, in prototypal OOP inheritance is an operation on objects. That does not mean that classical OOP is better than prototypal OOP, or vice versa. They are simply different approaches. However, we cannot ignore that these differences have some impact on the way we manage objects. In particular, while classes in classical OOP are immutable (we cannot add, change, or remove their properties or methods at runtime), objects and prototypes in prototypal OOP are extremely flexible. Moreover, classical OOP adds an extra level of abstraction with classes, leading to more verbose code, while prototypal OOP is more immediate and requires more compact code.

Summary

In this article, we explored the basic principles of the Object-Oriented Programming paradigm. We focused on abstraction to define objects; on association, aggregation, and composition to define relationships between objects; and on encapsulation, inheritance, and polymorphism to outline the remaining principles required by OOP. We have seen how JavaScript supports all the features that allow us to define it as a true OOP language, on a par with languages such as Java, C#, and C++, and we have compared classical OOP with prototypal OOP.
Modeling Relationships with GORM

Packt
09 Jun 2010
6 min read
Storing and retrieving simple objects is all very well, but the real power of GORM is that it allows us to model the relationships between objects, as we will now see. The main types of relationships that we want to model are associations, where one object has an associated relationship with another (for example, Customer and Account); composition relationships, where we want to build an object from sub components; and inheritance, where we want to model similar objects by describing their common properties in a base class.

Associations

Every business system involves some sort of association between the main business objects. Relationships between objects can be one-to-one, one-to-many, or many-to-many. Relationships may also imply ownership, where one object only has relevance in relation to another parent object. If we model our domain directly in the database, we need to build and manage tables, and make associations between the tables by using foreign keys. For complex relationships, including many-to-many relationships, we may need to build special tables whose sole function is to contain the foreign keys needed to track the relationships between objects. Using GORM, we can model all of the various associations that we need to establish between objects directly within the GORM class definitions. GORM takes care of all of the complex mappings to tables and foreign keys through a Hibernate persistence layer.

One-to-one

The simplest association that we need to model in GORM is a one-to-one association. Suppose our customer can have a single address; we would create a new Address domain class using the grails create-domain-class command, as before.

class Address {
    String street
    String city
    static constraints = {
    }
}

To create the simplest one-to-one relationship with Customer, we just add an Address field to the Customer class.

class Customer {
    String firstName
    String lastName
    Address address
    static constraints = {
    }
}

When we rerun the Grails application, GORM will recreate a new address table. It will also recognize the address field of Customer as an association with the Address class, and create a foreign key relationship between the customer and address tables accordingly. This is a one-directional relationship. We are saying that a Customer "has an" Address but an Address does not necessarily "have a" Customer. We can model bi-directional associations by simply adding a Customer field to the Address. This will then be reflected in the relational model by GORM adding a customer_id field to the address table.

class Address {
    String street
    String city
    Customer customer
    static constraints = {
    }
}

mysql> describe address;
+-------------+--------------+------+-----+---------+----------------+
| Field       | Type         | Null | Key | Default | Extra          |
+-------------+--------------+------+-----+---------+----------------+
| id          | bigint(20)   | NO   | PRI | NULL    | auto_increment |
| version     | bigint(20)   | NO   |     |         |                |
| city        | varchar(255) | NO   |     |         |                |
| customer_id | bigint(20)   | YES  | MUL | NULL    |                |
| street      | varchar(255) | NO   |     |         |                |
+-------------+--------------+------+-----+---------+----------------+
5 rows in set (0.01 sec)

These basic one-to-one associations can be inferred by GORM just by interrogating the fields in each domain class via reflection and the Groovy metaclasses. To denote ownership in a relationship, GORM uses an optional static field applied to a domain class, called belongsTo.
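Before moving on to belongsTo, readers coming from Python may find it helpful to see the same declarative idea elsewhere. The following SQLAlchemy sketch (1.4+ style assumed) is only an illustration; the class and column names simply mirror the GORM example above, and here the foreign key that GORM infers is declared explicitly:

from sqlalchemy import Column, Integer, String, ForeignKey
from sqlalchemy.orm import declarative_base, relationship, backref

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customer"
    id = Column(Integer, primary_key=True)
    first_name = Column(String(255))
    last_name = Column(String(255))

class Address(Base):
    __tablename__ = "address"
    id = Column(Integer, primary_key=True)
    street = Column(String(255))
    city = Column(String(255))
    # The customer_id foreign key that GORM infers is declared here.
    customer_id = Column(Integer, ForeignKey("customer.id"))
    # uselist=False makes the reverse side a one-to-one association.
    customer = relationship(Customer,
                            backref=backref("address", uselist=False))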
Suppose we add an Identity class to retain the login identity of a customer in the application. We would then use:

class Customer {
    String firstName
    String lastName
    Identity ident
}

class Address {
    String street
    String city
}

class Identity {
    String email
    String password
    static belongsTo = Customer
}

Classes are first-class citizens in the Groovy language. When we declare static belongsTo = Customer, what we are actually doing is storing a static instance of a java.lang.Class object for the Customer class in the belongsTo field. Grails can interrogate this static field at load time to infer the ownership relation between Identity and Customer. Here we have three classes: Customer, Address, and Identity. Customer has a one-to-one association with both Address and Identity through the address and ident fields. However, the ident field is "owned" by Customer, as indicated by the belongsTo setting. What this means is that saves, updates, and deletes will be cascaded to identity but not to address, as we can see below. The addr object needs to be saved and deleted independently of Customer, but id is automatically saved and deleted in sync with Customer.

def addr = new Address(street:"1 Rock Road", city:"Bedrock")
def id = new Identity(email:"email", password:"password")
def fred = new Customer(firstName:"Fred", lastName:"Flintstone", address:addr, ident:id)

addr.save(flush:true)
assert Customer.list().size == 0
assert Address.list().size == 1
assert Identity.list().size == 0

fred.save(flush:true)
assert Customer.list().size == 1
assert Address.list().size == 1
assert Identity.list().size == 1

fred.delete(flush:true)
assert Customer.list().size == 0
assert Address.list().size == 1
assert Identity.list().size == 0

addr.delete(flush:true)
assert Customer.list().size == 0
assert Address.list().size == 0
assert Identity.list().size == 0

Constraints

You will have noticed that every domain class produced by the grails create-domain-class command contains an empty static closure, constraints. We can use this closure to set the constraints on any field in our model. Here we apply constraints to the email and password fields of Identity. We want the email field to be unique, not blank, and not nullable. The password field should be 6 to 200 characters long, not blank, and not nullable.

class Identity {
    String email
    String password
    static constraints = {
        email(unique: true, blank: false, nullable: false)
        password(blank: false, nullable: false, size: 6..200)
    }
}

From our knowledge of builders and the markup pattern, we can see that GORM could be using a similar strategy here to apply constraints to the domain class. It looks like a pretended method is provided for each field in the class that accepts a map as an argument. The map entries are interpreted as constraints to apply to the model field. The Builder pattern turns out to be a good guess as to how GORM implements this. GORM actually implements constraints through a builder class called ConstrainedPropertyBuilder. The closure that gets assigned to constraints is in fact some markup-style closure code for this builder. Before executing the constraints closure, GORM sets an instance of ConstrainedPropertyBuilder to be the delegate for the closure. We are more accustomed to seeing builder code where the builder instance is visible:

def builder = new ConstrainedPropertyBuilder()
builder.constraints {
}

Setting the builder as a delegate of any closure allows us to execute the closure as if it was coded in the above style.
The constraints closure can be run at any time by Grails, and as it executes the ConstrainedPropertyBuilder, it will build a HashMap of the constraints it encounters for each field. We can illustrate the same technique by using MarkupBuilder and NodeBuilder. The Markup class in the following code snippet just declares a static closure named markup. Later on, we can use this closure with whatever builder we want, by setting the delegate of the markup to the builder that we would like to use.

class Markup {
    static markup = {
        customers {
            customer(id:1001) {
                name(firstName:"Fred", surname:"Flintstone")
                address(street:"1 Rock Road", city:"Bedrock")
            }
            customer(id:1002) {
                name(firstName:"Barney", surname:"Rubble")
                address(street:"2 Rock Road", city:"Bedrock")
            }
        }
    }
}

Markup.markup.setDelegate(new groovy.xml.MarkupBuilder())
Markup.markup() // Outputs xml

Markup.markup.setDelegate(new groovy.util.NodeBuilder())
def nodes = Markup.markup() // builds a node tree
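The "pretended method" trick that ConstrainedPropertyBuilder relies on can be mimicked in other dynamic languages too. As a rough illustration of the pattern (a Python sketch, not GORM's actual code; the class name and behavior are invented for the example), __getattr__ can stand in for Groovy's closure delegate and collect a map of constraints per field:

class ConstraintsBuilder:
    """Records constraint maps for whatever property names are called."""
    def __init__(self):
        self.constraints = {}

    def __getattr__(self, property_name):
        # Return a fake method that stores its keyword args as the
        # constraints for the property name that was accessed.
        def record(**kwargs):
            self.constraints[property_name] = kwargs
        return record

builder = ConstraintsBuilder()
builder.email(unique=True, blank=False, nullable=False)
builder.password(blank=False, nullable=False, size=(6, 200))
print(builder.constraints)
# {'email': {'unique': True, 'blank': False, 'nullable': False},
#  'password': {'blank': False, 'nullable': False, 'size': (6, 200)}}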
Applying LINQ to Entities to a WCF Service

Packt
05 Feb 2013
15 min read
Creating the LINQNorthwind solution

The first thing we need to do is create a test solution. In this article, we will start from the data access layer. Perform the following steps:

1. Start Visual Studio.
2. Create a new class library project LINQNorthwindDAL with solution name LINQNorthwind (make sure Create directory for solution is checked so that you can specify the solution name).
3. Delete the Class1.cs file.
4. Add a new class ProductDAO to the project.
5. Change the new class ProductDAO to be public.

Now you should have a new solution with the empty data access layer class. Next, we will add a model to this layer and create the business logic layer and the service interface layer.

Modeling the Northwind database

In the previous section, we created the LINQNorthwind solution. Next, we will apply LINQ to Entities to this new solution. For the data access layer, we will use LINQ to Entities instead of the raw ADO.NET data adapters. As you will see in the next section, we will use one LINQ statement to retrieve product information from the database, and the update LINQ statements will handle the concurrency control for us easily and reliably.

As you may recall, to use LINQ to Entities in the data access layer of our WCF service, we first need to add an entity data model to the project. In the Solution Explorer, right-click on the project item LINQNorthwindDAL, select the menu options Add | New Item..., and then choose Visual C# Items | ADO.NET Entity Data Model as the template and enter Northwind.edmx as the name. Select Generate from database, choose the existing Northwind connection, and add the Products table to the model. Click on the Finish button to add the model to the project.

The new column RowVersion should be in the Product entity. If it is not there, add it to the database table with a type of Timestamp and refresh the entity data model from the database. In the EDM designer, select the RowVersion property of the Product entity and change its Concurrency Mode from None to Fixed. Note that its StoreGeneratedPattern should remain as Computed.

This will generate a file called Northwind.Context.cs, which contains the DbContext for the Northwind database. Another file called Product.cs is also generated, which contains the Product entity class. You need to save the data model in order to see these two files in the Solution Explorer. In the Visual Studio Solution Explorer, the Northwind.Context.cs file is under the template file Northwind.Context.tt and Product.cs is under Northwind.tt. However, in Windows Explorer, they are two separate files from the template files.

Creating the business domain object project

In Implementing a WCF Service in the Real World, we created a business domain object (BDO) project to hold the intermediate data between the data access objects and the service interface objects. In this section, we will also add such a project to the solution for the same purpose:

1. In the Solution Explorer, right-click on the LINQNorthwind solution.
2. Select Add | New Project... to add a new class library project named LINQNorthwindBDO.
3. Delete the Class1.cs file.
4. Add a new class file ProductBDO.cs.
5. Change the new class ProductBDO to be public.
Add the following properties to this class:

ProductID
ProductName
QuantityPerUnit
UnitPrice
Discontinued
UnitsInStock
UnitsOnOrder
ReorderLevel
RowVersion

The following is the code listing of the ProductBDO class:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace LINQNorthwindBDO
{
    public class ProductBDO
    {
        public int ProductID { get; set; }
        public string ProductName { get; set; }
        public string QuantityPerUnit { get; set; }
        public decimal UnitPrice { get; set; }
        public int UnitsInStock { get; set; }
        public int ReorderLevel { get; set; }
        public int UnitsOnOrder { get; set; }
        public bool Discontinued { get; set; }
        public byte[] RowVersion { get; set; }
    }
}

As noted earlier, in this article we will use BDOs to hold the intermediate data between the data access objects and the data contract objects. Besides this approach, there are some other ways to pass data back and forth between the data access layer and the service interface layer, and two of them are listed as follows:

The first one is to expose the Entity Framework context objects from the data access layer up to the service interface layer. In this way, both the service interface layer and the business logic layer (we will implement them soon in the following sections) can interact directly with the Entity Framework. This approach is not recommended, as it goes against the best practice of service layering.

Another approach is to use self-tracking entities. Self-tracking entities are entities that know how to do their own change tracking, regardless of which tier those changes are made on. You can expose self-tracking entities from the data access layer to the business logic layer, then to the service interface layer, and even share the entities with the clients. Because self-tracking entities are independent of the entity context, you don't need to expose the entity context objects. The problem with this approach is that you have to share the binary files with all the clients; thus, it is the least interoperable approach for a WCF service. This approach is no longer recommended by Microsoft, so we will not discuss it here.

Using LINQ to Entities in the data access layer

Next we will modify the data access layer to use LINQ to Entities to retrieve and update products. We will first create GetProduct to retrieve a product from the database and then create UpdateProduct to update a product in the database.

Adding a reference to the BDO project

Now that we have the BDO project in the solution, we need to modify the data access layer project to reference it:

1. In the Solution Explorer, right-click on the LINQNorthwindDAL project.
2. Select Add Reference....
3. Select the LINQNorthwindBDO project from the Projects tab under Solution.
4. Click on the OK button to add the reference to the project.

Creating GetProduct in the data access layer

We can now create the GetProduct method in the data access layer class ProductDAO to use LINQ to Entities to retrieve a product from the database. We will first create an entity DbContext object and then use LINQ to Entities to get the product from the DbContext object. The product we get from the DbContext will be a conceptual entity model object. However, we don't want to pass this product object back to the upper-level layer, because we don't want to tightly couple the business logic layer with the data access layer.
Therefore, we will convert this entity model product object to a ProductBDO object and then pass the ProductBDO object back to the upper-level layers. To create the new method, first add the following using statement to the ProductDAO class:

using LINQNorthwindBDO;

Then add the following method to the ProductDAO class:

public ProductBDO GetProduct(int id)
{
    ProductBDO productBDO = null;
    using (var NWEntities = new NorthwindEntities())
    {
        Product product = (from p in NWEntities.Products
                           where p.ProductID == id
                           select p).FirstOrDefault();
        if (product != null)
            productBDO = new ProductBDO()
            {
                ProductID = product.ProductID,
                ProductName = product.ProductName,
                QuantityPerUnit = product.QuantityPerUnit,
                UnitPrice = (decimal)product.UnitPrice,
                UnitsInStock = (int)product.UnitsInStock,
                ReorderLevel = (int)product.ReorderLevel,
                UnitsOnOrder = (int)product.UnitsOnOrder,
                Discontinued = product.Discontinued,
                RowVersion = product.RowVersion
            };
    }
    return productBDO;
}

Previously, within the GetProduct method, we had to create an ADO.NET connection, create an ADO.NET command object with that connection, specify the command text, connect to the Northwind database, and send the SQL statement to the database for execution. After the result was returned from the database, we had to loop through the DataReader and cast the columns to our entity object one by one. With LINQ to Entities, we only construct one LINQ to Entities statement, and everything else is handled by LINQ to Entities. Not only do we need to write less code, but the statement is now also strongly typed. We won't have a runtime error such as invalid query syntax or an invalid column name. Also, a SQL injection attack is no longer an issue, as LINQ to Entities will take care of this when translating LINQ expressions to the underlying SQL statements.

Creating UpdateProduct in the data access layer

In the previous section, we created the GetProduct method in the data access layer using LINQ to Entities instead of ADO.NET. Now in this section, we will create the UpdateProduct method, again using LINQ to Entities instead of ADO.NET. Let's create the UpdateProduct method in the data access layer class ProductDAO, as follows:

public bool UpdateProduct(
    ref ProductBDO productBDO, ref string message)
{
    message = "product updated successfully";
    bool ret = true;
    using (var NWEntities = new NorthwindEntities())
    {
        var productID = productBDO.ProductID;
        Product productInDB = (from p in NWEntities.Products
                               where p.ProductID == productID
                               select p).FirstOrDefault();
        // check product
        if (productInDB == null)
        {
            throw new Exception("No product with ID " + productBDO.ProductID);
        }
        NWEntities.Products.Remove(productInDB);
        // update product
        productInDB.ProductName = productBDO.ProductName;
        productInDB.QuantityPerUnit = productBDO.QuantityPerUnit;
        productInDB.UnitPrice = productBDO.UnitPrice;
        productInDB.Discontinued = productBDO.Discontinued;
        productInDB.RowVersion = productBDO.RowVersion;
        NWEntities.Products.Attach(productInDB);
        NWEntities.Entry(productInDB).State = System.Data.EntityState.Modified;
        int num = NWEntities.SaveChanges();
        productBDO.RowVersion = productInDB.RowVersion;
        if (num != 1)
        {
            ret = false;
            message = "no product is updated";
        }
    }
    return ret;
}

Within this method, we first get the product from the database, making sure the product ID is a valid value in the database. Then, we apply the changes from the passed-in object to the object we have just retrieved from the database, and submit the changes back to the database.
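For readers more at home in Python, here is a rough ORM-flavored sketch of the same two data access methods. It uses SQLAlchemy (1.4+ style) purely as an illustration: the Product mapping, table, and column names are invented for the example, and the RowVersion concurrency handling that Entity Framework provides is not reproduced here.

from sqlalchemy import Column, Integer, Numeric, String, create_engine, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Product(Base):
    __tablename__ = "products"            # assumed mapping for illustration
    product_id = Column(Integer, primary_key=True)
    product_name = Column(String(40))
    unit_price = Column(Numeric(10, 2))

engine = create_engine("sqlite:///northwind.db")  # connection string assumed

def get_product(product_id):
    # One strongly typed query replaces the manual connection, command,
    # and data-reader boilerplate of the raw ADO.NET approach.
    with Session(engine) as session:
        product = session.execute(
            select(Product).where(Product.product_id == product_id)
        ).scalar_one_or_none()
        if product is None:
            return None
        # Copy into a plain transfer object, keeping layers decoupled.
        return {"product_id": product.product_id,
                "product_name": product.product_name,
                "unit_price": product.unit_price}

def update_product(bdo):
    with Session(engine) as session:
        product = session.get(Product, bdo["product_id"])
        if product is None:
            return False, "no product with this ID"
        product.product_name = bdo["product_name"]
        product.unit_price = bdo["unit_price"]
        session.commit()                  # the ORM generates the UPDATE
        return True, "product updated successfully"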
Let's go through a few notes about the UpdateProduct method:

You have to save productID in a new variable and then use that variable in the LINQ query. Otherwise, you will get an error saying Cannot use ref or out parameter 'productBDO' inside an anonymous method, lambda expression, or query expression.

If Remove and Attach are not called, the RowVersion from the database (not from the client) will be used when submitting to the database, even though you have updated its value before submitting. An update will then always succeed, but without concurrency control.

If Remove is not called and you call the Attach method, you will get an error saying The object cannot be attached because it is already in the object context.

If the object state is not set to Modified, Entity Framework will not honor your changes to the entity object and you will not be able to save any change to the database.

Creating the business logic layer

Now let's create the business logic layer:

1. Right-click on the solution item and select Add | New Project.... Add a class library project with the name LINQNorthwindLogic.
2. Add a project reference to LINQNorthwindDAL and LINQNorthwindBDO to this new project.
3. Delete the Class1.cs file.
4. Add a new class file ProductLogic.cs.
5. Change the new class ProductLogic to be public.
6. Add the following two using statements to the ProductLogic.cs class file:

using LINQNorthwindDAL;
using LINQNorthwindBDO;

7. Add the following class member variable to the ProductLogic class:

ProductDAO productDAO = new ProductDAO();

8. Add the following new method GetProduct to the ProductLogic class:

public ProductBDO GetProduct(int id)
{
    return productDAO.GetProduct(id);
}

9. Add the following new method UpdateProduct to the ProductLogic class:

public bool UpdateProduct(
    ref ProductBDO productBDO, ref string message)
{
    var productInDB = GetProduct(productBDO.ProductID);
    // invalid product to update
    if (productInDB == null)
    {
        message = "cannot get product for this ID";
        return false;
    }
    // a product cannot be discontinued
    // if there are non-fulfilled orders
    if (productBDO.Discontinued == true
        && productInDB.UnitsOnOrder > 0)
    {
        message = "cannot discontinue this product";
        return false;
    }
    else
    {
        return productDAO.UpdateProduct(ref productBDO, ref message);
    }
}

10. Build the solution. We now have only one more step to go, that is, adding the service interface layer.

Creating the service interface layer

The last step is to create the service interface layer:

1. Right-click on the solution item and select Add | New Project.... Add a WCF service library project with the name LINQNorthwindService.
2. Add a project reference to LINQNorthwindLogic and LINQNorthwindBDO to this new service interface project.
3. Change the service interface file IService1.cs, as follows: change its filename from IService1.cs to IProductService.cs, and change the interface name from IService1 to IProductService, if it is not done for you.
4. Remove the original two service operations and add the following two new operations:

[OperationContract]
[FaultContract(typeof(ProductFault))]
Product GetProduct(int id);

[OperationContract]
[FaultContract(typeof(ProductFault))]
bool UpdateProduct(ref Product product, ref string message);

5. Remove the original CompositeType and add the following data contract classes:

[DataContract]
public class Product
{
    [DataMember]
    public int ProductID { get; set; }
    [DataMember]
    public string ProductName { get; set; }
    [DataMember]
    public string QuantityPerUnit { get; set; }
    [DataMember]
    public decimal UnitPrice { get; set; }
    [DataMember]
    public bool Discontinued { get; set; }
    [DataMember]
    public byte[] RowVersion { get; set; }
}

[DataContract]
public class ProductFault
{
    public ProductFault(string msg)
    {
        FaultMessage = msg;
    }
    [DataMember]
    public string FaultMessage;
}

The following is the content of the IProductService.cs file:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Serialization;
using System.ServiceModel;
using System.Text;

namespace LINQNorthwindService
{
    [ServiceContract]
    public interface IProductService
    {
        [OperationContract]
        [FaultContract(typeof(ProductFault))]
        Product GetProduct(int id);

        [OperationContract]
        [FaultContract(typeof(ProductFault))]
        bool UpdateProduct(ref Product product, ref string message);
    }

    [DataContract]
    public class Product
    {
        [DataMember]
        public int ProductID { get; set; }
        [DataMember]
        public string ProductName { get; set; }
        [DataMember]
        public string QuantityPerUnit { get; set; }
        [DataMember]
        public decimal UnitPrice { get; set; }
        [DataMember]
        public bool Discontinued { get; set; }
        [DataMember]
        public byte[] RowVersion { get; set; }
    }

    [DataContract]
    public class ProductFault
    {
        public ProductFault(string msg)
        {
            FaultMessage = msg;
        }
        [DataMember]
        public string FaultMessage;
    }
}

6. Change the service implementation file Service1.cs, as follows: change its filename from Service1.cs to ProductService.cs, and change its class name from Service1 to ProductService, if it is not done for you.
7. Add the following two using statements to the ProductService.cs file:

using LINQNorthwindLogic;
using LINQNorthwindBDO;

8. Add the following class member variable:

ProductLogic productLogic = new ProductLogic();

9. Remove the original two methods and add the following two methods:

public Product GetProduct(int id)
{
    ProductBDO productBDO = null;
    try
    {
        productBDO = productLogic.GetProduct(id);
    }
    catch (Exception e)
    {
        string msg = e.Message;
        string reason = "GetProduct Exception";
        throw new FaultException<ProductFault>
            (new ProductFault(msg), reason);
    }
    if (productBDO == null)
    {
        string msg = string.Format("No product found for id {0}", id);
        string reason = "GetProduct Empty Product";
        throw new FaultException<ProductFault>
            (new ProductFault(msg), reason);
    }
    Product product = new Product();
    TranslateProductBDOToProductDTO(productBDO, product);
    return product;
}

public bool UpdateProduct(ref Product product, ref string message)
{
    bool result = true;
    // first check to see if it is a valid price
    if (product.UnitPrice <= 0)
    {
        message = "Price cannot be <= 0";
        result = false;
    }
    // ProductName can't be empty
    else if (string.IsNullOrEmpty(product.ProductName))
    {
        message = "Product name cannot be empty";
        result = false;
    }
    // QuantityPerUnit can't be empty
    else if (string.IsNullOrEmpty(product.QuantityPerUnit))
    {
        message = "Quantity cannot be empty";
        result = false;
    }
    else
    {
        try
        {
            var productBDO = new ProductBDO();
            TranslateProductDTOToProductBDO(product, productBDO);
            result = productLogic.UpdateProduct(
                ref productBDO, ref message);
            product.RowVersion = productBDO.RowVersion;
        }
        catch (Exception e)
        {
            string msg = e.Message;
            throw new FaultException<ProductFault>
                (new ProductFault(msg), msg);
        }
    }
    return result;
}

Because we have to convert between the data contract objects and the business domain objects, we need to add the following two methods:

private void TranslateProductBDOToProductDTO(
    ProductBDO productBDO, Product product)
{
    product.ProductID = productBDO.ProductID;
    product.ProductName = productBDO.ProductName;
    product.QuantityPerUnit = productBDO.QuantityPerUnit;
    product.UnitPrice = productBDO.UnitPrice;
    product.Discontinued = productBDO.Discontinued;
    product.RowVersion = productBDO.RowVersion;
}

private void TranslateProductDTOToProductBDO(
    Product product, ProductBDO productBDO)
{
    productBDO.ProductID = product.ProductID;
    productBDO.ProductName = product.ProductName;
    productBDO.QuantityPerUnit = product.QuantityPerUnit;
    productBDO.UnitPrice = product.UnitPrice;
    productBDO.Discontinued = product.Discontinued;
    productBDO.RowVersion = product.RowVersion;
}

10. Change the config file App.config, as follows: change Service1 to ProductService, remove the word Design_Time_Addresses, and change the port to 8080. Now, BaseAddress should be as follows:

http://localhost:8080/LINQNorthwindService/ProductService/

11. Copy the connection string from the App.config file in the LINQNorthwindDAL project to the App.config file of the service project:

<connectionStrings>
  <add name="NorthwindEntities"
       connectionString="metadata=res://*/Northwind.csdl|res://*/Northwind.ssdl|res://*/Northwind.msl;provider=System.Data.SqlClient;provider connection string=&quot;data source=localhost;initial catalog=Northwind;integrated security=True;MultipleActiveResultSets=True;App=EntityFramework&quot;"
       providerName="System.Data.EntityClient" />
</connectionStrings>

You should leave the original connection string untouched in the App.config file in the data access layer project. This connection string is used by the Entity Model Designer at design time.
It is not used at all at runtime, but if you remove it, whenever you open the entity model designer in Visual Studio, you will be prompted to specify a connection to your database. Now build the solution; there should be no errors.

Testing the service with the WCF Test Client

Now we can run the program to test the GetProduct and UpdateProduct operations with the WCF Test Client. You may need to run Visual Studio as administrator to start the WCF Test Client. First, set LINQNorthwindService as the startup project and then press Ctrl + F5 to start the WCF Test Client. Double-click on the GetProduct operation, enter a valid product ID, and click on the Invoke button. The detailed product information should be retrieved and displayed on the screen, as shown in the following screenshot:

[Screenshot: the GetProduct operation returning the product details in the WCF Test Client]

Now double-click on the UpdateProduct operation, enter a valid product ID, specify a name, price, and quantity per unit, and then click on Invoke. This time you will get an exception, as shown in the following screenshot:

[Screenshot: the UpdateProduct operation failing with a FaultException in the WCF Test Client]

From this image we can see that the update failed. The error details, shown in HTML View in the preceding screenshot, actually tell us that it is a concurrency error. This is because, from the WCF Test Client, we can't enter a row version, as it is not a simple datatype parameter; thus, we didn't pass in the original RowVersion for the object to be updated, and when updating the object in the database, Entity Framework thinks this product has been updated by some other user.
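The row-version check that just failed can be made concrete with a small hand-rolled sketch. The following Python illustration (plain sqlite3 from the standard library, with an invented products table) shows the essence of the optimistic concurrency control that Entity Framework performs with RowVersion: the update succeeds only if the row version is still the one we originally read.

import sqlite3

def update_product(conn, product_id, new_price, expected_version):
    # The WHERE clause is the concurrency check: zero rows updated means
    # someone else changed the record after we read it.
    cur = conn.execute(
        "UPDATE products SET unit_price = ?, row_version = row_version + 1 "
        "WHERE product_id = ? AND row_version = ?",
        (new_price, product_id, expected_version),
    )
    conn.commit()
    return cur.rowcount == 1  # False signals a concurrency conflict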
Parallelization using Reducers

Packt
06 Jan 2016
18 min read
In this article by Akhil Wali, the author of the book Mastering Clojure, we will study this particular abstraction of collections and how it is quite orthogonal to viewing collections as sequences. Sequences and laziness are a great way of handling collections. The Clojure standard library provides several functions to handle and manipulate sequences. However, abstracting a collection as a sequence has an unfortunate consequence: any computation that is performed over all the elements of a sequence is inherently sequential. Also, all standard sequence functions create a new collection that is similar to the collection they are passed. Interestingly, performing a computation over a collection without creating a similar collection (even as an intermediary result) is quite useful. For example, it is often required to reduce a given collection to a single value through a series of transformations in an iterative manner. This sort of computation does not necessarily require the intermediary results of each transformation to be saved.

A consequence of iteratively computing values from a collection is that we cannot parallelize it in a straightforward way. Modern map-reduce frameworks handle this kind of computation by pipelining the elements of a collection through several transformations in parallel and finally reducing the results into a single result. Of course, the result could be a new collection as well. A drawback is that this methodology produces concrete collections as intermediate results of each transformation, which is rather wasteful. For example, if we want to filter values from a collection, a map-reduce strategy would require creating empty collections to represent values that are left out of the reduction step to produce the final result. This incurs unnecessary memory allocation and also creates additional work for the reduction step that produces the final result. Hence, there's scope for optimizing these kinds of computations.

This brings us to the notion of treating computations over collections as reducers to attain better performance. Of course, this doesn't mean that reducers are a replacement for sequences. Sequences and laziness are great for abstracting computations that create and manipulate collections, while reducers are specialized high-performance abstractions of collections in which a collection needs to be piped through several transformations and combined to produce the final result. Reducers achieve a performance gain in the following ways:

Reducing the amount of memory allocated to produce the desired result
Parallelizing the process of reducing a collection into a single result, which could be an entirely new collection

The clojure.core.reducers namespace provides several functions for processing collections using reducers. Let's now examine how reducers are implemented and also study a few examples that demonstrate how reducers can be used.

Using reduce to transform collections

Sequences and functions that operate on sequences preserve the sequential ordering between the constituent elements of a collection. Lazy sequences avoid unnecessary realization of elements in a collection until they are required for a computation, but the realization of these values is still performed in a sequential manner. However, this characteristic of sequential ordering may not be desirable for all computations performed over it.
For example, it's not possible to map a function over a vector and then lazily realize values in the resulting collection by random access, since the map function converts the collection it is supplied into a sequence. Also, functions such as map and filter are lazy, but still sequential by nature. Consider the unary function shown in Example 3.1, which we intend to map over a given vector. The function must compute a value from the one it is supplied, and also perform a side effect so that we can observe its application over the elements in a collection.

Example 3.1. A simple unary function

(defn square-with-side-effect [x]
  (do
    (println (str "Side-effect: " x))
    (* x x)))

The square-with-side-effect function defined here simply returns the square of a number x using the * function. This function also prints the value of x using a println form whenever it is called. Suppose this function is mapped over a given vector. The resulting collection will have to be realized completely if a computation has to be performed over it, even if all the elements from the resulting vector are not required. This can be demonstrated as follows:

user> (def mapped (map square-with-side-effect [0 1 2 3 4 5]))
#'user/mapped
user> (reduce + (take 3 mapped))
Side-effect: 0
Side-effect: 1
Side-effect: 2
Side-effect: 3
Side-effect: 4
Side-effect: 5
5

As previously shown, the mapped variable contains the result of mapping the square-with-side-effect function over a vector. If we try to sum the first three values in the resulting collection using the reduce, take, and + functions, all the values in the [0 1 2 3 4 5] vector are printed as a side effect. This means that the square-with-side-effect function was applied to all the elements in the initial vector, despite the fact that only the first three elements were actually required by the reduce form. Of course, this can be solved by using the seq function to convert the vector to a sequence before we map the square-with-side-effect function over it. But then, we lose the ability to efficiently access elements in a random order in the resulting collection.

To dive deeper into why this actually happens, you first need to understand how the standard map function is actually implemented. A simplified definition of the map function is shown here:

Example 3.2. A simplified definition of the map function

(defn map [f coll]
  (cons (f (first coll))
        (lazy-seq (map f (rest coll)))))

The definition of map in Example 3.2 is a simplified and rather incomplete one, as it doesn't check for an empty collection and cannot be used over multiple collections. That aside, this definition of map does indeed apply a function f to all the elements in a coll collection. This is implemented using a composition of the cons, first, rest, and lazy-seq forms. The implementation can be interpreted as, "apply the f function to the first element in the coll collection, and then map f over the rest of the collection in a lazy manner." An interesting consequence of this implementation is that the map function has the following characteristics:

The ordering among the elements in the coll collection is preserved.
This computation is performed recursively.
The lazy-seq form is used to perform the computation in a lazy manner.
The use of the first and rest forms indicates that coll must be a sequence, and the cons form will also produce a result that is a sequence.

Hence, the map function accepts a sequence and builds a new one.
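As a quick cross-language aside (a Python illustration added here, not part of the original Clojure text): Python's built-in map has the same lazy, sequential shape. It consumes any iterable and yields an iterator, so only the elements actually requested pass through the function, but random access to the result is lost in the same way.

def square_with_side_effect(x):
    print("Side-effect:", x)
    return x * x

# map() in Python 3 is lazy: nothing is printed yet.
mapped = map(square_with_side_effect, [0, 1, 2, 3, 4, 5])

# Pulling just the first three values triggers only three side effects,
# but 'mapped' is now an iterator, so indexing into it is impossible.
total = sum(next(mapped) for _ in range(3))
print(total)
# Side-effect: 0
# Side-effect: 1
# Side-effect: 2
# 5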
Another interesting characteristic of lazy sequences is that they are realized in chunks. This means that a lazy sequence is realized in chunks of 32 elements each, as an optimization, when the values in the sequence are actually required. Sequences that behave this way are termed chunked sequences. Of course, not all sequences are chunked, and we can check whether a given sequence is chunked using the chunked-seq? predicate. The range function returns a chunked sequence, shown as follows:

user> (first (map #(do (print \!) %) (range 70)))
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
0
user> (nth (map #(do (print \!) %) (range 70)) 32)
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
32

Both the statements in the output shown previously select a single element from a sequence returned by the map function. The function passed to the map function in both of the above statements prints the ! character and returns the value supplied to it. In the first statement, the first 32 elements of the resulting sequence are realized, even though only the first element is required. Similarly, the second statement is observed to realize the first 64 elements of the resulting sequence when the element at the 32nd position is obtained using the nth function. Chunked sequences have been an integral part of Clojure since version 1.1.

However, none of the properties of the sequences described above are needed to transform a given collection into a result that is not a sequence. If we want to handle such computations efficiently, we cannot build on functions that return sequences, such as map and filter. Incidentally, the reduce function does not necessarily produce a sequence. It also has a couple of other interesting properties:

The reduce function actually lets the collection it is passed define how it is computed over, or reduced. Thus, reduce is collection independent.
The reduce function is versatile enough to build a single value or an entirely new collection as well. For example, using reduce with the * or + function will create a single-valued result, while using it with the cons or concat function can create a new collection as the result. Thus, reduce can build anything.

A collection is said to be reducible if it defines how it can be reduced to a single result. The binary function that is used by the reduce function along with a collection is also termed a reducing function. A reducing function requires two arguments: one to represent the result of the computation so far, and another to represent an input value that has to be combined into the result. Several reducing functions can be composed into one, which effectively changes how the reduce function processes a given collection. This composition is done using reducers, which can be thought of as a shortened version of the term reducing function transformers.

The use of sequences and laziness, as compared to the use of reducers, can be understood through Rich Hickey's famous pie-maker analogy. Suppose a pie-maker has been supplied a bag of apples with an intent to reduce the apples to a pie. There are a couple of transformations needed to perform this task. First, the stickers on all the apples have to be removed; as in, we map a function to "take the sticker off" the apples in the collection. Also, all the rotten apples will have to be removed, which is analogous to using the filter function to remove elements from a collection. Instead of performing this work herself, the pie-maker delegates it to her assistant.
The assistant could first take the stickers off all the apples, thus producing a new collection, and then take out the rotten apples to produce another new collection, which illustrates the use of lazy sequences. But then, the assistant would be doing unnecessary work by removing the stickers off the rotten apples, which will have to be discarded later anyway. On the other hand, the assistant could delay this work until the actual reduction of the processed apples into a pie is performed. Once the work actually needs to be performed, the assistant will compose the two tasks of mapping and filtering the collection of apples, thus avoiding any unnecessary work. This case depicts the use of reducers for composing and transforming the tasks needed to effectively reduce a collection of apples to a pie. By using reducers, we create a recipe of tasks to reduce a collection of apples to a pie and delay all processing until the final reduction, instead of dealing with collections of apples as intermediary results of each task.

The following namespaces must be included in your namespace declaration for the upcoming examples:

(ns my-namespace
  (:require [clojure.core.reducers :as r]))

The clojure.core.reducers namespace requires Java 6 with the jsr166y.jar, or Java 7+, for fork/join support.

Let's now briefly explore how reducers are actually implemented. Functions that operate on sequences use the clojure.lang.ISeq interface to abstract the behavior of a collection. In the case of reducers, the common interface that we must build upon is that of a reducing function. As we mentioned earlier, a reducing function is a two-arity function in which the first argument is the result produced so far and the second argument is the current input, which has to be combined with the first argument. The process of performing a computation over a collection and producing a result can be generalized into three distinct cases. They can be described as follows:

A new collection with the same number of elements as that of the collection it is supplied needs to be produced. This one-to-one case is analogous to using the map function.
The computation shrinks the supplied collection by removing elements from it. This can be done using the filter function.
The computation could also be expansive, in which case it produces a new collection that contains an increased number of elements. This is like what the mapcat function does.

These cases depict the different ways by which a collection can be transformed into the desired result. Any computation, or reduction, over a collection can be thought of as an arbitrary sequence of such transformations. These transformations are represented by transformers, which are functions that transform a reducing function. They can be implemented as shown in Example 3.3:

Example 3.3. Transformers

(defn mapping [f]
  (fn [rf]
    (fn [result input]
      (rf result (f input)))))

(defn filtering [p?]
  (fn [rf]
    (fn [result input]
      (if (p? input)
        (rf result input)
        result))))

(defn mapcatting [f]
  (fn [rf]
    (fn [result input]
      (reduce rf result (f input)))))

The mapping, filtering, and mapcatting functions in Example 3.3 represent the core logic of the map, filter, and mapcat functions respectively. All of these functions are transformers that take a single argument and return a new function.
The returned function transforms a supplied reducing function, represented by rf, and returns a new reducing function, created using this expression: (fn [result input] ...). Functions returned by the mapping, filtering, and mapcatting functions are termed reducing function transformers.

The mapping function applies the f function to the current input, represented by the input variable. The value returned by the f function is then combined with the accumulated result, represented by result, using the reducing function rf. This transformer is a frighteningly pure abstraction of the standard map function that applies an f function over a collection. The mapping function makes no assumptions about the structure of the collection it is supplied, or about how the values returned by the f function are combined to produce the final result.

Similarly, the filtering function uses a predicate, p?, to check whether the current input of the rf reducing function must be combined into the final result, represented by result. If the predicate is not true, then the reducing function will simply return the result value without any modification. The mapcatting function uses the reduce function to combine the value result with the result of the (f input) expression. In this transformer, we can assume that the f function will return a new collection and the rf reducing function will somehow combine two collections into a new one.

One of the foundations of the reducers library is the CollReduce protocol defined in the clojure.core.protocols namespace. This protocol defines the behavior of a collection when it is passed as an argument to the reduce function, and it is declared as shown in Example 3.4:

Example 3.4. The CollReduce protocol

(defprotocol CollReduce
  (coll-reduce [coll rf init]))

The clojure.core.reducers namespace defines a reducer function that creates a reducible collection by dynamically extending the CollReduce protocol, as shown in Example 3.5:

Example 3.5. The reducer function

(defn reducer
  ([coll xf]
   (reify
     CollReduce
     (coll-reduce [_ rf init]
       (coll-reduce coll (xf rf) init)))))

The reducer function combines a collection (coll) and a reducing function transformer (xf), which is returned by the mapping, filtering, and mapcatting functions, to produce a new reducible collection. When reduce is invoked on a reducible collection, it will ultimately ask the collection to reduce itself using the reducing function returned by the (xf rf) expression. Using this mechanism, several reducing functions can be composed into a computation that has to be performed over a given collection. Also, the reducer function needs to be defined only once, and the actual implementation of coll-reduce is provided by the collection supplied to the reducer function. Now, we can redefine the reduce function to simply invoke the coll-reduce function implemented by a given collection, as shown in Example 3.6:

Example 3.6. Redefining the reduce function

(defn reduce
  ([rf coll]
   (reduce rf (rf) coll))
  ([rf init coll]
   (coll-reduce coll rf init)))

As shown in Example 3.6, the reduce function delegates the job of reducing a collection to the collection itself using the coll-reduce function. Also, the reduce function will use the rf reducing function to supply the init argument when it is not specified. An interesting consequence of this definition of reduce is that the rf function must produce an identity value when it is supplied no arguments.
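The entire pattern above is compact enough to restate in Python as a sanity check. The sketch below is only an analogy to the Clojure code (the names mirror Examples 3.3 through 3.6, and Python's functools.reduce stands in for coll-reduce); it is not how clojure.core.reducers is actually built.

from functools import reduce

def mapping(f):
    # Transform a reducing function so every input passes through f first.
    return lambda rf: lambda result, value: rf(result, f(value))

def filtering(pred):
    # Transform a reducing function so it skips inputs failing pred.
    return lambda rf: (lambda result, value:
                       rf(result, value) if pred(value) else result)

class Reducer:
    """Pairs a collection with a transformer, like the reducer function."""
    def __init__(self, coll, xf):
        self.coll, self.xf = coll, xf

    def coll_reduce(self, rf, init):
        # Reduce the underlying collection with the transformed rf.
        return reduce(self.xf(rf), self.coll, init)

add = lambda acc, x: acc + x

# (reduce + 0 (reducer [1 2 3 4] (mapping inc))) => 14
print(Reducer([1, 2, 3, 4], mapping(lambda x: x + 1)).coll_reduce(add, 0))

# Transformers compose: increment, then keep only the even results.
xf = lambda rf: mapping(lambda x: x + 1)(filtering(lambda x: x % 2 == 0)(rf))
print(Reducer([1, 2, 3, 4], xf).coll_reduce(add, 0))  # 2 + 4 == 6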
The standard reduce function even uses the CollReduce protocol to delegate the job of reducing a collection to the collection itself, but it will also fall back on the default definition of reduce if the supplied collection does not implement the CollReduce protocol. Since Clojure 1.4, the reduce function has allowed a collection to define how it is reduced using the clojure.core.protocols.CollReduce protocol. Clojure 1.5 introduced the clojure.core.reducers namespace, which extends the use of this protocol. All the standard Clojure collections, namely lists, vectors, sets, and maps, implement the CollReduce protocol. The reducer function can be used to build a sequence of transformations to be applied on a collection when it is passed as an argument to the reduce function. This can be demonstrated as follows:

user> (r/reduce + 0 (r/reducer [1 2 3 4] (mapping inc)))
14
user> (reduce + 0 (r/reducer [1 2 3 4] (mapping inc)))
14

In this output, the mapping function is used with the inc function to create a reducing function transformer that increments all the elements in a given collection. This transformer is then combined with a vector using the reducer function to produce a reducible collection. The call to reduce in both of the above statements is transformed into the (reduce + [2 3 4 5]) expression, thus producing the result 14. We can now redefine the map, filter, and mapcat functions using the reducer function, as shown below in Example 3.7:

Example 3.7. Redefining the map, filter, and mapcat functions using the reducer form

(defn map [f coll]
  (reducer coll (mapping f)))

(defn filter [p? coll]
  (reducer coll (filtering p?)))

(defn mapcat [f coll]
  (reducer coll (mapcatting f)))

As shown in Example 3.7, the map, filter, and mapcat functions are now simply compositions of the reducer form with the mapping, filtering, and mapcatting transformers respectively.

The definitions of CollReduce, reducer, reduce, map, filter, and mapcat shown here are simplified versions of their actual definitions in the clojure.core.reducers namespace.

The definitions of the map, filter, and mapcat functions shown in Example 3.7 have the same shape as the standard versions of these functions, shown as follows:

user> (r/reduce + (r/map inc [1 2 3 4]))
14
user> (r/reduce + (r/filter even? [1 2 3 4]))
6
user> (r/reduce + (r/mapcat range [1 2 3 4]))
10

Hence, the map, filter, and mapcat functions from the clojure.core.reducers namespace can be used in the same way as the standard versions of these functions. The reducers library also provides a take function that can be used as a replacement for the standard take function. We can use this function to reduce the number of calls to the square-with-side-effect function (from Example 3.1) when it is mapped over a given vector, as shown below:

user> (def mapped (r/map square-with-side-effect [0 1 2 3 4 5]))
#'user/mapped
user> (reduce + (r/take 3 mapped))
Side-effect: 0
Side-effect: 1
Side-effect: 2
Side-effect: 3
5

Thus, using the map and take functions from the clojure.core.reducers namespace as shown above avoids the application of the square-with-side-effect function over all the elements in the [0 1 2 3 4 5] vector, as only the first three are required. The reducers library also provides reducer-based variants of the standard take-while, drop, flatten, and remove functions. Effectively, functions based on reducers will require fewer allocations than sequence-based functions, thus leading to an improvement in performance.
For example, consider the process and process-with-reducer functions, shown here:

Example 3.8. Functions to process a collection of numbers using sequences and reducers

(defn process [nums]
  (reduce + (map inc (map inc (map inc nums)))))

(defn process-with-reducer [nums]
  (reduce + (r/map inc (r/map inc (r/map inc nums)))))

The process function in Example 3.8 applies the inc function three times over a collection of numbers, represented by nums, using the map function. The process-with-reducer function performs the same action, but uses the reducer variant of the map function. The process-with-reducer function will take less time to produce its result from a large vector compared to the process function, as shown here:

user> (def nums (vec (range 1000000)))
#'user/nums
user> (time (process nums))
"Elapsed time: 471.217086 msecs"
500002500000
user> (time (process-with-reducer nums))
"Elapsed time: 356.767024 msecs"
500002500000

The process-with-reducer function gets a slight performance boost, as it requires fewer memory allocations than the process function. The performance of this computation can be improved even further if we can somehow parallelize it.

Summary

In this article, we explored the clojure.core.reducers library in detail. We took a look at how reducers are implemented and also at how we can use reducers to handle large collections of data in an efficient manner.
An Introduction to Python Lists and Dictionaries

Packt
09 Mar 2016
10 min read
In this article by Jessica Ingrassellino, the author of Python Projects for Kids, you will learn that Python has very efficient ways of storing data, which is one reason why it is popular among many companies that make web applications. You will learn about the two most important ways to store and retrieve data in Python: lists and dictionaries.

Lists

Lists have many different uses when coding, and many different operations can be performed on lists, thanks to Python. In this article, you will only learn some of the many uses of lists. However, if you wish to learn more about lists, the Python documentation is very detailed and available at https://docs.python.org/3/tutorial/datastructures.html?highlight=lists#more-on-lists.

First, it is important to note that a list is made by assigning it a name and putting the items in the list inside of square brackets []. In your Python shell, type these three lists, one on each line:

fruit = ['apple', 'banana', 'kiwi', 'dragonfruit']
years = [2012, 2013, 2014, 2015]
students_in_class = [30, 22, 28, 33]

The lists that you just typed each have a particular kind of data inside. However, one good feature of lists is that they can mix up data types within the same list. For example, I have made this list that combines strings and integers:

computer_class = ['Cynthia', 78, 42, 'Raj', 98, 24, 35, 'Kadeem', 'Rachel']

Now that we have made the lists, we can get the contents of the list in many ways. In fact, once you create a list, the computer remembers the order of the list, and the order stays constant until it is changed purposefully. The easiest way for us to see that the order of lists is maintained is to run tests on the lists that we have already made. The first item of a Python list is always counted as 0 (zero). So, for our first test, let's see if asking for the 0 item actually gives us the first item. Using our fruit list, we will type the name of the list inside of the print statement, and then add square brackets [] with the number 0:

print(fruit[0])

Your output will be apple, since apple is the first fruit in the list that we created earlier. So, we have evidence that counting in Python does start with 0. Now, we can try to print the fourth item in the fruit list. You will notice that we are entering 3 in our print command. This is because the first item started at 0. Type the following code into your Python shell:

print(fruit[3])

What is your outcome? Did you expect dragonfruit to be the answer? If so, good, you are learning to count items in lists. If not, remember that the first item in a list is the 0 item. With practice, you will become better at counting items in Python lists. For extra practice, work with the other lists that we made earlier, and try printing different items from the list by changing the number in the following line of code:

print(list_name[item_number])

Where the code says list_name, write the name of the list that you want to use. Where the code says item_number, write the number of the item that you want to print. Remember that lists begin counting at 0.

Changing the list – adding and removing information

Even though lists have order, lists can be changed. Items can be added to a list, removed from a list, or changed in a list. Again, there are many ways to interact with lists. We will only discuss a few here, but you can always read the Python documentation for more information. To add an item to our fruit list, for example, we can use a method called list.append().
Changing the list – adding and removing information

Even though lists have order, lists can be changed. Items can be added to a list, removed from a list, or changed in a list. Again, there are many ways to interact with lists. We will only discuss a few here, but you can always read the Python documentation for more information.

To add an item to our fruit list, for example, we can use a method called list.append(). To use this method, type the name of the list, a dot, the method name append, and then parentheses with the item that you would like to add contained inside. If the item is a string, remember to use single quotes. Type the following code to add an orange to the list of fruits that we have made:

fruit.append('orange')

Then, print the list of fruit to see that orange has been added to the list:

print(fruit)

Now, let's say that we no longer wish for the dragonfruit to appear on our list. We will use a method called list.remove(). To do this, we will type the name of our list, a dot, the method name remove, and then parentheses containing the name of the item that we wish to remove:

fruit.remove('dragonfruit')

Then, we will print the list to see that the dragonfruit has been removed:

print(fruit)

If you have more than one of the same item in the list, list.remove() will only remove the first instance of that item. The other items with the same name need to be removed separately.

Loops and lists

Lists and for loops work very well together. With lists, we can do something called iteration. By itself, the word iteration means to repeat a procedure over and over again. We know that for loops repeat things for a limited and specific number of times. In this sample, we have three colors in our list. Make this list in your Python terminal:

colors = ['green', 'yellow', 'red']

Using our list, we may decide that for each color in the list, we want to print the statement I see and add each color in our list. Using the for loop with the list, we can type the print statement one time and get three statements in return. Type the following for loop into your Python shell:

for color in colors:
    print('I see ' + str(color) + '.')

Once you are done typing the print line and pressing Enter twice, your for loop will start running, and you should see the following statements printed out in your Python shell:

I see green.
I see yellow.
I see red.

As you can imagine, lists and for loops are very powerful when used together. Instead of having to type the print line three times with three different pieces of code, we only had to type two lines of code. We used the str() function to make sure that the list items could be joined to the sentence that we printed. Our for loop is helpful because those two lines of code would still work if there were 20 colors in our list.

Dictionaries

Dictionaries are another way to organize data. At first glance, a dictionary may look just like a list. However, dictionaries have different jobs, rules, and syntax. Dictionaries have names and use curly braces to store information. For example, if we wanted to make a dictionary called numbers, we would put the dictionary entries inside of curly braces. Here is a simple example:

numbers = {'one': 1, 'two': 2, 'three': 3}

Key/value pairs in dictionaries

A dictionary stores information with things called keys and values. In a dictionary of items, for example, we may have keys that tell us the names of each item and values that tell us how many of each item we have in our inventory. Once we store these items in our dictionary, we can add or remove new items (keys), add new amounts (values), or change the amounts of existing items.

Here is an example of a dictionary that could hold some information for a game. Let's suppose that the hero in our game has some items needed to survive. Here is a dictionary of our hero's items:

items = {'arrows' : 200, 'rocks' : 25, 'food' : 15, 'lives' : 2}

Unlike lists, a dictionary uses keys and values to find information.
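Incidentally, the same for-loop pattern that we used with lists also works on dictionaries. Here is a quick extra sketch that loops over the hero's items dictionary and prints each key with its value:

# items dictionary from above
items = {'arrows' : 200, 'rocks' : 25, 'food' : 15, 'lives' : 2}

# .items() gives us each key/value pair in turn
for name, amount in items.items():
    print('The hero has ' + str(amount) + ' ' + str(name) + '.')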
This dictionary has the keys called arrows, rocks, food, and lives. Each of the numbers tells us the amount of each item that our hero has. Dictionaries have different characteristics than lists do, so we look up certain items in our dictionary using their keys inside the print function:

print(items['arrows'])

The result of this print command is 200, as this is the number of arrows our hero has in his inventory.

Changing the dictionary – adding and removing information

Python offers us ways not only to make a dictionary but also to add and remove things from our dictionaries. For example, let's say that in our game, we allow the player to discover a fireball later in the game. To add the item to the dictionary, we will use what is called the subscript method to add a new key and a new value to our dictionary. This means that we will use the name of the dictionary and square brackets to write the name of the item that we wish to add, and finally, we will set the value to how many items we want to put into our dictionary:

items['fireball'] = 10

If we print the entire dictionary of items, you will see that fireball has been added:

print(items)

items = {'arrows' : 200, 'rocks' : 25, 'food' : 15, 'lives' : 2, 'fireball' : 10}

We can also change the number of items in our dictionary using the dict.update() method. This method uses the name of the dictionary and the word update. Then, in parentheses (), we use curly braces {} to type the name of the item that we wish to update, a colon (:), and the new number of items we want in the dictionary. Try this in your Python shell:

items.update({'rocks':10})
print(items)

You will notice that when you print the items, you now have 10 rocks instead of 25. We have successfully updated our number of items.

To remove something from a dictionary, one must reference the key, or the name of the item, and delete the item. By doing so, the value that goes with the item will also be removed. In Python, this means using del along with the name of the dictionary and the name of the item you wish to remove. Using the items dictionary as our example, let's remove lives, and then use a print statement to test and see if the lives key was removed:

del items['lives']
print(items)

The items dictionary will now look as follows:

items = {'arrows' : 200, 'rocks' : 10, 'food' : 15, 'fireball' : 10}

With dictionaries, information is stored and retrieved differently than with lists, but we can still perform the same operations of adding and removing information, as well as making changes to information.

Summary

Lists and dictionaries are two different ways to store and retrieve information in Python. Although the sample data that we saw in this article was small, Python can handle large sets of data and process them at high speeds. Learning to use lists and dictionaries will allow you to solve complicated programming problems in many realms, including gaming, web app development, and data analysis.

Resources for Article:

Further resources on this subject:

Exception Handling in MySQL for Python [article]
Configuring and securing PYTHON LDAP Applications Part 1 [article]
Web scraping with Python (Part 2) [article]
Java Hibernate Collections, Associations, and Advanced Concepts

Packt
15 Sep 2015
16 min read
In this article by Yogesh Prajapati and Vishal Ranapariya, the authors of the book Java Hibernate Cookbook, you will find a complete guide to the following recipes:

- Working with a first-level cache
- One-to-one mapping using a common join table
- Persisting Map

(For more resources related to this topic, see here.)

Working with a first-level cache

Once we execute a particular query using hibernate, it always hits the database. As this process may be very expensive, hibernate provides the facility to cache objects within a certain boundary.

The basic actions performed in each database transaction are as follows:

1. The request reaches the database server via the network.
2. The database server processes the query in the query plan.
3. The database server executes the processed query.
4. The database server returns the result to the querying application through the network.
5. At last, the application processes the results.

This process is repeated every time we request a database operation, even if it is for a simple or small query. It is always a costly transaction to hit the database for the same records multiple times. Sometimes, we also face some delay in receiving the results because of network routing issues. There may be other parameters that contribute to the delay, but network routing issues play a major role in this cycle.

To overcome this issue, the database uses a mechanism that stores the result of a query, which is executed repeatedly, and uses this result again when the data is requested using the same query. These operations are done on the database side. Hibernate provides an in-built caching mechanism known as the first-level cache (L1 cache).

Following are some properties of the first-level cache:

- It is enabled by default. We cannot disable it even if we want to.
- The scope of the first-level cache is limited to a particular Session object only; the other Session objects cannot access it.
- All cached objects are destroyed once the session is closed.
- If we request an object, hibernate returns the object from the cache only if the requested object is found in the cache; otherwise, a database call is initiated.
- We can use Session.evict(Object object) to remove single objects from the session cache.
- The Session.clear() method is used to clear all the cached objects from the session.

Getting ready

Let's take a look at how the L1 cache works.

Creating the classes

For this recipe, we will create an Employee class and also insert some records into the table:

Source file: Employee.java

@Entity
@Table
public class Employee {

  @Id
  @GeneratedValue
  private long id;

  @Column(name = "name")
  private String name;

  // getters and setters

  @Override
  public String toString() {
    return "Employee: " + "\n\t Id: " + this.id + "\n\t Name: " + this.name;
  }
}

Creating the tables

Use the following table script if the hibernate.hbm2ddl.auto configuration property is not set to create:

Use the following script to create the employee table:

CREATE TABLE `employee` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `name` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`id`)
);

We will assume that two records are already inserted, as shown in the following employee table:

id | name
1  | Yogesh
2  | Aarush

Now, let's take a look at some scenarios that show how the first-level cache works.
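The recipes below use an already open hibernate session. As a minimal sketch of how such a session might be obtained — assuming a hibernate.cfg.xml on the classpath that maps the Employee entity (the configuration file and its contents are assumptions, not shown in the original recipe):

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class SessionHelper {
  public static Session openSession() {
    // reads hibernate.cfg.xml from the classpath (assumed to exist
    // and to map the Employee entity used in this recipe)
    SessionFactory factory = new Configuration().configure().buildSessionFactory();
    return factory.openSession();
  }
}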
How to do it…

Here is the code to see how caching works. In the code, we will load employee#1 and employee#2 once; after that, we will try to load the same employees again and see what happens:

Code

System.out.println("\nLoading employee#1...");
/* Line 2 */
Employee employee1 = (Employee) session.load(Employee.class, new Long(1));
System.out.println(employee1.toString());

System.out.println("\nLoading employee#2...");
/* Line 6 */
Employee employee2 = (Employee) session.load(Employee.class, new Long(2));
System.out.println(employee2.toString());

System.out.println("\nLoading employee#1 again...");
/* Line 10 */
Employee employee1_dummy = (Employee) session.load(Employee.class, new Long(1));
System.out.println(employee1_dummy.toString());

System.out.println("\nLoading employee#2 again...");
/* Line 15 */
Employee employee2_dummy = (Employee) session.load(Employee.class, new Long(2));
System.out.println(employee2_dummy.toString());

Output

Loading employee#1...
Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=?
Employee:
	Id: 1
	Name: Yogesh

Loading employee#2...
Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=?
Employee:
	Id: 2
	Name: Aarush

Loading employee#1 again...
Employee:
	Id: 1
	Name: Yogesh

Loading employee#2 again...
Employee:
	Id: 2
	Name: Aarush

How it works…

Here, we loaded Employee#1 and Employee#2, as shown in Line 2 and 6 respectively, and printed the output for both. It's clear from the output that hibernate hits the database to load Employee#1 and Employee#2 because at startup, no object is cached in hibernate. Now, in Line 10, we tried to load Employee#1 again. This time, hibernate did not hit the database but simply used the cached object, because Employee#1 was already loaded and this object is still in the session. The same thing happened with Employee#2.

Hibernate stores an object in the cache only if one of the following operations is completed:

- Save
- Update
- Get
- Load
- List

There's more…

In the previous section, we took a look at how caching works. Now, we will discuss the methods used to remove a cached object from the session:

- evict(Object object): This method removes a particular object from the session
- clear(): This method removes all the objects from the session

evict(Object object)

This method is used to remove a particular object from the session. It is very useful. The object is no longer available in the session once this method is invoked, and a request for the object hits the database:

Code

System.out.println("\nLoading employee#1...");
/* Line 2 */
Employee employee1 = (Employee) session.load(Employee.class, new Long(1));
System.out.println(employee1.toString());

/* Line 5 */
session.evict(employee1);
System.out.println("\nEmployee#1 removed using evict(…)...");

System.out.println("\nLoading employee#1 again...");
/* Line 9 */
Employee employee1_dummy = (Employee) session.load(Employee.class, new Long(1));
System.out.println(employee1_dummy.toString());

Output

Loading employee#1...
Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=?
Employee:
	Id: 1
	Name: Yogesh

Employee#1 removed using evict(…)...

Loading employee#1 again...
Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=?
Employee:
	Id: 1
	Name: Yogesh

Here, we loaded Employee#1, as shown in Line 2. This object was then cached in the session, but we explicitly removed it from the session cache in Line 5. So, the loading of Employee#1 again hits the database.
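A related call worth knowing — an addition beyond the recipe — is Session.contains(), which reports whether a given instance is currently held in the first-level cache. A short sketch, reusing the session and Employee entity from above:

// checking the first-level cache before and after evict()
Employee employee1 = (Employee) session.load(Employee.class, new Long(1));
System.out.println(session.contains(employee1));   // true: the object is in the session cache

session.evict(employee1);
System.out.println(session.contains(employee1));   // false: the object has been evicted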
clear()

This method is used to remove all the cached objects from the session cache. They will no longer be available in the session once this method is invoked, and requests for the objects hit the database:

Code

System.out.println("\nLoading employee#1...");
/* Line 2 */
Employee employee1 = (Employee) session.load(Employee.class, new Long(1));
System.out.println(employee1.toString());

System.out.println("\nLoading employee#2...");
/* Line 6 */
Employee employee2 = (Employee) session.load(Employee.class, new Long(2));
System.out.println(employee2.toString());

/* Line 9 */
session.clear();
System.out.println("\nAll objects removed from session cache using clear()...");

System.out.println("\nLoading employee#1 again...");
/* Line 13 */
Employee employee1_dummy = (Employee) session.load(Employee.class, new Long(1));
System.out.println(employee1_dummy.toString());

System.out.println("\nLoading employee#2 again...");
/* Line 17 */
Employee employee2_dummy = (Employee) session.load(Employee.class, new Long(2));
System.out.println(employee2_dummy.toString());

Output

Loading employee#1...
Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=?
Employee:
	Id: 1
	Name: Yogesh

Loading employee#2...
Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=?
Employee:
	Id: 2
	Name: Aarush

All objects removed from session cache using clear()...

Loading employee#1 again...
Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=?
Employee:
	Id: 1
	Name: Yogesh

Loading employee#2 again...
Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from Employee employee0_ where employee0_.id=?
Employee:
	Id: 2
	Name: Aarush

Here, Line 2 and 6 show how to load Employee#1 and Employee#2 respectively. We then removed all the objects from the session cache using the clear() method. As a result, the loading of both Employee#1 and Employee#2 again results in a database hit, as shown in Line 13 and 17.
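One practical use of clear() — an addition beyond the recipe, reusing the same Employee entity — is keeping the first-level cache from growing without bound during bulk inserts. A common sketch of that pattern:

// batch-insert pattern: flush pending SQL and clear the first-level
// cache every N records so session memory stays flat
Transaction tx = session.beginTransaction();
for (int i = 0; i < 10000; i++) {
    Employee employee = new Employee();
    employee.setName("employee-" + i);   // hypothetical generated name
    session.save(employee);
    if (i % 50 == 0) {
        session.flush();   // push the queued inserts to the database
        session.clear();   // detach the cached objects from the session
    }
}
tx.commit();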
One-to-one mapping using a common join table

In this method, we will use a third table that contains the relationship between the employee and detail tables. In other words, the third table will hold the primary key values of both tables to represent a relationship between them.

Getting ready

Use the following scripts and code to create the tables and classes. Here, we use Employee and EmployeeDetail to show a one-to-one mapping using a common join table:

Creating the tables

Use the following scripts to create the tables if you are not using hbm2ddl.auto=create|update:

Use the following script to create the detail table:

CREATE TABLE `detail` (
  `detail_id` bigint(20) NOT NULL AUTO_INCREMENT,
  `city` varchar(255) DEFAULT NULL,
  PRIMARY KEY (`detail_id`)
);

Use the following script to create the employee table:

CREATE TABLE `employee` (
  `employee_id` BIGINT(20) NOT NULL AUTO_INCREMENT,
  `name` VARCHAR(255) DEFAULT NULL,
  PRIMARY KEY (`employee_id`)
);

Use the following script to create the employee_detail table:

CREATE TABLE `employee_detail` (
  `detail_id` BIGINT(20) DEFAULT NULL,
  `employee_id` BIGINT(20) NOT NULL,
  PRIMARY KEY (`employee_id`),
  KEY `FK_DETAIL_ID` (`detail_id`),
  KEY `FK_EMPLOYEE_ID` (`employee_id`),
  CONSTRAINT `FK_EMPLOYEE_ID` FOREIGN KEY (`employee_id`) REFERENCES `employee` (`employee_id`),
  CONSTRAINT `FK_DETAIL_ID` FOREIGN KEY (`detail_id`) REFERENCES `detail` (`detail_id`)
);

Creating the classes

Use the following code to create the classes:

Source file: Employee.java

@Entity
@Table(name = "employee")
public class Employee {

  @Id
  @GeneratedValue
  @Column(name = "employee_id")
  private long id;

  @Column(name = "name")
  private String name;

  @OneToOne(cascade = CascadeType.ALL)
  @JoinTable(
      name = "employee_detail",
      joinColumns = @JoinColumn(name = "employee_id"),
      inverseJoinColumns = @JoinColumn(name = "detail_id")
  )
  private Detail employeeDetail;

  public long getId() {
    return id;
  }

  public void setId(long id) {
    this.id = id;
  }

  public String getName() {
    return name;
  }

  public void setName(String name) {
    this.name = name;
  }

  public Detail getEmployeeDetail() {
    return employeeDetail;
  }

  public void setEmployeeDetail(Detail employeeDetail) {
    this.employeeDetail = employeeDetail;
  }

  @Override
  public String toString() {
    return "Employee"
        + "\n Id: " + this.id
        + "\n Name: " + this.name
        + "\n Employee Detail "
        + "\n\t Id: " + this.employeeDetail.getId()
        + "\n\t City: " + this.employeeDetail.getCity();
  }
}

Source file: Detail.java

@Entity
@Table(name = "detail")
public class Detail {

  @Id
  @GeneratedValue
  @Column(name = "detail_id")
  private long id;

  @Column(name = "city")
  private String city;

  @OneToOne(cascade = CascadeType.ALL)
  @JoinTable(
      name = "employee_detail",
      joinColumns = @JoinColumn(name = "detail_id"),
      inverseJoinColumns = @JoinColumn(name = "employee_id")
  )
  private Employee employee;

  public Employee getEmployee() {
    return employee;
  }

  public void setEmployee(Employee employee) {
    this.employee = employee;
  }

  public String getCity() {
    return city;
  }

  public void setCity(String city) {
    this.city = city;
  }

  public long getId() {
    return id;
  }

  public void setId(long id) {
    this.id = id;
  }

  @Override
  public String toString() {
    return "Employee Detail"
        + "\n Id: " + this.id
        + "\n City: " + this.city
        + "\n Employee "
        + "\n\t Id: " + this.employee.getId()
        + "\n\t Name: " + this.employee.getName();
  }
}

How to do it…

In this section, we will take a look at how to insert a record step by step.

Inserting a record

Using the following code, we will insert an Employee record with a Detail object:

Code

Detail detail = new Detail();
detail.setCity("AHM");

Employee employee = new Employee();
employee.setName("vishal");
employee.setEmployeeDetail(detail);

Transaction transaction = session.getTransaction();
transaction.begin();
session.save(employee);
transaction.commit();

Output

Hibernate: insert into detail (city) values (?)
Hibernate: insert into employee (name) values (?)
Hibernate: insert into employee_detail (detail_id, employee_id) values (?,?)

Hibernate saves one record in the detail table and one in the employee table, and then inserts a record into the third table, employee_detail, using the primary key column values of the detail and employee tables.

How it works…

From the output, it's clear how this method works. The code is the same as in the other methods of configuring a one-to-one relationship, but here, hibernate reacts differently. The first two statements of the output insert the records into the detail and employee tables respectively, and the third statement inserts the mapping record into the third table, employee_detail, using the primary key column values of both tables.

Let's take a look at the options used in the previous code in detail:

- @JoinTable: This annotation, written on the Employee class, contains the name="employee_detail" attribute and shows that a new intermediate table is created with the name "employee_detail"
- joinColumns=@JoinColumn(name="employee_id"): This shows that a reference column is created in employee_detail with the name "employee_id", which is the primary key of the employee table
- inverseJoinColumns=@JoinColumn(name="detail_id"): This shows that a reference column is created in the employee_detail table with the name "detail_id", which is the primary key of the detail table

Ultimately, the third table, employee_detail, is created with two columns: one is "employee_id" and the other is "detail_id".
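Reading the mapping back works just as you would expect. As a short sketch — an addition to the recipe, assuming the employee inserted above received id 1 — load the Employee and navigate to its Detail:

// load the employee saved above and follow the one-to-one link
Employee employee = (Employee) session.get(Employee.class, 1l);

// hibernate joins through employee_detail to fetch the Detail object
Detail detail = employee.getEmployeeDetail();
System.out.println("City: " + detail.getCity());   // prints AHM for the record above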
Persisting Map

Map is used when we want to persist a collection of key/value pairs where the key is always unique. Some common implementations of java.util.Map are java.util.HashMap, java.util.LinkedHashMap, and so on. For this recipe, we will use java.util.HashMap.

Getting ready

Now, let's assume that we have a scenario where we are going to implement Map<String, String>; here, the key String is the e-mail address label, and the value String is the e-mail address. For example, we will try to construct a data structure similar to <"Personal e-mail", "emailaddress2@provider2.com">, <"Business e-mail", "emailaddress1@provider1.com">. This means that we will create an alias for the actual e-mail address so that we can easily get the e-mail address using the alias and can document it in a more readable form. This type of implementation depends on the custom requirement; here, we can easily get a business e-mail using the Business email key.

Use the following code to create the required tables and classes.

Creating tables

Use the following script to create the tables if you are not using hbm2ddl.auto=create|update. This script is for the tables that are generated by hibernate:

Use the following code to create the email table:

CREATE TABLE `email` (
  `Employee_id` BIGINT(20) NOT NULL,
  `emails` VARCHAR(255) DEFAULT NULL,
  `emails_KEY` VARCHAR(255) NOT NULL DEFAULT '',
  PRIMARY KEY (`Employee_id`,`emails_KEY`),
  KEY `FK5C24B9C38F47B40` (`Employee_id`),
  CONSTRAINT `FK5C24B9C38F47B40` FOREIGN KEY (`Employee_id`) REFERENCES `employee` (`id`)
);

Use the following code to create the employee table:

CREATE TABLE `employee` (
  `id` BIGINT(20) NOT NULL AUTO_INCREMENT,
  `name` VARCHAR(255) DEFAULT NULL,
  PRIMARY KEY (`id`)
);

Creating a class

Source file: Employee.java

@Entity
@Table(name = "employee")
public class Employee {

  @Id
  @GeneratedValue
  @Column(name = "id")
  private long id;

  @Column(name = "name")
  private String name;

  @ElementCollection
  @CollectionTable(name = "email")
  private Map<String, String> emails;

  public long getId() {
    return id;
  }

  public void setId(long id) {
    this.id = id;
  }

  public String getName() {
    return name;
  }

  public void setName(String name) {
    this.name = name;
  }

  public Map<String, String> getEmails() {
    return emails;
  }

  public void setEmails(Map<String, String> emails) {
    this.emails = emails;
  }

  @Override
  public String toString() {
    return "Employee"
        + "\n\tId: " + this.id
        + "\n\tName: " + this.name
        + "\n\tEmails: " + this.emails;
  }
}

How to do it…

Here, we will consider how to work with Map and its manipulation operations, such as inserting, retrieving, deleting, and updating.

Inserting a record

Here, we will create one employee record with two e-mail addresses:

Code

Employee employee = new Employee();
employee.setName("yogesh");

Map<String, String> emails = new HashMap<String, String>();
emails.put("Business email", "emailaddress1@provider1.com");
emails.put("Personal email", "emailaddress2@provider2.com");
employee.setEmails(emails);

session.getTransaction().begin();
session.save(employee);
session.getTransaction().commit();

Output

Hibernate: insert into employee (name) values (?)
Hibernate: insert into email (Employee_id, emails_KEY, emails) values (?,?,?)
Hibernate: insert into email (Employee_id, emails_KEY, emails) values (?,?,?)

When the code is executed, it inserts one record into the employee table and two records into the email table, and it also sets the primary key value of the employee record in each record of the email table as a reference.

Retrieving a record

Here, we know that our record was inserted with id 1. So, we will try to get only that record and understand how Map works in our case.

Code

Employee employee = (Employee) session.get(Employee.class, 1l);
System.out.println(employee.toString());
System.out.println("Business email: " + employee.getEmails().get("Business email"));

Output

Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from employee employee0_ where employee0_.id=?
Hibernate: select emails0_.Employee_id as Employee1_0_0_, emails0_.emails as emails0_, emails0_.emails_KEY as emails3_0_ from email emails0_ where emails0_.Employee_id=?
Employee
	Id: 1
	Name: yogesh
	Emails: {Personal email=emailaddress2@provider2.com, Business email=emailaddress1@provider1.com}
Business email: emailaddress1@provider1.com

Here, we can easily get the business e-mail address using the Business email key from the map of e-mail addresses. This is just a simple scenario created to demonstrate how to persist Map in hibernate.
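If you need every address rather than one known key, you can walk the whole map. A small sketch — an addition to the recipe, reusing the employee loaded above:

// print every e-mail label/address pair stored for the employee
for (Map.Entry<String, String> entry : employee.getEmails().entrySet()) {
    System.out.println(entry.getKey() + " -> " + entry.getValue());
}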
Updating a record

Here, we will try to add one more e-mail address to Employee#1:

Code

Employee employee = (Employee) session.get(Employee.class, 1l);
Map<String, String> emails = employee.getEmails();
emails.put("Personal email 1", "emailaddress3@provider3.com");

session.getTransaction().begin();
session.saveOrUpdate(employee);
session.getTransaction().commit();
System.out.println(employee.toString());

Output

Hibernate: select employee0_.id as id0_0_, employee0_.name as name0_0_ from employee employee0_ where employee0_.id=?
Hibernate: select emails0_.Employee_id as Employee1_0_0_, emails0_.emails as emails0_, emails0_.emails_KEY as emails3_0_ from email emails0_ where emails0_.Employee_id=?
Hibernate: insert into email (Employee_id, emails_KEY, emails) values (?, ?, ?)
Employee
	Id: 1
	Name: yogesh
	Emails: {Personal email 1=emailaddress3@provider3.com, Personal email=emailaddress2@provider2.com, Business email=emailaddress1@provider1.com}

Here, we added a new e-mail address with the Personal email 1 key and the value emailaddress3@provider3.com.

Deleting a record

Here again, we will try to delete the records of Employee#1 using the following code:

Code

Employee employee = new Employee();
employee.setId(1);

session.getTransaction().begin();
session.delete(employee);
session.getTransaction().commit();

Output

Hibernate: delete from email where Employee_id=?
Hibernate: delete from employee where id=?

While deleting the object, hibernate deletes the child records (here, the e-mail addresses) as well.

How it works…

Here again, we need to understand the table structure created by hibernate: hibernate creates a composite primary key in the email table using two fields, Employee_id and emails_KEY.

Summary

In this article, you familiarized yourself with recipes such as working with a first-level cache, one-to-one mapping using a common join table, and persisting Map.

Resources for Article:

Further resources on this subject:

PostgreSQL in Action [article]
OpenShift for Java Developers [article]
Oracle 12c SQL and PL/SQL New Features [article]