How-To Tutorials - Programming

1083 Articles

Python Testing: Installing the Robot Framework

Packt
20 Jun 2011
2 min read
How to do it...

1. Be sure to activate your virtualenv sandbox.
2. Install Robot Framework by typing: easy_install robotframework.
3. Using any type of window navigator, go to <virtualenv root>/build/robotframework/doc/quickstart and open quickstart.html with your favorite browser. This is not only a guide but also a runnable test suite (a minimal sketch of the test-suite format appears after this recipe's resource list).
4. Switch to your virtualenv's build directory for Robot Framework: cd <virtualenv root>/build/robotframework/doc/quickstart.
5. Run the Quick Start manual through pybot to verify the installation: pybot quickstart.html.
6. Inspect the report.html, log.html, and output.xml files generated by the test run.
7. Install the Robot Framework Selenium library to allow integration with Selenium by first downloading it from http://robotframework.org/SeleniumLibrary/.
8. Unpack the tarball.
9. Switch to the directory: cd robotframework-seleniumlibrary-2.5.
10. Install the package: python setup.py install.
11. Switch to the demo directory: cd demo.
12. Start up the demo web app: python rundemo.py demoapp start.
13. Start up the Selenium server: python rundemo.py selenium start.
14. Run the demo tests: pybot login_tests.
15. Shut down the demo web app: python rundemo.py demoapp stop.
16. Shut down the Selenium server: python rundemo.py selenium stop.
17. Inspect the report.html, log.html, output.xml, and selenium_log.txt files generated by the test run.

Summary

With this recipe, we have installed the Robot Framework and one third-party library that integrates Robot with Selenium.

Further resources on this subject:
  • Inheritance in Python
  • Python Testing: Mock Objects
  • Python: Unit Testing with Doctest
  • Tips & Tricks on MySQL for Python
  • Testing Tools and Techniques in Python
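As mentioned in step 3, quickstart.html is itself a runnable Robot Framework test suite. Purely as a hedged illustration of what that plain-text test format looks like (this example is illustrative and is not taken from the Quick Start guide), a minimal suite could be:

*** Test Cases ***
Simple Arithmetic
    ${sum} =    Evaluate    1 + 1
    Should Be Equal As Integers    ${sum}    2    # BuiltIn keyword; no extra library needed

Saved as, say, minimal.txt, it could be run with pybot minimal.txt and would produce the same report.html, log.html, and output.xml artifacts as the Quick Start suite.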


Web Services in Microsoft Azure

Packt
29 Nov 2010
8 min read
A web service is not one single entity; it consists of three distinct parts:

  • An endpoint, which is the URL (and related information) where client applications will find our service
  • A host environment, which in our case will be Azure
  • A service class, which is the code that implements the methods called by the client application

A web service endpoint is more than just a URL. An endpoint also includes:

  • The bindings, or communication and security protocols
  • The contract (or promise) that certain methods exist, how these methods should be called, and what the data will look like when returned

A simple way to remember the components of an endpoint is A/B/C, that is, address/bindings/contract.

Web services can fill many roles in our Azure applications—from serving as a simple way to place messages into a queue, to being a complete replacement for a data access layer in a web application (also known as a Service Oriented Architecture or SOA). In Azure, web services serve as HTTP/HTTPS endpoints, which can be accessed by any application that supports REST, regardless of language or operating system.

The intrinsic web services libraries in .NET are called Windows Communication Foundation (WCF). As WCF is designed specifically for programming web services, it's referred to as a service-oriented programming model. We are not limited to using WCF libraries in Azure development, but because WCF is part of the .NET Framework, we expect it to be a popular choice for constructing web services. A complete introduction to WCF can be found at http://msdn.microsoft.com/en-us/netframework/aa663324.aspx.

When adding WCF services to an Azure web role, we can either create a separate web role instance or add the web services to an existing web role. Using separate instances allows us to scale the web services independently of the web forms, but multiple instances increase our operating costs. Separate instances also allow us to use different technologies for each Azure instance; for example, the web form may be written in PHP and hosted on Apache, while the web services may be written in Java and hosted using Tomcat. Using the same instance keeps our costs much lower, but then we have to scale the web forms and the web services together. Depending on our application's architecture, this may not be desirable.

Securing WCF

Stored data is only as secure as the application used to access it. The Internet is stateless, and REST has no sense of security, so security information must be passed as part of the data in each request. If the credentials are not encrypted, then all requests should be forced to use HTTPS. If we control the consuming client applications, we can also control the encryption of the user credentials. Otherwise, our only choice may be to use clear-text credentials via HTTPS.

For an application with a wide or uncontrolled distribution (like most commercial applications want to be), or if we are to support a number of home-brewed applications, the authorization information must be unique to the user. Part of the behind-the-services code should check whether the user making the request can be authenticated, and whether the user is authorized to perform the action. This adds coding overhead, but it's easier to plan for this up front. There are a number of ways to secure web services—from using HTTPS and passing credentials with each request, to using authentication tokens in each request.
As it happens, using authentication tokens is part of the AppFabric Access Control, and we'll look more into security for WCF when we dive deeper into Access Control.

Jupiter Motors web service

In our corporate portal for Jupiter Motors, we included a design for a client application, which our delivery personnel will use to update the status of an order and to record which customers have accepted delivery of their vehicle. For accounting and insurance reasons, the order status needs to be updated immediately after a customer accepts their vehicle. To do so, the client application will call a web service to update the order status as soon as the Accepted button is clicked.

Our WCF service is interconnected with other parts of our Jupiter Motors application, so we won't see it completely in action until it all comes together. In the meantime, it will seem like we're developing blind. In reality, all the components would probably be developed and tested simultaneously.

Creating a new WCF service web role

When creating a web service, we have a choice to add the web service to an existing web role or create a new web role. This helps us deploy and maintain our website application separately from our web services. And in order for us to scale the web role independently from the worker role, we'll create our web service in a role separate from our web application.

Creating a new WCF service web role is very simple—Visual Studio will do the "hard work" for us and allow us to start coding our services. First, open the JupiterMotors project. Create the new web role by right-clicking on the Roles folder in our project, choosing Add, and then selecting the New Web Role Project… option. When we do this, we will be asked what type of web role we want to create. We will choose a WCF Service Web Role, call it JupiterMotorsWCFRole, and click on the Add button. Because different services must have unique names in our project, a good naming convention to use is the project name concatenated with the type of role. This makes the different roles and instances easily discernible and complies with the unique naming requirement.

This is where Visual Studio does its magic. It creates the new role in the cloud project, creates a new web role for our WCF web services, and creates some template code for us. The template service created is called "Service1". You will see both a Service1.svc file and an IService1.vb file. Also, a web.config file (as we would expect to see in any web role) is created in the web role and is already wired up for our Service1 web service. All of the generated code is very helpful if you are learning WCF web services. This is what we should see once Visual Studio finishes creating the new project:

We are going to start afresh with our own services—we can delete Service1.svc and IService1.vb. Also, in the web.config file, the following boilerplate code can be deleted (we'll add our own code as needed):

<system.serviceModel>
  <services>
    <service name="JupiterMotorsWCFRole.Service1"
             behaviorConfiguration="JupiterMotorsWCFRole.Service1Behavior">
      <!-- Service Endpoints -->
      <endpoint address="" binding="basicHttpBinding"
                contract="JupiterMotorsWCFRole.IService1">
        <!--
          Upon deployment, the following identity element should be removed or replaced
          to reflect the identity under which the deployed service runs. If removed,
          WCF will infer an appropriate identity automatically.
        -->
        <identity>
          <dns value="localhost"/>
        </identity>
      </endpoint>
      <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange"/>
    </service>
  </services>
  <behaviors>
    <serviceBehaviors>
      <behavior name="JupiterMotorsWCFRole.Service1Behavior">
        <!-- To avoid disclosing metadata information, set the value below to false
             and remove the metadata endpoint above before deployment -->
        <serviceMetadata httpGetEnabled="true"/>
        <!-- To receive exception details in faults for debugging purposes, set the value
             below to true. Set to false before deployment to avoid disclosing exception
             information -->
        <serviceDebug includeExceptionDetailInFaults="false"/>
      </behavior>
    </serviceBehaviors>
  </behaviors>
</system.serviceModel>

Let's now add a WCF service to the JupiterMotorsWCFRole project. To do so, right-click on the project, then Add, and select the New Item... option. We now choose a WCF service and will name it ERPService.svc:

Just like the generated code when we created the web role, ERPService.svc and IERPService.vb files were created for us, and these are now wired into the web.config file. There is some generated code in the ERPService.svc and IERPService.vb files, but we will replace this with our own code in the next section. When we create a web service, the actual service class is created with the name we specify. Additionally, an interface class is automatically created. We can specify the name for the class; however, being an interface class, it will always have a name beginning with the letter I. This is a special type of interface class, called a service contract. The service contract provides a description of what methods and return types are available in our web service.
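To make that closing point about service contracts concrete, here is a hedged VB.NET sketch of what a minimal contract and service class might look like. The UpdateOrderStatus operation and its parameters are hypothetical placeholders for illustration, not the actual Jupiter Motors interface that gets built in the next section:

Imports System.ServiceModel

<ServiceContract()>
Public Interface IERPService
    ' Hypothetical operation: mark an order as accepted once the customer takes delivery
    <OperationContract()>
    Sub UpdateOrderStatus(ByVal orderNumber As Integer, ByVal newStatus As String)
End Interface

Public Class ERPService
    Implements IERPService

    Public Sub UpdateOrderStatus(ByVal orderNumber As Integer, ByVal newStatus As String) _
        Implements IERPService.UpdateOrderStatus
        ' Hypothetical body: update the order record in the data store
    End Sub
End Class

The contract (the interface) is what clients bind to; the service class simply implements it, which is why OSB-style intermediaries and WCF clients only ever need the interface and endpoint details.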


Testing Your Application with cljs.test

Packt
11 May 2016
13 min read
In this article written by David Jarvis, Rafik Naccache, and Allen Rohner, authors of the book Learning ClojureScript, we'll take a look at how to configure our ClojureScript application or library for testing. As usual, we'll start by creating a new project for us to play around with:

$ lein new figwheel testing

(For more resources related to this topic, see here.)

We'll be playing around in a test directory. Most JVM Clojure projects will have one already, but since the default Figwheel template doesn't include a test directory, let's make one first (following the same convention used with source directories, that is, instead of src/$PROJECT_NAME we'll create test/$PROJECT_NAME):

$ mkdir -p test/testing

We'll now want to make sure that Figwheel knows that it has to watch the test directory for file modifications. To do that, we will edit the dev build in our project.clj project's :cljsbuild map so that its :source-paths vector includes both src and test. Your new dev build configuration should look like the following:

{:id "dev"
 :source-paths ["src" "test"]
 ;; If no code is to be run, set :figwheel true for continued automagical reloading
 :figwheel {:on-jsload "testing.core/on-js-reload"}
 :compiler {:main testing.core
            :asset-path "js/compiled/out"
            :output-to "resources/public/js/compiled/testing.js"
            :output-dir "resources/public/js/compiled/out"
            :source-map-timestamp true}}

Next, we'll get the old Figwheel REPL going so that we can have our ever familiar hot reloading:

$ cd testing
$ rlwrap lein figwheel

Don't forget to navigate a browser window to http://localhost:3449/ to get the browser REPL to connect.

Now, let's create a new core_test.cljs file in the test/testing directory. By convention, most libraries and applications in Clojure and ClojureScript have test files that correspond to source files with the suffix _test. In this project, this means that test/testing/core_test.cljs is intended to contain the tests for src/testing/core.cljs. Let's get started by just running tests on a single file. Inside core_test.cljs, let's add the following code:

(ns testing.core-test
  (:require [cljs.test :refer-macros [deftest is]]))

(deftest i-should-fail
  (is (= 1 0)))

(deftest i-should-succeed
  (is (= 1 1)))

This code first requires two of the most important cljs.test macros, and then gives us two simple examples of what a failed test and a successful test should look like. At this point, we can run our tests from the Figwheel REPL:

cljs.user=> (require 'testing.core-test)
;; => nil
cljs.user=> (cljs.test/run-tests 'testing.core-test)

Testing testing.core-test

FAIL in (i-should-fail) (cljs/test.js?zx=icyx7aqatbda:430:14)
expected: (= 1 0)
  actual: (not (= 1 0))

Ran 2 tests containing 2 assertions.
1 failures, 0 errors.
;; => nil

At this point, what we've got is tolerable, but it's not really practical in terms of being able to test a larger application. We don't want to have to test our application in the REPL and pass in our test namespaces one by one. The current idiomatic solution for this in ClojureScript is to write a separate test runner that is responsible for importing and then running all of your tests. Let's take a look at what this looks like. Let's start by creating another test namespace.
Let's call this one app_test.cljs, and we'll put the following in it:

(ns testing.app-test
  (:require [cljs.test :refer-macros [deftest is]]))

(deftest another-successful-test
  (is (= 4 (count "test"))))

We will not do anything remarkable here; it's just another test namespace with a single test that should pass by itself. Let's quickly make sure that's the case at the REPL:

cljs.user=> (require 'testing.app-test)
nil
cljs.user=> (cljs.test/run-tests 'testing.app-test)

Testing testing.app-test

Ran 1 tests containing 1 assertions.
0 failures, 0 errors.
;; => nil

Perfect. Now, let's write a test runner. Let's open a new file that we'll simply call test_runner.cljs, and let's include the following:

(ns testing.test-runner
  (:require [cljs.test :refer-macros [run-tests]]
            [testing.app-test]
            [testing.core-test]))

;; This isn't strictly necessary, but is a good idea depending
;; upon your application's ultimate runtime engine.
(enable-console-print!)

(defn run-all-tests []
  (run-tests 'testing.app-test
             'testing.core-test))

Again, nothing surprising. We're just making a single function that runs all of our tests. This is handy for us at the REPL:

cljs.user=> (testing.test-runner/run-all-tests)

Testing testing.app-test

Testing testing.core-test

FAIL in (i-should-fail) (cljs/test.js?zx=icyx7aqatbda:430:14)
expected: (= 1 0)
  actual: (not (= 1 0))

Ran 3 tests containing 3 assertions.
1 failures, 0 errors.
;; => nil

Ultimately, however, we want something we can run at the command line so that we can use it in a continuous integration environment. There are a number of ways we can go about configuring this directly, but if we're clever, we can let someone else do the heavy lifting for us. Enter doo, the handy ClojureScript testing plugin for Leiningen.

Using doo for easier testing configuration

doo is a library and Leiningen plugin for running cljs.test in many different JavaScript environments. It makes it easy to test your ClojureScript regardless of whether you're writing for the browser or for the server, and it also includes file-watching capabilities, as Figwheel does, so that you can automatically rerun tests on file changes. The doo project page can be found at https://github.com/bensu/doo.

To configure our project to use doo, first we need to add it to the list of plugins in our project.clj file. Modify the :plugins key so that it looks like the following:

:plugins [[lein-figwheel "0.5.2"]
          [lein-doo "0.1.6"]
          [lein-cljsbuild "1.1.3" :exclusions [[org.clojure/clojure]]]]

Next, we will add a new cljsbuild build configuration for our test runner. Add the following build map after the dev build map we've been working with until now:

{:id "test"
 :source-paths ["src" "test"]
 :compiler {:main testing.test-runner
            :output-to "resources/public/js/compiled/testing_test.js"
            :optimizations :none}}

This configuration tells cljsbuild to use both our src and test directories, just like our dev profile. It adds some different configuration elements to the compiler options, however. First, we're not using testing.core as our main namespace anymore—instead, we'll use our test runner's namespace, testing.test-runner. We will also change the output JavaScript file to a different location from our compiled application code. Lastly, we will make sure that we pass in :optimizations :none so that the compiler runs quickly and doesn't have to do any magic to look things up.
Note that our currently running Figwheel process won't know that we've added lein-doo to our list of plugins or that we've added a new build configuration. If you want to make Figwheel aware of doo in a way that'll allow them to play nicely together, you should also add doo as a dependency to your project. Once you've done that, exit the Figwheel process and restart it after you've saved the changes to project.clj.

Lastly, we need to modify our test runner namespace so that it's compatible with doo. To do this, open test_runner.cljs and change it to the following:

(ns testing.test-runner
  (:require [doo.runner :refer-macros [doo-tests]]
            [testing.app-test]
            [testing.core-test]))

;; This isn't strictly necessary, but is a good idea depending
;; upon your application's ultimate runtime engine.
(enable-console-print!)

(doo-tests 'testing.app-test
           'testing.core-test)

This shouldn't look too different from our original test runner—we're just importing from doo.runner rather than cljs.test and using doo-tests instead of a custom runner function. The doo-tests runner works very similarly to cljs.test/run-tests, but it places hooks around the tests to know when to start them and finish them. We're also putting this at the top level of our namespace rather than wrapping it in a particular function.

The last thing we're going to need to do is install a JavaScript runtime that we can use to execute our tests. Up until now, we've been using the browser via Figwheel, but ideally, we want to be able to run our tests in a headless environment as well. For this purpose, we recommend installing PhantomJS (though other execution environments are also fine). If you're on OS X and have Homebrew installed (http://www.brew.sh), installing PhantomJS is as simple as typing brew install phantomjs. If you're not on OS X or don't have Homebrew, you can find instructions on how to install PhantomJS on the project's website at http://phantomjs.org/. The key thing is that the following should work:

$ phantomjs -v
2.0.0

Once you've got PhantomJS installed, you can now invoke your test runner from the command line with the following:

$ lein doo phantom test once

;; ======================================================================
;; Testing with Phantom:

Testing testing.app-test

Testing testing.core-test

FAIL in (i-should-fail) (:)
expected: (= 1 0)
  actual: (not (= 1 0))

Ran 3 tests containing 3 assertions.
1 failures, 0 errors.
Subprocess failed

Let's break down this command. The first part, lein doo, just tells Leiningen to invoke the doo plugin. Next, we have phantom, which tells doo to use PhantomJS as its running environment. The doo plugin supports a number of other environments, including Chrome, Firefox, Internet Explorer, Safari, Opera, SlimerJS, NodeJS, Rhino, and Nashorn. Be aware that if you're interested in running doo on one of these other environments, you may have to configure and install additional software. For instance, if you want to run tests on Chrome, you'll need to install Karma as well as the appropriate Karma npm modules to enable Chrome interaction.

Next we have test, which refers to the cljsbuild build ID we set up earlier. Lastly, we have once, which tells doo to run the tests a single time and not to set up a filesystem watcher. If, instead, we wanted doo to watch the filesystem and rerun tests on any changes, we would just use lein doo phantom test.

Testing fixtures

The cljs.test project has support for adding fixtures to your tests that can run before and after your tests.
Test fixtures are useful for establishing isolated states between tests—for instance, you can use fixtures to set up a specific database state before each test and to tear it down afterward. You can add them to your ClojureScript tests by declaring them with the use-fixtures macro within the testing namespace you want fixtures applied to. Let's see what this looks like in practice by changing one of our existing tests and adding some fixtures to it. Modify app_test.cljs to the following:

(ns testing.app-test
  (:require [cljs.test :refer-macros [deftest is use-fixtures]]))

;; Run these fixtures for each test.
;; We could also use :once instead of :each in order to run
;; fixtures once for the entire namespace instead of once for
;; each individual test.
(use-fixtures :each
  {:before (fn [] (println "Setting up tests..."))
   :after  (fn [] (println "Tearing down tests..."))})

(deftest another-successful-test
  ;; Give us an idea of when this test actually executes.
  (println "Running a test...")
  (is (= 4 (count "test"))))

Here, we've added a call to use-fixtures that prints to the console before and after running the test, and we've added a println call to the test itself so that we know when it executes. Now when we run this test, we get the following:

$ lein doo phantom test once

;; ======================================================================
;; Testing with Phantom:

Testing testing.app-test
Setting up tests...
Running a test...
Tearing down tests...

Testing testing.core-test

FAIL in (i-should-fail) (:)
expected: (= 1 0)
  actual: (not (= 1 0))

Ran 3 tests containing 3 assertions.
1 failures, 0 errors.
Subprocess failed

Note that our fixtures get called in the order we expect them to.

Asynchronous testing

Because client-side code is frequently asynchronous and JavaScript is single threaded, we need a way to support asynchronous tests. To do this, we can use the async macro from cljs.test. Let's take a look at an example using an asynchronous HTTP GET request. First, let's modify our project.clj file to add cljs-ajax to our dependencies. Our :dependencies project key should now look something like this:

:dependencies [[org.clojure/clojure "1.8.0"]
               [org.clojure/clojurescript "1.7.228"]
               [cljs-ajax "0.5.4"]
               [org.clojure/core.async "0.2.374"
                :exclusions [org.clojure/tools.reader]]]

Next, let's create a new async_test.cljs file in our test/testing directory. Inside it, we will add the following code:

(ns testing.async-test
  (:require [ajax.core :refer [GET]]
            [cljs.test :refer-macros [deftest is async]]))

(deftest test-async
  (GET "http://www.google.com"
       ;; will always fail from PhantomJS because
       ;; `Access-Control-Allow-Origin` won't allow
       ;; our headless browser to make requests to Google.
       {:error-handler
        (fn [res]
          (is (= (:status-text res) "Request failed."))
          (println "Test finished!"))}))

Note that we're not using async in our test at the moment. Let's try running this test with doo (don't forget that you have to add testing.async-test to test_runner.cljs!):

$ lein doo phantom test once

...

Testing testing.async-test

...

Ran 4 tests containing 3 assertions.
1 failures, 0 errors.
Subprocess failed

Now, our test here passes, but note that the println in the asynchronous callback never fires, and our additional assertion doesn't get called (looking back at our previous examples, since we've added a new is assertion we should expect to see four assertions in the final summary)!
If we actually want our test to appropriately validate the error-handler callback within the context of the test, we need to wrap it in an async block. Doing so gives us a test that looks like the following:

(deftest test-async
  (async done
    (GET "http://www.google.com"
         ;; will always fail from PhantomJS because
         ;; `Access-Control-Allow-Origin` won't allow
         ;; our headless browser to make requests to Google.
         {:error-handler
          (fn [res]
            (is (= (:status-text res) "Request failed."))
            (println "Test finished!")
            (done))})))

Now, let's try to run our tests again:

$ lein doo phantom test once

...

Testing testing.async-test
Test finished!

...

Ran 4 tests containing 4 assertions.
1 failures, 0 errors.
Subprocess failed

Awesome! Note that this time we see the printed statement from our callback, and we can see that cljs.test properly ran all four of our assertions.

Asynchronous fixtures

One final "gotcha" on testing: the fixtures we talked about earlier in this article do not handle asynchronous code automatically. This means that if you have a :before fixture that executes asynchronous logic, your test can begin running before your fixture has completed! In order to get around this, all you need to do is wrap your :before fixture in an async block, just like with asynchronous tests. Consider the following, for instance:

(use-fixtures :once
  {:before #(async done ... (done))
   :after  #(do ...)})

(A fuller sketch of an asynchronous fixture appears after the resource list at the end of this excerpt.)

Summary

This concludes our section on cljs.test. Testing, whether in ClojureScript or any other language, is a critical software engineering best practice to ensure that your application behaves the way you expect it to and to protect you and your fellow developers from accidentally introducing bugs to your application. With cljs.test and doo, you have the power and flexibility to test your ClojureScript application with multiple browsers and JavaScript environments and to integrate your tests into a larger continuous testing framework.

Resources for Article:

Further resources on this subject:
  • Clojure for Domain-specific Languages - Design Concepts with Clojure [article]
  • Visualizing my Social Graph with d3.js [article]
  • Improving Performance with Parallel Programming [article]
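As referenced above, here is a hedged, fuller sketch of an asynchronous :before fixture. The js/setTimeout call is a hypothetical stand-in for real asynchronous setup work (seeding a test database, fetching configuration, and so on), and the namespace is assumed to refer the use-fixtures and async macros from cljs.test:

(use-fixtures :once
  {:before (fn []
             (async done
               ;; hypothetical asynchronous setup; js/setTimeout stands in
               ;; for any callback-based API
               (js/setTimeout (fn []
                                (println "Async setup complete")
                                (done))
                              100)))
   :after  (fn [] (println "Tearing down the namespace..."))})

Because the :before function returns the async block, the runner waits for (done) before starting the namespace's tests, which is exactly the behavior the note above is after.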


Entering People Information

Packt
24 Jun 2015
9 min read
In this article by Pravin Ingawale, author of the book Oracle E-Business Suite R12.x HRMS – A Functionality Guide, we will learn about entering a person's information in Oracle HRMS. We will understand the hiring process in Oracle, which is actually part of the Oracle iRecruitment module in Oracle apps. Then we will see how to create an employee in Core HR, learn the concept of person types and how to define them, and look at entering information for an employee, including additional information. Let's see how to create an employee in Core HR.

(For more resources related to this topic, see here.)

Creating an employee

An employee is the most important entity in an organization. Before creating an employee, the HR officer must know the date from which the employee will be active in the organization. In Oracle terminology, you can call it the employee's hire date. Apart from this, the HR officer must know basic details of the employee such as first name, last name, date of birth, and so on.

Navigate to US HRMS Manager | People | Enter and Maintain. This is the basic form, called People in Oracle HRMS, which is used to create an employee in the application. As you can see in the form, there is a field named Last, which is marked in yellow. This indicates that it is mandatory for creating an employee record. First, you need to set the effective date on the form. You can set this by clicking on the icon, as shown in the following screenshot:

You need to enter the mandatory field data along with additional data. The following screenshot shows the data entered:

Once you enter the required data, you need to specify the action for the entered record. The action we have selected is Create Employment, which will create an employee in the application. There are other actions, such as Create Applicant, which is used to create an applicant for iRecruitment, and Create Placement, which is used to create a contingent worker in your enterprise. Once you select this action, it will prompt you to enter the person type of this employee, as in the following screenshot. Select the Person Type as Employee and save the record. We will see the concept of person type in the next section.

Once you select the employee person type and save the record, the system will automatically generate the employee number for the person. In our case, the system has generated the employee number 10160. So now, we have created an employee in the application.

Concept of person types

In any organization, you need to identify, or group, different types of people. There are basically three types of people you capture in an HRMS system. They are as follows:

  • Employees: These include current employees and past employees. Past employees are those who were part of your enterprise earlier and are no longer active in the system. You can call them terminated or ex-employees.
  • Applicants: If you are using iRecruitment, applicants can be created.
  • External people: Contact is a special category of the external type. Contacts are associated with an employee or an applicant. For example, there might be a need to record the name, address, and phone number of an emergency contact for each employee in your organization. There might also be a need to keep information on dependents of an employee for medical insurance purposes or for some payments in payroll processing.

Using person types

There are predefined person types in Oracle HRMS.
You can add more person types as per your requirements. You can also change the names of existing person types when you install the system. Let's take an example for your understanding. Your organization has employees, and there might be employees of different types; you might have regular employees and employees who are contractors in your organization. Hence, you can categorize employees in your organization into two types:

  • Regular employees
  • Consultants

The reason for creating these categories is to easily identify the employee type and store different types of information for each category. Similarly, if you are using iRecruitment, then you will have candidates, and you can categorize candidates into two types: internal candidates and external candidates. Internal candidates are employees within your organization who can apply for an opening within your organization. An external candidate is an applicant who does not work for your organization but is applying for a position that is open in your company.

Defining person types

In the earlier section, you learned the concept of person types; now you will learn how to define person types in the system. Navigate to US HRMS Manager | Other Definitions | Person Types.

In the preceding screenshot, you can see four fields, that is, User Name, System Name, Active, and Default flag. There are eight person types recognized by the system and identified by a system name. For each system name, there are predefined usernames. A username can be changed as per your needs. There must be one username that is the default; while creating an employee, the person type marked by the default flag will be selected by default. To change a username for a person type, delete the contents of the User Name field and type the name you'd prefer to keep.

To add a new username to a person type system name:

1. Select New Record from the Edit menu.
2. Enter a unique username and select the system name you want to use.

Deactivating person types

You cannot delete person types, but you can deactivate them by unchecking the Active checkbox.

Entering personal and additional information

Until now, you have learned how to create an employee by entering basic details such as title, gender, and date of birth. In addition to this, you can enter some other information for an employee. As you can see on the People form, there are various tabs such as Employment, Office Details, Background, and so on. Each tab has some fields that can store information. For example, in our case, we have stored the e-mail address of the employee in the Office Details tab.

Whenever you enter any data for an employee and then click on the Save button, it will give you two options, as shown in the following screenshot:

You have to select one of the options to save the data. The difference between the two options is explained with an example. Let's say you have hired a new employee as of 01-Jan-2014. Hence, a new record will be created in the application with the start date as 01-Jan-2014. This is called the effective start date of the record. There is no end date for this record, so Oracle gives it a default end date, which is 31-Dec-4712. This is called the effective end date of the record. Now, in our case, Oracle has created a single record with the start date and end date as 01-Jan-2014 and 31-Dec-4712, respectively.
When we try to enter additional data for this record (in our case, a phone number), Oracle will prompt you to select the Correction or Update option. This is called the date-track option. If you select the correction mode, then Oracle will update the existing record in the application. Now, if you date track to, say, 01-Aug-2014, enter the phone number, and select the update mode, then Oracle will end the historical record with the new date minus one day and create a new record with the start date 01-Aug-2014 and the phone number that you have entered. Thus, the historical data will be preserved and a new record will be created with the start date 01-Aug-2014 and a phone number.

The following tabular representation will help you understand Correction mode better:

Employee Number | Last Name  | Effective Start Date | Effective End Date | Phone Number
10160           | Test010114 | 01-Jan-2014          | 31-Dec-4712        | +0099999999

Now, if you change the phone number effective 01-Aug-2014 in Update mode, the records will be as follows:

Employee Number | Last Name  | Effective Start Date | Effective End Date | Phone Number
10160           | Test010114 | 01-Jan-2014          | 31-Jul-2014        | +0099999999
10160           | Test010114 | 01-Aug-2014          | 31-Dec-4712        | +0088888888

Thus, in update mode, you can see that the historical data is intact. If HR wants to view some historical data, the HR employee can easily view it. Everything associated with Oracle HRMS is date-tracked. Every characteristic about the organization, person, position, salary, and benefits is tightly date-tracked. This concept is very important in Oracle and is used in almost all the forms in which you store employee-related information. Thus, you have learned about the date-tracking concept in Oracle Apps.

There are some additional fields that can be configured as per your requirements, and additional personal data can be stored in them. These are called descriptive flexfields (DFFs) in Oracle. We created a personal DFF to store data about Years of Industry Experience and whether an employee is Oracle Certified or not. This data can be stored in the People form DFF, as marked in the following screenshot:

When you click on the box, it will open a new form, as shown in the following screenshot. Here, you can enter the additional data. This is called the Additional Personal Details DFF. It is stored with the personal data and is normally referred to as the People form DFF.

We have also created a Special Information Type (SIT) to store information on languages known by an employee. This data has two attributes, namely, the language known and the fluency. It can be entered by navigating to US HRMS Manager | People | Enter and Maintain | Special Info. Click on the Details section. This will open a new form to enter the required details. Each record in the SIT is date-tracked; you can enter the start date and the end date. Thus, we have seen the DFF, in which you stored additional person data, and the KFF (key flexfield), where you enter the SIT data.

Summary

In this article, you learned about creating a new employee, entering employee data, and entering additional data using a DFF and a KFF. You also learned the concept of person type.

Resources for Article:

Further resources on this subject:
  • Knowing the prebuilt marketing, sales, and service organizations [article]
  • Oracle E-Business Suite with Desktop Integration [article]
  • Oracle Integration and Consolidation Products [article]


2-Dimensional Image Filtering

Packt
26 Sep 2013
13 min read
(For more resources related to this topic, see here.)

An introduction to image filtering

Morphological operations and edge detection are actually types of image filtering, even though we used them in a black-box sense, without really looking under the hood. Hopefully, this approach will get you accustomed to the details of image filtering a little faster. First of all, let's give a general definition of image filtering: it can be explained as the process of modifying the values of the pixels using a function that is typically applied on a local neighborhood of the image. In many situations, applying the function on a neighborhood involves a special operation, called convolution, with an operand called a kernel. In this sense, you have already applied such a process in the case of erosion or dilation and even in the case of edge detection. The former processes used the strel function to create a kernel, while the latter used a kernel based on your choice of the edge detection method. But let's not get ahead of ourselves. We will try to take things one step at a time, starting by explaining neighborhood processing.

Processing neighborhoods of pixels

In the previous paragraph, we mentioned that the filtering process typically takes place on a specific neighborhood of pixels. When this neighborhood process is applied for all pixels, it is called a sliding neighborhood operation. In it, we slide a rectangular neighborhood window through all possible positions of the image and modify its central pixel using a function of the pixels in the neighborhood. Let's see how this is done, using a numeric example. We'll start with something simple, like a linear filtering process, that is, averaging.

Let's suppose that we have a small image, sized 8x8 pixels, and we want to modify its pixel values so that they get assigned the rounded average of the pixels' values in their 3x3 neighborhoods. This will be easier to explain by using a real numeric example. Let's explain what happens in the step shown in the following image, in which the central pixel of the highlighted 3x3 neighborhood (in the fourth row and sixth column) will be replaced by the average value of all the pixels in the neighborhood (rounded to the nearest integer):

Let the image be called I; the result in pixel I(4,6) will be the rounded mean of the nine values in the 3x3 neighborhood centered at (4,6):

I(4,6) = round( (1/9) * sum of the 3x3 neighborhood of I(4,6) )

Substituting the values of the pixels, we can calculate the average value. Hence, the value of the central pixel of the neighborhood becomes 121 (the closest integer to 120.89). By repeating the process described previously for all the pixels of the image, we get a result commonly known as mean filtering or average filtering. The final result of the entire process is shown in the following figure:

You may be wondering now: the choice of neighborhood for the example was very convenient, but what happens when we want to change the value of a pixel on the borders of the image, such as, let's say, pixel I(1,4)? Why was it set to 77, as shown in the image? This is indeed a valid and natural question, and you are very intuitive if you already thought about it. The answer is that the way to tackle this problem, when you want your resulting image to be the same size as your original image, is to involve only the neighboring pixels that exist in your calculations. However, since in our example the calculation that has to be performed is averaging the neighborhood pixels, the denominator will still be 9; hence, it will be as if we pad the rest of the neighborhood with zeros.
Let's demonstrate this example as well. As shown in the previous image, the central pixel value of the border neighborhood is evaluated in the same way; of course, since there is no 0th line, the first three operands of the addition are non-existent, hence set to zero. Therefore, the result of the averaging process for the aforementioned neighborhood will be equal to 77 (as shown in the image). This approach is not the only one we have for the image borders. We could assign the maximum possible value (255 for our example) to the non-existent pixels, or assign them the mean value of the rest of the neighborhood, and so on. The choice we make affects the quality of the borders of the image, as we will see in real pictures later on.

The basics of convolution

The process described previously was performed in overlapping neighborhoods of the image, but no use of a kernel was mentioned. So, what is this all about? And how does convolution fit into this framework? Well, the truth is that the process described previously is actually describing the essence of convolution, which is passing a kernel over all possible equally sized neighborhoods of the image and using it to modify the value of the central pixel. The only problem in our case is that we did not use a specific kernel in the process described. Or did we? Let's try to find out using MATLAB code to perform two-dimensional convolution.

The 3x3 neighborhood we used for the described process can be replaced by a 3x3 kernel, as long as the final result remains the same. The kernel that accomplishes this effect is a 3x3 matrix with all pixels set to 1/9. Convolving this kernel with the original image produces the same result as the aforementioned example. To demonstrate the process, we can use the two-dimensional convolution MATLAB function conv2 as follows, to get the result:

>> original = [132 101 101 107 115 121 110 92
               120 124 122 120 129 123 121 129
               134 146 144 134 134 132 134 138
               143 147 136 121 121 115 107 107
               145 147 138 129 119 113 113 122
               162 155 152 149 142 129 118 122
               127 122 115 113 117 102 95 94
               67 74 78 80 89 89 107 109]; % Create original image
>> kernel = ones(3,3)*(1/9); % Create kernel
>> conv_result = conv2(original, kernel,'same'); % Perform convolution
>> final_result = round(conv_result) % Rounding of result

The final result obtained is as follows:

final_result =
    53  78  75  77  79  80  77  50
    84 125 122 123 124 124 122  80
    90 135 133 129 125 124 123  82
    96 142 138 131 124 121 120  80
   100 147 142 134 126 120 116  77
    95 140 136 130 124 116 112  74
    79 117 115 115 112 110 107  72
    43  65  65  66  66  67  66  45

As expected, the result is the same as the one calculated using the analytical process described before. The convolution kernel has done its job. In our process, we used an 8x8 original image and a 3x3 kernel with all values equal to 1/9 (this is what happens when you take a 3x3 matrix of ones and multiply it by 1/9, as we did) and finally ordered the conv2 function to produce the result using the padding process described earlier for the borders, hence calculating a result with the same dimensions as the original. But how did it do it? What exactly is convolution? Now it is time to fully understand convolution. But first, you must get acquainted with its mathematical equations. Since learning math is not the purpose of this book, we will try to give you just the basics, so that you get an idea of what this operation is all about, as it is invaluable for image filtering.
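Before moving on to the math, it may help to see the same sliding-neighborhood average written out as explicit loops. The following is a minimal sketch (not from the book) that assumes the original matrix defined above and the zero-padding convention just described; it should reproduce final_result:

% Hand-rolled 3x3 mean filter, equivalent to conv2(original,kernel,'same') with zero padding
[rows, cols] = size(original);
padded = zeros(rows + 2, cols + 2);      % zero-pad a 1-pixel border around the image
padded(2:end-1, 2:end-1) = original;
by_hand = zeros(rows, cols);
for r = 1:rows
    for c = 1:cols
        window = padded(r:r+2, c:c+2);   % 3x3 neighborhood centered on (r, c)
        by_hand(r, c) = round(sum(window(:)) / 9);
    end
end
isequal(by_hand, final_result)           % expected to return 1 (true)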
The ugly mathematical truth

Let's start with the mathematical definition of convolution for discrete functions (since in digital image processing all functions are discrete). To frame our problem in a signal processing sense, we can define it as passing an input image I through a Linear Space Invariant (LSI) system, performing convolution with a kernel h (also called a filter), to produce an output image g. Hence, we get the following block diagram:

This process is described mathematically by the following equation:

g(x, y) = I(x, y) * h(x, y) = Σ_m Σ_n I(m, n) h(x − m, y − n)

where * is the symbol for convolution and the large Σ denotes a sum. The reason we have two sums is that our process is two-dimensional. Without going into too much detail, we can summarize the process described previously using the following steps, which are also followed in the implementation of conv2:

1. Rotate the convolution kernel by 180 degrees to abide by the process in the double sum of the equation.
2. Determine the central pixel of the neighborhood. This is straightforward when the neighborhood has an odd number of rows and columns, but must be based on some rule if either of the dimensions is even.
3. Apply the rotated kernel to each pixel of the input image. This is a multiplication of each pixel in the rotated kernel by the corresponding pixel of the image neighborhood processed. It can be thought of as the weighted sum of the neighborhood pixels.

The result of conv2 can be either of the following choices:

  • full: Larger than the original image, taking into account all the pixels that can be computed using the convolution kernel, even if their center falls out of the image. This is the default choice for the function.
  • same: Same size as the original image, using zeros to calculate border pixel values.
  • valid: Smaller than the original image, so that it uses only pixels that have full valid neighbors in the computations.

This means that when you want to produce a convolution result with the same size as the original image, you will have to use same as an input, as we did in our previous example. By now, those of you who are not very much into math may be tempted to stop reading. So, let's stop the mathematical jargon and dive into the practical examples. We know what a convolution does and we have seen an example on the pixels of a very small image, using an averaging convolution kernel. So, what does this process really do to an image?

Time for action – applying averaging filters in images

We will start off with an easy-to-follow example, so that all the theory described previously is demonstrated. In this example, we will also introduce some new MATLAB functions to facilitate your understanding. Let's start:

1. First, we load our image, which is holiday_image2.bmp:

>> img = imread('holiday_image2.bmp');

2. Then, we generate our convolution kernel using the function fspecial and then rotate it 180 degrees:

>> kernel = fspecial('average',3);
>> kernel = rot90(kernel,2)

The output of the code will be as follows:

kernel =
    0.1111    0.1111    0.1111
    0.1111    0.1111    0.1111
    0.1111    0.1111    0.1111

3. Now, it is time to use the three different ways of convolving our image:

>> con1 = conv2(img,kernel); % Default usage ('full')
>> con2 = conv2(img,kernel,'same'); % convolution using 'same'
>> con3 = conv2(img,kernel,'valid'); % convolution using 'valid'

4. In the previous step, you probably got a warning saying:

Warning: CONV2 on values of class UINT8 is obsolete.
Use CONV2(DOUBLE(A),DOUBLE(B)) or CONV2(SINGLE(A),SINGLE(B)) instead.
This actually means that the uint8 type will not be supported by conv2 in the future. To be on the safe side, you might want to follow MATLAB's suggestion and convert your image to single prior to convolving it:

>> img = single(img);
>> kernel = fspecial('average',3); % Create 3x3 averaging kernel
>> con1 = conv2(img,kernel); % Default usage ('full')
>> con2 = conv2(img,kernel,'same'); % convolution using 'same'
>> con3 = conv2(img,kernel,'valid'); % convolution using 'valid'

5. Now, we can show our results in one figure, along with the original image. This time, we are going to use an empty matrix as the second argument in imshow, to avoid having to convert our results to uint8:

>> figure;subplot(2,2,1),imshow(img,[]),title('Original')
>> subplot(2,2,2),imshow(con1,[]),title('full')
>> subplot(2,2,3),imshow(con2,[]),title('same')
>> subplot(2,2,4),imshow(con3,[]),title('valid')

6. It is obvious that the three results are identical, but there is a small detail: their sizes are not. So let's see if we got what we expected. In the Workspace window, you can see the difference in sizes.

7. Let's now discuss the physical, qualitative meaning of averaging an image. What exactly does it do? The answer is that it blurs the image. To examine this effect, we can crop the tower from our original and averaged images and display the result. The tower can be cropped using the following coordinates:

>> tower_original = img(51:210,321:440);
>> tower_blurred = con2(51:210,321:440); figure
>> subplot(1,2,1),imshow(tower_original),title('Original tower')
>> subplot(1,2,2),imshow(tower_blurred),title('Blurred tower')

The original image and the blurred image are as follows:

What just happened?

The process described in the previous example demonstrated the usage of convolution in its various implementations, using the averaging kernel produced with fspecial. This function is designed to generate kernels for popular filtering tasks, as we will further analyze in the following sections. In our case, we created a 3x3 kernel with values equal to 1/9 (which is almost equal to 0.1111, hence the result in step 2). Then, the three different choices of convolution were applied and the results were displayed along with the original image. Of course, a detail such as the size of the borders cannot be easily observed at full scale, so we observed the difference in the sizes of the results. Finally, we displayed a part of the original image next to the same part of one of the convolution results, to show that the result of the averaging process is a blurring of the image.

Alternatives to convolution

Convolution is not the only way to perform image filtering. There is also correlation, which gives us the same result. Filtering an image using correlation can be accomplished with the MATLAB function filter2, which performs, as its name implies, a two-dimensional filtering of two images. The first input in this case is a kernel (filter) and the second input is an image (or, in a more general case, a two-dimensional matrix). We will not go into detail here, but one main difference between the two methods is that correlation does not need the kernel to be rotated. The border issue remains, with the same three approaches as in the case of convolution using conv2.
A demonstration of the equivalence of the two functions is given if we type in the following commands:

>> img = imread('holiday_image2.bmp');
>> img = img(51:210,321:440);
>> kernel = fspecial('average',3);
>> kernel180 = rot90(kernel,2);
>> conv_result = conv2(img,kernel180,'same');
>> corr_result = filter2(kernel,img,'same');
>> subplot(1,3,1),imshow(img),title('Original')
>> subplot(1,3,2),imshow(uint8(conv_result)),title('Blurred - conv2')
>> subplot(1,3,3),imshow(uint8(corr_result)),title('Blurred - filter2')

The result of the preceding code is displayed as follows:

In our example, the two kernels used for conv2 and filter2 are identical, since the averaging filter used is square (3x3) and all its elements are equal. The generalized process shown will be useful when we have a more complex kernel.

Using imfilter

The two alternative solutions for performing image filtering presented so far have their origin in general two-dimensional signal processing theory. This means that they have to be expanded for three-dimensional signals when we deal with color image filtering. The process is pretty straightforward and involves repeating the filtering for each of the three color channels. But why do that, when we have a function that takes care of checking the image before applying the filter and then selecting the correct method? This specialized function is called imfilter, and it is designed for handling images, regardless of whether they are grayscale or color. It can implement both filtering methods described in the previous paragraphs, and it can also define the result to be same or full. Its extra functionality comes in the selection of the way it handles boundary values and in the automatic processing of color images. Furthermore, this function performs the needed conversions in case the image input is integer-valued. Combined with the fspecial function, this will probably be your most valuable tool in MATLAB when it comes to image filtering.
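As a brief, hedged illustration of the point above (not code from the book), the same 3x3 averaging filter could be applied with imfilter in a couple of lines; the 'replicate' option is just one of the boundary choices imfilter accepts, and the image file is the one used in the earlier examples:

>> img = imread('holiday_image2.bmp');
>> kernel = fspecial('average',3);               % 3x3 averaging kernel
>> smoothed = imfilter(img,kernel,'replicate');  % same-sized output; works on grayscale or RGB
>> figure,subplot(1,2,1),imshow(img),title('Original')
>> subplot(1,2,2),imshow(smoothed),title('Blurred - imfilter')

Note that no manual conversion to single or uint8 is needed here, which is exactly the convenience the paragraph above describes.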


Using Oracle Service Bus Console

Packt
15 Sep 2010
9 min read
(For more resources on BPEL, SOA and Oracle see here.)

To log into Oracle Service Bus Console, we have to open a web browser and access the following URL: http://host_name:port/sbconsole, where host_name is the name of the host on which OSB is installed and port is a number that is set during the installation process. We log in as the user weblogic. The Oracle Service Bus Console opens, as shown in the following screenshot:

The Dashboard page is opened by default, displaying information about alerts. We will show how to define and monitor alerts later in this article. In the upper-left corner, we can see the Change Center. The Change Center is key to making configuration changes in OSB. Before making any changes, we have to create a new session by clicking the Create button. Then, we are able to make different changes without disrupting existing services. When finished, we activate all changes by clicking Activate. If we want to roll back the changes, we can click the Discard button. We can also view all changes before activating them and write a comment.

Creating a project and importing resources from OSR

First, we have to create a new session by clicking the Create button in the Change Center. Next, we will create a new project. OSB uses projects to allow logical grouping of resources and to better organize related parts of large development projects. We click on the Project Explorer link in the main menu. On the Projects page, we enter the name of the project (TravelApproval) and click Add Project. The new project is now shown in the projects list on the left side in the Project Explorer. We click on the project.

Next, we add folders to the project, as we want to group resources by type. To create a folder, we enter the folder name in the Enter New Folder Name field and click Add folder. We add six folders: BusinessServices, ProxyServices, WSDL, XSD, XSLT, and AlertDestinations.

Next, we have to create resources. We will show how to import a service and all related resources from the UDDI registry. Before creating a connection to the UDDI registry, we will activate the current session. First, we review all changes by clicking the View Changes link in the Change Center. We can see the list of all changes in the current session. We can also undo changes by clicking the undo link in the last column. Now, we activate the session by clicking on the Activate button. The Activate Session page opens. We can add a description to the session and click Submit. Now, all the changes made are activated.

Creating a connection to Oracle Service Registry

First, we start a new session in the Change Center. Then we click on the System Administration link in the main menu. We click on UDDI Registries and then Add registry on the right side of the page. We enter the connection parameters and click Save. Now, the registry is listed in the UDDI Registries list, as shown next:

We can optionally activate the current session. In that case, we have to create a new session before importing resources from UDDI.

Importing resources from Oracle Service Registry

We click on the Import from UDDI link on the left-hand side. As there is only one connection to the registry, this connection is selected by default. First, we have to select the Business Entity. We select Packt Publishing. Then we click on the Search button to display all services of the selected business entity. In the next screenshot, we can see that currently there is only one service published. We select the service and click Next.
In the second step, we select the project and folder where we want to save the resources. We select the TravelApproval project and the BusinessServices folder and click Next. On the final screen, we just click the Import button. Now we can see that a business service, a WSDL, and three XSD resources have been created. All resources have been created automatically, as we imported a service from the UDDI registry. If we create resources by hand, we first have to create the XML Schema and WSDL resources, and then the Business service. As all resources have been saved to the BusinessServices folder, we have to move them to the appropriate folders based on their type. We go back to the Project Explorer and click on the BusinessServices folder in the TravelApproval project. We can see all imported resources in the Resources list at the bottom of the page. We can move resources by clicking on the Move Resource icon and then selecting the target folder. We move the WSDL resource to the WSDL folder and the XML Schemas to the XSD folder.

Configuring a business service
If we want to monitor service metrics, such as average response time, number of messages, and number of errors, we have to enable monitoring of the business service. We will also show how to improve performance by enabling service result caching, which is a new feature in OSB 11g PS2.

Enabling service result caching
OSB supports service result caching through the use of Oracle Coherence, which is an in-memory data grid solution. In this way, we can dramatically improve performance if the response of the business service is relatively static. To enable the use of service result caching globally, we have to open Operations | Global Settings and set Enable Result Caching to true. In the Project Explorer, we click on our Business service. On the Configuration Details tab, we will enable service result caching. We scroll down and edit the Message Handling Configuration. Then we expand the Advanced Settings. We select the Result Caching checkbox. Next, we have to specify the cache token, which uniquely identifies a single cached result. This is usually an ID field. In our simplified example, we do not have an ID field; therefore, we will use the employee last name for testing purposes. We enter the following cache token expression: $body/emp:employee/LastName. Then we set the expiration time to 20 minutes. Then, we click Next and Save. Now, if the business service locates cached results through a cache key, it returns those cached results to the client instead of invoking the external service. If the result is not cached, the business service invokes the external service, returns the result to the client, and stores the result in the cache. Service result caching works only when the business service is invoked from a proxy service.

Enabling service monitoring
Again, we click on our Business service and then click on the Operational Settings tab. We select the Enabled checkbox next to Monitoring and set the Aggregation Interval to 20 minutes. The aggregation interval is the sliding window of time over which metrics are computed. We can also define SLA alerts, which are based on these metrics. We click Update to save the changes. Then, we activate the changes by clicking on the Activate button in the Change Center.
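Before moving on to testing, it may help to picture what the result caching configured above actually does. The following standalone Python sketch mimics the same token-plus-expiry pattern outside OSB; it is only an illustration, as OSB implements this internally with Oracle Coherence, and the invoke_external_service function is a hypothetical stand-in for the real external service call.

import time

CACHE_EXPIRY_SECONDS = 20 * 60   # 20 minutes, matching the expiration time set above
_cache = {}                      # cache token -> (timestamp, result)

def invoke_external_service(last_name):
    # Hypothetical stand-in for the real EmployeeTravelStatusService call
    return {"employee": last_name, "travelStatus": "APPROVED"}

def get_travel_status(last_name):
    token = last_name            # analogous to the cache token $body/emp:employee/LastName
    cached = _cache.get(token)
    if cached is not None:
        timestamp, result = cached
        if time.time() - timestamp < CACHE_EXPIRY_SECONDS:
            return result        # cached result; the external service is not invoked
    result = invoke_external_service(last_name)
    _cache[token] = (time.time(), result)
    return result

The first call for a given last name invokes the service; subsequent calls within the expiration window are answered from the cache, which is the behaviour OSB provides once the proxy service created below routes to this business service.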
Testing a business service
After activating the changes, we can test the business service using the Test Console. To open the console, we select the BusinessServices folder and then click on the bug icon next to the Business service. The Test Console opens. We set the XML payload and click the Execute button. After executing the Business service, we can see the response message as shown in the next screenshot:

Creating an Alert destination
Before creating a proxy service, we will create an Alert Destination resource, which will later be used for sending e-mail alerts to the administrator. Remember that we have already created the AlertDestinations folder. To be able to send e-mail alerts, we first have to configure the SMTP server on the System Administration page. To create an Alert destination, we navigate to the AlertDestinations folder and then select Alert Destination from the Create Resource drop-down. We set the name to Administrator and add an e-mail recipient by clicking the Add button. We enter the recipient e-mail address (we can add more recipients) and select the SMTP server. Then we click Save twice.

Creating a proxy service
Although at first sight it might seem redundant, using a proxy service instead of calling the original business service directly has several advantages. If we add a proxy service between the service consumer and the original service, we gain transparency. Through OSB, we can monitor and supervise the service and control the inbound and outbound messages. This becomes important when changes happen. For example, when a service interface or the payload changes, the proxy service can mask the changes to all service consumers that have not yet been upgraded to use the new version. This is, however, not the only benefit. A proxy service can enable authentication and authorization when accessing a service. It can provide a means to monitor service SLAs, and much more. Therefore, it often makes sense to consider using proxy services. We will show an example to demonstrate the capabilities of proxy services. We will create a proxy service, which will contain the message processing logic and will be used to decouple service clients from the service provider. Our proxy service will validate the request against the corresponding XML schema. It will also perform error handling and alert the service administrator of any problems with the service execution. First, we start a new session (if there is no active session) by clicking the Create button in the Change Center. Then we navigate to the ProxyServices folder in the Project Explorer. We click on the Create Resources drop-down and select Proxy Service. The General Configuration page opens. We set the name of the proxy service to EmployeeTravelStatusServiceProxy. We also have to define the interface of the service. We select the Business service, as we want the proxy service to use the same interface as the business service. We click the Browse button and select the EmployeeTravelStatusService business service. Then we click Next. On the Transport Configuration screen, we can change the transport Protocol and Endpoint URI. We use the default values and click Next. The HTTP Transport Configuration screen opens. We click Next on the remaining configuration screens. On the Summary page, we click the Save button at the bottom of the page.
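Once the session has been activated, the new proxy endpoint can also be exercised outside the Test Console, for example from a small Python client. The sketch below is only an illustration: the host, port, endpoint URI, namespace, and payload element names are assumptions and must be adjusted to match your own OSB server and the WSDL imported from the registry.

import urllib.request

ENDPOINT = "http://localhost:7001/TravelApproval/EmployeeTravelStatusServiceProxy"  # placeholder URI

soap_request = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:emp="http://packtpub.com/service/employee/">
  <soapenv:Body>
    <emp:employee>
      <FirstName>Joe</FirstName>
      <LastName>Smith</LastName>
    </emp:employee>
  </soapenv:Body>
</soapenv:Envelope>"""

request = urllib.request.Request(
    ENDPOINT,
    data=soap_request.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8", "SOAPAction": ""},
)
with urllib.request.urlopen(request) as response:
    print(response.status)
    print(response.read().decode("utf-8"))

If result caching is enabled as described earlier, repeating the call with the same LastName within the expiration window should return noticeably faster, since the external service is not re-invoked.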

Dealing with Legacy Code

Packt
31 Mar 2015
16 min read
In this article by Arun Ravindran, author of the book Django Best Practices and Design Patterns, we will discuss the following topics: Reading a Django code base Discovering relevant documentation Incremental changes versus full rewrites Writing tests before changing code Legacy database integration (For more resources related to this topic, see here.) It sounds exciting when you are asked to join a project. Powerful new tools and cutting-edge technologies might await you. However, quite often, you are asked to work with an existing, possibly ancient, codebase. To be fair, Django has not been around for that long. However, projects written for older versions of Django are sufficiently different to cause concern. Sometimes, having the entire source code and documentation might not be enough. If you are asked to recreate the environment, then you might need to fumble with the OS configuration, database settings, and running services locally or on the network. There are so many pieces to this puzzle that you might wonder how and where to start. Understanding the Django version used in the code is a key piece of information. As Django evolved, everything from the default project structure to the recommended best practices have changed. Therefore, identifying which version of Django was used is a vital piece in understanding it. Change of Guards Sitting patiently on the ridiculously short beanbags in the training room, the SuperBook team waited for Hart. He had convened an emergency go-live meeting. Nobody understood the "emergency" part since go live was at least 3 months away. Madam O rushed in holding a large designer coffee mug in one hand and a bunch of printouts of what looked like project timelines in the other. Without looking up she said, "We are late so I will get straight to the point. In the light of last week's attacks, the board has decided to summarily expedite the SuperBook project and has set the deadline to end of next month. Any questions?" "Yeah," said Brad, "Where is Hart?" Madam O hesitated and replied, "Well, he resigned. Being the head of IT security, he took moral responsibility of the perimeter breach." Steve, evidently shocked, was shaking his head. "I am sorry," she continued, "But I have been assigned to head SuperBook and ensure that we have no roadblocks to meet the new deadline." There was a collective groan. Undeterred, Madam O took one of the sheets and began, "It says here that the Remote Archive module is the most high-priority item in the incomplete status. I believe Evan is working on this." "That's correct," said Evan from the far end of the room. "Nearly there," he smiled at others, as they shifted focus to him. Madam O peered above the rim of her glasses and smiled almost too politely. "Considering that we already have an extremely well-tested and working Archiver in our Sentinel code base, I would recommend that you leverage that instead of creating another redundant system." "But," Steve interrupted, "it is hardly redundant. We can improve over a legacy archiver, can't we?" "If it isn't broken, then don't fix it", replied Madam O tersely. He said, "He is working on it," said Brad almost shouting, "What about all that work he has already finished?" "Evan, how much of the work have you completed so far?" asked O, rather impatiently. "About 12 percent," he replied looking defensive. Everyone looked at him incredulously. "What? That was the hardest 12 percent" he added. O continued the rest of the meeting in the same pattern. 
Everybody's work was reprioritized and shoe-horned to fit the new deadline. As she picked up her papers, readying to leave she paused and removed her glasses. "I know what all of you are thinking... literally. But you need to know that we had no choice about the deadline. All I can tell you now is that the world is counting on you to meet that date, somehow or other." Putting her glasses back on, she left the room. "I am definitely going to bring my tinfoil hat," said Evan loudly to himself. Finding the Django version Ideally, every project will have a requirements.txt or setup.py file at the root directory, and it will have the exact version of Django used for that project. Let's look for a line similar to this: Django==1.5.9 Note that the version number is exactly mentioned (rather than Django>=1.5.9), which is called pinning. Pinning every package is considered a good practice since it reduces surprises and makes your build more deterministic. Unfortunately, there are real-world codebases where the requirements.txt file was not updated or even completely missing. In such cases, you will need to probe for various tell-tale signs to find out the exact version. Activating the virtual environment In most cases, a Django project would be deployed within a virtual environment. Once you locate the virtual environment for the project, you can activate it by jumping to that directory and running the activated script for your OS. For Linux, the command is as follows: $ source venv_path/bin/activate Once the virtual environment is active, start a Python shell and query the Django version as follows: $ python >>> import django >>> print(django.get_version()) 1.5.9 The Django version used in this case is Version 1.5.9. Alternatively, you can run the manage.py script in the project to get a similar output: $ python manage.py --version 1.5.9 However, this option would not be available if the legacy project source snapshot was sent to you in an undeployed form. If the virtual environment (and packages) was also included, then you can easily locate the version number (in the form of a tuple) in the __init__.py file of the Django directory. For example: $ cd envs/foo_env/lib/python2.7/site-packages/django $ cat __init__.py VERSION = (1, 5, 9, 'final', 0) ... If all these methods fail, then you will need to go through the release notes of the past Django versions to determine the identifiable changes (for example, the AUTH_PROFILE_MODULE setting was deprecated since Version 1.5) and match them to your legacy code. Once you pinpoint the correct Django version, then you can move on to analyzing the code. Where are the files? This is not PHP One of the most difficult ideas to get used to, especially if you are from the PHP or ASP.NET world, is that the source files are not located in your web server's document root directory, which is usually named wwwroot or public_html. Additionally, there is no direct relationship between the code's directory structure and the website's URL structure. In fact, you will find that your Django website's source code is stored in an obscure path such as /opt/webapps/my-django-app. Why is this? Among many good reasons, it is often more secure to move your confidential data outside your public webroot. This way, a web crawler would not be able to accidentally stumble into your source code directory. Starting with urls.py Even if you have access to the entire source code of a Django site, figuring out how it works across various apps can be daunting. 
It is often best to start from the root urls.py URLconf file since it is literally a map that ties every request to the respective views. With normal Python programs, I often start reading from the start of its execution—say, from the top-level main module or wherever the __main__ check idiom starts. In the case of Django applications, I usually start with urls.py since it is easier to follow the flow of execution based on various URL patterns a site has. In Linux, you can use the following find command to locate the settings.py file and the corresponding line specifying the root urls.py: $ find . -iname settings.py -exec grep -H 'ROOT_URLCONF' {} ; ./projectname/settings.py:ROOT_URLCONF = 'projectname.urls'   $ ls projectname/urls.py projectname/urls.py Jumping around the code Reading code sometimes feels like browsing the web without the hyperlinks. When you encounter a function or variable defined elsewhere, then you will need to jump to the file that contains that definition. Some IDEs can do this automatically for you as long as you tell it which files to track as part of the project. If you use Emacs or Vim instead, then you can create a TAGS file to quickly navigate between files. Go to the project root and run a tool called Exuberant Ctags as follows: find . -iname "*.py" -print | etags - This creates a file called TAGS that contains the location information, where every syntactic unit such as classes and functions are defined. In Emacs, you can find the definition of the tag, where your cursor (or point as it called in Emacs) is at using the M-. command. While using a tag file is extremely fast for large code bases, it is quite basic and is not aware of a virtual environment (where most definitions might be located). An excellent alternative is to use the elpy package in Emacs. It can be configured to detect a virtual environment. Jumping to a definition of a syntactic element is using the same M-. command. However, the search is not restricted to the tag file. So, you can even jump to a class definition within the Django source code seamlessly. Understanding the code base It is quite rare to find legacy code with good documentation. Even if you do, the documentation might be out of sync with the code in subtle ways that can lead to further issues. Often, the best guide to understand the application's functionality is the executable test cases and the code itself. The official Django documentation has been organized by versions at https://docs.djangoproject.com. On any page, you can quickly switch to the corresponding page in the previous versions of Django with a selector on the bottom right-hand section of the page: In the same way, documentation for any Django package hosted on readthedocs.org can also be traced back to its previous versions. For example, you can select the documentation of django-braces all the way back to v1.0.0 by clicking on the selector on the bottom left-hand section of the page: Creating the big picture Most people find it easier to understand an application if you show them a high-level diagram. While this is ideally created by someone who understands the workings of the application, there are tools that can create very helpful high-level depiction of a Django application. A graphical overview of all models in your apps can be generated by the graph_models management command, which is provided by the django-command-extensions package. 
As shown in the following diagram, the model classes and their relationships can be understood at a glance: Model classes used in the SuperBook project connected by arrows indicating their relationships This visualization is actually created using PyGraphviz. This can get really large for projects of even medium complexity. Hence, it might be easier if the applications are logically grouped and visualized separately. PyGraphviz Installation and Usage If you find the installation of PyGraphviz challenging, then don't worry, you are not alone. Recently, I faced numerous issues while installing on Ubuntu, starting from Python 3 incompatibility to incomplete documentation. To save your time, I have listed the steps that worked for me to reach a working setup. On Ubuntu, you will need the following packages installed to install PyGraphviz: $ sudo apt-get install python3.4-dev graphviz libgraphviz-dev pkg-config Now activate your virtual environment and run pip to install the development version of PyGraphviz directly from GitHub, which supports Python 3: $ pip install git+http://github.com/pygraphviz/pygraphviz.git#egg=pygraphviz Next, install django-extensions and add it to your INSTALLED_APPS. Now, you are all set. Here is a sample usage to create a GraphViz dot file for just two apps and to convert it to a PNG image for viewing: $ python manage.py graph_models app1 app2 > models.dot $ dot -Tpng models.dot -o models.png Incremental change or a full rewrite? Often, you would be handed over legacy code by the application owners in the earnest hope that most of it can be used right away or after a couple of minor tweaks. However, reading and understanding a huge and often outdated code base is not an easy job. Unsurprisingly, most programmers prefer to work on greenfield development. In the best case, the legacy code ought to be easily testable, well documented, and flexible to work in modern environments so that you can start making incremental changes in no time. In the worst case, you might recommend discarding the existing code and go for a full rewrite. Or, as it is commonly decided, the short-term approach would be to keep making incremental changes, and a parallel long-term effort might be underway for a complete reimplementation. A general rule of thumb to follow while taking such decisions is—if the cost of rewriting the application and maintaining the application is lower than the cost of maintaining the old application over time, then it is recommended to go for a rewrite. Care must be taken to account for all the factors, such as time taken to get new programmers up to speed, the cost of maintaining outdated hardware, and so on. Sometimes, the complexity of the application domain becomes a huge barrier against a rewrite, since a lot of knowledge learnt in the process of building the older code gets lost. Often, this dependency on the legacy code is a sign of poor design in the application like failing to externalize the business rules from the application logic. The worst form of a rewrite you can probably undertake is a conversion, or a mechanical translation from one language to another without taking any advantage of the existing best practices. In other words, you lost the opportunity to modernize the code base by removing years of cruft. Code should be seen as a liability not an asset. As counter-intuitive as it might sound, if you can achieve your business goals with a lesser amount of code, you have dramatically increased your productivity. 
Having less code to test, debug, and maintain can not only reduce ongoing costs but also make your organization more agile and flexible to change. Code is a liability not an asset. Less code is more maintainable. Irrespective of whether you are adding features or trimming your code, you must not touch your working legacy code without tests in place. Write tests before making any changes In the book Working Effectively with Legacy Code, Michael Feathers defines legacy code as, simply, code without tests. He elaborates that with tests one can easily modify the behavior of the code quickly and verifiably. In the absence of tests, it is impossible to gauge if the change made the code better or worse. Often, we do not know enough about legacy code to confidently write a test. Michael recommends writing tests that preserve and document the existing behavior, which are called characterization tests. Unlike the usual approach of writing tests, while writing a characterization test, you will first write a failing test with a dummy output, say X, because you don't know what to expect. When the test harness fails with an error, such as "Expected output X but got Y", then you will change your test to expect Y. So, now the test will pass, and it becomes a record of the code's existing behavior. Note that we might record buggy behavior as well. After all, this is unfamiliar code. Nevertheless, writing such tests are necessary before we start changing the code. Later, when we know the specifications and code better, we can fix these bugs and update our tests (not necessarily in that order). Step-by-step process to writing tests Writing tests before changing the code is similar to erecting scaffoldings before the restoration of an old building. It provides a structural framework that helps you confidently undertake repairs. You might want to approach this process in a stepwise manner as follows: Identify the area you need to make changes to. Write characterization tests focusing on this area until you have satisfactorily captured its behavior. Look at the changes you need to make and write specific test cases for those. Prefer smaller unit tests to larger and slower integration tests. Introduce incremental changes and test in lockstep. If tests break, then try to analyze whether it was expected. Don't be afraid to break even the characterization tests if that behavior is something that was intended to change. If you have a good set of tests around your code, then you can quickly find the effect of changing your code. On the other hand, if you decide to rewrite by discarding your code but not your data, then Django can help you considerably. Legacy databases There is an entire section on legacy databases in Django documentation and rightly so, as you will run into them many times. Data is more important than code, and databases are the repositories of data in most enterprises. You can modernize a legacy application written in other languages or frameworks by importing their database structure into Django. As an immediate advantage, you can use the Django admin interface to view and change your legacy data. Django makes this easy with the inspectdb management command, which looks as follows: $ python manage.py inspectdb > models.py This command, if run while your settings are configured to use the legacy database, can automatically generate the Python code that would go into your models file. 
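As a rough illustration only (the actual output depends entirely on your schema and on the Django version), the generated code for a hypothetical legacy table named tbl_customer might look something like this:

from django.db import models

class TblCustomer(models.Model):
    id = models.IntegerField(primary_key=True)    # often redundant; Django can add this automatically
    cust_name = models.CharField(max_length=120)
    created_on = models.DateTimeField(blank=True, null=True)
    region_id = models.IntegerField()              # probably a ForeignKey in the real schema

    class Meta:
        db_table = 'tbl_customer'

Treat such output as a starting point rather than a finished model.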
Here are some best practices if you are using this approach to integrate to a legacy database: Know the limitations of Django ORM beforehand. Currently, multicolumn (composite) primary keys and NoSQL databases are not supported. Don't forget to manually clean up the generated models, for example, remove the redundant 'ID' fields since Django creates them automatically. Foreign Key relationships may have to be manually defined. In some databases, the auto-generated models will have them as integer fields (suffixed with _id). Organize your models into separate apps. Later, it will be easier to add the views, forms, and tests in the appropriate folders. Remember that running the migrations will create Django's administrative tables (django_* and auth_*) in the legacy database. In an ideal world, your auto-generated models would immediately start working, but in practice, it takes a lot of trial and error. Sometimes, the data type that Django inferred might not match your expectations. In other cases, you might want to add additional meta information such as unique_together to your model. Eventually, you should be able to see all the data that was locked inside that aging PHP application in your familiar Django admin interface. I am sure this will bring a smile to your face. Summary In this article, we looked at various techniques to understand legacy code. Reading code is often an underrated skill. But rather than reinventing the wheel, we need to judiciously reuse good working code whenever possible. Resources for Article: Further resources on this subject: So, what is Django? [article] Adding a developer with Django forms [article] Introduction to Custom Template Filters and Tags [article]

Your first FuelPHP application in 7 easy steps

Packt
04 Mar 2015
12 min read
In this article by Sébastien Drouyer, author of the book FuelPHP Application Development Blueprints we will see that FuelPHP is an open source PHP framework using the latest technologies. Its large community regularly creates and improves packages and extensions, and the framework’s core is constantly evolving. As a result, FuelPHP is a very complete solution for developing web applications. (For more resources related to this topic, see here.) In this article, we will also see how easy it is for developers to create their first website using the PHP oil utility. The target application Suppose you are a zoo manager and you want to keep track of the monkeys you are looking after. For each monkey, you want to save: Its name If it is still in the zoo Its height A description input where you can enter custom information You want a very simple interface with five major features. You want to be able to: Create new monkeys Edit existing ones List all monkeys View a detailed file for each monkey Delete monkeys These preceding five major features, very common in computer applications, are part of the Create, Read, Update and Delete (CRUD) basic operations. Installing the environment The FuelPHP framework needs the three following components: Webserver: The most common solution is Apache PHP interpreter: The 5.3 version or above Database: We will use the most popular one, MySQL The installation and configuration procedures of these components will depend on the operating system you use. We will provide here some directions to get you started in case you are not used to install your development environment. Please note though that these are very generic guidelines. Feel free to search the web for more information, as there are countless resources on the topic. Windows A complete and very popular solution is to install WAMP. This will install Apache, MySQL and PHP, in other words everything you need to get started. It can be accessed at the following URL: http://www.wampserver.com/en/ Mac PHP and Apache are generally installed on the latest version of the OS, so you just have to install MySQL. To do that, you are recommended to read the official documentation: http://dev.mysql.com/doc/refman/5.1/en/macosx-installation.html A very convenient solution for those of you who have the least system administration skills is to install MAMP, the equivalent of WAMP but for the Mac operating system. It can be downloaded through the following URL: http://www.mamp.info/en/downloads/ Ubuntu As this is the most popular Linux distribution, we will limit our instructions to Ubuntu. You can install a complete environment by executing the following command lines: # Apache, MySQL, PHP sudo apt-get install lamp-server^   # PHPMyAdmin allows you to handle the administration of MySQL DB sudo apt-get install phpmyadmin   # Curl is useful for doing web requests sudo apt-get install curl libcurl3 libcurl3-dev php5-curl   # Enabling the rewrite module as it is needed by FuelPHP sudo a2enmod rewrite   # Restarting Apache to apply the new configuration sudo service apache2 restart Getting the FuelPHP framework There are four common ways to download FuelPHP: Downloading and unzipping the compressed package which can be found on the FuelPHP website. Executing the FuelPHP quick command-line installer. Downloading and installing FuelPHP using Composer. Cloning the FuelPHP GitHub repository. It is a little bit more complicated but allows you to select exactly the version (or even the commit) you want to install. 
The easiest way is to download and unzip the compressed package located at: http://fuelphp.com/files/download/28 You can get more information about this step in Chapter 1 of FuelPHP Application Development Blueprints, which can be accessed freely. It is also well documented on the website's installation instructions page: http://fuelphp.com/docs/installation/instructions.html

Installation directory and Apache configuration
Now that you know how to install FuelPHP in a given directory, we will explain where to install it and how to configure Apache.

The simplest way
The simplest way is to install FuelPHP in the root folder of your web server (generally the /var/www directory on Linux systems). If you install fuel in the DIR directory inside the root folder (/var/www/DIR), you will be able to access your project at the following URL: http://localhost/DIR/public/ However, be warned that fuel has not been implemented to support this, and if you publish your project this way on a production server, it will introduce security issues you will have to handle. In such cases, you are recommended to use the second way explained in the section below, although, for instance, if you plan to use a shared host to publish your project, you might not have the choice. Complete and up-to-date documentation about this issue can be found on the Fuel installation instructions page: http://fuelphp.com/docs/installation/instructions.html

By setting up a virtual host
Another way is to create a virtual host to access your application. You will need a *nix environment and a little bit more Apache and system administration skills, but the benefit is that it is more secure and you will be able to choose your working directory. You will need to change two files: your Apache virtual host file(s), in order to link a virtual host to your application, and your system hosts file, in order to redirect the wanted URL to your virtual host. In both cases, the location of the files will be very dependent on your operating system and the server environment you are using, so you will have to figure out their location yourself (if you are using a common configuration, you won't have any problem finding instructions on the web). In the following example, we will set up your system to call your application when requesting the my.app URL on your local environment. Let's first edit the virtual host file(s); add the following code at the end:
<VirtualHost *:80>
    ServerName my.app
    DocumentRoot YOUR_APP_PATH/public
    SetEnv FUEL_ENV "development"
    <Directory YOUR_APP_PATH/public>
        DirectoryIndex index.php
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>
Then, open your system hosts file and add the following line at the end: 127.0.0.1 my.app Depending on your environment, you might need to restart Apache after that. You can now access your website at the following URL: http://my.app/

Checking that everything works
Whether you used a virtual host or not, the following should now appear when accessing your website: Congratulations! You have just successfully installed the FuelPHP framework. The welcome page shows some recommended directions to continue your project.

Database configuration
As we will store our monkeys in a MySQL database, it is time to configure FuelPHP to use our local database.
If you open fuel/app/config/db.php, all you will see is an empty array but this configuration file is merged to fuel/app/config/ENV/db.php, ENV being the current Fuel’s environment, which in that case is development. You should therefore open fuel/app/config/development/db.php: <?php //... return array( 'default' => array(    'connection' => array(      'dsn'       => 'mysql:host=localhost;dbname=fuel_dev',      'username'   => 'root',      'password'   => 'root',    ), ), ); You should adapt this array to your local configuration, particularly the database name (currently set to fuel_dev), the username, and password. You must create your project’s database manually. Scaffolding Now that the database configuration is set, we will be able to generate a scaffold. We will use for that the generate feature of the oil utility. Open the command-line utility and go to your website root directory. To generate a scaffold for a new model, you will need to enter the following line: php oil generate scaffold/crud MODEL ATTR_1:TYPE_1 ATTR_2:TYPE_2 ... Where: MODEL is the model name ATTR_1, ATTR_2… are the model’s attributes names TYPE_1, TYPE_2… are each attribute type In our case, it should be: php oil generate scaffold/crud monkey name:string still_here:bool height:float description:text Here we are telling oil to generate a scaffold for the monkey model with the following attributes: name: The name of the monkey. Its type is string and the associated MySQL column type will be VARCHAR(255). still_here: Whether or not the monkey is still in the facility. Its type is boolean and the associated MySQL column type will be TINYINT(1). height: Height of the monkey. Its type is float and its associated MySQL column type will be FLOAT. description: Description of the monkey. Its type is text and its associated MySQL column type will be TEXT. You can do much more using the oil generate feature, as generating models, controllers, migrations, tasks, package and so on. We will see some of these in the FuelPHP Application Development Blueprints book and you are also recommended to take a look at the official documentation: http://fuelphp.com/docs/packages/oil/generate.html When you press Enter, you will see the following lines appear: Creating migration: APPPATH/migrations/001_create_monkeys.php Creating model: APPPATH/classes/model/monkey.php Creating controller: APPPATH/classes/controller/monkey.php Creating view: APPPATH/views/monkey/index.php Creating view: APPPATH/views/monkey/view.php Creating view: APPPATH/views/monkey/create.php Creating view: APPPATH/views/monkey/edit.php Creating view: APPPATH/views/monkey/_form.php Creating view: APPPATH/views/template.php Where APPPATH is your website directory/fuel/app. Oil has generated for us nine files: A migration file, containing all the necessary information to create the model’s associated table The model A controller Five view files and a template file More explanation about these files and how they interact with each other can be accessed in Chapter 1 of the FuelPHP Application Development Blueprints book, freely available. For those of you who are not yet familiar with MVC and HMVC frameworks, don’t worry; the chapter contains an introduction to the most important concepts. Migrating One of the generated files was APPPATH/migrations/001_create_monkeys.php. It is a migration file and contains the required information to create our monkey table. Notice the name is structured as VER_NAME where VER is the version number and NAME is the name of the migration. 
If you execute the following command line: php oil refine migrate All migrations files that have not been yet executed will be executed from the oldest version to the latest version (001, 002, 003, and so on). Once all files are executed, oil will display the latest version number. Once executed, if you take a look at your database, you will observe that not one, but two tables have been created: monkeys: As expected, a table have been created to handle your monkeys. Notice that the table name is the plural version of the word we typed for generating the scaffold; such a transformation was internally done using the Inflector::pluralize method. The table will contain the specified columns (name, still_here), the id column, but also created_at and updated_at. These columns respectively store the time an object was created and updated, and are added by default each time you generate your models. It is though possible to not generate them with the --no-timestamp argument. migration: This other table was automatically created. It keeps track of the migrations that were executed. If you look into its content, you will see that it already contains one row; this is the migration you just executed. You can notice that the row does not only indicate the name of the migration, but also a type and a name. This is because migrations files can be placed at many places such as modules or packages. The oil utility allows you to do much more. Don’t hesitate to take a look at the official documentation: http://fuelphp.com/docs/packages/oil/intro.html Or, again, to read FuelPHP Application Development Blueprints’ Chapter 1 which is available for free. Using your application Now that we generated the code and migrated the database, our application is ready to be used. Request the following URL: If you created a virtual host: http://my.app/monkey Otherwise (don’t forget to replace DIR): http://localhost/DIR/public/monkey As you can notice, this webpage is intended to display the list of all monkeys, but since none have been added, the list is empty. Then let’s add a new monkey by clicking on the Add new Monkey button. The following webpage should appear: You can enter your monkey’s information here. The form is certainly not perfect - for instance the Still here field use a standard input although a checkbox would be more appropriated - but it is a great start. All we will have to do is refine the code a little bit. Once you have added several monkeys, you can again take a look at the listing page: Again, this is a great start, though we might want to refine it. Each item on the list has three associated actions: View, Edit, and Delete. Let’s first click on View: Again a great start, though we will refine this webpage. You can return back to the listing by clicking on Back or edit the monkey file by clicking on Edit. Either accessed from the listing page or the view page, it will display the same form as when creating a new monkey, except that the form will be prefilled of course. Finally, if you click on Delete, a confirmation box will appear to prevent any miss clicking. Want to learn more ? Don’t hesitate to check out FuelPHP Application Development Blueprints’ Chapter 1 which is freely available in Packt Publishing’s website. In this chapter, you will find a more thorough introduction to FuelPHP and we will show how to improve this first application. 
You are also recommended to explore the FuelPHP website, which contains a lot of useful information and excellent documentation: http://www.fuelphp.com There is much more to discover about this wonderful framework.

Summary
In this article, we learned about the installation of the FuelPHP environment and the setup of the application directories within it. Resources for Article: Further resources on this subject: PHP Magic Features [Article] FuelPHP [Article] Building a To-do List with Ajax [Article]

Events and Signals

Packt
16 Oct 2013
16 min read
(For more resources related to this topic, see here.)

Event management
An event in Qt is an object inherited from the abstract QEvent class which is a notification of something significant that has happened. Events become especially useful when creating custom widgets of our own. An event can happen either within an application or as a result of an outside activity that the application needs to know about. When an event occurs, Qt creates an event object and notifies the instance of a QObject class or one of its subclasses through its event() function. Events can be generated from both inside and outside the application. For instance, the QKeyEvent and QMouseEvent objects represent some kind of keyboard and mouse interaction and they come from the window manager; the QTimerEvent objects are sent to a QObject when one of its timers fires, and they usually come from the operating system; the QChildEvent objects are sent to a QObject when a child is added or removed and they come from inside your Qt application. The users of PySide often get confused between events and signals. Events and signals are two parallel mechanisms used to accomplish the same thing. As a general difference, signals are useful when using a widget, whereas events are useful when implementing the widget. For example, when we are using a widget like QPushButton, we are more interested in its clicked() signal than in the low-level mouse press or key press events that caused the signal to be emitted. But if we are implementing the QPushButton class, we are more interested in the implementation of code for mouse and key events. Also, we usually handle events but get notified by signal emissions.

Event loop
All the events in Qt go through an event loop. One key concept to be noted here is that events are not delivered as soon as they are generated; instead, they're queued up in an event queue and processed later one by one. The event dispatcher loops through this queue and dispatches these events to the target QObject, and hence it is called an event loop. Qt's main event loop dispatcher, QCoreApplication.exec(), will fetch the native window system events from the event queue, process them, convert them into QEvent objects, and send them to their respective target QObject. A simple event loop can be explained as described in the following pseudocode: while(application_is_active) { while(event_exists_in_event_queue) process_next_event(); wait_for_more_events(); } Qt's main event loop starts with the QCoreApplication::exec() call and this gets blocked until QCoreApplication::exit() or QCoreApplication::quit() is called to terminate the loop. The wait_for_more_events() function blocks until some event is generated. This blocking is not a busy wait and will not burn CPU resources. Generally, the event loop can be awakened by window manager activity, socket activity, timers, or events posted by other threads. All these activities require a running event loop. It is important not to block the event loop, because when it is stuck, widgets will not update themselves, timers won't fire, and networking communications will slow down and stop. In short, your application will not respond to any external or internal events, and hence it is advised to react to events quickly and return to the event loop as soon as possible.

Event processing
Qt offers five methods to do event processing.
They are: By re-implementing a specific event handler like keyPressEvent(), paintEvent() By re-implementing the QObject::event() class Installing an event filter on a single QObject Installing an event filter on the QApplication object Subclassing QApplication and re-implementing notify() Generally, this can be broadly divided into re-implementing event handlers and installing event filters. We will see each of them in little detail. Reimplementing event handlers We can implement the task at hand or control a widget by reimplementing the virtual event handling functions. The following example will explain how to reimplement a few most commonly used events, a key press event, a mouse double-click event, and a window resize event. We will have a look at the code first and defer the explanation after the code: # Import necessary modules import sys from PySide.QtGui import * from PySide.QtCore import * # Our main widget class class MyWidget(QWidget): # Constructor function def __init__(self): QWidget.__init__(self) self.setWindowTitle("Reimplementing Events") self.setGeometry(300, 250, 300, 100) self.myLayout = QVBoxLayout() self.myLabel = QLabel("Press 'Esc' to close this App") self.infoLabel = QLabel() self.myLabel.setAlignment(Qt.AlignCenter) self.infoLabel.setAlignment(Qt.AlignCenter) self.myLayout.addWidget(self.myLabel) self.myLayout.addWidget(self.infoLabel) self.setLayout(self.myLayout) # Function reimplementing Key Press, Mouse Click and Resize Events def keyPressEvent(self, event): if event.key() == Qt.Key_Escape: self.close() def mouseDoubleClickEvent(self, event): self.close() def resizeEvent(self, event): self.infoLabel.setText("Window Resized to QSize(%d, %d)" % (event.size().width(), event.size().height())) if __name__ =='__main__': # Exception Handling try: myApp = QApplication(sys.argv) myWidget = MyWidget() myWidget.show() myApp.exec_() sys.exit(0) except NameError: print("Name Error:", sys.exc_info()[1]) except SystemExit: print("Closing Window...") except Exception: print(sys.exc_info()[1]) In the preceding code, the keyPressEvent() function reimplements the event generated as a result of pressing a key. We have implemented in such a way that the application closes when the Esc key is pressed. On running this code, we would get a output similar to the one shown in the following screenshot: The application will be closed if you press the Esc key. The same functionality is implemented on a mouse double-click event. The third event is a resize event. This event gets triggered when you try to resize the widget. The second line of text in the window will show the size of the window in (width, height) format. You could witness the same on resizing the window. Similar to keyPressEvent(), we could also implement keyReleaseEvent() that would be triggered on release of the key. Normally, we are not very interested in the key release events except for the keys where it is important. The specific keys where the release event holds importance are the modifier keys such as Ctrl, Shift, and Alt. These keys are called modifier keys and can be accessed using QKeyEvent::modifiers. For example, the key press of a Ctrl key can be checked using Qt.ControlModifier. The other modifiers are Qt.ShiftModifier and Qt.AltModifier. 
For instance, if we want to check the press event of combination of Ctrl + PageDown key, we could have the check as: if event.key() == Qt.Key_PageDown and event.modifiers() == Qt.ControlModifier: print("Ctrl+PgDn Key is pressed") Before any particular key press or mouse click event handler function, say, for example, keyPressEvent() is called, the widget's event() function is called first. The event() method may handle the event itself or may delegate the work to a specific event handler like resizeEvent() or keyPressEvent(). The implementation of the event() function is very helpful in some special cases like the Tab key press event. In most cases, the widget with the keyboard focuses the event() method will call setFocus() on the next widget in the tab order and will not pass the event to any of the specific handlers. So we might have to re-implement any specific functionality for the Tab key press event in the event() function. This behavior of propagating the key press events is the outcome of Qt's Parent-Child hierarchy. The event gets propagated to its parent or its grand-parent and so on if it is not handled at any particular level. If the top-level widget also doesn't handle the event it is safely ignored. The following code shows an example for reimplementing the event() function: class MyWidget(QWidget): # Constructor function def __init__(self): QWidget.__init__(self) self.setWindowTitle("Reimplementing Events") self.setGeometry(300, 250, 300, 100) self.myLayout = QVBoxLayout() self.myLabel1 = QLabel("Text 1") self.myLineEdit1 = QLineEdit() self.myLabel2 = QLabel("Text 2") self.myLineEdit2 = QLineEdit() self.myLabel3 = QLabel("Text 3") self.myLineEdit3 = QLineEdit() self.myLayout.addWidget(self.myLabel1) self.myLayout.addWidget(self.myLineEdit1) self.myLayout.addWidget(self.myLabel2) self.myLayout.addWidget(self.myLineEdit2) self.myLayout.addWidget(self.myLabel3) self.myLayout.addWidget(self.myLineEdit3) self.setLayout(self.myLayout) # Function reimplementing event() function def event(self, event): if event.type()== QEvent.KeyRelease and event.key()== Qt.Key_Tab: self.myLineEdit3.setFocus() return True return QWidget.event(self,event) In the preceding example, we try to mask the default behavior of the Tab key. If you haven't implemented the event() function, pressing the Tab key would have set focus to the next available input widget. You will not be able to detect the Tab key press in the keyPress() function as described in the previous examples, since the key press is never passed to them. Instead, we have to implement it in the event() function. If you execute the preceding code, you would see that every time you press the Tab key the focus will be set into the third QLineEdit widget of the application. Inside the event() function, it is more important to return the value from the function. If we have processed the required operation, True is returned to indicate that the event is handled successfully, else, we pass the event handling to the parent class's event() function. Installing event filters One of the interesting and notable features of Qt's event model is to allow a QObject instance to monitor the events of another QObject instance before the latter object is even notified of it. This feature is very useful in constructing custom widgets comprising of various widgets altogether. Consider that you have a requirement to implement a feature in an internal application for a customer such that pressing the Enter key must have to shift the focus to next input widget. 
One way to approach the problem is to reimplement the keyPressEvent() function for all the widgets present in the custom widget. Instead, this can be achieved by reimplementing the eventFilter() function for the custom widget. If we implement this, the events will first be passed on to the custom widget's eventFilter() function before being passed on to the target widget. An example is implemented as follows: def eventFilter(self, receiver, event): if(event.type() == QEvent.MouseButtonPress): QMessageBox.information(None,"Filtered Mouse Press Event!!",'Mouse Press Detected') return True return super(MyWidget,self).eventFilter(receiver, event) Remember to return the result of event handling, or pass it on to the parent's eventFilter() function. To invoke eventFilter(), it has to be registered as follows in the constructor function: self.installEventFilter(self) The event filters can also be implemented for the QApplication as a whole. This is left as an exercise for you to discover. Reimplementing the notify() function The final way of handling events is to reimplement the notify() function of the QApplication class. This is the only way to get all the events before any of the event filters discussed previously are notified. The event gets notified to this function first before it gets passed on to the event filters and specific event functions. The use of notify() and other event filters are generally discouraged unless it is absolutely necessary to implement them because handling them at top level might introduce unwanted results, and we might end up in handling the events that we don't want to. Instead, use the specific event functions to handle events. The following code excerpt shows an example of re-implementing the notify() function: class MyApplication(QApplication): def __init__(self, args): super(MyApplication, self).__init__(args) def notify(self, receiver, event): if (event.type() == QEvent.KeyPress): QMessageBox.information(None, "Received Key Release EVent", "You Pressed: "+ event.text()) return super(MyApplication, self).notify(receiver, event) Signals and slots The fundamental part of any GUI program is the communication between the objects. Signals and slots provide a mechanism to define this communication between the actions happened and the result proposed for the respective action. Prior to Qt's modern implementation of signal/slot mechanism, older toolkits achieve this kind of communication through callbacks. A callback is a pointer to a function, so if you want a processing function to notify about some event you pass a pointer to another function (the callback) to the processing function. The processing function then calls the callback whenever appropriate. This mechanism does not prove useful in the later advancements due to some flaws in the callback implementation. A signal is an observable event, or at least notification that the event has happened. A slot is a potential observer, more usually a function that is called. In order to establish communication between them, we connect a signal to a slot to establish the desired action. However, we have already seen the concept of connecting a signal to a slot in the earlier chapters while designing the text editor application. Those implementations handle and connect different signals to different objects. 
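In its simplest form, such a connection needs nothing more than a signal, a callable slot, and a call to connect(). The following minimal sketch (illustrative only) connects the predefined clicked() signal of a QPushButton to an ordinary Python function acting as the slot:

import sys
from PySide.QtGui import QApplication, QPushButton

def on_click():
    print("Button was clicked")

myApp = QApplication(sys.argv)
button = QPushButton("Click me")
button.clicked.connect(on_click)   # predefined signal connected to our slot
button.show()
myApp.exec_()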
However, we may have different combinations as defined in the bullet points: One signal can be connected to many slots Many signals can be connected to the same slot A signal can be connected to other signals Connections can be removed PySide offers various predefined signals and slots such that we can connect a predefined signal to a predefined slot and do nothing else to achieve what we want. However, it is also possible to define our own signals and slots. Whenever a signal is emitted, Qt will simply throw it away. We can define the slot to catch and notice the signal that is being emitted. The first code excerpt that follows this text will be an example for connecting predefined signals to predefined slots and the latter will discuss the custom user defined signals and slots. The first example is a simple EMI calculator application that takes the Loan Amount, Rate of Interest, and Number of Years as its input, and calculates the EMI per month and displays it to the user. To start with, we set in a layout the components required for the EMI calculator application. The Amount will be a text input from the user. The rate of years will be taken from a spin box input or a dial input. A spin box is a GUI component which has its minimum and maximum value set, and the value can be modified using the up and down arrow buttons present at its side. The dial represents a clock like widget whose values can be changed by dragging the arrow. The Number of Years value is taken by a spin box input or a slider input: class MyWidget(QWidget): def __init__(self): QWidget.__init__(self) self.amtLabel = QLabel('Loan Amount') self.roiLabel = QLabel('Rate of Interest') self.yrsLabel = QLabel('No. of Years') self.emiLabel = QLabel('EMI per month') self.emiValue = QLCDNumber() self.emiValue.setSegmentStyle(QLCDNumber.Flat) self.emiValue.setFixedSize(QSize(130,30)) self.emiValue.setDigitCount(8) self.amtText = QLineEdit('10000') self.roiSpin = QSpinBox() self.roiSpin.setMinimum(1) self.roiSpin.setMaximum(15) self.yrsSpin = QSpinBox() self.yrsSpin.setMinimum(1) self.yrsSpin.setMaximum(20) self.roiDial = QDial() self.roiDial.setNotchesVisible(True) self.roiDial.setMaximum(15) self.roiDial.setMinimum(1) self.roiDial.setValue(1) self.yrsSlide = QSlider(Qt.Horizontal) self.yrsSlide.setMaximum(20) self.yrsSlide.setMinimum(1) self.calculateButton = QPushButton('Calculate EMI') self.myGridLayout = QGridLayout() self.myGridLayout.addWidget(self.amtLabel, 0, 0) self.myGridLayout.addWidget(self.roiLabel, 1, 0) self.myGridLayout.addWidget(self.yrsLabel, 2, 0) self.myGridLayout.addWidget(self.amtText, 0, 1) self.myGridLayout.addWidget(self.roiSpin, 1, 1) self.myGridLayout.addWidget(self.yrsSpin, 2, 1) self.myGridLayout.addWidget(self.roiDial, 1, 2) self.myGridLayout.addWidget(self.yrsSlide, 2, 2) self.myGridLayout.addWidget(self.calculateButton, 3, 1) self.setLayout(self.myGridLayout) self.setWindowTitle("A simple EMI calculator") Until now, we have set the components that are required for the application. Note that, the application layout uses a grid layout option. The next set of code is also defined in the contructor's __init__ function of the MyWidget class which will connect the different signals to slots. There are different ways by which you can use a connect function. 
The following code explains the various options available:

self.roiDial.valueChanged.connect(self.roiSpin.setValue)
self.connect(self.roiSpin, SIGNAL("valueChanged(int)"), self.roiDial.setValue)

In the first line of the previous code, we connect the valueChanged() signal of roiDial to the setValue() slot of roiSpin. So, if we change the value of roiDial, it emits a signal that is delivered to roiSpin's setValue() function, which sets the value accordingly. Note that changing either the spin box or the dial should update the other, because both represent the same quantity. Hence, we add a second line, which calls roiDial's setValue() slot when roiSpin's value changes. However, note that the second form of connecting signals to slots is deprecated. It is given here just for reference, and using this form is strongly discouraged. The following two lines of code do the same for the number-of-years slider and spin box:

self.yrsSlide.valueChanged.connect(self.yrsSpin.setValue)
self.connect(self.yrsSpin, SIGNAL("valueChanged(int)"), self.yrsSlide, SLOT("setValue(int)"))

In order to calculate the EMI value, we connect the clicked signal of the push button to a function (slot) that calculates the EMI and displays it to the user:

self.connect(self.calculateButton, SIGNAL("clicked()"), self.showEMI)

The EMI calculation and display function is given for your reference:

def showEMI(self):
    loanAmount = float(self.amtText.text())
    # convert to float before dividing, to avoid integer truncation of the monthly rate
    rateInterest = float(self.roiSpin.value()) / 12 / 100
    noMonths = int(self.yrsSpin.value() * 12)
    # standard EMI formula: P * r * (1 + r)^n / ((1 + r)^n - 1)
    emi = (loanAmount * rateInterest) * (((1 + rateInterest) ** noMonths) / (((1 + rateInterest) ** noMonths) - 1))
    self.emiValue.display(emi)
    self.myGridLayout.addWidget(self.emiLabel, 4, 0)
    self.myGridLayout.addWidget(self.emiValue, 4, 2)

The sample output of the application is shown in the following screenshot: The EMI calculator application uses predefined signals, for example valueChanged() and clicked(), and predefined slots such as setValue(). However, the application also uses a user-defined slot, showEMI(), to calculate the EMI. As with slots, it is also possible to create a user-defined signal and emit it when required. The following program is an example of creating and emitting user-defined signals:

import sys
from PySide.QtCore import *

# define a new slot that receives and prints a string
def printText(text):
    print(text)

class CustomSignal(QObject):
    # create a new signal
    mySignal = Signal(str)

if __name__ == '__main__':
    try:
        myObject = CustomSignal()
        # connect signal and slot
        myObject.mySignal.connect(printText)
        # emit signal
        myObject.mySignal.emit("Hello, Universe!")
    except Exception:
        print(sys.exc_info()[1])

This is a very simple example of using custom signals. In the CustomSignal class, we create a signal named mySignal and emit it in the main block; we also specify that, on emission of mySignal, the printText() slot is called. Many complex signal emissions can be built this way.
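Building on this, the short sketch below shows a custom signal that carries typed arguments and is connected to more than one slot, illustrating the "one signal can be connected to many slots" combination listed earlier. The PriceFeed class, the slot names, and the threshold are invented here purely for illustration and are not part of the original example.

import sys
from PySide.QtCore import QObject, Signal

class PriceFeed(QObject):
    # a custom signal carrying two typed arguments
    priceChanged = Signal(str, float)

def logPrice(symbol, price):
    print("log: %s is now %.2f" % (symbol, price))

def alertPrice(symbol, price):
    if price > 100:
        print("alert: %s crossed 100" % symbol)

if __name__ == '__main__':
    feed = PriceFeed()
    # one signal connected to many slots
    feed.priceChanged.connect(logPrice)
    feed.priceChanged.connect(alertPrice)
    # a single emit drives both slots
    feed.priceChanged.emit("ACME", 101.5)

Because both slots are connected to the same signal, a single emit() call drives the logging and the alerting behaviour without the emitter knowing about either receiver, which is the main design benefit of signals and slots over plain callbacks.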

Introduction to NFRs

Packt
20 Jun 2017
14 min read
In this article by Sameer Paradkar, the author of the book Mastering Non-Functional Requirements, we will learn the non-functional requirements are those aspects of the IT system that, while not directly affect the business functionality of the application but have a profound impact on the efficiency and effectiveness of business systems for end users as well as the people responsible for supporting the program. The definition of these requirements is an essential factor in developing a total customer solution that delivers business goals. Non-functional requirements are used primarily to drive the operational aspects of the architecture, in other words, to address major operational and technical areas of the system to ensure the robustness and ruggedness of the application. Benchmark or Proof-of-Concept can be used to verify if the implementation meets these requirements or indicate if a corrective action is necessary. Ideally, a series of tests should be planned that maps to the development schedule and grows in complexity. The topics that are covered in this article are as follows: Definition of NFRs NFR KPIs and metrics (For more resources related to this topic, see here.) Introducing NFR The following pointers state the definition of NFR: To define requirements and constraints on the IT system As a basis for cost estimates and early system sizing To assess the viability of the proposed IT system NFRs are an important determining factor of the architecture and design of the operational models As an guideline to design phase to meet NFRs such as performance, scalability, availability The NFRs foreach of the domains e.g. scalability, availability and so on,must be understood to facilitate the design and development of the target operating model. These include the servers, networks, and platforms including the application runtime environments. These are critical for the execution of benchmark tests. They also affect the design of technical and application components. End users have expectations about the effectiveness of the application. These characteristics include ease of software use, speed, reliability, and recoverability when unexpected conditions arise. The NFRs define these aspects of the IT system. The non-functional requirements should be defined precisely and involves quantifying them. NFRs should provide measurements the application must meet. For example, the maximum number of time allowed to execute a process, the number of hours in a day an application must be available, the maximum size of a database on disk, and the number of concurrent users supported are typical NFRs the software must implement. Figure 1: Key Non-Functional Requirements There are many kinds of non-functional requirements, including: Performance Performance is the responsiveness of the application to perform specific actions in a given time span. Performance is scored in terms of throughput or latency. Latency is the time taken by the application to respond to an event. Throughput is the number of events scored in a given time interval. An application’s performance can directly impact its scalability. Enhancing application’s performance often enhances scalability by reducing contention for shared resources. Performance attributes specify the timing characteristics of the application. Certain features are more time-sensitive than others; the NFRs should identify such software tasks that have constraints on their performance. 
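To make such performance NFRs testable, it helps to state exactly how the numbers will be computed. The following short Python sketch is purely illustrative and is not part of the original text; the sample response times, the 60-second window, and the choice of the 95th percentile are assumptions made only to show one common way of deriving throughput and a latency percentile from measured data.

import math

# illustrative sample: response times (in seconds) observed over a 60-second window
response_times = [0.21, 0.35, 0.18, 0.44, 0.29, 0.52, 0.31, 0.27, 0.90, 0.23]
window_seconds = 60.0

# throughput: completed requests per unit of time
throughput = len(response_times) / window_seconds

# latency percentile (nearest-rank method) to describe the response-time distribution
def percentile(samples, pct):
    ordered = sorted(samples)
    rank = int(math.ceil(pct / 100.0 * len(ordered)))
    return ordered[rank - 1]

p95 = percentile(response_times, 95)
print("Throughput: %.2f requests/second" % throughput)
print("95th percentile latency: %.2f seconds" % p95)

Expressing the requirement in such measurable terms ("95 percent of requests complete within X seconds at Y requests per second") is what allows a benchmark or proof-of-concept to verify it.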
Response time relates to the time needed to complete specific business processes, batch or interactive, within the target business system. The system must be designed to fulfil the agreed upon response time requirements, while supporting the defined workload mapped against the given static baseline, on a system platform that does not exceed the stated utilization. The following attributes are: Throughput: The ability of the system to execute a given number of transactions within a given unit of time Response times: The distribution of time which the system takes to respond to the request Scalability Scalability is the ability to handle an increase in the work load without impacting the performance, or the ability to quickly expand the architecture. Itis the ability to expand the architecture to accommodate more users, more processes, more transactions, additional systems and services as the business requirements change and the systems evolve to meet the future business demands. This permits existing systems to be extended without replacing them. Thisdirectly affects the architecture and the selection of software components and hardware. The solution must allow the hardware and the deployed software services and components to be scaled horizontally as well as vertically. Horizontal scaling involves replicating the same functionality across additional nodes vertical scaling involves the same functionality across bigger and more powerful nodes. Scalability definitions measure volumes of users and data the system should support. There are two key techniques for improving both vertical and horizontal scalability. Vertical Scaling is also known as scaling up and includes adding more resources such as memory, CPUand hard disk to a system. Horizontal scaling is also know as scaling out and includes adding more nodes to a cluster forwork load sharing. The following attributes are: Throughput: Number of maximum transactions your system needs to handle. E.g., thousand a day or A million Storage: Amount  of data you going to need to store Growth requirements: Data growth in the next 3-5 years Availability Availability is the time frame in which the system functions normally and without failures. Availability is measured as the percentage of total application downtime over a defined time period. Availability is affected by failures, exceptions, infrastructure issues, malicious attacks, and maintenance and upgrades. It is the uptime or the amount of time the system is operational and available for use. This is specified because some systems are architected with expected downtime for activities like database upgrades and backups. Availability also conveys the number of hours or days per week or weeks per year the application will be available to its end customers, as well as how rapidly it can recover from faults. Since the architecture establishes software, hardware, and networking entities, this requirement extends to all of them. Hardware availability, recoverability, and reliability definitions measure system up-time. For example, it is specified in terms of mean time between failures or “MTBF”. The following attributes are: Availability: Application availability considering the weekends, holidays and maintenance times and failures. Locations of operation: Geographic location, Connection requirements and the restrictions of the network prevail. Offline Requirement: Time available for offline operations including batch processing & system maintenance. 
Length of time between failures Recoverability: Time required by the system can resume operation in the event of failure. Resilience: The reliability characteristics of the system and sub-components Capacity This non-functional requirement defines the ways in which the system is expected to scale-up by increasing capacity, hardware or adding machines based on business objectives. Capacity is delivering enough functionality required for the end users.  A request for a web service to provide 1,000 requests per second when the server is only capable of 100 requests a second, may not succeed.  While this sounds like an availability issue, it occurs because the server is unable to handle the requisite capacity. A single node may not be able to provide enough capacity, and one needs to deploy multiple nodes with a similar configuration to meet organizational capacity requirements. Capacity to identify a failing node and restart it on another machine or VM is a non-functional requirement. The following attributes are: Throughput is the number of peak transactions the system needs to handle Storage: Volume of data the system can persist at run time to disk and relates to the memory/disk Year-on-yeargrowthrequirements (users, processing and storage) e-channel growth projections Different types of things (for example, activities or transactions supported, and so on) For each type of transaction, volumes on an hourly, daily, weekly, monthly, and so on During the specific time of the day (for example, at lunch), week, month or year are volumes significantly higher Transaction volume growth expected and additional volumes you will be able to handle Security Security is the ability of an application to avoid malicious incidences and events outside of the designed system usage, and prevent disclosure or loss of information. Improving security increases the reliability of application by reducing the likelihood of an attack succeeding and impairing operations. Adding security controls protects assets and prevents unauthorized access and manipulation of critical information. The factors that affect an application security are confidentiality and integrity. The key security controls used to secure systems are authorization, authentication, encryption, auditing, and logging. Definition and monitoring of effectiveness in meeting the security requirements of the system, for example, to avoid financial harm in accounting systems, is critical. Integrityrequirements are restrictingaccess to functionality or data to certain users and protecting the privacyof data entered into the software. The following attributes are: Authentication: Correct identification of parties attempting to access systems and protection of systems from unauthorized parties Authorization: Mechanism required to authorize users to perform different functions within the systems Encryption(data at rest or data in flight): All external communications between the data server and clients must beencrypted Data confidentiality: All data must be protectively marked, stored and protected Compliance: The process to confirm systems compliance with the organization's security standards and policies  Maintainability Maintainability is the ability of any application to go through modifications and updates with a degree of ease. This is the degree of flexibility with which the application can be modified, whether for bug fixes or to update functionality. 
These changes may impact any of the components, services, functionality, or interfaces in the application landscape while modifying to fix errors, or to meet changing business requirements. This is also a degree of time it takes to restore the system to its normal state following a failure or fault. Improving maintainability can improve the availability and reduce the run-time defects. Application’s maintainability is dependent on the overall quality attributes. It is critical as a large chunk of the IT budget is spent on maintenance of systems. The more maintainable a system is the lower the total cost of ownership. The following attributes are: Conformance to design standards, coding standards, best practices, reference architectures, and frameworks. Flexibility: The degree to which the system is intended to support change Release support: The way in which the system supports the introduction of initial release, phased rollouts and future releases Manageability Manageability is the ease with which the administrators can manage the application, through useful instrumentation exposed for monitoring. It is the ability of the system or the group of the system to provide key information to the operations and support team to be able to debug, analyze and understand the root cause of failures. It deals with compliance/governance with the domain frameworks and polices. The key is to design the application that is easy to manage, by exposing useful instrumentation for monitoring systems and for understanding the cause of failures. The following attributes are: System must maintain total traceability of transactions Businessobjectsand database fields are part of auditing User and transactional timestamps. File characteristics include size before, size after and structure Getting events and alerts as thresholds (for example, memory, storage, processor) are breached Remotely manage applications and create new virtual instances at the click of a button Rich graphical dashboard for all key applications metrics and KPI Reliability Reliability is the ability of the application to maintain its integrity and veracity over a time span and also in the event of faults or exceptions. It is measured as the probability that the software will not fail and that it will continue functioning for a defined time interval. It alsospecifies the ability of the system to maintain its performance over a time span. Unreliable software is prone to failures anda few processes may be more sensitive to failure than others, because such processes may not be able to recover from a fault or exceptions. The following attributes are: The characteristic of a system to perform its functions under stated conditions for a specificperiod of time. Mean Time To Recovery: Time is available to get the system back up online. Mean Time Between Failures – Acceptable threshold for downtime Data integrity is also known as referential integrity in database tables and interfaces Application Integrity and Information Integrity: during transactions Fault trapping (I/O): Handling failures and recovery Extensibility Extensibility is the ability of a system to cater to future changes through flexible architecture, design or implementation. Extensible applications have excellent endurance, which prevents the expensive processes of procuring large inflexible applications and retiring them due to changes in business needs. Extensibility enables organizations to take advantage of opportunities and respond to risks. 
While there is a significant difference extensibility is often tangled with modifiability quality. Modifiability means that is possible to change the software whereas extensibility means that change has been planned and will be effortless. Adaptability is at times erroneously leveraged with extensibility. However, adaptability deals with how the user interactions with the system are managed and governed. Extensibilityallows a system, people, technology, information, and processes all working together to achieve following objectives: The following attributes are: Handle new information types Manage new or changed business entities Consume or provide new feeds Recovery In the event of a natural calamity for example, flood or hurricane, the entire facility where the application is hosted may become inoperable or inaccessible. Business-critical applications should have a strategy to recover from such disasters within a reasonable amount of time frame. The solution implementing various processes must be integrated with the existing enterprise disaster recovery plan. The processes must be analysed to understand the criticality of each process to the business, the impact of loss to the business in case of non-availability of the process. Based on this analysis, appropriate disaster procedures must be developed, and plans should be outlined. As part of disaster recovery, electronic backups of data and procedures must be maintained at the recovery location and be retrievable within the appropriate time frames for system function restoration. In the case of high criticality, real-time mirroring to a mirror site should be deployed. The following attributes are: Recoveryprocess: Recovery Time Objectives(RTO) / Recovery Point Objectives(RPO) Restore time: Time required switching to the secondary site when the primary fails RPO/Backup time: Time it takes to back your data Backup frequencies: Frequency of backing-up the transaction data, configuration data and code Interoperability Interoperability is the ability to exchange information and communicate with internal and external applications and systems. Interoperable systems make it easier to exchange information both internally and externally. The data formats, transport protocols and interfaces are the key attributes for architecting interoperable systems. Standardization of data formats, transport protocols and interfaces are the key aspect to be considered when architecting interoperable system. Interoperability is achieved through: Publishing and describing interfaces Describing the syntax used to communicate Describing the semantics of information it produces and consumes Leveraging open standards to communicate with external systems Loosely coupled with external systems The following attributes are: Compatibility with shared applications: Other system it needs to integrate Compatibility with 3rd party applications: Other systems it has to live with amicably Compatibility with various OS: Different OS compatibility Compatibility on different platforms: Hardware platforms it needs to work on Usability Usability measures characteristics such as consistency and aesthetics in the user interface. Consistency is the constant use of mechanisms employed in the user interface while Aesthetics refers to the artistic, visual quality of the user interface. It is the ease at which the users operate the system and make productive use of it. 
Usability is discussed with relation to the system interfaces, but it can just as well be applied to any tool, device, or rich system. This addresses the factors that establish the ability of the software to be understood, used, and learned by its intended users. The application interfaces must be designed with end users in mind so that they are intuitive to use, are localized, provide access for differently abled users, and provide an excellent overall user experience. The following attributes are: Look and feel standards: Layout and flow, screen element density, keyboard shortcuts, UI metaphors, colors. Localization/Internationalization requirements: Keyboards, paper sizes, languages, spellings, and so on Summary It explains he introduction of NFRs and why NFRs are a critical for building software systems. The article also explained various KPI for each of the key of NFRs i.e. scalability, availability, reliability and do on.  Resources for Article: Further resources on this subject: Software Documentation with Trac [article] The Software Task Management Tool - Rake [article] Installing Software and Updates [article]

Testing with Xtext and Xtend

Packt
20 Aug 2013
20 min read
(For more resources related to this topic, see here.) Introduction to testing Writing automated tests is a fundamental technology / methodology when developing software. It will help you write quality software where most aspects (possibly all aspects) are somehow verified in an automatic and continuous way. Although successful tests do not guarantee that the software is bug free, automated tests are a necessary condition for professional programming (see Beck 2002, Martin 2002, 2008, 2011 for some insightful reading about this subject). Tests will also document your code, whether it is a framework, a library, or an application; tests are form of documentation that does not risk to get stale with respect to the implementation itself. Javadoc comments will likely not be kept in synchronization with the code they document, manuals will tend to become obsolete if not updated consistently, while tests will fail if they are not up-to-date. The Test Driven Development (TDD) methodology fosters the writing of tests even before writing production code. When developing a DSL one can relax this methodology by not necessarily writing the tests first. However, one should write tests as soon as a new functionality is added to the DSL implementation. This must be taken into consideration right from the beginning, thus, you should not try to write the complete grammar of a DSL, but proceed gradually; write a few rules to parse a minimal program, and immediately write tests for parsing some test input programs. Only when these tests pass you should go on to implementing other parts of the grammar. Moreover, if some validation rules can already be implemented with the current version of the DSL, you should write tests for the current validator checks as well. Ideally, one does not have to run Eclipse to manually check whether the current implementation of the DSL works as expected. Using tests will then make the development much faster. The number of tests will grow as the implementation grows, and tests should be executed each time you add a new feature or modify an existing one. You will see that since tests will run automatically, executing them over and over again will require no additional effort besides triggering their execution (think instead if you should manually check that what you added or modified did not break something). This also means that you will not be scared to touch something in your implementation; after you did some changes, just run the whole test suite and check whether you broke something. If some tests fail,you will just need to check whether the failure is actually expected (and in case fix the test) or whether your modifications have to be fixed. It is worth noting that using a version control system (such as Git) is essential to easily get back to a known state; just experimenting with your code and finding errors using tests does not mean you can easily backtrack. You will not even be scared to port your implementation to a new version of the used frameworks. For example, when a new version of Xtext is released, it is likely that some API has changed and your DSL implementation might not be built anymore with the new version. Surely, running the MWE2 workflow is required. But after your sources compile again, your test suite will tell you whether the behavior of your DSL is still the same. In particular, if some of the tests fail, you can get an immediate idea of which parts need to be changed to conform to the new version of Xtext. 
Moreover, if your implementation relies on a solid test suite, it will be easier for contributors to provide patches and enhancements for your DSL; they can run the test suite themselves or they can add further tests for a specific bugfix or for a new feature. It will also be easy for the main developers to decide whether to accept the contributions by running the tests. Last but not the least, you will discover that writing tests right from the beginning will force you to write modular code (otherwise you will not be able to easily test it) and it will make programming much more fun. Xtext and Xtend themselves are developed with a test driven approach. Junit 4 Junit is the most popular unit test framework for Java and it is shipped with the Eclipse JDT. In particular, the examples in this article are based on Junit version 4. To implement Junit tests, you just need to write a class with methods annotated with @org.junit.Test. We will call such methods simply test methods. Such Java (or Xtend) classes can then be executed in Eclipse using the "Junit test" launch configuration; all methods annotated with @Test will be then executed by Junit. In test methods you can use assert methods provided by Junit to implement a test. For example, assertEquals (expected, actual) checks whether the two arguments are equal; assertTrue(expression) checks whether the passed expression evaluates to true. If an assertion fails, Junit will record such failure; in particular, in Eclipse, the Junit view will provide you with a report about tests that failed. Ideally, no test should fail (and you should see the green bar in the Junit view). All test methods can be executed by Junit in any order, thus, you should never write a test method which depends on another one; all test methods should be executable independently from each other. If you annotate a method with @Before, that method will be executed before each test method in that class, thus, it can be used to prepare a common setup for all the test methods in that class. Similarly, a method annotated with @After will be executed after each test method (even if it fails), thus, it can be used to cleanup the environment. A static method annotated with @BeforeClass will be executed only once before the start of all test methods (@AfterClass has the complementary intuitive functionality). The ISetup interface Running tests means we somehow need to bootstrap the environment to make it support EMF and Xtext in addition to the implementation of our DSL. This is done with a suitable implementation of ISetup. We need to configure things differently depending on how we want to run tests; with or without Eclipse and with or without Eclipse UI being present. The way to set up the environment is quite different when Eclipse is present, since many services are shared and already part of the Eclipse environment. When setting up the environment for non-Eclipse use (also referred to as standalone) there are a few things that must be configured, such as creating a Guice injector and registering information required by EMF. The method createInjectorAndDoEMFRegistration in the ISetup interface is there to do exactly this. Besides the creation of an Injector, this method also performs all the initialization of EMF global registries so that after the invocation of that method, the EMF API to load and store models of your language can be fully used, even without a running Eclipse. 
Xtext generates an implementation of this interface, named after your DSL, which can be found in the runtime plugin project. For our Entities DSL it is called EntitiesStandaloneSetup. The name "standalone" expresses the fact that this class has to be used when running outside Eclipse. Thus, the preceding method must never be called when running inside Eclipse (otherwise the EMF registries will become inconsistent). In a plain Java application the typical steps to set up the DSL (for example, our Entities DSL) can be sketched as follows: Injector injector = new EntitiesStandaloneSetup().createInjectorAndDoEMFRegistration();XtextResourceSet resourceSet = injector.getInstance(XtextResourceSet.class);resourceSet.addLoadOption (XtextResource.OPTION_RESOLVE_ALL, Boolean.TRUE);Resource resource = resourceSet.getResource (URI.createURI("/path/to/my.entities"), true);Model model = (Model) resource.getContents().get(0); This standalone setup class is especially useful also for Junit tests that can then be run without an Eclipse instance. This will speed up the execution of tests. Of course, in such tests you will not be able to test UI features. As we will see in this article, Xtext provides many utility classes for testing which do not require us to set up the runtime environment explicitly. However, it is important to know about the existence of the setup class in case you either need to tweak the generated standalone compiler or you need to set up the environment in a specific way for unit tests. Implementing tests for your DSL Xtext highly fosters using unit tests, and this is reflected by the fact that, by default, the MWE2 workflow generates a specific plug-in project for testing your DSL. In fact, usually tests should reside in a separate project, since they should not be deployed as part of your DSL implementation. This additional project ends with the .tests suffix, thus, for our Entities DSL, it is org.example.entities.tests. The tests plug-in project has the needed dependencies on the required Xtext utility bundles for testing. We will use Xtend to write Junit tests. In the src-gen directory of the tests project, you will find the injector p roviders for both headless and UI tests. You can use these providers to easily write Junit test classes without having to worry about the injection mechanisms setup. The Junit tests that use the injector provider will typically have the following shape (using the Entities DSL as an example): @RunWith(typeof(XtextRunner))@InjectWith(typeof(EntitiesInjectorProvider))class MyTest { @Inject MyClass ... As hinted in the preceding code, in this class you can rely on injection; we used @InjectWith and declared that EntitiesInjectorProvider has to be used to create the injector. EntitiesInjectorProvider will transparently provide the correct configuration for a standalone environment. As we will see later in this article, when we want to test UI features, we will use EntitiesUiInjectorProvider (note the "Ui" in the name). Testing the parser The first tests you might want to write are the ones which concern parsing. This reflects the fact that the grammar is the first thing you must write when implementing a DSL. You should not try to write the complete grammar before starting testing: you should write only a few rules and soon write tests to check if those rules actually parse an input test program as you expect. 
The nice thing is that you do not have to store the test input in a file (though you could do that); the input to pass to the parser can be a string, and since we use Xtend, we can use multi-line strings. The Xtext test framework provides the class ParseHelper to easily parse a string. The injection mechanism will automatically tell this class to parse the input string with the parser of your DSL. To parse a string, we inject an instance of ParseHelper<T>, where T is the type of the root class in our DSL's model – in our Entities example, this class is called Model. The method ParseHelper.parse will return an instance of T after parsing the input string given to it. By injecting the ParseHelper class as an extension, we can directly use its methods on the strings we want to parse. Thus, we can write: @RunWith(typeof(XtextRunner))@InjectWith(typeof(EntitiesInjectorProvider))class EntitiesParserTest { @Inject extension ParseHelper<Model> @Test def void testParsing() { val model = ''' entity MyEntity { MyEntity attribute; } '''.parse val entity = model.entities.get(0) Assert::assertEquals("MyEntity", entity.name) val attribute = entity.attributes.get(0) Assert::assertEquals("attribute", attribute.name); Assert::assertEquals("MyEntity", (attribute.type.elementType as EntityType). entity.name); } ... In this test, we parse the input and test that the expected structure was constructed as a result of parsing. These tests do not add much value in the Entities DSL, but in a more complex DSL you do want to test that the structure of the parsed EMF model is as you expect. You can now run the test: right-click on the Xtend file and select Run As | JUnit Test as shown in the following screenshot. The test should pass and you should see the green bar in the Junit view. Note that the parse method returns an EMF model even if the input string contains syntax errors (it tries to parse as much as it can); thus, if you want to make sure that the input string is parsed without any syntax error, you have to check that explicitly. To do that, you can use another utility class, ValidationTestHelper. This class provides many assert methods that take an EObject argument. You can use an extension field and simply call assertNoErrors on the parsed EMF object. Alternatively, if you do not need the EMF object but you just need to check that there are no parsing errors, you can simply call it on the result of parse, for example: class EntitiesParserTest { @Inject extension ParseHelper<Model> @Inject extension ValidationTestHelper... @Test def void testCorrectParsing() { ''' entity MyEntity { MyEntity attribute } '''.parse.assertNoErrors } If you try to run the tests again, you will get a failure for this new test, as shown in the following screenshot: The reported error should be clear enough: we forgot to add the terminating ";" in our input program, thus we can fix it and run the test again; this time the green bar should be back. You can now write other @Test methods for testing the various features of the DSL (see the sources of the examples). Depending on the complexity of your DSL you may have to write many of them. Tests should test one specific thing at a time; lumping things together (to reduce the overhead of having to write many test methods) usually makes it harder later. Remember that you should follow this methodology while implementing your DSL, not after having implemented all of it. 
If you follow this strictly, you will not have to launch Eclipse to manually check that you implemented a feature correctly, and you will note that this methodology will let you program really fast. Ideally, you should start with the grammar with a single rule, especially if the grammar contains nonstandard terminals. The very first task is to write a grammar that just parses all terminals. Write a test for that to ensure there are no overlapping terminals before proceeding; this is not needed if terminals are not added to the standard terminals. After that add as few rules as possible in each round of development/testing until the grammar is complete. Testing the validator Earlier we used the ValidationTestHelper class to test that it was possible to parse without errors. Of course, we also need to test that errors and warnings are detected. In particular, we should test any error situation handled by our own validator. The ValidationTestHelper class contains utility methods (besides assertNoErrors) that allow us to test whether the expected errors are correctly issued. For instance, for our Entities DSL, we wrote a custom validator method that checks that the entity hierarchy is acyclic. Thus, we should write a test that, given an input program with a cycle in the hierarchy, checks that such an error is indeed raised during validation. Although not strictly required, it is better to separate Junit test classes according to the tested features, thus, we write another Junit class, EntitiesValidatorTest, which contains tests related to validation. The start of this new Junit test class should look familiar: @RunWith(typeof(XtextRunner))@InjectWith(typeof(EntitiesInjectorProvider))class EntitiesValidatorTest { @Inject extension ParseHelper<Model> @Inject extension ValidationTestHelper ... We are now going to use the assertError method from ValidationTestHelper, which, besides the EMF model element to validate, requires the following arguments: EClass of the object which contains the error (which is usually retrieved through the EMF EPackage class generated when running the MWE2 workflow) The expected Issue Code An optional string describing the expected error message Thus, we parse input containing an entity extending itself and we pass the arguments to assertError according to the error generated by checkNoCycleInEntityHierarchy in EntitiesValidator: @Testdef void testEntityExtendsItself() { ''' entity MyEntity extends MyEntity { } '''.parse.assertError(EntitiesPackage::eINSTANCE.entity, EntitiesValidator::HIERARCHY_CYCLE, "cycle in hierarchy of entity 'MyEntity'" )} Note that the EObject argument is the one returned by the parse method (we use assertError as an extension method). Since the error concerns an Entity object, we specify the corresponding EClass (retrieved using EntitiesPackage), the expected Issue Code, and finally, the expected error message. This test should pass. 
We can now write another test which tests the same validation error on a more complex input with a cycle in the hierarchy involving more than one entity; in this test we make sure that our validator issues an error for each of the entities involved in the hierarchy cycle: @Testdef void testCycleInEntityHierarchy() { val model = ''' entity A extends B {} entity B extends C {} entity C extends A {} '''.parse model.assertError(EntitiesPackage::eINSTANCE.entity, EntitiesValidator::HIERARCHY_CYCLE, "cycle in hierarchy of entity 'A'" ) model.assertError(EntitiesPackage::eINSTANCE.entity, EntitiesValidator::HIERARCHY_CYCLE, "cycle in hierarchy of entity 'B'" ) model.assertError(EntitiesPackage::eINSTANCE.entity, EntitiesValidator::HIERARCHY_CYCLE, "cycle in hierarchy of entity 'C'" )} Note that this time we must store the parsed EMF model into a variable since we will call assertError many times. We can also test that the NamesAreUniqueValidator method detects elements with the same name: @Testdef void testDuplicateEntities() { val model = ''' entity MyEntity {} entity MyEntity {} '''.parse model.assertError(EntitiesPackage::eINSTANCE.entity, null, "Duplicate Entity 'MyEntity'" )} In this case, we pass null for the issue argument, since no Issue Code is reported by NamesAreUniqueValidator. Similarly, we can write a test where the input has two attributes with the same name: @Testdef void testDuplicateAttributes() { val model = ''' entity MyEntity { MyEntity attribute; MyEntity attribute; } '''.parse model.assertError(EntitiesPackage::eINSTANCE.attribute, null, "Duplicate Attribute 'attribute'" )} Note that in this test we pass the EClass corresponding to Attribute, since duplicate attributes are involved in the expected error. Do not worry if it seems tricky to get the arguments for assertError right the first time; writing a test that fails the first time it is executed is expected in Test Driven Development. The error of the failing test should put you on the right track to specify the arguments correctly. However, by inspecting the error of the failing test, you must first make sure that the actual output is what you expected, otherwise something is wrong either with your test or with the implementation of the component that you are testing. Testing the formatter As we said in the previously, the formatter is also used in a non-UI environment (indeed, we implemented that in the runtime plug-in project), thus, we can test the formatter for our DSL with plain Junit tests. At the moment, there is no helper class in the Xtext framework for testing the formatter, thus we need to do some additional work to set up the tests for the formatter. This example will also provide some more details on Xtext and EMF, and it will introduce unit test methodologies that are useful in many testing scenarios where you need to test whether a string output is as you expect. First of all, we create another Junit test class for testing the formatter; this time we do not need the helper for the validator; we will inject INodeModelFormatter as an extension field since this is the class internally used by Xtext to perform formatting. One of the main principles of unit testing (which is also its main strength) is that you should test a single functionality in isolation. 
Thus, to test the formatter, we must not run a UI test that opens an Xtext editor on an input file and call the menu item which performs the formatting; we just need to test the class to which the formatting is delegated and we do not need a running Eclipse for that. import static extension org.junit.Assert.*@RunWith(typeof(XtextRunner))@InjectWith(typeof(EntitiesInjectorProvider))class EntitiesFormatterTest { @Inject extension ParseHelper<Model> @Inject extension INodeModelFormatter; Note that we import all the static methods of the Junit Assert class as extension methods. Then, we write the code that actually performs the formatting given an input string. Since we will write several tests for formatting, we isolate such code in a reusable method. This method is not annotated with @Test, thus it will not be automatically executed by Junit as a test method. This is the Xtend code that returns the formatted version of the input string: (input.parse.eResource as XtextResource).parseResult. rootNode.format(0, input.length).formattedText The method ParseHelper.parse returns the EMF model object, and each EObject has a reference to the containing EMF resource; we know that this is actually XtextResource (a specialized version of an EMF resource). We retrieve the result of parsing, that is, an IParseResult object, from the resource. The result of parsing contains the node model; recall from, that the node model carries the syntactical information that is, offsets and spaces of the textual input. The root of the node model, ICompositeNode, can be passed to the formatter to get the formatted version (we can even specify to format only a part of the input program). Now we can write a reusable method that takes an input char sequence and an expected char sequence and tests that the formatted version of the input program is equal to what we expect: def void assertFormattedAs(CharSequence input, CharSequence expected) { expected.toString.assertEquals( (input.parse.eResource as XtextResource).parseResult. rootNode.format(0, input.length).formattedText)} The reason why we convert the expected char sequence into a string will be clear in a minute. Note the use of Assert.assertEquals as an extension method. We can now write our first formatting test using our extension method assertFormattedAs: @Testdef void testEntities() { ''' entity E1 { } entity E2 {} '''.assertFormattedAs( '''...''' )} Why did we specify "…" as the expected formatted output? Why did we not try to specify what we really expect as the formatted output? Well, we could have written the expected output, and probably we would have gotten it right on the first try, but why not simply make the test fail and see the actual output? We can then copy that in our test once we are convinced that it is correct. So let's run the test, and when it fails, the Junit view tells us what the actual result is, as shown in the following screenshot: If you now double-click on the line showing the comparison failure in the Junit view, you will get a dialog showing a line by line comparison, as shown in the following screenshot: You can verify that the actual output is correct, copy that, and paste it into your test as the expected output. The test will now succeed: @Testdef void testEntities() { ''' entity E1 { } entity E2 {} '''.assertFormattedAs('''entity E1 {}entity E2 {}''' )} We did not indent the expected output in the multi-line string since it is easy to paste it like that from the Junit dialog. 
Using this technique you can easily write Junit tests that deal with comparisons. However, the "Result Comparison" dialog appears only if you pass String objects to assertEquals; that is why we converted the char sequence into a string in the implementation of assertFormattedAs. We now add a test for testing the formatting of attributes; the final result will be: @Testdef void testAttributes() { ''' entity E1 { int i ; string s; boolean b ;} '''.assertFormattedAs(''' entity E1 { int i; string s; boolean b; }''' )} Summary In this article we introduced unit testing for languages implemented with Xtext. Being able to test most of the DSL aspects without having to start an Eclipse environment really speeds up development.Test Driven Development is an important programming methodology that helps you make your implementations more modular, more reliable, and resilient to changes of the libraries used by your code. Resources for Article: Further resources on this subject: Making Money with Your Game [Article] Getting started with Kinect for Windows SDK Programming [Article] Installing Alfresco Software Development Kit (SDK) [Article]

Understanding Business Activity Monitoring in Oracle SOA Suite

Packt
28 Oct 2009
14 min read
How BAM differs from traditional business intelligence The Oracle SOA Suite stores the state of all processes in a database in documented schemas so why do we need yet another reporting tool to provide insight into our processes and services? In other words how does BAM differ from traditional BI (Business Intelligence)? In traditional BI, reports are generated and delivered either on a scheduled basis or in response to a user request. Any changes to the information will not be reflected until the next scheduled run or until a user requests the report to be rerun. BAM is an event-driven reporting tool that generates alerts and reports in real time, based on a continuously changing data stream, some of whose data may not be in the database. As events occur in the Services and Processes, the business has defined they are captured by BAM and reports and views are updated in real time. Where necessary these updated reports are delivered to users. This delivery to users can take several forms. The best known is the dashboard on users' desktops that will automatically update without any need for the user to refresh the screen. There are also other means to deliver reports to the end user, including sending them via a text message or an email. Traditional reporting tools such as Oracle Reports and Oracle Discoverer as well as Oracles latest Business Intelligence Suite can be used to provide some real-time reporting needs but they do not provide the event driven reporting that gives the business a continuously updating view of the current business situation. Event Driven Architecture Event Driven Architecture (EDA) is about building business solutions around responsiveness to events. Events may be simple triggers such as a stock out event or they may be more complex triggers such as the calculations to realize that a stock out will occur in three days. An Event Driven Architecture will often take a number of simple events and then combine them through a complex event processing sequence to generate complex events that could not have been raised without aggregation of several simpler events. Oracle BAM scenarios Oracle Business Activity Monitoring is typically used to monitor two distinct types of real-time data. Firstly it may be used to monitor the overall state of processes in the business. For example it may be used to track how many auctions are currently running, how many have bids on them, and how many have completed in the last 24 hours (or other time periods). Secondly it may be used to track in real-time Key Performance Indicators or KPIS. For example it may be used to provide a real-time updating dashboard to a seller to show the current total value of all the sellers' auctions and to track this against an expected target. In the first case, we are interested in how business processes are progressing and are using BAM to identify bottlenecks and failure points within those processes. Bottlenecks can be identified by too much time being spent on given steps in the process. BAM allows us to compute the time taken between two points in a process, such as the time between order placement and shipping, and provide real-time feedback on those times. Similarly BAM can be used to track the percentage drop-out rate between steps in a sales process, allowing the business to take appropriate action. In the second case, our interest is on some aggregate number, such as our total liabilities should we win all the auctions we are bidding on. 
This requires us to aggregate results from many events, possibly performing some kind of calculation on them to provide us with a single KPI that gives an indication to the business of how things are going. BAM allows us to continuously update this number in real on a dashboard without the need for continued polling. It also allows us to trigger alerts, perhaps through email or SMS, to notify an individual, when a threshold is breached. In both cases reports delivered can be customized based on the individual receiving the report. BAM architecture It may seem odd to have a section on architecture in the middle of a article about how to effectively use BAM, but key to successful utilization of BAM is an understanding of how the different tiers relate to each other. Logical view The following diagram represents a logical view of how BAM operates. Events are acquired from one or more sources through event acquisition and then normalized, correlated, and stored in event storage (generally a memory area in BAM that is backed up to disc). The report cache generates reports based on events in storage and then delivers those reports, together with real-time updates through the report delivery layer. Event processing is also performed on events in storage, and when defined conditions are met, alerts will be delivered through the alert delivery service. Physical view To better understand the physical view of the architecture of BAM, we have divided this section into four parts. Let us discuss these in detail. Capture This logical view maps onto the physical BAM components shown in the following diagram. Data acquisition in the SOA Suite is handled by sensors in BPEL and ESB. BAM can also receive events from JMS message queues and access data in databases (useful for historical comparison). For complex data formats or for other data sources then Oracle Data Integrator (ODI is a separate product to the SOA Suite) is recommended by Oracle. Although potentially less efficient and more work than running ODI, it is also possible to use adapters to acquire data from multiple sources and feed it into BAM through ESB or BPEL. At the data capture level we need to think of the data items that we can provide to feed the reports and alerts that we desire to generate. We must consider the sources of that data and the best way to load it into BAM. Store Once the data is captured, it is then stored in a normalized form in the Active Data Cache (ADC). This storage facility has the ability to do simple correlation based on fields within the data, and multiple data items received from the acquisition layer may update just a single object in the data cache. For example the state of a given BPEL process instance may be represented by a single object in the ADC and all updates to that process state will just update that single data item rather than creating multiple data items. Process Reports are run based on user demand. Once a report is run it will update the user's screen on a real time basis. Where multiple users are accessing the same report only one instance of the report is maintained by the report server. As events are captured and stored in real time the report engine will continuously monitor them for any changes that need to be made to those reports which are currently active. When changes are detected that impact active reports, then the appropriate report will be updated in memory and the updates sent to the user screen. 
In addition to the event processing required to correctly insert and update items in the ADC, there is also a requirement to monitor items in the ADC for events that require some sort of action to be taken. This is the job of the event processor. This will monitor data in the ADC to see if registered thresholds on values have been exceeded or if certain time-outs have expired. The event processor will often need to perform calculations across multiple data items to do this. Deliver Delivery of reports takes place in two ways. First, users request reports to be delivered to their desktop by selecting views within BAM. These reports are delivered as HTML pages within a browser and are updated whenever the underlying data used in the report changes. The second approach is that reports are sent out as a result of events being triggered by the Event Processing Engine. In the latter case, the report may be delivered by email, SMS, or voice messaging using the notifications service. A final option available for these event generated reports is to invoke a web service to take some sort of automated action. Closing the loop While monitoring what is happening is all very laudable, it is only of benefit if we actually do something about what we are monitoring. BAM provides the real-time monitoring ability very well but it also provides the facility to invoke other services to respond to undesirable events such as stock outs. The ability to invoke external services is crucial to the concept of a closed loop control environment where as a result of monitoring we are able to reach back into the processes and either alter their execution or start new ones. For example when a stock out or low stock event is raised then the message centre could invoke a web service requesting a supplier to send more stock to replenish inventory. Placing this kind of feedback mechanism in BAM allows us to trigger events across multiple applications and locations in a way that may not be possible within a single application or process. For example, in response to a stock out, instead of requesting our supplier to provide more stock, we may be monitoring stock levels in independent systems and, based on stock levels elsewhere, may redirect stock from one location to another. BAM platform anomaly In 10g SOA Suite, BAM runs only as a Windows application. Unlike the rest of SOA Suite, it does not run on a JEE Application Server and it can only run on the Windows platform. In the next release, 11g, BAM will be provided as a JEE application that can run on a number of application servers and operating systems. User interface Development in Oracle BAM is done through a web-based user interface. This user interface gives access to four different applications that allow you to interact with different parts of BAM. These are: Active Viewer for giving access to reports; this relates to the deliver stage for user requested reports. Active Studio for building reports; this relates to the 'process' stage for creating reports. Architect for setting up both inbound and outbound events. Data elements are defined here as data sources. Alerts are also configured here. This covers setting up, acquire and store stages as well as the deliver stage for alerts. Administrator for managing users and roles as well as defining the types of message sources. We will not examine the applications individually but will take a task-focused look at how to use them as part of providing some specific reports. 
Monitoring process state

Now that we have examined how BAM is constructed, let us use this knowledge to construct some simple dashboards that track the state of a business process. We will instrument a simple version of an auction process. The process is shown in the following figure: an auction is started and then bids are placed until the time runs out, at which point the auction is completed. This is modelled in BPEL.

This process has three distinct states:

Started
Bid received
Completed

We are interested in the number of auctions in each state as well as the total value of auctions in progress. One needs to follow these steps to build the dashboard:

Define our data within the Active Data Cache
Create sensors in BPEL and map them to data in the ADC
Create suitable reports
Run the reports

Defining data objects

Data in BAM is stored in data objects. Individual data objects contain the information that is reported in BAM dashboards and may be updated by multiple events. Generally, BAM will report against aggregations of objects, but there is also the ability for reports to drill down into individual data objects.

Before defining our data objects, let's group them into an Auction folder so they are easy to find. To do this we use the BAM Architect application and select Data Objects, which gives us the following screen. We select Create subfolder to create the folder and give it the name Auction. We then select Create folder to actually create the folder, and we get a confirmation message telling us that the folder was created. Notice that, once created, the folder also appears in the Folders window on the left-hand side of the screen.

Now that we have our folder, we can create a data object. Again we select Data Objects from the drop-down menu. To define the data objects that are to be stored in our Active Data Cache, we open the Auction folder if it is not already open and select Create Data Object. If we don't select the Auction folder, then we can pick it later when filling in the details of the data object. We need to give our object a unique name within the folder and optionally provide it with a tip text that helps explain what the object does when the mouse is moved over it in object listings.

Having named our object, we can now create the data fields by selecting Add a field. When adding fields we need to provide a name and type, as well as indicating if they must contain data; the default, Nullable, does not require a field to be populated. We may also optionally indicate if a field should be public ("available for display") and what, if any, tooltip text it should have. Once all the data fields have been defined, we can click Create Data Object to actually create the object as we have defined it. We are then presented with a confirmation screen that the object has been created.

Grouping data into hierarchies

When creating a data object, it is possible to specify Dimensions for the object. A dimension is based on one or more fields within the object, and a given field can only participate in one dimension. This gives the ability to group the object by the fields in the given dimension. If multiple fields are selected for a single dimension, then they can be layered into a hierarchy, for example to allow analysis by country, region, and city. In this case all three elements would be selected into a single dimension, perhaps called geography. Within geography, a hierarchy could be set up with country at the top, region next, and finally city at the bottom, allowing drill-down to occur in views (a small illustrative sketch of this kind of grouping follows).
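As a purely illustrative aside (plain Python, not BAM), grouping by the fields of such a geography dimension and drilling down through its hierarchy amounts to aggregating by progressively more levels; the sample records and the rollup helper below are invented for the example.

# Illustrative sketch only: drilling down a country -> region -> city
# hierarchy over some hypothetical sales records.
from collections import defaultdict

records = [
    {"country": "UK", "region": "South East", "city": "London", "amount": 10},
    {"country": "UK", "region": "South East", "city": "Brighton", "amount": 5},
    {"country": "US", "region": "West", "city": "Seattle", "amount": 7},
]

def rollup(rows, *levels):
    # Aggregate the amount field by the chosen hierarchy levels.
    totals = defaultdict(int)
    for row in rows:
        key = tuple(row[level] for level in levels)
        totals[key] += row["amount"]
    return dict(totals)

print(rollup(records, "country"))                    # top of the hierarchy
print(rollup(records, "country", "region", "city"))  # fully drilled down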
Just as a data object can have multiple dimensions, a dimension can also have multiple hierarchies.

A digression on populating data object fields

In the previous discussion, we mentioned the Nullable attribute that can be attached to fields. This is very important as we do not expect to populate all or even most of the fields in a data object at one moment in time. Do not confuse data objects with the low level events that are used to populate them. Data objects in BAM do not have a one-to-one correspondence with the low level events that populate them. In our auction example there will be just one auction object for every auction. However, there will be at least two and usually more messages for every auction: one message for the auction starting, another for the auction completing, and additional messages for each bid received. These messages will all populate, or in some cases overwrite, different parts of the auction data object. The table shows how the three messages populate different parts of the data object.

Message          | Auction ID | State    | Highest bid | Reserve  | Expires  | Seller   | Highest bidder
Auction Started  | Inserted   | Inserted | Inserted    | Inserted | Inserted | Inserted |
Bid Received     |            | Updated  | Updated     |          |          |          | Updated
Auction Finished |            | Updated  |             |          |          |          |

Summary

In this article we have explored how Business Activity Monitoring differs from and is complementary to more traditional Business Intelligence solutions such as Oracle Reports and Business Objects. We have explored how BAM can allow the business to monitor the state of business targets and Key Performance Indicators, such as the current most popular products in a retail environment or the current time taken to serve customers in a service environment.
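Reading the table as code may help: the sketch below (our own plain-Python illustration, not BAM) shows one auction object whose Nullable fields start empty and are then inserted or overwritten by the three message types. The field names, message handlers, and sample values are all hypothetical.

# Illustrative sketch only: three message types touching different fields
# of the single auction data object; unset fields stay None, mirroring
# the Nullable fields in the BAM data object.
auction = {"auction_id": None, "state": None, "highest_bid": None,
           "reserve": None, "expires": None, "seller": None,
           "highest_bidder": None}

def auction_started(obj, auction_id, opening_bid, reserve, expires, seller):
    obj.update(auction_id=auction_id, state="Started", highest_bid=opening_bid,
               reserve=reserve, expires=expires, seller=seller)

def bid_received(obj, amount, bidder):
    obj.update(state="Bid received", highest_bid=amount, highest_bidder=bidder)

def auction_finished(obj):
    obj.update(state="Completed")

auction_started(auction, "A-1", opening_bid=10, reserve=50,
                expires="2011-06-01", seller="bob")
bid_received(auction, amount=75, bidder="alice")
auction_finished(auction)
print(auction)  # one data object, updated by three different message types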

Decoupling Units with unittest.mock

Packt
24 Nov 2014
27 min read
In this article by Daniel Arbuckle, author of the book Learning Python Testing, you'll learn how by using the unittest.mock package, you can easily perform the following: Replace functions and objects in your own code or in external packages. Control how replacement objects behave. You can control what return values they provide, whether they raise an exception, even whether they make any calls to other functions, or create instances of other objects. Check whether the replacement objects were used as you expected: whether functions or methods were called the correct number of times, whether the calls occurred in the correct order, and whether the passed parameters were correct. (For more resources related to this topic, see here.) Mock objects in general All right, before we get down to the nuts and bolts of unittest.mock, let's spend a few moments talking about mock objects overall. Broadly speaking, mock objects are any objects that you can use as substitutes in your test code, to keep your tests from overlapping and your tested code from infiltrating the wrong tests. However, like most things in programming, the idea works better when it has been formalized into a well-designed library that you can call on when you need it. There are many such libraries available for most programming languages. Over time, the authors of mock object libraries have developed two major design patterns for mock objects: in one pattern, you can create a mock object and perform all of the expected operations on it. The object records these operations, and then you put the object into playback mode and pass it to your code. If your code fails to duplicate the expected operations, the mock object reports a failure. In the second pattern, you can create a mock object, do the minimal necessary configuration to allow it to mimic the real object it replaces, and pass it to your code. It records how the code uses it, and then you can perform assertions after the fact to check whether your code used the object as expected. The second pattern is slightly more capable in terms of the tests that you can write using it but, overall, either pattern works well. Mock objects according to unittest.mock Python has several mock object libraries; as of Python 3.3, however, one of them has been crowned as a member of the standard library. Naturally that's the one we're going to focus on. That library is, of course, unittest.mock. The unittest.mock library is of the second sort, a record-actual-use-and-then-assert library. The library contains several different kinds of mock objects that, between them, let you mock almost anything that exists in Python. Additionally, the library contains several useful helpers that simplify assorted tasks related to mock objects, such as temporarily replacing real objects with mocks. Standard mock objects The basic element of unittest.mock is the unittest.mock.Mock class. Even without being configured at all, Mock instances can do a pretty good job of pretending to be some other object, method, or function. There are many mock object libraries for Python; so, strictly speaking, the phrase "mock object" could mean any object that was created by any of these libraries. Mock objects can pull off this impersonation because of a clever, somewhat recursive trick. When you access an unknown attribute of a mock object, instead of raising an AttributeError exception, the mock object creates a child mock object and returns that. 
Since mock objects are pretty good at impersonating other objects, returning a mock object instead of the real value works at least in the common case. Similarly, mock objects are callable; when you call a mock object as a function or method, it records the parameters of the call and then, by default, returns a child mock object. A child mock object is a mock object in its own right, but it knows that it's connected to the mock object it came from—its parent. Anything you do to the child is also recorded in the parent's memory. When the time comes to check whether the mock objects were used correctly, you can use the parent object to check on all of its descendants. Example: Playing with mock objects in the interactive shell (try it for yourself!): $ python3.4 Python 3.4.0 (default, Apr 2 2014, 08:10:08) [GCC 4.8.2] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from unittest.mock import Mock, call >>> mock = Mock() >>> mock.x <Mock name='mock.x' id='140145643647832'> >>> mock.x <Mock name='mock.x' id='140145643647832'> >>> mock.x('Foo', 3, 14) <Mock name='mock.x()' id='140145643690640'> >>> mock.x('Foo', 3, 14) <Mock name='mock.x()' id='140145643690640'> >>> mock.x('Foo', 99, 12) <Mock name='mock.x()' id='140145643690640'> >>> mock.y(mock.x('Foo', 1, 1)) <Mock name='mock.y()' id='140145643534320'> >>> mock.method_calls [call.x('Foo', 3, 14), call.x('Foo', 3, 14), call.x('Foo', 99, 12), call.x('Foo', 1, 1), call.y(<Mock name='mock.x()' id='140145643690640'>)] >>> mock.assert_has_calls([call.x('Foo', 1, 1)]) >>> mock.assert_has_calls([call.x('Foo', 1, 1), call.x('Foo', 99, 12)]) Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib64/python3.4/unittest/mock.py", line 792, in assert_has_ calls ) from cause AssertionError: Calls not found. Expected: [call.x('Foo', 1, 1), call.x('Foo', 99, 12)] Actual: [call.x('Foo', 3, 14), call.x('Foo', 3, 14), call.x('Foo', 99, 12), call.x('Foo', 1, 1), call.y(<Mock name='mock.x()' id='140145643690640'>)] >>> mock.assert_has_calls([call.x('Foo', 1, 1), ... call.x('Foo', 99, 12)], any_order = True) >>> mock.assert_has_calls([call.y(mock.x.return_value)]) There are several important things demonstrated in this interactive session. First, notice that the same mock object was returned each time that we accessed mock.x. This always holds true: if you access the same attribute of a mock object, you'll get the same mock object back as the result. The next thing to notice might seem more surprising. Whenever you call a mock object, you get the same mock object back as the return value. The returned mock isn't made new for every call, nor is it unique for each combination of parameters. We'll see how to override the return value shortly but, by default, you get the same mock object back every time you call a mock object. This mock object can be accessed using the return_value attribute name, as you might have noticed from the last statement of the example. The unittest.mock package contains a call object that helps to make it easier to check whether the correct calls have been made. The call object is callable, and takes note of its parameters in a way similar to mock objects, making it easy to compare it to a mock object's call history. However, the call object really shines when you have to check for calls to descendant mock objects. 
As you can see in the previous example, while call('Foo', 1, 1) will match a call to the parent mock object, if the call used these parameters, call.x('Foo', 1, 1), it matches a call to the child mock object named x. You can build up a long chain of lookups and invocations. For example: >>> mock.z.hello(23).stuff.howdy('a', 'b', 'c') <Mock name='mock.z.hello().stuff.howdy()' id='140145643535328'> >>> mock.assert_has_calls([ ... call.z.hello().stuff.howdy('a', 'b', 'c') ... ]) >>> Notice that the original invocation included hello(23), but the call specification wrote it simply as hello(). Each call specification is only concerned with the parameters of the object that was finally called after all of the lookups. The parameters of intermediate calls are not considered. That's okay because they always produce the same return value anyway unless you've overridden that behavior, in which case they probably don't produce a mock object at all. You might not have encountered an assertion before. Assertions have one job, and one job only: they raise an exception if something is not as expected. The assert_has_calls method, in particular, raises an exception if the mock object's history does not include the specified calls. In our example, the call history matches, so the assertion method doesn't do anything visible. You can check whether the intermediate calls were made with the correct parameters, though, because the mock object recorded a call immediately to mock.z.hello(23) before it recorded a call to mock.z.hello().stuff.howdy('a', 'b', 'c'): >>> mock.mock_calls.index(call.z.hello(23)) 6 >>> mock.mock_calls.index(call.z.hello().stuff.howdy('a', 'b', 'c')) 7 This also points out the mock_calls attribute that all mock objects carry. If the various assertion functions don't quite do the trick for you, you can always write your own functions that inspect the mock_calls list and check whether things are or are not as they should be. We'll discuss the mock object assertion methods shortly. Non-mock attributes What if you want a mock object to give back something other than a child mock object when you look up an attribute? It's easy; just assign a value to that attribute: >>> mock.q = 5 >>> mock.q 5 There's one other common case where mock objects' default behavior is wrong: what if accessing a particular attribute is supposed to raise an AttributeError? Fortunately, that's easy too: >>> del mock.w >>> mock.w Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib64/python3.4/unittest/mock.py", line 563, in __getattr__ raise AttributeError(name) AttributeError: w Non-mock return values and raising exceptions Sometimes, actually fairly often, you'll want mock objects posing as functions or methods to return a specific value, or a series of specific values, rather than returning another mock object. 
To make a mock object always return the same value, just change the return_value attribute: >>> mock.o.return_value = 'Hi' >>> mock.o() 'Hi' >>> mock.o('Howdy') 'Hi' If you want the mock object to return different value each time it's called, you need to assign an iterable of return values to the side_effect attribute instead, as follows: >>> mock.p.side_effect = [1, 2, 3] >>> mock.p() 1 >>> mock.p() 2 >>> mock.p() 3 >>> mock.p() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib64/python3.4/unittest/mock.py", line 885, in __call__ return _mock_self._mock_call(*args, **kwargs) File "/usr/lib64/python3.4/unittest/mock.py", line 944, in _mock_call result = next(effect) StopIteration If you don't want your mock object to raise a StopIteration exception, you need to make sure to give it enough return values for all of the invocations in your test. If you don't know how many times it will be invoked, an infinite iterator such as itertools.count might be what you need. This is easily done: >>> mock.p.side_effect = itertools.count() If you want your mock to raise an exception instead of returning a value, just assign the exception object to side_effect, or put it into the iterable that you assign to side_effect: >>> mock.e.side_effect = [1, ValueError('x')] >>> mock.e() 1 >>> mock.e() Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/usr/lib64/python3.4/unittest/mock.py", line 885, in __call__ return _mock_self._mock_call(*args, **kwargs) File "/usr/lib64/python3.4/unittest/mock.py", line 946, in _mock_call raise result ValueError: x The side_effect attribute has another use, as well that we'll talk about. Mocking class or function details Sometimes, the generic behavior of mock objects isn't a close enough emulation of the object being replaced. This is particularly the case when it's important that they raise exceptions when used improperly, since mock objects are usually happy to accept any usage. The unittest.mock package addresses this problem using a technique called speccing. If you pass an object into unittest.mock.create_autospec, the returned value will be a mock object, but it will do its best to pretend that it's the same object you passed into create_autospec. This means that it will: Raise an AttributeError if you attempt to access an attribute that the original object doesn't have, unless you first explicitly assign a value to that attribute Raise a TypeError if you attempt to call the mock object when the original object wasn't callable Raise a TypeError if you pass the wrong number of parameters or pass a keyword parameter that isn't viable if the original object was callable Trick isinstance into thinking that the mock object is of the original object's type Mock objects made by create_autospec share this trait with all of their children as well, which is usually what you want. If you really just want a specific mock to be specced, while its children are not, you can pass the template object into the Mock constructor using the spec keyword. 
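As a brief, hedged illustration of the spec keyword just mentioned (the Fraction template is an arbitrary choice of ours), only the mock itself is constrained to the template's attributes and type, while its children remain ordinary mocks:

from fractions import Fraction
from unittest.mock import Mock

template = Fraction(1, 2)
specced = Mock(spec=template)

print(isinstance(specced, Fraction))   # True: the mock claims the template's type
specced.numerator                      # allowed: Fraction instances have this attribute
try:
    specced.no_such_attribute          # not on Fraction, so this raises
except AttributeError as error:
    print("AttributeError:", error)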
Here's a short demonstration of using create_autospec: >>> from unittest.mock import create_autospec >>> x = Exception('Bad', 'Wolf') >>> y = create_autospec(x) >>> isinstance(y, Exception) True >>> y <NonCallableMagicMock spec='Exception' id='140440961099088'> Mocking function or method side effects Sometimes, for a mock object to successfully take the place of a function or method means that the mock object has to actually perform calls to other functions, or set variable values, or generally do whatever a function can do. This need is less common than you might think, and it's also somewhat dangerous for testing purposes because, when your mock objects can execute arbitrary code, there's a possibility that they stop being a simplifying tool for enforcing test isolation, and become a complex part of the problem instead. Having said that, there are still times when you need a mocked function to do something more complex than simply returning a value, and we can use the side_effect attribute of mock objects to achieve this. We've seen side_effect before, when we assigned an iterable of return values to it. If you assign a callable to side_effect, this callable will be called when the mock object is called and passed the same parameters. If the side_effect function raises an exception, this is what the mock object does as well; otherwise, the side_effect return value is returned by the mock object. In other words, if you assign a function to a mock object's side_effect attribute, this mock object in effect becomes that function with the only important difference being that the mock object still records the details of how it's used. The code in a side_effect function should be minimal, and should not try to actually do the job of the code the mock object is replacing. All it should do is perform any expected externally visible operations and then return the expected result.Mock object assertion methods As we saw in the Standard mock objects section, you can always write code that checks the mock_calls attribute of mock objects to see whether or not things are behaving as they should. However, there are some particularly common checks that have already been written for you, and are available as assertion methods of the mock objects themselves. As is normal for assertions, these assertion methods return None if they pass, and raise an AssertionError if they fail. The assert_called_with method accepts an arbitrary collection of arguments and keyword arguments, and raises an AssertionError unless these parameters were passed to the mock the last time it was called. The assert_called_once_with method behaves like assert_called_with, except that it also checks whether the mock was only called once and raises AssertionError if that is not true. The assert_any_call method accepts arbitrary arguments and keyword arguments, and raises an AssertionError if the mock object has never been called with these parameters. We've already seen the assert_has_calls method. This method accepts a list of call objects, checks whether they appear in the history in the same order, and raises an exception if they do not. Note that "in the same order" does not necessarily mean "next to each other." There can be other calls in between the listed calls as long as all of the listed calls appear in the proper sequence. This behavior changes if you assign a true value to the any_order argument. In that case, assert_has_calls doesn't care about the order of the calls, and only checks whether they all appear in the history. 
The assert_not_called method raises an exception if the mock has ever been called. Mocking containers and objects with a special behavior One thing the Mock class does not handle is the so-called magic methods that underlie Python's special syntactic constructions: __getitem__, __add__, and so on. If you need your mock objects to record and respond to magic methods—in other words, if you want them to pretend to be container objects such as dictionaries or lists, or respond to mathematical operators, or act as context managers or any of the other things where syntactic sugar translates it into a method call underneath—you're going to use unittest.mock.MagicMock to create your mock objects. There are a few magic methods that are not supported even by MagicMock, due to details of how they (and mock objects) work: __getattr__, __setattr__, __init__ , __new__, __prepare__, __instancecheck__, __subclasscheck__, and __del__. Here's a simple example in which we use MagicMock to create a mock object supporting the in operator: >>> from unittest.mock import MagicMock >>> mock = MagicMock() >>> 7 in mock False >>> mock.mock_calls [call.__contains__(7)] >>> mock.__contains__.return_value = True >>> 8 in mock True >>> mock.mock_calls [call.__contains__(7), call.__contains__(8)] Things work similarly with the other magic methods. For example, addition: >>> mock + 5 <MagicMock name='mock.__add__()' id='140017311217816'> >>> mock.mock_calls [call.__contains__(7), call.__contains__(8), call.__add__(5)] Notice that the return value of the addition is a mock object, a child of the original mock object, but the in operator returned a Boolean value. Python ensures that some magic methods return a value of a particular type, and will raise an exception if that requirement is not fulfilled. In these cases, MagicMock's implementations of the methods return a best-guess value of the proper type, instead of a child mock object. There's something you need to be careful of when it comes to the in-place mathematical operators, such as += (__iadd__) and |= (__ior__), and that is the fact that MagicMock handles them somewhat strangely. What it does is still useful, but it might well catch you by surprise: >>> mock += 10 >>> mock.mock_calls [] What was that? Did it erase our call history? Fortunately, no, it didn't. What it did was assign the child mock created by the addition operation to the variable called mock. That is entirely in accordance with how the in-place math operators are supposed to work. Unfortunately, it has still cost us our ability to access the call history, since we no longer have a variable pointing at the parent mock object. Make sure that you have the parent mock object set aside in a variable that won't be reassigned, if you're going to be checking in-place math operators. Also, you should make sure that your mocked in-place operators return the result of the operation, even if that just means return self.return_value, because otherwise Python will assign None to the left-hand variable. There's another detailed way in which in-place operators work that you should keep in mind: >>> mock = MagicMock() >>> x = mock >>> x += 5 >>> x <MagicMock name='mock.__iadd__()' id='139845830142216'> >>> x += 10 >>> x <MagicMock name='mock.__iadd__().__iadd__()' id='139845830154168'> >>> mock.mock_calls [call.__iadd__(5), call.__iadd__().__iadd__(10)] Because the result of the operation is assigned to the original variable, a series of in-place math operations builds up a chain of child mock objects. 
If you think about it, that's the right thing to do, but it is rarely what people expect at first. Mock objects for properties and descriptors There's another category of things that basic Mock objects don't do a good job of emulating: descriptors. Descriptors are objects that allow you to interfere with the normal variable access mechanism. The most commonly used descriptors are created by Python's property built-in function, which simply allows you to write functions to control getting, setting, and deleting a variable. To mock a property (or other descriptor), create a unittest.mock.PropertyMock instance and assign it to the property name. The only complication is that you can't assign a descriptor to an object instance; you have to assign it to the object's type because descriptors are looked up in the type without first checking the instance. That's not hard to do with mock objects, fortunately: >>> from unittest.mock import PropertyMock >>> mock = Mock() >>> prop = PropertyMock() >>> type(mock).p = prop >>> mock.p <MagicMock name='mock()' id='139845830215328'> >>> mock.mock_calls [] >>> prop.mock_calls [call()] >>> mock.p = 6 >>> prop.mock_calls [call(), call(6)] The thing to be mindful of here is that the property is not a child of the object named mock. Because of this, we have to keep it around in its own variable because otherwise we'd have no way of accessing its history. The PropertyMock objects record variable lookup as a call with no parameters, and variable assignment as a call with the new value as a parameter. You can use a PropertyMock object if you actually need to record variable accesses in your mock object history. Usually you don't need to do that, but the option exists. Even though you set a property by assigning it to an attribute of a type, you don't have to worry about having your PropertyMock objects bleed over into other tests. Each Mock you create has its own type object, even though they all claim to be of the same class: >>> type(Mock()) is type(Mock()) False Thanks to this feature, any changes that you make to a mock object's type object are unique to that specific mock object. Mocking file objects It's likely that you'll occasionally need to replace a file object with a mock object. The unittest.mock library helps you with this by providing mock_open, which is a factory for fake open functions. These functions have the same interface as the real open function, but they return a mock object that's been configured to pretend that it's an open file object. This sounds more complicated than it is. See for yourself: >>> from unittest.mock import mock_open >>> open = mock_open(read_data = 'moose') >>> with open('/fake/file/path.txt', 'r') as f: ... print(f.read()) ... moose If you pass a string value to the read_data parameter, the mock file object that eventually gets created will use that value as the data source when its read methods get called. As of Python 3.4.0, read_data only supports string objects, not bytes. If you don't pass read_data, read method calls will return an empty string. The problem with the previous code is that it makes the real open function inaccessible, and leaves a mock object lying around where other tests might stumble over it. Read on to see how to fix these problems. Replacing real code with mock objects The unittest.mock library gives a very nice tool for temporarily replacing objects with mock objects, and then undoing the change when our test is done. This tool is unittest.mock.patch. 
There are a lot of different ways in which that patch can be used: it works as a context manager, a function decorator, and a class decorator; additionally, it can create a mock object to use for the replacement or it can use the replacement object that you specify. There are a number of other optional parameters that can further adjust the behavior of the patch. Basic usage is easy: >>> from unittest.mock import patch, mock_open >>> with patch('builtins.open', mock_open(read_data = 'moose')) as mock: ... with open('/fake/file.txt', 'r') as f: ... print(f.read()) ... moose >>> open <built-in function open> As you can see, patch dropped the mock open function created by mock_open over the top of the real open function; then, when we left the context, it replaced the original for us automatically. The first parameter of patch is the only one that is required. It is a string describing the absolute path to the object to be replaced. The path can have any number of package and subpackage names, but it must include the module name and the name of the object inside the module that is being replaced. If the path is incorrect, patch will raise an ImportError, TypeError, or AttributeError, depending on what exactly is wrong with the path. If you don't want to worry about making a mock object to be the replacement, you can just leave that parameter off: >>> import io >>> with patch('io.BytesIO'): ... x = io.BytesIO(b'ascii data') ... io.BytesIO.mock_calls [call(b'ascii data')] The patch function creates a new MagicMock for you if you don't tell it what to use for the replacement object. This usually works pretty well, but you can pass the new parameter (also the second parameter, as we used it in the first example of this section) to specify that the replacement should be a particular object; or you can pass the new_callable parameter to make patch use the value of that parameter to create the replacement object. We can also force the patch to use create_autospec to make the replacement object, by passing autospec=True: >>> with patch('io.BytesIO', autospec = True): ... io.BytesIO.melvin Traceback (most recent call last): File "<stdin>", line 2, in <module> File "/usr/lib64/python3.4/unittest/mock.py", line 557, in __getattr__ raise AttributeError("Mock object has no attribute %r" % name) AttributeError: Mock object has no attribute 'melvin' The patch function will normally refuse to replace an object that does not exist; however, if you pass it create=True, it will happily drop a mock object wherever you like. Naturally, this is not compatible with autospec=True. The patch function covers the most common cases. There are a few related functions that handle less common but still useful cases. The patch.object function does the same thing as patch, except that, instead of taking the path string, it accepts an object and an attribute name as its first two parameters. Sometimes this is more convenient than figuring out the path to an object. Many objects don't even have valid paths (for example, objects that exist only in a function local scope), although the need to patch them is rarer than you might think. The patch.dict function temporarily drops one or more objects into a dictionary under specific keys. The first parameter is the target dictionary; the second is a dictionary from which to get the key and value pairs to put into the target. If you pass clear=True, the target will be emptied before the new values are inserted. Notice that patch.dict doesn't create the replacement values for you. 
You'll need to make your own mock objects, if you want them. Mock objects in action That was a lot of theory interspersed with unrealistic examples. Let's take a look at what we've learned and apply it for a more realistic view of how these tools can help us. Better PID tests The PID tests suffered mostly from having to do a lot of extra work to patch and unpatch time.time, and had some difficulty breaking the dependence on the constructor. Patching time.time Using patch, we can remove a lot of the repetitiveness of dealing with time.time; this means that it's less likely that we'll make a mistake somewhere, and saves us from spending time on something that's kind of boring and annoying. All of the tests can benefit from similar changes: >>> from unittest.mock import Mock, patch >>> with patch('time.time', Mock(side_effect = [1.0, 2.0, 3.0, 4.0, 5.0])): ... import pid ... controller = pid.PID(P = 0.5, I = 0.5, D = 0.5, setpoint = 0, ... initial = 12) ... assert controller.gains == (0.5, 0.5, 0.5) ... assert controller.setpoint == [0.0] ... assert controller.previous_time == 1.0 ... assert controller.previous_error == -12.0 ... assert controller.integrated_error == 0.0 Apart from using patch to handle time.time, this test has been changed. We can now use assert to check whether things are correct instead of having doctest compare the values directly. There's hardly any difference between the two approaches, except that we can place the assert statements inside the context managed by patch. Decoupling from the constructor Using mock objects, we can finally separate the tests for the PID methods from the constructor, so that mistakes in the constructor cannot affect the outcome: >>> with patch('time.time', Mock(side_effect = [2.0, 3.0, 4.0, 5.0])): ... pid = imp.reload(pid) ... mock = Mock() ... mock.gains = (0.5, 0.5, 0.5) ... mock.setpoint = [0.0] ... mock.previous_time = 1.0 ... mock.previous_error = -12.0 ... mock.integrated_error = 0.0 ... assert pid.PID.calculate_response(mock, 6) == -3.0 ... assert pid.PID.calculate_response(mock, 3) == -4.5 ... assert pid.PID.calculate_response(mock, -1.5) == -0.75 ... assert pid.PID.calculate_response(mock, -2.25) == -1.125 What we've done here is set up a mock object with the proper attributes, and pass it into calculate_response as the self-parameter. We could do this because we didn't create a PID instance at all. Instead, we looked up the method's function inside the class and called it directly, allowing us to pass whatever we wanted as the self-parameter instead of having Python's automatic mechanisms handle it. Never invoking the constructor means that we're immune to any errors it might contain, and guarantees that the object state is exactly what we expect here in our calculate_response test. Summary In this article, we've learned about a family of objects that specialize in impersonating other classes, objects, methods, and functions. We've seen how to configure these objects to handle corner cases where their default behavior isn't sufficient, and we've learned how to examine the activity logs that these mock objects keep, so that we can decide whether the objects are being used properly or not. Resources for Article: Further resources on this subject: Installing NumPy, SciPy, matplotlib, and IPython [Article] Machine Learning in IPython with scikit-learn [Article] Python 3: Designing a Tasklist Application [Article]

wxPython: Design Approaches and Techniques

Packt
09 Dec 2010
11 min read
wxPython 2.8 Application Development Cookbook Over 80 practical recipes for developing feature-rich applications using wxPython Develop flexible applications in wxPython. Create interface translatable applications that will run on Windows, Macintosh OSX, Linux, and other UNIX like environments. Learn basic and advanced user interface controls. Packed with practical, hands-on cookbook recipes and plenty of example code, illustrating the techniques to develop feature rich applications using wxPython.     Introduction Programming is all about patterns. There are patterns at every level, from the programming language itself, to the toolkit, to the application. Being able to discern and choose the optimal approach to use to solve the problem at hand can at times be a difficult task. The more patterns you know, the bigger your toolbox, and the easier it will become to be able to choose the right tool for the job. Different programming languages and toolkits often lend themselves to certain patterns and approaches to problem solving. The Python programming language and wxPython are no different, so let's jump in and take a look at how to apply some common design approaches and techniques to wxPython applications. Creating Singletons In object oriented programming, the Singleton pattern is a fairly simple concept of only allowing exactly one instance of a given object to exist at a given time. This means that it only allows for only one instance of the object to be in memory at any given time, so that all references to the object are shared throughout the application. Singletons are often used to maintain a global state in an application since all occurrences of one in an application reference the same exact instance of the object. Within the core wxPython library, there are a number of singleton objects, such as ArtProvider , ColourDatabase , and SystemSettings . This recipe shows how to make a singleton Dialog class, which can be useful for creating non-modal dialogs that should only have a single instance present at a given time, such as a settings dialog or a special tool window. How to do it... To get started, we will define a metaclass that can be reused on any class that needs to be turned into a singleton. We will get into more detail later in the How it works section. A metaclass is a class that creates a class. It is passed a class to it's __init__ and __call__ methods when someone tries to create an instance of the class. class Singleton(type): def __init__(cls, name, bases, dict): super(Singleton, cls).__init__(name, bases, dict) cls.instance = None def __call__(cls, *args, **kw): if not cls.instance: # Not created or has been Destroyed obj = super(Singleton, cls).__call__(*args, **kw) cls.instance = obj cls.instance.SetupWindow() return cls.instance Here we have an example of the use of our metaclass, which shows how easy it is to turn the following class into a singleton class by simply assigning the Singleton class as the __metaclass__ of SingletonDialog. The only other requirement is to define the SetupWindow method that the Singleton metaclass uses as an initialization hook to set up the window the first time an instance of the class is created. Note that in Python 3+ the __metaclass__ attribute has been replaced with a metaclass keyword argument in the class definition. 
class SingletonDialog(wx.Dialog): __metaclass__ = Singleton def SetupWindow(self): """Hook method for initializing window""" self.field = wx.TextCtrl(self) self.check = wx.CheckBox(self, label="Enable Foo") # Layout vsizer = wx.BoxSizer(wx.VERTICAL) label = wx.StaticText(self, label="FooBar") hsizer = wx.BoxSizer(wx.HORIZONTAL) hsizer.AddMany([(label, 0, wx.ALIGN_CENTER_VERTICAL), ((5, 5), 0), (self.field, 0, wx.EXPAND)]) btnsz = self.CreateButtonSizer(wx.OK) vsizer.AddMany([(hsizer, 0, wx.ALL|wx.EXPAND, 10), (self.check, 0, wx.ALL, 10), (btnsz, 0, wx.EXPAND|wx.ALL, 10)]) self.SetSizer(vsizer) self.SetInitialSize() How it works... There are a number of ways to implement a Singleton in Python. In this recipe, we used a metaclass to accomplish the task. This is a nicely contained and easily reusable pattern to accomplish this task. The Singleton class that we defined can be used by any class that has a SetupWindow method defined for it. So now that we have done it, let's take a quick look at how a singleton works. The Singleton metaclass dynamically creates and adds a class variable called instance to the passed in class. So just to get a picture of what is going on, the metaclass would generate the following code in our example: class SingletonDialog(wx.Dialog): instance = None Then the first time the metaclass's __call__ method is invoked, it will then assign the instance of the class object returned by the super class's __call__ method, which in this recipe is an instance of our SingletonDialog. So basically, it is the equivalent of the following: SingletonDialog.instance = SingletonDialog(*args,**kwargs) Any subsequent initializations will cause the previously-created one to be returned, instead of creating a new one since the class definition maintains the lifetime of the object and not an individual reference created in the user code. Our SingletonDialog class is a very simple Dialog that has TextCtrl, CheckBox, and Ok Button objects on it. Instead of invoking initialization in the dialog's __init__ method, we instead defined an interface method called SetupWindow that will be called by the Singleton metaclass when the object is initially created. In this method, we just perform a simple layout of our controls in the dialog. If you run the sample application that accompanies this topic, you can see that no matter how many times the show dialog button is clicked, it will only cause the existing instance of the dialog to be brought to the front. Also, if you make changes in the dialog's TextCtrl or CheckBox, and then close and reopen the dialog, the changes will be retained since the same instance of the dialog will be re-shown instead of creating a new one. Implementing an observer pattern The observer pattern is a design approach where objects can subscribe as observers of events that other objects are publishing. The publisher(s) of the events then just broadcasts the events to all of the subscribers. This allows the creation of an extensible, loosely-coupled framework of notifications, since the publisher(s) don't require any specific knowledge of the observers. The pubsub module provided by the wx.lib package provides an easy-to-use implementation of the observer pattern through a publisher/subscriber approach. Any arbitrary number of objects can subscribe their own callback methods to messages that the publishers will send to make their notifications. This recipe shows how to use the pubsub module to send configuration notifications in an application. How to do it... 
Here, we will create our application configuration object that stores runtime configuration variables for an application and provides a notification mechanism for whenever a value is added or modified in the configuration, through an interface that uses the observer pattern: import wx from wx.lib.pubsub import Publisher # PubSub message classification MSG_CONFIG_ROOT = ('config',) class Configuration(object): """Configuration object that provides notifications. """ def __init__(self): super(Configuration, self).__init__() # Attributes self._data = dict() def SetValue(self, key, value): self._data[key] = value # Notify all observers of config change Publisher.sendMessage(MSG_CONFIG_ROOT + (key,), value) def GetValue(self, key): """Get a value from the configuration""" return self._data.get(key, None) Now, we will create a very simple application to show how to subscribe observers to configuration changes in the Configuration class: class ObserverApp(wx.App): def OnInit(self): self.config = Configuration() self.frame = ObserverFrame(None, title="Observer Pattern") self.frame.Show() self.configdlg = ConfigDialog(self.frame, title="Config Dialog") self.configdlg.Show() return True def GetConfig(self): return self.config This dialog will have one configuration option on it to allow the user to change the applications font: class ConfigDialog(wx.Dialog): """Simple setting dialog""" def __init__(self, *args, **kwargs): super(ConfigDialog, self).__init__(*args, **kwargs) # Attributes self.panel = ConfigPanel(self) # Layout sizer = wx.BoxSizer(wx.VERTICAL) sizer.Add(self.panel, 1, wx.EXPAND) self.SetSizer(sizer) self.SetInitialSize((300, 300)) class ConfigPanel(wx.Panel): def __init__(self, parent): super(ConfigPanel, self).__init__(parent) # Attributes self.picker = wx.FontPickerCtrl(self) # Setup self.__DoLayout() # Event Handlers self.Bind(wx.EVT_FONTPICKER_CHANGED, self.OnFontPicker) def __DoLayout(self): vsizer = wx.BoxSizer(wx.VERTICAL) hsizer = wx.BoxSizer(wx.HORIZONTAL) vsizer.AddStretchSpacer() hsizer.AddStretchSpacer() hsizer.AddWindow(self.picker) hsizer.AddStretchSpacer() vsizer.Add(hsizer, 0, wx.EXPAND) vsizer.AddStretchSpacer() self.SetSizer(vsizer) Here, in the FontPicker's event handler, we get the newly-selected font and call SetValue on the Configuration object owned by the App object in order to change the configuration, which will then cause the ('config', 'font') message to be published: def OnFontPicker(self, event): """Event handler for the font picker control""" font = self.picker.GetSelectedFont() # Update the configuration config = wx.GetApp().GetConfig() config.SetValue('font', font) Now, here, we define the application's main window that will subscribe it's OnConfigMsg method as an observer of all ('config',) messages, so that it will be called whenever the configuration is modified: class ObserverFrame(wx.Frame): """Window that observes configuration messages""" def __init__(self, *args, **kwargs): super(ObserverFrame, self).__init__(*args, **kwargs) # Attributes self.txt = wx.TextCtrl(self, style=wx.TE_MULTILINE) self.txt.SetValue("Change the font in the config " "dialog and see it update here.") # Observer of configuration changes Publisher.subscribe(self.OnConfigMsg, MSG_CONFIG_ROOT) def __del__(self): # Unsubscribe when deleted Publisher.unsubscribe(self.OnConfigMsg) Here is the observer method that will be called when any message beginning with 'config' is sent by the pubsub Publisher. 
In this sample application, we just check for the ('config', 'font') message and update the font of the TextCtrl object to use the newly-configured one: def OnConfigMsg(self, msg): """Observer method for config change messages""" if msg.topic[-1] == 'font': # font has changed so update controls self.SetFont(msg.data) self.txt.SetFont(msg.data) if __name__ == '__main__': app = ObserverApp(False) app.MainLoop() How it works... This recipe shows a convenient way to manage an application's configuration by allowing the interested parts of an application to subscribe to updates when certain parts of the configuration are modified. Let's start with a quick walkthrough of how pubsub works. Pubsub messages use a tree structure to organize the categories of different messages. A message type can be defined either as a tuple ('root', 'child1', 'grandchild1') or as a dot-separated string ('root.child1.grandchild1'). Subscribing a callback to ('root',) will cause your callback method to be called for all messages that start with ('root',). This means that if a component publishes ('root', 'child1', 'grandchild1') or ('root', 'child1'), then any method that is subscribed to ('root',) will also be called Pubsub basically works by storing the mapping of message types to callbacks in static memory in the pubsub module. In Python, modules are only imported once any other part of your application that uses the pubsub module shares the same singleton Publisher object. In our recipe, the Configuration object is a simple object for storing data about the configuration of our application. Its SetValue method is the important part to look at. This is the method that will be called whenever a configuration change is made in the application. In turn, when this is called, it will send a pubsub message of ('config',) + (key,) that will allow any observers to subscribe to either the root item or more specific topics determined by the exact configuration item. Next, we have our simple ConfigDialog class. This is just a simple example that only has an option for configuring the application's font. When a change is made in the FontPickerCtrl in the ConfigPanel, the Configuration object will be retrieved from the App and will be updated to store the newly-selected Font. When this happens, the Configuration object will publish an update message to all subscribed observers. Our ObserverFrame is an observer of all ('config',) messages by subscribing its OnConfigMsg method to MSG_CONFIG_ROOT. OnConfigMsg will be called any time the Configuration object's SetValue method is called. The msg parameter of the callback will contain a Message object that has a topic and data attribute. The topic attribute will contain the tuple that represents the message that triggered the callback and the data attribute will contain any data that was associated with the topic by the publisher of the message. In the case of a ('config', 'font') message, our handler will update the Font of the Frame and its TextCtrl.
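As a standalone illustration of that topic-matching behaviour, here is a minimal sketch using the same Publisher calls as the recipe (it assumes wxPython's bundled wx.lib.pubsub is available; the 'colour' topic is our own addition). A callback subscribed to ('config',) receives every message whose topic starts with ('config',):

from wx.lib.pubsub import Publisher

def on_any_config(msg):
    # Subscribed to ('config',), so this fires for ('config', 'font'),
    # ('config', 'colour'), and any other child topic as well.
    print("config message:", msg.topic, msg.data)

Publisher.subscribe(on_any_config, ('config',))
Publisher.sendMessage(('config', 'font'), 'Courier New')
Publisher.sendMessage(('config', 'colour'), 'red')

This is the same mechanism the ObserverFrame relies on: it subscribes once to the root topic and then inspects msg.topic to decide which configuration item changed.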

NetBeans IDE 7: Building an EJB Application

Packt
01 Jun 2011
10 min read
  NetBeans IDE 7 Cookbook Over 70 highly focused practical recipes to maximize your output with NetBeans         Introduction Enterprise Java Beans (EJB) is a framework of server-side components that encapsulates business logic. These components adhere to strict specifications on how they should behave. This ensures that vendors who wish to implement EJB-compliant code must follow conventions, protocols, and classes ensuring portability. The EJB components are then deployed in EJB containers, also called application servers, which manage persistence, transactions, and security on behalf of the developer. If you wish to learn more about EJBs, visit http://jcp.org/en/jsr/detail?id=318 or https://www.packtpub.com/developer-guide-for-ejb3/book. For our EJB application to run, we will need the application servers. Application servers are responsible for implementing the EJB specifications and creating the perfect environment for our EJBs to run in. Some of the capabilities supported by EJB and enforced by Application Servers are: Remote access Transactions Security Scalability NetBeans 6.9, or higher, supports the new Java EE 6 platform, making it the only IDE so far to bring the full power of EJB 3.1 to a simple IDE interface for easy development. NetBeans makes it easy to develop an EJB application and deploy on different Application Servers without the need to over-configure and mess with different configuration files. It's as easy as a project node right-click. Creating EJB project In this recipe, we will see how to create an EJB project using the wizards provided by NetBeans. Getting ready It is required to have NetBeans with Java EE support installed to continue with this recipe. If this particular NetBeans version is not available in your machine, then you can download it from http://download.netbeans.org. There are two application servers in this installation package, Apache Tomcat or GlassFish, and either one can be chosen, but at least one is necessary. In this recipe, we will use the GlassFish version that comes together with NetBeans 7.0 installation package. How to do it... Lets create a new project by either clicking File and then New Project, or by pressing Ctrl+Shift+N. In the New Project window, in the categories side, choose Java Web and in Projects side, select WebApplication, then click Next. In Name and Location, under Project Name, enter EJBApplication. Tick the Use Dedicated Folder for Storing Libraries option box. Now either type the folder path or select one by clicking on browse. After choosing the folder, we can proceed by clicking Next. In Server and Settings, under Server, choose GlassFish Server 3.1. Tick Enable Contexts and Dependency Injection. Leave the other values with their default values and click Finish. The new project structure is created. How it works... NetBeans creates a complete file structure for our project. It automatically configures the compiler and test libraries and creates the GlassFish deployment descriptor. The deployment descriptor filename specific for the GlassFish web server is glassfish-web.xml.   Adding JPA support The Java Persistence API (JPA) is one of the frameworks that equips Java with object/relational mapping. Within JPA, a query language is provided that supports the developers abstracting the underlying database. 
With the release of JPA 2.0, there are many areas that were improved, such as: Domain Modeling EntityManager Query interfaces JPA query language and others We are not going to study the inner workings of JPA in this recipe. If you wish to know more about JPA, visit http://jcp.org/en/jsr/detail?id=317 or http://download.oracle.com/javaee/5/tutorial/doc/bnbqa.html. NetBeans provides very good support for enabling your application to quickly create entities annotated with JPA. In this recipe, we will see how to configure your application to use JPA. We will continue to expand the previously-created project. Getting ready We will use GlassFish Server in this recipe since it is the only server that supports Java EE 6 at the moment. We also need to have Java DB configured. GlassFish already includes a copy of Java DB in its installation folder. Another source of installed Java DB is the JDK installation directory. It is not necessary to build on top of the previous recipe, but it is imperative to have a database schema. Feel free to create your own entities by following the steps presented in this recipe. How to do it... Right-click on EJBApplication node and select New Entity Classes from Database.... In Database Tables: Under Data Source, select jdbc/sample and let the IDE initialize Java DB. When Available Tables is populated, select MANUFACTURER, click Add, and then click Next. In Entity Classes: leave all the fields with their default values and only in Package, enter entities and click Finish. How it works... NetBeans then imports and creates our Java class from the database schema, in our case the Manufacturer.java file placed under the entities package. Besides that, NetBeans makes it easy to import and start using the entity straightaway. Many of the most common queries, for example find by name, find by zip, and find all, are already built into the class itself. The JPA queries, which are akin to normal SQL queries, are defined in the entity class itself. Listed below are some of the queries defined in the entity class Manufacturer.java: @Entity @Table(name = "MANUFACTURER") @NamedQueries({ @NamedQuery(name = "Manufacturer.findAll", query = "SELECT m FROM Manufacturer m"), @NamedQuery(name = "Manufacturer.findByManufacturerId", query = "SELECT m FROM Manufacturer m WHERE m.manufacturerId = :manufacturerId"), The @Entity annotation defines that this class, Manufacturer.java, is an entity and when followed by the @Table annotation, which has a name parameter, points out the table in the Database where the information is stored. The @NamedQueries annotation is the place where all the NetBeans-generated JPA queries are stored. There can be as many @NamedQueries as the developer feels necessary. One of the NamedQueries we are using in our example is named Manufacturer.findAll, which is a simple select query. When invoked, the query is translated to: SELECT m FROM Manufacturer m On top of that, NetBeans implements the equals, hashCode, and toString methods. Very useful if the entities need to be used straight away with some collections, such as HashMap. Below is the NetBeans-generated code for both hashCode and the toString methods: @Override public int hashCode() { int hash = 0; hash += (manufacturerId != null ? 
manufacturerId.hashCode() : 0); return hash; } @Override public boolean equals(Object object) { // TODO: Warning - this method won't work in the case the id fields are not set if (!(object instanceof Manufacturer)) { return false; } Manufacturer other = (Manufacturer) object; if ((this.manufacturerId == null && other.manufacturerId != null) || (this.manufacturerId != null && !this.manufacturerId. equals(other.manufacturerId))) { return false; } return true; } NetBeans also creates a persistence.xml and provides a Visual Editor, simplifying the management of different Persistence Units (in case our project needs to use more than one); thereby making it possible to manage the persistence.xml without even touching the XML code. A persistence unit, or persistence.xml, is the configuration file in JPA which is placed under the configuration files, when the NetBeans view is in Projects mode. This file defines the data source and what name the persistence unit has in our example: <persistence-unit name="EJBApplicationPU" transaction-type="JTA"> <jta-data-source>jdbc/sample</jta-data-source> <properties/> </persistence-unit> The persistence.xml is placed in the configuration folder, when using the Projects view. In our example, our persistence unit name is EJBApplicationPU, using the jdbc/sample as the data source. To add more PUs, click on the Add button that is placed on the uppermost right corner of the Persistence Visual Editor. This is an example of adding another PU to our project:   Creating Stateless Session Bean A Session Bean encapsulates business logic in methods, which in turn are executed by a client. This way, the business logic is separated from the client. Stateless Session Beans do not maintain state. This means that when a client invokes a method in a Stateless bean, the bean is ready to be reused by another client. The information stored in the bean is generally discarded when the client stops accessing the bean. This type of bean is mainly used for persistence purposes, since persistence does not require a conversation with the client. It is not in the scope of this recipe to learn how Stateless Beans work in detail. If you wish to learn more, please visit: http://jcp.org/en/jsr/detail?id=318 or https://www.packtpub.com/developer-guide-for-ejb3/book In this recipe, we will see how to use NetBeans to create a Stateless Session Bean that retrieves information from the database, passes through a servlet and prints this information on a page that is created on-the-fly by our servlet. Getting ready It is required to have NetBeans with Java EE support installed to continue with this recipe. If this particular NetBeans version is not available in your machine, please visit http://download.netbeans.org. We will use the GlassFish Server in this recipe since it is the only Server that supports Java EE 6 at the moment. We also need to have Java DB configured. GlassFish already includes a copy of Java DB in its installation folder. It is possible to follow the steps on this recipe without the previous code, but for better understanding we will continue to build on the top of the previous recipes source code. How to do it... Right-click on EJBApplication node and select New and Session Bean.... For Name and Location: Name the EJB as ManufacturerEJB. Under Package, enter beans. Leave Session Type as Stateless. Leave Create Interface with nothing marked and click Finish. 
Here are the steps for us to create business methods: Open ManufacturerEJB and inside the class body, enter: @PersistenceUnit EntityManagerFactory emf; public List findAll(){ return emf.createEntityManager().createNamedQuery("Manufacturer. findAll").getResultList(); } Press Ctrl+Shift+I to resolve the following imports: java.util.List; javax.persistence.EntityManagerFactory; javax.persistence.PersistenceUnit; Creating the Servlet: Right-click on the EJBApplication node and select New and Servlet.... For Name and Location: Name the servlet as ManufacturerServlet. Under package, enter servlets. Leave all the other fields with their default values and click Next. For Configure Servlet Deployment: Leave all the default values and click Finish. With the ManufacturerServlet open: After the class declaration and before the processRequest method, add: @EJB ManufacturerEJB manufacturerEJB; Then inside the processRequest method, first line after the try statement, add: List<Manufacturer> l = manufacturerEJB.findAll(); Remove the /* TODO output your page here and also */. And finally replace: out.println("<h1>Servlet ManufacturerServlet at " + request. getContextPath () + "</h1>"); With: for(int i = 0; i < 10; i++ ) out.println("<b>City</b>"+ l.get(i).getCity() +", <b>State</b>"+ l.get(i).getState() +"<br>" ); Resolve all the import errors and save the file. How it works... To execute the code produced in this recipe, right-click on the EJBApplication node and select Run. When the browser launches append to the end of the URL/ManufacturerServlet, hit Enter. Our application will return City and State names. One of the coolest features in Java EE 6 is that usage of web.xml can be avoided if annotating the servlet. The following code does exactly that: @WebServlet(name="ManufacturerServlet", urlPatterns={"/ ManufacturerServlet"}) Since we are working on Java EE 6, our Stateless bean does not need the daunting work of creating interfaces, the @Stateless annotation takes care of that, making it easier to develop EJBs. We then add the persistence unit, represented by the EntityManagerFactory and inserted by the @PersistenceUnit annotation. Finally we have our business method that is used from the servlet. The findAll method uses one of the named queries from our entity to fetch information from the database.  