
How-To Tutorials - Data

1210 Articles

Working with Value Extractors and Simplifying Queries in Oracle Coherence 3.5

Packt
27 Apr 2010
5 min read
Coherence allows you to look up one or more objects based on attributes other than the identity by specifying a filter for the set-based operations defined by the QueryMap interface:

public interface QueryMap extends Map {
    Set keySet(Filter filter);
    Set entrySet(Filter filter);
    Set entrySet(Filter filter, Comparator comparator);
    ...
}

As you can see from the previous interface definition, all three methods accept a filter as the first argument, which is an instance of a class implementing a very simple com.tangosol.util.Filter interface:

public interface Filter {
    boolean evaluate(Object o);
}

Basically, the Filter interface defines a single method, evaluate, which takes an object to evaluate as an argument and returns true if the specified object satisfies the criteria defined by the filter, or false if it doesn't. This mechanism is very flexible, as it allows you to filter your cached objects any way you want. For example, it would be quite simple to implement a filter that can be used to retrieve all the account transactions in a specific period:

public class TransactionFilter implements Filter {
    private Long m_accountId;
    private Date m_from;
    private Date m_to;

    public TransactionFilter(Long accountId, Date from, Date to) {
        m_accountId = accountId;
        m_from = from;
        m_to = to;
    }

    public boolean evaluate(Object o) {
        Transaction tx = (Transaction) o;
        return tx.getId().getAccountId().equals(m_accountId)
            && tx.getTime().compareTo(m_from) >= 0
            && tx.getTime().compareTo(m_to) <= 0;
    }
}

While the previous sample filter implementation is perfectly valid and will return correct results if executed against the transactions cache, it would be very cumbersome if you had to define every single query criterion in the application by implementing a custom filter class as we did previously. Fortunately, Coherence provides a number of built-in filters that make custom filter implementation unnecessary in the vast majority of cases.

Built-in filters

Most queries can be expressed in terms of object attributes and standard logical and relational operators, such as AND, OR, equals, less than, greater than, and so on. For example, if we wanted to find all the transactions for an account, it would be much easier if we could just execute a query analogous to the select * from Transactions where account_id = 123 SQL statement than to write a custom filter that checks if the accountId attribute is equal to 123. The good news is that Coherence has a number of built-in filters that allow us to do exactly that. The following table lists all the filters from the com.tangosol.util.filter package that you can use to construct custom queries:

As you can see, pretty much all of the standard Java logical operators and SQL predicates are covered. This will allow us to construct query expressions as complex as the ones we can define in Java code or the SQL where clause. The bad news is that there is no query language in Coherence that allows you to specify a query as a string. Instead, you need to create the expression tree for the query programmatically, which can make things a bit tedious. For example, the where clause of the SQL statement we specified earlier, select * from Transactions where account_id = 123, can be represented by the following Coherence filter definition:

Filter filter = new EqualsFilter("getId.getAccountId", 123);

In this case it is not too bad: we simply create an instance of an EqualsFilter that will extract the value of an accountId attribute from a Transaction.Id instance and compare it with 123.
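As an aside (this small sketch is not from the article), composing a few of the built-in filters from com.tangosol.util.filter gives a feel for how a slightly richer where clause might translate; the Order class and its getStatus/getAmount accessors are hypothetical, used only for illustration:

import com.tangosol.util.Filter;
import com.tangosol.util.filter.AndFilter;
import com.tangosol.util.filter.EqualsFilter;
import com.tangosol.util.filter.GreaterFilter;

// Roughly equivalent to:
//   select * from Orders where status = 'OPEN' and amount > 1000
// The extractor strings name getter methods on the (hypothetical) Order class.
Filter openLargeOrders = new AndFilter(
        new EqualsFilter("getStatus", "OPEN"),
        new GreaterFilter("getAmount", 1000));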
However, if we modify the query to filter transactions by date as well, the filter expression that we need to create becomes slightly more complex:

Filter filter = new AndFilter(
        new EqualsFilter("getId.getAccountId", accountId),
        new BetweenFilter("getTime", from, to));

If you need to combine several logical expressions, this can quickly get out of hand, so we will look for a way to simplify filter creation shortly. But first, let's talk about something we used in the examples without paying much attention to it—value extractors.

Value extractors

As you can see from the previous examples, a query is typically expressed in terms of object attributes, such as accountId or time, while the evaluate method defined by the Filter interface accepts a whole object that the attributes belong to, such as a Transaction instance. That implies that we need a generic way to extract attribute values from an object instance—otherwise, there would be no way to define reusable filters, such as the ones in the table earlier that ship with Coherence, and we would be forced to implement a custom filter for each query we need to execute. In order to solve this problem and enable extraction of attribute values from an object, Coherence introduces value extractors. A value extractor is an object that implements a com.tangosol.util.ValueExtractor interface:

public interface ValueExtractor {
    Object extract(Object target);
}

The sole purpose of a value extractor is to extract a derived value from the target object that is passed as an argument to the extract method. The result could be a single attribute value, a combination of multiple attributes (concatenation of first and last name, for example), or in general, a result of some transformation of a target object.
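As a small sketch of the kind of derived value just described (this example is mine, not the article's), a custom extractor that concatenates a first and last name might look like the following; the Person class and its getters are hypothetical:

import com.tangosol.util.ValueExtractor;

// Assumes a hypothetical Person class with getFirstName()/getLastName().
public class FullNameExtractor implements ValueExtractor {
    public Object extract(Object target) {
        Person p = (Person) target;
        return p.getFirstName() + " " + p.getLastName();
    }
}

// It could then be combined with a built-in filter, for example:
// Filter filter = new EqualsFilter(new FullNameExtractor(), "John Smith");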


Querying the Data Grid in Coherence 3.5: Obtaining Query Results and Using Indexes

Packt
27 Apr 2010
6 min read
The easiest way to obtain query results is to invoke one of the QueryMap.entrySet methods:

Filter filter = ...;
Set<Map.Entry> results = cache.entrySet(filter);

This will return a set of Map.Entry instances representing both the key and the value of a cache entry, which is likely not what you want. More often than not you need only values, so you will need to iterate over the results and extract the value from each Map.Entry instance:

List values = new ArrayList(results.size());
for (Map.Entry entry : results) {
    values.add(entry.getValue());
}

After doing this a couple of times you will probably want to create a utility method for this task. Because all the queries should be encapsulated within various repository implementations, we can simply add the following utility methods to our AbstractCoherenceRepository class:

public abstract class AbstractCoherenceRepository<K, V extends Entity<K>> {
    ...
    protected Collection<V> queryForValues(Filter filter) {
        Set<Map.Entry<K, V>> entries = getCache().entrySet(filter);
        return extractValues(entries);
    }

    protected Collection<V> queryForValues(Filter filter, Comparator comparator) {
        Set<Map.Entry<K, V>> entries = getCache().entrySet(filter, comparator);
        return extractValues(entries);
    }

    private Collection<V> extractValues(Set<Map.Entry<K, V>> entries) {
        List<V> values = new ArrayList<V>(entries.size());
        for (Map.Entry<K, V> entry : entries) {
            values.add(entry.getValue());
        }
        return values;
    }
}

What happened to the QueryMap.values() method? Obviously, things would be a bit simpler if the QueryMap interface also had an overloaded version of the values method that accepts a filter and optionally a comparator as arguments. I'm not sure why this functionality is missing from the API, but I hope it will be added in one of the future releases. In the meantime, a simple utility method is all it takes to provide the missing functionality, so I am not going to complain too much.

Controlling query scope using data affinity

Data affinity can provide a significant performance boost because it allows Coherence to optimize the query for related objects. Instead of executing the query in parallel across all the nodes and aggregating the results, Coherence can simply execute it on a single node, because data affinity guarantees that all the results will be on that particular node. This effectively reduces the number of objects searched to approximately C/N, where C is the total number of objects in the cache the query is executed against, and N is the number of partitions in the cluster. However, this optimization is not automatic—you have to target the partition to search explicitly, using KeyAssociatedFilter:

Filter query = ...;
Filter filter = new KeyAssociatedFilter(query, key);

In the previous example, we create a KeyAssociatedFilter that wraps the query we want to execute. The second argument to its constructor is the cache key that determines the partition to search. To make all of this more concrete, let's look at the final implementation of the code for our sample application that returns account transactions for a specific period.
First, we need to add the getTransactions method to our Account class:

public Collection<Transaction> getTransactions(Date from, Date to) {
    return getTransactionRepository().findTransactions(m_id, from, to);
}

Finally, we need to implement the findTransactions method within the CoherenceTransactionRepository:

public Collection<Transaction> findTransactions(Long accountId, Date from, Date to) {
    Filter filter = new FilterBuilder()
            .equals("id.accountId", accountId)
            .between("time", from, to)
            .build();
    return queryForValues(new KeyAssociatedFilter(filter, accountId),
                          new DefaultTransactionComparator());
}

As you can see, we target the query using the account identifier and ensure that the results are sorted by transaction number by passing DefaultTransactionComparator to the queryForValues helper method we implemented earlier. This ensures that Coherence looks for transactions only within the partition that the account with the specified id belongs to.

Querying near cache

One situation where a direct query using the entrySet method might not be appropriate is when you need to query a near cache. Because there is no way for Coherence to determine if all the results are already in the front cache, it will always execute the query against the back cache and return all the results over the network, even if some or all of them are already present in the front cache. Obviously, this is a waste of network bandwidth. What you can do in order to optimize the query is to obtain the keys first and then retrieve the entries by calling the CacheMap.getAll method:

Filter filter = ...;
Set keys = cache.keySet(filter);
Map results = cache.getAll(keys);

The getAll method will try to satisfy as many results as possible from the front cache and delegate to the back cache to retrieve only the missing ones. This will ensure that we move the bare minimum of data across the wire when executing queries, which will improve the throughput. However, keep in mind that this approach might increase latency, as you are making two network roundtrips instead of one, unless all results are already in the front cache. In general, if the expected result set is relatively small, it might make more sense to move all the results over the network using a single entrySet call. Another potential problem with the idiom used for near cache queries is that it could return invalid results. There is a possibility that some of the entries might change between the calls to keySet and getAll. If that happens, getAll might return entries that do not satisfy the filter anymore, so you should only use this approach if you know that this cannot happen (for example, if objects in the cache you are querying, or at least the attributes that the query is based on, are immutable).

Sorting the results

We have already seen that the entrySet method allows you to pass a Comparator as a second argument, which will be used to sort the results. If your objects implement the Comparable interface you can also specify null as a second argument and the results will be sorted based on their natural ordering. For example, if we defined the natural sort order for transactions by implementing Comparable within our Transaction class, we could've simply passed null instead of a DefaultTransactionComparator instance within the findTransactions implementation shown earlier. On the other hand, if you use the near cache query idiom, you will have to sort the results yourself.
This is again an opportunity to add utility methods to our base repository class that allow you to query a near cache and optionally sort the results. However, there is a lot more to cover in this article, so I will leave this as an exercise for the reader; a rough sketch follows.
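As a rough sketch of that exercise (my own illustration, not the book's implementation), such helpers might look like the following inside the AbstractCoherenceRepository shown earlier; the method names are hypothetical and java.util.Collections is assumed to be imported:

// Near cache query idiom: fetch keys first, then use getAll so the front
// cache can satisfy as many entries as possible.
protected Collection<V> queryForValuesFromNearCache(Filter filter) {
    Set<K> keys = getCache().keySet(filter);
    Map<K, V> results = getCache().getAll(keys);
    return results.values();
}

// Because getAll returns an unsorted map, sorting has to happen on the client.
protected Collection<V> queryForValuesFromNearCache(Filter filter,
                                                    Comparator<V> comparator) {
    List<V> values = new ArrayList<V>(queryForValuesFromNearCache(filter));
    Collections.sort(values, comparator);
    return values;
}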


Working with Aggregators in Oracle Coherence 3.5

Packt
27 Apr 2010
5 min read
For example, you might want to retrieve the total amount of all orders for a particular customer. One possible solution is to retrieve all the orders for the customer using a filter and to iterate over them on the client in order to calculate the total. While this will work, you need to consider the implications:

• You might end up moving a lot of data across the network in order to calculate a result that is only a few bytes long
• You will be calculating the result in a single-threaded fashion, which might introduce a performance bottleneck into your application

The better approach would be to calculate partial results on each cache node for the data it manages, and to aggregate those partial results into a single answer before returning it to the client. Fortunately, we can use Coherence aggregators to achieve exactly that. By using an aggregator, we limit the amount of data that needs to be moved across the wire to the aggregator instance itself, the partial results returned by each Coherence node the aggregator is evaluated on, and the final result. This reduces the network traffic significantly and ensures that we use the network as efficiently as possible. It also allows us to perform the aggregation in parallel, using the full processing power of the Coherence cluster.

At the most basic level, an aggregator is an instance of a class that implements the com.tangosol.util.InvocableMap.EntryAggregator interface:

interface EntryAggregator extends Serializable {
    Object aggregate(Set set);
}

However, you will rarely have the need to implement this interface directly. Instead, you should extend the com.tangosol.util.aggregator.AbstractAggregator class, which also implements the com.tangosol.util.InvocableMap.ParallelAwareAggregator interface that is required to ensure that the aggregation is performed in parallel across the cluster. The AbstractAggregator class has a constructor that accepts a value extractor to use and defines the three abstract methods you need to override:

public abstract class AbstractAggregator
        implements InvocableMap.ParallelAwareAggregator {
    public AbstractAggregator(ValueExtractor valueExtractor) {
        ...
    }
    protected abstract void init(boolean isFinal);
    protected abstract void process(Object value, boolean isFinal);
    protected abstract Object finalizeResult(boolean isFinal);
}

The init method is used to initialize the result of aggregation, the process method is used to process a single aggregation value and include it in the result, and the finalizeResult method is used to create the final result of the aggregation. Because aggregators can be executed in parallel, the init and finalizeResult methods accept a flag specifying whether the result to initialize or finalize is the final result that should be returned by the aggregator or a partial result, returned by one of the parallel aggregators. The process method also accepts an isFinal flag, but in its case the semantics are somewhat different—if the isFinal flag is true, that means that the object to process is the result of a single parallel aggregator execution that needs to be incorporated into the final result. Otherwise, it is the value extracted from a target object using the value extractor that was specified as a constructor argument. This will all be much clearer when we look at an example.
Let's write a simple aggregator that returns the average value of a numeric attribute:

public class AverageAggregator extends AbstractAggregator {
    private transient double sum;
    private transient int count;

    public AverageAggregator() {
        // deserialization constructor
    }

    public AverageAggregator(ValueExtractor valueExtractor) {
        super(valueExtractor);
    }

    public AverageAggregator(String propertyName) {
        super(propertyName);
    }

    protected void init(boolean isFinal) {
        sum = 0;
        count = 0;
    }

    protected void process(Object value, boolean isFinal) {
        if (value != null) {
            if (isFinal) {
                PartialResult pr = (PartialResult) value;
                sum += pr.getSum();
                count += pr.getCount();
            }
            else {
                sum += ((Number) value).doubleValue();
                count++;
            }
        }
    }

    protected Object finalizeResult(boolean isFinal) {
        if (isFinal) {
            return count == 0 ? null : sum / count;
        }
        else {
            return new PartialResult(sum, count);
        }
    }

    static class PartialResult implements Serializable {
        private double sum;
        private int count;

        PartialResult(double sum, int count) {
            this.sum = sum;
            this.count = count;
        }

        public double getSum() {
            return sum;
        }

        public int getCount() {
            return count;
        }
    }
}

As you can see, the init method simply sets both the sum and the count fields to zero, completely ignoring the value of the isFinal flag. This is OK, as we want those values to start from zero whether we are initializing our main aggregator or one of the parallel aggregators. The finalizeResult method, on the other hand, depends on the isFinal flag to decide which value to return. If it is true, it divides the sum by the count in order to calculate the average and returns it. The only exception is if the count is zero, in which case the result is undefined and the null value is returned. However, if the isFinal flag is false, finalizeResult simply returns an instance of the PartialResult inner class, which is nothing more than a holder for the partial sum and related count on a single node. Finally, the process method also uses the isFinal flag to determine its correct behavior. If it's true, that means that the value to be processed is a PartialResult instance, so it reads the partial sum and count from it and adds them to the main aggregator's sum and count fields. Otherwise, it simply adds the value to the sum field and increments the count field by one. We have implemented AverageAggregator in order to demonstrate with a simple example how the isFinal flag should be used to control the aggregation, as well as to show that the partial and the final result do not have to be of the same type. However, this particular aggregator is pretty much a throw-away piece of code, as we'll see in the next section.
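For context (this usage snippet is mine, not the article's), an aggregator like this is typically handed to the cache through the InvocableMap.aggregate method, which NamedCache extends; the cache name, the getAmount accessor, and the accountId variable below are illustrative assumptions:

// Execute the aggregator across the grid; only partial results travel the wire.
NamedCache transactions = CacheFactory.getCache("transactions");
Filter filter = new EqualsFilter("getId.getAccountId", accountId);
Object average = transactions.aggregate(filter, new AverageAggregator("getAmount"));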


Installing Pentaho Data Integration with MySQL

Packt
09 Apr 2010
8 min read
In order to work with Pentaho 3.2 Data Integration (PDI) you need to install the software. It's a simple task; let's do it.

Time for action – installing PDI

These are the instructions to install Kettle, whatever your operating system. The only prerequisite to install PDI is to have JRE 5.0 or higher installed. If you don't have it, please download it from http://www.javasoft.com/ and install it before proceeding. Once you have checked the prerequisite, follow these steps: From http://community.pentaho.com/sourceforge/ follow the link to Pentaho Data Integration (Kettle). Alternatively, go directly to the download page http://sourceforge.net/projects/pentaho/files/Data Integration. Choose the newest stable release. At this time, it is 3.2.0. Download the file that matches your platform. The preceding screenshot should help you. Unzip the downloaded file in a folder of your choice—C:/Kettle or /home/your_dir/kettle. If your system is Windows, you're done. Under UNIX-like environments, it's recommended that you make the scripts executable. Assuming that you chose Kettle as the installation folder, execute the following commands:

cd Kettle
chmod +x *.sh

What just happened?

You have installed the tool in just a few minutes. Now you have all you need to start working.

Launching the PDI graphical designer: Spoon

Now that you've installed PDI, you must be eager to do some stuff with data. That will be possible only inside a graphical environment. PDI has a desktop designer tool named Spoon. Let's see how it feels to work with it.

Time for action – starting and customizing Spoon

In this tutorial you're going to launch the PDI graphical designer and get familiarized with its main features. Start Spoon. If your system is Windows, type the following command:

Spoon.bat

On other platforms such as Unix, Linux, and so on, type:

Spoon.sh

If you didn't make spoon.sh executable, you may type:

sh Spoon.sh

As soon as Spoon starts, a dialog window appears asking for the repository connection data. Click the No Repository button. The main window appears. You will see a small window with the tip of the day. After reading it, close that window. A welcome! window appears with some useful links for you to see. Close the welcome window. You can open that window later from the main menu. Click Options... from the Edit menu. A window appears where you can change various general and visual characteristics. Uncheck the circled checkboxes. Select the Look & Feel tab window. Change the Grid size and Preferred Language settings as follows: Click the OK button. Restart Spoon in order to apply the changes. You should neither see the repository dialog, nor the welcome window. You should see the following screen instead:

What just happened?

You ran the graphical designer of PDI, Spoon, for the first time, and applied some custom configuration. From the Look & Feel configuration window, you changed the size of the dotted grid that appears in the canvas area while you are working. You also changed the preferred language. In the Option tab window, you chose not to show either the repository dialog or the welcome window at startup. These changes were applied as you restarted the tool, not before. The second time you launched the tool, the repository dialog didn't show up. When the main window appeared, all the visible texts were shown in French, which was the selected language, and instead of the welcome window, there was a blank screen.

Spoon

This tool that you're exploring in this section is PDI's desktop design tool.
With Spoon you design, preview, and test all your work, that is, transformations and jobs. When you see PDI screenshots, what you are really seeing are Spoon screenshots. The other PDI components that you will meet in the following chapters are executed from terminal windows.

Setting preferences in the Options window

In the tutorial you changed some preferences in the Options window. There are several look and feel characteristics you can change beyond those you changed. Feel free to experiment with these settings. Remember to restart Spoon in order to see the changes applied. If you choose any language as preferred language other than English, you should select a different language as the alternative. If you do so, every name or description not translated to your preferred language will be shown in the alternative language. Just for the curious people: Italian and French are the overall winners of the list of languages to which the tool has been translated from English. Below them follow Korean, Argentinean Spanish, Japanese, and Chinese. One of the settings you changed was the appearance of the welcome window at startup. The welcome window has many useful links, all related to the tool: wiki pages, news, forum access, and more. It's worth exploring them. You don't have to change the settings again to see the welcome window. You can open it from the menu Help | Show the Welcome Screen.

Storing transformations and jobs in a repository

The first time you launched Spoon, you chose No Repository. After that, you configured Spoon to stop asking you for the Repository option. You must be curious about what the repository is and why not to use it. Let's explain it. As said, the results of working with PDI are transformations and jobs. In order to save the transformations and jobs, PDI offers two methods:

Repository: When you use the repository method you save jobs and transformations in a repository. A repository is a relational database specially designed for this purpose.

Files: The files method consists of saving jobs and transformations as regular XML files in the filesystem, with extensions kjb and ktr respectively.

The following diagram summarizes this: You cannot mix the two methods (files and repository) in the same project. Therefore, you must choose the method when you start the tool. Why did we choose not to work with the repository, or in other words, to work with files? This is mainly for the following two reasons: Working with files is more natural and practical for most users. Working with the repository requires minimum database knowledge and also that you have access to a database engine from your computer. Having both preconditions would allow you to learn working with both methods. However, it's probable that you haven't.

Creating your first transformation

Until now, you've seen the very basic elements of Spoon. For sure, you must be waiting to do some interesting task beyond looking around. It's time to create your first transformation.

Time for action – creating a hello world transformation

How about starting by saying Hello to the World? Not original but enough for a very first practical exercise. Here is how you do it: Create a folder named pdi_labs under the folder of your choice. Open Spoon. From the main menu select File | New Transformation. At the left-hand side of the screen, you'll see a tree of Steps. Expand the Input branch by double-clicking it. Left-click the Generate Rows icon. Without releasing the button, drag-and-drop the selected icon to the main canvas.
The screen will look like this: Double-click the Generate Rows step that you just put in the canvas and fill the text boxes and grid as follows: From the Steps tree, double-click the Flow step. Click the Dummy icon and drag-and-drop it to the main canvas. Click the Generate Rows step and, holding the Shift key down, drag the cursor towards the Dummy step. Release the button. The screen should look like this: Right-click somewhere on the canvas to bring up a contextual menu. Select New note. A note editor appears. Type some description such as Hello World! and click OK. From the main menu, select Transformation | Configuration. A window appears to specify transformation properties. Fill the Transformation name with a simple name such as hello_world. Fill the Description field with a short description such as My first transformation. Finally, provide a clearer explanation in the Extended description text box and click OK. From the main menu, select File | Save. Save the transformation in the folder pdi_labs with the name hello_world. Select the Dummy step by left-clicking it. Click on the Preview button in the menu above the main canvas. A debug window appears. Click the Quick Launch button. The following window appears to preview the data generated by the transformation: Close the preview window and click the Run button. A window appears. Click Launch. The execution results are shown at the bottom of the screen. The Logging tab should look as follows:


Author Podcast - Bob Griesemer on Oracle Warehouse Builder 11g

Packt
09 Apr 2010
1 min read
Click here to download the interview, or hit play in the media player below.    


Oracle's RDBMS SQL Command Dump Block

Packt
09 Apr 2010
8 min read
Do not do this in a production database. Before continuing with this article, you should read the Oracle Database Concepts 11g Release 2 (11.2) documentation, the book every DBA should start with. Our examination of data blocks starts in Section 12-6 of the Concepts Manual.

Data block format: "Every Oracle data block has a format or internal structure that enables the database to track the data and free space in the block. This format is similar whether the data block contains table, index, or table cluster data."

A block is the smallest unit of logical storage that the Relational Database Management System (RDBMS) can manipulate. Block size is determined by the database parameter DB_BLOCK_SIZE. The logical storage of data blocks, extents, segments, and tablespaces (from smallest to largest) maps to the data files, which are stored in operating system blocks. An undo block will store the undo transaction, that is, the actual SQL command needed to reverse the original SQL transaction statement. This undo is needed for read consistency for all read-only queries until you commit or rollback that transaction. Read consistency within a changed block (transaction) is maintained for any of the following commands: insert, update, delete, merge, select for update, or lock table. Any of the previous changes are tracked until the command is issued to either commit or rollback a particular transaction. This consistency keeps the data view to each user the same, whether they are just doing queries or actually changing any other data.

A point in time, or what is called the System Change Number (SCN), identifies each transaction, and transaction flags show the state of the transaction. The only end user that can see any changed data will be the one making the changes, no matter the application used, until they commit that change. The SCN advances for every change to the database as a sequential counter, which identifies a certain point in time. The SCN tracks more than just single transactions by end users. These transactions will be in Data Definition Language (DDL) or Data Manipulation Language (DML). DDL statements are associated with creating objects (create table) or what is also called metadata. DML statements are the other commands mentioned earlier (insert, update, delete, among others) that manipulate the data in some way. The RDBMS advances the SCN if another person logs in, reconnects, or alters their session, as well as when Oracle background processes (which constantly check the state of activity inside of the database) take place. It is undo that gives everyone a point-in-time consistent view of the data, which is called Read Consistency. There are controls created from business rules within the application, called triggers and integrity constraints, that validate the data entered by the user. Database locks control access to data during changes for exclusive access by the end user changing it.

During a delete or update statement:

• The data block is read, loading it into a memory structure called a buffer cache
• The redo log buffer will contain the corresponding delete or update statement
• An entry in the undo segment header block is created for this transaction
• It also copies the delete or update row into an undo block
• For a delete, the row is removed from the data block and that block is marked as dirty
• Locks keep exclusive use of that block until a commit or rollback occurs

Dirty is an internal designation where the block is identified as having changed data that has not been written to disk.
The RDBMS needs to track this information for transactional integrity and consistency. The underlying dynamic performance view v$bh indicates when a particular block is dirty, as seen by the following query:

SYS@ORCL11>select file#, block# from v$bh where dirty='Y';

When a transaction is committed by the end user:

• The transaction SCN is updated in the data block and the undo segment header marks that statement as committed in the header section of the undo block.
• The logwriter process (LGWR) will flush the log buffer to the appropriate online redo log file.
• The SCN is changed on the data block if it is still in the buffer cache (fast commit).

Delayed block cleanout can happen when all of the changed blocks don't have the updated SCN indicating the commit has occurred. This can cause problems with a transaction that is updating large numbers of rows if a rollback needs to occur. Symptoms include hanging onto an exclusive lock until that rollback is finished, and causing end users to wait. The delayed block cleanout process does occasionally cause problems that would require opening an Oracle Support Request. Delayed block cleanout was implemented to save time by reducing the number of disk reads to update the SCN until the RDBMS needs to access data from that same block again. If the changed block has already been written to the physical disk and the Oracle background process encounters this same block (for any other query, DML, or DDL), it will also record the committed change at the same time. It does this by checking the transaction entry by SCN in the undo header, which indicates the changes that have been committed. That transaction entry is located in the transaction table, which keeps track of all active transactions for that undo segment. Each transaction is uniquely identified by the assignment of a transaction ID (XID), which is found in the v$transaction view. This XID is written in the undo header block along with the Undo Byte Address (Uba), which consists of the file and block numbers (the UBAFIL data file and UBABLK data block columns found in the v$transaction view, respectively). Please take the time to go through the following demonstration; it will solidify the complex concepts in this article.

Demonstration of data travel path

Dumping a block is one of the methods to show how data is stored. It will show the actual contents of the block, whether it is a Table or Index Block, and an actual address that includes the data file number and block number. Remember from the concepts manual that several blocks together make up an extent, and extents then make up segments. A single segment maps to a particular table or index. It is easy to see from the following simplified diagram how different extents can be stored in different physical locations in different data files but the same logical tablespace:

The data in the test case comes from creating a small table (segment) with minimum data in a tablespace with a single data file created just for this demonstration. Automatic Segment Space Management (ASSM) is the default in 11g. If you create a tablespace in 11g with none of the optional storage parameters, the RDBMS by default creates an ASSM segment with locally managed autoallocated extents. It is possible to define the size of the extents at tablespace creation time, depending on the type of data to be stored. If all of the data is uniform and you need to maintain strict control over the amount of space used, then uniform extents are desirable.
Allowing the RDBMS to autoallocate extents is typical in situations where the data is not the same size for each extent, reducing the amount of time spent in allocating and maintaining space for database segments. Discussing the details, options, and differences for all of the ways to manage segment space in Oracle Database 11g is beyond the scope of this article. For this example, we will be using race car track information as the sample data.

For this demonstration, you will create a specific user with the minimum amount of privileges needed to complete this exercise; SQL is provided for that step in the script. There are several key files in the zipped code for this article that you will need for this exercise, listed as follows:

dumpblock_sys.sql
dumpblock_ttracker.sql
dumpblocksys.lst
dumpblockttracker.lst
NEWDB_ora_8582_SYSDUMP1.rtf
NEWDB_ora_8582_SYSDUMP1.txt
NEWDB_ora_8621_SYSDUMP2.rtf
NEWDB_ora_8621_SYSDUMP2.txt
NEWDB_ora_8628_SYSDUMP3.rtf
NEWDB_ora_8628_SYSDUMP3.txt
NEWDB_ora_8635_SYSDUMP4.rtf
NEWDB_ora_8635_SYSDUMP4.txt

You will also need access to a conversion calculator to translate hexadecimal to decimal, which is the first link below (use hexadecimal input and decimal output). The second link will allow you to look up Hex (Hexadecimal) equivalents for characters:

http://calculators.mathwarehouse.com/binary-hexadecimal-calculator.php#hexadecimalBinaryCalculator
http://www.asciitable.com/

Location of trace files

The dump block statement will create a trace file in the user dump (udump) directory on any version prior to 11gR1, which can be viewed by a text editor. Using 11gR1 and above, you will find it in the diag directory location. This example will demonstrate how to use the adrci command-line utility to view trace files. First we set the home path where the utility will find the files, then search with the most recent listed first—in this case, it is the NEWDB_ora_9980.trc file. Now that you know the location for the trace files, how do you determine which trace file was produced? The naming convention for trace files includes the actual process number associated with that session. Use the following command to produce trace files with a specific name, making it easier to identify a separate task:

SYS@NEWDB>ALTER SESSION SET TRACEFILE_IDENTIFIER = SYSDUMP_SESSION;

Unveil the Power of Your Business Data with Oracle Discoverer

Packt
08 Apr 2010
4 min read
A quick guide to Oracle Discoverer packaging

Before proceeding to get the Oracle Discoverer software, it’s important to realize what you actually need and what the Oracle Discoverer packaging provides. At the moment, there are two options when it comes to the current Oracle Discoverer software:

• Oracle Business Intelligence suite, part of Oracle Application Server 10g Release 2
• Portal, Forms, Reports and Discoverer suite, part of Oracle Fusion Middleware 11g Release 1

The first option – the Oracle Business Intelligence suite, part of Oracle Application Server 10g Release 2 – includes the following components:

• Oracle Business Intelligence Discoverer
• Oracle HTTP Server
• Oracle Application Server Containers for J2EE (OC4J)
• Oracle Enterprise Manager 10g Application Server Control
• Oracle Application Server Web Cache
• Oracle Application Server Reports Services

The first component, Oracle Business Intelligence Discoverer, in the above list represents actually a group of components whose name starts with Discoverer. The package includes:

• Discoverer Plus
• Discoverer Viewer
• Discoverer Services
• Discoverer Portlet Provider

Note, however, that the above list does not include all the Discoverer components. For example, you won’t find the following Discoverer components there:

• Discoverer Administrator
• Discoverer Desktop

The above components are included in a complementary package called Oracle Business Intelligence Tools. As mentioned at the beginning of this section, another option to take advantage of the Discoverer components is to install the Portal, Forms, Reports and Discoverer suite, which is part of Oracle Fusion Middleware 11g Release 1. This package includes the following components:

• HTTP Server
• WebCache
• Portal
• Forms Services
• Forms Builder
• Reports Services
• Report Builder/Compiler
• Discoverer Administrator
• Discoverer Plus
• Discoverer Viewer
• Discoverer Services
• Discoverer Desktop
• Enterprise Manager Fusion Middleware Control

As you can see, the Portal, Forms, Reports and Discoverer suite, unlike Oracle Business Intelligence suite, does include Discoverer Administrator and Discoverer Desktop. So you won’t need to install another package to obtain these components. A major downside to choosing the Portal, Forms, Reports and Discoverer suite, though, is that it requires some additional software to be installed in your system. Here is the list of the required additional software components:

• WebLogic Server
• Repository Creation Utility
• Identity Management SSO Metadata Repository Creation Assistant
• Patch Scripts
• Identity Management 10gR3
• Oracle Database

Due to this reason – to save you the trouble of installing a lot of software – the "Installation process" section later in this article will cover the installation of Oracle Business Intelligence suite, part of Oracle Application Server 10g Release 2, rather than the Portal, Forms, Reports and Discoverer suite of Oracle Fusion Middleware 11g Release 1.

Getting the software

Once you have decided on the package you want to install, you can go for it to the OTN’s Software Downloads page at http://www.oracle.com/technology/software/index.html. It’s important to remember that each software component available from this page comes with a Development License, which allows for free download and unlimited evaluation time. You can look at the license at http://www.oracle.com/technology/software/popup-license/standard-license.html. Later, if you so desire, you can always buy products with full-use licenses.
So, on the OTN’s Software Downloads page, go to the Middleware section and, assuming you want to download the Oracle Business Intelligence suite, click the Business Intelligence SE link to proceed to the Oracle Application Server 10g Release 2 (10.1.2.0.2) page at http://www.oracle.com/technology/software/products/ias/htdocs/101202.html. On this page, go down to the Business Intelligence section and find the links to the packages provided for your operating system. Each package is supposed to be copied onto a separate CD. The number of CDs and the size of the packages to be copied onto them may vary depending on the operating system. What you need to do is download the installation packages and then copy each to a CD. Looking through the links to the installation packages, you may notice that Tools CD – the link to the package containing the Oracle Business Intelligence Tools suite – is available only for the Microsoft Windows operating system. This is because the components included in the Oracle Business Intelligence Tools suite are Windows-only applications.

If, instead of the Oracle Business Intelligence suite, you decided on the Portal, Forms, Reports and Discoverer suite, you have to follow the Oracle Fusion Middleware 11g R1 link in the Middleware section on the OTN’s Software Downloads page. Following that link, you’ll be directed to the Oracle Fusion Middleware 11gR1 Software Downloads page at http://www.oracle.com/technology/software/products/middleware/htdocs/fmw_11_download.html. On this page, go down to the Portal, Forms, Reports and Discoverer section and pick up the distribution divided into several packages. Again, the number of packages within a distribution and their size may vary depending on the operating system.


Installing Coherence 3.5 and Accessing the Data Grid: Part 1

Packt
31 Mar 2010
10 min read
When I first started evaluating Coherence, one of my biggest concerns was how easy it would be to set up and use, especially in a development environment. The whole idea of having to set up a cluster scared me quite a bit, as any other solution I had encountered up to that point that had the word "cluster" in it was extremely difficult and time consuming to configure. My fear was completely unfounded—getting the Coherence cluster up and running is as easy as starting Tomcat. You can start multiple Coherence nodes on a single physical machine, and they will seamlessly form a cluster. Actually, it is easier than starting Tomcat.

Installing Coherence

In order to install Coherence you need to download the latest release from the Oracle Technology Network (OTN) website. The easiest way to do so is by following the link from the main Coherence page on OTN. At the time of this writing, this page was located at http://www.oracle.com/technology/products/coherence/index.html, but that might change. If it does, you can find its new location by searching for 'Oracle Coherence' using your favorite search engine. In order to download Coherence for evaluation, you will need to have an Oracle Technology Network (OTN) account. If you don't have one, registration is easy and completely free. Once you are logged in, you will be able to access the Coherence download page, where you will find the download links for all available Coherence releases: one for Java, one for .NET, and one for each of the supported C++ platforms. You can download any of the Coherence releases you are interested in while you are there, but for the remainder of this article you will only need the first one. The latter two (.NET and C++) are client libraries that allow .NET and C++ applications to access the Coherence data grid.

Coherence ships as a single ZIP archive. Once you unpack it you should see the README.txt file containing the full product name and version number, and a single directory named coherence. Copy the contents of the coherence directory to a location of your choice on your hard drive. The common location on Windows is c:\coherence and on Unix/Linux /opt/coherence, but you are free to put it wherever you want. The last thing you need to do is to configure the environment variable COHERENCE_HOME to point to the top-level Coherence directory created in the previous step, and you are done. Coherence is a Java application, so you also need to ensure that you have the Java SDK 1.4.2 or later installed and that the JAVA_HOME environment variable is properly set to point to the Java SDK installation directory. If you are using a JVM other than Sun's, you might need to edit the scripts used in the following section. For example, not all JVMs support the -server option that is used while starting the Coherence nodes, so you might need to remove it.

What's in the box?

The first thing you should do after installing Coherence is become familiar with the structure of the Coherence installation directory.
There are four subdirectories within the Coherence home directory:

• bin: This contains a number of useful batch files for Windows and shell scripts for Unix/Linux that can be used to start Coherence nodes or to perform various network tests
• doc: This contains the Coherence API documentation, as well as links to online copies of Release Notes, User Guide, and Frequently Asked Questions documents
• examples: This contains several basic examples of Coherence functionality
• lib: This contains JAR files that implement Coherence functionality

Shell scripts on Unix: If you are on a Unix-based system, you will need to add execute permission to the shell scripts in the bin directory by executing the following command:

$ chmod u+x *.sh

Starting up the Coherence cluster

In order to get the Coherence cluster up and running, you need to start one or more Coherence nodes. The Coherence nodes can run on a single physical machine, or on many physical machines that are on the same network. The latter will definitely be the case for a production deployment, but for development purposes you will likely want to limit the cluster to a single desktop or laptop. The easiest way to start a Coherence node is to run the cache-server.cmd batch file on Windows or the cache-server.sh shell script on Unix. The end result in either case should be similar to the following screenshot:

There is quite a bit of information on this screen, and over time you will become familiar with each section. For now, notice two things: At the very top of the screen, you can see the information about the Coherence version that you are using, as well as the specific edition and the mode that the node is running in. Notice that by default you are using the most powerful, Grid Edition, in development mode. The MasterMemberSet section towards the bottom lists all members of the cluster and provides some useful information about the current and the oldest member of the cluster.

Now that we have a single Coherence node running, let's start another one by running the cache-server script in a different terminal window. For the most part, the output should be very similar to the previous screen, but if everything has gone according to the plan, the MasterMemberSet section should reflect the fact that the second node has joined the cluster:

MasterMemberSet
  (
  ThisMember=Member(Id=2, ...)
  OldestMember=Member(Id=1, ...)
  ActualMemberSet=MemberSet(Size=2, BitSetCount=2
    Member(Id=1, ...)
    Member(Id=2, ...)
    )
  RecycleMillis=120000
  RecycleSet=MemberSet(Size=0, BitSetCount=0)
  )

You should also see several log messages on the first node's console, letting you know that another node has joined the cluster and that some of the distributed cache partitions were transferred to it. If you can see these log messages on the first node, as well as two members within the ActualMemberSet on the second node, congratulations—you have a working Coherence cluster.

Troubleshooting cluster start-up

In some cases, a Coherence node will not be able to start or to join the cluster. In general, the reason for this could be all kinds of networking-related issues, but in practice a few issues are responsible for the vast majority of problems.

Multicast issues

By far the most common issue is that multicast is disabled on the machine. By default, Coherence uses multicast for its cluster join protocol, and it will not be able to form the cluster unless it is enabled. You can easily check if multicast is enabled and working properly by running the multicast-test shell script within the bin directory.
If you are unable to start the cluster on a single machine, you can execute the following command from your Coherence home directory:

$ . bin/multicast-test.sh -ttl 0

This will limit the time-to-live of multicast packets to the local machine and allow you to test multicast in isolation. If everything is working properly, you should see a result similar to the following:

Starting test on ip=Aleks-Mac-Pro.home/192.168.1.7, group=/237.0.0.1:9000, ttl=0
Configuring multicast socket...
Starting listener...
Fri Aug 07 13:44:44 EDT 2009: Sent packet 1.
Fri Aug 07 13:44:44 EDT 2009: Received test packet 1 from self
Fri Aug 07 13:44:46 EDT 2009: Sent packet 2.
Fri Aug 07 13:44:46 EDT 2009: Received test packet 2 from self
Fri Aug 07 13:44:48 EDT 2009: Sent packet 3.
Fri Aug 07 13:44:48 EDT 2009: Received test packet 3 from self

If the output is different from the above, it is likely that multicast is not working properly or is disabled on your machine. This is frequently the result of a firewall or VPN software running, so the first troubleshooting step would be to disable such software and retry. If you determine that was indeed the cause of the problem you have two options. The first, and obvious one, is to turn the offending software off while using Coherence. However, for various reasons that might not be an acceptable solution, in which case you will need to change the default Coherence behavior, and tell it to use the Well-Known Addresses (WKA) feature instead of multicast for the cluster join protocol. Doing so on a development machine is very simple—all you need to do is add the following argument to the JAVA_OPTS variable within the cache-server shell script:

-Dtangosol.coherence.wka=localhost

With that in place, you should be able to start Coherence nodes even if multicast is disabled.

Localhost and loopback address: On some systems, localhost maps to a loopback address, 127.0.0.1. If that's the case, you will have to specify the actual IP address or host name for the tangosol.coherence.wka configuration parameter. The host name should be preferred, as the IP address can change as you move from network to network, or if your machine leases an IP address from a DHCP server.

As a side note, you can tell whether the WKA or multicast is being used for the cluster join protocol by looking at the section above the MasterMemberSet section when the Coherence node starts. If multicast is used, you will see something similar to the following:

Group{Address=224.3.5.1, Port=35461, TTL=4}

The actual multicast group address and port depend on the Coherence version being used. As a matter of fact, you can even tell the exact version and the build number from the preceding information. In this particular case, I am using the Coherence 3.5.1 release, build 461. This is done in order to prevent accidental joins of cluster members into an existing cluster. For example, you wouldn't want a node in the development environment using a newer version of Coherence that you are evaluating to join the existing production cluster, which could easily happen if the multicast group address remained the same.
On the other hand, if you are using WKA, you should see output similar to the following instead:

WellKnownAddressList(Size=1,
  WKA{Address=192.168.1.7, Port=8088}
  )

Using the WKA feature completely disables multicast in a Coherence cluster, and is recommended for most production deployments, primarily due to the fact that many production environments prohibit multicast traffic altogether, and that some network switches do not route multicast traffic properly. That said, configuring WKA for production clusters is out of the scope of this article, and you should refer to the Coherence product manuals for details.

Binding issues

Another issue that sometimes comes up is that one of the ports that Coherence attempts to bind to is already in use and you see a bind exception when attempting to start the node. By default, Coherence starts the first node on port 8088, and increments the port number by one for each subsequent node on the same machine. If for some reason that doesn't work for you, you need to identify a range of available ports for as many nodes as you are planning to start (both UDP and TCP ports with the same numbers must be available), and tell Coherence which port to use for the first node by specifying the tangosol.coherence.localport system property. For example, if you want Coherence to use port 9100 for the first node, you will need to add the following argument to the JAVA_OPTS variable in the cache-server shell script:

-Dtangosol.coherence.localport=9100
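As a minimal sketch (not from the article), the same system properties can also be set programmatically before the first CacheFactory call, which can be convenient in development or test code; whether this is appropriate for your setup is an assumption, and the property values below are examples only:

import com.tangosol.net.CacheFactory;

public class DevNodeStarter {
    public static void main(String[] args) {
        // Same settings as the -D arguments discussed above, set from code.
        System.setProperty("tangosol.coherence.wka", "localhost");
        System.setProperty("tangosol.coherence.localport", "9100");

        // Join (or form) the cluster using whatever join protocol the
        // properties above imply, then print the cluster information.
        CacheFactory.ensureCluster();
        System.out.println("Joined cluster: " + CacheFactory.getCluster());
    }
}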


Installing Coherence 3.5 and Accessing the Data Grid: Part 2

Packt
31 Mar 2010
10 min read
Using the Coherence API

One of the great things about Coherence is that it has a very simple and intuitive API that hides most of the complexity that is happening behind the scenes to distribute your objects. If you know how to use a standard Map interface in Java, you already know how to perform basic tasks with Coherence. In this section, we will first cover the basics by looking at some of the foundational interfaces and classes in Coherence. We will then proceed to do something more interesting by implementing a simple tool that allows us to load data into Coherence from CSV files, which will become very useful during testing.

The basics: NamedCache and CacheFactory

As I have briefly mentioned earlier, Coherence revolves around the concept of named caches. Each named cache can be configured differently, and it will typically be used to store objects of a particular type. For example, if you need to store employees, trade orders, portfolio positions, or shopping carts in the grid, each of those types will likely map to a separate named cache. The first thing you need to do in your code when working with Coherence is to obtain a reference to a named cache you want to work with. In order to do this, you need to use the CacheFactory class, which exposes the getCache method as one of its public members. For example, if you wanted to get a reference to the countries cache that we created and used in the console example, you would do the following:

NamedCache countries = CacheFactory.getCache("countries");

Once you have a reference to a named cache, you can use it to put data into that cache or to retrieve data from it. Doing so is as simple as doing gets and puts on a standard Java Map:

countries.put("SRB", "Serbia");
String countryName = (String) countries.get("SRB");

As a matter of fact, NamedCache is an interface that extends Java's Map interface, so you will be immediately familiar not only with get and put methods, but also with other methods from the Map interface, such as clear, remove, putAll, size, and so on. The nicest thing about the Coherence API is that it works in exactly the same way, regardless of the cache topology you use. For now let's just say that you can configure Coherence to replicate or partition your data across the grid. The difference between the two is that in the former case all of your data exists on each node in the grid, while in the latter only 1/n of the data exists on each individual node, where n is the number of nodes in the grid. Regardless of how your data is stored physically within the grid, the NamedCache interface provides a standard API that allows you to access it. This makes it very simple to change cache topology during development if you realize that a different topology would be a better fit, without having to modify a single line in your code. In addition to the Map interface, NamedCache extends a number of lower-level Coherence interfaces. The following table provides a quick overview of these interfaces and the functionality they provide:

The "Hello World" example

In this section we will implement a complete example that achieves programmatically what we have done earlier using Coherence console—we'll put a few countries in the cache, list cache contents, remove items, and so on. To make things more interesting, instead of using country names as cache values, we will use proper objects this time.
That means that we need a class to represent a country, so let's start there:

public class Country implements Serializable, Comparable {
    private String code;
    private String name;
    private String capital;
    private String currencySymbol;
    private String currencyName;

    public Country() {
    }

    public Country(String code, String name, String capital,
                   String currencySymbol, String currencyName) {
        this.code = code;
        this.name = name;
        this.capital = capital;
        this.currencySymbol = currencySymbol;
        this.currencyName = currencyName;
    }

    public String getCode() { return code; }
    public void setCode(String code) { this.code = code; }

    public String getName() { return name; }
    public void setName(String name) { this.name = name; }

    public String getCapital() { return capital; }
    public void setCapital(String capital) { this.capital = capital; }

    public String getCurrencySymbol() { return currencySymbol; }
    public void setCurrencySymbol(String currencySymbol) { this.currencySymbol = currencySymbol; }

    public String getCurrencyName() { return currencyName; }
    public void setCurrencyName(String currencyName) { this.currencyName = currencyName; }

    public String toString() {
        return "Country(" +
               "Code = " + code + ", " +
               "Name = " + name + ", " +
               "Capital = " + capital + ", " +
               "CurrencySymbol = " + currencySymbol + ", " +
               "CurrencyName = " + currencyName + ")";
    }

    public int compareTo(Object o) {
        Country other = (Country) o;
        return name.compareTo(other.name);
    }
}

There are several things to note about the Country class, which also apply to other classes that you want to store in Coherence:

- Because the objects need to be moved across the network, classes that are stored within the data grid need to be serializable. In this case we have opted for the simplest solution and made the class implement the java.io.Serializable interface. This is not optimal, both from a performance and a memory utilization perspective, and Coherence provides several more suitable approaches to serialization.
- We have implemented the toString method that prints out an object's state in a friendly format. While this is not a Coherence requirement, implementing toString properly for both the keys and values that you put into the cache will help a lot when debugging, so you should get into the habit of implementing it for your own classes.
- Finally, we have also implemented the Comparable interface. This is also not a requirement, but it will come in handy in a moment to allow us to print out a list of countries sorted by name.
Now that we have the class that represents the values we want to cache, it is time to write an example that uses it:

import com.tangosol.net.NamedCache;
import com.tangosol.net.CacheFactory;
import ch02.Country;

import java.util.Set;
import java.util.Map;

public class CoherenceHelloWorld {
    public static void main(String[] args) {
        NamedCache countries = CacheFactory.getCache("countries");

        // first, we need to put some countries into the cache
        countries.put("USA", new Country("USA", "United States", "Washington", "USD", "Dollar"));
        countries.put("GBR", new Country("GBR", "United Kingdom", "London", "GBP", "Pound"));
        countries.put("RUS", new Country("RUS", "Russia", "Moscow", "RUB", "Ruble"));
        countries.put("CHN", new Country("CHN", "China", "Beijing", "CNY", "Yuan"));
        countries.put("JPN", new Country("JPN", "Japan", "Tokyo", "JPY", "Yen"));
        countries.put("DEU", new Country("DEU", "Germany", "Berlin", "EUR", "Euro"));
        countries.put("FRA", new Country("FRA", "France", "Paris", "EUR", "Euro"));
        countries.put("ITA", new Country("ITA", "Italy", "Rome", "EUR", "Euro"));
        countries.put("SRB", new Country("SRB", "Serbia", "Belgrade", "RSD", "Dinar"));

        assert countries.containsKey("JPN") : "Japan is not in the cache";

        // get and print a single country
        System.out.println("get(SRB) = " + countries.get("SRB"));

        // remove Italy from the cache
        int size = countries.size();
        System.out.println("remove(ITA) = " + countries.remove("ITA"));
        assert countries.size() == size - 1 : "Italy was not removed";

        // list all cache entries
        Set<Map.Entry> entries = countries.entrySet(null, null);
        for (Map.Entry entry : entries) {
            System.out.println(entry.getKey() + " = " + entry.getValue());
        }
    }
}

Let's go through this code section by section.

At the very top, you can see import statements for NamedCache and CacheFactory, which are the only Coherence classes we need for this simple example. We have also imported our Country class, as well as Java's standard Map and Set interfaces.

The first thing we need to do within the main method is to obtain a reference to the countries cache using the CacheFactory.getCache method. Once we have the cache reference, we can add some countries to it using the same old Map.put method you are familiar with.

We then proceed to get a single object from the cache using the Map.get method, and to remove one using Map.remove. Notice that the NamedCache implementation fully complies with the Map.remove contract and returns the removed object.

Finally, we list all the countries by iterating over the set returned by the entrySet method. Notice that Coherence cache entries implement the standard Map.Entry interface.

Overall, if it wasn't for a few minor differences, it would be impossible to tell whether the preceding code uses Coherence or any of the standard Map implementations. The first telltale sign is the call to CacheFactory.getCache at the very beginning, and the second one is the call to the entrySet method with two null arguments.

We have already discussed the former, but where did the latter come from? The answer is that the Coherence QueryMap interface extends Java Map by adding methods that allow you to filter and sort the entry set. The first argument in our example is an instance of the Coherence Filter interface. In this case, we want all the entries, so we simply pass null as a filter. The second argument, however, is more interesting in this particular example. It represents the java.util.Comparator that should be used to sort the results.
If the values stored in the cache implement the Comparable interface, you can pass null instead of an actual Comparator instance as this argument, in which case the results will be sorted using their natural ordering (as defined by the Comparable.compareTo implementation). That means that when you run the previous example, you should see the following output:

get(SRB) = Country(Code = SRB, Name = Serbia, Capital = Belgrade, CurrencySymbol = RSD, CurrencyName = Dinar)
remove(ITA) = Country(Code = ITA, Name = Italy, Capital = Rome, CurrencySymbol = EUR, CurrencyName = Euro)
CHN = Country(Code = CHN, Name = China, Capital = Beijing, CurrencySymbol = CNY, CurrencyName = Yuan)
FRA = Country(Code = FRA, Name = France, Capital = Paris, CurrencySymbol = EUR, CurrencyName = Euro)
DEU = Country(Code = DEU, Name = Germany, Capital = Berlin, CurrencySymbol = EUR, CurrencyName = Euro)
JPN = Country(Code = JPN, Name = Japan, Capital = Tokyo, CurrencySymbol = JPY, CurrencyName = Yen)
RUS = Country(Code = RUS, Name = Russia, Capital = Moscow, CurrencySymbol = RUB, CurrencyName = Ruble)
SRB = Country(Code = SRB, Name = Serbia, Capital = Belgrade, CurrencySymbol = RSD, CurrencyName = Dinar)
GBR = Country(Code = GBR, Name = United Kingdom, Capital = London, CurrencySymbol = GBP, CurrencyName = Pound)
USA = Country(Code = USA, Name = United States, Capital = Washington, CurrencySymbol = USD, CurrencyName = Dollar)

As you can see, the countries in the list are sorted by name, as defined by our Country.compareTo implementation. Feel free to experiment by passing a custom Comparator as the second argument to the entrySet method, or by removing both arguments, and see how that affects result ordering. If you are feeling really adventurous and can't wait to learn about Coherence queries, take a sneak peek by changing the line that returns the entry set to:

Set<Map.Entry> entries = countries.entrySet(new LikeFilter("getName", "United%"), null);

As a final note, you might have also noticed that I used Java assertions in the previous example to check that the reality matches my expectations (well, more to demonstrate a few other methods in the API, but that's beside the point). Make sure that you specify the -ea JVM argument when running the example if you want the assertions to be enabled, or use the run-helloworld target in the included Ant build file, which configures everything properly for you.

That concludes the implementation of our first Coherence application. One thing you might notice is that the CoherenceHelloWorld application will run just fine even if you don't have any Coherence nodes started, and you might be wondering how that is possible. The truth is that there is one Coherence node—the CoherenceHelloWorld application itself. As soon as the CacheFactory.getCache method gets invoked, Coherence services will start within the application's JVM and it will either join the existing cluster or create a new one, if there are no other nodes on the network. If you don't believe me, look at the log messages printed by the application and you will see that this is indeed the case.

Now that you know the basics, let's move on and build something slightly more exciting, and much more useful.
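If you want a sort order other than the natural one, you can pass a real Comparator as the second argument. The following is a minimal sketch, not part of the original example: a hypothetical comparator that sorts the countries by their capital city. It is made serializable on the assumption that the comparator may have to travel to the cache servers.

import java.io.Serializable;
import java.util.Comparator;

public class CapitalComparator implements Comparator, Serializable {
    public int compare(Object o1, Object o2) {
        // compare two cached Country values by their capital city
        Country c1 = (Country) o1;
        Country c2 = (Country) o2;
        return c1.getCapital().compareTo(c2.getCapital());
    }
}

Plugging it into the listing loop from the example is then a one-line change:

Set<Map.Entry> entries = countries.entrySet(null, new CapitalComparator());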

Slowly Changing Dimension (SCD) Type 6

Packt
30 Mar 2010
6 min read
The Example

We will apply SCDs to maintain the history of the Product dimension, specifically the history of changes of a product's Product Group. The PRODUCT_SK column is the surrogate key of the Product dimension table.

PRODUCT_SK | PRODUCT_CODE | PRODUCT_NAME | PRODUCT_GROUP_CODE | PRODUCT_GROUP_NAME
1 | 11 | PENCIL | 1 | WRITING SUPPLY
2 | 22 | PEN | 1 | WRITING SUPPLY
3 | 33 | TONER | 2 | PRINTING SUPPLY
4 | 44 | NOTEBOOK | 4 | NON ELECTRONIC SUPPLY

SCD Type 1

We will apply SCD Type 1 to the PENCIL product in the Product dimension table. Let's say PENCIL changes its product group to 4. Effecting this change by applying SCD Type 1 just updates the existing row of PENCIL with its new product group. We have no record of its previous product group; in other words, we do not maintain its product group history. The updated PENCIL row is shown in the following table.

PRODUCT_SK | PRODUCT_CODE | PRODUCT_NAME | PRODUCT_GROUP_CODE | PRODUCT_GROUP_NAME
1 | 11 | PENCIL | 4 | NON ELECTRONIC SUPPLY
2 | 22 | PEN | 1 | WRITING SUPPLY
3 | 33 | TONER | 2 | PRINTING SUPPLY
4 | 44 | NOTEBOOK | 4 | NON ELECTRONIC SUPPLY

SCD Type 2

SCD Type 2 is essentially the opposite of Type 1. When we apply SCD Type 2, we never update or delete any existing product group. To apply SCD Type 2 we need an effective date and an expiry date. An expiry date of 31-Dec-99 means the row is not expired; it is the most current version of the product.

PRODUCT_SK | PRODUCT_CODE | PRODUCT_NAME | PRODUCT_GROUP_CODE | PRODUCT_GROUP_NAME | EFFECTIVE_DATE | EXPIRY_DATE
1 | 11 | PENCIL | 1 | WRITING SUPPLY | 1-Jan-09 | 31-Dec-99
2 | 22 | PEN | 1 | WRITING SUPPLY | 1-Jan-09 | 31-Dec-99
3 | 33 | TONER | 2 | PRINTING SUPPLY | 1-Jan-09 | 31-Dec-99
4 | 44 | NOTEBOOK | 4 | NON ELECTRONIC SUPPLY | 1-Jan-09 | 31-Dec-99

Assuming the product group change of PENCIL is effective 1 April 2010, we update the expiry date of its existing row to 31 March 2010, one day before the effective date of the change, and insert a new row that represents its new, current version.

PRODUCT_SK | PRODUCT_CODE | PRODUCT_NAME | PRODUCT_GROUP_CODE | PRODUCT_GROUP_NAME | EFFECTIVE_DATE | EXPIRY_DATE
1 | 11 | PENCIL | 1 | WRITING SUPPLY | 1-Jan-09 | 31-Mar-10
2 | 22 | PEN | 1 | WRITING SUPPLY | 1-Jan-09 | 31-Dec-99
3 | 33 | TONER | 2 | PRINTING SUPPLY | 1-Jan-09 | 31-Dec-99
4 | 44 | NOTEBOOK | 4 | NON ELECTRONIC SUPPLY | 1-Jan-09 | 31-Dec-99
5 | 11 | PENCIL | 4 | NON ELECTRONIC SUPPLY | 1-Apr-10 | 31-Dec-99

SCD Type 3

With SCD Type 3 we maintain history, but in one record only. We have one column for each version of the product group, so you need as many columns as the number of versions you want to keep. One of the most common SCD Type 3 applications is to maintain two versions of the product group: the original version and the current version. When there is no product group change yet, the current product group is the same as the original product group.

PRODUCT_SK | PRODUCT_CODE | PRODUCT_NAME | PRODUCT_GROUP_CODE | PRODUCT_GROUP_NAME | EFFECTIVE_DATE | EXPIRY_DATE | CURRENT_PRODUCT_GROUP_CODE | CURRENT_PRODUCT_GROUP_NAME
1 | 11 | PENCIL | 1 | WRITING SUPPLY | 1-Jan-09 | 31-Dec-99 | 1 | WRITING SUPPLY
2 | 22 | PEN | 1 | WRITING SUPPLY | 1-Jan-09 | 31-Dec-99 | 1 | WRITING SUPPLY
3 | 33 | TONER | 2 | PRINTING SUPPLY | 1-Jan-09 | 31-Dec-99 | 2 | PRINTING SUPPLY
4 | 44 | NOTEBOOK | 4 | NON ELECTRONIC SUPPLY | 1-Jan-09 | 31-Dec-99 | 4 | NON ELECTRONIC SUPPLY

When the pencil's product group changes, let's say on 1 April 2010, we expire its original product group by changing the expiry date to a day earlier (31 March 2010), and replace its current product group with the new product group.
PRODUCT_SK | PRODUCT_CODE | PRODUCT_NAME | PRODUCT_GROUP_CODE | PRODUCT_GROUP_NAME | EFFECTIVE_DATE | EXPIRY_DATE | CURRENT_PRODUCT_GROUP_CODE | CURRENT_PRODUCT_GROUP_NAME
1 | 11 | PENCIL | 1 | WRITING SUPPLY | 1-Jan-09 | 31-Mar-10 | 4 | NON ELECTRONIC SUPPLY
2 | 22 | PEN | 1 | WRITING SUPPLY | 1-Jan-09 | 31-Dec-99 | 1 | WRITING SUPPLY
3 | 33 | TONER | 2 | PRINTING SUPPLY | 1-Jan-09 | 31-Dec-99 | 2 | PRINTING SUPPLY
4 | 44 | NOTEBOOK | 4 | NON ELECTRONIC SUPPLY | 1-Jan-09 | 31-Dec-99 | 4 | NON ELECTRONIC SUPPLY

When its product group changes again in the future, we will replace just the current product group with the new product group. The expiry date does not change; it gets updated only once, the first time its product group changes.

SCD Type 6

SCD Type 6 combines the three basic types. Type 6 is particularly applicable if you want to maintain complete history and would also like an easy way to query by the current version.

Let's now apply Type 6 instead of Type 3 alone. We apply Type 3 by keeping two versions of the product group. When the pencil's product group changes, we update the current product group on its existing rows (that is the Type 1 update). We also apply Type 2 by adding a new row.

PRODUCT_SK | PRODUCT_CODE | PRODUCT_NAME | PRODUCT_GROUP_CODE | PRODUCT_GROUP_NAME | EFFECTIVE_DATE | EXPIRY_DATE | CURRENT_PRODUCT_GROUP_CODE | CURRENT_PRODUCT_GROUP_NAME
1 | 11 | PENCIL | 1 | WRITING SUPPLY | 1-Jan-09 | 31-Mar-10 | 4 | NON ELECTRONIC SUPPLY
2 | 22 | PEN | 1 | WRITING SUPPLY | 1-Jan-09 | 31-Dec-99 | 1 | WRITING SUPPLY
3 | 33 | TONER | 2 | PRINTING SUPPLY | 1-Jan-09 | 31-Dec-99 | 2 | PRINTING SUPPLY
4 | 44 | NOTEBOOK | 4 | NON ELECTRONIC SUPPLY | 1-Jan-09 | 31-Dec-99 | 4 | NON ELECTRONIC SUPPLY
5 | 11 | PENCIL | 4 | NON ELECTRONIC SUPPLY | 1-Apr-10 | 31-Dec-99 | 4 | NON ELECTRONIC SUPPLY

On the pencil's next product group change (1 July 2010), we will again apply all three SCD types.

PRODUCT_SK | PRODUCT_CODE | PRODUCT_NAME | PRODUCT_GROUP_CODE | PRODUCT_GROUP_NAME | EFFECTIVE_DATE | EXPIRY_DATE | CURRENT_PRODUCT_GROUP_CODE | CURRENT_PRODUCT_GROUP_NAME
1 | 11 | PENCIL | 1 | WRITING SUPPLY | 1-Jan-09 | 31-Mar-10 | 5 | LEGACY SUPPLY
2 | 22 | PEN | 1 | WRITING SUPPLY | 1-Jan-09 | 31-Dec-99 | 1 | WRITING SUPPLY
3 | 33 | TONER | 2 | PRINTING SUPPLY | 1-Jan-09 | 31-Dec-99 | 2 | PRINTING SUPPLY
4 | 44 | NOTEBOOK | 4 | NON ELECTRONIC SUPPLY | 1-Jan-09 | 31-Dec-99 | 4 | NON ELECTRONIC SUPPLY
5 | 11 | PENCIL | 4 | NON ELECTRONIC SUPPLY | 1-Apr-10 | 30-Jun-10 | 5 | LEGACY SUPPLY
6 | 11 | PENCIL | 5 | LEGACY SUPPLY | 1-Jul-10 | 31-Dec-99 | 5 | LEGACY SUPPLY

A SQL sketch of this maintenance logic is given after the summary.

QUERY

Let's next see how our Type 6 Product dimension works against a sales fact. (In real sales fact data you will have other dimensions as well, meaning the fact table will have more surrogate key columns than just the product surrogate key.) If our interest is in the current version, our SQL query will use the current product group column. An example SQL query will look like:

SELECT current_product_group_name, SUM(sales_amt)
FROM sales_fact s, product_dim p
WHERE s.product_sk = p.product_sk
AND product_name = 'PENCIL'
GROUP BY current_product_group_name

The output of this query shows all of PENCIL's sales under its single current product group.

The reason for applying SCD Type 2 is to have a complete history that tracks changes. SQL queries that take dimension history into account use the product group column:

SELECT product_group_name, SUM(sales_amt)
FROM sales_fact s, product_dim p
WHERE s.product_sk = p.product_sk
AND product_name = 'PENCIL'
GROUP BY product_group_name

The output of this query breaks PENCIL's sales down by the product group that was in effect when each sale occurred.

SUMMARY

This article discusses what SCD Type 6 is, when to apply it, and how it works. The name Type 6 comes from the 'sum' of the three basic SCD types (6 = 1 + 2 + 3).
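To make the maintenance steps concrete, here is a minimal SQL sketch of the 1 April 2010 change applied as Type 6, using the product_dim table and column names from the example above. The literal surrogate key, the assumption that 31-Dec-99 stands for the "not expired" date 31-Dec-9999, and the Oracle-style date arithmetic are illustrative assumptions, not part of the article.

-- Type 1 part: overwrite the current product group on ALL rows of the product
UPDATE product_dim
   SET current_product_group_code = 4,
       current_product_group_name = 'NON ELECTRONIC SUPPLY'
 WHERE product_code = 11;

-- Type 2 part: expire the version that was current until now
-- (assuming 31-Dec-99 in the example stands for 31-Dec-9999, i.e. "not expired")
UPDATE product_dim
   SET expiry_date = DATE '2010-04-01' - 1
 WHERE product_code = 11
   AND expiry_date = DATE '9999-12-31';

-- Type 2 part: insert the new current version
-- (in practice the surrogate key would come from a sequence)
INSERT INTO product_dim
  (product_sk, product_code, product_name,
   product_group_code, product_group_name,
   effective_date, expiry_date,
   current_product_group_code, current_product_group_name)
VALUES
  (5, 11, 'PENCIL',
   4, 'NON ELECTRONIC SUPPLY',
   DATE '2010-04-01', DATE '9999-12-31',
   4, 'NON ELECTRONIC SUPPLY');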
Configuring and Formatting iReport Elements

Packt
29 Mar 2010
7 min read
A complete report is structured by composing a set of sections called bands. Each band has its own configurable height, a particular position in the structure, and is used for a particular objective. The available bands are: Title, Page Header, Column Header, Detail 1, Column Footer, Page Footer, Last Page Footer, and Summary. A report structured with bands is shown in the following screenshot:

Besides the mentioned bands, there are two special bands: Background and No Data.

Band | Description
Title | The first band of the report; it is printed only once. The title can be shown on a new page. You can configure this from the report properties discussed in the previous section of this chapter. Just to review: go to report Properties | More... and check the Title on a new page checkbox.
Page Header | Printed on each page of the report; used for setting up the page header.
Column Header | Printed on each page, if there is a detail band on that page. This band is used for the column headings.
Detail | This band is printed repeatedly, once for each row in the data source. In the List of Products report, it is printed for each product record.
Column Footer | Printed on each page if there is a detail band on that page. This band is used for the column footer. If the Float column footer option in the report Properties is checked, the column footer is shown just below the last data of the column; otherwise it is shown at the bottom of the page (above the page footer).
Page Footer | Printed on each page except the last page, if Last Page Footer is set. If Last Page Footer is not set, it is printed on the last page as well. This band is a good place to insert page numbers.
Last Page Footer | Printed only on the last page, as a page footer.
Summary | Printed only once, at the end of the report. It can be printed on a separate page if this is configured from the report Properties. In the following chapters, we will produce some reports where you will learn about the suitability of this band.
Background | Used for setting a page background, for example a watermark image for the report pages.
No Data | Printed when no data is available for the report, if it is set as the When no data option in the report Properties.

Showing/hiding bands and inserting elements

Now, we are going to configure the report bands (setting height, visibility, and so on) and format the report elements.

Select Column Footer from the Report Inspector. You will see the Column Footer - Properties on the right of the designer. Type 25 in the Band height field and press Enter. Now you can see the Column Footer band in your report, which was invisible before you set the band height. A band becomes invisible in the report if its height is set to zero.

We have already learned how to change the height of a band. We can also make a band invisible using the Print When Expression option. If we write new Boolean(false) in the Print When Expression of a band, the band will be invisible, even though its height is set to greater than zero. If we write new Boolean(true), the band will be visible. It is true by default.

Drag a Static Text element from the Palette window and drop it on the Column Footer band. Double-click on the Static Text and type End of Record, replacing the text Static Text.

Select the static text element (End of Record). Go to Format | Position and then choose Center. Now the element has been positioned in the center of the Column Footer band. In the same way, insert two Line elements.
Place one line at the left and the other at the right of the static text. Select both lines, go to Format | Position, and then choose Center Vertically. The lines are now positioned vertically in the center of the Column Footer.

Select both lines and go to Format | Size, then choose Same Width. Now both lines are equal in width.

Select the static text element (End of Record) and the left line. Go to Format | Position and choose Join Sides Right. This moves the line to the right, and it is now connected to the static text element. Repeat the previous step for the right line and finally choose Join Sides Left. Now that line has moved to the left and is connected to the static text element.

In the same way, change the column headers as you want by double-clicking the labels on the Column Header band. The columns may now be Product Code, Name, and Description. Your report design should look like the following screenshot:

Preview the report, and you will see the lines and static text (End of Record) at the bottom of the column. By default, the Column Footer is placed at the bottom of the page. To show the Column Footer just below the table of data, the Float column footer option must be enabled in the report Properties window.

Sizing elements

We can increase or decrease the size of an element by dragging the mouse accordingly. Sometimes, we need to set the size of an element automatically based on other elements' sizes. There are various options for setting the automatic size of an element. These options are available in the format menu (Format | Size).

Size Options | Description
Same Width | Makes the selected elements the same width. The width of the element that you select first is used as the new width of the selected elements.
Same Width (max) | The width of the largest of the selected elements is set as the width of all the selected elements.
Same Width (min) | The width of the smallest of the selected elements is set as the width of all the selected elements.
Same Height | Makes the selected elements the same height. The height of the element that you select first is used as the new height of the selected elements.
Same Height (max) | The height of the largest of the selected elements is set as the height of all the selected elements.
Same Height (min) | The height of the smallest of the selected elements is set as the height of all the selected elements.
Same Size | Both the width and the height of the selected elements become the same.

Position | Description
Center Horizontally (band/cell based) | The selected element is placed in the center of the band horizontally.
Center Vertically (band/cell based) | The selected element is placed in the center of the band vertically.
Center (in band/cell) | The selected element is placed in the center of the band both horizontally and vertically.
Center (in background) | If the Background band is visible and the element is on the Background band, it will be placed in the center both horizontally and vertically.
Join Left | Joins two elements. For joining, one element will be moved to the left.
Join Right | Joins two elements. For joining, one element will be moved to the right.
Align to Left Margin | The selected element will be joined with the left margin of the report.
Align to Right Margin | The selected element will be joined with the right margin of the report.
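For reference, the band height and Print When Expression set above live in the report's .jrxml source, which iReport maintains for you. The following fragment is only a hedged sketch of roughly what is generated for the Column Footer; the element positions are illustrative and the exact markup can vary between JasperReports versions:

<columnFooter>
    <band height="25">
        <!-- new Boolean(false) here would hide the band even though its height is 25 -->
        <printWhenExpression><![CDATA[new Boolean(true)]]></printWhenExpression>
        <staticText>
            <reportElement x="200" y="5" width="155" height="15"/>
            <text><![CDATA[End of Record]]></text>
        </staticText>
    </band>
</columnFooter>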

Setting Up the iReport Pages

Packt
29 Mar 2010
2 min read
Configuring the page format

We can follow the listed steps for setting up report pages:

1. Open the report List of Products.
2. Go to menu Window | Report Inspector. The following window will appear on the left side of the report designer:
3. Select the report List of Products, right-click on it, and choose Page Format…. The Page format… dialog box will appear; select A4 from the Format drop-down list, and select Portrait from the Page orientation section.
4. You can modify the page margins if you need to, or leave them as they are to keep the default margins. For our report, you need not change the margins.
5. Press OK.

Page size

You have seen that there are many preset sizes/formats for the report, such as Custom, Letter, Note, Legal, A0 to A10, B0 to B5, and so on. You will choose the appropriate one based on your requirements. We have chosen A4. If the number of columns is too high to fit in Portrait, then choose the Landscape orientation.

If you change the preset size, the report elements (title, column headings, fields, or other elements) will not be positioned automatically according to the new page size. You have to position each element manually, so be careful if you decide to change the page size.

Configuring properties

We can modify the default settings of report properties in the following way:

1. Right-click on List of Products and choose Properties.
2. We can configure many important report properties from the Properties window. You can see that there are many options here. You can change the Report name, Page size, Margins, Columns, and more.

We have already learnt about setting up pages, so now our concern is to learn about some of the other (More…) options.
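For reference, the page settings chosen above end up as attributes of the report's root element in the .jrxml source. This is a hedged sketch only: A4 corresponds to 595 x 842 points, but the margin and column width values shown are illustrative, and attribute details can differ between JasperReports versions.

<jasperReport name="List of Products"
              pageWidth="595" pageHeight="842" orientation="Portrait"
              columnWidth="555"
              leftMargin="20" rightMargin="20" topMargin="20" bottomMargin="20">
    <!-- the report bands (title, column header, detail, and so on) go here -->
</jasperReport>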

iReport in NetBeans

Packt
19 Mar 2010
3 min read
Creating different types of reports inside the NetBeans IDE

The first step is to download the NetBeans IDE and the iReport plugin for it. The iReport plugin for NetBeans is available for free download at the following locations: https://sourceforge.net/projects/ireport/files or http://plugins.netbeans.org/PluginPortal/faces/PluginDetailPage.jsp?pluginid=4425

After downloading the plugin, follow the listed steps to install the plugin in NetBeans:

1. Start the NetBeans IDE.
2. Go to Tools | Plugins.
3. Select the Downloaded tab.
4. Press Add Plugins….
5. Select the plugin files. For iReport 3.7.0 the plugins are: iReport-3.7.0.nbm, jasperreports-components-plugin-3.7.0.nbm, jasperreports-extensions-plugin-3.7.0.nbm, and jasperserver-plugin-3.7.0.nbm. After opening the plugin files you will see the following screen:
6. Check the Install checkbox of ireport-designer, and press the Install button at the bottom of the window. The following screen will appear:
7. Press Next >, and accept the terms of the License Agreement.
8. If the Verify Certificate dialog box appears, click Continue.
9. Press Install, and wait for the installer to complete the installation.
10. After the installation is done, press Finish and close the Plugins dialog. If the IDE requests a restart, then do it.

Now the IDE is ready for creating reports.

Creating reports

We have already learnt about creating various types of reports, such as reports without parameters, reports with parameters, reports with variables, subreports, crosstab reports, reports with charts and images, and so on. We have also attained knowledge associated with these types of reports. Now, we will learn quickly how to create these reports using NetBeans with the help of the installed iReport plugins.

Creating a NetBeans database JDBC connection

The first step is to create a database connection, which will be used by the report data sources. (A plain-JDBC sketch that uses these same settings appears at the end of this article.) Follow the listed steps:

1. Select the Services tab from the left side of the project window.
2. Select Databases.
3. Right-click on Databases, and press New Connection….
4. In the New Database Connection dialog, set the following under Basic setting, and check the Remember password checkbox:

Option | Value
Driver Name | MySQL (Connector/J Driver)
Host | localhost
Port | 3306
Database | inventory
User Name | root
Password | packt

5. Press OK.

Now the connection is created, and you can see it under the Services | Databases section, as shown in the following screenshot:

Creating a report data source

The NetBeans database JDBC connection created previously will be used by a report data source, which in turn will be used by the report. Follow the listed steps to create the data source:

1. From the NetBeans toolbar, press the Report Datasources button. You will see the following dialog box:
2. Press New.
3. Select NetBeans Database JDBC connection, and press Next >.
4. Enter inventory in the Name field, and from the Connection drop-down list, select jdbc:mysql://localhost:3306/inventory [root on Default schema].
5. Press Test, and if the connection is successful, press Save and close the Connections / Datasources dialog box.
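As a quick sanity check outside the IDE, the same connection settings from the table above can be exercised with a few lines of plain JDBC. This is only an illustrative sketch; it assumes the Connector/J driver is on the classpath and reuses the Product table that is queried elsewhere in these tutorials.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class InventoryConnectionTest {
    public static void main(String[] args) throws Exception {
        // Same settings as the NetBeans connection: host localhost, port 3306,
        // database inventory, user root, password packt
        Class.forName("com.mysql.jdbc.Driver");
        Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/inventory", "root", "packt");

        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT ProductCode, Name FROM Product");
        while (rs.next()) {
            System.out.println(rs.getString(1) + " - " + rs.getString(2));
        }

        rs.close();
        stmt.close();
        conn.close();
    }
}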

MS Access Queries with Oracle SQL Developer 1.2 Tool

Packt
17 Mar 2010
12 min read
In my previous article on the Oracle SQL Developer 1.1, I discussed the installation and features of this stand-alone GUI product, which can be used to query several database products. Connecting to an Oracle 10G XE instance was also described. The many things you do in Oracle 10G XE can also be carried out with the Oracle SQL Developer, and it is expected to enhance productivity in your Oracle applications. You can use Oracle SQL Developer to connect, run, and debug SQL, SQL*Plus, and PL/SQL. It can run on at least three different operating systems. Using this tool you can connect to Oracle, SQL Server, MySQL, and MS Access databases.

In this article, you will learn how to install the Oracle SQL Developer 1.2 and connect to an MS Access database. The 1.2 version has several features that were not present in version 1.1, especially regarding migration from other products.

Downloading and installing the Oracle SQL Developer

Go to the Oracle site (you need to be registered to download) and, after accepting the license agreement, you will be able to download sqldeveloper-1.2.2998.zip, a 77 MB download if you do not have JDK 1.5 already installed. You may place this in any directory. From the unzipped contents, double-click on SQLDeveloper.exe.

The User Interface

On a Windows machine, you may get a security warning which you may safely override and click on Run. This opens up the splash window shown in the next picture, followed by the Oracle SQL Developer interface shown in the picture that follows.

Figure 1

The main window

The main window of this tool is shown in the next picture.

Figure 2

It has a main menu at the very top where you can access the File, Edit, View, Navigate, Run, Debug, Source, Migration, Tools, and Help menus. The Migration menu item has been added in this new version. Immediately below the main menu on the left, you have a tabbed window with two tabs, Connections and Reports. This will be the item you have to contend with, since most things start only after establishing a connection. The connection brings with it the various related objects in the databases.

View Menu

The next picture shows the drop-down of the View main menu, where you can see other details such as links to the debugger, reports, connections, and snippets. In this new version, many more items have been added, such as Captured Objects, Converted Objects, and Find DB Object.

Figure 3

Snippets are often-used SQL statements or clauses that you may want to insert. You may also save your own snippets by clicking on the bright green plus sign in the window shown, which opens up the superposed Save Snippet window.

Figure 4

In the Run menu, you can run files as well as look at the Execution Profile.

Debug Menu

The Debug menu has all the necessary hooks to toggle breakpoints: step into, step over, step out, and step to End of Method, among others, including garbage collection and clean up, as shown in the next picture.

Figure 5

Tools Menu

The Tools menu gives access to external tools that can be launched, exports of both DDL and data, schema diff, and more, as shown in the next picture.

Figure 6

Help gives you both full-text search and indexed search. This is an important area which you must visit; you can also update the help.

Figure 7

About Menu

The About drop-down menu item in the above figure opens up the following window, where you have complete information about this product, including version information, properties, and extensions.
Figure 8

Migration Menu

As mentioned earlier, Migration is a new item in the main menu, and its drop-down menu elements are shown in the next picture. It even has a menu item to make a quick migration of all recent versions of MS Access (97, 2000, 2002, and 2003). The Repository Management item is another very useful feature. The MySQL and SQL Server Offline Capture menu item can capture database create scripts from several versions of MySQL and MS SQL Server by locating them on the machine.

Figure 9

Connecting to a Microsoft Access Database

If you are interested in Oracle 10G XE, it will be helpful if you refresh your Oracle 10G XE knowledge or read the several Oracle 10G XE articles whose links are shown on the author's blog. This is a good place for you to look at new developments, scripts, UI description, etc. This section, however, deals with connecting to an MS Access database.

Click on the "Connections" icon with the bright green plus sign as shown in the next figure.

Figure 10

This opens up the next window, New/Select Database Connection. This is where you must supply all the information. As you can see, it has identified a resident (that is, local) Oracle 10G XE server on the machine. Of course, you need to configure it further. In addition to Oracle, it can connect to MySQL, MS Access, and SQL Server as well. This interface has not changed much from version 1.1; you have the same control elements.

Figure 11

On the left-hand side of this window, you will generate the Connection Name and Connection Details once you fill in the appropriate information on the right. The connection name is what you supply; to get connected you need to have a username and password as well. If you want, you can save the password to avoid providing it again and again. At the bottom of the screen, you can save the connection, test it, and connect to it. There is also access to online help.

In the above window, click on the Access tab in the middle of the page. The following window opens, in which all you need to do is use the Browse button to locate the Microsoft Access database on your machine (the Windows default location for MDB files is My Documents).

Figure 12

Hitting the Browse button opens the Open window with the default location My Documents, the default directory for MDB files.

Figure 13

Choosing a database Charts.mdb and clicking the Open button brings the file pointer to the New/Select Database Connection page, in the box to the left of the Browse button. When you click on the Test button, if the connection is OK you should get an appropriate message. However, for the Charts.mdb file you get the following error.

Figure 14

The software is complaining about the lack of read access to the system tables.

Providing read access to System tables

There are a couple of system tables in MS Access which are usually hidden, but they can be displayed using the Tools option in MS Access.

Figure 15

In the View tab, if you place a check mark for System objects then you will see the following tables. The system tables are as shown in the red rectangle.

Figure 16

If you need to modify the security settings for these tables, you can do so as shown in the next figure by following the trail Tools | Security | User and Group Permissions.

Figure 17

Click on the User and Group Permissions menu item, which opens the next window, Users and Group Permissions, shown here.

Figure 18

For the user who is Admin, scroll through each of the system tables and place a check mark in the Read Design and Read Data check boxes.
Click on the OK button and close the application. Now you again use the Browse button to locate the Charts.mdb file, after providing a name for the connection at the top of the New/Select Database Connection page. For this tutorial, MyCharts was chosen as the name for the connection. Once this file name appears in the box to the left of the Browse button, click on the Test button. This message screen is very fast (it appears and disappears). If there is a problem, it will bring up the message as before.

Now click on the Connect button at the bottom of the New/Select Database Connection page. This immediately adds the connection MyCharts to the Connections folder shown on the left. The + sign can be clicked to expand all the objects in the Charts.mdb database as shown in the next figure.

Figure 19

You can further expand the Table nodes to show the data that a table contains, as shown in the next figure for the Portfolio table.

Figure 20

The Relationships tab in the above figure shows related and referenced objects. This is just a lone table with no relationships established, and therefore none are showing.

Figure 21

It may be noted that the Oracle SQL Developer can only connect to MDB files. It cannot connect to Microsoft Access projects (ADP files), or the new MS Access 2007 file types.

Using the SQL Interface

SQL statements are run from the SQL Worksheet, which can be displayed by right-clicking the connection and choosing the Open SQL Worksheet item from the drop-down list as shown.

Figure 22

You will type the SQL queries in the area below the Enter SQL Statement label in the above figure (now hidden behind the drop-down menu).

Making a new connection with more tables and a relationship

In order to run a few simple queries on the connected database, three more tables were imported into Charts.mdb after establishing a relationship between the new tables in the Access database, as shown in the following figure.

Figure 23

Another connection named NewCharts was created in Oracle SQL Developer. The connection string that SQL Developer will take for NewCharts is of the following format (some white spaces were introduced into the connection string shown to get rid of MS Word warnings):

@jdbc:odbc:Driver={Microsoft Access Driver (*.mdb)}; DBQ=C:\Documents and Settings\Jay\My Documents\Charts.mdb; DriverID=22;READONLY=false}

This string can be reviewed after a connection is established, in the New/Select Database Connection window as shown in the next figure. (A plain-Java sketch that uses this style of string appears at the end of this section.)

Figure 24

A simple Query

Let us look at a very simple query using the PrincetonTemp table. After entering the query, you can execute the statement by clicking on the right-pointing green arrowhead as shown. The result of running this query will appear directly below the Enter SQL Statement window as shown.

Figure 25

Just above the Enter SQL Statement label is the SQL toolbar, displaying several icons (left to right), which are as follows with the associated keyboard access:

- Execute SQL Statement (F9), the green arrowhead
- Run Script (F5)
- Commit (F11)
- Rollback (F12)
- Cancel (Ctrl+Q)
- SQL History (F8)
- Execute Explain Plan (F6)
- Autotrace (F10)
- Clear (Ctrl+D)

It also displays the time taken to execute, 0.04255061 seconds in this case. The bottom pane shows the result of the query in a table format. If you run the same query with the Run Script (F5) button, you will see the result in the Script Output tab of the bottom pane.
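Outside SQL Developer, the same jdbc:odbc style of connection string could be used from a plain Java program through the JDBC-ODBC bridge driver that shipped with JDKs of this era (sun.jdbc.odbc.JdbcOdbcDriver, since removed as of Java 8). The following is only an illustrative sketch querying the PrincetonTemp table mentioned above, not something the tool requires you to write:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class AccessBridgeExample {
    public static void main(String[] args) throws Exception {
        // Load the JDBC-ODBC bridge driver (present in JDK 1.5/1.6, removed in Java 8)
        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");

        // Same style of connection string that SQL Developer builds for the MDB file
        String url = "jdbc:odbc:Driver={Microsoft Access Driver (*.mdb)};"
                   + "DBQ=C:\\Documents and Settings\\Jay\\My Documents\\Charts.mdb;"
                   + "DriverID=22;READONLY=false";

        Connection conn = DriverManager.getConnection(url);
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT * FROM PrincetonTemp");
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }

        rs.close();
        stmt.close();
        conn.close();
    }
}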
In addition to SQL, you can also use the SQL Worksheet to run SQL*Plus and PL/SQL statements, with some exceptions, as long as they are supported by the provider used in making the connection.

Viewing relationships between tables in the SQL Developer

Three tables, Orders, Order Details, and Products, were imported into Charts.mdb after enforcing referential integrity relationships in the Access database, as seen earlier. Will the new connection NewCharts be able to see these relationships? This question is answered in the following figure.

Click on any one of these tables and you will see the Data as well as Relationships tabs in the bottom pane as shown.

Figure 26

Now, if you click on the Relationships tab for Products, you will see the display just showing three empty columns, as seen in an earlier figure. However, if you click on Order Details, which really links the three tables, you will see the following displayed.

Figure 27

Query joining three tables

The Orders, Order Details, and Products tables are related by referential integrity as seen above. The following query, which chooses one or two columns from each table, can be run in a new SQL Worksheet:

SELECT Products.ProductName, Orders.ShipName, Orders.OrderDate, [Order Details].Quantity
FROM Products, Orders, [Order Details]
WHERE Orders.OrderID = [Order Details].OrderID
AND Products.ProductID = [Order Details].ProductID
AND Orders.OrderDate > '12-31-1997'

The result of running this query (only four rows of data shown) can be seen in the next figure.

Figure 28

Note that the syntax must match the syntax required by the ODBC provider; the date has to be #12-31-1997# instead of '12-31-1997'.

Summary

The article described the new version of Oracle's stand-alone GUI SQL Developer tool. It can connect to a number of databases, such as MS Access, SQL Server, Oracle, and MySQL. Its utility could have been far greater had it provided connectivity to ODBC and OLE DB; I am disappointed it did not, in this version as well. The connection to MS Access seems to bring in not only tables but also the other objects except Data Access Pages, although the external applications that you can use are limited to Word, Notepad, IE, and the like, and do not include a Report Viewer. These objects and their utility remain to be explored. Only a limited number of features were explored in this article, and it excluded new features like Migration and the Translation Scratch Editor, which translates MS Access, SQL Server, and MySQL syntaxes to PL/SQL. These will be considered in a future article.

Building Your First iReport

Packt
02 Mar 2010
4 min read
So let's get on with it!

Creating a connection/data source

Before creating the connection, a database should be set up. The SQL script for the database used for creating the reports can be downloaded from the Packt website. Now, we are going to create a connection/data source in iReport and build our first report in some easy to follow steps. You need to create the connection/data source just once, before developing the first report; this connection will be reused for the following reports.

1. Start iReport.
2. Press the Report Datasources button in the toolbar. You will see a dialog box similar to the following screenshot:
3. Press the New button. Another dialog box will appear for selecting the data source type. There are several types to choose from, according to your requirements. For now, choose Database JDBC connection, and press Next >.
4. Another dialog box will appear to set up the Database JDBC connection properties. Give a sensible name to the connection. In this case, it is inventory.
5. Choose the JDBC Driver from the list, according to your connection type and/or your database. In this case, it is MySQL (com.mysql.jdbc.Driver).
6. Write the JDBC URL, according to the driver you have chosen. For this tutorial, it is jdbc:mysql://localhost/inventory. In this URL, jdbc is the connection protocol, mysql is the subprotocol, localhost is the MySQL server if it runs on the same computer, and inventory is the database name.
7. Enter the Username and Password. Generally, for a MySQL server, the username is root and you have set a customized password during the installation of the MySQL server. The screenshot is as follows:
8. Press Test to confirm that you have set all the properties correctly. If all the settings are correct, then you will see a message that says Connection test successful!.
9. You can save the password by checking the Save Password checkbox, but be warned that iReport stores passwords in clear text. Storing passwords in clear text is a bad thing for us, isn't it? If you do not specify a password now, iReport will ask you for one only when required and will not save it.
10. Now save the connection. You will see that the newly created connection is listed in the Connections/Datasources window.
11. If you have more than one connection, you can set one as the default connection. In order to do this, select the connection and press Set as Default.

When we execute the report with an active connection, the reports are filled with data from the database or other data sources. We can also see the report output with an empty data source, which has, by default, a single record with all fields set to null. An empty data source is used to print a static report. However, in order to choose the tables and columns from a database automatically using the Report Wizard, we need to connect to a database/data source first. To do this, we must create a connection/data source.

Building your first report

Having set up a connection, we are ready to build our first report. We will keep it very simple, just to become familiar with the steps required for building a report. We will create a report that lists all the products; that is, we will show all the rows of the product table of our database.
Follow the steps listed and build your first report:

1. Go to the File menu and click New…. You will see a dialog box like the following screenshot:
2. From the list of Report templates, select Simple Blue and press Launch Report Wizard.
3. Enter the Report name as List of Products and press Next >.
4. Now you will specify the query to retrieve the report fields. Select your connection from the Connections / Data Sources drop-down list.
5. Write the SQL query for the report you want to develop. In our case, it is:

SELECT ProductCode, Name, Description FROM Product