
How-To Tutorials - Data

1210 Articles

Working With ASP.NET DataList Control

Packt
19 Feb 2010
8 min read
In this article by Joydip Kanjilal, we will discuss the ASP.NET DataList control, which can be used to display a list of repeated data items. We will learn about the following:

- Using the DataList control
- Binding images to a DataList control dynamically
- Displaying data using the DataList control
- Selecting, editing, and deleting data using this control
- Handling the DataList control events

The ASP.NET DataList Control

The DataList control, like the Repeater control, is a template-driven, lightweight control that acts as a container of repeated data items. The templates in this control define the data that it will contain. It is flexible in the sense that you can easily customize the display of one or more records shown in the control. The DataList control has a property called RepeatDirection that can be used to customize the layout of the control. RepeatDirection accepts one of two values, Vertical or Horizontal, and is Vertical by default. If you change it to Horizontal, rather than displaying the data as rows and columns, the DataList control will display it as a list of records with the columns in the rendered data displayed as rows.

This comes in handy in situations where you have too many columns in your database table, or columns holding data of larger widths. As an example, imagine what would happen if there were a field called Address in our Employee table holding data of a large size, and you were displaying the data using a Repeater, a DataGrid, or a GridView control. You would not be able to display columns of such large data sizes with any of these controls, as the display would look awkward. This is where the DataList control fits in. In a sense, you can think of the DataList control as a combination of the DataGrid and the Repeater controls. You can use templates with it much as you did with a Repeater control, and you can also edit the records displayed in the control, much like the DataGrid control of ASP.NET. The next section compares the features of the three controls that we have mentioned so far: the Repeater, the DataList, and the DataGrid.

When the web page is in execution, with the data bound to it in the Page_Load event, the data in the DataList control is rendered as DataListItem objects; that is, each item displayed is actually a DataListItem. Like the Repeater control, the DataList control does not have paging and sorting functionality built into it.

Using the DataList Control

To use this control, drag and drop the control from the toolbox onto a web form in the design view. Refer to the following screenshot, which displays a DataList control on a web form. The following list outlines the steps that you can follow to add a DataList control to a web page and make it work:

- Drag and drop a DataList control onto the web form from the toolbox.
- Set the DataSourceID property of the control to the data source that you will use to bind data to the control; for example, you can set this to an SQL Data Source control.
- Open the .aspx file, declare the <ItemTemplate> element, and define the fields as per your requirements.
- Use data binding syntax, through the Eval() method, to display data in these defined fields of the control.

You can bind data to the DataList control in two different ways: using the DataSourceID property or using the DataSource property.
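As a quick sketch of the two binding approaches (hedged: the SqlDataSource ID, the connection string name, and the GetEmployees() helper are illustrative, not from the article):

<!-- Declarative: the control pulls its data through DataSourceID -->
<asp:SqlDataSource ID="SqlDataSource1" runat="server"
    ConnectionString="<%$ ConnectionStrings:EmployeeDB %>"
    SelectCommand="SELECT EmpCode, EmpName FROM Employee" />
<asp:DataList ID="dlEmployee" runat="server" DataSourceID="SqlDataSource1">
    <ItemTemplate>
        <%# Eval("EmpName") %>
    </ItemTemplate>
</asp:DataList>

Alternatively, assign any suitable collection to the DataSource property in code-behind and call DataBind() yourself:

// Programmatic: bind in Page_Load using the DataSource property
protected void Page_Load(object sender, EventArgs e)
{
    if (!IsPostBack)
    {
        dlEmployee.DataSource = GetEmployees(); // hypothetical helper returning a DataSet
        dlEmployee.DataBind();
    }
}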
You can use the built-in features, such as selecting and updating data, when using the DataSourceID property. Note that you need to write custom code for selecting and updating data against any data source that implements the ICollection or IEnumerable interface. We will discuss more on this later. The next section discusses how you can display data in the DataList control.

Displaying Data

Like the Repeater control, the DataList control uses templates to display the data items within the control. Since there are no data columns associated with this control, you use templates to display data. Every column in a DataList control is rendered as a <span> element. A DataList control is useless without templates. Let us now learn what templates are, the types of templates, and how to work with them.

A template is a combination of HTML elements, controls, and embedded server controls, and can be used to customize and manipulate the layout of controls like the Repeater, DataGrid, or DataList. There are seven templates and seven styles in all. You can use templates for the DataList control in the same way you did when using the Repeater control. The templates are as follows:

- ItemTemplate
- AlternatingItemTemplate
- EditItemTemplate
- FooterTemplate
- HeaderTemplate
- SelectedItemTemplate
- SeparatorTemplate

The following screenshot illustrates the different templates of this control. As you can see from the figure, the templates are grouped under three broad categories:

- Item Templates
- Header and Footer Templates
- Separator Template

Note that of the templates given above, the ItemTemplate is the one and only mandatory template that you have to use when working with a DataList control. Here is a sample of how your DataList control's templates are arranged:

<asp:DataList id="dlEmployee" runat="server">
  <HeaderTemplate>...</HeaderTemplate>
  <ItemTemplate>...</ItemTemplate>
  <AlternatingItemTemplate>...</AlternatingItemTemplate>
  <FooterTemplate>...</FooterTemplate>
</asp:DataList>

The following screenshot displays a DataList control populated with data, with its templates indicated.

Customizing a DataList control at run time

You can customize the DataList control at run time using the ListItemType property in the ItemCreated event of this control, as follows:

private void DataList1_ItemCreated(object sender,
    System.Web.UI.WebControls.DataListItemEventArgs e)
{
    switch (e.Item.ItemType)
    {
        case System.Web.UI.WebControls.ListItemType.Item:
            e.Item.BackColor = Color.Red;
            break;
        case System.Web.UI.WebControls.ListItemType.AlternatingItem:
            e.Item.BackColor = Color.Blue;
            break;
        case System.Web.UI.WebControls.ListItemType.SelectedItem:
            e.Item.BackColor = Color.Green;
            break;
        default:
            break;
    }
}

The styles that you can use with the DataList control to customize its look and feel are:

- AlternatingItemStyle
- EditItemStyle
- FooterStyle
- HeaderStyle
- ItemStyle
- SelectedItemStyle
- SeparatorStyle

You can use any of these styles to format the control, that is, to format the HTML code that is rendered.
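To make the style list concrete, here is a minimal declarative sketch (the color choices are arbitrary) applying a few of the styles listed above:

<asp:DataList ID="dlEmployee" runat="server">
    <HeaderStyle BackColor="Navy" ForeColor="White" Font-Bold="true" />
    <ItemStyle BackColor="White" />
    <AlternatingItemStyle BackColor="LightGray" />
    <ItemTemplate>
        <%# Eval("EmpName") %>
    </ItemTemplate>
</asp:DataList>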
You can also use the layouts of the DataList control for formatting, that is, for further customization of your user interface. The available layouts are as follows:

- FlowLayout
- TableLayout
- VerticalLayout
- HorizontalLayout

You can specify your desired flow or table format at design time by specifying the following in the .aspx file:

RepeatLayout = "Flow"

You can also do the same at run time by setting the RepeatLayout property of the DataList control, as shown in the following code snippet:

DataList1.RepeatLayout = RepeatLayout.Flow;

In this snippet, it is assumed that the name of the DataList control is DataList1.

Let us now understand how we can display data using the DataList control. For this, we first drag and drop a DataList control onto our web form and specify the templates for displaying data. The code in the .aspx file is as follows (note that the footer closes the table opened in the header):

<asp:DataList ID="DataList1" runat="server">
  <HeaderTemplate>
    <table border="1">
      <tr>
        <th>Employee Code</th>
        <th>Employee Name</th>
        <th>Basic</th>
        <th>Dept Code</th>
      </tr>
  </HeaderTemplate>
  <ItemTemplate>
    <tr bgcolor="#0xbbbb">
      <td><%# DataBinder.Eval(Container.DataItem, "EmpCode")%></td>
      <td><%# DataBinder.Eval(Container.DataItem, "EmpName")%></td>
      <td><%# DataBinder.Eval(Container.DataItem, "Basic")%></td>
      <td><%# DataBinder.Eval(Container.DataItem, "DeptCode")%></td>
    </tr>
  </ItemTemplate>
  <FooterTemplate>
    </table>
  </FooterTemplate>
</asp:DataList>

The DataList control is populated with data in the Page_Load event of the web form using the DataManager class, as usual:

protected void Page_Load(object sender, EventArgs e)
{
    DataManager dataManager = new DataManager();
    DataList1.DataSource = dataManager.GetEmployees();
    DataList1.DataBind();
}

Note that the DataBinder.Eval() method has been used, as usual, to display the values of the corresponding fields from the data container in the DataList control. The data container in our case is the DataSet instance that is returned by the GetEmployees() method of the DataManager class. When you execute the application, the DataList renders the employee records as an HTML table.
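The DataManager class above is a helper used throughout the book's examples rather than a framework class, and its implementation is not shown in this extract. A minimal sketch of what its GetEmployees() method might look like, assuming a connection string named ConnString and the four columns bound in the templates:

using System.Configuration;
using System.Data;
using System.Data.SqlClient;

public class DataManager
{
    // Hypothetical helper: fills a DataSet with the employee rows bound above
    public DataSet GetEmployees()
    {
        string connString =
            ConfigurationManager.ConnectionStrings["ConnString"].ConnectionString;
        using (SqlConnection conn = new SqlConnection(connString))
        using (SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT EmpCode, EmpName, Basic, DeptCode FROM Employee", conn))
        {
            DataSet ds = new DataSet();
            adapter.Fill(ds); // Fill opens and closes the connection itself
            return ds;
        }
    }
}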


Author Podcast - Aleksander Seovic Talks About Oracle Coherence 3.5

Packt
17 Feb 2010
1 min read
Aleksander Seovic is the author of Oracle Coherence 3.5, which will help you design and build scalable, reliable, high-performance applications using the software of the same name. The book is due out in March, but you can get a flavour of it in his interview with Cameron Purdy, below. For more information on Aleksander's book, visit: http://www.packtpub.com/oracle-coherence-3-5/book.


Oracle 11g Streams: RULES (Part 2)

Packt
05 Feb 2010
10 min read
Rule based transformation—eat your heart out, Transformers! As with all good rules, some are made to be broken, or maybe changed. In some circumstances we need to have rules that govern change. In Advanced Replication, a number one rule is that a replicated table must have the same structure at all master sites. The column names and data types have to be identical or the "apply" of a deferred transaction will fail. With Streams, we can now break this rule by adding a new rule that allows the LCR to "morph" to a new structure. We call this ability Rule Based Transformation, and it is done via complex rules and action contexts.

When you plan your Rule Based Transformation design, remember that Rule Based Transformation rules are only evaluated with positive Rule Sets. If the Rule Set is negative, the Rule Based Transformation is ignored.

Declarative versus User Created

In the real world, there are many ways to accomplish the same thing, just as there are many ways to model data. You may run into a situation where the table structure in one master database differs from the structure of the table in another master database, but data must be replicated between them. It could be that a table column at one master is a VARCHAR2, but is a DATE at another master site. Or perhaps the column does not exist at all. Rule Based Transformation provides the ability to capture the LCR and convert it to the structure needed to apply it at the destination site. This is not to be confused with transformations accomplished via the DBMS_TRANSFORMATION package. That is a different fish (and doesn't swim in this stream).

A special note concerning SUBSET Rules and transformations: a SUBSET Rule has an internal row_migration transformation assigned to it when it is created. This internal transformation will always be the first one executed, before any other transformations. Another thing to keep in mind is the amount of "transformation" that will be applied to the LCR. If extensive transformations need to be made to the LCR, you may wish to consider using a custom DML handler instead, to take advantage of the apply process's parallel capabilities.

The remainder of this section uses the premise that we have an LCR whose column name we need to change before we send it out from the source site. The LCR is generated on a table which has a different column name than the corresponding table at all the other sites. This being the case, we are going to create the transformation at the Capture process. There are two ways to accomplish this: either with a declarative transformation or with a user created transformation. We will review each, and then apply the method to our LCR that needs a column name change. Depending on the transformation type, you can use one of the following views to find information concerning the transformation:

- Declarative: DBA_STREAMS_TRANSFORMATIONS
- User Created: DBA_STREAMS_TRANSFORM_FUNCTION
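For instance, a quick way to see what is registered in each view (a sketch, run as the Streams administrator; the columns match the query output shown later in this article):

-- Declarative transformations
SELECT rule_owner, rule_name, declarative_type,
       from_column_name, to_column_name, step_number
FROM   dba_streams_transformations;

-- User created transformation functions
SELECT rule_owner, rule_name, transform_function_name
FROM   dba_streams_transform_function;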
Declarative Transformation

As of 10g, Oracle provides commonly used transformations in the DBMS_STREAMS_ADM package. These transformations are referred to as declarative transformations. Declarative transformations only work with row LCRs (a.k.a. DML LCRs). The row LCR can be a Streams captured LCR (basic or synchronous), or a user created message. The procedures allow you to add transformation rules to do the following:

- Add a column (DBMS_STREAMS_ADM.ADD_COLUMN)
- Delete a column (DBMS_STREAMS_ADM.DELETE_COLUMN)
- Rename a column (DBMS_STREAMS_ADM.RENAME_COLUMN)
- Rename a table (DBMS_STREAMS_ADM.RENAME_TABLE)
- Rename a schema (DBMS_STREAMS_ADM.RENAME_SCHEMA)

A special consideration when using DBMS_STREAMS_ADM.ADD_COLUMN: be aware that the procedure does not support a number of data types. These include:

- LOBs (BLOB, CLOB, NCLOB, BFILE, and so on)
- LONG, LONG RAW, and so on
- ROWID
- User-defined types (including object types, REFs, varrays, nested tables, and so on)
- Oracle-supplied types (including ANY types, XML types, spatial types, and media types)

For more information on the DBMS_STREAMS_ADM declarative transformation subprograms, please refer to the Oracle Database PL/SQL Packages and Types Reference.

For our purposes, we want to use DBMS_STREAMS_ADM.RENAME_COLUMN to create a declarative transformation. In our example, we will work with the JOB_HISTORY table from the Oracle example HR schema. We will assume that at our source database the HR.JOB_HISTORY table has a column named DEPARTMENT_ID, and that at the destination database the corresponding column in HR.JOB_HISTORY is DEPT_ID.

Declarative transformations can only be added to an existing rule. If the rules specified do not exist, an error is raised. Also, the transformation will be owned by STRM_ADMIN, so make sure you have explicitly granted all privileges on HR.JOB_HISTORY to STRM_ADMIN.

First we find the rule to which we wish to add the declarative transformation. Logged in as STRM_ADMIN, we can look at the USER_RULES view:

SQL> select * from user_rules;

RULE_NAME
------------------------------
RULE_CONDITION
--------------------------------------------------------------
RULE_EVALUATION_CONTEXT_OWNER  RULE_EVALUATION_CONTEXT_NAME
------------------------------ ------------------------------
RULE_ACTION_CONTEXT(ACTX_LIST(NVN_NAME, NVN_VALUE()))
--------------------------------------------------------------
RULE_COMMENT
--------------------------------------------------------------
HR1
((:dml.get_object_owner() = 'HR') and :dml.get_source_database_name() = 'STRM1')
SYS                            STREAMS$_EVALUATION_CONTEXT

HR2
((:ddl.get_object_owner() = 'HR' or :ddl.get_base_table_owner() = 'HR') and :ddl.get_source_database_name() = 'STRM1')
SYS                            STREAMS$_EVALUATION_CONTEXT

HR1 is our row LCR (:dml) rule, so that is the rule to which we will add the transformation. To create our declarative transformation rule, we issue the following command:

begin
  DBMS_STREAMS_ADM.RENAME_COLUMN(
    rule_name        => 'strm_admin.HR1',
    table_name       => 'HR.JOB_HISTORY',
    from_column_name => 'DEPARTMENT_ID',
    to_column_name   => 'DEPT_ID',
    value_type       => '*',   -- default
    step_number      => 0,     -- default
    operation        => 'ADD'  -- default
  );
end;
/

We can now check the rule in the USER_RULES view:

SQL> select * from user_rules where rule_name = 'HR1';

RULE_NAME
------------------------------
RULE_CONDITION
--------------------------------------------------------------
RULE_EVALUATION_CONTEXT_OWNER  RULE_EVALUATION_CONTEXT_NAME
------------------------------ ------------------------------
RULE_ACTION_CONTEXT(ACTX_LIST(NVN_NAME, NVN_VALUE()))
--------------------------------------------------------------
RULE_COMMENT
--------------------------------------------------------------
HR1
((:dml.get_object_owner() = 'HR') and :dml.get_source_database_name() = 'STRM1')
SYS                            STREAMS$_EVALUATION_CONTEXT
RE$NV_LIST(RE$NV_ARRAY(RE$NV_NODE('STREAMS$_INTERNAL_TRANS', ANYDATA())))

Notice that the RULE_COMMENT now has an entry indicating the inclusion of the transformation
rule. We can also look at the DBA_STREAMS_TRANSFORMATION view:

SQL> select rule_owner, rule_name, transform_type,
  2  from_column_name, to_column_name, value_type,
  3  declarative_type, precedence, step_number
  4  from dba_streams_transformations;

RULE_OWNER
------------------------------
RULE_NAME                      TRANSFORM_TYPE
------------------------------ --------------------------
FROM_COLUMN_NAME     TO_COLUMN_NAME       VAL
-------------------- -------------------- ---
DECLARATIVE_TYPE     PRECEDENCE STEP_NUMBER
-------------------- ---------- -----------
STRM_ADMIN
HR1                            DECLARATIVE TRANSFORMATION
DEPARTMENT_ID        DEPT_ID              *
RENAME COLUMN                 2           0

To remove the declarative transformation from the rule, we use the same procedure we used to create the transformation, but set the operation parameter to REMOVE:

begin
  DBMS_STREAMS_ADM.RENAME_COLUMN(
    rule_name        => 'strm_admin.HR1',
    table_name       => 'HR.JOB_HISTORY',
    from_column_name => 'DEPARTMENT_ID',
    to_column_name   => 'DEPT_ID',
    operation        => 'REMOVE'
  );
end;
/

Note: removing the declarative transformation does not clear the RULE_COMMENT we see in the USER_RULES view. However, it does clear the entry from the DBA_STREAMS_TRANSFORMATION view.

For more detailed information on using DBMS_STREAMS_ADM.RENAME_COLUMN and the other declarative transformation procedures, please refer to the Oracle PL/SQL Packages and Types Reference and the Oracle Streams Concepts and Administration Guide.

User Created Rule Based Transformations (UCRBT)

You can also create your own Rule Based Transformations. These transformations are referred to as user-created transformations (imagine that). The steps for creating a UCRBT are pretty basic:

- Create the PL/SQL function that performs the transformation. The function should receive the LCR as a SYS.ANYDATA IN parameter, and should return either an LCR as a SYS.ANYDATA or a STREAMS$_ANYDATA_ARRAY. If the function returns a STREAMS$_ANYDATA_ARRAY, it can only be associated with a capture rule.
- Grant the EXECUTE privilege on the function to the appropriate user as necessary.
- Create or locate the rules for which the transformation will be used.
- Set the custom rule-based transformation for each rule by running the SET_RULE_TRANSFORM_FUNCTION procedure.

In this example, we will set up a UCRBT that makes the same transformation as the previous declarative transformation. The UCRBT is going to be owned by STRM_ADMIN, so make sure you have explicitly granted all privileges on HR.JOB_HISTORY to STRM_ADMIN. The code for this example can be found in the UCRBT.sql code file.

First we create the PL/SQL function to accomplish the transformation. STRM_ADMIN will be the function owner, so make sure you are logged in as STRM_ADMIN in this example:

CREATE OR REPLACE FUNCTION DEPT_COLNAME_CHANGE (evt IN SYS.AnyData)
RETURN SYS.AnyData IS
  lcr      SYS.LCR$_ROW_RECORD;
  obj_name VARCHAR2(30);
  rc       NUMBER;
BEGIN
  IF evt.GetTypeName = 'SYS.LCR$_ROW_RECORD' THEN
    rc := evt.getObject(lcr);
    obj_name := lcr.GET_OBJECT_NAME();
    IF obj_name = 'JOB_HISTORY' THEN
      lcr.RENAME_COLUMN('DEPARTMENT_ID', 'DEPT_ID', '*');
      RETURN SYS.ANYDATA.ConvertObject(lcr);
    END IF;
  END IF;
  RETURN evt;
END;
/

Because STRM_ADMIN is the function owner, we do not need to grant EXECUTE on the function. If the function were created in a different schema, we would want to explicitly grant execute on the function to STRM_ADMIN. Next we determine the rule to which to add the transformation function. You can either create a new rule at this point, or use an existing rule.
We will use our HR1 rule from above (we can do this because we removed the declarative RENAME_COLUMN transformation from the rule in the last step of the declarative transformation example):

select * from dba_rules;

Then, we use the DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION procedure to add the transformation function to the desired rule:

BEGIN
  DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION(
    rule_name          => 'HR1',
    transform_function => 'strm_admin.DEPT_COLNAME_CHANGE');
END;
/

We will now see the transformation in the DBA/ALL_STREAMS_TRANSFORM_FUNCTION view:

SQL> select * from all_streams_transform_function;

RULE_OWNER
------------------------------
RULE_NAME                      VALUE_TYPE
------------------------------ --------------------
TRANSFORM_FUNCTION_NAME             CUSTOM_TYPE
----------------------------------- -----------
STRM_ADMIN
HR1                            SYS.VARCHAR2
"STRM_ADMIN"."DEPT_COLNAME_CHANGE"  ONE TO ONE

For more detailed information on UCRBTs, please reference the usage notes for the DBMS_STREAMS_ADM.SET_RULE_TRANSFORM_FUNCTION procedure in the Oracle PL/SQL Packages and Types Reference, and the Oracle Streams Concepts and Administration Guide.

Transformation Order of Execution

It is possible to have a combination of declarative and user defined transformations assigned to a single rule. This being the case, how do you know which ones get executed when, especially if you have not assigned step numbers? There is a default order of execution for transformations that helps keep the rule from running amok:

- If the rule is a Subset rule, then Row Migration is always executed first.
- Next are declarative rule based transformations. These are further ordered by the step number specified for each transformation, if assigned (see the sketch after this list). If the step numbers are not assigned, the transformations are executed in the following order: Delete_Column, Rename_Column, Add_Column, Rename_Table, Rename_Schema.
- Last (but not least), the User Created Rule-Based Transformation is executed.
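If you need explicit control over the ordering of the declarative transformations on a rule, assign step numbers. A hedged sketch reusing our HR1 rule (the OBSOLETE_FLAG column is hypothetical, purely to illustrate ordering):

begin
  -- Runs first: lower step number
  DBMS_STREAMS_ADM.DELETE_COLUMN(
    rule_name   => 'strm_admin.HR1',
    table_name  => 'HR.JOB_HISTORY',
    column_name => 'OBSOLETE_FLAG', -- hypothetical column
    step_number => 1);
  -- Runs second
  DBMS_STREAMS_ADM.RENAME_COLUMN(
    rule_name        => 'strm_admin.HR1',
    table_name       => 'HR.JOB_HISTORY',
    from_column_name => 'DEPARTMENT_ID',
    to_column_name   => 'DEPT_ID',
    step_number      => 2);
end;
/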


Oracle 11g Streams: RULES (Part 1)

Packt
05 Feb 2010
7 min read
Streams is all about the rules; literally. The action that a Streams process takes is governed by the rule conditions. When you create a rule, Oracle generates system conditions and evaluation contexts that are used to evaluate each LCR to determine whether the action context for the process should be carried out. We have already addressed a number of these system conditions during our TAG discussion; for instance, INCLUDE_TAGGED_LCR=FALSE generates a system evaluation for the LCR$_ROW_RECORD_TYPE :dml.is_null_tag='Y' subprogram. For more information on LCR types, reference the Oracle Database PL/SQL Packages and Types Reference manual.

You can control what system evaluations are included in the rule by the parameter values you specify, as well as add user-defined evaluations with the AND_CONDITION parameter. There is a lot going on under the calm surface of rules. Understanding how this activity flows together will help you become more advanced in creating rules to manipulate your Streams throughout your environment. So, let's grab our snorkels and masks, stick our heads under the surface, and take a look.

Rule components

Rules have three components: conditions, evaluation context, and action context. These components coordinate with the "when", "what", and "how" of the LCR being processed. The conditions tell the Streams process "when" the LCR should be processed, the evaluation context defines "what" data/information the Streams process uses to process the LCR, and the action context tells the Streams process "how" to handle the LCR.

Rule conditions

The rule condition is essentially the "where clause". The conditions are evaluated against the properties of the LCR and return either TRUE or FALSE. The conditions can contain compound expressions and operators (AND, OR, NOT, and so on). The final evaluation returned from the condition (TRUE or FALSE) is the final result of all the compound expressions.

An example of a system-generated condition would be that of our good friend :dml.is_null_tag = 'Y' (generated by the INCLUDE_TAGGED_LCR=FALSE parameter of the DBMS_STREAMS_ADM.ADD_*_RULE procedures). On rule creation, the condition is passed in as a string (so make sure to escape any single quotes within the string):

':dml.get_object_owner() = ''OE'' and :dml.get_tag() = HEXTORAW(''22'')'

It is important to keep your rule conditions as simple as possible. Complex rule conditions can have a significant impact on performance. The rule condition created by our sub-setting example is an example of a complex rule, as it includes a PL/SQL call to a function. Rule conditions that contain NOT or != can also impact performance.
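To see how a condition string like this is attached to a rule, here is a hedged sketch using DBMS_RULE_ADM.CREATE_RULE directly (the rule name is illustrative):

BEGIN
  DBMS_RULE_ADM.CREATE_RULE(
    rule_name => 'strm_admin.OE_TAGGED_DML',
    condition => ':dml.get_object_owner() = ''OE'' and ' ||
                 ':dml.get_tag() = HEXTORAW(''22'')');
END;
/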
Rule Evaluation Context

The rule evaluation context defines data external to the LCR properties that can be referenced in the rule conditions. This is comparable to the from clause of a SQL statement: the reference is to a database object that contains the external data. The evaluation context provides the rule conditions with the necessary information for interpreting and evaluating conditions that reference external data. If the evaluation context references objects, the rule owner must have the proper privileges on those objects (select and execute), as the rule condition is evaluated in the schema of the evaluation context owner.

Information contained in an evaluation context might include table aliases used in the condition, variable names and types, and/or a function to use to evaluate the rules to which the evaluation context is assigned. Evaluation context structure can get a bit confusing. To get a better feel for it, you may want to start by looking at the following database views:

- DBA/ALL/USER_EVALUATION_CONTEXT_TABLES: table aliases used
- DBA/ALL/USER_EVALUATION_CONTEXT_VARS: variable types used
- DBA/ALL/USER_EVALUATION_CONTEXTS: functions used

Streams system-created rules (created using DBMS_STREAMS_ADM) use the standard Oracle-supplied SYS.STREAMS$_EVALUATION_CONTEXT rule evaluation context. This evaluation context is composed of a variable_types list for the :dml and :ddl variables, and the evaluation function SYS.DBMS_STREAMS_INTERNAL.EVALUATION_CONTEXT_FUNCTION, as seen in the previous DBA views.

You can create your own evaluation context using the DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT procedure:

DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT(
  evaluation_context_name    IN VARCHAR2,
  table_aliases              IN SYS.RE$TABLE_ALIAS_LIST DEFAULT NULL,
  variable_types             IN SYS.RE$VARIABLE_TYPE_LIST DEFAULT NULL,
  evaluation_function        IN VARCHAR2 DEFAULT NULL,
  evaluation_context_comment IN VARCHAR2 DEFAULT NULL);

If you create a custom evaluation context that uses SYS.DBMS_STREAMS_INTERNAL.EVALUATION_CONTEXT_FUNCTION, it must include the same variables and types as SYS.STREAMS$_EVALUATION_CONTEXT (a.k.a. :dml and :ddl). Variable types can be defined using SYS.RE$VARIABLE_TYPE_LIST, which in turn accepts individual variable types defined using SYS.RE$VARIABLE_TYPE. Similarly, if you create a custom function to use as the evaluation function, it must have the following signature:

FUNCTION evaluation_function_name(
  rule_set_name      IN VARCHAR2,
  evaluation_context IN VARCHAR2,
  event_context      IN SYS.RE$NV_LIST DEFAULT NULL,
  table_values       IN SYS.RE$TABLE_VALUE_LIST DEFAULT NULL,
  column_values      IN SYS.RE$COLUMN_VALUE_LIST DEFAULT NULL,
  variable_values    IN SYS.RE$VARIABLE_VALUE_LIST DEFAULT NULL,
  attribute_values   IN SYS.RE$ATTRIBUTE_VALUE_LIST DEFAULT NULL,
  stop_on_first_hit  IN BOOLEAN DEFAULT FALSE,
  simple_rules_only  IN BOOLEAN DEFAULT FALSE,
  true_rules         OUT SYS.RE$RULE_HIT_LIST,
  maybe_rules        OUT SYS.RE$RULE_HIT_LIST)
RETURN BINARY_INTEGER;

where the returned BINARY_INTEGER value must be one of the following:

- DBMS_RULE_ADM.EVALUATION_SUCCESS
- DBMS_RULE_ADM.EVALUATION_CONTINUE
- DBMS_RULE_ADM.EVALUATION_FAILURE

For more information on creating custom evaluation contexts, evaluation functions, and rule types, refer to the Oracle Database PL/SQL Packages and Types Reference manual and the Oracle Streams Extended Examples manual.

Once an evaluation context is created, it can be assigned to a rule or a rule set using the evaluation_context parameter of the appropriate DBMS_RULE_ADM procedure. The evaluation context for a rule can be different from the evaluation context for a rule set to which the rule might be assigned. The bottom line is that a rule must be able to associate itself with an evaluation context at some level. We will revisit this concept as we discuss rule creation a little later in this section.
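As a concrete, hedged sketch of CREATE_EVALUATION_CONTEXT, the following creates a minimal custom evaluation context that declares a single NUMBER variable for use in rule conditions (the context and variable names are illustrative, and the RE$VARIABLE_TYPE attribute order is taken from the type's documented constructor):

BEGIN
  DBMS_RULE_ADM.CREATE_EVALUATION_CONTEXT(
    evaluation_context_name => 'strm_admin.ORDER_EVAL_CTX',
    variable_types          => SYS.RE$VARIABLE_TYPE_LIST(
      -- (variable_name, variable_type, variable_value_function, variable_method_function)
      SYS.RE$VARIABLE_TYPE('order_total', 'NUMBER', NULL, NULL)),
    evaluation_context_comment => 'Exposes an order_total variable to rule conditions');
END;
/

A rule assigned this context could then reference :order_total in its condition, much as the Streams rules reference :dml and :ddl.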
The action context is of the SYS.RE$NV_LIST type, which contains an array of name-value pairs, and is associated with a rule condition. A rule condition can only have one action context. The action context itself is optional and can contain zero to many name-value pairs. The SYS.RE$NV_LIST type has the following construct:

TYPE SYS.RE$NV_LIST AS OBJECT (
  actx_list SYS.RE$NV_ARRAY);

Its subprograms are:

ADD_PAIR (name IN VARCHAR2, value IN ANYDATA);
GET_ALL_NAMES () RETURN SYS.RE$NAME_ARRAY;
GET_VALUE (name IN VARCHAR2) RETURN ANYDATA;
REMOVE_PAIR (name IN VARCHAR2);

For more information on creating and populating action context types, refer to the Oracle Database PL/SQL Packages and Types Reference manual. For more information on rule components, refer to the Oracle Streams Concepts and Administration manual.
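Putting the subprograms together, a small hedged sketch that builds an action context, adds a name-value pair, and reads it back (the pair name and value are illustrative):

DECLARE
  actx SYS.RE$NV_LIST := SYS.RE$NV_LIST(NULL); -- start with an empty pair list
  val  ANYDATA;
BEGIN
  -- A value for the client application to act upon when the rule evaluates to TRUE
  actx.ADD_PAIR('DISCOUNT_PCT', ANYDATA.ConvertNumber(10));
  val := actx.GET_VALUE('DISCOUNT_PCT');
  DBMS_OUTPUT.PUT_LINE('Pairs stored: ' || actx.GET_ALL_NAMES().COUNT);
END;
/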


N-Way Replication in Oracle 11g Streams: Part 2

Packt
05 Feb 2010
7 min read
Streaming STRM2 to STRM1

Now the plan for setting up Streams from STRM2 to STRM1. It is the mirror image of what we have done above, except for the test part.

On STRM2, log in as STRM_ADMIN:

-- ADD THE QUEUE; a good queue name is STREAMS_CAPTURE_Q
-- ADD THE CAPTURE RULE
-- ADD THE PROPAGATION RULE
-- INSTANTIATE TABLE ACROSS DBLINK
-- DBLINK TO DESTINATION is STRM1.US.APGTECH.COM
-- SOURCE is STRM2.US.APGTECH.COM

On STRM1, log in as STRM_ADMIN:

-- ADD THE QUEUE; a good queue name is STREAMS_APPLY_Q
-- ADD THE APPLY RULE

Start everything up, test the Stream on STRM2, and then check to see if the record is Streamed to STRM1. On STRM2, log in as STRM_ADMIN and add the queue (a good queue name is STREAMS_CAPTURE_Q):

-- STRM_ADMIN@STRM2.US.APGTECH.COM>
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => '"STREAMS_CAPTURE_QT"',
    queue_name  => '"STREAMS_CAPTURE_Q"',
    queue_user  => '"STRM_ADMIN"');
END;
/
commit;

Add the capture rule:

-- STRM_ADMIN@STRM2.US.APGTECH.COM>
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name         => '"LEARNING.EMPLOYEES"',
    streams_type       => 'capture',
    streams_name       => '"STREAMS_CAPTURE"',
    queue_name         => '"STRM_ADMIN"."STREAMS_CAPTURE_Q"',
    include_dml        => true,
    include_ddl        => true,
    include_tagged_lcr => false,
    inclusion_rule     => true);
END;
/
commit;

Add the propagation rule:

-- STRM_ADMIN@STRM2.US.APGTECH.COM>
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_PROPAGATION_RULES(
    table_name             => '"LEARNING.EMPLOYEES"',
    streams_name           => '"STREAMS_PROPAGATION"',
    source_queue_name      => '"STRM_ADMIN"."STREAMS_CAPTURE_Q"',
    destination_queue_name => '"STRM_ADMIN"."STREAMS_APPLY_Q"@STRM1.US.APGTECH.COM',
    include_dml            => true,
    include_ddl            => true,
    source_database        => 'STRM2.US.APGTECH.COM',
    inclusion_rule         => true);
END;
/
COMMIT;

Because the table was already instantiated from STRM1, you can skip this step; it is shown for completeness:

-- INSTANTIATE TABLE ACROSS DBLINK
-- STRM_ADMIN@STRM2.US.APGTECH.COM>
DECLARE
  iscn NUMBER; -- Variable to hold instantiation SCN value
BEGIN
  iscn := DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER();
  DBMS_APPLY_ADM.SET_TABLE_INSTANTIATION_SCN@STRM1.US.APGTECH.COM(
    source_object_name   => 'LEARNING.EMPLOYEES',
    source_database_name => 'STRM1.US.APGTECH.COM',
    instantiation_scn    => iscn);
END;
/
COMMIT;

On STRM1, log in as STRM_ADMIN and add the queue (a good queue name is STREAMS_APPLY_Q):

-- STRM_ADMIN@STRM1.US.APGTECH.COM>
BEGIN
  DBMS_STREAMS_ADM.SET_UP_QUEUE(
    queue_table => '"STREAMS_APPLY_QT"',
    queue_name  => '"STREAMS_APPLY_Q"',
    queue_user  => '"STRM_ADMIN"');
END;
/
COMMIT;

Add the apply rule:

-- STRM_ADMIN@STRM1.US.APGTECH.COM>
BEGIN
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name         => '"LEARNING.EMPLOYEES"',
    streams_type       => 'apply',
    streams_name       => '"STREAMS_APPLY"',
    queue_name         => '"STRM_ADMIN"."STREAMS_APPLY_Q"',
    include_dml        => true,
    include_ddl        => true,
    include_tagged_lcr => false,
    inclusion_rule     => true);
END;
/
commit;

Start everything up and test.
-- STRM_ADMIN@STRM1.US.APGTECH.COM>
BEGIN
  DBMS_APPLY_ADM.SET_PARAMETER(
    apply_name => 'STREAMS_APPLY',
    parameter  => 'disable_on_error',
    value      => 'n');
END;
/
COMMIT;

-- STRM_ADMIN@STRM1.US.APGTECH.COM>
DECLARE
  v_started number;
BEGIN
  SELECT DECODE(status, 'ENABLED', 1, 0) INTO v_started
  FROM DBA_APPLY WHERE apply_name = 'STREAMS_APPLY';
  IF (v_started = 0) THEN
    DBMS_APPLY_ADM.START_APPLY(apply_name => '"STREAMS_APPLY"');
  END IF;
END;
/
COMMIT;

-- STRM_ADMIN@STRM2.US.APGTECH.COM>
DECLARE
  v_started number;
BEGIN
  SELECT DECODE(status, 'ENABLED', 1, 0) INTO v_started
  FROM DBA_CAPTURE WHERE capture_name = 'STREAMS_CAPTURE';
  IF (v_started = 0) THEN
    DBMS_CAPTURE_ADM.START_CAPTURE(capture_name => '"STREAMS_CAPTURE"');
  END IF;
END;
/

Then on STRM2:

-- STRM_ADMIN@STRM2.US.APGTECH.COM>
ACCEPT fname PROMPT "Enter Your Mom's First Name:"
ACCEPT lname PROMPT "Enter Your Mom's Last Name:"
Insert into LEARNING.EMPLOYEES (EMPLOYEE_ID, FIRST_NAME, LAST_NAME, TIME)
Values (5, '&fname', '&lname', NULL);
exec dbms_lock.sleep(10); -- give it time to replicate

Then on STRM1, search for the record:

-- STRM_ADMIN@STRM1.US.APGTECH.COM>
Select * from LEARNING.EMPLOYEES;

We now have N-way replication. But wait, what about conflict resolution? Good catch; all of this was just to set up N-way replication—in this case, 2-way replication. It will work the majority of the time; that is, until there is a conflict. Conflict resolution needs to be set up, and in this example the supplied/built-in conflict resolution handler MAXIMUM will be used. Now, let us cause some CONFLICT! Then we will be good people, create the conflict resolution, and ask for world peace while we are at it!

Conflict resolution

A conflict between User 1 and User 2 has happened. Unbeknown to both of them, they have both inserted the exact same row of data into the same table at roughly the same time. User 1's insert is to the STRM1 database. User 2's insert is to the STRM2 database. Normally the transaction that arrives second will raise an error. Most likely the error will be some sort of primary key violation, and the transaction will fail. We do not want that to happen. We want the transaction that arrives last to "win" and be committed to the database.

At this point, you may be wondering, "How do I choose which conflict resolution to use?" Well, you do not get to choose; the business community that you support will determine the rules most of the time. They will tell you how they want conflict resolution handled. Your responsibility is to know what can be solved with built-in conflict resolution and when you will need to create custom conflict resolution.

Going back to User 1 and User 2: in this particular case, User 2's insert arrives later than User 1's insert. The conflict resolution is added using the DBMS_APPLY_ADM package, specifically the procedure DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER, which instructs the APPLY process on how to handle the conflict. Scripts_5_1_CR.sql shows the conflict resolution used to resolve the conflict between User 1 and User 2. Since it is part of the APPLY process, this script is run by the Streams administrator; in our case, that is STRM_ADMIN. This type of conflict can occur on either the STRM1 or STRM2 database, so the script will be run on both databases. The numbers to the left are for reference only; they are not in the provided code.

-- Scripts_5_1_CR.sql
1.  DECLARE
2.    cols DBMS_UTILITY.NAME_ARRAY;
3.  BEGIN
4.    cols(0) := 'employee_id';
5.    cols(1) := 'first_name';
6.    cols(2) := 'last_name';
7.    cols(3) := 'time';
8.    DBMS_APPLY_ADM.SET_UPDATE_CONFLICT_HANDLER(
9.      object_name       => 'learning.employees',
10.     method_name       => 'MAXIMUM',
11.     resolution_column => 'time',
12.     column_list       => cols);
13. END;
14. /
15. Commit;

So what do these 15 magical lines do to resolve conflict? Let us break it down logically first, and then look at the specific syntax of the code. Oracle needs to know where to look when a conflict happens. In our example, that is the learning.employees table. Furthermore, Oracle needs more than just the table name; it needs to know what columns are involved. Line 9 informs Oracle of the table. Lines 1-7 relate to the columns. Line 8 is the actual procedure name. What Oracle is supposed to do when this conflict happens is answered by line 10, which instructs Oracle to take the MAXIMUM of the resolution_column and use that to resolve the conflict. Since our resolution column is time, the last transaction to arrive is the "winner" and is applied.
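Once the handler is registered on both databases, you can confirm it from the data dictionary; a quick sketch:

-- Verify the update conflict handler for our table
SELECT object_owner, object_name, method_name,
       resolution_column, column_name
FROM   dba_apply_conflict_columns
WHERE  object_owner = 'LEARNING'
AND    object_name  = 'EMPLOYEES';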


N-Way Replication in Oracle 11g Streams: Part 1

Packt
05 Feb 2010
4 min read
N-way replication refers to a Streams environment with multiple sources. In this article, we will still use the STRM1 and STRM2 databases, but with a little twist: making both databases the source. By making both STRM1 and STRM2 sources, we need to consider a couple of unique situations and do a little more pre-planning, specifically for N-way replication. The concepts and techniques used to configure 2-way replication can then be used to scale to N-way replication. We all need to crawl before we run; the better you crawl through (understand) this article, the easier it will be to scale up to N-way replication. Pay close attention and learn the technique so that you can implement it well.

We need to repeat this—Streams is not Failover. We need to repeat this—Streams is not Failover. No, that is not a typo. The authors are passionate about Streams and want to see you successfully implement it. To successfully implement Streams, you need to know not to step into the trap of using it for failover. Both authors have done some work where failover was the requirement. Streams is not a failover solution. Failover is handled by Oracle Data Guard, NOT Oracle Streams.

Streams is about distributing data to multiple locations. On more than one occasion, Streams has been used as a failover technology because it can distribute data to multiple locations. Do not fall into the trap of using the wrong tool for the job. Streams distributes (replicates) data. As such, there will always be some difference between the databases in a Streams environment. All replication technology has this problem. The only time all of the databases are in sync is when there is no activity and all replication has been applied to all target locations. If you need failover, then use the proper tool. Oracle Data Guard is for failover: it has the necessary processes to guarantee failover from a primary site to a secondary site, whereas Streams is a replication tool that distributes data. Just remember the following whenever a discussion of replication and failover comes up:

- Streams distributes data; it is built for replication.
- Data Guard is built for failover.

Pre-planning for N-way replication

When we set up N-way replication, we must consider the possibility of a collision of data. Since we have multiple sources of data, it is possible for the exact same data to be input on any or all of the sources at the exact same time. When this happens, it is a conflict. This is just one type of conflict that can happen in N-way replication environments. The types of conflict that can occur are as follows:

- Update conflict: transactions from different databases try to update the same row at nearly the same time.
- Delete conflict: one transaction deletes a row and the next transaction tries to update or delete the row; the transactions originate from different databases.
- Unique conflict: transactions from different databases violate a primary or unique constraint; the first transaction is accepted, and the second transaction obtains the conflict.
- Foreign key conflict: a transaction from a source tries to insert a child record before the parent record exists.

The good news is that Oracle provides built-in conflict resolution in Streams that solves the most common situations. The built-in handlers are as follows:

- OVERWRITE
- DISCARD
- MAXIMUM
- MINIMUM

We will provide an example of conflict resolution after we build our N-way replication.
In our case, we will use MAXIMUM. As part of the pre-planning for N-way replication, we highly suggest creating a simple table such as the Setup Table.

Avoiding Conflict

As conflict requires additional pre-planning and configuration, one begins to wonder, "Are there techniques so that we can configure N-way replication without the possibility of conflict?" The simple answer is "Yes". The not-so-simple answer is that some configuration magic needs to be done, and the devil is in the details.

Limiting who and what can be updated is one method of avoiding conflict. Think of it this way: there is no conflict if we agree on who and what can update the specific data. User 1 can only update his specific data, and no one else can do that. Similarly, User 2 can only update his specific data. So, User 1 and User 2 can never cause a conflict. Now, this may be a little difficult depending on the application. It can be implemented with the use of offset sequences, as sketched below: one sequence produces only odd values, and another produces only even values. We could also use a combination of a sequence and some unique characteristic of the database.
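Here is a minimal sketch of the offset-sequence idea (sequence names are illustrative): each source generates keys from a different residue class, so inserts on the two databases can never collide on the primary key.

-- On STRM1: odd employee IDs
CREATE SEQUENCE learning.employees_seq START WITH 1 INCREMENT BY 2;

-- On STRM2: even employee IDs
CREATE SEQUENCE learning.employees_seq START WITH 2 INCREMENT BY 2;

-- Each site inserts using its own local sequence
INSERT INTO learning.employees (employee_id, first_name, last_name, time)
VALUES (learning.employees_seq.NEXTVAL, 'Jane', 'Doe', SYSDATE);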

Migrating a MySQL table using Oracle SQL Developer 1.5

Packt
29 Jan 2010
4 min read
Oracle SQL Developer is a standalone graphical database development tool that connects to Oracle as well as third-party databases, and can be used to perform a variety of tasks, from running simple queries to migrating databases from third-party vendor products to Oracle. This article by Dr. Jayaram Krishnaswamy shows how the reader may use Oracle's recent tool, Oracle SQL Developer 1.5, to work with the MySQL database. An example of migrating a table in MySQL to Oracle 10g XE is also described.

The Oracle SQL Developer tool has steadily improved from its beginnings in version 1.1; the earlier versions are briefly explained here. The latest version, SQL Developer 1.5.4, released in March 2009, is described in this article. The SQL Developer tool [1.5.4.59.40] bundle can be downloaded from Oracle's web site, Oracle Technology Products. When you unzip the bundle, you are ready to start using the tool. You may get an even more recent version of this tool, as it is continuously updated. It is assumed that you have a MySQL server that you can connect to and that you have the required credentials. The MySQL server used in developing this article was installed as part of the XAMPP bundle. Readers will benefit from reading the earlier MySQL articles 1, 2, and 3 on the Packt site.

Connecting to MySQL

Out of the box, Oracle SQL Developer 1.5.4 only supports Oracle and MS Access. The product documentation clearly says that it can connect to other database products; this article will show how this is achieved. In order to install products from Oracle, you must have a username and password for an Oracle Web Account.

Bring up the Oracle SQL Developer application by clicking the executable. The program starts up, and after a while the user interface is displayed as shown. Right-click on Connections; the New Connection page opens, displaying the default connection to the resident Oracle 10g XE server.

Click the Help menu and choose "Check for Updates". This brings up the wizard, displaying the Welcome screen as shown in the next figure. Click Next. The "Source" page of the wizard shows up. The updates for Oracle SQL Developer are already chosen. Place a check mark against "Third Party SQL Developer Extensions". You can choose to look for updates on the internet, or install from the downloaded bundle if it exists. First try the internet and click Next. This brings up the "Updates" page of the wizard.

Read the warning on this window: the extensions are available but have not been evaluated by Oracle. The available extensions are as follows:

- OrindaBuild Java Code Generator, version 6.1.20090331
- The JTDS JDBC Driver, version 11.1.58.17
- The MySQL JDBC driver
- A patch for Oracle SQL Developer that fixes some of the import, LDAP, and performance issues

For this article, only the JTDS JDBC driver for MS SQL Server and the MySQL JDBC driver options were checked. The license agreements are for the JTDS drivers. Click Next. The license agreements must be accepted: click I Agree, then click Next. This is the download step of the wizard. To proceed further you must have your Oracle Web Account username and password; you have the option to sign up here as well. After a while, the new extensions are downloaded. Click Finish to close the wizard. You need to restart SQL Developer to complete the installation of the extensions.
Click Yes on the "Confirm Exit" window that shows up. Now, when you click New Connection to create a new connection, the "New / Select Database Connection" dialog is displayed, and you can see that other third-party databases have been added to the window. Choose the MySQL tab. Fill in the details appropriate for your MySQL installation, as shown in the next figure. You must provide a name for the connection; here the connection is named My_MySQL. The credentials must be provided as shown, or as appropriate for your installation. The port is the default designated for this server when you installed the product. You may accept the other defaults on this page and click Test. The word "success" is displayed in the status label at the bottom left, and the connection name and connection details are added to the page shown above.


Getting Started with SQL Developer: Part 1

Packt
07 Jan 2010
8 min read
Preparing your environment

Preparing your environment depends on a few factors, including the platform you are working on and whether you have an earlier edition of SQL Developer previously installed. First, you need to locate the software, download it, and install it.

Finding and downloading the software

SQL Developer is available through a variety of sources, as both a standalone download and as part of the Oracle Database and Oracle JDeveloper installations. SQL Developer is a free product, and you can download it from the Oracle Technology Network, http://www.oracle.com/technology/products/database/sql_developer. Use this link to reach the download for the latest standalone production release. The site also includes details of the release and is regularly updated with news of preview releases and new articles. While SQL Developer is free to download and use, you are required to read and agree to the license before you can proceed with the download. The product also falls under Oracle Support contracts, if you have a Support contract for the database, which means that you can log Oracle Support tickets.

Downloading and installing the Java Development Kit

SQL Developer requires the Java SE Development Kit (JDK); this includes the Java Runtime Environment (JRE) and other tools, which are used by SQL Developer utilities such as the PL/SQL Debugger.

For Microsoft Windows, you can download and install SQL Developer with the JDK already included. This means you'll download and unzip the product and will be ready to start, as there are no extra steps required. For the other operating systems, you'll need to download the JDK and direct SQL Developer to the path yourself. Indeed, as many other products require a JDK to be installed, you may already have one on your system. In this case, just direct the product to use the existing JDK installation. For Microsoft Windows, ensure you download SQL Developer without the JDK to make use of an existing JDK installation.

The SQL Developer download site offers a selection of download choices:

- Microsoft Windows (with or without the JDK)
- Linux (without the JDK)
- Mac OS X (without the JDK)

In each case, make your selection and download the required file. The download sites for the JDK are as follows:

- For Microsoft Windows and Linux: http://java.sun.com/javase/downloads/index.jsp
- For Mac: http://developer.apple.com/java/download/

SQL Developer ships with the minimum JDK required. You can download and use the latest updates to the JDK, but you should be aware that some updates to the JDK are not supported. This detail is posted on the SQL Developer Downloads page for each release. Starting from SQL Developer 2.1, JDK 1.6 is the minimum JDK supported. Once you have installed the JDK, you can start SQL Developer.

Installing and starting SQL Developer

SQL Developer does not use an installer. All you need to do is unzip the given file into an empty folder, then locate and run the executable. Do not unzip SQL Developer into an $Oracle_Home folder or an existing SQL Developer install. Unzipping the file creates an sqldeveloper folder, which includes a selection of sub-folders and files, including the sqldeveloper.exe executable. If your download does not include the JDK, you'll be prompted for the full path of java.exe. Browse to the location of the file and select it.
The path should include the full path and executable (for example, C:\Program Files\Java\jdk1.6.0_13\bin\java.exe).

Working with different platforms

Whether you are accessing SQL Developer as part of the Oracle Database 11g installation or as a standalone install, there is a selection of executables available to you. These are either platform-specific or provide additional detail while running the product.

Microsoft Windows

The first executable you'll find is in the root folder, sqldeveloper. This is the executable most generally used. If you navigate down to sqldeveloper\bin, there are two additional executables, sqldeveloper.exe and sqldeveloperW.exe. The latter is the same as the executable in the root folder; use either of these for running SQL Developer. The additional executable is often used for debugging purposes: use sqldeveloper\bin\sqldeveloper.exe to invoke SQL Developer together with a separate console window which displays additional Java messages. You can use these messages when encountering errors in the product and if you want to log an issue with Oracle Support.

Oracle SQL Developer: three steps to getting started on Microsoft Windows:

- Download: download the full file, with the JDK, from the Oracle Technology Network website.
- Unzip: unzip the file to an empty directory.
- Double-click: double-click on the sqldeveloper\sqldeveloper.exe file.

Alternative platforms

Microsoft Windows is the predominant platform used by SQL Developer users, but there is a steadily growing audience for Linux and Mac OS X. As neither of these platform downloads includes the JDK, you need to first access, download, and install the JDK. On starting up under either Linux or Mac OS, you'll be prompted for the full path of the JDK as described.

Mac OS X

Download the file specific to Mac OS X and unzip it to an empty folder. This creates an sqldeveloper folder, complete with files and sub-folders. Run the sqldeveloper.sh file.

Linux

Use the Linux rpm command to install SQL Developer. For example, your command might look like this:

rpm -Uhv sqldeveloper-1.5.54.40-1.noarch.rpm

In the same way that unzip creates an sqldeveloper folder with sub-folders and files, the rpm command creates an sqldeveloper folder, complete with files and sub-folders. Switch to this new folder and run the sqldeveloper.sh executable.

Migrating settings from a previous release

On the initial startup of any release of SQL Developer, you may be asked one or two questions. The first is the location of the Java executable of the JDK, as discussed. If you have installed the full release with the JDK, this question is skipped. The second question is whether you want to migrate any preferences from a previous release. Regardless of whether this is the first SQL Developer install on the machine or not, the first time you invoke SQL Developer you are offered the choice of migrating your settings. You can migrate settings from any release of SQL Developer 1.5 and above. By default, the utility looks for the latest previous installation of the software. If you want to migrate from a different installation, select the Show All Installations button (seen above). This displays a list of all SQL Developer installations that have the system folder in the Documents and Settings system folder (for example, C:\Documents and Settings\<your_user>\Application Data\SQL Developer\system1.5.1.54.40), and includes releases from SQL Developer 1.5 and above.
For releases prior to SQL Developer 1.5, the system folder was created within the SQL Developer install (for example, D:\SQLDeveloper\Builds\1.2.1\1.2.1.3213\sqldeveloper\sqldeveloper\system).

Maintaining your environment

Once you have SQL Developer installed, it is helpful to know about the environmental settings and some of the files that are created when you start the product. Knowing which version you have installed is important, if only to be able to identify it when asking questions on the forum or when contacting Oracle Support.

Verifying the current release

To verify the SQL Developer release you have, select the Help | About menu once you start SQL Developer or JDeveloper. In the dialog invoked, select the Extensions tab and find the Oracle SQL Developer extension, as shown in the next screenshot. This will match the build number on the download site if you have the latest release. The screenshot shows a number of the extensions that make up SQL Developer. If your dialog does not show the Version or Status columns, you can select the column headers to resize the visible columns and bring the others into focus.

Using Check for Updates

SQL Developer offers a built-in patching and extensions utility, known as Check for Updates. Check for Updates is used to release:

- SQL Developer extensions
- General Oracle extensions
- Minor patches
- Third-party tools required by SQL Developer, such as the non-Oracle database drivers
- Third-party extensions

You can control whether Check for Updates warns you about new updates using the Tools | Preferences menu: select Extensions, and then select Automatically Check for Updates. For SQL Developer extensions, if you already have SQL Developer installed and you're not behind a firewall, you'll be alerted about new updates. You need not use the utility to get the updates, but the alert makes you aware of the latest release. For all other extensions, you need to start Check for Updates to see what's available. To do this, select Help | Check for Updates. In either situation, just follow the dialog to find the updates you require.


Getting Started with SQL Developer: Part 2

Packt
07 Jan 2010
6 min read
A quick overview

Let's start with a run-through of the product. By spending a little time dipping into a number of areas of the product, you can start laying down a map of how the pieces connect, and build a base that you can drill down into later.

Sample schemas

To follow the examples in the article, you need access to SYSTEM and the shipped sample schemas, HR, OE, SH, PM, and IX, available in Oracle Database 9i, 10g, or 11g. Specifically, this article uses the sample schemas shipped with Oracle Database 11g. There are two ways to install the sample schemas. The first is when you install the database: you can elect to have the sample schemas installed at that point. Second, if you have not installed them, you can locate the sample schemas in the $ORACLE_HOME/demo/schema folder and follow the instructions on installing them using the Oracle online documentation. Not all of these schemas are available for Oracle Express Edition. In this article, we use SYSTEM to verify that the HR schema is unlocked, and then we use the HR sample schema, which is available in Oracle Express Edition.

Creating your first connection

To complete this quick walk-through, you need to know the username and password of the SYSTEM user. You also need to know the location of the database, whether this is the machine name or the IP address, and the database SID. To begin, start SQL Developer. The very first time you start SQL Developer, you'll be asked if you want to migrate from a previous version. Select No and allow the tool to start up. The first thing you need to do after you have started SQL Developer for the first time is to create your initial connections.

Create a connection for SYSTEM:

Select Connections, right-click and select New Connection. This invokes the New Database Connection dialog. You can edit and control all of the connection details using this dialog.
Complete the details, as seen in the following screenshot, relevant to your environment. Click on Test to ensure you have the connection details correct, and click on Connect. You are now connected as SYSTEM.

Use this connection to verify your other users, by continuing as follows:

Select the new connection you have created, expand the node, and scroll down to Other Users.
Expand Other Users and find the user HR. Right-click and select Edit User.
Verify that the account for HR is unlocked and the password has not expired, that is, the properties Account is Locked and Password Expired are deselected. If either of these is selected, deselect them. You can change the password for HR at this point too. It's good practice to modify the passwords of the shipped sample schemas once you have unlocked them.

Now you are really ready to begin!

Once again, select Connections, right-click and select New Connection. Give the connection a name (for example, HR_11g). Provide the Username (HR) and a Password. If you are working on Oracle Database 11g, be aware that passwords are now case sensitive.
Select the Save Password checkbox. This makes life easy while you are working with SQL Developer. Passwords are stored in an encrypted file. However, you should always be aware of saving passwords and the possible security implications this may have.
Use the Basic connection. This requires no more detail than the location of the database and the SID, details you have.
Click on Test to test the connection. Click on Connect.

Using basic commands in the SQL Worksheet

As soon as you connect to a user, SQL Developer opens an SQL Worksheet.
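
Incidentally, the account verification you just performed through the Edit User dialog can also be scripted. A minimal sketch, assuming you are connected as SYSTEM (the new password value is a placeholder):

-- Check the current status of the HR account (OPEN, LOCKED, or EXPIRED)
SELECT username, account_status FROM dba_users WHERE username = 'HR';

-- Unlock the account and reset the password, mirroring the Edit User dialog
ALTER USER hr ACCOUNT UNLOCK;
ALTER USER hr IDENTIFIED BY your_new_password;
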
You may have started working with Oracle using the SQL*Plus command line, or even the GUI window. Either way, you'd start with a selection of SQL*Plus and SQL commands. Enter the following into the SQL Worksheet:

DESC DEPARTMENTS
SELECT * FROM DEPARTMENTS;

Press the F5 key (or use the Run Script button). The output of both commands appears in the Script Output tab, which appears below the SQL Worksheet (as seen in the previous screenshot). Both commands are handled by a few simple clicks of the mouse in SQL Developer:

Select and expand the HR_11g connection in the Connections navigator.
Expand the Tables node and select DEPARTMENTS. The DEPARTMENTS tab now opens, displaying a list of the column names and details. These are the same details as given by the DESC (describe) SQL*Plus command that you entered in the SQL Worksheet. It also provides additional detail, such as the primary key and column comments.
Select the Data tab and notice that you now see the output from your second command. These two tabs are included with a number of other tabs, each with additional details about the DEPARTMENTS table. You would need to write a number of SQL queries against the data dictionary to get this additional detail if you were working in SQL*Plus.
Select the EMPLOYEES table. Notice that the new table, EMPLOYEES, immediately replaces the previous DEPARTMENTS table with its details.
Select the Triggers tab, and select one of the triggers. The trigger and related trigger detail is displayed in a master-detail window.

Browsing and updating data

Return to the EMPLOYEES data by again selecting the Data tab. The data grid that is displayed provides a variety of options. To get started with the data grid, double-click on an item or field, such as the name of one of the employees, and change it. Tab out of the field and notice that the change is applied to the data grid and an asterisk (*) flags the record. Commit and Rollback buttons are available to send the change to the database, or to undo your action. Roll back the changes. Once again you get feedback, this time in the Data Editor log, as shown in the following screenshot:

Running reports

Select the Reports navigator and expand the Data Dictionary Reports node. Expand the Table node and review the available reports. Expand Constraints and select the Unique Constraints report. As you select the report, a dialog displays requesting the connection name. Select the connection you created, HR_11g, and click on OK. An Enter Bind Values dialog now appears, requesting the table name as an input parameter. Click on Apply to accept the default, which in this case means all tables. Run the same report for any user by selecting the Connections drop-down list on the right-hand side.

Navigating around SQL Developer

SQL Developer has a selection of windows, navigators, and tabs. On start-up, you are presented with the main navigator toolbars and menus. The two main navigators, Connections and Reports, are presented in a tabbed window. These and other navigators, such as the Versioning Navigator, are available through the main View menu. You can also open windows such as Snippets, Recent Objects, and Find DB Objects using the View menu. Any navigators that you open during a session, and that are still open when you close the product, are automatically opened when you restart the product.

Introducing SQL Developer Data Modeler: Part 2

Packt
07 Jan 2010
9 min read
Working with diagrams and their components

You can waste away many hours laying out the elements on a diagram, so this aspect of modeling can be time consuming. However, a model serves as documentation and a communication device, so taking the time to make sure it is well annotated and clearly designed is important. Most of the controls for the models are on the context menu, allowing you to modify individual aspects of the diagram. The context menu changes depending on whether you have an object or line selected, or you're just clicking in the open space. You can also set general defaults using Tools | General Options | Diagram. In this section, we'll look at the various options available when working with the diagrams.

Formatting the elements

Before moving a relationship line, entity, or table, consider that you can dramatically change the impact and readability of a large diagram just by changing the colors. This is readily demonstrated when importing from two or more schemas. Using the previous example where we imported from two schemas, open one of the subviews and select all of the tables. With the objects selected, invoke the Format Object dialog using the context menu. If this is the first time you are adjusting the colors, the dialog does not display any colors as you open it; the colors used in the diagram are the default settings. Deselect Use Default Color and click on the Background and Border Color items to select and set the new color. When you are done, click on OK and note the changes applied to the subview. Switch to the main relational model to review the impact there. The color applied to the subview is also applied to the main model, as shown. This is very useful when illustrating how tables in different schemas relate to each other. For example, take the HR and OE sample schemas: all of the tables related to human resources are maintained in the HR schema, while those related to the order entry system are maintained in the OE schema. You may well have applications designed around the HR schema and others tied to the OE schema, but some may involve both. In the following relational model, the OE tables are now colored green, so we're able to identify them, but we can also see where the schemas link. We can see that a CUSTOMER may deal with one EMPLOYEE and has many ORDERS.

Selecting all of the tables in a model
Select any table and click on Select Neighbors from the context menu. Select All Zones to select all of the tables. Use this instead of Ctrl+A, which selects all tables and all lines.

Changing the default format settings

Instead of changing individual items or even a set of items, you can change the default color for each of the element types displayed on a diagram. Tools | General Options | Diagram | Format provides the ability to control the color of each of the elements displayed, such as tables, views, and entities. To edit any of the elements in the dialog, double-click on the object, or select the object and the edit icon. This allows you to adjust the color of the item and to format the font. You can use the font color to highlight mandatory, unique, or foreign keys.
Setting general diagram properties

Use the same Tools | General Options | Diagram to set model properties, which include:

Displaying the grid
Controlling the background color of the diagram
Controlling the Auto Route feature, which is on by default

You can also set display properties for certain items on each of the models, including the control of:

The diagram notation for the logical model, which supports the Barker and Bachman notations
The display of the relationship names for either the logical or relational models
The flow names for process models

For example, to display the relationship names on an Entity Relationship Diagram (as seen below), check the display property on Tools | General Options | Model | Logical, and ensure that the Relation Cardinality properties for the relationships are also set.

Creating subviews and displays

Adding subviews and displays offers you alternative ways of laying out elements on a diagram and of working with subsets of items. You can create multiple subviews and displays for either logical or relational models, and remove them as easily, without impacting the main models.

Adding subviews to your design

You have already encountered a subview by importing from a number of schemas in the data dictionary. Subviews are not only a reflection of the different schemas in a design; they can also represent any subset of elements in the design, allowing you to work with a smaller, more manageable set of elements. You can create a subview from the object browser by selecting:

The SubViews node and using the New SubView context menu. In this case, you have a new empty diagram that you can populate by dragging tables or entities (depending on the subview in question) onto the page.
Any of the model tabs and then selecting the Create SubView menu. This creates a new and empty subview.
An element or elements on an existing model and using the Create SubView from selected context menu on the diagram. In this case, the new subview will contain the tables or entities you selected.

The layout of the subview is not linked to the main model in any way. What is linked is how you format the items on the subview and any structural changes you make to the objects. You can continue to add new items to the subview by dragging them onto the surface from the object browser. When deleting items from the subview, you should choose whether the item is deleted:

From the view (Delete View)
From the complete design (Delete Object)

Adding displays

A display is an alternative view of a diagram, whether a main model or a subview, and is linked directly to that model. If you delete the model, the display is also deleted. Any items that you add or remove from displays are also automatically added or removed from the main model they are linked to. To create a new display, select the tab of any model and select Create Display from the context menu. The new display created is, initially, a replica of the model you selected, in both layout and items available. All additional displays are listed at the bottom of the model. In the following example, the HR subview has two displays created, as highlighted at the bottom of the screenshot: the main HR display and the new Display_1. The Create Display context menu is also illustrated. Use the new display to change the layout of the model and to adjust the level of detail displayed on the diagram. A second display of the same model is useful when you want to show more or less detail on a model.
You can, for example, create a display which only shows the entity or table names. Right-click in the space on a diagram and select View Details | Names Only. We'll discuss how to lay out the diagram elements later.

Creating a composite view

If you create a number of subviews, you can create a new diagram showing the composite models of each of these on a single layout. This serves as a useful reminder of the number of subviews or models you have, by providing a thumbnail of the various layouts. Alternatively, you can add a composite view of one subview and place it on another. To create a composite view, select the model in the browser and drag it onto the diagram surface. You can drag any model onto any other diagram surface, except its own. Once you have the composite folder displayed on the diagram, display the composite model within that folder by selecting Composite View from the context menu. If the model you want to view has a selection of displays, then you can also select the display you want to see within that composite. The following screenshot shows the subview displaying the composite models of the HR subview, the main logical model, and both displays of the logical model:

Controlling the layout

When working with a large number of items in a model, it's important to keep the layout organized. A variety of tools to help with the process are explained in the following sections.

Adjusting the level of detail displayed

Change the amount of detail displayed in a table (or entity) using the View Details menu. It is invoked with a right-click on the white space of any diagram. The View Details menu has options for displaying:

All Details
Names Only
Columns
Datatype
Keys

Adjusting the width and height across the model

If you have a large diagram and want to see how tables or entities relate to each other, you can create a more compact model using a display, without impacting the main model. This can be done by setting the details to display the name only and then resizing and repositioning the objects. In the following screenshot, we have set the model to display only the names of the tables. Create a more compact diagram by resizing one of the tables to a more fitting set of dimensions, select the rest, and then resize them all to the same width and height.

Controlling alignment

Once you have positioned the items, align them to complete the model. Use the Edit menu with the required items for top and left alignment, as shown in the following screenshot.

Resizing and alignment
The first item you select is the one that drives the position for left or top alignment, and it is also the item that controls the width and height of all subsequent items selected.

Features and utilities in SQL Developer Data Modeler

Packt
07 Jan 2010
8 min read
Oracle SQL Developer Data Modeler is available as an independent product, providing a focused data modeling tool for data architects and designers. There is also a Data Modeler Viewer extension to SQL Developer, which allows users to open previously created data models and to create read-only models of their database schemas. SQL Developer Data Modeler is a vast tool, supporting the design of logical Entity Relationship Diagrams and relational models, with forward and reverse engineering capabilities between the two. It supports multi-dimensional, data flow, data type, and physical models, and allows files to be imported from a variety of sources and exported to a variety of destinations. It allows users to set naming conventions and verify designs using a set of predefined design rules. Each of these topics is extensive, so in this two-part article by Sue Harper (author of Oracle SQL Developer 2.1) we'll review a few of the areas, illustrating how you can use them, and highlight a few key features, using the independent, stand-alone release of SQL Developer Data Modeler. We'll include a brief review of the integration points of the Data Modeler Viewer extension to SQL Developer. The product offers support for Oracle and non-Oracle databases. In the interest of time and space, we have elected to work only with the Oracle database.

Oracle SQL Developer Data Modeler

SQL Developer Data Modeler is a lightweight tool which gives application and database developers a quick and easy way of diagrammatically displaying their data structures, making changes, and submitting the new changes to update a schema. In this article, we will not attempt to teach data modeling (except to provide some generally accepted definitions). Instead, we will discuss how the product supports data modeling and a few of the features provided. There are a variety of books available on the subject, which describe and define modeling best practice.

Feature overview

The Data Modeler supports a number of graphical models and a selection of text-based models. The graphical models are:

Logical—this is the entity relationship model or Entity Relationship Diagram (ERD), and comprises entities, attributes, and relationships.
Relational—this is the schema or database model, and comprises tables, columns, views, and constraints. In SQL Developer Data Modeler, these models are database independent, and need to be associated with a physical model to support database-specific DDL.
Data Types—this is the model that supports modeling SQL99 structured types and viewing inheritance hierarchies. The data types modeled here are used in both the logical and relational models.
Multidimensional models—these models support fact, dimension, and summary classifications for multi-dimensional models.
Data Flow—these models support the definition of primitive, composite, and transformational tasks.

The following support these graphical models:

Domains—these allow you to define and reuse a data type with optional constraints or allowable values. You can use domains in the logical and relational models.
Physical—this model is associated with a relational model and defines the physical attributes for a specific database and version.
Business Information—this allows you to model or document the business details that support a design.
Tying these graphical and textual models together are a variety of utilities, which include:

Forward and reverse engineering between the logical and relational models
Import from various databases
Export, including DDL script generation, for various databases
Design Rules for verifying standards and completeness
Name templates, glossary, and abbreviation files for supporting naming standards

Integrated architecture

SQL Developer Data Modeler is made up of a number of layers, which have a tightly synchronized relationship. The logical model is thought of as the core of the product, providing the starting point for any design and feeding details into the other models. The following diagram shows an illustration of how the models relate to each other. The logical ERD provides the basis for one or more relational models, and each of these feeds into one or more physical models, which are in turn used for the DDL generation. You can create separate data types models and use the defined data types in either the logical or relational models. Both relational and logical models can have multiple subviews created, and each subview can have many displays created.

Getting started

SQL Developer Data Modeler is an independent product and, with the exception of the Data Modeler Viewer extension to SQL Developer 2.1, is not packaged with other Oracle tools. You can download it and install it in a directory of your choice, with no impact on other tools. To install, simply unzip the file.

Installing and setting up the environment

Getting started with SQL Developer Data Modeler is straightforward. Follow the links from the Data Modeler site on OTN, http://www.oracle.com/technology/products/database/datamodeler, to the download location. You are offered a choice of files to download:

For Microsoft Windows, a ZIP file with or without the JRE included
For Mac OS X, a ZIP file without the JRE included
For Linux, a ZIP file without the JRE included

For any of these ZIP files, extract the file contents and run datamodeler.exe, which is in the top-level /datamodeler folder, or in the /datamodeler/bin folder. For Linux, use the datamodeler.sh executable. If the file you choose does not include a JRE, you will be prompted on startup for the location of your installed JRE. The minimum supported release is JRE 1.6 update 6.

Oracle clients and JDBC drivers

If you are designing and building a model from scratch, or have access to the DDL script file for importing models, then you do not need access to a database. However, if you want to import from a database, you'll need to create a database connection. In this case, there is no need for an Oracle client in your development environment, because you can use the thin JDBC driver to connect to the database. SQL Developer Data Modeler also supports the TNS alias; therefore, if you have access to a tnsnames.ora file, or have other Oracle software installed in your environment, you can use the tnsnames file to make the database connection if and when required.

Creating your first models

The Data Modeler browser starts with empty Logical and Relational models. This allows you to start a new design and build a model from scratch, whether a logical model with entities and attributes, or a relational model with tables and columns.
The Data Modeler also supports metadata being imported from a variety of sources, which include:

Importing metadata from:
DDL scripts
Data dictionary

Importing from other modeling tools:
Oracle Designer
CA Erwin 4.x

Importing other formats:
VAR file
XMLA (Microsoft, Hyperion)

The context menu displaying the choices available is shown in the following screenshot. Once you have created and saved your models, you can open them or share them with colleagues. To open an existing model, use the menu:

File | Open—browse to the location of the files, which then opens the full design with all of the saved models
File | Recent Designs—opens the full design, with all of the saved models, with no need to first search for the location
File | Import | Data Modeler Design—more granular, offering a choice of models saved in a set of models

Recent diagrams
Use File | Recent Diagrams to display a list of all diagrams you have recently worked on and saved. Using this approach saves you from needing to browse to the location of the stored files.

Importing from the Data Dictionary

There are many ways to start using the tool by just starting to draw any one of the model types mentioned. In the screenshot shown earlier, we highlighted the File | Import | Data Dictionary option. Using this allows you to import from Oracle 9i, Oracle 10g, Oracle Database 11g, Microsoft SQL Server 2000 and 2005, and IBM DB2 LUW versions 7 and 8.

Creating a database connection

Before you can import from any database, you need to create a database connection for each database you'll connect to. Once created, you'll see all of the schemas in the database and the objects you have access to. Access the New Database Connection dialog from the File | Import wizard (seen in the following screenshot). If you have no connections, click on Add to create a new connection. For a Basic connection, you need to provide the Hostname of the database server, the Port, and the SID. The connection dialog also supports the TNS alias and the advanced JDBC URL. Before you can add connections for non-Oracle databases, you need to add the required JDBC drivers. To add these drivers, use Tools | General Options | Third Party JDBC Drivers.
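
For reference, the Basic connection fields (hostname, port, and SID) map directly onto the standard Oracle thin JDBC URL accepted by the advanced JDBC URL option. A sketch with placeholder values (dbserver, 1521, orcl, and myservice are assumptions for illustration):

-- Connecting by SID
jdbc:oracle:thin:@dbserver:1521:orcl

-- Connecting by service name rather than SID
jdbc:oracle:thin:@//dbserver:1521/myservice
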
Text Search, your Database or Solr

Packt
30 Nov 2009
6 min read
Text Search Features

If you think that text search is just a basic thing and nothing more than returning results that matched words in a user query, then think again! There are many technical details that a good search implementation will give you control over to affect how well this fundamental capability works, like text analysis and relevancy ranking. But there are also a variety of ancillary features to look for that make a big difference, such as result highlighting and faceting.

Text analysis: This is the processing of the original text into indexed terms, and there's a lot to it. Being able to configure the tokenization of words could mean that searching for "Mac" will be found if the word "MacBook" is in the text. And then there's synonym processing so that users can search for similar words. You might want both a common language dictionary and also hand-picked ones for your data. There's the ability to smartly handle desired languages instead of the pervasive English. And then there's stemming, which normalizes word variations so that, for example, "working" and "work" can be indexed the same. Yet another variation of text analysis is phonetic indexing to find words that sound like the search query.

Relevancy ranking: This is the logic behind ranking the search results that most closely match the query. In Lucene/Solr, there are a variety of factors in an overall equation, with the opportunity to adjust factors based on matching certain fields, certain documents, or using field values in a configurable equation. By comparison, the commercial Endeca platform allows configuration of a variety of matching rules that behave more like a strict sort.

Query features & syntax: From boolean logic to grouping to phrases to fuzzy searches to score boosting... there are a variety of queries that you might perform and combine. Many apps would prefer to hide this from users, but some may wish to expose it for "advanced" searches.

Result highlighting: Displaying a text snippet of a matched document containing the matched word in context is clearly very useful. We have all seen this in Google.

Query spell correction (i.e. "did you mean"): Instead of unhelpfully returning no results (or very few), the search implementation should be able to try and suggest variation(s) of the search that will yield more results. This feature is customarily based on the actual indexed words, not on a language dictionary. The variations might be based on the so-called edit distance, which is basically the number of alterations needed, or on phonetic matching.

Faceted navigation: This is a must-have feature which enables search results to include aggregated values for designated fields that users can subsequently choose to filter the results on. It is commonly used on e-commerce sites to the left of the results to navigate products by various attributes like price ranges, vendors, etc.

Term-suggest (AKA search auto-complete): As seen on Google, as you start typing a word in the search box, it suggests possible completions of the word. These are relevancy sorted and filtered to those that are also found together with any words prior to the word you are typing.

Sub-string indexing: In some cases, it is necessary to match arbitrary sub-strings of words instead of being limited to complete words. Unlike what happens with an SQL like clause, the data is indexed in such a way that this is quick.
Geo-location search: Given the coordinates of a location on the globe, with records containing such coordinates, you should be able to search for matching records from a user-specified coordinate. An extension to Solr allows a radial search with appropriate ranking, but it is also straightforward to box the search based on latitude and longitude.

Field/facet suggestions: The Endeca search platform can determine that your search query matches some field values used for faceting and then offer a convenient filter for them. For example, given a data set of employees, the search box could have a pop-up suggestion that the word in the search box matches a department code, and then offer the choice of navigating to those matching records. This can be easier and faster than choosing facets to filter on, especially if there are a great number of facet-able fields. Solr doesn't have this feature, but it would not be a stretch to implement it based on its existing foundation.

Clustering: This is another aid to navigating search results besides faceting. Search result clustering will dynamically divide the results into multiple groups called clusters, based on statistical correlation of terms in common. It is a bit of an exotic feature, but is useful with lots of results containing lots of text information, after any faceted navigation is done, if applicable.

So that's quite a list, and there are other features you may find too. This should give you a list of features to look for in whatever you choose. Some features are obviously more important to you than others.

How NOT to implement text search: the SQL "like" clause

Perhaps "back in the day" you implemented search by simply adding like '%searchword%' in the where clause of your SQL (the author is guilty as charged!). But of course, this has serious problems, such as:

It is very slow, especially given a data set of decent size and any amount of load on the server. A database does a simple brute-force scan.
There is no concept of relevancy (e.g. a match score). A record simply matched or not. You are forced to sort on one of your columns.
It is too literal. Even if the search is case insensitive, any extra punctuation can screw it up, or it may match parts of words when you only wanted to match whole words.

So the bottom line is: don't do it! There are smarter approaches to this problem. Probably the only situation in which you would do this is if you had a particular database column holding a limited number of short values and you have it indexed. Such searches should go quickly, and it's very easy to implement this approach.
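
To make the contrast concrete, here is the anti-pattern in SQL, against a hypothetical documents table (the table and column names are invented for illustration):

-- Full-table brute-force scan; there is no relevancy score, so we are
-- forced to sort on a column, and 'work' also matches inside 'network'.
SELECT doc_id, title
FROM   documents
WHERE  LOWER(body) LIKE '%work%'
ORDER  BY title;

An indexed search engine answers the same need with a term lookup instead of a scan; in Solr, for instance, a request such as http://localhost:8983/solr/select?q=work&hl=true returns relevancy-ranked results with highlighting already applied.
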
Including Charts and Graphics in Pentaho Reports (Part 1)

Packt
09 Nov 2009
8 min read
Supported charts

Pentaho Reporting relies on JFreeChart, an open source Java chart library, for charting visualization within reports. From within Report Designer, many chart types are supported. In the chart editor, two areas of properties appear when editing a chart. The first area of properties is related to chart rendering, and the second, tabbed area of properties is related to the data that populates the chart. Following is the screenshot of the chart editor within Pentaho Report Designer:

All chart types receive their data from three general types of datasets. The first type is known as a Category Dataset, where the dataset series and values are grouped by categories. A series is like a sub-group. If the exact category and series appear, the chart will sum the values into a single result. The following table is a simple example of a category dataset:

Category | Series | Sale Price
Store 1 Sales | Cash | $14
Store 1 Sales | Credit | $12
Store 2 Sales | Cash | $100
Store 2 Sales | Credit | $120

Pentaho Reporting builds a Category Dataset using the CategorySetDataCollector. Also available is the PivotCategorySetCollector, which pivots the category and series data. Collector classes implement Pentaho Reporting's Function API.

The second type of dataset is known as an XY Series Dataset, which is a two-dimensional group of values that may be plotted in various forms. In this dataset, the series may be used to draw different lines, and so on. Here is a simple example of an XY series dataset:

Series | Cost of Goods (X) | Sale Price (Y)
Cash | 10 | 14
Credit | 11 | 12
Cash | 92 | 100
Credit | 105 | 120

Note that X is often referred to as the domain, and Y is referred to as the range. Pentaho Reporting builds an XY Series Dataset using the XYSeriesCollector. The XYZSeriesCollector also exists for three-dimensional data.

The third type of dataset is known as a Time Series Dataset, which is a two-dimensional group of values that are plotted based on a time and date. The Time Series Dataset is more like an XY Series than a Category Dataset, as the time scale is displayed in a linear fashion with appropriate distances between the different time references.

Time | Series | Sale Price
May 05, 2009 11:05pm | Cash | $14
June 07, 2009 12:42pm | Credit | $12
June 14, 2009 4:20pm | Cash | $100
June 01, 2009 1:22pm | Credit | $120

Pentaho Reporting builds a Time Series Dataset using the TimeSeriesCollector.

Common chart rendering properties

Most charts share a common set of properties. The following properties are common across most charts; any exceptions are mentioned as part of the specific chart type.

Required Property Group
name: The name of the chart object within the report. This is not displayed during rendering, but must be unique in the report. A default name is generated for each chart added to the report.
data-source: The dataset name for the chart, which is automatically populated with the name of the dataset in the Primary DataSource panel of the chart editor.
no-data-message: The message to display if no data is available to render the chart.

Title Property Group
chart-title: The title of the chart, which is rendered in the report.
chart-title-field: A field representing the chart title.
title-font: The chart title's font family, size, and style.

Options Property Group
horizontal: If set to True, the chart's X and Y axes are rotated horizontally. The default value is set to False.
series-colors: The color in which to render each series.
The defaults for the first three series colors are red, blue, and green.

General Property Group
3-D: If set to True, renders the chart in a 3D perspective. The default value is set to False.
anti-alias: If set to True, renders chart fonts as anti-aliased. The default value is set to True.
bg-color: Sets the background around the chart to the specified color. If not set, defaults to gray.
bg-image: Sets the background of the chart area to the specified image. If not set, the background of the chart area defaults to white. The chart area is the area within the axes of the chart. Supported image types include the PNG, JPG, and GIF file formats.
show-border: If set to True, displays a border around the chart. The default value is set to True.
border-color: Sets the border to the specified color. If not set, defaults to black.
plot-border: If set to False, clears the default rendering value of the chart border.
plot-bg-color: Sets the plot background color to the specified color. If not set, defaults to white.
plot-fg-alpha: Sets the alpha value of the plot foreground colors relative to the plot background. The default value is set to 1.0.
plot-bg-alpha: Sets the alpha value of the plot background color relative to the chart background color. The default value is set to 1.0.

Legend Property Group
show-legend: If set to True, displays the legend for the chart. The default value is set to False.
location: The location of the legend in relation to the chart, which may be set to top, bottom, left, or right. The default location is bottom.
legend-border: If set to True, renders a border around the legend. The default value is set to True.
legend-font: The type of Java font to render the legend labels in.
legend-bg-color: Sets the legend background color. If not set, defaults to white.
legend-font-color: Sets the legend font color. If not set, defaults to black.

Advanced Property Group
dependencyLevel: The dependency level field informs the reporting engine what order the chart should be executed in, relative to other items in the report. This is useful if you are using special functions that may need to execute prior to generating the chart. The default value is set to 0. Negative values execute before 0, and positive values execute after 0.

Common category series rendering properties

The following properties appear in charts that render category information:

Options Property Group
stacked: If set to True, the series values will appear layered on top of one another instead of being displayed relative to one another.
stacked-percent: If set to True, determines the percentages of each series, and renders the bar height based on those percentages. The property stacked must be set to True for this property to have an effect.

General Property Group
gridlines: If set to True, displays category grid lines. This value is set to True by default.

X-Axis Property Group
label-rotation: If set, adjusts the inline item label rotation value. The value should be specified in degrees. If not specified, labels are rendered horizontally. You must have show-labels set to True for this value to be relevant.
date-format: If the item value is a date, a Java date format string may be provided to format the date appropriately. Please see Java's SimpleDateFormat JavaDoc for formatting details.
numeric-format: If the item value is a decimal number, a Java decimal format string may be provided to format the number appropriately. Please see Java's DecimalFormat JavaDoc for formatting details.
text-format: The label format used for displaying category items within the chart. This property is required if you would like to display the category item values. The following parameters may be defined in the format string to access details of the item:
{0}: To access the Series Name detail of an item
{1}: To access the Category detail of an item
{2}: To access the Item value details of an item
To display just the item value, set the format string to "{2}".
x-axis-title: If set, displays a label describing the category axis.
show-labels: If set to True, displays x-axis labels in the chart.
x-axis-label-width: Sets the maximum category label width ratio, which determines the maximum length each category label should render in. This might be useful if you have really long category names.
x-axis-label-rotation: If set, adjusts the category item label rotation value. The value should be specified in degrees. If not specified, labels are rendered horizontally.
x-font: The font to render the category axis title and labels in.

Y-Axis Property Group
y-axis-title: If set, displays a label along the value axis of the chart.
label-rotation: If set, determines the upward angle position of the label, where the value passed into JFreeChart is the mathematical pi divided by the value. Unfortunately, this property is not very flexible, and you may find it difficult to use.
y-tick-interval: The numeric interval value used to separate range ticks in the chart.
y-font: The font to render the range axis title in.
y-sticky-0: If the range includes zero in the axis, making it sticky will force truncation of the axis to zero if set to True. The default value of this property is True.
y-incl-0: If set to True, the range axis will force zero to be included in the axis.
y-min: The minimum value to render in the range axis.
y-max: The maximum value to render in the range axis.
y-tick-font: The font to render the range tick values in.
y-tick-fmt-str: The DecimalFormat string used to render the numeric range tick values.
enable-log-axis: If set to True, displays the y-axis as a logarithmic scale.
log-format: If set to True, presents the logarithmic scale in a human-readable form.
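
Tying this property reference back to the dataset types introduced at the start of the article: a Category Dataset is typically fed by an ordinary SQL query whose columns supply the category and one or more value series. A minimal sketch, reusing the INVENTORY table that the chart examples in Part 2 of this series query (only the column roles matter here):

-- ITEMCATEGORY becomes the category; COST and SALEPRICE become two value series
SELECT
  "INVENTORY"."ITEMCATEGORY",
  "INVENTORY"."COST",
  "INVENTORY"."SALEPRICE"
FROM
  "INVENTORY"
ORDER BY
  "INVENTORY"."ITEMCATEGORY" ASC
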
Including Charts and Graphics in Pentaho Reports (Part 2)

Packt
29 Oct 2009
6 min read
Ring chart

The ring chart is identical to the pie chart, except that it renders as a ring rather than a complete pie. In addition to sharing all the properties of the pie chart, it also defines the following rendering property:

Options Property Group
section-depth: This property defines the percentage of the radius to render the section as. The default value is set to 0.5.

Ring chart example

For this example, simply open the defined pie chart example and select the Ring chart type. Also, set the section-depth to 0.1, in order to generate the following effect:

Multi pie chart

The multi pie chart renders a group of pie charts, based on a category dataset. This meta-chart renders each individual series as a pie chart, broken into individual categories within the individual pie charts. The multi pie chart utilizes the common properties defined above, including the category dataset properties. In addition to the standard set of properties, it also defines the following two properties:

Options Property Group
label-format: This label defines how each item within a chart is rendered. The default value is set to "{0}". The format string may also contain any of the following:
{0}: To render the item name
{1}: To render the item value
{2}: To render the item percentage in relation to the entire pie chart
by-row: This value defaults to True. If set to False, the series and category fields are reversed, and individual charts render series information.

Note that the horizontal, series-colors, stacked, and stacked-percent properties do not apply to this chart type.

Multi pie chart example

This example demonstrates the distribution of purchased item types, based on payment type. To begin, create a new report. You'll reuse the bar chart's SQL query. Now, place a new Chart element into the Report Header. Edit the chart, selecting Multi Pie as the chart type. To configure the dataset for this chart, select ITEMCATEGORY as the category-column. Set the value-columns property to QUANTITY and the series-by-field to PAYMENTTYPE.

Waterfall chart

The waterfall chart displays a unique stacked bar chart that spans categories. This chart is useful when comparing categories to one another. The last category in a waterfall chart normally equals the total of all the other categories; whether it renders that way depends on the dataset, not on the chart rendering. The waterfall chart utilizes the common properties defined above, including the category dataset properties. The stacked property is not available for this chart. There are no additional properties defined for the waterfall chart.

Waterfall chart example

In this example, you'll compare, by type, the quantity of items in your inventory. Normally, the last category would be used to display the total values. The chart will render the data provided with or without a summary series, so you'll just use the example SQL query from the bar chart example. Add a Chart element to the Report Header and select Waterfall as the chart type. Set the category-column to ITEMCATEGORY, the value-columns to QUANTITY, and the series-by-value property to Quantity. Now, apply your changes and preview the results.

Bar line chart

The bar line chart combines the bar and line charts, allowing visualization of trends within categories, along with comparisons. The bar line chart is unique in that it requires two category datasets to populate the chart.
The first dataset populates the bar chart, and the second dataset populates the line chart. The bar line chart utilizes the common properties defined above, including the category dataset properties. This chart also inherits the properties of both the bar chart and the line chart, and has certain additional properties, which are listed in the following table:

Required Property Group
bar-data-source: The name of the first dataset required by the bar line chart, which populates the bars in the chart. This value is automatically populated with the correct name.
line-data-source: The name of the second dataset required by the bar line chart, which populates the lines in the chart. This value is automatically populated with the correct name.

Bar Options Property Group
ctgry-tick-font: Defines the Java font that renders the categories.

Line Options Property Group
line-series-color: Defines the color in which to render each line series.
line-tick-fmt: Specifies the Java DecimalFormat string for rendering the line axis labels.
lines-label-font: Defines the Java font to use when rendering line labels.
line-tick-font: Defines the Java font to use when rendering the line axis labels.

As part of the bar line chart, a second y-axis is defined for the lines. The property group Y2-Axis is available, with properties similar to those of the standard y-axis.

Bar line chart example

To demonstrate the bar line chart, you'll reuse the SQL query from the area chart example. Create a new report, and add a Chart element to the Report Header. Edit the chart, and select Bar Line as the chart type. You'll begin by configuring the first dataset. Set the category-column to ITEMCATEGORY, the value-columns to COST, and the series-by-value property to Cost. To configure the second dataset, set the category-column to ITEMCATEGORY, the value-columns to SALEPRICE, and the series-by-value property to Sale Price. Set the x-axis-label-width to 2.0, and reduce the x-font size to 7. Also, set show-legend to True. You're now ready to preview the bar line chart.

Bubble chart

The bubble chart allows you to view three dimensions of data. The first two dimensions are your traditional X and Y dimensions, also known as domain and range. The third dimension is expressed by the size of the individual bubbles rendered. The bubble chart utilizes the common properties defined above, including the XY series dataset properties. The bubble chart also defines the following properties:

Options Property Group
max-bubble-size: This value defines the diameter of the largest bubble to render. All other bubble sizes are relative to the maximum bubble size. The default value is 0, so this value must be set to a reasonable value for rendering of bubbles to take place. Note that this value is in pixels, not in domain or range values.

The bubble chart defines the following additional dataset property:

Required Property Group
z-value-columns: This is the data source column to use for the Z value, which specifies the bubble diameter relative to the maximum bubble size.

Bubble chart example

In this example, you need to define a three-dimensional SQL query to populate the chart.
You'll use inventory information, and calculate the item category margin:

SELECT
  "INVENTORY"."ITEMCATEGORY",
  "INVENTORY"."ONHAND",
  "INVENTORY"."ONORDER",
  "INVENTORY"."COST",
  "INVENTORY"."SALEPRICE",
  "INVENTORY"."SALEPRICE" - "INVENTORY"."COST" MARGIN
FROM
  "INVENTORY"
ORDER BY
  "INVENTORY"."ITEMCATEGORY" ASC

Now that you have a SQL query to work with, add a Chart element to the Report Header and select Bubble as the chart type. First, you'll populate the correct dataset fields. Set the series-by-field property to ITEMCATEGORY. Now, set the X, Y, and Z value columns to ONHAND, SALEPRICE, and MARGIN. You're now ready to customize the chart rendering. Set the x-title to On Hand, the y-title to Sales Price, the max-bubble-size to 100, and the show-legend property to True. The final result should look like this:

External Tables in Oracle 10g/11g Database: Part 2

Packt
28 Oct 2009
13 min read
Data transformation with External Tables

One of the main uses of External Tables is their support of the ETL process, allowing the user to perform a data load that is transformed to the target format without an intermediate stage table. Let's read an External Table whose contents are:

This data can be loaded with a single command into multiple tables. Let's create several tables with the same structure:

SQL> desc amount_jan
 Name              Null?    Type
 ----------------- -------- ------------
 REGION                     VARCHAR2(16)
 AMOUNT                     NUMBER(3)

Now issue the command to send the data from the External Table to the different tables. Note that each monthly table receives its own month's column (the original listing mistakenly inserted JAN into every table):

INSERT ALL
   INTO AMOUNT_JAN (REGION, AMOUNT) VALUES(COUNTRY, JAN)
   INTO AMOUNT_FEB (REGION, AMOUNT) VALUES(COUNTRY, FEB)
   INTO AMOUNT_MAR (REGION, AMOUNT) VALUES(COUNTRY, MAR)
   INTO AMOUNT_APR (REGION, AMOUNT) VALUES(COUNTRY, APR)
   INTO AMOUNT_MAY (REGION, AMOUNT) VALUES(COUNTRY, MAY)
   INTO AMOUNT_JUN (REGION, AMOUNT) VALUES(COUNTRY, JUN)
   INTO AMOUNT_JUL (REGION, AMOUNT) VALUES(COUNTRY, JUL)
   INTO AMOUNT_AUG (REGION, AMOUNT) VALUES(COUNTRY, AUG)
   INTO AMOUNT_SEP (REGION, AMOUNT) VALUES(COUNTRY, SEP)
   INTO AMOUNT_OCT (REGION, AMOUNT) VALUES(COUNTRY, OCT)
   INTO AMOUNT_NOV (REGION, AMOUNT) VALUES(COUNTRY, NOV)
   INTO AMOUNT_DEC (REGION, AMOUNT) VALUES(COUNTRY, DEC)
SELECT COUNTRY, JAN, FEB, MAR, APR, MAY, JUN, JUL, AUG, SEP, OCT, NOV, DEC
FROM REGION_REVENUE;

In the next example, we will perform a conditional insert into different tables depending on the value of the amount column. We will first create two tables, one for low and another for high amounts:

SQL> create table low_amount(
  2  region varchar2(16),
  3  month number(2),
  4  amount number(3));

Table created.

SQL> create table high_amount as select * from low_amount;

Table created.

Now we can read the External Table and have the data inserted conditionally into one of the two mutually exclusive targets.

INSERT ALL
   WHEN ( JAN <= 500 ) THEN INTO LOW_AMOUNT( REGION, MONTH, AMOUNT) VALUES ( COUNTRY, '01', JAN )
   WHEN ( FEB <= 500 ) THEN INTO LOW_AMOUNT( REGION, MONTH, AMOUNT) VALUES ( COUNTRY, '02', FEB )
   WHEN ( MAR <= 500 ) THEN INTO LOW_AMOUNT( REGION, MONTH, AMOUNT) VALUES ( COUNTRY, '03', MAR )
   WHEN ( APR <= 500 ) THEN INTO LOW_AMOUNT( REGION, MONTH, AMOUNT) VALUES ( COUNTRY, '04', APR )
   WHEN ( MAY <= 500 ) THEN INTO LOW_AMOUNT( REGION, MONTH, AMOUNT) VALUES ( COUNTRY, '05', MAY )
   WHEN ( JUN <= 500 ) THEN INTO LOW_AMOUNT( REGION, MONTH, AMOUNT) VALUES ( COUNTRY, '06', JUN )
   WHEN ( JUL <= 500 ) THEN INTO LOW_AMOUNT( REGION, MONTH, AMOUNT) VALUES ( COUNTRY, '07', JUL )
   WHEN ( AUG <= 500 ) THEN INTO LOW_AMOUNT( REGION, MONTH, AMOUNT) VALUES ( COUNTRY, '08', AUG )
   WHEN ( SEP <= 500 ) THEN INTO LOW_AMOUNT( REGION, MONTH, AMOUNT) VALUES ( COUNTRY, '09', SEP )
   WHEN ( OCT <= 500 ) THEN INTO LOW_AMOUNT( REGION, MONTH, AMOUNT) VALUES ( COUNTRY, '10', OCT )
   WHEN ( NOV <= 500 ) THEN INTO LOW_AMOUNT( REGION, MONTH, AMOUNT) VALUES ( COUNTRY, '11', NOV )
   WHEN ( DEC <= 500 ) THEN INTO LOW_AMOUNT( REGION, MONTH, AMOUNT) VALUES ( COUNTRY, '12', DEC )
   WHEN ( JAN > 500 ) THEN INTO HIGH_AMOUNT( REGION, MONTH, AMOUNT) VALUES ( COUNTRY, '01', JAN )
   WHEN ( FEB > 500 ) THEN INTO HIGH_AMOUNT( REGION, MONTH, AMOUNT) VALUES ( COUNTRY, '02', FEB )
   WHEN ( MAR > 500 ) THEN INTO HIGH_AMOUNT( REGION, MONTH, AMOUNT) VALUES ( COUNTRY, '03', MAR )
   WHEN ( APR > 500 ) THEN INTO HIGH_AMOUNT( REGION, MONTH, AMOUNT) VALUES ( COUNTRY, '04', APR )
   WHEN ( MAY > 500 ) THEN INTO HIGH_AMOUNT( REGION, MONTH, AMOUNT) VALUES ( COUNTRY, '05', MAY )
   WHEN ( JUN > 500 ) THEN INTO HIGH_AMOUNT( REGION, MONTH, AMOUNT) VALUES ( COUNTRY, '06', JUN )
   WHEN ( JUL > 500 ) THEN INTO HIGH_AMOUNT( REGION, MONTH, AMOUNT) VALUES ( COUNTRY, '07', JUL )
   WHEN ( AUG > 500 ) THEN INTO HIGH_AMOUNT( REGION, MONTH, AMOUNT) VALUES ( COUNTRY, '08', AUG )
   WHEN ( SEP > 500 ) THEN INTO HIGH_AMOUNT( REGION, MONTH, AMOUNT) VALUES ( COUNTRY, '09', SEP )
   WHEN ( OCT > 500 ) THEN INTO HIGH_AMOUNT( REGION, MONTH, AMOUNT) VALUES ( COUNTRY, '10', OCT )
   WHEN ( NOV > 500 ) THEN INTO HIGH_AMOUNT( REGION, MONTH, AMOUNT) VALUES ( COUNTRY, '11', NOV )
   WHEN ( DEC > 500 ) THEN INTO HIGH_AMOUNT( REGION, MONTH, AMOUNT) VALUES ( COUNTRY, '12', DEC )
SELECT COUNTRY, JAN, FEB, MAR, APR, MAY, JUN, JUL, AUG, SEP, OCT, NOV, DEC
FROM REGION_REVENUE;

Extending the alert.log analysis with External Tables

Reading the alert.log from the database is a useful technique which can help you to find any outstanding error messages reported in this file.

create table ALERT_LOG (
   text_line varchar2(512)
)
organization external (
   type ORACLE_LOADER
   default directory BACKGROUND_DUMP_DEST
   access parameters(
      records delimited by newline
      nobadfile
      nodiscardfile
      nologfile
   )
   location( 'alert_beta.log')
);

Once the External Table has been created, the alert.log file can be queried just like any other regular table.

SQL> select text_line from alert_log
  2  where text_line like 'ORA-%';

TEXT_LINE
-----------------------------------------------------------------
ORA-1109 signalled during: ALTER DATABASE CLOSE NORMAL...
ORA-00313: open failed for members of log group 1 of thread 1
ORA-00312: online log 1 thread 1: '/u01/oracle/oradata/beta/redo01.log'
ORA-27037: unable to obtain file status
ORA-00313: open failed for members of log group 2 of thread 1
ORA-00312: online log 2 thread 1: '/u01/oracle/oradata/beta/redo02.log'
ORA-27037: unable to obtain file status
ORA-00313: open failed for members of log group 3 of thread 1
ORA-00312: online log 3 thread 1: '/u01/oracle/oradata/beta/redo03.log'
ORA-27037: unable to obtain file status

Querying the alert.log file at this stage is useful just to see the contents of the file and look for basic ORA-% strings. This could also be achieved by using the alert.log link in Enterprise Manager (EM). However, as the file can only be viewed from EM in an interactive mode, you can only rely on the preset alerts. If further automatic work needs to be done, then it is worth building out the alert analysis tool. A temporary table can be used to store the contents of the ALERT_LOG table, along with an extra TIMESTAMP column, so it can be queried in detail in an EM-like manner.

create global temporary table TMP_ALERT_LOG (
   LINE_NO    NUMBER(6),
   TIMESTAMP  DATE,
   TEXT_LINE  VARCHAR2(512)
)
on commit preserve rows;

A bit of PL/SQL programming is necessary so the ALERT_LOG contents can be timestamped and inserted into TMP_ALERT_LOG, enabling further queries:

declare
   cursor alertLogCur is
      select ROWNUM, TEXT_LINE from ALERT_LOG;
   currentDate date;
   testDay     varchar2(10);
begin
   currentDate := sysdate;
   for alertLogInst in alertLogCur loop
      -- fetch the row and determine whether this is a date row
      testDay := substr(alertLogInst.text_line, 1, 3);
      if testDay = 'Sun' or testDay = 'Mon' or testDay = 'Tue' or
         testDay = 'Wed' or testDay = 'Thu' or testDay = 'Fri' or
         testDay = 'Sat' then
         -- a date row sets the current logical record date
         currentDate := to_date( alertLogInst.text_line,
                                 'Dy Mon DD HH24:MI:SS YYYY');
      end if;
      insert into TMP_ALERT_LOG values(
         alertLogInst.rownum,
         currentDate,
         alertLogInst.text_line );
   end loop;
end;
/

As the contents of the alert.log end up in a temporary table, more than one DBA can query it at the same time, or access to it can be restricted. There is no need to manage the purge and maintenance of the table after the session has ended, it can be indexed, and the procedure adds little overhead. Moreover, as this is a temporary object, minimal redo log information is generated. Once the external ALERT_LOG and the temporary TMP_ALERT_LOG tables have been created, it is possible to perform not only filters by date (as provided by Enterprise Manager) but any query against the alert.log file.

SELECT TIMESTAMP, TEXT_LINE
FROM TMP_ALERT_LOG
WHERE TIMESTAMP IN (
   SELECT TIMESTAMP
   FROM TMP_ALERT_LOG
   WHERE TEXT_LINE LIKE 'ORA-%')
AND TIMESTAMP BETWEEN SYSDATE-30 AND SYSDATE
ORDER BY LINE_NO;

Further treatment can be applied to this concept to look for specific error messages, analyze specific time frames, and perform drill-down analysis. The procedure can be extended to read the trace files or any other text file from the database.

Reading the listener.log from the database

One particular extension of the above procedure is to read the listener.log file.
Reading the listener.log from the database
One particular extension of the above procedure is to read the listener.log file. This file has a specific star-delimited field format, which eases reading it by means of the ORACLE_LOADER driver. The format is as follows:

21-JUL-2008 00:39:50 * (CONNECT_DATA=(SID=beta)(CID=(PROGRAM=perl)(HOST=alpha.us.oracle.com)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.2.10)(PORT=8392)) * establish * beta * 0
21-JUL-2008 00:39:56 * (CONNECT_DATA=(SID=beta)(CID=(PROGRAM=perl)(HOST=alpha.us.oracle.com)(USER=oracle))) * (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.2.10)(PORT=8398)) * establish * beta * 0
21-JUL-2008 00:40:16 * service_update * beta * 0
21-JUL-2008 00:41:19 * service_update * beta * 0
21-JUL-2008 00:44:43 * ping * 0

The record layout can be deduced from the data sample:

TIMESTAMP * CONNECT DATA [* PROTOCOL INFO] * EVENT [* SID] * RETURN CODE

As you can see, even though the format is structured, the number of fields may vary from record to record, and this must be taken into account when the External Table is created.

First, let's create a directory object that points to $ORACLE_HOME/network/log. Note that Oracle does not expand environment variables in directory paths, so the actual value of $ORACLE_HOME must be spelled out literally:

create directory NETWORK_LOG_DIR
as '$ORACLE_HOME/network/log';

Now, let's create the External Table itself:

create table LISTENER_LOG (
   TIMESTAMP     date,
   CONNECT_DATA  varchar2(2048),
   PROTOCOL_INFO varchar2(64),
   EVENT         varchar2(64),
   SID           varchar2(64),
   RETURN_CODE   number(5)
)
organization external (
   type ORACLE_LOADER
   default directory NETWORK_LOG_DIR
   access parameters (
      records delimited by NEWLINE
      nobadfile
      nodiscardfile
      nologfile
      fields terminated by "*" LDRTRIM
      reject rows with all null fields
      (
         "TIMESTAMP" char date_format DATE mask "DD-MON-YYYY HH24:MI:SS",
         "CONNECT_DATA",
         "PROTOCOL_INFO",
         "EVENT",
         "SID",
         "RETURN_CODE"
      )
   )
   location ('listener.log')
)
reject limit unlimited;

Only the structure of interest is specified, so several rows will be rejected. As the file is not fully structured, it contains some unformatted information; this is why the bad file and the log file are not meaningful in this context and have been disabled.

Another application of the LISTENER_LOG External Table is usage trend analysis. This query can be issued to detect peak usage hours:

SQL> select to_char(round(TIMESTAMP, 'HH'), 'HH24:MI') HOUR,
  2         lpad('#', count(*), '#') CX
  3  from listener_log
  4  group by round(TIMESTAMP, 'HH')
  5  order by 1;

HOUR  CX
----- ------------------------------------------------
14:00 ###
15:00 ##########################
16:00 ######################
17:00 #####################
18:00 #####################
19:00 ###############

Reading the listener.log this way allows the DBA not only to keep track of listener behavior, but also allows a security administrator to easily spot hacking attempts.
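Before drilling into suspicious clients, it can help to see which instances receive the most connections. The following query is a sketch that relies on the EVENT column holding the value establish for connection records, as shown in the sample above:

SELECT SID, COUNT(*) CONNECTIONS
FROM LISTENER_LOG
WHERE EVENT = 'establish'
GROUP BY SID
ORDER BY CONNECTIONS DESC;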
For example, let's find out who is trying to access the database with sqlplus.exe:

SQL> select timestamp, protocol_info
  2  from listener_log
  3  where connect_data like '%sqlplus.exe%'
  4  /

TIMESTAMP            PROTOCOL_INFO
-------------------- --------------------------------------------------------
01-SEP-2008 14:30:37 (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.2.101)(PORT=3651))
01-SEP-2008 14:31:08 (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.2.101)(PORT=3666))
01-SEP-2008 14:31:35 (ADDRESS=(PROTOCOL=tcp)(HOST=192.168.2.101)(PORT=3681))

Using External Tables to analyze the listener.log not only provides an in-database version of the file for periodic, programmatic analysis of listener behavior, but also makes it possible to determine usage trends and correlate information with the audit team, so that unauthorized connection programs can be spotted easily and quickly.

Further useful applications can be found by reading the listener.log file. Two of the fields must be parsed further to extract information from them, but parsing those fields goes beyond the scope of this article. The structure that the analysis should consider is detailed next:

Connect String
- SID: The database Oracle SID, which is populated if the connection was performed by SID; otherwise it is NULL.
- CID: Contains two subfields, PROGRAM and HOST.
- SERVER: Indicates the connection type, either dedicated or shared.
- SERVICE_NAME: Populated when the connection is performed by a service name instead of a SID.
- COMMAND: The command issued by the user.
- SERVICE: Present only when listener commands are issued.
- FAILOVER_MODE: In Real Application Clusters (RAC) environments, this field is used if the client connected due to a failover; it shows the failover mode used.

Protocol
- PROTOCOL: The protocol used to perform the connection; this will be TCP most of the time.
- HOST: The client's IP address.
- PORT: The port number on the Oracle server used to establish the connection.

Mapping XML files as External Tables
XML has become a de facto information exchange format, which is why Oracle has included the XML Database (XDB) feature since release 9.2.0. However, XDB requires the data to be actually loaded into the database before it can be processed. An External Table allows the user to take a quick look at the contents of an external file prior to performing any further processing. In this example, an External Table is created out of an XML file. The file is read by means of a CLOB column, and XDB functions can then be issued against the external XML file to extract and view data. Let's create the External Table first:

create table EMPLOYEES_XML (
   xmlFile CLOB
)
organization external (
   type ORACLE_LOADER
   default directory EXTTABDIR
   access parameters (
      fields (xmllob char terminated by ',')
      column transforms (xmlFile from lobfile(xmllob))
   )
   location('employees.dat')
)
reject limit unlimited;

The employees.dat file contains the name of the XML file to load as an external CLOB; for the purposes of this demo it holds a single entry, employees.xml. Now the file can be queried from the database as if it were a regular table with a single XML column.
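For instance, the CLOB can be converted to XMLType on the fly and probed with an XPath expression. This is a minimal sketch; the element names (employees, employee, ename) are hypothetical and depend on the actual structure of employees.xml:

SELECT XMLTYPE(e.xmlFile).EXTRACT('/employees/employee[1]/ename/text()').getStringVal() FIRST_NAME
FROM EMPLOYEES_XML e;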
Dynamically changing the external reference
When managing External Tables, there should be an easy way to redefine the external source file, and it is enough to change the External Table's LOCATION property by means of an ALTER TABLE command. A stored program unit is a flexible way to perform this task through dynamically generated DDL. The following procedure, named change_external_table, redefines the LOCATION property:

create or replace procedure change_external_table(
   p_table_name in varchar2,
   p_file_name  in varchar2) is
begin
   -- repoint the External Table to a new source file
   execute immediate
      'alter table ' || p_table_name ||
      ' location (''' || p_file_name || ''')';
exception
   when others then
      -- re-raise with a user-defined error code; SQLCODE itself lies
      -- outside the -20000..-20999 range raise_application_error accepts
      raise_application_error(-20001, sqlerrm);
end;
/

With this procedure in place, the LISTENER_LOG table defined earlier, for example, could be repointed to an archived copy of the log simply by passing the table name and the new file name.

Oracle 11g External Table enhancements
External Tables work the same way in 10g and 11g, so there are no differences when working with these two versions. However, when reading a Data Pump External Table in earlier releases, a single defective row aborts the whole data set read operation; an enhancement in the 11g release prevents the data load from aborting, thus saving reprocessing time.

Summary
Managing data with External Tables is a means not only of mapping external flat files as regular (but limited) tables inside the database, but also a tool to perform administrative tasks more efficiently, such as programmatically processing database log files like the alert.log or the listener.log. It can be used to easily view external XML-formatted files from inside the database without actually loading them, and as a means of unloading data to temporary external storage to exchange data among different Oracle versions. This last feature allows the user to easily build an Oracle Datamart, pre-formatting and summarizing data at the source so it can be directly inserted into the target data warehouse.