
How-To Tutorials - Data

1210 Articles

HBase Administration, Performance Tuning

Packt
21 Aug 2012
8 min read
Setting up Hadoop to spread disk I/O

Modern servers usually have multiple disk devices to provide large storage capacities. These disks are usually configured as RAID arrays as their factory setting. This is good for many cases, but not for Hadoop. The Hadoop slave node stores HDFS data blocks and MapReduce temporary files on its local disks. These local disk operations benefit from using multiple independent disks to spread disk I/O. In this recipe, we will describe how to set up Hadoop to use multiple disks to spread its disk I/O.

Getting ready

We assume you have multiple disks for each DataNode node. These disks are in a JBOD (Just a Bunch Of Disks) or RAID0 configuration. Assume that the disks are mounted at /mnt/d0, /mnt/d1, ..., /mnt/dn, and the user who starts HDFS has write permission on each mount point.

How to do it...

In order to set up Hadoop to spread disk I/O, follow these instructions:

1. On each DataNode node, create directories on each disk for HDFS to store its data blocks:

    hadoop$ mkdir -p /mnt/d0/dfs/data
    hadoop$ mkdir -p /mnt/d1/dfs/data
    ...
    hadoop$ mkdir -p /mnt/dn/dfs/data

2. Add the following code to the HDFS configuration file (hdfs-site.xml):

    hadoop@master1$ vi $HADOOP_HOME/conf/hdfs-site.xml

    <property>
      <name>dfs.data.dir</name>
      <value>/mnt/d0/dfs/data,/mnt/d1/dfs/data,...,/mnt/dn/dfs/data</value>
    </property>

3. Sync the modified hdfs-site.xml file across the cluster:

    hadoop@master1$ for slave in `cat $HADOOP_HOME/conf/slaves`
    do
      rsync -avz $HADOOP_HOME/conf/ $slave:$HADOOP_HOME/conf/
    done

4. Restart HDFS:

    hadoop@master1$ $HADOOP_HOME/bin/stop-dfs.sh
    hadoop@master1$ $HADOOP_HOME/bin/start-dfs.sh

How it works...

We recommend JBOD or RAID0 for the DataNode disks because you don't need the redundancy of RAID: HDFS ensures its data redundancy using replication between nodes, so there is no data loss when a single disk fails.

Which one to choose, JBOD or RAID0? You will theoretically get better performance from a JBOD configuration than from a RAID configuration. This is because, in a RAID configuration, you have to wait for the slowest disk in the array to complete before the entire write operation can complete, which makes the average I/O time equivalent to the slowest disk's I/O time. In a JBOD configuration, operations on a faster disk complete independently of the slower ones, which makes the average I/O time faster than that of the slowest disk. However, enterprise-class RAID cards might make a big difference. You might want to benchmark your JBOD and RAID0 configurations before deciding which one to go with.

For both JBOD and RAID0 configurations, you will have the disks mounted at different paths. The key point here is to set the dfs.data.dir property to all the directories created on each disk. The dfs.data.dir property specifies where the DataNode should store its local blocks. By setting it to multiple comma-separated directories, the DataNode stores its blocks across all the disks in round-robin fashion. This causes Hadoop to efficiently spread disk I/O across all the disks.

Warning: Do not leave blanks between the directory paths in the dfs.data.dir property value, or it won't work as expected. You will need to sync the changes across the cluster and restart HDFS to apply them.
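As a convenience, the following is a hedged shell sketch (not part of the original recipe) that creates the per-disk data directories and prints a comma-separated dfs.data.dir value with no stray blanks; it assumes the data disks are the only mounts matching /mnt/d*:

    # Build the dfs.data.dir value from the mounted data disks (no spaces).
    dirs=""
    for mnt in /mnt/d*; do
      mkdir -p "$mnt/dfs/data"                # per-disk HDFS data directory
      dirs="${dirs:+$dirs,}$mnt/dfs/data"     # append with a comma, never a blank
    done
    echo "$dirs"    # e.g. /mnt/d0/dfs/data,/mnt/d1/dfs/data,/mnt/d2/dfs/data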
There's more...

If you run MapReduce, you might also like to set up MapReduce to spread its disk I/O, as MapReduce stores its temporary files on the TaskTracker's local filesystem:

1. On each TaskTracker node, create directories on each disk for MapReduce to store its intermediate data files:

    hadoop$ mkdir -p /mnt/d0/mapred/local
    hadoop$ mkdir -p /mnt/d1/mapred/local
    ...
    hadoop$ mkdir -p /mnt/dn/mapred/local

2. Add the following to MapReduce's configuration file (mapred-site.xml):

    hadoop@master1$ vi $HADOOP_HOME/conf/mapred-site.xml

    <property>
      <name>mapred.local.dir</name>
      <value>/mnt/d0/mapred/local,/mnt/d1/mapred/local,...,/mnt/dn/mapred/local</value>
    </property>

3. Sync the modified mapred-site.xml file across the cluster and restart MapReduce.

MapReduce generates a lot of temporary files on the TaskTrackers' local disks during its execution. Like HDFS, setting up multiple directories on different disks helps spread MapReduce disk I/O significantly.

Using a network topology script to make Hadoop rack-aware

Hadoop has the concept of "rack awareness". Administrators are able to define the rack of each DataNode in the cluster. Making Hadoop rack-aware is extremely important because:

- Rack awareness prevents data loss
- Rack awareness improves network performance

In this recipe, we will describe how to make Hadoop rack-aware and why it is important.

Getting ready

You will need to know the rack to which each of your slave nodes belongs. Log in to the master node as the user who started Hadoop.

How to do it...

The following steps describe how to make Hadoop rack-aware:

1. Create a topology.sh script and store it under the Hadoop configuration directory. Change the path for topology.data, in line 3, to fit your environment:

    hadoop@master1$ vi $HADOOP_HOME/conf/topology.sh
    while [ $# -gt 0 ] ; do
      nodeArg=$1
      exec< /usr/local/hadoop/current/conf/topology.data
      result=""
      while read line ; do
        ar=( $line )
        if [ "${ar[0]}" = "$nodeArg" ] ; then
          result="${ar[1]}"
        fi
      done
      shift
      if [ -z "$result" ] ; then
        echo -n "/default/rack "
      else
        echo -n "$result "
      fi
    done

Don't forget to set the execute permission on the script file:

    hadoop@master1$ chmod +x $HADOOP_HOME/conf/topology.sh

2. Create a topology.data file, as shown in the following snippet; change the IP addresses and racks to fit your environment:

    hadoop@master1$ vi $HADOOP_HOME/conf/topology.data
    10.161.30.108 /dc1/rack1
    10.166.221.198 /dc1/rack2
    10.160.19.149 /dc1/rack3

3. Add the following to the Hadoop core configuration file (core-site.xml):

    hadoop@master1$ vi $HADOOP_HOME/conf/core-site.xml

    <property>
      <name>topology.script.file.name</name>
      <value>/usr/local/hadoop/current/conf/topology.sh</value>
    </property>

4. Sync the modified files across the cluster and restart HDFS and MapReduce.

5. Make sure HDFS is now rack-aware. If everything works well, you should be able to find something like the following snippet in your NameNode log file:

    2012-03-10 13:43:17,284 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /dc1/rack3/10.160.19.149:50010
    2012-03-10 13:43:17,297 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /dc1/rack1/10.161.30.108:50010
    2012-03-10 13:43:17,429 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /dc1/rack2/10.166.221.198:50010

6. Make sure MapReduce is now rack-aware.
If everything works well, you should be able to find something like the following snippet in your JobTracker log file:

    2012-03-10 13:50:38,341 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /dc1/rack3/ip-10-160-19-149.us-west-1.compute.internal
    2012-03-10 13:50:38,485 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /dc1/rack1/ip-10-161-30-108.us-west-1.compute.internal
    2012-03-10 13:50:38,569 INFO org.apache.hadoop.net.NetworkTopology: Adding a new node: /dc1/rack2/ip-10-166-221-198.us-west-1.compute.internal

How it works...

The following diagram shows the concept of Hadoop rack awareness. Each block of the HDFS files will be replicated to multiple DataNodes to prevent loss of all the data copies due to the failure of one machine. However, if all copies of the data happen to be replicated on DataNodes in the same rack, and that rack fails, all the data copies will be lost. To avoid this, the NameNode needs to know the network topology in order to use that information to make intelligent data replication. As shown in the previous diagram, with the default replication factor of three, two data copies are placed on machines in the same rack, and the other one is put on a machine in a different rack. This ensures that a single rack failure won't result in the loss of all data copies.

Normally, two machines in the same rack have more bandwidth and lower latency between them than two machines in different racks. With the network topology information, Hadoop is able to maximize network performance by reading data from the proper DataNodes: if the data is available on the local machine, Hadoop reads it from there; if not, Hadoop tries a machine in the same rack; and if it is available on neither, the data is read from machines in different racks.

In step 1, we create a topology.sh script. The script takes DNS names as arguments and returns network topology (rack) names as the output. The mapping of DNS names to network topology is provided by the topology.data file, which was created in step 2. If an entry is not found in the topology.data file, the script returns /default/rack as a default rack name. Note that we use IP addresses, and not hostnames, in the topology.data file. There is a known bug where Hadoop does not correctly process hostnames that start with the letters "a" to "f"; check HADOOP-6682 for more details.

In step 3, we set the topology.script.file.name property in core-site.xml, telling Hadoop to invoke topology.sh to resolve DNS names to network topology names. After restarting Hadoop, as shown in the logs of steps 5 and 6, HDFS and MapReduce add the correct rack name as a prefix to the DNS name of the slave nodes. This indicates that the HDFS and MapReduce rack awareness work well with the aforementioned settings.
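As a quick sanity check (a hedged sketch, not part of the original recipe), the topology script can also be invoked by hand with addresses from the sample topology.data; an address that is not listed should fall back to /default/rack:

    hadoop@master1$ $HADOOP_HOME/conf/topology.sh 10.160.19.149 10.161.30.108 10.10.10.10
    /dc1/rack3 /dc1/rack1 /default/rack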

HBase Administration, Performance Tuning, Hadoop

Packt
21 Aug 2012
7 min read

IBM Cognos 10 Business Intelligence

Packt
16 Jul 2012
4 min read
Introducing IBM Cognos 10 BI Cognos Connection

In this recipe we will be exploring Cognos Connection, which is the user interface presented to the user when he/she logs in to IBM Cognos 10 BI for the first time. IBM Cognos 10 BI, once installed and configured, can be accessed through the Web using supported web browsers. For a list of supported web browsers, refer to the Installation and Configuration Guide shipped with the product.

Getting ready

As stated earlier, make sure that IBM Cognos 10 BI is installed and configured. Install and configure the GO Sales and GO Data Warehouse samples. Use the gateway URI to log on to the web interface called Cognos Connection.

How to do it...

To explore Cognos Connection, perform the following steps. Log on to Cognos Connection using the gateway URI, which may be similar to http://<HostName>:<PortNumber>/ibmcognos/cgi-bin/cognos.cgi. Take note of the Cognos Connection interface. It has the GO Sales and GO Data Warehouse samples visible.

Note the blue-colored folder icon shown in the preceding screenshot. It represents metadata model packages that are published to Cognos Connection using the Cognos Framework Manager tool. These packages have objects that represent business data objects, relationships, and calculations, which can be used to author reports and dashboards. Refer to the book IBM Cognos TM1 Cookbook by Packt Publishing to learn how to create metadata model packages.

From the toolbar, click on Launch. This will open a menu showing different studios, each having different functionality, as shown in the following screenshot. We will use Business Insight and Business Insight Advanced, which are the first two choices in the preceding menu. These are the two components used to create and view dashboards. For other options, refer to the corresponding books by the same publisher; for instance, refer to the book IBM Cognos 8 Report Studio Cookbook to learn more about creating and distributing complex reports. Query Studio and Analysis Studio are meant to provide business users with the facility to slice and dice business data themselves. Event Studio is meant to define business situations and corresponding actions.

Coming back to Cognos Connection, note that a yellow-colored folder icon represents a user-defined folder, which may or may not contain other published metadata model packages, reports, dashboards, and other content. In our case, we have a user-defined folder called Samples. This was created when we installed and configured the samples shipped with the product. Click on the New Folder icon on the toolbar to create a user-defined folder. Other options are also visible here, for instance to create a new dashboard.

Click on the user-defined folder, Samples, to view its contents. As shown in the preceding screenshot, it has more such folders, each having its own content. The top part of the pane shows the navigation path. Let's navigate deeper into Models | Business Insight Samples to show some sample dashboards created using IBM Cognos Business Insight, as shown in the following screenshot. Click on one of these links to view the corresponding dashboard. For instance, click on Sales Dashboard (Interactive) to view the dashboard, as shown in the following screenshot. The dashboard can also be opened in the authoring tool, which in this case is IBM Cognos Business Insight, by clicking on the icon on the extreme right in Cognos Connection.
It will show the same result as shown in the preceding screenshot. We will see the Business Insight interface in detail later in this article.

How it works...

Cognos Connection is the primary user interface that the user sees when he/she logs in for the first time. Business data has to be first identified and imported into the metadata model using the Cognos Framework Manager tool. Relationships (inner/outer joins) and calculations are then created, and the resultant metadata model package is published to the IBM Cognos 10 BI Server. This becomes available on Cognos Connection.

Users are given access to appropriate studios on Cognos Connection, according to their needs. Analyses, reports, and dashboards are then created and distributed using one of these studios. The preceding sample used Business Insight, for instance. Later sections in this article will look more into Business Insight and Business Insight Advanced. The next section focuses on the Business Insight interface details from the navigation perspective.

IBM Cognos 10 BI dashboarding components

Packt
16 Jul 2012
7 min read
Exploring the IBM Cognos Business Insight User Interface

In this recipe we will explore the IBM Cognos Business Insight user interface in more detail. We will explore various areas of the UI, each dedicated to performing different actions.

Getting ready

As stated earlier, we will be exploring different sections of Cognos Business Insight. Hence, make sure that the IBM Cognos 10 BI installation is open and the samples are set up properly. We will start the recipe assuming that the IBM Cognos Connection window is already open on the screen.

How to do it...

To explore the IBM Cognos Business Insight user interface, perform the following steps. In the IBM Cognos Connection window, navigate to Business Insight Samples, as shown in the following screenshot. Click on one of the dashboards, for instance Marketing Dashboard, to open the dashboard in Business Insight. Different areas are labeled as shown in the following figure.

The overall layout is termed the Dashboard. The topmost toolbar is called the Application bar. The Application bar contains different icons to manage the dashboard as a whole; for instance, we can create, open, e-mail, share, or save the dashboard using one of its icons. The user can explore different icons on the Application bar by hovering the mouse pointer over them. Hovering displays a tooltip, which has brief but self-explanatory help text.

Similarly, there is a Widget toolbar for every widget, which gets activated when the user clicks on the corresponding widget. When the mouse is focused away from the widget, the Widget toolbar disappears. It has various options, for instance to refresh the widget data, print as PDF, resize to fit content, and so on. It also provides the user with the capability to change the chart type as well as the color palette. All these options have help text associated with them, which is activated on mouse hover.

The Content tab and Content pane show the list of objects available on Cognos Connection. The directory structure on Cognos Connection can be navigated using the Content pane and Content tab, and hence available objects can be added to or removed from the dashboard. Drag-and-drop functionality is provided, as a result of which creating and editing a dashboard is as simple as moving objects between the Dashboard area and Cognos Connection.

The Toolbox tab displays additional widgets. The Slider Filter and Select Value Filter widgets allow the user to filter report content.
The other toolbox widgets allow the user to add more report content to the dashboard, such as HTML content, images, RSS feeds, and rich text.

How it works...

In the preceding section, we have seen the basic areas of Business Insight. More than one user can log on to the IBM Cognos 10 BI server and create various objects on Cognos Connection. These objects include packages, reports, cubes, templates, and statistics, to name a few. These objects can be created using one or more tools available to users. For instance, reports can be created using one of the studios available. Cubes can be created using IBM Cognos TM1 or IBM Cognos Transformer and published on Cognos Connection. Metadata model packages can be created using IBM Cognos Framework Manager and published on Cognos Connection. These objects can then be dragged, dropped, and formatted as standalone objects in Cognos Business Insight, and hence dashboards can be created.

IBM Cognos 10 Business Intelligence

Packt
16 Jul 2012
7 min read

Android Database Programming: Binding to the UI

Packt
07 Jun 2012
6 min read
SimpleCursorAdapters and ListViews

There are two major ways of retrieving data on Android, and each has its own class of ListAdapters, which will then know how to handle and bind the passed-in data. The first way of retrieving data is one that we're very familiar with already – through making queries and obtaining Cursor objects. The subclass of ListAdapters that wraps around Cursors is called CursorAdapter, and in the next section we'll focus on the SimpleCursorAdapter, which is the most straightforward instance of CursorAdapter.

The Cursor points to a subtable of rows containing the results of our query. By iterating through this cursor, we are able to examine the fields of each row. Now we would like to convert each row of the subtable into a corresponding row in our list. The first step in doing this is to set up a ListActivity (a variant of the more common Activity class). As its name suggests, a ListActivity is simply a subclass of the Activity class which comes with methods that allow you to attach ListAdapters. The ListActivity class also allows you to inflate XML layouts which contain list tags. In our example, we will use a very bare-bones XML layout (named list.xml) that only contains a ListView tag, as follows:

    <?xml version="1.0" encoding="utf-8"?>
    <LinearLayout
        android:orientation="vertical"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content" >
        <ListView
            android:id="@android:id/list"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content" />
    </LinearLayout>

This is the first step in setting up what's called a ListView in Android. Similar to how defining a TextView allows you to see a block of text in your Activity, defining a ListView will allow you to interact with a scrollable list of row objects in your Activity. Intuitively, the next question in your mind should be: where do I define how each row actually looks? Not only do you need to define the actual list object somewhere, but each row should have its own layout as well. So, to do this we create a separate list_entry.xml file in our layouts directory.

The example I'm about to use is the one that queries the Contacts content provider and returns a list containing each contact's name, phone number, and phone number type. Thus, each row of my list should contain three TextViews, one for each data field. Subsequently, my list_entry.xml file looks like the following:

    <?xml version="1.0" encoding="utf-8"?>
    <LinearLayout
        android:orientation="vertical"
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:padding="10dip" >
        <TextView
            android:id="@+id/name_entry"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:textSize="28dip" />
        <TextView
            android:id="@+id/number_entry"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:textSize="16dip" />
        <TextView
            android:id="@+id/number_type_entry"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:textColor="#DDD"
            android:textSize="14dip" />
    </LinearLayout>

So we have a vertical LinearLayout which contains three TextViews, each with its own properly defined ID as well as its own aesthetic properties (that is, text size and text color). In terms of setup, this is all we need! Now we just need to create the ListActivity itself, inflate the list.xml layout, and specify the adapter.
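Note that reading from the Contacts content provider also requires the READ_CONTACTS permission; a minimal sketch of the AndroidManifest.xml entry (not shown in this excerpt) would be:

    <!-- Hedged addition: required to query the Contacts provider -->
    <uses-permission android:name="android.permission.READ_CONTACTS" />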
To see how all this is done, let's take a look at the code before breaking it apart piece by piece:

    public class SimpleContactsActivity extends ListActivity {
        @Override
        public void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.list);

            // MAKE QUERY TO CONTACT CONTENTPROVIDER
            String[] projections = new String[] { Phone._ID, Phone.DISPLAY_NAME,
                    Phone.NUMBER, Phone.TYPE };
            Cursor c = getContentResolver().query(Phone.CONTENT_URI, projections,
                    null, null, null);
            startManagingCursor(c);

            // THE DESIRED COLUMNS TO BE BOUND
            String[] columns = new String[] { Phone.DISPLAY_NAME, Phone.NUMBER,
                    Phone.TYPE };
            // THE XML DEFINED VIEWS FOR EACH FIELD TO BE BOUND TO
            int[] to = new int[] { R.id.name_entry, R.id.number_entry,
                    R.id.number_type_entry };

            // CREATE ADAPTER WITH CURSOR POINTING TO DESIRED DATA
            SimpleCursorAdapter cAdapter = new SimpleCursorAdapter(this,
                    R.layout.list_entry, c, columns, to);

            // SET THIS ADAPTER AS YOUR LIST ACTIVITY'S ADAPTER
            this.setListAdapter(cAdapter);
        }
    }

So what's going on here? Well, the first part of the code you should recognize by now – we're simply making a query over the phone's contact list (specifically over the Contact content provider's Phone table) and asking for the contact's name, number, and number type. Next, the SimpleCursorAdapter takes, as two of its parameters, a string array and an integer array, which represent a mapping between Cursor columns and XML layout views. In our case, this is as follows:

    // THE DESIRED COLUMNS TO BE BOUND
    String[] columns = new String[] { Phone.DISPLAY_NAME, Phone.NUMBER, Phone.TYPE };
    // THE XML DEFINED VIEWS FOR EACH FIELD TO BE BOUND TO
    int[] to = new int[] { R.id.name_entry, R.id.number_entry, R.id.number_type_entry };

This is so that the data in the DISPLAY_NAME column will get bound to the TextView with ID name_entry, and so on. Once we have these mappings defined, the next part is to just instantiate the SimpleCursorAdapter, which can be seen in this line:

    // CREATE ADAPTER WITH CURSOR POINTING TO DESIRED DATA
    SimpleCursorAdapter cAdapter = new SimpleCursorAdapter(this, R.layout.list_entry, c, columns, to);

Now, the SimpleCursorAdapter takes five parameters – the first is the Context, which essentially tells the CursorAdapter which parent Activity it needs to inflate and bind the rows to. The next parameter is the ID of the R layout that you defined earlier, which will tell the CursorAdapter what each row should look like and, furthermore, where it can inflate the corresponding Views. Next, we pass in the Cursor, which tells the adapter what the underlying data actually is, and lastly, we pass in the mappings.

Hopefully, the previous code makes sense, and the parameters of SimpleCursorAdapter make sense as well. The result of this Activity can be seen in the following screenshot. Everything looks good, except for the random integers floating around under the phone numbers. Why are there a bunch of 1s, 2s, and 3s at the bottom of each row where the types should be? Well, the phone number types are not returned as Strings but are instead returned as integers. From there, through a simple switch statement, we can easily convert these integers into more descriptive Strings. However, you'll quickly see that with our very simple, straightforward use of the built-in SimpleCursorAdapter class, there was nowhere for us to implement any "special" logic that would allow us to convert such returned integers to Strings.
This is when overriding the SimpleCursorAdapter class becomes necessary, because only then can we have full control over how the Cursor's data is to be displayed in each row. And so, we move onwards to the next section where we see just that.
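Purely as a hedged illustration of that idea (not the book's code), a bindView() override on SimpleCursorAdapter could map the Phone.TYPE integers to labels with a switch, reusing the list_entry.xml IDs defined earlier:

    // Illustrative sketch: a SimpleCursorAdapter subclass that replaces the raw
    // integer phone type with a readable label after the default binding runs.
    import android.content.Context;
    import android.database.Cursor;
    import android.provider.ContactsContract.CommonDataKinds.Phone;
    import android.view.View;
    import android.widget.SimpleCursorAdapter;
    import android.widget.TextView;

    public class ContactsCursorAdapter extends SimpleCursorAdapter {

        public ContactsCursorAdapter(Context context, int layout, Cursor c,
                                     String[] from, int[] to) {
            super(context, layout, c, from, to);
        }

        @Override
        public void bindView(View view, Context context, Cursor cursor) {
            super.bindView(view, context, cursor);
            int type = cursor.getInt(cursor.getColumnIndex(Phone.TYPE));
            TextView typeView = (TextView) view.findViewById(R.id.number_type_entry);
            switch (type) {
                case Phone.TYPE_HOME:   typeView.setText("Home");   break;
                case Phone.TYPE_MOBILE: typeView.setText("Mobile"); break;
                case Phone.TYPE_WORK:   typeView.setText("Work");   break;
                default:                typeView.setText("Other");  break;
            }
        }
    }

The Activity would then construct this adapter with the same five arguments instead of the plain SimpleCursorAdapter.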

PL/SQL: Using Collections

Packt
17 May 2012
18 min read
Collections—an overview

A collection is a homogeneous, single-dimensional structure which constitutes an ordered set of elements of a similar type. Being a homogeneous structure, all elements are of the same data type. The structure of the element contains cells with a subscript. The elements reside in these cells, making the index their location information. The subscript or cell index becomes the identification of an element and is used for its access. The structure of a collection type, SPORT, is shown in the following diagram. Note the subscripts and the elements in them. A new element, GOLF, enters at the last empty location and is represented as SPORT[6].

A collection element can be of any valid SQL data type or a user-defined type. An element of a SQL primitive data type is a scalar value, while an element of a user-defined type is an object type instance. A collection can be used within a PL/SQL program by declaring a PL/SQL variable of the collection type. The local PL/SQL variable can hold the instances of its collection type. Besides, a database column in a table can also be of a schema collection type.

The collections in Oracle are strictly one dimensional. They cannot be realized on two-dimensional coordinates. However, multidimensional arrays can be realized when the collection has an object type or collection type attribute. A collection can be bounded or unbounded. Bounded collections can accommodate a limited number of elements, while unbounded collections have no upper limit for subscripts.

Collections provide an efficient way to organize data in an array or set format while making use of object-oriented features. An instance of a nested table or varray collection type is accessed as an object while the data is still stored in database columns. Collections can be used for data caching in programs and to boost the performance of SQL operations. On dedicated server connections, a session always uses the User Global Area (UGA), a component of the PGA, for collection operations. On the other hand, in shared server mode, the collection operations are still carried out in the UGA; but the UGA is now a part of the System Global Area (SGA), thus indirectly in the SGA. This is because in shared server connections, multiple server processes can affect a session, so the UGA must be allocated out of the SGA.

Categorization

Collections are of two types—persistent and non-persistent. A collection is persistent if it stores the collection structure and elements physically in the database. Contrarily, a non-persistent collection is active for a program only, that is, at most for a session. Apart from the preceding categories, a collection can be realized in three formats, namely associative array, nested table, or varray. This categorization is purely based on their objective and behavioral properties in a PL/SQL program. The following diagram combines the abstract and physical classification of collections.

We will take a quick tour of these collection types now and discuss them in detail in the coming sections:

- Associative array (index-by table): This is the simplest form of non-persistent, unbounded collections. As a non-persistent collection, it cannot be stored in the database; it is available within a PL/SQL block only. The collection structure and data of an associative array cannot be retained once the program is completed. Initially, during the days of Oracle 7, it was known as PL/SQL tables. Later, the Oracle 8 version released it as index-by tables, as they used an index to identify an element.
- Nested table: This is a persistent form of unbounded collection which can be created in the database as well as in a PL/SQL block.
- Varray (variable-size array): This is a persistent but bounded form of collection which can be created in the database as well as in PL/SQL. Similar to a nested table, a varray is also a unidimensional, homogeneous collection. The collection size and storage scheme are the factors which differentiate varrays from nested tables. Unlike a nested table, a varray can accommodate only a defined (fixed) number of elements.

Selecting an appropriate collection type

Here are a few guidelines to decide upon the appropriate usage of collection types in programs.

Use of associative arrays is required when:

- You have to temporarily cache the program data in an array format for lookup purposes
- You need string subscripts for the collection elements (note that negative subscripts are supported, too)
- You need to map hash tables from the client to the database

Use of nested tables is preferred when:

- You have to store data as sets in the database; database columns of a nested table type can be declared to hold the data persistently
- You perform major array operations, such as insertion and deletion, on a large volume of data

Use of varrays is preferred when:

- You have to store a calculated or predefined volume of data in the database; a varray offers limited and defined storage of rows in a collection
- The order of the elements has to be preserved

Associative arrays

Associative arrays are analogous to conventional arrays or lists which can be defined within a PL/SQL program only. Neither the array structure nor the data can be stored in the database. An associative array can hold elements of a similar type in a key-value structure without any upper bound on the array. Each cell of the array is distinguished by its subscript, index, or cell number. The index can be a number or a string.

Associative arrays were first introduced in the Oracle 7 release as PL/SQL tables to signify their usage within the scope of a PL/SQL block. The Oracle 8 release identified the PL/SQL table as the index-by table due to its structure as an index-value pair. The Oracle 10g release recognized the behavior of index-by tables as arrays and renamed them associative arrays due to the association of an index with an array. The following diagram explains the physical lookup structure of an associative array.

Associative arrays follow this syntax for declaration in a PL/SQL declare block:

    TYPE [COLL NAME] IS TABLE OF [ELEMENT DATA TYPE] NOT NULL
      INDEX BY [INDEX DATA TYPE]

In the preceding syntax, the index type signifies the data type of the array subscript. RAW, NUMBER, LONG-RAW, ROWID, and CHAR are the unsupported index data types. The suited index types are BINARY_INTEGER, PLS_INTEGER, POSITIVE, NATURAL, SIGNTYPE, or VARCHAR2.
The element's data type can be one of the following:

- PL/SQL scalar data type: NUMBER (along with its subtypes), VARCHAR2 (and its subtypes), DATE, BLOB, CLOB, or BOOLEAN
- Inferred data: the data type inherited from a table column, cursor expression, or predefined package variable
- User-defined type: a user-defined object type or collection type

For illustration, the following are valid declarations of associative arrays in a PL/SQL block:

    /*Array of CLOB data*/
    TYPE clob_t IS TABLE OF CLOB INDEX BY PLS_INTEGER;

    /*Array of employee ids indexed by the employee names*/
    TYPE empno_t IS TABLE OF employees.empno%TYPE NOT NULL
      INDEX BY employees.ename%TYPE;

The following PL/SQL program declares an associative array type in a PL/SQL block. Note that the subscript of the array is of a string type and it stores the number of days in a quarter. This code demonstrates the declaration of an array, the assignment of an element in each cell, and printing them. Note that the program uses the FIRST and NEXT collection methods to display the array elements. The collection methods are covered in detail in the PL/SQL collection methods section:

    /*Enable SERVEROUTPUT to display the output*/
    SET SERVEROUTPUT ON

    /*Start the PL/SQL block*/
    DECLARE
      /*Declare a collection type associative array and its variable*/
      TYPE string_asc_arr_t IS TABLE OF NUMBER INDEX BY VARCHAR2(10);
      l_str string_asc_arr_t;
      l_idx VARCHAR2(50);
    BEGIN
      /*Assign the total count of days in each quarter against each cell*/
      l_str('JAN-MAR') := 90;
      l_str('APR-JUN') := 91;
      l_str('JUL-SEP') := 92;
      l_str('OCT-DEC') := 93;
      l_idx := l_str.FIRST;
      WHILE (l_idx IS NOT NULL)
      LOOP
        DBMS_OUTPUT.PUT_LINE('Value at index '||l_idx||' is '||l_str(l_idx));
        l_idx := l_str.NEXT(l_idx);
      END LOOP;
    END;
    /

    Value at index APR-JUN is 91
    Value at index JAN-MAR is 90
    Value at index JUL-SEP is 92
    Value at index OCT-DEC is 93

    PL/SQL procedure successfully completed.

In the preceding block, note the string-indexed array. A string-indexed array considerably improves performance by using indexed organization of the array values.

In the last block, we noticed the explicit assignment of data. In the following program, we will try to populate the array automatically. The following PL/SQL block declares an associative array to hold the ASCII values of the numbers 1 to 100:

    /*Enable SERVEROUTPUT to display the output*/
    SET SERVEROUTPUT ON

    /*Start the PL/SQL Block*/
    DECLARE
      /*Declare an array of string indexed by numeric subscripts*/
      TYPE ASCII_VALUE_T IS TABLE OF VARCHAR2(12) INDEX BY PLS_INTEGER;
      L_GET_ASCII ASCII_VALUE_T;
    BEGIN
      /*Insert the values through a FOR loop*/
      FOR I IN 1..100
      LOOP
        L_GET_ASCII(I) := ASCII(I);
      END LOOP;
      /*Display the values randomly*/
      DBMS_OUTPUT.PUT_LINE(L_GET_ASCII(5));
      DBMS_OUTPUT.PUT_LINE(L_GET_ASCII(15));
      DBMS_OUTPUT.PUT_LINE(L_GET_ASCII(75));
    END;
    /

    53
    49
    55

    PL/SQL procedure successfully completed.

The salient features of associative arrays are as follows:

- An associative array can exist as a sparse or empty collection
- Being a non-persistent collection, it cannot participate in DML transactions
- It can be passed as an argument to other local subprograms within the same block
- Sorting of an associative array depends on the NLS_SORT parameter
- An associative array declared in a package specification behaves as a session-persistent array

Nested tables

Nested tables are a persistent form of collections which can be created in the database as well as in PL/SQL.
It is an unbounded collection where the index or subscript is implicitly maintained by the Oracle server during data retrieval. Oracle automatically marks the minimum subscript as 1 and relatively handles others. As there is no upper limit defined for a nested table, its size can grow dynamically. Though not an index-value pair structure, a nested table can be accessed like an array in a PL/SQL block. A nested table is initially a dense collection but it might become sparse due to delete operations on the collection cells. Dense collection is the one which is tightly populated. That means, there exists no empty cells between the lower and upper indexes of the collection. Sparse collections can have empty cells between the first and the last cell of the collection. A dense collection may get sparse by performing the "delete" operations. When a nested table is declared in a PL/SQL program, they behave as a one-dimensional array without any index type or upper limit specification. A nested table defined in a database exists as a valid schema object type. It can be either used in a PL/SQL block to declare a PL/SQL variable for temporarily holding program data or a database column of particular nested table type can be included in a table, which can persistently store the data in the database. A nested table type column in a table resembles a table within a table, but Oracle draws an out- of-line storage table to hold the nested table data. This scenario is illustrated in the following diagram: Whenever a database column of nested table type is created in a table (referred to as parent table), Oracle creates a storage table with the same storage options as that of the parent table. The storage table created by Oracle in the same segment carries the name as specified in the NESTED TABLE STORE AS clause during creation of the parent table. Whenever a row is created in the parent table, the following actions are performed by the Oracle server: A unique identifier is generated to distinguish the nested table instances of different parent rows, for the parent row The instance of the nested table is created in the storage table alongside the unique identifier of the parent row The Oracle server takes care of these nested table operations. For the programmer or user, the whole process is hidden and appears as a normal "insert" operation. A nested table definition in PL/SQL follows the following syntax: DECLARE TYPE type_name IS TABLE OF element_type [NOT NULL]; In the preceding syntax, element_type is a primitive data type or a user-defined type, but not as a REF CURSOR type. In a database, a nested table can be defined using the following syntax: CREATE [OR REPLACE] TYPE type_name IS TABLE OF [element_type] [NOT NULL]; / In the preceding syntax, [element_type] can be a SQL supported scalar data type, a database object type, or a REF object type. Unsupported element types are BOOLEAN, LONG, LONG-RAW, NATURAL, NATURALN, POSITIVE, POSITIVEN, REF CURSOR, SIGNTYPE, STRING, PLS_INTEGER, SIMPLE_INTEGER, BINARY_INTEGER and all other non-SQL supported data types. If the size of the element type of a database collection type has to be increased, follow this syntax: ALTER TYPE [type name] MODIFY ELEMENT TYPE [modified element type] [CASCADE | INVALIDATE]; The keywords, CASCADE or INVALIDATE, decide whether the collection modification has to invalidate the dependents or the changes that have to be cascaded across the dependents. 
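As a small, hedged illustration of this ALTER TYPE syntax (the type name and precisions below are hypothetical and used only for this sketch):

/*Hypothetical nested table type of two-decimal amounts*/
CREATE OR REPLACE TYPE amount_list_t AS TABLE OF NUMBER(8,2);
/

/*Widen the element type and cascade the change to dependent objects*/
ALTER TYPE amount_list_t MODIFY ELEMENT TYPE NUMBER(10,2) CASCADE;

Choosing INVALIDATE instead of CASCADE would simply mark the dependents invalid, leaving them to be recompiled on their next use.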
The nested table can be dropped from the database using the DROP command, as shown in the following syntax (note that the FORCE keyword drops the type irrespective of its dependents):

DROP TYPE [collection name] [FORCE]

Nested table collection type as a database object

We will go through the following illustration to understand the behavior of a nested table when it is created as a database collection type:

/*Create the nested table in the database*/
SQL> CREATE TYPE NUM_NEST_T AS TABLE OF NUMBER;
  /
Type created.

The nested table type, NUM_NEST_T, is now created in the database. Its metadata information can be queried from the USER_TYPES and USER_COLL_TYPES dictionary views:

SELECT type_name, typecode, type_oid
FROM USER_TYPES
WHERE type_name = 'NUM_NEST_T';

TYPE_NAME       TYPECODE        TYPE_OID
--------------- --------------- --------------------------------
NUM_NEST_T      COLLECTION      96DE421E47114638A9F5617CE735731A

Note that the TYPECODE value shows the type of the object in the database and differentiates collection types from user-defined object types:

SELECT type_name, coll_type, elem_type_name
FROM user_coll_types
WHERE type_name = 'NUM_NEST_T';

TYPE_NAME       COLL_TYPE  ELEM_TYPE_NAME
--------------- ---------- --------------------
NUM_NEST_T      TABLE      NUMBER

Once the collection type has been successfully created in the database, it can be used to specify the type of a database column in a table. The CREATE TABLE statement in the following code snippet declares a column of the NUM_NEST_T nested table type in the parent table, TAB_USE_NT_COL. The NESTED TABLE [Column] STORE AS [Storage table] clause specifies the storage table for the nested table type column. A separate table for the nested table column, NUM, ensures its out-of-line storage.

SQL> CREATE TABLE TAB_USE_NT_COL
       (ID NUMBER,
        NUM NUM_NEST_T)
     NESTED TABLE NUM STORE AS NESTED_NUM_ID;
Table created.

A nested table collection type in PL/SQL

In PL/SQL, a nested table can be declared and defined in the declaration section of the block as a local collection type. As a nested table follows object orientation, a PL/SQL variable of the nested table type must be initialized. The Oracle server raises the exception ORA-06531: Reference to uninitialized collection if an uninitialized nested table type variable is encountered during block execution. As the nested table collection type has been declared within the PL/SQL block, its scope, visibility, and life are limited to the execution of the PL/SQL block only.

The following PL/SQL block declares a nested table. Observe the scope and visibility of the collection variable. Note that the COUNT method has been used to display the array elements.

/*Enable the SERVEROUTPUT to display the results*/
SET SERVEROUTPUT ON

/*Start the PL/SQL block*/
DECLARE
  /*Declare a local nested table collection type*/
  TYPE LOC_NUM_NEST_T IS TABLE OF NUMBER;
  L_LOCAL_NT LOC_NUM_NEST_T := LOC_NUM_NEST_T (10,20,30);
BEGIN
  /*Use FOR loop to parse the array and print the elements*/
  FOR I IN 1..L_LOCAL_NT.COUNT
  LOOP
    DBMS_OUTPUT.PUT_LINE('Printing '||i||' element: '||L_LOCAL_NT(I));
  END LOOP;
END;
/

Printing 1 element: 10
Printing 2 element: 20
Printing 3 element: 30

PL/SQL procedure successfully completed.

Additional features of a nested table

In the earlier sections, we saw the operational methodology of a nested table. We will now focus on the nested table's metadata. Furthermore, we will demonstrate a peculiar behavior of the nested table for "delete" operations.
Oracle's USER_NESTED_TABLES and USER_NESTED_TABLE_COLS data dictionary views maintain the relationship information of the parent and the nested tables. These dictionary views are populated only when a database column of a nested table collection type is included in a table.

The USER_NESTED_TABLES static view maintains the information about the mapping of a nested table collection type with its parent table. The structure of the dictionary view is as follows:

SQL> desc USER_NESTED_TABLES
 Name                    Null?    Type
 ----------------------- -------- ---------------
 TABLE_NAME                       VARCHAR2(30)
 TABLE_TYPE_OWNER                 VARCHAR2(30)
 TABLE_TYPE_NAME                  VARCHAR2(30)
 PARENT_TABLE_NAME                VARCHAR2(30)
 PARENT_TABLE_COLUMN              VARCHAR2(4000)
 STORAGE_SPEC                     VARCHAR2(30)
 RETURN_TYPE                      VARCHAR2(20)
 ELEMENT_SUBSTITUTABLE            VARCHAR2(25)

Let us query the nested table relationship properties for the TAB_USE_NT_COL table from the preceding view:

SELECT parent_table_column, table_name, return_type, storage_spec
FROM user_nested_tables
WHERE parent_table_name='TAB_USE_NT_COL'
/

PARENT_TAB TABLE_NAME     RETURN_TYPE  STORAGE_SPEC
----------------------------------------------------------------------
NUM        NESTED_NUM_ID  VALUE        DEFAULT

In the preceding view query, RETURN_TYPE specifies the return type of the collection. It can be VALUE (in this case) or LOCATOR. Another column, STORAGE_SPEC, signifies the storage scheme used for the storage of the nested table, which can be either USER_SPECIFIED or DEFAULT (in this case).

The USER_NESTED_TABLE_COLS view maintains the information about the collection attributes contained in the nested tables:

SQL> desc USER_NESTED_TABLE_COLS
 Name                    Null?    Type
 ----------------------- -------- ---------------
 TABLE_NAME              NOT NULL VARCHAR2(30)
 COLUMN_NAME             NOT NULL VARCHAR2(30)
 DATA_TYPE                        VARCHAR2(106)
 DATA_TYPE_MOD                    VARCHAR2(3)
 DATA_TYPE_OWNER                  VARCHAR2(30)
 DATA_LENGTH             NOT NULL NUMBER
 DATA_PRECISION                   NUMBER
 DATA_SCALE                       NUMBER
 NULLABLE                         VARCHAR2(1)
 COLUMN_ID                        NUMBER
 DEFAULT_LENGTH                   NUMBER
 DATA_DEFAULT                     LONG
 NUM_DISTINCT                     NUMBER
 LOW_VALUE                        RAW(32)
 HIGH_VALUE                       RAW(32)
 DENSITY                          NUMBER
 NUM_NULLS                        NUMBER
 NUM_BUCKETS                      NUMBER
 LAST_ANALYZED                    DATE
 SAMPLE_SIZE                      NUMBER
 CHARACTER_SET_NAME               VARCHAR2(44)
 CHAR_COL_DECL_LENGTH             NUMBER
 GLOBAL_STATS                     VARCHAR2(3)
 USER_STATS                       VARCHAR2(3)
 AVG_COL_LEN                      NUMBER
 CHAR_LENGTH                      NUMBER
 CHAR_USED                        VARCHAR2(1)
 V80_FMT_IMAGE                    VARCHAR2(3)
 DATA_UPGRADED                    VARCHAR2(3)
 HIDDEN_COLUMN                    VARCHAR2(3)
 VIRTUAL_COLUMN                   VARCHAR2(3)
 SEGMENT_COLUMN_ID                NUMBER
 INTERNAL_COLUMN_ID      NOT NULL NUMBER
 HISTOGRAM                        VARCHAR2(15)
 QUALIFIED_COL_NAME               VARCHAR2(4000)

We will now query the nested storage table in the preceding dictionary view to list all its attributes:

SELECT COLUMN_NAME, DATA_TYPE, DATA_LENGTH, HIDDEN_COLUMN
FROM user_nested_table_cols
WHERE table_name='NESTED_NUM_ID'
/

COLUMN_NAME                    DATA_TYP   DATA_LENGTH HID
------------------------------ ---------- ----------- -----
NESTED_TABLE_ID                RAW                 16 YES
COLUMN_VALUE                   NUMBER              22 NO

We observe that although the nested table holds only number elements, the view shows two columns. The COLUMN_VALUE attribute is the default pseudo column of the nested table, as there are no "named" attributes in the collection structure. The other attribute, NESTED_TABLE_ID, is a hidden, unique, 16-byte system-generated raw hash code which latently stores the parent row identifier alongside the nested table instance to distinguish the parent row association.

If an element is deleted from the nested table, the collection is rendered sparse.
This implies that once an index is deleted from the collection structure, the collection doesn't restructure itself by shifting the cells in a forward direction. Let us check out the sparse behavior in the following example. The following PL/SQL block declares a local nested table and initializes it with a constructor. We will delete the first element and print the collection again. The system raises the NO_DATA_FOUND exception when we query the element at index 1 in the collection:

/*Enable the SERVEROUTPUT to display the block messages*/
SQL> SET SERVEROUTPUT ON

/*Start the PL/SQL block*/
SQL> DECLARE
  /*Declare the local nested table collection*/
  TYPE coll_method_demo_t IS TABLE OF NUMBER;
  /*Declare a collection variable and initialize it*/
  L_ARRAY coll_method_demo_t := coll_method_demo_t (10,20,30,40,50);
BEGIN
  /*Display element at index 1*/
  DBMS_OUTPUT.PUT_LINE('Element at index 1 before deletion:'||l_array(1));
  /*Delete the 1st element from the collection*/
  L_ARRAY.DELETE(1);
  /*Display element at index 1*/
  DBMS_OUTPUT.PUT_LINE('Element at index 1 after deletion:'||l_array(1));
END;
/

Element at index 1 before deletion:10
DECLARE
*
ERROR at line 1:
ORA-01403: no data found
ORA-06512: at line 15
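A hedged way to avoid the ORA-01403 error shown above is to test the subscript with the EXISTS collection method before referencing it; the following sketch (not part of the original example) reworks the final display using that check. EXISTS is one of the collection methods covered later in the PL/SQL collection methods section.

/*Guarding a reference into a possibly sparse nested table*/
DECLARE
  TYPE coll_method_demo_t IS TABLE OF NUMBER;
  L_ARRAY coll_method_demo_t := coll_method_demo_t (10,20,30,40,50);
BEGIN
  L_ARRAY.DELETE(1);
  /*EXISTS returns FALSE for a deleted index, so no exception is raised*/
  IF L_ARRAY.EXISTS(1) THEN
    DBMS_OUTPUT.PUT_LINE('Element at index 1: '||L_ARRAY(1));
  ELSE
    DBMS_OUTPUT.PUT_LINE('Index 1 has been deleted');
  END IF;
END;
/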

Troubleshooting with Intuit Quickbooks

Packt
08 May 2012
12 min read
The following table is the Recipe Reference Card for the keyboard shortcuts included in this article: Find Ctrl+F Delete the line Ctrl+Del Save and close Alt+A Advance to the next field Tab Regress to the previous field Shift+Tab Customize the report Alt+M   Clearing stale undeposited funds When the Undeposited Funds window includes customer payments, which you know have already been deposited, recorded, and reconciled, the Income or Unearned Income and Undeposited Funds accounts are overstated. You can use this recipe to efficiently combine the cleared deposit with the undeposited funds. Getting ready Verify that the appropriate bank account is reconciled for the period containing the stale undeposited funds. If not, this can be resolved simply by deleting the recorded deposit and recording the deposit of the undeposited funds. How to do it... With the Find tool (Edit | Find or Ctrl+F), use the Amount filter, along with the Date filter, if necessary, to bring up both the deposit and the customer payment already recorded. Open up the deposit, and click on the Payments button. Check off the appropriate transaction, and click on OK to add this to the deposit. Click on the line item for the deposit originally on the screen, that is, the duplicate of the payment that you just added to this screen. Click on Edit | Delete Line (Ctrl+Del), and then Save and Close (Alt+A). How it works… The only way to directly delete an item added to Undeposited Funds is to delete the underlying customer payment or sales receipt. However, this is not advisable, because these transactions are typically accurate representations of a real-world activity. Additionally, when the deposit was recorded, the related account duplicated the income or customer deposit from the original invoice or sales receipt. Therefore, the deposit itself needs to be modified to simultaneously remove the duplicate offset account, and resolve the outstanding Undeposited Funds item. There's more… For a printable and memorizable list of all outstanding items in the Undeposited Funds account, open the Undeposited Funds ledger. Click on Customize Report | Filters | Choose Filter | Cleared | No. On the Header/Footer tab, in the Report Title field, enter Undeposited Funds, and click on OK. Adjusting cash basis receivables or payables balances Does your cash basis balance sheet show balance in your receivable or payables accounts? This recipe will take you through the two-step process of resolving these items: Locate them Adjust them Getting ready To find out which customers and vendors are responsible for your cash basis accounts receivable or accounts payable balances, respectively, run the following report: Go to Reports | Custom Reports | Summary. Set Dates to All. If you desire a cut-off date, leave the From field blank, and enter your cut-off date in the To field. Set Report Basis to Cash. Set Display rows by to Customer or Vendor. Go to Advanced, and set Display Rows to Non-zero. Go to the Filters tab, and set Account to Accounts Receivable (or Accounts Payable). Go to the Header/Footer tab, and set Report Title to Cash Basis A/R by Customer or Cash Basis A/P by Vendor. The report total matches your balance sheet account total for the same cut-off date. How to do it... Double-click one of the account balances to reveal the detail, and remove the columns irrelevant to this effort. Scan the activity for patterns, unusual items, or clues about the cash basis balance. 
Gain an understanding of the transaction, and resolve the items by changing the accounts, changing a date, making a journal entry, noting that no adjustment is needed, or other action. Refresh the report, and confirm either a zero balance or an appropriate cash basis balance. How it works… Some of the most likely patterns to scan for include the following: A Balance column, which keeps returning to 0.00, and then stops returning to 0.00: An unusual transaction Type: A recurring figure in the Balance column. This lets you know that at least one culprit occurred before the recurrence began: A zero balance right before a transaction, which is also the aggregate balance sheet account balance: There's more… The most common reasons for a cash basis receivables or payable balance are: Payment date precedes bill or invoice date, and the report cut-off is between both the dates Offset account is a balance sheet account, and the bill or the invoice is unpaid Writing off stale receivables Making a journal entry to write off stale A/R in bulk is easy, but this makes it difficult to trace through the accounting records. The possible uses for more precise information include producing a trail for taxing authorities, internal or independent auditors, or banks. A separate spreadsheet may suffice, but it may be difficult to coordinate. This recipe focuses on straightforward ways to write off these balances in a detailed, but effi cient fashion. Getting ready Have your criteria ready for which invoices are to be written off. The A/R Aging Summary report may help (Reports | Customers & Receivables | A/R Aging Summary). To further analyze your oldest receivables: Go to Customize Report | Age through how many days?, and type 360. Go to Filters | Choose Filter | Aging | >=, and type 90. Go to Header/Footer, add the text: Older Than 90 days to the Report Title, and click on OK. How to do it... To write off a stale receivable: Go to the Customers page or Home Page | Receive Payments. From the Received From drop-down box, select the appropriate customer. If applicable, select the particular job instead. In the Date field, enter the effective date of the writeoff. Click on the Discount & Credits button. In the Discount and Credits pop-up window, fill in the amount to be written off, the writeoff account (generally Bad Debt), and the same class from the original invoice, if class tracking is used in the file: The completed screen should have 0.00 in the Amount and Payment fields. Include the amount written off in the Discount field: If an allowance for doubtful accounts is used against bad debt for writeoffs, then set up the Allowance account as an Accounts Receivable account type , and select the Allowance account from the drop-down box at the top of the Customer Payments screen. A set of journal entries can be used later, to remove the amounts from both Accounts Receivable and the Allowance account. There's more… This is the same procedure that can be used to record discounts, but the key is that an income or expense account must always be selected. This procedure is not appropriate for a balance sheet account to be selected, such as debiting a liability account while crediting A/R, or debiting the Allowance account while crediting A/R. This will cause a cash basis balance sheet report to be out of balance. If that combination of debits and credits is essential, then use a journal entry instead. Then, apply the journal entry to the original invoice, by opening the invoice, and clicking the Apply Credits button. 
When this recipe is used to write off receivables, Act. Revenue is reduced in the Job Profitability Summary report , and there is no effect on the Item Profitability Summary report. The same reporting results are attained if a journal entry is used to debit Bad Debt Expense and credit Accounts Receivable. If a Credit Memo is used instead, Act. Revenue is reduced in the Job Profitability Summary report as well as the Item Profitability Summary report. In order to increase the Act. Cost column in the Job Profitability Summary report instead, use the Write Checks screen in an unusual fashion: on the Items tab, use an Other Charge item called Bad Debt or Writeoffs. When you create this item, link it to the Bad Debt Expense account. On the Write Checks screen, be sure to enter the Customer:Job name as well as the writeoff amount. On the Expenses tab, select Accounts Receivable, and enter the writeoff amount as a negative number, so that the total amount of the check equals 0. Be sure that the check bears no check number, and clear it in the next bank reconciliation. This technique causes both the Job Profitability and Item Profitability reports to show the transaction as an expense, rather than as a reduction of revenue. It works because QuickBooks includes the Write Check transactions in the Act. Cost column of these reports. Writing off stale payables Making a journal entry to write off stale A/P in bulk is easy, but makes it difficult to trace through the accounting records. Possible uses for more precise information include producing a trail for taxing authorities, internal or independent auditors, or banks. A separate spreadsheet may suffice, but may be difficult to coordinate. This recipe focuses on straightforward ways to write off these balances in a detailed but efficient fashion. Getting ready Have your criteria ready for which bills are to be written off. The A/P Aging Summary report may help (Reports | Vendors & Payables | A/P Aging Summary). To further analyze your oldest payables: Go to Customize Report | Age through how many days?, and type 360. Go to Filters | Choose Filter | Aging | >=, and type 90. Go to Header/Footer, add the text: Older Than 90 days to the Report Title, and click on OK. How to do it... Go to Vendors or Home Page | Pay Bills. Consider using the Filter by drop-down list to only show bills from a particular vendor, and consider using the Sort by drop-down list to organize the payables list by Due Date. For one single vendor, check off the fi rst bill to be written off. Click on the Set Discount button . In the Discount and Credits pop-up window, fi ll in the amount to be written off, the writeoff account (generally the same expense account as the original bill), and the same class from the original bill, if class tracking is used in the file: Click on Done, and proceed to the next bill for the same vendor. Make sure the Payment Date field is the effective date of the write off. The completed screen should have 0.00 in the Amt. to Pay field. Include the amount written off in the Disc. Used field: When the writeoffs for that vendor are complete, click on Pay Selected Bills, followed by Pay More Bills for additional writeoffs. There's more… The advantage of this recipe is that the transaction is created and applied to the bill in a single step. However, the drawback is that it does not appear in the Job Profitability Summary or the Item Profitability Summary reports. 
For that to occur, create a vendor credit instead, by using the Enter Bills screen, and clicking on the Credit button. Then, use the Items tab to record the credit, using the same item that was used in the original bill. Additionally, use the Customer:Job field to apply the credit to a particular job. For a partial writeoff, after the Discount and Credits window is closed, be sure to manually input 0.00 into the Amt. to Pay field. The default is to include the remaining balance in that field, and this recipe assumes that the current action is only to record writeoffs, not payments to vendors. Balancing the balance sheet How can a balance sheet get out of balance in a software program? If you're reading this recipe, you may have already seen for yourself that the impossible can happen. The following is a procedure to root out the transaction which is causing this phenomenon. Getting ready A balance sheet prepared on the cash basis can be out of balance if certain transactions were saved, for example if the Discount feature was used with a balance sheet account. How to do it... Open a Balance Sheet Summary report. Click on the Customize Report button and in the Dates drop-down box, select All. If the report is on the accrual basis, change the Report Basis to Cash. In the Display columns by drop-down box, change the selection to Year, and click on OK. Look at the balance sheet, and identify the earliest year in which the balance sheet is out of balance. Click on the Customize Report button . In the From and To fields, enter the beginning and ending dates of the year identified in the previous step. In the Display columns by drop-down box, change the selection to Month, and click on OK. Look at the balance sheet, and identify the earliest month in which the balance sheet is out of balance. Click on the Customize Report button . In the From and To fields, enter the beginning and ending dates of the month identified in the previous step. In the Display columns by drop-down box , change the selection to Week, and click on OK. Look at the balance sheet, and identify the earliest week in which the balance sheet is out of balance. Click on the Customize Report button . In the From and To fields, enter the beginning and ending dates of the week identified in the previous step. In the Display columns by drop-down box, change the selection to Day, and click on OK. Look at the balance sheet, and identify the earliest day in which the balance sheet is out of balance: Run a transaction journal (Reports | Accountant & Taxes | Journal), limit the transactions to that day, and scan the report for the transaction responsible. Delete the transaction which caused the imbalance, which is usually a Customer Payment or other A/R or A/P data entry screen, and make a journal entry instead, to cover the appropriate debit and credit. There's more… If the Discount feature was used to reclassify an Accounts Receivable balance to Retainage Receivable, make a journal entry to achieve the same General Ledger effect instead, and apply the transaction to the original invoice, by opening the invoice and using the Apply Credits button.

Visualizations made easy with gnuplot

Packt
01 Mar 2012
12 min read
This article, written by Lee Philips, the author of gnuplot Cookbook, contains the following recipes:

Making a surface plot
Using coordinate mappings
Coloring the surface
Making a contour plot
Making a vector plot
Making an image plot or heat map
Combining contours and images
Combining surfaces with images
Plotting a path in 3D
Drawing parametric surfaces

Making a surface plot

A surface plot represents the dependent quantity z, which depends on the two independent variables x and y, as a surface whose height indicates the value of z.

The previous figure is a perspective drawing of a surface representing the Bessel function J0(r), where r is the distance from (x=0, y=0). The height of the surface shows the value of J0, given on the vertical axis (unlabeled in this figure, but usually called z). The other two (unlabeled) axes defining the plane above which the surface is drawn are the x and y axes.

How to do it…

The following code listing is the script that coaxed gnuplot into making the previous figure:

set isosamples 40
unset key
set title "J_0(r^2)"
set xrange [-4:4]
set yrange [-4:4]
set ztics 1
splot besj0(x**2+y**2)
set view 29,53 #Done implicitly by mousing.
set term pngcairo mono enhanced
set out 'bessel.png'
replot

How it works…

There are several new commands in this recipe. The set isosamples command sets the isoline density. This is analogous to the set samples command when making 2D plots, but it sets the number of lines used in forming the surface. The number of isosamples can be set independently in each direction; if one number is specified, it is used for both directions. The default of 10 usually creates surfaces that are far too coarse to be useful.

Turning to the second of the highlighted commands, the splot command is the 3D version of our old friend the plot command (it probably initially stood for "surface plot", but now can do several things besides plot surfaces, as we shall see in the rest of this article). It expects a function of x and y rather than x alone. Although we are interested in plotting something that has the type of symmetry that would be most conveniently expressed in polar (spherical or cylindrical) coordinates, these geometries are not available for function plots in 3D in gnuplot. (They are available through the set mapping command for data plots, as we shall see later in this article.) Therefore, in such cases, we are required to convert our expressions to the rectangular coordinate system. Instead of what we would call r in a cylindrical coordinate system, here we use the equivalent x**2 + y**2.

In this recipe, we would like to illustrate, as far as possible, the interactive approach to creating a final 3D plot. The next highlighted line, beginning with set view, can be entered on the command line or included in a script. The view is the orientation in degrees of the perspective drawing of the 3D plot. Naturally, it does not arise in 2D. It is difficult to determine the most useful view for a particular plot without looking at it and experimenting with it; therefore, even if our final product is intended to be a file, a common workflow is to first create the plot using an interactive terminal (x11 or wxt). Then we rotate the plot with the mouse, and possibly scale and zoom it using the middle mouse button, until we arrive at the desired appearance. This is what we mean by the comment in the set view command. Now we can reset the terminal to the final output device that we need, specify the output file, and simply say replot.
The view and scaling at which we left the interactive plot are retained as a set of global settings and will be reflected in our final output file. These settings are also displayed at the bottom of the interactive plot window, so we can record them if we are going to make similar plots in the future, or want a set of plots to be drawn with the same settings. Note that we also redefined the ztics value. This is because when the plot is tilted to the final view angle that we chose, the perspective causes the tic labels on the vertical axis to be crowded together; this is a common problem with 3D plots, and taking manual control of the tics on the z-axis is the solution.

There's more…

Following is the same plot with one setting changed (aside from a slight adjustment in the view angle):

While the first plot was essentially a wireframe that we could see through, this version has the appearance of a solid, opaque surface. All we need to do is to say set hidden3d. This setting, which only works when lines or linespoints are being used, makes the surface appear opaque by removing from the plot any part of the surface, other surfaces, and other plot elements such as the axes and tic labels, that are behind the surface from our point of view. The underside of the surface is shown in a contrasting color with a color output device, but the two sides of the surface are not distinguished in monochrome. The name of the setting refers to the technique of hidden line removal; gnuplot is justly famed for the quality of its hidden line removal algorithm, which is one reason this program is so well regarded for its 3D plotting ability.

Using coordinate mappings

It is possible, when making 3D plots from data files, for the data to be interpreted in spherical or cylindrical coordinates rather than the default Cartesian system. For details, type help set mapping. We will give an example of using the cylindrical mapping to conveniently draw a shape with cylindrical symmetry.

The previous figure is a perspective view of a surface that somewhat resembles a Christmas tree ornament. The relevant feature of this surface is that it has rotational symmetry around the z (vertical) axis, which means that it is most naturally expressed in cylindrical coordinates.

How to do it…

Try the following script:

set mapping cylindrical
unset tics
unset border
set hidden
set xrange [-pi : pi]
set yrange [-pi : pi]
set zrange [0 : pi]
set iso 60
unset key
splot '++' using 1:2:(sin($2)) with lines

How it works…

There are several new ideas used in this recipe. Breaking it down, these are:

The set mapping command

The first, highlighted line contains the new command that is the subject of this recipe. When the default Cartesian (x-y-z) coordinate system is changed to cylindrical, the columns of data read in during a data plot are interpreted as θ-z-r, where θ is the angular coordinate, z is the vertical coordinate, and r is the radius. A spherical mapping is also available and explained in the gnuplot online help (help set mapping). If the data file only has two columns, then the plot is drawn with r = 1.

In our example we don't want to plot from a data file, however. We want to plot a function given directly in the script. This presents us with a problem, as gnuplot does not support cylindrical or spherical plots of functions in 3D. The solution is to use one of gnuplot's pseudofiles.
The ++ pseudofile

The "++" pseudofile creates rows of imaginary data with three columns x-y-z, unless we change the coordinate mapping, which of course in this example we have. Setting the mapping to cylindrical means that the fictitious data columns will be interpreted as θ-z-r.

Now to plot a function, we use the using notation applied to the imaginary columns of data. We've done this in the final line of the script, where we plot the sine of the second column (z).

To clarify the use of "++" when plotting surfaces, note that, in Cartesian coordinates, the two commands "splot sin(x)+cos(y)" and "splot '++' using 1:2:(sin($1)+cos($2)) with lines" produce exactly the same plot.

Coordinate ranges

We have also established ranges for all variables in the set xrange and the two other commands following it. The ranges for the polar coordinates are taken from the corresponding Cartesian coordinates; that is, when we set the xrange, we are setting both the range of the x-axis displayed on the plot and the range of the variable θ in the cylindrical coordinate system. It is mandatory to set xrange and yrange when using the "++" filename.

This mixing of the coordinate system in which the function is calculated and the Cartesian system in which it is displayed can be confusing, but the example shows a strategy which should make it possible to get predictable results. Setting the xrange and yrange as we've done puts the r = 0 point in the middle of the graph and prevents part of the plot from being cut off. It also sets up a full rotation of the angular coordinate over a range of 2π.

If we wanted to plot, say, our shape with half of it sliced off by a vertical plane, the easiest way to do this is not to fiddle with the coordinate ranges, but to apply a transformation to one of the fictitious data columns: splot '++' using ($1/2):2:(sin($2)) with lines will do the trick without any surprising side effects. In this example the underlying angular coordinate (column 1) still passes through a full rotation, but we've divided it in half without changing the way the figure is projected onto the Cartesian display. Note that the 60 isolines will still be used in our reduced angular range, so we might want to set iso to a smaller value.

Completing the picture

We've eliminated all of the graph adornments (unset tics, unset border, unset key), so we will be left with only the surface. The isosamples are set to give a sufficiently smooth surface drawing that is nevertheless not too crowded with isosurface lines (see the previous recipe). set hidden ensures that we shall see only the outer surface of the shape.

Coloring the surface

The wireframe splot with hidden line removal that we covered in the first recipe of this article, Making a surface plot, gives the visual impression of a solid surface. The numerical value encoded into the surface's height can be visually estimated, roughly, by the perspective provided by the isolines in conjunction with the tics on the vertical axis. But gnuplot also has a way to draw real solid surfaces whose height is indicated by color or shade.

The previous figure shows the same mathematical function plotted in the first recipe in this article (Making a surface plot). Now the numerical value of the function at any point is indicated by both the height of the surface and its shade; the surface is now drawn as an opaque membrane rather than as a network of curves.
How to do it…

To produce the previous figure, run the following in gnuplot:

set isosamples 100
set samples 100
unset key
set title "J_0(r^2)"
set xrange [-4:4]
set yrange [-4:4]
set ztics 1
unset surface
set pm3d
splot besj0(x**2+y**2)

The surface will be drawn with a palette of colors when a color output device is being used, and with a grayscale palette when using a monochrome terminal.

How it works…

If you compare the previous script with the one in the Making a surface plot recipe at the beginning of this article, you will see that the only significant difference is the highlighted line. The pm3d mode colors the imaginary surface being plotted according to its height or z-value at every point, with the mapping between the height and the color or shade determined by the palette, which we shall discuss in some more detail shortly.

The other modifications are to increase the number of isolines, in order to get a smoother surface, and to turn off the drawing of the individual isolines themselves with the command unset surface. We also need to set the sample frequency; generally we want this to be equal to the isosample frequency. In pm3d mode, the two orthogonal sets of isolines are drawn with two different spacings given by the two parameters. Although the gnuplot manual claims that the global hidden3d setting does not affect pm3d surface plots, it in fact seems to, and should not be turned on, as it appears to slightly degrade the drawing quality.

There's more…

Sometimes we want both a colored surface and a set of isolines; in fact, this can often be the clearest type of quantitative 3D plot. The way to achieve the highest quality in this type of graph is to use the hidden3d linestyle option to pm3d, as we do in the following script:

set iso 30
set samp 30
unset key
set title "J_0(r^2)"
set xrange [-4:4]
set yrange [-4:4]
set ztics 1
unset surf
set style line 1 lt 4 lw .5
set pm3d at s hidden3d 1
splot besj0(x**2+y**2)

This requires us to define a user linestyle. The linestyle is then referred to in an option to the set pm3d command. This will cause the isolines to be drawn using lines in this style, which allows us to have them in any color, thickness, or pattern supported by our terminal. Further, the isolines will be drawn with hidden line removal, so they will appear to be embedded in the opaque surface. As before, the global hidden3d option should not be turned on.

Note that we've also reduced the sample and isoline frequency, to keep our plot from being too crowded with isolines. (The at s component of the set pm3d command means at surface.)
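The shades that pm3d assigns come from the current palette. As an optional, illustrative extra (not part of the original recipe), the palette could be redefined before replotting, for example:

set palette defined (0 "black", 0.5 "grey", 1 "white")
replot

Here set palette defined maps the lowest z-value to the first color and the highest to the last; show palette summarizes the palette currently in effect.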

Administrating the MySQL Server

Packt
09 Feb 2012
10 min read
(For more resources on MySQL, see here.) Managing users and their privileges The Privileges sub-page (visible only if we are logged in as a privileged user) contains dialogs to manage MySQL user accounts. It also contains dialogs to manage privileges on global, database, and table levels. This sub-page is hierarchical. When editing a user's privileges, we can see the global privileges as well as the database-specific privileges. Then, when viewing database-specific privileges for a user, we can view and edit this user's privileges for any table within this database. The user overview The first page displayed when we enter the Privileges sub-page is called User overview. This shows all user accounts and a summary of their global privileges, as shown in the following screenshot: (Move the mouse over the image to enlarge.) From this page, we can: Edit a user's privileges, via the Edit Privileges link for this user Export a user's privileges definition, via the Export link for this user Use the checkboxes to remove users, via the Remove selected users dialog Access the page where the Add a new User dialog is available The displayed users' list has columns with the following characteristics: Column Characteristic User The user account we are defining. Host The machine name or IP address, from which this user account will be connecting to the MySQL server. A % value here indicates all hosts. Password Contains Yes if a password is defined and No if it isn't. The password itself cannot be seen from phpMyAdmin's interface or by directly looking at the mysql.user table, as it is encrypted with a one-way hashing algorithm. Global privileges A list of the user's global privileges. Grant Contains Yes if the user can grant his/her privileges to others. Action Contains a link to edit this user's privileges or export them. Exporting privileges This feature can be useful when we need to create a user with the same password and privileges on another MySQL server. Clicking on Export for user marc produces the following panel: Then it's only a matter of selecting these GRANT statements and pasting them in the SQL box of another phpMyAdmin window, where we have logged in on another MySQL server. Privileges reload At the bottom of User overview page, this message is displayed: Note: phpMyAdmin gets the users' privileges directly from MySQL's privilege tables. The content of these tables may differ from the privileges the server uses, if they have been changed manually. In this case, you should reload the privileges before you continue. Here, the text reload the privileges is clickable. The effective privileges (the ones against which the server bases its access decisions) are the privileges that are located in the server's memory. Privilege modifications that are made from the User overview page are made both in memory and on disk in the mysql database. Modifications made directly to the mysql database do not have immediate effect. The reload the privileges operation reads the privileges from the database and makes them effective in memory. Adding a user The Add a new User link opens a dialog for user account creation. 
First, we see the panel where we will describe the account itself, as shown in the following screenshot: The second part of the Add a new User dialog is where we will specify the user's global privileges, which apply to the server as a whole (see the Assigning global privileges section of this article), as shown in the following screenshot: Entering the username The User name menu offers two choices. We can choose Use text field: and enter a username in the box, or we can choose Any user to create an anonymous user (the blank user). More details about the anonymous user are available at http://dev.mysql.com/doc/refman/5.5/en/connection-access.html. Let us choose Use text field: and enter bill. Assigning a host value By default, this menu is set to Any host, with % as the host value. The Local choice means localhost. The Use host table choice (which creates a blank value in the host field) means to look in the mysql.host table for database-specific privileges. Choosing Use text field: allows us to enter the exact host value we want. Let us choose Local. Setting passwords Even though it's possible to create a user without a password (by selecting the No password option), it's best to have a password. We have to enter it twice (as we cannot see what is entered) to confirm the intended password. A secure password should have more than eight characters, and should contain a mixture of uppercase and lowercase characters, digits, and special characters. Therefore, it's recommended to have phpMyAdmin generate a password—this is possible in JavaScript-enabled browsers. In the Generate password dialog, clicking on Generate button enters a random password (in clear text) on the screen and fills the Password and Re-type input fields with the generated password. At this point, we should note the password so that we can pass it on to the user. Understanding rights for database creation A frequent convention is to assign a user the rights to a database having the same name as this user. To accomplish this, the Database for user section offers the Create database with same name and grant all privileges radio button. Selecting this checkbox automates the process by creating both the database (if it does not already exist) and assigning the corresponding rights. Please note that, with this method, each user would be limited to one database (user bill, database bill). Another possibility is to allow users to create databases that have the same prefix as their usernames. Therefore, the other choice Grant all privileges on wildcard name (username_%) performs this function by assigning a wildcard privilege. With this in place, user bill could create the databases bill_test, bill_2, bill_payroll, and so on; phpMyAdmin does not pre-create the databases in this case. Assigning global privileges Global privileges determine the user's access to all databases. Hence, these are sometimes known as superuser privileges. A normal user should not have any of these privileges unless there is a good reason for this. Moreover, should a user account that has global privileges become compromised, the damage could be far greater. If we are really creating a superuser, we will select every global privilege that he or she needs. These privileges are further divided into Data, Structure, and Administration groups. In our example, bill will not have any global privileges. Limiting the resources used We can limit the resources used by this user on this server (for example, the maximum queries per hour). Zero means no limit. 
We will not impose any resources limits on bill. The following screenshot shows the status of the screen just before hitting Create user to create this user's definition (with the remaining fields being set to default): Editing a user profile The page used to edit a user's profile appears whenever we click on Edit Privileges for a user in the User overview page. Let us try it for our newly created user bill. There are four sections on this page, each with its own Go button. Hence, each section is operated independently and has a distinct purpose. Editing global privileges The section for editing the user's privileges has the same look as the Add a new User dialog, and is used to view and to change global privileges. Assigning database-specific privileges In this section, we define the databases to which our user has access, and his or her exact privileges on these databases. As shown in the previous screenshot, we see None because we haven't defined any privileges yet. There are two ways of defining database privileges. First, we can choose one of the existing databases from the drop-down menu as shown in the following screenshot: This assigns privileges only for the chosen database. Secondly, we can also choose Use text field: and enter a database name. We could enter a non-existent database name, so that the user can create it later (provided we give him/her the CREATE privilege in the next panel). We can also use special characters, such as the underscore and the percent sign, for wildcards. For example, entering bill here would enable him to create a bill database, and entering bill% would enable him to create a database with any name that starts with bill. For our example, we will enter bill and click on Go. The next screen is used to set bill's privileges on the bill database, and create table-specific privileges. To learn more about the meaning of a specific privilege, we can hover the mouse over a privilege name (which is always in English), and an explanation about this privilege appears in the current language. We give SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, INDEX, and DROP privileges to bill on this database. We then click on Go. After the privileges have been assigned, the interface stays at the same place, so that we can refine these privileges further. We cannot assign table-specific privileges for the moment, as the database does not yet exist. To go back to the general privileges page of bill, click on the 'bill'@'localhost' title. This brings us back to the following, familiar page except for a change in one section: We see the existing privileges (we could click on Edit Privileges link to edit or on Revoke link to revoke them) on the bill database for user bill, and we can add privileges for bill on another database. We can also see that bill has no table-specific privilege on the bill database. Changing the password The Change password dialog is part of the Edit user page, and we can use it either to change bill's password or to remove it. Removing the password will enable bill to log in without a password. The dialog offers a choice of password hashing options, and it's recommended to keep the default of MySQL 4.1+ hashing. For more details about hashing, please visit http://dev.mysql.com/doc/refman/5.1/en/password-hashing.html. Changing login information or copying a user This dialog can be used to change the user's login information, or to copy his or her login information to a new user. 
For example, suppose that Bill calls and tells us that he prefers the login name billy instead of bill. We just have to add a y to the username, and then select delete the old one from the user tables radio button, as shown in the following screenshot: After clicking on Go, bill no longer exists in the mysql database. Also, all of his privileges, including the privileges on the bill database, will have been transferred to the new user—billy. However, the user definition of bill will still exist in memory, and hence it's still effective. If we had chosen the delete the old one from the user tables and reload the privileges afterwards option instead, the user definition of bill would immediately have ceased to be valid. Alternatively, we could have created another user based on bill, by making use of the keep the old one choice. We can transfer the password to the new user by choosing Do not change the password option, or change it by entering a new password twice. The revoke all active privileges… option immediately terminates the effective current privileges for this user, even if he or she is currently logged in. Removing a user Removing a user is done from the User overview section of the Privileges page. We select the user to be removed. Then (in Remove selected users) we can select the Drop the databases that have the same names as the users option to remove any databases that are named after the users we are deleting. A click on Go effectively removes the selected users.
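Behind these dialogs, phpMyAdmin sends ordinary SQL statements to the server. As a rough, hedged sketch (not a capture of what phpMyAdmin actually issues), the bill account and its database privileges could be created and later removed from any MySQL client like this:

-- Create the account and grant the privileges used in this example
CREATE USER 'bill'@'localhost' IDENTIFIED BY 'a_strong_password';
GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, ALTER, INDEX, DROP
  ON `bill`.* TO 'bill'@'localhost';

-- Equivalent of the "reload the privileges" link; only needed after
-- editing the mysql.* grant tables directly
FLUSH PRIVILEGES;

-- Remove the account later
DROP USER 'bill'@'localhost';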

Integrating IBM Cognos TM1 with IBM Cognos 8 BI

Packt
16 Dec 2011
4 min read
(For more resources on IBM, see here.) Before proceeding with the actual steps of the recipe, we will take a note of the following integration considerations: The measured Dimension in the TM1 Cube needs to be explicitly identified. The Data Source needs to be created in IBM Cognos Connection which points to the TM1 Cube. New Data Source can also be created from IBM Cognos Framework Manager, but for the sake of simplicity we will be creating that from IBM Cognos Connection itself. The created Data Source is used in IBM Cognos Framework Manager Model to create a Metadata Package and publish to IBM Cognos Connection. Metadata Package can be used to create reports, generate queries, slice and dice, or event management using one of the designer studios available in IBM Cognos BI. We will focus on each of the above steps in this recipe, where we will be using one of the Cubes created as part of demodata TM1 Server application and we will be using the Cube as a Data Source in the IBM Cognos BI layer. Getting ready Ensure that the TM1 Admin Server service is started and demodata TM1 Server is running. We should have IBM Cognos 8 BI Server running and IBM Cognos 8 Framework Manager installed. How to do it... Open the TM1 Architect and right-click on the Sales_Plan Cube. Click on Properties. In the Measures Dimension box, click on Sales_Plan_Measures and then for Time Dimension click on Months. Note that the preceding step is compulsory if we want to use the Cube as a Data Source for the BI layer. We need to explicitly define a measures dimension and a time dimension. Click on OK and minimize the TM1 Architect, keep the server running. Now from the Start menu, open IBM Cognos Framework Manager, which is desktop-based tool used to create metadata models. Create a new project from IBM Cognos 8 Framework Manager. Enter the Project name as Demodata and provide the Location where the model file will be located. Note that each project generates a .cpf file which can be opened in the IBM Cognos Framework Manager. Provide valid user credentials so that IBM Cognos Framework Manager can link to a running IBM Cognos BI Server setup. Users and roles are defined by IBM Cognos BI admin user. Choose English as the authoring language when the Select Language list comes up. This will open the Metadata Wizard - Select Metadata Source. We use the Metadata Wizard to create a new Data Source or point to an existing Data Source. In the Metadata Wizard make sure that Data Sources is selected and click on the Next button. In the next screen, click on the New button to create a new Data Source by the name of TM1_Demodata_Sales_Plan. This will open a New data source wizard, where we need to specify the name of the Data Source. On next screen, it will ask for the Data Source Type for which we will specify TM1 from the drop-down, as we want to create a new Data Source based on the TM1 Cube Sales_Data. On the next screen specify the connection parameters. For Administration Host we can specify a name or localhost, depending on the name of the server. In our case, we have specified name of the server as ankitgar, hence we are using an actual name instead of a localhost. In the case of TM1 sitting on another server within the network, we will provide the IP address or name of the host in UNC format. Test the connection to test whether the connection to the TM1 Cube is successful. Click on Close and proceed. Click on the Finish button to complete the creation of the Data Source. 
The new Data Source is created on the Cognos 8 Server and now can be used by anyone with valid privileges given by the admin user. It's just a connection to the Sales_Plan TM1 Cube which now can be used to create metadata models and, hence, reports and queries perform the various functions suggested in the preceding sections. Now it will return to Metadata Wizard as shown, with the new Data Source appearing with the list of already created Data Sources. Click on the newly created Data Source and on the Next button. It will display all available Cubes on the DemoData TM1 Server, the machine name being the server name (localhost/ankitgar). Click on the Sales_Plan cube and then on Next.

Getting Started with Apache Solr

Packt
02 Dec 2011
8 min read
(For more resources on Apache, see here.)

We're going to get started by downloading Solr, examine its directory structure, and then finally run it. This sets you up for the next section, which tours a running Solr server.

Get Solr: You can download Solr from its website: http://lucene.apache.org/solr/. The latest Solr release at the time this article was written is version 3.4. Solr has had several relatively minor point releases since 3.1, and that will continue. In general, I recommend using the latest release, since Solr and Lucene's code are extensively tested. Lucid Imagination also provides a Solr distribution called "LucidWorks for Solr". As of this writing it is Solr 3.2 with some choice patches added afterwards to ensure its stability and performance. It's completely open source; previous LucidWorks releases were not, as they included some extras with use limitations. LucidWorks for Solr is a good choice if maximum stability is your chief concern over newer features.

Get Java: The only prerequisite software needed to run Solr is Java 5 (a.k.a. Java version 1.5) or later, ideally Java 6. Typing java -version at a command line will tell you exactly which version of Java you are using, if any.

Use the latest version of Java! The initial release of Java 7 included some serious bugs, discovered shortly before its release, that affect Lucene and Solr. The release of Java 7u1 on October 19th, 2011 resolves these issues. The same bugs occurred with Java 6 under certain JVM switches, and Java 6u29 resolves them. Therefore, I advise you to use the latest Java release.

Java is available on all major platforms, including Windows, Solaris, Linux, and Apple. Visit http://www.java.com to download the distribution for your platform. Java always comes with the Java Runtime Environment (JRE), and that's all Solr requires. The Java Development Kit (JDK) includes the JRE plus the Java compiler and various diagnostic utility programs. One such useful program is jconsole, so the JDK distribution is recommended. Solr is a Java-based web application, but you don't need to be particularly familiar with Java in order to use it.

Solr's installation directory structure

When you unzip Solr after downloading it, you should find a relatively straightforward directory structure:

client: Convenient language-specific client APIs for talking to Solr.

Ignore the client directory. Most client libraries are maintained by other organizations, except for the Java client SolrJ, which lies in the dist/ directory. client/ only contains solr-ruby, which has fallen out of favor compared to rsolr; both are Ruby Solr clients.

contrib: Solr contrib modules. These are extensions to Solr. The final JAR file for each of these contrib modules is actually in dist/, so the actual files here are mainly the dependent JAR files.

analysis-extras: A few text analysis components that have large dependencies. There are some "ICU" Unicode classes for multilingual support, a Chinese stemmer, and a Polish stemmer.

clustering: An engine for clustering search results.

dataimporthandler: The DataImportHandler (DIH), a very popular contrib module that imports data into Solr from a database and some other sources.

extraction: Integration with Apache Tika, a framework for extracting text from common file formats. This module is also called SolrCell, and Tika is also used by the DIH's TikaEntityProcessor.

uima: Integration with Apache UIMA, a framework for extracting metadata out of text. There are modules that identify proper names in text and identify the language, for example. To learn more, see Solr's wiki: http://wiki.apache.org/solr/SolrUIMA.

velocity: Simple search UI framework based on the Velocity templating language.

dist: Solr's WAR and contrib JAR files. The Solr WAR file is the main artifact that embodies Solr as a standalone file deployable to a Java web server. The WAR does not include any contrib JARs. You'll also find the core of Solr as a JAR file, which you might use if you are embedding Solr within an application, and Solr's test framework as a JAR file, which is to assist in testing Solr extensions. You'll also see SolrJ's dependent JAR files here.

docs: Documentation; specifically, the HTML files and related assets for the public Solr website. It includes a good quick tutorial, and of course Solr's API. Even if you don't plan on extending the API, some parts of it are useful as a reference to certain pluggable Solr configuration elements; see the listing for the Java package org.apache.solr.analysis in particular.

example: A complete Solr server, serving as an example. It includes the Jetty servlet engine (a Java web server), Solr, some sample data, and sample Solr configurations. The interesting child directories are:

example/etc: Jetty's configuration. Among other things, here you can change the web port used from the pre-supplied 8983 to 80 (the HTTP default).

example/exampledocs: Sample documents to be indexed into the default Solr configuration, along with the post.jar program for sending the documents to Solr.

example/solr: The default, sample Solr configuration. This should serve as a good starting point for new Solr applications. It is used in Solr's tutorial.

example/webapps: Where Jetty expects to deploy Solr from. A copy of Solr's WAR file is here, which contains Solr's compiled code.

Solr's home directory and Solr cores

When Solr starts, the very first thing it does is determine where the Solr home directory is. There are various ways to tell Solr where it is, but by default it's the directory named simply solr relative to the current working directory where Solr is started. You will usually see a solr.xml file in the home directory, which is optional but recommended. It mainly lists Solr cores. For simpler configurations like example/solr, there is just one Solr core, which uses Solr's home directory as its core instance directory. A Solr core holds one Lucene index and the supporting Solr configuration for that index. Nearly all interactions with Solr are targeted at a specific core. If you want to index different types of data separately or shard a large index into multiple ones, then Solr can host multiple Solr cores on the same Java server. (A minimal solr.xml sketch appears at the end of this article.)

A Solr core's instance directory is laid out like this:

conf: Configuration files. The two I mention below are very important, but it will also contain some other .txt and .xml files which are referenced by these two.

conf/schema.xml: The schema for the index, including field type definitions with associated analyzer chains.

conf/solrconfig.xml: The primary Solr configuration file.

conf/xslt: Various XSLT files that can be used to transform Solr's XML query responses into formats such as Atom and RSS.

conf/velocity: HTML templates and related web assets for rapid UI prototyping using Solritas. The soon to be discussed "browse" UI is implemented with these templates.

data: Where Lucene's index data lives. It's binary data, so you won't be doing anything with it except perhaps deleting it occasionally to start anew.

lib: Where extra Java JAR files can be placed that Solr will load on startup. This is a good place to put contrib JAR files and their dependencies.

Running Solr

Now we're going to start up Jetty and finally see Solr running, albeit without any data to query yet. We're about to run Solr directly from the unzipped installation. This is great for exploring Solr and doing local development, but it's not what you would seriously do in a production scenario. In a production scenario you would have a script or other mechanism to start and stop the servlet engine with the operating system; Solr does not include this. And to keep your system organized, you should keep the example directory as exactly what its name implies: an example. So if you want to use the provided Jetty servlet engine in production (a fine choice), then copy the example directory elsewhere and name it something else.

First go to the example directory, and then run Jetty's start.jar file by typing the following command:

>>cd example
>>java -jar start.jar

The >> notation is the command prompt. These commands will work across *nix and DOS shells.

You'll see about a page of output, including references to Solr. When it is finished, you should see this output at the very end of the command prompt:

2008-08-07 14:10:50.516::INFO: Started SocketConnector @ 0.0.0.0:8983

The 0.0.0.0 means it's listening to connections from any host (not just localhost, notwithstanding potential firewalls) and 8983 is the port. If Jetty reports this, it doesn't necessarily mean that Solr was deployed successfully. You might see an error such as a stack trace in the output if something went wrong. Even if it did go wrong, you should be able to access the web server: http://localhost:8983. Jetty will give you a 404 page, but it will include a list of links to deployed web applications, which will just be Solr for this setup. Solr is accessible at: http://localhost:8983/solr, and if you browse to that page, then you should either see details about an error if Solr wasn't loaded correctly, or a simple page with a link to Solr's admin page, which should be http://localhost:8983/solr/admin/. You'll be visiting that link often.

To quit Jetty (and many other command line programs for that matter), press Ctrl+C on the keyboard.
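Once Jetty is running, you can optionally index the sample documents from example/exampledocs so that you have something to query. The following is a sketch of the standard tutorial steps, assuming the default port 8983 and the sample XML files shipped with the release; run it from a second shell while Jetty stays up:

>>cd example/exampledocs
>>java -jar post.jar *.xml

post.jar sends each XML file to Solr's update handler and issues a commit at the end; afterwards, a query such as http://localhost:8983/solr/select?q=solr should return matching documents in Solr's XML response format.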
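As mentioned in the discussion of Solr's home directory above, solr.xml mainly lists Solr cores. Here is a minimal sketch of what a two-core solr.xml might look like, modeled on the multicore example that ships with Solr; the core names core0 and core1 and their instance directories are illustrative assumptions rather than required values:

<solr persistent="false">
  <!-- adminPath exposes the core administration interface for managing cores at runtime -->
  <cores adminPath="/admin/cores">
    <!-- each core points at its own instance directory (names here are hypothetical) -->
    <core name="core0" instanceDir="core0" />
    <core name="core1" instanceDir="core1" />
  </cores>
</solr>

Each instanceDir is laid out like the core instance directory described above, with its own conf/ and data/ directories.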

Oracle BI Publisher 11g: Learning the new XPT format

Packt
31 Oct 2011
4 min read
(For more resources on Oracle 11g, see here.)

We will cover the following topics in this article:

The Layout Editor presentation
Designing a Layout
Export options

The Layout Editor

First, you have to choose a predefined layout from the Create Report Interface. As you can see in the following screenshot, this interface displays a list of predefined layouts.

You can add your own predefined layouts to this list and make them available for your later use, or even for all the users. After choosing a layout from the Basic Templates or the Shared Templates group, the Layout Editor Interface is displayed.

Designing a Layout

In the Layout Editor Interface, as shown in the following screenshot, you have tools to perform activities such as:

Insert a component: Select the desired component from the Components pane on the left or from the Insert tab of the toolbar and drag-and-drop it into the design area.
Set component properties: Set the component properties from the Properties pane on the left or from the component-specific tab of the toolbar (only for the most commonly used components).
Insert a data element: Drag the element from the Data Source pane to the design area. As shown in the preceding screenshot, a precise dropping area is marked. For example, in a chart you have the following marked areas: Drop Value Here, Drop Label Here, and Drop Series Here.
Set page layout options: In order to set the page layout options, use the Page Layout tab and the Properties pane.
Save the Layout: Use the activity icons from the toolbar on the right side.

In the following sections, a few elements will be inserted to complete our report design. You will also see the steps that you need to follow when inserting and setting the properties of these components.

Text elements

In order to make changes in the settings of Text elements, follow the steps given here:

Click on the Insert tab and choose the Text Item component from the toolbar, as shown in the following screenshot.
Click on the Text tab and set a font color for your text using the Font Color icon from the toolbar, as shown in the following screenshot.
Set the text margins using the Properties pane, as shown in the following screenshot.

In this way, we obtain the desired report title, as you can see in the following screenshot.

In order to insert data elements in our report's components, we will use the following Data Model.

Charts

In order to create Charts, follow the steps given here:

Click on the Insert tab and choose the Chart component from the toolbar, as shown in the following screenshot.
Select the newly inserted chart and go to the Chart tab on the toolbar to set the chart type (Vertical Bar in this example) and the chart style (Project in this example).
Drag the LOAN_PERIOD and the PRICE fields from the Data Source (from the left pane) over the Drop Value Here area of the design view.
Drop the TITLE field from the Data Source over the Drop Label Here area.

Data tables

In order to create Data tables, follow the steps given here:

Click on the Insert tab and choose the Data Table component from the toolbar, as shown in the following screenshot.
Drag the fields LOAN_DATE, LOAN_PERIOD, TITLE, and YEAR from the Data Source over the area marked as Drop a Data Item Here.
Select the LOAN_DATE column and, in the Properties pane, set the Formatting Mask to yyyy-mm-dd.
For each column of the table, enter a suitable value for Width in the Properties pane. For example, the first column has a width of 1.00 inch.

Oracle BI Publisher 11g: Working with Multiple Data Sources

Packt
25 Oct 2011
5 min read
(For more resources on Oracle 11g, see here.)

The Data Model Editor's interface deals with all the components and functionalities needed for the data model to achieve the structure you need. However, the main component is the Data Set. In order to create a data model structure in BIP, you can choose from a variety of data set types, such as:

SQL Query
MDX Query
Oracle BI Analysis
View Object
Web Service
LDAP Query
XML file
Microsoft Excel file
Oracle BI Discoverer
HTTP

Taking advantage of this variety requires multiple Data Sources of different types to be defined in BIP. In this article, we will see:

How data sources are configured
How the data is retrieved from different data sets
How data set type characteristics and the links between elements influence the data model structure

Administration

Let's first see how you can verify or configure your data sources. You must choose the Administration link found in the upper-right corner of any of the BIP interface pages, as shown in the following screenshot.

The connection to your database can be chosen from the following connection types:

Java Database Connectivity (JDBC)
Java Naming and Directory Interface (JNDI)
Lightweight Directory Access Protocol (LDAP)
Online Analytical Processing (OLAP)

Available Data Sources

To get to your data source, BIP offers two possibilities:

You can use a connection. In order to use a connection, these are the available connection types: JDBC, JNDI, LDAP, and OLAP.
You can also use a file.

In the following sections, the Data Source types JDBC, JNDI, OLAP Connections, and File will be explained in detail.

JDBC Connection

Let's take the first example. To configure a Data Source to use JDBC, from the Administration page, choose JDBC Connection from the Data Sources types list, as shown in the following screenshot.

You can see the requested parameters for configuring a JDBC connection in the following screenshot:

Data Source Name: Enter a name of your choice.
Driver Type: Choose a type from the list. The related parameters are:
Database Driver Class: A driver matching your database type.
Connection String: Information containing the computer name on which your database server is running, the port, the database name, and so on.
Username: Enter a database username.
Password: Provide the database user's password.

The Use System User option allows you to use the operating system's credentials as your credentials. For example, in this case, your MS SQL Database Server uses Windows authentication as the only authentication method. When you have a system administrator in charge of these configurations, all you have to do is find out which Data Sources are available, and eventually you can check whether the connection works. Click on the Test Connection button at the bottom of the page to test the connection. (Illustrative example values for these parameters appear at the end of this article.)

JNDI Connection

A JNDI connection pool is in fact another way to access your JDBC Data Sources. Using a connection pool increases efficiency by maintaining a cache of physical connections that can be reused, allowing multiple clients to share a small number of physical connections. In order to configure a Data Source to use JNDI, from the Administration page, choose JNDI Connection from the Data Sources types list. The following screen will appear.

As you can see in the preceding screenshot, on the Add Data Source page you must enter the following parameters:

Data Source Name: Enter a name of your choice.
JNDI Name: This is the JNDI location for the pool set up in your application server, for example, jdbc/BIP10gSource.

Only the users having roles included in the Allowed Roles list will be able to create reports using this Data Source.

OLAP Connection

Use the OLAP Connection to connect to OLAP databases. BI Publisher supports the following OLAP types:

Oracle Hyperion Essbase
Microsoft SQL Server 2000 Analysis Services
Microsoft SQL Server 2005 Analysis Services
SAP BW

In order to configure a connection to an OLAP database, from the Administration page, choose OLAP Connection from the Data Sources types list. The following screen will appear.

On the Add Data Source page, the following parameters must be entered:

Data Source Name: Enter a name of your choice.
OLAP Type: Choose a type from the list.
Connection String: Depending on the supported OLAP database, the connection string format is as follows:
Oracle Hyperion Essbase format: [server name]
Microsoft SQL Server 2000 Analysis Services format: Data Source=[server];Provider=msolap;Initial Catalog=[catalog]
Microsoft SQL Server 2005 Analysis Services format: Data Source=[server];Provider=msolap.3;Initial Catalog=[catalog]
SAP BW format: ASHOST=[server] SYSNR=[system number] CLIENT=[client] LANG=[language]
Username and Password: Used for OLAP database authentication.

File

Another example of a data source type is File. In order to gain access to XML or Excel files, you need a File Data Source. To set up this kind of Data Source, only one step is required: enter the path to the directory in which your files reside. You can see in the following screenshot that the demo files Data Source points to the default BIP files directory. The files need to be accessible from the BI Server (not on your local machine).
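To make the JDBC parameters described earlier more tangible, here are illustrative values for a hypothetical Oracle database reached over the thin JDBC driver; the host name, port, service name, and account are assumptions made for this example, not values that BI Publisher requires:

Data Source Name: Oracle Demo DB
Database Driver Class: oracle.jdbc.OracleDriver
Connection String: jdbc:oracle:thin:@//dbhost.example.com:1521/orcl
Username: bip_reports
Password: (the database user's password)

With values along these lines entered, clicking Test Connection should confirm that the BI Publisher server can load the driver and reach the database with the given credentials.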

Tips and Tricks: Report Page in IBM Cognos 8 Report Studio

Packt
17 Oct 2011
10 min read
(Read more interesting articles on IBM Cognos here.)

Showing images dynamically (Traffic Light report)

Let us suppose that we have a report which shows the month-on-month difference in sales quantity.

Getting ready

Please note that you will need administrator rights on the Cognos server to complete this recipe. If the server is installed on your personal machine, you will have these rights by default. For this recipe, we need to first create three icons or images for red, yellow, and green. They should already be available on the Cognos server under the <Cognos Installation>\webcontent\samples\images folder. If not, then create them using any image editor software or use the images supplied with this book. Once you have the three images which you need to conditionally show on the report, place them on the Cognos server under the <Cognos Installation>\webcontent\samples\images folder (if the folder is not there, create one). Change the IIS security to allow 'Anonymous Read and Browse' access.

How to do it...

Now open the report that shows the month-on-month running differences. Insert a new 'image' from the insertable objects pane on the list report, as a new column.
Now go to Condition Explorer and create a new string variable. Define the expression as:

if ([Query1].[Running Difference] > 0)
then ('green')
else if ([Query1].[Running Difference] < 0)
then ('red')
else ('yellow')

Call the variable Traffic and define three possible values for the same (red, yellow, and green).
Now go back to the report page. Select the image. Open its URL Source Variable dialog. Choose the variable Traffic and click OK.
From Condition Explorer, choose the 'red' condition. Now click on the image again. It will allow you to define the image URL for this condition. Set the URL to: ../samples/images/Red.jpg
Similarly, define the URL for the 'yellow' and 'green' conditions as ../samples/images/yellow.jpg and ../samples/images/green.jpg respectively.
Run the report to test it.

How it works...

Cognos Report Studio allows you to put images in the report by specifying the URL of the image. The images can be anywhere on the intranet or internet. They will be displayed properly as long as the URL is accessible from the Cognos application server and gateway. In this recipe, we are using a report which already calculates the Running Difference. Hence, we just had to define a conditional variable to trap the different possible conditions. The Image component allows us to define the URL for different conditions by attaching it to the Traffic variable in step 8.

There's more...

In this case, though the URL of the image changes dynamically, it is not truly 100% dynamic. There are three static URLs already defined in the report and one is picked up depending on the condition. We can also use a data item or report expression as the source of the URL value. In that case, it will be totally dynamic: based on the values coming from the database, Cognos will work out the URL of the image and display it correctly. This is useful when the image filenames and locations are stored in the database, for example, Product Catalogue kind of reports (a sketch of such an expression appears at the end of this article).

More info

This recipe works fine in HTML, PDF, and Excel formats. We have used relative URLs for the images, so that the report can be easily deployed to other environments where the Cognos installation might be in a different location. However, we need to ensure that the images are copied in all environments into the folder mentioned in step 2.

Handling missing image issue

In the previous recipe, we saw how to add images to the report. You will be using that technique in many cases, some involving hundreds of images (for example, Product Catalogue). There will often be a case in which the database has a URL or image name, whereas the corresponding image is either missing or inaccessible. In such a case, the web browser shows an error symbol. This looks quite ugly and needs to be handled properly. In this recipe, we will see how to handle this problem gracefully.

Getting ready

We will use the report prepared in the previous recipe. We need to delete the Green.jpg file (or rename it to something else) from the server, in order to create the missing image scenario.

How to do it...

In the previous recipe, we added an image object and defined its conditional URLs. We need to replace that image with an HTML Item. For that, unlock the report objects and delete the image component.
Add an HTML Item in the same column.
Select this HTML item and, from the Properties pane, set its HTML Source Variable to 'Traffic'. (Please note that we already have this conditional variable from the last recipe.)
Now define the HTML for the different conditions. Start with 'red'. Choose 'red' from Condition Explorer and define the HTML as:

<img src="../samples/images/Red.jpg" alt="Downsell" onError="img2txt(this)"/>

For 'yellow', define the HTML as:

<img src="../samples/images/yellow.jpg" alt="No Change" onError="img2txt(this)"/>

For 'green', define the HTML as:

<img src="../samples/images/green.jpg" alt="Upsell" onError="img2txt(this)"/>

Now go back to the No Variable state by double-clicking on the green bar, and add another HTML item on the report. Put it just before the list. Define this HTML as:

<script>
function img2txt(img) {
  txt = img.alt;
  img.parentNode.innerHTML = txt;
}
</script>

Now run the report to test it. As you can see, if the image is missing, the report will now handle it gracefully and show some text instead of an error image.

How it works...

Here we are using our custom code to display the image, instead of using CRS's in-built Image component. We have pulled an HTML item onto the report and defined it to display different images depending on the condition using the <img border="0" /> tag. This tag allows us to define an alternative text and an onError event as well. We are using the onError event to call our custom-made JavaScript function called img2txt. This function replaces the HTML item with the text that was originally defined as the 'alternative text'. Hence, if green.jpg is missing, this function will replace it with the text Upsell.

There's more...

As we are using HTML code and JavaScript in this technique, it works in HTML format only. This technique will be useful for a lot of graphical reports (dashboards, scorecards, online product catalogues, and so on).

Dynamic links to external website (Google Map example)

In this recipe, we will introduce you to the 'Hyperlink' component. A report shows retailer information by products. It shows various fields like Retailer name, Contact information, City, and Postal zone. The business wants a link to Google Maps that will show the retailer's place on the map using the Postal zone information. As the addresses might change in the backend, the technique needs to be dynamic to pick up the latest postal zone.

Getting ready

Create a simple list report that shows Retailers information by Product lines.

How to do it...

From the 'Insertable Objects' toolbox, drag a hyperlink object onto the report as a new column.
Change its Text property to Map.
Set the URL Source Type to Report Expression and define the report expression as follows:

'http://maps.google.com/maps?q=' + [Query1].[City]

Run the report to test it. As you can see, there is a link for each retailer record. If you Shift+click on the link, it will open the Google map for the corresponding postal zone in a new window.

How it works...

Here we are using the 'Hyperlink' component of CRS. We can define the URL as any static link. However, for our requirement, we have defined a report expression. This allows us to provide a dynamic link which picks up the latest postal zone from the database. We are passing the postal zone to Google Maps as part of a URL. The hyperlink component works in HTML as well as Excel and PDF formats of the report. This object currently does not have a property to define whether the link target should open in a new window or the same window. Just clicking on the link opens the target in the same window, whereas Shift+click opens it in a new window.

There's more...

You can use this technique to call any external website that accepts parameters within a URL. You can pass multiple parameters too (see the sketch at the end of this article).

Alternating drill link

In this recipe, we will learn about a limitation of drill links and how to overcome it using a Render Variable. There is a crosstab report which shows sales quantity by month and order method. We need to provide a drill-through facility from the intersection. However, the drill-through target needs to be different, depending on the order method. If the order method is e-mail, the drill-through from the intersection should go to a report called 'Alternating Drill Link—Drill Report 2'. For all other order methods, it should go to 'Alternating Drill Link—Drill Report 1'.

Getting ready

Create a crosstab report to serve as the main report. Drag Month (shipment) on rows, Order method on columns, and Sales Quantity on the intersection. Create two list reports to serve as drill reports. In the sample provided with this book, we have used two list reports for this. One accepts the Order method and Month. The other accepts only Month and is designed to work for the order method 'E-mail'.

How to do it...

Create a drill-through to the first drill report from the crosstab intersection.
Now make sure that the report objects are unlocked. Select the intersection text item (which now looks like a hyperlink, as there is already a drill-through defined). Hold the Ctrl key down and drag the text to its right within the cell. This should create a copy of the text item within that cell.
Now select this copy of the text item. Hit the drill-through button to open the definitions. Delete the existing drill-through to the first report. Create a new drill-through to the second report. So, now we have two text items in the cell, each going to a different drill report.
Create a string type of Conditional Variable. Define it as:

if ([Query1].[Order method] = 'E-mail')
then ('E-mail')
else ('Other')

Call it OrderMethod and define the two values to be E-mail and Other.
Now go back to the report page. Select the first text item from the intersection. Open its Render Variable property. Choose the OrderMethod variable and select to Render for: Other.
Similarly, define the Render Variable for the second text item, but choose to Render for: E-mail.
Run the report to test it. You will see that clicking on the intersection numbers opens the first drill report for any order method other than E-mail, whereas for the numbers under E-mail, the second drill report opens.
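The 'There's more...' note in the Traffic Light recipe mentions making the image URL completely dynamic by sourcing it from the data. As a rough sketch, assuming a hypothetical query item [Query1].[Image Filename] that holds the image file name in the database (this item is illustrative and not part of the sample package), the report expression used as the image's URL source could look like this:

'../samples/images/' + [Query1].[Image Filename]

With this approach there are no per-condition URLs at all; Cognos builds the path from whatever file name the database returns, which suits Product Catalogue style reports.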
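The Google Map recipe also notes that you can pass multiple parameters within the URL. A sketch of such a report expression, where [Query1].[Address] is an assumed additional data item alongside the [Query1].[City] item already used above, might be:

'http://maps.google.com/maps?q=' + [Query1].[Address] + ',' + [Query1].[City]

In practice you may also need to cater for spaces and special characters in the data values, since they become part of the query string of the target URL.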