How-To Tutorials - Data


Installing QuickSight Application

Packt
20 Jan 2017
4 min read
In this article by Rajesh Nadipalli, the author of the book Effective Business Intelligence with QuickSight, we will see how to install the Amazon QuickSight app from the Apple iTunes Store at no cost. You can search for the app in the iTunes Store and then download and install it, or, alternatively, you can follow this link to download the app. The Amazon QuickSight app is certified to work with devices running iOS 9.0 and above. Once you have the app installed, you can log in to your QuickSight account, as shown in the following screenshot:

Figure 1.1: QuickSight sign in

The Amazon QuickSight app is designed for accessing dashboards and analyses on your mobile device. All interactions in the app are read-only, and changes you make on your device are not applied to the original visuals, so you can explore without any worry.

Dashboards on the go

After you log in to the QuickSight app, you will first see the list of dashboards associated with your QuickSight account for easy access. If you don't see any dashboards, click on the Dashboards icon in the menu at the bottom of your mobile device, as shown in the following screenshot:

Figure 1.2: Accessing dashboards

You will now see the list of dashboards associated with your user ID.

Dashboard detailed view

From the dashboard listing, select the USA Census Dashboard, which redirects you to the detailed dashboard view. In the detailed dashboard view, you will see all the visuals that are part of that dashboard. You can click on the arrow at the top-right corner of each visual to open that chart in full-screen mode, as shown in the following screenshot. In the scatter plot analysis shown in the following screenshot, you can further click on any of the dots to get the specific values for that bubble. In the following screenshot, the selected circle is for zip code 94027, which has a PopulationCount of 7,089, a MedianIncome of $216,905, and a MeanIncome of $336,888:

Figure 1.3: Dashboard visual

Dashboard search

The QuickSight mobile app also provides a search feature, which is handy if you know only part of a dashboard's name. Follow these steps to search for a dashboard:

Ensure you are in the dashboards view by clicking on the Dashboards icon in the bottom menu.
Click on the search icon in the top-right corner.
Type the partial name. In the following example, I have typed Usa. QuickSight searches for all dashboards whose names contain the word Usa and lists them. You can then click on a dashboard to get its details, as shown in the following screenshot:

Figure 1.4: Dashboard search

Favorite a dashboard

QuickSight provides a convenient way to bookmark your dashboards by setting them as favorites. To use this feature, first identify the dashboards you use often and click on the star icon to the right of each one, as shown in the following screenshot. Next, to access all of your favorites, click on the Favorites tab; the list is then refined to only those dashboards you previously marked as favorites:

Figure 1.5: Dashboard favorites

Limitations of the mobile app

While dashboards are fairly easy to interact with on the mobile app, there are key limitations compared to the standard browser version:

You cannot create or share dashboards using the mobile app.
You cannot zoom in or out of a visual, which would be really useful in scenarios where the charts are dense.
Chart legends are not shown.

Summary

We have seen how to install the Amazon QuickSight app and how to use it to browse, search, and view dashboards. We covered how to access dashboards, search for them, mark them as favorites, and open their detailed views. We have also seen some limitations of the mobile app.

Further resources on this subject: Introduction to Practical Business Intelligence, MicroStrategy 10, and Making Your Data Everything It Can Be.

OpenAM: Oracle DSEE and Multiple Data Stores

Packt
03 Feb 2011
6 min read
OpenAM — written and tested with OpenAM Snapshot 9, the Single Sign-On (SSO) tool for securing your web applications in a fast and easy way:

The first and only book that focuses on implementing Single Sign-On using OpenAM
Learn how to use OpenAM quickly and efficiently to protect your web applications with the help of this easy-to-grasp guide
Written by Indira Thangasamy, core team member of the OpenSSO project from which OpenAM is derived
Real-world examples for integrating OpenAM with various applications

Oracle Directory Server Enterprise Edition

The Oracle Directory Server Enterprise Edition type of identity store is predominantly used by customers and is natively supported by the OpenSSO server. It is the only data store for which all the user management features offered by OpenSSO are supported with no exceptions. In the console it is labeled as Sun DS with OpenSSO schema. After Oracle acquired Sun Microsystems Inc., the brand name of the Sun Directory Server Enterprise Edition was changed to Oracle Directory Server Enterprise Edition (ODSEE). You need to have ODSEE configured before creating the data store, either from the console or from the CLI, as shown in the following section.

Creating a data store for Oracle DSEE

Creating a data store from the OpenSSO console is the easiest way to accomplish this task. However, if you want to automate the process in a repeatable fashion, the ssoadm tool is the right choice. The problem is obtaining the list of attributes and their corresponding values, which is not documented anywhere in the publicly available OpenSSO documentation. Here is the easy way out, with the required options and their values for ssoadm:

./ssoadm create-datastore -m odsee_datastore -t LDAPv3ForAMDS -u amadmin -f /tmp/.passwd_of_amadmin -D data_store_odsee.txt -e /

This command creates the data store that talks to an Oracle DSEE store. The key options in the preceding command are LDAPv3ForAMDS (instructing the server to create a data store of type ODSEE) and the properties included in data_store_odsee.txt. You can find the complete contents of this file as part of the code bundle provided by Packt Publishing. Some excerpts from the file are given as follows:

sun-idrepo-ldapv3-config-ldap-server=odsee.packt-services.net:4355
sun-idrepo-ldapv3-config-authid=cn=directory manager
sun-idrepo-ldapv3-config-authpw=dssecret12
com.iplanet.am.ldap.connection.delay.between.retries=1000
sun-idrepo-ldapv3-config-auth-naming-attr=uid
<contents removed to save paper space>
sun-idrepo-ldapv3-config-users-search-attribute=uid
sun-idrepo-ldapv3-config-users-search-filter=(objectclass=inetorgperson)
sunIdRepoAttributeMapping=
sunIdRepoClass=com.sun.identity.idm.plugins.ldapv3.LDAPv3Repo
sunIdRepoSupportedOperations=filteredrole=read,create,edit,delete
sunIdRepoSupportedOperations=group=read,create,edit,delete
sunIdRepoSupportedOperations=realm=read,create,edit,delete,service
sunIdRepoSupportedOperations=role=read,create,edit,delete
sunIdRepoSupportedOperations=user=read,create,edit,delete,service

Among these properties, the first three provide critical information for the whole thing to work: as is evident from the property names, they denote the ODSEE server name and port, the bind DN, and the password. The password is in plain text, so for security reasons you should remove it from the input file after creating the data store.
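If you are scripting this step, the properties file can also be generated programmatically before invoking ssoadm. The following is a minimal, hypothetical Python sketch of that idea, not part of the book's code bundle; the file paths, property values, and the location of the ssoadm binary are assumptions you would replace with the values from your own deployment.

import subprocess

# Assumed values -- replace with those of your own ODSEE deployment.
properties = {
    "sun-idrepo-ldapv3-config-ldap-server": "odsee.example.net:4355",
    "sun-idrepo-ldapv3-config-authid": "cn=directory manager",
    "sun-idrepo-ldapv3-config-authpw": "changeit",  # remove after the data store is created
    "sun-idrepo-ldapv3-config-users-search-attribute": "uid",
}

props_file = "/tmp/data_store_odsee.txt"
with open(props_file, "w") as f:
    for key, value in properties.items():
        f.write(f"{key}={value}\n")

# Invoke ssoadm exactly as in the manual command above (the ssoadm path is an assumption).
subprocess.run([
    "./ssoadm", "create-datastore",
    "-m", "odsee_datastore",
    "-t", "LDAPv3ForAMDS",
    "-u", "amadmin",
    "-f", "/tmp/.passwd_of_amadmin",
    "-D", props_file,
    "-e", "/",
], check=True)

The same pattern applies to the other data store types discussed next; only the -t value and the properties file change.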
Updating the data store

As discussed in the previous section, you can invoke the update-datastore subcommand with the appropriate properties and values, passed with the -a switch to the ssoadm tool. If you have more properties to update, put them in a text file and use it with the -D option, as with the create-datastore subcommand.

Deleting the data store

A data store can be deleted by simply selecting its name in the console. No warnings are issued while deleting data stores, and you could eventually delete all of the data stores in a realm, so be cautious about this behavior: you might end up deleting all of them unintentionally. Deleting a data store does not remove any existing data in the underlying LDAP server; it only removes the configuration from the OpenSSO server:

./ssoadm delete-datastores -e / -m odsee_datastore -f /tmp/.passwd_of_amadmin -u amadmin

Data store for OpenDS

OpenDS is a popular LDAP server that is written entirely in Java and is freely available under an open source license. As a matter of fact, the embedded configuration store built into the OpenSSO server is an embedded version of OpenDS. The standalone version of OpenDS has been fully tested for identity store usage. Data store creation and management are very similar to the steps described in the preceding section, except for the type of store and the corresponding property values. The properties and their values are given in the file data_store_opends.txt (available as part of the code bundle). Invoke the ssoadm tool with this property file after making the appropriate changes to fit your deployment. Here is a sample command that creates the data store for OpenDS:

./ssoadm create-datastore -u amadmin -f /tmp/.passwd_of_amadmin -e / -m "OpenDS-store" -t LDAPv3ForOpenDS -D data_store_opends.txt

Data store for Tivoli DS

IBM Tivoli Directory Server 6.2 is one of the supported LDAP servers for the OpenSSO server to provide authentication and authorization services. A specific sub-configuration, LDAPv3ForTivoli, is available out of the box to support this server. You can use data_store_tivoli.txt to create a new data store by supplying the -t LDAPv3ForTivoli option to the ssoadm tool. The sample file (data_store_tivoli.txt, found in the code bundle) contains the entries, including the default group, that are shipped with Tivoli DS for easy understanding; you can customize it to any valid values. Here is a sample command that creates the data store for Tivoli DS:

./ssoadm create-datastore -u amadmin -f /tmp/.passwd_of_amadmin -e / -m "Tivoli-store" -t LDAPv3ForTivoli -D data_store_tivoli.txt

Data store for Active Directory

Microsoft Active Directory provides most of the LDAPv3 features, including support for persistent search notifications. Creating a data store for it is also a straightforward process and is available out of the box. You can use data_store_ad.txt to create a new data store by supplying the -t LDAPv3ForAD option to the ssoadm tool. Here is a sample command that creates the data store for AD:

./ssoadm create-datastore -u amadmin -f /tmp/.passwd_of_amadmin -e / -m "AD-store" -t LDAPv3ForAD -D data_store_ad.txt

Data store for Active Directory Application Mode

Microsoft Active Directory Application Mode (ADAM) is the lightweight version of Active Directory with a simplified schema. In an ADAM instance it is possible to set a user password over LDAP, unlike Active Directory, where password-related operations must happen over LDAPS. Creating a data store for it is also a straightforward process and is available out of the box. You can use data_store_adam.txt to create a new data store by supplying the -t LDAPv3ForADAM option to the ssoadm tool:

./ssoadm create-datastore -u amadmin -f /tmp/.passwd_of_amadmin -e / -m "ADAM-store" -t LDAPv3ForADAM -D data_store_adam.txt

Pentaho Reporting: Building Interactive Reports in HTML

Packt
26 Oct 2009
1 min read
Interactive HTML report properties

All reporting elements share a common set of HTML-related properties that may be used to create a dynamic report. Below is a list of these properties and their uses:

class: Sets the class attribute of the current HTML entity to the specified value.
name: Sets the name attribute of the current HTML entity to the specified value.
title: Sets the title attribute of the current HTML entity to the specified value.
xml-id: Allows the naming of the current HTML entity by setting its id attribute, making it possible to reference the element from outside scripts.
append-body: Allows the placement of raw HTML within the body of the HTML document, prior to the rendering of the current element.
append-body-footer: Allows the placement of raw HTML within the body of the HTML document, after the rendering of the current element.
append-header: Defined only at the master report level, this property allows the inclusion of raw HTML within the header of the generated HTML document. This location is traditionally used to load additional CSS files, as well as external JavaScript files.

Line, Area, and Scatter Charts

Packt
05 Apr 2013
10 min read
Introducing line charts

First, let's start with a single-series line chart. We will use one of the many datasets provided by The World Bank organization at www.worldbank.org. The following code snippet creates a simple line chart showing the percentage of the population aged 65 and above in Japan over the past three decades:

var chart = new Highcharts.Chart({
    chart: { renderTo: 'container' },
    title: { text: 'Population ages 65 and over (% of total)' },
    credits: {
        position: { align: 'left', x: 20 },
        text: 'Data from The World Bank'
    },
    yAxis: { title: { text: 'Percentage %' } },
    xAxis: {
        categories: ['1980', '1981', '1982', ... ],
        labels: { step: 5 }
    },
    series: [{
        name: 'Japan - 65 and over',
        data: [ 9, 9, 9, 10, 10, 10, 10 ... ]
    }]
});

The following is the display of the simple chart. Instead of specifying the year numbers manually as strings in categories, we can use the pointStart option in the series config to initiate the x-axis value for the first point. So we have an empty xAxis config and a series config, as follows:

xAxis: {},
series: [{
    pointStart: 1980,
    name: 'Japan - 65 and over',
    data: [ 9, 9, 9, 10, 10, 10, 10 ... ]
}]

Although this simplifies the example, the x-axis labels are automatically formatted by the Highcharts utility method numberFormat, which adds a comma after every three digits; the following is the outcome on the x axis. To resolve the x-axis labels, we overwrite the label's formatter option by simply returning the value, bypassing the numberFormat method. We also need to set the allowDecimals option to false, because when the chart is resized to elongate the x axis, decimal values are shown. The following is the final change to use pointStart for the year values:

xAxis: {
    labels: {
        formatter: function() {
            // 'this' keyword is the label object
            return this.value;
        }
    },
    allowDecimals: false
},
series: [{
    pointStart: 1980,
    name: 'Japan - 65 and over',
    data: [ 9, 9, 9, 10, 10, 10, 10 ... ]
}]

Extending to multiple series line charts

We can include several more line series and emphasize the Japan series by increasing its line width to 6 pixels, as follows:

series: [{
    lineWidth: 6,
    name: 'Japan',
    data: [ 9, 9, 9, 10, 10, 10, 10 ... ]
}, {
    name: 'Singapore',
    data: [ 5, 5, 5, 5, ... ]
}, {
    ...
}]

The line series for the Japanese population becomes the focus of the chart, as shown in the following screenshot. Let's move on to a more complicated line graph. For the sake of demonstrating inverted line graphs, we use the chart.inverted option to flip the y and x axes to opposite orientations. Then we change the line colors of the axes to match the corresponding series colors. We also disable data point markers for all the series and, finally, align the second series to the second entry in the y-axis array, as follows:

chart: {
    renderTo: 'container',
    inverted: true
},
yAxis: [{
    title: { text: 'Percentage %' },
    lineWidth: 2,
    lineColor: '#4572A7'
}, {
    title: { text: 'Age' },
    opposite: true,
    lineWidth: 2,
    lineColor: '#AA4643'
}],
plotOptions: {
    series: { marker: { enabled: false } }
},
series: [{
    name: 'Japan - 65 and over',
    type: 'spline',
    data: [ 9, 9, 9, ... ]
}, {
    name: 'Japan - Life Expectancy',
    yAxis: 1,
    data: [ 76, 76, 77, ... ]
}]

The following is the inverted graph with double y axes. The data representation of the chart may look slightly odd, as the usual time labels are swapped to the y axis and the data trend is awkward to comprehend. The inverted option is normally used for showing data in a noncontinuous form and in bar format. If we interpret the data from the graph, 12 percent of the population is 65 and over, and the life expectancy is 79 in 1990.

Setting plotOptions.series.marker.enabled to false switches off all the data point markers. If we want to display a point marker for a particular series, we can either switch off the markers globally and then enable the marker on an individual series, or the other way round:

plotOptions: {
    series: { marker: { enabled: false } }
},
series: [{
    marker: { enabled: true },
    name: 'Japan - 65 and over',
    type: 'spline',
    data: [ 9, 9, 9, ... ]
}, {
    ...
}]

The following graph demonstrates that only the 65 and over series has point markers.

Sketching an area chart

In this section, we are going to take our very first example and turn it into a more stylish graph (based on the design of the wind energy poster by Kristin Clute), which is an area spline chart. An area spline chart is generated using the combined properties of area and spline charts: the main data line is plotted as a spline curve, and the region underneath the line is filled in a similar color with a gradient and an opaque style. Firstly, we want to make it easier for viewers to look up the values for the current trend, so we move the y axis next to the latest year, that is, to the opposite side of the chart:

yAxis: {
    ....
    opposite: true
}

The next step is to remove the interval lines and have a thin axis line along the y axis:

yAxis: {
    ....
    gridLineWidth: 0,
    lineWidth: 1
}

Then we simplify the y-axis title to a percentage sign and align it to the top of the axis:

yAxis: {
    ....
    title: {
        text: '(%)',
        rotation: 0,
        x: 10,
        y: 5,
        align: 'high'
    }
}

As for the x axis, we thicken the axis line, color it red, and remove the interval ticks:

xAxis: {
    ....
    lineColor: '#CC2929',
    lineWidth: 4,
    tickWidth: 0,
    offset: 2
}

For the chart title, we move it to the right of the chart, increase the margin between the chart and the title, and adopt a different font:

title: {
    text: 'Population ages 65 and over (% of total) - Japan',
    margin: 40,
    align: 'right',
    style: { fontFamily: 'palatino' }
}

After that, we modify the presentation of the whole series. We first change the chart type from 'line' to 'areaspline'. Notice that setting properties inside this series object overrides the same properties defined in plotOptions.areaspline and, in turn, in plotOptions.series. Since so far there is only one series in the graph, there is no need to display the legend box; we can disable it with the showInLegend property. We then smarten the area part with a gradient color and the spline with a darker color:

series: [{
    showInLegend: false,
    lineColor: '#145252',
    fillColor: {
        linearGradient: { x1: 0, y1: 0, x2: 0, y2: 1 },
        stops: [ [ 0.0, '#248F8F' ], [ 0.7, '#70DBDB' ], [ 1.0, '#EBFAFA' ] ]
    },
    data: [ ... ]
}]

After that, we introduce a couple of data labels along the line to indicate that the ranking of the old-age population has increased over time. We take the values in the series data array corresponding to the years 1995 and 2010 and convert those numerical entries into data point objects. Since we only want to show point markers for these two years, we turn off markers globally in plotOptions.series.marker.enabled and enable the marker individually inside the point objects, accompanied by style settings:

plotOptions: {
    series: { marker: { enabled: false } }
},
series: [{
    ...,
    data: [ 9, 9, 9, ...,
        {
            marker: {
                radius: 2,
                lineColor: '#CC2929',
                lineWidth: 2,
                fillColor: '#CC2929',
                enabled: true
            },
            y: 14
        }, 15, 15, 16, ...
    ]
}]

We then set a bounding box around the data labels with rounded corners (borderRadius) in the same border color (borderColor) as the x axis. The data label positions are then finely adjusted with the x and y options. Finally, we change the default implementation of the data label formatter: instead of returning the point value, we print the country ranking.

series: [{
    ...,
    data: [ 9, 9, 9, ...,
        {
            marker: { ... },
            dataLabels: {
                enabled: true,
                borderRadius: 3,
                borderColor: '#CC2929',
                borderWidth: 1,
                y: -23,
                formatter: function() {
                    return "Rank: 15th";
                }
            },
            y: 14
        }, 15, 15, 16, ...
    ]
}]

The final touch is to apply a gray background to the chart and add extra space via spacingBottom. The extra space avoids the credit label and x-axis label getting too close together, because we have disabled the legend box:

chart: {
    renderTo: 'container',
    spacingBottom: 30,
    backgroundColor: '#EAEAEA'
},

When all these configurations are put together, they produce the exact chart shown in the screenshot at the start of this section.

Mixing line and area series

In this section we are going to explore different plots that combine line and area series, as follows:

A projection chart, where a single trend line is joined with two series in different line styles
An area spline chart plotted with another step line series
A stacked area spline chart, where two area spline series are stacked on top of each other

Simulating a projection chart

The projection chart has a spline area for the section of real data that continues as a dashed line for the projection data. To do that, we separate the data into two series, one for the real data and the other for the projection data. The following is the series configuration code for the future data, up to 2024. This data is based on the National Institute of Population and Social Security Research report (http://www.ipss.go.jp/pp-newest/e/ppfj02/ppfj02.pdf).

series: [{
    name: 'project data',
    type: 'spline',
    showInLegend: false,
    lineColor: '#145252',
    dashStyle: 'Dash',
    data: [ [ 2010, 23 ], [ 2011, 22.8 ], ... [ 2024, 28.5 ] ]
}]

The future series is configured as a spline in a dashed line style, and the legend box is disabled because we want to present both series as if they were one. We also set the future (second) series to the same color as the first series. The final part is constructing the series data. As we specify the x-axis time data with the pointStart property, we need to align the projection data after 2010. There are two approaches we can use to specify the time data in a continuous form:

Insert null values into the second series data array as padding, to align it with the real data series
Specify the second series data as tuples, that is, an array holding both the time and the projection value

We are going to use the second approach because the series presentation is simpler. The following is the screenshot for the future data series only. The real data series is exactly the same as the graph in the screenshot at the start of the Sketching an area chart section, except without the point markers and data label decorations. The next step is to join both series together, as follows:

series: [{
    name: 'real data',
    type: 'areaspline',
    ....
}, {
    name: 'project data',
    type: 'spline',
    ....
}]

Since there is no overlap between the two series' data, they produce a smooth projection graph.

Contrasting spline with step line

In this section we are going to plot an area spline series together with another line series, but in a step presentation. The step line traverses vertically and horizontally only, according to the changes in the series data. It is generally used for presenting discrete data, that is, data without continuous/gradual movement. For the purpose of showing a step line, we will continue from the first area spline example. First of all, we need to enable the legend by removing the showInLegend setting and also remove dataLabels from the series data. Next, we include a new series, Ages 0 to 14, in the chart with the default line type, and then change its line style into steps. The following is the configuration for both series:

series: [{
    name: 'Ages 65 and over',
    type: 'areaspline',
    lineColor: '#145252',
    pointStart: 1980,
    fillColor: { .... },
    data: [ 9, 9, 9, 10, ...., 23 ]
}, {
    name: 'Ages 0 to 14',
    // default type is line series
    step: true,
    pointStart: 1980,
    data: [ 24, 23, 23, 23, 22, 22, 21, 20, 20, 19, 18, 18, 17, 17, 16, 16, 16,
            15, 15, 15, 15, 14, 14, 14, 14, 14, 14, 14, 14, 13, 13 ]
}]

The following screenshot shows the second series in the stepped line style:
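One practical note on the projection chart from earlier in this article: if the projection series is prepared server-side, the tuple form is easy to generate before it is embedded in the page. The following is a minimal Python sketch of that idea; the data values and the way the output is embedded are hypothetical stand-ins, not figures or code from the book.

import json

# Hypothetical projection values keyed by year (replace with the real report data).
projection = {2010: 23, 2011: 22.8, 2024: 28.5}

# Build the [year, value] tuples that Highcharts accepts as series data.
series_data = [[year, value] for year, value in sorted(projection.items())]

# Serialize so the result can be dropped into the 'data' option of the series config.
print(json.dumps(series_data))   # [[2010, 23], [2011, 22.8], [2024, 28.5]]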

OpenAM: Backup, Recovery, and Logging

Packt
03 Feb 2011
9 min read
OpenSSO provides utilities that can be invoked, with the proper procedure, to back up the server configuration data. When a crash or data corruption occurs, the server administrator must initiate a recovery operation. Recovering a backup involves two distinct operations:

Restoring the configuration files
Restoring the XML configuration data that was backed up earlier

In general, recovery refers to the various operations involved in restoring, rolling forward, and rolling back a backup. Backup and recovery refers to the various strategies and operations involved in protecting the configuration database against data loss, and reconstructing the database should a loss occur. In this article, you will learn how to use the tools provided by OpenSSO to perform the following:

OpenSSO configuration backup and recovery
Test to production
Trace and debug level logging for troubleshooting
Audit log configuration using a flat file
Audit log configuration using an RDBMS
Securing the audit logs from intrusion

OpenSSO deals only with backing up the configuration data of the server; backup and recovery of identity data such as users, groups, and roles is handled by an enterprise-level identity management suite of products.

Backing up configuration data

You are familiar by now with the different configuration stores supported by OpenSSO: an embedded store (based on OpenDS) and the highly scalable Directory Server Enterprise Edition. Regardless of the underlying configuration store type, certain files are created in the local file system on the host where the server is deployed. These files (such as the bootstrap file) contain critical pieces of information that help the application initialize; any corruption in these files could prevent the server application from starting. Hence, it becomes necessary to back up the configuration data stored in the file system. As a result, we can treat backup and recovery as a two-step process:

Back up the configuration files in the local file system
Back up the OpenSSO configuration data in the LDAP configuration store

Let us briefly discuss what each option means and when to apply it.

Backing up the OpenSSO configuration files

Typically, the server can fail to come up for two reasons: either it could not find the right configuration file that locates the configuration data store, or the data contained in the configuration store is corrupted. This is the simplest case to take up for this discussion, because there could be umpteen reasons for a server failing to start. OpenSSO provides the subcommand export-svc-cfg as part of the ssoadm command-line interface. Using this, the customer can back up only the configuration data that is stored in the configuration store, provided the configuration store is up and running.

This backup will not help if the disk that holds configuration files such as the bootstrap and OpenDS schema files crashes, because the backup obtained using export-svc-cfg does not contain these schema and bootstrap files. That jeopardizes the backup and recovery process for OpenSSO, which is why backing up the configuration directory becomes inevitable and vital for the recovery process. To back up the OpenSSO configuration directory, you can use any file archive tool of your choice. To perform this backup, log on to the OpenSSO host as the OpenSSO configuration user and execute the following command:

zip -r /tmp/opensso_config.zip /export/ssouser/opensso1/

This backs up all the contents of the configuration directory (in this case, /export/ssouser/opensso1). Though this is the recommended way to back up, there may be server audit and debug logs that could fill up the disk space as you perform periodic backups of this directory. The critical files and directories that need to be backed up are as follows:

bootstrap
opends (the whole directory, if present)
.version
.configParam
certificate stores

The rest of the contents of this directory can be restored from your staging area. If you have customized any of the service schemas under the <opensso-config>/config/xml directory, make sure you back them up as well. This backup is in itself enough to bring up a corrupted OpenSSO server. When you back up the opends directory, all the OpenSSO configuration information gets backed up too, so you do not really need the backup file generated using export-svc-cfg. This kind of backup is extremely useful and is the only way to perform crash recovery. If OpenDS itself is corrupted due to internal database or index corruption, it will not start, and one cannot access the OpenSSO server or the ssoadm command-line tool to restore the XML backup. So, it is a must to back up your configuration directory from the file system periodically.

Backing up the OpenSSO configuration data

The process of backing up the OpenSSO service configuration data is slightly different from a complete backup of the overall system deployment. When the subcommand export-svc-cfg is invoked, the underlying code exports all the nodes under ou=services,ROOT_SUFFIX of the configuration directory server:

./ssoadm export-svc-cfg -u amadmin -f /tmp/passwd_of_amadmin -e secretkeytoencryptpassword -o /tmp/svc-config-bkup.xml

To perform this, you need to have the ssoadm command-line tool configured. The options supplied to this command are self-explanatory, except perhaps -e. This takes a random string that is used as the encryption secret for the password entries in the service configuration data, for example, the RADIUS server's shared secret value. You need this key to restore the data back to the server. OpenSSO and its configuration directory server must be running in good condition for this export operation to succeed. This backup will be useful in the following cases:

The administrator accidentally deleted some of the authentication configurations
The administrator accidentally changed some of the configuration properties
The agent profiles have somehow lost their configuration data
You want to reset to factory defaults

In any case, one should be able to authenticate to OpenSSO as an admin to restore the configuration data. If the server is not in that state, then crash recovery is the only option. In the embedded store configuration case, this means unzipping the file system configuration backup obtained as described in the Backing up the OpenSSO configuration files section. For configuration data stored in the Directory Server Enterprise Edition, the customer should use the tools bundled with the Oracle Directory Server Enterprise Edition to back up and restore.

Crash recovery and restore

In the previous section, we briefly covered the crash recovery part. When a crash occurs in the embedded or remote configuration store, the server will not come up again unless it is restored to a valid state. This may involve restoring the proper database state and indexes using a backup taken from a known valid state. That backup may have been obtained by using the ODSEE backup tools or by simply zipping up the OpenSSO configuration file system, as described in the Backing up the OpenSSO configuration files section. You need to bring the OpenSSO server back to a state where the administrator can log in to access the console. At that point, the configuration exported to XML, as described in the Backing up the OpenSSO configuration data section, can be used. It is recommended to back up your vanilla configuration data from the file system periodically so that it can be used for crash recovery (where the embedded store itself is corrupted); backups of the configuration data using export-svc-cfg should be done frequently. Here is a sample execution of the import-svc-cfg subcommand:

./ssoadm import-svc-cfg -u amadmin -f /tmp/passwd_of_amadmin -e mysecretenckey -X /tmp/svc-config-bkup.xml

This will throw an error (because we have intentionally provided a wrong key) claiming that the secret key provided was wrong. Actually, it shows a string such as the following, which is a known bug:

import-service-configuration-secret-key

This is the key name that is supposed to map to a corresponding localizable error string. If you provide the correct encryption key, the import succeeds:

./ssoadm import-svc-cfg -u amadmin -f /tmp/passwd_of_amadmin -e secretkeytoencryptpassword -X /tmp/svc-config-bkup.xml
Directory Service contains existing data. Do you want to delete it? [y|N] y
Please wait while we import the service configuration...
Service Configuration was imported.

Note that it prompts before overwriting the existing data to make sure that the current configuration is not overwritten accidentally. There is no incremental restore, so be cautious while performing this operation. An import with a wrong version of the restore file could damage a working configuration, so it is always recommended to back up the existing configuration before importing a configuration backup file. If you do not want to import the file, just enter N and the command terminates without harming your data.

Well, what happens if customers do not have the configuration files backed up? If they do not have a copy of the configuration files to restore, they can reconfigure the OpenSSO web application by accessing the configurator (after cleaning up the existing configuration). Once they configure the server, they should be able to restore the XML backup. Nevertheless, the new configuration must match all the configuration parameters that were provided earlier, including the hostname and port details. This information can be found in the .configParam file. If you are planning to export the configuration to a server other than the original one, refer to the Test to production section, which covers the details of how customers can migrate a test configuration to a production server; it requires more steps than simply restoring the configuration data.
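Because the article recommends backing up both the configuration directory and the service configuration XML periodically, the two steps are easy to combine in a small scheduled script. The following is a minimal, hypothetical Python sketch of that idea, not an official OpenSSO utility; the configuration path, ssoadm location, password file, backup destination, and encryption key are placeholders you would replace for your own deployment, and the ssoadm invocation mirrors the export-svc-cfg command shown earlier.

import datetime
import shutil
import subprocess

# Placeholders -- substitute the values of your own OpenSSO deployment.
CONFIG_DIR = "/export/ssouser/opensso1"
SSOADM = "/opt/ssotools/opensso/bin/ssoadm"   # hypothetical install path
PASSWD_FILE = "/tmp/passwd_of_amadmin"
ENC_KEY = "secretkeytoencryptpassword"

stamp = datetime.datetime.now().strftime("%Y%m%d%H%M")

# Step 1: archive the configuration directory (bootstrap, opends, certificate stores, and so on).
shutil.make_archive(f"/backup/opensso_config_{stamp}", "zip", CONFIG_DIR)

# Step 2: export the service configuration XML with export-svc-cfg.
subprocess.run([
    SSOADM, "export-svc-cfg",
    "-u", "amadmin",
    "-f", PASSWD_FILE,
    "-e", ENC_KEY,
    "-o", f"/backup/svc-config-bkup_{stamp}.xml",
], check=True)

Run from cron (or any scheduler), this keeps both the file-system backup needed for crash recovery and the XML export needed for configuration-level restores.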

Getting Started with Apache Nutch

Packt
18 Dec 2013
13 min read
Introduction to Apache Nutch

Apache Nutch is a very robust and scalable tool for web crawling, and it can be integrated with scripting languages such as Python. You can use it whenever your application contains huge amounts of data and you want to apply crawling to that data. Apache Nutch is open source web crawler software used for crawling websites. If you understand Apache Nutch well, you can create your own search engine, somewhat like Google: it provides your own search engine with which you can improve your application's page rank in search results and customize searching according to your needs. It is extensible and scalable, and it provides facilities for parsing, indexing, creating your own search engine, customizing search according to your needs, scalability, robustness, and a ScoringFilter for custom implementations. ScoringFilter is a Java class used when creating an Apache Nutch plugin; it is used for manipulating scoring variables. We can run Apache Nutch on a single machine as well as in a distributed environment such as Apache Hadoop. It is written in Java. With Apache Nutch we can find broken links and create a copy of all the visited pages for searching over, for example to build indexes. We can find web page hyperlinks in an automated manner. Apache Nutch can be integrated with Apache Solr easily, and we can index all the web pages crawled by Apache Nutch into Apache Solr. We can then use Apache Solr to search the web pages indexed by Apache Nutch. Apache Solr is a search platform built on top of Apache Lucene; it can be used for searching any type of data, for example web pages.

Crawling your first website

Crawling is driven by the Apache Nutch crawling tool and certain related tools for building and maintaining several data structures, including the web database, the index, and a set of segments. Once Apache Nutch has indexed the web pages into Apache Solr, you can search for the required web pages in Apache Solr.

Apache Solr installation

Apache Solr is a search platform built on top of Apache Lucene. It can be used for searching any type of data, for example web pages. It is a very powerful search mechanism and provides full-text search, dynamic clustering, database integration, rich document handling, and much more. Apache Solr will be used for indexing the URLs crawled by Apache Nutch, and you can then search in Apache Solr for the details crawled by Apache Nutch.

Crawling your website using the crawl script

Apache Nutch 2.2.1 comes with a crawl script that performs crawling by executing one single script. In earlier versions, we had to perform each step manually, such as generating data, fetching data, parsing data, and so on, to perform crawling.

Crawling the web, the CrawlDb, and URL filters

When the user invokes the crawling command in Apache Nutch 1.x, a crawldb is generated by Apache Nutch, which is nothing but a directory that contains details about the crawl. In Apache Nutch 2.x, the crawldb is not present; instead, Apache Nutch keeps all the crawling data directly in the database.

InjectorJob

The injector adds the necessary URLs to the crawldb. The crawldb is the directory created by Apache Nutch for storing data related to crawling. You need to provide URLs to the InjectorJob, either by downloading a URL list from the Internet or by writing your own file that contains URLs. Let's say you have created a directory called urls, which contains all the URLs that need to be injected into the crawldb. The following command performs the InjectorJob:

#bin/nutch inject crawl/crawldb urls

Here, urls is the directory that contains all the URLs to be injected into the crawldb, and crawl/crawldb is the directory in which the injected URLs are placed. After performing this job, you have a number of unfetched URLs inside your database, that is, the crawldb.

GeneratorJob

Once we are done with the InjectorJob, it is time to fetch the injected URLs from the crawldb. Before fetching the URLs, you need to perform the GeneratorJob. The following command is used for the GeneratorJob:

#bin/nutch generate crawl/crawldb crawl/segments

Crawldb is the directory from which URLs are generated, and segments is the directory used by the GeneratorJob to hold the information required for crawling.

FetcherJob

The job of the fetcher is to fetch the URLs that are generated by the GeneratorJob; it uses the input provided by the GeneratorJob. The following command is used for the FetcherJob:

#bin/nutch fetch -all

Here I have provided the input parameter -all, which means this job will fetch all the URLs generated by the GeneratorJob. You can use different input parameters according to your needs.

ParserJob

After the FetcherJob, the ParserJob parses the URLs that were fetched by the FetcherJob. The following command is used for the ParserJob:

# bin/nutch parse -all

I have used the input parameter -all, which parses all the URLs fetched by the FetcherJob. You can use different input parameters according to your needs.

DbUpdaterJob

Once the ParserJob has completed, we need to update the database with the results of the FetcherJob. This updates the respective database with the last-fetched URLs. The following command is used for the DbUpdaterJob:

# bin/nutch updatedb crawl/crawldb -all

After performing this job, the database contains updated entries for all the initial pages, as well as new entries corresponding to the newly discovered pages linked from the initial set.

Invertlinks

Before indexing, we first need to invert all the links; after this we will be able to index incoming anchor text with the pages. The following command is used for Invertlinks:

# bin/nutch invertlinks crawl/linkdb -dir crawl/segments

Apache Hadoop

Apache Hadoop is designed to run your application on clusters of servers in which one computer is the master and the rest are slaves, forming a huge data warehouse. The master computers direct the slave computers, and the processing is done by the slave computers. This is why Apache Hadoop is used for processing huge amounts of data: the work is divided among a number of slave computers, which is why Apache Hadoop gives high throughput for any processing. As the data grows, you add more slave computers. That is how Apache Hadoop works.

Integration of Apache Nutch with Apache Hadoop

Apache Nutch can be easily integrated with Apache Hadoop, and we can make the process much faster than running Apache Nutch on a single machine. After integrating Apache Nutch with Apache Hadoop, we can perform crawling in an Apache Hadoop cluster environment, so the process will be much faster and will give the highest throughput.

Apache Hadoop setup with a cluster

This setup does not require purchasing huge hardware for running Apache Nutch and Apache Hadoop; it is designed to make maximum use of the hardware available.

Formatting the HDFS filesystem using the NameNode

HDFS stands for Hadoop Distributed File System and is the storage layer used by Apache Hadoop; it is the directory structure that stores all the data related to Apache Hadoop. It has two components, the NameNode and the DataNodes: the NameNode manages the filesystem metadata and the DataNodes actually store the data. It is highly configurable and well suited to many installations; for very large clusters, the configuration needs to be tuned. The first step in getting Apache Hadoop started is formatting the Hadoop filesystem, which is implemented on top of the local filesystem of your cluster (which will include only your local machine if you have followed along).

Setting up the deployment architecture of Apache Nutch

We have to set up Apache Nutch on each of the machines we are using; in this case, we are using a six-machine cluster, so we have to set up Apache Nutch on each machine. For a small number of machines in the cluster, we can set up each machine manually, but when there are many machines, say 100 in the cluster environment, we cannot set up each machine manually. For that we require a deployment tool such as Chef, or at least distributed ssh. You can refer to http://www.opscode.com/chef/ to get familiar with Chef, and to http://www.ibm.com/developerworks/aix/library/au-satdistadmin/ to get familiar with distributed ssh. I will demonstrate running Apache Hadoop on Ubuntu for a single-node cluster; if you want to run Apache Hadoop on Ubuntu for a multi-node cluster, follow the reference links provided above. Once we have deployed Apache Nutch to each machine, we run the start-all.sh script, which starts the services on the master node and data nodes. The script begins the Hadoop daemons on the master node, logs in to all the slave nodes using the ssh command as explained above, and begins the daemons on the slave nodes. The start-all.sh script expects Apache Nutch to be placed at the same location on each machine, and it also expects Apache Hadoop to store its data at the same file path on each machine. The start-all.sh script, which starts the daemons on the master and slave nodes, uses password-less login over ssh.

Introduction to Apache Nutch configuration with Eclipse

Apache Nutch can be easily configured with Eclipse, after which we can perform crawling from Eclipse instead of the command line; all the crawling operations we run from the command line can be performed from Eclipse. Instructions are provided for setting up a development environment for Apache Nutch with the Eclipse IDE, intended as a comprehensive starting resource for configuring, building, crawling, and debugging Apache Nutch in this context. The following are the prerequisites for Apache Nutch integration with Eclipse:

Get the latest version of Eclipse from http://www.eclipse.org/downloads/packages/release/juno/r
All the required plugins are available from the Eclipse Marketplace. If the Marketplace client is not installed, you can get it from http://marketplace.eclipse.org/marketplace-client-intro
Once you have configured Eclipse, download Subclipse as described at http://subclipse.tigris.org/. If you face a problem with the 1.8.x release, try 1.6.x; this may resolve compatibility issues.
Download the IvyDE plugin for Eclipse from http://ant.apache.org/ivy/ivyde/download.cgi
Download the m2e plugin for Eclipse from http://marketplace.eclipse.org/content/maven-integration-eclipse

Introduction to Apache Accumulo

Accumulo is used as a datastore for storing data, in the same way that we use databases such as MySQL or Oracle. The key point about Apache Accumulo is that it runs in an Apache Hadoop cluster environment, which is a very good feature. Accumulo is a sorted, distributed key/value store: a robust, scalable, high-performance data storage and retrieval system. Apache Accumulo is based on Google's BigTable design and is built on top of Apache Hadoop, Thrift, and ZooKeeper. Apache Accumulo features some novel improvements on the BigTable design in the form of cell-based access control and a server-side programming mechanism that can modify key/value pairs at various points in the data management process.

Introduction to Apache Gora

The Apache Gora open source framework provides an in-memory data model and persistence for big data. Apache Gora supports persisting to column stores, key/value stores, document stores, and RDBMSs, and analyzing the data with extensive Apache Hadoop MapReduce support.

Supported datastores

Apache Gora presently supports the following datastores: Apache Accumulo, Apache Cassandra, Apache HBase, and Amazon DynamoDB.

Use of Apache Gora

Although there are many excellent ORM frameworks for relational databases, data modeling in NoSQL data stores differs profoundly from that of their relational cousins. Data-model-agnostic frameworks such as JDO are not sufficient for use cases where one needs to use the full power of the data models in column stores. Gora fills this gap, giving the user an easy-to-use in-memory data model plus persistence for big data, providing data-store-specific mappings and built-in Apache Hadoop support.

Integration of Apache Nutch with Apache Accumulo

In this section, we are going to cover the process of integrating Apache Nutch with Apache Accumulo. Apache Accumulo is used for huge data storage; it is built on top of Apache Hadoop, ZooKeeper, and Thrift. A potential use of integrating Apache Nutch with Apache Accumulo arises when our application has huge amounts of data to process and we want to run it in a cluster environment; in that case we can use Apache Accumulo for data storage. As Apache Accumulo only runs with Apache Hadoop, it is mainly used in cluster-based environments. So first we will start with the configuration of Apache Gora with Apache Nutch, then we will set up Apache Hadoop and ZooKeeper, then we will install and configure Apache Accumulo, then we will test Apache Accumulo, and at the end we will see crawling with Apache Nutch on Apache Accumulo.

Setting up Apache Hadoop and Apache ZooKeeper for Apache Nutch

Apache ZooKeeper is a centralized service used for maintaining configuration information and naming, and for providing distributed synchronization and group services. All these services are used by distributed applications in one manner or another, and because they are provided by ZooKeeper you do not have to write them from scratch. You can use these services to implement consensus, management, group membership, leader election, and presence protocols, and you can also build on them for your own requirements. Apache Accumulo is built on top of Apache Hadoop and ZooKeeper, so we must configure Apache Accumulo with Apache Hadoop and Apache ZooKeeper. You can refer to http://www.covert.io/post/18414889381/accumulo-nutch-and-gora for any queries related to the setup.

Integration of Apache Nutch with MySQL

In this section, we are going to integrate Apache Nutch with MySQL, so that the web pages crawled by Apache Nutch are stored in MySQL. You can then go to MySQL, check your crawled web pages, and perform the necessary operations. We will start with an introduction to MySQL, then cover why you would integrate MySQL with Apache Nutch. After that we will see the configuration of MySQL with Apache Nutch, and at the end we will do crawling with Apache Nutch on MySQL. So let's just start with the introduction to MySQL.

Summary

We covered the following:

Downloading Apache Hadoop and Apache Nutch
Performing crawling on an Apache Hadoop cluster in Apache Nutch
Apache Nutch configuration with Eclipse
Installation steps for building Apache Nutch with Eclipse
Crawling in Eclipse
Configuration of Apache Gora with Apache Nutch
Installation and configuration of Apache Accumulo
Crawling with Apache Nutch on Apache Accumulo
The need for integrating MySQL with Apache Nutch

Further resources on this subject: Getting Started with the Alfresco Records Management Module, Making Big Data Work for Hadoop and Solr, and Apache Solr PHP Integration.
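To tie the crawl cycle described earlier together (InjectorJob, GeneratorJob, FetcherJob, ParserJob, DbUpdaterJob, and Invertlinks), the individual jobs are usually chained in a small driver, which is essentially what the crawl script automates. The following is a minimal, hypothetical Python sketch of such a driver that shells out to the bin/nutch commands shown above; the Nutch runtime directory, seed URL directory, and crawl paths are assumptions to adapt to your own setup.

import subprocess

NUTCH = "bin/nutch"          # run from an assumed Nutch runtime directory
SEED_DIR = "urls"            # directory containing the seed URL list
CRAWL_DB = "crawl/crawldb"
SEGMENTS = "crawl/segments"
LINK_DB = "crawl/linkdb"

def nutch(*args):
    """Run a single Nutch job and fail loudly on a non-zero exit code."""
    subprocess.run([NUTCH, *args], check=True)

# One crawl cycle, mirroring the commands in the article.
nutch("inject", CRAWL_DB, SEED_DIR)              # InjectorJob
nutch("generate", CRAWL_DB, SEGMENTS)            # GeneratorJob
nutch("fetch", "-all")                           # FetcherJob
nutch("parse", "-all")                           # ParserJob
nutch("updatedb", CRAWL_DB, "-all")              # DbUpdaterJob
nutch("invertlinks", LINK_DB, "-dir", SEGMENTS)  # Invertlinks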

PostgreSQL – New Features

Packt
30 Mar 2015
28 min read
In this article, Jayadevan Maymala, author of the book, PostgreSQL for Data Architects, you will see how to troubleshoot the initial hiccups faced by people who are new to PostgreSQL. We will look at a few useful, but not commonly used data types. We will also cover pgbadger, a nifty third-party tool that can run through a PostgreSQL log. This tool can tell us a lot about what is happening in the cluster. Also, we will look at a few key features that are part of PostgreSQL 9.4 release. We will cover a couple of useful extensions. (For more resources related to this topic, see here.) Interesting data types We will start with the data types. PostgreSQL does have all the common data types we see in databases. These include: The number data types (smallint, integer, bigint, decimal, numeric, real, and double) The character data types (varchar, char, and text) The binary data types The date/time data types (including date, timestamp without timezone, and timestamp with timezone) BOOLEAN data types However, this is all standard fare. Let's start off by looking at the RANGE data type. RANGE This is a data type that can be used to capture values that fall in a specific range. Let's look at a few examples of use cases. Cars can be categorized as compact, convertible, MPV, SUV, and so on. Each of these categories will have a price range. For example, the price range of a category of cars can start from $15,000 at the lower end and the price range at the upper end can start from $40,000. We can have meeting rooms booked for different time slots. Each room is booked during different time slots and is available accordingly. Then, there are use cases that involve shift timings for employees. Each shift begins at a specific time, ends at a specific time, and involves a specific number of hours on duty. We would also need to capture the swipe-in and swipe-out time for employees. These are some use cases where we can consider range types. Range is a high-level data type; we can use int4range as the appropriate subtype for the car price range scenario. For the booking the meeting rooms and shifting use cases, we can consider tsrange or tstzrange (if we want to capture time zone as well). It makes sense to explore the possibility of using range data types in most scenarios, which involve the following features: From and to timestamps/dates for room reservations Lower and upper limit for price/discount ranges Scheduling jobs Timesheets Let's now look at an example. We have three meeting rooms. The rooms can be booked and the entries for reservations made go into another table (basic normalization principles). How can we find rooms that are not booked for a specific time period, say, 10:45 to 11:15? 
We will look at this with and without the range data type: CREATE TABLE rooms(id serial, descr varchar(50));   INSERT INTO rooms(descr) SELECT concat('Room ', generate_series(1,3));   CREATE TABLE room_book (id serial , room_id integer, from_time timestamp, to_time timestamp , res tsrange);   INSERT INTO room_book (room_id,from_time,to_time,res) values(1,'2014-7-30 10:00:00', '2014-7-30 11:00:00', '(2014-7-30 10:00:00,2014-7-30 11:00:00)');   INSERT INTO room_book (room_id,from_time,to_time,res) values(2,'2014-7-30 10:00:00', '2014-7-30 10:40:00', '(2014-7-30 10:00,2014-7-30 10:40:00)');   INSERT INTO room_book (room_id,from_time,to_time,res) values(2,'2014-7-30 11:20:00', '2014-7-30 12:00:00', '(2014-7-30 11:20:00,2014-7-30 12:00:00)');   INSERT INTO room_book (room_id,from_time,to_time,res) values(3,'2014-7-30 11:00:00', '2014-7-30 11:30:00', '(2014-7-30 11:00:00,2014-7-30 11:30:00)'); PostgreSQL has the OVERLAPS operator. This can be used to get all the reservations that overlap with the period for which we wanted to book a room: SELECT room_id FROM room_book WHERE (from_time,to_time) OVERLAPS ('2014-07-30 10:45:00','2014-07-30 11:15:00'); If we eliminate these room IDs from the master list, we have the list of rooms available. So, we prefix the following command to the preceding SQL: SELECT id FROM rooms EXCEPT We get a room ID that is not booked from 10:45 to 11:15. This is the old way of doing it. With the range data type, we can write the following SQL statement: SELECT id FROM rooms EXCEPT SELECT room_id FROM room_book WHERE res && '(2014-07-30 10:45:00,2014-07-30 11:15:00)'; Do look up GIST indexes to improve the performance of queries that use range operators. Another way of achieving the same is to use the following command: SELECT id FROM rooms EXCEPT SELECT room_id FROM room_book WHERE '2014-07-30 10:45:00' < to_time AND '2014-07-30 11:15:00' > from_time; Now, let's look at the finer points of how a range is represented. The range values can be opened using [ or ( and closed with ] or ). [ means include the lower value and ( means exclude the lower value. The closing (] or )) has a similar effect on the upper values. When we do not specify anything, [) is assumed, implying include the lower value, but exclude the upper value. Note that the lower bound is 3 and upper bound is 6 when we mention 3,5, as shown here: SELECT int4range(3,5,'[)') lowerincl ,int4range(3,5,'[]') bothincl, int4range(3,5,'()') bothexcl , int4range(3,5,'[)') upperexcl; lowerincl | bothincl | bothexcl | upperexcl -----------+----------+----------+----------- [3,5)       | [3,6)       | [4,5)       | [3,5) Using network address types The network address types are cidr, inet, and macaddr. These are used to capture IPv4, IPv6, and Mac addresses. Let's look at a few use cases. When we have a website that is open to public, a number of users from different parts of the world access it. We may want to analyze the access patterns. Very often, websites can be used by users without registering or providing address information. In such cases, it becomes even more important that we get some insight into the users based on the country/city and similar location information. When anonymous users access our website, an IP is usually all we get to link the user to a country or city. Often, this becomes our not-so-accurate unique identifier (along with cookies) to keep track of repeat visits, to analyze website-usage patterns, and so on. 
The network address types can also be useful when we develop applications that monitor a number of systems in different networks to check whether they are up and running, to monitor resource consumption of the systems in the network, and so on. While data types (such as VARCHAR or BIGINT) can be used to store IP addresses, it's recommended to use one of the built-in types PostgreSQL provides to store network addresses. There are three data types to store network addresses. They are as follows: inet: This data type can be used to store an IPV4 or IPV6 address along with its subnet. The format in which data is to be inserted is Address/y, where y is the number of bits in the netmask. cidr: This data type can also be used to store networks and network addresses. Once we specify the subnet mask for a cidr data type, PostgreSQL will throw an error if we set bits beyond the mask, as shown in the following example: CREATE TABLE nettb (id serial, intclmn inet, cidrclmn cidr); CREATE TABLE INSERT INTO nettb (intclmn , cidrclmn) VALUES ('192.168.64.2/32', '192.168.64.2/32'); INSERT 0 1 INSERT INTO nettb (intclmn , cidrclmn) VALUES ('192.168.64.2/24', '192.168.64.2/24'); ERROR: invalid cidr value: "192.168.64.2/24" LINE 1: ...b (intclmn , cidrclmn) VALUES ('192.168.64.2/24', '192.168.6...                                                              ^ DETAIL: Value has bits set to right of mask. INSERT INTO nettb (intclmn , cidrclmn) VALUES ('192.168.64.2/24', '192.168.64.0/24'); INSERT 0 1 SELECT * FROM nettb; id |     intclmn     |   cidrclmn     ----+-----------------+----------------- 1 | 192.168.64.2   | 192.168.64.2/32 2 | 192.168.64.2/24 | 192.168.64.0/24 Let's also look at a couple of useful operators available within network address types. Does an IP fall in a subnet? This can be figured out using <<=, as shown here: SELECT id,intclmn FROM nettb ; id |   intclmn   ----+-------------- 1 | 192.168.64.2 3 | 192.168.12.2 4 | 192.168.13.2 5 | 192.168.12.4   SELECT id,intclmn FROM nettb where intclmn <<= inet'192.168.12.2/24'; id |   intclmn   3 | 192.168.12.2 5 | 192.168.12.4   SELECT id,intclmn FROM nettb where intclmn <<= inet'192.168.12.2/32'; id |   intclmn   3 | 192.168.12.2 The operator used in the preceding command checks whether the column value is contained within or equal to the value we provided. Similarly, we have the equality operator, that is, greater than or equal to, bitwise AND, bitwise OR, and other standard operators. The macaddr data type can be used to store Mac addresses in different formats. hstore for key-value pairs A key-value store available in PostgreSQL is hstore. Many applications have requirements that make developers look for a schema-less data store. They end up turning to one of the NoSQL databases (Cassandra) or the simple and more prevalent stores such as Redis or Riak. While it makes sense to opt for one of these if the objective is to achieve horizontal scalability, it does make the system a bit complex because we now have more moving parts. After all, most applications do need a relational database to take care of all the important transactions along with the ability to write SQL to fetch data with different projections. If a part of the application needs to have a key-value store (and horizontal scalability is not the prime objective), the hstore data type in PostgreSQL should serve the purpose. It may not be necessary to make the system more complex by using different technologies that will also add to the maintenance overhead. 
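As a quick aside on the macaddr type mentioned above: the different input formats it accepts are normalized on storage. A small sketch — the table and values are only for illustration:
CREATE TABLE devices(id serial, mac macaddr);
INSERT INTO devices(mac) VALUES
  ('08:00:2b:01:02:03'),
  ('08-00-2b-01-02-03'),
  ('08002b010203');
SELECT * FROM devices;
All three rows come back in the same canonical colon-separated form, which makes comparing and grouping by Mac address straightforward.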
Sometimes, what we want is not an entirely schema-less database, but some flexibility where we are certain about most of our entities and their attributes but are unsure about a few. For example, a person is sure to have a few key attributes such as first name, date of birth, and a couple of other attributes (irrespective of his nationality). However, there could be other attributes that undergo change. A U.S. citizen is likely to have a Social Security Number (SSN); someone from Canada has a Social Insurance Number (SIN). Some countries may provide more than one identifier. There can be more attributes with a similar pattern. There is usually a master attribute table (which links the IDs to attribute names) and a master table for the entities. Writing queries against tables designed on an EAV approach can get tricky. Using hstore may be an easier way of accomplishing the same. Let's see how we can do this using hstore with a simple example. The hstore key-value store is an extension and has to be installed using CREATE EXTENSION hstore. We will model a customer table with first_name and an hstore column to hold all the dynamic attributes: CREATE TABLE customer(id serial, first_name varchar(50), dynamic_attributes hstore); INSERT INTO customer (first_name ,dynamic_attributes) VALUES ('Michael','ssn=>"123-465-798" '), ('Smith','ssn=>"129-465-798" '), ('James','ssn=>"No data" '), ('Ram','uuid=>"1234567891" , npr=>"XYZ5678", ratnum=>"Somanyidentifiers" '); Now, let's try retrieving all customers with their SSN, as shown here: SELECT first_name, dynamic_attributes FROM customer        WHERE dynamic_attributes ? 'ssn'; first_name | dynamic_attributes Michael   | "ssn"=>"123-465-798" Smith     | "ssn"=>"129-465-798" James     | "ssn"=>"No data" Also, those with a specific SSN: SELECT first_name,dynamic_attributes FROM customer        WHERE dynamic_attributes -> 'ssn'= '123-465-798'; first_name | dynamic_attributes - Michael   | "ssn"=>"123-465-798" If we want to get records that do not contain a specific SSN, just use the following command: WHERE NOT dynamic_attributes -> 'ssn'= '123-465-798' Also, replacing it with WHERE NOT dynamic_attributes ? 'ssn'; gives us the following command: first_name |                          dynamic_attributes         ------------+----------------------------------------------------- Ram       | "npr"=>"XYZ5678", "uuid"=>"1234567891", "ratnum"=>"Somanyidentifiers" As is the case with all data types in PostgreSQL, there are a number of functions and operators available to fetch data selectively, update data, and so on. We must always use the appropriate data types. This is not just for the sake of doing it right, but because of the number of operators and functions available with a focus on each data type; hstore stores only text. We can use it to store numeric values, but these values will be stored as text. We can index the hstore columns to improve performance. The type of index to be used depends on the operators we will be using frequently. json/jsonb JavaScript Object Notation (JSON) is an open standard format used to transmit data in a human-readable format. It's a language-independent data format and is considered an alternative to XML. It's really lightweight compared to XML and has been steadily gaining popularity in the last few years. PostgreSQL added the JSON data type in Version 9.2 with a limited set of functions and operators. Quite a few new functions and operators were added in Version 9.3. 
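Stepping back to the hstore customer example above for a moment, the note about indexing can be made concrete with a short sketch (the index name is arbitrary):
CREATE INDEX idx_customer_attrs ON customer USING GIN (dynamic_attributes);
-- Containment: customers whose attributes include this exact key/value pair
SELECT first_name FROM customer
  WHERE dynamic_attributes @> 'ssn=>"123-465-798"'::hstore;
-- Add or overwrite a key with the concatenation operator
UPDATE customer
  SET dynamic_attributes = dynamic_attributes || 'passport=>"X1234567"'::hstore
  WHERE first_name = 'Ram';
The GIN index supports the ? and @> operators used here and earlier, so lookups by key or by key/value pair no longer require a sequential scan of the table.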
Version 9.4 adds one more data type: jsonb, which is very similar to json. The jsonb data type stores data in binary format. It also removes white space (which is insignificant) and avoids duplicate object keys. As a result of these differences, jsonb has an overhead when data goes in, while json has extra processing overhead when data is retrieved (consider how often each data point will be written and read). The number of operators available with each of these data types is also slightly different. As it's possible to cast one data type to the other, which one we should use depends on the use case. If the data will be stored as it is and retrieved without any operations, json should suffice. However, if we plan to use operators extensively and want indexing support, jsonb is a better choice. Also, if we want to preserve whitespace, key ordering, and duplicate keys, json is the right choice. Now, let's look at an example. Assume that we are doing a proof of concept project for a library management system. There are a number of categories of items (ranging from books to DVDs). We wouldn't have information about all the categories of items and their attributes at the piloting stage. For the pilot stage, we could use a table design with the json data type to hold various items and their attributes: CREATE TABLE items (    item_id serial,    details json ); Now, we will add records. All DVDs go into one record, books go into another, and so on: INSERT INTO items (details) VALUES ('{                  "DVDs" :[                         {"Name":"The Making of Thunderstorms", "Types":"Educational",                          "Age-group":"5-10","Produced By":"National Geographic"                          },                          {"Name":"My nightmares", "Types":"Movies", "Categories":"Horror",                          "Certificate":"A", "Director":"Dracula","Actors":                                [{"Name":"Meena"},{"Name":"Lucy"},{"Name":"Van Helsing"}]                          },                          {"Name":"My Cousin Vinny", "Types":"Movies", "Categories":"Suspense",                          "Certificate":"A", "Director": "Jonathan Lynn","Actors":                          [{"Name":"Joe "},{"Name":"Marissa"}] }] }' ); A better approach would be to have one record for each item. Now, let's take a look at a few JSON functions: SELECT   details->>'DVDs' dvds, pg_typeof(details->>'DVDs') datatype      FROM items; SELECT   details->'DVDs' dvds ,pg_typeof(details->'DVDs') datatype      FROM items; Note the difference between ->> and -> in the following screenshot. We are using the pg_typeof function to clearly see the data type returned by the functions. Both return the JSON object field. The first function returns text and the second function returns JSON: Now, let's try something a bit more complex: retrieve all movies in DVDs in which Meena acted with the following SQL statement: WITH tmp (dvds) AS (SELECT json_array_elements(details->'DVDs') det FROM items) SELECT * FROM tmp , json_array_elements(tmp.dvds#>'{Actors}') as a WHERE    a->>'Name'='Meena'; We get the record as shown here: We used one more function and a couple of operators. The json_array_elements function expands a JSON array into a set of JSON elements. So, we first extracted the array for DVDs. We also used the WITH clause to create a temporary result set (a common table expression), which ceases to exist as soon as the query is over. In the next part, we extracted the elements of the Actors array from the DVDs.
Then, we checked whether the Name element is equal to Meena. XML PostgreSQL added the xml data type in Version 8.3. Extensible Markup Language (XML) has a set of rules to encode documents in a format that is both human-readable and machine-readable. This data type is best used to store documents. XML became the standard way of data exchanging information across systems. XML can be used to represent complex data structures such as hierarchical data. However, XML is heavy and verbose; it takes more bytes per data point compared to the JSON format. As a result, JSON is referred to as fat-free XML. XML structure can be verified against XML Schema Definition Documents (XSD). In short, XML is heavy and more sophisticated, whereas JSON is lightweight and faster to process. We need to configure PostgreSQL with libxml support (./configure --with-libxml) and then restart the cluster for XML features to work. There is no need to reinitialize the database cluster. Inserting and verifying XML data Now, let's take a look at what we can do with the xml data type in PostgreSQL: CREATE TABLE tbl_xml(id serial, docmnt xml); INSERT INTO tbl_xml(docmnt ) VALUES ('Not xml'); INSERT INTO tbl_xml (docmnt)        SELECT query_to_xml( 'SELECT now()',true,false,'') ; SELECT xml_is_well_formed_document(docmnt::text), docmnt        FROM tbl_xml; Then, take a look at the following screenshot: First, we created a table with a column to store the XML data. Then, we inserted a record, which is not in the XML format, into the table. Next, we used the query_to_xml function to get the output of a query in the XML format. We inserted this into the table. Then, we used a function to check whether the data in the table is well-formed XML. Generating XML files for table definitions and data We can use the table_to_xml function if we want to dump the data from a table in the XML format. Append and_xmlschema so that the function becomes table_to_xml_and_xmlschema, which will also generate the schema definition before dumping the content. If we want to generate just the definitions, we can use table_to_xmlschema. PostgreSQL also provides the xpath function to extract data as follows: SELECT xpath('/table/row/now/text()',docmnt) FROM tbl_xml        WHERE id = 2;                xpath               ------------------------------------ {2014-07-29T16:55:00.781533+05:30} Using properly designed tables with separate columns to capture each attribute is always the best approach from a performance standpoint and update/write-options perspective. Data types such as json/xml are best used to temporarily store data when we need to provide feeds/extracts/views to other systems or when we get data from external systems. They can also be used to store documents. The maximum size for a field is 1 GB. We must consider this when we use the database to store text/document data. pgbadger Now, we will look at a must-have tool if we have just started with PostgreSQL and want to analyze the events taking place in the database. For those coming from an Oracle background, this tool provides reports similar to AWR reports, although the information is more query-centric. It does not include data regarding host configuration, wait statistics, and so on. Analyzing the activities in a live cluster provides a lot of insight. It tells us about load, bottlenecks, which queries get executed frequently (we can focus more on them for optimization). It even tells us if the parameters are set right, although a bit indirectly. 
For example, if we see that there are many temp files getting created while a specific query is getting executed, we know that we either have a buffer issue or have not written the query right. For pgbadger to effectively scan the log file and produce useful reports, we should get our logging configuration right as follows: log_destination = 'stderr' logging_collector = on log_directory = 'pg_log' log_filename = 'postgresql-%Y-%m-%d.log' log_min_duration_statement = 0 log_connections = on log_disconnections = on log_duration = on log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d ' log_lock_waits = on track_activity_query_size = 2048 It might be necessary to restart the cluster for some of these changes to take effect. We will also ensure that there is some load on the database using pgbench. It's a utility that ships with PostgreSQL and can be used to benchmark PostgreSQL on our servers. We can initialize the tables required for pgbench by executing the following command at shell prompt: pgbench -i pgp This creates a few tables on the pgp database. We can log in to psql (database pgp) and check: \dt              List of relations Schema |       Name      | Type | Owner   --------+------------------+-------+---------- public | pgbench_accounts | table | postgres public | pgbench_branches | table | postgres public | pgbench_history | table | postgres    public | pgbench_tellers | table | postgres Now, we can run pgbench to generate load on the database with the following command: pgbench -c 5 -T10 pgp The T option passes the duration for which pgbench should continue execution in seconds, c passes the number of clients, and pgp is the database. At shell prompt, execute: wget https://github.com/dalibo/pgbadger/archive/master.zip Once the file is downloaded, unzip the file using the following command: unzip master.zip Use cd to the directory pgbadger-master as follows: cd pgbadger-master Execute the following command: ./pgbadger /pgdata/9.3/pg_log/postgresql-2014-07-31.log –o myoutput.html Replace the log file name in the command with the actual name. It will generate a myoutput.html file. The HTML file generated will have a wealth of information about what happened in the cluster with great charts/tables. In fact, it takes quite a bit of time to go through the report. Here is a sample chart that provides the distribution of queries based on execution time: The following screenshot gives an idea about the number of performance metrics provided by the report: If our objective is to troubleshoot performance bottlenecks, the slowest individual queries and most frequent queries under the top drop-down list is the right place to start. Once the queries are identified, locks, temporary file generation, and so on can be studied to identify the root cause. Of course, EXPLAIN is the best option when we want to refine individual queries. If the objective is to understand how busy the cluster is, the Overview section and Sessions are the right places to explore. The logging configuration used may create huge log files in systems with a lot of activity. Tweak the parameters appropriately to ensure that this does not happen. With this, we covered most of the interesting data types, an interesting extension and a must-use tool from PostgreSQL ecosystem. Now, let's cover a few interesting features in PostgreSQL Version 9.4. Features over time Applying filters in Versions 8.0, 9.0, and 9.4 gives us a good idea about how quickly features are getting added to the database. 
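Before moving on to the 9.4 features, the pgbench and pgbadger steps described above can be strung together into a small shell script so that a fresh report is produced after each load run. This is only a sketch; the log file path, database name, and report name are assumptions to be adjusted for your setup:
#!/bin/bash
# Generate some load, then build an HTML report from today's log file
PGLOG=/pgdata/9.3/pg_log/postgresql-$(date +%Y-%m-%d).log
pgbench -c 5 -T 60 pgp
./pgbadger "$PGLOG" -o pgbadger-$(date +%Y-%m-%d).html
Running this after any significant change (new indexes, parameter tweaks, application releases) gives a before/after pair of reports that are easy to compare.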
Interesting features in 9.4 Each version of PostgreSQL adds many features grouped into different categories (such as performance, backend, data types, and so on). We will look at a few features that are more likely to be of interest (because they help us improve performance or they make maintenance and configuration easy). Keeping the buffer ready As we saw earlier, reads from disk have a significant overhead compared to those from memory. There are quite a few occasions when disk reads are unavoidable. Let's see a few examples. In a data warehouse, the Extract, Transform, Load (ETL) process, which may happen once a day usually, involves a lot of raw data getting processed in memory before being loaded into the final tables. This data is mostly transactional data. The master data, which does not get processed on a regular basis, may be evicted from memory as a result of this churn. Reports typically depend a lot on master data. When users refresh their reports after ETL, it's highly likely that the master data will be read from disk, resulting in a drop in the response time. If we could ensure that the master data as well as the recently processed data is in the buffer, it can really improve user experience. In a transactional system like an airline reservation system, a change to the fare rule may result in most of the fares being recalculated. This is a situation similar to the one described previously, ensuring that the fares and availability data for the most frequently searched routes in the buffer can provide a better user experience. This applies to an e-commerce site selling products also. If the product/price/inventory data is always available in memory, it can be retrieved very fast. You must use PostgreSQL 9.4 for trying out the code in the following sections. So, how can we ensure that the data is available in the buffer? A pg_prewarm module has been added as an extension to provide this functionality. The basic syntax is very simple: SELECT pg_prewarm('tablename');. This command will populate the buffers with data from the table. It's also possible to mention the blocks that should be loaded into the buffer from the table. We will install the extension in a database, create a table, and populate some data. Then, we will stop the server, drop buffers (OS), and restart the server. We will see how much time a SELECT count(*) takes. We will repeat the exercise, but we will use pg_prewarm before executing SELECT count(*) at psql: CREATE EXTENSION pg_prewarm; CREATE TABLE myt(id SERIAL, name VARCHAR(40)); INSERT INTO myt(name) SELECT concat(generate_series(1,10000),'name'); Now, stop the server using pg_ctl at the shell prompt: pg_ctl stop -m immediate Clean OS buffers using the following command at the shell prompt (will need to use sudo to do this): echo 1 > /proc/sys/vm/drop_caches The command may vary depending on the OS. Restart the cluster using pg_ctl start. Then, execute the following command: SELECT COUNT(*) FROM myt; Time: 333.115 ms We should repeat the steps of shutting down the server, dropping the cache, and starting PostgreSQL. Then, execute SELECT pg_prewarm('myt'); before SELECT count(*). The response time goes down significantly. Executing pg_prewarm does take some time, which is close to the time taken to execute the SELECT count(*) against a cold cache. However, the objective is to ensure that the user does not experience a delay. 
SELECT COUNT(*) FROM myt; count ------- 10000 (1 row) Time: 7.002 ms Better recoverability A new parameter called recovery_min_apply_delay has been added in 9.4. This will go to the recovery.conf file of the slave server. With this, we can control the replay of transactions on the slave server. We can set this to approximately 5 minutes and then the standby will replay the transaction from the master when the standby system time is 5 minutes past the time of commit at the master. This provides a bit more flexibility when it comes to recovering from mistakes. When we keep the value at 1 hour, the changes at the master will be replayed at the slave after one hour. If we realize that something went wrong on the master server, we have about 1 hour to stop the transaction replay so that the action that caused the issue (for example, accidental dropping of a table) doesn't get replayed at the slave. Easy-to-change parameters An ALTER SYSTEM command has been introduced so that we don't have to edit postgresql.conf to change parameters. The entry will go to a file named postgresql.auto.conf. We can execute ALTER SYSTEM SET work_mem='12MB'; and then check the file at psql: \! more postgresql.auto.conf # Do not edit this file manually! # It will be overwritten by ALTER SYSTEM command. work_mem = '12MB' We must execute SELECT pg_reload_conf(); to ensure that the changes are propagated. Logical decoding and consumption of changes Version 9.4 introduces physical and logical replication slots. We will look at logical slots as they let us track changes and filter specific transactions. This lets us pick and choose from the transactions that have been committed. We can grab some of the changes, decode, and possibly replay on a remote server. We do not have to have an all-or-nothing replication. As of now, we will have to do a lot of work to decode/move the changes. Two parameter changes are necessary to set this up. These are as follows: The max_replication_slots parameter (set to at least 1) and wal_level (set to logical). Then, we can connect to a database and create a slot as follows: SELECT * FROM pg_create_logical_replication_slot('myslot','test_decoding'); The first parameter is the name we give to our slot and the second parameter is the plugin to be used. Test_decoding is the sample plugin available, which converts WAL entries into text representations as follows: INSERT INTO myt(id) values (4); INSERT INTO myt(name) values ('abc'); Now, we will try retrieving the entries: SELECT * FROM pg_logical_slot_peek_changes('myslot',NULL,NULL); Then, check the following screenshot: This function lets us take a look at the changes without consuming them so that the changes can be accessed again: SELECT * FROM pg_logical_slot_get_changes('myslot',NULL,NULL); This is shown in the following screenshot: This function is similar to the peek function, but the changes are no longer available to be fetched again as they get consumed. Summary In this article, we covered a few data types that data architects will find interesting. We also covered what is probably the best utility available to parse the PostgreSQL log file to produce excellent reports. We also looked at some of the interesting features in PostgreSQL version 9.4, which will be of interest to data architects. Resources for Article: Further resources on this subject: PostgreSQL as an Extensible RDBMS [article] Getting Started with PostgreSQL [article] PostgreSQL Cookbook - High Availability and Replication [article]
Migrating MS Access 2003 Data using the Oracle SQL Developer 1.2

Packt
24 Oct 2009
7 min read
Introduction Business needs often necessitate data migration from a smaller, less secure database to a higher-end, faster database server with higher availability. A typical scenario is the migration of data from a desktop-sized database such as MS Access or FoxPro to a higher-end database server such as MS SQL Server, Oracle, DB2, or SQL Anywhere Server. Most database vendors provide tools to migrate from third-party databases to their own database servers. In his three previous articles, the author has described the built-in tools to migrate from MS Access to SQL 2000 Server, SQL Anywhere Server, and from Oracle 10G XE to SQL Anywhere Server. In an earlier article on this site, the author showed how you may connect to an MS Access 2003 database and execute SQL statements using the Oracle SQL Developer 1.2 tool. In this tutorial the author shows you how to migrate an MS Access database to an Oracle 10G XE server, delineating all the steps involved in the migration process. This latest version of Oracle SQL Developer, 1.2, is sometimes called the Migration version as it supports migrating data from three vendors (MySQL, SQL Server, and MS Access) to an Oracle database. In fact, it has been designed to migrate from more than one version of MS Access. This feature was not available in version 1.1 of this tool. Overview of this Tutorial Like in the earlier article, a simple MS Access 2003 database file will be created with just one table, a query, and a linked table. This database file, about 292 KB, will be migrated to an Oracle 10G XE database. Oracle 10G XE, by design, can have just one database on a computer. However, you can have separate applications by having different user schemas. Oracle 10G XE comes bundled with a sample database schema and data which can be accessed by using the credentials, username hr with a password hr. For the purposes of this example, a new user will be created and his authentication will be used for creating the necessary migration-related schemas to be stored in a repository. This will become clear as you follow the various details and steps. Once the 'Repository' is created, you can begin by capturing the metadata of the source, followed by converting the captured source information into an Oracle-specific model, where a mapping between the source data and Oracle is accomplished. After this process, you generate the data definition language script which will create the Oracle objects such as tables, views, etc. In the final step these tables will be populated by transferring the data from the source to Oracle 10G XE. MS Access 2003 Source An empty MS Access database file TestMigration.mdb is created in the default directory, My Documents. An Employees table will be imported, an Orders table will be linked, and a TestQuery based on selecting a few columns of the Employees table will be created. The Employees table and the Orders table may be found in the Northwind database that ships with most MS Access versions. Creating a New User in Oracle 10G XE As described in the overview, the MS Access database will be migrated to a user schema in Oracle 10G XE, but this requires creating this schema. Only a user with DBA privileges can create a new user. Open the home page of the Oracle 10G XE Server. Login with the credentials you supplied while installing the software, where the user is system and the password is what you chose at that time, as shown in the next figure.
This gives you access to several of the tools that you can use to administer as well as work with database objects. Click on the icon for Administration and follow the drop-downs till you get to the menu item, Create User, as shown in the next figure. Create a new user MigrateAccess with some password that you choose and confirm. Keep the account status unlocked. This uses the default tablespace called USERS. The default user privilege does not include the DBA role but for this example, the DBA is also included by placing a check mark in this selection. Also several other system wide privileges are also granted. Please follow steps described in the earlier article for the details. The next figure shows all the details filled in. After this when you click the Create button you will have created the user, MigrateAccess. When you click the button Create, you will notice that the ‘bread crumb’ will change to Manage Database Users. You will notice that the new user MigrateAccess has been added to the list of users, as shown in the next figure. As no expiry was set for this user in the previous screen, you can notice that there is no expiry shown in the following screen. Now if you logout (remember you logged in as SYSTEM) and login with the new credentials, MigrateAccess/[chosen password] you can access all the tools on the database. Of course, all the objects (tables, views, etc) will be empty. Creating the Repository to Store Schemas Migration using this tool requires an Oracle database schema to store the Meta data it collects about the source. You will create a connection from the Oracle SQL Developer to the Oracle 10 XE, in which, you just finished creating a new user schema. This user’s schema is where the repository contents will be stored. Making a connection to the Oracle Right click on the Connections node, and from the drop-down menu select New Connection. This brings up the New / Select Database Connection (this has been described in the earlier referenced article) window. It comes up with the default connection to an Oracle database. It even recognizes the local Oracle 10G XE, capturing all its details as shown. You need to provide a Connection Name, a Username and a Password. The connection name is your choice (herein called conMigrate) and the user name and password is the same that was used while creating the new user MigrateAccess. When you click on the button ‘Test’, a (success) status message will be posted to this form above the Help button, as shown in the next figure after a little while, preceded by a little progress window. Now click on the OK button on the New / Select Database Connection window. This adds the conMigrate connection to the list of Connections as shown in the next figure. Notice that objects are all empty as we discussed earlier. Create Repository Click on the main menu item Migrate. From the drop-down, click on Repository Management –> Create Repository as shown in the next figure. This brings up the Create Repository window showing the connection conMigrate as shown in the next figure. You may connect or disconnect this from the tool as long as the authentication information is available. Now click on the Create button. This brings up the Installing Repository window which reports the various objects installed and finally shows a message “Repository Built Successfully” as shown in the next figure. Click on the Close button on this window. Now login to the Oracle 10G XE with the credentials for the user MigrateAccess, and click on the object browser. 
Now you can see all the tables, views, and other objects in the repository, as shown. You will notice two more windows, named Captured Models and Converted Models, below the Connections node in Oracle SQL Developer; if they do not appear there, you can open them from the View menu on the main menu. The next figure shows the submenus of the View menu. Connect to the Source Database Right-click on the Connections node and establish a new connection so that you can connect to the source database, conTestMigrate, as shown in the next figure. When you click the Test button, you will see a message posted to the screen indicating that the connection was a success. Click on the Connect button. This adds the conTestMigrate connection to the list of Connections in the navigator window.
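As an aside, the user-creation steps carried out through the web interface earlier can also be scripted from a SQL prompt such as SQL*Plus. The following is only a rough equivalent of those GUI steps — the password is a placeholder and the grants should be trimmed to what your environment actually requires:
CREATE USER migrateaccess IDENTIFIED BY your_password
  DEFAULT TABLESPACE users
  ACCOUNT UNLOCK;
GRANT CONNECT, RESOURCE, DBA TO migrateaccess;
Scripting the user this way can be handy when the same repository schema has to be recreated on several machines.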
Tabular Models

Packt
16 Jan 2017
15 min read
In this article by Derek Wilson, the author of the book Tabular Modeling with SQL Server 2016 Analysis Services Cookbook, you will learn the following recipes: Opening an existing model Importing data Modifying model relationships Modifying model measures Modifying model columns Modifying model hierarchies Creating a calculated table Creating key performance indicators (KPIs) Modifying key performance indicators (KPIs) Deploying a modified model (For more resources related to this topic, see here.) Once the new data is loaded into the model, we will modify various pieces of the model, including adding a new Key Performance Indicator. Next, we will perform calculations to see how to create and modify measures and columns. Opening an existing model We will open the model. To make modifications to your deployed models, we will need to open the model in the Visual Studio designer. How to do it… Open your solution, by navigating to File | Open | Project/Solution. Then select the folder and solution Chapter3_Model and select Open. Your solution is now open and ready for modification. How it works… Visual Studio stores the model as a project inside of a solution. In Chapter 3 we created a new project and saved it as Chapter3_Model. To make modifications to the model we open it in Visual Studio. Importing data The crash data has many columns that store the data in codes. In order to make this data useful for reporting, we need to add description columns. In this section, we will create four code tables by importing data into a SQL Server database. Then, we will add the tables to your existing model. Getting ready In the database on your SQL Server, run the following scripts to create the four tables and populate them with the reference data: Create the Major Cause of Accident Reference Data table: CREATE TABLE [dbo].[MAJCSE_T](   [MAJCSE] [int] NULL,   [MAJOR_CAUSE] [varchar](50) NULL ) ON [PRIMARY] Then, populate the table with data: INSERT INTO MAJCSE_T VALUES (20, 'Overall/rollover'), (21, 'Jackknife'), (31, 'Animal'), (32, 'Non-motorist'), (33, 'Vehicle in Traffic'), (35, 'Parked motor vehicle'), (37, 'Railway vehicle'), (40, 'Collision with bridge'), (41, 'Collision with bridge pier'), (43, 'Collision with curb'), (44, 'Collision with ditch'), (47, 'Collision culvert'), (48, 'Collision Guardrail - face'), (50, 'Collision traffic barrier'), (53, 'impact with Attenuator'), (54, 'Collision with utility pole'), (55, 'Collision with traffic sign'), (59, 'Collision with mailbox'), (60, 'Collision with Tree'), (70, 'Fire'), (71, 'Immersion'), (72, 'Hit and Run'), (99, 'Unknown') Create the table to store the lighting conditions at the time of the crash: CREATE TABLE [dbo].[LIGHT_T](   [LIGHT] [int] NULL,   [LIGHT_CONDITION] [varchar](30) NULL ) ON [PRIMARY] Now, populate the data that shows the descriptions for the codes: INSERT INTO LIGHT_T VALUES (1, 'Daylight'), (2, 'Dusk'), (3, 'Dawn'), (4, 'Dark, roadway lighted'), (5, 'Dark, roadway not lighted'), (6, 'Dark, unknown lighting'), (9, 'Unknown') Create the table to store the road conditions: CREATE TABLE [dbo].[CSRFCND_T](   [CSRFCND] [int] NULL,   [SURFACE_CONDITION] [varchar](50) NULL ) ON [PRIMARY] Now populate the road condition descriptions: INSERT INTO CSRFCND_T VALUES (1, 'Dry'), (2, 'Wet'), (3, 'Ice'), (4, 'Snow'), (5, 'Slush'), (6, 'Sand, Mud'), (7, 'Water'), (99, 'Unknown') Finally, create the weather table: CREATE TABLE [dbo].[WEATHER_T](   [WEATHER] [int] NULL,   [WEATHER_CONDITION] [varchar](30) NULL ) ON [PRIMARY] Then populate the 
weather condition descriptions. INSERT INTO WEATHER_T VALUES (1, 'Clear'), (2, 'Partly Cloudy'), (3, 'Cloudy'), (5, 'Mist'), (6, 'Rain'), (7, 'Sleet, hail, freezing rain'), (9, 'Severe winds'), (10, 'Blowing Sand'), (99, 'Unknown') You now have the tables and data required to complete the recipes in this chapter. How to do it… From your open model, change to the Diagram view in model.bim. Navigate to Model | Import from Data Source, then select Microsoft SQL Server on the Table Import Wizard and click on Next. Set your Server Name to Localhost, change the Database name to Chapter3, and click on Next. Enter your admin account username and password and click on Next. Choose to select from a list of tables, and pick the four tables that were created at the beginning. Click on Finish to import the data. How it works… This recipe opens the Table Import Wizard and allows us to select the four new tables that are to be added to the existing model. The data is then imported into your Tabular Model workspace. Once imported, the data is ready to be used to enhance the model. Modifying model relationships We will create the necessary relationships for the new tables. These relationships will be used in the model in order for the SSAS engine to perform correct calculations. How to do it… Open your model to the diagram view and you will see the four tables that you imported in the previous recipe. Select the CSRFCND field in the CSRFCND_T table and drag it to the CSRFCND column in the Crash_Data table. Select the LIGHT field in the LIGHT_T table and drag it to the LIGHT column in the Crash_Data table. Select the MAJCSE field in the MAJCSE_T table and drag it to the MAJCSE column in the Crash_Data table. Select the WEATHER field in the WEATHER_T table and drag it to the WEATHER column in the Crash_Data table. How it works… Each table in this section has a relationship built between its code column and the corresponding column in the Crash_Data table. These relationships allow DAX calculations to be applied across the data tables. Modifying model measures Now that there are more tables in the model, we are going to add an additional measure to perform quick calculations on data. The measure will use a simple DAX calculation, since the focus here is on how to create and modify model measures. How to do it… Open the Chapter3_Model project's Model.bim file and make sure you are in grid view. Select the cell under Count_of_Crashes and in the fx bar add the following DAX formula to create Sum_of_Fatalities: Sum_of_Fatalities:=SUM(Crash_Data[FATALITIES]) Then, hit Enter to create the calculation: In the properties window, enter Injury_Calculations in the Display Folder. Then, change the Format to Whole Number and change the Show Thousand Separator to True. Finally, add the description Total Number of Fatalities Recorded: How it works… In this recipe, we added a new measure to the existing model that calculates the total number of fatalities in the Crash_Data table. Then we added a new folder for the users to see the calculation. We also modified the default behavior of the calculation to display as a whole number and show commas to make the numbers easier to interpret. Finally, we added a description to the calculation that users will be able to see in the reporting tools. If we did not make these changes in the model, each user would be required to make them every time they accessed the model. By placing the changes in the model, everyone will see the data in the same format.
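Additional measures follow the same pattern. For instance, a hypothetical measure that counts only fatal crashes could be added in another empty cell of the measure grid; the column names below assume the same Crash_Data table used above:
Count_of_Fatal_Crashes:=CALCULATE(COUNTROWS(Crash_Data), Crash_Data[FATALITIES] > 0)
The same Display Folder, Format, and Description properties can then be set on it so that it appears alongside Sum_of_Fatalities in the reporting tools.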
Modifying model columns We will modify the properties of the columns in the WEATHER_T table. Modifications to the columns in a table make the information easier for your users to understand in the reporting tools. Some properties determine how the SSAS engine uses the fields when creating the model on the server. How to do it… In Model.bim, make sure you are in the grid view and change to the WEATHER_T tab. Select the WEATHER column to view the available properties and make the following changes: set the Hidden property to True, the Unique property to True, the Sort By Column to WEATHER_CONDITION, and Summarize By to Count. Next, select the WEATHER_CONDITION column and modify the following properties: set the Description to Weather at time of crash and the Default Label property to True. How it works… This recipe modified the properties of the columns to make it easier for your report users to access the data. The WEATHER code column was hidden so it will not be visible in the reporting tools, and the WEATHER_CONDITION column was sorted in alphabetical order. You set the default aggregation to Count and then added a description for the column. Now, when this dimension is added to a report, only the WEATHER_CONDITION column will be seen, pre-sorted based on the WEATHER_CONDITION field. It will also use count as the aggregation type to provide the number of each type of weather condition. If you were to add another new weather condition description to the table, it would automatically be sorted correctly. Modifying model hierarchies Once you have created a hierarchy, you may want to modify it or remove columns from it. We will make modifications to the Calendar_YQMD hierarchy. How to do it… Open Model.bim to the diagram view and find the Master_Calendar_T table. Review the Calendar_YQMD hierarchy and its included columns. Select the Quarter_Name column and right-click on it to bring up the menu. Select Remove from Hierarchy to delete Quarter_Name from the hierarchy and confirm on the next screen by selecting Remove from Hierarchy. Select the Calendar_YQMD hierarchy, right-click on it, and select Rename. Change the name to Calendar_YMD and hit Enter. How it works… In this recipe, we opened the diagram view and selected the Master_Calendar_T table to find the existing hierarchy. After selecting the Quarter_Name column in the hierarchy, we used the menus to view the available options for modifications. Then we selected the option to remove the column from the hierarchy. Finally, we updated the name of the hierarchy to let users know that the quarter column is not included. There's more… Another option to remove fields from the hierarchy is to select the column and then press the Delete key. Likewise, you can double-click on the hierarchy name to bring up the edit window for the name. Then edit the name and hit Enter to save the change in the designer. Creating a calculated table Calculated tables are created dynamically using functions or DAX queries. They are very useful if you need to create a new table based on information in another table. For example, you could have a date table with 30 years of data. However, most of your users only look at the last five years of information when running most of their analysis. Instead of creating and loading a separate physical table, you can dynamically make a new table that only stores the last five years of dates. You will use a single DAX query to filter the Master_Calendar_T table to the last 5 years of data. How to do it… Open Model.bim to the grid view and then select the Table menu and New Calculated Table.
A new data tab is created. In the function box, enter this DAX formula to create a date calendar for the last 5 years: FILTER(MasterCalendar_T, MasterCalendar_T[Date]>=DATEADD(MasterCalendar_T[Date],6,YEAR)) Double-click on the CalculatedTable 1 tab and rename it to Last_5_Years_T. How it works… It works by creating a new table in the model that is built from a DAX formula. In order to limit the number of years shown, the DAX formula reduces the dates available to the last 5 years. There's more… After you create a calculated table, you will need to create the necessary relationships and hierarchies, just like for a regular table: Switch to the diagram view in Model.bim and you will be able to see the new table. Create a new hierarchy, name it Last_5_Years_YQM, and include Year, Quarter_Name, Month_Name, and Date. Replace the Master_Calendar_T relationship with a new one from the Last_5_Years_T Date column to the Crash_Data.Crash_Date column. Now, the model will only display the last 5 years of crash data when using the Last_5_Years_T table in the reporting tools. The Crash_Data table still contains all of the records if you need to view more than 5 years of data. Creating key performance indicators (KPIs) Key performance indicators are business metrics that show the effectiveness of a business objective. They are used to track actual performance against a budgeted or planned value, such as Service Level Agreements or on-time performance. The advantage of creating a KPI is the ability to quickly see the actual value compared to the target value. To add a KPI, you will need to have a measure to use as the actual and another measure that returns the target value. In this recipe, we will create a KPI that tracks the number of fatalities and compares it to the prior year, with the goal of having fewer fatalities each year. How to do it… Open Model.bim to the grid view, select an empty cell, and create a new measure named Last_Year_Fatalities: Last_Year_Fatalities:=CALCULATE(SUM(Crash_Data[FATALITIES]),DATEADD(MasterCalendar_T[Date],-1, YEAR)) Select the already existing Sum_of_Fatalities measure, then right-click and select Create KPI…. On the Key Performance Indicator (KPI) window, select Last_Year_Fatalities as the Target Measure. Then, select the second set of icons that have red, yellow, and green with symbols. Finally, change the KPI color scheme to green, yellow, and red, make the scores 90 and 97, and then click on OK. The Sum_of_Fatalities measure will now have a small graph next to it in the measure grid to show that there is a KPI on that measure. How it works… You created a new calculation that compares the actual count of fatalities to the same number for the prior year. Then you created a new KPI that uses the actual measure and the Last_Year_Fatalities measure as the target. In the KPI window, you set up thresholds to determine when a KPI is red, yellow, or green. For this example, you want to show that having fewer fatalities year over year is better. Therefore, when the KPI is 97% or higher, the KPI will show red. For values that are in the range of 90% to 97%, the KPI is yellow, and anything below 90% is green. By selecting the icons with both color and symbols, users who are color-blind can still determine the status of the KPI. Modifying key performance indicators (KPIs) Once you have created a KPI, you may want to remove or modify it in your model. You will make modifications to the KPI that uses the Last_Year_Fatalities measure as its target.
How to do it… Open Model.bim to the grid view and select the Sum_of_Fatalities measure, then right-click and select Edit KPI settings…. Edit the appropriate settings to modify an existing KPI. How it works… Just like models, KPIs will need to be modified after being initially designed. The icon next to a measure denotes that a KPI is defined on the measure. Right-clicking on the measure brings up the menu that allows you to enter the Edit KPI settings. Deploying a modified model Once you have completed the changes to your model, you have two options for deployment. First, you can deploy the model and replace the existing model. Alternatively, you can change the name of your model and deploy it as a new model. This is often useful when you need to test changes and maintain the existing model as is. How to do it… Open the Chapter3_Model project in Visual Studio. Select the Project menu and select Chapter3_Model Properties… to bring up the Properties menu and review the Server and Database properties. To overwrite an existing model, make no changes and click on OK. Select the Build menu from the Chapter3_Model project and select the Deploy Chapter3_Model option. On the following screens, enter the impersonation credentials for your data source and hit OK to deploy the changes. How it works… Deployment takes the model that is on your local machine and submits the changes to the server. By not making any changes to the existing model properties, a new deployment will overwrite the old model. All of your changes are now published on the server and users can begin to leverage the changes. There's more… Sometimes you might want to deploy your model to a different database without overwriting the existing environment. This could be to try out a new model or to test, with users, different functionality that you might want to implement. You can modify the properties of the project to deploy to a different server, such as development, UAT, or production. Likewise, you can also change the database name to deploy the model to the same server or to different servers for testing. Open the Project menu and then select Chapter3_Model Properties. Change the name of the Database to Chapter4_Model and click on OK. Next, on the Build menu, select Deploy Chapter3_Model to deploy the model to the same server under the new name of Chapter4_Model. When you review the Analysis Services databases in SQL Server Management Studio, you will now see a database for both Chapter3_Model and Chapter4_Model. Summary After building a model, we need to maintain and enhance it as the business users update or change their requirements. We began by adding additional tables to the model that contain the descriptive columns for several code columns. Then we created relationships between these new tables and the existing data tables, modified measures, columns, hierarchies, and KPIs, and finally deployed the modified model. Resources for Article: Further resources on this subject: Say Hi to Tableau [article] Data Tables and DataTables Plugin in jQuery 1.3 with PHP [article] Data Science with R [article]
Build a Chatbot

Packt
07 Dec 2016
23 min read
In this article written by Alexander T. Combs, author of the book Python Machine Learning Blueprints, we are going to learn how to construct a chatbot from scratch. Along the way, we'll learn more about the history of the field and its future prospects. We'll cover the following topics in this article: The Turing test The history of chatbots The design of chatbots Building a chatbot (For more resources related to this topic, see here.) The Turing test In 1950, Alan Turing posed the question, “Can machines think?” in one of his more famous papers. Being the practical genius that he was, he decided not to tackle this question head on but to pose it in the framework of the “problem of other minds” instead. This problem poses the question, “How do we truly know that other people have minds like our own?”. As we can only observe their behavior and not the inner workings of their mind, we must take it on faith that they are like us. Turing proposed that if a machine behaved as if it was intelligent, then in fact, we should view it as such. This, in a nutshell, is the Turing test. Passing this test means convincing humans that a machine is a fellow human. You may wonder at this point whether a program ever passed the Turing test successfully. The answer is that there really is no official Turing test. The closest thing to an official event is the Loebner Prize. This is a contest held annually that awards modest prizes to chatbots that appear to be the most convincingly human to a panel of judges. If this sounds rigorous and academic, it has been described as anything but. Loebner's sponsorship for the contest is said to have been a product of his total disdain for labor of any kind. Hopefully, by this point, you realized that any claims about “passing the Turing test” are more spectacle than actual science. However, despite this, Turing's original concept is a useful construct. The history of chatbots I still remember the day my father sat me down in front of his IBM XT with its black and green flickering monitor and asked me to talk to “it”. The conversation went something like the following: >Hello? How are you feeling today? >Hi. Who is this? >Hi there… how are you today? >I'm fine. You didn't tell me who you are. >We should be discussing you, not me. >We should? Why? >Let's change focus a bit… Tell me about your family. >You are impossible to have a conversation with. >Can you elaborate on that? >You are killing me. >Perhaps you would like me to be killing you. >Ok, you are freaking me out. >Perhaps you're really talking about yourself? >What?? That makes no sense. >How would an answer to that help you? >END >Very Interesting. >QUIT >Thank you for talking with me. >#$###$#@@…. The “it” that I talked to was a program called ELIZA. Written in the mid-1960s at MIT's Artificial Intelligence Lab by Joseph Weizenbaum, ELIZA was created to mimic the responses of a Rogerian psychotherapist. Though nearly comical when examined in any depth, the program was capable of convincing some users that they were chatting with an actual human. This was a remarkable feat considering it was a scant 200 lines of code that used randomization and regular expressions to parrot back responses. Even today, this simple program remains a staple of popular culture. If you ask Siri who ELIZA is, she will tell you she is a friend and brilliant psychiatrist. If ELIZA was an early example of chatbots, what have we seen after this? 
In recent years, there has been an explosion of new chatbots; the most notable of these is Cleverbot. Cleverbot was released to the world via the web in 1997. Since then, this bot has racked up hundreds of millions of conversations. Unlike early chatbots, Cleverbot (as the name suggests) appears to become more intelligent with each conversation. Though the exact details of the workings of the algorithm are difficult to find, it is said to work by recording all conversations in a database and finding the most appropriate response by identifying the most similar questions and responses in the database. I made up a nonsensical question in the following screenshot, and you can see that it found something similar to the object of my question in terms of a string match. I persisted: Again I got something…similar? You'll also notice that topics can persist across the conversation. In response to my answer, I was asked to go into more detail and justify my answer. This is one of the things that appears to make Cleverbot, well, clever. While chatbots that learn from humans can be quite amusing, they can also have a darker side. Just this past year, Microsoft released a chatbot named Tay on Twitter. People were invited to ask questions of Tay, and Tay would respond in accordance with her "personality". Microsoft had apparently programmed the bot to appear to be a 19-year-old American girl. She was intended to be your virtual "bestie"; the only problem was she started sounding like she would rather hang with the Nazi youth than you. As a result of these unbelievably inflammatory tweets, Microsoft was forced to pull Tay off Twitter and issue an apology: "As many of you know by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we'll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values." -March 25, 2016 Official Microsoft Blog Clearly, brands that want to release chatbots into the wild in the future should take a lesson from this debacle. There is no doubt that brands are embracing chatbots. Everyone from Facebook to Taco Bell is getting in on the game. Witness the TacoBot: Yes, this is a real thing, and despite stumbles such as Tay, there is a good chance the future of UI looks a lot like TacoBot. One last example might even help explain why. Quartz recently launched an app that turns news into a conversation. Rather than lay out the day's stories as a flat list, you are engaged in a chat as if you were getting news from a friend. David Gasca, a PM at Twitter, describes his experience using the app in a post on Medium. He describes how the conversational nature invoked feelings that were normally only triggered in human relationships. This is his take on how he felt when he encountered an ad in the app: "Unlike a simple display ad, in a conversational relationship with my app, I feel like I owe something to it: I want to click. At the most subconscious level, I feel the need to reciprocate and not let the app down: The app has given me this content. It's been very nice so far and I enjoyed the GIFs.
I should probably click since it's asking nicely.” If this experience is universal—and I expect that it is—this could be the next big thing in advertising, and have no doubt that advertising profits will drive UI design: “The more the bot acts like a human, the more it will be treated like a human.” -Mat Webb, technologist and co-author of Mind Hacks At this point, you are probably dying to know how these things work, so let's get on with it! The design of chatbots The original ELIZA application was two-hundred odd lines of code. The Python NLTK implementation is similarly short. An excerpt can be seen at the following link from NLTK's website (http://www.nltk.org/_modules/nltk/chat/eliza.html). I have also reproduced an except below: # Natural Language Toolkit: Eliza # # Copyright (C) 2001-2016 NLTK Project # Authors: Steven Bird <stevenbird1@gmail.com> # Edward Loper <edloper@gmail.com> # URL: <http://nltk.org/> # For license information, see LICENSE.TXT # Based on an Eliza implementation by Joe Strout <joe@strout.net>, # Jeff Epler <jepler@inetnebr.com> and Jez Higgins <mailto:jez@jezuk.co.uk>. # a translation table used to convert things you say into things the # computer says back, e.g. "I am" --> "you are" from future import print_function # a table of response pairs, where each pair consists of a # regular expression, and a list of possible responses, # with group-macros labelled as %1, %2. pairs = ((r'I need (.*)',("Why do you need %1?", "Would it really help you to get %1?","Are you sure you need %1?")),(r'Why don't you (.*)', ("Do you really think I don't %1?","Perhaps eventually I will %1.","Do you really want me to %1?")), [snip](r'(.*)?',("Why do you ask that?", "Please consider whether you can answer your own question.", "Perhaps the answer lies within yourself?", "Why don't you tell me?")), (r'quit',("Thank you for talking with me.","Good-bye.", "Thank you, that will be $150. Have a good day!")), (r'(.*)',("Please tell me more.","Let's change focus a bit... Tell me about your family.","Can you elaborate on that?","Why do you say that %1?","I see.", "Very interesting.","%1.","I see. And what does that tell you?","How does that make you feel?", "How do you feel when you say that?")) ) eliza_chatbot = Chat(pairs, reflections) def eliza_chat(): print("Therapistn---------") print("Talk to the program by typing in plain English, using normal upper-") print('and lower-case letters and punctuation. Enter "quit" when done.') print('='*72) print("Hello. How are you feeling today?") eliza_chatbot.converse() def demo(): eliza_chat() if name demo() == " main ": As you can see from this code, input text was parsed and then matched against a series of regular expressions. Once the input was matched, a randomized response (that sometimes echoed back a portion of the input) was returned. So, something such as I need a taco would trigger a response of Would it really help you to get a taco? Obviously, the answer is yes, and fortunately, we have advanced to the point that technology can provide one to you (bless you, TacoBot), but this was still in the early days. Shockingly, some people did actually believe ELIZA was a real human. However, what about more advanced bots? How are they constructed? Surprisingly, most of the chatbots that you're likely to encounter don't even use machine learning; they use what's known as retrieval-based models. This means responses are predefined according to the question and the context. 
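To make the rule-matching idea concrete before we look at AIML, here is a minimal, illustrative Python sketch in the spirit of ELIZA; the patterns and canned responses are invented for the example and are not part of any library:
import random
import re

# Each rule pairs a regular expression with candidate responses;
# %1 is replaced with the first captured group, echoing part of the input back.
RULES = [
    (r'i need (.*)', ["Why do you need %1?", "Would it really help you to get %1?"]),
    (r'(.*) taco(.*)', ["A taco always helps. Tell me more."]),
    (r'(.*)', ["Please tell me more.", "Why do you say that?"]),
]

def respond(user_input):
    for pattern, responses in RULES:
        match = re.match(pattern, user_input.strip(), re.IGNORECASE)
        if match:
            reply = random.choice(responses)
            if match.groups():
                reply = reply.replace('%1', match.group(1))
            return reply
    return "I see."

print(respond('I need a taco'))
Every production rule-based bot is essentially a much larger, better-curated version of this loop, with normalization of the input and a priority order over the rules.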
The most common architecture for these bots is something called Artificial Intelligence Markup Language (AIML). AIML is an XML-based schema that represents how the bot should respond to the user's input. It's really just a more advanced version of how ELIZA works.

Let's take a look at how responses are generated using AIML. First, all inputs are preprocessed to normalize them. This means that when you input "Waaazzup???", it is mapped to "WHAT IS UP". This preprocessing step funnels down the myriad ways of saying the same thing into one input that can run against a single rule. Punctuation and other extraneous inputs are removed as well at this point. Once this is complete, the input is matched against the appropriate rule. The following is a sample template:

<category>
  <pattern>WHAT IS UP</pattern>
  <template>The sky, duh. Pfft. Humans...</template>
</category>

This is the basic setup, but you can also layer in wildcards, randomization, and prioritization schemes. For example, the following pattern uses wildcard matching:

<category>
  <pattern>* FOR ME</pattern>
  <template>I'm a bot. I don't <star/>. Ever.</template>
</category>

Here, the * wildcard matches one or more words before FOR ME and then repeats these back in the output template. If the user were to type in Dance for me!, the response would be, I'm a bot. I don't dance. Ever.

As you can see, these rules don't make for anything that approximates any type of real intelligence, but there are a few tricks that strengthen the illusion. One of the better ones is the ability to generate responses conditioned on a topic. For example, here is a rule that invokes a topic:

<category>
  <pattern>I LIKE TURTLES</pattern>
  <template>I feel like this whole <set name="topic">turtles</set> thing could be a problem. What do you like about them?</template>
</category>

Once the topic is set, the rules specific to that context can be matched:

<topic name="turtles">
  <category>
    <pattern>* SHELL IS *</pattern>
    <template>I dislike turtles primarily because of their shells. What other creepy things do you like about turtles?</template>
  </category>
  <category>
    <pattern>* HIDE *</pattern>
    <template>I wish like a turtle that I could hide from this conversation.</template>
  </category>
</topic>

Let's see what this interaction looks like:

>I like turtles!
>I feel like this whole turtles thing could be a problem. What do you like about them?
>I like how they hide in their shell
>I wish like a turtle that I could hide from this conversation.

You can see that the continuity across the conversation adds a measure of realism.

You probably think that this can't be state of the art in this age of deep learning, and you're right. While most bots are rule-based, the next generation of chatbots is emerging, and they are based on neural networks. In 2015, Oriol Vinyals and Quoc Le of Google published a paper (http://arxiv.org/pdf/1506.05869v1.pdf), which described the construction of a neural network based on sequence-to-sequence models. This type of model maps an input sequence, such as "ABC", to an output sequence, such as "XYZ". These inputs and outputs can be translations from one language to another, for example. However, in the case of their work, the training data was not language translation, but rather tech support transcripts and movie dialog. While the results from both models are interesting, it was the interactions based on the movie model that stole the headlines.
The following are sample interactions taken from the paper:

None of this was explicitly encoded by humans or present in the training set as such, and yet, looking at it, it is frighteningly like speaking with a human. However, let's see more… Note that the model responds with what appears to be knowledge of gender (he, she), of place (England), and of career (player). Even questions of meaning, ethics, and morality are fair game:

The conversation continues:

If this transcript doesn't give you a slight chill of fear for the future, there's a chance you may already be some sort of AI. I wholeheartedly recommend reading the entire paper. It isn't overly technical, and it will definitely give you a glimpse of where this technology is headed.

We talked a lot about the history, types, and design of chatbots, but let's now move on to building our own!

Building a chatbot

Now, having seen what is possible in terms of chatbots, you most likely want to build the best, most state-of-the-art, Google-level bot out there, right? Well, just put that out of your mind right now, because we will do just the opposite! We will build the best, most awful bot ever!

Let me tell you why. Building a chatbot comparable to what Google built takes some serious hardware and time. You aren't going to whip up a model on your MacBook Pro with any type of real training set in anything less than a month or two of run time. This means that you will have to rent some time on an AWS box, and not just any box. This box will need to have some heavy-duty specs and preferably be GPU-enabled. You are more than welcome to attempt such a thing. However, if your goal is just to build something very cool and engaging, I have you covered here.

I should also warn you in advance that, although Cleverbot is no Tay, the conversations can get a bit salty. If you are easily offended, you may want to find a different training set.

OK, let's get started! First, as always, we need training data. Again, as always, this is the most challenging step in the process. Fortunately, I have come across an amazing repository of conversational data. The notsocleverbot.com site has people submit the most absurd conversations they have with Cleverbot. How can you ask for a better training set? Let's take a look at a sample conversation between Cleverbot and a user from the site:

So, this is where we'll begin. We'll need to download the transcripts from the site to get started: You'll just need to paste the link into the form on the page. The format will be like the following: http://www.notsocleverbot.com/index.php?page=1. Once this is submitted, the site will process the request and return a page that looks like the following:

From here, if everything looks right, click on the pink Done button near the top right. The site will process the page and then bring you to the following page:

Next, click on the Show URL Generator button in the middle:

Next, you can set the range of numbers that you'd like to download from, for example, 1-20, by 1 step. Obviously, the more pages you capture, the better this model will be. However, remember that you are taxing the server, so please be considerate. Once this is done, click on Add to list and hit Return in the text box, and you should be able to click on Save. It will begin running, and when it is complete, you will be able to download the data as a CSV file.

Next, we'll use our Jupyter notebook to examine and process the data. We'll first import pandas and the Python regular expressions library, re.
We will also set an option in pandas to widen our column width so that we can see the data better:

import pandas as pd
import re
pd.set_option('display.max_colwidth', 200)

Now, we'll load in our data:

df = pd.read_csv('/Users/alexcombs/Downloads/nscb.csv')
df

The preceding code will result in the following output:

As we're only interested in the first column, the conversation data, we'll parse this out:

convo = df.iloc[:,0]
convo

The preceding code will result in the following output:

You should be able to make out that we have interactions between User and Cleverbot, and that either can initiate the conversation. To get the data in the format that we need, we'll have to parse it into question and response pairs. We aren't necessarily concerned with who says what, but we are concerned with matching up each response to each question. You'll see why in a bit. Let's now perform a bit of regular expression magic on the text:

clist = []

def qa_pairs(x):
    cpairs = re.findall(": (.*?)(?:$|\n)", x)
    clist.extend(list(zip(cpairs, cpairs[1:])))

convo.map(qa_pairs);
convo_frame = pd.Series(dict(clist)).to_frame().reset_index()
convo_frame.columns = ['q', 'a']

The preceding code results in the following output:

Okay, there's a lot of code there. What just happened? We first created a list to hold our question and response tuples. We then passed our conversations through a function to split them into these pairs using regular expressions. Finally, we set it all into a pandas DataFrame with columns labelled q and a.

We will now apply a bit of algorithm magic to match up the closest question to the one a user inputs:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

vectorizer = TfidfVectorizer(ngram_range=(1,3))
vec = vectorizer.fit_transform(convo_frame['q'])

What we did in the preceding code was to import the TfidfVectorizer and cosine_similarity functions. We then used our training data to create a tf-idf matrix. We can now use this to transform our own new questions and measure their similarity to the existing questions in our training set. We covered the cosine similarity and tf-idf algorithms in detail earlier, so flip back there if you want to understand how these work under the hood.

Let's now get our similarity scores:

my_q = vectorizer.transform(['Hi. My name is Alex.'])
cs = cosine_similarity(my_q, vec)
rs = pd.Series(cs[0]).sort_values(ascending=0)
top5 = rs.iloc[0:5]
top5

The preceding code results in the following output:

What are we looking at here? This is the cosine similarity between the question I asked and the top five closest questions. To the left is the index and on the right is the cosine similarity. Let's take a look at these:

convo_frame.iloc[top5.index]['q']

This results in the following output:

As you can see, nothing is exactly the same, but there are definitely some similarities. Let's now take a look at the response:

rsi = rs.index[0]
rsi

convo_frame.iloc[rsi]['a']

The preceding code results in the following output:

Okay, so our bot seems to have an attitude already. Let's push further.
We'll create a handy function so that we can test a number of statements easily:

def get_response(q):
    my_q = vectorizer.transform([q])
    cs = cosine_similarity(my_q, vec)
    rs = pd.Series(cs[0]).sort_values(ascending=0)
    rsi = rs.index[0]
    return convo_frame.iloc[rsi]['a']

get_response('Yes, I am clearly more clever than you will ever be!')

This results in the following output:

We have clearly created a monster, so we'll continue:

get_response('You are a stupid machine. Why must I prove anything to you?')

This results in the following output:

I'm enjoying this. Let's keep rolling with it:

get_response('My spirit animal is a menacing cat. What is yours?')

To which I responded:

get_response("I mean I didn't actually name it.")

This results in the following output:

Continuing:

get_response('Do you have a name suggestion?')

This results in the following output:

To which I respond:

get_response('I think it might be a bit aggressive for a kitten')

This results in the following output:

I attempt to calm the situation:

get_response('No need to involve the police.')

This results in the following output:

And finally:

get_response('And I you, Cleverbot')

This results in the following output:

Remarkably, this may be one of the best conversations I've had in a while: bot or no bot.

Now that we have created this cake-based intelligence, let's set it up so that we can actually chat with it via text message. We'll need a few things to make this work. The first is a Twilio account. Twilio will give you a free account that lets you send and receive text messages. Go to http://www.twilio.com and click to sign up for a free developer API key. You'll set up some login credentials, and they will text your phone to confirm your number. Once this is set up, you'll be able to find the details in their Quickstart documentation. Make sure that you select Python from the drop-down menu in the upper left-hand corner.

Sending messages from Python code is a breeze, but you will need to request a Twilio number. This is the number that you will use to send and receive messages in your code. The receiving bit is a little more complicated because it requires you to have a web server running. The documentation is succinct, so you shouldn't have that hard a time getting it set up. You will need to paste a public-facing Flask server's URL in under the area where you manage your Twilio numbers. Just click on the number and it will bring you to the spot to paste in your URL:

Once this is all set up, you will just need to make sure that you have your Flask web server up and running.
I have condensed all the code here for you to use in your Flask app:

from flask import Flask, request, redirect
import twilio.twiml
import pandas as pd
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

app = Flask(__name__)

PATH_TO_CSV = 'your/path/here.csv'
df = pd.read_csv(PATH_TO_CSV)
convo = df.iloc[:,0]
clist = []

def qa_pairs(x):
    cpairs = re.findall(": (.*?)(?:$|\n)", x)
    clist.extend(list(zip(cpairs, cpairs[1:])))

convo.map(qa_pairs);
convo_frame = pd.Series(dict(clist)).to_frame().reset_index()
convo_frame.columns = ['q', 'a']

vectorizer = TfidfVectorizer(ngram_range=(1,3))
vec = vectorizer.fit_transform(convo_frame['q'])

@app.route("/", methods=['GET', 'POST'])
def get_response():
    # Grab the body of the incoming text message from the POST data Twilio sends.
    input_str = request.values.get('Body')

    def get_bot_response(q):
        my_q = vectorizer.transform([q])
        cs = cosine_similarity(my_q, vec)
        rs = pd.Series(cs[0]).sort_values(ascending=0)
        rsi = rs.index[0]
        return convo_frame.iloc[rsi]['a']

    resp = twilio.twiml.Response()
    if input_str:
        resp.message(get_bot_response(input_str))
        return str(resp)
    else:
        resp.message('Something bad happened here.')
        return str(resp)

It looks like there is a lot going on, but essentially we use the same code that we used before, only now we grab the POST data that Twilio sends—the text body specifically—rather than hand-entering the question into our get_response function. If all goes as planned, you should have your very own weirdo bestie that you can text anytime, and what could be better than that!

Summary

In this article, we had a full tour of the chatbot landscape. It is clear that we are just on the cusp of an explosion of these sorts of applications. The Conversational UI revolution is just about to begin. Hopefully, this article has inspired you to create your own bot, but if not, at least perhaps you have a much richer understanding of how these applications work and how they will shape our future.

I'll let the app say the final words:

get_response("Say goodbye, Clevercake")

Resources for Article: Further resources on this subject: Supervised Machine Learning [article] Unsupervised Learning [article] Specialized Machine Learning Topics [article]

Cloudera Hadoop and HP Vertica

Packt
29 Oct 2013
9 min read
(For more resources related to this topic, see here.)

Cloudera Hadoop

Hadoop is one of the names we think about when it comes to Big Data. I'm not going into details about it since there is plenty of information out there; moreover, as somebody once said, "If you decided to use Hadoop for your data warehouse, then you probably have a good reason for it". Let's not forget: it is primarily a distributed filesystem, not a relational database. That said, there are many cases when we may need to use this technology for number crunching, for example, together with MicroStrategy for analysis and reporting.

There are mainly two ways to leverage Hadoop data from MicroStrategy: the first is Hive and the second is Impala. They both work as SQL bridges to the underlying Hadoop structures, converting standard SELECT statements into jobs. The connection is handled by a proprietary 32-bit ODBC driver available for free from the Cloudera website. In my tests, Impala proved considerably faster than Hive, so I will show you how to use it from our MicroStrategy virtual machine.

Please note that I am using Version 9.3.0 for consistency with the rest of the book. If you're serious about Big Data and Hadoop, I strongly recommend upgrading to 9.3.1 for enhanced performance and easier setup. See MicroStrategy knowledge base document TN43588: Post-Certification of Cloudera Impala 1.0 with MicroStrategy 9.3.1. The ODBC driver is the same for both Hive and Impala; only the driver settings change.

Connecting to a Hadoop database

To show how we can connect to a Hadoop database, I will use two virtual machines: one with the MicroStrategy Suite and the second with the Cloudera Hadoop distribution, specifically a virtual appliance that is available for download from their website. The configuration of the Hadoop cluster is out of scope; moreover, I am not a Hadoop expert. I'll simply give some hints; feel free to use any other configuration/vendor, as the procedure and ODBC parameters should be similar.

Getting ready

Start by going to http://at5.us/AppAU1. The Cloudera VM download is almost 3 GB (cloudera-quickstart-vm-4.3.0-vmware.tar.gz) and features the CDH4 version. After unpacking the archive, you'll find a cloudera-quickstart-vm-4.3.0-vmware.ovf file that can be opened with VMware, as shown in the screen capture:

1. Accept the defaults and click on Import to generate the cloudera-quickstart-vm-4.3.0-vmware virtual machine.
2. Before starting the Cloudera appliance, change the network card settings from NAT to Bridged, since we need to access the database from another VM:
3. Leave the rest of the parameters as per the default, and start the machine. After a while, you'll be presented with the graphical interface of CentOS Linux. If the network has started correctly, the machine should have received an IP address from your network DHCP.

We need a fixed rather than dynamic address in the Hadoop VM, so:

1. Open the System | Preferences | Network Connections menu.
2. Select the name of your card (it should be something like Auto eth1) and click on Edit….
3. Move to the IPv4 Settings tab and change the Method from Automatic (DHCP) to Manual.
4. Click on the Add button to create a new address. Ask your network administrator for details here and fill in Address, Netmask, and Gateway.
5. Click on Apply… and when prompted type the root password cloudera and click on Authenticate. Then click on Close.
6. Check if the change was successful by opening a Terminal window (Applications | System Tools | Terminal) and issuing the ifconfig command; the answer should include the address that you typed in step 4.
7. From the MicroStrategy Suite virtual machine, test if you can ping the Cloudera VM.

When we first start Hadoop, there are no tables in the database, so we create the samples:

1. In the Cloudera virtual machine, from the main page in Firefox open Cloudera Manager, and click on I Agree in the Information Assurance Policy dialog.
2. Log in with username admin and password admin.
3. Look for a line with a service named oozie1; notice that it is stopped. Click on the Actions button and select Start…. Confirm with the Start button in the dialog.
4. A Status window will pop up; wait until the Progress is reported as Finished and close it.
5. Now click on the Hue button in the bookmarks toolbar.
6. Sign up with username admin and password admin; you are now in the Hue home page.
7. Click on the first button in the blue Hue toolbar (tool tip: About Hue) to go to the Quick Start Wizard.
8. Click on the Next button to go to the Step 2: Examples tab.
9. Click on Beeswax (Hive UI) and wait until a message over the toolbar says Examples refreshed.
10. Now, in the Hue toolbar, click on the seventh button from the left (tool tip: Metastore Manager); you will see the default database with two tables: sample_07 and sample_08.
11. Enable the checkbox of sample_08 and click on the Browse Data button. After a while the Results tab shows a grid with data.

So far so good. We now go back to the Cloudera Manager to start the Impala service:

1. Click on the Cloudera Manager bookmark button.
2. In the Impala1 row, open the Actions menu and choose Start…, then confirm Start.
3. Wait until the Progress says Finished, then click on Close in the command details window.
4. Go back to Hue and click on the fourth button on the toolbar (tool tip: Cloudera Impala (TM) Query UI).
5. In the Query Editor text area, type select * from sample_08 and click on Execute to see the table content.

Next, we open the MicroStrategy virtual machine and download the 32-bit Cloudera ODBC Driver for Apache Hive, Version 2.0 from http://at5.us/AppAU2. Download the ClouderaHiveODBCSetup_v2_00.exe file and save it in C:\install.

How to do it...

We install the ODBC driver:

1. Run C:\install\ClouderaHiveODBCSetup_v2_00.exe and click on the Next button until you reach Finish at the end of the setup, accepting every default.
2. Go to Start | All Programs | Administrative Tools | Data Sources (ODBC) to open the 32-bit ODBC Data Source Administrator (if you're on 64-bit Windows, it's in the SysWOW64 folder).
3. Click on System DSN and hit the Add… button.
4. Select Cloudera ODBC Driver for Apache Hive and click on Finish.
5. Fill the Hive ODBC DSN Configuration with these case-sensitive parameters (change the Host IP according to the address used in step 4 of the Getting ready section):
   Data Source Name: Cloudera VM
   Host: 192.168.1.40
   Port: 21050
   Database: default
   Type: HS2NoSasl
6. Click on OK and then on OK again to close the ODBC Data Source Administrator.
7. Now open the MicroStrategy Desktop application and log in with administrator and the corresponding password.
8. Right-click on MicroStrategy Analytics Modules and select Create New Project….
9. Click on the Create project button and name it HADOOP, uncheck Enable Change Journal for this project, and click on OK.
10. When the wizard finishes creating the project, click on Select tables from the Warehouse Catalog and hit the button labeled New….
11. Click on Next and type Cloudera VM in the Name textbox of the Database Instance Definition window.
12. In this same window, open the Database type combobox and scroll down until you find Generic DBMS. Click on Next.
13. In Local system ODBC data sources, pick Cloudera VM and type admin in both the Database login and Password textboxes. Click on Next, then on Finish, and then on OK.
14. When a Warehouse Catalog Browser error appears, click on Yes.
15. In the Warehouse Catalog Options window, click on Edit… on the right below Cloudera VM.
16. Select the Advanced tab and enable the radio button labeled Use 2.0 ODBC calls in the ODBC Version group. Click on OK.
17. Now select the category Catalog | Read Settings in the left tree and enable the first radio button labeled Use standard ODBC calls to obtain the database catalog. Click on OK to close this window.
18. When the Warehouse Catalog window appears, click on the lightning button (tool tip: Read the Warehouse Catalog) to refresh the list of available tables.
19. Pick sample_08 and move it to the right of the shopping cart. Then right-click on it and choose Import Prefix.
20. Click on Save and Close and then on OK twice to close the Project Creation Assistant.

You can now open the project and update the schema. From here, the procedure to create objects is the same as in any other project:

1. Go to the Schema Objects | Attributes folder, and create a new Job attribute with these columns:
   ID: Table: sample_08, Column: code
   DESC: Table: sample_08, Column: description
2. Go to the Fact folder and create a new Salary fact with the salary column. Update the schema.
3. Go to the Public Objects | Metrics folder and create a new Salary metric based on the Salary fact with Sum as the aggregation function.
4. Go to My Personal Objects | My Reports and create a new report with the Job attribute and the Salary metric:

There you go; you just created your first Hadoop report.

How it works...

Executing Hadoop reports is no different from running any other standard DBMS report. The ODBC driver handles the communication with the Cloudera machine, and Impala manages the creation of jobs to retrieve data. From the MicroStrategy perspective, it is just another SELECT query that returns a dataset.

There's more...

Impala and Hive do not support the whole set of ANSI SQL syntax, so in some cases you may receive an error if a specific feature is not implemented. See the Cloudera documentation for details.

HP Vertica

Vertica Analytic Database is grid-based, column-oriented, and designed to manage large, fast-growing volumes of data while providing rapid query performance. It features a storage organization that favors SELECT statements over UPDATE and DELETE, plus a high compression rate that stores columns of homogeneous datatype together. The Community (free) Edition allows up to three hosts and 1 TB of data, which is fairly sufficient for small to medium BI projects with MicroStrategy. There are several clients available for different operating systems, including 32-bit and 64-bit ODBC drivers for Windows.
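Whichever database you end up pointing MicroStrategy at, it can be handy to sanity-check an ODBC DSN from outside the BI tool first. The following is a small sketch, not part of the original recipe: it assumes the pyodbc package is installed on a 32-bit Python (to match the 32-bit driver) and it reuses the Cloudera VM DSN and admin credentials defined above; for Vertica you would substitute the DSN created for its own ODBC driver.

import pyodbc

# Connect through the DSN configured in the ODBC Data Source Administrator.
# autocommit keeps things simple, since we only run a SELECT here.
conn = pyodbc.connect('DSN=Cloudera VM;UID=admin;PWD=admin', autocommit=True)
cursor = conn.cursor()
cursor.execute('SELECT code, description, salary FROM sample_08 LIMIT 5')
for row in cursor.fetchall():
    print(row)
conn.close()

If this returns rows, the driver, the DSN, and the Impala service are all in order, and any remaining issues are on the MicroStrategy side.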

R and its Diverse Possibilities

Packt
16 Dec 2016
11 min read
In this article by Jen Stirrup, the author of the book Advanced Analytics with R and Tableau, we will cover, with examples, the core essentials of R programming, such as variables, and data structures in R, such as matrices, factors, vectors, and data frames. We will also focus on control mechanisms in R (relational operators, logical operators, conditional statements, loops, functions, and apply) and how to execute these commands in R, so that you can get to grips with them before proceeding to articles that rely heavily on these concepts for scripting complex analytical operations.

(For more resources related to this topic, see here.)

Core essentials of R programming

One of the reasons for R's success is its use of variables. Variables are used in all aspects of R programming. For example, variables can hold data, strings to access a database, whole models, queries, and test results. Variables are a key part of the modeling process, and their selection has a fundamental impact on the usefulness of the models. Therefore, variables are an important place to start since they are at the heart of R programming.

Variables

In the following section we will deal with variables—how to create variables and how to work with them.

Creating variables

It is very simple to create variables in R, and to save values in them. To create a variable, you simply need to give the variable a name, and assign a value to it. In many other languages, such as SQL, it's necessary to specify the type of value that the variable will hold. So, for example, if the variable is designed to hold an integer or a string, then this is specified at the point at which the variable is created. Unlike other programming languages, such as SQL, R does not require that you specify the type of the variable before it is created. Instead, R works out the type for itself, by looking at the data that is assigned to the variable. In R, we assign variables using an assignment operator, which is a less-than sign (<) followed by a hyphen (-). Put together, the assignment operator looks like this: <-

Working with variables

It is important to understand what is contained in the variables. It is easy to check the content of the variables using the ls command. If you need more details of the variables, then the ls.str command will provide you with more information. If you need to remove variables, then you can use the rm function.

Data structures in R

The power of R resides in its ability to analyze data, and this ability is largely derived from its powerful data types. Fundamentally, R is a vectorized programming language. Data structures in R are constructed from vectors, which are foundational. This means that R's operations are optimized to work with vectors.

Vector

The vector is a core component of R. It is a fundamental data type. Essentially, a vector is a data structure that contains an array where all of the values are of the same type. For example, they could all be strings, or numbers. However, note that vectors cannot contain mixed data types. R uses the c() function to take a list of items and turn them into a vector.

Lists

R contains two types of lists: a basic list, and a named list. A basic list is created using the list() operator. In a named list, every item in the list has a name as well as a value. Named lists are a good mapping structure to help map data between R and Tableau. In R, list elements are accessed using the $ operator. Note, however, that the list label operators are case sensitive.

Matrices

Matrices are two-dimensional structures that have rows and columns.
Matrices are lists of rows. It's important to note that every cell in a matrix has the same type.

Factors

A factor is a list of all possible values of a variable in a string format. It is a special string type, which is chosen from a specified set of values known as levels. Factors are sometimes known as categorical variables. In dimensional modeling terminology, a factor is equivalent to a dimension, and the levels represent the different attributes of the dimension. Note that factors are variables that can only contain a limited number of different values.

Data frames

The data frame is the main data structure in R. It's possible to envisage the data frame as a table of data, with rows and columns. Unlike the list structure, the data frame can contain different types of data. In R, we use the data.frame() command in order to create a data frame. The data frame is extremely flexible for working with structured data, and it can ingest data from many different data types. There are two main ways to ingest data into data frames: one is the use of the many data connectors, which connect to data sources such as databases; the other is the read.table() command, which takes in data.

Data Frame Structure

Here is an example of a populated data frame. There are three columns and three rows. The top of the data frame is the header. Each horizontal line afterwards holds a data row, which starts with the name of the row, followed by the data itself. Each data member of a row is called a cell. Here is an example data frame, populated with data:

df = data.frame(
  Year=c(2013, 2013, 2013),
  Country=c("Arab World","Carribean States", "Central Europe"),
  LifeExpectancy=c(71, 72, 76))

As always, we should read out at least some of the data frame so we can double-check that it was set correctly. The data frame was assigned to the df variable, so we can read out the contents by simply typing in the variable name at the command prompt.

To obtain the data held in a cell, we enter the row and column co-ordinates of the cell, and surround them with square brackets []. In this example, if we wanted to obtain the value of the second cell in the second row, then we would use the following:

df[2, "Country"]

We can also conduct summary statistics on our data frame. For example, if we use the following command:

summary(df)

then we obtain the summary statistics of the data. The example output is as follows:

You'll notice that the summary command has summarized different values for each of the columns. It has identified Year as an integer, and produced the min, quartiles, mean, and max for Year. The Country column has simply been listed, because it does not contain any numeric values. LifeExpectancy is summarized correctly. We can change the Year column to a factor, using the following command:

df$Year <- as.factor(df$Year)

Then, we can rerun the summary command:

summary(df)

On this occasion, the data frame now returns the correct results that we expect:

As we proceed throughout this book, we will keep building on useful features that will help us to analyze data using data structures, and visualize the data in interesting ways using R.

Control structures in R

R has the appearance of a procedural programming language. However, it is built on another language, known as S. S leans towards functional programming, and it also has some object-oriented characteristics. This means that there are many complexities in the way that R works.
In this section, we will look at some of the fundamental building blocks that make up the key control structures in R, and then we will move on to looping and vectorized operations.

Logical operators

Logical operators are binary operators that allow the comparison of values:

Operator    Description
<           less than
<=          less than or equal to
>           greater than
>=          greater than or equal to
==          exactly equal to
!=          not equal to
!x          not x
x | y       x OR y
x & y       x AND y
isTRUE(x)   test if x is TRUE

For loops and vectorization in R

Specifically, we will look at the constructs involved in loops. Note, however, that it is more efficient to use vectorized operations rather than loops, because R is vector-based. We investigate loops here because they are a good first step in understanding how R works, and then we can optimize this understanding by focusing on the vectorized alternatives that are more efficient. More information about control flow can be obtained by executing the following command at the command line:

?Control

The control flow commands make decisions between alternative actions. The main constructs are for, while, and repeat.

For loops

Let's look at a for loop in more detail. For this exercise, we will use the Fisher iris dataset, which is installed along with R by default. We are going to produce summary statistics for each species of iris in the dataset. You can see some of the iris data by typing the following command at the command prompt:

head(iris)

We can divide the iris dataset so that the data is split by species. To do this, we use the split command, and we assign the result to a variable called IrisBySpecies:

IrisBySpecies <- split(iris,iris$Species)

Now, we can use a for loop to process the data and summarize it by species. Firstly, we will set up a variable called output, and set it to a list type. For each species held in the IrisBySpecies variable, we calculate the minimum, maximum, mean, and total number of cases. The result is then bound into a data frame called output.df, which is printed out to the screen:

output <- list()
for(n in names(IrisBySpecies)){
  ListData <- IrisBySpecies[[n]]
  output[[n]] <- data.frame(species=n,
                            MinPetalLength=min(ListData$Petal.Length),
                            MaxPetalLength=max(ListData$Petal.Length),
                            MeanPetalLength=mean(ListData$Petal.Length),
                            NumberofSamples=nrow(ListData))
  output.df <- do.call(rbind,output)
}
print(output.df)

The output is as follows:

We used a for loop here, but loops can be expensive in terms of processing. We can achieve the same end by using a vectorized function called tapply. tapply processes data in groups. It takes three parameters: the vector of data, the factor that defines the groups, and a function. It works by extracting each group and then applying the function to it. Then, it returns a vector with the results. We can see an example of tapply here, using the same dataset:

output <- data.frame(
  MinPetalLength=tapply(iris$Petal.Length,iris$Species,min),
  MaxPetalLength=tapply(iris$Petal.Length,iris$Species,max),
  MeanPetalLength=tapply(iris$Petal.Length,iris$Species,mean),
  NumberofSamples=tapply(iris$Petal.Length,iris$Species,length))
print(output)

This time, we get the same output as previously. The only difference is that by using a vectorized function, we have concise code that runs efficiently. To summarize, R is extremely flexible and it's possible to achieve the same objective in a number of different ways.
As we move forward through this book, we will make recommendations about the optimal method to select, and the reasons for the recommendation.

Functions

R has many functions that are included as part of the installation. In the first instance, let's see how we can work smart by finding out which functions are available by default. In our last example, we used the split() function. To find out more about the split function, we can simply use the following command:

?split

Or we can use:

help(split)

It's possible to get an overview of the arguments required for a function. To do this, simply use the args command:

args(split)

Fortunately, it's also possible to see examples of each function by using the following command:

example(split)

If you need more information than the documented help file for each function, you can use the following command. It will search through all the documentation for instances of the keyword:

help.search("split")

If you want to search the R project site from within RStudio, you can use the RSiteSearch command. For example:

RSiteSearch("split")

Summary

In this article, we have looked at various essential structures for working with R. We have looked at the data structures that are fundamental to using R optimally. We have also taken the view that structures such as for loops can often be done better as vectorized operations. Finally, we have looked at the ways in which R's functions can be used in order to simplify code.

Resources for Article: Further resources on this subject: Getting Started with Tableau Public [article] Creating your first heat map in R [article] Data Modelling Challenges [article]

Getting Started with Apache Solr

Packt
02 Dec 2011
8 min read
(For more resources on Apache, see here.)

We're going to get started by downloading Solr, examining its directory structure, and then finally running it. This sets you up for the next section, which tours a running Solr server.

Get Solr: You can download Solr from its website: http://lucene.apache.org/solr/. The last Solr release this article was written for is version 3.4. Solr has had several relatively minor point-releases since 3.1, and this will continue. In general, I recommend using the latest release, since Solr and Lucene's code are extensively tested. Lucid Imagination also provides a Solr distribution called "LucidWorks for Solr". As of this writing, it is Solr 3.2 with some choice patches that came after, to ensure its stability and performance. It's completely open source; previous LucidWorks releases were not, as they included some extras with use limitations. LucidWorks for Solr is a good choice if maximum stability is your chief concern over newer features.

Get Java: The only prerequisite software needed to run Solr is Java 5 (a.k.a. java version 1.5) or later—ideally Java 6. Typing java -version at a command line will tell you exactly which version of Java you are using, if any.

Use the latest version of Java! The initial release of Java 7 included some serious bugs, discovered shortly before its release, that affect Lucene and Solr. The release of Java 7u1 on October 19th, 2011 resolves these issues. The same bugs occurred with Java 6 under certain JVM switches, and Java 6u29 resolves them. Therefore, I advise you to use the latest Java release.

Java is available on all major platforms, including Windows, Solaris, Linux, and Apple. Visit http://www.java.com to download the distribution for your platform. Java always comes with the Java Runtime Environment (JRE), and that's all Solr requires. The Java Development Kit (JDK) includes the JRE plus the Java compiler and various diagnostic utility programs. One such useful program is jconsole, and so the JDK distribution is recommended. Solr is a Java-based web application, but you don't need to be particularly familiar with Java in order to use it.

Solr's installation directory structure

When you unzip Solr after downloading it, you should find a relatively straightforward directory structure:

- client: Convenient language-specific client APIs for talking to Solr. Ignore the client directory: most client libraries are maintained by other organizations, except for the Java client SolrJ, which lies in the dist/ directory. client/ only contains solr-ruby, which has fallen out of favor compared to rsolr—both of which are Ruby Solr clients.
- contrib: Solr contrib modules. These are extensions to Solr. The final JAR file for each of these contrib modules is actually in dist/; so the actual files here are mainly the dependent JAR files.
  - analysis-extras: A few text analysis components that have large dependencies. There are some "ICU" Unicode classes for multilingual support, a Chinese stemmer, and a Polish stemmer.
  - clustering: An engine for clustering search results.
  - dataimporthandler: The DataImportHandler (DIH)—a very popular contrib module that imports data into Solr from a database and some other sources.
  - extraction: Integration with Apache Tika—a framework for extracting text from common file formats. This module is also called SolrCell, and Tika is also used by the DIH's TikaEntityProcessor.
  - uima: Integration with Apache UIMA—a framework for extracting metadata out of text. There are modules that identify proper names in text and identify the language, for example. To learn more, see Solr's wiki: http://wiki.apache.org/solr/SolrUIMA.
  - velocity: Simple search UI framework based on the Velocity templating language.
- dist: Solr's WAR and contrib JAR files. The Solr WAR file is the main artifact that embodies Solr as a standalone file deployable to a Java web server. The WAR does not include any contrib JARs. You'll also find the core of Solr as a JAR file, which you might use if you are embedding Solr within an application, and Solr's test framework as a JAR file, which is to assist in testing Solr extensions. You'll also see SolrJ's dependent JAR files here.
- docs: Documentation—the HTML files and related assets for the public Solr website, to be precise. It includes a good quick tutorial, and of course Solr's API. Even if you don't plan on extending the API, some parts of it are useful as a reference to certain pluggable Solr configuration elements—see the listing for the Java package org.apache.solr.analysis in particular.
- example: A complete Solr server, serving as an example. It includes the Jetty servlet engine (a Java web server), Solr, some sample data, and sample Solr configurations. The interesting child directories are:
  - example/etc: Jetty's configuration. Among other things, here you can change the web port used from the pre-supplied 8983 to 80 (the HTTP default).
  - example/exampledocs: Sample documents to be indexed into the default Solr configuration, along with the post.jar program for sending the documents to Solr.
  - example/solr: The default, sample Solr configuration. This should serve as a good starting point for new Solr applications. It is used in Solr's tutorial.
  - example/webapps: Where Jetty expects to deploy Solr from. A copy of Solr's WAR file is here, which contains Solr's compiled code.

Solr's home directory and Solr cores

When Solr starts, the very first thing it does is determine where the Solr home directory is. There are various ways to tell Solr where it is, but by default it's the directory named simply solr relative to the current working directory where Solr is started. You will usually see a solr.xml file in the home directory, which is optional but recommended. It mainly lists Solr cores. For simpler configurations like example/solr, there is just one Solr core, which uses Solr's home directory as its core instance directory.

A Solr core holds one Lucene index and the supporting Solr configuration for that index. Nearly all interactions with Solr are targeted at a specific core. If you want to index different types of data separately or shard a large index into multiple ones, then Solr can host multiple Solr cores on the same Java server.

A Solr core's instance directory is laid out like this:

- conf: Configuration files. The two I mention below are very important, but it will also contain some other .txt and .xml files which are referenced by these two.
- conf/schema.xml: The schema for the index, including field type definitions with associated analyzer chains.
- conf/solrconfig.xml: The primary Solr configuration file.
- conf/xslt: Various XSLT files that can be used to transform Solr's XML query responses into formats such as Atom and RSS.
- conf/velocity: HTML templates and related web assets for rapid UI prototyping using Solritas. The soon to be discussed "browse" UI is implemented with these templates.
- data: Where Lucene's index data lives. It's binary data, so you won't be doing anything with it except perhaps deleting it occasionally to start anew.
- lib: Where extra Java JAR files can be placed for Solr to load on startup. This is a good place to put contrib JAR files and their dependencies.

Running Solr

Now we're going to start up Jetty and finally see Solr running, albeit without any data to query yet.

We're about to run Solr directly from the unzipped installation. This is great for exploring Solr and doing local development, but it's not what you would seriously do in a production scenario. In a production scenario, you would have a script or other mechanism to start and stop the servlet engine with the operating system—Solr does not include this. And to keep your system organized, you should keep the example directory as exactly what its name implies—an example. So if you want to use the provided Jetty servlet engine in production (a fine choice), then copy the example directory elsewhere and name it something else.

First go to the example directory, and then run Jetty's start.jar file by typing the following command:

>>cd example
>>java -jar start.jar

The >> notation is the command prompt. These commands will work across *nix and DOS shells. You'll see about a page of output, including references to Solr. When it is finished, you should see this output at the very end of the command prompt:

2008-08-07 14:10:50.516::INFO: Started SocketConnector @ 0.0.0.0:8983

The 0.0.0.0 means it's listening to connections from any host (not just localhost, notwithstanding potential firewalls) and 8983 is the port. If Jetty reports this, it doesn't necessarily mean that Solr was deployed successfully. You might see an error such as a stack trace in the output if something went wrong. Even if it did go wrong, you should be able to access the web server: http://localhost:8983. Jetty will give you a 404 page, but it will include a list of links to deployed web applications, which will just be Solr for this setup. Solr is accessible at http://localhost:8983/solr, and if you browse to that page, you should either see details about an error if Solr wasn't loaded correctly, or a simple page with a link to Solr's admin page, which should be http://localhost:8983/solr/admin/. You'll be visiting that link often.

To quit Jetty (and many other command line programs, for that matter), press Ctrl+C on the keyboard.
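If you prefer to check the server from code rather than from a browser, a few lines of Python will do. This is only an illustrative sketch, not part of the original text; it assumes the requests package is installed and that Solr is running on the default port with the example configuration:

import requests

# Hit the default select handler; with nothing indexed yet, numFound should be 0.
resp = requests.get('http://localhost:8983/solr/select',
                    params={'q': '*:*', 'wt': 'json'})
resp.raise_for_status()
print(resp.json()['response']['numFound'])

A 404 or a connection error here tells you the same thing the browser would: Jetty is up but Solr did not deploy, or the server is not running at all.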

Pentaho – Using Formulas in Our Reports

Packt
22 Aug 2013
7 min read
(For more resources related to this topic, see here.)

At the end of the article, we propose that you make some modifications to the report created here.

Starting practice

In this article, we will create a copy of the report and then make the necessary changes to its layout; the final result is as follows:

As we can observe in the previous screenshot, the rectangle that is to the left of each title changes color. We'll see how to do this, and much more, shortly.

Time for action – making a copy of the previous report

In this article, we will use an already created report. To do so, we will open it and save it with the name 09_Using_Formulas.prpt. Then we will modify its layout to fit this article. Finally, we will establish default values for our parameters. The steps for making a copy of the previous report are as follows:

1. We open the report 07_Adding_Parameters.prpt that we created.
2. Next, we create a copy by going to File | Save As... and saving it with the name 09_Using_Formulas.prpt.
3. We will modify our report so that it looks like the following screenshot:

As you can see, we have just added a rectangle in the Details section and a label (Total) in the Details Header section, and we have modified the name of the label found in the Report Header section. To easily differentiate this report from the one used previously, we have also changed its colors to grayscale. Later in this article, we will make the color of the rectangle vary according to a formula, so it is important that the rest of the report does not have too many colors, so that the result is easy for the end user to see.

4. We will establish default values for our parameters so we can preview the report without delays caused by having to choose the values for rating, year, and month. We go to the Data tab, select the SelectRating parameter, right-click on it, and choose the Edit Parameter... option:
5. In Default Value, we type the value [G]:
6. Next, we click on OK to continue.
7. We should do something similar for SelectYear and SelectMonth:
   - For SelectYear, the Default Value will be 2005.
   - For SelectMonth, the Default Value will be 5. Remember that the selector shows the names of the months, but internally the months' numbers are used; so, 5 represents May.

What just happened?

We created a copy of the report 07_Adding_Parameters.prpt and saved it with the name 09_Using_Formulas.prpt. We changed the layout of the report, adding new objects and changing the colors. Then we established default values for the parameters SelectRating, SelectYear, and SelectMonth.

Formulas

To manage formulas, PRD implements the open standard OpenFormula. According to OpenFormula's specification:

"OpenFormula is an open format for exchanging recalculated formulas between office application implementations, particularly for spreadsheets. OpenFormula defines the types, syntax, and semantics for calculated formulas, including many predefined functions and operations, so that formulas can be exchanged between applications and produce substantively equal outputs when recalculated with equal inputs. Both closed and open source software can implement OpenFormula."
For more information on OpenFormula, refer to the following links:

- Wikipedia: http://en.wikipedia.org/wiki/OpenFormula
- Specifications: https://www.oasis-open.org/committees/download.php/16826/openformula-spec-20060221.html
- Web: http://www.openformula.org/
- Pentaho wiki: http://wiki.pentaho.com/display/Reporting/Formula+Expressions

Formulas are used for greatly varied purposes, and their use depends on the result one wants to obtain. Formulas let us carry out simple and complex calculations based on fixed and variable values, and they include predefined functions that let us work with text, databases, date and time, and calculations, as well as general information functions and user-defined functions. They also use logical operators (AND, OR, and so on) and comparative operators (>, <, and so on).

Creating formulas

There are two ways to create formulas:

- By creating a new function and going to Common | Open Formula
- By pressing the button in a section's / an object's Style or Attributes tab, or to configure some feature

In the report we are creating in this article, we will create formulas using both methods.

Using the first method, general-use formulas can be created. That is, the result will be an object that can either be included directly in our report or used as a value in another function, style, or attribute. We can create objects that make calculations at a general level, to be included in sections such as Report Header, Group Footer, and so on, or we can make calculations to be included in the Details section. In this last case, the formula will make its calculation row by row. With this last example, we can draw an important distinction with respect to aggregate functions, as they usually can only calculate totals and subtotals.

Using the second method, we create specific-use functions that affect the value of the style or attribute of an individual object. The way to use these functions is simple. Just choose the value you want to modify in the Style or Attributes tab and click on the button that appears on its right. In this way, you can create formulas that dynamically assign values to an object's color, position, width, length, format, visibility, and so on. Using this technique, stoplights can be created by assigning different values to an object according to a calculation, progress bars can be created by changing an object's length, and dynamic images can be placed in the report using the result of a formula to calculate the image's path.

As we have seen in the examples, using formulas in our reports gives us great flexibility in applying styles and attributes to objects and to the report itself, as well as the possibility of creating our own objects based on complex calculations. By using formulas correctly, you will be able to give life to your reports and adapt them to changing contexts. For example, depending on which user executes the report, a certain image can appear in the Report Header section, or graphics and subreports can be hidden if the user does not have sufficient permissions.

The formula editor

The formula editor has a very intuitive and easy-to-use UI that, in addition to guiding us in creating formulas, tells us, whenever possible, the value that the formula will return. In the following screenshot, you can see the formula editor:

We will explain its layout with an example. Let's suppose that we added a new label and we want to create a formula that returns the value of Attributes.Value.
For this purpose, we do the following:

1. Select the option to the right of Attributes.Value. This will open the formula editor.
2. In the upper-left corner, there is a selector where we can specify the category of functions that we want to see. Below this, we find a list of the functions that we can use to create our own formulas. In the lower-left section, we can see more information about the selected function; that is, the type of value that it will return and a general description:
3. We choose the CONCATENATE function by double-clicking on it, and in the lower-right section, we can see the formula (Formula:) that we will use.
4. We type in =CONCATENATE(Any), and an assistant will open in the upper-right section that will guide us in entering the values we want to concatenate.
5. We could complete the CONCATENATE function by adding some fixed values and some variables; take the following example:

If there is an error in the text of the formula, text will appear to warn us. Otherwise, the formula editor will try to show us the result that our formula will return. When it is not possible to visualize the result that a formula will return, this is usually because the values used are calculated during the execution of the report.

Formulas should always begin with the = sign.

Initially, one tends to use the help that the formula editor provides, but later, with more practice, it will become evident that it is much faster to type the formula directly. Also, if you need to enter complex formulas or combine various functions with logical operators, the formula editor will not be of much use.

Cassandra High Availability: Replication

Packt
24 Dec 2014
5 min read
This article by Robbie Strickland, the author of Cassandra High Availability, describes the data replication architecture used in Cassandra. Replication is perhaps the most critical feature of a distributed data store, as it would otherwise be impossible to make any sort of availability guarantee in the face of a node failure. As you already know, Cassandra employs a sophisticated replication system that allows fine-grained control over replica placement and consistency guarantees. In this article, we'll explore Cassandra's replication mechanism in depth. Let's start with the basics: how Cassandra determines the number of replicas to be created and where to locate them in the cluster. We'll begin the discussion with a feature that you'll encounter the very first time you create a keyspace: the replication factor.

(For more resources related to this topic, see here.)

The replication factor

On the surface, setting the replication factor seems to be a fundamentally straightforward idea. You configure Cassandra with the number of replicas you want to maintain (during keyspace creation), and the system dutifully performs the replication for you, thus protecting you when something goes wrong. So by defining a replication factor of three, you will end up with a total of three copies of the data. There are a number of variables in this equation. Let's start with the basic mechanics of setting the replication factor.

Replication strategies

One thing you'll quickly notice is that the semantics used to set the replication factor depend on the replication strategy you choose. The replication strategy tells Cassandra exactly how you want replicas to be placed in the cluster. There are two strategies available:

- SimpleStrategy: This strategy is used for single data center deployments. It is fine to use this for testing, development, or simple clusters, but it is discouraged if you ever intend to expand to multiple data centers (including virtual data centers such as those used to separate analysis workloads).
- NetworkTopologyStrategy: This strategy is used when you have multiple data centers, or if you think you might have multiple data centers in the future. In other words, you should use this strategy for your production cluster.

SimpleStrategy

As a way of introducing this concept, we'll start with an example using SimpleStrategy. The following Cassandra Query Language (CQL) block will allow us to create a keyspace called AddressBook with three replicas:

CREATE KEYSPACE AddressBook
WITH REPLICATION = {
  'class' : 'SimpleStrategy',
  'replication_factor' : 3
};

The data is assigned to a node via a hash algorithm, resulting in each node owning a range of data. Let's take another look at the placement of our example data on the cluster. Remember that the keys are first names, and we determined the hash using the Murmur3 hash algorithm. The primary replica for each key is assigned to a node based on its hashed value. Each node is responsible for the region of the ring between itself (inclusive) and its predecessor (exclusive). While using SimpleStrategy, Cassandra will locate the first replica on the owner node (the one determined by the hash algorithm), then walk the ring in a clockwise direction to place each additional replica, as follows:

Additional replicas are placed in adjacent nodes when using manually assigned tokens

In the preceding diagram, the keys in bold represent the primary replicas (the ones placed on the owner nodes), with subsequent replicas placed in adjacent nodes, moving clockwise from the primary.
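To make the ring walk easier to picture, here is a toy sketch in Python. It is not Cassandra code, and it uses a stand-in hash instead of Murmur3; the point is only to show that the owner node and the additional replicas fall deterministically out of the token ring:

import bisect
import hashlib

def token(key, ring_size=100):
    # Stand-in for Murmur3: any stable hash that maps keys onto the ring works here.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % ring_size

def place_replicas(key, node_tokens, replication_factor):
    ring = sorted(node_tokens)
    # The owner is the first node whose token is >= the key's token, wrapping around.
    owner = bisect.bisect_left(ring, token(key)) % len(ring)
    # SimpleStrategy then walks clockwise from the owner for the remaining replicas.
    return [ring[(owner + i) % len(ring)] for i in range(replication_factor)]

nodes = [12, 37, 62, 87]                  # four nodes, identified by their tokens
print(place_replicas("Alice", nodes, 3))  # three adjacent nodes, clockwise from the owner

With a replication factor of three on a four-node ring, every key ends up on three of the four nodes, which is exactly the picture the preceding diagram describes.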
Although each node owns a set of keys based on its token range(s), there is no concept of a master replica. In Cassandra, unlike many other database designs, every replica is equal. This means reads and writes can be made to any node that holds a replica of the requested key.

If you have a small cluster where all nodes reside in a single rack inside one data center, SimpleStrategy will do the job. This makes it the right choice for local installations, development clusters, and other similar simple environments where expansion is unlikely, because there is no need to configure a snitch (which will be covered later in this section). For production clusters, however, it is highly recommended that you use NetworkTopologyStrategy instead. This strategy provides a number of important features for more complex installations where availability and performance are paramount.

NetworkTopologyStrategy

When it's time to deploy your live cluster, NetworkTopologyStrategy offers two additional properties that make it more suitable for this purpose:

- Rack awareness: Unlike SimpleStrategy, which places replicas naively, this feature attempts to ensure that replicas are placed in different racks, thus preventing service interruption or data loss due to failures of switches, power, cooling, and other similar events that tend to affect single racks of machines.
- Configurable snitches: A snitch helps Cassandra to understand the topology of the cluster. There are a number of snitch options for any type of network configuration.

Here's a basic example of a keyspace using NetworkTopologyStrategy:

CREATE KEYSPACE AddressBook
WITH REPLICATION = {
  'class' : 'NetworkTopologyStrategy',
  'dc1' : 3,
  'dc2' : 2
};

In this example, we're telling Cassandra to place three replicas in a data center called dc1 and two replicas in a second data center called dc2.

Summary

In this article, we introduced the foundational concepts of replication and consistency. In our discussion, we outlined the importance of the relationship between replication factor and consistency level, and their impact on performance, data consistency, and availability. By now, you should be able to make sound decisions specific to your use cases. This article might serve as a handy reference in the future, as it can be challenging to keep all these details in mind.

Resources for Article: Further resources on this subject: An overview of architecture and modeling in Cassandra [Article] Basic Concepts and Architecture of Cassandra [Article] About Cassandra [Article]